Planet Code4Lib - http://planet.code4lib.org

Library of Congress: The Signal: Recommended Formats Statement: Expanding the Use, Expanding the Scope

Tue, 2016-07-26 17:32

This is a guest post by Ted Westervelt, head of acquisitions and cataloging for U.S. Serials – Arts, Humanities & Sciences at the Library of Congress.

“Model Photo: Parametric Gridshell.” Photo by James Diewald on Flickr.

As summer has fully arrived now, so too has the revised 2016-2017 version of the Library of Congress’s Recommended Formats Statement.

When the Library of Congress first issued the Recommended Formats Statement, one aim was to provide our staff with guidance on the technical characteristics of formats, which they could consult in the process of recommending and acquiring content. But we were also aware that preservation and long-term access to digital content is an interest shared by a wide variety of stakeholders and not simply a parochial concern of the Library. Nor did we have any mistaken impression that we would get all the right answers on our own or that the characteristics would not change over time. Outreach has therefore been an extremely important aspect of our work with the Recommended Formats, both to share the fruits of our labor with others who might find them useful and to get feedback on ways in which the Recommended Formats could be updated and improved.

We are grateful that the Statement is proving of value to others, as we had hoped. Closest to home, as the Library and the Copyright Office begin work on expanding mandatory deposit of electronic-only works to include eBooks and digital sound recordings, they are using the Recommended Formats as the starting point for the updates to the Best Edition Statement that will result from this. But its value is being recognized outside of our own institution.

The American Library Association’s Association for Library Collections & Technical Services has recommended the Statement as a resource in one of its e-forums. And even farther afield, the UK’s Digital Preservation Coalition included it in their Digital Preservation Handbook this past autumn, bringing the Statement to a wider international audience.

The Statement has even caught the attention of those who fall outside the usual suspects of libraries, creators, publishers and vendors. Earlier this year, we were contacted by a representative from an architectural software firm. He (and others in the architectural field) has been concerned about the potential loss of architectural plans, as architectural files are now primarily created in digital formats with little thought as to their preservation. Though the Library of Congress has a significant Architecture, Design and Engineering collection, this is a community that overlaps little with our own. But he saw the intersection between the Recommended Formats and the needs of his own field and he came to us to see how the Recommended Formats might relate to digital files and data produced within the fields of architecture, design and engineering and how they might help encourage preservation of those creative works as well. This, in turn, led to the addition of Industry Foundation Classes — a data model developed to facilitate interoperability in the building industry — to the Statement. We hope it will lead to future interest, not simply from the architectural community but from any community of creators of digital content who wish their creations to last and to remain useful.

We have committed to an annual review and revision of the Recommended Formats Statement to ensure its usefulness to as wide a spectrum of stakeholders as possible. In doing so, we hope to encourage others to offer their knowledge and to prevent the Statement from falling out of sync with the technical realities of the world of digital creation. As we progress down this path, one of the benefits is that the changes each year to the hierarchies of technical characteristics and metadata become fewer and fewer. More and more stakeholders have provided their input already and, happily, the details of how digital content is created are not so revolutionary as to need to be completely rewritten annually. This allows for a sense of stability in the Statement without a sense of inertia. It also allows us to engage with types of digital creation that we might not have addressed as closely or directly as possible. This is proving to be the case with digital architectural plans and it is proving to be even more the case with the biggest change to the Recommended Formats with this new edition: the inclusion of websites as a category of creative content.

At the time of the launch of the first iteration of the Recommended Formats Statement, websites per se were not included as a category of creative content. This omission was the result of various concerns and perspectives held then but there was no gainsaying that it was definitely an omission. Of all the types of digital works, websites are probably the most open to creation and dissemination and probably the most common digital works available to users, but also not something that content creators have tended to preserve.

Unsurprisingly, this also tends to make them the type of digital creation that causes the most concern to those interested in digital preservation. So when the Federal Web Archiving Working Group reached out about how the Recommended Formats Statement might be of use in furthering the preservation of websites, this filled a notable gap in the Statement.

Naturally, the new section of the Statement on websites is not being launched into a vacuum. The prevalence of websites and much of their development is predicated on the enhancement of the user experience, either in creating them or in using them, which is not the same as encouraging their preservation. It is made very clear that the Statement’s section on websites is focused specifically on the actions and characteristics that will encourage a website’s archivability and thereby its preservation and long-term use.

Nor does the Statement ignore the work that has been done already by other groups and other institutions to inform content creators of best practices for preservation-friendly websites, but instead builds upon them and links to them from the Statement itself. The intention of this section on websites is twofold. One is to provide a clear and simple reminder of the importance of considering the archivability of a website when creating it, not merely the ease of creating it and the ease of using it. The other is to bring together those simple actions along with links to other guidance in order to provide website creators with easy steps that they can take to ensure the works in which they are investing their time and energy can be archived and thereby continue to entertain, educate and inform well into the future.

As always, the completion of the latest version of the Recommended Formats Statement means the beginning of a new cycle, in which we shall work to make it as useful as possible. Having the community of stakeholders involved with digital works share a common commitment to the preservation and long-term access of those works will help ensure we succeed in saving these works for future generations.

So, use and share this version of the Statement and please provide any and all comments and feedback on how the 2016-2017 Recommended Formats Statement might be improved, expanded or used. This is for anyone who can find value in it; and if you think you can, we’d love to help you do so.

Equinox Software: Altoona Area Public Library Joins SPARK

Tue, 2016-07-26 17:21

FOR IMMEDIATE RELEASE

Duluth, Georgia–July 26, 2016

Equinox is proud to announce that Altoona Area Public Library was added to SPARK, the Pennsylvania Consortium overseen by PaILS.  Equinox has been providing full hosting, support, and migration to PaILS since 2013.  In that time, SPARK has seen explosive growth.  As of this writing, 105 libraries have migrated or plan to migrate within the next year.  Over 3,000,000 items have circulated in 2016 to over 550,000 patrons.  We are thrilled to be a part of this amazing progress!

Altoona went live on June 16.  Equinox performed the migration and also provided training to Altoona staff.  They are the first of 8 libraries coming together into the Blair County Library System.  This is the first SPARK migration where libraries within the same county are on separate databases and are merging patrons and coming together to resource share within a unified system.  Altoona serves 46,321 patrons with 137,392 items.

Mary Jinglewski, Equinox Training Services Librarian, had this to say about the move:  “I enjoyed training with Altoona Area Public Library, and I think they will be a great member of the PaILS community moving forward!”

About Equinox Software, Inc.

Equinox was founded by the original developers and designers of the Evergreen ILS. We are wholly devoted to the support and development of open source software in libraries, focusing on Evergreen, Koha, and the FulfILLment ILL system. We wrote over 80% of the Evergreen code base and continue to contribute more new features, bug fixes, and documentation than any other organization. Our team is fanatical about providing exceptional technical support. Over 98% of our support ticket responses are graded as “Excellent” by our customers. At Equinox, we are proud to be librarians. In fact, half of us have our ML(I)S. We understand you because we *are* you. We are Equinox, and we’d like to be awesome for you. For more information on Equinox, please visit http://www.esilibrary.com.

About Pennsylvania Integrated Library System

PaILS is the Pennsylvania Integrated Library System (ILS), a non-profit corporation that oversees SPARK, the open source ILS developed using Evergreen Open Source ILS.  PaILS is governed by a 9-member Board of Directors. The SPARK User Group members make recommendations and inform the Board of Directors.  A growing number of libraries large and small are PaILS members.

For more information about PaILS and SPARK, please visit http://sparkpa.org/.

About Evergreen

Evergreen is an award-winning ILS developed with the intent of providing an open source product able to meet the diverse needs of consortia and high transaction public libraries. However, it has proven to be equally successful in smaller installations including special and academic libraries. Today, over 1400 libraries across the US and Canada are using Evergreen including NC Cardinal, SC Lends, and B.C. Sitka.

For more information about Evergreen, including a list of all known Evergreen installations, see http://evergreen-ils.org.

LibUX: Library Services for People with Memory Loss, Dementia, and Alzheimers

Tue, 2016-07-26 15:43

Sarah Houghton (@TheLiB) summarizes what her team has learned about serving older adults with memory issues. We can make accommodations in our design, too. In May, Laurence Ivil and Paul Myles wrote Designing A Dementia-Friendly Website, which makes the point that

An ever-growing number of web users around the world are living with dementia. They have very varied levels of computer literacy and may be experiencing some of the following issues: memory loss, confusion, issues with vision and perception, difficulties sequencing and processing information, reduced problem-solving abilities, or problems with language. Just when we thought we had inclusive design pegged, a completely new dimension emerges.

I think specifically their key lessons about layout and navigation are really good.

What’s more, as patrons these people may be even more vulnerable because, as Sarah says, libraries are trusted entities. So these design decisions demand even greater consideration.

Libraries are uniquely positioned to see changes in our regular users. We have people who come in all the time, and we can see changes in their behavior, mood, and appearance that others who see them less often would never recognize. Likewise, libraries and librarians are trusted entities–you may have people being more open and letting their guard down with you in a way that lets you observe what’s happening to them more directly. Finally, people who work in libraries generally really care a lot about other people–and that in-built sensitivity and care can help when seeing a change in someone’s mental health and abilities.

– Sarah Houghton

Library Services for People with Memory Loss, Dementia, and Alzheimers

The post Library Services for People with Memory Loss, Dementia, and Alzheimers appeared first on LibUX.

District Dispatch: Coding at the library? Join the 2016 Congressional App Challenge

Tue, 2016-07-26 15:21

Last week marked the official start of the 2016 Congressional App Challenge, an annual nationwide event to engage student creativity and encourage participation in STEM (science, technology, engineering, and math) and computer science (CS) education. The Challenge allows high school students from across the country to compete against their peers by creating and exhibiting their software application (or app) for mobile, tablet, or computer devices. Winners in each district will be recognized by their Member of Congress. The Challenge is sponsored by the Internet Education Foundation and supported by ALA.

Why coding at the library? Coding could come across as the latest learning fad, but skills developed through coding align closely with core library activities such as critical thinking, problem solving, collaborative learning, and now connected learning and computational thinking. Coding in libraries is a logical progression in services for youth.

If you’ve never tried coding before, the prospect of teaching it at your library may seem daunting. But even a cursory scan of libraries across the country reveals that library professionals everywhere, at all levels of experience, are either teaching kids how to code or enabling it through the use of community volunteers. Teens and tweens are learning to code using LED lights and basic circuits, creating animated GIFs, and designing games using JavaScript and Python in CodeCombat, while the youngest learners are experiencing digitally enhanced storytime with apps and digital media at the Orlando (FL) Public Library. Kids at the Onondaga (NY) Public Library learn coding skills by developing a Flatverse game over the course of a four-day camp. Girls at the Gaithersburg (MD) Public Library are learning to code in “Girls Just Want to Compute,” a two-week camp for teen and tween girls. These programs and many others are a prime way to expose kids to coding and inspire them to want to keep learning.

The App Challenge can be another means to engage teens at your library. Libraries can encourage students to participate in the Challenge by having an App Challenge event: host an “App-a-thon,” have a game night for teens to work on their apps, or start an app-building club.

At the launch, over 140 Members of Congress from 38 states signed up to participate in the 2016 Congressional App Challenge.  Check to see if your district is participating and if not, you can use a letter template on the Challenge Website to send a request to your Member of Congress.

If you do decide to participate we encourage you to share what you’re doing using the App Challenge hashtag #HouseofCode and ALA’s hashtag #readytocode @youthandtech. The App Challenge runs through November 2. Look for more information throughout the competition.

The post Coding at the library? Join the 2016 Congressional App Challenge appeared first on District Dispatch.

David Rosenthal: The Citation Graph

Tue, 2016-07-26 15:00
An important point raised during the discussions at the recent JISC-CNI meeting is also raised by Larivière et al's A simple proposal for the publication of journal citation distributions:
However, the raw citation data used here are not publicly available but remain the property of Thomson Reuters. A logical step to facilitate scrutiny by independent researchers would therefore be for publishers to make the reference lists of their articles publicly available. Most publishers already provide these lists as part of the metadata they submit to the Crossref metadata database and can easily permit Crossref to make them public, though relatively few have opted to do so. If all Publisher and Society members of Crossref (over 5,300 organisations) were to grant this permission, it would enable more open research into citations in particular and into scholarly communication in general.

In other words, despite the importance of the citation graph for understanding and measuring the output of science, the data are in private hands, and are analyzed by opaque algorithms to produce a metric (journal impact factor) that is easily gamed and is corrupting the entire research ecosystem.

Simply by asking Crossref to flip a bit, publishers already providing their citations to Crossref can make them public, but only a few have done so.
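To see what that bit does in practice, you can query Crossref's public REST API for any article: when a publisher has opted in, the work's metadata includes a reference array listing its citations. The endpoint below is Crossref's real works API; the DOI is just a placeholder to substitute, and the check is a minimal sketch rather than a full audit:

  # Fetch a work's Crossref metadata and report whether its reference list is open.
  # Replace the placeholder DOI with one from the journal you want to inspect.
  curl -s "https://api.crossref.org/works/10.5555/12345678" |
    python -c "import sys, json; m = json.load(sys.stdin)['message']; print('reference list public:', 'reference' in m); print('deposited reference count:', m.get('reference-count', 'n/a'))"

If the publisher has not opted in, the reference array is simply absent even though the references were deposited, which is exactly the gap described here.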

Larivière et al's painstaking research shows that journal publishers and others with access to these private databases (Web of Science and Scopus) can use it to graph the distribution of citations to the articles they publish. Doing so reveals that:
the shape of the distribution is highly skewed to the left, being dominated by papers with lower numbers of citations. Typically, 65-75% of the articles have fewer citations than indicated by the JIF. The distributions are also characterized by long rightward tails; for the set of journals analyzed here, only 15-25% of the articles account for 50% of the citations

Thus, as has been shown many times before, the impact factor of a journal conveys no useful information about the quality of a paper it contains. Further, the data on which it is based is itself suspect:

On a technical point, the many unmatched citations ... that were discovered in the data for eLife, Nature Communications, Proceedings of the Royal Society: Biology Sciences and Scientific Reports raises concerns about the general quality of the data provided by Thomson Reuters. Searches for citations to eLife papers, for example, have revealed that the data in the Web of Science™ are incomplete owing to technical problems that Thomson Reuters is currently working to resolve. ...

Because the citation graph data is not public, audits such as Larivière et al's are difficult and rare. Were the data to be public, both publishers and authors would be able to, and motivated to, improve it. It is perhaps a straw in the wind that Larivière's co-authors include senior figures from PLoS, AAAS, eLife, EMBO, Nature and the Royal Society.

LITA: Stop Helping! How to Resist All of Your Librarian Urges and Strategically Moderate a Pain Point in Computer-Based Usability Testing

Tue, 2016-07-26 14:00

Editor’s note: This is a guest post by Jaci Paige Wilkinson.

Librarians are consummate teachers, helpers, and cheerleaders.  We might glow at the reference desk when a patron walks away with that perfect article or a new search strategy.  Or we fist pump when a student e-mails us at 7pm on a Friday to ask for help identifying the composition date of J.S. Bach’s BWV 433.  But when we lead usability testing that urge to be helpful must be resisted for the sake of recording accurate user behavior (Krug, 2000). We won’t be there, after all, to help the user when they’re using our website for their own purposes.

What about when a participant gets something wrong or gets stuck?  What about a nudge? What about a hint?  No matter how much the participant struggles, it’s crucial for both the testing process and the resulting data that we navigate these “pain points” with care and restraint.  This is  particularly tricky in non-lab, lightweight testing scenarios.  If you have only 10-30 minutes with a participant or you’re in an informal setting, you, as the facilitator, are less likely to have the tools or the time to probe an unusual behavior or a pain point (Travis, 2014).  However, pain points, even the non-completion of a task, provide insight.  Librarians moderating usability testing must carefully navigate these moments to maximize the useful data they provide.  

How should we move the test forward without helping but also without hindering a participant’s natural process?  If the test in question is a concurrent think-aloud protocol, you, as the test moderator, are probably used to reminding participants to think out loud while they complete the test.  Those reminders sound like “What are you doing now?”, “What was that you just did?”, or “Why did you do that?”.  Drawing from moderator cues used in think aloud protocols, this article explains four tips to optimize computer-based usability testing in those moments when a participant’s activity slows, or slams, to a halt.

There are two main ways for the tips described below to come into play.  Either the participant specifically asks for help or you intervene because of a lack of progress.  The first case is easy because a participant self-identified as experiencing a pain point.  In the second case, identify indicators that this participant is not moving forward or they are stalling: they stay on one page for a period of time or they keep pressing the back button.  One frequently observed behavior that I never interfere with is when a participant repeats a step or click-path even when it didn’t work the first time.  This is a very important observation for two reasons: first, does the participant realize that they have already done this?  If so, why does the participant think this will work the second time?  Observe as many useful behaviors as possible before stepping in.  When you do step in, use these tips in this order:  

ASK a participant to reflect on what they’ve done so far!

Get your participant talking about where they started and how they got here. You can be as blunt as: “OK, tell me what you’re looking at and why you think it is wrong.” This particular tip has the potential to yield valuable insights. What did the participant THINK they were going to see on the page, and what do they think this page is now? When you look at this data later, consider what it says about the architecture and language of the pages this participant used. For instance, why did she think the library hours would be on the “About” page?

Notice that nowhere have I mentioned using the back button or returning to the start page of the task.  This is usually the ideal course of action; once a user goes backwards through his/her clickpath he/she can make some new decisions.  But this idea should come from the user, not from you.  Avoid using language that hints at a specific direction such as “Why don’t you back up a couple of steps?”  This sort of comment is more of a prompt for action than reflection.         

Read the question or prompt again! Then ask the participant to pick out key words in what you read that might help them think of different ways to conquer the task at hand.

“I see you’re having some trouble thinking of where to go next.  Stop for one moment and listen to me read the question again”.  An immediate diagnosis of this problem is that there was jargon in the script that misdirected the participant.  Could the participant’s confusion about where to find the “religion department library liaison” be partially due to that fact that he had never heard of a “department library liaison” before?  Letting the participant hear the prompt for a second or third time might allow him to connect language on the website with language in the prompt.  If repetition doesn’t help, you can even ask the participant to name some of the important words in the prompt.   

Another way to assist a participant with the prompt is to provide him with his own script. You can also ask him to read each task or question out loud: in usability testing, it has been observed that this direction “actually encouraged the ‘think aloud’ process” that is frequently used (Battleson et al., 2001). The think aloud process and its “additional cognitive activity changes the sequence of mediating thoughts. Instructions to explain and describe the content of thought are reliably associated with changes in ability to solve problems correctly” (Ericsson & Simon, 1993). Reading the prompt on a piece of paper with his own eyes, especially in combination with hearing you speak the prompt out loud, gives the participant multiple ways to process the information.

Choose a Point of No Return and don’t treat it as a failure.

Don’t let an uncompleted or unsuccessful task tank your overall test. Wandering off with the participant will turn the pace sluggish and reduce the participant’s morale. Choose a point of no return. Have an encouraging phrase at the ready: “Great! We can stop here, that was really helpful. Now let’s move on to the next question.” There is an honesty to that phrasing: you demonstrate to your participant that what he is doing, even if he doesn’t think it is “right,” is still helpful. It is an unproductive use of your time, and his, to let him continue if you aren’t collecting any more valuable data in the process. The attitude cultivated at a non-completed task or pain point will definitely impact performance and morale for subsequent tasks.

Include a question at the end to allow the participant to share comments or feelings felt throughout the test.

This is a tricky and potentially controversial suggestion.  In usability testing and user experience, the distinction between studying use instead of opinion is crucial.  We seek to observe user behavior, not collect their feedback.  That’s why we scoff at market research and regard focus groups suspiciously (Nielsen, 1999).  However, I still recommend ending a usability test with a question like “Is there anything else you’d like to tell us about your experience today?” or “Do you have any questions or further comments or observations about the tasks you just completed?”  I ask it specifically because if there was one or more pain points in the course of a test, a participant will likely remember it.  This gives her the space to give you more interesting data and, like with tip number three, this final question cultivates positive morale between you and the participant.  She will leave your testing location feeling valued and listened to.

As a librarian, I know you were trained to help, empathize, and cultivate knowledge in library users.  But usability testing is not the same as a shift at the research help desk!  Steel your heart for the sake of collecting wonderfully useful data that will improve your library’s resources and services.  Those pain points and unfinished tasks are solid gold.  Remember, too, that you aren’t asking a participant to “go negative” on the interface (Wilson, 2010) or manufacture failure, you are interested in recording the most accurate user experience possible and understanding the behavior behind it.  Use these tips, if not word for word, then at least to meditate on the environment you curate when conducting usability testing and how to optimize data collection.    

 

Bibliography

Battleson, B., Booth, A., & Weintrop, J. (2001). Usability testing of an academic library web site: A case study. The Journal of Academic Librarianship, 27(3), 188-198.

Ericsson, K. A., & Simon, H. A. (1993). Protocol analysis: Verbal reports as data. MIT Press.

Nielsen, J. (1999, December 12). Voodoo usability. Nielsen Norman Group. https://www.nngroup.com/articles/voodoo-usability/

Travis, D. (2014, October 12). 5 provocative views on usability testing. UserFocus. http://www.userfocus.co.uk/articles/5-provocative-views.html

Wilson, M. (2010, May 25). Encouraging negative feedback during user testing. UX Booth. http://www.uxbooth.com/articles/encouraging-negative-feedback-during-user-testing/

Islandora: Dispatches from the User List: new tools for creating and ingesting derivatives outside of a production Islandora

Tue, 2016-07-26 13:48
This week's blog is another visit to the user listserv to highlight something really great you may have missed if you are not a subscriber. We're bringing you a single entry this time around, from Mark Jordan (from Simon Fraser University and Chairman of the Islandora Foundation when he's not busy writing new modules):

Coming out of the last DevOps Interest Group call, which helped me focus some ideas we've been throwing around for a while here at SFU, and on the heels of an incredible iCamp, which demonstrated once again that our community is extraordinarily collaborative and supportive, I've put together two complementary Islandora modules intended to help address an issue many of us face: how to scale large ingests of content into Islandora. The two modules are:

  • https://github.com/mjordan/islandora_dump_datastreams
  • https://github.com/mjordan/islandora_batch_with_derivs

The first one writes out each object's datastreams onto the server's filesystem, and the second provides a drush command that allows batch ingests to bypass the standard time-consuming derivative creation process for images, PDFs, and other types of objects.

Batch ingests that are run with Islandora's "Defer derivative generation during ingest" configuration option enabled (which means that derivative creation is turned off) are hugely faster than batch ingests run with derivative generation left on. In particular, generating OCR from images can be very time consuming, not to mention generating web-friendly versions of video and audio files. There are a number of ways to generate derivatives independently of Islandora's standard ingestion workflow, such as UPEI's Taverna-based microservices and several approaches taken by DiscoveryGarden. The currently-on-hiatus Digital Preservation Interest Group spent some time thinking about generating derivatives, and part of that activity compelled me to produce a longish "discussion paper" on the topic. Islandora CLAW is being built with horizontal scaling as a top-tier feature, but for Islandora 7.x-1.x, we're stuck for the moment with working around the problem of scaling ingestion.

The approach taken by the two new modules I introduce here is based on the ability to generate derivatives outside of a production Islandora instance, and then ingest the objects with all their datastreams into the production Islandora. This approach raises the question of where to generate those derivatives. The answer is "in additional Islandora instances." The Islandora Vagrant provides an excellent platform for doing this. Capable implementers of Islandora could set up 10 throw-away Vagrants (a good use for out-of-warranty PCs?) running in parallel to generate derivatives for loading into a single production instance. All that would be required is to enable the Islandora Dump Datastreams module on the Vagrants and configure it to save the output from each Vagrant to storage space accessible to the production instance. When all the derivatives have been generated on the Vagrants, running the drush command provided by Islandora Batch with Derivatives on the production instance (with "Defer derivative generation during ingest" enabled, of course) would ingest the full objects in a fraction of the time it would take to have the single production Islandora generate all the derivatives by itself.

Islandora Batch with Derivatives is not the first module to allow the ingestion of pregenerated derivatives. The Islandora Book Batch and Islandora Newspaper Batch modules have had that feature for some time. During SFU's recent migration of around 900,000 pages of newspapers, we saved months of ingestion time because we pulled all our derivatives for the newspaper pages out of CONTENTdm, and then, with "Defer derivative generation during ingest" enabled, batch ingested full newspaper issue packages, with all derivatives in place. Running OCR on our server for that many pages would not have been practical. All the Islandora Batch with Derivatives module linked above does is let you do that with basic images, large images, PDFs, videos, and audio objects too.

I've mentioned the Move to Islandora Kit a few times in the past in these user groups; since the Open Repositories conference, we've added functionality to it to support migrating to Islandora from OAI-PMH compliant platforms. At SFU, we are developing workflows that combine MIK with the approach I describe above as we move on to post-migration ingests of content.

If you foresee a large migration to Islandora in your short-term future, or are planning to ingest an especially large collection of objects and are looking for ways to speed up the process, introduce your project here on the user groups so that we can share knowledge and tools. If you're waiting for CLAW to migrate to Islandora, help push things along by writing up some migration use cases or by getting involved in the Fedora Performance and Scalability Group.
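For readers who want to see what that production-side step looks like, here is a rough sketch of the drush invocation. The command and option names below are assumptions based on the islandora_batch_with_derivs README and the standard Islandora Batch ingest command, so verify them against the module's documentation before relying on them; the path and namespace are placeholders:

  • drush --user=admin islandora_batch_with_derivs_preprocess --key_datastream=MODS --scan_target=/path/to/preprocessed_objects --namespace=islandora
  • drush --user=admin islandora_batch_ingest

The first command queues the objects found in the shared output directory (each object's datastreams having been written there by Islandora Dump Datastreams on the Vagrant instances), and the second ingests the queued batch; with "Defer derivative generation during ingest" enabled, no derivatives are regenerated on the production server.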

Open Knowledge Foundation: Sinar Project in Malaysia works to open budget data at all levels of government

Tue, 2016-07-26 11:02

“Open Spending Data in Constrained Environments” is a project led by Sinar Project in Malaysia aimed at exploring ways of making critical information public and accessible to Malaysian citizens. The project is supported by the Open Data for Development programme and has been run in collaboration with Open Knowledge International & OpenSpending.

In Malaysia, fiscal information exists at all three levels of government: the federal, the state and the municipal. There are complicated relationships and laws that dictate how budgets flow through the different levels of government and, as the information is not published as open data by any level of government, it is incredibly challenging for citizens to understand and track how public funds are being spent. This lack of transparency creates an environment for potential mismanagement of funds and facilitates corruption.

Earlier this year, the prime minister of Malaysia, Dato’ Seri Najib Razak, announced the revised budget for 2016 in response to slow economic growth resulting from declining oil and commodity prices coupled with stagnant demand from China. As a result, it was paramount to restructure the 2016 federal budget in order to find savings of US $2.1 billion. That will make it possible for the government to maintain its 2016 fiscal budget target at 3.1 percent of the country’s GDP. One of the biggest cuts in the revised 2016 budget is to public scholarships for higher education.

“Higher education institutions had their budget slashed by RM2.4 billion (US$573 million), from RM15.78 billion (US$3.8 billion) in 2015 to RM13.37 billion (US$3.2 billion) for the year 2016.” – Murray Hunter, Asian Correspondent

When numbers get this big, it is often difficult for people to understand what the real impact and implications of these cuts are going to be on the service citizens depend on. While it is the role of journalists and civil society to act as an infomediary and relay this information to citizens, without access to comprehensive, reliable budget and spending data it becomes impossible for us to fulfil our civic duty of keeping citizens informed. Open budget and spending data is vital in order to demonstrate to the public the real life impact large budget cuts will have. Over the past few months, we have worked on a pilot project to try to make this possible.

While the federal budgets that have been presented to Parliament are accessible on the Ministry of Finance website, we were only able to access state and municipal government budgets by directly contacting state assemblymen and local councillors.

Given this lack of proactive transparency and limited mechanisms for reactive transparency, it was necessary to employ an alternative mechanism to hold governments accountable. In this case, we decided to conduct a social audit.

Kota Damansara public housing. Credit: Sze Ming

Social audits are mechanisms in which users collect evidence to publicly audit, as a community, the provision of services by government. One essential component of a social audit is taking advantage of the opportunity to work closely with communities in order to connect and empower traditionally disenfranchised communities.

Here in Malaysia, we started our social audit work by conducting several meetings with communities living in public housing in Kota Damansara, a town in the district of Petaling Jaya in Selangor State, in order to gain a better understanding of the challenges they were facing and to map these issues against various socio-economic and global development indicators.

Then, we conducted an urban poverty survey in which we managed to collect essential data on 415 residents from 4 blocks in Kota Damansara public housing. This urban poverty survey covered several indicators that were able to tell us more about the poverty rate, the unemployment rate, the child mortality rate and the literacy rate within this community. From the preliminary results of the survey, we have found that all residents are low income earners, currently living under the poverty line. These findings stand in contrast to the answer to a question asked in Parliament last year on the income distribution of the nation’s residents, where it was declared that there is a decrease of about 0.421% of people in poverty in Malaysia. Moreover, in order for citizens to hold the Selangor state government accountable, civil society could use this data as evidence to demand that allocated budgets are increased in order to give financial/welfare support to disenfranchised communities in Kota Damansara public housing.

What’s next? In order to measure the impact of open data and the social audit, we are planning a follow-up round of urban poverty surveys. Since the upcoming general elections will be held in 2018, the follow-up surveys will be conducted every 4 months after the first survey, in order to document whether there are any changes or improvements made by decision makers towards better policies in the respective constituency and better budget priorities that match the proposed/approved public policies.

 

DuraSpace News: REGISTER for Fedora Camp in NYC

Tue, 2016-07-26 00:00

Austin, TX  The Fedora Project is pleased to announce that Fedora Camp in NYC, hosted by Columbia University Libraries, will be offered at Columbia University’s Butler Library in New York City November 28-30, 2016.

DuraSpace News: CALL for Expressions of Interest in Hosting Annual Open Repositories Conference, 2018 and 2019

Tue, 2016-07-26 00:00

From William Nixon and Elin Stangeland for the Open Repositories Steering Committee

Glasgow, Scotland  The Open Repositories Steering Committee seeks Expressions of Interest (EoI) from candidate host organizations for the 2018 and 2019 Open Repositories Annual Conference series. The call is issued for two years this time to enable better planning ahead of the conferences and to secure a good geographical distribution over time. Proposals from all geographic areas will be given consideration. 

LITA: Call for Nominations: LITA Top Tech Trends Panel at ALA Midwinter 2017

Mon, 2016-07-25 20:08

It’s that time of year again! We’re asking for you to either nominate yourself or someone you know who would be a great addition to the panel of speakers for the 2017 Midwinter Top Tech Trends program in Atlanta, GA.

LITA’s Top Trends Program has traditionally been one of the most popular programs at ALA. Each panelist discusses two trends in technology impacting libraries and engages in a moderated discussion with each other and the audience.

Submit a nomination at: http://bit.ly/lita-toptechtrends-mw2017.  Deadline is Sunday, August 28th.

The LITA Top Tech Trends Committee will review each submission and select panelists based on their proposed trends, experience, and overall balance to the panel.

For more information about past programs, please visit http://www.ala.org/lita/ttt.

LITA: Call for Proposals, LITA @ ALA Annual 2017

Mon, 2016-07-25 17:02

Call for Proposals for the 2017 Annual Conference Programs and Preconferences!

The LITA Program Planning Committee (PPC) is now accepting innovative and creative proposals for the 2017 Annual American Library Association Conference.  We’re looking for full or half day pre-conference ideas as well as 60- and 90-minute conference presentations. The focus should be on technology in libraries, whether that’s use of, new ideas for, trends in, or interesting/innovative projects being explored – it’s all for you to propose.

When and Where is the Conference?

The 2017 Annual ALA Conference will be held  in Chicago, IL, from June 22nd through 27th.

What kind of topics are we looking for?

We’re looking for programs of interest to all library/information agency types that inspire technological change and adoption, and/or generally go above and beyond the everyday.

We regularly receive many more proposals than we can program into the 20 slots available to LITA at the ALA Annual Conference. These great ideas and programs all come from contributions like yours. We look forward to hearing the great ideas you will share with us this year.

This link from the 2016 ALA Annual conference scheduler shows the great LITA programs from this past year.

When are proposals due?

September 9, 2016

How do I submit a proposal?

Fill out this form: bit.ly/litacfpannual2017

Program descriptions should be 150 words or less.

When will I have an answer?

The committee will begin reviewing proposals after the submission deadline; notifications will be sent out on October 3, 2016.

Do I have to be a member of ALA/LITA? or a LITA Interest Group (IG) or a committee?

No! We welcome proposals from anyone who feels they have something to offer regarding library technology. Unfortunately, we are not able to provide financial support for speakers. Because of the limited number of programs, LITA IGs and Committees will receive preference where two equally well written programs are submitted. Presenters may be asked to combine programs or work with an IG/Committee where similar topics have been proposed.

Got another question?

Please feel free to email Nicole Sump-Crethar (PPC chair) (sumpcre@okstate.edu)

LITA: To LISTSERV or to Not LISTSERV

Mon, 2016-07-25 14:23

Beginning in August 2016, the Special Libraries Association (SLA) is discontinuing its traditional discussion-based listserv in favor of a new service: SLA Connect. If you click through to the post on Information Today, Inc., you can see the host of services, tools, and enhancements that moving to SLA Connect provides for SLA members. However, change is difficult and this change caught a number of members by surprise. We all know how difficult it is to communicate change to patrons. It’s no easier with fellow professionals.

The rollout was going to start July 1, 2016 but got pushed back a month because of member feedback. Since this is technology, of course there were issues with the new server, so some services that were scheduled for a slower transition got moved more quickly and old platforms were shut down. The whole enterprise is a complete change to how people were used to communicating with fellow SLA professionals. Small changes are hard, wholesale changes even more so. It looks like the leaders of SLA have a good plan in mind and are listening to member feedback, which is great.

We recently went through a transition here in WI where the state-wide public library listserv was transitioned to Google+. The Department of Public Instruction (DPI) did a good job in getting the message out to people but the decision was not popular. I came to the discussion late because historically I would check in with broader reach listservs (CODE4LIB, LITA, WISPUBLIB, Polaris, etc.) about once a month. Sometimes even less frequently. We have local listservs that I check on a daily basis, but those impact my job directly.

I wasn’t thrilled about the move to Google+ for a few reasons. First, while I had a Google account, I try to keep my personal and work lives separated. This would mean creating a new Google account to use with work. Which meant all the work needed with setting up a new account and making sure that I’m checking it on a regular basis. Second, the thing I like about an email listserv is that I can create a rule to move all the messages into a folder and then when I scan the folder I can see which subjects had the most discussion. That disappears using Google+. I can get the initial post sent to my inbox but any follow-up posts/discussion doesn’t show up there.

This was a problem since instead of seeing twenty messages on a subject I’d now see one. I’d have to launch that message in Google+ to see whether or not people were talking about it. It’s also a problem as the new platform was not getting the traffic the traditional email listserv got so a lot of the state-wide community knowledge was not being shared. It’s getting better and DPI is doing a great job in leading the initiative for discussions. It doesn’t have the volume it used to, but it’s improving.

I needed to figure out a way to make myself check the Google+ discussions with more regularity. In comes Habitica. Our own inestimable Lindsay Cronk wrote about Habitica back in February. Habitica gamifies your to-do list. You create a small avatar and work your way through leveling him/her up to become a more powerful character. There are three basic categories: habits, dailies, and to-dos. Habits are things to improve yourself. For me it’s things like hitting my step count for the day or not drinking soda. There can be a positive and/or negative effect for your habits. You can lose health. Your little character can die. To-dos are traditional to-do list things. You can add due dates, checklists, all sorts of things. Dailies are things you have to do on a regular basis.

This is where Habitica helps me most. I have weekly reminders to check my big listservs including DPI’s Google+ feed. I have daily reminders to check in with the new supervisors who report to me. These are all things that I should be doing anyway, but it’s a nice little reminder when I get bogged down in a task to take a break and get something checked off my list. I’ve set these simple dailies at the ‘trivial’ difficulty level so I’m not leveling up my character too quickly. I’m currently a 19th level fighter on Habitica but there are still times when my health gets really low. More importantly, it’s kept me on top of my listservs and communication with fellow professionals in a way that I was not doing of my own volition.

What’s your favorite way to keep on top of communication with fellow professionals?

OCLC Dev Network: WorldCat Knowledge base API Outage

Mon, 2016-07-25 14:00

The WorldCat Knowledge base is currently experiencing issues where all requests to the API are failing.

LITA: Digital Displays on a Budget: Hardware

Mon, 2016-07-25 12:00

 

Introduction

At the JPL Library we recently remodeled our collaborative workspace. This process allowed us to repurpose underutilized televisions into digital displays. Digital displays can be an effective way to communicate key events and information to our patrons. However, running displays has usually required either expensive hardware (installing new cables to tap into local media hosts) or software (Movie Maker, 3rd Party software), sometimes both. We had the displays ready but needed cost effective solutions for hosting and creating the content. Enter Raspberry Pi and a movie creator that can be found in any Microsoft Office Suite purchased since 2010… Microsoft PowerPoint.

In this post I will cover how to select, setup, and install the hardware. The follow up post will go over the content creation aspect.

Hardware Requirements

Displays

Luckily for us, this part took care of itself. If you need to obtain a display, I have two recommendations:

  • Verify the display has a convenient HDMI port. You are looking for a port that allows you to discreetly tuck the Raspberry Pi behind the display. Additionally, the port should be easily accessible if the need arises to swap out HDMI cables.
  • Opt for a display that is widescreen capable (16:9 aspect ratio). This will provide you with a greater canvas for your content. Whatever aspect ratio you decide upon, make sure your content settings match. This graphic sums up the difference between the aspect ratios of widescreen and standard (4:3 aspect ratio).
Raspberry Pi

Description

There are plenty of blog posts and documentation that cover the basics of what the Raspberry Pi is and what it is fully capable of. In short, you can think of it as a miniature, inexpensive computer. For this project we are interested in its price point, native movie player, and operating system customization prowess.

Selection

Devices

There are three main iterations available for purchase:

Obviously I would recommend the Pi 3, which was just released in late February, over the rest. All three are capable of running HD quality videos, but the Pi 3 will definitely run smoother. Also, the Pi 3 has on-board Wi-Fi and Bluetooth connectivity; on previous versions this required purchasing add-ons that used up USB slots.

However, these prices are only for the computer itself. You would still need, at a minimum, an SD card to store the operating system and files, a power adaptor, a keyboard and mouse, and an HDMI cable. The only advantage of selecting the Pi 2 is that there are several pre-selected bundles created by 3rd party sellers that can lower the costs. Make sure to check the bundle details to confirm it contains the Raspberry Pi iteration that you want.

Bundles

Here are some recommended bundles that contain all you need (minus keyboard and mouse) for this project:

Keyboard & Mouse

Most USB keyboards and mice will work with a Pi but opt for simple ones to avoid drawing too much power from it. If you do not have a spare one consider this Bluetooth Keyboard and Mouse Touchpad. The touchpad is a bit wonky but it’ll get the job done and the portability is worth it.

Physical Setup

Getting the Raspberry Pi ready to boot is fairly easy. We just need to plug in the power supply, insert the Micro SD card with the operating system, and attach a display. Granted, this all just gets you to a basic screen with the Pi awaiting instructions. A mouse, keyboard, and network connection are pretty much required for setting up the Pi software in order to get the device into a usable state.

Software Setup

The program we use is the Raspberry Pi Video Looper. This setup works exactly how it sounds: the Raspberry Pi plays and loops videos. However, before we can install that we need to get the Raspberry Pi up and running with the latest Raspbian operating system.

Installing Raspbian

Using a personal SD card

If you decided to use your own SD card, see this guide on how to get up and running.

Using NOOBS

If you bought a bundle, chances are that it came with a Micro SD Card pre-loaded with NOOBS (New Out of Box Software). With NOOBS we can just boot up the Pi and select Raspbian from the first menu. Make sure to also change the Language and Keyboard to your preferred settings, such as English (US) and us.

Once you hit Install, the NOOBS software will do its thing. Grab a cup of coffee or walk the dog as it will take a bit to complete the install. After installation the Pi will reboot and load up Raspi-config to let you adjust settings. There is a wide range of options but the two that should be adjusted right now are:

  1. Change User Password
  2. SSH – If you want remote access, you will need to enable SSH. For more information on this option, see the Raspberry Pi documentation.

After adjusting the settings, the Pi will boot into the desktop environment. Because the NOOBS version loaded onto the card might be dated, the next step is to update the firmware and packages. To do this, open a terminal from the menu and type in the following commands:

  • sudo apt-get update
  • sudo apt-get upgrade
  • sudo rpi-update
  • sudo reboot

Once the Pi reboots we can continue to the next phase, installing the video looper.

Installing Video Looper

For a complete guide on installing and adjusting the Video Looper, see Adafruit’s Raspberry Pi Video Looper documentation. In short, the installation process amounts to just three terminal commands:
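The commands themselves are not reproduced here, so the list below is a sketch based on Adafruit's pi_video_looper repository rather than a quote from the post; check the linked documentation for the current steps:

  • git clone https://github.com/adafruit/pi_video_looper.git
  • cd pi_video_looper
  • sudo ./install.sh

The install script fetches its own dependencies (the video player and supporting packages), which is why the next step takes a few minutes.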

After a few minutes the install is complete and the Video Looper is good to go! If you do not have any movies loaded, your Pi will now display “Insert USB drive with compatible movies”. Inserting a USB drive into the Pi will initiate a countdown followed by video playback.

Using Video Looper

Now that the Pi is all set, you can load your videos onto a USB stick and the Looper will take care of the rest. The Video Looper is quite versatile and can display movies in the following formats:

  • AVI
  • MOV
  • MKV
  • MP4
  • M4V

If your Pi fails to read the files on the USB drive, try loading them on another. I had several USB sticks that I went through before it read the files. Sadly, most of the vendor USB stick freebies were incompatible.

Lastly, the Video Looper has a few configuration options that you can adjust to best fit your needs. Of those listed in the documentation, I would recommend adjusting the file location (USB stick vs. on the Pi itself) and the video player, the latter only being relevant if you cannot live with the loop delay between movies.
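For reference, both of those options live in the looper's configuration file. The snippet below is a sketch from memory of the Adafruit documentation; treat the file path, section names, and keys as assumptions and verify them against the current guide before editing:

  # /boot/video_looper.ini (path and key names assumed; see the Adafruit guide)
  [video_looper]
  # Where to look for movies: usb_drive (the default) or directory (files stored on the Pi itself)
  file_reader = usb_drive
  # Which player to use: omxplayer handles the most formats; hello_video loops more seamlessly
  video_player = omxplayer

  [directory]
  # Only used when file_reader is set to directory
  path = /home/pi/videos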

Unit Install

After the Video Looper setup we can now install the unit behind the display. We opted to attach the device using Velcro tape and a 0.3m flat HDMI cable. Thanks to the Velcro, I can remove and reattach the Pi as needed. The flat HDMI cable reduces the need for cable management. The biggest issue we had was tucking away the extra cable from the power supply, which a few well placed Velcro ties took care of. Velcro, is there anything it can’t solve?

Wrap Up

Well if you’ve made it this far I hope you are on your way to creating a digital display for your institution. In my next post I will cover how we used Microsoft PowerPoint to create our videos in a quick and efficient manner.

The Raspberry Pi is a wonderful device, so even if the Video Looper setup fails to live up to your needs, you can easily find another project for it to handle. May I suggest the Game Boy Emulator?

Open Knowledge Foundation: Towards the formation of the Third Greek OGP Action Plan: Open Knowledge Greece makes three commitments

Mon, 2016-07-25 09:37

This blog post was written by Olga Kalatzi from OK Greece

On the 5th of July in Athens, the open dialogue on Greece’s Third National Action Plan for the Open Government Partnership commenced, and Open Knowledge Greece presented its three commitments for the plan.

The commitments of OK Greece included School of Data for public servants, the Open Data Index for cities and local administrations and linked open and participatory budgets. All of them come with implementation resources and timetables and satisfy all the OGP principles.

The event was supported by the Bodossaki Foundation, and a range of stakeholders participated: OK Greece, Openwise (IRM), Gov2u, GFOSS, Vouliwatch, diaNEOsis, as well as experts from the OGP Support Unit and Mrs. Nancy Routzouni, advisor on e-Government to the Alternate Minister for Administrative Reform.

OK Greece was represented at the event by its President, Dr. Charalampos Bratsas, and Marinos Papadopoulos, while the OK Greece OGP team in Thessaloniki participated remotely via Skype.

Tonu Basu from OGP Support Unit said that “Staff from the OGP Support Unit had some very productive meetings with representatives from both government and civil society. We were greatly encouraged to see that civil society and government are taking concrete steps to collaborate among themselves and with each other through the development of collaborative networks. Civil society and government collaboration is the key to the strengthening of the OGP process and to establishing a strong culture of a transparent, accountable, and responsive government”.

The discussion focused on improving the third action plan and on the importance of collaboration between civil society and government in promoting and strengthening open governance and transparency in Greece.

“The Bodossaki Foundation participates actively in the formation of the Third Action Plan, aiming to develop and act as an intermediary between civil society bodies and this cause. The goal is the formation of the action plan with the participation of civil society and its successful implementation through monitoring and evaluation,” comments Fay Koutzoukou, Deputy Program Director.

Among the challenges addressed in the meeting, particular attention was given to the limited ownership that civil society groups have taken in the formation and implementation of the action plan, which holds the process back. The civil society organizations that participated suggested monitoring the process more closely through regular meetings and assigning specific commitments that engage both citizens and government.

According to experts from the OGP Support Unit, some of the potential commitments of the action plan (which cover issues such as subnational government, open education, open justice, parliament and administrative reform) could, if implemented as scheduled, position Greece as a regional and global leader among the 70 OGP countries.

Nancy Routzouni, advisor on e-Government to the Alternate Minister for Administrative Reform, concluded the event by saying: “We are very pleased to work and collaborate with civil society bodies as their ideas, knowledge, and feedback are crucial in the process of forming the national action plan”.

The third National OGP action plan had been discussed and approved by the Parliament last week, where the commitments by OK Greece were mentioned, as Nancy Routzouni said at the event in Athens.


Access Conference: We’re looking for Access 2017 hosts!

Fri, 2016-07-22 19:59

The Access 2016 Planning Committee is now accepting proposals from institutions and groups to host Access 2017! Bring Canada’s leading library tech conference (not to mention one of the best conference audiences to be found ANYWHERE) to your campus or city!

Interested? Submit your proposal to accesslibcon@gmail.com, including:

  • The host organization(s) name
  • Proposed dates
  • The location where the event will likely be held (e.g. campus facility, hotel name, etc.)
  • Considerations noted in the hosting guidelines
  • Anything else to convince us that you would put on another fabulous Access conference!

Proposals will be accepted until September 2, 2016. The 2017 hosts will be selected by the 2016 Planning Committee, and notified in early September. The official announcement will be made on October 7th at Access 2016 in Fredericton!

Questions? Let us know at accesslibcon@gmail.com!

District Dispatch: Libraries cheerlead for fair copyright

Fri, 2016-07-22 18:11

I’ve written a lot about 3D printing on the District Dispatch. One of the most unlikely topics I’ve discussed in connection with this technology is cheerleading… That’s right. If you’re a loyal DD reader, think back to May. If you’re hearing bells ring, that’s because, in the first week of that month, I outlined a court case between two manufacturers of cheerleading uniforms. The case pits international supplier Varsity Brands against the much smaller supplier Star Athletica. Varsity Brands is suing Star Athletica on the grounds that the latter’s uniforms infringe on its copyrighted designs. Even though copyright protects creative expression and cheerleading uniforms are fundamentally utilitarian, Varsity Brands’ argument rests on a liberal interpretation of something called “separability.” If a utilitarian item has creative elements that can be clearly separated from its core “usefulness,” it may receive copyright protection. Varsity Brands says that the stripes and squiggles in their uniform designs represent these sorts of elements.

Photo by Peter Griffin.

The courts are divided on this argument…But the U.S. Supreme Court has agreed to hear the case and give Varsity Brands, Star Athletica and copyright junkies everywhere a final and – hopefully – clarifying ruling. So, what the heck does this have to do with 3D printing? Actually, a lot. If the Supremes were to come down in favor of Varsity Brands’ interpretation of separability, they would set a dangerous precedent: any design that’s not 100 percent functional – i.e., has one or more elements with even a whit of creativity – might be protected by copyright. Imagine the fear of infringement this might instill in an avid “maker.”…It would likely be enough to hamstring his or her creative potential. Thankfully, the 3D printing community thought of this early.

As I mentioned in my last post about this case, industry players Shapeways, Formlabs and Matter and Form already submitted an amicus brief to the Supreme Court warning of the “chill” an overbroad interpretation of separability in the Varsity Brands case might place on 3D innovation. Believing as we do in the importance of creativity inside and beyond library walls, the library community has decided to pick up its pom-poms and stand alongside them. ALA, the Association of Research Libraries (ARL) and the Association of College and Research Libraries (ACRL) have signed onto a similar brief penned by the D.C.-based public policy organization Public Knowledge. The brief argues that: “…copyright in useful articles ought to continue to be highly limited, such that a feature of a useful article may be copyrighted only upon a clear showing that the feature is obviously separable and indisputably independent of the utilitarian aspects of the article.”

Our argument on this case is in keeping with one of the basic tenets of our efforts to promote public access to information: that copyright should be limited and promote progress and innovation. Lucky for us, we have the Constitution on our side.

The post Libraries cheerlead for fair copyright appeared first on District Dispatch.

District Dispatch: OITP welcomes new chair, Marc Gartler

Fri, 2016-07-22 14:57

Marc Gartler, new chair of OITP’s Advisory Committee

I’m pleased to announce that Marc Gartler is the new chair of the Advisory Committee for ALA’s Office for Information Technology Policy (OITP), as appointed by ALA President Julie Todaro. Marc succeeds Dan Lee of the University of Arizona, who served as OITP chair for two years. We are deeply grateful for Dan’s leadership and service to OITP and ALA.

Marc Gartler recently chaired OITP’s subcommittee on America’s Libraries in the 21st Century, served on the advisory committee for ALA’s Policy Revolution! initiative, and served on OITP’s Copyright Education subcommittee. Over the past few years he has participated in policy discussions with representatives from the FCC, Google, the Gates Foundation, and other organizations whose interests dovetail with those of ALA. OITP, ALA, and libraries have benefited from Marc’s counsel on diverse issues from copyright to maker spaces.

Marc manages two neighborhood libraries for Madison (Wisc.) Public Library, a recipient of the 2016 National Medal for Museum and Library Service. He leads one of the City of Madison’s Neighborhood Resource Teams, which coordinate local government services and develop relationships among City staff, neighborhood residents, and other stakeholders. A former college library director, Marc served as a consultant for the Ohio Board of Regents and Colorado Department of Higher Education. He is a graduate of the PLA Leadership Academy, and holds an MS in Library & Information Science from the University of Illinois at Urbana-Champaign and an MA in Humanities from the University of Chicago.

We look forward to working with Marc.

The post OITP welcomes new chair, Marc Gartler appeared first on District Dispatch.

Open Knowledge Foundation: Announcing IODC Unconference

Fri, 2016-07-22 09:21

We all know the feeling at the end of a conference when, after long days full of content, you leave with even more unanswered questions. Conferences are a great place for networking, learning about different topics and sharing achievements (and sometimes even failures), but by their nature they are organised in a way that is less an exchange of information and more a broadcast.

The organising committee of the International Open Data Conference is aware of this and is trying to find other ways to share ideas, without stages and slideshows, that can complement the main event. This is why one of the pre-events to the conference will be an unconference whose outputs will feed into the main event.

Post-its from the unfestival @okfest

An unconference is an open event that allows its members to propose their own topics for discussion. Just like last year, the unconference will enable people to discuss open data issues that are close to their heart with like-minded peers from across the world. We hope that by having an unconference, we can give voice to a broad range of different experiences and points of view.

We believe that this will help us ignite discussions and find new ways to continue the conversation during the conference. So even if you are not part of a panel in the main event, you can influence the IODC’s outcomes by participating in the unconference.

This year, Open Knowledge International will lead the unconference efforts for IODC, with the support of the IDRC, The Web Foundation, ILDA and Civica Digital, and we want to share every step of the way with you. The goals we have set are:

  • To offer a safe space that promotes understanding and experience sharing across the open data movement worldwide, with honest and open reflection on how we create change.
  • To initiate new relationships and build solidarity within the open data community.
  • To create an opportunity to dive deeper into topics and issues that are important to the community.

To do so, we want to invite you to take an active role in the running of the event. First, we need to hear from you in order to set the mood for the event. We have opened this forum category, and we look forward to seeing what kinds of topics can be explored during the unconference.

In the next couple of weeks, we will send more information and registration details. In the meantime, save the date: Tuesday, October 4th, at 9:30 at IFEMA, North Convention Centre.

We hope to see you there and share experiences!

