Planet Code4Lib - http://planet.code4lib.org

District Dispatch: Johnna Percell selected for 2015 ALA Google Policy Fellowship

Wed, 2015-04-22 18:59

Google Policy Fellow Johnna Percell.

Today, the American Library Association (ALA) announced that Johnna Percell will serve as its 2015 Google Policy Fellow. As part of her summer fellowship, Percell will spend ten weeks in Washington, D.C. working on technology and Internet policy issues. As a Google Policy Fellow, Percell will explore diverse areas of information policy, including copyright law, e-book licenses and access, information access for underserved populations, telecommunications policy, digital literacy, online privacy, and the future of libraries. Google, Inc. pays the summer stipends for the fellows and the respective host organizations determine the fellows’ work agendas.

Percell will work for the American Library Association’s Office for Information Technology Policy (OITP), a unit of the association that works to ensure the library voice in information policy debates and promote full and equitable intellectual participation by the public. Percell is a graduate student at the University of Maryland, pursuing a master’s degree in Library Science from the university’s College of Information Studies. She currently works as an intern at the District of Columbia Public Library in Washington, D.C. Percell completed her undergraduate education with a major in English at Harding University in Arkansas.

“ALA is pleased to participate for the eighth consecutive year in the Google Policy Fellowship program,” said Alan S. Inouye, director of the ALA Office for Information Technology Policy. “We look forward to working with Johnna Percell in finding new information policy opportunities for libraries, especially in the realm of services for diverse populations.”

Find more information about the Google Policy Fellowship Program.

The post Johnna Percell selected for 2015 ALA Google Policy Fellowship appeared first on District Dispatch.

District Dispatch: ALA says “NO!” to Section 215 reauthorization gambit

Wed, 2015-04-22 17:40

As both chambers of Congress prepare to take up and debate long-needed surveillance law reform, Senate Majority Leader Mitch McConnell’s (R-KY) bill (introduced late yesterday) to simply reauthorize the “library provision” (Section 215) of the USA PATRIOT Act until 2020, without change of any kind, was met today by a storm of opposition from leading privacy and civil liberties groups, with ALA in the vanguard.  In a statement released this morning, American Library Association (ALA) President Courtney Young said unequivocally of S. 1035:

“Nothing is more basic to democracy and librarianship than intellectual freedom. And, nothing is more hostile to that freedom than the knowledge that the government can compel a library—without a traditional judicial search warrant—to report on the reading and Internet records of library patrons, students, researchers and entrepreneurs. That is what Section 215 did in 2001 and what it still does today.

“The time is long past for Section 215 to be meaningfully reformed to restore the civil liberties massively and unjustifiably compromised by the USA PATRIOT Act. For libraries of every kind, for our hundreds of millions of users, ALA stands inimically against S. 1035 and the reauthorization of Section 215 without significant and urgently needed change.”

In the coming days and weeks the ALA Washington Office will be working intensively to fight for real changes to Section 215 and other provisions of the PATRIOT Act, but it will need the help of all librarians and library supporters to succeed.  Sign up now for the latest on ALA and its coalition partners’ efforts, and how you can help sway your Members of Congress when the time comes.  That will be very soon, so don’t wait!

 

The post ALA says “NO!” to Section 215 reauthorization gambit appeared first on District Dispatch.

Library of Congress: The Signal: Libraries Looking Across Languages: Seeing the World Through Mass Translation

Wed, 2015-04-22 13:32

The following is a guest post by Kalev Hannes Leetaru, Senior Fellow, George Washington University Center for Cyber & Homeland Security. Portions adapted from a post for the Knight Foundation.

Geotagged tweets November 2012 colored by language.

Imagine a world where language is no longer a barrier to information access, where anyone can access real-time information from anywhere in the world in any language, seamlessly translated into their native tongue, and where their voice is equally accessible to speakers of all the world’s languages. Authors from Douglas Adams to Ethan Zuckerman have long articulated such visions of a post-lingual society in which mass translation eliminates barriers to information access and communication. Yet, even as technologies like the web have broken down geographic barriers and increasingly made it possible to access information from anywhere in the world, linguistic barriers mean most of those voices remain steadfastly inaccessible. For libraries, mass human and machine translation of the world’s information offers enormous possibilities for broadening access to their collections. In turn, as there is greater interest in the non-Western and non-English world’s information, this should lead to a greater focus on preserving it akin to what has been done for Western online news and television.

There have been many attempts to make information accessible across language barriers using both human and machine translation. During the 2013 Egyptian uprising, Twitter launched live machine translation of Arabic-language tweets from select political leaders and news outlets, an experiment which it expanded for the World Cup in 2014 and made permanent this past January with its official “Tweet translation” service. Facebook launched its own machine translation service in 2011, while Microsoft recently unveiled live spoken translation for Skype. Turning to human translators, Wikipedia’s Content Translation program combines machine translation with human correction in its quest to translate Wikipedia into every modern language and TED’s Open Translation Project has brought together 20,000 volunteers to translate 70,000 speeches into 107 languages since 2009. Even the humanitarian space now routinely leverages volunteer networks to mass translate aid requests during disasters, while mobile games increasingly combine machine and human translation to create fully multilingual chat environments.

Yet, these efforts have substantial limitations. Twitter and Facebook’s on-demand model translates content only as it is requested, meaning a user must discover a given post, know it is of possible relevance, explicitly request that it be translated and wait for the translation to become available. Wikipedia and TED attempt to address this by pre-translating material en masse, but their reliance on human translators and all-volunteer workflows impose long delays before material becomes available.

Journalism has experimented only haltingly with large-scale translation. Notable successes such as Project Lingua, Yeeyan.org and Meedan.org focus on translating news coverage for citizen consumption, while journalist-directed efforts such as Andy Carvin’s crowd-sourced translations are still largely regarded as isolated novelties. Even the U.S. government’s foreign press monitoring agency draws nearly half its material from English-language outlets to minimize translation costs. At the same time, its counterterrorism division monitoring the Lashkar-e-Taiba terrorist group remarks of the group’s communications, “most of it is in Arabic or Farsi, so I can’t make much of it.”

Libraries have explored translation primarily as an outreach tool rather than as a gateway to their collections. Facilities with large percentages of patrons speaking languages other than English may hire bilingual staff, increase their collections of materials in those languages and hold special events in those languages. The Denver Public Library offers a prominent link right on its homepage to its tailored Spanish-language site that includes links to English courses, immigration and citizenship resources, job training and support services. Instead of merely translating their English site into Spanish wording, they have created a completely customized parallel information portal. However, searches of their OPAC in Spanish will still only return works with Spanish titles: a search for “Matar un ruiseñor” will return only the single Spanish translation of “To Kill a Mockingbird” in their catalog.

On the one hand, this makes sense: a search for a Spanish title likely indicates an interest in a Spanish edition of the book. But if no Spanish copy is available, it would be useful to at least notify the patron of copies in other languages, in case the patron can read one of them. Other sites, like the Fort Vancouver Regional Library District, use the Google Translate widget to perform live machine translation of their site. This has the benefit that when searching the library catalog in English, the results list can be viewed in any of Google Translate’s 91 languages. However, the catalog itself must still be searched in English or in the language in which a title was published, so this only solves part of the problem.

In fact, the lack of available content for most of the world’s languages was identified in the most recent Internet.org report (PDF) as being one of the primary barriers to greater connectivity throughout the world. Today there are nearly 7,000 languages spoken throughout the world of which 99.7% are spoken by less than 1% of the world’s population. By some measures, just 53% of the Earth’s population has access to measurable online content in their primary language and almost a billion people speak languages for which no Wikipedia content is available. Even within a single country there can be enormous linguistic variety: India has 425 primary languages and Papua New Guinea has 832 languages spoken within its borders. As ever-greater numbers of these speakers join the online world, even English speakers are beginning to experience linguistic barriers: as of November 2012, 60% of tweets were in a language other than English.

Web companies from Facebook and Twitter to Google and Microsoft are increasingly turning to machine translation to offer real-time access to information in other languages. Anyone who has used Google or Microsoft Translate is familiar with the concept of machine translation and both its enormous potential (transparently reading any document in any language) and current limitations (many translated documents being barely comprehensible). Historically, machine translation systems were built through laborious manual coding, in which a large team of linguists and computer programmers sat down and literally hand-programmed how every single word and phrase should be translated from one language to another. Such models performed well on perfectly grammatical formal text, but often struggled with the fluid informal speech characterizing everyday discourse. Most importantly, the enormous expense of manually programming translation rules for every word and phrase and all of the related grammatical structures of both the input and output language meant that translation algorithms were built for only the most widely-used languages.

Advances in computing power over the past decade, however, have led to the rise of “statistical machine translation” (SMT) systems. Instead of humans hand-programming translation rules, SMT systems examine large corpora of material that have been human-translated from one language to another and learn which words from one language correspond to those in the other language. For example, an SMT system would determine that when it sees “dog” in English it almost always sees “chien” in French, but when it sees “fan” in English, it must look at the surrounding words to determine whether to translate it into “ventilateur” (electric fan) or “supporter” (sports fan).
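To make that intuition concrete, here is a minimal toy sketch in Python (not drawn from any real SMT system; the counts and context cues are invented for illustration) showing how simple co-occurrence statistics plus a glance at neighboring words can resolve an ambiguous word like “fan”:

# Toy illustration of the statistical idea: the "model" is nothing more than
# co-occurrence counts harvested from a hypothetical bilingual corpus, and an
# ambiguous word is resolved by checking its neighbors.

# Hypothetical phrase table: English word -> {French candidate: corpus count}
phrase_table = {
    "dog": {"chien": 98, "clebard": 2},
    "fan": {"ventilateur": 40, "supporter": 60},
}

# Hypothetical context cues: (English word, neighboring word) -> preferred translation
context_cues = {
    ("fan", "electric"): "ventilateur",
    ("fan", "ceiling"): "ventilateur",
    ("fan", "football"): "supporter",
    ("fan", "sports"): "supporter",
}

def translate_word(word, neighbors):
    """Pick the likeliest French candidate, letting context override raw counts."""
    for neighbor in neighbors:
        if (word, neighbor) in context_cues:
            return context_cues[(word, neighbor)]
    candidates = phrase_table.get(word)
    if not candidates:
        return word  # out-of-vocabulary: pass the word through untranslated
    return max(candidates, key=candidates.get)

print(translate_word("dog", ["the", "barks"]))        # chien
print(translate_word("fan", ["electric", "broken"]))  # ventilateur
print(translate_word("fan", ["football", "loud"]))    # supporter

Real SMT systems learn these statistics automatically over millions of aligned sentences and score whole phrases rather than single words, but the underlying principle is the same.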

Such translation systems require no human intervention – just a large library of bilingual texts as input. United Nations and European Union legal texts are often used as input given that they are carefully hand translated into each of the major European languages. The ability of SMT systems to rapidly create new translation models on-demand has led to an explosion in the number of languages supported by machine translation systems over the last few years, with Google Translate translating to/from 91 languages as of April 2015.

What would it look like if one simply translated the entirety of the world’s information in real-time using massive machine translation? For the past two years the GDELT Project has been monitoring global news media, identifying the people, locations, counts, themes, emotions, narratives, events and patterns driving global society. Working closely with governments, media organizations, think tanks, academics, NGOs and ordinary citizens, GDELT has been steadily building a high resolution catalog of the world’s local media, much of which is in a language other than English. During the Ebola outbreak last year, GDELT actually monitored many of the earliest warning signals of the outbreak in local media, but was unable to translate the majority of that material. This led to a unique initiative over the last half year to attempt to build a system to literally live-translate the world’s news media in real-time.

Beginning in Fall 2013 under a grant from Google Translate for Research, the GDELT Project began an early trial of what it might look like to try and mass-translate the world’s news media on a real-time basis. Each morning all news coverage monitored by the Portuguese edition of Google News was fed through Google Translate until the daily quota was exhausted. The results were extremely promising: over 70% of the activities mentioned in the translated Portuguese news coverage were not found in the English-language press anywhere in the world (a manual review process was used to discard incorrect translations to ensure the results were not skewed by translation error). Moreover, there was a 16% increase in the precision of geographic references, moving from “rural Brazil” to actual city names.

The tremendous success of this early pilot led to extensive discussions over more than a year with the commercial and academic machine-translation communities on how to scale this approach upwards to be able to translate all accessible global news media in real-time across every language. One of the primary reasons that machine translation today is still largely an on-demand experience is the enormous computational power it requires. Translating a document from a language like Russian into English can require hundreds or even thousands of processors to produce a rapid result. Translating the entire planet requires something different: a more adaptive approach that can dynamically adjust the quality of translation based on the volume of incoming material, in a form of “streaming machine translation.”
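One way to picture that kind of adaptive, streaming dispatcher is sketched below. The tier names and queue thresholds are purely hypothetical stand-ins for whatever fast-versus-thorough trade-offs a real translation engine exposes; nothing here describes GDELT's actual implementation.

# Sketch of "streaming machine translation": choose a translation tier based on
# how far behind the pipeline is, maintaining throughput at the cost of quality
# during spikes. Tier names and thresholds are purely illustrative.

QUALITY_TIERS = [
    # (maximum queued documents, hypothetical model/tier name)
    (100, "full-quality"),         # small backlog: use the slow, best model
    (1000, "reduced-beam"),        # moderate backlog: cheaper decoding settings
    (float("inf"), "gist-only"),   # large backlog: fastest, roughest output
]

def pick_tier(queue_depth):
    """Return the translation tier appropriate for the current backlog."""
    for max_depth, tier in QUALITY_TIERS:
        if queue_depth <= max_depth:
            return tier

def process_stream(incoming_docs, queue_depth):
    for doc in incoming_docs:
        tier = pick_tier(queue_depth)
        # A real system would now call its MT engine at the chosen tier.
        print(f"{doc['id']}: translating {doc['language']} at tier '{tier}'")

process_stream(
    [{"id": "a1", "language": "ru"}, {"id": "a2", "language": "ar"}],
    queue_depth=450,
)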

Geographic focus of world’s news media by language 8-9AM EST on April 1, 2015 (Green = locations mentioned in Spanish media, Red = French media, Yellow = Arabic media, Blue = Chinese media).

The final system, called GDELT Translingual, took around two and a half months to build and live-translates all global news media that GDELT monitors in 65 languages in real-time, representing 98.4% of the non-English content it finds worldwide each day. Languages supported include Afrikaans, Albanian, Arabic (MSA and many common dialects), Armenian, Azerbaijani, Bengali, Bosnian, Bulgarian, Catalan, Chinese (Simplified), Chinese (Traditional), Croatian, Czech, Danish, Dutch, Estonian, Finnish, French, Galician, Georgian, German, Greek, Gujarati, Hebrew, Hindi, Hungarian, Icelandic, Indonesian, Italian, Japanese, Kannada, Kazakh, Korean, Latvian, Lithuanian, Macedonian, Malay, Malayalam, Marathi, Mongolian, Nepali, Norwegian (Bokmal), Norwegian (Nynorsk), Persian, Polish, Portuguese (Brazilian), Portuguese (European), Punjabi, Romanian, Russian, Serbian, Sinhalese, Slovak, Slovenian, Somali, Spanish, Swahili, Swedish, Tamil, Telugu, Thai, Turkish, Ukrainian, Urdu and Vietnamese.

Building the system didn’t require starting from scratch, as there is an incredible wealth of open tools and datasets available to support all of the pieces of the machine translation pipeline. Open source building blocks utilized include the Moses toolkit and a number of translation models contributed by researchers in the field; the Google Chrome Compact Language Detector; 22 different WordNet datasets; multilingual resources from the GEOnet Names Server, Wikipedia, the Unicode Common Locale Data Repository; word segmentation algorithms for Chinese, Japanese, Thai and Vietnamese; and countless other tools. Much of the work lay in integrating all of the different components and constructing some of the key new elements and architectures needed to enable the system to scale to GDELT’s needs. A more detailed technical description of the final architecture, tools, and datasets used in the creation of GDELT Translingual is available on the GDELT website.
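The GDELT Translingual code itself is not published here, but a heavily simplified sketch of how such components might be chained looks something like the following. Every function body is a crude stand-in so the example runs; a real pipeline would plug in a trained language detector, a proper word segmenter, and full translation models at each step.

# Rough sketch of chaining the kinds of components listed above: language
# detection, word segmentation for scripts written without spaces, then a
# per-language translation step. All function bodies are crude stand-ins.

UNSEGMENTED_SCRIPTS = {"zh", "ja", "th"}  # need word segmentation before translation

def detect_language(text):
    # Stand-in heuristic: treat any CJK character as Chinese, otherwise English.
    return "zh" if any("\u4e00" <= ch <= "\u9fff" for ch in text) else "en"

def segment_words(text, lang):
    # Stand-in segmenter: split unsegmented scripts into single characters.
    return list(text) if lang in UNSEGMENTED_SCRIPTS else text.split()

def translate_tokens(tokens, lang):
    # Stand-in translator: look tokens up in a tiny glossary.
    glossary = {"你": "you", "好": "good"}
    return " ".join(glossary.get(tok, tok) for tok in tokens)

def translate_article(text):
    lang = detect_language(text)
    if lang == "en":
        return text  # already English: pass through untouched
    return translate_tokens(segment_words(text, lang), lang)

print(translate_article("你好"))          # "you good" (crude, but it shows the flow)
print(translate_article("hello world"))  # passed through unchanged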

Just as we digitize books and use speech synthesis to create spoken editions for the visually impaired, we can use machine translation to provide versions of those digitized books in other languages. Imagine a speaker of a relatively uncommon language suddenly being able to use mass translation to access the entire collections of a library and even to search across all of those materials in their native language. In the case of a legal, medical or other high-importance text, one would not want to trust the raw machine translation on its own, but at the very least such a process could be used to help a patron locate a specific paragraph of interest, making it much easier for a bilingual speaker to assist further. For more informal information needs, patrons might even be able to consume the machine translated copy directly in many cases.

Machine translation may also help improve the ability of human volunteer translation networks to bridge common information gaps. For example, one could imagine an interface where a patron can use machine translation to access any book in their native language regardless of its publication language, and can flag key paragraphs or sections where the machine translation breaks down or where they need help clarifying a passage. These could be dispatched to human volunteer translator networks to translate and offer back those translations to benefit others in the community, perhaps using some of the same volunteer collaborative translation models of the disaster community.

As Online Public Access Catalog software becomes increasingly multilingual, eventually one could imagine an interface that automatically translates a patron’s query from his/her native language into English, searches the catalog, and then returns the results back in that person’s language, prioritizing works in his/her native language, but offering relevant works in other languages as well. Imagine a scholar searching for works on an indigenous tribe in rural Brazil and seeing not just English-language works about that tribe, but also Portuguese and Spanish publications.
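A rough sketch of that query flow, with hypothetical stand-ins for the translation service and the catalog search API, might look like this:

# Sketch of a multilingual OPAC search: translate the patron's query into
# English, search the catalog, then surface works in the patron's own language
# first. All three helpers are hypothetical stand-ins, not a real catalog API.

def translate_query(query, source_lang, target_lang="en"):
    # Stand-in: a real implementation would call a translation service.
    samples = {("matar un ruiseñor", "es"): "to kill a mockingbird"}
    return samples.get((query.lower(), source_lang), query)

def search_catalog(english_query):
    # Stand-in: pretend these three (title, language) records matched the query.
    return [
        ("To Kill a Mockingbird", "en"),
        ("Matar un ruiseñor", "es"),
        ("Ne tirez pas sur l'oiseau moqueur", "fr"),
    ]

def multilingual_search(query, patron_lang):
    english_query = translate_query(query, patron_lang)
    results = search_catalog(english_query)
    # Works in the patron's own language sort first; other languages follow.
    return sorted(results, key=lambda record: record[1] != patron_lang)

for title, lang in multilingual_search("Matar un ruiseñor", "es"):
    print(f"[{lang}] {title}")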

Much of this lies in the user interface and in making language a more transparent part of the library experience. Indeed, as live spoken-to-spoken translation like Skype’s becomes more common, perhaps eventually patrons will be able to interact with library staff using a Star Trek-like universal translator. As machine translation technology improves and as libraries focus more on multilingual issues, such efforts also have the potential to increase visibility of non-English works for English speakers, countering the heavily Western-centric focus of much of the available information on the non-Western world.

Finally, it is important to note that language is not the only barrier to information access. The increasing fragility and ephemerality of information, especially journalism, poses a unique risk to our understanding of local events and perspectives. While the Internet has made it possible for even the smallest news outlet to reach a global audience, it has also placed journalists at far greater risk of being silenced by those who oppose their views. In the era of digitally published journalism, so much of our global heritage is at risk of disappearing at the pen stroke of an offended government, at gunpoint by masked militiamen, by regretful combatants or even through anonymized computer attacks. A shuttered print newspaper will live on in library archives, but a single unplugged server can permanently silence years of journalism from an online-only newspaper.

In perhaps the single largest program to preserve the online journalism of the non-Western world, each night the GDELT Project sends a complete list of the URLs of all electronic news coverage it monitors to the Internet Archive under its “No More 404” program, where they join the Archive’s permanent index of more than 400 billion web pages. While this is just a first step towards preserving the world’s most vulnerable information, it is our hope that this inspires further development in archiving high-risk material from the non-Western and non-English world.

We have finally reached a technological junction where automated tools and human volunteers are able to take the first, albeit imperfect, steps towards mass translation of the world’s information at ever-greater scales and speeds. Just as the internet reduced geographic boundaries in accessing the world’s information, one can only imagine the possibilities of a world in which a single search can reach across all of the world’s information in all the world’s languages in real-time.

Machine translation has truly come of age to a point where it can robustly translate foreign news coverage into English, feed that material into automated data mining algorithms and yield substantially enhanced coverage of the non-Western world. As such tools gradually make their way into the library environment, they stand poised to profoundly reshape the role of language in the access and consumption of our world’s information. Among the many ways that big data is changing our society, its empowerment of machine translation is bridging traditional distances of geography and language, bringing us ever-closer to the notion of a truly global society with universal access to information.

In the Library, With the Lead Pipe: Adopting the Educator’s Mindset: Charting New Paths in Curriculum and Assessment Mapping

Wed, 2015-04-22 13:00

Photo by Flickr user MontyAustin (CC BY-NC-ND 2.0)

In Brief:

The greatest challenge that I faced in my role as Information Literacy Librarian occurred as a result of a Higher Learning Commission (HLC) initiative at my institution, requiring all academic programs/departments to create/review/revise program-level student learning outcomes (PLSLOs), curriculum maps, and assessment maps. This initiative served as a catalyst for the information literacy program, prompting me to seek advice from faculty in the Education Department at Southwest Baptist University (SBU), who were more familiar with educational theory and curriculum/assessment mapping methods. In an effort to accurately reflect the University Libraries’ impact on student learning inside and outside of the classroom, I looked for ways to display this visually. The resulting assessment map included classes the faculty and I could readily assess, as well as an evaluation of statistics on library services and resources that also impact student learning, such as data from LibGuide and database usage, reference transactions, interlibrary loans, course reserves, annual gate count trends, the biennial student library survey, and website usability testing.

Embarking on a Career in Information Literacy

Like most academic librarians, I encountered little focus on instruction in my graduate school curriculum. My only experience with classroom instruction occurred over a semester-long internship, during which I taught less than a handful of information literacy sessions. Although I attended ACRL’s Immersion Teacher Track Conference as a new librarian, I was at a loss as to how I should strategically apply the instruction and assessment best practices gleaned during that experience to the environment in which I found myself.

When I embraced the role of Information Literacy Librarian at Southwest Baptist University (SBU) Libraries in 2011, I joined a faculty of six other librarians. The year I started, the University Libraries transitioned to a liaison model, with six of the seven librarians, excluding the Library Dean, providing instruction for each of the academic colleges represented at the University. Prior to this point, one librarian provided the majority of instruction across all academic disciplines. As the Information Literacy Librarian, I was given the challenge of directing all instruction and assessment efforts on behalf of the University Libraries. Although my predecessor developed an information literacy plan, the Library Dean asked me to create a plan that spanned the curriculum.

Charting A New Course  

The greatest challenge that I faced in my role as Information Literacy Librarian occurred as a result of a Higher Learning Commission (HLC) initiative at my institution, requiring all academic programs/departments to create/review/revise program-level student learning outcomes (PLSLOs), curriculum maps, and assessment maps. I found assessment mapping particularly nebulous, since the librarians at my institution do not teach semester-long classes. In light of this, I looked for new ways to document and assess the University Libraries’ impact on student learning not only inside, but also outside of, the classroom setting. The resulting assessment map included classes the faculty and I could readily assess, as well as an evaluation of statistics on library services and resources that also impact student learning, such as data from LibGuide and database usage, reference transactions, interlibrary loans, course reserves, annual gate count trends, the biennial student library survey, and website usability testing.

As is the case when exploring any uncharted territory, taking a new approach required me to seek counsel from Communities of Practice at my institution, defined as “staff bound together by common interests and a passion for a cause, and who continually interact. Communities are sometimes formed within the one organisation, and sometimes across many organisations. They are often informal, with fluctuating membership and people can belong to more than one community at a time” (Mitchell 5). At SBU, I forged a Community of Practice with faculty in the Education Department, with whom I could meet, as needed, to discuss how the University Libraries could most effectively represent its impact on student learning.

Learning Theory: A Framework for Information Literacy, Instruction, & Assessment

Within the library literature, educational and instructional design theorists are frequently cited. Instructional theorists have significantly shaped my pedagogy over the past three and a half years. In their book, Understanding by Design, educators Grant Wiggins and Jay McTighe point out the importance of developing a cohesive plan that serves as a compass for learning initiatives. They write: “Teachers are designers. An essential act of our profession is the crafting of curriculum and learning experiences to meet specified purposes. We are also designers of assessments to diagnose student needs to guide our teaching and to enable us, our students, and others (parents and administrators) to determine whether we have achieved our goals” (13). They propose that curriculum designers embrace the following strategic sequence in order to achieve successful learning experiences – 1. “Identify desired results,” 2. “Determine acceptable evidence,” and 3. “Plan learning experiences and instruction” (Wiggins and McTighe 18).

As librarians, we are not only interested in our students’ ability to utilize traditional information literacy skill sets, but we also have a vested interest in scaffolding “critical information literacy,” which “differs from standard definitions of information literacy (ex: the ability to find, use, and analyze information) in that it takes into consideration the social, political, economic, and corporate systems that have power and influence over information production, dissemination, access, and consumption” (Gregory and Higgins 4). The time that we spend with students is limited, since many information literacy librarians do not teach semester-long classes, nor do we meet each student who sets foot on our campuses. However, as McCook and Phenix point out, awakening critical literacy skills is essential to “the survival of the human spirit” (qtd. in Gregory and Higgins 2). Therefore, librarians must look for ways to invest in cultivating students’ literacy beyond the traditional four walls of the classroom.

Librarians and other teaching faculty recognize that “Students need the ability to think out of the box, to find innovative solutions to looming problems…” (Levine 165). In his book, Generation on a Tightrope: A Portrait of Today’s College Student, Arthur Levine notes that the opportunity academics have to cultivate students’ intellect is greatest during the undergraduate years. While some of them may choose to pursue graduate-level degrees later on, at this point their primary objective will be to obtain ‘just in time education’ at the point of need (165). It is this fact that continues to inspire an urgency in our approaches to information literacy education.

One of the most challenging aspects of pedagogy is that it is messy. While educators are planners, learning and assessment are by no means things that can be wrapped up and decked out with a beautiful bow. Education requires us to give of ourselves, assess what does and does not work for our students and then make modifications as a result. According to educator Rick Reiss, while students are adept at accessing information via the internet, “Threshold concepts and troublesome knowledge present the core challenges of higher learning” (n. pag.). Acquiring new knowledge requires us to grapple with preconceived notions and to realize that not everything is black and white. Despite the messy process in which I found myself immersed, knowledge gleaned from educational and instructional theorists began to bring order to the curriculum and assessment mapping process.

Eureka Moments in Higher Education: Seeing Through a New Lens for the First Time

Eureka moments are integral to the world of education and often consist of a revelation or intellectual discovery. This concept is best depicted in the story of a Greek by the name of Archimedes. Archimedes was tasked by the king of his time with determining whether some local tradesmen had crafted a crown out of pure gold or had substituted some of the precious metal with a less valuable material like silver to make a surplus on the project at hand (Perkins 6). As Archimedes stepped into a bath and water began flowing out of the tub, legend has it that “In a flash, Archimedes discovered his answer: His body displaced an equal volume of water. Likewise, by immersing the crown in water, Archimedes could determine its volume and compare that with the volume of an equal weight of gold” (Perkins 7). He quickly emerged from the tub naked and ran across town announcing his discovery. Although we have all experienced Eureka moments to some extent or another, not all of them are as dramatically apparent as Archimedes’s discovery.

In his book entitled Archimedes’ Bathtub: The Art and Logic of Breakthrough Thinking, David Perkins uses the phrase “cognitive snap” to illustrate a breakthrough that comes suddenly, much like Archimedes’s Eureka moment (10). Although my cognitive snap was almost three and a half years in the making, when I finally began to grasp and apply learning theory to the development of PLSLOs, curriculum maps, and assessment maps, I knew that it was the dawning of a new Eureka era for me.

Librarians play a fundamental role in facilitating cognitive snaps among the non-library faculty that they partner with in the classroom. Professors of education, history, computer science, and so on introduce their students to subject-specific knowledge, while librarians have conveyed the value of incorporating information literacy components into the curriculum via the Association of College and Research Libraries’ (ACRL) Information Literacy Competency Standards for Higher Education. Now, through the more recent Framework for Information Literacy for Higher Education, librarians are establishing their own subject-specific approach to information literacy that brings “cognitive snaps” related to the research process into the same realm as disciplinary knowledge (“Information Literacy Competency Standards”; “Framework for Information Literacy”).

As at most universities, each academic college at SBU comprises multiple departments, each led by a department chair. The University Libraries is somewhat unusual within this framework, in that it is not classified as an academic college, nor does it consist of multiple departments. In 2013, the Library Dean asked me to assume the role of department chair for the University Libraries, because he wanted me to attend the Department Chair Workshops led by the Assessment Academy Team (composed of the Associate Provost for Teaching and Learning and designated faculty across the curriculum) at SBU. These workshops took place from January 2013 through August 2014. All Department Chairs were invited to participate in four workshops geared towards helping faculty across the University review, revise, and/or create PLSLOs, curriculum maps, and assessment maps. While my review of educational theory and best practices certainly laid a framework for the evolving information literacy program at SBU, it was during this period that I began charting a new course, as I applied the concepts gleaned during these workshops to the curriculum and assessment maps that I designed for the University Libraries.

What I Learned About the Relationship Between Curriculum & Assessment Mapping

In conversations with Assessment Academy Team members currently serving in the Education Department, I slowly adopted an educator’s lens through which to view these processes. Prior to this point, my knowledge of PLSLOs and curriculum mapping came from the library and education literature that I read. Dialogues with practitioners in the Education Department at my campus slowly enabled me to address teaching and assessment from a pedagogical standpoint, employing educational best practices.

Educator Heidi Hayes-Jacobs believes that mapping is key to the education process. She writes: “Success in a mapping program is defined by two specific outcomes: measurable improvement in student performance in the targeted areas and the institutionalization of mapping as a process for ongoing curriculum and assessment review” (Getting Results with Curriculum Mapping 2). While Hayes-Jacobs’s expertise is in curriculum mapping within the K-12 school system, the principles that she advances apply to higher education, as well as information literacy. She writes about the gaps that often exist as a result of teachers residing in different buildings or teaching students at different levels of the educational spectrum, for example the elementary, middle school, or high school levels (Mapping the Big Picture 3). The mapping process establishes greater transparency and awareness of what is taught across the curriculum and establishes accountability, in spite of the fact that teachers, professors, or librarians might not interact on a daily or monthly basis. It provides a structure for assessment mapping because all of these groups must not only evaluate what they are teaching, but whether or not students are grasping PLSLOs.

Curriculum Maps Are Just a Stepping Stone: Assessment Mapping for the Faint of Heart

When I assumed the role of Information Literacy Librarian at SBU, I knew nothing about assessment. Sure, I knew how to define it and I was familiar with being on the receiving end as a student, but frankly as a new librarian it scared me. Perhaps that is because I saw it as a solo effort that would most likely not provide a good return on my investment. I quickly realized, however, that facilitating assessment opportunities was critical because I wanted to cultivate Eureka moments for my students. In the event that students do not understand something, it is my job to look for strategies to address the gap in their knowledge and scaffold the learning process.

Assessment mapping is the next logical step in the mapping process. While curriculum maps give us the opportunity to display the PLSLOs integrated across the curriculum, assessment maps document the tools and assignments that we will utilize to determine whether or not our students have grasped designated learning outcomes. Curriculum and assessment maps do not rely on input from a single person; rather, they require collaboration among faculty. According to Dr. Debra Gilchrist, Vice President for Learning and Student Success at Pierce College, “Assessment is a thoughtful and intentional process by which faculty and administrators collectively, as a community of learners, derive meaning and take action to improve. It is driven by the intrinsic motivation to improve as teachers, and we have learned that, just like the students in our classes, we get better at this process the more we actively engage it” (72). Assessment is not about the data, but about strategically getting better at what we do (Gilchrist 76).

Utilizing the Educator’s Lens to Develop Meaningful Curriculum & Assessment Maps

Over the last three and a half years I have learned a great deal about applying the educator’s lens to information literacy. It has made a difference not only in the way I teach and plan, but in the collaboration that I facilitate among the library faculty at my institution who also visit the classroom regularly. Perhaps what scared me the most about assessment initially was my desire to achieve perfection in the classroom, a concept that is completely uncharacteristic of education. I combated this looming fear by immersing myself in pedagogy and asking faculty in the Education Department at SBU endless questions about their own experiences with assessment. The more I read and conversed on the topic, the more I realized that assessment is always evolving. It does not matter how many semesters a professor has taught a class; there is always room for improvement. It was then that I could boldly embrace assessment, knowing that it was messy but essential to improving the way my colleagues and I conveyed PLSLOs and scaffolded student learning moving forward. In their article “Guiding Questions for Assessing Information Literacy in Higher Education,” Megan Oakleaf and Neal Kaske write: “Practicing continuous assessment allows librarians to ‘get started’ with assessment rather than waiting to ‘get it perfect.’ Each repetition of the assessment cycle allows librarians to adjust learning goals and outcomes, vary instructional strategies, experiment with different assessment methods, and improve over time” (283).

The biggest challenge for librarians interested in implementing curriculum and assessment maps at their institutions stems from the fact that we often do not have the opportunity to interact with students like the average professor, who meets with a class for nearly four consecutive months a semester and provides feedback through regular assessments and grades. The majority of librarians teach one-shot information literacy sessions. So, what is the most practical way to visually represent librarians’ influence over student learning? I would like to advocate for a new approach, which may be unpopular among some in my field and readily embraced by others. It is a customized approach to curriculum and assessment mapping, which was suggested by faculty in the Education Department at my institution.

A typical curriculum map contains PLSLOs for designated programs, along with course numbers/titles, and boxes where you can designate whether a skill set was introduced (i), reinforced (r), or mastered (m) (“Create a Curriculum Map”). For traditional academic departments, there is an opportunity to build on skill sets through a series of required courses. For academic libraries, however, it is difficult to subscribe to the standard curriculum mapping schema because librarians do not always have the opportunity to impact student learning beyond general education classes and a few major-specific courses. This leads to an uneven representation of information literacy across the curriculum. As a result, it is often more efficient to use an “x” to denote a program-level student learning outcome for which the library is responsible, rather than utilizing three progressive symbols.

Curriculum and assessment mapping at my academic library is becoming increasingly valuable largely because administrators at my institution are interested in fostering greater accountability in the learning process, namely because of an upcoming HLC visit. In her article entitled “Assessing Your Program-Level Assessment Plan,” Susan Hatfield, Professor of Communication Studies at Winona State University, writes: “Assessment needs to be actively supported at the top levels of administration. Otherwise, it is going to be difficult (if not impossible) to get an assessment initiative off the ground. Faculty listen carefully to what administrators say – and don’t say. Even with some staff support, assessment is unlikely to be taken seriously until administrators get on board” (2). In his chapter entitled “Rhetoric Versus Reality: A Faculty Perspective on Information Literacy Instruction,” Arthur Sterngold embraces the view that “For [information literacy (IL)] to be effective…it must be firmly embedded in an institution’s academic curriculum and…the faculty should assume the lead responsibility for developing and delivering IL instruction” (85). He believes that librarians should “serve more as consultants to the faculty than as direct providers of IL instruction” (Sterngold 85).

To some extent, I acknowledge the value of Hatfield’s and Sterngold’s views on the importance of administration-driven and faculty-led assessment initiatives. Campus-wide discussions and initiatives centered on this subject stimulate collaboration among interdisciplinary faculty who would not otherwise meet outside of an established structure. As a librarian and a member of the faculty at my institution, however, I find that their stance on assessment creates some internal tension. While it is ideal for our administrations to care about the issues that are closest to their faculty’s hearts, many times they are driven to lead assessment efforts as a result of an impending accreditation visit (Gilchrist 71; Hatfield 5). While I would love to say that information literacy matters to my administration just as much as it does to me, this is an unrealistic viewpoint. The development, assessment, and day-to-day oversight of information literacy is an uphill battle that requires me to take the lead. My library faculty and I must establish value for our information literacy program among the faculty that we partner with on a daily basis. So, how do we as librarians assess the University Libraries’ impact on student learning when information literacy sessions are unevenly represented across the curriculum? In a conversation with a colleague in the Education Department, I was encouraged to determine and assess all forms of learning that the library facilitates by nature of its multidisciplinary role. In Brenda H. Manning and Beverly D. Payne’s article “A Vygotskian-Based Theory of Teacher Cognition: Toward the Acquisition of Mental Reflection and Self-Regulation,” they write:

Because of the spiral restructuring of knowledge, based on the history of each individual as he or she remembers it, a sociohistorical/cultural orientation may be very appropriate to the unique growth and development of each teaching professional. Such a theory is Vygotsky’s sociohistorical explanation for the development of the mind. In other words, the life history of preservice teachers is an important predictor of how they will interpret what it is that we are providing in teacher preparation programs (362).

My colleague in the Education Department challenged me to think about the multiple points of contact that students have with the library, outside of the one-shot information literacy session and include those in our assessment.

As a result, I developed curriculum and assessment maps that contained not only a list of courses in which specific PLSLOs were advanced, but also assessments of data from LibGuides, gate count, interlibrary loan, course reserve, the biennial library survey, and website usability testing. All of these statistics can be tied to student-centered learning. Assessing them enables my library faculty and me to make changes in the way that we market services and resources to constituents.

The maps illustrated in Table 1 and Table 2 below are intentionally simplistic. They provide the library liaisons and the faculty in their liaison areas with a visual overview of the information literacy PLSLOs taught and assessed. When the University Libraries moved to the liaison model in 2011, the librarian teaching education majors was not necessarily familiar with the PLSLOs advanced by the library liaison to the Language & Literature Department. Mapping current library involvement in the curriculum created a shared knowledge of PLSLOs among the library faculty. I also asked each librarian to create a lesson plan, which we published on the University Libraries’ website. Since we utilize the letter “x” to denote PLSLOs covered, rather than letters that display the depth of coverage (introduction, reinforcement, mastery), the lesson plans provide the librarians and their faculty with a detailed outline of how each PLSLO is developed in the classroom.

Apart from their general visual appeal, these maps also enable us to recognize holes in our information literacy program. For example, there are several departments that are not listed on the curriculum map because we do not currently provide instruction in their classes. Many of the classes that we visit are at the freshman and sophomore level. The maps help us to identify areas that we need to target moving forward, such as juniors through graduate students.

Table 1: Adapted Curriculum Map

Table 2 reveals a limited number of courses we hope to assess in the upcoming year. In discussions with library faculty, I quickly discovered that it was more important to start assessing, rather than assess every class we are involved in at present. We can continue to build in formal assessments over time, but for now the important thing is to begin the process of evaluating the learning process, so that we can make modifications to more effectively impact student learning (Oakleaf & Kaske 283).

The University Libraries is a unique entity in comparison to the other academic units represented across campus. This is largely because information literacy is not a core curriculum requirement. As a result, some of the PLSLOs reflected on the assessment map include data collected outside of the traditional classroom that is specific to the services, resources, and educational opportunities that we facilitate. This is best demonstrated by PLSLOs two and five. For example, we know that students outside of our sessions are using the LibGuides and databases, which are integral to PLSLO two – “The student will be able to use sources in research.” For PLSLO five – “The student will be able to identify the library as a place in the learning process” we are not predominantly interested in whether or not students are using our electronic classrooms during an information literacy session. We are interested in students’ awareness and use of the physical and virtual library as a whole, so we are assessing student learning by whether or not students can find what they need on the University Libraries’ website or whether they utilize the University Libraries’ physical space in general.

Table 2: Adapted Assessment Map (First Half)

Table 2: Adapted Assessment Map (Second Half)

Transparency in the Assessment Process

Curriculum and assessment maps provide librarians and educators alike with the opportunity to be transparent about the learning that is or is not happening inside and outside of the classroom. I am grateful for the information I have gleaned from the Education Department at SBU along the way because it has inspired a newfound commitment and dedication to the students that we serve.

Although curriculum and assessment mapping is not widespread in the academic library world, some information literacy practitioners have readily embraced this concept. For example, in Brian Matthews and Char Booth’s invited paper, presented at the California Academic & Research Libraries Conference (CARL), Booth discusses her use of the concept mapping software, Mindomo, to help library and departmental faculty visualize current curriculum requirements, as well as opportunities for library involvement in the education process (6). Some sample concept maps that are especially interesting include one geared towards first-year students and another customized to the Environmental Analysis program at Claremont Colleges (Booth & Matthews 8-9). The concept maps then link to rubrics that are specific to the programs highlighted. Booth takes a very visual and interactive approach to curriculum mapping.

In their invited paper, “A More Perfect Union: Campus Collaborations for Curriculum Mapping Information Literacy Outcomes,” Moser et al. discuss the mapping project they undertook at the Oxford College of Emory University. After revising their PLSLOs, the librarians met with departmental faculty to discuss where the library’s PLSLOs were currently introduced and reinforced in the subject areas. All mapping was then done in Weave (Moser et al. 333). While the software Emory University utilizes is a subscription service, Moser et al. provide a template of the curriculum mapping model they employed (337).

So, which of the mapping systems discussed is the best fit for your institution? This is something that you will want to determine based on the academic environment in which you find yourself. For example, does your institution subscribe to mapping software like Emory University, or will you need to utilize free software to construct concept maps like Claremont Colleges? Another factor to keep in mind is which model will make the most sense to your librarians and the subject faculty they partner with in the classroom. As long as the maps created are clear to the audiences that they serve, the format they take is irrelevant. In her book A Guide to Curriculum Mapping: Planning, Implementing, and Sustaining the Process, Janet Hale discusses several different kinds of maps for the K-12 setting. While each map outlined has benefits, she argues that the “Final selection should be based on considering the whole mapping system’s capabilities” (Hale 228).

The curriculum and assessment mapping models I have used for the information literacy competency program at SBU reflect the basic structure laid out by the Assessment Academy Team at my institution. I have customized the maps to reflect the ways the University Libraries facilitates and desires to impact student learning inside and outside of the classroom. In an effort to foster collaboration and create more visibility for the Information Literacy Competency Program, I have created two LibGuides that are publicly available to our faculty, students, and the general public. The first one, which is entitled Information Literacy Competency Program, consists of PLSLOs, our curriculum and assessment maps, outlines of all sessions taught, etc. The Academic Program Review LibGuide provides an overview of the different ways that we are assessing student learning – including website usability testing feedback, annual information literacy reports and biennial student survey reports. Due to confidentiality, all reports are accessible only via the University’s intranet.

Acknowledging the Imperfections of Curriculum and Assessment Mapping

Curriculum and assessment mapping is not an exact science. I wish I could bottle it up and distribute a finished product to all of the information literacy librarians out there who grapple with the imprecision of our profession. While it would eliminate our daily struggle, it would also lead to the discontinuation of Eureka moments that we all experience as we grow with and challenge the academic cultures in which we find ourselves.

So, what have I learned as a result of the mapping process? It requires collaboration on the part of library and non-library faculty. When I began curriculum and assessment mapping, I learned pretty quickly that without the involvement of each liaison librarian and the departmental faculty, mapping would be in vain. Map structures must be based on the pre-existing partnerships librarians have, but will identify gaps or areas of growth throughout the curriculum. I would love to report that our curriculum maps encompass the entire curriculum at SBU, but that would be a lie. Initially, I did a content analysis of the curriculum and reviewed syllabi for months in an effort to develop well-rounded maps. I learned all too quickly, however, that mapping requires us to work with what we already have and set goals for the future. So, while the University Libraries’ maps are by no means complete, I have challenged each liaison librarian to identify PLSLOs they can advance in the classroom now, while looking for new ways to impact student learning moving forward.

During the mapping process, I was overwhelmed by the fact that the University Libraries was unable to represent student learning in the same way the other academic departments across campus did. I liked the thought of creating maps identifying the introduction, reinforcement, and mastery of certain skill sets throughout students’ academic tenure with us. However, I quickly realized that this was impractical because it does not take into account the variables that librarians encounter, such as one-shot sessions, uneven representation in each section of a given class, transfer students, and learning scenarios that happen outside of the classroom itself. Using the “x” to define areas where our PLSLOs are currently impacting student learning was much less daunting and far more practical.

It is important to anticipate pushback in the mapping process (Moser et al. 333-334; Sterngold 86-88). When I began attending the Department Chair Workshops in 2013, I quickly discovered that not all of the other departmental faculty were amenable to my presence. One individual asked why I was attending, while another questioned my boss about my expertise in higher education. In the assessment mapping process, faculty in my library liaison area were initially reluctant to collaborate with me on assessing student work. Despite some faculty’s resistance, I was determined to persevere. As a result of the workshops, I established a Community of Practice with faculty in the Education Department and grew more confident in my role as an educator.

I know that there are gaps in the maps, but I have come to terms with the healthy tension that this knowledge creates. While I have a lot more to learn about information literacy, learning theory, curriculum and assessment mapping, etc., I no longer feel under-qualified. As an academic, I continue to glean knowledge from my fellow librarians and the Education Department, looking for opportunities to make modifications as necessary. I have reconciled with the fact that this is a continual process of recognizing gaps in my professional practice and identifying opportunities for change. After all, that is what education is all about, right?

Many thanks to Annie Pho, Ellie Collier, and Carrie Donovan for their tireless editorial advice. I would like to extend a special thank you to my Library Dean, Dr. Ed Walton for believing in my ability to lead information literacy efforts at Southwest Baptist University Libraries back in 2011 when I was fresh out of library school. Last, but certainly not least, my gratitude overflows to the educators at my present institution who helped me to wrap my head around curriculum and assessment mapping. Assessment is no longer a scary thing because I now have a plan!

Works Cited

Booth, Char, and Brian Matthews. “Understanding the Learner Experience: Threshold Concepts & Curriculum Mapping.” California Academic & Research Libraries Conference. San Diego, CARL: 7 Apr. 2012. Web. 17 Mar. 2015.

“Create a Curriculum Map: Aligning Curriculum with Student Learning Outcomes.” Office of Assessment. Santa Clara University, 2014. Web. 13 Apr. 2015.

“Framework for Information Literacy for Higher Education.” Association of College and Research Libraries, 2015. Web. 11 Mar. 2015.

Gilchrist, Debra. “A Twenty Year Path: Learning About Assessment; Learning from Assessment.” Communications in Information Literacy 3.2 (2009): 70-79. Web. 4 Mar. 2015.

Gregory, Lua, and Shana Higgins. Introduction. Information Literacy and Social Justice: Radical Professional Praxis. Ed. Lua Gregory and Shana Higgins. Sacramento: Library Juice Press. 1-11. Web. 13 Apr. 2015.

Hale, Janet A. A Guide to Curriculum Mapping: Planning, Implementing, and Sustaining the Process. Thousand Oaks: Corwin, 2008. Print.

Hatfield, Susan. “Assessing Your Program-Level Assessment Plan.” IDEA Paper. 45 (2009): 1-9. IDEA Center. Web. 27 Feb. 2015.

Hayes-Jacobs, Heidi. Getting Results with Curriculum Mapping. Alexandria: Association for Supervision and Curriculum Development, 2004. eBook Academic Collection. Web. 26 Feb. 2015.

—. Mapping the Big Picture: Integrating Curriculum & Assessment K-12. Alexandria: Association for Supervision and Curriculum Development. 1997. Print.

“Information Literacy Competency Standards for Higher Education.” Association of College & Research Libraries, 2000. Web. 11 Mar. 2015.

Levine, Arthur. Generation on a Tightrope: A Portrait of Today’s College Student. San Francisco: Jossey-Bass, 2012. Print.

Manning, Brenda H., and Beverly D. Payne. “A Vygotskian-Based Theory of Teacher Cognition: Toward the Acquisition of Mental Reflection and Self-Regulation.” Teaching and Teacher Education 9.4 (1993): 361-372. Web. 25 May 2012.

Mitchell, John. The Potential for Communities of Practice to Underpin the National Training Framework. Melbourne: Australian National Training Authority. 2002. John Mitchell & Associates. Web. 18 Mar. 2015.

Moser, Mary, Andrea Heisel, Nitya Jacob, and Kitty McNeill. “A More Perfect Union: Campus Collaborations for Curriculum Mapping Information Literacy Outcomes.” Association of College and Research Libraries Conference. Philadelphia, ACRL: Mar.-Apr. 2011. Web. 17 Mar. 2015.

Oakleaf, Megan, and Neal Kaske. “Guiding Questions for Assessing Information Literacy in Higher Education.” portal: Libraries and the Academy 9.2 (2009): 273-286. Web. 21 Dec. 2011.

Perkins, David. Archimedes’ Bathtub: The Art and Logic of Breakthrough Thinking. New York: W.W. Norton, 2000. Print.

Reiss, Rick. “Before and After Students ‘Get It’: Threshold Concepts.” Tomorrow’s Professor Newsletter 22.4 (2014): n. pag. Stanford Center for Teaching and Learning. Web. 7 Mar. 2015.

Sterngold, Arthur H. “Rhetoric Versus Reality: A Faculty Perspective on Information Literacy Instruction.” Defining Relevancy: Managing the New Academic Library. Ed. Janet McNeil Hurlbert. West Port: Libraries Unlimited, 2008. 85-95. Google Book Search. Web. 17 Mar. 2015.

Wiggins, Grant, and Jay McTighe. Understanding by Design. Alexandria: Association for Supervision and Curriculum Development, 2005. ebrary. Web. 3 Mar. 2015.

Open Knowledge Foundation: Meet the 2015 School of Data fellows!

Wed, 2015-04-22 12:48

This is a cross-post from the School of Data blog, written by their Community Manager Cédric Lombion. See the original.

We’re delighted to announce that after much difficult deliberation, our Class of 2015 School of Data Fellows has now been decided! We ended up with nearly 600 applicants from 82 different countries, so it was no mean feat to pick just 7 from this group – we wish we had more resources to work with many more of you!

A huge thanks to our local partners SocialTIC and Metamorphosis, who put in much of the hard work in the selection process for fellows from their respective areas.

Their fellowships will run from now until the end of December, and they’ll be working with civil society and journalists in their areas. A key part of the fellowships is building the data-literate community in their local areas – so, if you’re based nearby and you’d like to take part in trainings, sign up to our newsletter to be kept up to date with news of upcoming workshops and training sessions that they’ll be running!

All of our 2015 fellows, along with a number of our key community members from our local instances, will be attending the International Open Data Conference in Ottawa at the end of May, so we look forward to seeing many of you there.

Without further ado: here are our 2015 fellows!

Camila Salazar, Alajuela, Costa Rica

Camila studied journalism at the University of Costa Rica and is currently finishing her bachelor’s degree in Economics. Since 2009 she has worked in TV, print and digital media. In recent years she has used data and quantitative tools to write journalistic stories that encourage people to question their reality and participate in an informed public debate. In 2013 she worked on the first political fact-checking project in Central America, which was a finalist in the Global Editors Network Data Journalism Awards of 2014. More recently she worked on a data journalism project called DataBase at one of the most prestigious digital media outlets in Costa Rica. You can follow Camila on Twitter at @milamila07

David Selassie Opoku, Accra, Ghana

David Selassie Opoku is a graduate of the United World College Costa Rica, of Swarthmore College in Pennsylvania with a B.A. in Biology, and of the New Jersey Institute of Technology with an M.S. in Computer Science. His interest in data stems from an academic background in science and a passion as a technologist working in various civic and social contexts.

David is a developer and aspiring data scientist currently at the Meltwater Entrepreneurial School of Technology (MEST) in Accra, Ghana where he teaches and mentors young entrepreneurs-in-training on software development skills and best practices. He has had the opportunity to work with the Boyce Thompson Institute for Plant Research, the Eugene Lang Center for Civic and Social Responsibility, the UNICEF Health Division and a tech startup in New York City. In the past year, he has helped organize and facilitate several hackathons and design thinking workshops in Accra. You can follow David on Twitter at @sdopoku

Goran Rizaov, Skopje, Macedonia

Goran Rizaov is a data journalist based in Skopje, Republic of Macedonia, with several years of experience in investigative journalism. Goran was a Professional Development Year fellow in 2011/2012, studying data journalism, precision journalism and online media at the Walter Cronkite School of Journalism and Mass Communication, ASU, Phoenix, Arizona.

He was also part of the 2013 Balkan Fellowship for Journalistic Excellence, during which he worked for six months on an investigative story about corruption in the communication sector in Macedonia and, for the first time, published the names of the officials involved in a court process that is under way in the USA. He did this mostly by obtaining data from the US PACER system and from civil society organizations like Transparency International.

He works with online data analysis tools and loves to prepare infographics.

Goran has a bachelor’s degree in journalism from the Ss. Cyril and Methodius University in Skopje, Macedonia, and more than seven years of experience working as a reporter. You can follow Goran on Twitter at @goxo

Julio Lopez, Quito, Ecuador

Julio is currently finishing a Master’s Degree in Energy and Resources Management at University College London (UCL), Australian campus. He became interested in open data after joining “Extrayendo Transparencia” (“Extracting Transparency”), an initiative of Grupo FARO that promotes the dissemination of citizen-oriented government data to improve the accessibility and use of information from the oil and mining industries by civil society organisations and local governments in Ecuador. Julio graduated in Economics in 2010 and has conducted studies and supervised training on fiscal policy, public finance and the governance of the oil and mining industries in Latin America. As part of his fellowship, he is interested in promoting open data initiatives in the energy sector in Ecuador and Latin America. You can follow him on Twitter at @jalp_ec

Nirab Pudasaini, Kathmandu, Nepal

Nirab is the lead mobile application developer at Kathmandu Living Labs. Working with the team at Kathmandu Living Labs Nirab has been championing the OpenStreetMap and Open Map data movement in Nepal.

By training and mobilizing volunteers, they have succeeded in making OpenStreetMap the most detailed map data source for Kathmandu. His team is involved in applying Open Data and OpenStreetMap in different sectors such as disaster resilience, governance, agriculture, food security, water, health and sanitation. Nirab has experience in training very diverse groups, ranging from undergraduate geo-informatics engineering students to map-illiterate farmers.

Nirab has deployed the site Map My School and is the developer of apps like Mero Bhada Meter, an app that helps citizens calculate taxi fares using government-provided rates and OpenStreetMap data, and Citizen Report, an app that allows citizens to report problems in their locality. Nirab is a huge RMS fan and loves playing the bamboo flute. You can follow him on Twitter at @NirabPudasaini.

Nkechi Okwuone, Benin City, Edo State, Nigeria

Nkechi is the Open Data Manager of the Edo State Open Data portal in Nigeria, the first sub-national Open Data portal in Africa. She is an alumna of Federal Government Girls College, Ibusa, and the University of Port Harcourt, where she received her B.Eng. in Electrical/Electronics Engineering.

She leads a team of professionals who implement and promote the Government’s agenda of transparent, collaborative and participatory governance. She has worked on various data-driven projects for the past two years, ranging from building applications and visualizations with data, to running trainings and seminars on data, to organizing and participating in data-driven hackathons.

Nkechi is also a director at SabiHub, a not-for-profit organization with a vision to solve social problems using technology, where she mentors entrepreneurs and open data enthusiasts. She recently organized the first open data hackathon on agriculture in her state, which drew journalists, developers, CSOs and students.

She is well respected in the open data community of Nigeria and has been recognized as the youngest Open Data Manager in Africa and nominated for the Future Awards Africa prize in Public Service (2014). You can follow her on Twitter at @enkayfreda

Sheena Carmel Opulencia-Calub, Makati City, the Philippines

Sheena has managed projects on gender-based violence and the protection of the rights of women and their children in the Philippines, funded by the European Union, and has set up online monitoring systems for cases of GBV and VAWC. She worked with ACF International and UNICEF as the National Information Manager of the Water, Sanitation and Hygiene cluster, co-led by UNICEF and the Department of Health (DOH), where she provided support to communities and to non-government and government agencies in managing and establishing information management systems during emergencies such as Typhoon Pablo in 2012, the Zamboanga crisis, the Bohol earthquake, and Supertyphoon Yolanda in 2013. She is assisting DOH in setting up a national database on zero open defecation, training local government staff to use mobile-based technologies to collect data. You can follow her on Twitter at @sheena.orlson

Delivery partners: The Fellowship Programme is developed and delivered with Code for Africa, SocialTIC (Mexico), Metamorphosis (Macedonia) and Connected Development (Nigeria).

Funding partners: This year’s fellowships will be supported by the Partnership for Open Development (POD) OD4D, Hivos, and the Foreign and Commonwealth Office in Macedonia.

LibUX: Designing for User Experience

Wed, 2015-04-22 11:00

Hey there! We — Amanda and Michael — will talk for an hour about data-driven design at this year’s eGathering on Wednesday, May 20, 2015. We are rounding out the end of the conference – which will be packed, and delivered digitally – following what’s sure to be a killer keynote by Jason Griffey. It is free! We hope to see you there.

May 20, 2015 | Designing for User Experience

Every interaction with the library has a measurable user experience. Each touchpoint — parking, library card signup, searching the catalog, browsing the stacks, finding the bathroom, using the website — leaves an impression that can drag a positive user experience into the negative. This has real impact on the bottom line: circulation, database usage, foot traffic, and so on.

User Experience (UX) Design is about determining the needs of your patrons based on their actual behavior to inform and transform the way your library provides services. Your library is no longer just a place to pick up a book, but the centerpiece of a far-reaching experience plan. In this 60-minute workshop, Amanda and Michael provide a holistic introduction to the fundamentals of user experience design, how to evaluate your library with heuristics and no-budget usability testing, and how to convince decision makers to focus on the users’ needs first.

The post Designing for User Experience appeared first on LibUX.

LITA: No Rules

Wed, 2015-04-22 06:00

Jack Black in School of Rock (2003)

Librarians are great at making rules. Maybe it’s in our blood or maybe it’s the nature of public service. Whatever it is, creating rules comes naturally to many of us. But don’t worry, this isn’t a post about how to make the rules, it’s about how to avoid them.

We recently introduced a new digital media space at the Robert Morgade Library in Stuart, Florida. The idea lab includes tablets, laptops, and cameras that can be checked out; a flexible space that encourages collaboration; tech classes that go beyond our traditional computer classes; as well as three iMac computers and a flight simulator. With all this technology, you would expect to find people lining up, but we’ve actually noticed that our patrons seem intimidated by these new tools. In 2012 the first idea lab opened at the Peter & Julie Cummings Library, but the idea of a digital media lab at the library is still a relatively new service for our community. In order to welcome all of our patrons to the idea lab, we’ve lessened the barriers to access by having as few rules as possible. Here are a few of the risks we’ve taken.

Less Paperwork
One of our biggest changes was reducing the amount of paperwork involved in checking out equipment. The original procedure called for a library card, picture ID, and a double-sided form in order to check out something as small as a pair of headphones. Now we offer an idea lab borrower’s card. Signing up for an idea lab card is as easy as signing up for a regular library card. The only additional requirement is a one-time signature on a simple form where patrons accept responsibility for any equipment that they destroy. After the initial registration, all the patron needs is their idea lab card from that point forward. The result is less paperwork, less staff time, and more use.

Open Access
The flight simulator has been a completely new venture for the library and we’ve encountered a lot of unknowns in terms of policies and access, such as: Is there an age requirement? Should patrons have to complete a training session in order to use it? Do you need a library card? We looked to other libraries to see how to regulate this new service, but ultimately decided to start with as few barriers as possible. As it stands, anyone can walk up and try out the flight simulator. You don’t need a card, you don’t need a reservation, and you don’t need any previous experience or training. It’s been a month since our grand opening, with no fatal injuries or broken equipment, just a lot of people crashing and burning (only digitally, of course).

Don’t RSVP
We took another risk by choosing not to use Envisionware’s PC Reservation system for our iMac computers. The 20 public PC workstations at Morgade use PC Res and are generally booked, but we knew that our patrons would be hesitant to sit down at a Mac for the first time. Instead we opted for a low-tech solution: labels that read “Multimedia Priority Workstation.” We welcome anyone to try our iMacs, with the understanding that video editing trumps checking your email. I actually stole this idea from my alma mater. I figured if college freshmen could handle it, the general public probably could too.

We’re incredibly lucky to be offering these services to our community and always looking for better ways to share and teach technology. Over time we might have to step back and make some rules, but for now we’re in a good place. If your library is considering offering similar services, I highly recommend starting with as few rules as possible. And I can’t wrap this up without acknowledging my supervisor, who has helped create an environment where it’s okay to challenge the rules. I hope that you have like-minded folks on your team as well.

Library Tech Talk (U of Michigan): Fuzzy Math: Using Google Analytics

Wed, 2015-04-22 00:00

We talk about using Google Analytics in DLPS and HathiTrust, and how the Analytics interface will have changed before you've finished this sentence.

District Dispatch: Strong coalition calls on libraries to plan now to secure E-rate funding

Tue, 2015-04-21 20:13

Libraries now have an extraordinary opportunity to upgrade their broadband following the Federal Communications Commission (FCC) vote to modernize the E-rate program and address the broadband capacity gap facing many public libraries. Today, a broad coalition of library associations, which includes the American Library Association (ALA), calls upon libraries to act to convert this policy win in Washington to real benefit for America’s communities. The organizations released a letter (pdf) today updating library leaders on the next phase of E-rate advocacy.

Library coalition members include the American Indian Library Association; the American Library Association; the Association for Rural & Small Libraries; the Association of Tribal Archives, Libraries, and Museums; the Chief Officers of State Library Agencies; the Public Library Association; and the Urban Libraries Council. Now that the 2015 E-rate application window is closed, the library organizations encourage libraries to revisit their plans for 2016 and beyond with the new opportunities in mind.

“Our associations came together during the E-rate modernization proceeding at the Federal Communications Commission to provide a library voice to ensure libraries across the country—tribal, rural, suburban, and urban—have access to affordable high-capacity broadband to the building and robust Wi-Fi within the building,” coalition partners wrote in a letter (pdf) to library leaders. “The Commission opened a door for libraries, and it is in our collective best interest to walk through it and demonstrate the positive impact of the additional $1.5 billion in funding and the opportunity provided by the changes.”

“The additional $1.5 billion in funding translates to hundreds of millions for libraries each year,” said Courtney Young, president of the American Library Association, in a statement. “The library community worked diligently and collaboratively for nearly two years to advocate on behalf of libraries across the country in connection with the FCC’s E-rate proceeding—but our work is not finished. We must look forward and think of new ways that E-rate can be used to support our broadband network and connectivity goals.”

Library leaders are encouraged to share details with their library associations on their experiences applying for and receiving E-rate funds. Send your comments to Marijke Visser, associate director of the American Library Association’s Office for Information Technology Policy, at mvisser[at]alawash[dot]org. Discover library E-rate resources and tools at Got E-rate?. Additionally, follow E-rate news on the District Dispatch.

The post Strong coalition calls on libraries to plan now to secure E-rate funding appeared first on District Dispatch.

District Dispatch: California library advocate receives WHCLIST Award

Tue, 2015-04-21 18:25

This week, the American Library Association (ALA) Washington Office announced that Mas’ood Cajee, a library advocate from Stockton, Calif., is the winner of the 2015 White House Conference on Library and Information Services (WHCLIST) Award. Given to a non-librarian participant attending National Library Legislative Day, the award covers hotel fees and includes a $300 stipend to defray the cost of attending the event.

A longtime supporter of libraries, Cajee works in a community hit hard by the recession. Stockton, which is home to over 300,000 residents, is now served by just four libraries. Dedicated to seeing his library system revitalized to meet the growing needs of the community, Cajee serves on the board of the Library & Literacy Foundation for San Joaquin County. He also serves as Chair for Strong Libraries = Strong Communities, a group working toward a ballot measure that will provide stable support for his community’s county library system.

Cajee’s hard work and dedication have produced several notable successes for his library system. Last fall, input from his group led to a librarian being hired as head of Stockton’s Community Services department, a position responsible for managing the library system. In January, an op-ed he wrote unleashed an outpouring of support that reopened a closed library branch. In addition, Cajee has contributed to the establishment of a county-wide network of library advocates to help heighten awareness of, and build support for, Stockton’s library system.

“I tell people that if we support our libraries today, our libraries will be there to support us tomorrow,” Cajee said.

The White House Conference on Library and Information Services—an effective force for library advocacy nationally, statewide and locally—transferred its assets to the ALA Washington Office in 1991 after the last White House conference. These funds allow ALA to participate in fostering a spirit of committed, passionate library support in a new generation of library advocates. Leading up to National Library Legislative Day each year, the ALA seeks nominations for the award. Representatives of WHCLIST and the ALA Washington office choose the recipient.

The post California library advocate receives WHCLIST Award appeared first on District Dispatch.

DPLA: DPLA and HathiTrust Partnership Supports Open E-Book Programs

Tue, 2015-04-21 16:00

Written by Dan Cohen and Mike Furlough

The Digital Public Library of America and HathiTrust have had a strong relationship since DPLA’s inception in 2013. As part of our ongoing collaboration to host and make digitized books widely available, we are now working to see how we can provide our services to exciting new initiatives that bring ebooks to everyone.

The Humanities Open Book grant program, a joint initiative of the National Endowment for the Humanities and the Andrew W. Mellon Foundation, is exactly the kind of program we wish to support, and we stand ready to do so. Under this funding program, NEH and Mellon will award grants to publishers to identify previously published books and acquire the appropriate rights to produce an open access e-book edition available under a Creative Commons license.  Participants in the program must deposit an EPUB version of the book in a trusted preservation service to ensure future access.

HathiTrust and DPLA together offer a preservation and access service solution for these re-released titles. Since 2013 public domain and open access titles in HathiTrust have been made available through the Digital Public Library of America. HathiTrust recently added its 5 millionth open e-book volume to its collection, and as a result DPLA now includes over 2.3 million unique e-book titles digitized by HathiTrust’s partner institutions, providing readers with improved ability to find and read these works. Materials added to the HathiTrust collections can be made available to users with print disabilities, and they become part of the corpus of materials available for computational research at the HathiTrust Research Center.  By serving as a DPLA content hub, HathiTrust can ensure that open access e-books are immediately discoverable through DPLA.

Improving the e-book ecosystem is a major focus of DPLA’s and was an important theme at DPLAfest 2015 in Indianapolis. The Humanities Open Book program is just one example of current work to make previously published books available again in open electronic form. A parallel initiative from the Authors Alliance focuses on helping authors regain the rights to their works so that they can be released under more permissive licenses. Publishers are also exploring open access models for newly published scholarly books through programs such as the University of California Press’s Luminos. DPLA and HathiTrust applaud these efforts, and we hope that these initiatives can avoid becoming fragmented by being aggregated through community-focused platforms like DPLA and HathiTrust.

We are both very pleased that we can provide additional support for the fantastic work that NEH and Mellon are supporting through the Humanities Open Book Program. Publishers who are contemplating proposals to NEH may find that works are already digitized in HathiTrust, and may choose to open them as part of the grant planning process. In the coming months we’ll be happy to advise potential applicants to this program, or any other rightsholder who would like to know more about the services of DPLA and HathiTrust.

Karen Coyle: Come in, no questions asked

Tue, 2015-04-21 15:40
by Eusebia Parrotto, Trento Public Library*

He is of an indeterminate age, somewhere between 40 and 55. He's wearing two heavy coats, one over the other, and a large backpack, even though it's 75 degrees out today (shirt-sleeve weather). He's been a regular in the library for a couple of months, from first thing in the morning until closing in the evening. He moves from the periodicals area along the hall to the garden on fair weather days. Sundays, when the library is closed, he is not far away, in the nearby park or on the pedestrian street just outside.

I run into him at the coffee vending machine. He asks me, somewhat hesitantly, if I have any change. I can see that he's missing most of his front teeth. I've got a euro in my hand, and I offer it to him. He takes it slowly, looks at it carefully, and is transformed. His face lights up with a huge smile, and like an excited child, but with a mere whisper of a voice, he says: "Wow!! A euro! Thanks!" I smile back at him, and I can see that he's trying to say something else but he can't, it tires him. I can smell the alcohol on his breath and I assume that's the reason for his lapse. He motions to me to wait while he tries to bring forth the sounds, the words. I do wait, watching. He lifts a hand to the center of his neck as if to push out the words, and he says, with great effort and slowly: "I don't speak well, I had an operation. Look." There is a long scar on his throat that goes from one ear to the other. I recognize what it is. He says again, "Wait, look" and pulls up his left sleeve to show me another scar along the inside of his forearm that splits in two just before his wrist. "I know what that is," I say.

Cancer of the throat. An incision is made from under the chin to arrive at the diseased tissue. They then reconstruct the excised portion using healthy tissue taken from the arm. That way the damaged area will recover, to the extent it can, its original functions.

With great effort and determination he tells me, giving me the signal to wait when he has to pause, that he was operated on nearly a year ago, after three years in which he thought he had a stubborn toothache. When he couldn't take it any more he was taken to the emergency room and was admitted to hospital immediately. I tell him that he's speaking very clearly, and that he has to exercise his speech often to improve his ability to articulate words; it's a question of muscle tone and practice. I ask him if he is able to eat. I know that for many months, even years, after the operation you can only get down liquids and liquified foods. He replies "soups, mainly!" It will get better, I tell him.

His eyes shine with a bright light, he smiles at me, signals to me to wait. Swallows. Concentrates and continues his story, about a woman doctor friend, who he only discovered was a doctor after he got sick. He tells me some details about the operation; the radiation therapy. This is the second time that he has cheated death, he says. The first was when he fell and hit his head and was in a coma for fifteen days. "So now this, and it's the second time that I have been brought back from the brink." He says this with a smile, even a bit cocky, with punch. And then tears come to his eyes. He continues to smile, impishly, toothlessly. "I'm going to make it, you'll see. Right now I'm putting together the forms to get on disability, maybe that will help." "Let's hope it works out," I say as we part. And he replies: "No, not hope. You've got to believe."

The derelicts of the library. A few months back it was in all of the local papers. One student wrote a letter to the newspaper complaining that the presence of the homeless and the vagabonds profaned the grand temple of culture that is the library. Suddenly everyone had something to say on the matter; even those who had never even been to the library were upset about the derelicts there. They said it made them feel unsafe. Others told how it made them feel uncomfortable to come into the library and see them occupying the chairs all day long. Even when half of the chairs were free they were taking up the places of those who needed to study. Because you can't obviously mix with them.

I don't know how often the person I chatted with today had the occasion to speak to others about his illness. It's a terrible disease, painful, and it leaves one mutilated for life. Recovery from the operation is slow, over months, years. It's an affliction that leaves you with a deep fear even when you think you are cured. That man had such a desire to tell the story of his victory over the disease, his desire to live, his faith that never left him even in the darkest moments. I know this from the great light that radiated from his visage, and from his confident smile.

I don't know of any other place but standing at the vending machine of a library where such an encounter is possible between two worlds, two such distant worlds. I don't know where else there can be a simple conversation between two persons who, by rule or by necessity, occupy these social extremities; between one who lives on the margins of society and another who lives the good life; who enjoys the comforts of a home, a job, clean clothes and access to medical care. Not in other public places, which are open only to a defined segment of the population: consumers, clients, visitors to public offices. These are places where you are defined momentarily based on your social activities. Not in the street, or in the square, because there are the streets and squares that are frequented by them, and the others, well-maintained, that are for us. And if one of them ventures into our space he is surely not come to tell us his story, nor are we there to listen to it.

He is called a derelict, but this to me is the beauty of the public library. It is a living, breathing, cultural space that is at its best when it gathers in all of those beings who are kept outside the walls of civil society, in spite of the complexity and contradictions that implies.

The library is a place with stories; there are the stories running through the thousands of books in the library as well as the stories of the people who visit it. In the same way that we approach a new text with openness and trust, we can also be open and trusting as listeners. Doing so, we'll learn that the stories of others are not so different from our own; that the things that we care about in our lives, the important things, are the same for everyone. That they are us, perhaps a bit more free, a bit more suffering, with clothes somewhat older than our own.

Then I read this. It tells the story of the owner of a fast food restaurant who noticed that, after closing, someone was digging through the trash cans looking for something to eat. So she put a sign in the store window, inviting the person to stop in one day and have a fresh meal, for free. The sign ends with: "No questions asked."

So this is what I want written on the front door of all libraries: "Come in, whoever you are. No questions asked."

*Translated and posted with permission. Original.

[Note: David Lankes tweeted (or re-tweeted, I don't remember) a link to Eusebia's blog, and I was immediately taken by it. She writes beautifully of the emotion of the public library. I will translate other posts as I can. And I would be happy to learn of other writers of this genre that we can encourage and publicize. - kc]

David Rosenthal: The Ontario Library Research Cloud

Tue, 2015-04-21 15:00
One of the most interesting sessions at the recent CNI was on the Ontario Library Research Cloud (OLRC). It is a collaboration between universities in Ontario to provide a low-cost, distributed, mutually owned private storage cloud with adequate compute capacity for uses such as text-mining. Below the fold, my commentary on their presentations.

For quite some time I've been greeted by skepticism as I've argued that, once you  get to a reasonable scale, commercial cloud storage is significantly more expensive than doing it yourself. It was very nice to hear a talk that agreed with me.

Admittedly, Ontario is a nearly ideal environment for a collaborative private storage cloud. It has nearly 40% of Canadians, almost all concentrated together close to the US border, and a quarter of the top Canadian Universities. And the Universities have a long history of collaborating. Among these collaborations are ORION, a shared high-bandwidth network connecting the campuses, and the strikingly successful Scholar's Portal, which ingests the e-journals to which Ontario subscribes and provides local access to them. They currently have about 38M articles and about 610K e-books.

Scholar's Portal are branching out to act as a data repository. Their partners estimated that their storage needs would grow rapidly to over a petabyte. OLRC's goals for a shared storage infrastructure were four-fold, and low cost was only one of them.
They estimated the cost of using commercial cloud services, even though those services would not meet the other three goals, and were confident that using off-the-shelf hardware and open source software they could build a system that would provide significant savings.

They received a grant from the provincial government to cover the cost of the initial hardware, but the grant conditions meant they had only three months to purchase it. This turned out to be a major constraint on the procurement. Dell supplied 4.8PB of raw disk in 77 MD1200 shelves, and 19 PowerEdge R720xd heads for a total usable storage capacity of 1.2PB. Two nodes were in Toronto, one each in Ottawa, Kingston, Guelph. They were connected by 10G Ethernet VLAN links.

The software is OpenStack Swift, the open source object storage system. Swift is hardware agnostic, so future hardware procurement won't be so constrained.
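
To make the architecture concrete: applications see the whole distributed cluster through one simple HTTP object API, regardless of which campus holds the replicas. The following is only my sketch, not OLRC code, using the python-swiftclient library with placeholder endpoint, credentials and container names, to show what storing and retrieving an object looks like:

    # Minimal sketch using python-swiftclient; the auth URL, account and
    # container names are placeholders, not OLRC's real configuration.
    import swiftclient

    conn = swiftclient.client.Connection(
        authurl='https://swift.example.ca/auth/v1.0',  # hypothetical endpoint
        user='repository:app',                         # hypothetical account
        key='secret',
    )

    # Create a container and store a preservation file in it.
    conn.put_container('preservation-masters')
    with open('article-0001.pdf', 'rb') as f:
        conn.put_object('preservation-masters', 'article-0001.pdf',
                        contents=f, content_type='application/pdf')

    # Read it back; Swift returns the response headers and the object body.
    headers, body = conn.get_object('preservation-masters', 'article-0001.pdf')
    print(headers.get('etag'), len(body))

Replication across the nodes is handled by Swift's rings; the client never needs to know where the copies live.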

The partners initially set up a test network with three nodes and ran tests to see what the impact of Bad Things happening would be on the network. The bottom line is that 1G Ethernet is the absolute minimum you need - recovering from the loss of a 48TB shelf over a 1G link took 8 days.
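
That 8-day figure is close to what a back-of-envelope bandwidth calculation predicts; here is my own quick sanity check, not theirs:

    # Back-of-envelope check of the 48TB-over-1G recovery time reported above.
    shelf_bytes = 48e12          # 48 TB of data to re-replicate
    link_bps = 1e9               # 1 Gb/s Ethernet link
    ideal_days = shelf_bytes * 8 / link_bps / 86400
    print(round(ideal_days, 1))  # ~4.4 days at 100% line rate

Eight days observed against roughly 4.4 days at full line rate implies around 55% effective utilization, which is plausible once protocol overhead, disk contention and ongoing cluster traffic are taken into account.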

As I've been pointing out for some time, archives are going through a major change in access pattern. Scholars used to access a few individual items from an archive, but increasingly they want to mine from the entire corpus. OLRC realized that they needed to provide this capability, so they added another 5 R720xd servers as a compute cluster.

Now that the system is up and running, they have actual costs to work with. OLRC's next step is to figure out pricing, which they are confident will be significantly less than commercial clouds. I will be very interested to follow their progress - this is exactly how Universities should collaborate to build affordable infrastructure.

Alf Eaton, Alf: Access-Control-Allow-Origin: *

Tue, 2015-04-21 13:52
  1. A client (web browser) is not allowed to read data sent in response to a GET request containing authentication credentials, if Access-Control-Allow-Origin: * is present in the HTTP headers of the response.
  2. The only time data with an Access-Control-Allow-Origin: * header is available to the client is when no authentication details (e.g. cookies) are sent.
  3. When an Access-Control-Allow-Origin: * header is set on the response, the data that can be read is guaranteed to be anonymous.

[†] This is a special case that only applies when * is set as the origin.

[‡] The only exception is when authentication is by IP address - in that case the Access-Control-Allow-Origin: * header should not be set.
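
Put in server-side terms: an endpoint can safely send Access-Control-Allow-Origin: * only when its response does not depend on credentials, while a credentialed endpoint has to echo one specific origin and add Access-Control-Allow-Credentials: true instead. The sketch below is a hypothetical illustration of that distinction using Python's standard http.server; the paths, origin and payloads are invented for the example:

    # Hypothetical illustration of the rules above; paths, origins and data
    # are invented, and IP-address authentication (the noted exception) is
    # not modelled here.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            if self.path == '/public':
                # Anonymous data: safe to expose to every origin.
                body = b'{"status": "open"}'
                self.send_response(200)
                self.send_header('Access-Control-Allow-Origin', '*')
            else:
                # Credential-dependent data: never use '*'; echo one trusted
                # origin and explicitly allow credentials.
                body = b'{"status": "private"}'
                self.send_response(200)
                self.send_header('Access-Control-Allow-Origin',
                                 'https://app.example.org')
                self.send_header('Access-Control-Allow-Credentials', 'true')
            self.send_header('Content-Type', 'application/json')
            self.send_header('Content-Length', str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == '__main__':
        HTTPServer(('localhost', 8000), Handler).serve_forever()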

Open Knowledge Foundation: Open Trials: Open Knowledge Announce Plans for Open, Online Database of Clinical Trials

Tue, 2015-04-21 13:00

Open Knowledge today announced plans to develop Open Trials, an open, online database of information about the world’s clinical research trials funded by The Laura and John Arnold Foundation. The project, which is designed to increase transparency and improve access to research, will be directed by Dr. Ben Goldacre, an internationally known leader on clinical transparency.

Open Trials will aggregate information from a wide variety of existing sources in order to provide a comprehensive picture of the data and documents related to all trials of medicines and other treatments around the world. Conducted in partnership with the Center for Open Science and supported by the Center’s Open Science Framework, the project will also track whether essential information about clinical trials is transparent and publicly accessible so as to improve understanding of whether specific treatments are effective and safe.

“There have been numerous positive statements about the need for greater transparency on information about clinical trials, over many years, but it has been almost impossible to track and audit exactly what is missing,” Dr. Goldacre, the project’s Chief Investigator and a Senior Clinical Research Fellow in the Centre for Evidence Based Medicine at the University of Oxford, explained. “This project aims to draw together everything that is known around each clinical trial. The end product will provide valuable information for patients, doctors, researchers, and policymakers—not just on individual trials, but also on how whole sectors, researchers, companies, and funders are performing. It will show who is failing to share information appropriately, who is doing well, and how standards can be improved.”

Patients, doctors, researchers, and policymakers use the evidence from clinical trials to make informed decisions about which treatments are best. But studies show that roughly half of all clinical trial results are not published, with positive results published twice as often as negative results. In addition, much of the important information about the methods and findings of clinical trials is only made available outside the normal indexes of academic journals.

“This project will help to shed light on both good and bad practices by the sponsors of clinical trials,” Stuart Buck, LJAF Vice President of Research Integrity, explained. “If those sponsors become more transparent about their successes and failures, medical science will advance more quickly, thus benefitting patients’ health.”

“We are thrilled to partner with Open Knowledge on the use of the Open Science Framework (OSF) for this project. Open Trials is a great example of how the free, open source OSF infrastructure can be utilized by the community in different ways to increase transparency in scientific research,” Andrew Sallans, Center for Open Science Partnerships Lead, explained.

Open Trials will help to automatically identify which trial results have not been disclosed by matching registry data on trials that have been conducted against documents containing trial results. This will facilitate routine public audit of undisclosed results. It will also improve discoverability of other documents around clinical trials, which will be indexed and, in some cases, hosted. Lastly, it will help improve recruitment for clinical trials by making information and commentary on ongoing trials more accessible.

“This is an incredible opportunity to identify which trial results are being withheld,” Rufus Pollock, President and Founder of Open Knowledge, explained. “It is the perfect example of a project where opening up data and presenting it in a usable form will have a direct impact—it can literally save lives. We’re absolutely delighted to partner with Ben Goldacre, a leading expert and advocate in this space, as well as with the Center for Open Science and LJAF to conduct this groundbreaking work.”

The first phase of the Open Trials project is scheduled for completion in March 2017. For project updates, please follow @opentrials on Twitter or get in touch with us at opentrials@okfn.org.

CrossRef: Update to CrossRef Web Deposit Form for CrossCheck Indexing

Tue, 2015-04-21 08:59

CrossRef has recently updated the Web Deposit form to help CrossCheck members enable their content for indexing in the CrossCheck database. As of February 2015, the aim has been to get all CrossCheck member publishers to enable indexing via the 'as-crawled' URL method, in which publishers deposit full-text links to the content in the CrossRef metadata so that iParadigms can find and index it more easily.

Publishers who use the Web Deposit Form to deposit DOIs and metadata with CrossRef can now enter the full-text link to the PDF or HTML version of the article in the 'iParadigms URL' field on the web deposit form.

Backfiles can also be populated with this information in bulk using the .csv upload option: http://help.crossref.org/as-crawled-csv-upload.
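
If the DOI-to-full-text mapping lives in a local database, producing such a backfile can be scripted. The snippet below is only illustrative and is mine, not CrossRef's: it assumes a simple two-column DOI/URL layout, so check the required columns against the help page above before uploading.

    # Illustrative only: writes DOI / full-text URL rows for a bulk backfile.
    # The two-column layout is an assumption; confirm the required format at
    # http://help.crossref.org/as-crawled-csv-upload before depositing.
    import csv

    records = [
        ('10.5555/example.0001', 'https://journal.example.org/articles/0001.pdf'),
        ('10.5555/example.0002', 'https://journal.example.org/articles/0002.pdf'),
    ]

    with open('as-crawled-backfile.csv', 'w', newline='') as f:
        writer = csv.writer(f)
        for doi, url in records:
            writer.writerow([doi, url])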

These measures are being put in place to try to increase the efficacy of CrossCheck indexing by iParadigms and make sure the Similarity Reports accessed by CrossCheck users in iThenticate are as comprehensive as possible.

Galen Charlton: How long does it take to change the data, part I: confidence

Tue, 2015-04-21 00:09

A few days ago, I asked the following question in the Mashcat Slack: “if you’re a library data person, what questions do you have to ask of library systems people and library programmers?”

Here is a question that Alison Hitchens asked based on that prompt:

I’m not sure it is a question, but a need for understanding what types of data manipulations etc. are easy peasy and would take under hour of developer time and what types of things are tricky — I guess an understanding of the resourcing scope of the things we are asking for, if that makes sense

That’s an excellent question – and one whose answer heavily depends on the particulars of the data change needed, the people requesting it, the people who are to implement it, and tools that are available.  I cannot offer a magic box that, when fed specifics and given a few turns of its crank, spits out a reliable time estimate.

However, I can offer up a point of view: asking somebody how long it takes to change some data is asking them to take the measure of their confidence and of their constraints.

In this post I’ll focus on the matter of confidence.  If you, a library data person, are asking me, a library systems person (or team, or department, or service provider), to change a pile of data, I may be perfectly confident in my ability to so.  Perhaps it’s a routine record load that for whatever reason cannot be run directly by the catalogers but for which tools and procedures already exist.  In that case, answering the question of how long it would take to do it might be easy (ignoring, for the moment, the matter of fitting the work onto the calendar).

But when asked to do something new, my confidence could start out being quite low.  Here are some of the questions I might be asking myself:

Am I confident that I’m getting the request from the right person?  Am I confident that the requester has done their homework?

Ideally, the requester has the authority to ask for the change, knows why the change is wanted, has consulted with the right data experts within the organization to verify that the request makes sense, and has ensured that all of the relevant stakeholders have signed off on the request.

If not, then it will take me time to either get the requester to line up the political ducks or to do so myself.

Am I confident that I understand the reason for the change?

If I know the reason for the change – which presumably is rooted in some expected benefit to the library’s users or staff – I may be able to suggest better approaches.  After all, sometimes the best way to do a data change is to change no data at all, and instead change displays or software configuration options.  If data does need to be changed, knowing why can make it easier for me to suss out some of the details or ask smarter questions.

If the reason for the change isn’t apparent, it will take me time to work with the requester and other experts and stakeholders until I have enough understanding of the big picture to proceed (or to be told to do it because the requester said so – but that has its own problems).

Am I confident that I understand the details of the requested change?

Computers are stupid and precise, so ultimately any process and program I write or use to effect the change has to be stupid and precise.

Humans are smart and fuzzy, so to bring a request down to the level of the computer, I have to analyze the problem until I’m confident that I’ve broken it down enough. Whatever design and development process I follow to do the analysis – waterfall, agile, or otherwise – it will take time.

Am I confident in the data that I am to change?

Is the data to be changed nice, clean and consistent?  Great! It’s easier to move a clean data set from one consistent state to another consistent state than it is to clean up a messy batch of data.

The messier the data, the more edge cases there are to consider, the more possible exceptions to worry about – the longer the data change will take.

Am I confident that I have the technical knowledge to implement the change?

Relevant technical knowledge can include knowledge of any update tools provided by the software, knowledge of programming languages that can use system APIs, knowledge of data manipulation and access languages such as SQL and XSLT, knowledge of the underlying DBMS, and so forth.

If I’m confident in my knowledge of the tools, I’ll need less time to figure out how to put them together to deal with the data change.  If not, I’ll need time to teach myself, enlist the aid of colleagues who do have the relevant knowledge, or find contractors to do the work.

Am I confident in my ability to predict any side-effects of the change?

Library data lives in complicated silos. Sometimes, a seemingly small change can have unexpected consequences.  As a very small example, Evergreen actually cares about the values of indicators in the MARC21 856 field; get them wrong, and your electronic resource URLs disappear from public catalog display.

If I’m familiar with the systems that store and use the data to be changed and am confident that side-effects of the change will be minimal, great! If not, it may take me some time to investigate the possible consequences of the change.
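
A concrete way to build that confidence is to audit the data before touching it. As a rough sketch of mine, using the pymarc library with an invented file name and assuming the catalog expects first indicator 4 and second indicator 0 on the 856 field (local conventions vary), one could list the records whose indicators would misbehave before running any batch change:

    # Sketch of a pre-change audit: list records whose 856 indicators differ
    # from what the catalog expects. The file name and the expected values
    # ('4', '0') are assumptions; adjust them to local practice.
    from pymarc import MARCReader

    EXPECTED = ('4', '0')

    with open('eresources.mrc', 'rb') as fh:
        for record in MARCReader(fh):
            if record is None:
                continue  # skip records pymarc could not parse
            for field in record.get_fields('856'):
                indicators = (field.indicators[0], field.indicators[1])
                if indicators != EXPECTED:
                    ctrl = record['001'].value() if record['001'] else '(no 001)'
                    print(ctrl, indicators, field.get_subfields('u'))

Even a crude report like this turns a guess about data quality into something the requester and the implementer can both look at.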

Am I confident in my ability to back out of the change if something goes wrong?

Is the data change difficult or awkward to undo if something is amiss?  If so, it presents an operational risk, one whose mitigation is taking more time for planning and test runs.

Am I confident that I know how often requests for similar data changes will be made in the future?

If the request is a one-off, great! If the request is the harbinger of many more like it – or looks that way – I may be better off writing a tool that I can use to make the data change repeatedly.  I may be even better off writing a tool that the requester can use.

It may take more time to write such a tool than it would to just handle the request as a one-off, in which case it will take time to decide which direction to take.

Am I confident in the organization?

Do I work for a library that can handle mistakes well?  Where if the data change turns out to be misguided, is able to roll with the punches?  Or do I work for an unhealthy organization where a mistake means months of recriminations? Or where the catalog is just one of the fronts in a war between the public and technical services departments?

Can I expect to get compensated for performing the data change successfully? Or am I effectively being treated as if I were the stupid, over-precise computer?

If the organization is unhealthy, I may need to spend more time than ought to be necessary to protect my back – or I may end up spending a lot of time not just implementing data changes, but data oscillations.

The pattern should be clear: part of the process of estimating how long it might take to effect a data change is estimating how much confidence I have about the change.  Generally speaking, higher confidence means less time would be needed to make the change – but of course, confidence is a quality that cannot be separated from the people and organizations who might work on the change.

In the extreme – but common – case, if I start from a state of very low confidence, it will take me time to reach a sufficient degree of confidence to make any time estimate at all.  This is why I like a comment that Owen Stephens made in the Slack:

Perhaps this is part of the answer to [Alison]: Q: Always ask how long it will take to investigate and get an idea of how difficult it is.

In the next post, I discuss how various constraints can affect time estimates.

DuraSpace News: UPDATE: 2015 VIVO Conference Workshops Announced

Tue, 2015-04-21 00:00

Boston, MA – The Sixth Annual VIVO Conference will be held August 12-14, 2015 at the Hyatt Regency Cambridge, overlooking Boston. The VIVO Conference creates a unique opportunity for people from across the country and around the world to come together to explore ways to use semantic technologies and linked open data to promote scholarly collaboration and research discovery.

Tara Robertson: May conferences

Mon, 2015-04-20 22:58

I’m a bit of a nervous public speaker. Most people assume that because of my personality or pink hair I’m really comfortable presenting in front of a group of people. Those people also assume I like rollercoasters. This is not true.

Instead of feeling a sense of dread I’m feeling pretty excited about these upcoming presentations. I’m going to be talking about work that I feel really passionate about and co-presenting with some of my favourite colleagues means that there’s support and that I need to be prepared well ahead of time.

BCLA conference, May 20-22

  • I’ll be on a panel Small Changes, Big Impact: New and Affordable Solutions for Document Delivery where I’ll be talking about the process of figuring out what you need software to do and how to look beyond library software vendors to meet your needs. I will reference Monty Python’s Ministry of Silly Walks and talk about workflows.
  • Co-presenting with Amanda Coolidge, Manager of Open Education at BCcampus, on Can I actually Use It? Testing Open Textbooks for Accessibility, where we’ll be talking about the user testing we did with the open textbooks and the toolkit we wrote with Sue Doner, Instructional Designer at Camosun College.
  • I’ll be one of many on the Oh Glorious Failures! Lightning Talks on How to Succeed Through Failure. We know that valuable learning happens through failure but many librarians are reluctant to share our professional failures. I’m going to talk about something I messed up in the open textbooks user testing focus group.

CAUCUSS conference, May 24-27

This will be my first time attending CAUCUSS, the national conference for student services folks in post-secondary. I’m really looking forward to meeting disability service folks from across Canada as well as attending a session on universal design for learning.

  • I’m also looking forward to co-presenting Alternate Formats 101 with Heidi Nygard from UBC’s Access and Diversity, Crane Library. Both of our organizations have a long history of producing alternate formats, and we’re going to go through the similarities and differences in how we produce various alternate formats: accessible PDF, e-text, mp3, DAISY, and Large Print, and how we deal with pesky things like tables, math formulas and image descriptions. We’re going to sneak in some stuff about core library values and protecting user rights.

Open Textbook Summit, May 28-29

  • This will be the first time Amanda, Sue and I will present together in person. We’re doing a 30 minute session on the user testing and we’ll be co-presenting with one of the students who did the testing, Shruti Shravah. This project was the highlight of my last year of work: collaborating with Amanda and Sue was the best thing, the students were amazing, and I’m proud of the process and outcome. I’m super excited about this talk.
