Planet Code4Lib - http://planet.code4lib.org
David Rosenthal: Don't Panic

Sat, 2015-02-28 17:51
I was one of the crowd of people who reacted to Wednesday's news that Argonne National Labs would shut down the NEWTON Ask A Scientist service (online since 1991) this Sunday by alerting Jason Scott's ArchiveTeam. Jason did what I should have done before flashing the bat-signal: he fed the URL into the Internet Archive's Save Page Now, only to be told, in effect, "relax, we're all over it". The site has been captured since 1996, and the most recent capture before the announcement was February 7th. Jason arranged for captures Thursday and today.

As you can see by these examples, the Wayback Machine has a pretty good copy of the final state of the service and, as the use of Memento spreads, it will even remain accessible via its original URL.

Hydra Project: Hydra Europe events Spring 2015

Sat, 2015-02-28 10:02

Registration is now open for two Hydra events in Europe this spring:

Hydra Europe Symposium – a free event for digital collection managers, collection owners and their software developers that will provide insights into how Hydra can serve your needs

  • Thursday 23rd April – Friday 24th April 2015 | LSE, London

Hydra Camp London – a training event enabling technical staff to learn about the Hydra technology stack so they can establish their own implementation

  • Monday 20th April – lunchtime Thursday 23rd April 2015 | LSE, London

Full details and booking arrangements for both events can be found here.

Mark E. Phillips: DPLA Metadata Analysis: Part 3 – Where to go from here.

Fri, 2015-02-27 23:31

This is the last of three posts about working with the Digital Public Library of America’s (DPLA) metadata to demonstrate some of the analysis that can be done using Solr and a little bit of time and patience. Here are links to the first and second post in the series.

What I wanted to talk about in this post is how we can use this data to help improve access to our digital resources in the DPLA, and also how we can measure that we've in fact improved when we go out and spend resources, both time and money, on metadata work.

The first thing I think we need to do is make an assumption to frame this conversation. For now, let's say that the presence of subjects in a metadata record is a positive indicator of quality, and that for the most part a record with three or more subjects (controlled, keywords, whatever) improves access to resources in metadata aggregation systems like the DPLA, which doesn't have the benefit of full text for searching.

So out of the numbers we've looked at so far, which ones should we pay the most attention to?

Zero Subjects

For me, the focus is the records already online that have zero subject headings. Going from 0 to 1 subject headings is much more of an improvement for access than going from 1 to 2, 2 to 3, 3 to 4, 4 to 8, or 8 to 15 subjects per record. So once all records have at least one subject, we can move on. We can measure this directly with the metric I introduced in the last post: how many records have zero subjects.

There are currently 1,827,276 records in the DPLA that have no subjects or keywords. This accounts for 23% of the DPLA dataset analyzed for these blog posts. I think this is a pretty straightforward area to work on related to metadata improvement.
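To make that metric concrete, here is a minimal sketch of how a zero-subject count could be computed, assuming records are loaded as Python dicts following the DPLA's sourceResource layout (the sample records below are invented for illustration, not taken from the actual dataset):

```python
# Illustrative sketch only: assumes each record is a dict following the
# DPLA sourceResource layout; the sample records below are invented.

def subject_count(record):
    """Number of subject entries on a single DPLA-style record."""
    return len(record.get("sourceResource", {}).get("subject", []))

def zero_subject_stats(records):
    """Return (count, percent) of records with no subjects at all."""
    zero = sum(1 for r in records if subject_count(r) == 0)
    total = len(records)
    percent = round(100 * zero / total) if total else 0
    return zero, percent

records = [
    {"sourceResource": {"subject": [{"name": "Railroads"}]}},
    {"sourceResource": {"subject": []}},
    {"sourceResource": {}},
    {"sourceResource": {"subject": [{"name": "Texas"}, {"name": "Maps"}]}},
]
print(zero_subject_stats(records))  # (2, 50)
```

Run over the full dataset, the same arithmetic yields the 1,827,276 records and 23% figures above.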

Dead end subjects

One area we could work to improve is subjects that are used only once in the DPLA as a whole, or only once within a single Hub. Reducing this number would create more avenues for navigating between records by connecting them via subject when available. There isn't anything bad about a subject heading that is unique within a community, but if a record doesn't have a way to get you to like records (assuming there are like records within a collection), then it isn't as useful as one that connects you to more, similar items. There are of course many legitimate reasons for a subject to occur only once in a dataset, and I don't think we should strive to remove singletons completely, but reducing their number overall would be an indicator of improvement in my book.

In the last post I had a table showing the number of unique subjects per Hub and the number of subjects that were unique to a single Hub. I was curious what percentage of each Hub's unique subjects occur in that Hub alone. Here is that table.

Hub Name | Records | Unique Subjects | Subjects Unique to Hub | % Unique to Hub
ARTstor | 56,342 | 9,560 | 4,941 | 52%
Biodiversity Heritage Library | 138,288 | 22,004 | 9,136 | 42%
David Rumsey | 48,132 | 123 | 30 | 24%
Digital Commonwealth | 124,804 | 41,704 | 31,094 | 75%
Digital Library of Georgia | 259,640 | 132,160 | 114,689 | 87%
Harvard Library | 10,568 | 9,257 | 7,204 | 78%
HathiTrust | 1,915,159 | 685,733 | 570,292 | 83%
Internet Archive | 208,953 | 56,911 | 28,978 | 51%
J. Paul Getty Trust | 92,681 | 2,777 | 1,852 | 67%
Kentucky Digital Library | 127,755 | 1,972 | 1,337 | 68%
Minnesota Digital Library | 40,533 | 24,472 | 17,545 | 72%
Missouri Hub | 41,557 | 6,893 | 4,338 | 63%
Mountain West Digital Library | 867,538 | 227,755 | 192,501 | 85%
National Archives and Records Administration | 700,952 | 7,086 | 3,589 | 51%
North Carolina Digital Heritage Center | 260,709 | 99,258 | 84,203 | 85%
Smithsonian Institution | 897,196 | 348,302 | 325,878 | 94%
South Carolina Digital Library | 76,001 | 23,842 | 18,110 | 76%
The New York Public Library | 1,169,576 | 69,210 | 52,002 | 75%
The Portal to Texas History | 477,639 | 104,566 | 87,076 | 83%
United States Government Printing Office (GPO) | 148,715 | 174,067 | 105,389 | 61%
University of Illinois at Urbana-Champaign | 18,103 | 6,183 | 3,076 | 50%
University of Southern California. Libraries | 301,325 | 65,958 | 51,822 | 79%
University of Virginia Library | 30,188 | 3,736 | 2,425 | 65%
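The "unique to hub" column boils down to set arithmetic: a subject is unique to a Hub if no other Hub's subject set contains it. A small sketch with invented hub names and subject values (not real DPLA data):

```python
# Toy data: each hub maps to its set of distinct subject strings.
# Hub names and subjects are invented for illustration.
hub_subjects = {
    "Hub A": {"Railroads", "Maps", "Texas"},
    "Hub B": {"Maps", "Birds"},
    "Hub C": {"Birds", "Fish"},
}

def unique_to_hub(hub, hubs):
    """Subjects that appear in `hub` and in no other hub."""
    others = set().union(*(s for h, s in hubs.items() if h != hub))
    return hubs[hub] - others

for hub, mine in hub_subjects.items():
    unique = unique_to_hub(hub, hub_subjects)
    pct = round(100 * len(unique) / len(mine))
    print(f"{hub}: {len(unique)}/{len(mine)} unique ({pct}%)")
```

The same calculation over the real per-Hub subject sets produces the percentages in the table above.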

Here is the breakdown when grouped by type of Hub, either Service Hub or Content Hub.

Hub Type | Records | Unique Subjects | Subjects Unique to Hub Type | % Unique to Hub Type
Content Hubs | 5,736,178 | 1,311,830 | 1,253,769 | 96%
Service Hubs | 2,276,176 | 618,081 | 560,049 | 91%

Or another way to look at how the subjects are shared between the different types of Hubs is the following graph.

Subjects unique to and shared between Hub Types.

It appears that only a small number (3%) of subjects are shared between Hub types. Would increasing this number improve users' ability to discover resources from multiple Hubs?

More, More, More

Once we've addressed the areas mentioned above, I think we should work to raise the number of subjects per record within a given Hub. I don't think there is a magic number for everyone, but at UNT we try to have three subjects for each record whenever possible, so that's what we are shooting for. We can easily see improvement by watching the mean and checking whether it goes up (even ever so slightly).
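As a sketch of that measurement, with invented per-record subject counts standing in for real data:

```python
# Track whether metadata work moves the mean subjects-per-record.
# The counts below are invented for illustration.
from statistics import mean

before = [0, 0, 1, 2, 3, 0, 1]   # subjects per record, before cleanup
after = [1, 1, 1, 2, 3, 1, 2]    # same records after adding subjects

print(round(mean(before), 2))  # 1.0
print(round(mean(after), 2))   # 1.57
```

Even a small upward drift in the mean is a sign the work is paying off, which is exactly the cheap, repeatable check described above.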

Next Steps

I think there is some work we could do to identify which records need specific kinds of work on their subjects, based on more involved processing of the input records, but I'm going to leave that for another post and probably another flight somewhere to work on.

Hope you enjoyed these three posts and that they resonate at least a bit with you.

Feel free to send me a note on Twitter if you have questions, comments, or ideas for me about this.

District Dispatch: Copyright Office should modernize its operation

Fri, 2015-02-27 21:06

Photo by Steve Akers

The U.S. House Judiciary Committee has mulled its way through 16 well-attended and sometimes contentious hearings on comprehensive copyright reform since 2013. Thursday's hearing—“The U.S. Copyright Office: Its Function and Resources”—sounds like one that keen copyright followers might think is mundane enough to skip, but they would be wrong. The Office's functions tremendously affect how libraries, businesses, authors, and other creators operate, because the Office holds the official record of which works are protected and who holds the copyright. Plus, there was a little excitement.

The witnesses were Keith Kupferschmid, Software & Information Industry Association general counsel; Lisa Dunner of Dunner Law on behalf of the American Bar Association; Nancy Mertzel of Schoeman Updike on behalf of the American Intellectual Property Law Association; and Bob Brauneis, George Washington University Law School professor.

In an unusual accord, the witnesses agreed, the Representatives agreed, probably everybody in the room agreed that the time is now for the modernization of the Copyright Office. Stuck in the 1970s, the Copyright Office is ill-equipped to manage its basic function—recording which works are protected by copyright and who holds the rights to them. Based on their testimony, the panel agreed that the Copyright Office requires more technical expertise and additional resources to build a 21st-century digital infrastructure for registration, recordation and search. The Judiciary Committee members' statements demonstrated that, in light of the Office's role in providing an effective and efficient structure for transactions in the copyright industry—an industry that Representative Deutch said was worth over a trillion dollars—something must be done. A functioning, modern registration system would help alleviate the orphan works problem by making it possible to locate rights holders and track the provenance of copyrighted works. (Perhaps now all stakeholders can agree with the Library Copyright Alliance's position that a full-scale searchable and interoperable system that meets the needs of commerce, creators, and the public is preferable to an orphan works solution legislated by Congress.)

Only Representative Zoe Lofgren went off script, asking cynically whether modernization could also help the Copyright Office represent “a greater diversity of viewpoints.” Rep. Lofgren continued by saying the Office has made some “bone-headed mistakes.” Its strong endorsement of the Stop Online Piracy Act (SOPA) did not take into account the public interest, and resulted in an unprecedented backlash: a recorded 14 million people contacted Congress to protest SOPA, and the world witnessed the first internet blackout campaign. Lofgren continued that it was the Copyright Office that recommended not extending the 1201 exemption for cell phone unlocking, leading to another public outcry, a “We the People” petition to the President, and the need for Congress to pass the Unlocking Consumer Choice and Wireless Competition Act. Lofgren was not done. At an earlier 1201 rulemaking, the Office advised against an exemption for circumventing e-reader restrictions to enable text-to-speech functionality for people with print disabilities. Really? Thankfully, that decision was overturned by the Librarian of Congress in his final decision.

Lofgren, who represents constituents in Silicon Valley, has butted heads with the Copyright Office in the past. At an earlier hearing, Register Maria Pallante was quizzed by Lofgren over the purpose of copyright. In the American Bar Association's Landslide Magazine, Pallante was quoted as saying that “copyright is for the author first and the nation second.” Lofgren countered: “It seems to me when you look at the Constitution, which empowers Congress to grant exclusive rights in creative works in order, and I quote, ‘to promote the progress of science and the useful arts,’ it seems to me that the Constitution is very clear that copyright does not exist inherently for the author but for the benefit of society at large.”

It is in the public interest to provide the necessary resources and expertise to upgrade the Copyright Office's infrastructure. Whether the Copyright Office will be able to balance the interests of all parties for the benefit of society at large in other areas of the copyright review is another, more fundamental matter. Protection of legacy business models and copyright enforcement continue to dominate policy discussions in both the legislative and executive branches of government, so the Copyright Office is not alone in treating the public interest as a secondary matter. Let's hope that the public continues to pay close attention to the House Judiciary copyright review; it has been more than willing to speak up on its own behalf.

The post Copyright Office should modernize its operation appeared first on District Dispatch.

Chris Prom: Rights in the Digital Era

Fri, 2015-02-27 20:31

When working with electronic records, we often feel like we stand on shaky ground.

For example, when I teach my DAS course, some of the most difficult questions that course participants raise are related to rights: “How do we know what is copyrighted?” “How do we identify private materials?” “How can we provide access to ________?” “How can we track rights information?” “What do we do if we get sued?”

As archivists, we like to provide as much access as possible to the great things our repositories hold.  How can we do that and still sleep soundly at night?

These are the kinds of questions that led the SAA Publications Board to commission the newest entry in our Trends in Archives Practice series, Rights in the Digital Era. As with the courses in the SAA Digital Archives Specialist curriculum, I see the current and forthcoming works in this series as falling very much within the original spirit of this blog: making digital archives work practical and accessible.

Rights in the Digital Era, for instance, lays out risk management strategies and techniques you can use to provide responsible access to analog and digital collections, while meeting legal and ethical obligations, whatever your professional status or repository profile. As Peter B. Hirtle notes in his introduction to the volume, “A close reading of the modules will provide you with the rights information that all archivists should know, whether you are a repository’s director, reference archivist, or processing assistant.”

  • Module 4: Understanding Copyright Law by Heather Briston – provides a short-but-sweet introduction to copyright law and unpublished materials, emphasizing practical steps that can be taken to make archives more accessible and useful.
  • Module 5: Balancing Privacy and Access in Manuscript Collections by Menzi Behrnd-Klodt – navigates the difficult terrain of personal privacy, developing a roadmap to the law and describing risk management strategies applicable to a range of repositories and collections.
  • Module 6: Balancing Privacy and Access in the Records of Organizations by Menzi Behrnd-Klodt – unpacks legal and ethical requirements for records that are restricted under law, highlighting hands-on techniques you can use to develop thoughtful access policies and pathways in a variety of repository settings.
  • Module 7: Managing Rights and Permissions by Aprille McKay – examines methods and steps you can take to control rights information, supplying many useful approaches that you can easily adapt to your own circumstances.

It gives me and the entire SAA Publications Board great pride to see the fruits of our authors’ thinking and of our society’s work emerge in high quality, peer reviewed, and attractively designed books.

But don’t just take my word for it—read Rights in the Digital Era or any of the dozens of other publications that comprise the SAA catalog of titles!


District Dispatch: School libraries can’t afford to wait

Fri, 2015-02-27 19:46

A student at the ‘Iolani School in Hawaii

Today, we are ending a very busy week of work to include school library provisions in the reauthorization of the Elementary and Secondary Education Act (ESEA). As we move forward on this very important legislative effort to secure federal funding for school libraries, we are asking library advocates, teachers, parents and students to ask their Senators to become co-sponsors of the SKILLS Act (S 312). We are encouraging those lucky few advocates who live in states where one of their Senators is on the Senate HELP Committee to call their Washington office and ask their Senators to co-sponsor Senator Sheldon Whitehouse’s (D-RI) efforts to include the SKILLS Act in the Committee’s ESEA bill.

I’ve had many meetings in the Senate recently, and none of the congressional staff members I’ve met with have heard from any library supporters, so PLEASE MAKE THESE CALLS. AND ASK EVERYONE ELSE YOU KNOW TO CALL TOO.

Students deserve to go to a school with an effective school library program. Take action now!

The post School libraries can’t afford to wait appeared first on District Dispatch.

District Dispatch: ALA welcomes Alternative Spring Break student Natalie Yee

Fri, 2015-02-27 19:25

Natalie Yee

Next week, we welcome University of Michigan student Natalie Yee to the American Library Association (ALA) Washington Office, where she will learn about information-related fields as part of her university’s School of Information Alternative Spring Break program. Yee will conduct research on online resources and tools produced by the ALA Washington Office (under my guidance as the press officer).

Yee is currently working to earn a master’s degree in Information, with a focus in Human-Computer Interaction. She previously earned a bachelor’s degree in Anthropology with a minor in Asian Studies from Colorado College.

The University of Michigan Alternative Spring Break program creates the opportunity for students to engage in a service-oriented integrative learning experience; connects public sector organizations to the knowledge and abilities of students through a social impact project; and facilitates and enhances the relationship between the School and the greater community.

In addition to ALA, the students are hosted by other advocacy groups such as the Future of Music Coalition as well as federal agencies such as the Library of Congress, the Smithsonian Institution, and the National Archives. The students get a taste of work life here in D.C. and an opportunity to network with information professionals.

“We are pleased to support students from the University of Michigan’s spring break program,” said Alan S. Inouye, director of ALA’s Office for Information Technology Policy. “We look forward to working collaboratively with Yee in the next week.”

The post ALA welcomes Alternative Spring Break student Natalie Yee appeared first on District Dispatch.

M. Ryan Hess: Three Emerging Digital Platforms for 2015

Fri, 2015-02-27 15:43

‘Twas a world of limited options for digital libraries just a few short years back. Nowadays, however, the options are far more numerous and the features and functionality are truly groundbreaking.

Before I dive into some of the latest whizzbang technologies that have caught my eye, let me lay out the platforms we currently use and why we use them.

  • Digital Commons for our institutional repository. This is a simple yet powerful hosted repository service. It has customizable workflows built in for managing and publishing online journals, conferences, e-books, media galleries and much more. And I’d emphasize the “service” aspect: the subscription includes notable SEO power, robust publishing tools, reporting, stellar customer service and, of course, freedom from worrying about the technical upkeep of the platform.
  • CONTENTdm for our digital collections. There was a time when OCLC’s digital collections platform appeared to be on a development trajectory that would take it out of the clunky mire it was in circa 2010. They’ve made strides, but development has not kept pace.
  • LUNA for restricted image reserve services. You and your faculty can build collections in this system popular with museums and libraries alike. Your collection also sits within the LUNA Commons, which means users of LUNA can take advantage of collections outside their institutions.
  • Omeka.net for online exhibits and digital humanities projects. The limited cousin to the self-hosted Omeka, this version is an easy way to launch multiple sites for your campus without having to administer multiple installs. But it has a limited number of plugins and options, so your users will quickly grow out of it.
The Movers and Shakers of 2015

There are some very interesting developments out there, so here is a brief overview of the three most ground-breaking, in my opinion.

PressForward

If you took blog DNA and spliced it with journal publishing, you’d get a critter called PressForward: a WordPress plugin that allows users to launch publications from a contemporary web publishing perspective.

There are a number of ways you can use PressForward, but the most basic publishing model it’s intended for starts with treating other online publications (RSS feeds from individuals, organizations, other journals) as sources of submissions. Editors add external content feeds to their submission feed, which brings that content into their PressForward queue for consideration. Editors can then go through all the content brought in automatically from outside and decide whether to include it in their publication. And of course, locally produced content can be included too, if you’re so inclined.
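In miniature, that aggregation model looks something like the following sketch, which parses one RSS feed into a queue of pending submissions using only the Python standard library (the feed contents and field names here are invented for illustration; a real PressForward install fetches live feeds over HTTP on a schedule):

```python
# Turn an external RSS feed into a queue of candidate submissions
# awaiting editorial review. Feed contents are invented for illustration.
import xml.etree.ElementTree as ET

RSS = """<rss version="2.0"><channel>
  <title>Example Source Blog</title>
  <item><title>Post One</title><link>http://example.org/1</link></item>
  <item><title>Post Two</title><link>http://example.org/2</link></item>
</channel></rss>"""

def queue_from_feed(feed_xml):
    """Parse one RSS document into a list of pending submissions."""
    channel = ET.fromstring(feed_xml).find("channel")
    source = channel.findtext("title")
    return [
        {"source": source,
         "title": item.findtext("title"),
         "link": item.findtext("link"),
         "status": "pending review"}
        for item in channel.findall("item")
    ]

for candidate in queue_from_feed(RSS):
    print(candidate["title"], "<-", candidate["source"])
```

The editorial step described above then amounts to flipping each item's status from pending to accepted or rejected.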

Examples of PressForward include:

Islandora

Built on Fedora Commons with a Drupal front-end layer, Islandora is a truly remarkable platform that is growing in popularity at a good clip. A few years back, I worked with a local consortium examining various platforms, and we looked at Islandora. At the time, there were no examples of the platform in production use, and it felt more like an interesting concept than a tool we should recommend for our needs. Had we been looking at it today, I think it would have been our number one choice.

Part of the magic of Islandora is that it uses RDF triples to flatten your collections and items into a simple array of objects that can have unlimited relationships to each other. In other words, a single image can be associated with other objects that together act as a single object (say, a book of images), and that book object can be part of a collection-of-books object, or in fact be connected to multiple other collections. This is a technical way of saying that it’s hyper-flexible and yet very simple.
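To make the triple idea concrete, here is a toy sketch: an RDF triple is just (subject, predicate, object), and because membership is a triple rather than a folder, one object can belong to several parents at once. The identifiers and the isMemberOf predicate below are invented stand-ins, not Islandora's actual ontology:

```python
# Each triple is (subject, predicate, object). Identifiers and the
# isMemberOf predicate are invented for illustration.
triples = {
    ("image:42", "isMemberOf", "book:7"),
    ("book:7", "isMemberOf", "collection:maps"),
    ("book:7", "isMemberOf", "collection:rare-books"),
    ("image:42", "isMemberOf", "collection:highlights"),
}

def parents_of(obj, graph):
    """Every object this one is directly a member of."""
    return {o for s, p, o in graph if s == obj and p == "isMemberOf"}

print(sorted(parents_of("image:42", triples)))
# ['book:7', 'collection:highlights']
print(sorted(parents_of("book:7", triples)))
# ['collection:maps', 'collection:rare-books']
```

Adding the image to another collection is just one more triple; nothing about the book or the other collections has to change.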

And because Islandora is built on two widely used open source platforms, finding tech staff to help manage it is easy.

But if you don’t have the staff to run a Fedora-Drupal server, Lyrasis now offers hosted options that are just as powerful. In fact, one subscription model they offer gives you complete access to the Drupal back end if customization and development are important to you but you don’t want to spend staff time on updates and monitoring/testing server performance.

Either way, this looks like a major player in this space, and I expect it to continue to grow exponentially. That’s a good thing too, because some aspects of the platform feel a little “not ready for prime time.” The Newspaper solution pack, for example, while okay, is nowhere near as cool as what Veridian can currently do.

ArtStor’s SharedShelf

Rapid development has taken this digital image collection platform to a new level with promises of more to come. SharedShelf integrates the open web, including DPLA and Google Images, with their proprietary image database in novel ways that I think put LUNA on notice.

Like LUNA, SharedShelf allows institutions to build local collections that can contain copyrighted works to be used in classroom and research environments. But what sets it apart is that it allows users to also build beyond their institutions and push that content to the open web (or not depending on the rights to the images they are publishing).

SharedShelf also integrates with other ArtStor services such as their Curriculum Guides that allow faculty to create instructional narratives using all the resources available from ArtStor.

The management layer is pretty nice and works well with a host of schema.

And, oh, apparently audio and video support is on the way.


CrossRef: Update for CrossCheck Users: iThenticate bibliography exclusion issue fix

Fri, 2015-02-27 14:18

This is an update for CrossRef members who participate in the CrossCheck service, powered by iThenticate.

Users of the iThenticate system had reported that the bibliography/reference exclusion option did not work properly when tables were appended after the bibliography of a document and a cell of the table contained a bibliography keyword (e.g., “references,” “works cited,” “bibliography”).

A fix for this issue was released on February 16th, 2015, so reference exclusion now works for documents that fit the criteria above. Please note that the fix applies to new manuscripts submitted to iThenticate from February 16th onward.
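For readers curious about the general technique, here is a simplified sketch of keyword-based bibliography exclusion, which also shows why a keyword appearing again in an appended table is tricky to handle. This is an illustration of the idea only, not iThenticate's actual implementation:

```python
# Exclude the bibliography from similarity matching by finding the
# first line that looks like a bibliography heading and cutting there.
# A simplified illustration, not iThenticate's actual code.

BIB_KEYWORDS = {"references", "works cited", "bibliography"}

def bibliography_start(lines):
    """Index of the first line that looks like a bibliography heading."""
    for i, line in enumerate(lines):
        if line.strip().lower() in BIB_KEYWORDS:
            return i
    return None

def exclude_bibliography(lines):
    """Drop the bibliography (and anything after it) before matching."""
    start = bibliography_start(lines)
    return lines if start is None else lines[:start]

doc = ["Introduction", "Some argument.", "References",
       "Smith, J. (2014).", "Appendix Table 1"]
print(exclude_bibliography(doc))  # ['Introduction', 'Some argument.']
```

A naive cut like this drops everything after the heading, including appended tables; deciding where the reference list ends and trailing content resumes is exactly the edge case the fix addresses.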

CrossCheck users should feel free to test out the bibliography exclusion feature, and do contact us if you have any questions or if you continue to experience this problem on any new submissions to the service.

LITA: Librarians: We Open Access

Fri, 2015-02-27 12:00

Open Access (storefront). Credit: Flickr user Gideon Burton

In his February 11 post, my fellow LITA blogger Bryan Brown interrogated the definitions of librarianship. He concluded that librarianship amounts to a “set of shared values and duties to our communities,” nicely summarized in the ALA’s Core Values of Librarianship. These core values are access, confidentiality / privacy, democracy, diversity, education and lifelong learning, intellectual freedom, preservation, the public good, professionalism, service, and social responsibility. But the greatest of these is access, without which we would revert to our roots as monastic scriptoriums and subscription libraries for the literate elite.

Bryan experienced some existential angst given that he is a web developer and not a “librarian” in the sense of job title or traditional responsibilities–the ancient triad of collection development, cataloging, and reference. In contrast, I never felt troubled about my job, as my title is e-learning librarian (got that buzzword going for me, which is nice) and as I do a lot of mainstream librarian-esque things, especially camping up front doing reference or visiting classes doing information literacy instruction.

Meme by Michael Rodriguez using Imgflip

However, I never expected to become manager of electronic resources, systems, web redesign, invoicing and vendor negotiations, and hopefully a new institutional repository fresh out of library school. I did not expect to spend my mornings troubleshooting LDAP authentication errors, walking students through login issues, running cost-benefit analyses on databases, and training users on screencasting and BlackBoard.

But digital librarians like Bryan and myself are the new faces of librarianship. I deliver and facilitate electronic information access in the library context; therefore, I am a librarian. A web developer facilitates access to digital scholarship and library resources. A reference librarian points folks to information they need. An instruction librarian teaches people how to find and evaluate information. A cataloger organizes information so that people can access it efficiently. A collection developer selects materials that users will most likely desire to access. All of these job descriptions–and any others that you can produce–are predicated on the fundamental tenet of access, preferably open, necessarily free.

Democracy, diversity, and the public good are our vision. Our active mission is to open access to users freely and equitably. Within that mission lie intellectual freedom (open access to information regardless of moralistic or political beliefs), privacy (fear of publicity can discourage people from openly accessing information), preservation (enabling future users to access the information), and other values that grow from the opening of access to books, articles, artifacts, the web, and more.

The Librarians (Fair use – parody)

By now you will have picked up on my wordplay. The phrase “open access” (OA) typically refers to scholarly literature that is “digital, online, free of charge, and free of most copyright and licensing restrictions” (Peter Suber). But when used as a verb rather than an adjective, “open” means not simply the state of being unrestricted but also the action of removing barriers to access. We librarians must not only cultivate the open fields–the commons–but also strive to dismantle paywalls and other obstacles to access. Recall Robert Frost’s Mending Wall:

Before I built a wall I’d ask to know
What I was walling in or walling out,
And to whom I was like to give offense.
Something there is that doesn’t love a wall,
That wants it down.’ I could say ‘Elves’ to him…

Or librarians, good sir. Or librarians.

District Dispatch: Free webinar: Library partnerships

Fri, 2015-02-27 01:49

Library e-government resource Lib2gov today announced “A Library Partnership = Neighborhood Resource Center,” a free webinar that will explore library community partnerships:

Alachua County Library District

This webinar highlights the Library Partnership of the Alachua County Library District (ACLD), which is a collaboration with the Partnership for Strong Families and over 30 other service providers expanding the traditional library role in its community. By sharing space, library staff and social service partners provide coordinated and complementary services to meet a client’s full needs. ACLD has now modeled a second branch, Cone Park, to offer similar services.

Both of these branches are in at-risk, low-income areas of the City of Gainesville. Alachua County Library District (ACLD) originally became involved with e-government through a collaboration with NEFLIN (Northeast Florida Library Information Network) to administer a two-year LSTA grant. The grant goals were to research, develop and provide e-government service at libraries throughout the north-central Florida region. As networking for this project expanded, the concept of a neighborhood resource center combining a library and all its resources with community service providers, both governmental and non-profit, became a reality.

Date: March 4, 2015
Time: 3:00-4:00 p.m. EST
Register for this free event

Speaker: Chris Culp is Public Services Division Director in the Alachua County Library District. Chris has a background in academic and school libraries, as well as non-profit organizations.

If you cannot attend this live session, a recorded archive will be available to view at your convenience. To view past webinars also done in collaboration with iPAC, please visit Lib2Gov.org.

The post Free webinar: Library partnerships appeared first on District Dispatch.

District Dispatch: ALA applauds FCC vote to protect open Internet

Fri, 2015-02-27 01:04

The Federal Communications Commission (FCC) voted today to assert the strongest possible open Internet protections—banning paid prioritization and the blocking and throttling of lawful content and services. In a statement issued today, the American Library Association (ALA), applauded this bold step forward in ensuring a fair and open Internet.

ALA President Courtney Young

“America’s libraries collect, create and disseminate essential information to the public over the Internet, and ensure our users are able to access the Internet and create and distribute their own digital content and applications. Network neutrality is essential to meeting our mission in serving America’s communities,” said ALA President Courtney Young. “Today’s FCC vote in favor of strong, enforceable net neutrality rules is a win for students, creators, researchers and learners of all ages.”

As is usually the case, the final Order language is not yet available, but statements from Chairman Tom Wheeler and fellow Commissioners, as well as an earlier fact sheet (pdf) on the draft Order, outline several key provisions. The Order:

  • Reclassifies “broadband Internet access service” (including both fixed and mobile) as a telecommunications service under Title II.
  • Asserts “bright line” rules that ban blocking or throttling of legal content, applications and services; and paid prioritization of some Internet traffic over other traffic.
  • Enhances transparency rules regarding network management and practices.
  • Distinguishes between public and private networks.

“After almost a year of robust engagement across the spectrum of stakeholders, the FCC has delivered the rules we need to ensure equitable access to online information, applications and services for all,” said Larra Clark, deputy director for the ALA Office for Information Technology Policy. “ALA worked closely with nearly a dozen library and higher education organizations to develop and advocate for network neutrality principles, and we are pleased the FCC’s new rules appear to align nearly perfectly.”

The FCC vote marks the end of one chapter in a lively debate over the future of the Internet, but it’s unlikely to be the last word on the matter. Yesterday the House Energy & Commerce Committee held a hearing to discuss the issue, and several Internet service providers (ISPs) have signaled they will challenge the rules in court. ALA, working with our allies, will continue our engagement to maintain net neutrality.

“On the eve of the FCC’s vote, the House Energy and Commerce Committee provided a preview of the challenges ahead in defending the open Internet,” said Kevin Maher, assistant director of the ALA Office of Government Relations. “Committee Chairman Greg Walden (R-OR) argued that the Order may lead to future regulation while not protecting consumers, while ranking member Anna Eshoo (D-CA) countered that the Order, in fact, guarantees an open Internet.”

More information on libraries and network neutrality is available on the ALA website. Stay tuned at the District Dispatch for updates.

The post ALA applauds FCC vote to protect open Internet appeared first on District Dispatch.

DuraSpace News: Report From the Advanced DSpace Course

Fri, 2015-02-27 00:00
Winchester, MA  Last month DuraSpace and the Texas Digital Library partnered to offer Advanced DSpace Training at the Perry-Castañeda Library on the University of Texas at Austin campus. DuraSpace Members were eligible to register at reduced prices. Course participants learned about advanced features and customization.

DPLA: Profit & Pleasure in Goat Keeping

Thu, 2015-02-26 22:17

Two weeks ago, we officially announced the initial release of Krikri, our new metadata aggregation, mapping, and enrichment toolkit.

In light of its importance, we would like to take a moment for a more informal introduction to the newest members of DPLA’s herd. Krikri and Heiðrún (a.k.a. Heidrun; pronounced like hey-droon) are key to many of DPLA’s plans and serve as a critical piece of infrastructure for DPLA.

They are also names for, or types of, goats.

National Archives and Records Administration.

Why goats? As naturally curious, browsing animals that will try to consume almost anything, our new caprine friends are especially suited to their role in uncovering and sharing treasures of our cultural heritage (by consuming metadata and producing enriched Linked Data).

Krikri makes it possible to harvest from multiple sources (e.g. OAI-PMH), map the resulting metadata to Linked Data, and perform quality control and data enrichment.

We’ve taken care to build these features to be broadly useful so we can share with others doing similar work.  So the name is fitting, as Kri-Kri is a feral goat.

Heidrun, Krikri’s ruminating counterpart, is DPLA’s local implementation of the Krikri features. This is the application that handles the harvests specific to our partners and enriches metadata for the DPLA Portal and Platform API. Heiðrún is named after the mythical Norse goat that eats the leaves and buds off the tree Læraðr and produces mead—enough for all to have their fill.  We couldn’t think of a better goat to represent DPLA!

The title of this post notwithstanding, we remain as committed as ever to openness and to free, democratic access to knowledge. Accordingly, both Krikri and Heidrun are free and open source software.

Goats are notorious escape artists, anyhow. Let us know if any of ours wander into your backyards.

You can find more information about Krikri and Heidrun at our overview page for the project, and in the form of a recent presentation at Code4lib 2015 (video).

The Miriam and Ira D. Wallach Division of Art, Prints and Photographs: Print Collection, The New York Public Library.

FOSS4Lib Recent Releases: Hydra - 8.0.0

Thu, 2015-02-26 21:18

Last updated February 26, 2015. Created by Peter Murray on February 26, 2015.

Package: Hydra
Release Date: Wednesday, February 25, 2015

Journal of Web Librarianship: A Review of “User Experience (UX) Design for Libraries”

Thu, 2015-02-26 18:34
10.1080/19322909.2014.983413
Aida M. Smith

Eric Hellman: "Free" can help a book do its job

Thu, 2015-02-26 18:32


(Note: I wrote this article for NZCommons, based on my presentation at the 2015 PSP Annual Conference in February.)

Every book has a job to do. For many books, that job is to make money for its creators. But a lot of books have other jobs to do. Sometimes the fact that people pay for books helps that job, but other times the book would be able to do its job better if it was free for everyone.

That's why Creative Commons licensing is so important. But while CC addresses the licensing problem nicely, free ebooks face many challenges that make it difficult for them to do their jobs.

Let's look at some examples.

When Oral Literature in Africa was first published in 1970, its number one job was to earn tenure for the author, a rising academic. It succeeded, and then some. The book became a classic, elevating an obscure topic and creating an entire field of scholarly inquiry in cultural anthropology. But in 2012, it was failing to do any job at all. The book was out of print and unavailable to young scholars on the very continent whose culture it documented. Ruth Finnegan, the author, considered it her life's work and hoped it would continue to stimulate original research and new insights. To accomplish that, the book needed to be free. It needed to be translatable, it needed to be extendable.


Nga Reanga Youth Development: Maori Styles, an Open Access book by Josie Keelan, is another example of an academic book with important jobs to do. While its primary job is a local one, the advancement of understanding and practice in Maori youth development, it has another job, a global one. Being free helps it speak to scholars and researchers around the world.

Leanne Brown's Good and Cheap is a very different book. It's a cookbook. But the job she wanted it to do made it more than your usual cookbook. She wanted to improve the lives of people who receive "nutrition assistance" (food stamps) by providing recipes for nutritious and healthy meals that can be made without spending much money. By being free, Good and Cheap helps more people in need eat well.

My last example is Casey Fiesler's Barbie™ I Can Be A Computer Engineer The Remix! Now With Less Sexism! The job of this book is to poke fun at the original Barbie™ I Can Be A Computer Engineer, in which Barbie needs boys to do the actual computer coding. But because Fiesler uses material from the original under "fair use", anything other than free, non-commercial distribution isn't legal. Barbie, remixed can ONLY be a free ebook.

But there's a problem with free ebooks. The book industry runs on a highly evolved and optimized cradle-to-grave supply chain, comprising publishers, printers, production houses, distributors, wholesalers, retailers, aggregators, libraries, publicists, developers, cataloguers, database suppliers, reviewers, used-book dealers, even pulpers. And each entity in this supply chain takes its percentage. The entire chain stops functioning when an ebook is free. Even libraries (most of them) lack the processes that would enable them to include free ebooks in their collections.

At Unglue.it, we ran smack into this problem when we set out to bring books into the creative commons. We helped Open Book Publishers crowd fund a new ebook edition of Oral Literature in Africa. The ebook was then freely available, but it wasn't easy to make it free on Amazon, which dominates the ebook market. We couldn't get the big ebook aggregators that serve libraries to add it to their platforms. We realized that someone had to do the work that the supply chain didn't want to do.

Over the past year, we've worked to turn Unglue.it into a "bookstore for free books". The transformation isn't done yet, but we've built a database of over 1200 downloadable ebooks, licensed under Creative Commons or other free licenses. We have a long way to go, but we're distributing over 10,000 ebooks per month. We're providing syndication feeds, developing relationships with distributors, improving metadata, and promoting wonderful books that happen to be free.

The creators of these books still need to find support. To help them, we've developed three revenue programs. For books that already have free licenses, we help the creators ask for financial support in the one place where readers are most appreciative of their work- inside the books themselves. We call this "thanks for ungluing".

For books that exist as ebooks but need to recoup production costs, we offer "buy-to-unglue". We'll sell these books until they reach a revenue target, after which they'll become open access. For books that exist in print but need funding for conversion to open access ebook, we offer "pledge-to-unglue", which is a way of crowd-funding the conversion.

After a book has finished its job, it can look forward to a lengthy retirement. There's no need for books to die anymore, but we can help them enjoy retirement, and maybe even enjoy a second life. Project Gutenberg has over 50,000 books that have "retired" into the public domain. We're starting to think about the care these books need. Formats change along with the people that use them, and the book industry's supply chain does its best to turn them back into money-earners to pay for that care.

Recently we received a grant from the Knight Foundation to work on ways to provide the long-term care that these books need to be productive AND free in their retirements. GITenberg, a collaboration between the folks at Unglue.it and ebook technologist Seth Woodward, is exploring the use of Github for free ebook maintenance. Github is a website that supports collaborative software development with source control and workflow tools. Our hope is that the ingredients that have made Github wildly successful in the open source software world will prove to be similarly effective in supporting ebooks.

It wasn't so long ago that printing costs made free ebooks impossible. So it's no wonder that free ebooks haven't realized their full potential. But with cooperation and collaboration, we can really make wonderful things happen.

State Library of Denmark: Long tail packed counters for faceting

Thu, 2015-02-26 18:14

Our home brew Sparse Faceting for Solr is all about counting: When calculating a traditional String facet such as

Author
  - H.C.Andersen (120)
  - Brothers Grimm (90)
  - Arabian Nights (7)
  - Carlo Collodi (5)
  - Ronald Reagan (2)

the core principle is to have a counter for each potential term (author name in this example) and update that counter by 1 for each document with that author. There are different ways of handling such counters.

Level 0: int[]

Stock Solr uses an int[] to keep track of the term counts, meaning that each unique term takes up 32 bits or 4 bytes of memory for counting. Normally that is not an issue, but with 6 billion terms (divided between 25 shards) in our Net Archive Search, this means a whopping 24GB of memory for each concurrent search request.
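As a concrete illustration, the stock approach amounts to the following sketch (not Solr's actual code; `termOrdinal` is a hypothetical stand-in for the per-document term ordinals the index delivers):

```java
// Baseline facet counting: one 32-bit counter per unique term.
class IntFacetCounter {
    final int[] counts;

    IntFacetCounter(int uniqueTerms) {
        counts = new int[uniqueTerms]; // 4 bytes per term, regardless of need
    }

    // Called once per (document, term) pair in the result set.
    void inc(int termOrdinal) {
        counts[termOrdinal]++;
    }

    long memoryBytes() {
        return 4L * counts.length;
    }
}
```

With 6 billion unique terms, `memoryBytes()` comes out at the 24GB mentioned above.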

Level 1: PackedInts

Sparse Faceting tries to be clever about this. An int can count from 0 to 2 billion, but if the maximum number of documents for any term is 3000, there will be a lot of wasted space. 2^12 = 4096, so in the case of maxCount=3000, we only need 12 bits/term to keep track of it all. Currently this is handled by using Lucene’s PackedInts to hold the counters. With the 6 billion terms, this means 9GB of memory. Quite an improvement on the 24GB from before.
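The bits-per-value arithmetic can be checked directly. This small helper is a plain-Java stand-in (analogous to Lucene's `PackedInts.bitsRequired`, but not that code):

```java
class PackedSizing {
    // Smallest number of bits that can represent values 0..maxCount.
    static int bitsRequired(long maxCount) {
        return maxCount == 0 ? 1 : 64 - Long.numberOfLeadingZeros(maxCount);
    }

    // Total size in bytes of termCount packed counters.
    static long packedBytes(long termCount, int bitsPerValue) {
        return termCount * bitsPerValue / 8;
    }
}
```

For maxCount=3000 this yields 12 bits/term, and 6 billion terms at 12 bits/term is the 9GB quoted above.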

Level 2: Long tail PackedInts with fallback

Packing counters has a problem: The size of all the counters is dictated by the maxCount. Just a single highly used term can nullify the gains: If all documents share one common term, the size of the individual counters will be log2(docCount) bits. With a few hundred million documents, that puts the size at 27-29 bits/term, very close to the int[] representation’s 32 bits.

Looking at the Author-example at the top of this page, it seems clear that the counts for the authors are not very equal: The top-2 author has counts a lot higher than the bottom-3. This is called a long tail and it is a very common pattern. This means that the overall maxCount for the terms is likely to be a lot higher than the maxCount for the vast majority of the terms.

While I was loudly lamenting all the wasted bits, Mads Villadsen came by and solved the problem: What if we keep track of the terms with high maxCount in one structure and the ones with a lower maxCount in another structure? Easy enough to do with a lot of processing overhead, but tricky to do efficiently. Fortunately Mads also solved that (my role as primary bit-fiddler is in serious jeopardy). The numbers in the following explanation are just illustrative and should not be seen as the final numbers.

The counter structure

We have 200M unique terms in a shard. The terms are long tail-distributed, with the most common ones having maxCount in the thousands and the vast majority with maxCount below 100.

We locate the top-128 terms and see that their maxCount range from 2921 to 87. We create an int[128] to keep track of their counts and call it head.

head

Bit          31  30  29  …   2   1   0
Term h_0      0   0   0  …   0   0   0
Term h_1      0   0   0  …   0   0   0
…
Term h_126    0   0   0  …   0   0   0
Term h_127    0   0   0  …   0   0   0

The maxCount for the terms below the 128 largest ones is 85. 2^7=128, so we need 7 bits to hold each of those. We allocate a PackedInt structure with 200M entries of 7+1 = 8 bits and call it tail.

tail

Bit                7*  6   5   4   3   2   1   0
Term 0              0   0   0   0   0   0   0   0
Term 1              0   0   0   0   0   0   0   0
…
Term 199,999,998    0   0   0   0   0   0   0   0
Term 199,999,999    0   0   0   0   0   0   0   0

The tail has an entry for all terms, including those in head. For each of the large terms in head, we locate its position in tail. At that tail-counter, we set the value to the term’s index in the head counter structure and set the highest bit to 1.

Let’s say that head entry term h_0 is located at position 1 in tail, h_126 is located at position 199,999,998 and h_127 is located at position 199,999,999. After marking the head entries in the tail structure, it would look like this:

tail with marked heads

Bit                7*  6   5   4   3   2   1   0
Term 0              0   0   0   0   0   0   0   0
Term 1              1   0   0   0   0   0   0   0
…
Term 199,999,998    1   1   1   1   1   1   1   0
Term 199,999,999    1   1   1   1   1   1   1   1

Hanging on so far? Good.

Incrementing the counters
  1. Read the counter value from the tail structure: count = tail.get(ordinal)
  2. Check if bit 7 is set: if ((count & 128) == 128)
  3. If bit 7 is set, increment the head counter: head.inc(count & 127)
  4. If bit 7 is not set, increment the tail counter: tail.set(ordinal, count+1)
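The steps above can be sketched as follows (hypothetical names; a byte[] stands in for the 8-bit packed tail):

```java
// Two-level long tail counter: high-maxCount terms are marked up front.
class LongTailCounter {
    final int[] head;   // full 32-bit counters for the top terms
    final byte[] tail;  // 8 bits/term: bit 7 = "look in head", bits 0-6 = count or head index

    LongTailCounter(int terms, int headSize) {
        head = new int[headSize];
        tail = new byte[terms];
    }

    // Mark a high-maxCount term: its tail entry becomes a pointer into head.
    void markHead(int ordinal, int headIndex) {
        tail[ordinal] = (byte) (0x80 | headIndex);
    }

    void inc(int ordinal) {
        int value = tail[ordinal] & 0xFF;
        if ((value & 0x80) == 0x80) {   // bit 7 set: redirect to head
            head[value & 0x7F]++;
        } else {                        // plain tail counter
            tail[ordinal] = (byte) (value + 1);
        }
    }

    int count(int ordinal) {
        int value = tail[ordinal] & 0xFF;
        return (value & 0x80) == 0x80 ? head[value & 0x7F] : value;
    }
}
```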
Pros and cons

In this example, the counters take up 6 billion * 8 bits + 25 * 128 * 32 bits = 5.7GB. The performance overhead, compared to the PackedInts version, is tiny: Whenever a head bit is encountered, there will be an extra read to get the old head value before writing the value+1. As head will statistically be heavily used, it is likely to be in Level 2/3 cache.

This is just an example, but it should be quite realistic, as approximate values from the URL field in our Net Archive Index have been used. Nevertheless, it must be stressed that the memory gains from long tail PackedInts are highly dictated by the shape of the long tail curve.

Afterthought

It is possible to avoid the extra bit in tail by treating the large terms as any other term, until their tail-counters reaches maximum (127 in the example above). When a counter’s max has been reached, the head-counter can then be located using a lookup mechanism, such as a small HashMap or maybe just a linear scan through a short array with the ordinals and counts for the large terms. This would reduce the memory requirements to approximately 6 billion * 7 bits = 5.3GB. Whether this memory/speed trade-off is better or worse is hard to guess and depends on result set size.

Implementation afterthought

The long tail PackedInts could implement the PackedInts-interface itself, making it usable elsewhere. Its constructor could take another PackedInts filled with maxCounts or a histogram with maxbit requirements.

Heureka update 2015-02-27

There is no need to mark the head terms in the tail structure up front. All the entries in tail act as standard counters until the highest bit is set. At that moment, the bits used for counting switch to being a pointer into the next available entry in the head counters. The update workflow thus becomes

  1. Read the counter value from the tail structure: count = tail.get(ordinal)
  2. Check if bit 7 is set: if ((count & 128) == 128)
  3. If bit 7 is set, increment the head counter: head.inc(count & 127)
  4. If bit 7 is not set, increment the tail counter: tail.set(ordinal, count+1)
  5. If the counter reaches bit 7, change the counter-bits to be pointer-bits:
    if ((count+1) == 128) tail.set(ordinal, 128 | headpos++)
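The updated workflow might look like this in practice (a sketch with hypothetical names; a byte[] stands in for the 8-bit packed tail, and the running count is carried into the newly claimed head counter, a step left implicit in the workflow):

```java
// Heureka variant: a tail counter promotes itself to a head pointer on overflow.
class SelfPromotingCounter {
    final int[] head;
    final byte[] tail;
    int headpos = 0;  // next free head counter

    SelfPromotingCounter(int terms, int headSize) {
        head = new int[headSize];
        tail = new byte[terms];
    }

    void inc(int ordinal) {
        int count = tail[ordinal] & 0xFF;
        if ((count & 0x80) == 0x80) {        // already a pointer into head
            head[count & 0x7F]++;
        } else if (count + 1 == 0x80) {      // overflow: switch counter-bits to pointer-bits
            head[headpos] = count + 1;       // carry the count into head
            tail[ordinal] = (byte) (0x80 | headpos++);
        } else {
            tail[ordinal] = (byte) (count + 1);
        }
    }

    int count(int ordinal) {
        int value = tail[ordinal] & 0xFF;
        return (value & 0x80) == 0x80 ? head[value & 0x7F] : value;
    }
}
```

No up-front marking is needed: resetting the structure is just a zero-fill, as described above.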
Pros and cons

The new logic means that initialization and resetting of the structure is simply a matter of filling it with 0. Update performance will be on par with the current PackedInts implementation for all counters whose value is within the cutoff. After that, the penalty of an extra read is paid, but only for the overflowing values.

The memory overhead is unchanged from the long tail PackedInts implementation and still suffers from the extra bit used for signalling count vs. pointer.

Real numbers 2015-02-28

The store-pointers-as-values approach has the limitation that there can only be as many head counters as the maxCount for tail. Running the numbers on the URL field for three of the shards in our net archive index resulted in bad news: The tip of the long tail shape was not very pointy, and it is only possible to shave 8% off the counter size, far less than the estimated 30%. The Packed64 in the table below is the current structure used by sparse faceting.

Shard 1 URL: 228M unique terms, Packed64 size: 371MB

tail BPV   required memory   saved        head size
11         342MB             29MB /  8%   106
12         371MB              0MB /  0%     6

However, we are in the process of experimenting with faceting on links, which has a considerably higher peak in its long tail shape. From a nearly fully built test shard we have:

8/9 built shard links: 519M unique terms, Packed64 size: 1427MB

tail BPV   required memory   saved         head size
15         1038MB            389MB / 27%   14132
16         1103MB            324MB / 23%    5936
17         1168MB            260MB / 18%    3129
18         1233MB            195MB / 14%    1374
19         1298MB            130MB /  9%     909
20         1362MB             65MB /  5%     369
21         1427MB              0MB /  0%      58

For links, the space saving was 27% or 389MB for the nearly-finished shard. To zoom out a bit: Doing faceting on links for our full corpus with stock Solr would take 50GB. Standard sparse faceting would use 35GB and long tail would need 25GB.

Due to sparse faceting, response time for small result sets is expected to be a few seconds for the links-facet. Larger result sets, not to mention the dreaded *:* query, would take several minutes, with worst-case (qualified guess) around 10 minutes.

Three-level long tail 2015-02-28

Previously:

  • pointer-bit: Letting the values in tail switch to pointers when they reach maximum has the benefit of very little performance overhead, with the downside of taking up an extra bit and limiting the size of head.
  • lookup-signal: Letting the values in tail signal “find the counter in head” when they reach maximum, has the downside that a sparse lookup-mechanism, such as a HashMap, is needed for head, making lookups comparatively slow.

New idea: Mix the two techniques. Use the pointer-bit principle until there is no more room in head. head-counters beyond that point all get the same pointer (all value bits set) in tail and their position in head is determined by a sparse lookup-mechanism ord2head.

This means that

  • All low-value counters will be in tail (very fast).
  • The most used high-value counters will be in head and will be referenced directly from tail (fast).
  • The less used high-value counters will be in head and will require a sparse lookup in ord2head (slow).

Extending the pseudo-code from before:

value = tail.get(ordinal)
if (value == 255) {                  // indirect pointer signal
  head.inc(ord2head.get(ordinal))
} else if ((value & 128) == 128) {   // pointer-bit set
  head.inc(value & 127)
} else {                             // term-count = value
  value++
  if (value != 128) {                // tail-value ok
    tail.set(ordinal, value)
  } else {                           // tail-value overflow
    head.set(headpos, value)
    if (headpos < 127) {             // direct pointer
      tail.set(ordinal, 128 | headpos++)
    } else {                         // indirect pointer
      tail.set(ordinal, 255)
      ord2head.put(ordinal, headpos++)
    }
  }
}
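A runnable sketch of this three-level scheme, with a HashMap as the sparse ord2head lookup (hypothetical names; a byte[] stands in for the 8-bit packed tail):

```java
import java.util.HashMap;
import java.util.Map;

// Three-level long tail counter: tail counters, direct head pointers,
// and a sparse ord2head lookup once the direct pointer space is exhausted.
class ThreeLevelCounter {
    final int[] head;
    final byte[] tail;
    final Map<Integer, Integer> ord2head = new HashMap<>();
    int headpos = 0;

    ThreeLevelCounter(int terms, int headSize) {
        head = new int[headSize];
        tail = new byte[terms];
    }

    void inc(int ordinal) {
        int value = tail[ordinal] & 0xFF;
        if (value == 0xFF) {                     // indirect pointer signal
            head[ord2head.get(ordinal)]++;
        } else if ((value & 0x80) == 0x80) {     // direct pointer into head
            head[value & 0x7F]++;
        } else if (++value != 0x80) {            // plain tail counter
            tail[ordinal] = (byte) value;
        } else {                                 // overflow: claim a head counter
            head[headpos] = value;               // carry the count along
            if (headpos < 0x7F) {                // room left for a direct pointer
                tail[ordinal] = (byte) (0x80 | headpos++);
            } else {                             // fall back to sparse lookup
                tail[ordinal] = (byte) 0xFF;
                ord2head.put(ordinal, headpos++);
            }
        }
    }

    int count(int ordinal) {
        int value = tail[ordinal] & 0xFF;
        if (value == 0xFF) return head[ord2head.get(ordinal)];
        if ((value & 0x80) == 0x80) return head[value & 0x7F];
        return value;
    }
}
```

Note that only 127 direct pointers are available, since the all-bits-set value 255 is reserved as the indirect signal.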

Raffaele Messuti: SKOS Nuovo Soggettario, api e autocomplete

Thu, 2015-02-26 17:00

How to create an API for a form with autocompletion using terms from the Nuovo Soggettario, with Redis Sorted Sets and Nginx+Lua.

District Dispatch: Boots on the ground advocacy

Thu, 2015-02-26 15:27

I don’t think I need to overemphasize for you the important roles that libraries play in our communities. The epicenter of progress and knowledge, libraries have evolved to meet the educational and technological needs of their patrons.

These institutions of learning and creation need to be protected and improved at all costs. And in the wake of the sweeping changes to both the House and the Senate in the 2014 Congressional elections, it is more important than ever that we speak up on behalf of libraries and the communities they serve.

Your firsthand library experience – from behind the reference desk or as a patron – is an invaluable part of helping legislators to understand the impact that libraries have in the day to day lives of their constituents. Without you, they may not realize what happens to a community when library budgets get cut and staff are let go, let alone how legislation on net neutrality, copyright, or privacy can involve libraries too. We need to urge Members of Congress to think about how the policy and legislation they are working on could harm or help libraries.  To do that, we need boots on the ground here in Washington, D.C. – we need library advocates. That’s why we’re inviting YOU to National Library Legislative Day 2015!

This two-day advocacy event brings hundreds of librarians, trustees, library supporters, and patrons to Washington, D.C. to meet with their Members of Congress to rally support for library issues and policies. This year, National Library Legislative Day will be held May 4-5, 2015. Participants will receive advocacy tips and training, along with important issues briefings prior to their meetings.

Registration information and hotel booking information are available on the ALA Washington Office website.

The post Boots on the ground advocacy appeared first on District Dispatch.
