What about now? As of 2014, the Acquisitions and Bibliographic Access unit has 238 staff.[3]
While I’m sure one could quibble about the details (counting FTE vs. counting humans, accounting for the reorganizations, and so forth), the trend is clear: there has been a precipitous drop in the number of cataloging staff employed by the Library of Congress.
I’ll blithely ignore factors such as shifts in the political climate in the U.S. and how they affect civil service. Instead, I’ll focus on library technology, and spin three tales.
The tale of the library technologists
The decrease in the number of cataloging staff is one consequence of a triumph of library automation. The tools that we library technologists have written allow catalogers to work more efficiently. Sure, there are fewer of them, but that’s mostly been due to retirements. Not only that, the ones who are left are now free to work on more intellectually interesting tasks.
If we, the library technologists, can but slip the bonds of legacy cruft like the MARC record, we can make further gains in the expressiveness of our tools and the efficiencies they can achieve. We will be able to take advantage of metadata produced by other institutions and people for their own ends, enabling library metadata specialists to concern themselves with larger-scale issues.
Moreover, once our data is out there – who knows what others, including our patrons, can achieve with it?
This will of course be pretty disruptive, but as traditional library catalogers retire, we’ll reach buy-in. The library administrators have been pushing us to make more efficient systems, though we wish that they would invest more money in the systems departments.
We find that the catalogers are quite nice to work with one-on-one, but we don’t understand why they seem so attached to an ancient format that was only meant for record interchange.
The tale of the catalogers
The decrease in the number of cataloging staff reflects a success of library administration in their efforts to save money – but why is it always at our expense? We firmly believe that our work with the library catalog/metadata services counts as a public service, and we wish more of our public services colleagues knew how to use the catalog better. We know for a fact that what doesn’t get catalogued may as well not exist in the library.
We also know that what gets catalogued badly or inconsistently can cause real problems for patrons trying to use the library’s collection. We’ve seen what vendor cataloging can be like – and while sometimes it’s very good, often it’s terrible.
We are not just a cost center. We desperately want better tools, but we also don’t think that it’s possible to completely remove humans from the process of building and improving our metadata.
We find that the library technologists are quite nice to work with one-on-one – but it is quite rare that we get to actually speak with a programmer. We wish that the ILS vendors would listen to us more.
The tale of the library directors
The decrease in the number of cataloging staff at the Library of Congress is only partially relevant to the libraries we run, but hopefully somebody has figured out how to do cataloging more cheaply. We’re trying to make do with the money we’re allocated. Sometimes we’re fortunate enough to get a library funding initiative passed, but more often we’re trying to make do with less: sometimes to the point where flu season makes us super-nervous about our ability to keep all of the branches open.
We’re concerned not only with how much of our budgets are going into electronic resources, but with how nigh-impossible it is to predict increases in fees for ejournal subscriptions and ebook services.
We find that the catalogers and the library technologists are pleasant enough to talk to, but we’re not sure how well they see the big picture – and we dearly wish they could clearly articulate how yet another cataloging standard / yet another systems migration will make our budgets any more manageable.
Each of these tales is true. Each of these tales is a lie. Many other tales could be told. Fuzziness abounds.
However, there is one thing that seems clear: conversations about the future of library data and library systems involve people with radically different points of view. These differences do not mean that any of the people engaged in the conversations are villains, or do not care about library users, or are unwilling to learn new things.
The differences do mean that it can be all too easy for conversations to fall apart or get derailed.
We need to practice listening.

1. From testimony by the president of the Library of Congress Professional Guild to Congress on 6 March 2015.
2. From the BA FY 2004 report. This includes 32 staff from the Cataloging Distribution Service, which had been merged into BA and had not been part of the Cataloging Directorate.
3. From testimony by the president of the Library of Congress Professional Guild to Congress on 6 March 2015.
Thirty-two conferences started this journey, and now only two remain. The OCLC Research Collective Collection tournament is just one step away from crowning a Champion. Throw your brackets away and buckle your seat belts, because the tournament semi-finals are over and the finals are next!

How many languages does your conference collective collection speak? Competition in the semi-finals centered around the number of languages represented in each conference’s collective collection.* In the first semi-finals match-up, Conference USA cruised to an easy victory over Summit League, 366 languages to 265 languages. In the second match-up, Atlantic 10 also had little trouble with its opponent, moving past Missouri Valley 374 languages to 289 languages. So Conference USA and Atlantic 10 will square off in the tournament finals, with the honor and glory of the title “2015 Collective Collections Tournament Champion” at stake!
As the results of the semi-finals competition show, conference collective collections are very multilingual. Atlantic 10 had the most languages of any competitor in this round, with more than 370. But even the conference with the fewest languages – Summit League – had 265 languages in its collective collection! Suppose that an average book is 1.25 inches thick. If Summit League stacked up one book for every language represented in its collection, the resulting pile would be almost 28 feet tall! If Atlantic 10 did it, the stack would be nearly 40 feet tall!
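The book-stack arithmetic above can be checked directly (a quick illustrative sketch; the 1.25-inch average thickness is the post's own assumption):

```python
# One book per language, 1.25 inches per book, 12 inches per foot.
def stack_feet(languages, inches_per_book=1.25):
    return languages * inches_per_book / 12  # inches -> feet

print(round(stack_feet(265), 1))  # Summit League: 27.6 feet ("almost 28")
print(round(stack_feet(374), 1))  # Atlantic 10: 39.0 feet ("nearly 40")
```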
The mega-collective-collection of all libraries – as represented in the WorldCat bibliographic database – contains publications in 481 different languages. English is the most common language in WorldCat; here’s a look at the top 50 most frequently found languages other than English.

After English, the most common languages in WorldCat are German, French, Spanish, and Chinese. Despite the high number of English-language materials, more than half of the materials in WorldCat are non-English! And as we’ve seen, many of these non-English-language publications have found their way into the collective collections of our tournament semi-finalists! So are you interested in reading something in Urdu? Atlantic 10 has nearly 2,300 Urdu-language publications to choose from. How about Welsh? Conference USA can furnish you with nearly 1,400 publications in Welsh. No matter what language you’re interested in, these collective collections likely have something for you – they speak a lot of languages!
Bracket competition participants: Remember, even if the conference you chose is not in the Finals, hope still flickers! If no one picked the tournament Champion, all entrants will be part of a random drawing for the big prize!
Get set for the Tournament Finals! Results will be posted April 6.
*Number of languages represented in language-based (text or spoken) publications comprising each conference collective collection. Data is current as of January 2015.
Brian Lavoie

Brian Lavoie is a Research Scientist in OCLC Research. Brian’s research interests include collective collections, the system-wide organization of library resources, and digital preservation.
Last updated April 3, 2015. Created by Jim Craner on April 3, 2015.
From The Great Reading Adventure website:
"The Great Reading Adventure is a robust, open source software designed to manage library reading programs. It is currently in its second version... The Great Reading Adventure was developed by the Maricopa County Library District with support by the Arizona State Library, Archives and Public Records, a division of the Secretary of State, with federal funds from the Institute of Museum and Library Services."
The Great Reading Adventure lets libraries and library consortia set up a full online summer reading program for patrons. Features include reporting, customization per library, digital badges, avatars, reading lists, and much more.
The software runs on a Windows IIS/MSSQL server.

License: MIT License
Development status: Production/Stable
Operating System: Windows
Database: MSSQL
John Miedema: Lila “tears down” old categories and suggests new ways of looking at content. Word concreteness is a good candidate.
Many of the good things we love about language are essentially hierarchical. Narrative is linear: a beginning, middle, and end. Order shapes the story. Hierarchy gives a bird’s eye view, a table of contents, a summary that allows a reader to consider a work as a whole.
Lila will compute hierarchy by comparing passages on word qualities that suggest order. Concreteness is considered a good candidate. Passages with more abstract words express ideas and concepts, whereas passages with more concrete words express examples. Of the views that Lila can suggest, it is useful to have a view that presents abstract concepts first and concrete examples second. I have listed four candidate qualities here, but in the posts that follow I will focus on concreteness.

1. Abstract: intangible qualities, ideas, and concepts. Different from frequency of word usage; both academic terms and colorful prose can have low word frequency. Examples: freedom (227*), justice (307), love (311).
   Concrete: tangible examples, illustrations, and sensory experience. Examples: grasshopper (660*), tomato (662), milk (670).
2. General: categories and groupings. Similar to 1, but 1 is more dichotomous and this one is more of a range. Example: furniture.
   Specific: particular instances. Example: La-Z-Boy rocker-recliner.
3. Logical: analytical thinking, understatement, and fact. Note the conflict with 1 and 2: facts are both logical and concrete. Example: “The fastest land-dwelling creature is the cheetah.”
   Emotional/Sentimental: feeling, emphasis, opinion. Can take advantage of the vast number of sentiment measures available. Example: “The ugliest sea creature is the manatee.”
4. Static: constancy and passivity. Example: “It was earlier demonstrated that heart attacks can be caused by high stress.”
   Dynamic: change and activity; energy. Example: “Researchers earlier showed that high stress can cause heart attacks.”
* Concreteness index. MRC Psycholinguistic database. Grasshopper is a more concrete word than freedom. Indexes like the MRC can be used to compute concreteness for passages.
Lila can compute hierarchy for passages, and for groups of passages. Together, it builds a hierarchy, a view of how the content can be organized. Think of what this offers a writer. A writer stuck in his or her manually produced categories and view can ask Lila for alternate views. Lila “tears down” the old categories and suggests a new way of looking at the content. It is unlikely that the writer will stick exactly to Lila’s view, but it could provide a fresh start or give new insight. And Lila can compute new views dynamically, on demand, as the content changes.
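A minimal sketch of how such a concreteness computation might look, using a handful of MRC-style scores from the table above (the function and mini-lexicon are illustrative, not Lila’s actual code; a real system would use the full MRC database):

```python
# Mini-lexicon mimicking MRC Psycholinguistic Database concreteness
# scores (higher = more concrete). Values taken from the table above.
CONCRETENESS = {
    "freedom": 227, "justice": 307, "love": 311,
    "grasshopper": 660, "tomato": 662, "milk": 670,
}

def passage_concreteness(passage):
    """Mean concreteness over the words we have scores for;
    None if no scored words appear in the passage."""
    scores = [CONCRETENESS[w] for w in passage.lower().split()
              if w in CONCRETENESS]
    return sum(scores) / len(scores) if scores else None

# Abstract passages score lower, so sorting ascending puts concepts
# before examples -- the kind of view Lila wants to suggest.
passages = ["freedom and justice", "milk tomato grasshopper"]
ordered = sorted(passages, key=passage_concreteness)
print(ordered)  # ['freedom and justice', 'milk tomato grasshopper']
```

The same score could be computed per group of passages to order whole sections, giving the abstract-first hierarchy described above.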
In my last post, I discussed effort estimation and scheduling, which leads into the beginning of actual development. But first, you need to decide how you’re going to track progress. Here are some commonly used methods:
The Big Board
In keeping with Agile philosophy, you should choose the simplest tool that gives you the functionality you need. If your team does all of its development work in the same physical space, you could get by with post-it notes on a big white board. There’s a lot to be said for a tangible object: it communicates the independent nature of each task or story in a way that software may not. It provides the team with a ready-made meeting point: if you want to see how the project is going, you have to go stand in front of the big board. A board can also help to keep projects lean and simple, because there’s only so much available space on it. There are no multiple screens or pages to hide complexity.
Sticky notes, however, are ephemeral in nature. You can lose your entire project plan to an overzealous janitor; more importantly, unless you periodically take pictures of your board, there’s no way to trace user story evolution. Personally, I like to use this method in the initial stages of planning; the board is a very useful anchor for user story definition and prioritization. Once we move into the development process, I find that moving into the virtual realm adds crucial flexibility and tracking functionality.
If the scope of the project is limited, it may be possible to track it using a basic office productivity suite like MS Office. MS Excel and similar spreadsheet tools are fairly easy to use, and they’re ubiquitous, which means your team will likely face a lower learning curve. Remember that in Agile the business side of the organization is an integral part of the development effort, and it may not make sense to spend time and effort to train sales and management staff on a complex tracking tool.
If you choose to go the spreadsheet route, however, you are giving up some functionality: it’s easy enough to create and maintain spreadsheets that give you project snapshots and track current progress, but this type of software is not designed to accurately measure long term progress and productivity, which helps you upgrade your processes and increase your team’s efficiency. There are ways to track Agile metrics using Excel, but if you find that you need to do that you may just want to switch to dedicated software anyway.
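To illustrate the kind of metric a spreadsheet struggles to maintain over time, here is a sketch of a sprint burndown computed from story-point completions (the data layout is invented for illustration; a real export would need its own parsing step):

```python
# Sketch: computing a sprint burndown from daily story completions.
def burndown(total_points, completions):
    """completions: list of (day, points_completed) tuples, in order.
    Returns the points remaining at the end of each day."""
    remaining = total_points
    series = []
    for day, points in completions:
        remaining -= points
        series.append((day, remaining))
    return series

sprint = burndown(40, [(1, 5), (2, 8), (3, 0), (4, 13), (5, 6)])
print(sprint)  # [(1, 35), (2, 27), (3, 27), (4, 14), (5, 8)]
```

Dedicated tools compute and chart this automatically across sprints, which is exactly the long-term productivity tracking that ad hoc spreadsheets tend to lose.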
There are several tracking tools out there that can help manage Agile projects, although my personal experience so far has been limited to JIRA and its companion GreenHopper. JIRA is a fairly simple issue-tracking tool: you can create issues (manually or directly from a reporting form), add a description, estimate effort, prioritize, and assign each issue to a team member for completion. You can also track an issue through the various stages of development, adding comments at each step of the way and preserving meaningful conversations about its progress and evolution. As you can see in this article comparing similar tools, JIRA’s main advantage is the lack of unnecessary UI complexity, which makes it easier to master. Its main shortcoming is the lack of sprint management functionality, which is what GreenHopper provides. With the add-on, users can create sprints, assign tickets to them, and track sprint progress.
Can all of this functionality be replicated using spreadsheets? Yes, although maintenance and authentication can become problematic as the complexity of the project increases. At some point a tool like JIRA starts to pay for itself in terms of increased efficiency, and most if not all of these products are web-based and offer some sort of free trial or small-enterprise pricing. My advice is to analyze your operations to determine whether you need to go the tracking-tool route, and then do some basic research to identify popular options and their pros and cons. Once you’ve identified one or two options that seem to fit your needs, give them a try to see if they’re what you’re looking for.
Again, which method you go with will depend on how much effort you will need to spend up front (in training and adapting new software) versus later on (added maintenance and decreased efficiency).
How do you track user story progress? What are the big advantages/disadvantages of your chosen method? JIRA in particular seems to elicit strong feelings in users, positive or negative; what are your thoughts on it?
DuraSpace News: OR2015 Conference Stands Behind Commitment to Ensure All Participants are Treated With Respect
Indianapolis, IN – The Open Repositories 2015 conference will take place June 8-11 in Indianapolis and is wholly committed to creating an open and inclusive conference environment. As expressed in its Code of Conduct, OR is dedicated to providing a welcoming and positive experience for everyone and to having an environment in which all colleagues are treated with dignity and respect.
Friday, June 26, 2015, 8:30am – 4:00pm
In this hackathon attendees will learn to use the Bootstrap front-end framework and the Git version control system to create, modify and share code for a new library website. Expect a friendly atmosphere and a creative hands-on experience that will introduce you to web literacy for the 21st century librarian. The morning will consist of in-depth introductions to the tools, while the afternoon will see participants split into working groups to build a single collaborative library website.
Bootstrap is an open-source, responsive front-end web framework that can be used for everything from complete website redesigns to rapid prototyping. It is useful for many library web applications, such as customizing LibGuides (version 2) or creating responsive sites. This workshop will give attendees a crash course in the basics of what Bootstrap can do and how to code it. Attendees can work individually or in teams.
Git is an open-source software tool that allows you to manage drafts and collaboratively work on projects – whether you’re building a library app, writing a paper, or organizing a talk. We will also talk about GitHub, a massively popular website that hosts git projects and has built-in features like issue tracking and simple web page hosting.
Bootstrap, LibGuides, & Potential Web Domination – Discussion of the use of Bootstrap at the Van Library, University of St. Francis
Libraries using Bootstrap example:
Bradford County Public Library
Library Code Year Interest Group
Kate Bronstad, Web Developer, Tisch Library, Tufts University
Kate is a librarian-turned-web developer for Tufts University’s Tisch Library. She works with git on a daily basis and teaches classes on git for the Boston chapter of Girl Develop It. Kate is originally from Austin, TX and has a MSIS from UT-Austin.
Junior Tidal, New York City College of Technology
Junior is the Multimedia and Web Services Librarian and Assistant Professor for the Ursula C. Schwerin Library at the New York City College of Technology, City University of New York. His research interests include mobile web development, usability, web metrics, and information architecture. He has published in the Journal of Web Librarianship, OCLC Systems & Services, Computers in Libraries, and code4Lib Journal. He has written a LITA guide entitled Usability and the Mobile Web published by ALA TechSource. Originally from Whitesburg, Kentucky, he has earned a MLS and a Master’s in Information Science from Indiana University.
- LITA Member $235 (coupon code: LITA2015)
- ALA Member $350
- Non-Member $380
To register for any of these events, you can include them with your initial conference registration or add them later using the unique link in your email confirmation. If you don’t have your registration confirmation handy, you can request a copy by emailing firstname.lastname@example.org. You also have the option of registering for a preconference only. To receive the LITA member pricing during the registration process on the Personal Information page enter the discount promotional code: LITA2015
Register online for the ALA Annual Conference and add a LITA Preconference
Call ALA Registration at 1-800-974-3084
Onsite registration will also be accepted in San Francisco.
Questions or Comments?
For all other questions or comments related to the course, contact LITA at (312) 280-4269 or Mark Beatty, email@example.com
Journal of Web Librarianship: BUILDING COMMUNITIES: SOCIAL NETWORKING FOR ACADEMIC LIBRARIES. Garofalo, Denise A. Oxford, UK: Chandos Publishing, 2013, 242 pp., $80.00, ISBN-13: 978-1-84334-735-4.
Journal of Web Librarianship: DIGITAL HUMANITIES IN PRACTICE. Warwick, Claire, Melissa Terras, and Julianne Nyhan, Eds. London: Facet Publishing, 2012, 233 pp., $97.42, ISBN: 978-1-85604-766-1.
Bradford Lee Eden
Journal of Web Librarianship: GUIDE TO REFERENCE IN MEDICINE AND HEALTH. Modschiedler, Christa, and Denise Beaubien Bennett. Chicago: ALA Editions, 2014, 480 pp., $75.00, ISBN-13: 978-0-83891-221-8.
Kristen L. Young
Journal of Web Librarianship: THE METADATA MANUAL: A PRACTICAL WORKBOOK. Lubas, Rebecca, Amy Jackson, and Ingrid Schneider. Oxford, UK: Chandos Publishing, 2013, 240 pp., $80.00, ISBN: 978-1-84334-729-3.
Journal of Web Librarianship: ANNUAL REVIEW OF CULTURAL HERITAGE INFORMATICS: 2012–2013. Hastings, Samantha K., ed. New York: Rowman & Littlefield, 2014, 290 pp., $84.99, ISBN-13: 978-0-75912-333-5.
Dena L. Luce
Journal of Web Librarianship: PRIVATIZING LIBRARIES. Jerrard, Jane, Nancy Bolt, and Karen Strege. Chicago, IL: ALA Editions, 2012, 72 pp., $46.00, ISBN-13: 978-0-83891-154-9.
Journal of Web Librarianship: DIGITAL LIBRARIES AND INFORMATION ACCESS: RESEARCH PERSPECTIVES. Chowdhury, G. G., and Schubert Foo, Eds. Chicago: Neal Schuman, 2012, 256 pp., $99.95, ISBN-13: 978-1-55570-914-3.
The Andrew W. Mellon Foundation is aggressively funding efforts to support new forms of academic publishing, which researchers say could further legitimize digital scholarship.
The foundation in May sent university press directors a request for proposals to a new grant-making initiative for long-form digital publishing for the humanities. In the e-mail, the foundation noted the growing popularity of digital scholarship, which presented an “urgent and compelling” need for university presses to publish and make digital work available to readers. Note in particular:
The foundation’s proposed solution is for groups of university presses to ... tackle any of the moving parts that task comprises, including “...(g) distribution; and (h) maintenance and preservation of digital content.”

Below the fold, some thoughts on this based on experience from the LOCKSS Program.
Since a Mellon-funded meeting more than a decade ago at the NYPL with humanities librarians, the LOCKSS team has been involved in discussions of, and attempts to preserve, the "long tail" of smaller journal publishers, especially in the humanities. Our observations:
- The cost of negotiating individually with publishers for permission to preserve their content, and the fact that they need to take action to express that permission, is a major problem. Creative Commons licenses and their standard electronic representation greatly reduce the cost of preservation. If for-pay access is essential for sustainability, some standard electronic representation of permission and standard way of allowing archives access is necessary.
- Push preservation models, in which the publisher sends content for preservation, are not viable in the long tail. Pull preservation, in which the archive(s) harvest content from the publisher, is essential.
- Further, the more the "new digital work flows and publication models" diverge from the e-book/PDF model, the less push models will work. They require the archive to replicate the original publishing platform, easy enough if it is delivering static files, but not so easy once the content gets dynamic.
- The cost of pull preservation is dominated by the cost of the first publisher on a given platform. Subsequent publishers have much lower cost. Thus driving publishing to a few, widely-used platforms is very important.
- Once a platform has critical mass, archives can work with the platform to reduce the cost of preservation. We have worked with the Open Journal System (OJS) to (a) make it easy for publishers to give LOCKSS permission by checking a box, and (b) provide LOCKSS with a way of getting the content without all the highly variable (and thus impossibly expensive) customization. See, for example, work by the Public Knowledge Project.
- The problem with OJS has been selection - much of the content is too low quality to justify the effort of preserving it. Finding the good stuff is difficult for archives because the signal-to-noise ratio is low.
There are significant differences between the University Press market for long-form digital humanities and the long tail of humanities journals. The journals are mostly open-access and many are low-quality. The content that Mellon is addressing is mostly paid access and uniformly high-quality; the selection process has been done by the Presses. But these observations are still relevant, especially the cost implications of a lack of standards.
It is possible that no viable cost-sharing model can be found for archiving the long tail in general. In the University Press case, a less satisfactory alternative is a "preserve in place" strategy in which a condition of funding would be that the University commit to permanent access to the output of its press, with an identified succession plan. At least this would make the cost of preservation visible, and eliminate the assumption that it was someone else's problem.
John Miedema: Hierarchy has a bad rap but language is infused with it. We must find ways to tear down hierarchy almost as quickly as we build it up.
Hierarchy has a bad rap. Hierarchy is a one-sided relation, one thing set higher than another. In society, hierarchy is the stage for abuse of power. The rich on the poor, white on black, men on women, straight on gay. In language too, hierarchy is problematic. Static labels are laden with power and stereotypes, favoring some over others. Aggressive language, too, can overshadow small worthy ideas.
I read Lila the year it was published, 1991. I have a special fondness for this book because my girlfriend bought it for me; she is now my wife. Lila is not a romantic book, and I don’t mean that in the classic-romantic sense of Pirsig’s first famous book. I re-read Lila this year. Philosophy aside, I cringe at Pirsig’s portrayal of his central female character, Lila. She is a stereotype, a dumb blonde, operating only on the level of biology and sexuality, the subject of men’s debates about quality. Pirsig is more philosopher than storyteller.
We cannot escape that many of the good things we love about language are essentially hierarchical. Narrative is linear: a beginning, middle, and end. Order shapes the story. Hierarchy gives a bird’s eye view, a table of contents, a summary that allows a reader to consider a work as a whole. For the reader’s evaluation of a book, or for choosing to only enter a work at a particular door, the table provides a map. Hierarchy is a tree, a trunk on which the reader can climb, and branches on which the reader can swing.
Granted, a hierarchy is just one view, an author’s take on how the work should be understood. There is merit in deconstructing the author’s take and analyzing the work in other ways. It is static hierarchy that is the problem.
Many writers are inspired to start a project with a vision of the whole, a view of how all the pieces hang together, as if only keystrokes were needed to fill in the details. The writer gets busy, happily tossing content into categories. Inevitably new material is acquired and new thinking takes place. Sooner or later a crisis occurs — the new ideas do not fit the original view. Either the writer does the necessary work to uproot the original categories and build a new better view, or the work will flounder. Again, it is static hierarchy that is the problem.
We must find ways to tear down hierarchy almost as quickly as we build it up. Pirsig’s metaphysics is all about the tension between static and dynamic quality. My writing technology, Lila, named after Pirsig’s book, uses word qualities to compute hierarchy. What word qualities measure hierarchy? I have several ideas. I propose that passages with abstract words are higher order than those with more concrete words. Closer to Pirsig’s view, passages that are dynamic — measured by agency, activity, and heat — are higher order than those that are static. Or does cool clear static logic trump heated emotion? There are several ways to measure it, and plenty of issues to work out. It will take more posts.
Join us for a two-day hackathon during DPLAfest 2015 (Indianapolis, April 17-18) to collaborate with members of the DPLA community and build something awesome with our API. A hackathon is a concentrated period of time for creative people to come together and make something new. In their excellent hackathon planning guide, DPLA community reps Chad Nelson and Nabil Kashyap described a hackathon as “an alternative space–outside of day-to-day assignments, project management procedures, and decision-making processes–to think differently about a problem, a tool, a dataset, or even an institution.”
The hackathon at DPLAfest 2015 will provide a space for people to build off the DPLA API, which provides access to almost 9 million (and counting!) CC0 licensed metadata records from America’s libraries, archives, and museums in a common metadata format. We support this open API so that the world can access our common cultural heritage, and use it to build something transformative. Our ever-growing app library has examples of innovative projects that have been built using the API. Many people have also contributed ideas for apps and tools – perhaps someone at the hackathon will take one on!
Coders of all levels – from beginning to advanced – are welcome at the hackathon. During the first hour on Friday, we will cover API basics, the capabilities of the DPLA API, available toolsets, and tips for using records from the API effectively. After that, there will be ample opportunity to teach and learn from one another as we build our apps. As always, you can find helpful documentation on our website, such as the API codex and the glossary of terms.
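As a taste of the API basics covered in that first hour, a query against the v2 items endpoint can be sketched like this (the endpoint follows the DPLA API codex; `YOUR_API_KEY` is a placeholder, so request a real key from DPLA before making live calls, and check the codex for the exact response fields):

```python
# Sketch: building a keyword search against the DPLA API's /v2/items
# endpoint. No network call is made unless you uncomment the bottom.
from urllib.parse import urlencode
from urllib.request import urlopen
import json

BASE = "https://api.dp.la/v2/items"

def build_query(keyword, api_key, page_size=10):
    params = {"q": keyword, "page_size": page_size, "api_key": api_key}
    return BASE + "?" + urlencode(params)

url = build_query("indianapolis", "YOUR_API_KEY")
print(url)
# https://api.dp.la/v2/items?q=indianapolis&page_size=10&api_key=YOUR_API_KEY

# Uncomment to actually call the API (requires network and a real key);
# per the API codex, results arrive as a "docs" list of records:
# with urlopen(url) as resp:
#     data = json.load(resp)
# for doc in data["docs"]:
#     print(doc["sourceResource"]["title"])
```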
Non-programmers are also welcome. Whatever your expertise – design, metadata, business development – you can help generate ideas and create prototypes. The only requirements for participation are curiosity and a desire to collaborate.
The hackathon is Friday, April 17, 1:30pm-4:00pm, and Saturday, April 18, 10:30am-3:00pm (with a break for lunch). It culminates with a Developer Showcase on Saturday at 3:15pm. Visit the full schedule to find out more about what’s happening at DPLAfest 2015. Registration is still open!