Morgan, Eric Lease: Archival linked data use cases

planet code4lib - Fri, 2014-02-07 02:43

What can you do with archival linked data once it is created? Here are three use cases:

  1. Do simple publishing – At its very root, linked data is about making your data available for others to harvest and use. While the “killer linked data application” has seemingly not reared its head, this does not mean you ought not make your data available as linked data. You won’t see the benefits immediately, but sooner or later (less than 5 years from now) you will see your content creeping into the search results of Internet indexes, into the work of both computational humanists and scientists, and into the hands of esoteric hackers creating one-off applications. Internet search engines will create “knowledge graphs”, and they will include links to your content. The humanists and scientists will operate on your data similarly. Both will create visualizations illustrating trends. They will both quantifiably analyze your content looking for patterns and anomalies. Both will probably create network diagrams demonstrating the flow and interconnection of knowledge and ideas through time and space. The humanist might do all this in order to bring history to life or demonstrate how one writer influenced another. The scientist might study ways to efficiently store your data, easily move it around the Internet, or connect it with data sets created by their apparatus. The hacker (those are the good guys) will create flashy-looking applications that many will think are weird and useless, but the applications will demonstrate how the technology can be exploited. These applications will inspire others, be here one day and gone the next, and over time become more useful and sophisticated.
  2. Create a union catalog – If you make your data available as linked data, and if you find at least one other archive making their data available as linked data, then you can find a third somebody who will combine them into a triple store and implement a rudimentary SPARQL interface against the union (see the sketch after this list). Once this is done, a researcher could conceivably search the interface for a URI to see what is in both collections. The absolutely essential key to success here is the judicious inclusion of URIs in both data sets. This scenario becomes even more enticing with the inclusion of two additional things. First, the more collections in the triple store the better; you cannot have too many collections in the store. Second, the scenario will be even more enticing when each archive publishes their data using ontologies similar to everybody else’s. Success does not hinge on similar ontologies, but it is significantly enhanced by them. Just like the relational databases of today, nobody will be expected to query them using their native query language (SQL or SPARQL). Instead the interfaces will be much more user-friendly. The properties of classes in ontologies will become facets for searching and browsing. Free-text as well as fielded searching via drop-down menus will become available. As time goes on and things mature, the output from these interfaces will be increasingly informative, easy to read, and computable. This means the output will answer questions, be visually appealing, and be available in one or more formats for other computer programs to operate upon.
  3. Tell a story – You and your hosting institution(s) have something significant to offer. It is not just about you and your archive but also about libraries, museums, the local municipality, etc. As a whole you are a local geographic entity. You represent something significant with a story to tell. Combine your linked data with the linked data of others in your immediate area. The ontologies will be a total hodgepodge, at least at first. Now provide a search engine against the result. Maybe you begin with local libraries or museums. Allow people to search the interface and bring together the content of everybody involved. Do not just provide lists of links in search results, but instead create knowledge graphs. Supplement the output of search results with the linked data from Wikipedia, Flickr, etc. You don’t have to be a purist. In a federated search sort of way, supplement the output with content from other data feeds such as (licensed) bibliographic indexes or content harvested from OAI-PMH repositories. Creating these sorts of things on-the-fly will be challenging. On the other hand, you might implement something that is more iterative and less immediate, but more thorough and curated, if you were to select a topic or theme of interest and do your own searching and storytelling. The result would be something that is at once a Web page, a document designed for printing, and something importable into another computer program.
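To make the union catalog concrete, here is a minimal sketch in Python using the rdflib library; it loads linked data from two archives into one in-memory graph (a stand-in for a shared triple store) and runs a SPARQL query for items connected to a single shared URI. The archive URLs and the VIAF identifier are hypothetical placeholders, not real endpoints.

    from rdflib import Graph

    # Build an in-memory "union catalog" from two archives' published RDF.
    # Both URLs are hypothetical; any resolvable Turtle documents would work.
    union = Graph()
    union.parse("http://archive-one.example.org/collection.ttl", format="turtle")
    union.parse("http://archive-two.example.org/collection.ttl", format="turtle")

    # Ask the union for everything attributed to one shared URI. The
    # "judicious inclusion of URIs in both data sets" is what makes this
    # join possible; the VIAF URI below is a made-up example.
    results = union.query("""
        PREFIX dcterms: <http://purl.org/dc/terms/>
        SELECT ?item ?title WHERE {
            ?item dcterms:creator <http://viaf.org/viaf/00000000> ;
                  dcterms:title ?title .
        }
    """)
    for row in results:
        print(row.item, row.title)

The same query works unchanged no matter how many archives are loaded into the store, which is why the scenario improves as more collections join.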

This text is a part of a draft sponsored by LiAM — the Linked Archival Metadata: A Guidebook.

Tennant, Roy: The OPAC is Dead

planet code4lib - Fri, 2014-02-07 00:30

Anyone who has heard me speak in the last decade or so has likely heard my mini-diatribe against the acronym “OPAC”. Besides being impenetrable jargon, it is thoroughly anachronistic. It owes its life to an extremely brief period of modern librarianship when we had automated circulation systems that didn’t have a publicly available instantiation. That is the only explanation for the “public access” part of “online public access catalog”.

And then we saddled ourselves and the library literature with this monster for decades to come. We are still trying to shake this mistake.

Long ago I swore to never use that term again, and waited for everyone else to follow. And I waited. And waited. I’m done waiting. I’m going to go after it with hammer and tongs. Again.

Not only am I out to kill off the term, but I’m also endeavoring to bury the thing itself deep. I’ve even said this before, well over a decade ago. Not that you listened to me then, mind you. But perhaps I have your attention now. “Most integrated library systems, as they are currently configured and used,” I had asserted, “should be removed from public view.”

My point was that although we may have been justified in putting them in front of the public in the early days, we have no such justification any more. Not when we have much better finding tools that cover not just the books and journals in our collections, but articles and so much more. But more importantly, as studies like that at Utrecht University have pointed out, information discovery has left the building.

So it’s time to move on. Take that anachronistic library catalog and turn it back into what it really only ever was — an inventory control system. That’s right, put it back into the back room where it has really always belonged. And stop saying “OPAC”. For cryin’ out loud. Just stop.

ALA Equitable Access to Electronic Content: FCC: The time is now to speed library broadband connections

planet code4lib - Thu, 2014-02-06 18:07

Tom Wheeler. Photo by Adweek.

In the long series of events that is the path to E-rate modernization, yesterday marked a rhetorical high point for libraries so far. Invoking Thomas Jefferson as he helped open the 2014 Digital Learning Day at the Library of Congress, FCC Chairman Tom Wheeler emphasized the crucial role of libraries as the “community on-ramp to the world of information.” He also turned the familiar refrain of E-rate as a program for “schools and libraries” into a program for “libraries and schools.”

Now, in the larger scheme of E-rate reform, this may seem an insignificant turn of phrase. However, let it be a metaphor for the kind of vision the Chairman has outlined for an E-rate program that delivers on President Obama’s goal of connecting students and their communities to high-capacity broadband within five years – or sooner. By actively engaging and asking E-rate stakeholders to “turn things around” and think differently, we have been challenged to identify the strengths of the program, weed out what is less efficient or effective, and focus on bringing scalable, high-capacity broadband to libraries and schools at affordable rates.

Chairman Wheeler reminded us that we “have a problem that must be fixed” when there is digital inequity for our students and communities, with a majority of our schools and libraries connected at internet speeds merely on par with the average U.S. home. We agree with the Chairman that we can do better. What will it take for libraries?

Clearly, we must focus on high-speed broadband, knowing that insufficient capacity is a very real barrier for modern libraries to meet growing community needs. This must be accomplished, however, through a phased transition rather than flash cuts. And we must bring capital investment to support high-capacity connections to libraries and schools where it currently is unavailable or unaffordable.

We support the Commission’s efforts to speed review of consortia applications. Where libraries are able to participate in consortia, the American Library Association (ALA) believes there can be economies of scale in purchasing services. Consortium applicants also may benefit from technical expertise otherwise unavailable at the local level – especially for smaller libraries. As FCC focuses on these more complex applications, however, we hope individual library and school applications will not be delayed. ALA also has long advocated for simplifying the application process so that more libraries are encouraged to apply, so we are encouraged by the current focus on this aspect of reform.

We agree that we need to immediately maximize existing funds, but we are glad the Commission will also seriously consider the need to increase permanent funding for a program that has been largely capped at the original level set 18 years ago. The demand is evident, and if a data-driven recommendation can be achieved, we support such purposeful stewardship of public funds.

What is the library of the future? We cannot predict perfectly, but we can see some trends. Libraries are moving to cloud-based services so users can access digital resources anywhere, anytime. Libraries digitize local histories – photographs, oral histories (sometimes in vanishing languages), and unique ephemera – and upload these collections to platforms accessible anywhere on the globe with an internet connection. How do we accommodate symmetrical upload and download speeds for the many users creating and sharing audio and video files in our digital learning labs? Today libraries provide video conferencing to connect remote users to resources otherwise out of reach. It may not be long before a video is part of a typical job interview or a college admission requirement. We are investigating library applications for Google Glass, so how do we account for that kind of device? Our libraries must be equipped so that no one is denied opportunity because of inadequate broadband.

ALA’s goals align with those of the Commission and those of the President. We appreciate the comments from the Chairman and White House staff calling out the critical roles of public and school librarians. Even more so, we commend the commitment of FCC Commissioners and staff to engage libraries and schools at this critical time. We look forward to the upcoming Public Notice and the opportunity to help shape the transition away from legacy services in ways that do not unintentionally disadvantage the libraries furthest behind.

The post FCC: The time is now to speed library broadband connections appeared first on District Dispatch.

Rochkind, Jonathan: Royal Library of Denmark goes live with Umlaut

planet code4lib - Thu, 2014-02-06 18:04

The Royal Library of Denmark has gone live with an Umlaut implementation.

They’ve done some local UI customizations, including multi-lingualization. (We hope to get the i18n stuff merged into Umlaut core).

You can see their start page here.

At most libraries, though, users more often encounter Umlaut as the target of OpenURL linking from search platforms rather than by starting at the Umlaut start page. I’m not sure whether the Royal Library’s use cases are typical in that way or not. The Royal Library’s Google Scholar preferences still seem to point directly to their SFX instance, not to their Umlaut instance. (And Google Scholar makes it increasingly hard for users to find and use this preference anyhow, honestly.)
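For readers unfamiliar with that linking pattern: a search platform embeds a citation in a URL pointing at the resolver, and Umlaut turns it into a menu of services. Below is a rough Python sketch of such an OpenURL; the resolver hostname and the citation values are hypothetical, while the KEV keys are standard OpenURL 1.0 (Z39.88-2004) journal-article fields.

    from urllib.parse import urlencode

    # Hypothetical Umlaut resolver base URL.
    base = "http://umlaut.example.edu/resolve"

    # A journal-article citation expressed as OpenURL key/encoded-value pairs.
    citation = {
        "ctx_ver": "Z39.88-2004",
        "rft_val_fmt": "info:ofi/fmt:kev:mtx:journal",
        "rft.genre": "article",
        "rft.jtitle": "An Example Journal",
        "rft.issn": "1234-5678",
        "rft.date": "2014",
        "rft.atitle": "An example article title",
    }

    # The resolver parses these fields and presents full-text, holdings,
    # and related services for the cited article.
    print(base + "?" + urlencode(citation))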


Filed under: General

ALA Equitable Access to Electronic Content: Confused about e-government services? Participate in free Lib2Gov webinars

planet code4lib - Thu, 2014-02-06 16:31

The American Library Association (ALA) and the Information Policy & Access Center (iPAC) at the University of Maryland at College Park are pleased to announce the re-launch of Lib2Gov, an online e-government resource for librarians. Over the past few months, both organizations have worked to transition LibEGov—a project supported by the Institute of Museum and Library Services through a National Leadership Grant—into Lib2Gov.

Lib2Gov now provides a dedicated space where librarians can share materials, lesson plans, tutorials, stories, and other e-government content. The redesigned Lib2Gov website allows libraries and government agencies to come together, collaborate, and build a community of practice. The website offers a variety of resources from government agencies and organizations, including information on immigration, taxation, Social Security, and healthcare.

In addition, both organizations will host a new monthly webinar series, “E-government @ Your Library.” The webinars will explore a variety of e-government topics that will be of interest to librarians, including mobile government and emergency preparedness, response and recovery. All webinars are free and will be archived on the Lib2Gov site. The webinar schedule for Winter/Spring 2014:

Webinar 1: E-government @ Your Library
Wednesday, February 26, 2014, at 2 p.m. EST
This webinar offers general insights into how libraries can help meet the e-government needs of their communities in general and through the Lib2Gov web resource. [This webinar is now full. Let us know if you would like ALA to host a second introductory webinar.]

Speakers:

  • John Bertot, Ph.D., co-director, Information Policy & Access Center (iPAC), and professor, University of Maryland College Park iSchool
  • Ursula Gorham, graduate research associate, iPAC, and doctoral candidate, University of Maryland College Park iSchool
  • Jessica McGilvray, assistant director, Office of Government Relations at the American Library Association’s Washington, D.C. office

Webinar 2: Government Information Expertise Online: Beyond the First Century of Federal Depository Library Program Practice
Thursday, March 27, 2014, at 3 p.m. EST
This webinar will offer insights and techniques for how practicing Government Information professionals can take the strengths and opportunities of the depository library experience into several promising areas of digital reference, discovery tools for government information, and deliberative outreach to your community. Register now.

Speakers:

  • Cynthia Etkin, senior program planning specialist, Office of the Superintendent of Documents, U.S. Government Printing Office (GPO)
  • John A. Shuler, associate professor, University of Illinois, Chicago University Library

Webinar 3: An Introduction to Mobile Government Apps (mgov) for Librarians
Wednesday, April 30, 2014, at 2 p.m. EST
The webinar will cover how librarians can teach patrons to use mobile devices, provide links to government apps on their library webpages, and create apps for their own e-government websites. Register now.

Speakers:

  • Isabelle Fetherston, teen librarian, Pasco County Library System
  • Nancy Fredericks, member, Pasco County Library System Library Leadership Team

Webinar 4: Roles for Libraries and Librarians in Disasters
Thursday, May 15, 2014, at 2 p.m. EST
This webinar presents information on libraries’ and librarians’ roles supporting their communities and the disaster workforce before, during, and after hazardous events and disasters. Register now.

Speakers:

  • Siobhan Champ-Blackwell, librarian, U.S. National Library of Medicine Disaster Information Management Research Center
  • Cindy Love, librarian, U.S. National Library of Medicine Disaster Information Management Research Center
  • Elizabeth Norton, librarian, U.S. National Library of Medicine Disaster Information Management Research Center

Webinar 5: Beta.Congress.Gov
Thursday, June 12, 2014, at 2 p.m. EST

Sign-up information, as well as more information about webinar topics and speakers, is available. Please contact Jessica McGilvray (jmcgilvray@alawash.org) or John Bertot (jbertot@umd.edu) with any questions about Lib2Gov or the webinar series.

The post Confused about e-government services? Participate in free Lib2Gov webinars appeared first on District Dispatch.

OCLC Dev Network: Systems Maintenance for Web Service Authentication Infrastructure on Feb 9

planet code4lib - Thu, 2014-02-06 15:18
Related Web Service(s): WMS Acquisitions API, WMS Circulation API, WMS Collection Management API, WMS License Manager API, WMS NCIP Service, WMS Vendor Information Center API, and WorldCat Metadata API

Web services that require user-level authentication will be down for one hour beginning February 9th for systems maintenance to the Identity Management (IDM) system. This downtime will affect OCLC’s worldwide data centers as follows:

read more

Farkas, Meredith: Library DIY (and my team!) honored with the ACRL IS Innovation Award

planet code4lib - Thu, 2014-02-06 14:30

At the end of my last post was a little love letter to the “pocket of wonderful” at my work: a group of librarians I think of as my learning community. At first, when I was the Head of Instruction, these people reported to me and/or were part of my Instructional Design Team. This was the team that built Library DIY. And even after my job was reorganized (new management, new structures) over the summer — something I’ve barely talked to anyone outside of work about because I was so sad — and the instructional design team ceased to exist, and they didn’t report to me anymore, we still behaved very much like a team. We make each other better. We inspire each other. We still work together (when we don’t have to) to improve the things we’ve built. We’re a great group and I’m so proud of what we’ve all accomplished.

But even better than all the kudos I give to Amy Hofer, Lisa Molinelli, and Kim Willson-St. Clair every chance I get is the recognition of our peers in the profession. I recently learned that we have been awarded the ACRL IS Innovation Award for Library DIY. We are so honored and thrilled to receive this award. It just feels like such validation of our work and also of our team.

Thanks to everyone else who helped make Library DIY a reality: the amazing Tom Boone who made my crazy idea work behind the scenes, Mike Flakus, Chris Geib, C. K. Worrell, Andrea Bullock, and all our colleagues who provided feedback along the way. And thanks to the members of the award committee who chose to recognize our project! We couldn’t be more thrilled.

Rosenthal, David: Worth Reading

planet code4lib - Thu, 2014-02-06 08:00
I'm working on a long post about a lot of interesting developments in storage, but right now they are happening so fast I have to keep re-writing it. In the meantime, follow me below the fold for links to some recent posts on other topics that are really worth reading.

First, an excellent and thorough explanation from Cory Doctorow of why Digital Rights Management is such a disaster, not just for archives but for everyone. This is a must-read even for people who are used to the crippling effects of DRM and the DMCA on preservation, because Cory ends by proposing a legal strategy to confront them. It would be one thing if there were major benefits to offset the many downsides of DRM, but there aren't. Ernesto at TorrentFreak pointed to research by Laurina Zhang of the University of Toronto:
It turns out that consumers find music with DRM less attractive than the pirated alternative, and some people have argued that it could actually hurt sales. A new working paper published by University of Toronto researcher Laurina Zhang confirms this.
For her research Zhang took a sample of 5,864 albums from 634 artists and compared the sales figures before and after the labels decided to drop DRM.
“I exploit a natural experiment where the four major record companies – EMI, Sony, Universal, and Warner – remove DRM on their catalogue of music at different times to examine whether relaxing an album’s sharing restrictions impacts the level and distribution of sales,” she explains.
This is the first real-world experiment of its kind, and Zhang’s findings show that sales actually increased after the labels decided to remove DRM restrictions. “I find that the removal of DRM increases digital sales by 10%,” Zhang notes.

Second, three posts on the effects of neo-liberal policies. Stanford’s Chris Bourg gave a talk at Duke entitled The Neoliberal Library: Resistance is not futile, making the case that:
Neoliberalism is toxic for higher education, but research libraries can & should be sites of resistance.
On the LSE’s Impact of Social Sciences blog, Joanna Williams of the University of Kent makes similar arguments with a broader focus in a post entitled ‘Value for money’ rhetoric in higher education undermines the value of knowledge in society:
For some students, value for money may just mean getting what they want – satisfaction in the short term and a high level qualification – for minimal effort. The role of universities should be to challenge this assumption. But the notion that educational quality can be driven upwards by a market based on perceived value for money is more likely to lead to a race to the bottom in terms of educational standards as branding, reputation management and the perception of quality all become more important than confronting students with intellectual challenge.
On the same blog, Eric Kansa of Open Context has a post entitled It’s the Neoliberalism, Stupid: Why instrumentalist arguments for Open Access, Open Data, and Open Science are not enough. The key to his argument is “be careful what you measure, because that is what you will get”:
One’s position as a subordinate in today’s power structures is partially defined by living under the microscope of workplace monitoring. Does such monitoring promote conformity? The freedom, innovation, and creativity we hope to unlock through openness requires greater toleration for risk. Real and meaningful openness means encouraging out-of-the-ordinary projects that step out of the mainstream. Here is where I’m skeptical about relying upon metrics-based incentives to share data or collaborate on GitHub.
By the time metrics get incorporated into administrative structures, the behaviors they measure aren’t really innovative any more! Worse, as certain metrics grow in significance (meaning – they’re used in the allocation of money), entrenched constituencies build around them. Such constituencies become interested parties in promoting and perpetuating a given metric, again leading to conformity.

Third, I’ve been pointing to the importance of text-mining the scientific literature since encountering Peter Murray-Rust (link found by Memento!) at the 2007 workshop that started this blog. Nature reports on Elsevier’s move to allow researchers to download articles in XML in bulk for this purpose, under some restrictive conditions. Other publishers will follow:
CrossRef, a non-profit collaboration of thousands of scholarly publishers, will in the next few months launch a service that lets researchers agree to standard text-mining terms and conditions by clicking a button on a publisher’s website, a ‘one-click’ solution similar to Elsevier’s set-up.
But these terms and conditions may be preempted:
The UK government aims this April to make text-mining for non-commercial purposes exempt from copyright, allowing academics to mine any content they have paid for.
On the liblicense mailing list, Michael Carroll of American University argues that in the US subscribers already have the right to text-mine under copyright, but even if he is right this is yet another case where contract trumps copyright. Effective text-mining requires access to the content in bulk, not as individual articles, and ideally in a form more suited to the purpose than PDF. Bulk access to XML is the important part of what Elsevier is providing. Their traditional defenses against bulk downloading make the theoretical right to text-mine without permission in the US and perhaps shortly in the UK pretty much irrelevant.
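As a rough illustration of what machine access for text-mining can look like, here is a hedged Python sketch against CrossRef’s public REST API, which can expose full-text links flagged for text mining in a work’s metadata; the DOI is a placeholder, and whether a given publisher deposits such links varies.

    import requests

    # Placeholder DOI; api.crossref.org is CrossRef's public metadata API.
    doi = "10.1234/example.2014.001"
    response = requests.get("https://api.crossref.org/works/" + doi, timeout=30)
    work = response.json()["message"]

    # Some publishers deposit full-text links tagged for text mining; a
    # bulk-download client would fetch the XML renditions of these.
    for link in work.get("link", []):
        if link.get("intended-application") == "text-mining":
            print(link.get("content-type"), link.get("URL"))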

Elsevier hasn't helped relations with researchers recently by issuing take-down notices for papers their authors had posted to academia.edu. Of course, it was stupid of the authors to post Elsevier's PDF rather than their own, but it wasn't good PR. See here for an interesting discussion of the question, which I thought was settled, as to whether transfer of copyright transfers the rights to every version leading up to the version transferred.

Fourth, a brief but important note on the concept of the Internet by one of those present at its birth, David Reed.

Finally, I was very skeptical of the New York Times paywall even if early experience was encouraging. Ryan Chittum at CJR reported last August that:
But for now, the pile of paywall money is still growing and for the first time, the Times Company has broken out how big it is: More than $150 million a year, including the Boston Globe, ... To put that $150 million in new revenue in perspective, consider that the Times Company as a whole will take in roughly $210 million in digital ads this year. And that $150 million doesn’t capture the paywall’s positive impact on print circulation revenue. Altogether, the company has roughly $360 million in digital revenue.
One of my financial advisors writes:
On September 23rd the New York Times’ Board of Directors elected to reinstate the company’s quarterly dividend at a rate of $.04/share. ... This decision was based on the continued and dramatic improvement in the company’s balance sheet, which is now net cash positive and shows almost $1 billion in cash and equivalents, along with improved operating margins and cash flows. In the past two years sales of non-core assets have totaled approximately $700 million as management continues to focus on the “New York Times” brand. ... The company continues to have the capacity to generate cash flow of $2.00 - $2.50/share which should drive the business value, and dividend capacity, even higher.

I may have been wrong.

Rundle, Hugh: VALA Conference – Wednesday highlights

planet code4lib - Wed, 2014-02-05 22:38
Plenary 3 – Gene Tan and Singapore Memory

First up on Wednesday we heard Gene Tan, Director of the Singapore National Library, officially talking about the Singapore Memory project, but really talking about his philosophy of libraries and how to manage them. Gene has worked to ensure that Singapore Memory found interesting, real stories about Singaporean lives, rather than finding stories that meet a particular target for total number, or ‘type’ of Singaporean, or that suit the political needs of Singapore’s government. He argued for small data over big data and human scale over web scale.

Gene looked for interesting stories and followed them up, rather than going for big numbers of ‘stories’. The Minister announced they would collect 5 million stories, but Gene went to him and said “I’ll give you 5000 stories not 5 million, but they’ll be really great”.

He didn’t just want stories that fit ‘types’ of Singaporeans – Gene wanted it to be real, messy, interesting.

Gene refused to have a strategic statement as SNL Director; instead he built a 50-staff skunkworks and told them to build stuff people can use, touch and feel.

Gene said that “libraries aren’t just places for storing knowledge, they are also sites of emotional experience”.

Hunters and Collectors

Mylee Joseph from SLNSW talked about the New South Wales State Library’s project to collect and archive social media content from NSW. This was part of the same ‘Innovation Project’ as the Wikipedia project that Simon Cootes spoke about on Tuesday. They looked at whether they should collect NSW tweets, Facebook posts etc as part of NSW written heritage as per their mandate.

Mylee pointed out that the technology to publish and distribute is ahead of the technology to collect and archive. They used ‘VIZIE’ – software specifically designed for state government agencies by CSIRO. They were particularly interested in people using social media for building/supporting community, to show what is around them, and to share funny/interesting moments. They had to decide what best represented NSW life. What topics do you capture? How do you do it? How and what do you search? An example is politics. We already have Hansard and press releases, but where are the other voices? A lot are on Twitter using hashtags like #auspol #ausvotes #spill #leadersdebate and #ausdebate

Sometimes things need context, e.g. #democracysausage – this is a very Australian thing, so it’s a good example of something that may be of interest to future researchers. They missed a lot of things, but also had to make decisions about what to keep. Is a retweet a duplicate? An amplification? Something else? There was also the problem of how to capture things: people don’t always (or even usually) use the ‘official hashtag’, especially for big open public events.
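As an illustration of the harvesting problem (this is not VIZIE itself, just a sketch), here is rough Python using the third-party tweepy library (3.x-era method names) with placeholder credentials; note it captures only tweets carrying the chosen hashtags, which is exactly the limitation described above.

    import json
    import tweepy  # third-party Twitter client; all credentials are placeholders

    auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
    auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
    api = tweepy.API(auth)

    # Harvest a sample of recent tweets per hashtag and save the raw JSON
    # for later appraisal; deciding what to keep (retweets? duplicates?)
    # is a separate, curatorial step.
    for tag in ["#auspol", "#democracysausage"]:
        statuses = tweepy.Cursor(api.search, q=tag).items(100)
        records = [status._json for status in statuses]
        with open(tag.lstrip("#") + ".json", "w") as f:
            json.dump(records, f)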

The perfect storm: the convergence of social, mobile and photo technologies in libraries. (Bond University Library)

In this presentation we heard about a research project on photo sharing by Australian libraries, particularly through Instagram.

The researchers did interviews and surveys with actual library staff and also used ‘Nitrogram’ – an analytics program – to map the most-liked images. They classified pictures into three types:
Identity, Functional and Affective. The most interacted-with images were ‘identity’ images, which surprised the team as they expected them to mostly be functional.

Instagram was difficult to manage corporately because it’s a mobile platform designed for individuals. The majority of libraries are using staff members’ own equipment, while some public libraries are using work equipment (iPads and iPods).

Libraries had differing views on the appropriateness of interacting with followers – should they follow them back? Should they comment? Appropriate norms are still unclear.

Different libraries use it for different promotional and influencing tasks. “Library selfies” (pics of the library building) are very common, as are “library shelfies”, a way to promote the collections. North Carolina State Uni has set things up so that effectively their students are providing the pics.

To be successful with Instagram, you need to ensure you know what your goals are both overall and for each image:
*message
*target audience
*engagement
*evaluation
*use (where will they appear, just on Instagram or in your library, on your marketing material etc)

Is it tweet-worthy? (Kate David and Kathleen Smeaton)

Kate and Kathleen talked about a study they did of librarians on Twitter. They followed a bunch of librarians and classified their tweets. Librarians tweeted personal things more than they thought they did. Most are reasonably cautious, although they did mostly retweet controversial things and political views.

Organisations need to ensure appropriate flexible guidelines for Twitter use as the line between professional and personal is a little unclear. But most individuals have already deeply thought things through regarding the collapse between work identity and personal identity.

A lot of tweets were just forwarding of content. The unknown question was – was there any offline collaboration as a result of tweets, as well as just information forwarding? We don’t know. Only 15 tweets from 4 participants ‘value added’ a professional opinion (contrasted with willingness to share political and personal opinions). As tweeting librarians we’re more likely to be controversial and political than engage in professional critique.

This paper (in untraditional format) is available at http://bi.ly/tweetworthy

Plenary 4 – Matt Finch

Matt gave a great plenary speech that tied in a bit with what Gene had said earlier in the day. Matt gently chastised us for chasing hipster cred, building ‘hubs’ and wanting to be rockstars. He championed the work of libraries on the fringes – in country towns, with the sullen sporty kids with a hidden urge to write novels, in the tough suburbs. Matt urged us to fix the small, petty problems before building big fancy things. This tied together nicely with Gene Tan’s emphasis on small personal stories from real Singaporeans. Matt also took off his pants on stage, just to wake everybody up!

A couple of interesting articles he quoted were:

A very quiet battle: librarians, publishers and the Pirate Bay

What was the hipster?


Tagged: conferences

Rundle, Hugh: VALA Conference – Wednesday highlights

planet code4lib - Wed, 2014-02-05 22:38
Plenary 3 – Gene Tan and Singapore Memory

First up on Wednesday we heard Gene Tan, Director of the Singapore National Library, officially talking about the Singapore Memory project, but really talking about his philosophy of libraries and how to manage them. Gene has worked to ensure that Singapore Memory found interesting, real stories about Singaporean lives, rather than finding stories that meet a particular target for total number, or ‘type’ of Singaporean, or that suit the political needs of Singapore’s government. He argued for small data over big data and human scale over web scale.

Gene looked for interesting stories and followed them up, rather than going for big numbers of ‘stories’. The Minister announced they would collect 5 million stories, but Gene went to him and said “I’ll give you 5000 stories not 5 million, but they’ll be really great”.

He didn’t just want stories that fit ‘types’ of Singaporeans – Gene wanted it to be real, messy, interesting.

Gene refused to have a strategic statement as SNL Director, instead he built a 50-staff skunkworks and told them to build stuff people can use, touch and feel.

Gene said that “libraries aren’t just places for storing knowledge, they are also sites of emotional experience”.

Hunters and Collectors

Mylee Joseph from SLNSW talked about the New South Wales State Library’s project to collect and archive social media content from NSW. This was part of the same ‘Innovation Project’ as the Wikipedia project that Simon Cootes spoke about on Tuesday. They looked at whether they should collect NSW tweets, Facebook posts etc as part of NSW written heritage as per their mandate.

Mylee pointed out that the technology to publish and distribute is in advance of technology to collect and archive. They used ‘VIZIE’ – software specifically designed for state gov agencies by CSIRO. They were particularly interested in people using social media for building/supporting community, to show what is around them, and to share funny/interesting moments. They had to decide what best represented NSW life. What topics do you capture? How do you do it? how/what do you search? An example is politics. We already have Hansard and press releases, but where are the other voices? A lot are on Twitter using hashtags like #auspol #ausvotes #spill #leadersdebate and #ausdebate

Sometimes things need context, e.g. #democracysausage – this is a very Australian thing, so it’s a good example of something that may interest future researchers. They missed a lot of things, but also had to make decisions about what to keep. Is a retweet a duplicate? An amplification? Something else? There was also the problem of how to capture things: people don’t always (or even usually) use the ‘official’ hashtag, especially for big, open public events.
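To make the retweet question concrete, here is a minimal sketch of one possible harvesting policy, written against tweepy’s 3.x-style search API with the hashtags mentioned above. The rule it encodes, counting a retweet as amplification of the original rather than as a duplicate record, is one answer among several; it is an assumption for illustration, not how VIZIE actually works.

```python
# A sketch of hashtag harvesting with a simple retweet policy.
# Assumes tweepy (3.x-style API) and real credentials; VIZIE's actual
# collection logic is not public, so this is illustrative only.
import tweepy

auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
api = tweepy.API(auth)

HASHTAGS = ["#auspol", "#ausvotes", "#spill", "#leadersdebate", "#ausdebate"]

archive = {}        # tweet id -> tweet text, one record per original tweet
amplification = {}  # tweet id -> number of retweets we have observed

for tag in HASHTAGS:
    for tweet in tweepy.Cursor(api.search, q=tag).items(200):
        # Policy decision: treat a retweet as an amplification of the
        # original, not as a duplicate record in the archive.
        original = getattr(tweet, "retweeted_status", None)
        if original is not None:
            amplification[original.id] = amplification.get(original.id, 0) + 1
            archive.setdefault(original.id, original.text)
        else:
            archive.setdefault(tweet.id, tweet.text)

print("archived %d tweets, %d of them amplified" % (len(archive), len(amplification)))
```

Whichever rule an archive picks, the point is that it has to be picked explicitly; the software will not decide for you.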

The perfect storm: the convergence of social, mobile and photo technologies in libraries (Bond University Library)

In this presentation we heard about a research project on photo sharing by Australian libraries, particularly through Instagram.

The researchers did interviews and surveys with actual library staff, and also used ‘Nitrogram’, an analytics program, to map the most-liked images. They classified pictures into three types: Identity, Functional and Affective. The most interacted-with images were ‘identity’ images, which surprised the team, as they had expected most to be functional.

Instagram was difficult to manage corporately because it’s a mobile platform designed for individuals. The majority of libraries are using staff members’ own equipment; some public libraries are using work equipment (iPads and iPods).

Libraries had differing views on the appropriateness of interacting with followers: should they follow them back? Should they comment? Appropriate norms are still unclear.

Different libraries use it for different promotional and influencing tasks. “Library selfies” (pics of the library building) are very common, as are “library shelfies” used to promote the collections. North Carolina State Uni has set things up so that, effectively, their students provide the pics.

To be successful with Instagram, you need to ensure you know what your goals are, both overall and for each image:
* message
* target audience
* engagement
* evaluation
* use (where the images will appear: just on Instagram, or also in your library, on your marketing material, etc.)

Is it tweet-worthy? (Kate David and Kathleen Smeaton)

Kate and Kathleen talked about a study they did of librarians on Twitter. They followed a bunch of librarians and classified their tweets. Librarians tweeted personal things more than they thought they did. Most are reasonably cautious, although the controversial things and political views they did share were mostly retweets.
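As a toy illustration of this kind of content coding (the real study coded tweets by hand; the categories and keyword cues below are invented for the example):

```python
# Toy tweet classifier for the kind of content analysis described above.
# The categories and keyword heuristics are invented for illustration.
from collections import Counter

PROFESSIONAL_CUES = ("cataloguing", "metadata", "infolit", "#vala14")
POLITICAL_CUES = ("#auspol", "election", "minister")

def classify(tweet_text):
    text = tweet_text.lower()
    if text.startswith("rt @"):
        return "forwarded"      # plain retweet, no value added
    if any(cue in text for cue in PROFESSIONAL_CUES):
        return "professional"
    if any(cue in text for cue in POLITICAL_CUES):
        return "political"
    return "personal"

tweets = [
    "RT @somelibrarian: great post on weeding",
    "My take on the new metadata guidelines: cautiously optimistic",
    "#auspol this debate is a mess",
    "Coffee first, shelving later",
]

print(Counter(classify(t) for t in tweets))
# e.g. Counter({'forwarded': 1, 'professional': 1, 'political': 1, 'personal': 1})
```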

Organisations need to ensure appropriate, flexible guidelines for Twitter use, as the line between professional and personal is a little unclear. But most individuals have already thought deeply about the collapse between work identity and personal identity.

A lot of tweets were just forwarding of content. The open question was whether tweets also led to any offline collaboration, beyond information forwarding; we don’t know. Only 15 tweets from 4 participants ‘value added’ a professional opinion (in contrast with a willingness to share political and personal opinions). As tweeting librarians, we’re more likely to be controversial and political than to engage in professional critique.

This paper (in untraditional format) is available at http://bi.ly/tweetworthy

Plenary 4 – Matt Finch

Matt gave a great plenary speech that tied in a bit with what Gene had said earlier in the day. Matt gently chastised us for chasing hipster cred, building ‘hubs’ and wanting to be rockstars. He championed the work of libraries on the fringes – in country towns, with the sullen sporty kids with a hidden urge to write novels, in the tough suburbs. Matt urged us to fix the small, petty problems before building big fancy things. This tied together nicely with Gene Tan’s emphasis on small personal stories from real Singaporeans. Matt also took off his pants on stage, just to wake everybody up!

A couple of interesting articles he quoted were:

A very quiet battle: librarians, publishers and the Pirate Bay

What was the hipster?


Tagged: conferences

OCLC Dev Network: Developer House Projects

planet code4lib - Wed, 2014-02-05 22:23

We had a great brainstorming session on Monday where we talked a lot about the kinds of applications and services libraries might want and how those intersect with existing OCLC web services. Out of several ideas, a few floated to the top and work is well underway on these projects:  

read more


Manage Metadata (Phipps and Hillmann): Wake-up Call for CC:DA

planet code4lib - Wed, 2014-02-05 21:23

Presentations on innovative ways to gather data outside the library silo are happening all over ALA, generally hosted by committees and interest groups using speakers already planning to be at the conference. A great example of the kind of presentation I’m talking about was the Sunday session sponsored by the ALCTS CaMMS Cataloging & Classification Research Interest Group, produced by the ProMusicaDB project, with founder Christy Crowl and metadata librarian Kimmy Szeto. They provided a veritable feast of slides and stories, all of them illustrating the new ways that we’ll all be operating in the very near future. Their slides should be available on the ALCTS Cataloging and Classification Research IG site sometime soon. [Full disclosure: I spoke at that session too; see the previous blog post for more details.]

On the Saturday of Midwinter, I attended 2 parts of the CC:DA meeting (I had to leave to do a presentation to another group in the middle), but I dutifully returned for the last part. It was probably a mistake–my return occurred during the last gasp of a perfectly awful discussion. I had a brief chat with Peter Rolla (the current chair) after the meeting, and continued to think about why I was so appalled during the last part of the meeting. Later, when held hostage in a meeting by a conversation in which I had little interest, I wrote up some of my thoughts.

I would describe the discussion as one of the endless number of highly detailed conversations on improving the RDA rules that have been a “feature” of CC:DA meetings for the past few years. To be honest, I have a limited tolerance for such discussions, though I usually enjoy some of the ones at a less excruciating level of detail.

Somehow this discussion struck me as even more circular than most, and seemed to be aimed at “improving” the rules by limiting the choices allowed to catalogers–in a sense by mechanizing the descriptive process to an extreme degree. Now, I’m no foe of using automated means to create descriptive metadata, either as a sole technique or (preferably) for submission to catalogers or editors to complete. I think we ought to know a lot more about what can be done using technology rather than continue to flog any remaining potential for rule changes intended to push catalogers to supply a level of consistency that isn’t really achievable for humans. If you want consistency–particularly in transcription–use machines. Humans are far better utilized for reviewing the product and correcting errors and adding information to improve its usefulness.
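As a small, hypothetical example of what machine consistency in transcription looks like, here is a sketch using the pymarc library (4.x-style field access) to apply one normalization rule identically across a file of MARC records. The rules themselves are illustrative, not a proposal:

```python
# A sketch of machine-enforced transcription consistency: one
# normalization rule applied identically to every record, every time.
# Assumes a file of MARC records and pymarc 4.x-style field access;
# the rules themselves are examples, not a proposed standard.
import re
from pymarc import MARCReader

def normalize(text):
    text = re.sub(r"\s+", " ", text).strip()  # collapse stray whitespace
    return text.rstrip(" /:;,")               # drop trailing ISBD punctuation

with open("records.mrc", "rb") as fh:
    for record in MARCReader(fh):
        if record is None or record["245"] is None:
            continue  # queue for human review rather than guess
        title = record["245"]["a"]
        if title:
            print(normalize(title))
```

A machine will apply this rule the ten-thousandth time exactly as it did the first; no amount of rule refinement gets a human to do that.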

But in cataloging circles, discussing the use of automated methods is generally considered off-topic. When the [technological] revolution comes, catalogers will be the first to go, or so it is too often believed. Copy cataloging and other less ‘professional’ means of cutting costs and increasing productivity are not happy topics of conversation for this group.

But, looking ahead, I see no letup in this trajectory without some changes. Catalogers love rules, and rules are endlessly improvable, no? Maybe, maybe not, but just put a tech services administrator in the room for some of these discussions, and you’re likely to get a reaction pretty close to mine. To my mind, the total focus on rules, rather than a more practical approach to the inevitability of change in the business of cataloging, does more to ensure that the human role in the process will be limited in ways that make little sense, except monetarily.

What we need here is to change the conversation, and no group is more qualified to do that than CC:DA. For that to happen, it’s absolutely necessary that its membership become more knowledgeable about what is now possible in automating metadata creation. Without that kind of awareness, it’s impossible to start thinking about, and discussing, how to focus less of CC:DA’s efforts on the part of the cataloging process that should be done by machines, and more on what still needs humans to accomplish. There are several ways to do this. One is to dedicate some of CC:DA’s conference time to bringing in the folks who understand the technology issues to demonstrate, discuss, and collaborate.

Catalogers and their roles have been changing greatly over the past few years, and promises of more change must be taken seriously. Then the ultimate question might be asked: if resistance is futile (and it surely is), how can catalogers learn enough to help frame that change?
