Feed aggregator

HangingTogether: Are you ready for EAD3? It’s coming soon!

planet code4lib - Wed, 2014-11-05 16:52

The third version of Encoded Archival Description (EAD) is on the cusp of being released, which prompts me to offer up a quickie history of EAD’s development and to summarize what’s coming with EAD3.

EAD has come a long way since the launch of version 1.0 in 1998. I was one of the lucky members of the initial EAD research group, which was recognized in the same year by the Society of American Archivists with its annual Coker award. Having been there at the beginning, I've found it nothing short of amazing to observe both the depth and breadth of EAD adoption all over the world over the ensuing sixteen years. It doesn't seem like that long ago that lots of archivists didn't think it was possible to wrangle the anarchic finding aid into submission by developing a standard. EAD version 2002 introduced quite a few changes to the initial DTD, particularly in response to the needs of members of the international archival community who were among the early adopters.

And now EAD3 is on target to launch in the winter of 2015. Mike Rush, co-chair of SAA’s Technical Subcommittee on EAD, recently presented a webinar to bring us all into the loop about some of the significant changes that are coming. TS-EAD has done a great job of soliciting input and communicating about the revision process. An enormous amount of information is here, and a nice summary of the principles behind the array of changes is here.

In a nutshell, EAD3 is intended to “improve the efficiency and effectiveness of EAD as a standard for the electronic representation of descriptions of archival materials and a tool for the preservation and presentation of such data and its interchange between systems.” A specific objective is to achieve greater conceptual and semantic consistency in EAD’s use; this should be good news to techies responsible for implementations of the standard, some of whom have been vocal for years about the extent to which the excessive flexibility of EAD’s design has proven challenging. Two other goals are to find ways to make EAD-encoded finding aids connect more effectively with other protocols and to improve multilingual functionalities.

TS-EAD has been working madly to finalize the schema and isn't releasing interim versions while the last tweaks are made, but you can see a relatively final version of the element list here.

So, what are some of the significant changes that we’ll see in EAD3? I’m going to assume that readers are familiar enough with EAD elements that these will make sense. Note that this is a very partial list.

  • Lots of changes are coming to the metadata about the finding aid, currently found in <eadheader>, which is changing to <control>. I particularly like the new element <otherrecordid> that enables record identifiers from other systems to be brought in. This will make it possible, for example, to add the record ID for a companion MARC record.
  • We’ll see new and modified <did> elements, which are the basic descriptive building blocks of a finding aid. A new one is <unitdatestructured>, an optional sibling of <unitdate>, which will enable the parts of a date to be pulled out for manipulation of whatever sort (see the rough markup sketch after this list). It bears noting, however, that this is an example of a new functionality that won’t be useful unless entire bodies of finding aids are retrospectively enhanced. That said, I really like the new attributes @notbefore and @notafter.
  • A <relations> element is being added in concert with the element of the same name found in Encoded Archival Context: Corporate Bodies, Persons, and Families (EAC-CPF). <relations> will be a provisional element due to debate within TS-EAD about whether it makes sense in descriptions of archival materials in addition to being found in authority records for the named entities that occur in those descriptions. I confess that my understanding of <relations> isn’t all it could be, but, like some TS-EAD members, I’m dubious that it belongs in a descriptive context. Isn’t it enough to point out relationships among named entities within authority records? Is it intended as a stopgap until we have masses of EAC-CPF records widely available? (If so, use of <relations> in EAD will perpetuate the mixing of descriptive and authority data …) One stated value of <relations> is that it’ll support uses of Linked Open Data. Experimentation will determine this element’s fate.
  • Access term elements (those found within <controlaccess>) have been tweaked. For example, <persname> can now be parsed into multiple <part>s for name and life dates. <geographiccoordinate>, which is self-describing, is a new subelement of <geogname>. Nice.
  • The “mixed content” model, in which some elements can contain both other elements and open text, has been streamlined. For example, <repository> must now contain a specific element such as <corpname> rather than open text without specification of the type of name. This is good; it adds to the name’s utility as an access point.
  • Some descriptive elements have been “disentangled,” such as <unitdate> no longer being available within <unittitle>. I like it; presumably a file name that consists solely of a date will now be coded as such. On the other hand, would it be a display and stylesheet problem to have no <unittitle> within a <did>?
  • Some minor elements have been deprecated (i.e., they’re going away). In general, my reaction is “good riddance.”
  • Multilingual functionalities have been expanded by adding language code and script codes to most elements. It’s now possible to encode this data inline via the new <foreign> element. “Foreign” in an international standard? Well, I wasn’t the one who had to come up with an ecumenical word, so I’m not throwing any stones.
  • Linking elements have been simplified, mostly by deprecating some that have been minimally used and by limiting where others are available. One thing does bug me: <dao> will be available only within <did>. Problematic for those who have included sample images at the head of the finding aid? Or who want to affiliate images with e.g. <scopecontent> or <bioghist> rather than within a particular <did>?
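To make a few of these changes concrete, here is a rough, purely illustrative sketch of what such markup might look like. The element and attribute names come from the changes described above, but the nesting, attribute placement, and all values are my own guesses and placeholders, not the final EAD3 schema:

  <!-- Illustrative only: element names from the draft changes above; nesting,
       attribute placement, and all values are invented, not the final schema. -->
  <control>
    <recordid>findaid-0123</recordid>
    <otherrecordid localtype="marc">ocm12345678</otherrecordid>
  </control>
  <did>
    <unittitle>Correspondence</unittitle>
    <unitdate>circa 1890-1900</unitdate>
    <unitdatestructured notbefore="1890" notafter="1900"/>
  </did>
  <controlaccess>
    <persname>
      <part localtype="name">Smith, Jane</part>
      <part localtype="dates">1850-1920</part>
    </persname>
  </controlaccess>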

Observations? Disagreements? Worries? Please let me know what you think by leaving a comment.

 

About Jackie Dooley

Jackie Dooley leads OCLC Research projects to inform and improve archives and special collections practice. Activities have included in-depth surveys of special collections libraries in the U.S./Canada and the U.K./Ireland; leading the Demystifying Born Digital work agenda; a detailed analysis of the 3 million MARC records in ArchiveGrid; and studying the needs of archival repositories for specialized tools and services. Her professional research interests have centered on the development of standards for cataloging and archival description. She is a past president of the Society of American Archivists and a Fellow of the Society.


Harvard Library Innovation Lab: Link roundup November 5, 2014

planet code4lib - Wed, 2014-11-05 16:44

Scholars, museums, and hustlers.

Jazzsoon

A hustler hustling. I want to hustle in the library.

The Met and Other Museums Adapt to the Digital Age – NYTimes.com

Inspiration. Let visitors change digital art on the walls by choosing from our archives on their mobile devices.

Apple Picking Season Is Here. Don’t You Want More Than a McIntosh? – NYTimes.com

THE book on apples is being published. I love that the author has been editing the same WordPerfect file since 1983.

The Gentleman Who Made Scholar

Google Scholar “asks the actual authors … to identify which groups of paper are theirs”

Maine Charitable Mechanic Association’s History

I love the history of this library. If I make it to Portland I want to pop in and visit.

OCLC Dev Network: Enhancements Planned for November 9

planet code4lib - Wed, 2014-11-05 14:45

This weekend will bring a new release on November 9 that will include changes to two of our WMS APIs.

Library of Congress: The Signal: Audio for Eternity: Schüller and Häfner Look Back at 25 Years of Change

planet code4lib - Wed, 2014-11-05 14:27

The following is a guest post by Carl Fleischhauer, a Digital Initiatives Project Manager in the Office of Strategic Initiatives.

During the first week of October, Kate Murray and I participated in the annual conference of the International Association of Sound and Audiovisual Archives in Cape Town, South Africa.  Kate’s blog describes the conference.  This blog summarizes a special presentation by two digital pioneers in the audio field, who looked back at a quarter century of significant change in audio preservation, change that they had both witnessed and helped lead. 

Dietrich Schüller, photo courtesy of the Phonogrammarchiv.

The main speaker was Dietrich Schüller, who served as the director of the Phonogrammarchiv of the Austrian Academy of Sciences (the world’s first sound archive, founded in 1899) from 1972-2008.  He was a member of the Executive Board of IASA from 1975 to 1987, and is a member of the Audio Engineering Society.  He has served as UNESCO Vice-President of the Information for All Programme.  In this presentation, Schüller was joined by his colleague Albrecht Häfner (recently retired from the German public broadcaster Südwestrundfunk).

Schüller came to the field in the 1970s.  For many years, he said, the prevailing paradigm had a focus on the medium, i.e., on the tape as much as on the sound on the tape.  This approach was more or less modeled on object conservation as practiced in museums, where copies are made to serve certain needs, e.g., reproductions of paintings for books or posters.  But the copies of museum objects are not intended to replace the original. 

For sound archives, however, there is an additional problem: the limited life expectancy of the original carriers.  Magnetic tapes, for example, may deteriorate over time, or the devices that play the tape may become obsolete and unavailable.  Therefore, sound archives must make replacement copies that will carry the content forward, extending the museum paradigm to embrace the replacement copy.  But in years past, there was a catch: copies were made on analog audio tape and suffered what is called generation loss, an inevitable reduction in signal quality each time a copy is made.

As a sidebar, Schüller noted that, before the 1980s, the scientific understanding of the properties of audio and video carriers was not as well developed as, say, what was known about film.  The first relevant citation that Schüller could find, as it happens, came from the Library of Congress: A.G. Pickett and M.M. Lemcoe’s 1959 publication, Preservation and Storage of Sound Recordings.

The 1980s brought change.  There was increased interest in the chemistry of audio carriers, looking at the decay of lacquer discs and brittle acetate tapes, and the study of what is called “sticky shed syndrome,” a condition created by the deterioration of the binders in a magnetic tape.  As the 1980s ended and the 1990s began, conferences began to focus on the degradation of the materials that carry recorded sound.  Nevertheless, many archivists still sought a permanent medium–the paradigm remained.

The year 1982 saw digital audio arrive in the form of the compact disk.  Some mistakenly expected that this medium would be stable for the long term.  In the late 1980s, consumer products like DAT digital tapes (developed to replace the compact cassette) entered the professional world, even used by some broadcasters for archiving (not a good idea).  In that same period, the Audio Engineering Society formed the Preservation and Restoration of Audio Recording committee, which brought together archivists and manufacturers of equipment and tape.  This was, Schüller said, “the first attempt to explain that archives are a market.” 

An important turning point occurred in 1989, the date of a UNESCO-related meeting in Vienna associated with the 90th anniversary of the Phonogrammarchiv.  The meeting brought together the manufacturers of technical equipment for audiovisual archives and, Schüller said, was the first time that the idea of a self-checking, self-generating sound archive was discussed.  This–what we might call a digital repository today–was a design concept that featured automated copying (after initial digitization) to support long-term content management. 

The findings that emerged from the 1989 UNESCO meeting included some guiding principles:

  • Sooner or later all carriers will decay beyond retrievability
  • All types of players (playback devices) will cease being operable, partly due to lack of parts
  • Long-term preservation can be accomplished in the digital realm by subsequent lossless copying of the bits

To over-simplify, the gist was “forget about the original carriers, copy and recopy the content.”  For calendar comparison, the term migration and related digital-preservation concepts reached many of us in the United States a few years later, with the 1996 publication of Donald Waters and John Garrett’s important work Preserving Digital Information: Report of the Task Force on Archiving of Digital Information.

The years that followed the UNESCO conference saw slow and grudging acceptance of these findings by audio preservation specialists.  A meeting in Ottawa in 1990 was marked by debate, Schüller said, with some archivists skeptical of the new concepts (“this is merely utopian”) and others arguing that the concepts were a betrayal of archival principles (“the original is the original, a copy is only a copy”). 

The year 1992 brought a distraction as lossy audio compression came on the scene, with MP3 soon becoming the most prominent format.  This led the IASA Technical Committee, meeting in Canberra, to declare that lossy data reduction was incompatible with archival principles (“data reduction is audio destruction”).  By the mid-1990s, however, lower data storage costs removed some of the motivation to use lossy compression for archiving.

Albrecht Häfner at an IASA meeting in 2009, photo courtesy of IASA

As these new ideas were being digested, it became clear to specialists in the field that digital preservation management would require automated systems that operated at scale.  And here is where Albrecht Häfner added his recollections to the talk.  He said that he had become the head of the Südwestrundfunk radio archive in 1984, just as digital production for radio was starting.  He saw that digitizing the older holdings would be a good idea, supporting the broadcasters’ need to repurpose old sounds in new programs. 

At about that same time, a trade show on satellite communication gave Häfner “a lucky chance.”  The show featured systems for the storage of big quantities of digital data, including the SONY DIR-1000 Digital Instrumentation Recorder.  Häfner said that this system had been developed for satellite-based systems such as interferometry used in cosmic radio research or earth observation imaging, and was marketed to customers with very extensive data, like insurance companies or financial institutions.  “My instant idea,” Häfner said, “was that digital audiovisual data produced by an A/D converter and digital image data delivered from a satellite are both streams of binary signals: why shouldn’t this system work in a sound archive?”  He added, “This trade fair was really the event of crucial importance that determined my future activities as to sound archiving.”

By the early 1990s, Südwestrundfunk and IBM were working together on a pilot project system with a high storage capacity, low error rate, and managed lossless copying.  But when Häfner first reported on the system to IASA, he found that few colleagues embraced the idea.  Some specialists, he said, “looked upon us rather incredulously, because they considered digitization to be under-developed and had their doubts about its functionality.  Rather they preferred the traditional analog technique. At the annual IASA conference 1995 in Washington, there was a slide show about the preservation of the holdings of the Library of Congress and I never heard the word digital once!”

Today, attitudes are quite different.  Schüller closed the session by returning to the title the men had used for their talk, a slogan that spotlights the completion of the paradigm shift.  We have moved, he said, “from eternal carrier to eternal file.”  That’s a great bumper sticker for audio archivists!

In the Library, With the Lead Pipe: Responsive Acquisitions: A Case Study on Improved Workflow at a Small Academic Library

planet code4lib - Wed, 2014-11-05 11:30

Fast Delivery, CC-BY David, Bergin, Emmett and Elliott (Flickr)

In Brief: Fast acquisitions processes are beneficial because they get materials into patrons’ hands quicker. This article describes one library’s experience implementing a fast acquisitions process that dramatically cut turnaround times—from the point of ordering to the shelf—to under five days, all without increasing costs. This was accomplished by focusing on three areas: small-batch ordering, fast shipping and quick processing. Considerations are discussed, including the decision to rely on Amazon for the vast majority of orders.

I’m impatient. This is especially true when it comes to getting library materials for our patrons. I’m aware of the work required to get a book into someone’s hands: it has to be discovered or suggested; ordered; shipped; received; paid for; cataloged and processed—only then is it made available on a shelf. Skip or skimp on any one of these and the item never shows up or may never be found again once it leaves the technical services area. But performing these steps well mustn’t lead to months of delay. Patrons want—and deserve—today’s top sellers today, not next season. Students and faculty are knee-deep in research this week; next month is too late and who knows when that inter-library loan will actually arrive. It’s a cliché, but our society is fast-paced and instant gratification is king. I’m not the only one who is impatient.

Background

In 2011, I took advantage of an opportunity to put into place a fast acquisitions workflow that I’d been formulating. This effort followed in the footsteps of a wide variety of libraries that have prioritized the importance of getting materials into patrons’ hands as quickly as possible (Speas, 2012). At the time I was the newly hired Director of Library Services at Columbia Gorge Community College (CGCC), in The Dalles and Hood River, Oregon. CGCC is a small community college east of Portland that encourages innovation. The timing was right to try the new acquisitions workflow: the library staff—especially fellow librarian Katie Wallis—was receptive and ready for a challenge; the business office was supportive; I was new to the college and my boss believed in my ideas. The goal was to make the entire process, from the ordering decision until the item was in a patron’s hands, as fast as possible without sacrificing quality or spending more money. Specifically, we wanted the process to take less than a week from start to finish. That is, from the point when a decision was made to acquire an item we wanted it to be on the shelf within five business days. This would be a substantial improvement over CGCC’s existing practice and faster than any acquisitions process I had experienced. Other libraries, including large ones such as the Columbus Metropolitan Library, have managed an impressive 48 hours to process materials after they were received, but I am unaware of a library attempting such a short turnaround time from the point of ordering (Hatcher, 2006). Achieving such an ambitious goal would require rethinking all aspects of the process.

In other libraries I’ve worked at, acquisitions processes generally took from several weeks to a few months from start to finish. To be sure, occasional high priority rush orders were acquired and processed quickly, but they were the exception. Acquisitions typically took a long time and I’d realized that a few points in the process were especially prone to delay. The first delay often occurs during the selection process when lists of desired items are created. Lists I created regularly sat untouched for weeks or even months. This happened either because the list was waiting for someone else to do the actual ordering or because I had become distracted and hadn’t finalized it for some reason. This seemed like an area where a lot of time could be saved. Not only that, but the very practice of ordering big batches of items contributed to slowdowns later in the process, as we’ll see.

The second slow point was more clear-cut: shipping times. In order to get our entire process down to less than a week we clearly needed reliably quick shipping. Perhaps not surprisingly, faster shipping is probably the easiest way to speed up an acquisitions process as it doesn’t involve changing workflows or priorities. However, fast shipping is often expensive. Identifying fast shipping that didn’t increase our costs would likely determine how successful our overall effort would be.

The third slow point was the bottleneck that occurred in technical services when a big order arrived that, understandably, took a long time to catalog and process. Backlogs have been prevalent in libraries for decades and I’ve worked in several where it was not uncommon for items to spend more than a month being cataloged and processed (Howarth, Moor & Sze, 2010). To be sure, prioritizing fast, efficient cataloging is essential to getting acquisition turnaround times down, but dozens of items can only be processed so quickly, especially at a small library. At larger libraries the quantities are bigger but the concept is the same: there are only so many items existing staff can reasonably process in a given day. That being the case, this bottleneck provided two areas for improvement: improving the actual technical services workflow as well as re-thinking how orders are placed so as not to be overwhelmed when they arrive.

By focusing on these three areas—immediate, small-batch ordering; fast shipping; and quick processing—we identified solutions that led to a dramatic decrease in the overall turnaround times for our acquisitions process. The three areas and our methods of addressing them are similar to those often identified in the “buy instead of borrow” philosophy of collection development. With this method libraries monitor interlibrary loan requests and purchase those items that meet set criteria, a concept subsequently expanded to ebooks and often referred to as patron-driven acquisitions (Allen, Ward, Wray & Debus-López,  2003; Nixon, Freeman & Ward, 2010). Our process at CGCC differs from these efforts in that we applied the practices to all acquisitions rather than just interlibrary loan or ebooks.

Implementing Responsive Acquisitions

The most prominent change we implemented at CGCC was to move virtually all of our ordering to Amazon. At some institutions this might require completing a sole-source justification, but that wasn’t the case at CGCC. In any event, given the benefits outlined here I suspect it would have been straightforward to justify. Prior to Amazon, we used several vendors and while they each had their strengths, they were simply too slow. In contrast, Amazon is fast and offers competitive pricing. Additionally, we paid for an Amazon Prime membership (currently $99 annually) that made Amazon really fast because it includes free two-day shipping on most items.1

Relying almost exclusively on Amazon meant that we needed to have a credit account (essentially a credit card) with Amazon that allowed us to pay our bill monthly instead of with each order. Our business office worked with us to set up open purchase orders (POs) for different types of materials as well as a process for tracking the orders and paying the monthly bills. While seemingly simple, my experience is that not all business offices can or will allow such an arrangement.

Since we were ordering from Amazon it made sense to do some of our other collection development work, such as selection, on Amazon as well. It’s worth emphasizing the distinction between the selection process—deciding which items to purchase—and actually acquiring an item. This article focuses on the latter. While our selection process certainly evolved and no doubt sped up, we continued to take our time identifying the best materials to support the college’s curriculum. The changes came once we decided to order an item, whether it took weeks or only seconds to reach that decision. Once decided, we ordered the item immediately or typically within 24 hours. Ordering was facilitated through the use of Amazon’s wish lists to organize and prioritize acquisitions. We maintained three main lists, for books, movies, and music. We used additional lists for special projects.

Amazon’s wish lists have several valuable features that assisted selection and acquisitions: they help minimize unintentional duplicate purchases by notifying you if an item was previously purchased or is already on a wish list (helpfully, if you add an item a second time it moves to the top of the list); they have built-in priority and commenting capability; they can be shared, which means anyone can create a list and share the link so that all orders can be placed from the same account (lists can also be kept private); and overall, wish lists are as easy to use as Amazon itself. While other vendors have analogous collection development tools of varying complexity, my experience is that they are less intuitive to use than Amazon’s wish lists. For example, Ingram’s ipage doesn’t automatically warn users when they’ve added a duplicate title or if that title was previously purchased. It is possible to run a duplicate ISBN search in ipage selection lists, but it’s not automatic and previously purchased items are only included if they’re still on a selection list.

A significant benefit of using Amazon with a Prime membership is that it allowed us to intentionally move away from big orders and instead make frequent, small orders; sometimes even ordering a single item at a time. Small orders are easier to process than larger orders. We generally received new items in batches of one to ten. In comparison to dozens of items in a batch, even ten items seems manageable to process quickly—certainly within a day—and it was our practice to catalog items within 24 hours. Placing small orders is mentally-freeing as well, since you don’t have to put a lot of thought into compiling a complete list of titles. Ordering small batches through Amazon is relatively efficient; it’s a simple process to place orders once you’ve logged in and selected an item, as anyone who has ordered through Amazon has experienced. The only difference as an institution is that when completing the purchase we added the appropriate purchase order number for bookkeeping purposes.

Once an item arrived a librarian handled the cataloging, which for the most part was basic copy cataloging. Once cataloged, a library assistant or a student assistant did the remainder of the processing, again within 24 hours and often the same day. At that point the item was ready to go and either added to our new book/media display or placed on the hold shelf. To recap: from the time an order was placed items typically took two days to arrive, one day to catalog and another day to finish processing; four days total. But our emphasis on completing the process quickly—coupled with small-batch ordering—meant that we regularly bested even these times. For example, cataloging and processing was often completed in a day or even a single afternoon.

In practice, if someone requested an item that we decided to purchase we would order it immediately, sometimes while the patron was still standing there. This got the process started and drove home the notion that we were listening to their needs. With an Amazon Prime membership, shipping cost is the same regardless of the size of the order. More frequently, however, if an item was identified for purchase we gave it a “highest” priority in a wish list and then one person was responsible for regularly checking the wish lists and placing an order that included all of the highest priority items. This generally happened daily. The two methods helped give us the best of both worlds: a simple way to frequently order a handful of high priority items as well as the ability to order a single item immediately.

Super-Fast Acquisitions

Two-day shipping is fast and comes standard with an Amazon Prime membership, but we regularly had items delivered even faster, as in the following day. Shipping from our previous vendors took longer, and faster shipping led to the easiest time savings of all the changes we implemented. Depending on proximity to your library and order volume other vendors may be able to compete with Amazon’s two-day shipping, but overall I suspect Amazon has the most competitive shipping options for a majority of libraries, which is an important advantage. Whichever vendor you go with, you will need—and want—the fastest shipping you can afford.

For nearly all of our orders (90%+) the entire process took five business days or less and a majority of items were available for patrons two to three business days after the order was placed. On a number of occasions someone asked for an item and it was hand delivered to them the following afternoon. Research has shown quick turnaround times to be a driver of patron satisfaction and, indeed, at CGCC reaction to such quick turnaround times was positive (Hussong-Christian and Goergen-Doll, 2010). People were amazed that it was even possible for their item to be available so quickly because the fast shipping meant that in many cases we were faster than if they’d purchased the item from Amazon themselves. While most positive feedback CGCC received on this point was anecdotal, patron surveys from this period capture an increase in satisfaction with library services. This suggests that, overall, our efforts to improve services—including more responsive acquisitions—were working. Being responsive to our patrons’ needs and fulfilling their requests quickly helped to cement the library in their consciousness as a viable option for obtaining materials.

Things to Think About

While CGCC’s experience was a resounding success, there are a number of constraints and drawbacks to keep in mind. One prominent constraint is size. CGCC is a small academic library that spends approximately $14,000 annually on physical books and media. We seldom ordered multiple copies nor did we automate any of our acquisitions through the use of standing orders. Instead, we relied primarily on two related ways to track expenditures and ensure allocated funds would last the entire year. The first way stemmed from the fact that we knew a $14,000 budget meant we could spend a little over $1,100 per month. When we placed an order we would note basic information—date, amount, number of titles and PO number—in a simple spreadsheet that made our expenses to date easy to see. At the same time, we established multiple open purchase orders for a given category of materials (e.g. books or media), each for a portion of our total budgeted amount. For example, we might start the fiscal year with a $2,500 PO for books and a $1,000 PO for media, understanding that those amounts were expected to cover purchases for about three months. We established new POs quarterly before the existing POs were exhausted. In short, once our allocation for the year was established we determined roughly how much could be spent per month and stuck to it. If we went a little over one month we compensated for it the following month.

Other considerations range from the philosophical to the practical. On the philosophical side is the reality that some libraries may avoid supporting Amazon because of the role they’ve played in altering the bookselling landscape or concerns about supporting industry consolidation and the long term consequences of that trend. Indeed, Amazon was able to fulfill the vast majority of our orders (>95%), with most of the rest being textbooks we bought from our campus bookstore or independent films purchased directly from their distributor. While this consolidation is arguably good from the perspective of being able to efficiently fill orders from a single source, the long-term effects are hard to predict. To mention just one minor example, Amazon could change its policies governing how Prime works for institutional or high volume customers, perhaps by substantially increasing its cost or otherwise devaluing its benefits. Such negative changes should perhaps be expected if competition decreases. On the other hand, many libraries already purchase at least some materials from Amazon. A 2008 Association of American University Presses survey of academic librarians found that 31% of respondents used Amazon as their primary book distributor, a number that seems likely to have increased in the intervening years.

When implementing these changes at CGCC we initially tried to avoid using Amazon because of concerns about supporting industry consolidation as well as a desire to support more local alternatives. We looked into ordering through Powell’s Books, Portland’s well-known independent bookseller. Powell’s offers a generous discount to Oregon libraries that helps make their prices highly competitive. However, the library discount could not be combined with free shipping, meaning shipping charges must be factored in when doing a price comparison. Amazon’s combination of overall price and shipping speed—especially with an Amazon Prime membership—led us to decide it was the best value available to us and to a large extent forced our hand; as stewards of public funds we felt obligated to use the vendor that met our needs at the lowest cost. In the end, our desire and responsibility to quickly obtain competitively priced materials trumped our philosophical concerns about supporting Amazon’s industry-consolidating practices.

Cataloging practices are another consideration as proper cataloging is sometimes put forward as a necessarily slow and deliberate process. While high quality cataloging records should be valued and expected, libraries need to be careful not to sacrifice the good (i.e. fast processing) for perfect catalog records. This is not to say that error-ridden catalog records are acceptable; they aren’t. Like many things, however, there are diminishing returns when striving for perfection and immaculate records may not be worth the effort. Mary Bolin’s summation of the situation and her call for quantity as well as quality in cataloging remains as relevant today as when it was published more than 20 years ago. In short, she states how “high quality and high quantity in cataloging are not incompatible” (1991, p. 358). Moreover, Bolin opens her piece by referring to Andrew Osborn’s similar argument made a full fifty years earlier (1941). Given the prevalence of copy cataloging and the reasonably high quality records available through OCLC and some library consortiums, a skilled cataloger should be able to quickly obtain high quality records for most commonly held items, tweak them as needed and move on. If the process seems slow then the library needs to decide whether the improvements obtained from a more deliberate process are worth the delay. Libraries that rely more heavily on original cataloging will necessarily require more time per item, but they, too, should foster a culture that values quick cataloging.

Some libraries reduce the need for in-house cataloging and technical services through the purchase of pre-processed materials. Amazon launched its own processing program for libraries in 2006 (Amazon, 2006), but apparently it never took off and an Amazon representative I spoke with said it was discontinued in 2007, a mere year after it started. At CGCC, the vast majority of items we acquired were broadly held and good quality catalog records were generally available from OCLC or our consortium. As noted above, a librarian imported the records and made changes as necessary. We strove to catalog items within 24 hours of their arrival with an additional 24 hours allotted for further processing, a target that we typically met or exceeded. While the evidence supporting this practice is anecdotal, CGCC experienced increasing circulation statistics that suggest, at a minimum, the overall benefits of the changes outweighed the costs, including costs from an emphasis on quick cataloging.

Another consideration is Amazon’s frustrating practice of not consistently including packing slips in packages (forget drone delivery—consistent packing slips would make me a happy bookkeeper). When this happened we needed to look up item prices so that we could add their value into our library management system as well as print our order confirmation for documentation purposes. Something else to be aware of is that invoices are calculated per shipment, not per order, which further complicates bookkeeping. For example, the order you place for $200 may be shipped in three separate packages, resulting in invoices for $90, $60 and $50 to reconcile. Neither of these—lack of packing slips and per shipment invoices—are hard to handle, but they are added wrinkles. All told, the bookkeeping was straightforward and it took less than an hour per month to organize the paperwork for the business office, which paid the bills.

Finally, while I like the simplicity of Amazon’s wish lists and competitive prices, I can envision how libraries with a more robust materials budget may find that Amazon’s wish lists aren’t up to the task of large volume ordering or that their existing vendor’s discounts are superior to Amazon’s prices.

Amazon Alternatives

The most prominent change we implemented at CGCC was to move practically all of our ordering to Amazon. This was a positive move because it helped us to quickly address two of our problem areas (slow shipping and processing big batches of items). With that said, I see Amazon as a tool that we used to help speed up our acquisitions process; other libraries may find different tools that work as well or better for their specific circumstances. The point to emphasize is that your library should want and expect fast shipping along with the ability to place orders in small batches at a low cost—the goal being to get items into your patrons’ hands as quickly as logistically and financially possible.

Conclusion

CGCC’s responsive acquisitions workflow was a positive change for patrons, the library and the college as a whole. Most importantly, patrons had their items weeks faster than they otherwise would have. For the library, the faster workflow meant improvements in everything from happier patrons to requiring less space in technical services to store items that were waiting to be processed. At the same time, these benefits occurred without a higher cost, either in terms of higher prices or staff time and resources.

Implement Fast Acquisitions in Three Steps
  1. Commit to making the process fast and efficient; get staff buy-in.
  2. Identify and use the fastest shipping you can afford, either from your existing vendor or alternatives with fast shipping and similar levels of service.
  3. Review cataloging processes with an eye towards efficiencies. Determine how many items can reasonably be processed in a day and order roughly that many (or fewer) items at a time.
Acknowledgements

I want to thank everyone who read this article and provided feedback and/or encouragement: my reviewers Rachel Howard at University of Louisville and Hugh Rundle with the City of Boroondara for their time and thoughtful comments; Erin Dorney and the other editors at Lead Pipe for their guidance and support; Ellen Dambrosio and Iris Carroll at Modesto Junior College for reading an early draft and encouraging me to seek a wider audience for it; and Katie Wallis at Columbia Gorge Community College for her help implementing a super fast acquisitions process that far exceeded my expectations. Thank you all.

References

Allen, Megan, Suzanne M. Ward, Tanner Wray and Karl E. Debus-López (n.d.). “Patron-Focused Services: Collaborative Interlibrary Loan, Collection Development and Acquisitions.” Digital Repository at the University of Maryland. Retrieved from http://drum.lib.umd.edu/

Amazon (2006). “Amazon.com Announces Library Processing for Public and Academic Libraries Across the United States.” Amazon Media Room. Retrieved from http://phx.corporate-ir.net/phoenix.zhtml?p=irol-mediahome&c=176060

Association of American University Presses (2008). “Marketing to Libraries: 2008 Survey of Academic Librarians.” AAUPNet. Retrieved from www.aaupnet.org

Bolin, Mary (1991).  “Make a Quick Decision in (Almost) All Cases: Our Perennial Crisis in Cataloging.” The Journal of Academic Librarianship, 16(6): 357-361.

Hatcher, Marihelen (2006). “On the Shelf in 48 Hours.” Library Journal, 131(15): 30-31.

Howarth, Lynne C., Les Moor and Elisa Sze (2010). “Mountains to Molehills: The Past, Present, and Future of Cataloging Backlogs.” Cataloging & Classification Quarterly, 48(5): 423-444.

Hussong-Christian, Uta and Kerri Goergen-Doll (2010). “We’re Listening: Using Patron Feedback to Assess and Enhance Purchase on Demand.” Journal of Interlibrary Loan, Document Delivery & Electronic Reserve, 20(5): 319-335.

Nixon, Judith M., Robert S. Freeman and Suzanne M. Ward (2010). “Patron-Driven Acquisitions: An Introduction and Literature Review.” Collection Management, 35(3-4): 119-124.

Osborn, Andrew D. (1941). “The Crisis in Cataloging.” Library Quarterly, 11(4): 393-411.

Speas, Linda (2012). “Getting New Items into the Hands of Patrons: A Public Library Efficiency Evaluation.” Public Libraries Online, 51(6).

  1. As a bonus, up to three other Amazon accounts that share the same address as the Prime member can also take advantage of the free two-day shipping, a benefit that was much appreciated by other departments on campus.

FOSS4Lib Updated Packages: ColorSharp

planet code4lib - Tue, 2014-11-04 09:51

Last updated November 4, 2014. Created by castarco on November 4, 2014.

ColorSharp is a .NET/Mono library to handle color spaces and light spectrums.

It currently supports light spectrums, CIE's 1931 XYZ & xyY color spaces, and the sRGB color space.

Package Type: Image Display and Manipulation
License: MIT License
Development Status: Unstable
Operating System: Linux, Mac, Windows
Programming Language: C#
Open Hub Link: https://www.openhub.net/p/ColorSharp

Hydra Project: Sufia 4.1.0 released

planet code4lib - Tue, 2014-11-04 09:48

We are pleased to announce the release of version 4.1.0 of Hydra’s Sufia gem.

This release of Sufia includes functionality to support proxy deposits and transfers of ownership.

To upgrade from 4.0.x to 4.1.0, pin Sufia to version 4.1.0 in your Gemfile, then update your dependencies, generate the new database migrations required for proxies and transfers, and then apply those migrations to your database:

  • bundle update sufia
  • rails generate sufia:models:proxies
  • rake db:migrate
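
For reference, the Gemfile pin mentioned above is a one-line change; a minimal sketch (only the sufia line is shown, the rest of your Gemfile stays as-is):

  gem 'sufia', '4.1.0'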

Changes: https://github.com/projecthydra/sufia/compare/v4.0.1...v4.1.0

Thanks to Carolyn Cole, Justin Coyne, and Mike Giarlo for the work on this release.

HangingTogether: Synchronizing metadata among different databases

planet code4lib - Tue, 2014-11-04 09:00

That was the topic discussed recently by OCLC Research Library Partners metadata managers, initiated by Naun Chew of Cornell and Joan Swanekamp of Yale. As libraries have collected more commercial electronic resources, instituted local or shared digitization programs, and moved to cloud-based services, more bibliographic and inventory information is being managed outside the traditional catalog, such as in separate repositories or through commercial services. The need to manage and integrate data from many different sources presents a different set of challenges from working primarily within a single central system.

The discussions revolved around these themes:

Consequences of not keeping databases synchronized: A common disconnect is between the local catalog and the union catalog or WorldCat.  Different versions of a resource (print, digital) can easily get out of sync, so a user may find one but not the other. One example given was where libraries cannot correct a URL for a resource because the database matching-merging algorithm cannot distinguish between a replacement and a new URL. Maintaining the same metadata across multiple databases takes significant effort. Out-of-sync databases can frustrate users who cannot find items even though they are held or licensed by the library. Inconsistency and inaccuracy across databases can confuse users.

“Artificial” digital library:  Digital libraries also represent the resources of the library, but usually apply different descriptive metadata approaches from those used in the local catalog.  Digital libraries represent an “artificial split” from the local catalog, but absent mechanisms for maintaining, updating, and syncing between the two, we’re left with two separate libraries going down two separate paths.

What is the “database of record”? Some databases have more functionality than can be provided by the local catalog. For example, Geographic Information Systems (GIS) include more data than can be accommodated in a MARC record. Perhaps for maps, the map database is the “database of record”.  Managers need to decide whether “buckets” of data are more useful than trying to pour everything into one system. There are good reasons for multiple platforms and different ways of describing things. Perhaps we don’t need one “database of record”.  Do researchers really expect to find everything they need in one place?

Focus on access rather than metadata? Instead of integrating metadata, let’s unify access. Some discovery layers provide a “bento box” display showing results from the different platforms in separate panes. There’s a tension between simplicity and complexity. Libraries cannot replicate the “apparent relevance” ranking Google and other search engines provide because we lack the transactional data needed to weigh into the ranking. Relying on a discovery layer to retrieve information from multiple databases still requires managers to decide what metadata to put where, and why. It also can highlight discrepancies in metadata approaches.

Maybe identifiers will help: Access points are valued by researchers. Possibly using the same identifiers in the metadata in different databases would resolve some of the synchronizing issues. Cornell has been experimenting with FAST (Faceted Application of Subject Terminology) headings in addition to or instead of LC subject headings in its 7-million record catalog.  George Washington University has been adding VIAF or id.loc.gov identifiers for names to its records. The hope and anticipation is that identifiers will help, but there still would be much work to address problems of duplicated effort and to reduce the labor devoted to maintenance.
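
As a rough illustration of what identifier-bearing headings look like in practice, a personal name and a FAST subject heading in a MARC bibliographic record might carry their identifiers in subfield $0 along these lines (a hypothetical sketch: the headings, VIAF number, and FAST number are invented placeholders, not real identifiers):

  100 1_ $a Smith, Jane, $d 1850-1920 $0 http://viaf.org/viaf/000000000
  650 _7 $a Library science $0 (OCoLC)fst00000000 $2 fast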

About Karen Smith-Yoshimura

Karen Smith-Yoshimura, program officer, works on topics related to renovating descriptive and organizing practices with a focus on large research libraries and area studies requirements.


DuraSpace News: REGISTER for Upcoming Hot Topics–DuraSpace Community Webinars

planet code4lib - Tue, 2014-11-04 00:00
Winchester, MA: Register to tune in to the following free DuraSpace Community Webinars.

Series 8: Doing It: How Non-ARL Institutions are Managing Digital Collections, curated by Liz Bishoff, Partner, The Bishoff Group LLC

  • Doing It: Trends Toward Hosted Service Adoption and Implementation for Managing Digital Collections — Wednesday, November 12, 2014 at 11:00am-12:00pm ET

DuraSpace News: DSquare Technologies–New DuraSpace Registered Service Provider for DSpace in India

planet code4lib - Tue, 2014-11-04 00:00

Winchester, MA: Earlier this year DSquare Technologies [1] opened its doors to provide libraries, businesses and government agencies across India and beyond key DSpace support services and innovative end-to-end repository solutions, including add-ons to DSpace functionality. The DSquare goal is “to create work that is honest and solutions that are exploratory, educational and inspirational”. DSquare is the first DuraSpace Registered Service Provider based in India.

District Dispatch: Reminder: Free webinar “Giving legal advice to patrons”

planet code4lib - Mon, 2014-11-03 20:28

Photo by Cushing Library Holy Names University via Flickr

Reminder: To help reference staff build confidence in responding to legal inquiries, the American Library Association (ALA) and iPAC will host the free webinar “Lib2Gov.org: Connecting Patrons with Legal Information” on Wednesday, November 12, 2014, from 2:00–3:00 p.m. EDT.

The session will offer information on laws, legal resources and legal reference practices. Participants will learn how to handle a law reference interview, including where to draw the line between information and advice, key legal vocabulary and citation formats. During the webinar, leaders will offer tips on how to assess and choose legal resources for patrons. Register now as space is limited.

Catherine McGuire, head of Reference and Outreach at the Maryland State Law Library, will lead the free webinar. McGuire currently plans and presents educational programs to Judiciary staff, local attorneys, public library staff and members of the public on subjects related to legal research and reference. She currently serves as Vice Chair of the Conference of Maryland Court Law Library Directors and the co-chair of the Education Committee of the Legal Information Services to the Public Special Interest Section (LISP-SIS) of the American Association of Law Libraries (AALL).

Webinar: Lib2Gov.org: Connecting Patrons with Legal Information
Date: Wednesday, November 12, 2014
Time: 2:00–3:00 p.m. EDT

The archived webinar will be emailed to District Dispatch subscribers.

The post Reminder: Free webinar “Giving legal advice to patrons” appeared first on District Dispatch.

Cherry Hill Company: Drupal Workshops at Internet Librarian 2014

planet code4lib - Mon, 2014-11-03 20:10

Last week Cary and I attended Internet Librarian 2014 in Monterey, California. We spent 4 days meeting people, discussing projects, and promoting Drupal, Islandora, and open source in general.

Cherry Hill Drupal Workshops

On Sunday we gave two Drupal workshops to some very enthusiastic learners. The interest and excitement of the audience allowed us to be more conversational and less reliant on a script. We were also able to focus on specific topics people wanted to learn about. The attendees at the first workshop, Drupal Essential Tools: Beyond the Basics (slides attached below), were so eager to learn more about what they could do with Drupal for their sites that it made...

Read more »

OCLC Dev Network: Resolved - WorldCat Metadata API Now Available

planet code4lib - Mon, 2014-11-03 19:45

The WorldCat Metadata API problem we reported this morning has been resolved. We tracked down the issue to a conflict between the API and our Identity Management (IDM) Service. We have resolved the problem and the Web service is now available. Thanks for your patience as we worked through this issue.

FOSS4Lib Upcoming Events: 6th Annual VIVO Conference

planet code4lib - Mon, 2014-11-03 18:50
Date: Wednesday, August 12, 2015 - 08:00 to Friday, August 14, 2015 - 17:00
Supports: Vivo

Last updated November 3, 2014. Created by Peter Murray on November 3, 2014.

Cambridge, MA
Hyatt Regency Cambridge

The VIVO conference provides a unique opportunity for people from across the country and around the world to come together in the spirit of promoting scholarly collaboration and research discovery. This fun and exciting city will be the perfect backdrop for the 2015 conference. Join us to gain insight into the latest industry trends and innovations while enjoying all of the history, food, and culture Cambridge has to offer!

Roy Tennant: My Online Stalker

planet code4lib - Mon, 2014-11-03 18:09

No one enjoys being stalked. Well, at least no one I’ve spoken to. So recently, when I discovered I was being stalked online I felt…uncomfortable. Creeped out. Even freaked out.

But this kind of stalking wasn’t even as freaky as the usual kind. I’m being stalked by retailers. And so are you.

Of course I’ve known that retailers, Google, Facebook, and just about everyone tracks my every move. But what took me by surprise (which in hindsight, it shouldn’t have) was the level at which this information was following me around.

The first incident happened in Facebook, when I noticed that the ad off to the side was a camera that I had recently viewed and bookmarked at Amazon. It was like whoever was serving ads up on Facebook knew that I hadn’t bought it yet, and they were tantalizing me with the best clickbait possible — something they knew I was interested in.

The second happened only today when I went to a blog post by someone I follow and I noticed an ad for something I had added to my shopping cart at REI.com but hadn’t purchased. Again, this was a very specific item that was unlikely to appear in an ad except for the fact that I had recently viewed it.

So…yeah. I’m being followed online. You are being followed online. We all are, every single second of the day. Call me seriously creeped out.

 

Photo by Patrik, Creative Commons License CC BY-NC-SA 2.0

OCLC Dev Network: WorldCat Metadata API Currently Unavailable

planet code4lib - Mon, 2014-11-03 17:30

We are experiencing a problem with the WorldCat Metadata API web service and it is temporarily unavailable. Our investigation so far indicates that the problem is related to user authentication. We are analyzing options for resolving this as quickly as possible and will provide an update to the Developer Network later today. We apologize for any inconvenience this may cause. 

District Dispatch: Data powers advocacy: Please log onto Digital Inclusion Survey today!

planet code4lib - Mon, 2014-11-03 17:12

The Digital Inclusion Survey is open until November 22.

I can attest to the power of library data like that provided by thousands of libraries through the Digital Inclusion Survey throughout my career. From reporters calling the Public Information Office, to researchers and library students during my time in the Office for Research and Statistics, to Beltway policymakers and legislators today, the time librarians take to respond to national surveys puts our community “on the map” for those who might otherwise count us out of the Digital Age.

I know (and certainly hear from) librarians who participate in surveys ranging from the Institute of Museum and Library Services (IMLS) Public Libraries Survey to the Public Library Data Service report and can get understandably fatigued by the number of surveys and questions. It’s a fair question to ask “is this worth my time” among many pressing tasks—and even “what’s in it for me?” My colleagues in other American Library Association (ALA) units and at the Information Policy & Access Center at the University of Maryland take these questions seriously.

Here are five reasons I think public library staff should say “yes” to the Digital Inclusion Survey:

  1. ALA and the University of Maryland iPAC team have made the online platform as easy to use as possible, plus allowing folks to import last year’s data if you’ve participated before.
  2. We make it easy to leverage data for advocacy at all levels. Issue briefs, state summaries, reports and infographics provide bite-size pieces, context and visual appeal on the topics ranging from digital inclusion writ large to e-government and employment.*
  3. We don’t sit on our laurels. Have you looked at the new, interactive mapping feature that combines GIS, community demographic data and your library information on the fly? Your city and county managers thought it was pretty cool when we showed it to them.
  4. These data and the resulting reports allow you to see your library and its programs and services among libraries of similar sizes, and within a state* and national context, as well as your local community.
  5. ALA puts the data to work for you and your colleagues. We take these state summaries to senators; use the data to inform and bolster our policy recommendations, testimony and public comments; and publicize the heck out of what we’ve learned from all of you in media ranging from Fast Company to Governing magazine.

As often as you are asked to respond to a survey, we are asked to document why libraries need more funding through the federal E-rate program, to answer how many libraries offer 3D printers, and to show how libraries are helping to support a 21st century workforce. I can’t credibly answer these questions without your help.

The Digital Inclusion Survey is open until November 22. Be the answer!

[*We can only provide state-level summaries for those states where we have enough responses. Tell your neighbor!]

The post Data powers advocacy: Please log onto Digital Inclusion Survey today! appeared first on District Dispatch.

LITA: LITA Online Meeting – Monday, November 3, 2014 at 2:00 p.m. Central.

planet code4lib - Mon, 2014-11-03 16:54

The LITA Board invites you to join this meeting online on Monday, November 3, 2014 at 2:00 p.m. Central.

Join the meeting by clicking the following link:

http://ala.adobeconnect.com/r47eqi6dp6a/

View the meeting agenda:

http://connect.ala.org/node/230328

If you have any questions, recommendations, or wish to discuss any of this, please leave a comment or contact the LITA office at 312/280-4269.

Islandora: New Islandora Module: Islandora Pathauto

planet code4lib - Mon, 2014-11-03 16:48

The Islandora Foundation is proud to announce the arrival of a new module into our stack: Islandora Pathauto, created and contributed by Rosemary Lefaive (who you may remember from past kudos). 

This simple but extremely handy little module allows the creation of more human-readable or SEO-friendly URLs for your Fedora objects by exposing Islandora objects to the alias-creating tools of pathauto. It will be included in the official Islandora 7.x-1.5 release, but should work just fine if you are running on 7.x-1.4 and want to try it out.

If you have a module or tool that you would like to contribute to Islandora, please check out our Licensed Software Acceptance Procedure to find out how.

David Rosenthal: First US web page

planet code4lib - Mon, 2014-11-03 16:00
Stanford's Web Archiving team of Nicholas Taylor and Ahmed AlSum have brought up SWAP, the Stanford Web Archive Portal, using the Open Wayback code developed under IIPC auspices from the Internet Archive's original. And, thanks to the Stanford staff's extraordinary ability to recover data from old backups, it features the very first US web page, put up by Paul Kunz at SLAC around 6th Dec. 1991.
