
Feed aggregator

Jonathan Rochkind: Useful lesser known ruby Regexp methods

planet code4lib - Thu, 2014-11-06 15:50
1. Regexp.union

Have a bunch of regex’s, and want to see if a string matches any of them, but don’t actually care which one it matches, just if it matches any one or more? Don’t loop through them, combine them with Regexp.union.

union_re = Regexp.union(re1, re2, re3, as_many_as_you_want)
str =~ union_re

2. Regexp.escape

Have an arbitrary string that you want to embed in a regex, interpreted as a literal? Might it include regex special chars that you want interpreted as literals instead? Why even think about whether it might or might not? Just escape it, always.

val = 'Section 19.2 + [Something else]'
re = /key: #{Regexp.escape val}/

Yep, you can use #{} string interpolation in a regex literal, just like a double quoted string.


Filed under: General

Eric Hellman: If your website still uses HTTP, the X-UIDH header has turned you into a snitch

planet code4lib - Thu, 2014-11-06 14:54
Does your website still use HTTP? If so, you're a snitch.

As I talk to people about privacy, I've found a lot of misunderstanding. HTTPS applies encryption to the communication channel between you and the website you're looking at. It's an absolute necessity when someone's making a password or sending a credit card number, but the modern web environment has also made it important for any communication that expects privacy.

HTTP is like sending messages on a postcard. Anyone handling the message can read the whole message. Even worse, they can change the message if they want. HTTPS is like sending the message in a sealed envelope. The messengers can read the address, but they can't read or change the contents.

It used to be that network providers didn't read your web browsing traffic or insert content into it, but now they do so routinely. This week we learned that Verizon and AT&T were inserting an "X-UIDH" header into your mobile phone web traffic. So for example, if a teen was browsing a library catalog for books on "pregnancy" using a mobile phone, Verizon's advertising partners could, in theory, deliver advertising for maternity products.
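If you run an HTTP site and want to see this for yourself, the header arrives as an ordinary request header that any server-side code can read. Here is a minimal sketch of my own (not from Eric's post) using Python's standard WSGI interface; the header name is the X-UIDH value reported for Verizon, and the port and messages are purely illustrative.

# Minimal WSGI app that logs whether an incoming request carries the
# carrier-injected X-UIDH tracking header. Illustrative only.
from wsgiref.simple_server import make_server

def app(environ, start_response):
    uidh = environ.get('HTTP_X_UIDH')  # WSGI exposes request headers as HTTP_* keys
    if uidh is not None:
        print('Request arrived with a tracking header: X-UIDH=%s' % uidh)
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'ok']

if __name__ == '__main__':
    make_server('', 8000, app).serve_forever()  # hypothetical port for testing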

The only way to stop this header insertion is for websites to use HTTPS. So do it. Or you're a snitch.

Sorry, Blogger.com doesn't support HTTPS. So if you mysteriously get ads for snitch-related products, or if the phrase "Verizon and AT&T" is not equal to "V*erizo*n and A*T*&T" without the asterisks, blame me and blame Google.

Here's more on the X-UIDH header.

Open Knowledge Foundation: Open Knowledge Festival 2014 report: out now!

planet code4lib - Thu, 2014-11-06 14:46

Today we are delighted to publish our report on OKFestival 2014!

This is packed with stories, statistics and outcomes from the event, highlighting the amazing facilitators, sessions, speakers and participants who made it an event to inspire. Explore the pictures, podcasts, etherpads and videos which reflect the different aspects of the event, and uncover some of its impact as related by people striving for change – those with Open Minds to Open Action.

Want more data? If you are still interested in knowing more about how the OKFestival budget was spent, we have published details about the event’s income and expenses here.

If you missed OKFestival this year, don’t worry – it will be back! Keep an eye on our blog for news and join the Open Knowledge discussion list to share your ideas for the next OKFestival. Looking forward to seeing you there!

OCLC Dev Network: Planned Downtime for November 9 Release

planet code4lib - Thu, 2014-11-06 14:30

WMS Web services will be down during the install window for this weekend's release. The install window for this release is 2:00 – 7:00 am Eastern US time on Sunday, November 9th.

 

Ted Lawless: Connecting Python's RDFLib and Stardog

planet code4lib - Thu, 2014-11-06 00:00
Connecting Python's RDFLib and Stardog

For a couple of years I have been working with the Python RDFLib library for converting data from various formats to RDF. This library serves this work well, but it's sometimes difficult to track down a straightforward, working example of performing a particular operation or task in RDFLib. I have also become interested in learning more about the commercial triple store offerings, which promise better performance and more features than the open source solutions. A colleague has had good experiences with Stardog, a commercial semantic graph database (with a freely licensed community edition) from Clark & Parsia, so I thought I would investigate how to use RDFLib to load data into Stardog and share my notes.

A "SPARQLStore" and "SPARQLUpdateStore" have been included with Python's RDFLib since version 4.0. These are designed to allow developers to use the RDFLib code as a client to any SPARQL endpoint. Since Stardog supports SPARQL 1.1, developers should be able to connect to Stardog from RDFLib in the similar way they would to other triple stores like Sesame or Fuseki.

Setup Stardog

You will need a working instance of Stardog. Stardog is available under a community license for evaluation after going through a simple registration process. If you haven't set up Stardog before, you might want to check out Geir Grønmo's triplestores repository, where he has Vagrant provisioning scripts for various triple stores. This is how I got up and running with Stardog.

Once Stardog is installed, start the Stardog server with security disabled. This will allow the RDFLib code to connect without a username and password. Obviously you will not want to run Stardog in this way in production but it is convenient for testing.

$./bin/stardog-admin server start --disable-security

Next create a database called "demo" to store our data.

$./bin/stardog-admin db create -n demo

At this point a SPARQL endpoint is available and ready for queries at http://localhost:5820/demo/query.

RDF

For this example, we'll add three skos:Concepts to a named graph in the Stardog store.

@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix skos: <http://www.w3.org/2004/02/skos/core#> .
@prefix xml: <http://www.w3.org/XML/1998/namespace> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .

<http://example.org/n1234> a skos:Concept ;
    skos:broader <http://example.org/b5678> ;
    skos:preferredLabel "Baseball" .

<http://example.org/b5678> a skos:Concept ;
    skos:preferredLabel "Sports" .

<http://example.org/n1000> a skos:Concept ;
    skos:preferredLabel "Soccer" .

Code

The complete example code here is available as a Gist.

Setting up the 'store'

We need to initialize a SPARQLUpdateStore as well as a named graph where we will store our assertions.

from rdflib import Graph, Literal, URIRef
from rdflib.namespace import RDF, SKOS
from rdflib.plugins.stores import sparqlstore

# Define the Stardog store
endpoint = 'http://localhost:5820/demo/query'
store = sparqlstore.SPARQLUpdateStore()
store.open((endpoint, endpoint))

# Identify a named graph where we will be adding our instances.
default_graph = URIRef('http://example.org/default-graph')
ng = Graph(store, identifier=default_graph)

Loading assertions from a file

We can load our sample turtle file into an in-memory RDFLib graph.

g = Graph()
g.parse('./sample-concepts.ttl', format='turtle')

# Serialize our named graph to make sure we got what we expect.
print g.serialize(format='turtle')

Since our data is now loaded as an in-memory Graph, we can add it to Stardog with a SPARQL INSERT DATA operation.

ng.update(
    u'INSERT DATA { %s }' % g.serialize(format='nt')
)

Use the RDFLib API to inspect the data

Using the RDFLib API, we can list all the Concepts that were just added to Stardog.

for subj in ng.subjects(predicate=RDF.type, object=SKOS.Concept):
    print 'Concept: ', subj

And, we can find concepts that are broader than others.

for ob in ng.objects(predicate=SKOS.broader):
    print 'Broader: ', ob

Use RDFLib to issue SPARQL read queries.

RDFLib allows for binding a prefix to a namespace. This makes our queries easier to read and write.

store.bind('skos', SKOS)

A SELECT query to get all the skos:preferredLabel values for skos:Concepts.

rq = """ SELECT ?s ?label WHERE { ?s a skos:Concept ; skos:preferredLabel ?label . } """ for s, l in ng.query(rq): print s.n3(), l.n3() Use RDFLib to add assertions.

The RDFLib API can also be used to add new assertions to Stardog.

soccer = URIRef('http://example.org/n1000')
ng.add((soccer, SKOS.altLabel, Literal('Football')))

We can now read statements about soccer using the RDFLib API, which issues the proper SPARQL query to Stardog in the background.

for s, p, o in ng.triples((soccer, None, None)):
    print s.n3(), p.n3(), o.n3()

Summary

With a little setup, we can begin working with Stardog from RDFLib in much the same way that we work with other backends. The sample code here is included in this Gist.

DuraSpace News: Recordings available for the Fedora 4.0 Webinar Series

planet code4lib - Thu, 2014-11-06 00:00

Winchester, MA

On November 5, 2014 the Hot Topics DuraSpace Community Webinar series, “Early Advantage: Introducing New Fedora 4.0 Repositories,” concluded with its final webinar, “Fedora 4.0 in Action at Penn State and Stanford.”

DuraSpace News: Fedora 4 Almost Out the Door: Final Community Opportunity for Feedback!

planet code4lib - Thu, 2014-11-06 00:00

From Andrew Woods, Technical Lead for Fedora 

Winchester, MA  Fedora 4 Beta-04 will be released before this coming Monday, November 10, 2014. The development sprint that also begins on November 10 will be focused on testing and documentation as we prepare for the Fedora 4.0 production release.

SearchHub: What Could Go Wrong? – Stump The Chump In A Rum Bar

planet code4lib - Wed, 2014-11-05 22:56

The first time I ever did a Stump The Chump session was back in 2010. It was scheduled as a regular session — in the morning if I recall correctly — and I (along with the panel) was sitting behind a conference table on a dais. The session was fun, but the timing, and setting, and seating, made it feel very stuffy and corporate.

We quickly learned our lesson, and subsequent “Stump The Chump!” sessions have become “Conference Events”. Typically held at the end of the day, in a nice big room, with tasty beverages available for all. Usually, right after the winners are announced, it’s time to head out to the big conference party.

This year some very smart people asked me a very smart question: why make attendees who are having a very good time (and enjoying tasty beverages) at “Stump The Chump!”, leave the room and travel to some other place to have a very good time (and enjoy tasty beverages) at an official conference party? Why not have one big conference party with Stump The Chump right in the middle of it?

Did I mention these were very smart people?

So this year we’ll be kicking off the official “Lucene/Solr Revolution Conference Party” by hosting Stump The Chump at the Cuba Libre Restaurant & Rum Bar.

At 4:30 PM on Thursday (November 13) there will be a fleet of shuttle buses ready and waiting at the Omni Hotel’s “Parkview Entrance” (on the South East side of the hotel) to take every conference attendee to Cuba Libre. Make sure to bring your conference badge; it will be your golden ticket to get on the bus, and into the venue — and please: Don’t Be Late! If you aren’t on a shuttle bus leaving the Omni by 5:00PM, you might miss the Chump Stumping!

Beers, Mojitos & Soft Drinks will be ready and waiting when folks arrive, and we’ll officially be “Stumping The Chump” from 5:45 to 7:00-ish.

The party will continue even after we announce the winners, and the buses will be available to shuttle people back to the Omni. The last bus back to the hotel will leave around 9:00 PM — but as always, folks are welcome to keep on partying. There should be plenty of taxis in the area.

To keep up with all the “Chump” news fit to print, you can subscribe to this blog (or just the “Chump” tag).

The post What Could Go Wrong? – Stump The Chump In A Rum Bar appeared first on Lucidworks.

LITA: Game Night at LITA Forum

planet code4lib - Wed, 2014-11-05 22:13

Are you attending the 2014 LITA Forum in Albuquerque? Like board games? If so, come to the LITA Game Night!

Thursday, November 6, 2014
8:00 – 11:00 pm
Hotel Albuquerque, Room Alvarado C

Games that people are bringing:

  • King of Tokyo
  • Cheaty Mages
  • Cards Against Humanity
  • One Night Ultimate Werewolf
  • Star Fluxx
  • Love Letter
  • Seven Dragons
  • Pandemic
  • Coup
  • Avalon
  • Bang!: The Dice Game
  • Carcassonne
  • Uno
  • Gloom
  • Monty Python Fluxx
  • and probably more…

Hope you can come!

FOSS4Lib Recent Releases: Evergreen - 2.7.1, 2.6.4, 2.5.8

planet code4lib - Wed, 2014-11-05 21:21
Package: Evergreen
Release Date: Wednesday, November 5, 2014

Last updated November 5, 2014. Created by Peter Murray on November 5, 2014.

"In particular, they fix a bug where even if a user had logged out of the Evergreen public catalog, their login session was not removed. This would permit somebody who had access to the user’s session cookie to impersonate that user and gain access to their account and circulation information."

Evergreen ILS: SECURITY RELEASES – Evergreen 2.7.1, 2.6.4, and 2.5.8

planet code4lib - Wed, 2014-11-05 21:11

On behalf of the Evergreen contributors, the 2.7.x release maintainer (Ben Shum) and the 2.6.x and 2.5.x release maintainer (Dan Wells), we are pleased to announce the release of Evergreen 2.7.1, 2.6.4, and 2.5.8.

The new releases can be downloaded from:

http://evergreen-ils.org/egdownloads/

THESE RELEASES CONTAIN SECURITY UPDATES, so you will want to upgrade as soon as possible.

In particular, they fix a bug where even if a user had logged out of the Evergreen public catalog, their login session was not removed. This would permit somebody who had access to the user’s session cookie to impersonate that user and gain access to their account and circulation information.

After installing the Evergreen software update, it is recommended that memcached be restarted prior to restarting Evergreen services and Apache.  This will clear out all user login sessions.

All three releases also contain bugfixes that are not related to the security issue. For more information on the changes in these releases, please consult the change logs:

District Dispatch: IRS provides update to libraries on tax form program

planet code4lib - Wed, 2014-11-05 21:06

Photo by AgriLifeToday via Flickr

On Tuesday, the Internal Revenue Service (IRS) announced that the agency will continue to deliver 1040 EZ forms to public libraries that are participating in the Tax Forms Outlet Program (TFOP). TFOP offers tax products to the American public primarily through participating libraries and post offices. The IRS will distribute new order forms to participating libraries in the next two to three weeks.

The IRS released the following statement on November 4, 2014:

Based on the concerns expressed by many of our TFOP partners, we are now adding the Form 1040 EZ, Income Tax Return for Single and Joint Filers with No Dependents, to the list of forms that can be ordered. We will send a supplemental order form to you in two to three weeks. We strongly recommend you keep your orders to a manageable level primarily due to the growing decline in demand for the form and our print budget. Taxpayers will be able to file Form 1040 EZ and report that they had health insurance coverage, claim an exemption from coverage or make a shared responsibility payment. However, those who purchased health coverage from the Health Insurance Marketplace must use the Form 1040 or 1040A. Your help communicating this to your patrons within your normal work parameters would be greatly appreciated.

We also heard and understood your concerns of our decision to limit the number of Publication 17 we plan to distribute. Because of the growing cost to produce and distribute Pub 17, we are mailing to each of our TFOP partners, including branches, one copy for use as a reference. We believe that the majority of local demand for a copy of or information from Publication 17 can be met with a visit to our website at www.irs.gov/formspubs or by ordering it through the Government Printing Office. We value and appreciate the important work you do providing IRS tax products to the public and apologize for any inconvenience this service change may cause.

Public library leaders will have the opportunity to discuss the management and effectiveness of the Tax Forms Outlet Program with leaders from the IRS during the 2015 American Library Association Midwinter Meeting session “Tell the IRS: Tax Forms in the Library.” The session takes place on Sunday, February 1, 2015.

The post IRS provides update to libraries on tax form program appeared first on District Dispatch.

Roy Tennant: How Some of Us Learned To Do the Web Before it Existed

planet code4lib - Wed, 2014-11-05 20:58

Perhaps you really had to be there to understand what I’m about to relate. I hope not, but it’s quite possible. Imagine a world without the Internet, as totally strange as that is. Imagine that we had no world-wide graphical user interface to the world of information. Imagine that the most we had were green screens and text-based interfaces to “bulletin boards” and “Usenet newsgroups”. Imagine that we were so utterly ignorant of the world we would very soon inhabit. Imagine that we were about to have our minds utterly blown.

But we didn’t know that. We only had what we had, and it wasn’t much. We had microcomputers of various kinds, and the clunkiest interfaces to the Internet that you can possibly imagine. Or maybe you can’t even imagine. I’m not sure I could, from this perspective. Take it from me — it totally sucked. But it was also the best that we had ever had.

And then along came HyperCard. 

HyperCard was a software program that ran on the Apple Macintosh computer. It would be easy to write it off as being too narrow a niche, as Microsoft was even more dominant in terms of its operating system than it is now. But that would be a mistake. Much of the true innovation at that point was happening on the Macintosh. This was because it had blown the doors off the user interface and Microsoft was still playing catchup. You could argue in some ways it still is. But back then there was absolutely no question who was pushing the boundaries, and it wasn’t Redmond, WA, it was Cupertino, CA. Remember that I’m taking you back before the Web. All we had were clunky text-based interfaces. HyperCard gave us this:

  • True “hypertext”. Hypertext is what we called the proto-web — that is, the idea of linking from one text document to another before Tim Berners-Lee created HTML.
  • An easy to learn programming language. This is no small thing. Having an easy-to-learn scripting language put the ability to create highly engaging interactive interfaces into the hands of just about anyone.
  • Graphical elements. Graphics, as we know, are a huge part of the Web. The Web didn’t really come into its own until graphics could show up in the UI. But we already had this in HyperCard. The difference was that anyone with a network connection could see your graphics — not just those who had your HyperCard “stack”.

As a techie, I was immediately taken with the possibilities, so as a librarian at UC Berkeley at the time I found some other willing colleagues and we built a guide to the UC Berkeley Libraries. I’ve been unable to locate a copy of it, which is unfortunate since it’s still possible to run a HyperCard stack in emulation. I’d give a lot to be able to play with it again.

Doing this exposed us to principles of “chunking up” information and linking it together in different ways that we eventually took with us to the web. We also learned to limit the amount of text in online presentations, to enhance “scannability”. We were introduced to visual metaphors like buttons. We learned to use size to indicate priority. We experimented with bread crumb trails to give users a sense of where they were in the information space. And we strove to be consistent. All of these lessons helped us to be better designers of web sites, before the web even existed.

For more, here is another viewpoint on what HyperCard provided a web-hungry world.

Nicole Engard: Bookmarks for November 5, 2014

planet code4lib - Wed, 2014-11-05 20:30

Today I found the following resources and bookmarked them:

  • Brackets: A modern, open source text editor that understands web design.

Digest powered by RSS Digest

The post Bookmarks for November 5, 2014 appeared first on What I Learned Today....

Related posts:

  1. ATO2014: Open Source – The Key Component of Modern Applications
  2. Code4Lib Programs Chosen
  3. Design for Firefox First

LITA: Jobs in Information Technology: November 5

planet code4lib - Wed, 2014-11-05 17:33

New vacancy listings are posted weekly on Wednesday at approximately 12 noon Central Time. They appear under New This Week and under the appropriate regional listing. Postings remain on the LITA Job Site for a minimum of four weeks.

New This Week

Assistant University Archivist for Technical Services, Princeton University Library, Princeton, NJ

Dean of University Libraries, Oakland University, Rochester, MI

Digital Production Services Programmer – IT Expert, University of Florida, George A Smathers Libraries, Gainesville, FL

IT Expert – Programmer, University of Florida, George A Smathers Libraries, Gainesville, FL

Physician Directory Specialist, Froedtert Health, Menomonee Falls, WI 

Visit the LITA Job Site for more available jobs and for information on submitting a  job posting.

HangingTogether: Are you ready for EAD3? It’s coming soon!

planet code4lib - Wed, 2014-11-05 16:52

The third version of Encoded Archival Description (EAD) is on the cusp of being released, which prompts me to offer up a quickie history of EAD’s development and to summarize what’s coming with EAD3.

EAD has come a long way since the launch of version 1.0 in 1998. I was one of the lucky members of the initial EAD research group, which was recognized in the same year by the Society of American Archivists with its annual Coker award. Having been there at the beginning, it has been nothing short of amazing to observe both the depth and breadth of EAD adoption all over the world over the ensuing sixteen years. It doesn’t seem like that long ago that lots of archivists didn’t think it was possible to wrangle the anarchic finding aid into submission by development of a standard. EAD version 2002 introduced quite a few changes to the initial DTD, particularly in response to needs of members of the international archival community who were among the early adopters.

And now EAD3 is on target to launch in the winter of 2015. Mike Rush, co-chair of SAA’s Technical Subcommittee on EAD, recently presented a webinar to bring us all into the loop about some of the significant changes that are coming. TS-EAD has done a great job of soliciting input and communicating about the revision process. An enormous amount of information is here, and a nice summary of the principles behind the array of changes is here.

In a nutshell, EAD3 is intended to “improve the efficiency and effectiveness of EAD as a standard for the electronic representation of descriptions of archival materials and a tool for the preservation and presentation of such data and its interchange between systems.” A specific objective is to achieve greater conceptual and semantic consistency in EAD’s use; this should be good news to techies responsible for implementations of the standard, some of whom have been vocal for years about the extent to which the excessive flexibility of EAD’s design has proven challenging. Two other goals are to find ways to make EAD-encoded finding aids connect more effectively with other protocols and to improve multilingual functionalities.

TS-EAD has been working madly to finalize the schema and isn’t making the latest version available as little tweaks are made, but you can see a relatively final version of the element list here.

So, what are some of the significant changes that we’ll see in EAD3? I’m going to assume that readers are familiar enough with EAD elements that these will make sense. Note that this is a very partial list.

  • Lots of changes are coming to the metadata about the finding aid, currently found in <eadheader>, which is changing to <control>. I particularly like the new element <otherrecordid> that enables record identifiers from other systems to be brought in. This will make it possible, for example, to add the record i.d. for a companion MARC record.
  • We’ll see new and modified <did> elements, which are the basic descriptive building blocks of a finding aid. A new one is <unitdatestructured>, an optional sibling of <unitdate>, which will enable the parts of a date to be pulled out for manipulation of whatever sort. It bears noting, however, that this is an example of a new functionality that won’t be useful unless entire bodies of finding aids are retrospectively enhanced. That said, I really like the new attributes @notbefore and @notafter.
  • A <relations> element is being added in concert with the same as found in Encoded Archival Context: Corporate Bodies, Persons, and Families (EAC-CPF). <relations> will be a provisional element due to debate within TS-EAD about whether it makes sense in descriptions of archival materials in addition to being found in authority records for the named entities that occur in those descriptions. I confess that my understanding of <relations> isn’t all it could be, but, like some TS-EAD members, I’m dubious that it belongs in a descriptive context. Isn’t it enough to point out relationships among named entities within authority records? Is it intended as a stopgap until we have masses of EAC-CPF records widely available? (If so, use of <relations> in EAD will perpetuate the mixing of descriptive and authority data …) One stated value of <relations> is that it’ll support uses of Linked Open Data. Experimentation will determine this element’s fate.
  • Access term elements (those found within <controlaccess>) have been tweaked. For example, <persname> can now be parsed into multiple <part>s for name and life dates. <geographiccoordinate>, which is self-describing, is a new subelement of <geogname>. Nice.
  • The “mixed content” model in which some elements can contain both other elements and open text has been streamlined. For example, <repository> must now contain a specific element such as <corpname> rather than open text without specification of the type of name. This is good; adds to the name’s utility as an access point.
  • Some descriptive elements have been “disentangled,” such as <unitdate> no longer being available within <unittitle>. I like it; presumably a file name that consists solely of a date will now be coded as such. On the other hand, would it be a display and stylesheet problem to have no <unittitle> within a <did>?
  • Some minor elements have been deprecated (i.e., they’re going away). In general, my reaction is “good riddance.”
  • Multilingual functionalities have been expanded by adding language code and script codes to most elements. It’s now possible to encode this data inline via the new <foreign> element. “Foreign” in an international standard? Well, I wasn’t the one who had to come up with an ecumenical word, so I’m not throwing any stones.
  • Linking elements have been simplified, mostly by deprecating some that have been minimally used and by limiting where others are available. One thing does bug me: <dao> will be available only within <did>. Problematic for those who have included sample images at the head of the finding aid? Or who want to affiliate images with e.g. <scopecontent> or <bioghist> rather than within a particular <did>?

Observations? Disagreements? Worries? Please let me know what you think by leaving a comment.

 

About Jackie Dooley

Jackie Dooley leads OCLC Research projects to inform and improve archives and special collections practice. Activities have included in-depth surveys of special collections libraries in the U.S./Canada and the U.K./Ireland; leading the Demystifying Born Digital work agenda; a detailed analysis of the 3 million MARC records in ArchiveGrid; and studying the needs of archival repositories for specialized tools and services. Her professional research interests have centered on the development of standards for cataloging and archival description. She is a past president of the Society of American Archivists and a Fellow of the Society.


Harvard Library Innovation Lab: Link roundup November 5, 2014

planet code4lib - Wed, 2014-11-05 16:44

Scholars, museums, and hustlers.

Jazzsoon

A hustler hustling. I want to hustle in the library.

The Met and Other Museums Adapt to the Digital Age – NYTimes.com

Inspiration. Let visitors change digital art on the walls by choosing from our archives on their mobile devices.

Apple Picking Season Is Here. Don’t You Want More Than a McIntosh? – NYTimes.com

THE book on apples is being published. I love that the author has been editing the same WordPerfect file since 1983.

The Gentleman Who Made Scholar

Google Scholar “asks the actual authors … to identify which groups of paper are theirs”

Maine Charitable Mechanic Association’s History

I love the history of this library. If I make it to Portland I want to pop in and visit.

OCLC Dev Network: Enhancements Planned for November 9

planet code4lib - Wed, 2014-11-05 14:45

This weekend will bring a new release on November 9 that will include changes to two of our WMS APIs.

Library of Congress: The Signal: Audio for Eternity: Schüller and Häfner Look Back at 25 Years of Change

planet code4lib - Wed, 2014-11-05 14:27

The following is a guest post by Carl Fleischhauer, a Digital Initiatives Project Manager in the Office of Strategic Initiatives.

During the first week of October, Kate Murray and I participated in the annual conference of the International Association of Sound and Audiovisual Archives in Cape Town, South Africa.  Kate’s blog describes the conference.  This blog summarizes a special presentation by two digital pioneers in the audio field, who looked back at a quarter century of significant change in audio preservation, change that they had both witnessed and helped lead. 

Dietrich Schüller, photo courtesy of the Phonogrammarchiv.

The main speaker was Dietrich Schüller, who served as the director of the Phonogrammarchiv of the Austrian Academy of Sciences (the world’s first sound archive, founded in 1899) from 1972-2008.  He was a member of the Executive Board of IASA from 1975 to 1987, and is a member of the Audio Engineering Society.  He has served as UNESCO Vice-President of the Information for All Programme.  In this presentation, Schüller was joined by his colleague Albrecht Häfner (recently retired from the German public broadcaster Südwestrundfunk).

Schüller came to the field in the 1970s.  For many years, he said, the prevailing paradigm had a focus on the medium, i.e., on the tape as much as on the sound on the tape.  This approach was more or less modeled on object conservation as practiced in museums, where copies are made to serve certain needs, e.g., reproductions of paintings for books or posters.  But the copies of museum objects are not intended to replace the original. 

For sound archives, however, there is an additional problem: the limited life expectancy of the original carriers.  Magnetic tapes, for example, may deteriorate over time, or the devices that play the tape may become obsolete and unavailable.  Therefore, sound archives must make replacement copies that will carry the content forward, extending the museum paradigm to embrace the replacement copy.  But in years past, there was a catch: copies were made on analog audio tape and suffered what is called generation loss, an inevitable reduction in signal quality each time a copy is made.

As a sidebar, Schüller noted that, before the 1980s, the scientific understanding of the properties of audio and video carriers was not as well developed as, say, what was known about film.  The first relevant citation that Schüller could find, as it happens, came from the Library of Congress: A.G. Pickett and M.M. Lemcoe’s 1959 publication, Preservation and Storage of Sound Recordings.

The 1980s brought change.  There was increased interest in the chemistry of audio carriers, looking at the decay of lacquer discs and brittle acetate tapes, and the study of what is called “sticky shed syndrome,” a condition created by the deterioration of the binders in a magnetic tape.  As the 1980s ended and the 1990s began, conferences began to focus on the degradation of the materials that carry recorded sound.  Nevertheless, many archivists still sought a permanent medium–the paradigm remained.

The year 1982 saw digital audio arrive in the form of the compact disk.  Some mistakenly expected that this medium would be stable for the long term.  In the late 1980s, consumer products like DAT digital tapes (developed to replace the compact cassette) entered the professional world, even used by some broadcasters for archiving (not a good idea).  In that same period, the Audio Engineering Society formed the Preservation and Restoration of Audio Recording committee, which brought together archivists and manufacturers of equipment and tape.  This was, Schüller said, “the first attempt to explain that archives are a market.” 

An important turning point occurred in 1989, the date of a UNESCO-related meeting in Vienna associated with the 90th anniversary of the Phonogrammarchiv.  The meeting brought together the manufacturers of technical equipment for audiovisual archives and, Schüller said, was the first time that the idea of a self-checking, self-generating sound archive was discussed.  This–what we might call a digital repository today–was a design concept that featured automated copying (after initial digitization) to support long-term content management. 

The findings that emerged from the 1989 UNESCO meeting included some guiding principles:

  • Sooner or later all carriers will decay beyond retrievability
  • All types of players (playback devices) will cease being operable, partly due to lack of parts
  • Long-term preservation can be accomplished in the digital realm by subsequent lossless copying of the bits

To over-simplify, the gist was “forget about the original carriers, copy and recopy the content.”  For calendar comparison, the term migration and related digital-preservation concepts reached many of us in the United States a few years later, with the 1996 publication of Donald Waters and John Garrett’s important work Preserving Digital Information: Report of the Task Force on Archiving of Digital Information.

The years that followed the UNESCO conference saw slow and grudging acceptance of these findings by audio preservation specialists.  A meeting in Ottawa in 1990 was marked by debate, Schüller said, with some archivists skeptical of the new concepts (“this is merely utopian”) and others arguing that the concepts were a betrayal of archival principles (“the original is the original, a copy is only a copy”). 

The year 1992 brought a distraction as lossy audio compression came on the scene, with MP3 soon becoming the most prominent format.  This led the IASA Technical Committee, meeting in Canberra, to declare that lossy data reduction was incompatible with archival principles (“data reduction is audio destruction”).  By the mid-1990s, however, lower data storage costs removed some of the motivation to use lossy compression for archiving.

Albrecht Häfner at an IASA meeting in 2009, photo courtesy of IASA

As these new ideas were being digested, it became clear to specialists in the field that digital preservation management would require automated systems that operated at scale.  And here is where Albrecht Häfner added his recollections to the talk.  He said that he had become the head of the Südwestrundfunk radio archive in 1984, just as digital production for radio was starting.  He saw that digitizing the older holdings would be a good idea, supporting the broadcasters’ need to repurpose old sounds in new programs. 

At about that same time, a trade show on satellite communication gave Häfner “a lucky chance.”  The show featured systems for the storage of big quantities of digital data, including the SONY DIR-1000 Digital Instrumentation Recorder.  Häfner said that this system had been developed for satellite-based systems such as interferometry used in cosmic radio research or earth observation imaging, and was marketed to customers with very extensive data, like insurance companies or financial institutions.  “My instant idea,” Häfner said, “was that digital audiovisual data produced by an A/D converter and digital image data delivered from a satellite are both streams of binary signals: why shouldn’t this system work in a sound archive?”  He added, “This trade fair was really the event of crucial importance that determined my future activities as to sound archiving.”

By the early 1990s, Südwestrundfunk and IBM were working together on a pilot project system with a high storage capacity, low error rate, and managed lossless copying.  But when Häfner first reported on the system to IASA, he found that few colleagues embraced the idea.  Some specialists, he said, “looked upon us rather incredulously, because they considered digitization to be under-developed and had their doubts about its functionality.  Rather they preferred the traditional analog technique. At the annual IASA conference 1995 in Washington, there was slide show about the preservation of the holdings of the Library of Congress and I never heard the word digital once!”

Today, attitudes are quite different.  Schüller closed the session by returning to the title the men had used for their talk, a slogan that spotlights the completion of the paradigm shift.  We have moved, he said, “from eternal carrier to eternal file.”  That’s a great bumper sticker for audio archivists!

In the Library, With the Lead Pipe: Responsive Acquisitions: A Case Study on Improved Workflow at a Small Academic Library

planet code4lib - Wed, 2014-11-05 11:30

Fast Delivery, CC-BY David, Bergin, Emmett and Elliott (Flickr)

In Brief: Fast acquisitions processes are beneficial because they get materials into patrons’ hands quicker. This article describes one library’s experience implementing a fast acquisitions process that dramatically cut turnaround times—from the point of ordering to the shelf—to under five days, all without increasing costs. This was accomplished by focusing on three areas: small-batch ordering, fast shipping and quick processing. Considerations are discussed, including the decision to rely on Amazon for the vast majority of orders.

I’m impatient. This is especially true when it comes to getting library materials for our patrons. I’m aware of the work required to get a book into someone’s hands: it has to be discovered or suggested; ordered; shipped; received; paid for; cataloged and processed—only then is it made available on a shelf. Skip or skimp on any one of these and the item never shows up or may never be found again once it leaves the technical services area. But performing these steps well mustn’t lead to months of delay. Patrons want—and deserve—today’s top sellers today, not next season. Students and faculty are knee-deep in research this week; next month is too late and who knows when that inter-library loan will actually arrive. It’s a cliché, but our society is fast-paced and instant gratification is king. I’m not the only one who is impatient.

Background

In 2011, I took advantage of an opportunity to put into place a fast acquisitions workflow that I’d been formulating. This effort followed in the footsteps of a wide variety of libraries that have prioritized the importance of getting materials into patrons’ hands as quickly as possible (Speas, 2012). At the time I was the newly hired Director of Library Services at Columbia Gorge Community College (CGCC), in The Dalles and Hood River, Oregon. CGCC is a small community college east of Portland that encourages innovation. The timing was right to try the new acquisitions workflow: the library staff—especially fellow librarian Katie Wallis—was receptive and ready for a challenge; the business office was supportive; I was new to the college and my boss believed in my ideas. The goal was to make the entire process, from the ordering decision until the item was in a patron’s hands, as fast as possible without sacrificing quality or spending more money. Specifically, we wanted the process to take less than a week from start to finish. That is, from the point when a decision was made to acquire an item we wanted it to be on the shelf within five business days. This would be a substantial improvement over CGCC’s existing practice and faster than any acquisitions process I had experienced. Other libraries, including large ones such as the Columbus Metropolitan Library, have managed an impressive 48 hours to process materials after they were received, but I am unaware of a library attempting such a short turnaround time from the point of ordering (Hatcher, 2006). Achieving such an ambitious goal would require rethinking all aspects of the process.

In other libraries I’ve worked at, acquisitions processes generally took from several weeks to a few months from start to finish. To be sure, occasional high priority rush orders were acquired and processed quickly, but they were the exception. Acquisitions typically took a long time and I’d realized that a few points in the process were especially prone to delay. The first delay often occurs during the selection process when lists of desired items are created. Lists I created regularly sat untouched for weeks or even months. This happened either because the list was waiting for someone else to do the actual ordering or because I had become distracted and hadn’t finalized it for some reason. This seemed like an area where a lot of time could be saved. Not only that, but the very practice of ordering big batches of items contributed to slowdowns later in the process, as we’ll see.

The second slow point was more clear-cut: shipping times. In order to get our entire process down to less than a week we clearly needed reliably quick shipping. Perhaps not surprisingly, faster shipping is probably the easiest way to speed up an acquisitions process as it doesn’t involve changing workflows or priorities. However, fast shipping is often expensive. Identifying fast shipping that didn’t increase our costs would likely determine how successful our overall effort would be.

The third slow point was the bottleneck that occurred in technical services when a big order arrived that, understandably, took a long time to catalog and process. Backlogs have been prevalent in libraries for decades and I’ve worked in several where it was not uncommon for items to spend more than a month being cataloged and processed (Howarth, Moor & Sze, 2010). To be sure, prioritizing fast, efficient cataloging is essential to getting acquisition turnaround times down, but dozens of items can only be processed so quickly, especially at a small library. At larger libraries the quantities are bigger but the concept is the same: there are only so many items existing staff can reasonably process in a given day. That being the case, this bottleneck provided two areas for improvement: improving the actual technical services workflow as well as re-thinking how orders are placed so as not to be overwhelmed when they arrive.

By focusing on these three areas—immediate, small-batch ordering; fast shipping; and quick processing—we identified solutions that led to a dramatic decrease in the overall turnaround times for our acquisitions process. The three areas and our methods of addressing them are similar to those often identified in the “buy instead of borrow” philosophy of collection development. With this method libraries monitor interlibrary loan requests and purchase those items that meet set criteria, a concept subsequently expanded to ebooks and often referred to as patron-driven acquisitions (Allen, Ward, Wray & Debus-López,  2003; Nixon, Freeman & Ward, 2010). Our process at CGCC differs from these efforts in that we applied the practices to all acquisitions rather than just interlibrary loan or ebooks.

Implementing Responsive Acquisitions

The most prominent change we implemented at CGCC was to move virtually all of our ordering to Amazon. At some institutions this might require completing a sole-source justification, but that wasn’t the case at CGCC. In any event, given the benefits outlined here I suspect it would have been straightforward to justify. Prior to Amazon, we used several vendors and while they each had their strengths, they were simply too slow. In contrast, Amazon is fast and offers competitive pricing. Additionally, we paid for an Amazon Prime membership (currently $99 annually) that made Amazon really fast because it includes free two-day shipping on most items.1

Relying almost exclusively on Amazon meant that we needed to have a credit account (essentially a credit card) with Amazon that allowed us to pay our bill monthly instead of with each order. Our business office worked with us to set up open purchase orders (POs) for different types of materials as well as a process for tracking the orders and paying the monthly bills. While seemingly simple, my experience is that not all business offices can or will allow such an arrangement.

Since we were ordering from Amazon it made sense to do some of our other collection development work, such as selection, on Amazon as well. It’s worth emphasizing the distinction between the selection process—deciding which items to purchase—and actually acquiring an item. This article focuses on the latter. While our selection process certainly evolved and no doubt sped up, we continued to take our time identifying the best materials to support the college’s curriculum. The changes came once we decided to order an item, whether it took weeks or only seconds to reach that decision. Once decided, we ordered the item immediately or typically within 24 hours. Ordering was facilitated through the use of Amazon’s wish lists to organize and prioritize acquisitions. We maintained three main lists, for books, movies, and music. We used additional lists for special projects.

Amazon’s wish lists have several valuable features that assisted selection and acquisitions: they help minimize unintentional duplicate purchases by notifying you if an item was previously purchased or is already on a wish list (helpfully, if you add an item a second time it moves to the top of the list); they have built-in priority and commenting capability; they can be shared, which means anyone can create a list and share the link so that all orders can be placed from the same account (lists can also be kept private); and overall, wish lists are as easy to use as Amazon itself. While other vendors have analogous collection development tools of varying complexity, my experience is that they are less intuitive to use than Amazon’s wish lists. For example, Ingram’s ipage doesn’t automatically warn users when they’ve added a duplicate title or if that title was previously purchased. It is possible to run a duplicate ISBN search in ipage selection lists, but it’s not automatic and previously purchased items are only included if they’re still on a selection list.

A significant benefit of using Amazon with a Prime membership is that it allowed us to intentionally move away from big orders and instead make frequent, small orders; sometimes even ordering a single item at a time. Small orders are easier to process than larger orders. We generally received new items in batches of one to ten. In comparison to dozens of items in a batch, even ten items seems manageable to process quickly—certainly within a day—and it was our practice to catalog items within 24 hours. Placing small orders is mentally-freeing as well, since you don’t have to put a lot of thought into compiling a complete list of titles. Ordering small batches through Amazon is relatively efficient; it’s a simple process to place orders once you’ve logged in and selected an item, as anyone who has ordered through Amazon has experienced. The only difference as an institution is that when completing the purchase we added the appropriate purchase order number for bookkeeping purposes.

Once an item arrived a librarian handled the cataloging, which for the most part was basic copy cataloging. Once cataloged, a library assistant or a student assistant did the remainder of the processing, again within 24 hours and often the same day. At that point the item was ready to go and either added to our new book/media display or placed on the hold shelf. To recap: from the time an order was placed items typically took two days to arrive, one day to catalog and another day to finish processing; four days total. But our emphasis on completing the process quickly—coupled with small-batch ordering—meant that we regularly bested even these times. For example, cataloging and processing was often completed in a day or even a single afternoon.

In practice, if someone requested an item that we decided to purchase we would order it immediately, sometimes while the patron was still standing there. This got the process started and drove home the notion that we were listening to their needs. With an Amazon Prime membership, shipping cost is the same regardless of the size of the order. More frequently, however, if an item was identified for purchase we gave it a “highest” priority in a wish list and then one person was responsible for regularly checking the wishlists and placing an order that included all of the highest priority items. This generally happened daily. The two methods helped give us the best of both worlds: a simple way to frequently order a handful of high priority items as well as the ability to order a single item immediately.

Super-Fast Acquisitions

Two-day shipping is fast and comes standard with an Amazon Prime membership, but we regularly had items delivered even faster, as in the following day. Shipping from our previous vendors took longer, and faster shipping led to the easiest time savings of all the changes we implemented. Depending on proximity to your library and order volume, other vendors may be able to compete with Amazon’s two-day shipping, but overall I suspect Amazon has the most competitive shipping options for a majority of libraries, which is an important advantage. Whichever vendor you go with, you will need—and want—the fastest shipping you can afford.

For nearly all of our orders (90%+) the entire process took five business days or less and a majority of items were available for patrons two to three business days after the order was placed. On a number of occasions someone asked for an item and it was hand delivered to them the following afternoon. Research has shown quick turnaround times to be a driver of patron satisfaction and, indeed, at CGCC reaction to such quick turnaround times was positive (Hussong-Christian and Goergen-Doll, 2010). People were amazed that it was even possible for their item to be available so quickly because the fast shipping meant that in many cases we were faster than if they’d purchased the item from Amazon themselves. While most positive feedback CGCC received on this point was anecdotal, patron surveys from this period capture an increase in satisfaction with library services. This suggests that, overall, our efforts to improve services—including more responsive acquisitions—were working. Being responsive to our patrons’ needs and fulfilling their requests quickly helped to cement the library in their consciousness as a viable option for obtaining materials.

Things to Think About

While CGCC’s experience was a resounding success, there are a number of constraints and drawbacks to keep in mind. One prominent constraint is size. CGCC is a small academic library that spends approximately $14,000 annually on physical books and media. We seldom ordered multiple copies nor did we automate any of our acquisitions through the use of standing orders. Instead, we relied primarily on two related ways to track expenditures and ensure allocated funds would last the entire year. The first way stemmed from the fact that we knew a $14,000 budget meant we could spend a little over $1,100 per month. When we placed an order we would note basic information—date, amount, number of titles and PO number—in a simple spreadsheet that made our expenses to date easy to see. At the same time, we established multiple open purchase orders for a given category of materials (e.g. books or media), each for a portion of our total budgeted amount. For example, we might start the fiscal year with a $2,500 PO for books and a $1,000 PO for media, understanding that those amounts were expected to cover purchases for about three months. We established new POs quarterly before the existing POs were exhausted. In short, once our allocation for the year was established we determined roughly how much could be spent per month and stuck to it. If we went a little over one month we compensated for it the following month.

Other considerations range from the philosophical to the practical. On the philosophical side is the reality that some libraries may avoid supporting Amazon because of the role they’ve played in altering the bookselling landscape or concerns about supporting industry consolidation and the long term consequences of that trend. Indeed, Amazon was able to fulfill the vast majority of our orders (>95%), with most of the rest being textbooks we bought from our campus bookstore or independent films purchased directly from their distributor. While this consolidation is arguably good from the perspective of being able to efficiently fill orders from a single source, the long-term effects are hard to predict. To mention just one minor example, Amazon could change its policies governing how Prime works for institutional or high volume customers, perhaps by substantially increasing its cost or otherwise devaluing its benefits. Such negative changes should perhaps be expected if competition decreases. On the other hand, many libraries already purchase at least some materials from Amazon. A 2008 Association of American University Presses survey of academic librarians found that 31% of respondents used Amazon as their primary book distributor, a number that seems likely to have increased in the intervening years.

When implementing these changes at CGCC we initially tried to avoid using Amazon because of concerns about supporting industry consolidation as well as a desire to support more local alternatives. We looked into ordering through Powell’s Books, Portland’s well-known independent bookseller. Powell’s offers a generous discount to Oregon libraries that helps make their prices highly competitive. However, the library discount could not be combined with free shipping, meaning shipping charges must be factored in when doing a price comparison. Amazon’s combination of overall price and shipping speed—especially with an Amazon Prime membership—led us to decide it was the best value available to us and to a large extent forced our hand; as stewards of public funds we felt obligated to use the vendor that met our needs at the lowest cost. In the end, our desire and responsibility to quickly obtain competitively priced materials trumped our philosophical concerns about supporting Amazon’s industry-consolidating practices.

Cataloging practices are another consideration as proper cataloging is sometimes put forward as a necessarily slow and deliberate process. While high quality cataloging records should be valued and expected, libraries need to be careful not to sacrifice the good (i.e. fast processing) for perfect catalog records. This is not to say that error-ridden catalog records are acceptable; they aren’t. Like many things, however, there are diminishing returns when striving for perfection and immaculate records may not be worth the effort. Mary Bolin’s summation of the situation and her call for quantity as well as quality in cataloging remains as relevant today as when it was published more than 20 years ago. In short, she states how “high quality and high quantity in cataloging are not incompatible” (1991, p. 358). Moreover, Bolin opens her piece by referring to Andrew Osborn’s similar argument made a full fifty years earlier (1941). Given the prevalence of copy cataloging and the reasonably high quality records available through OCLC and some library consortiums, a skilled cataloger should be able to quickly obtain high quality records for most commonly held items, tweak them as needed and move on. If the process seems slow then the library needs to decide whether the improvements obtained from a more deliberate process are worth the delay. Libraries that rely more heavily on original cataloging will necessarily require more time per item, but they, too, should foster a culture that values quick cataloging.

Some libraries reduce the need for in-house cataloging and technical services by purchasing pre-processed materials. Amazon launched its own processing program for libraries in 2006 (Amazon, 2006), but it apparently never took off; an Amazon representative I spoke with said it was discontinued in 2007, a mere year after it started. At CGCC, the vast majority of items we acquired were broadly held, and good-quality catalog records were generally available from OCLC or our consortium. As noted above, a librarian imported the records and made changes as necessary. We strove to catalog items within 24 hours of their arrival, with an additional 24 hours allotted for further processing, a target we typically met or beat. While the evidence supporting this practice is anecdotal, CGCC’s circulation statistics increased, which suggests, at a minimum, that the overall benefits of the changes outweighed the costs, including any costs stemming from the emphasis on quick cataloging.
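As an illustration of how a library might track a turnaround target like ours, the short Ruby sketch below checks a handful of item records against a 48-hour received-to-shelf-ready window; the records themselves are invented for the example and are not drawn from CGCC’s data.

    require 'date'

    # Check invented item records against a 48-hour turnaround target.
    TARGET_HOURS = 48

    items = [
      { barcode: '30001', received: DateTime.parse('2014-10-06 10:00'),
        shelf_ready: DateTime.parse('2014-10-07 15:30') },
      { barcode: '30002', received: DateTime.parse('2014-10-06 10:00'),
        shelf_ready: DateTime.parse('2014-10-09 09:00') }
    ]

    met, missed = items.partition do |item|
      hours = (item[:shelf_ready] - item[:received]) * 24 # DateTime difference is in days
      hours <= TARGET_HOURS
    end

    puts "#{met.size} of #{items.size} items met the #{TARGET_HOURS}-hour target"
    missed.each { |item| puts "Over target: #{item[:barcode]}" }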

Another consideration is Amazon’s frustrating practice of not consistently including packing slips in packages (forget drone delivery; consistent packing slips would make me a happy bookkeeper). When a slip was missing, we needed to look up item prices so that we could enter their value into our library management system, as well as print our order confirmations for documentation purposes. Something else to be aware of is that invoices are calculated per shipment, not per order, which further complicates bookkeeping. For example, an order you place for $200 may be shipped in three separate packages, resulting in invoices for $90, $60 and $50 to reconcile. Neither of these issues (missing packing slips and per-shipment invoices) is hard to handle, but they are added wrinkles. All told, the bookkeeping was straightforward, and it took less than an hour per month to organize the paperwork for the business office, which paid the bills.
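For libraries that want a systematic check, a short script can group per-shipment invoices back under their originating orders and flag any totals that do not reconcile. The Ruby sketch below illustrates the logic with made-up order numbers and amounts; it is not a description of CGCC’s actual bookkeeping system.

    # Reconcile per-shipment invoices against order totals (invented data).
    orders = { '102-0001' => 200.00, '102-0002' => 57.50 }

    invoices = [
      { order: '102-0001', amount: 90.00 },
      { order: '102-0001', amount: 60.00 },
      { order: '102-0001', amount: 50.00 },
      { order: '102-0002', amount: 57.50 }
    ]

    invoices.group_by { |invoice| invoice[:order] }.each do |order, shipments|
      invoiced = shipments.reduce(0) { |total, invoice| total + invoice[:amount] }
      status = (invoiced - orders[order]).abs < 0.01 ? 'reconciled' : 'needs review'
      puts format('%s: ordered $%.2f, invoiced $%.2f (%s)', order, orders[order], invoiced, status)
    end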

Finally, while I like the simplicity of Amazon’s wish lists and its competitive prices, I can envision that libraries with a more robust materials budget may find that Amazon’s wish lists aren’t up to the task of large-volume ordering, or that their existing vendor’s discounts are superior to Amazon’s prices.

Amazon Alternatives

The most prominent change we implemented at CGCC was to move practically all of our ordering to Amazon. This was a positive move because it helped us quickly address two of our problem areas: slow shipping and the need to process large batches of items. With that said, I see Amazon as a tool that we used to help speed up our acquisitions process; other libraries may find different tools that work as well or better for their specific circumstances. The point to emphasize is that your library should want and expect fast shipping along with the ability to place orders in small batches at a low cost, with the goal of getting items into your patrons’ hands as quickly as is logistically and financially possible.

Conclusion

CGCC’s responsive acquisitions workflow was a positive change for patrons, the library and the college as a whole. Most importantly, patrons received their items weeks faster than they otherwise would have. For the library, the faster workflow brought benefits ranging from happier patrons to a reduced need for space in technical services to store items awaiting processing. At the same time, these benefits came without added cost, either in the prices we paid or in staff time and resources.

Implement Fast Acquisitions in Three Steps
  1. Commit to making the process fast and efficient; get staff buy-in.
  2. Identify and use the fastest shipping you can afford, either from your existing vendor or from alternatives that offer fast shipping and similar levels of service.
  3. Review cataloging processes with an eye towards efficiencies. Determine how many items can reasonably be processed in a day and order roughly that many (or fewer) items at a time.

Acknowledgements

I want to thank everyone who read this article and provided feedback and/or encouragement: my reviewers Rachel Howard at University of Louisville and Hugh Rundle with the City of Boroondara for their time and thoughtful comments; Erin Dorney and the other editors at Lead Pipe for their guidance and support; Ellen Dambrosio and Iris Carroll at Modesto Junior College for reading an early draft and encouraging me to seek a wider audience for it; and Katie Wallis at Columbia Gorge Community College for her help implementing a super fast acquisitions process that far exceeded my expectations. Thank you all.

References

Allen, Megan, Suzanne M. Ward, Tanner Wray and Karl E. Debus-López (n.d.). “Patron-Focused Services: Collaborative Interlibrary Loan, Collection Development and Acquisitions.” Digital Repository at the University of Maryland. Retrieved from http://drum.lib.umd.edu/

Amazon (2006). “Amazon.com Announces Library Processing for Public and Academic Libraries Across the United States.” Amazon Media Room. Retrieved from http://phx.corporate-ir.net/phoenix.zhtml?p=irol-mediahome&c=176060

Association of American University Presses (2008). “Marketing to Libraries: 2008 Survey of Academic Librarians.” AAUPNet. Retrieved from www.aaupnet.org

Bolin, Mary (1991). “Make a Quick Decision in (Almost) All Cases: Our Perennial Crisis in Cataloging.” The Journal of Academic Librarianship, 16(6): 357-361.

Hatcher, Marihelen (2006). “On the Shelf in 48 Hours.” Library Journal, 131(15): 30-31.

Howarth, Lynne C., Les Moor and Elisa Sze (2010). “Mountains to Molehills: The Past, Present, and Future of Cataloging Backlogs.” Cataloging & Classification Quarterly, 48(5): 423-444.

Hussong-Christian, Uta and Kerri Goergen-Doll (2010). “We’re Listening: Using Patron Feedback to Assess and Enhance Purchase on Demand.” Journal of Interlibrary Loan, Document Delivery & Electronic Reserve, 20(5): 319-335.

Nixon, Judith M., Robert S. Freeman and Suzanne M. Ward (2010). “Patron-Driven Acquisitions: An Introduction and Literature Review.” Collection Management, 35(3-4): 119-124.

Osborn, Andrew D. (1941). “The Crisis in Cataloging.” Library Quarterly, 11(4): 393-411.

Speas, Linda (2012). “Getting New Items into the Hands of Patrons: A Public Library Efficiency Evaluation.” Public Libraries Online, 51(6).

  1. As a bonus, up to three other Amazon accounts that share the same address as the Prime member can also take advantage of the free two-day shipping, a benefit that was much appreciated by other departments on campus.
