Feed aggregator

OCLC Dev Network: Homegrown Reclamation with WorldCat Metadata API

planet code4lib - Thu, 2015-01-22 15:30

Check out Sarah Johnston's article in the Code4Lib Journal on using the WorldCat Metadata API to do a reclamation project.

pinboard: The Code4Lib Journal – Query Translation in Europeana

planet code4lib - Thu, 2015-01-22 06:58
RT @kiru: My article, 'Query Translation in Europeana' is just published at the new #code4lib magazine #solr #europeana #api

Library Tech Talk (U of Michigan): How Easy To Read is Your Web Content?

planet code4lib - Thu, 2015-01-22 00:00
The Readability Test Tool can help web content creators make pages easier to read.

DuraSpace News: Webinar Recording Available: "Real Life Experiences with Hosted Institutional/Digital Repository Services."

planet code4lib - Thu, 2015-01-22 00:00

On January 22, 2015, Stephanie Davis-Kahl, Scholarly Communications Librarian & Associate Professor at the Ames Library, Illinois Wesleyan University, and Oceana Wilson, Director of Library and Information Services at Crossett Library, Bennington College, presented “Real Life Experiences with Hosted Institutional/Digital Repository Services.” During this webinar, they shared how their institutions strive to meet the goals of access and long-term preservation through a hosted service.

DuraSpace News: The DSpace 5 Mirage 2 User Interface at Stellenbosch University

planet code4lib - Thu, 2015-01-22 00:00

Winchester, MA – Hilton Gibson of Stellenbosch University, home of the SUN Scholar Research Repository, has provided documentation "for those itching to use the Mirage 2" user interface released earlier this week as part of DSpace version 5.0.

OCLC Dev Network: OCLC LC Name Authority File (LCNAF) Now Available

planet code4lib - Wed, 2015-01-21 22:00

The LC Name Authority File (LCNAF) data source issue has been resolved and the service is now available. We apologize for any inconvenience. 

District Dispatch: Money does not solve everything (including copyright)

planet code4lib - Wed, 2015-01-21 21:58

We’re taking part in Copyright Week, a series of actions and discussions supporting key principles that should guide copyright policy. Every day this week, various groups are taking on different elements of the law, addressing what’s at stake and what we need to do to make sure that copyright promotes creativity and innovation.

Today’s installment is everyone’s favorite copyright exception, fair use. There’s much that can be said about fair use, but I will use a true story (in order to ultimately focus your attention on something else).

Early in my career as a copyright librarian, I participated in a panel on fair use with other librarians. This was quite a while ago, and librarians were not as familiar with fair use as they are now. There was a lot of griping and groaning from the audience: “Fair use is too hard. What if I go to jail? It’s easier not to bother.”

One individual suggested that if libraries had more funding, there would be no copyright problems because we could afford to pay permission fees. When other people nodded their heads in agreement, I was shocked, shocked that librarians seemed so reluctant to use the most prominent of all the hard-fought rights and privileges in the Copyright Act to provide maximum access to information for their users.

I explained that even if library budgets were bottomless, there is no reason to pay a permission fee when your use is fair. Indeed, even if a license is available, you do not have to use it when you have before you a fair use.

Fair use has been called a safety valve, a transformative use, ambiguous, and a necessary exception to copyright law to enable research, commentary, criticism, innovation and learning. I believe fair use is all of these things, but in library-speak, I like to talk about fair use as enabling the “free flow of information.” Not free—we pay millions for content every year—but free flow.

Knowledge and creativity cannot advance if people are unable to build on the work of others. So one needs access to information, but also one needs a creative process that is not burdened by fits and starts, censorship or fear. Fair use is the exception that most supports freedom of inquiry and expression. Now we’re talking about intellectual freedom and the First Amendment, top dog principles for librarians.

We must never accept mandatory licensing regimes, because by their very existence, fair use is weakened.  By accepting such licensing regimes, you are agreeing to pay even when uses are fair. You will pay a fee that does not necessarily go to the original creator or rights holder. You will have to ask for permission for any use of a protected work. You will agree to inhibit the spontaneous flow of inquiry, innovation, and learning, and ultimately limit the creation of new works and new knowledge that benefits the public. And you will be compromising the First Amendment.

The moral of this story is that fair use enables intellectual freedom. Of course, pay the royalty when your use is not fair, but be wary of the mandatory licensing systems that are supposed to make your life easier because you won’t have to think about fair use.

The post Money does not solve everything (including copyright) appeared first on District Dispatch.

OCLC Dev Network: Fix for VuFind WorldCat Module

planet code4lib - Wed, 2015-01-21 21:00

A fix for the VuFind WorldCat module author search performance issue is now available.

LITA: LITA at ALA Midwinter 2015

planet code4lib - Wed, 2015-01-21 20:31

If you’re making the hop to Chicago for ALA Midwinter 2015 then check out all the great LITA events.

Get full details at the LITA Highlights at 2015 ALA Midwinter Meeting web page.

Friday, January 30, 2015

There will be two preconference workshops from 8:30 am to 4:00 pm at McCormick Place in Chicago, IL.

  • Introduction to Practical Programming with Elizabeth Wickes, University of Illinois at Urbana-Champaign
  • From Lost to Found: How User Testing Can Improve the User Experience of Your Library Website with Kate Lawrence and Deirdre Costello, sponsored by EBSCO Information Services

Costs for LITA Members start at $235 and you can still register at LITA’s Midwinter Workshops.

Throughout the Conference

LITA Committees and Interest Groups will be holding timely and vibrant discussions on topics such as linked data, Drupal, games, coding, data-driven decision making, open source projects, user experience, library technology projects, and more. Check out the Sessions web page as well as the LITA specific Conference Scheduler for more details.

Sunday, February 1, 2015

Don’t miss the Top Technology Trends Discussion Session from 10:30 to 11:30 am in McCormick Place West, W183a. The conference panelists and their suggested trends will include:

  • Moderator: Karen Schneider
  • Marshall Breeding, Independent Consultant – Empowering underserved libraries through technology; discovery beyond the library.
  • Todd Carpenter, Executive Director of NISO – Infrastructure demands of a growing or majority OA ecosystem; balancing patron privacy and using data to improve services.
  • Casey McCoy, Program Coordinator at Lincolnwood Public Library District – Tech programming for youth, esp. girls; app-based home technology.
  • Willie Miller, Librarian at Indiana University – Purdue University Indianapolis – Gamification; e-course packs.
  • Carli Spina, Emerging Technologies and Research Librarian at Harvard Law School Library – Universal design; beacons.

More information about the program is available at the Top Tech Trends web site.

The LITA Open House, from 4:30 to 5:30 pm in McCormick Place West, W470b, is an opportunity for current and prospective members to talk with Library and Information Technology Association (LITA) leaders, committee chairs, and interest group participants.

LITA Happy Hour will run from 6:00 to 8:00 pm at Lizzie McNeill’s Irish Pub, 400 N McClurg Court, Chicago, IL 60611, located one block east of the Sheraton Chicago Hotel and Towers (301 East North Water Street). Join LITA members from around the country for networking, good cheer, and great fun! Expect lively conversation and excellent drinks. Cash bar. Bring your ALAMW conference badge to receive a 25% discount.

Monday, February 2, 2015

Attend the LITA Town Meeting from 8:30 to 10:00 am in McCormick Place West, W180, and join your fellow LITA members for breakfast and a discussion about LITA’s strategic path. The meeting will focus on how LITA’s goals–collaboration and networking; education and sharing of expertise; advocacy; and infrastructure–help our organization serve you and the broader library community. This Town Meeting will help us turn those goals into plans that will guide LITA going forward.

Patrick Hochstenbach: Homework assignment #3 Sketchbookskool

planet code4lib - Wed, 2015-01-21 19:45

LITA: Jobs in Information Technology: January 21

planet code4lib - Wed, 2015-01-21 18:44

New vacancy listings are posted weekly on Wednesday at approximately 12 noon Central Time. They appear under New This Week and under the appropriate regional listing. Postings remain on the LITA Job Site for a minimum of four weeks.

New This Week

Collection Strategist Librarian, University of California, Davis, Davis, CA

Manager, IT, Yale University, New Haven, CT

Visit the LITA Job Site for more available jobs and for information on submitting a job posting.


Code4Lib Journal: Editorial Introduction: A Brand New Year

planet code4lib - Wed, 2015-01-21 17:35
by Terry Reese Happy New Year!  Issue 27 marks the first issue of 2015, a fresh start to a new year.  And an interesting year I think it will be, for both the Code4Lib community and the Journal; especially the Journal as we embark on our first special issue with a guest editorial committee and […]

Code4Lib Journal: Digital forensics on a shoestring: a case study from the University of Victoria

planet code4lib - Wed, 2015-01-21 17:35
While much has been written on the increasing importance of digital forensics in archival workflows, most of the literature focuses on theoretical issues or establishing best practices in the abstract. Where case studies exist, most have been written from the perspective of larger organizations with well-resourced digital forensics facilities. However organizations of any size are increasingly likely to receive donations of born-digital material on outdated media, and a need exists for more modest solutions to the problem of acquiring and preserving their contents. This case study outlines the development of a small-scale digital forensics program at the University of Victoria using inexpensive components and open source software, funded by a $2000 research grant from the Canadian Association of Research Libraries (CARL).

Code4Lib Journal: Homegrown WorldCat Reclamation: Utilizing OCLC’s WorldCat Metadata API to Reconcile Your Library’s Holdings

planet code4lib - Wed, 2015-01-21 17:35
OCLC’s WorldCat Metadata API now allows libraries to set and delete their OCLC holdings without using Connexion or Batch Services. St. Olaf College Libraries had used up our “one free turn” at a reclamation several years ago. Unfortunately due to the normal inconsistencies that accumulate over time, as well as problems with holdings being set for us by external entities, we had reached a point where we were once again in need of reconciling our holdings with OCLC's. This time we wanted to accomplish the reclamation in a low-cost way that would also allow us to have more local control over the process. Using the WorldCat Search API and Metadata API in tandem, we first retrieved all OCLC numbers with holdings currently set and deleted these holdings with a fairly simple Perl script. We then pulled from our local catalog all the OCLC numbers for which we wanted holdings set and updated again using the Metadata API. The result, in the words of one of our catalogers, is that “For the first time since I got here, 15 years ago, I feel our holdings finally reflect what we really own.” In this article, I will discuss the issues to consider if you wish to do a similar project with your OCLC holdings, share the Perl scripts I wrote for processing, and reflect on the pros and cons of the process overall.
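The delete-then-set flow the abstract describes can be sketched roughly in Python (the author used Perl, and those scripts accompany the article). The endpoint URL and parameter names below are assumptions modeled loosely on the Metadata API's institution-holdings resource; consult OCLC's current documentation before adapting this, and note that the sketch only plans the requests rather than sending them.

```python
from urllib.parse import urlencode

# Hypothetical endpoint and parameters; verify against OCLC's current
# WorldCat Metadata API documentation before using.
HOLDINGS_ENDPOINT = "https://worldcat.org/ih/data"

def build_holding_request(oclc_number, action):
    """Build a (method, url) pair for setting or deleting an institution
    holding on one OCLC number. "set" maps to POST and "delete" to DELETE,
    following the REST convention the Metadata API uses for holdings."""
    if action not in ("set", "delete"):
        raise ValueError("action must be 'set' or 'delete'")
    method = "POST" if action == "set" else "DELETE"
    query = urlencode({"oclcNumber": oclc_number})
    return method, f"{HOLDINGS_ENDPOINT}?{query}"

def plan_reclamation(remote_holdings, local_numbers):
    """Mirror the article's two passes: first delete every holding currently
    set in WorldCat, then set a holding for every OCLC number pulled from
    the local catalog."""
    deletes = [build_holding_request(n, "delete") for n in sorted(remote_holdings)]
    sets = [build_holding_request(n, "set") for n in sorted(local_numbers)]
    return deletes + sets
```

In a real run, each planned request would be sent with OCLC's required authentication and throttled to respect API rate limits; the two input sets come from the WorldCat Search API and the local catalog export, respectively.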

Code4Lib Journal: Using Google Tag Manager and Google Analytics to track DSpace metadata fields as custom dimensions

planet code4lib - Wed, 2015-01-21 17:35
DSpace can be problematic for those interested in tracking download and pageview statistics granularly. Some libraries have implemented code to track events on websites and some have experimented with using Google Tag Manager to automate event tagging in DSpace. While these approaches make it possible to track download statistics, granular details such as authors, content types, titles, advisors, and other fields for which metadata exist are generally not tracked in DSpace or Google Analytics without coding. Moreover, it can be time consuming to track and assess pageview data and relate that data back to particular metadata fields. This article will detail the learning process of incorporating custom dimensions for tracking these detailed fields including trial and error attempts to use the data import function manually in Google Analytics, to automate the data import using Google APIs, and finally to automate the collection of dimension data in Google Tag Manager by mimicking SEO practices for capturing meta tags. This specific case study refers to using Google Tag Manager and Google Analytics with DSpace; however, this method may also be applied to other types of websites or systems.
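The meta-tag-capture idea can be illustrated outside Google Tag Manager with a short Python sketch. In production, a GTM custom JavaScript variable would read these same values from the DOM and push them into Google Analytics custom dimensions; the tag names citation_author and citation_title are assumptions based on the Google Scholar meta tags DSpace item pages commonly expose.

```python
from html.parser import HTMLParser

class MetaTagCollector(HTMLParser):
    """Collect <meta name="..."> tags from an HTML page, i.e. the same
    values a Google Tag Manager DOM variable would capture."""
    def __init__(self, wanted):
        super().__init__()
        self.wanted = set(wanted)
        self.found = {}

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        a = dict(attrs)
        name = a.get("name")
        if name in self.wanted:
            # A repeated tag (e.g. multiple authors) accumulates as a list.
            self.found.setdefault(name, []).append(a.get("content", ""))

def collect_meta(html, wanted=("citation_author", "citation_title")):
    """Return a dict mapping each wanted meta tag name to its content values."""
    parser = MetaTagCollector(wanted)
    parser.feed(html)
    return parser.found
```

Feeding this an item page's HTML yields the author, title, and similar fields ready to attach to a pageview hit as dimension values.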

Code4Lib Journal: Using SemanticScuttle for managing lists of recommended resources on a library website

planet code4lib - Wed, 2015-01-21 17:35
Concordia University Libraries has adopted SemanticScuttle, an open source and locally-hosted PHP/MySQL application for social bookmarking, as an alternative to Delicious for managing lists of recommended resources on the library’s website. Two implementations for displaying feed content from SemanticScuttle were developed: (1) using the Google Feed API and (2) using direct SQL access to SemanticScuttle’s database.
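The second implementation, direct SQL access, might look like the following sketch. The sc_bookmarks table and its column names are assumptions based on the Scuttle-style schema SemanticScuttle inherits, so verify them against your installation; SQLite stands in here for the MySQL database a real deployment uses, purely to keep the example self-contained.

```python
import sqlite3

def latest_public_bookmarks(conn, limit=5):
    """Fetch the newest public bookmarks (bStatus = 0 is assumed to mean
    'public' in the Scuttle-style schema) for display on a library page."""
    cur = conn.execute(
        "SELECT bTitle, bAddress FROM sc_bookmarks "
        "WHERE bStatus = 0 ORDER BY bDatetime DESC LIMIT ?",
        (limit,),
    )
    return cur.fetchall()

# Build a tiny stand-in database so the query can be demonstrated.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE sc_bookmarks "
    "(bTitle TEXT, bAddress TEXT, bDatetime TEXT, bStatus INTEGER)"
)
conn.executemany(
    "INSERT INTO sc_bookmarks VALUES (?, ?, ?, ?)",
    [("Older", "http://example.org/a", "2015-01-01", 0),
     ("Newer", "http://example.org/b", "2015-01-20", 0),
     ("Private", "http://example.org/c", "2015-01-21", 2)],
)
rows = latest_public_bookmarks(conn, limit=2)
```

Bypassing the feed layer this way avoids the dependency on the since-retired Google Feed API, at the cost of coupling the website code to SemanticScuttle's database schema.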

Code4Lib Journal: Training the Next Generation of Open Source Developers: A Case Study of OSU Libraries & Press’ Technology Training Program

planet code4lib - Wed, 2015-01-21 17:35
The Emerging Technologies & Services department at Oregon State University Libraries & Press has implemented a training program for their technology student employees on how and why they should engage in Open Source community development. This article will outline what they've done to implement this program, discuss the benefits they've seen as a result of these changes, and talk about what they viewed as necessary to build and promote a culture of engagement in open communities.

Code4Lib Journal: Communication Between Devices in the Viola Document Delivery System

planet code4lib - Wed, 2015-01-21 17:35
Viola is a newly developed document delivery system that handles incoming and outgoing requests for printed books, articles, sharing electronic resources, and other document delivery services on the local level in a library organisation. An important part of Viola is the stack fetching Android application that enables librarians to collect books in the open and closed stacks in an efficient manner using a smartphone and a Bluetooth connected portable printer. The aim of this article is to show how information is transferred between systems and devices in Viola. The article presents code examples from Viola that use current .NET technologies. The examples span from the creation of high-level REST-based JSON APIs to byte array communication with a Bluetooth connected printer and the reading of RFID tags. Please note that code examples in this article are for illustration purposes only. Null checking and other exception handling has been removed for clarity. Code that is separated in Viola for testability and other reasons has been brought together to make it more readable.

Code4Lib Journal: Query Translation in Europeana

planet code4lib - Wed, 2015-01-21 17:35
Europeana – a database containing European digital cultural heritage objects – recently introduced query translation in order to aid users in searching the collections regardless of language. The user enters query terms, and the portal searches for those terms in multiple languages. This article discusses the technical details of query translation with the aim of assisting other projects that wish to implement similar features.

DPLA: Libraries and Copyright: Big Wins in 2014 and Big Challenges Ahead for 2015

planet code4lib - Wed, 2015-01-21 16:41

In terms of copyright, 2014 was a big year for libraries. The highlights were the release of decisions in two major copyright cases on appeal, largely in favor of library uses and affirming the applicability of fair use to certain aspects of digitization. Other developments, such as the release of a new code of best practices in fair use of collections containing orphan works, have created opportunities for libraries and other memory institutions to make further progress on addressing copyright obstacles to digital access to their collections.

Before the first open committee call of the year for the DPLA Legal Committee (later today at 2:00 pm Eastern), now is a good time for a short recap of what we’ve seen over the last year and what we can expect in 2015.

To register for today’s open Legal Committee call at 2:00 PM Eastern, click here.

The first and maybe the most important development of 2014 comes in Authors Guild v. HathiTrust, a major copyright case before the Second Circuit Court of Appeals that was decided in June in favor of the HathiTrust Digital Library (a DPLA content hub with over 13 million digitized volumes). The suit was filed by the Authors Guild in 2011, largely in response to HathiTrust’s efforts to make its collections of orphan works more accessible. In its complaint, the Authors Guild objected to HathiTrust’s digitization project for that and several other reasons.

As it turned out, orphan works were not much of an issue in that case. The courts concluded that those claims were not ripe for adjudication because HathiTrust stopped its orphan works program and has no plans to continue it. Instead, most of the lawsuit focused on other uses of the HathiTrust collection, such as creating an indexed search of the contents of digitized books (and related research uses), full-text access for the blind and print-disabled, and preservation in digital formats.

HathiTrust initially prevailed in the suit before the district court for the Southern District of New York, with that court finding that all contested uses qualified as “fair use” under the Copyright Act. In June 2014 HathiTrust won an even bigger victory when the Second Circuit Court of Appeals largely affirmed the ruling of the lower court.

While the HathiTrust case addressed only a subset of the copyright issues raised by library mass digitization, it still represents a major positive development for digital libraries like those that contribute to DPLA to enhance access to their collections. The case makes clear that library digitization for purposes of enhanced search and for full-text use by the blind are acceptable under fair use. While this short summary can’t do justice to the importance of the case, the Association of Research Libraries has done a great job explaining it. One of the best resources is a seven-page document prepared by Jonathan Band (ARL counsel) titled “What Does the HathiTrust Decision Mean for Libraries?”

The second major decision of 2014 came in Cambridge University Press v. Becker, which was decided by the Eleventh Circuit Court of Appeals. That case began in 2008 when Cambridge University Press, Oxford University Press and Sage Publications sued Georgia State University over faculty use of book excerpts scanned for use in electronic course reserves. After a lengthy trial, the district court in that case issued a painstaking, 300+ page opinion detailing why, in the vast majority of instances, Georgia State e-reserves practices fell within the bounds of copyright’s fair use doctrine. While not everything in the district court decision represented a positive development for libraries, it was still an important victory, especially on more generally-applicable issues of the weight and importance of the nonprofit, educational use of the work in the fair use analysis, and the weight that the court placed on whether a digital licensed copy was made available by the publishers (if no license was offered, the court generally found that that favored Georgia State’s fair use assertion).

The publishers in that case appealed to the Eleventh Circuit Court of Appeals, and in November that court issued its decision. Formally, the Eleventh Circuit reversed the district court. But in the Eleventh Circuit’s reasoning for the reversal, it was clear that the vast majority of the principles contained within the district court’s analysis–for example, the importance of the nonprofit, educational purpose of the use–were preserved. Like HathiTrust, this decision has a lot to unpack that would be impossible to review here. The best summary and analysis I have seen comes from University of Minnesota Copyright Librarian Nancy Sims. You should know that this case is still active; the Eleventh Circuit recently rejected a petition to rehear the case, but it remains possible for either side to petition the U.S. Supreme Court to review the case.

Beyond litigation, one major development worth noting is the release of a new Statement of Best Practices in Fair Use of Collections Containing Orphan Works for Libraries, Archives, and Other Memory Institutions (disclaimer: I was on the team that helped draft this document). That document, endorsed by DPLA and several of its hubs, along with many other leading national libraries, archives, and other memory institutions, takes aim at helping libraries and archives address the longstanding issue of what to do with orphan works–i.e., works for which copyright owners are difficult or impossible to locate–especially when they are embedded in larger collections that libraries seek to digitize.

The Best Practices was released in December 2014 and discussed in an ALA-sponsored webinar. In February during Fair Use Week (February 23-27), the team that helped draft these best practices from American University and UC Berkeley will be hosting an event in Washington, D.C. (also webcast live) explaining the document, discussing situations in which it might be most useful, and fielding questions via a panel of experts about its application on the ground. More details to come on that.

While 2014 was a big year for library copyright issues in the courts, it also contained the beginnings of several important discussions about legislative efforts to revise copyright law. Over the course of the year the House Subcommittee on Courts, Intellectual Property & the Internet held a series of hearings on the Copyright Act with an eye toward possible revision. Among other topics, the subcommittee addressed the scope of fair use, and preservation and reuse of copyrighted works. In addition, the U.S. Copyright Office and the U.S. Patent & Trademark Office both held roundtable meetings addressing possible areas of legislative revision. More broadly, a major conference hosted by UC Berkeley’s Center for Law and Technology, titled “The Next Great Copyright Act,” addressed an even wider range of potential Copyright Act revisions.

If I had to pick the biggest challenge for the upcoming year for libraries in this area, it would be continuing library engagement with legislative and administrative efforts to propose changes to the text of the Copyright Act itself. Similar hearings and other studies are likely to continue throughout 2015. Add on top of that efforts ramping up to create a library and archive copyright exceptions treaty through the World Intellectual Property Organization, and it will be a busy and difficult year in which librarians must make concerted efforts to have their voices heard on how legislation should be crafted to ensure better online access to library collections. My hope is that the DPLA, along with many of the other organizations such as ALA and ARL, can continue to help keep us informed about issues like this on which librarians should speak up and present a positive agenda for reform.

