
LibUX: On the User Experience of Ebooks

planet code4lib - Thu, 2015-09-24 01:57

So, when it comes to ebooks I am in the minority: I prefer them to the real thing. The aesthetic or whats-it about the musty trappings of paper and ink or looming space-sapping towers of shelving just doesn’t capture my fancy. But these are precisely the go-to attributes people wax poetic about — and you can’t deny there’s something to it.

In fact, beyond convenience ebooks don’t have much of an upshot, and they are certainly not as convenient as they could be. All the storytelling power of the web is lost on a stubbornly static industry that drags its feet precisely where that power should be most advantageous. Write in and gloss a printed book, but not an ebook; embellish a narrative with animation at the New York Times (a newspaper), but not in an ebook; share, borrow, copy, paste, link to anything but an ebook.

Note what is lacking when it comes to ebooks’ advantages: the user experience. True, some people certainly prefer an e-reader (or their phone or tablet), but a physical book has its advantages as well: relative indestructibility, and little regret if it is destroyed or lost; tangibility, both in feel and in the ability to notate; the ability to share or borrow; and, of course, the fact that a book is an escape from the screens we look at nearly constantly. At the very best, the user experience comparison (excluding the convenience factor) is a push; I’d argue it tilts toward physical books.


All things being equal, where the ebook lacks could be made up for by its no-cost distribution, but the rarely discounted ebook is often the more expensive option for those of us in libraries or higher ed – if not substantially so, given that readers neither own their ebook-as-licensed-software nor can legally migrate it to a device, medium, or format where the user experience can be improved.

This aligns with findings which show that while ebook access improves (phones, etc.), ebook reading doesn’t meaningfully pull away from the reading of print books.

Recent hullabaloo involving the ebookalypse may be a misreading that ignores data from sales of ebooks without ISBNs (loathed self-publishers), a market Amazon dominates because of the ubiquity of the Kindle and its superior bookstore. There, big-publisher books are forced to a fixed price, while an Amazon-controlled interface lets authors easily publish good content on the cheap. We are again reminded that investing in even a slightly better user experience than everyone else is good business:

  • the price of ebooks is competitively low – or even free;
  • ebooks, through Kindles or the Kindle App, can be painlessly downloaded in a way that, while largely encumbered by DRM, doesn’t require inconvenient additional software or – worst – reading on a computer;
  • and features like WhisperSync enhance the reading experience in a way that isn’t available in print.

Other vendors, particularly those available to libraries, have so far been able to provide only a merely adequate user experience, which doesn’t do much for their desirability to either party.

The post On the User Experience of Ebooks appeared first on LibUX.

District Dispatch: ALA Congratulates Dr. Kathryn Matthew

planet code4lib - Wed, 2015-09-23 22:35

Dr. Kathryn Matthew, Director, Institute of Museum and Library Services.

U.S. Senate confirms Matthew as Director of the Institute of Museum and Library Services

Washington, DC— In a statement, American Library Association (ALA) President Sari Feldman commented on the United States Senate’s confirmation of Dr. Kathryn K. Matthew as director of the Institute of Museum and Library Services (IMLS).

“We commend President Obama on Dr. Matthew’s appointment and the U.S. Senate for her confirmation. Communities across the nation will greatly benefit from her experience in bringing museums and libraries and the sciences together as resources readily accessible to families, students and others in our society.”

The Institute, an independent United States government agency, is the primary source of federal support for the nation’s 123,000 libraries and 35,000 museums.

“I am honored to have been nominated by President Barack Obama and to have received the confidence from the Senate through their confirmation process. I look forward to being appointed to serve as the fifth Director of the Institute of Museum and Library Services,”  Dr. Matthew said. “I am eager to begin my work at IMLS to help to sustain strong libraries and museums that convene our communities around heritage and culture, advance critical thinking skills, and connect families, researchers, students, and job seekers to information.”

Dr. Matthew will serve a four-year term as the Director of the Institute. The directorship of the Institute alternates between individuals from the museum and library communities.

ALA appreciates the exemplary service of Maura Marx, who has served as IMLS Acting Director since January 19, 2015, following the departure of IMLS Director Susan H. Hildreth at the conclusion of her four-year term. Marx is currently the deputy director for library services. ALA has enjoyed a good, close and collaborative relationship with Hildreth and with Anne-Imelda Radice, who served as IMLS Director from 2006 to 2010, and looks forward to a similarly strong and cooperative relationship with Dr. Matthew.

Dr. Matthew’s career interests have centered on supporting and coaching museums and other nonprofits, large and small, that are focused on propelling their programs, communications, events, and fundraising offerings to a higher level of success. Her professional experience spans the breadth of the diverse museum field. Through her many different leadership positions, she brings to the agency a deep knowledge of the educational and public service roles of museums, libraries, and related nonprofits.

Trained as a scientist, Dr. Matthew’s 30-year museum career began in curatorial, collections management, and research roles at the Academy of Natural Sciences in Philadelphia and Cranbrook Institute of Science. She worked with a variety of collections including ornithology, paleontology, fine arts, and anthropology. She then moved into management, exhibits and educational programs development, and fundraising and marketing roles, working at the Santa Barbara Museum of Natural History, the Virginia Museum of Natural History, The Nature Conservancy, the Historic Charleston Foundation, and The Children’s Museum of Indianapolis. She was also a science advisor for the IMAX film “Tropical Rainforest,” produced by the Science Museum of Minnesota.

In addition she was Executive Director of the New Mexico Museum of Natural History and Science, a state-funded museum. In that role she worked with corporations, federal agencies, public schools, and Hispanic and Native American communities to offer STEM-based programs. “Proyecto Futuro” was a nationally-recognized program that began during her tenure.

Dr. Matthew has worked on three museum expansion projects involving historic buildings: Science City at Union Station, in Kansas City, Missouri, and the Please Touch Museum at Memorial Hall and The Chemical Heritage Foundation, both in Philadelphia.

Over her 30-year career, she has been active as a volunteer for smaller nonprofits, a board member, and an award-winning peer reviewer for the American Alliance of Museums’ Accreditation and Museum Assessment Programs. Her board service has included two children’s museums, a wildlife rehabilitation center, and a ballet company.

The post ALA Congratulates Dr. Kathryn Matthew appeared first on District Dispatch.

District Dispatch: Six takeaways from new broadband report

planet code4lib - Wed, 2015-09-23 21:52

ALA participated at a White House roundtable on new federal broadband recommendations (photo by www.GlynLowe.com via Flickr)

On Monday the inter-agency Broadband Opportunity Council (BOC) released its report and recommendations on actions the federal government can take to improve broadband networks and bring broadband to more Americans. Twenty-five agencies, departments and offices took part in the Council, which also took public comments from groups like the ALA.

The wide-ranging effort opened the door to addressing outdated program rules, as well as to thinking bigger and more systemically about how to more efficiently build and maximize more robust broadband networks.

Here are six things that struck me in reading the report and in hearing from other local, state and national stakeholders during a White House roundtable in which ALA participated earlier this week:

  1. It’s a big deal. The report looks across the federal government through a single lens of what opportunities for and barriers to broadband exist that it may address. Council members (including from the Institute of Museum and Library Services) met weekly, developed and contributed action plans, and approved the substance of the report. That’s a big job—and one that points to the growing understanding that a networked world demands networked solutions. Broadband (fixed and mobile) is everyone’s business, and this report hopefully begins the process of institutionalizing attention to broadband across sectors.
  2. It’s still a report…a first step toward action. There’s no new money, but some action items will increase access to federal programs valued at $10 billion to support broadband deployment and adoption. The US Department of Agriculture (USDA), for instance, will develop and promote new funding guidance making broadband projects eligible for the Rural Development Community Facility Program and will expand broadband eligibility for the RUS Telecommunications Program. Both of these changes could benefit rural libraries.
  3. It’s a roadmap. Because the report outlines who will do what and when, it provides a path to consider next steps. Options range from taking advantage of new resources to advising on new broadband research to increasing awareness of new opportunities among community partners and residents.
  4. “Promote adoption and meaningful use” is a key principle. ALA argued that broadband deployment and adoption should be “married” to drive digital opportunity, and libraries can and should be leveraged to empower and engage communities. Among the actions here is that the General Services Administration (GSA) will modernize government donation, excess and surplus programs to make devices available to schools, libraries and educational non-profits through the Computers for Learning program, and the Small Business Administration (SBA) will develop and deploy new digital empowerment training for small businesses.
  5. IMLS is called out. It is implicated in seven action items and is the lead on two: funding projects that will provide libraries with tools to assess and manage broadband networks, and expanding technical support for E-rate-funded public library Wi-Fi and connectivity expansions. IMLS also will work with the National Science Foundation and others to develop a national broadband research agenda. The activity includes reviewing existing research and resources and considering possible research questions related to innovation, adoption and impacts (to name a few).
  6. A community connectivity index is in the offing. It is intended to help community leaders understand where their strengths lie and where they need to improve, and to promote innovative community policies and programs. I can think of a few digital inclusion indicators for consideration—how about you?

National Telecommunications and Information Administration (NTIA) Chief Lawrence Strickling noted that the report is “greater than the sum of its parts” in that it increased awareness of broadband issues across the government and brought together diverse stakeholders for input and action. I agree and am glad the Council built on the impactful work already completed through NTIA’s Broadband Technology Opportunities Program (BTOP). As with libraries and the Policy Revolution! initiative, we must play to our strengths, but also think differently and more holistically to create meaningful change. It’s now up to all of us to decide what to do next to advance digital opportunity.

The post Six takeaways from new broadband report appeared first on District Dispatch.

Jonathan Rochkind: bento_search 1.5, with multi-field queries

planet code4lib - Wed, 2015-09-23 20:25

bento_search is a gem that lets you search third party search engine APIs with standardized, simple, natural ruby API. It’s focused on ‘scholarly’ sources and use cases.

Version 1.5, just released, includes support for multi-field searching:

searcher = ENV['SCOPUS_API_KEY']) results = => { :title => '"Mystical Anarchism"', :author => "Critchley", :issn => "14409917" })

Multi-field searches are always AND’d together (title=X AND author=Y), because that was the only use case I had, and it seems like mostly what you’d want. (On our existing Blacklight-powered Catalog, we eliminated “All” or “Any” choices for multi-field searches, because our research showed nobody ever wanted “Any”.)
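The AND’ing of fielded criteria can be pictured as a simple translation step. This sketch is purely illustrative (the query syntax and the `and_query` helper are hypothetical, not what any actual bento_search engine emits; each engine translates fields its own way):

```ruby
# Hypothetical helper: collapse a hash of field criteria into a single
# AND'd fielded query string. Illustrative syntax only.
def and_query(fields)
  fields.map { |field, value| "#{field}:(#{value})" }.join(" AND ")
end

and_query(title: '"Mystical Anarchism"', author: "Critchley")
# => 'title:("Mystical Anarchism") AND author:(Critchley)'
```

Because the criteria are always conjunctive, an engine adapter only ever has to answer one question per field: how do I express “this field must match this value” in my backend’s syntax?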

As with everything in bento_search, you can use the same API across search engines: whether you are searching Scopus or Google Books or Summon or EBSCOHost, you use the same ruby code to query and get back results of the same classes.

Except, well, multi-field search is not yet supported for Summon or Primo, because I do not have access to those proprietary products or their documentation to make sure I have the implementation right and test it. I’m pretty sure the feature could be added pretty easily to both by someone who has access (or by someone who wants to share that access with me, as an unpaid ‘contractor’, so I can add it for you).

What is multi-field querying for?

You certainly could expose this feature to end-users in an application using a bento_search powered interactive search. And I have gotten some requests for supporting multi-field search in our bento_search powered ‘articles’ search in our discovery layer; it might be implemented at some point based on this feature.

(I confess I’m still confused about why users want to enter text in separate ‘author’ and ‘title’ fields, instead of just entering the author’s name and title in one ‘all fields’ search box, Google-style. As far as I can tell, all bento_search engines perform pretty well with author and title words entered in the general search box. Are users finding differently? Do they just assume it won’t work, and want the security, along with the extra work, of entering multiple fields? I dunno).

But I’m actually more interested in this feature for uses other than directly exposed interactive search.

It opens up a bunch of possibilities for under-the-hood known-item identification in various external databases.

Let’s say you have an institutional repository with pre-prints of articles, but it’s only got author and title metadata, and maybe the name of the publication it was eventually published in, but not volume/issue/start-page, which you really want for better citation display and export, analytics, or generation of a more useful OpenURL.

So you take the metadata you do have, and search a large aggregating database to see if you can find a good match, and enhance the metadata with what that external database knows about the article.
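The enhancement step itself is just a careful merge: keep every field you already trust, and fill only the gaps from the external record. A minimal sketch in plain Ruby (the `enhance` helper and the field names are hypothetical, not part of bento_search):

```ruby
# Fill gaps in local metadata from an external lookup result,
# never overwriting fields the local record already has.
def enhance(local, external)
  external.merge(local) { |_field, ext_val, local_val| local_val.nil? ? ext_val : local_val }
end

local  = { title: "Mystical Anarchism", author: "Critchley", volume: nil }
lookup = { title: "Mystical Anarchism", volume: "10", issue: "2", start_page: "272" }

enriched = enhance(local, lookup)
# enriched[:volume]     => "10"        (gap filled from the lookup)
# enriched[:author]     => "Critchley" (local value kept)
# enriched[:start_page] => "272"       (new field added)
```

In a real workflow you would also want a match-confidence check before merging, since the top search result is not guaranteed to be the same article.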

Similarly, citations sometimes come into my OpenURL resolver (powered by Umlaut) that lack sufficient metadata for good coverage analysis and outgoing link generation, for which we generally need year/volume/issue/start-page too. Same deal.

Or in the other direction, maybe you have an ISSN/volume/issue/start-page, but don’t have an author and title. Which happens occasionally at the OpenURL link resolver, maybe other places. Again, search a large aggregating database to enhance the metadata, no problem:

results = searcher.search(:query => { :issn => "14409917", :volume => "10", :issue => "2", :start_page => "272" })

Or maybe you have a bunch of metadata, but not a DOI — you could use a large citation aggregating database that has DOI information as a reverse-DOI lookup. (Which makes me wonder if CrossRef or another part of the DOI infrastructure might have an API I should write a BentoSearch engine for…)

Or you want to look up an abstract. Or you want to see if a particular citation exists in a particular database for value-added services that database might offer (look inside from Google Books; citation chaining from Scopus, etc).

With multi-field search in bento_search 1.5, you can do a known-item ‘reverse’ lookup in any database supported by bento_search, for these sorts of enhancements and more.

In my next post, I’ll discuss this in terms of DOAJ, a new search engine added to bento_search in 1.5.


LITA: Jobs in Information Technology: September 23, 2015

planet code4lib - Wed, 2015-09-23 18:39

New vacancy listings are posted weekly on Wednesday at approximately 12 noon Central Time. They appear under New This Week and under the appropriate regional listing. Postings remain on the LITA Job Site for a minimum of four weeks.

New This Week:

Systems and Web Services Librarian, Concordia College, Moorhead, MN

Visit the LITA Job Site for more available jobs and for information on submitting a job posting.

SearchHub: How Bloomberg Scales Apache Solr in a Multi-tenant Environment

planet code4lib - Wed, 2015-09-23 17:09
As we count down to the annual Lucene/Solr Revolution conference in Austin this October, we’re highlighting talks and sessions from past conferences. Today, we’re highlighting Bloomberg engineer Harry Hight’s session on scaling Solr in a multi-tenant environment.

Bloomberg Vault is a hosted communications archive and search solution, with over 2.5 billion documents in a 45TB Solr index. This talk will cover some of the challenges we encountered during the development of our Solr search backend, and the steps we took to overcome them, with emphasis on security and scalability. Basic security always starts with different users having access to subsets of the documents, but gets more interesting when users only have access to a subset of the data within a given document, and their search results must reflect that restriction to avoid revealing information. Scaling Solr to such extreme sizes presents some interesting challenges. We will cover some of the techniques we used to reduce hardware requirements while still maintaining fast response times.

Harry Hight is a software engineer for Bloomberg Vault. He has been working with Solr/Lucene for the last 3 years building, extending, and maintaining a communications archive/e-discovery search back-end.

Efficient Scalable Search in a Multi-Tenant Environment: Presented by Harry Hight, Bloomberg L.P. from Lucidworks

Join us at Lucene/Solr Revolution 2015, the biggest open source conference dedicated to Apache Lucene/Solr, on October 13-16, 2015 in Austin, Texas. Come meet and network with the thought leaders building and deploying Lucene/Solr open source search technology. Full details and registration…

The post How Bloomberg Scales Apache Solr in a Multi-tenant Environment appeared first on Lucidworks.

Library of Congress: The Signal: Improving Technical Options for Audiovisual Collections Through the PREFORMA Project

planet code4lib - Wed, 2015-09-23 16:03

The digital preservation community is a connected and collaborative one. I first heard about the Europe-based PREFORMA project last summer at a Federal Agencies Digitization Guidelines Initiative meeting when we were discussing the Digital File Formats for Videotape Reformatting comparison matrix. My interest was piqued because I heard about their incorporation of FFV1 and Matroska, both included in our matrix but not yet well adopted within the federal community. I was drawn first to PREFORMA’s format standardization efforts – Disclosure and Adoption are two of the sustainability factors we use to evaluate digital formats on the Sustainability of Digital Formats website – but the wider goals of the project are equally interesting.

In this interview, I was excited to learn more about the PREFORMA project from MediaConch’s Project Manager Dave Rice and Archivist Ashley Blewer.

Kate: Tell me about the goals of the PREFORMA project and how you both got involved. What are your specific roles?

MediaConch Project Manager Dave Rice. Photo courtesy of Dave Rice

Dave: The goals of the PREFORMA project are best summarized by their foundational document called the PREFORMA Challenge Brief (PDF). The Brief describes an objective to “establish a set of tools and procedures for gaining full control over the technical properties of digital content intended for long-term preservation by memory institutions”. The brief recognizes that although memory institutions have honed decades of expertise for the preservation of specific materials, we need additional tools and knowledge to achieve the same level of preservation control with digital audiovisual files.

For initial work, the PREFORMA consortium selected several file formats including TIFF, PDF/A, lossless FFV1 video, the Matroska container, and PCM audio. After a comprehensive proposal process, three suppliers were selected to move forward with development. A project called VeraPDF, focusing on PDF/A, is led by a consortium comprised of the Open Preservation Foundation, the PDF Association, the Digital Preservation Coalition, Dual Lab, and KEEP SOLUTIONS. The TIFF format is addressed by DPF Manager, led by Easy Innova. Ashley and I work as part of the MediaArea team, which is led by Jérôme Martinez, the originator and principal developer of MediaInfo. Our project is called MediaConch and focuses on the selected audiovisual formats: Matroska, FFV1, and PCM.

MediaConch Archivist Ashley Blewer. Photo courtesy of Ashley Blewer.

Ashley: Dave and Jérôme have collaborated in the past on open source software projects such as BWF MetaEdit (developed by AudioVisual Preservation Solutions as part of a FADGI initiative to support embedded metadata) and QCTools. QCTools, developed by BAVC with support from the National Endowment for the Humanities, was profiled in a blog post last year. Dave had also brought me in to do some work on the documentation and design of QCTools. When QCTools development was wrapping up, we submitted a proposal to PREFORMA and were accepted into the initial design phase. During that phase, we competed with other teams to deliver the software structure and design. We were then invited to continue to Phase II of the project: the development prototyping stage. We are currently in month seven (out of 22) of this second phase.

The majority of the work happens in Europe, which is where the software development team is based. Jérôme Martinez is the technical lead of the project. Guillaume Roques works on MediaConchOnline, database management, and performance optimization. Florent Tribouilloy develops the graphical user interface, reporting, and metadata extraction.

Here in the U.S., Dave Rice works as project manager and leads the team in optimizations for archival practice, system OAIS compliance, and format standardization. Erik Piil focuses on technical writing, creation of test files, and file analysis. Tessa Fallon leads community outreach and standards organization, mostly involving our plans to improve the standards documentation for both the Matroska and FFV1 formats through the Internet Engineering Task Force. I work on documentation, design and user experience, as well as some web development. Our roles are somewhat fluid, and often we will each contribute to tasks such as analyzing bitstream trace outputs to writing press releases for the latest software features.

PREFORMA: PREservation FORMAts for culture information/e-archives

Kate: The standardization of digital formats is a key piece in the PREFORMA puzzle as well as being something we consider when evaluating the Disclosure factor in the Sustainability of Digital Formats website. What’s behind the decision to pursue standardization through the Internet Engineering Task Force instead of an organization like the Society of Motion Picture and Television Engineers? What’s the process like and where are you now in the sequence of events? From the PREFORMA perspective, what’s to be gained through standardization?

Dave: A central aspect of the PREFORMA project is to create a conformance checker that can process files and report on the degree to which they deviate from or conform to their associated specification. Early in the development of our proposal for Matroska and FFV1, we realized that the state of the specifications compromised how effectively and precisely we could create a conformance checker. Additionally, as we interviewed many archives that were using FFV1 and/or Matroska for preservation, we found that the state of the standardization of these formats was the most shared concern. This research led us to include in our proposal efforts toward facilitating the further standardization of both FFV1 and Matroska through an open standards body. After reaching agreement with the FFmpeg and Matroska communities, we developed a standardization plan (PDF), which was included in our overall proposal.

As several standards organizations were considered, it was important to gain feedback on the process from several stakeholder communities. These discussions informed our decision to approach the IETF, which appeared the most appropriate for the project needs as well as the needs of our communities. The PREFORMA project is designed with significant emphasis and mandate on an open source approach, including not only the licensing requirements of the results, but also a working environment that promotes disclosure, transparency, participation, and oversight. The IETF subscribes to these same ideals; the standards documents are freely and easily available without restrictive licensing and much of the procedure behind the standardization is open to research and review.

The IETF also strives to promote involvement and participation; their recent conferences include IRC channels, audio stream, video streams per meeting and an assigned IRC channel representative to facilitate communication between the room and virtual attendees. In addition to these attributes, the format communities involved (Matroska, FFmpeg, and libav) were already familiar with the IETF from earlier and ongoing efforts to standardize open audiovisual formats such as Opus and Daala. Through an early discovery process we gathered the requirements and qualities needed in a successful standardization process for Matroska and FFV1 from memory institutions, format authors, format implementation communities, and related technical communities. From here we assessed standards bodies according to traits such as disclosure, transparency, open participation, and freedom in licensing, confirming that IETF is the most appropriate venue for standardizing Matroska and FFV1 for preservation use.

At this stage of the process we presented our proposal for the standardization of Matroska and FFV1 at the July 2015 IETF93 conference. After soliciting additional input and feedback from IETF members and the development communities, we have a proposed working group charter under consideration that encompasses FFV1, Matroska, and FLAC. If accepted, this will provide a venue for the ongoing standardization work on these formats towards the specific goals of the charter.

I should point out that other PREFORMA projects are involved in standardization efforts as well. The Easy Innova team is working on furthering TIFF standardization in its TIFF/A initiative.

Kate: Let’s talk about two formats of interest for this project, FFV1 and Matroska. What are some of the unique features of these formats that make them viable for preservation use and for the goals of PREFORMA?

Initial draft of MediaConch IETF process.

Dave: FFV1 is a very efficient lossless video codec from the FFmpeg project that is designed in a manner responsive to the requirements of digital preservation. A number of archivists participated in and reviewed efforts to design, standardize, and test FFV1 version 3. The new features in FFV1 version 3 include more self-descriptive properties: the codec stores its own information regarding field dominance, aspect ratio, and colorspace, so that it is not reliant on a container format to store this information. Codecs that rely heavily on their containers for technical description often face interoperability challenges. FFV1 version 3 also facilitates storage of cyclic redundancy checks in frame headers to allow verification of the encoded data, and it stores error status messages. FFV1 version 3 is also a very flexible codec, allowing adjustments to the encoding process based on different priorities such as size efficiency, data resilience, or encoding speed. Over the past year or two, FFV1 has arrived at a tipping point for preservation use. Its speed, accessibility, and digital preservation features make it an increasingly attractive option for lossless video encoding that can be found in more and more large-scale projects; the standardization of FFV1 through an open standards organization certainly plays a significant role in the consideration of FFV1 as a preservation option.

Matroska is an open-licensed audiovisual container format with extensive and flexible features and an active user community. The format is supported by a set of core utilities for manipulating and assessing Matroska files, such as mkvtoolnix and mkvalidator. Matroska is based on EBML, Extensible Binary Meta Language. An EBML file is composed of a sequence of defined “Elements”. Each element is comprised of an identifier, a value that notes the size of the element’s data payload, and the data payload itself. Matroska integrates a flexible and semantically comprehensive hierarchical metadata structure as well as digital preservation features, such as the ability to provide CRC checksums internally per selected element. Because of its ability to use internal, regional CRC protection, it is possible to update a Matroska file to log OAIS events without any compromise to the fixity of its audiovisual payload. Standardization efforts are currently renewed with an initial focus on Matroska’s underlying EBML format. For those who would like to participate, I’d recommend contributing to the EBML specification GitHub repository or joining the matroska-devel mailing list.
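The element layout Dave describes (an identifier, a size, then that many bytes of payload) can be sketched directly. This is an illustrative fragment, not a real parser; it ignores EBML details such as unknown-size elements and reserved all-zero IDs:

```ruby
require 'stringio'

# Illustrative sketch of EBML's element layout. A variable-length integer
# (vint) uses the count of leading zero bits in its first byte to signal
# its total width; IDs conventionally keep the length-marker bit, sizes strip it.
def read_vint(io, strip_marker: true)
  first = io.readbyte
  length = 1
  length += 1 while length < 8 && first[8 - length] == 0 # leading zeros set the width
  value = strip_marker ? first & (0xFF >> length) : first
  (length - 1).times { value = (value << 8) | io.readbyte }
  value
end

# One element: ID vint, size vint, then `size` bytes of data payload.
def read_element(io)
  id   = read_vint(io, strip_marker: false)
  size = read_vint(io)
  [id, size, io.read(size)]
end

io = StringIO.new("\xEC\x83abc".b)   # a Void element (ID 0xEC), size 3, payload "abc"
id, size, payload = read_element(io) # => [0xEC, 3, "abc"]
```

Nesting falls out of the same rule: a master element’s payload is itself just a run of elements, which is what makes the format’s hierarchical metadata structure possible.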

Ashley: Matroska is especially appealing to me as a former cataloger and someone who has migrated data between metadata management systems because of its inherent ability to store a large breadth of descriptive metadata within the file itself. Archivists can integrate content descriptions directly into files. In the event of a metadata management software sunsetting or potential loss occurring during the file’s lifetime of duplication and migration, the file itself can still harbor all the necessary intellectual details required to understand the content.

MediaConch’s plan to integrate into OAIS workflows.

It’s great to have those self-checking mechanisms to set and verify fixity built into a file format’s infrastructure, instead of requiring an archivist to do supplemental work on top by storing technical requirements, checksums, and descriptive metadata alongside a file for preservation purposes. By using Matroska and FFV1 together, an archivist can get full coverage of every aspect of the file. And if fixity fails, the point where that failure occurs can be easily pinpointed. This level of precision is ideal for preservation and a boon for archivists in the future. Since error warnings can be frame/slice-level specific, assessing problems becomes much easier. It’s like being able to use a microscope to analyze a record instead of being limited to plain eyesight. It avoids the problem of “I have a file, it’s not validating against a checksum that represents the entirety of the file, and it’s a 2-hour-long video. Where do I begin in diagnosing this problem?”
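Ashley’s point about pinpointing failures can be illustrated with ordinary CRC-32s over fixed-size chunks. This is a simplification: Matroska’s CRC elements cover structural elements rather than fixed windows, and the data here is made up, but the localization idea is the same:

```ruby
require 'zlib'

# Per-chunk CRCs localize corruption; a whole-file checksum only says
# "something changed somewhere" about the entire file.
def chunk_crcs(data, chunk_size)
  data.bytes.each_slice(chunk_size).map { |bytes| Zlib.crc32(bytes.pack("C*")) }
end

original  = "frame1frame2frame3"
corrupted = "frame1frameXframe3"

whole_file_match = Zlib.crc32(original) == Zlib.crc32(corrupted) # false, but where?

orig = chunk_crcs(original, 6)
corr = chunk_crcs(corrupted, 6)
bad  = (0...orig.length).select { |i| orig[i] != corr[i] }
# bad == [1] : only the second chunk needs re-examination
```

With per-element protection, “re-examination” can mean re-fetching or re-digitizing just the affected region instead of treating a two-hour file as a total loss.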

Kate: What communities are currently using them? Would it be fair to say that FFV1 and Matroska are still emerging formats in terms of adoption in the US?

Ashley: Indiana University has embarked upon a project to digitally preserve all of its significant audio and video recordings in the next four years. Mike Casey, director of technical operations for the Media Preservation Initiative project, confirmed in a personal email that “after careful examination of the available options for video digitization formats, we have selected FFV1 in combination with Matroska for our video preservation master files.”

Dave: The Wikipedia page for FFV1 has an initial list of institutions using or considering FFV1. Naturally, users do not need to announce publicly that they use it, but there has been an increase in messages on related community forums.

Plan to integrate into the open source community/outreach strategy

Kate: Do you expect that the IETF standardization process will likely help increase adoption?

Ashley: I think a lot of people are unsure of these formats because they aren’t currently backed by a standards body. Matroska has been around for a long time and is a sturdy open source format. Open source software can have great community support but getting institutional support isn’t usually a priority. We have been investing time into clarifying the Matroska technical specifications in anticipation of a future release.

The harder case to be made regarding adoption in libraries and archives is with FFV1, as this codec is relatively new, less familiar, and has yet to be fully standardized. Access to creating FFV1 encoded files is limited to people with a lot of technical knowledge.

Kate: One of my favorite parts of my job is playing format detective in which I use a set of specialized tools to determine what the file is – the file extension isn’t always a reliable or specific enough marker – and if the file has been produced according to the specifications of a standard file format. But the digital preservation community needs more flexible and more accurate format identification and conformance toolsets. How will PREFORMA contribute to the toolset canon?

Ashley: The initial development with MediaConch began with creating an extension of MediaInfo, which is already heavily integrated into many institutions in the public and private sectors as a microservice to gather information about media files. The MediaConch software will go beyond just providing useful information about the file and help ensure that the file is what it says it is and can continually be checked through routine services to ensure the file’s integrity far into the future.

MediaConch GUI with policy editor displaying parameters.

A major goal for PREFORMA is the extensibility of the software being developed — working across all computer platforms, working to check files at the item level or in batches, and cross-comparability between the different formats. We collaborate with Easy Innova and veraPDF to discover and implement compatible methods of file checking. The intent is to avoid creating a tool that exists within a silo. Even though we are three teams working on different formats, we can, in the end, be compatible through API endpoints, not just for the three funded teams but also for other specialized tools or archival management programs like Archivematica. Keeping the software open source for future accessibility and development is not optional — it’s required by the PREFORMA tender.

Dave: Determining if a file has been produced according to the specifications of a standard file format is a central issue for PREFORMA, and unfortunately there are not nearly enough tools to do so. I credit Matroska for developing a utility, mkvalidator, alongside the development of their format specifications, but having this type of conformance utility accompany the specification is unfortunately a rarity.

Our current role in the PREFORMA project is fairly specific to certain formats but there are some components of the project which contribute to file format investigation. Already we have released a new technical metadata report, MediaTrace, which may be generated via MediaInfo or MediaConch. The MediaTrace report will help with advanced ‘format detective’ investigations as it presents the entire structure of an audiovisual file in an orderly way. The report may be used directly, but within our PREFORMA project it plays a crucial role in supporting conformance checks of Matroska. MediaConch is additionally able to display the structure of Matroska files and will eventually allow metadata fixes and repairs to both Matroska and FFV1.

MediaArea seeks input and feedback on the standard, specifications and future of each format for future development of the preservation-standard conformance checker software. If you work with these formats and are interested in contributing your requirements and/or test files, please contact us at

David Rosenthal: Canadian Government Documents

planet code4lib - Wed, 2015-09-23 15:00
Eight years ago, in the sixth post to this blog, I was writing about the importance of getting copies of government information out of the hands of the government:
Winston Smith in "1984" was "a clerk for the Ministry of Truth, where his job is to rewrite historical documents so that they match the current party line". George Orwell wasn't a prophet. Throughout history, governments of all stripes have found the need to employ Winston Smiths and the US government is no exception. Government documents are routinely recalled from the FDLP, and some are re-issued after alteration.

Anne Kingston at Maclean's has a terrifying article, Vanishing Canada: Why we’re all losers in Ottawa’s war on data, about the Harper administration's crusade to prevent anyone finding out what is happening as they strip-mine the nation. They don't even bother rewriting, they just delete, and prevent further information being gathered. The article mentions the desperate struggle Canadian government documents librarians have been waging using the LOCKSS technology to stay ahead of the destruction for the last three years. They won this year's CLA/OCLC Award for Innovative Technology, and details of the network are here.

Read the article and weep.

LITA: A Linked Data Journey: Introduction

planet code4lib - Wed, 2015-09-23 14:00

retrieved from Wikipedia, created by Anja Jentzsch and Richard Cyganiak


Linked data. It’s one of the hottest topics in the library community. But what is it really? What does it look like? How will it help? In this series I will seek to demystify the concept and present practical examples and use-cases. Some of the topics I will touch on are:

  • The basics
  • Tools for implementing linked data
  • Interviews with linked data practitioners
  • What can you do to prepare?

In part one of this series I will give a brief explanation of linked data; then I will attempt to capture your interest by highlighting how linked data can enhance a variety of library services, including cataloging, digital libraries, scholarly data, and reference.

What is Linked Data?

I’m not going to go into the technical detail of linked data, as that isn’t the purpose of this post. If you’re interested in specifics, please, please contact me.

At its core, linked data is an idea. It’s a way of explicitly linking “things” together, particularly on the web. As Tim Berners-Lee put it:

The Semantic Web isn’t just about putting data on the web. It is about making links, so that a person or machine can explore the web of data. With linked data, when you have some of it, you can find other, related, data.

The Resource Description Framework (RDF) is a framework for realizing linked data. It does so by employing triples, which are fundamentally simple (though RDF can become insanely complex), and by uniquely identifying “things” via URIs/URLs when possible. Here is a quick example:

Jacob Shelby schema:worksFor Iowa State University

Behind each of those three “things” is a URL. Graph-wise this comes out to be:

courtesy of W3C’s RDF Validator

This is the basic principle behind linked data. In practice there are a variety of machine-readable serializations that can express the RDF model, among them RDF/XML, JSON-LD, Turtle (TTL), and N-Triples. I won’t go into any specifics, but I encourage you to explore these if you are technologically curious.
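As a toy illustration of the serialization idea (Python; the URIs below are made up for the example, not real identifiers for these entities), the triple above could be written as a single N-Triples statement:

```python
# Hypothetical URIs standing in for the three "things" in the triple.
subject = "http://example.org/person/jacob-shelby"
predicate = "http://schema.org/worksFor"
obj = "http://example.org/org/iowa-state-university"

# An N-Triples statement is simply: <subject> <predicate> <object> .
statement = f"<{subject}> <{predicate}> <{obj}> ."
print(statement)
```

One line, one fact, and every part of it is a resolvable identifier rather than a bare string — which is exactly what makes the data linkable.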

What will it be able to do for you?

So, the whole idea of linked data is fine and dandy. But what can it do for you?  Why even bother with it? I am now going to toss around some ways linked data will be able to enhance library services. Linked data isn’t at full capacity yet, but it is rapidly becoming flesh and bone. The more the library community “buys into” linked data and prepares for it, the quicker and more powerful linked data will become. Anywho, here we go.

I should clarify that all of these examples conform to the concept of linked open data. There is such a thing as linked “closed” (private) data.


Cataloging

Right now the traditional cataloging world is full of metadata with textual values (strings) and closed, siloed databases. With linked data it can become a world full of uniquely-identified resources (things) and openly available data.

With linked data catalogers will be able to link to linked data vocabularies (there are already a plethora of linked data vocabularies out there, including the Library of Congress authorities and the Getty vocabularies). For users this will add clarification to personal names and subject headings. For catalogers this will eliminate the need for locally updating authorities when a name/label changes. It will also help alleviate the redundant duplication of data.

Digital Libraries

The “things instead of strings” concept noted above rings true for non-MARC metadata for digital libraries. Digital library staff will be able to link to semantic vocabularies.

Another interesting prospect is that institutions will be able to link metadata to other institutions’ metadata. Why would you do this? Maybe another institution has a digital resource that is closely related with one of yours. Linked data allows this to be done without having to upload another institution’s metadata into a local database; it also allows for metadata provenance to be kept intact (linked data explicitly points back to the resource being described).

Scholarly Data

Linked data will help scholarly data practitioners more easily keep works and data connected to researchers. This can be done by pointing to a researcher’s ORCID ID or VIVO ID as the “creator”. It will also be possible to pull in researcher profile information from linked data services (I believe VIVO is one; I’m not sure about ORCID).


Reference

Two words: semantic LibGuides. With linked data, reference librarians would be able to pull in data from other linked data sources such as Wikipedia (actually, DBpedia). This would allow for automatic updates when the source content changes, keeping information up-to-date with little effort on the librarian’s part.

To take this idea to the extreme: what about a consortial LibGuide knowledge base? Institutions could create, share, and reuse LibGuide data that is openly and freely available. The knowledge base would be maintained and developed by the library community, for the public. I recently came across an institution’s LibGuides that are provided via a vendor. To gain access to the LibGuides you had to log in because of vendor restrictions. How lame is that?


Maybe I’m being a little too capricious, but given time, I believe these are all possible. I look forward to continuing this journey in future posts. If you have any questions, ideas, or corrections, feel free to leave them in a comment or contact me directly. Until next time!

District Dispatch: Shutdown could threaten library funding

planet code4lib - Wed, 2015-09-23 13:04

With just a few calendar days, and even fewer legislative days, left before the end of the fiscal year at midnight on September 30th, Congressional leaders are struggling to avert a Federal government shutdown by acting to fund the government as of October 1. The options available to leaders are few, however, and several roadblocks stand in the way: a bloc of conservative members is demanding that Congress defund Planned Parenthood; other Members are calling for dramatic cuts in non-Defense programs without compromising with Democrats, who are seeking reductions in defense spending to increase funds for some domestic priorities.

Government shutdown threatens library funding (istock photos)

As a result, Congress may be forced to adopt a series of Continuing Resolutions: short-term stopgap measures to keep the government’s doors open while leaders seek to resolve controversial issues and differences in spending priorities that have split the parties as well as reportedly created fissures within the Republican Party.

For library community priorities, and the education community in general, the appropriations process has been a mixed bag to date. The Library Services and Technology Act (LSTA) received funding of $180.9 million in the FY15 Omnibus funding bill passed last December. The Obama Administration requested that Congress increase that sum to $186.6 million in its FY 2016 budget. The House Appropriations Committee, however, approved a funding bill with a smaller increase, to $181.1 million, while the Senate Committee provided $181.8 million. Innovative Approaches to Literacy (IAL) would receive level funding of $25 million under the Senate’s bill, although no funds were requested for the program by the Administration or the House.

Final appropriations for LSTA and IAL — included in the Labor, Health and Human Services, Education, and Related Agencies Appropriations bill — have yet to be considered on the Floor in the House or Senate and the likelihood that any such individual appropriations bill will be considered as a stand-alone bill is small and diminishing at this late stage of Congress’ funding cycle.

Funding for education priorities in general also is facing rough seas with significant cuts proposed by both House and Senate Appropriators. Overall education funding would be cut $2.7 billion by the House and $1.7 billion by the Senate. The Obama Administration had proposed an education funding increase of $3.6 billion.

The pathways forward for FY16 funding are uncertain. It is virtually guaranteed, however, that between now and the end of this month you will be hearing and reading the terms government shutdown, continuing resolution, Omnibus package, Planned Parenthood, and defense versus domestic spending more and more every day.

Might want to keep those earplugs and eyeshades handy!

The post Shutdown could threaten library funding appeared first on District Dispatch.

In the Library, With the Lead Pipe: Editorial: Summer Reading 2015

planet code4lib - Wed, 2015-09-23 13:00

Photo by Flickr user Moyan_Brenn (CC BY 2.0)

Editors from In The Library With The Lead Pipe are taking a break from our regular schedule to share our summer reading favs. Tell us what you’ve been reading these last few months in the comments!


It is, of course, winter where I live. This makes it a great time to curl up with a nice big fat tome, and I have been spending the long winter nights reading Peter Watson’s The great divide: history and human nature in the Old World and the New. Watson takes the reader on a journey from 15,000 BC through the Great Flood, following the first humans across Eurasia and through the rise and fall of empires until the Conquistadors appeared. He explains how, according to our best guesses, humans came to be in the Americas, when it happened, and why the new world they found there led the Americans in such a different direction to the ‘old world’ of Eurasia. This is a fascinating book in many ways. Covering archaeology, religion, botany, geology and very ancient history, Watson attempts to explain why the pre-Columbian Americas had such comparatively short-lived civilisations, bloody religions, and localised cultures.


I’ve recently joined a mini book club with my brother and a few of our friends who are also Marvel Unlimited subscribers. Our rough plan is to choose arcs that are completed in 6 issues. Several of us have a tendency to spiral off though. So far we’ve done Longshot Saves the Marvel Universe (2013), Rogue (2004), Young Avengers (2005), and Deathlok (2014). Young Avengers in particular spiraled off into the whole 12 issue run as well as Truth: Red, White & Black (2003) which linked up nicely but came from a separate Twitter recommendation.

I’ve also nostalgically torn through everything Rogue and Gambit, which led to meeting Pete Wisdom and exploring New Excalibur (2005). And I’m devouring the recent spate of female leads: Captain Marvel (2014), Ms Marvel (2014), Thor (2014), and The Unbeatable Squirrel Girl (2015). The metadata in Marvel Unlimited is awkward and the coverage can be spotty, which leads to lots of online searching to figure out what to read next. I’m grateful for friends who can share tips such as, “For further reading I suggest Avengers Children’s Crusade which is basically issues 13+. Issue 4 has its own separate entry on Marvel Unlimited for no reason, and Avengers Children’s Crusade Young Avengers is by the same author and comes between issues 4 and 5.”


The summer here has flown by pretty quickly. I’m beginning a research project for an upcoming book, The Feminist Reference Desk, edited by Maria Accardi, so I’ve been doing a lot of reading to prep for that. The article that I’d really recommend everyone read is  Library Feminism and Library Women’s History: Activism and Scholarship, Equity and Culture by Suzanne Hildenbrand. It’s really insightful and explains how gender roles impact our profession.

For more light reading, I have been reading Modern Romance, the new book by Aziz Ansari and Eric Klinenberg. The research is interesting and Aziz is pretty funny. Last, I totally judged a book by its cover and started reading The Woman Destroyed by Simone de Beauvoir. So far I’ve only read the first story, but I’m not sure how I feel about it. I hope everyone had a good summer!


This summer I haven’t done my due diligence in the reading department. I always have a mental list of books to read based on people’s recommendations, but there are never enough hours in the day to get through everyone’s great suggestions! During my road trip to Indiana University Libraries Information Literacy Colloquium in August, I listened to a book on CD, which is my absolute favorite way to read. It was Jodi Picoult’s gripping novel Change of Heart. Always exploring controversial topics through fiction, things unravel at the beginning of Picoult’s novel when a woman’s husband and child are brutally murdered. The plot thickens as the death row inmate who committed the murders seeks to bequeath his heart to a little girl in need of a heart transplant. Not just any girl, but the biological daughter and sister of the victims murdered. The novel forced me to consider a new twist on an already uncomfortable subject.


For the first time since I was a kid, I’m using the public library to borrow books again. I know that’s such a bad librarian thing to admit, but since I worked in academic libraries, I got everything I needed through the school. Using my public library has been… interesting. It’s fascinating to see things from the strictly-patron side again (and that’s a whollllle different Lead Pipe article)! I’ve been devouring books lately. Some of my favorites have been Miguel Street by V.S. Naipaul, All My Puny Sorrows by Miriam Toews, and Geek Love by Katharine Dunn. I flew through The Paris Wife by Paula McLain (yeah, I’m a couple years late on that one) and I was totally surprised/disturbed by Sarah Waters’ Fingersmith. I’m in the middle of CA Conrad’s Ecodeviance and I’m about to start Emily Gould’s Friendship. I’ve also been reading some books about Minnesota history/culture and rocks/minerals specific to Lake Superior, since that’s where I’m living now!


Discussing summer reading makes me a little nervous. Since I joined Twitter about a year and a half ago, my book reading has markedly declined. At the same time my eyesight deteriorated and my child became a toddler. Whatever combination of factors may have prompted it, the fact is that I read fewer books than ever before. My overall reading, however, has not been reduced, principally because Twitter (for me this means mainly ‘library Twitter’) is directing me daily to various news articles, journal articles, blog posts, websites, and other shorter-form writing that absorbs most of my reading time on any given day.

Yet I did find the time for at least ONE book this summer! I am working on a project tracing the careers of German academic librarians through the turbulent decades of the mid-20th century. For this research I read a book of essays about Austrian women librarians who confronted persecution under the Nazis and were either forced into emigration, imprisoned, persecuted, tortured, or even murdered. Entitled “Austrian Women Librarians on the Run: Persecuted, Suppressed, Forgotten?” [Ilse Korotin, ed., Österreichische Bibliothekarinnen auf der Flucht: verfolgt, verdrängt, vergessen? Wien: Praesens Verlag, 2007.], this small book contains a rich trove of remarkable and often harrowing stories of women librarians who found themselves forced out of their jobs and their homes before and during World War II. Of particular interest to me was how the political commitments of many women were strengthened by the experiences of persecution, expulsion, and exile. Many of those who survived the war and the Holocaust saw librarianship as an integral part of their struggles against racism and sexism, and their commitments to social justice.


Over the last couple of years, I’ve gotten back into reading real-life books, and it has been wonderful. Some distant friends started a sci-fi/fantasy book club, and it’s gotten me back into reading. We used to meet monthly via Google Hangouts from our varying locations (the power of technology!), and I’ve recently joined up with another book club that’s in-the-flesh. In the meantime, I’ve been enjoying some easy escapist reading. Most recently, I finished The Curious Incident of the Dog in the Night-Time by Mark Haddon. Finding his neighbor’s dog dead in the front yard (and this is how the book opens, y’all), young Chris decides to do some detecting in order to find out whodunnit. Mayhem ensues, of course.

My favorite book of the summer though has definitely been Magonia by Maria Dahvana Headly. I became obsessed with the unique, otherworldliness of this book. Reading sci-fi and fantasy, I sometimes find it difficult to escape recurring themes and common tropes. What I liked most about this one was that it blew all of that out of the water. I’d never experienced a world like Magonia before, and I loved that.

State Library of Denmark: Light validation of Solr configuration

planet code4lib - Wed, 2015-09-23 10:17

This week we were once again visited by the Edismax field alias bug in Solr: Searches with boosts, such as foo^2.5, stopped working. The problem arises when an alias with one or more non-existing fields is defined in solrconfig.xml, and it is tedious to track down, as one needs to check for the existence of all the fields referenced.

We have 10+ different Solr setups and we use aliases in most of them, so a quick script was whipped together, which (…wait for it…) validates Solr configs. Nothing fancy, and it tends to report false positives when things are commented out in the XML files. Still, it does check that

  • all fields in schema.xml reference existing field types
  • all copyFields in schema.xml reference existing fields
  • all fields referenced in solrconfig.xml are defined in schema.xml
  • no alias in solrconfig.xml has the same name as a field in schema.xml

Some of these problems, such as referencing a non-existing field in mlt.fl or pf in solrconfig.xml, are silent and hard to track down: Solr does not complain and searches seem to work. But in the case of misspellings of field names, the result is poorer-quality searches, as the intended functionality is not activated.
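A stripped-down sketch of that kind of cross-check (Python; real schema.xml and solrconfig.xml files have many more places where field names can hide, so treat the parameter list here as illustrative only):

```python
import re
import xml.etree.ElementTree as ET

def schema_fields(schema_xml):
    """Field names declared in a classic schema.xml."""
    root = ET.fromstring(schema_xml)
    return {el.get("name") for el in root.iter()
            if el.tag in ("field", "dynamicField")}

def referenced_fields(solrconfig_xml):
    """Rough pass: field names mentioned in qf/pf/mlt.fl parameters."""
    root = ET.fromstring(solrconfig_xml)
    found = set()
    for el in root.iter("str"):
        if el.get("name") in ("qf", "pf", "mlt.fl") and el.text:
            for token in el.text.split():
                found.add(re.sub(r"\^.*$", "", token))  # drop boosts like ^2.5
    return found

schema = '<schema><field name="title"/><field name="author"/></schema>'
config = '<config><str name="qf">title^2.5 athor</str></config>'
missing = referenced_fields(config) - schema_fields(schema)
print(missing)  # {'athor'} -- the misspelling Solr would silently accept
```

The actual script handles field aliases, copyFields, and field types as well; the point is simply that the two XML files can be cross-validated mechanically.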

Cross-validation of fields used in solrconfig.xml and schema.xml would be nice to have as part of Solr core startup, but until then the script might be of use. Get it at GitHub.

Karen Coyle: FRBR Before and After - Afterword

planet code4lib - Wed, 2015-09-23 05:51
Below is a preview of the Afterword of my book, FRBR, Before and After. I had typed the title of the section as "Afterward" (caught by the copy editor, of course), and yet as I think about it, that wasn't really an inappropriate misspelling, because what really matters now is what comes after - after we think hard about what our goals are and how we could achieve them. In any case, here's a preview of that "afterward" from the book.

Afterword

There is no question that FRBR represents a great leap forward in the theory of bibliographic description. It addresses the “work question” that so troubled some of the great minds of library cataloging in the twentieth century. It provides a view of the “bibliographic family” through its recognition of the importance of the relationships that exist between created cultural objects. It has already resulted in vocabularies that make it possible to discuss the complex nature of the resources that libraries and archives gather and manage.
As a conceptual model, FRBR has informed a new era of library cataloging rules. It has been integrated into the cataloging workflow to a certain extent. FRBR has also inspired some non-library efforts, and those have given us interesting insight into the potential of the conceptual model to support a variety of different needs.
The FRBR model, with its emphasis on bibliographic relationships, has the potential to restore context that was once managed through alphabetical collocation to the catalog. In fact, the use of a Semantic Web technology with a model of entities and relations could be a substantial improvement in this area, because the context that brings bibliographic units together can be made explicit: “translation of,” “film adaptation of,” “commentary on.” This, of course, could be achieved with or without FRBR, but because the conceptual model articulates the relationships, and the relationships are included in the recent cataloging rules, it makes sense to begin with FRBR and evolve from there.

However, the gap between the goals developed at the Stockholm meeting in 1991 and the result of the FRBR Study Group’s analysis is striking. FRBR defined only a small set of functional requirements, at a very broad level: find, identify, select, and obtain. The study would have been more convincing as a functional analysis if those four tasks had been further analyzed and had been the focus of the primary content of the study report. Instead, from my reading of the FRBR Final Report, it appears that the entity-relation analysis of bibliographic data took precedence over user tasks in the work of the FRBR Study Group.
The report’s emphasis on the entity-relation model, and the inclusion of three simple diagrams in the report, is most likely the reason for the widespread belief that the FRBR Final Report defines a technology standard for bibliographic data. Although technology solutions can and have been developed around the FRBR conceptual model, no technology solution is presented in the FRBR Final Report. Even more importantly, there is nothing in the FRBR Final Report to suggest that there is one, and only one, technology possible based on the FRBR concepts. This is borne out by the examples we have of FRBR-based data models, each of which interprets the FRBR concepts to serve their particular set of needs. The strength of FRBR as a conceptual model is that it can support a variety of interpretations. FRBR can be a useful model for future developments, but it is a starting point, not a finalized product.
There is, of course, a need for technology standards that can be used to convey information about bibliographic resources. I say “standards” in the plural, because it is undeniable that the characteristics of libraries and their users have such a wide range of functions and needs that no one solution could possibly serve all. Well-designed standards create a minimum level of compliance that allows interoperability while permitting necessary variation to take place. A good example of this is the light bulb: with a defined standard base for the light bulb we have been able to move from incandescent to fluorescent and now to LED bulbs, all the time keeping our same lighting fixtures. We must do the same for bibliographic data so that we can address the need for variation in the different approaches between books and non-books, and between the requirements of the library catalog versus the use of bibliographic data in a commercial model or in a publication workflow.
Standardization on a single over-arching bibliographic model is not a reasonable solution. Instead, we should ask: what are the minimum necessary points of compliance that will make interoperability possible between these various uses and users? Interoperability needs to take place around the information and meaning carried in the bibliographic description, not in the structure that carries the data. What must be allowed to vary in our case is the technology that carries that message, because it is the rapid rate of technology change that we must be able to adjust to in the least disruptive way possible. The value of a strong conceptual model is that it is not dependent on any single technology.
It is now nearly twenty years since the Final Report of the FRBR Study Group was published. The FRBR concept has been expanded to include related standards for subjects and for persons, corporate bodies, and families. There is an ongoing Working Group for Functional Requirements for Bibliographic Records that is part of the Cataloguing Section of the International Federation of Library Associations. It is taken for granted by many that future library systems will carry data organized around the FRBR groups of entities. I hope that the analysis that I have provided here encourages critical thinking about some of our assumptions, and fosters the kind of dialog that is needed for us to move fruitfully from broad concepts to an integrative approach for bibliographic data.
From FRBR, Before and After, by Karen Coyle. Published by ALA Editions, 2015
©Karen Coyle, 2015
FRBR, Before and After by Karen Coyle is licensed under a Creative Commons Attribution 4.0 International License.

DuraSpace News: Telling DSpace Stories at University of Konstanz with Stefan Hohenadel

planet code4lib - Wed, 2015-09-23 00:00

“Telling DSpace Stories” is a community-led initiative aimed at introducing project leaders and their ideas to one another while providing details about DSpace implementations for the community and beyond. The following interview includes personal observations that may not represent the opinions and views of the University of Konstanz or the DSpace Project.

Christina Harlow: Notes On Being A Metadata Supervisor

planet code4lib - Wed, 2015-09-23 00:00

I’ve been meaning to write this post up for a while. It is still very much a work in progress, so please forgive the winding, rambling nature this post will take. I’m trying to pull together and process ideas and experiences that I can eventually use in my own improvement, or maybe as an essay or article proposal on ‘reskilling catalogers’ and how it is part of a larger re-imagining of library metadata work beyond just teaching catalogers to code or calling cataloging ‘metadata’. If you have feedback on this, please let me know: or @cm_harlow on Twitter. Thanks!

My Background and Goals as a Supervisor

First, a bit on me, my work background briefly, and my current job, as well as my idealism for metadata work.

My current position is both my first ‘librarian’ position (although, FYI, I think the term ‘entry-level librarian’ has serious flaws, and it is a really sore spot with me personally) and my first time as a supervisor in a library. I supervise the Cataloging Unit (5 f/t staff members), sometimes referred to by the catalogers themselves (but nobody else at present) as the ‘Cataloging & Metadata Unit’, in a medium-sized academic library. Before this, I was temporarily a ‘professional’ but non-librarian metadata munger, and before that, a support staff member or paraprofessional in a large academic library in a variety of posts. Some of those posts involved supervising students, but not officially - I’d be there to assign/guide work, check hours, schedule, do all the on-the-ground stuff, but wasn’t the person who would sign the timesheets or do the hiring. Often, and more frequently in recent years, I was also a very unofficial liaison, tutor, whatever you want to call it, for some of the librarians looking to expand technical practices and/or skills. A lot of this kind of work came to me because I love exploring new technology and ideas, and I absolutely love informal workshops/skillshares. Outside of libraries, I’ve got some supervisory experience, as well as a year as a public NYC middle school math teacher, under my belt.

There was a lot more involved in the decision to take my current position, but one reason was that I was actually pretty excited to take on being a Cataloging & Metadata Unit supervisor (as well as pretty nervous, of course). I wanted to see how I would adapt both to this position and adapt the position to me. I continue to hope I have a lot to offer the catalogers I work with because I spent years as a libraries paraprofessional before deciding to get my MLIS and move ‘up the ladder’, and I’m highly suspicious of that ladder.

Additionally, I hope this can be a way for me to lead library data work into a new imagining and model through example and experience. Many people talk about how Cataloging == Metadata, and we see more and more traditional MARC cataloging positions being called ‘Metadata’ positions. They might even involve some non-MARC metadata work, but that work usually remains divorced from MARC work by differing platforms, standards, data models, or other divisions. There are plenty of people declaring (rightfully, in my opinion!) metadata and cataloging to be the same work, yet these statements usually come from one side of the still-existing fence, unfortunately. Actually integrating decades of data silos, distinct sets of standards and communities, toolsets/editors, functional units, workflows and procedures, among so many other divisions both real and perceived, is something I want to make happen, though I freely admit how daunting it can be. Trying my hand at being a supervisor was one way for me to help us as a library technology and data community work towards this integration.

A lot of what I’ve focused on in the first months of this job is assessing what already exists - catalogers’ areas of expertise and interests, workflows, toolsets, communication lines, expectations - then trying to lay down foundations for where I hope we as a unit can go. As it stands, there was a lot of change going on around my arrival in this position, especially for the catalogers. My library migrated ILSes (in an over-rushed fashion, but hindsight is 20/20) a few months before my arrival. Cataloging procedures had been haphazardly moved to RDA according to particular areas of MARC expertise and interest (for example, music and video cataloging moved to RDA policies because the catalogers focused on that area are invested in learning RDA). The digital collections metadata work was partially given to the catalogers via a very much locked-down MODS metadata editor, before being taken back over by digital library developers, digitization staff, and archivists (and now managed by me). And there is an imminent but not-yet-well-defined technical services re-organization going on (due to a number of reasons, including many retirements), affecting both department structure and space. On the non-MARC metadata side - though this was not metadata work the catalogers were involved in before my arrival - there is a migration of multiple digital library platforms to one, an IR platform migration in the works, and migration/remediation of all the previous digital projects’ metadata from varying versions of DC to MODS.

So, a lot of change to walk into as the new cataloging & metadata unit supervisor, as well as the only cataloging and/or metadata librarian. Even more changes for the catalogers to endure, now with a new and relatively green supervisor.

I was pretty prepped to expect that I would be taking on a new sort of library data leadership role that works across departments - to re-imagine, as I understand it, where, how and why cataloging and metadata expertise/work can be applied. And to make sure that all of our library data practices are not just interoperable, but accessible to metadata enhancement and remediation work by the catalogers. This has meant the creation of new workflows, data pipelines, tools, and most importantly, comfort areas for the catalogers. Working with them at the forefront of my change efforts has really forced me to develop new skills rather quickly, including trying to situate not just myself, but a team of talented people with varying experiences and goals, in a rapidly changing field. Change doesn’t scare me, but it’s not just about me now.

Stop Dumping on Technical Services & Stop Holding onto the Past, Technical Services

Beyond all of these local changes, it is pretty well documented that library technical services departments, particularly in academic libraries, are changing. Some might say shrinking, and I understand that, but I want to see it as positive change - we can take our metadata skills and expertise and generalize them outside of MARC and the ILSes that so many catalogers associate directly with their work. That generalized skillset - and I hesitate at using the word generalized; perhaps something like more easily transferable, or integrated, or interoperable is better - can then be applied to many different and new library workflows; in particular, all the areas growing around data work writ large in libraries.

In a presentation from a while ago, I made a case for optimism in library technical services, if we can be imaginative and ready to adapt, and if libraries at a higher level are prepared for what can best be described as more modular and integrated data workflows - no more data/workflow/functional/platform silos. I try not just to say that ‘cataloging is metadata work’, but to involve metadata work across data platforms and pipelines, and to show the value of making this work responsive and iterative - almost agile, though I feel uncomfortable taking that term from a context I’m less familiar with (agile development). I especially want to divorce cataloging expertise from knowing how to work with a particular ILS or the OCLC Connexion editor.

In the Ithaka S+R US Library Survey 2013, responses to the question “Will your library add or reduce staff resources in any of the following areas over the next 5 years?” showed a steep decline of staff resources for technical services - close to 30%, and far more of a decline than any other academic library area mentioned in the context of this question. However, we see a lot of growth in responses to that question for areas that can use the data expertise currently under-tapped in cataloging and metadata work: areas such as Digital preservation and archiving; Archives, rare books, and special collections; Assessment and data analytics; Specialized faculty research support (including data management); and Electronic resources management. This all uses the skills of cataloging and metadata workers in different ways, but we also need to recognize that there are different and varied skills represented in cataloging and metadata work as it exists now. One way to conceptualize this is the divide in skills required between original MARC cataloging, where the focus is very much on the details of a single object and following numerous standards, versus what may have previously been called ‘database maintenance’ and is now, to me, better described as batch library data munging - where it is necessary to understand the data models involved and how to target enhancements to a set of records while avoiding errors in data outliers.

Cataloging versus Metadata & Where Semantics Hit Institutional Culture

A note on ‘cataloging’ versus ‘metadata’ as terms to describe the work: yes, I agree that it’s all metadata, and that continuing to support the divide between MARC and non-MARC work is a problem. However, I also recognize that departmental and institutional organizations and culture are not going to change overnight, and that these terms are very much tied into them. There is disruption, and then there is alienation, and as a supervisor, I’ve been very aware of the tense balance required therein. I don’t want to isolate the catalogers; I really cannot afford to isolate the administration that helps decide the catalogers’ professional futures (whether job lines remain upon vacancy; whether their work continues to be recognized and supported; whether they get reassigned to other units with easier-to-explain areas of operation and outreach; etc.). But I know things need to change. This explains in part why I am wary of the use of new terms (metadata is not a new term, but its use for describing MARC work has only recently grown exponentially), because they can carry the possibility of turning people away from changes, as folks might see the new labels as part of a gimmick and not real, substantive change. I will generally go with describing all of this work as metadata in most contexts, because I do feel like we are beginning to integrate our data work in a way that the catalogers now buy into what is really meant by saying metadata. Yet in certain contexts, I do continue to use cataloging to mean MARC cataloging and metadata to mean non-MARC work, because it is admittedly an easy shorthand, as well as tied into other (perhaps political, perhaps not) considerations.

Back to the post at hand: what I’ve started to build, and see some forward movement on (as well as some hesitation), is a more integrated cataloging & metadata unit. The catalogers did do some metadata work before I arrived, by which I mean non-MARC metadata creation. However, this was severely limited to working with descriptive metadata in a vacuum - namely, in a metadata editor made explicitly for a particular project. From what I can tell, the metadata model and application profile were created outside the realm of the catalogers; they were just brought in to fill in the form for one object at a time. This is not unusual, but it hardly touches on what metadata work can be. Worse, the metadata work the catalogers did ended up not being meaningfully used in any platform or discovery layer, resulting in some disenchantment with non-MARC metadata work as a whole (seeing it as less important than ‘traditional MARC cataloging’, or as unappreciated work). I can absolutely understand how this limited-view editor and these metadata work decisions could make things more efficient; I somewhat understand the constant changes in project management that left a lot of metadata work unused; but I am now trying to unravel just what this means for the catalogers’ understanding of high-level data processes outside of MARC, and how the work they do in MARC records can apply similarly to the descriptive metadata work done elsewhere. I also need to rebuild their trust that their work will be appreciated and used in contexts beyond the MARC catalog. The jury is still out on how this is going.

Cataloging/Metadata Reskilling Workflows So Far

So yeah, yeah, lots of thoughts and hot air on what I am trying to do, what I hope happens. What have I tried? And how is it going? How are the catalogers reacting? Here are a few examples.

Metadata Remediation Sprint

When I first arrived, we had a ‘metadata remediation sprint’. This was a chance for us all to get to know each other in a far less formal work environment - as well as a chance for the catalogers to get to know some of my areas of real interest in data work, in particular, non-MARC metadata remediation using OpenRefine, a set of Python scripts, and GitHub for metadata versioning. This event built on the excitement of the recently announced Digital Library of Tennessee, a DPLA Service Hub with aggregation and metadata work happening at UTK (I’m the primary metadata contact for this work). The catalogers knew something about what this meant, and not only did they want to learn more, but they wanted to get involved. I tried my best to build a data remediation and transformation pipeline for our own UTK collections that could involve them in this work, but some groundwork for batch metadata remediation had to be laid first, and this sprint helped with that.

The day involved an 8:30 AM meeting (with coffee and pie for breakfast) where I explained the metadata sets, the OAI-PMH feeds of XML records, the remediation foci - moving DC to MODS, reconciling certain fields against chosen vocabularies, cleaning up data outliers - and working with this metadata in OpenRefine. There was some talk about the differences between working with data record by record versus working with a bunch of records in batch, as we had at that point about 80,000 DC records needing to be pulled, reviewed, remediated and transformed, collection by collection. Then each cataloger was given a particular dataset (chosen according to topical interest) and given the day to play around with migrating this metadata. It was a group focus on a particular project, so a kind of ‘sprint’.
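
To make the record-by-record versus batch distinction concrete, here is a minimal Python sketch (stdlib only, and not our actual scripts) of pulling oai_dc records from an OAI-PMH response into simple dicts ready for batch review. The sample record is invented; a real harvest would request the feed over HTTP and page through resumptionTokens.

```python
# Illustrative only: parse oai_dc records from an OAI-PMH ListRecords
# response into {dc element: [values]} dicts, ready for batch review.
import xml.etree.ElementTree as ET

OAI_NS = "{http://www.openarchives.org/OAI/2.0/}"
DC_NS = "{http://purl.org/dc/elements/1.1/}"

def parse_oai_dc(xml_text):
    """Return one dict per <record>, mapping DC element names to value lists."""
    root = ET.fromstring(xml_text)
    records = []
    for rec in root.iter(OAI_NS + "record"):
        fields = {}
        for el in rec.iter():
            if el.tag.startswith(DC_NS):
                name = el.tag[len(DC_NS):]
                fields.setdefault(name, []).append((el.text or "").strip())
        records.append(fields)
    return records

# A tiny invented sample, standing in for one collection's feed.
SAMPLE = """<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
  <ListRecords><record><metadata>
    <oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/"
               xmlns:dc="http://purl.org/dc/elements/1.1/">
      <dc:title>Smoky Mountains photograph</dc:title>
      <dc:subject>Mountains</dc:subject>
      <dc:subject> Tennessee </dc:subject>
    </oai_dc:dc>
  </metadata></record></ListRecords>
</OAI-PMH>"""

recs = parse_oai_dc(SAMPLE)
```

Once records look like plain dicts, reviewing 80,000 of them collection by collection becomes a scripting problem rather than an editor problem.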

The sprint was also a way for me to gauge each cataloger’s possible interest in doing more of this batch metadata work, to see who really wanted to dive into learning new tools, and to get a sense of each person’s facility with working with metadata sets. This is not to say at all that each cataloger couldn’t learn and excel at batch metadata work, using new tools, or metadata work generally; but matching different aspects of metadata work to folks’ work personalities was key, in my admittedly limited opinion. In assigning new projects and reskilling, I didn’t want to throw anyone into new areas of work that they wouldn’t be a good fit for or have some sort of overlapping expertise with, as there was already enough change going on. Cataloging & metadata work is not always consistent or uniform, so there are, and remain, different types of projects to be better integrated into workflows and given to the person best able to really take ownership (in a positive way) of a project and excel with it.

The catalogers had so much untapped expertise already that the sprint went very well. Some catalogers warmed to OpenRefine right away, with the ability to facet, see errors, and repair/normalize across records. Other catalogers preferred to stick with using Excel and focusing in on details for each record. All the datasets, each a collection pulled from the OAI-PMH feed and prepared as CSV and as OpenRefine projects by me beforehand, were retrieved from GitHub repositories, giving the catalogers a view of version control and one possible use of Git (without me saying, ‘Hey, I’m going to teach you version control and coding stuff’ - the focus was on their area of work, metadata). Better yet, I was able to get their work into either migration paths for our new digital collections platform or even into the first group of records for the DPLA in Tennessee work, meaning the catalogers saw immediately that their work was being used and greatly appreciated (if only by me at first, though others have taken note of this work as well).
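
Preparing each collection as a CSV that OpenRefine can open is a small flattening step. The sketch below is a hedged illustration: the "|" delimiter for repeated values and the sample records are my own choices here, not a description of the exact files we used.

```python
# Illustrative only: flatten harvested records (dicts of value lists)
# into CSV text for OpenRefine, joining repeated elements with "|".
import csv
import io

def records_to_csv(records, delimiter="|"):
    # Union of all field names across the collection becomes the header row.
    columns = sorted({key for rec in records for key in rec})
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(columns)
    for rec in records:
        writer.writerow([delimiter.join(rec.get(col, [])) for col in columns])
    return buf.getvalue()

sample = [
    {"title": ["Letter, 1918"], "subject": ["World War, 1914-1918", "Tennessee"]},
    {"title": ["Farm photograph"]},
]
csv_text = records_to_csv(sample)
```

Committing these CSVs to a repository is what lets the catalogers pull datasets with a Git client without anyone announcing a version-control lesson.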

From that day, beyond getting the catalogers comfortable with asking me questions and attacking new projects (and new types of projects), the catalogers were able to claim ownership of new kinds of work for a broader view of metadata, helping generate buy-in for some of the metadata migration and integration work I was talking about earlier. Some of the harder-to-migrate-and-map collections, due to bad original metadata creation practices, were handed off to the catalogers who are more record-focused; other collections, needing more transformation and reconciliation work, were handed off to the catalogers who really enjoyed working with OpenRefine and batch editing. In particular, OpenRefine’s GREL (Google Refine Expression Language; think JavaScript, but for data editing) has warmed two of the catalogers to the idea of scripting (but again, without someone explicitly saying ‘hey, you need to learn scripting’). They are all aware of GitHub now, and some have even begun using a GitHub client on their workstations to access new datasets for migration work.
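
For anyone who hasn’t used GREL, its one-liners map closely onto plain Python. This invented sketch approximates an OpenRefine text facet (value counts that surface outliers) and a GREL-style cleanup such as value.trim().replace("Tenn.", "Tennessee") applied across a batch.

```python
# Illustrative only: a text facet and a GREL-like batch cleanup in Python.
from collections import Counter

def facet(values):
    """Like an OpenRefine text facet: trimmed value -> occurrence count."""
    return Counter(value.strip() for value in values)

def normalize(value):
    """Roughly GREL's value.trim().replace("Tenn.", "Tennessee")."""
    return value.strip().replace("Tenn.", "Tennessee")

subjects = ["Tennessee", " Tennessee ", "Tenn.", "Mountains"]
counts = facet(subjects)             # faceting surfaces "Tenn." as an outlier
cleaned = [normalize(v) for v in subjects]
```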

The catalogers have done amazingly well with all of this, and I know how lucky I am to work with a team that is this open to change.

Moving Some to Batch Data Work

This movement toward batch metadata work and remediation doesn’t just stick with the original focus on non-MARC metadata from that sprint day. In particular, two of the catalogers have really taken on a lot of the batch metadata normalization and enhancement with our MARC data as well, informed perhaps by seeing batch data work outside of the context of MARC/non-MARC or specific platforms during that day or in other such new projects given to them. Though, to be fair, I need to admit two things (at least):

  1. One of the catalogers is already the ‘database maintenance’ person, or what I’d call a data administrator, though her position title (as opposed to her HR title) was, upon my arrival, still blank. This is tied up with the idea, held in administration, that this database maintenance work is not ‘cataloging’ in a traditional understanding - highlighting the record-by-record creation versus data munging divide that seems to exist in too many places still. I think this work will lead metadata work in the future, especially as content specialists are more often the metadata creators in digital collections, and catalogers need to be brought in increasingly for data review, remediation, enhancement, and education/outreach. Don’t think this will happen with MARC records? I think it already is happening when we consider the poor state of most vendor MARC records we often accept. We need to find better ways to review/enhance these records while balancing against the possibility they’ll be overwritten. Leading to my second admission…
  2. The MARC/non-MARC work is still very much tied to platforms, especially the Alma ILS, which our department has really bought into at a high level. One of the catalogers who did very well with OpenRefine is now working with the vendor records for electronic resources using MARCEdit outside of the ILS. She has done very well at reviewing these records in MARCEdit in batch, applying some normalization routines, and only then importing them into our Alma ILS. While these records do eventually end up in the ILS, it is my hope that working with the data itself outside of Alma gives her more context for non-MARC data work outside of other platforms and editors. I don’t know if this is the case, however.
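
MARCEdit’s saved tasks apply the same edit to every record in a file before anything touches the ILS; that review-then-normalize pattern can be sketched in Python. Everything below is invented for illustration - each ‘record’ is a simple {tag: value} dict rather than real MARC, and the two rules are examples, not our actual normalization routines.

```python
# Illustrative only: run ordered (tag, fix) rules over a batch of
# simplified records before import, MARCEdit-task style.
def apply_rules(records, rules):
    for record in records:
        for tag, fix in rules:
            if tag in record:
                record[tag] = fix(record[tag])
    return records

rules = [
    ("245", lambda v: v.rstrip(" /")),      # drop ISBD-style trailing " /"
    ("020", lambda v: v.replace("-", "")),  # strip hyphens from ISBNs
]

batch = [
    {"245": "Reskilling catalogers /", "020": "978-0-8389-1364-2"},
    {"245": "Library data work"},
]
batch = apply_rules(batch, rules)
```

Because the rules run in order over the whole file, the edits are reviewable and repeatable - the same appeal a saved MARCEdit task list has over hand-editing in the ILS.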

For the catalogers who are more record-focused, we’ve gotten some cleanup projects requiring more manual review lined up - this includes reviewing local records where RDA conversion scripts/rules cannot be applied automatically because they need a closer review, or sets of metadata where fields are used too inconsistently to have metadata mappings applied in batch. This work is not pressing/urgent, so it can be worked on when a break from traditional MARC cataloging is needed, or the platforms for traditional MARC cataloging are down (which seems to occur more and more often).

Centralized, Public, Group-created Documentation

In all of this, one of the key things I’ve needed to do is get centralized, responsive (as in changing according to new needs and use cases), and open/transparent documentation somewhere. There was some documentation stored in various states on a shared drive when I arrived, but a lot of it had not been updated since the previous supervisor’s tenure. There were multiple versions of procedures floating about in the shared drive as well as in print-outs, leading to other points of confusion. Additionally, it was difficult, sometimes impossible, for other UTK staff who sometimes need to perform minor cataloging work or understand how cataloging happens to access these documents in that shared drive.

Upon my arrival, the digital initiatives department was already planning a move to Confluence wikis for their own documentation; I immediately signed up for a Cataloging wiki space as well. In getting this wiki set up, a lot of the issue was (and remains) buy-in - not just for reading the wiki, but for using and updating the wiki documentation. Documentation can be a pain to write up, and there can be fear about ‘writing the wrong thing’ for everyone to see, particularly in a unit that has had many different workflows and communication whirlpools.

I’ve tried my best to get wiki documentation buy-in by example and by creating an open atmosphere, though I worry about how successful I’ve been with this. I link to everything in the wiki: procedures, legacy documentation in the process of being updated, data dictionaries, mappings, meeting notes, and unit goals. Catalogers are asked to fill in lacunae that I can’t fill myself, either due to lack of UTK-specific knowledge/experience or time. I try to acknowledge their work on documentation wherever possible - meetings, group emails, etc. Other staff members outside of the Cataloging Unit are often pointed to the wiki documentation for questions and evolving workflows. I hope this gives them a sense of appreciation for doing this work.

Documentation and wiki buy-in remains a struggle, not because the catalogers don’t see the value of this work (I believe), but because documentation takes time and can be hard to create. To avoid pushing too hard on getting this documentation filled out immediately, and thus risking burnout, I’ve not insisted on rewriting all possible policies and procedures at once, despite there being many standing documentation gaps. Instead, we aim to focus on documenting areas that we run across in projects, that the catalogers are particularly interested in (like music cataloging, special collections procedures, etc.), or that we are currently working through. I’m heartened to say that, increasingly, they are sharing their expertise more and more in the wiki.

To be continued…

I have outstanding ideas and actions to discuss, including our policy on cataloger statistics (and how they are used), the recent experience of revising job descriptions, and the difficulty of being both a metadata change agent and an advocate for the catalogers when cataloging work is often overlooked or underestimated by administration or other departments (particularly as more metadata enhancement is done instead of, or in tandem with, metadata creation). But this will need to be part of a follow-up post.

I’m new to all this, and I’m trying my best to be both a good colleague and supervisor while wanting to move the discussion on what metadata work is in our library technology communities. I have a lot of faults and weaknesses, and as such, if you’re reading this and have ideas, recommendations, criticisms, or other thoughts, please get in touch: @cm_harlow on Twitter (and thanks for doing so). Whatever happens in the future, whether I stay a supervisor or not in the years to come (I do sorely miss having my primary focus on metadata ‘research and development’, so to speak), this has been a really engaging experience so far.

Nicole Engard: Bookmarks for September 22, 2015

planet code4lib - Tue, 2015-09-22 20:30

Today I found the following resources and bookmarked them on Delicious.

  • Vector: Vector is a new, fully open source communication and collaboration tool we’ve developed that’s open, secure and interoperable. Based on the concept of rooms and participants, it combines a great user interface with all core functions we need (chat, file transfer, VoIP and video), in one tool.
  • ResourceSpace: Open source digital asset management software is the simple, fast, & free way to organize your digital assets

Digest powered by RSS Digest

The post Bookmarks for September 22, 2015 appeared first on What I Learned Today....

Related posts:

  1. Software Freedom Day in September
  2. Another big name online office suite
  3. Convert Your Files

LITA: Personal Digital Archiving – a new LITA web course

planet code4lib - Tue, 2015-09-22 17:23

Check out the latest LITA web course:
Personal Digital Archiving for Librarians

Instructor: Melody Condron, Resource Management Coordinator at the University of Houston Libraries.

Offered: October 6 – November 11, 2015
A Moodle-based web course with asynchronous weekly content lessons, tutorials, assignments, and group discussion.

Register Online, page arranged by session date (login required)

Most of us are leading very digital lives. Bank statements, interaction with friends, and photos of your dog are all digital. Even as librarians who value preservation, few of us organize our digital personal lives, let alone back them up or make plans for them. Participants in this 4-week online class will learn how to organize and manage their digital selves. Further, as librarians, participants can use what they learn to advocate for better personal data management by others. ‘Train-the-trainer’ resources will be available so that librarians can share these tools and practices with students and patrons in their own libraries after taking this course.


At the end of this course, participants will:

  • Know best practices for handling all of their digital “stuff” with minimum effort
  • Know how to save posts and data from social media sites
  • Understand the basics of file organization, naming, and backup
  • Have a plan for managing & organizing the backlog of existing personal digital material in their lives (including photographs, documents, and correspondence)
  • Be prepared to handle new documents, photos, and other digital material for ongoing access
  • Have the resources to teach others how to better manage their digital lives

Here’s the Course Page

Melody Condron is the Resource Management Coordinator at the University of Houston Libraries. She is responsible for file loading and quality control for the library database (basically she organizes and fixes records for a living). At home, she is the family archivist and recently completed a 20,000+ family photo digitization project. She is also the Chair of the LITA Membership Development Committee (2015-2016).


October 6 – November 11, 2015


  • LITA Member: $135
  • ALA Member: $195
  • Non-member: $260

Technical Requirements:

Moodle login info will be sent to registrants the week prior to the start date. The Moodle-developed course site will include weekly new content lessons and is composed of self-paced modules with facilitated interaction led by the instructor. Students regularly use the forum and chat room functions to facilitate their class participation. The course web site will be open for 1 week prior to the start date for students to have access to Moodle instructions and set their browser correctly. The course site will remain open for 90 days after the end date for students to refer back to course material.

Registration Information:

Register Online, page arranged by session date (login required)
Mail or fax form to ALA Registration
call 1-800-545-2433 and press 5

Questions or Comments?

For all other questions or comments related to the course, contact LITA at (312) 280-4268 or Mark Beatty.

SearchHub: How Getty Images Executes Managed Search with Apache Solr

planet code4lib - Tue, 2015-09-22 17:05
As we count down to the annual Lucene/Solr Revolution conference in Austin this October, we’re highlighting talks and sessions from past conferences. Today, we’re highlighting Jacob Graves’s session on how Getty Images configures Apache Solr for managed search. The problem is to create a framework for business users that will:
  • Hide technical complexity
  • Allow control over scoring components and result ordering
  • Allow balancing of these scoring components against each other
  • Provide feedback
  • Allow visualization of the results of their changes
We call this Managed Search.

Managed Search: Presented by Jacob Graves, Getty Images from Lucidworks

Join us at Lucene/Solr Revolution 2015, the biggest open source conference dedicated to Apache Lucene/Solr on October 13-16, 2015 in Austin, Texas. Come meet and network with the thought leaders building and deploying Lucene/Solr open source search technology. Full details and registration…

The post How Getty Images Executes Managed Search with Apache Solr appeared first on Lucidworks.

Roy Tennant: Astonishing Public Service

planet code4lib - Tue, 2015-09-22 15:21

Last night I dined at the bar of a run-of-the-mill chain restaurant. On the road for business this is my usual modus operandi, with the variant of dining in the hotel bar instead. You get the picture.

So my bartender in this instance turns out to be flat out awesome. She’s there when I want her and not when I don’t. A simple signal while I’m on a long phone call with my wife answers any question. She’s attentive but not hovering. She knows which questions to ask and when, and also when to stay away. She even recognizes me from previous visits, often a year apart. She gives, in other words, astonishing service. Believe me, I know it when I see it.

At this little chain restaurant in a town that most people have never heard of, I was getting the kind of service that I’ve received at some of the most expensive restaurants in Sonoma, Napa, Chicago, New York, Paris, San Francisco — you name it. And often (sadly) better.

The point is this: great service is not always tied to the money being paid for that service. I agree that if you are paying top dollar at an expensive restaurant you expect excellent service. But the converse is not true: that you will necessarily receive poor service at a much less expensive restaurant. This is because service has more to do with the individual providing the service than it does with anything else.

Sure, good training can be key. But some servers learn on the job and intuitively understand what great service means. And libraries are no different. Individuals can be given the tools they need to provide excellent customer service regardless of the monetary resources at hand.

Great service, I assert, can be boiled down to a few principles that can be employed in any organization that attempts to provide it:

  • Attentiveness. A moment of breakthrough understanding about service for me came when I was at a restaurant and I happened to notice a waitperson standing aside, surveying the tables. He/she (it doesn’t matter which) was looking for anything that needed doing. Was anyone light on water? Was a table finishing their meal? Would someone need to be alerted to bring the bill? This level of attentiveness to the entire enterprise is, sadly, rare, whether it be a restaurant or a library. What would happen, do you think, if you set a library staffer to simply observe users of the library and try to discern what they needed before they even express it?
  • Distance. What may appear at first glance to be the opposite of attentiveness is distance, but it isn’t. True attentiveness also means perceiving when to stay away. Frankly, I find it quite annoying to be interrupted in the middle of a conversation with my dinner partner simply for him/her to ask if everything is OK. One of the secrets of great service is to know when to step back and let the magic happen. Ditto with libraries, although we are less cursed with this particular mistake due to lack of staff.
  • Listening. To know what someone wants, you need to actively listen and even, as any reference librarian knows, ask any necessary clarifying questions.
  • Anticipation. Outstanding service anticipates needs. Libraries try to do this in various ways, but I also believe that we can do a better job of this.
  • Permission. I cut my teeth in libraries by running circulation operations. As an academic library circulation supervisor, I understood how important it was to give my workers permission to make exceptions to certain rules. For other rules, they were to escalate the issue to me so I could decide whether a rule could be bent. Either way, you should always provide your staff with clear guidance on when bending a rule can enhance public service, along with the permission to do so.

These are just some of the strategies that occur to me in developing astonishing public service. Feel free to share your thoughts in a comment below. Libraries are nothing if not public service organizations, so getting this really right is essential to our success.

District Dispatch: Webinar: Protect the freedom to read in your library

planet code4lib - Tue, 2015-09-22 14:50

What do you do when a patron or a parent finds a book in your library offensive and wants to take it off your shelves? How do you remain sensitive to the needs of all patrons while avoiding banning a title? How can you bring attention to the issue of book banning in an effective way? In this 1-hour webinar presented by ALA’s Office for Intellectual Freedom and SAGE, three experienced voices will share personal experiences and tips for protecting and promoting the freedom to read.

Tuesday, September 29 | 9am PDT | 10am MDT | 11am CDT | 12pm EDT

Register now!

Washington, D.C. Public Library, Photo by Phil Freelon

Part I: How to use open communication to prevent book challenges

Kate Lechtenberg, teacher librarian at Iowa’s Ankeny Community School District, finds that conversations between librarians, teachers, students, and parents are key to creating a culture that understands and supports intellectual freedom. “The freedom to read is nothing without the freedom to discuss the ideas we find in books.”

Part II: How to handle a book challenge after it happens

Kristin Pekoll, assistant director of ALA’s Office for Intellectual Freedom, will share her unique experiences facing several book challenges (and a potential book burning!) when she served as a young adult librarian. How did she address the needs of upset parents and community members while maintaining unrestricted access to information and keeping important books on her shelves?

Part III: How to bring attention to the issue of banned books

Why would a supporter of free speech and open learning purposely ban a book? Scott DiMarco, director of the North Hall Library at Mansfield University, reveals how he once banned a book to shed light on library censorship and what else he is doing to support the freedom to read on his Pennsylvania campus.

Following the three presentations, there will be some time for Q&A moderated by Vicky Baker, deputy editor of the London-based Index on Censorship magazine.

The post Webinar: Protect the freedom to read in your library appeared first on District Dispatch.
