FOSS4Lib Upcoming Events: HydraCamp

planet code4lib - Wed, 2015-01-28 21:19
Date: Monday, March 9, 2015 - 08:00 to Thursday, March 12, 2015 - 17:00
Supports: Hydra, Fedora Repository

Last updated January 28, 2015. Created by Peter Murray on January 28, 2015.

See also: Advanced Blacklight Workshop, to be held the day after HydraCamp at the same location.

From the registration page:

Data Curation Experts will lead a four-day HydraCamp hosted by Yale University Library from Monday, March 9th through Thursday, March 12th, 2015. HydraCamp is open to all developers interested in building skills working with the Hydra technology framework. High-level course topics include:

Library of Congress: The Signal: All in the (Apple ProRes 422 Video Codec) Family

planet code4lib - Wed, 2015-01-28 19:23

FADGI’s report on selected born digital video projects in a range of federal agencies includes the use of various Apple ProRes 422 codecs.

We’ve spent a lot of time recently thinking about digital video issues. As mentioned in a previous blog post, the Federal Agencies Digitization Guidelines Initiative published several reports on this topic including “Creating and Archiving Born Digital Video.” Work on the “Eight Federal Case Histories” (PDF) report nudged us to add the Apple ProRes 422 family of video codecs to the Sustainability of Digital Formats website because both the American Folklife Center’s Civil Rights History Project and NOAA’s Office of Ocean Exploration and Research Okeanos Explorer make use of these codecs.

Developed and maintained by Apple, Inc., ProRes is a family of proprietary, lossy compressed, high-quality intermediate codecs for digital video primarily supported by the Final Cut Pro suite of post-production and editing software programs. There are two main branches of Apple ProRes: the Apple ProRes 422 Codec Family and the Apple ProRes 4444 Codec Family.

The Apple ProRes 422 Codec Family comprises four subtypes, each geared for a different end use. The 422 codecs are differentiated mainly by expected file size ranges, software version support, and data rate limits. Data rate is important in three ways: first, it governs quality (the higher the rate, the better the quality); second, file size (the higher the rate, the bigger the file); and third, it affects the ability of a given network to carry a video signal in real time for viewing (the lower the rate, the easier to carry).

Apple ProRes 422 HQ is the highest data-rate version of the ProRes 422 codecs, applying the least compression for the best imagery and resulting in the largest files. Its high quality means that it is often selected for professional HD video production, especially in the creation of documentaries and other programs for broadcast television. One video expert states that ProRes 422 HQ provides nearly the same quality as uncompressed 10-bit 4:2:2 video but at about 1/5th the file size and throughput (the 4:2:2 ratio represents the type of chroma subsampling in each version).

Apple ProRes 422, with the second-highest data rate of the group, is frequently used for multistream, real-time editing at a significant storage savings over uncompressed video and Apple ProRes 422 HQ.

Apple ProRes 422 LT, the third-highest data-rate version, is also considered an editing codec, but its smaller file sizes lend it to environments where storage capacity and data rate are at a premium.

Apple ProRes 422 Proxy is the lowest data-rate version of the ProRes 422 codecs and is often used in offline post-production work that requires low data rates but also a full screen picture.
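The data rate/file size relationship described above can be put in rough numbers. The figures below are the approximate target rates Apple has published for 1920x1080 at 29.97 fps; treat them as nominal assumptions, since actual files vary because the codec is variable bit rate.

```python
# Back-of-the-envelope storage estimates for one hour of 1080p29.97
# video at the approximate nominal data rates of each ProRes 422
# variant (Mbit/s). Actual sizes vary: ProRes is variable bit rate.

NOMINAL_RATES_MBPS = {
    "ProRes 422 Proxy": 45,
    "ProRes 422 LT": 102,
    "ProRes 422": 147,
    "ProRes 422 HQ": 220,
}

def gigabytes_per_hour(mbps: float) -> float:
    """Convert a data rate in megabits per second to gigabytes per hour."""
    return mbps * 3600 / 8 / 1000  # Mbit/s -> Mbit/h -> MB/h -> GB/h

for name, rate in NOMINAL_RATES_MBPS.items():
    print(f"{name}: ~{gigabytes_per_hour(rate):.0f} GB/hour")
```

At these rates, an hour of ProRes 422 HQ runs to roughly 99 GB, versus about 20 GB for Proxy, which makes concrete why data rate drives both quality and storage planning.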

“Indian Head interlace,” derived from RCA_Indian_Head_test_pattern.JPG, licensed under Public Domain via Wikimedia Commons. Interline twitter, demonstrated using the Indian Head test card. Image by Damian Yerrick.

The key character traits that define the ProRes 422 family are support for:

  • 4:2:2 source material, as well as 4:1:1 and 4:2:0 source material if the chroma subsampling is upsampled to 4:2:2 prior to encoding;
  • any frame size (including SD, HD, 2K, 4K, and 5K) at full resolution;
  • 10-bit sample depth;
  • intraframe (I-frame-only) encoding; and
  • variable bit rate.

While ProRes is a 10-bit native codec, it can be used with either 8- or 10-bit sources; 8-bit sources (such as DVCProHD) are upsampled to a 10-bit file.

The Apple ProRes codecs, both the 422 and 4444 families, support both interlaced and progressive scanned images and preserve the scanning method used in the source material.

The Apple ProRes 4444 Codec Family comprises Apple ProRes 4444 and Apple ProRes 4444 XQ. The fourth “4” in the codec family name indicates this format’s support for alpha (transparency) data, in contrast to ProRes 422. Other features include picture sizes ranging as high as 5K and 4:4:4 chroma subsampling at up to 12 bits per sample. Alpha channel sampling can be as high as 16 bits. There is some use of the ProRes 4444 family in the production of advertising and in content destined for theatrical distribution.

Resolucions 4k by Espolet96 (own work) [Public domain], via Wikimedia Commons

As demonstrated in the FADGI Born Digital Video case histories, the ProRes family is widely adopted in professional moving image production. The popularity of Apple’s Final Cut software suite, as well as officially licensed uses in specific products and workflows, has encouraged uptake of the family. Third-party implementations, including FFmpeg, have broadened adoption beyond Apple devotees.
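Since FFmpeg is one of those third-party implementations, here is a sketch of transcoding to each ProRes 422 variant with FFmpeg's prores_ks encoder. The filenames are placeholders; profiles 0 through 3 correspond to Proxy, LT, 422, and HQ.

```shell
# Transcode a source file to each ProRes 422 variant using FFmpeg's
# prores_ks encoder. "input.mov" and the output names are placeholders.
ffmpeg -i input.mov -c:v prores_ks -profile:v 0 -pix_fmt yuv422p10le output_proxy.mov
ffmpeg -i input.mov -c:v prores_ks -profile:v 1 -pix_fmt yuv422p10le output_lt.mov
ffmpeg -i input.mov -c:v prores_ks -profile:v 2 -pix_fmt yuv422p10le output_422.mov
ffmpeg -i input.mov -c:v prores_ks -profile:v 3 -pix_fmt yuv422p10le output_hq.mov
```

Forcing `yuv422p10le` keeps the 10-bit 4:2:2 pixel format the family is built around; note that an FFmpeg-encoded file is not bit-identical to Apple's own encoder output.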

For preservation-minded video producers, ProRes presents a format selection dilemma, as suggested by the AFC case history for the Civil Rights History Project interviews. Using the Format Sustainability Factors website criteria, ProRes would get high marks for Adoption and Clarity and low marks for Disclosure, Documentation and Transparency.

As with any project, available skills and resources also play a part in determining deliverables. In the case of the CRHP format selection, significant factors in the decision to use ProRes 422 HQ included AFC’s existing non-linear editing system, an Apple Mac with Final Cut Pro, which allowed native editing in ProRes codecs; the proficiency of AFC staff with that system; and the fact that the KiPro portable hard disk used on the project only supported Apple ProRes codecs. (It should be noted that the first two demonstrate the high marks for Adoption!) In the face of the preservation trade-offs reflected in the Sustainability Factor scores, these considerations, combined with ProRes 422 HQ’s high image quality, led to the decision to use ProRes on this project.

LITA: Jobs in Information Technology: January 28

planet code4lib - Wed, 2015-01-28 18:01

New vacancy listings are posted weekly on Wednesday at approximately 12 noon Central Time. They appear under New This Week and under the appropriate regional listing. Postings remain on the LITA Job Site for a minimum of four weeks.

New This Week

Digital Content Strategist, Oak Park Public Library, Oak Park, IL

Executive Director, Metropolitan New York Library Council (METRO), New York, NY

Systems and Technology Librarian, Catawba College Library, Salisbury, NC

Visit the LITA Job Site for more available jobs and for information on submitting a job posting.


DPLA: Metadata Aggregation Webinar Video and Extended Q&A

planet code4lib - Wed, 2015-01-28 15:50

Thanks to all of you who attended our webinar. We had a great turnout and hope you found it interesting and informative.

As promised, you can now find the video for our recent Metadata Aggregation webinar below or over at our Vimeo account. Links to download each presenter’s slides are included in this post as well. Unfortunately, we didn’t have enough time to get to all of the questions that came up during the webinar. However, our presenters agreed to answer a few more in writing for our blog. You can find them below in the Extended Q&A section.

DPLA aggregation webinar 1-22-15, 2.03 PM from DPLA on Vimeo.

Download the presentation slides from:

Extended Q&A

How do you prepare potential data contributors to ensure metadata quality? For example, do you provide any training or work closely with them in the first few months?

NCDHC: We haven’t yet had a request for formal training, but we do work closely with potential contributors, providing feedback on mappings and any quirks we find with the structure of their data. Because we require so few fields and don’t check metadata quality very far, this may be part of why little training has been needed or sought.


How many programmers does your operation require? FTE and PTE?

NCDHC: Stephanie is our programmer. For starting as a hub, she spent around .25 FTE for two months. Now, for maintenance, it varies, but it’s about 1 hour per new partner, and 4-5 hours if we start taking in a new metadata format (for instance, when we started taking in METS).

DPLA: Our tech team has four full time members. Not all devote their entire day to what might be described as “programming” though. We have a Director of Technology, Mark Matienzo, who still finds some time to develop code in between administrative and executive duties. Our Metadata and Platform Architect, Tom Johnson, also develops code but spends a lot of his time designing the overall system. Our Technology Specialists Audrey Altman and Mark Breedlove work on developing our codebase, but also work on server administration, web development, and support for our API.

SCDL: We currently run SCDL’s aggregation technology with the use of one full time Digital Services Librarian who is an experienced programmer.  His time commitment to the project varies as needed but I would estimate that it is less than 25% of his time.


I’d be interested in hearing from all of the presenters about where they think we are on the adoption curve. Are “most,” or “some” or “few” of the potential partners in each state already contributing to DPLA through the hubs?

NCDHC: We currently have 14 data providers, and have made contact with 17 others in the state. Of those 17, we’ve heard back varying degrees of interest. Right now, we figure we may add 1-3 additional data providers  per year.

SCDL: I think I can safely say that most of the potential partners in the state are either contributing to DPLA through the hubs or are in the process of getting ready to contribute. This “most” figure does not include potential partners who aren’t interested in participating. For the most part, those who want to participate and reach out to SCDL eventually get to participate.


Are contributing partners responsible for fixing the mapping/missing field issues you identify? (rather than you updating the data after aggregating)? How is that process managed with contributors? Is there a timeline/turnaround time they are given to remediate their data?

NCDHC: We generally give our data providers about 10 days notice before a harvest. That leaves us a good bit of time to identify any outstanding issues. Generally, changing mappings is so quick and easy that data providers can fix it within minutes. It’s the other issues (fixing rights statements, for example) that often take longer.

SCDL: If the partner is still under review and their data has not been harvested at all yet, we usually give them a much longer window to correct mismappings and missing fields. However, if they are a regular partner and they’ve added new collections that, post-harvest, are noticed to be incorrectly formatted (mismapping, etc.), then we usually request a quick correction (under a week) and then we reharvest. This quick turnaround is usually OK because at this point the problem is isolated to one or more new collections. Reharvesting is not a problem because we are dealing with relatively small numbers from our partners. Our smallest partner has just under 1,000 records and our largest has just over 40,000 records.


If you want to wind up with MODS, why not start with MODS/METS? If an institution was just starting their local repository, would DPLA suggest they adopt MODS as their metadata standard, or should they choose Dublin Core?

NCDHC: We take what our institutions provide from their systems, and by a large majority that’s Dublin Core. We don’t ask them to change their metadata schema in order to participate. I’m almost positive that would mean little to no participation! CONTENTdm doesn’t accommodate MODS or METS, and most of the digital collections in content management systems in North Carolina are in CONTENTdm.

DPLA: We try not to recommend any metadata standard. It all really depends on the data you are receiving from your partners and what makes sense. Yes, MODS is very robust and provides a lot of granularity, but its complexity can also be a burden leading to complicated crosswalks. That said, we have successfully worked with MODS, with both simple and qualified Dublin Core, and with MARC data. In the next month we will be announcing an updated metadata application profile along with updated documentation. We will be providing a generic crosswalk from both MODS and Dublin Core to provide some guidance for institutions starting from scratch who want to ensure their data can be easily harvested by DPLA. Stay tuned to our blog for details.
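The crosswalk idea above can be sketched in a few lines. This is a minimal illustration, not DPLA's actual crosswalk: the target field names are stand-ins modeled loosely on the "sourceResource" pattern in DPLA records, and the mapping table is an assumption for demonstration.

```python
# A minimal sketch of a metadata crosswalk: remap simple Dublin Core
# element names to illustrative target fields. The target names here
# are stand-ins, not the real DPLA Metadata Application Profile.

DC_TO_TARGET = {
    "dc:title": "sourceResource.title",
    "dc:creator": "sourceResource.creator",
    "dc:date": "sourceResource.date",
    "dc:subject": "sourceResource.subject",
    "dc:rights": "sourceResource.rights",
}

def crosswalk(record: dict) -> dict:
    """Remap a flat Dublin Core record; silently drop unmapped fields."""
    return {DC_TO_TARGET[k]: v for k, v in record.items() if k in DC_TO_TARGET}

incoming = {"dc:title": "Oral history interview", "dc:date": "1963", "dc:format": "video/mp4"}
print(crosswalk(incoming))
```

Even this toy version shows where complexity creeps in: every unmapped source field (here, dc:format) forces a decision about whether to extend the table or lose data, which is the burden the answer above describes.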


For SCDL – What are the benefits of mapping Date Digitized to dc:date for DPLA? I have seen items mapped like this showing up on the timeline in ways that may be confusing to users – e.g. a photo of a prehistoric item is showing up under 2015 on the timeline. 

SCDL: We actually don’t do that. We map Date Digitized to nothing, and we map date.created (which is the date of the analog object) to dc:date. Sorry for any confusion.

DPLA: What you’ve found in the timeline is an instance where inconsistent metadata within a data feed has caused an error. This is why consistency in metadata application is so important. We can write the crosswalk to map any appropriate field to DPLA’s date field. However, if that field is mistakenly used in some records for the digitization date rather than the creation date, there isn’t really anything we can do about that until after the mapping when the error is noticed. This is an example of the type of quality control we try to do in the final steps of aggregation. The severity of the problem merits how quickly we look for a solution. Some of the misapplied dates, unfortunately, won’t be fixed until the next harvest.


Are there notable aggregation issues with disparate instances (eg. ContentDM vs. BePress)?

SCDL: In our experience, BePress users had a lot less control over their feed, and that created problems when it came time to adjust their feed for aggregation. We are actively working with a BePress feed now, and it has been under review longer than any other feed. But we are making progress.


A question for Heather in SC: could she talk about the process of getting her clean metadata back into CDM after cleaning in Google Refine? We haven’t figured out a process of getting it back in easily.

SCDL: I know I answered this during the webinar, but I wanted to clarify further that we primarily use Open Refine as a step before adding content to a repository: metadata creation -> check metadata in Open Refine -> ingest to repository (CONTENTdm, DSpace, etc.). When I use it to assist in creating metadata QA/QC sheets, I don’t pass corrected metadata back to the supplying repository. I use Open Refine to quickly identify common errors across collections and then use that data to create the QA/QC spreadsheet. It is not often that I would delete and re-import a collection into CONTENTdm after passing the data through Open Refine. Once the collection is in CONTENTdm, I might use Open Refine to identify problems, but I’d use CONTENTdm’s find-and-replace to fix them.
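The kind of error-spotting described here is what Open Refine's key-collision clustering does. As a rough, plain-Python approximation of that idea (the fingerprint rule below is a simplified assumption, not Refine's exact algorithm), values that differ only in case, punctuation, or word order can be grouped so inconsistencies stand out:

```python
# Group metadata values that differ only in case, punctuation, or
# token order -- a simplified take on Open Refine's key-collision
# ("fingerprint") clustering used to spot inconsistent entries.

from collections import defaultdict
import re

def fingerprint(value: str) -> str:
    """Lowercase, strip punctuation, and sort unique tokens."""
    tokens = re.sub(r"[^\w\s]", "", value.lower()).split()
    return " ".join(sorted(set(tokens)))

def find_inconsistencies(values):
    """Return clusters containing more than one distinct raw value."""
    clusters = defaultdict(set)
    for v in values:
        clusters[fingerprint(v)].add(v)
    return {k: vs for k, vs in clusters.items() if len(vs) > 1}

subjects = ["Civil rights", "civil rights", "Civil Rights.", "Folk music"]
print(find_inconsistencies(subjects))
```

Each reported cluster is a candidate for a single find-and-replace pass back in the repository, which matches the workflow described in the answer.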


Why doesn’t DPLA or the aggregators pass the cleaned data back to the source?

SCDL: This would probably be better answered by DPLA but I know that a big issue for us would be getting the clean data back into our individual repositories without having to rebuild all of the collections.

DPLA: There are really two issues. The first for us is the question of how best to provide that data back to Hubs. We now store the data as JSON-LD, and in the near future will begin storing it as RDF triples. These data formats are typically not what is used by Hubs, so if we were to provide data back to them, reformatting it would lead to another workflow for mapping and quality assurance. Secondly, in the case of Service Hubs we are in most cases dealing with the metadata aggregator, not the creator. We could provide them with updated records to load into their system, but as soon as they re-harvest from their partners, those would be eliminated, or at the very least could lead to problems with versioning between the Hub and its partners. In short, the problem with providing clean or enriched data back to providers is a logistical one, not a technical one. It is an issue that DPLA is interested in solving for the benefit of all, but how that work can be done is yet to be determined.


Is the Copyright work being coordinated with the parallel work being done by DPLA/Europeana?

SCDL: We’ve been working with the DPLA on our forthcoming copyright materials, yes.

DPLA: For more information on DPLA’s rights work, see our blog post on the Knight News Challenge-winning project, Getting it Right on Rights. The project whitepaper should be released soon.


How are you preparing DPLA for the Semantic Web?

DPLA: We have been going through the process of updating our application profile since last fall to accommodate more linked data in our records. Version 4.0 of the MAP will have expanded classes for things like agents and concepts in addition to the place and collection classes that already exist. We will be adding properties for URI matches to linked data endpoints and will begin by incorporating these for geographic entities to start. In addition, the new MAP will incorporate several more namespaces to increase its interoperability in the Linked Open Data / Semantic Web world. These changes will open a path for DPLA to think about how LOD can be used by us and our partners in a sensible and beneficial way.


Is DPLA (SCDL, NC DHC) considering the use/implementation of ResourceSync as the NISO-standardized successor of OAI-PMH?

DPLA: Yes. We are very interested in ResourceSync. It has a lot of benefits over OAI-PMH, particularly in the area of syncing collections without having to entirely rewrite. However, use of ResourceSync would require that we also have Hubs that use it, and as of yet we do not have a Hub adopting it. This would be a great opportunity for our Hub network to work together, however, and it could easily become a feature integrated with “Aggregation-in-a-box.”
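For context, the OAI-PMH model being compared here can be sketched offline with the standard library. The XML below is a trimmed, made-up sample response, not a real feed; a production harvester would issue HTTP requests and keep following the resumptionToken until none is returned.

```python
# A minimal, offline sketch of parsing an OAI-PMH ListRecords response.
# The XML is a hand-trimmed sample; a real harvester would fetch it
# over HTTP and loop on resumptionToken to page through the full set.

import xml.etree.ElementTree as ET

SAMPLE = """<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
  <ListRecords>
    <record>
      <header><identifier>oai:example:1</identifier></header>
    </record>
    <record>
      <header><identifier>oai:example:2</identifier></header>
    </record>
    <resumptionToken>page2token</resumptionToken>
  </ListRecords>
</OAI-PMH>"""

NS = {"oai": "http://www.openarchives.org/OAI/2.0/"}
root = ET.fromstring(SAMPLE)
ids = [h.text for h in root.findall(".//oai:header/oai:identifier", NS)]
token = root.find(".//oai:resumptionToken", NS)
print(ids)
print(token.text)
```

The token-driven paging is exactly the pain point the answer alludes to: to pick up one changed record, a harvester generally re-walks the whole set, which is what ResourceSync's change lists are designed to avoid.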


Does DPLA have a preferred format for geographic locations?

DPLA: Our only preference really is for consistent, discretely identifiable geographic locations. That means, first of all, that all geographic properties in the Hub’s feed are expressed in the same way (all look like “Erie (Pa.)” for example, or “Erie; Pennsylvania” and so on). It also means that if one collection uses, say, county names and the others do not, we have a way to detect that. For example, if the place names are just semicolon-separated values and some have “Country; State; County; City” but another has “Region; State; City” that can be very confusing. A better method in that case would be to have the place names in separate elements that indicate what they are (such as MODS <hierarchicalGeographic> elements). This way we only have to have one set of logic for the enrichment module’s parser.

The second requirement is that the parts of the place be discretely identifiable. This means that we can differentiate the city name from the county name or the country name. This is achieved either by separating the place names with punctuation (preferably semicolons, although in the case of LC geographic terms, we can use the parentheses) or by breaking them up into separate elements.
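The fragility described above is easy to demonstrate. A minimal sketch, assuming a fixed Country; State; County; City field order (the labels are illustrative, not DPLA's actual parser):

```python
# Sketch of parsing semicolon-separated place strings. Splitting is
# trivial -- but only if every feed uses the same field order, which
# is the consistency requirement described above.

HIERARCHY = ["country", "state", "county", "city"]

def parse_place(value: str) -> dict:
    """Split 'Country; State; County; City' into labeled parts."""
    parts = [p.strip() for p in value.split(";")]
    return dict(zip(HIERARCHY, parts))

print(parse_place("United States; Pennsylvania; Erie County; Erie"))
# A feed that omits the county silently shifts every later field:
print(parse_place("United States; Pennsylvania; Erie"))
```

In the second call, the city "Erie" lands in the county slot, which is why separate, labeled elements (as in MODS <hierarchicalGeographic>) parse more reliably than positional strings.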

In the Library, With the Lead Pipe: A Conversation with Librarian-Editors

planet code4lib - Wed, 2015-01-28 13:00

In brief: Ellie Collier interviews several librarian-editors about the publishing process, with a focus on “call for chapters” style books.


I began working on In the Library with the Lead Pipe in 2008 as a founding editor and author, despite hating to write. The prospect seemed too exciting to let my own dislike of writing get in the way. I was the first editorial board member to step off of our initial author rotation and I remain grateful that the board let me stay on in an editorial role only, stepping back into a writing role from time to time to share survey results or make announcements like our Code of Conduct.

Over the past six years I’ve enjoyed growing my editorial skills immensely and have been vaguely on the lookout for additional opportunities to use those skills. My personal interest in editing, and specifically in learning more about the type of editing involved in “call for chapters” style books, has led me to this interview with a group of librarian-editors.


I wanted to approximate a conversation, allowing those answering the questions the opportunity to interact with and respond to each other while also allowing time for thoughtfully constructing answers. With that end in mind, I chose a collaborative Google Doc as the interview medium.

I also thought that this might be a good topic to follow up with a Google Hangout including editors who are willing to participate (with participants submitting comments and questions via chat). Let us know in the comments if that would interest you.


Heather Booth is the editor, with Karen Jensen, of The Whole Library Handbook: Teen Services (ALA Editions, 2014) and the author of Serving Teens Through Readers’ Advisory (ALA Editions, 2007). She is the Teen Services Librarian at the Thomas Ford Memorial Library, a Booklist reviewer, and a blogger and speaker on topics relating to effective teen library services.


Emily Drabinski is Coordinator of Instruction at Long Island University, Brooklyn. She is series editor of Gender & Sexuality in Information Studies (Litwin Books/Library Juice Press) and sits on the board of Radical Teacher, a journal of feminist, socialist, and anti-racist teaching practice. She also edited (with Alana Kumbier, and Maria Accardi) Critical Library Instruction: Theories and Methods (Library Juice Press).

Nicole Pagowsky & Miriam Rigby are co-editors of The Librarian Stereotype: Deconstructing Presentations and Perceptions of Information Work (ACRL Press, 2014). Nicole is a Research & Learning Librarian at the University of Arizona and writes and speaks about library instruction, game-based learning, and critical librarianship. She will be co-editing a forthcoming handbook for critical library instruction with ACRL Press. Miriam is a Social Sciences Librarian at the University of Oregon and currently serves as Vice-Chair of the Anthropology and Sociology Section (ANSS) of the Association of College & Research Libraries. She researches and writes on a variety of subjects, with recent foci in instruction techniques and social websites such as Reddit.

The Interview:

Ellie: Can you give a brief overview of what the stages in the process were for you? What is the order of events: idea, pitch, find publisher, call for chapters, select, compile, edit, publish? What am I missing?

Heather: My process was largely dictated by the fact that the book I edited is part of an ongoing series. In my case, an acquisitions editor from the publisher I had worked with on my previous book approached me. She sent a copy of another book in the series, I wrote a proposal for the book I envisioned based on the structure and scope of the others in the series, and we went to a contract from there. Once the contract was set, I spent a lot of time looking for work that was already published that I could request for reprint. Around this time, I encountered some personal delays and the project went on hold for a bit. Then I brought a co-editor on, at which point we revisited the original proposal, revised it slightly to include more original content, and began both writing and soliciting writing from others. We looked for leaders in the field on a variety of topics and contacted individuals directly. Editing the articles happened next, and I did request a few significant revisions from authors. Compiling, organizing, and editing for consistency were the last phases before it went to the acquisition editor and back to us, and then the copyeditor and back to us one last time.

Miriam: Nicole and I did things a little bit backwards. We had a successful conference panel and then a webinar on aspects of our book, decided to try to put a book together on the topic, advertised for chapter proposals, and then went to ACRL Press with an almost complete proposal (with most of the chapter topics) to pitch our book idea. I think that having a fairly solid and fleshed out book proposal helped our pitch, but apparently most people pitch the book idea and then put out a call for papers… like Heather described.

As for content, we wanted completely new work, and with specific parameters for thorough research techniques, so we selected chapter proposals based both on originality of topic and research plans. We also required that everyone be able to complete their chapters within about 8 months—though we staggered due-dates of drafts and edits to make a manageable timeframe for everyone. There was a lot of back and forth between the authors and us, both for basic editing reasons, and because Nicole and I really wanted our authors to push their ideas and be able to make bold arguments. (I think maybe some of the authors were a bit shy or hesitant at first in a semi-self-censoring way, but everyone opened up really well once we made it clear that we wanted them to be strong in this way.) As the book progressed, a couple chapters exited the project for various reasons and we solicited a couple different chapters to fill topic-gaps that our original selections left. And we solicited a couple other chapters just because we thought they’d be awesome.

In addition to our own editorial input, we also sent each chapter out for double-blind peer-review, and near the end of the project, we also had pairs of chapter-authors trade their chapters with each other (based on similar topics/themes) for one last round of input and editing. Then we sent everything off to ACRL’s copy-editors who made additional suggestions, which Nicole and I mostly fixed up (with author-permission) as they were fairly minor edits.

The launch of the book was quite the big deal too… but Nicole gets the majority of the credit for coordinating that… so I’m going to let her speak now!

Nicole: Yes! Miriam captured our process very well, so I’m going to avoid repeating what she has already described. I will say that with the topic of librarian stereotypes and the challenges we face, the idea for the book was to really speak to this being a serious topic with implications for diversity (and lack thereof in the profession), status and pay, and notions of gendered work. With that in mind, our CFP was very important and had to be handled with care. We needed to be clear that this was a scholarly volume and that chapters should be well-researched. Instead of just pointing out tropes in popular culture, for example, we wanted authors to go in depth as to the implications of the existence of these tropes and look to gender studies, LGBTQ+, anthropology, psychology, etc. We spent a lot of time crafting our CFP to reflect what we did—and did not—want for submissions.

It’s sort of a catch-22 with finding a publisher and pushing a CFP, because a publisher typically wants to know who will be writing the chapters and what the topics will be before agreeing to sign; and on the flipside, authors typically would like to know who the publisher will be before putting in the effort to draft a proposal and get on board.

As far as launching and promoting, we plotted out what would have most impact and when (announcements, book giveaways, etc.). And having an additional platform (Librarian Wardrobe) where we had an already established audience of librarians interested in these topics made it easier to get the word out. We also lucked out working with Kathryn Deiss and ACRL Press, where it was possible to try new ideas and generate excitement with encouragement and support.

Emily: I met Alana Kumbier at the LGBT-ALMS conference when it was in New York in 2010. Alana talked to me about her idea for a book, and then we met up again a week or so later at the Thinking Critically conference in Milwaukee. Rory Litwin [of Library Juice Press] attended that conference too, so the three of us sat down over meat salads and talked about the potential project. (I love conferences in Wisconsin—all meat salads and glasses of milk.) He was into it, I pulled in my colleague Maria Accardi, we wrote the CFP for Critical Library Instruction, and it was on. We got a surprising number of submissions—surprising now that I’ve worked on a number of different volumes as a series editor. It turns out to be very difficult to get a sufficient number of quality chapters to make a whole book. I think Critical Library Instruction just arrived at the right time, when there was a critical mass of people with things to say about how to teach critically in the library classroom. If I did that book again, I think I’d put more work into recruiting submissions from authors whose work I admire.

Ellie: You’ve touched on this in a broad sense already, but could you go into a little more detail about the idea formation stage? What was the inspiration for your book? What excited you about your topic?

Emily: This book idea arrived at just the right time for me, professionally. I’d been teaching just long enough to have a critique of my own practice, and was beginning to suspect that it was about more than Boolean operators. I had just published my first ever book chapter, “Teaching the Radical Catalog,” in K.R. Roberto’s Radical Cataloging, and had just joined the Board of Radical Teacher, a feminist, socialist, and anti-racist journal of teaching practice. So pedagogy was all I was thinking about. At the same time, I was working on my own line of inquiry about queer theory and cataloging that turned into “Queering the Catalog: Queer Theory and the Politics of Correction,” published in Library Quarterly last year. So I knew there were a lot of interesting things I had to say about critical approaches to instruction, and I was excited to see what other people had to say. I also can’t stress enough how wonderful it was to work with Alana and Maria on the project. Editing together meant we had lots of warrant to talk to each other all the time, work through ideas, share labor, and generally live inside a small political and intellectual world together for a year or two. That was a total pleasure.

Nicole: I had started Librarian Wardrobe in 2010 to catalog how librarians dress for work as a reference for the field, but also as a way to publicize that we don’t all look the same or fit the stereotypes. As more conversations around these topics arose surrounding the blog, it was apparent that although people are aware the stereotypes exist and realize to some extent that they hinder our work, it didn’t seem that it was as widely known that these are gender, diversity, and social justice issues and that we need to do something about them instead of just rolling our eyes at shushing bunhead images in popular media (and that it’s much deeper than popular media). Miriam and I had met at ALA in New Orleans in 2011 and hit it off, and I remembered we had similar interests and that she had a background in anthropology. So, I asked her if she’d like to organize a panel with me around these topics, and with her background and expertise, if she’d be the moderator. She agreed, we had a blast with awesome panelists (K.R. Roberto, Dale McNeill, Jenny Benevento, and Allie Flanary) and it was standing room only at ALA in Anaheim the following year. We were also able to turn the panel into a webinar with ALA TechSource and Library BoingBoing and had over 300 people attend. We then thought… we know this is important, there’s a lot of interest, and we need to do something with this. And so a book was born.

Heather: Like I said, the idea for the book came to me from an editor at Editions as she was looking to add to an existing series. So it was interesting to work within the structure that was already there and figure out how to shape it to meet the needs of a different group of readers. With both this and my previous book, I wanted to balance the theoretical with the practical. I think it’s just as important to know why we do it as how we do it, and more than anything that was my initial guiding principle. It wasn’t until my coeditor Karen Jensen came on board that the idea really started to take shape. She had created the Teen Librarian Toolbox blog and was just amazingly prolific and had all of this good energy and was getting feedback from different people than those I knew professionally. Working together, we were able to reach out to a wider group and really craft a book that is essentially the guide we wished we had when we started as teen services librarians.

Ellie: How did you decide it should be written through a call for chapters? Did you consider other formats—a special issue of a journal for example?

Heather: The series aspect dictated the format for me. I think in this case, it works well for a couple reasons. First, as it’s published by ALA Editions, it indicates that ALA is placing teen library issues on a pretty high platform. The other titles in the series are The Whole Library Handbook, which is broad and all-encompassing, and The Whole School Library Handbook, which is specific to a unique setting. That our book is included with these two demonstrates that addressing the needs of adolescents in libraries, and the work of teen services librarians, is of significant importance. It also keeps the content together. There are a few different journals that publish material similar to what is in our book, but the articles are spread out, and each issue needs to cover other material too. I like that it’s a cohesive whole in a book.

Miriam: As I recall, we never really considered something other than a book. We wanted to compose something that was fully our own and didn’t place many restrictions on us in terms of form and content. The main driving factor in how we suggested types of topics in our call for chapters, selected chapter proposals, and specifically requested chapters from additional authors (though with a lot of flexibility on what they wrote about) was that we wanted a book that got at as many perspectives as possible—so we didn’t just look at race, or ethnicity, or just at gender, or just at specific ways of thinking about the myriad issues. And we think we were pretty successful at getting this balance. There are certainly still some gaps, but we were able to push the conversation on stereotypes in many directions to help promote even more conversations into the future—even including the conversation of why we are so obsessed with stereotypes, which a lot of people feel is detrimental to getting librarianship done (but we make sure to cover why that’s not really the case).

Emily: We never considered a journal issue, although I can’t say why. A journal issue never occurred to me! Patrick Keilty and I are editing an issue of Library Trends based on presentations at the Gender and Sexuality in Information Studies colloquium held at the University of Toronto in October 2014. That’s largely because Patrick has already co-edited a book in that area (Feminist and Queer Information Studies Reader, with Rebecca Dean) and also because an issue of a prominent journal promises to make a different kind of impact, not just in terms of tenure considerations (which are real), but also in terms of readership. I do think they’re different animals, in part because of how we access them. I don’t know the last time I held a print journal in my hands and read it cover to cover. Books are different, the physical object matters in a different way, and has a different kind of life. I’ve edited a journal issue and worked on a lot of books. Books are way more fun.

Ellie: Heather mentioned initially planning to use mostly previously published works, did anyone else use (or consider using) previously published materials? In what ways did working with those differ from soliciting new writing?

Heather: Some of the chapters in our book had been published previously. Honestly, there is a lot of content out there that we would have liked to include in the book, but we ran into a lot of difficulty in obtaining rights to reprint. I’m really glad that we have the pieces that we have that are reprinted as they add a lot and the diversity in tone is great, but I also like that the pieces that were written just for this book help to make it a cohesive work.

Emily: We didn’t consider collecting previously published work. I think if we’d intended to make Critical Library Instruction a textbook, we might have done that differently. Keilty and Dean included previously published work in their collection, and faced the same challenge Heather did—reprint rights are expensive and hard to get.

Nicole: We also did not consider this, but did contact potential authors who had previously published relevant work (including blog posts) and asked if they would be interested in adapting/expanding it for the book.

Ellie: What was your process for recruiting authors and calling for submissions? Did you already have many authors in mind? In the end, what was the balance between chapters that were submissions vs. recruited?

Heather: All of our chapters were recruited—either by asking directly or by obtaining reprint rights. We found our authors and submissions in a few different ways. Initially, as I outlined the sections I wanted to have and the topics I wanted covered, certain people who are leaders on those topics stood out—Debbie Reese on the topic of diversity and accurate representation is one, Joni Bodart on booktalking is another. They’re really some of the first people who come to mind when those topics come up, and we were fortunate that they wanted to contribute on the topics we suggested. For other topics, the conference/workshop/panel networking that Emily and Nicole mentioned really factored into our process. We made lists of people who were doing good work in our areas of interest: people we knew in real life, people we knew of through virtual PLNs, people we had seen speak at conferences, or people who we knew were speaking at conferences that we couldn’t attend but were following through Twitter conversations. Basically, if we read something that really got us excited about the topic, that person went on our list as a potential author. This allowed for expert coverage on all of the topics we felt needed to be addressed with very little overlap.

Emily: We put out a CFP on a bunch of listservs, Rory shared it on his blog, and we all shared it with our individual networks. We ended up not soliciting chapters from people, although if I could go back in time, I might have done that—it would have been great to have a chapter from James Elmborg, for example. I wish we’d taken Heather’s approach. Next time, I’ll read this article first, before I get started!

Nicole: We mostly took the same approach as Emily, the majority of our chapters came through proposals, but then we did approach authors for a percentage. Our particular topic for the book both has and has not been written about a lot, if that makes sense—written about mostly tangentially for what sparked our interest, so we had to think of ways to talk with potential authors about how their work related to our book and how they could tweak it just a little bit to fit in with the general subject area.

Ellie: How did you vet the authors and their ideas, especially how did you get a sense of their writing and work ethic?

Miriam: We asked for chapter proposals to include not just the concept/idea for the chapter, but also a description of research methods, and an explanation of their expected timeline for research and writing—including a discussion of any potential delays such as getting human subject research approved. One thing we somewhat regretted not requesting was writing samples from the authors in the form of previous publications, or even research papers from graduate school. For the most part this wasn’t really a problem, but for future projects it would definitely be a step we’d include—not just because it would help us gauge writing styles, but also because some people have excellent ideas and excellent writing skills, but flounder a little bit in the short description of future research, so it’s possible we might have included submissions we passed over in the first round. For the solicited chapters, it was pretty easy, as we knew their writing style already (that’s why we sought them out), and so we were basically asking authors if they thought they could complete an original work in X number of months (or even weeks in one case).

Heather: Our process was like Miriam’s for her solicited chapters. There were a few people who we asked but were unable to finish the work for a variety of reasons, and a few others who we asked for more significant revisions than others, but by and large the work we got was what we expected.

Emily: We went on abstracts alone! When I think back to it I’m a little shocked it all worked out. I think our collection is as uneven as any edited volume, but by and large I’m really proud of what’s in the book, even though now it feels like we just got really lucky.

Ellie: Did you plan to write a chapter? Did you end up writing a chapter?

Miriam: Nicole might have started with big ideas for our chapter, but I set out with the idea that we’d have a fairly basic “introduction to the book and the concepts” type of chapter. And then we started writing it and we kept thinking of more things to include, and more works to cite, and more concepts to introduce… and then we had a giant chapter that we ended up editing back down fairly significantly. But overall, I’m quite pleased with what we ended up with—we still basically introduce concepts and the background to the book, just with more depth and analysis than I’d first imagined.

Emily: I didn’t plan to write a chapter. Alana didn’t write a chapter either, but Maria did. I didn’t think I had it in me to both edit and write something, because editing is a ton of work. And it turned out I liked editing a lot better. I love working with authors and texts to make something really work. I’m a decent writer, but I think I’m a better reader. Alana and I wrote the introduction to the book, and I really loved doing that, looking at what we’d come up with, how we’d sorted it, what was behind our decisions, etc. It’s just the editorial version of talking about yourself. What’s more fun?

Heather: I did, and I did! Karen and I both contributed our own writing. I did the bulk of the editing, so there are more writing contributions from Karen, plus she had some really great resources already written and ready to go that we were able to use.

Ellie: Let’s talk about the editorial process: Did you do both copy-edits and substantive review? What techniques did you use to ensure diversity and to push authors to think deeper? If you worked with a co-editor(s), how did you divide your efforts? Did you use the chapter authors as co-reviewers of each other’s chapters? What do you think of that idea?

Emily: Maria, Alana, and I divvied up the chapters and each edited a third of the book. There was some horse trading—there were submissions I really wanted to work with, and others that were quite challenging for me—but mostly we just added up the chapters and divided by three. We all read all the chapters and gave general feedback, and then split them up for more in-depth review. It’s an editor-reviewed book, meaning we didn’t send the chapters out beyond ourselves. I think the best editing reads for big picture stuff, making synthetic connections that authors sometimes can’t see, and pointing to assumptions that might benefit from explication and evidence. Good editing is constructive rather than destructive, pointing to ways to strengthen a piece rather than pointing out all the ways a piece gets it wrong. I have been on the receiving end of peer review many times and really think it’s an art. I’ve had reviews that have pushed me to make more daring claims (to the extent that anything in this field can be daring!) and resulted in much stronger pieces, and I’ve had reviews that just made me feel bad about myself. I worked hard to give the authors I worked with a generous and critical engagement, the best gift you can give a writer. You’d have to ask them if that was effective or not!

Nicole: We did both, we mostly focused on substantive review, but copy-edited as well. ACRL provided us a copy-editor to review the final manuscript, which was so completely wonderful, allowing us to put more of that focus into content. Miriam and I both edited every single chapter. On some, one of us might have taken the lead, but we both read and provided feedback on all work (and all drafts of that work). We used Google docs to send feedback, which allowed for more interactivity and discussion with authors. To push authors to think more deeply on certain topics, I think we did a lot of question posing. Just asking authors what they thought of x or y, or why did they discuss the topic in this way, and did they think of z? What Emily says is very important, and I think when you’re approaching deadlines, as an editor you can get stressed out and want to comment quickly, but it can be an art to think about how your feedback might affect those receiving it and adjust accordingly.

Beyond that, we did have authors review each other’s chapters. We paired people up who had complementary topics and had them share feedback. Not only did it let authors get additional feedback from another perspective, but it helped them think about their own writing in a different way. Additionally, we had double-blind peer review and found librarians and other academics with related research backgrounds to read and comment on chapters. On one hand we might have gone overboard with all the reviewing, but because these are sensitive topics, and because Miriam and I are both cis-het white women, we wanted to make sure that the chapters speaking to diversity in particular were looked at from perspectives more diverse than ours.

Heather: We also had a copyeditor through Editions, but as I read and reviewed the pieces, I did copyedit too. It’s really hard not to! I feel fortunate that I had worked with an editor previously, so as I read the submissions, I tried to do what my editor did for me: ask questions, address organizational confusions, give nudges when it seemed the author was on the brink of something bigger than what was currently written. My co-editor Karen edited my pieces, and I did the remainder, with the exception of the reprints, which we left as-is.

Ellie: Nicole touched on this a bit in her last answers, but what technology did you use in various stages? What technology worked and what failed?

Heather: We wrote the whole thing in a shared Dropbox. It was our best solution for version control. We also did lots of texting back and forth, and stored more than we ought to have in Gmail. We used one spreadsheet to track where in the process various submissions were, another one for author contact information, and another for tracking reprint permissions.

Emily: Ours was a Google project, we even had a Google Group that we used to communicate with one another, although I think that only lasted a minute before we reverted to email threads. I used a Google spreadsheet to track submissions and due dates and email addresses and follow-ups. We submitted the final manuscript as individual chapters in an email to Rory. Now I use Dropbox to facilitate file transfer from authors in my book series to Library Juice Press/Litwin Books, but that wasn’t part of my workflow back then. Which is funny, because Dropbox is my whole life now. We had a joke at the time that we should publish a paper on how we managed the process of assembling a book—those kinds of articles seem to have so much more traction in library science than some of the political and theoretical work we published in that collection!

Nicole: Our workflow was similar to Emily’s Google experience, where our whole world became Google (or, isn’t it already? heh). We had a Google group for communication, we set up a separate Gmail address for just book emails that Miriam and I shared, we used Google spreadsheets to track everything, and as I mentioned, used Google docs for moving drafts back and forth with feedback and discussion. Miriam and I also heavily used Gchat to discuss progress and process. We used Word for the double-blind peer review so there would not be names attached to any comments, and we erased all author info within the document settings. When we worked with ACRL at the final manuscript stages, we moved to Dropbox and to using only Word and PDF files.

Ellie: How did you connect with your publisher? Did you ever consider going the open source/open access digital publishing route? Self publishing? Why or why not? I know The Librarian Stereotype has been able to make chapters available in institutional repositories, can you talk about that process as well?

Heather: I was first connected with ALA Editions back in 2004 when my then-supervisor Joyce Saricks nudged me to write a book on reader’s advisory for teens. Editions had published her books and she introduced me to her editor there. They liked my work and asked me to take on The Whole Library Handbook: Teen Services. I feel like I’m a broken record on this—series title dictated many of the decisions, so self-publishing was never a consideration.

Nicole: We chose ACRL specifically because we wanted the book to be directly associated with scholarly work to have a more serious treatment of these topics. ACRL also has great copyright policies for authors, is a non-profit, and as mentioned, willing to create OA PDF versions of chapters (those should be available soon!).

Emily: Our book is explicitly political, so we went with a political press. Rory was right there at the meat salad buffet table with Alana and me as we talked through our idea. We’ve also been able to submit chapters to institutional repositories, but there are only a few on deposit. Librarians are apparently like anyone else when it comes to getting our work in the campus IR!

Ellie: What other details have we not covered? Was there something that you weren’t aware would be a part of the process, or that took more time or was harder than you anticipated?

Heather: Obtaining reprint permissions was hands down our biggest unexpected challenge. That was frustrating because there are certainly pieces out there that I wish we could have included. Finding my coeditor Karen and essentially starting the process over was not something I anticipated at the outset, but it was a wonderful development. It was also much easier than I anticipated to work with someone I’d never met in person at a great distance.

Emily: Heather’s right about the challenge of obtaining permissions. It can be frustrating and expensive! Also, editing means an agreement to enter into many relationships that will likely involve at least some degree of conflict. I didn’t realize how much affective labor would be involved in working with authors. If you’re conflict-averse (like I am!), it’s important to be prepared to face those aversions head on, whether it means saying no to an abstract that you can just tell won’t work, or the hard work of telling an author when something isn’t working. Editing that engages texts critically, productively and with generosity is hard work with an emotional dimension I didn’t anticipate. Doing it in a way that produces the best possible work from friends and strangers is a real skill.

Nicole: I’m not sure if I can think of anything else that Miriam and I haven’t noted already. We did write our own chapter as mentioned, so that entailed a lot more work for us on top of everything else. If you’re both editing and contributing a chapter, my advice would be to plan that out well in advance and don’t think you’ll just start working on your chapter once the bulk of editing is done for all the other authors. You’ll be exhausted! Having a “final” date for editing typically doesn’t work out as planned, so we were still doing a ton of editing as we were writing, re-writing, and editing our own chapter.

Miriam: One other thing that we haven’t really mentioned much yet is some of the design and finishing aspects of the book. We have an awesome graphic/comic style chapter from Amanda Brennan and Dorothy Gambrell, which Dorothy created the art for. She figured out the layout of that quite expertly, though it was enlightening to learn some of the copyediting notations for how it would be worked in with the uniform chapter headers and such. Dorothy also designed our book cover (based on Bureau of Labor Statistics data about librarians) with her infographics expertise, which was an absolute lifesaver, as Nicole and I were incredibly stumped on a design, or even what statistics we wanted to represent with the design. Finally, I think everyone is pretty well aware how hard it is to come up with good titles for things, but it was still quite the nerve-wracking endeavor to find a title we felt really strongly about. Like Nicole said, it’s important to plan out well in advance… but there are some things that only happen when they happen, and luckily it all worked out for us!

Ellie: What were your overall feelings about the experience? What did you feel was especially successful? What would you do differently? What would you like to try?

Heather: I feel that the biggest success was bringing all of these voices together. We have over twenty contributors, and they all have something important to say. I feel very fortunate that the book is working as a platform to elevate some really great voices and perspectives. Meeting and working with my coeditor was another great take-away. Due to outside conflicts, the book was just not happening the way it needed to before Karen came on board, and we’ve developed a strong working relationship as a result. If I were to do it again, I’d ask for more pages. There’s so much more that we could’ve covered!

Nicole: It was a great experience and I’m glad Miriam and I got to work together on this. Our authors were fantastic and I’m really proud of what we all accomplished. Most successful I think was actually doing what we set out to do: compiling different approaches to the issue of stereotypes and diversity from the perspective (scholarly) that we were hoping for, and from a diverse group of authors. Of what I’d do differently, hmm, I’d say we were extremely organized with all of our spreadsheets and set dates, but maybe next time I’d want to share a document with all authors so they could see their progress and make sure they’re keeping up with everyone else on deadlines. I’m going to actually keep this in mind since I have another book project starting soon with ACRL Press: Kelly McElroy and I will be co-editing a critical library instruction handbook and have just released our Call for Proposals!

Emily: I love editing. After Critical Library Instruction came out, I picked up the book series gig from Rory Litwin, and that’s been a continued absolute pleasure. Editing means opening the door for other people to do their thing, have their say, and maybe help make that say a little more precise and well-argued. Things I’m always reminding myself to do: say no when no needs to be said. Fulfill ego elsewhere—few people remember the editor, not the tenure review committee, sometimes not even your mom, and if you’re good at your job the reader won’t notice. Work a little every day. Don’t fear email. Big projects are the result of a zillion tiny decisions, so go ahead and make them. And don’t be afraid to commit to print—it’s the only way to keep talking.

Further Reading:

The Call for Papers for Librarian Wardrobe
Series description for Series on Gender and Sexuality in Information Studies

Thanks so much to Heather Booth, Emily Drabinski, Nicole Pagowsky, and Miriam Rigby for their thoughtful answers and to Bob Schroeder and Erin Dorney for their assistance in shaping the questions. 


Hydra Project: OR2015 Proposal Deadline Extended

planet code4lib - Wed, 2015-01-28 09:30

A message from the Open Repositories 2015 Conference organizers Indianapolis, United States.

The final deadline for submitting proposals for the Tenth International Conference on Open Repositories (@OR2015Indy and #or2015) has been extended until Friday, Feb. 6, 2015. The conference is scheduled to take place June 8-11 in Indianapolis and is being hosted by Indiana University Bloomington Libraries, University of Illinois Urbana-Champaign Library and Virginia Tech University Libraries.

The theme this year is “LOOKING BACK, MOVING FORWARD: OPEN REPOSITORIES AT THE CROSSROADS”. You may review the call for proposals on the conference website.

* Submit your proposal by Feb. 6, 2015 *

Ranti Junus: Public Domain Day: what could have entered it in 2015 and what did get released

planet code4lib - Wed, 2015-01-28 05:00

Every year, January 1st marks the day when works from around the world enter the public domain under the copyright laws of their respective countries.

The Public Domain Review put up a list of creators whose works are entering the public domain. (Kandinsky! Whooh!)

The Center for the Study of the Public Domain put up a list of some quite well-known works that are still under extended copyright restrictions:

John Mark Ockerbloom from the University of Pennsylvania pointed out that EEBO is now out and, among other things, promoted several alternatives to

DuraSpace News: Open Repositories Conference Update: OR2015 Proposal Deadline Extended

planet code4lib - Wed, 2015-01-28 00:00

A message from the Open Repositories 2015 Conference organizers

Indianapolis, United States The final deadline for submitting proposals for the Tenth International Conference on Open Repositories (@OR2015Indy and #or2015) has been extended until Friday, Feb. 6, 2015. The conference is scheduled to take place June 8-11 in Indianapolis and is being hosted by Indiana University Bloomington Libraries, University of Illinois Urbana-Champaign Library and Virginia Tech University Libraries.

Meredith Farkas: Sorry Springshare, but also not sorry

planet code4lib - Tue, 2015-01-27 23:05

So I probably didn’t make a lot of friends at Springshare with my blog post about LibGuides this morning (if you haven’t already, take a look at the update I made to my original post). And I do apologize for lumping them in with EBSCO, because it appears that they have not taken away something that people had access to in LibGuides 1.0 and made it only available in the CMS product. That said, their new API (as opposed to their old API, which shouldn’t have been called an API at all because it’s not a true API, but is still available in LibGuides 2.0) is only available in their CMS product. If you want to use JSON data or need full access to the API, you will need to upgrade to the CMS product, so it is accurate that their API is not open. But if you used the thing previously referred to as an API in LibGuides 1.0, you should still be able to do everything you did with it previously when you upgrade to 2.0. My understanding of this situation was based on communications from Springshare over the past month and a half with our web librarian, from what documentation I could find on their website, and from a response to a question I posed on Twitter.

I think this is a great example of how important clear communications are for a company. Throwing out terms like widgets and APIs and then using them in different ways in different contexts is bound to lead to confusion (especially if your original API wasn’t actually a real API). That I couldn’t find the information in their documentation or community site — which, while it is a treasure trove of good information, is also big, unwieldy, disorganized, and incomplete — is a huge problem for current and potential future customers. That the emails from support were so unclear that they led several intelligent librarians to the completely opposite conclusion is not good at all.

I still really love using LibGuides and am thrilled that we are ditching Library a la Carte (as are my colleagues), but the neat freak (and librarian) in me really wants to open the amazingly useful junk drawer that is their documentation and organize it for them.

District Dispatch: IMLS funding announced to improve library services

planet code4lib - Tue, 2015-01-27 20:52

By Images Money

The Institute of Museum and Library Services (IMLS) last week announced funding for 56 State Library Administration Agencies (SLAAs) totaling almost $155 million. The annual Grants to States represent the largest source of federal funding support for library services in the United States and are supported by the American Library Association. A full list of state grants can be found here.

Each year, over 2,500 Grants to States projects support the purposes and priorities outlined in the Library Services and Technology Act (LSTA). SLAAs may use the funds to support statewide initiatives and services, and they may also distribute the funds through competitive sub-awards to, or cooperative agreements with, public, academic, research, school, or special libraries or consortia (for-profit and federal libraries are not eligible).

States and sub-recipients have partnered with community organizations to provide a variety of services and programs, including access to electronic databases, computer instruction, homework centers, summer reading programs, digitization of special collections, access to e-books and adaptive technology, bookmobile service, and development of outreach programs to the underserved. To find out more about how funds are used in your state, visit your state profile page.

The grants allocate a base amount to each of the SLAAs, plus a supplemental amount based on population. The agency’s Grants to States program provides federal funds as a supplement to existing state library services rather than a replacement for state funding, and it assures local involvement through financial matching requirements. The newly released allotment table identifies both the federal (66%) and state match share (34%) for each SLAA.
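The 66%/34% cost-share described above can be sketched as a small calculation (a hypothetical helper for illustration, not an official IMLS formula): since the federal share covers 66% of total project cost, the required state match equals the federal amount times 34/66.

```ruby
# Hypothetical illustration of the LSTA cost-share arithmetic: the federal
# share is 66% of total project cost and the state match is 34%, so the
# required match equals the federal amount multiplied by 34/66.
def state_match(federal_share)
  federal_share * 34.0 / 66.0
end

# A $1 million federal allotment implies roughly $515,151.52 in state match.
puts state_match(1_000_000).round(2)
```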

ALA maintains a close working relationship with IMLS, the primary source of federal support for the nation’s 123,000 libraries and 35,000 museums. IMLS’s grant making, policy development, and research help libraries and museums deliver valuable services that make it possible for communities and individuals to thrive. To learn more, visit the IMLS website and follow us on Facebook, Twitter and Tumblr.

The post IMLS funding announced to improve library services appeared first on District Dispatch.

Federated Search Blog: Microsoft Case Study – Bridging the Gap Between Language and Science

planet code4lib - Tue, 2015-01-27 20:47

On January 8, 2015, Microsoft published a new Customer Solution Case Study about Deep Web Technologies’ innovative search technology developed in collaboration with the WorldWideScience Alliance.  Using the Microsoft Translation services, the search application allows users to search in their native language, find results from sources around the world, and read the results translated back into their language. In light of the enormous strides made each year in the global scientific community, where timely dissemination of the vast published knowledge is critical, the application increases access to many important databases and encourages international collaboration.

The WorldWideScience Alliance turned to Abe Lederman, Chief Executive Officer and Chief Technology Officer of Deep Web Technologies, to realize its vision of a better, more automated solution with multilingual support. “We wanted to create an application that would make scholarly material more accessible worldwide to both English and non-English speakers,” he says. “For instance, we wanted a French-speaking user to be able to type in a query and find documents written in any language.”

The Case Study, posted to the Microsoft “Customer Stories” page, comes on the heels of an update in 2014 that improved the application’s look, feel, and speed. Additionally, 2015 holds a bright future, as the study mentions: “To provide better accessibility, [the application] also offers a mobile interface. Deep Web Technologies is launching a streamlined HTML5 version that will work with virtually any device, whether PC, phone, or tablet. Other future enhancements include a localization feature that will provide search portals in the user’s native language.”

In response to the Case Study, Olivier Fontana, Director of Product Marketing for Microsoft Translator, said, “Microsoft Translator can help customers better reach their internal and external stakeholders across languages. By building on the proven, customizable and scalable Translator API, Deep Web Technologies has developed a solution that has a direct impact on researchers’ ability to learn and exchange with their peers around the world, thereby improving their own research impact.” The Microsoft Translator Team Blog has followed up on the Case Study here.

Oh, and one more thing… this is not the only multilingual application from Deep Web Technologies. WorldWideEnergy translates energy-related content into four languages, and the United Nations Economic Commission for Africa will be rolling out a multilingual search in 2015.

View the Press Release.

William Denton: Steve Reich phase pieces with Sonic Pi

planet code4lib - Tue, 2015-01-27 18:48

The first two of the phase pieces Steve Reich made in the sixties, working with recorded sounds and tape loops, were It’s Gonna Rain (1965) and Come Out (1966), both of which are made of two loops of the same fragment of speech slowly going out of phase with each other and then coming back together as the two tape players run at slightly different speeds. I was curious to see if I could make phase pieces with Sonic Pi, and it turns out it takes little code to do it.

Here is the beginning of Reich’s notes on “It’s Gonna Rain (1965)” in Writings on Music, 1965–2000 (Oxford University Press, 2002):

Late in 1964, I recorded a tape in Union Square in San Francisco of a black preacher, Brother Walter, preaching about the Flood. I was extremely impressed with the melodic quality of his speech, which seemed to be on the verge of singing. Early in 1965, I began making tape loops of his voice, which made the musical quality of his speech emerge even more strongly. This is not to say that the meaning of his words on the loop, “it’s gonna rain,” were forgotten or obliterated. The incessant repetition intensified their meaning and their melody at one and the same time.


I discovered the phasing process by accident. I had two identical tape loops of Brother Walter saying “It’s gonna rain,” and I was playing with two inexpensive tape recorders—one jack of my stereo headphones plugged into machine A, the other into machine B. I had intended to make a specific relationship: “It’s gonna” on one loop against “rain” on the other. Instead, the two machines happened to be lined up in unison and one of them gradually started to get ahead of the other. The sensation I had in my head was that the sound moved over to my left ear, down to my left shoulder, down my left arm, down my leg, out across the floor to the left, and finally began to reverberate and shake and become the sound I was looking for—“It’s gonna/It’s gonna rain/rain”—and then it started going the other way and came back together in the center of my head. When I heard that, I realized it was more interesting than any one particular relationship, because it was the process (of gradually passing through all the canonic relationships) making an entire piece, and not just a moment in time.

The audio sample

First I needed a clip of speech to use. Something with an interesting rhythm, and something I had a connection with. I looked through recordings I had on my computer and found an interview my mother, Kady MacDonald Denton, had done in 2007 on CBC Radio One after winning the Elizabeth Mrazik-Cleaver Canadian Picture Book Award for Snow.

She said something I’ve never forgotten that made me look at illustrated books in a new way:

A picture book is a unique art form. It is the two languages, the visual and the spoken, put together. It’s sort of like a—almost like a frozen theatre in a way. You open the cover of the books, the curtain goes up, the drama ensues.

I noticed something in “theatre in a way,” a bit of rhythmic displacement: there’s a fast ONE-two-three one-two-THREE rhythm.

That’s the clip to use, I decided: the length is right (1.12 seconds), the four words are a little mysterious when isolated like that, and the rhythm ought to turn into something interesting.

[Image: interesting bricks]

Phase by fractions

The first way I made a phase piece was not with the method Reich used but with a process simpler to code: here one clip repeats over and over while the other starts progressively later in the clip each iteration, with the missing bit from the start added on at the end.

The start and finish parameters specify where to clip a sample: 0 is the beginning, 1 is the end, and ratio grows from 0 to 1. I loop n+1 times to make the loops play one last time in sync with each other.

full_quote = "~/music/sonicpi/theatre-in-a-way/frozen-theatre-full-quote.wav"
theatre_in_a_way = "~/music/sonicpi/theatre-in-a-way/frozen-theatre-theatre-in-a-way.wav"
length = sample_duration theatre_in_a_way
puts "Length: #{length}"

sample full_quote
sleep sample_duration full_quote
sleep 1

4.times do
  sample theatre_in_a_way
  sleep length + 0.3
end

# Moving ahead by fractions of a second
n = 100
(n+1).times do |t|
  ratio = t.to_f/n # t is a Fixnum, but we need ratio to be a Float
  # This one never changes
  sample theatre_in_a_way, pan: -0.5
  # This one progresses through
  sample theatre_in_a_way, start: ratio, finish: 1, pan: 0.5
  sleep length - length * ratio
  sample theatre_in_a_way, start: 0, finish: ratio, pan: 0.5
  sleep length * ratio
end

This is the result:

“Music as a gradual process”

A few quotes from Steve Reich’s “Music as a Gradual Process” (1968), also in Writings on Music, 1965–2000:

I do not mean the process of composition but rather pieces of music that are, literally, processes.

The distinctive thing about musical processes is that they determine all the note-to-note (sound-to-sound) details and the overall form simultaneously. (Think of a round or infinite canon.)

Although I may have the pleasure of discovering musical processes and composing the material to run through them, once the process is set up and loaded it runs by itself.

What I’m interested in is a compositional process and a sounding music that are one and the same thing.

When performing and listening to gradual musical processes, one can participate in a particular liberating and impersonal kind of ritual. Focusing in on the musical process makes possible that shift of attention away from he and she and you and me outwards toward it.

In Sonic Pi we do all this with code.

[Image: Mesopotamian wall cone mosaic at the Metropolitan Museum of Art in New York City]

Phase by speed

The second method is to run one loop a tiny bit faster than the other and wait for it to eventually come back around and line up with the fixed loop. This is what Reich did, but here we achieve the effect with code, not analog tape players.

The rate parameter controls how fast a sample is played (< 1 is slower, > 1 is faster). If n is how many times we want the fixed sample to loop, then the faster sample will have length length - (length / n) and play at rate (1 - 1/n.to_f) (the number needs to be converted to a Float for this to work). It needs to loop (n * length / phased_length) times to end up in sync with the steady loop. (Again I add 1 to play both clips in sync at the end as they did in the beginning.)

For example, if the sample is 1 second long and n = 100, then the phased sample would play at rate 0.99, be 0.99 seconds long, and play 101 times to end up, after 100 seconds (actually 99.99, but close enough), back in sync with the steady loop, which took 100 seconds to play 1 second of sound 100 times.
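That arithmetic can be sanity-checked in plain Ruby, outside Sonic Pi; the values below are just the example numbers from this paragraph:

```ruby
# Sanity check of the speed-phasing arithmetic in plain Ruby
# (no Sonic Pi needed; length and n are the example values above).
length = 1.0   # sample length in seconds
n = 100        # times the steady loop plays

phased_length = length - (length / n)             # 0.99 seconds
rate = 1 - 1.0 / n                                # playback rate 0.99
phased_loops = (n * length / phased_length).to_i  # 101 plays before re-sync

puts "phased sample: #{phased_length}s at rate #{rate}"
puts "loops to re-sync: #{phased_loops}"
puts "total: #{(phased_loops * phased_length).round(2)}s vs steady #{n * length}s"
```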

It took me a bit of figuring to realize I had to convert numbers to Float or Integer here and there to make it all work, which is why to_f and to_i are scattered around.

full_quote = "~/music/sonicpi/theatre-in-a-way/frozen-theatre-full-quote.wav"
theatre_in_a_way = "~/music/sonicpi/theatre-in-a-way/frozen-theatre-theatre-in-a-way.wav"
length = sample_duration theatre_in_a_way
puts "Length: #{length}"

sample full_quote
sleep sample_duration full_quote
sleep 1

4.times do
  sample theatre_in_a_way
  sleep length + 0.3
end

# Speed phasing
n = 100
phased_length = length - (length / n)

# Steady loop
in_thread do
  (n+1).times do
    sample theatre_in_a_way
    sleep length
  end
end

# Phasing loop
((n * length / phased_length) + 1).to_i.times do
  sample theatre_in_a_way, rate: (1 - 1/n.to_f)
  sleep phased_length
end

This is the result:

Set n to 800 and it takes over fifteen minutes to evolve. The voice gets lost and just sounds remain.

“Time for something new”

In notes for “Clapping Music (1972)” (which I also did on Sonic Pi), Reich said:

The gradual phase shifting process was extremely useful from 1965 through 1971, but I do not have any thoughts of using it again. By late 1972, it was time for something new.

DPLA: Unexpected: Snow Removal

planet code4lib - Tue, 2015-01-27 18:42

[Today we're starting a new series on our blog, called Unexpected. With over eight million items in our collection (and growing!), there are countless unusual artifacts, and since we now bring together 1,400 different libraries, archives, museums, and cultural heritage sites in one place, we can begin to associate these surprising sources into rich categories and themes. Unexpected will showcase some of the most, well, unexpected, items and topics—just the tip of the DPLA iceberg. We hope the series inspires you to explore our collection further, to tell others about DPLA, and to use our materials for education, research, and just plain fun. —Dan Cohen]

The history of snow removal is a history of American ingenuity, in which the basic desire to get rid of the white stuff mixes in combustible and bewildering ways with the eccentric inclination to forge monstrous new machines.

Patents for snow removal stretch back through the nineteenth century. This patent for a snow plow, filed by David Grove in 1882, didn’t push the snow; it ingested it through its giant mouth and spat it out the side.

[Image courtesy UNT Libraries Government Documents Department via the Portal to Texas History]

Is there anything conveyor belts can’t do? This design from around 1930 tests that proposition.

[Image from the Boston Public Library via Digital Commonwealth]

Did you know that you can put a plow on virtually anything? The historical record says yes. Have a horse and some metal? You’ve got yourself a plow.

[Image from the Boston Public Library via Digital Commonwealth]

But don’t stop there. Go all the way, tinkerer friend. Get yourself some flywheels, some big gears, a few spare I-beams, cables and chains, and go to town.

[Image from the New York Public Library]

Are all the kids in the neighborhood going to come out to watch? You bet.

Perhaps you have a train. In that case, try a rotating plow of death.

[Image from Brigham Young University Harold B. Lee Library via the Mountain West Digital Library]

Live in Montana and have a train? You’re gonna need a bigger plow.

[Image from the University of Montana-Missoula's Mansfield Library Archives & Special Collections via the Montana Memory Project]

And why go with the standard truck plow when you can multiply the effect by having two plows and shoot snow out of both sides of your monster truck.

[Video from the Walter J. Brown Media Archives and Peabody Awards Collection via the Digital Library of Georgia]

Hope everyone in the Northeast enjoys the snow day, and remember, don’t use a shovel when you can let your imagination run wild—in the snow and in the Digital Public Library of America.

District Dispatch: ALA responds to proposed changes to the Code of Federal Regulations

planet code4lib - Tue, 2015-01-27 18:26

Photo by Wknight94

On Monday, ALA submitted comments to the National Archives and Records Administration’s Administrative Committee of the Federal Register regarding the proposed changes to the Code of Federal Regulations (CFR).

As the Federal Register describes itself, “The Federal Register is an official daily legal publication that informs citizens of: rights and obligations, opportunities for funding and Federal benefits, and actions of Federal agencies for accountability to the public,” and the CFR contains “Federal rules that have: general applicability to the public, current and future effect as of the date specified.”

Both documents are important in aiding researchers and promoting a more transparent government. Given a library’s role of providing public access to government information of all types, ALA was pleased at the opportunity to submit comments.

The post ALA responds to proposed changes to the Code of Federal Regulations appeared first on District Dispatch.

District Dispatch: National Library Legislative Day 2015

planet code4lib - Tue, 2015-01-27 16:38

Lisa Rice visits with Rep. Brett Guthrie (R-KY), NLLD 2014

Good news! Registration for the 41st annual National Library Legislative Day is now open!

This two-day advocacy event brings hundreds of librarians, trustees, library supporters, and patrons to Washington, D.C. to meet with their Members of Congress to rally support for libraries issues and policies.

Registration information and hotel booking information are available on the ALA Washington Office website.

This year, National Library Legislative Day will be held May 4-5, 2015. Participants will receive advocacy tips and training, along with important issues briefings prior to their meetings.

First-time participants are eligible for a unique scholarship opportunity: the White House Conference on Library and Information Services Taskforce (WHCLIST) and the ALA Washington Office are calling for nominations for the 2015 WHCLIST Award, which provides a stipend ($300 and two free nights at a D.C. hotel) to a non-librarian participant in National Library Legislative Day.

For more information about the WHCLIST award or National Library Legislative Day, visit the ALA Washington Office website. Questions or comments can be directed to grassroots coordinator Lisa Lindle.

The post National Library Legislative Day 2015 appeared first on District Dispatch.

HangingTogether: LC and OCLC Collaborate on Linked Data

planet code4lib - Tue, 2015-01-27 15:00

As we have said before, the Library of Congress and OCLC have been sharing information and approaches regarding library linked data. In a nutshell, we have two different use cases and strategies that we believe are compatible and complementary.

Now, in a just co-published white paper, we are beginning to share more details and evidence that this is the case.

This is actually just a high-level view of a more technical review of our approaches, and more details will be forthcoming in the months ahead. The Library of Congress’ main use case is to transition from MARC into a linked data world that will enable a much richer and more full-featured interface to library data. OCLC’s use case is to syndicate library data at scale into the wider web, as well as enabling richer online interactions for end-users.

OCLC is of course committed to enabling our member libraries to obtain the vital metadata they need for their work in appropriate formats, including BIBFRAME. This is one of the things we make clear in this paper.

As always, we want to know what you think. So download the paper, read it, and let us know in the comments below, or by email to the authors (their addresses on the title page verso) what you think.

About Roy Tennant

Roy Tennant works on projects related to improving the technological infrastructure of libraries, museums, and archives.

Mail | Web | Twitter | Facebook | LinkedIn | Flickr | YouTube | More Posts (85)

Ed Summers: A Life Worth Noting

planet code4lib - Tue, 2015-01-27 14:16

There are no obituaries for the war casualties that the United States inflicts, and there cannot be. If there were to be an obituary there would have had to have been a life, a life worth noting, a life worth valuing and preserving, a life that qualifies for recognition. Although we might argue that it would be impractical to write obituaries for all those people, or for all people, I think we have to ask, again and again, how the obituary functions as the instrument by which grievability is publicly distributed. It is the means by which a life becomes, or fails to become, a publicly grievable life, an icon for national self-recognition, the means by which a life becomes noteworthy. As a result, we have to consider the obituary as an act of nation-building. The matter is not a simple one, for, if a life is not grievable, it is not quite a life; it does not qualify as a life and is not worth a note. It is already the unburied, if not the unburiable.

Precarious Life by Judith Butler, (p. 34)

LITA: Why We Need to Encrypt The Whole Web… Library Websites, Too

planet code4lib - Tue, 2015-01-27 13:30

The Patron Privacy Technologies Interest Group was formed in the fall of 2014 to help library technologists improve how well our tools protect patron privacy.  As the first in a series of posts on technical matters concerning patron privacy, please enjoy this guest post by Alison Macrina.

When using the web for activities like banking or shopping, you’ve likely seen a small lock symbol appear at the beginning of the URL and noticed the “HTTP” in the site’s address switch to “HTTPS”. You might even know that the “s” in HTTPS stands for “secure”, and that all of this means that the website you’ve accessed is using the TLS/SSL protocol. But what you might not know is that TLS/SSL is one of the most important yet most underutilized internet protocols, and that all websites, not just those transmitting “sensitive” information, should be using HTTPS by default.

To understand why TLS/SSL is so important for secure web browsing, a little background is necessary. TLS/SSL is the colloquial way of referring to this protocol, but the term is slightly misleading – TLS and SSL are essentially different versions of a similar protocol. Secure Sockets Layer (SSL) was the first protocol used to secure applications over the web, and Transport Layer Security (TLS) was built from SSL as a standardized version of the earlier protocol. The convention of TLS/SSL is used pretty often, though you might see TLS or SSL alone. However written, it all refers to the layer of security that sits on top of HTTP. HTTP, or HyperText Transfer Protocol, is the protocol that governs how websites send and receive data, and how that data is formatted. TLS/SSL adds three things to HTTP: authentication, encryption, and data integrity. Let’s break down those three components:

Authentication: When you visit a website, your computer asks the server on the other end for the information you want to access, and the server responds with the requested information. With TLS/SSL enabled, your computer also reviews a security certificate that guarantees the authenticity of that server. Without TLS/SSL, you have no way of knowing if the website you’re visiting is the real website you want, and that puts you at risk of something called a man-in-the-middle attack, which means data going to and from your computer can be intercepted by an entity masquerading as the site you intended to visit.

Fig. 1: Clicking the lock icon next to a site with TLS/SSL enabled will bring up a window that looks like one above. You can see here that Twitter is running on HTTPS, signed by the certificate authority Symantec. [Image courtesy Alison Macrina]

Fig. 2: Clicking “more information” in the first window will bring up this window. In the security tab, you can see the owner of the site, the certificate authority that verified the site, and the encryption details. [Image courtesy Alison Macrina]

Fig. 3: Lastly, clicking the “view certificate” option in the previous window will bring up even more technical details, including the site’s fingerprints and the certificate expiration date. [Image courtesy Alison Macrina]
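The certificate fields shown in the figures can also be examined programmatically. Here is a minimal Ruby sketch that builds a toy self-signed certificate to show what a browser inspects during authentication; the hostname library.example.org is invented for illustration:

```ruby
require 'openssl'

# A toy self-signed certificate, showing the fields a browser checks:
# subject (who the cert identifies), issuer (who signed it), validity dates.
key = OpenSSL::PKey::RSA.new(2048)
cert = OpenSSL::X509::Certificate.new
cert.version = 2
cert.serial = 1
cert.subject = OpenSSL::X509::Name.parse('/CN=library.example.org')
cert.issuer = cert.subject                      # self-signed: issuer == subject
cert.public_key = key.public_key
cert.not_before = Time.now
cert.not_after = Time.now + 365 * 24 * 60 * 60  # valid for one year
cert.sign(key, OpenSSL::Digest.new('SHA256'))

puts cert.subject    # who the certificate identifies
puts cert.issuer     # who vouches for it (here: itself)
puts cert.not_after  # expiration date, as in Fig. 3
```

A real CA-signed certificate differs in that its issuer chains up to a trusted authority; a self-signed one like this would trigger a browser warning instead.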

Data encryption: Encryption is the process of scrambling messages into a secret code so they can only be read by the intended recipient. When a website uses TLS/SSL, the traffic between you and the server hosting that website is encrypted, providing you with a measure of privacy and protection against eavesdropping by third parties.

Data integrity: Finally, TLS/SSL uses an algorithm that includes a value to check on the integrity of the data in transit, meaning the data sent between you and a TLS/SSL secured website cannot be tampered with or altered to add malicious code.
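The integrity check can be sketched with a keyed MAC in a few lines of Ruby; the key and message below are illustrative, not actual TLS record data:

```ruby
require 'openssl'

# Sketch of the data-integrity idea: a keyed MAC over each message.
# TLS attaches a check like this so any in-transit alteration is detected.
key = OpenSSL::Random.random_bytes(32)       # shared secret (illustrative)
message = 'GET /catalog HTTP/1.1'
mac = OpenSSL::HMAC.hexdigest('SHA256', key, message)

tampered = message.sub('catalog', 'admin')   # a man-in-the-middle edit
tampered_mac = OpenSSL::HMAC.hexdigest('SHA256', key, tampered)

puts mac == tampered_mac   # false: the altered message fails the check
```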

Authentication, encryption, and integrity work in concert to protect the data you send out over TLS/SSL enabled websites. In this age of widespread criminal computer hacking and overbroad surveillance from government entities like the NSA, encrypting the web against interception and tampering is a social necessity. Unfortunately, most of the web is still unencrypted, because enabling TLS/SSL can be confusing, and often some critical steps are left out. But the digital privacy rights advocates at the Electronic Frontier Foundation are aiming to change that with Let’s Encrypt, a free and automated way to deploy TLS/SSL on all websites, launching in Summer 2015. EFF has also built a plugin called HTTPS Everywhere which forces TLS/SSL encryption on websites where this protocol is supported, but not fully set up (a frequent occurrence).

As stewards of information and providers of public internet access, librarians have a special duty to protect the privacy of our patrons and honor the public trust we’ve worked hard to earn. Just as we continue to protect patron checkout histories from unlawful snooping, we should be actively protecting the privacy of patrons using our website, catalog, and public internet terminals:

  • Start by enabling TLS/SSL on our library websites and catalog (some instructions are here and here, and if those are too confusing, Let’s Encrypt goes live this summer. If your website is hosted on a server that is managed externally, ask your administrator to set up TLS/SSL for you).
  • Install the HTTPS Everywhere add-on on all library computers. Tell your patrons what it is and why it’s important for their digital privacy.
  • Urge vendors, database providers, and other libraries to take a stand for privacy and start using TLS/SSL.

Privacy is essential to democratic institutions like libraries; let’s show our patrons that we take that seriously.

Alison Macrina is an IT librarian in Massachusetts and the founder of the Library Freedom Project, an initiative aimed at bringing privacy education and tools into libraries across the country. Her website doesn’t have any content on it right now, but hey, at least it’s using HTTPS! 

The inaugural in-person meeting of the LITA Patron Privacy Technologies Interest Group is at Midwinter 2015 on Saturday, January 31st, at 8:30 a.m. Everybody interested in learning about patron privacy and data security in libraries is welcome to attend! You can also subscribe to the interest group’s mailing list.

Library Tech Talk (U of Michigan): People Don't Read on the Web

planet code4lib - Tue, 2015-01-27 00:00
How much do people actually read on the web? Not much. UX Myths presents the evidence.

District Dispatch: Make your mark on national policy agenda for libraries!

planet code4lib - Mon, 2015-01-26 22:00

As many of us bundle up and prepare to head to Chicago for the ALA Midwinter Meeting, the ALA has added another discussion item for attendees—and beyond. Today the American Library Association (ALA) Office for Information Technology Policy (OITP) released a discussion draft national policy agenda for libraries to guide a proactive policy shift.

As ALA President Courtney Young states clearly: “Too often, investment in libraries and librarians lags the opportunities we present. Libraries provide countless benefits to U.S. communities and campuses, and contribute to the missions of the federal government and other national institutions. These benefits must be assertively communicated to national decision makers and influencers to advance how libraries may best contribute to society in the digital age.”

The draft agenda is the first step towards answering the questions “What are the U.S. library interests and priorities for the next five years that should be emphasized to national decision makers?” and “Where might there be windows of opportunity to advance a particular priority at this particular time?”

The draft agenda provides an umbrella of timely policy priorities and is understood to be too extensive to serve as the single policy agenda for any given entity in the community. Rather, the goal is that various library entities and their members can fashion their national policy priorities under the rubric of this national public policy agenda.

Outlining this key set of issues and context is being pursued through the Policy Revolution! Initiative, led by ALA OITP and the Chief Officers of State Library Agencies (COSLA) with guidance from a Library Advisory Committee—which includes broad representation from across the library community. The three-year initiative, funded by the Bill & Melinda Gates Foundation, has three major elements: to develop a national public policy agenda, to initiate and deepen national stakeholder interactions based on policy priorities, and build library advocacy capacity for the long-term.

“In a time of increasing competition for resources and challenges to fulfilling our core missions, libraries and library organizations must come together to advocate proactively and strategically,” said COSLA President Kendall Wiggin. “Sustainable libraries are essential to sustainable communities.”

The draft national public policy agenda will be vetted, discussed, and further elaborated upon in the first quarter of 2015, also seeking to align with existing and emerging national library efforts. Several members of the team that worked on the agenda will discuss the Policy Revolution! Initiative and invite input into the draft agenda at the 2015 ALA Midwinter Meeting on February 1 from 1-2:30 p.m. in the McCormick Convention Center, room W196A.

From this foundation, the ALA Washington Office will match priorities to windows of opportunity and confluence to begin advancing policy priorities—in partnership with other library organizations and allies with whom there is alignment—in mid-2015.

Please join us in this work. Feedback should be sent by February 27, 2015, to oitp[at]alawash[dot]org, and updates will be available online.

The post Make your mark on national policy agenda for libraries! appeared first on District Dispatch.


Subscribe to code4lib aggregator