SearchHub: Infographic: The Woes of the CIOs

planet code4lib - Tue, 2015-05-19 17:28
It’s tough out there for CIOs. They’re getting it from all sides and from all directions. Let’s take a look at the unique challenges CIOs face in trying to keep their organizations competitive and effective:

The post Infographic: The Woes of the CIOs appeared first on Lucidworks.

Islandora: Fedora 4 Project Update IV

planet code4lib - Tue, 2015-05-19 15:31

As the project entered the fourth month, work continued on migration planning and mapping, migration-utils, and Drupal integration.

Migration work was split between working on migration-utils, migration mappings, data modeling (furthering Portland Common Data Model compliance), and working with the Islandora (Fedora 4 Interest Group), Fedora (Fedora Tech meetings), and Hydra (Hydra Metadata Working Group) communities on the preceding items. In addition, the Audit Service (a key requirement of an Islandora community fcrepo3 -> fcrepo4 migration) finalized the second phase of the project. Community stakeholders are currently reviewing and providing feedback.

Work on migration-utils focused mainly on applying a number of mappings (outlined here) to the utility, adding support for object-to-object linking, and providing documentation on how to use it. This work can be demonstrated by building the Islandora 7.x-2.x Vagrant Box, cloning the migration-utils repository, and pointing migration-utils at an fcrepo3 native filesystem or a directory of exported FOXML.

As for object modeling and inter-community work, an example of this work is the below image of a sample Islandora Large Image object modeled in the Portland Common Data Model. This model will continue to evolve as the communities work together in the various Hydra Metadata Working Group sub-working groups.
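As a rough illustration of what "modeled in the Portland Common Data Model" means in practice, a PCDM object boils down to a small graph of subject-predicate-object triples. The sketch below is purely illustrative: the URIs and file names are invented (only the pcdm.org namespace is real), and it is not the actual Islandora mapping.

```python
# Toy subject-predicate-object triples for a hypothetical large-image
# object in the PCDM style. http://pcdm.org/models# is the PCDM namespace;
# the "ex:" identifiers below are invented for this sketch.
PCDM = "http://pcdm.org/models#"

triples = [
    ("ex:collection1", PCDM + "hasMember", "ex:largeImage1"),
    ("ex:largeImage1", PCDM + "hasFile",  "ex:original.tiff"),
    ("ex:largeImage1", PCDM + "hasFile",  "ex:thumbnail.jpg"),
]

def objects_of(subject, predicate):
    """Return all objects of triples matching (subject, predicate, _)."""
    return [o for s, p, o in triples if s == subject and p == predicate]

print(objects_of("ex:largeImage1", PCDM + "hasFile"))
# ['ex:original.tiff', 'ex:thumbnail.jpg']
```

The point of the model is exactly this kind of uniform linking: collections point to objects, and objects point to their files, all with the same small predicate vocabulary.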

On the Drupal side of things, work was started on Middleware Services, a middleware service that will use the Fedora 4 REST API and the Drupal Services modules to provide an API for the majority of interactions between the two systems. In addition, a few Drupal modules have been created to leverage this: islandora_basic_image, islandora_collection, and islandora_dcterms.

In addition, the team has been exploring options for RDF integration and support in Drupal, as well as how to handle editing (Islandora XML Forms) the various descriptive metadata schemas the community uses. This is captured in a few issues in the issue queue: #27 and #28. Due to the importance of the issue, a special Fedora 4 Interest Group meeting was held to discuss how to proceed with this functionality in Islandora 7.x-2.x. The group’s consensus was to solicit use cases from the community to better understand how to proceed with 7.x-2.x.

Work will continue on the migration and Drupal sides of the project into May.

David Rosenthal: How Google Crawls Javascript

planet code4lib - Tue, 2015-05-19 15:00
I started blogging about the transition the Web is undergoing from a document to a programming model, from static to dynamic content, some time ago. This transition has very fundamental implications for Web archiving; what exactly does it mean to preserve something that is different every time you look at it? Not to mention the vastly increased cost of ingest, because executing a program takes far more computation (potentially an unlimited amount) than simply parsing a document.

The transition has big implications for search engines too; they also have to execute rather than parse. Web developers have a strong incentive to make their pages search engine friendly, so although they have enthusiastically embraced Javascript they have often retained a parse-able path for search engine crawlers to follow. We have watched academic journals adopt Javascript, but so far very few have forced us to execute it to find their content.

Adam Audette and his collaborators at Merkle | RKG have an interesting post entitled We Tested How Googlebot Crawls Javascript And Here’s What We Learned. It is aimed at the SEO (Search Engine Optimization) world but it contains a lot of useful information for Web archiving. The TL;DR is that Google (but not yet other search engines) is now executing the Javascript in ways that make providing an alternate, parse-able path largely irrelevant to a site's ranking. Over time, this will mean that the alternate paths will disappear, and force Web archives to execute the content.

District Dispatch: Ending “bulk collection” of library records on the line in looming Senate vote

planet code4lib - Tue, 2015-05-19 13:14

Image Source: PolicyMic

Last week, the House of Representatives voted overwhelmingly, 338 to 88, for passage of the latest version of the USA FREEDOM Act, H.R. 2048. The bill — and the battle to achieve the first meaningful reform of the USA PATRIOT Act since it was enacted 14 years ago — now shifts to the Senate. There, the outcome may well turn on the willingness of individual voters to overwhelm Congress with demands that USA FREEDOM either be passed without being weakened, or that the now infamous “library provision” of the PATRIOT Act (Section 215) and others slated for expiration on June 1 simply be permitted to “sunset” as the Act provides if Congress takes no action. Now is the time for all librarians and library supporters — for you — to send that message to both of your US Senators. Head to the action center to find out how.

For the many reasons detailed in yesterday’s post, ALA and its many private and public sector coalition partners have strongly urged Congress to pass the USA FREEDOM Act of 2015 without weakening its key, civil liberties-restoring provisions. Already a finely-tuned compromise that delivers fewer privacy protections than last year’s Senate version of the USA FREEDOM Act, this year’s bill simply cannot sustain further material dilution and retain ALA’s (and many other groups’) support. The Obama Administration also officially endorsed and called for passage of the bill.

Unfortunately, the danger of the USA FREEDOM Act being blocked entirely or materially weakened is high. The powerful leader of the Senate, Mitch McConnell of Kentucky, is vowing to bar consideration of H.R. 2048 and, instead, to provide the Senate with an opportunity to vote only on his own legislation (co-authored with the Chair of the Senate Intelligence Committee) to reauthorize the expiring provisions of the PATRIOT Act with no privacy-protecting or other changes whatsoever. Failing the ability to pass that bill, Sen. McConnell and his allies have said that they will seek one or more short-term extensions of the PATRIOT Act’s expiring provisions.

Particularly in light of last week’s ruling by a federal appellate court that the government’s interpretation of its “bulk collection” authority under Section 215 was illegally broad in all key respects, ALA and its partners from across the political spectrum vehemently oppose any extension without meaningful reform of the USA PATRIOT Act of any duration.

The looming June 1 “sunset” date provides the best leverage since 2001 to finally recalibrate key parts of the nation’s surveillance laws to again respect and protect library records and all of our civil liberties. Please, contact your Senators now!

Additional Resources

House Judiciary Committee Summary of H.R. 2048

Statement of Sen. Patrick Leahy, lead sponsor of S. 1123 (May 11, 2015)

Open Technology Institute Comparative Analysis of select USA FREEDOM Acts of 2014 and 2015

“Patriot Act in Uncharted Legal Territory as Deadline Approaches,” National Journal (May 10, 2015)

“N.S.A. Collection of Bulk Call Data Is Ruled Illegal,” New York Times (May 7, 2015)

The post Ending “bulk collection” of library records on the line in looming Senate vote appeared first on District Dispatch.

LITA: Call for Writers

planet code4lib - Tue, 2015-05-19 13:00
meme courtesy of Michael Rodriguez

The LITA blog is seeking regular contributors interested in writing easily digestible, thought-provoking blog posts that are fun to read (and hopefully to write!). The blog showcases innovative ideas and projects happening in the library technology world, so there is a lot of room for contributor creativity. Possible post formats could include interviews, how-tos, hacks, and beyond.

Any LITA member is welcome to apply. Library students and members of underrepresented groups are particularly encouraged to apply.

Contributors will be expected to write one post per month. Writers will also participate in peer editing and conversation with other writers – nothing too serious, just be ready to share your ideas and give feedback on others’ ideas. Writers should expect a time commitment of 1-3 hours per month.

Not ready to become a regular writer but you’d like to contribute at some point? Just indicate in your message to me that you’d like to be considered as a guest contributor instead.

To apply, send an email to briannahmarshall at gmail dot com by Friday, May 29. Please include the following information:

  • A brief bio
  • Your professional interests, including 2-3 example topics you would be interested in writing about
  • If possible, links to writing samples, professional or personal, to get a feel for your writing style

Send any and all questions my way!

Brianna Marshall, LITA blog editor

Hydra Project: ActiveFedora 8.1.0 released

planet code4lib - Tue, 2015-05-19 08:21

We are pleased to announce the release of ActiveFedora 8.1.0.  This release:

– Patches casting behavior – see for detailed information on the problem.
– Fixes rsolr patch-level dependency introduced by 35189fc.

Details can be found at:

Thanks, as always, to the team!

District Dispatch: EFF chief to keynote Washington Update session at Annual Conference

planet code4lib - Tue, 2015-05-19 06:56

Cindy Cohn, Legal Director and General Counsel for the EFF. Photographed by Erich Valo.

For decades, the Electronic Frontier Foundation (EFF) and the American Library Association (ALA) have stood shoulder to shoulder on the front lines of the fight for privacy online, at the library and in many other spheres of our daily lives. EFF Executive Director Cindy Cohn will discuss that proud shared history and the uncertain future of personal privacy during this year’s 2015 ALA Annual Conference in San Francisco. The session, titled “Frenetic, Fraught and Front Page: An Up-to-the-Second Update from the Front Lines of Libraries’ Fight in Washington,” takes place from 8:30 to 10:00 a.m. on Saturday, June 27, 2015, at the Moscone Convention Center in room 2001 of the West building.

Before becoming EFF’s Executive Director in April 2015, Cohn served as the award-winning group’s legal director and general counsel from 2000 to 2015. In 2013, the National Law Journal named Cohn one of the 100 most influential lawyers in America, noting: “If Big Brother is watching, he better look out for Cindy Cohn.” In 2012, the Northern California Chapter of the Society of Professional Journalists awarded her the James Madison Freedom of Information Award.

During the conference session, Adam Eisgrau, managing director of the ALA Office of Government Relations, will provide up-to-the-minute insight from the congressional trenches on key federal privacy legislation “in play,” including the current status of efforts to reform the USA PATRIOT Act and the Freedom of Information Act (FOIA), as well as copyright reform, net neutrality, and federal library funding. Participants will have the opportunity to pose questions to the speakers.

  • Cindy Cohn, executive director, Electronic Frontier Foundation
  • Adam Eisgrau, managing director, Office of Government Relations, American Library Association

View all ALA Washington Office conference sessions

The post EFF chief to keynote Washington Update session at Annual Conference appeared first on District Dispatch.

Cynthia Ng: Accessible Format Production: Overview on Creating Accessible Formats

planet code4lib - Tue, 2015-05-19 01:49
I have been meaning to post a series of posts on how to create accessible formats, so here’s the overview.

The Overall Process

1. Scan the print material.
2. Run the scan through OCR, creating a text-readable PDF.
3. Edit the PDF to make an accessible PDF.
4. Convert the PDF (or EPUB) to document format.
5. Edit the document to make it … Continue reading Accessible Format Production: Overview on Creating Accessible Formats

DuraSpace News: Finish Off Your Digital Preservation To-do List with ArchivesDirect

planet code4lib - Tue, 2015-05-19 00:00

Winchester, MA – Everyone has a different set of priorities when it comes to planning for digital preservation. Here are some examples of items that might appear on a typical digital preservation to-do list:

1. leverage hosted online service to manage preservation process

2. apply different levels of preservation to different types of content

3. do more than back up content on spare hard drives

4. keep copies in multiple locations

5. make sure content remains viable

SearchHub: Lucidworks Fusion 1.4 Now Available

planet code4lib - Mon, 2015-05-18 19:34
We’ve just released Lucidworks Fusion 1.4! This version is a Short-Term Support, or “preview,” release of Fusion. There are a lot of new features in this version. Some of the highlights:

Security

Fusion has always provided fine-grained security control on top of Solr. In version 1.4, we’ve significantly enhanced our integration with enterprise security systems.

Kerberos

We now support setting up Fusion as a Kerberos-protected service. You will be able to authenticate to Kerberos in your browser or API client, and instead of providing a password to Fusion, Fusion will validate you and allow (or disallow) access via Kerberos mechanisms.

LDAP Group Mapping

We’ve enriched our LDAP directory integration. In the past, we’ve been able to use LDAP to authenticate users and perform document-level security trimming. We can now additionally determine a user’s LDAP group memberships and use those memberships to assign them to Fusion roles.

Alerting

We’ve introduced pipeline stages to send alerts, one each in the indexing and query pipelines. With these stages, you can send emails or Slack messages in response to documents passing through those pipelines. Emails are fully templated, so you can customize the content and include data from matching documents. And you’ll soon also be able to add other alerting methods besides email and Slack. A simple use for these is to set up notifications whenever a document matching a set of queries is crawled. Look for a post from our CTO (who wrote the code!) published here for more info on using alerting.

Logstash Connector

We add new connector integrations to Fusion all the time, but the Logstash Connector deserves special note. For those of you collecting machine data, it’s been possible to configure Logstash to ship logs to a Fusion pipeline or Solr index. The new Fusion Logstash Connector does this too, but makes it easier to install, configure, and manage. We include an embedded Logstash installation, so that you can start, stop, and edit your Logstash configuration right from the Fusion Datasource Admin GUI. You can use any standard Logstash plugin (including the network listeners, file-tailing inputs, grok filter, or other format filters), and the connector will automatically send the results into Fusion. There, you can do further Fusion pipeline processing, simple field mappings, or just index straight into Solr.

Apache Spark

Fusion now includes Apache Spark and the ability to use Spark to run complex analytic jobs. For now, the Fusion event aggregations and signals extractions can run in Spark for faster processing. In future releases, we expect to allow you to write and run more types of jobs in Spark, taking advantage of any of Spark’s powerful features and rich libraries.

Solr 5.x

As of Fusion 1.4, we officially support running Fusion against Solr 5.x clusters. We will still ship with an embedded Solr 4.x installation until we have validated repackaging and upgrades for existing Fusion/Solr 4.x customers, but new customers are free to install Solr 5.x, start it up in SolrCloud cluster mode (bin/solr start -c), and use Fusion and all Fusion features with the new version.

As you can see, we’re quickly adding new capabilities to Fusion, and these latest features are just a preview of what’s on the way. Stay tuned for much more! Download Lucidworks Fusion, read the release notes, or learn more about Lucidworks Fusion.
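The alerting feature boils down to matching documents against saved queries as they flow through a pipeline and firing a notification on a hit. Here is a toy sketch of that idea in Python; this is purely illustrative, not Fusion's pipeline API, and the field names and queries are invented.

```python
# Toy illustration of query-based alerting: match incoming documents
# against saved keyword "alert queries" and collect notifications.
# This is NOT the Fusion API, just the underlying idea.

def matches(doc, query_terms):
    """True if every term of the alert query appears in the document text."""
    text = " ".join(str(v).lower() for v in doc.values())
    return all(term.lower() in text for term in query_terms)

def run_alerts(docs, alert_queries):
    """Return (alert_name, doc_id) pairs for each doc matching a saved query."""
    hits = []
    for doc in docs:
        for name, terms in alert_queries.items():
            if matches(doc, terms):
                hits.append((name, doc["id"]))
    return hits

docs = [
    {"id": "1", "title": "Solr 5.1 release notes"},
    {"id": "2", "title": "Gardening tips"},
]
alerts = {"solr-news": ["solr", "release"]}
print(run_alerts(docs, alerts))  # [('solr-news', '1')]
```

In a real pipeline the hit would trigger the templated email or Slack message rather than just being collected in a list.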

The post Lucidworks Fusion 1.4 Now Available appeared first on Lucidworks.

Jonathan Rochkind: Yahoo YBoss spell suggest API significantly increases pricing

planet code4lib - Mon, 2015-05-18 15:53

For a year or two, we’ve been using the Yahoo/YBoss/YDN Spelling Service API to provide spell suggestions for queries in our homegrown discovery layer. (Which provides UI to search the catalog via Blacklight/Solr, as well as an article search powered by EBSCOHost api).

It worked… well enough, despite doing a lot of odd and wrong things. But mainly it was cheap: $0.10 per 1000 spell-suggest queries, according to this cached price sheet from April 24, 2015.

However, I got an email today that they are ‘simplifying’ their pricing by charging for all “BOSS Search API” services at $1.80 per 1000 queries, starting June 1.

That’s an 18x increase. Previously we paid about $170 a year for spell suggestions from Yahoo: peanuts, worth it even if it didn’t work perfectly. That’s 1.7 million queries for $170, pretty good. (Honestly, I’m not sure if it’s still making queries it shouldn’t, in response to something other than user input. For instance, we try to suppress spell-check queries when paging through an existing result set, but perhaps don’t do it fully.)

But 18x $170 is $3060.  That’s a pretty different value proposition.
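The arithmetic is easy to sanity-check. A throwaway sketch, using integer cents to avoid floating-point rounding (the 1.7M queries/year figure above is approximate):

```python
# Back-of-the-envelope annual cost at the old and new per-1000-query rates.
queries_per_year = 1_700_000

old_cents_per_1000 = 10    # $0.10 per 1000 queries
new_cents_per_1000 = 180   # $1.80 per 1000 queries

old_annual = queries_per_year // 1000 * old_cents_per_1000  # in cents
new_annual = queries_per_year // 1000 * new_cents_per_1000  # in cents

print(old_annual / 100)          # 170.0 dollars
print(new_annual / 100)          # 3060.0 dollars
print(new_annual // old_annual)  # 18x
```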

Anyone know of any decent, cheap spell-suggest APIs? It looks like Microsoft Bing may have a poorly documented one. Not sure.

Yeah, we could roll our own in-house spell suggestion based on a local dictionary or corpus of some kind: aspell, or Solr’s built-in spell-suggest service based on our catalog corpus. But we don’t only use this for searching the catalog, and even for the catalog I previously found that these web-search-based APIs provided better results than a local-corpus-based solution. The local solutions seemed to false positive (provide a suggestion when the original query was ‘right’) and false negative (refrain from providing a suggestion when it was needed) more often than the web-based APIs. As well, of course, as being more work for us to set up and maintain.
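For a sense of what the "roll our own" route looks like, here's a bare-bones local-corpus suggester using only Python's standard library (difflib). This is a sketch for illustration, not what we run; the toy word list also shows why false negatives creep in, since suggestions are only as good as the local corpus:

```python
import difflib

# A tiny "corpus" of known-good terms, e.g. harvested from catalog records.
corpus = ["archive", "metadata", "preservation", "catalog", "library"]

def suggest(word, cutoff=0.8):
    """Return the closest corpus term, or None if nothing is similar enough."""
    hits = difflib.get_close_matches(word.lower(), corpus, n=1, cutoff=cutoff)
    return hits[0] if hits else None

print(suggest("metadta"))  # 'metadata'
print(suggest("zzzzzz"))   # None -- no corpus term is close (false negative
                           # territory when the word is real but not in corpus)
```

Tuning the `cutoff` trades false positives against false negatives, which is exactly the balancing act the web-based APIs seemed to handle better.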

Filed under: General

Manage Metadata (Diane Hillmann and Jon Phipps): What’s up with this Jane-athon stuff?

planet code4lib - Mon, 2015-05-18 15:13

The RDA Development Team started talking about developing training for the ‘new’ RDA, with a focus on the vocabularies, in the fall of 2014. We had some notion of what we didn’t want to do: we didn’t want yet another ‘sage on the stage’ event, we wanted to re-purpose the ‘hackathon’ model from a software focus to data creation (including a major hands-on aspect), and we wanted to demonstrate what RDA looked like (and could do) in a native RDA environment, without reference to MARC.

This was a tall order. Using RIMMF for the data creation was a no-brainer: the developers had been using the RDA Registry to feed new vocabulary elements into their software (effectively becoming the RDA Registry’s first client), and were fully committed to FRBR. Deborah Fritz had been training librarians and others on RIMMF for years, gathering feedback and building enthusiasm. It was Deborah who came up with the Jane-athon idea, and the RDA Development group took it and ran with it. Using the Jane Austen theme was a brilliant part of Deborah’s idea. Everybody knows about JA, and the number of spin-offs, rip-offs and re-tellings of the novels (in many media formats) made her work a natural for examining why RDA and FRBR make sense.

One goal stated everywhere in the marketing materials for our first Jane outing was that we wanted people to have fun. All of us have been part of the audience and on the dais for many information sessions, for RDA and other issues, and neither position has ever been much fun, useful as the sessions might have been. The same goes for webinars, which, as they’ve developed in library-land tend to be dry, boring, and completely bereft of human interaction. And there was a lot of fun at that first Jane-athon–I venture to say that 90% of the folks in the room left with smiles and thanks. We got an amazing response to our evaluation survey, and the preponderance of responses were expansive, positive, and clearly designed to help the organizers to do better the next time. The various folks from ALA Publishing who stood at the back and watched the fun were absolutely amazed at the noise, the laughter, and the collaboration in evidence.

No small part of the success of Jane-athon 1 rested with the team leaders at each table, and the coaches going from table to table helping out with puzzling issues, ensuring that participants were able to create data using RIMMF that could be aggregated for examination later in the day.

From the beginning we thought of Jane 1 as the first of many. In the first flush of success, as participants signed up and enthusiasm built, we talked publicly about making it possible to do local Jane-athons, but we realized that our small group would have difficulty running smaller events, with less expertise on site, to the same standard we set at Jane-athon 1. We had to do a better job of thinking through the local expansion, and how to ensure that local participants get the same (or similar) value from the experience, before responding to requests.

As a step in that direction CILIP in the UK is planning an Ag-athon on May 22, 2015 which will add much to the collective experience as well as to the data store that began with the first Jane-athon and will be an increasingly important factor as we work through the issues of sharing data.

The collection and storage of the Jane-athon data was envisioned prior to the first event, and the R-Balls site was designed as a place to store and share RIMMF-based information. Though a valuable step towards shareable RDA data, rballs have their limits. The data itself can be curated by human experts or available with warts, depending on the needs of the user of the data. For the longer term, RIMMF can output RDF statements based on the rball info, and a triple store is in development for experimentation and exploration. There are plans to improve the visualization of this data and demonstrate its use at Jane-athon 2 in San Francisco, which will include more about RDA and linked data, as well as what the created data can be used for, in particular, for new and improved services.

So, what are the implications of the first Jane-athon’s success for libraries interested in linked data? One of the biggest misunderstandings floating around libraryland in linked data conversations is that it’s necessary to make one and only one choice of format, and eschew all others (kind of like saying that everyone has to speak English to participate in LOD). This is not just incorrect, it’s also dangerous. In the MARC era, there was truly no choice for libraries–to participate in record sharing they had to use MARC. But the technology has changed, and rapidly evolving semantic mapping strategies [see:] will enable libraries to use the most appropriate schemas and tools for creating data to be used in their local context, and others for distributing that data to partners, collaborators, or the larger world.

Another widely circulated meme is that RDA/FRBR is ‘too complicated’ for what libraries need; we’re encouraged to ‘simplify, simplify’ and assured that we’ll still be able to do what we need. Hmm, well, simplification is an attractive idea, until one remembers that the environment we work in, with evolving carriers, versions, and creative ideas for marketing materials to libraries is getting more complex than ever. Without the specificity to describe what we have (or have access to), we push the problem out to our users to figure out on their own. Libraries have always tried to be smarter than that, and that requires “smart” , not “dumb”, metadata.

Of course, the corollary to the ‘too complicated’ argument is the notion that a) we’re not smart enough to figure out how to do RDA and FRBR right, and b) complex means more expensive. I refuse to give space to a), but b) is an important consideration. I urge you to take a look at the Jane-athon data and consider the fact that Jane Austen wrote very few novels, but they’ve been re-published in various editions, versions and commentaries for almost two centuries. Once you add the ‘based on’, ‘inspired by’ and the enormous trail created by those trying to use Jane’s popularity to sell stuff (“Sense and Sensibility and Sea Monsters” is a favorite of mine), you can see the problem. Think of a pyramid with a very expansive base and a very sharp point, and consider that the works that everything at the bottom wants to link to don’t require repeating the description of each novel every time in RDA. And we’re not adding notes to descriptions based on the outdated notion that the only use for information about the relationship between “Sense and Sensibility and Sea Monsters” and Jane’s “Sense and Sensibility” is a human being who looks far enough into the description to read the note.

One of the big revelations for most Jane-athon participants was to see how well RIMMF translated legacy MARC records into RDA, with links between the WEM levels and others to the named agents in the record. It’s very slick and, most importantly, not lossy. Consider that RIMMF also outputs in both MARC and RDF, and you see something of a missing link (if not the Golden Gate Bridge).

Not to say there aren’t issues to be considered with RDA as with other options. There are certainly those, and they’ll be discussed at the Jane-In in San Francisco as well as at the RDA Forum on the following day, which will focus on current RDA upgrades and the future of RDA and cataloging. (More detailed information on the Forum will be available shortly).

Don’t miss the fun, take a look at the details and then go ahead and register. And catalogers, try your best to entice your developers to come too. We’ll set up a table for them, and you’ll improve the conversation level at home considerably!

LITA: Tech Yourself Before You Wreck Yourself – Volume 6

planet code4lib - Mon, 2015-05-18 14:00

What’s new with you TYBYWYers? I’m sure you’ve been setting the world on fire with your freshly acquired tech skills. You’ve been pushing back the boundaries of the semantic web. Maybe the rumors are true and you’re developing a new app to better serve your users. I have no doubt you’re staying busy.

If you’re new to Tech Yourself, let me give you a quick overview. Each installment, produced monthly-ish, offers a curated list of tools and resources for library technologists at all levels of experience. I focus on webinars, MOOCs, and other free/low-cost options for learning, growing, and increasing tech proficiency. Welcome!

Worthwhile Webinars:

Texas State Library and Archives – Tech Tools With Tine – One Hour of Arduino – May 29, 2015 – I’ve talked about this awesome ongoing tech orientation series before, and this installment on Arduino promises to be an exciting time!

TechSoup for Libraries – Excel at Everything! (Or At Least Make Better Spreadsheets) – May 21, 2015 – I will confess I am obsessed with Excel, and so I take every free class I find on the program. Hope to see you at this one!

Massachusetts Library System – Power Searching: Databases and the Hidden Web – May 28, 2015 – Another classic topic, and worth revisiting!

I Made This:

LYRASIS – LYRASIS eGathering – May 20, 2015

Shameless self-promotion, but I’m going to take three paragraphs to draw your attention to an online conference which I’ve organized. I know! I am proud of me too.

eGathering 2015

But not as proud as I am of the impressive and diverse line-up of speakers and presentations that comprise the 2015 eGathering. The event is free, online, and open to you through the generosity of LYRASIS members. Register online today and see a Keynote address by libtech champion Jason Griffey, followed by 6 workshop/breakout sessions, one of which is being hosted by our very own LITA treasure, Brianna Marshall. Do you want to learn ’bout UX from experts Amanda L. Goodman and Michael Schofield? Maybe you’re more interested in political advocacy and the library from EveryLibrary’s John Chrastka? We have a breakout session for you.

Register online today! All registrants will receive an archival copy of the complete eGathering program following the event. Consider it my special gift to you, TYBYWYers.

Tech On!

TYBYWY will return June 19th!

DPLA: A DPLA of Your Very Own

planet code4lib - Mon, 2015-05-18 13:48

This guest post was written by Benjamin Armintor, Programmer/Analyst at Columbia University Libraries and a 2015 DPLA + DLF Cross-Pollinator Travel Grant awardee.

I work closely with the Hydra and Blacklight platforms in digital library work, and have followed the DPLA project with great interest as a potential source of data to drive Blacklight sites. I think of frameworks like Blacklight as powerful tools for exploring what can be done with GLAM data and resources, but it’s difficult to get started in without data and resources to point it at. I had experimented with mashups of OpenLibrary data and public domain MARC cataloging, but the DPLA content was uniquely rich and varied, has a well-designed API, and carried with it a decent chance that an experimenter would be affiliated with some of the entries in the index.

Blacklight was designed to draw its data from Solr, but the DPLA API itself is so close to a NoSQL store that it seemed like a natural fit to the software. Unfortunately, it’s hard to make time for projects like that, and as such the DPLA+DLF Cross-Pollinator travel grant was a true boon.
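To make that "natural fit" concrete, the adapter work amounts to flattening a nested API record into the flat field/value document a Solr-backed Blacklight index expects. Here is a rough sketch in Python (Blacklight itself is Ruby); the record shape and field names below are invented for illustration and are not the actual DPLA API response or project code:

```python
# Hypothetical DPLA-style API record (a nested, NoSQL-ish JSON document).
dpla_item = {
    "id": "abc123",
    "sourceResource": {
        "title": "Sense and Sensibility",
        "creator": ["Jane Austen"],
        "type": "text",
    },
    "provider": {"name": "Example Library"},
}

def to_solr_doc(item):
    """Flatten a nested API record into the flat fields a Solr/Blacklight
    index expects (field names here are illustrative)."""
    src = item.get("sourceResource", {})
    return {
        "id": item["id"],
        "title_display": src.get("title"),
        "author_facet": src.get("creator", []),
        "format": src.get("type"),
        "provider_facet": item.get("provider", {}).get("name"),
    }

print(to_solr_doc(dpla_item)["title_display"])  # Sense and Sensibility
```

Because the API already returns document-shaped data, the adapter is mostly this kind of renaming and flattening rather than any heavy transformation.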

Attending DPLAfest afforded me a unique opportunity to work with the DPLA staff on a project to quickly build a Blacklight site against DPLA data, and thanks to their help and advice I was able to push along a Blacklight engine that incorporated keyword and facet searches and thumbnail images of the entire DPLA corpus—an impressive 10 million items!—by the end of the meeting. The progress we made was enthusiastically received by the Blacklight and Hydra communities: I began receiving contributions and installation reports before the meeting was over. I’ve since made progress moving the code along from a conference demonstration to a fledgling project; the community contributions helped find bugs and identify gaps in basic Blacklight functionality, which I’ve slowly been working through. I’m also optimistic that I’ve recruited some of the other DPLAfest attendees to contribute, as an opportunity to learn more about the DPLA API, Blacklight, and Ruby on Rails. Check the progress on the project that started at DPLAfest on GitHub.

LITA: Storify of LITA’s First UX Twitter Chat

planet code4lib - Mon, 2015-05-18 11:18

LITA’s UX Interest Group did a fantastic job moderating the first ever UX Twitter Chat on May 15th. Moderators Amanda (@godaisies) and Haley (@hayleym1218) presented some deep questions and great conversations organically grew from there. There were over 220 tweets over the 1-hour chat.

The next UX Twitter Chat will take place on Friday, May 29th, from 2-3 p.m. EDT, with moderator Bohyun (@bohyunkim). Use #litaux to participate. See this post for more info. Hope you can join us!

Here’s the Storify of the conversation from May 15th.

Patrick Hochstenbach: Brush inking exercise

planet code4lib - Sun, 2015-05-17 10:22
Filed under: Comics Tagged: art, cartoon, cat, ink, inking, mouse

Patrick Hochstenbach: Brush inking exercise II

planet code4lib - Sun, 2015-05-17 10:22
Filed under: Doodles Tagged: brush, cartoon, cat, comic, inking, mouse, sketchbook

David Rosenthal: A good op-ed on digital preservation

planet code4lib - Sun, 2015-05-17 03:12
Bina Venkataraman was a White House adviser on climate change innovation and is now at the Broad Foundation Institute working on long-term vs. short-term issues. She has a good op-ed piece in Sunday's Boston Globe entitled The race to preserve disappearing data. She and I e-mailed back and forth as she worked on the op-ed, and I'm quoted in it.

Update: Bina's affiliation corrected - my bad.

District Dispatch: Supporting the USA FREEDOM Act of 2015: ALA’s Perspective

planet code4lib - Fri, 2015-05-15 20:45

by Ronald Repolona

Anyone who’s followed legislative efforts over the past ten-plus years to restore a fraction of the civil liberties lost by Americans to the USA PATRIOT Act and other surveillance laws will understand the photo accompanying this post. With the revelations of the last several years in particular, first by the New York Times and then by Edward Snowden, many believed that real reform might be achieved in the last Congress by passing the USA FREEDOM Act of 2014. They were wrong.

In May 2014, the House passed a version of the USA FREEDOM Act (H.R. 3361) that was dramatically weakened from a civil liberties point of view in the House Judiciary Committee and then stripped of virtually all meaningful privacy-restoring reforms by the full House of Representatives. While strenuous efforts were made to bring a robust version of the bill (S. 2685) to the floor of the Senate, Republican members filibustered that bill and the 113th Congress ended without further action on any form of the USA FREEDOM Act of 2014.

Undeterred, the bill’s bipartisan sponsors in both chambers recently reintroduced the USA FREEDOM Act of 2015, H.R. 2048 and S. 1123, a tenuously calibrated agreement that garnered the support of many civil liberties organizations, including the American Library Association (ALA), as well as congressional “surveillance hawks,” the nation’s intelligence agencies, and the Administration. On May 14, just one week after a federal appeals court ruled the NSA’s use of Section 215 to collect Americans’ telephone call records in bulk illegal, H.R. 2048 passed the House with a strongly bipartisan vote (338 yeas – 88 nays). At this writing, with effectively just one week remaining for Congress to consider expiring PATRIOT Act provisions before recessing for the Memorial Day holiday and the June 1 “sunset” of those provisions, the bill’s fate rests with the Senate and is highly uncertain.

Not all civil liberties advocates, however, are pushing for passage of this year’s version of the USA FREEDOM Act. The ACLU, for example, is calling on Congress to simply permit Section 215 and other expiring provisions of the PATRIOT Act to “sunset” as scheduled on June 1. The Electronic Frontier Foundation (EFF) also is urging Members of Congress to strengthen H.R. 2048 (rather than pass it in its current form) because, in EFF’s view, the reforms it makes will not sweep as broadly as the appeals court’s recent ruling could if upheld and broadened in its precedential effect by adoption in other courts (including eventually perhaps the U.S. Supreme Court). Neither group, however, is urging Members of Congress to vote against H.R. 2048.

These views by respected long-time ALA allies have, not unreasonably, caused some to ask (and no doubt many more to wonder) why ALA is actively urging its members and the public to work for passage of H.R. 2048. The answer is distillable to four words: policy, politics, permanence, and perseverance.

Policy
Since January of 2003, the Council of the American Library Association (the Association’s policy-setting body) has adopted at least eight Resolutions addressing the USA PATRIOT Act and the access to library patron reading, researching and internet usage records that it affords the government under Section 215 and through the use of National Security Letters (NSLs) and their associated “gag orders.” While somewhat different in individual focus based upon the legislative environments in which they were written, all make ALA’s position on Section 215 of the PATRIOT Act and related authorities consistently clear. Stated most recently in January of 2014, that position is that ALA “calls upon Congress to pass legislation supporting the reforms embodied in [the USA FREEDOM Act of 2014] (see ALA CD#20-1(A)).”

As detailed in this Open Technology Institute (OTI) section-by-section, side-by-side comparison of the current USA FREEDOM Act (H.R. 2048) with two versions introduced in the last Congress, the current bill is a long way from perfect (just as the “old” ones were). It does, however, achieve the principal objectives of last year’s legislation endorsed by ALA’s Council. Specifically, H.R. 2048:

  • categorically ends the bulk collection not only of telephone call records but also of any “tangible things” (in the language of Section 215), library records included. Henceforth, any request for records must relate to a specific pending investigation and be based upon a narrowly defined “specific selection term” as defined in the law. Accordingly, no longer will the NSA or FBI be able to assert that the search histories of all public access computers are “tangible things” whose production they can lawfully and indefinitely compel as part of an essentially boundless fishing expedition. Nor will agencies be able to continue “bulk collection” under other legal authorities, including National Security Letters, or “PEN register” and “trap and trace” statutes;
  • significantly strengthens judicial review of the non-disclosure (“gag”) orders that generally accompany NSLs by eliminating the current requirement in law that a court effectively accept without challenge mere certification by a high-level government official that disclosure of the order would endanger national security. H.R. 2048 also requires the government to initiate judicial review of nondisclosure orders and to bear the burden of proof in those proceedings that they are statutorily justified;
  • permits more robust public reporting, by companies and others that have received Section 215 orders or NSLs from the government, of the number of such requests they’ve processed; and
  • requires the secret “FISA Court” that issues surveillance authorities to designate a panel of fully “cleared” expert civil liberties counsel whom the court may appoint to advise it in cases involving significant or precedential legal issues, and to declassify its opinions or summarize them for public access when declassification is not possible. The bill also expands the opportunity for review of FISA Court opinions by federal appellate courts.

As OTI’s “side-by-side” also indicates, H.R. 2048 falls short of last year’s USA FREEDOM Act iteration in several important respects. Most significantly, records collected by the government on persons who ultimately are not relevant to an investigation may still be retained, and the reforms effected in last year’s bill to Section 702 of the Foreign Intelligence Surveillance Act Amendments Act are decidedly weaker. The bill also extends expiring portions of the PATRIOT Act, as modified, for five years.

Politics
Determining whether ALA should support a particular piece of almost inevitably imperfect legislation turns not only on the content of the legislation (though that naturally receives disproportionate weight in an assessment), but also on the probability of achieving a better result and when such a result might conceivably be obtained. With the change in control of the Senate in 2014 and very high probability that control of the House will not shift for many elections to come, many groups including ALA believe that H.R. 2048 represents the “high water mark” in reform of Section 215 and related legal authorities achievable in the foreseeable future.

Permanence
The recent landmark ruling by the U.S. Court of Appeals for the Second Circuit noted above was sweeping and clear in some respects, but limited and uncertainty-producing in others. Specifically, the Court firmly ruled that the bulk collection of telephone records under Section 215 is illegal. That ruling, however, addressed only the NSA’s bulk collection of “telephony metadata.” It did not directly speak to the bulk collection of any other information, including library records of any kind.

Further, while binding in the states that make up the Second Judicial Circuit (Connecticut, New York, and Vermont), the court’s decision has no precedential effect in any other part of the country. It is also unclear whether the Second Circuit’s decision will be appealed by the government and, if so, what the outcome will be.

Finally, similar decisions are pending in two other federal Courts of Appeal. Should one or both rulings differ materially from the Second Circuit’s, further uncertainty as to what the law is and should be nationally will result. Resolution of such a “split in the Circuits” can only be accomplished through a multi-year appeal process to the U.S. Supreme Court, which is not required to hear the case.

Enactment of the current version of the USA FREEDOM Act would “lock in” the reforms noted above immediately, permanently and nationwide. Accordingly, on balance, ALA and its many coalition allies are supporting the bill and affirmatively urging Members of Congress to do the same.

Perseverance
Finally, and crucially, ALA and its allies have long been and remain fully committed to working for the most profound reform of all of the nation’s privacy and surveillance laws possible. ALA thus regards the USA FREEDOM Act of 2015 as a critical step — the first possible in 14 years — to make real progress toward that much broader permanent goal, but as only a step.

Work in this Congress (and beyond) will continue aggressively to pass comprehensive reform of the badly outdated Electronic Communications Privacy Act and to restore Americans’ civil liberties still compromised by, for example, other portions of the USA PATRIOT Act, Section 702 of the Foreign Intelligence Surveillance Act, Executive Order 12333 and many other privacy-hostile legal authorities.

With our allies at our side, and librarians and their millions of patrons behind us, the fight goes on.

The post Supporting the USA FREEDOM Act of 2015: ALA’s Perspective appeared first on District Dispatch.

Nicole Engard: Bookmarks for May 15, 2015

planet code4lib - Fri, 2015-05-15 20:30

Today I found the following resources and bookmarked them on Delicious.

Digest powered by RSS Digest

The post Bookmarks for May 15, 2015 appeared first on What I Learned Today....

Related posts:

  1. ATO2014: Using Bootstrap to create a common UI across products
  2. Speeding up WordPress Dashboard
  3. Google Docs Templates

