
Feed aggregator

DuraSpace News: NOW AVAILABLE: DSpace 5.4–Bug Fixes, Memory Enhancements++

planet code4lib - Tue, 2015-11-10 00:00

From Tim Donohue, on behalf of the DSpace 5.4 Release Team, and all the DSpace developers

Winchester, MA: DSpace 5.4 is now available, providing security fixes to the JSPUI along with significant bug fixes and memory usage enhancements for all DSpace 5.x users.

DuraSpace News: NOW AVAILABLE: VIVO 1.8.1–Improved Performance and New Visualizations

planet code4lib - Tue, 2015-11-10 00:00

Winchester, MA: On November 10, 2015, VIVO 1.8.1 was released by the VIVO team. This new release offers users vastly improved performance, new and better visualizations, and bug fixes.

Full release notes are available on the VIVO wiki:

LibUX: 028 – Crafting Websites with Design Triggers – Part One

planet code4lib - Mon, 2015-11-09 23:27

A design trigger is a pattern meant to appeal to behaviors and cognitive biases observed in users. Big data and the user experience boom have provided a lot of information about how people actually use the web, which designs work, and (although creepy) how it is possible to cobble together an effective site designed to social-engineer users.

This is the first half of an hour-long talk, in which I introduce design triggers as a concept and their reason for being, touch on things like anchoring and how people actually look at websites, and cover other techniques to pimp your wares through design.

You can follow along if you like.

Thanks, and enjoy!

You can subscribe to LibUX on Stitcher, iTunes, or plug our feed right into your podcatcher of choice. Help us out and say something nice.

The post 028 – Crafting Websites with Design Triggers – Part One appeared first on LibUX.

Peter Sefton: Scratching an itch: my software for formatting song-sheets into, *gasp* PDF!

planet code4lib - Mon, 2015-11-09 23:00

[update: 2015-11-11 minor edits]

Summary: I made a new open source program to format song sheets in chordpro format in a variety of ways. It’s a command line thing. Not everyone understands; when I talk about it to my friends in the band I sort of accidentally seem to have nearly joined, we have IM exchanges like:


I have it set up now with a cloud server watching the dropbox, so if you put in a text file with a .cho extension it will auto-generate a PDF for the song

Other band member:

The what with the what now?

So, I decided to tell you all here on the Interwebs where you will 100% get what I’m talking about.

The problem was this: I had several years’ worth of song-sheets downloaded from various places on the internets or typed out by hand, for my own and other people’s songs, in a variety of formats, including Word docs, RTF and PDF but mostly text files. Then I started playing music with other people again for the first time in years, and we’d be trading bits of paper and files and mailing song files to each other, and so on, always searching for a fourth-generation photocopy of something someone had in their folder. Anyway, I got to looking at ways to manage this and create consistently formatted song sheets.

Turns out there’s a handy format for marking up songs called chordpro. This involves putting chord names in square brackets, inline, amongst the lyrics. Like [C] this. Or [G#maj7] this, with a {title: } at the top, and a few other simple commands. Here’s one I prepared earlier.

{title: Universe}
{st: Peter Sefton}
{key: C}
{transpose: -3}
[C] This is a song about [E7] everything
[F] It's really div[C]erse
[Caug9]Got something for [E7] everyone
[Am] But only [G] has one [F] verse
[F] Get it? Uni [D] Verse
{c: Pre chorus}
[D7] Here comes the chorus:
{soc}
{c: Chorus}
[C] Uni [E7] verse Uni [F] verse
[F] Uni [Fm] verse Uni [C] verse
{eoc}
{sob}
{c: Bridge}
[C] That's [G] it
{eob}
{c: Coda}
[F] Sorry the song's [C] so terse
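Markup like this is simple enough to pull apart with a short script. Here is a rough Python sketch of a parser (not the actual program's code; `parse_chordpro` and the shapes it returns are my own invention):

```python
import re

def parse_chordpro(text):
    """Split a chordpro song into directives and lyric lines with inline chords."""
    directives = {}   # e.g. {"title": "Universe", "key": "C"}
    lines = []        # each lyric line becomes a list of (chord, text) pairs
    for raw in text.splitlines():
        # a directive such as {title: Universe} or a bare one like {soc};
        # repeated directives overwrite each other in this sketch
        m = re.fullmatch(r"\{(\w+)(?::\s*(.*?))?\}", raw.strip())
        if m:
            directives[m.group(1)] = m.group(2) or ""
            continue
        parts = re.split(r"\[([^\]]+)\]", raw)   # split on [C], [G#maj7], ...
        segments = []
        if parts[0]:                             # text before the first chord
            segments.append((None, parts[0]))
        segments.extend(zip(parts[1::2], parts[2::2]))
        lines.append(segments)
    return directives, lines
```

A real implementation would keep directives in document order (so repeated {c: …} comments survive), but this shows the basic shape of the format.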

There are lots of software packages for managing chordpro songs, printing them out, showing them on your church projector, organising them on your iPad, transposing them between keys and so on, but none of the software did quite what I wanted in the way I wanted. For example, most of the packages are designed to put chords above the lyrics, but I prefer leaving the chords inline, the way Rob Weule did it in the Ukulele Club Songbook and Richard G does it in his Ukulele songs; it’s more compact, for one thing, not to mention easier to copy and paste. Here’s an example of some free software online which is really well done, but not what I wanted at all:

A nice online chordpro converter at

I’ve been keeping my songs in text files in Dropbox for years and that suits me. I don’t want to have to suck all the files in to the maw of some slightly dodgy open source Java application on my laptop, or upload them somewhere, or install a web application. And, though it hurts me to say this after all the work I put into scholarly publishing in HTML, PDF is perfect for songs: they are page-based, good for printing and good for displaying on tablets.

And I like to keep my hand in with coding and, well, I have a few hours on the train every day which I don’t always use for work stuff and, you can probably see where this is heading. I got to wondering what would happen if I ran a few songs through Pandoc, the magnificent omnivorous document conversion tool, to make PDF and HTML versions, and one thing led to another. It started innocently enough with a simple script to process chordpro declarations into markdown. But then I asked myself: how would this look as an epub ebook made up of multiple songs? And then how do I make a word doc with a table of contents and start each song on a new page? And how might I get the script to make the songs scale (pun intended) so they fill up the page, for maximum readability? Which led to experiments with LaTeX, the taste of which I still don’t have out of my mouth entirely, and a brief flirtation with the Open Document Format and the LibreOffice presentation software (we’ve dated before but it has never worked out long-term). I finally got friendly with an amazing bit of command line software called wkhtmltopdf, which can turn HTML web pages into PDF, including running any javascript they contain before doing so. This way I was able to write a script that automatically scaled up text to fit an A4 page.
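The wkhtmltopdf step can be driven from Python with a thin wrapper. A sketch, assuming wkhtmltopdf is on the PATH; the helper names are mine, and --javascript-delay gives the page's fit-to-A4 script time to run before rendering:

```python
import subprocess

def wkhtmltopdf_cmd(html_path, pdf_path):
    # --javascript-delay (in milliseconds) lets in-page javascript,
    # such as a font scaler, finish before the PDF snapshot is taken
    return ["wkhtmltopdf", "--javascript-delay", "500",
            "--page-size", "A4", html_path, pdf_path]

def html_to_pdf(html_path, pdf_path):
    # raises CalledProcessError if wkhtmltopdf exits non-zero
    subprocess.run(wkhtmltopdf_cmd(html_path, pdf_path), check=True)
```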

I told myself “I’m not going to do chord grids”, you know, little images with fret-dots etc, cos me and my music buddies are all awesome players who know every chord ever, and if not can like totally work them out in our heads. But then I wondered if there was an existing open source library that did chords for multiple instruments I could just, like, drop in to my Python program so I could re-learn the banjo chords I’ve forgotten, and learn a bit more mandolin. It turned out that there isn’t really, so I used a few train trips and a hot Saturday arvo to write a chord-grid-drawer. It turns this chord definition I got from the open source software at Uke Geeks (thanks Buz!):

{define: Aaug frets 2 1 1 4 fingers 2 1 1 4 add: string 1 fret 1 finger 1 add: string 4 fret 1 finger 1}

Into this:


Unlike most of the chord drawing software I found, which has built-in limitations, it will also cheerfully render a really silly chord like this, which would require twelve strings, 33 playable frets and 11 fingers, like the dude in One Fish Two Fish Red Fish Blue Fish by Dr Seuss:


Look at his fingers!

One, two, three…

How many fingers

Do I see?

One, two, three, four,

Five, six, seven,

Eight, nine, ten.

He has eleven!


This is something new.

I wish I had

Eleven, too!

{define: F#stupid base-fret 22 frets 1 2 3 x 4 5 6 7 8 9 10 11 fingers 11 10 9 8 0 7 6 5 4 3 2 1}
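A {define: …} line like the two above breaks down into a handful of tokens. A hedged Python sketch of a parser (`parse_define` and its field names are mine, not the actual program's):

```python
def parse_define(directive):
    """Parse a chordpro {define: ...} into name, base fret, frets, fingers, adds."""
    # keep only the text after the first colon: "Aaug frets 2 1 1 4 ..."
    body = directive.strip("{}").split(":", 1)[1].strip()
    tokens = body.split()
    chord = {"name": tokens[0], "base-fret": 1,
             "frets": [], "fingers": [], "adds": []}
    i = 1
    while i < len(tokens):
        if tokens[i] == "base-fret":
            chord["base-fret"] = int(tokens[i + 1]); i += 2
        elif tokens[i] in ("frets", "fingers"):
            key = tokens[i]; i += 1
            # collect numbers until the next keyword; "x" marks a muted string
            while i < len(tokens) and tokens[i] not in ("frets", "fingers",
                                                        "add:", "base-fret"):
                chord[key].append(None if tokens[i] == "x" else int(tokens[i]))
                i += 1
        elif tokens[i] == "add:":
            # e.g. "add: string 1 fret 1 finger 1" -> extra dot on a string
            chord["adds"].append({tokens[i + 1]: int(tokens[i + 2]),
                                  tokens[i + 3]: int(tokens[i + 4]),
                                  tokens[i + 5]: int(tokens[i + 6])})
            i += 7
        else:
            i += 1
    return chord
```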


So, I had the technology to render chords (even at grand piano scale) but then it turned out I couldn’t find chord definition files for more instruments. There are lots of chord charts in image formats, of course, but no data that I can find, which led to working out how to compile this old C code and modify it a bit to produce chord-data files (my update to that code is not done enough to release).

Anyway, the above song looks like this when run through my software. Now, after a couple of months of part-time tinkering, I can type ./ -o --instrument Uke uni-verse.cho and this appears!

Uni-verse rendered for printing

Better yet, I have it set up now with a cloud server watching the dropbox, so if you put in a text file with a .cho extension it will auto-generate a PDF for the song. That is, in our shared band folder in Dropbox, if anyone creates a new song file, a new PDF appears automagically about a second later. Drop me a line if you’d like to try it - all you have to do is share a Dropbox folder with an account of mine. This is one of my favourite deployment patterns for software, almost like a no-interface user-interface. I’ll write more about this matter soon.
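The watch-the-Dropbox pattern can be as simple as a polling loop over the synced folder. A minimal sketch, assuming the Dropbox client mirrors the shared folder to local disk; `songbook` is a made-up stand-in for the converter command:

```python
import subprocess
import time
from pathlib import Path

def changed_files(folder, seen):
    """Return .cho files that are new or modified since the last scan."""
    changed = []
    for cho in sorted(Path(folder).glob("*.cho")):
        mtime = cho.stat().st_mtime
        if seen.get(cho) != mtime:
            seen[cho] = mtime          # remember this version
            changed.append(cho)
    return changed

def watch(folder, poll_seconds=1.0):
    """Regenerate a PDF whenever a song file appears or changes."""
    seen = {}
    while True:
        for cho in changed_files(folder, seen):
            # "songbook" stands in for whatever converter script you use
            subprocess.run(["songbook", "-o", str(cho)], check=False)
        time.sleep(poll_seconds)
```

A production setup would use filesystem notifications (e.g. inotify) rather than polling, but polling every second is plenty for a band's shared folder.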

Along the way I learned:

  • How to make books, by feeding in a list of files.
  • How to make a setlist book for a performance complete with additional performance notes, from a markdown file.
  • That even when we had a setlist book my bandmates typed up setlists in big writing to put on the floor, so I added a feature to write out set-per-page at the start of the book.
  • That we really need the ability to do per-performance annotations, such as who goes first and whether we have a count-in or all-in approach to the song, so notes from the setlist such as go slow get put at the top of each song.
  • How to keep two page songs on facing pages in said books so you don’t have to turn pages.
  • How to generate chord-definitions for arbitrary instruments (still working on that bit).
  • How to transpose chord definitions from one instrument to another if they have the same relative tuning between strings - eg soprano uke chords into baritone uke, or open-G banjo tuning into the C tuning used by my baby banjo, the Goldtone Plucky.
  • How to make stand-alone one page versions of songs …
  • … automatically, whenever I drop a new one in my Dropbox, or change one.
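That last transposition trick, moving chord definitions between instruments with the same relative tuning, boils down to keeping the shapes and shifting each chord's root name. A sketch (sharps only; a real version would handle flats and enharmonics; the function name is mine):

```python
NOTES = ["A", "A#", "B", "C", "C#", "D", "D#", "E", "F", "F#", "G", "G#"]

def transpose_name(chord, semitones):
    """Shift a chord name's root by a number of semitones, keeping the suffix.

    A baritone uke (DGBE) is tuned five semitones below a soprano (GCEA)
    with the same intervals between strings, so every soprano shape can be
    reused as-is: only the chord's *name* moves down five semitones.
    """
    root = chord[:2] if len(chord) > 1 and chord[1] == "#" else chord[:1]
    suffix = chord[len(root):]
    idx = (NOTES.index(root) + semitones) % 12
    return NOTES[idx] + suffix
```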

Now, I realise that a command line thing that’s a hassle to install and will almost certainly break the moment someone other than me tries to run it has limited appeal, but I’m releasing it to the world anyway. Even if the software is not to your taste, I think some of the things I’ve done will be useful to others. For example I:

  • Liberated Buz Carter’s chord definitions for Soprano uke which were encoded in a javascript program into a stand alone file, observing the license conditions of the software of course.
  • Likewise liberated the guitar chords from the venerable chordii software.
  • Generated chord definitions for 5 string banjo G-tuning and mandolin; these may not be the best voicings or fingerings as they were auto-generated - let me know if you can help improve them.
The future

Here are some other things I am Never Ever Going To Do With This Software. Ever. Never. Absolutely promise.

  • It will never have a Graphical User Interface.

  • It will never be a web application.

  • It will never be a dot-com startup.

  • It will never be an iOS app.


If anyone wants to help we could:

  • Build a decent library of chord shapes defined in chordpro format

  • Improve the look of the generated books a lot, if anyone knows some modern HTML and CSS.

  • Make it better.

District Dispatch: Market solves infringement problem? Yeah, right.

planet code4lib - Mon, 2015-11-09 22:45

The NASDAQ stock market, photo by Wikimedia.

“Let the market solve the problem” is a familiar refrain, especially for those who want smaller government and fewer regulations. Frequently the market does solve the problem where the government is unable or unsuccessful. The government’s failed attempt to combat piracy—SOPA—is an example. The public roundly opposed it and said that it was overkill, a threat to security, privacy, and free expression, leading to the now famous and somewhat embarrassing internet blackout. Rights holders can resort to the “notice and takedown” provision of the Digital Millennium Copyright Act (DMCA), but describe its limitations as a game of “whack-a-mole”—take down a piracy website only to have it pop up again under a different URL.

We need what Adam Smith called the “invisible hand” of the market. Be patient, the market will decide how to fix this infringement problem. Allow innovation and experimentation, let new technologies develop, and welcome new players in the market to emerge to take on these battles.

The market has emerged.

Why fight piracy, when you can monetize it? Since piracy cannot be completely eliminated, why not capitalize on it? Rights holders can get a piece of the pie. For instance, they can hire companies to search for their content on YouTube. Instead of suing and taking down the content, a rights holder can monetize the content by leaving it on YouTube and collecting part of the advertising revenue.

Bullying is another money-maker. Copyright trolls can sweep the net, find alleged infringers and scare them just enough so they settle out of court. Cease and desist and pay a fine or else. Cha-ching! A quick $500! Now we’re talking!

Image Technologies Laboratories (ITL) is an emerging global company that uses the latest technology to find images on the web, finally addressing the unmet need of photographers who are ripped off every day. ITL is excited about moving forward, announcing in a press release that not only is there a business backlog, but “the market has never been more saturated with the mishandling of digital content and the theft of copyrighted property,” suggesting a sustainable business.

Even better yet: allow people to invest in infringement.

Enter RightsCorp Inc., a publicly traded company. Their innovative business model—a market solution—uses the best digital crawler (patent pending) to sweep peer-to-peer sites, and find alleged infringers. Using the trolling method, they collect legal damages and settle out of court. Alleged infringers cough up revenue that is then evenly split between RightsCorp and the rights holders. Next RightsCorp took the business plan to a new level by selling stock in the company. Let’s share the wealth (while recouping some start-up costs) and sell company stock. Wise investors can bet on the permanency of piracy.

The market – the ultimate problem solver – NOT!

The post Market solves infringement problem? Yeah, right. appeared first on District Dispatch.

DPLA: DPLA Announces Knight Foundation Grant to Research Potential Integration of Newspaper Content

planet code4lib - Mon, 2015-11-09 18:50

The Digital Public Library of America has been awarded $150,000 from the John S. and James L. Knight Foundation to research the potential integration of newspaper content into the DPLA platform.

Over the course of the next year, DPLA will investigate the current state of newspaper digitization in the US. Thanks in large part to the National Endowment for the Humanities and the Library of Congress’s joint National Digital Newspaper Program (NDNP) showcased online as Chronicling America, many states in the US have digitized their historic newspapers and made them available online. A number of states, however, have made newspapers available outside of or in addition to this important program, and DPLA plans to investigate what resources it would take to potentially provide seamless discovery of the newspapers of all states and US territories, including the over 10 million pages already currently available in Chronicling America.

“We are grateful to the Knight Foundation for providing funding to DPLA that enables us to devote time and resources to investigate the potential integration of newspapers into the DPLA,” said Emily Gore, DPLA Director of Content. “We look forward to working with our current hubs, NDNP participants and other significant newspaper projects over the next year.”

Other national digital libraries including Trove in Australia and Europeana have undertaken efforts to make full-text newspaper discovery a priority. Europeana recently launched Europeana Newspapers by aggregating 18 million historic newspaper pages. The intent of the DPLA staff is to engage the state newspaper projects, as well as Trove and Europeana Newspapers, over the next year as we consider the viability of a US-based newspaper aggregation. DPLA will also engage with the International Image Interoperability Framework (IIIF) community to discuss how IIIF may play a role in centralized newspaper discovery.

At the conclusion of the yearlong planning process, DPLA will hold a summit to report out on our findings and to discuss next steps with the cultural heritage newspaper community.

Image credit: Detail from “Students reading newspapers together,” 1961. University of North Texas Libraries Special Collections via The Portal to Texas History.

Evergreen ILS: Welcome Evergreen’s newest core committer, Kathy Lussier!

planet code4lib - Mon, 2015-11-09 18:24

I am very pleased to announced that Kathy Lussier, project coordinator for the Massachusetts Library Network Cooperative, is Evergreen’s newest core committer!

Core committers are the folks entrusted with the responsibility of pushing new code to Evergreen’s main Git repository. Consequently, they serve as one of the final lines of defense against bugs slipping in. Kathy is eminently prepared for this role: since 2012, she has tested and signed off on over 250 patches written by others. During the same period, she authored 100 code and documentation patches, with a special focus on TPAC.

Kathy has also been very active in a wide variety of work to make coding for Evergreen happen more smoothly. Some examples include analyzing requirements and writing specifications for various new features; helping to organize Evergreen Hack-A-Ways; helping to expand our use of automated tests; and coordinating Evergreen’s participation in the GNOME Outreach Program for Women (now Outreachy).

We are fortunate and honored that Kathy Lussier has been a part of Evergreen for years, and I look forward to what she will accomplish in her latest role as a core committer.

Cynthia Ng: Mozilla Festival Day 2: CopyBetter: Notes from Copyright Reform in the EU

planet code4lib - Mon, 2015-11-09 16:46
Trying to explain the bureaucratic mess. Three main institutions: the European Commission (the executive), the Parliament (the only elected group in the EU), and the Council (representation of all member states). The three institutions have to hammer out a solution or compromise; then the Parliament and Council vote, and then the Commission implements it. Legislative Acts: Directive – minimum standard for all … Continue reading Mozilla Festival Day 2: CopyBetter: Notes from Copyright Reform in the EU

Thom Hickey: More about justlinks

planet code4lib - Mon, 2015-11-09 15:16

We had an earlier post about the 'justlinks' view of VIAF clusters, but I thought it would be worthwhile to explore how that can combine with other VIAF functionality.

First, a reminder of how the justlinks view works. While the default view of clusters for Web browsers is the HTML interface, VIAF clusters can be displayed in several ways, including raw XML, RDF XML, MARC-21 and justlinks JSON. Here’s a request for justlinks.json:

which returns:

{
  "viafID": "36978042",
  "B2Q": ["0000279733"],
  "BAV": ["ADV11117013"],
  "BNE": ["XX904401"],
  "BNF": [""],
  "DNB": [""],
  "ISNI": ["000000010888091X"],
  "LAC": ["0064G7865"],
  "LC": ["n90602202"],
  "LNB": ["LNC10-000054199"],
  "N6I": ["vtls000101241"],
  "NKC": ["js20080511012"],
  "NLA": ["000035338539"],
  "NLI": ["000501536"],
  "NLP": ["a11737736"],
  "NSK": ["000051380"],
  "NTA": ["073902861"],
  "NUKAT": ["vtls000205390"],
  "PTBNP": ["70922"],
  "SELIBR": ["256753"],
  "SUDOC": ["031580661"],
  "WKP": ["Q6678817"],
  "XA": ["2219"],
  "ORCID": [""],
  "Wikipedia": [""]
}

Ralph LeVan came up with this and we think it is pretty neat!  But wait, it gets even better!

Each of the IDs in this record that is a 'source record' ID to VIAF (in this case everything except the ORCID ID and the en.wikipedia URI) can be used to retrieve the cluster.  Here's how to pull justlinks.json using the LC ID:|n90602202/justlinks.json

HTTPS works too:|000051380/justlinks.json

All the different views of the clusters can be requested either through the explicit URIs shown here or through HTTP headers, and they in turn can be combined with sourceID redirection.
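Putting those pieces together in code: a hedged Python sketch for building a sourceID request and tidying a justlinks record. The https://viaf.org/viaf/ base path is my assumption, since the post's example URLs did not survive here, and the helper names are mine:

```python
VIAF_BASE = "https://viaf.org/viaf/"   # assumed base path

def source_url(source, local_id, view="justlinks.json"):
    """Build a sourceID request that the server redirects to the cluster's view."""
    return "%ssourceID/%s|%s/%s" % (VIAF_BASE, source, local_id, view)

def nonempty_links(record):
    """Drop sources whose values are empty in a justlinks record."""
    return {k: v for k, v in record.items()
            if k != "viafID" and any(x for x in v)}
```

Fetching `source_url("LC", "n90602202")` with any HTTP client that follows redirects should land on the same cluster as the plain viafID request.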


OCLC Dev Network: Change to Terminology Services

planet code4lib - Mon, 2015-11-09 15:00

OCLC Research will be ending support for the Terminology Services prototype on 20 November 2015.

Open Knowledge Foundation: Join the School of Data team: Technical Trainer wanted

planet code4lib - Mon, 2015-11-09 14:09

The mission of Open Knowledge International is to open up all essential public interest information and see it utilized to create insight that drives change. To this end we work to create a global movement for open knowledge, supporting a network of leaders and local groups around the world; we facilitate coordination and knowledge sharing within the movement; we build collaboration with other change-making organisations both within our space and outside; and, finally, we prototype and provide a home for pioneering products.

A decade after its foundation, Open Knowledge International is ready for its next phase of development. We started as an organisation that led the quest for the opening up of existing data sets – and in today’s world most of the big data portals run on CKAN, an open source software product developed first by us.

Today, it is not only about opening up of data; it is making sure that this data is usable, useful and – most importantly – used, to improve people’s lives. Our current projects (School of Data, OpenSpending, OpenTrials, and many more) all aim towards giving people access to data, the knowledge to understand it, and the power to use it in our everyday lives.

The School of Data is growing in size and scope, and to support this project – alongside our partners – we are looking for an enthusiastic Technical Trainer (flexible location, part time).

School of Data is a network of data literacy practitioners, both organisations and individuals, implementing training and other data literacy activities in their respective countries and regions. Members of the School of Data work to empower civil society organizations (CSOs), journalists, governments and citizens with the skills they need to use data effectively in their efforts to create better, more equitable and more sustainable societies. Over the past four years, School of Data has succeeded in developing and sustaining a thriving and active network of data literacy practitioners in partnership with our implementing partners across Europe, Latin America, Asia and Africa.

Our local implementing partners are Social TIC, Code for Africa, Metamorphosis, and several Open Knowledge chapters around the world. Together, we have produced dozens of lessons and hands-on tutorials on how to work with data published online, benefitting thousands of people around the world. Over 4500 people have attended our tailored training events, and our network has mentored dozens of organisations to become tech savvy and data driven. Our methodologies and approach for delivering hands-on data training and data literacy skills – such as the data expedition – have now been replicated in various formats by organisations around the world.

One of our flagship initiatives, the School of Data Fellowship Programme, was first piloted in 2013 and has now successfully supported 26 fellows in 25 countries to provide long-term data support to CSOs in their communities. School of Data coordination team members are also consistently invited to give support locally to fellows in their projects and organisations that want to become more data-savvy.

In order to give fellows a solid point of reference in terms of content development and training resources, and also to have a point person to give capacity building support for our members and partners around the world, School of Data is now hiring an outstanding trainer/consultant who’s familiar with all the steps of the Data Pipeline and School of Data’s innovative training methodology to be the all-things-content-and-training point person for the School of Data network.


The hired professional will have three main objectives:

  • Technical Trainer & Data Wrangler: represent School of Data in training activities around the world, either supporting local members through our Training Dispatch or delivering the training themselves;
  • Data Pipeline & Training Consultant: give support for members and fellows regarding training (planning, agenda, content) and curriculum development using School of Data’s Data Pipeline;
  • Curriculum development: work closely with the Programme Manager & Coordination team to steer School of Data’s curriculum development, updating and refreshing our resources as novel techniques and tools arise.
Terms of Reference
  • Attend regular (weekly) planning calls with School of Data Coordination Team;
  • Work with current and future School of Data funders and partners in data-literacy related activities in an assortment of areas: Extractive Industries, Natural Disaster, Health, Transportation, Elections, etc;
  • Be available to organise and run in-person data-literacy training events around the world, sometimes on short notice (agenda, content planning, identifying data sources, etc);
  • Provide reports of training events and support given to members and partners of School of Data Network;
  • Work closely with all School of Data Fellows around the world to aid them in their content development and training events planning & delivery;
  • Write for the School of Data blog about curriculum and training events;
  • Take ownership of the development of curriculum for School of Data and support training events of the School of Data network;
  • Work with Fellows and other School of Data Members to design and develop their skillshare curriculum;
  • Coordinate support for the Fellows when they do their trainings;
  • Mentor Fellows including monthly point person calls, providing feedback on blog posts and curriculum & general troubleshooting;
  • The position reports to School of Data’s Programme Manager and will work closely with other members of the project delivery team;
  • This part-time role is paid by the hour. You will be compensated with a market salary, in line with the parameters of a non-profit-organisation;
  • We offer employment contracts for residents of the UK with valid permits, and services contracts to overseas residents
  • A lightweight monthly report of performed activities with Fellows and members of the network;
  • A final narrative report at the end of the first period (6 months) summarising performed activities;
  • Map the current School of Data curriculum to diagnose potential areas of improvement and to update;
  • Plan and suggest a curriculum development & training delivery toolkit for Fellows and members of the network
  • Be self-motivated and autonomous;
  • Fluency in written and spoken English (Spanish & French are a plus);
  • Reliable internet connection;
  • Outstanding presentation and communication skills;
  • Proven experience running and planning training events;
  • Proven experience developing curriculum around data-related topics;
  • Experience working remotely with workmates in multiple timezones is a plus;
  • Experience in project management;
  • Major in Journalism, Computer Science, or related field is a plus

We strive for diversity in our team and encourage applicants from the Global South and from minorities.


Six months to one year: from November 2015 (as soon as possible) to April 2016, with the possibility to extend until October 2016 and beyond, at 10-12 days per month (8 hours/day).

Application Process

Interested? Then send us a motivational letter and a one page CV via

Please indicate your current country of residence, as well as your salary expectations (in GBP) and your earliest availability.

Early application is encouraged, as we are looking to fill the position as soon as possible. The vacancy will close when we find a suitable candidate.

Interviews will be conducted on a rolling basis and may be requested on short notice.

If you have any questions, please direct them to jobs [at]

LITA: “Settling for a job” and “upward mobility”: today’s career paths for librarians

planet code4lib - Mon, 2015-11-09 14:00
The Jeffersons, 1975.

I very recently shifted positions from a large academic research library to a small art school library, and during my transition the phrases “settling for a job” and “upward mobility” were said to me quite a bit. Both of these phrases set me personally on edge, and it got me thinking about today’s career paths for librarians and how they view their own trajectory.

At my last job, I was a small cog in a very well-oiled machine. It was not a librarian position and because I was in such a big institution I did not have a large variety of responsibilities. Librarian positions there were traditionally tenure-track, though it was clear that Technical Services was already on the path to eliminating Librarian titled positions and removing MLIS/MLS degrees from the required qualifications of position descriptions. A recent post from In the Library With the Lead Pipe addressed the realities of professional impact on the career trajectory of academic librarians today:

While good advice is readily available for most librarians looking to advance “primary” responsibilities like teaching, collection development, and support for access services, advice on the subject of scholarship—a key requirement of many academic librarian positions—remains relatively neglected by LIS programs across the country. Newly hired librarians are therefore often surprised by the realities of their long term performance expectations, and can especially struggle to find evidence of their “impact” on the larger LIS profession or field of research over time. These professional realizations prompt librarians to ask what it means to be impactful in the larger world of libraries. Is a poster at a national conference more or less impactful than a presentation at a regional one? Where can one find guidance on how to focus one’s efforts for greatest impact? Finally, who decides what impact is for librarians, and how does one go about becoming a decision-maker?

Though my last job taught me a great deal about management and scholarly publication, I accepted my current position at a small art school library because of my desire to take on a role that required me to wear a lot of different hats: taking care of cataloging, helping with circulation and reference, and dabbling in student library programming. While this appeals to me greatly because of how multi-faceted my job can be, I often received negative opinions from colleagues at my last institution prior to my transition. It couldn’t be a very good position if I was doing cataloging and reference, they’d say. The unsolicited advice I was given was “don’t settle for a job. Really think about your career trajectory so that your resume makes sense to future employers.”

This sentiment really made me uncomfortable. The fact that someone would imply that the job I was taking was inferior to my institution at the time and that the only reasonable explanation was that I was “settling” was offensive. Isn’t a career trajectory something that should really only concern the individual accepting those positions? Librarianship is such a multi-faceted and diverse field, is there really such a thing as a career trajectory that “makes sense?” Is there one clear path for everyone that is meant to lead to “upward mobility?”

Should we all be viewing professional impact in librarianship the same way? My last professional environment heavily stressed implementing new (but inexpensive) technologies that would enhance library discovery and bibliographic control. My current environment is much more holistic in that it encapsulates information literacy, high-quality reference, and really just making the library a more welcoming place for students to be in.

So how do we determine the altmetrics of our career trajectory? Is there a right and a wrong way, and does this change from early-career to mid-career librarianship? In a DIY age where a lot of us are teaching ourselves skills we know to be highly desired on the fly, how do these factors contribute to our view of the impact we have on the field?

Ed Summers: Dead Letter Revision

planet code4lib - Mon, 2015-11-09 05:00

You may remember last week I provided a short example of using metaphor to give depth and life to some of my otherwise shallow and boring text. I’m not sure I achieved this, but it was a fun exercise for someone like me who likes playing with words. The crucial last step in Sword’s process is to share the reworked sentence with a friend to see if it works, and to get feedback on how to make it better.

During last week’s class I shared my original text and the various iterations I went through to generate a new sentence using the Dead Letter Office as a metaphor for the HTTP redirect. Just the process of trying to read my text out loud to my classmates was illuminating. I had trouble simply reading it without slipping into a muddled monotone. Fortunately, I have generous and kind classmates who gave me useful advice without ripping the idea to shreds.

The main piece of advice I took from the conversation was to stop trying to achieve parity between the metaphor of the Dead Letter Office and the HTTP redirect. As I gathered more details about the Dead Letter Office (mostly on Wikipedia) I tried to work these facts into the description. I had built up a lot of text and ideas, but hadn’t simplified them using plain language. It felt like I was going down the rabbit hole of talking about Dead Letter Offices instead of redirects. Also, the metaphor was stretched thin: aspects of the work in a DLO didn’t align properly with the redirect, but I tried to force them together anyway.

Kari pointed out that my previous sentence had a one word metaphor that worked quite well:

A more practical solution to minting the perfect URL for your Web resources is to accept that most things change, but to alert people who care when these changes occur.

I’ve used the word mint so many times when discussing URLs because of its use in the semantic web literature. All this time I hadn’t really considered how it operated as a metaphor for financial systems. Kari suggested that I might want to do something similarly understated with the Dead Letter Office metaphor. In the process of working with it I decided to abandon the Dead Letter Office and focus more on a change of address form in the Post Office, which was one of the other options I had brainstormed. It seemed easier to understand and less distracting than the Dead Letter Office:

Think of an HTTP redirect as the forwarding address you give the post office when you move. The post office keeps your change of address on file, and sends mail on to your new address, for a period of time (typically a year). The medium is different but the mechanics are quite similar, as your browser seamlessly follows a redirect from the old location of a document to its new location.

I don’t think I achieved subtlety in this version, but it felt like an improvement, and less contrived. Another thing I incorporated was Diane’s suggestion that I mention how the browser, much like the postal system, works seamlessly or invisibly. Most of the time you don’t even notice when the link you’ve clicked on actually results in you viewing a document somewhere else. The browser quietly follows the redirects without letting you know.
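The mechanics are easy to see in a few lines of Python using only the standard library. This is just an illustrative sketch (the server, paths, and message text are all invented for the demo): a tiny in-process server issues a 301 with a Location header, and urllib follows it without the caller ever noticing.

```python
import http.server
import threading
import urllib.request

class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/old":
            # The "change of address form": point clients at the new location
            self.send_response(301)
            self.send_header("Location", "/new")
            self.end_headers()
        else:
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(b"hello from the new address")

    def log_message(self, *args):
        pass  # keep the demo quiet

# Start the server on a random free port in a background thread
server = http.server.HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

# urllib follows the 301 automatically, like the post office forwarding mail
with urllib.request.urlopen(f"http://127.0.0.1:{port}/old") as resp:
    final_url = resp.geturl()  # the URL we actually ended up at
    body = resp.read()

server.shutdown()
print(final_url)   # we asked for /old, but arrived at /new
print(body.decode())
```

We requested `/old`, but `geturl()` reports the `/new` address: the redirect was followed silently, just as forwarded mail arrives without the sender filling in the new address.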

It was hard to part with the image of the Dead Letter Office. Perhaps it’s the seed of an idea for another time.

Terry Reese: MarcEdit Mac Updates

planet code4lib - Sun, 2015-11-08 16:52

I’ve posted a new MarcEdit update.  You can get the builds directly from: or using the automated update tool within MarcEdit.  Direct links:

The change log follows:



MarcEdit Mac ChangeLog: 11/8/2015

MarcEdit Applications Changes:
* Build New Field Tool Added
** Added Build New Field Tool to the Task Manager
* Validate Headings Tool Added
* Extract/Delete Selected Records Tool Added

* Updates to Linked Data tool
** Added option to select oclc number for work id embedding
** Updated Task Manager signatures

* Edit Indicators
** Removed a blank space as legacy wildcard value.  Wildcards are now strictly “*”

Merge Records Tool
* Updated User defined fields options to allow 776$w to be used (fields used as part of the MARC21 option couldn’t previously be redefined to act as a single match point)

* Results page will print UTF8 characters (always) if present

* Adding an option so if selected, 880 will be sorted as part of their paired field.

Z39.50 Client
* Supports Single and Batch Search Options

Terry Reese: MarcEdit Windows/Linux Update Notes

planet code4lib - Sun, 2015-11-08 16:51

I’ve posted a new MarcEdit update.  You can get the builds directly from: or using the automated update tool within MarcEdit.  Direct links:

The change log follows:



MarcEdit Windows/Linux ChangeLog: 11/8/2015

MarcEdit Application Changes:
* Updates to the Build New Field Tool
** Code moved into meedit code library (for portability to the mac system)
** Separated options to provide an option to add new field only, add when not present, replace existing fields
** Updated Task Manager signatures — if you use this function in a task, you will need to update the task

* Updates to Linked Data tool
** Added option to select oclc number for work id embedding
** Updated Task Manager signatures
** Updated cmarcedit commandline options

* Edit Indicators
** Removed a blank space as legacy wildcard value.  Wildcards are now strictly “*”

Merge Records Tool
* Updated User defined fields options to allow 776$w to be used (fields used as part of the MARC21 option couldn’t previously be redefined to act as a single match point)

* Results page will print UTF8 characters (always) if present

Validate ISBN/ISSN
* Results page now includes the 001 if present in addition to the record # in the file

* Adding an option so if selected, 880 will be sorted as part of their paired field.

* Added Sorting Preferences
* Added New Options Option, shifting the place where the folder settings are set.

UI Improvements
* Various UI improvements made to better support Windows 10.

Cynthia Ng: Mozilla Festival Day 2: Notes on Organize Better, More Inclusive Hackathons

planet code4lib - Sun, 2015-11-08 12:35
Since I help organize conferences and events, I thought it would be interesting to attend the session. Things that Worked Well or Not / Made You Feel More/Less Included (Small Group Discussion): planning/schedule – clear schedule, what to bring, clear goal / purpose; team / relationship building – prepared; make it fun / social – … Continue reading Mozilla Festival Day 2: Notes on Organize Better, More Inclusive Hackathons

Cynthia Ng: Mozilla Festival Day 2: Opening Keynotes

planet code4lib - Sun, 2015-11-08 11:00
Day 2 started with more talks to kick off the day. Mark Surman There is a new wave of open emerging. More optimistic than the corporate controlled internet talked about last year. Rally citizens / connect leaders -> fuel the movement. Told a story about making his first TV commercial that was then destroyed. What … Continue reading Mozilla Festival Day 2: Opening Keynotes

Ed Summers: Seminar Week 10

planet code4lib - Sun, 2015-11-08 05:00

This week’s readings were focused on Values in Design with Friedman & Nissenbaum (1997), Shilton, Koepfler, & Fleischmann (2014) and (???). We were fortunate to have Katie Shilton on hand to talk about her article, and values sensitive design in general.

We actually spent the first half of the class working with Envisioning Cards, which are a deck of cards that help designers explore the value dimensions in their projects. We split up into groups, picked a technology project, and got 4 random cards to work with. My group decided to examine public transportation planning, which is a hybrid socio-technical system involving human actors such as elected officials, planners, and civic organizations, as well as planning and data collection systems. Since this was kind of a sprawling system to think about, we mostly focused on a particular planning task: designing the transportation system around a new supermarket that was being built.

One of our cards asked us to examine value tensions:

The card had us come up with three value tensions in our transportation planning system, and then identify a design feature that favors one value over another. One of the value tensions we came up with in our supermarket planning was between car parking and public transportation. One design feature that materialized this tension was the size of the supermarket. If the location supported public transportation, then less space would need to be dedicated to car parking, which would leave more space for the store itself.

I thought the cards did a nice job of guiding our group discussion to tease out value issues. They were especially good for getting our conversation started, and breaking the ice. We didn’t spend more than 5 minutes on each card, but even in that time it became clear that each member of our group had their own values that they brought to the table, and that these perspectives were valuable to the discussion.

Apparently it is more common to use Envisioning Cards when teaching Value Sensitive Design in classes than it is to use them as part of an actual design process. But I ordered a set anyway, since they seem like they could be useful to try in real design work. Perhaps using game-like elements in serious design issues could lead some to feel like values issues are being minimized or trivialized. But I think games can help unlock creativity as well. The cards reminded me a bit of CRC Cards in agile software development, which I’ve seen work quite well when designing object-oriented software. The card metaphor is a generally useful design metaphor, which is present in project management tools like Trello.

One of the interesting conversations we had around (???) involved transparency: how important is it for authors to discuss their own values in their research:

Another facet of this recommendation is connected with the aforementioned distinction among explicitly supported values, stakeholder values, and designer values. If researchers make this distinction in a given project, it may well be useful for the reader to know what are the relevant personal values of the researchers or designers, as well as the values that are explicitly supported in that project and the relevant values of the key stakeholders, so that the reader can judge how well the different kinds of values are distinguished and treated in the work. This kind of clarity is particularly important if there were value tensions or conflicts among values in these three groups.

Some felt that discussing values in a paper about quarks for example, may seem a bit out of place, and perhaps even discouraged. I think Borning’s point is not about research in general, but research involving the study of values. It becomes difficult to contextualize research into values when the researcher and designer values aren’t adequately described.

I have spent a significant portion of my career working in and with the open source and agile software development communities. Part of the reason these communities appeal to me is their implicit values of the commons and of users in design. But people can get involved in open source software for a variety of reasons, and successful projects often form because of shared values. It makes me think that the better these values are understood in design, the more fulfilling the work will be, and the easier it will be for outsiders to join in. This is just a hunch though; it would be interesting to see if other people are studying that. At any rate, I suspect that the bibliographies in these papers contain a lot of stuff I’m going to be looking at in the coming years.


Friedman, B., & Nissenbaum, H. (1997). Human values and the design of computer technology. In (pp. 330–347). Cambridge University Press.

Shilton, K., Koepfler, J. A., & Fleischmann, K. R. (2014). How to see values in social computing: Methods for studying values dimensions. In Proceedings of the 17th ACM Conference on Computer Supported Cooperative Work & Social Computing (pp. 426–435). Association for Computing Machinery.

Ed Summers: Seminar Week 9

planet code4lib - Sun, 2015-11-08 05:00

This week we took a break from readings and reviewed each other’s initial research topic proposals. I believe that the idea is for these research proposals to feed into the work we do in next semester’s seminar, which ultimately leads up to the integration paper that caps our class work, and then feeds into our dissertation. It’s not really appropriate for me to share my classmates’ ideas here, but I will say that I was really struck by how varied and interesting they were: methods for studying citizen science, values in design, trauma in information systems. Our discussion was useful because it revealed the degree to which I actually missed the intent behind the proposals. I also got some useful feedback about mine, which mostly brought home that I have yet to express an actual research project!

I do know that I’m interested in studying web archives. I’ve initially been focused on appraisal: how we decide what goes into Web archives, and how computers can assist in those decisions, specifically with social media streams. But there is such a strong Human-Computer Interaction lab at UMD that it feels like it would be a wasted opportunity not to tap into it more in the design of these web archive systems. Similarly, the strengths of the Ethics and Design Lab seem like another important area to draw on. I’ve been focused on digital curation because of my background, but designing these archival systems so that they support the varied values of curation is important to me, and is something that I would like to build into my research.

For example I want my research to support a particular set of data curation values: open access, knowledge sharing, diversity, community development, and transparency, rather than law enforcement, surveillance, and militarization. So much of information technology is dual use, and I am interested in ways of influencing use over time, so-called opinionated software. I guess this is what motivated Richard Stallman’s work on the GNU General Public License, which has had such a profound effect on the software development community. The Creative Commons licenses and movement that folks like Aaron Swartz and Lawrence Lessig worked on also come to mind. These individuals had a very clearly articulated lineage of values, and I feel like it’s useful to tap into this as part of my research work, rather than letting it be completely technical and agnostic about use.

Cynthia Ng: Mozilla Festival Day 1: Notes from Disassembling the world’s worst data wrapper: PDFs

planet code4lib - Sat, 2015-11-07 15:05
It’s no secret that PDFs are a terrible way to distribute data, so here are some tips and tools to help extract data and information from PDFs. Tabula – for extracting data in tables. Online version at Also available as a version to download and run locally. If you have any issues, try the other detection mode. … Continue reading Mozilla Festival Day 1: Notes from Disassembling the world’s worst data wrapper: PDFs

