Planet Code4Lib

DPLA: IIIF: Access to the World’s Images – Ghent, Belgium 8 Dec 2015

Tue, 2015-11-10 17:55

The extended DPLA community is warmly invited to a one-day event in Ghent, Belgium, hosted by the International Image Interoperability Framework (IIIF) community and Ghent University Library, describing the power and potential of interoperable image delivery over the Web.

The day will showcase how institutions are leveraging IIIF to reduce the total cost and time to deploy image delivery solutions, while simultaneously improving the end user experience with a host of new, rich and dynamic features. It will also highlight how to participate in this growing movement to take advantage of the common framework. This event will be valuable for organizational decision makers, repository and collection managers, and software engineers; for cultural heritage or STEM (science / technology / engineering / medicine) institutions; and for anyone engaged with image-based resources on the Web.

The event will be held at the beautiful Ghent Opera House on Tuesday December 8th, 2015. There is no cost to attend, so please register now on EventBrite.

A detailed program and further logistical information, including related events that same week, will be made available.

There will be many opportunities for discussion, questions and networking throughout the day with new and existing partners including national libraries, top tier research institutions, commercial providers and major aggregators.

Please register now on EventBrite, and follow along for announcements and discussion regarding the event. Widespread dissemination of the event is strongly encouraged.

We hope to see you all there!

Singleton, Esther, d. 1930. The Belgian galleries : being a history of the Flemish school of painting, illuminated and demonstrated by critical descriptions of the great paintings in Bruges, Antwerp, Ghent, Brussels and other Belgian cities. Boston: St. Botolph Society, 1912. Provided by the Internet Archive and the Boston Public Library. Read more about the Internet Archive’s new IIIF services.

LibUX: A Quarter (25%) of the Web Uses WordPress

Tue, 2015-11-10 17:17

The folks at W3Techs Surveys report that WordPress has reached a milestone: it now powers 25.0% of the web. No question it is the most popular content management system, but just look at that spread.

What’s more, the WP REST API is slated to merge into core, making WordPress not just the most ubiquitous CMS but also the most powerful.

This post is part of a nascent library UX data collection we hope you can use as a reference to make smart decisions. If you’re interested in more of the same, follow @libuxdata on Twitter, or continue the conversation on our Facebook group. You might also think about signing up for the Web for Libraries weekly.


The post A Quarter (25%) of the Web Uses WordPress appeared first on LibUX.

LITA: I’m Jenny Levine, and This Is How I Work

Tue, 2015-11-10 17:08

(Format shamelessly stolen from LifeHacker)

Jenny Levine

Location: Chicago, IL
One word that best describes how you work: Collaboratively
Current mobile device: Samsung Galaxy S6 (I love customizing the heck out of my phone so that it works really well for me).
Current computer: At work, I have a standard HP desktop PC, but at home I use an Asus Zenbook.

What apps, software, or tools can’t you live without?
I’m constantly trying new tools and cobbling together new routines for optimal productivity, but right now my go-to apps are LastPass for password management across all of my devices, PushBullet for sharing links and files across devices, and Zite for helping me find a wide selection of links to read.

My workspace

What’s your workplace setup like?
At work, I love my adjustable standing desk. I wanted to paint my office walls with whiteboard paint, but that hasn’t worked out well for other ALA units so I’m looking forward to getting an 8’ x 4’ whiteboard. I like organizing my thoughts visually on big spaces. At home, I pretty much sit on the couch with my laptop.

What’s your best time-saving shortcut or life hack?
Work-life balance is really important. You can’t be your best at home or work if you’re not getting what you need from both. Life really is too short to spend your time doing things you don’t want to do (some clichés are clichés for a reason).

What’s your favorite to-do list manager?
I’m constantly tinkering with new tools to find the ideal workflow, but I haven’t hit on the perfect one yet. Earlier this year I read “Work Simply” by Carson Tate, which explains the four productivity styles she’s identified. She then makes recommendations about workflows and tools based on your productivity style. Unfortunately, I came out equally across all four styles, which I think explains why some of the standard routines like Getting Things Done and Inbox Zero don’t work for me. Traditionally I’ve been a Post-It Notes type of person, but I’ve been trying to save trees by moving that workflow into Trello. It’s working well for me tracking projects long-term, but I just can’t seem to escape the paper Post-It Note with my “must do today” list, and now I’m learning to accept that thanks to Tate’s book. I’m also experimenting with WorkLife to manage meeting agendas.

Ella, the world’s greatest dog

Besides your phone and computer, what gadget can’t you live without and why?
I couldn’t do without my wireless headphones, because I listen to a lot of podcasts while I’m walking the world’s best dog, Ella. I also don’t feel right if I’m not wearing my Fitbit. Gotta get my 11,000 steps in each day.

What everyday thing are you better at than everyone else? What’s your secret?
At a macro level, I’m good at identifying trends and connecting them to libraries. At a more granular level, I’m really good at making connections between things and people so that they’re able to do, learn, share, and implement more together. These are things I’m really looking forward to doing for LITA. I want to meet all of our members so that I can connect them, learn from them, and help them do great things together.

What do you listen to while you work?
Almost anything. I subscribe to Rdio in part because you can easily see every single new album they add each week. I tend to browse that list and just listen to whichever ones have interesting cover art or names. When I really need to concentrate on something, I tend to go for classical music. I’m intrigued by Coffitivity.

What are you currently reading?
I recently finished a series of mind-blowing science fiction, “Blindsight” by Peter Watts followed by “Seveneves” by Neal Stephenson. I loved them both (although I wish “Seveneves” had a proper ending), as well as the first two books of Cixin Liu’s Three-Body trilogy (I’m anxiously awaiting the translation of the third book). I also just finished “Being Mortal” by Atul Gawande, which I recommend everyone read.

After reading all of these, though, I’m ready to curl up in the corner now and wait for the end of humanity. I may need to read a Little Golden book next, but I just started “Ancillary Mercy” by Ann Leckie.

How do you recharge?
In general, walking the dog is my zen time, but I’m also prone to watching tv. I don’t have email notifications set up on my computers, phones, or tablet, and I’m very deliberate about how I use technology so that I feel a sense of control over it. I’ve also learned that at least once a year I have to go on vacation and completely unplug to restore some of that balance. I love technology, but I also love doing without it sometimes.

What’s the best advice you’ve ever received?
When I graduated from college, I didn’t want to go into the field I’d majored in (broadcast news), so I was trying to figure out what to do with my life instead. I had a little money from one of my grandmothers, so I decided to open a bookstore because I had loved working in one in high school. My Mom sat me down and told me about this place called “Border’s Bookstore” that was opening down the street and why I wouldn’t be opening my own bookstore. Instead, she suggested I go to library school. Best advice ever.

I’m passionate about….
Accessibility, collaboration, inclusivity, diversity, efficiency, transparency, communication. Everything can be improved, and we can build new things – how do we do that together? If we could build a 21st century organization from scratch, how would it be different? These are all areas I want to work on within LITA.

The future’s so bright…
I’m excited to be the new Executive Director of LITA, especially this week because it’s LITA Forum time (sing that to yourself in your best MC Hammer voice). I can’t believe it, but this will be my first ever LITA Forum, so in addition to being really happy I’m also kind of nervous. If you see me at Forum, please wave, say hi, or even better tell me what your vision is for LITA.

If you won’t be at Forum, I’d still love to hear from you. I went for the Director job because I believe that LITA has a bright future ahead and a lot of important work to do. We need to get going on changing the world, so share your thoughts and join in. There are a lot of places you can find LITA, but you can also contact me pretty much anywhere: email (jlevine at ala dot org), Facebook, Hangouts (shiftedlibrarian), Snapchat (shiftedlib), and Twitter for starters.

Open Knowledge Foundation: Announcement – Open Definition 2.1

Tue, 2015-11-10 16:23

Today Open Knowledge and the Open Definition Advisory Council are pleased to announce the release of version 2.1 of the Open Definition. The definition “sets out principles that define openness in relation to data and content” and continues to play a key role in supporting the growing open ecosystem.

The Open Definition was first published in 2005 by Open Knowledge and is maintained today by an expert Advisory Council. This new version is a refinement of version 2.0, which was the most significant revision in the Definition’s ten-year history.

This version is the result of over a year of discussion and consultation with the community, including input from experts involved in open data, open access, open culture, open education, open government, and open source. It continues to adhere to the core principles while strengthening and clarifying the Definition in three main areas.

What’s New

Version 2.1 incorporates the following changes:

  • Section 1.1 Open License or Status, formerly named Open License, has been changed to more clearly and explicitly include works that, while not released under a license per se, are still considered open, such as works in the public domain.
  • In Version 2.0, section 1.3 specified the requirement for both machine readability and open formats. In Version 2.1 these requirements are now separated into their own sections 1.3 Machine Readability and 1.4 Open Format.
  • The new 1.4 Open Format section has been strengthened: in order to be considered open, a work must both be in a format that places no restrictions, monetary or otherwise, on its use and be fully processable by at least one free/libre/open-source software tool. In version 2.0, satisfying either condition was sufficient.
  • An attribution addendum has been added to recognize the work that the definition is based on.

Version 2.1 also includes several other less significant changes to enhance clarity and better convey the requirements and acceptable conditions.

More Information

Authors

This post was written by Herb Lainchbury, Chair of the Open Definition Advisory Council and Rufus Pollock, President and Founder of Open Knowledge.

David Rosenthal: Follow-up to the Emulation Report

Tue, 2015-11-10 16:00
Enough has happened while my report on emulation was in the review process that, although I announced its release last week, I already have enough material for a follow-up post. Below the fold, the details, including a really important paper from the recent SOSP workshop.

First, a few links to reinforce points that I made in the report:

One important assumption that lies behind the use of emulation for preservation is that future hardware will be much more powerful than the hardware that originally ran the preserved digital artefact. Moore's Law used to make this a no-brainer for CPU performance and memory size. Although it has recently slowed, the long time scales implicit in preservation mean that these are still good bets. But the capabilities in which emulation needs more power are not limited to CPU and memory. They include the I/O resources needed for communication with the user. The report points out that this is no longer a good bet. Desktop and laptop sales are in free-fall and, as The Register reports, even tablet sales have been cratering over the last year. The hardware future users will use to interact with emulations will be a smartphone. It won't have a physical keyboard, and its display and pixels will be much smaller. Most current emulations are unusable on a smartphone.

The report starts with an image of a Mac emulator running on an Apple watch. Nick Lee started a trend. Hacking Jules has Nintendo 64 and PSP emulators running on his Android Wear. Not, of course, that these emulated games really recreate the experience of playing on a Nintendo 64 or a PSP. But, as with Nick Lee's Mac, they show that simply running an emulation is not that hard.

One thing that surprised me during the research for the report was that retro-gaming is a $200M/yr business. It just held a convention in Portland, complete with a keynote by Al Alcorn.

Some papers at iPRES2015 addressed issues that were raised in the report:
  • Functional Access to Forensic Disk Images in a Web Service by Kam Woods et al. describes using Freiburg's emulation-as-a-service on a collection of forensic disk images.
  • Characterization of CDROMs for Emulation-based Access by Klaus Rechert et al. is a paper I cited in the report, thanks to a pre-print from Klaus. It describes the DNB's efforts using Freiburg's EAAS to provide access to their collection of CD-ROM images. In particular it describes an automated workflow for extracting the necessary technical metadata.
  • Getting to the Bottom Line: 20 Digital Preservation Cost Questions. Cost is the single most important cause of the Half-Empty Archive. One concern the report raises is that, absent better ingest tools, the per-artefact cost of emulation is too high. Matt Schultz et al describe a resource to help institutions identify the full range of costs that might be associated with any particular digital preservation service.
  • Dragan Espenscheid's beautiful poster about the Theresa Duncan CD-ROMs is worth a look.
The Freiburg team have continued to make progress.
Based on the facts that cloud services depend heavily on virtualization, and that preserved system images generally work well, the report is cautiously enthusiastic about the fidelity with which emulators execute their target's instruction set. But it does flag several concerns in this area, such as an apparent regression in QEMU's ability to run Windows 95.

A paper at the recent SOSP by Nadav Amit et al entitled Virtual CPU Verification casts light on the causes and cures of fidelity failures in emulators. They observed that the problem of verifying virtualized or emulated CPUs is closely related to the problem of verifying a real CPU. Real CPU vendors sink huge resources into verifying their products, and this team from the Technion and Intel were able to base their research into X86 emulation on the tools that Intel uses to verify its CPU products.

Although QEMU running on an X86 tries hard to virtualize rather than emulate, it is capable of emulating and the team were able to force it into emulation mode. Using their tools, they were able to find and analyze 117 bugs in QEMU, and fix most of them. Their testing also triggered a bug in the VM BIOS:
But the VM BIOS can also introduce bugs of its own. In our research, as we addressed one of the disparities in the behavior of VCPUs and CPUs, we unintentionally triggered a bug in the VM BIOS that caused the 32-bit version of Windows 7 to display the so-called blue screen of death.

Their conclusion is worth quoting:
Hardware-assisted virtualization is popular, arguably allowing users to run multiple workloads robustly and securely while incurring low performance overheads. But the robustness and security are not to be taken for granted, as it is challenging to virtualize the CPU correctly, notably in the face of newly added features and use cases. CPU vendors invest a lot of effort—hundreds of person years or more—to develop validation tools, and they exclusively enjoy the benefit of having an accurate reference system. We therefore speculate that effective hypervisor validation could truly be made possible only with their help. We further contend that it is in their interest to provide such help, as the majority of server workloads already run on virtual hardware, and this trend is expected to continue. We hope that open source hypervisors will be validated on a regular basis by Intel Open Source Technology Center.

Having Intel validate the open source hypervisors, especially doing so by forcing them to emulate rather than virtualize, would be a big step forward. But note the focus on current uses of virtualization. To what extent the validation process would test the emulation of the hardware features of legacy CPUs important for preservation is uncertain. Though the fact that their verification caught a bug that was relevant only to Windows 7 is encouraging.

M. Ryan Hess: ProtonMail: A Survivor’s Tale

Tue, 2015-11-10 00:03

Beginning November 3rd, encrypted email service provider, ProtonMail, came under a DDOS attack by blackmailers. Here is my experience, as a supporter and subscriber, watching from the sidelines. It’s a survival story with many heroes that reads like a Mr. Robot script.

Why Encrypt Your Email?

ProtonMail is an encrypted email service that I just love. It overcomes the problems of email providers harvesting your personal data for resale, the pitfalls of those databases falling into criminal hands, and the just plain weird feeling that every word, attachment and contact is shared with who knows whom.

To make my point on why everyone should use encrypted email, like ProtonMail, consider this experience: I recently had to fill out an affidavit confirming my identity but did not have all the particulars with me, such as past addresses, etc. No problem, I just logged into my 12-year-old Gmail account and did some searching. In no time, I had all the personal info the affidavit required to prove my identity.

It’s not that I purposely saved all this information in there. It just accumulates over the years organically.

Imagine if that data fell into the wrong hands.

ProtonMail is a crowd-funded, free email service that comes out of the CERN laboratories in Switzerland and MIT. The engineers at these research facilities were inspired by Edward Snowden’s revelations about back doors into email servers and the general collection of data by governments, so they built ProtonMail.

The service is simple, elegant and super secure. The encryption happens through the use of a client-side password, so theoretically, nobody, not even ProtonMail, can decrypt your emails and read them.

ProtonMail Taken Down

The recent Distributed Denial of Service (DDOS) attack began on November 3rd, when a group held access to ProtonMail’s email service for ransom. This was a very sophisticated attack that flooded not only their servers with requests, but also their ISP’s. The result was that ProtonMail and several other sites, including e-commerce and banking sites, were unreachable. After failing to fight the attack off, the ISP and other firms put enormous pressure on ProtonMail to pay off the cyber gang. They did so, and the attack stopped…momentarily.

Less than half a day later, the attack re-commenced. This time it was even more sophisticated and destructive. And things got even weirder: the original blackmailers actually contacted ProtonMail to let them know they were not involved in the new attack. ProtonMail is pretty certain that the second attack was the work of a state entity.

You can read all the details on their blog post on the incident.

Over this past weekend, November 7-8th, ProtonMail launched a response to the ongoing attack, deploying new defensive technologies used by large Internet firms, funded through a GoFundMe campaign. As of this writing, nearly 1,500 individuals have donated $50,000 in just 3 days to help in this regard.

Those would be the first, rather large, set of heroes. Thanks to you guys!

Click here to add to the fund.

Social Networks Get the Word Out

The media was really late to this story. It was not until the end of the week that the first news reports came out about the blackmail, a story made sexier by the fact that the ransom was paid in bitcoins.

Most of the breaking news, however, was only available on ProtonMail’s Twitter feed and their Sub-Reddit.

It was on their Twitter page that they first disclosed the moment-by-moment details of their fight to restore access and their ultimate attempt to fund new defensive technologies. It was on Reddit that the controversy and pain were aired, such as reactions to their payment of the ransom and the frustration of everyday users at not being able to access their email.

People really gave them a lot of credit, however. And it was heartening that, despite some rather single-minded rants, most people rallied around ProtonMail.

Lessons Learned

One thing I was surprised about was some of the complaints from business people who were using ProtonMail as their exclusive business email. They were losing money during the attack, so they were often the most irate. But you have to wonder about someone using an emerging tool like ProtonMail for something as critical as company email. Obviously, new Internet services take time to mature, especially when they are not backed by seasoned VCs who are risk averse.

I personally had not made the switch to ProtonMail entirely. Part of this was because they don’t have an iPhone app yet, which is where I do about 50% of my emailing. But I was getting close.

So, yes, I had a few important emails get bounced back to the senders. And perhaps one or two have been lost permanently (I may never know). But it does go to show that, for the foreseeable future, ProtonMail is not a reliable sole-email solution. However, given the work they are doing in response to the latest attack, this event may be the turning point that makes them a truly stable email service.

Just this morning, they came under another attack, but unlike previous days over the past week, they were back online very quickly. Hopefully this means their new defenses are paying off.

Bottom Line

ProtonMail rocks. I really love it. The recent DDOS attack only confirms that the good team at CERN and MIT are dedicated to doing what it takes to keep this alive. I can think of other such services that have folded when they came under similar pressure. In fact, the user community around ProtonMail is as serious as ever, shelling out the money required to safeguard encrypted email just when it counted.

There will likely be further trouble ahead. The British government has suggested it might ban encrypted email services. And who knows how the US will respond long term. So, there could be more chop ahead. But for the time being, it seems that ProtonMail may have survived a very critical test of its resilience.

Stay tuned!

DuraSpace News: DuraSpace Services Webinar Recording Available

Tue, 2015-11-10 00:00

This month DuraSpace is presenting a two-part webinar series entitled “2015 Accomplishments and A Sneak Peek at What Lies Ahead" highlighting the recent developments in the suite of DuraSpace services: DSpaceDirect, ArchivesDirect, DuraCloud, and soon-to-be released, DuraCloud Vault. The first webinar -- presented yesterday by Carissa Smith, Cloud Services Manager -- focused on "The 'Direct' Services - DSpaceDirect and ArchivesDirect".

DuraSpace News: NOW AVAILABLE: DSpace 5.4–Bug Fixes, Memory Enhancements++

Tue, 2015-11-10 00:00

From Tim Donohue, on behalf of the DSpace 5.4 Release Team, and all the DSpace developers

Winchester, MA  DSpace 5.4 is now available, providing security fixes to the JSPUI along with significant bug fixes and memory usage enhancements for all DSpace 5.x users.

DuraSpace News: NOW AVAILABLE: VIVO 1.8.1–Improved Performance and New Visualizations

Tue, 2015-11-10 00:00

Winchester, MA  On November 10, 2015 VIVO 1.8.1 was released by the VIVO team.  This new release offers users vastly improved performance, new and better visualizations, as well as bug fixes.

Full release notes are available on the VIVO wiki:

LibUX: 028 – Crafting Websites with Design Triggers – Part One

Mon, 2015-11-09 23:27

A design trigger is a pattern meant to appeal to behaviors and cognitive biases observed in users. Big data and the user experience boom have provided a lot of information about how people actually use the web, which designs work, and–although creepy–how it is possible to cobble together an effective site designed to social-engineer users.

This is the first half of an hour-long talk, in which I introduce design triggers as a concept and their reason for being, touch on things like anchoring and how people actually look at websites, and cover other techniques to pimp your wares through design.

You can follow along if you like.

Thanks, and enjoy!

You can subscribe to LibUX on Stitcher, iTunes, or plug our feed right into your podcatcher of choice. Help us out and say something nice.

The post 028 – Crafting Websites with Design Triggers – Part One appeared first on LibUX.

Peter Sefton: Scratching an itch: my software for formatting song-sheets into, *gasp* PDF!

Mon, 2015-11-09 23:00

[update: 2015-11-11 minor edits]

Summary: I made a new open source program to format song sheets in chordpro format in a variety of ways. It’s a command line thing. Not everyone understands; when I talk about it to my friends in the band I sort of accidentally seem to have nearly joined, we have IM exchanges like:

Me:


I have it set up now with a cloud server watching the dropbox, so if you put in a text file with a .cho extension it will auto-generate a PDF for the song

Other band member:

The what with the what now?

So, I decided to tell you all here on the Interwebs where you will 100% get what I’m talking about.

The problem was this: I had several years’ worth of song-sheets downloaded from various places on the internets or typed out by hand, for my own and other people’s songs, in a variety of formats, including Word docs, RTF and PDF but mostly text files. Then I started playing music with other people again for the first time in years, and we’d be trading bits of paper and files and mailing song files to each other, and so on, always searching for a fourth-generation photocopy of something someone had in their folder. Anyway, I got to looking at ways to manage this and create consistently formatted song sheets.

Turns out there’s a handy format for marking up songs called chordpro. This involves putting chord names in square brackets, inline, in amongst the lyrics. Like [C] this. Or like [G#maj7] this, with a {title: } at the top, and a few other simple commands. Here’s one I prepared earlier.

{title: Universe}
{st: Peter Sefton}
{key: C}
{transpose: -3}
[C] This is a song about [E7] everything
[F] It's really div[C]erse
[Caug9]Got something for [E7] everyone
[Am] But only [G] has one [F] verse
[F] Get it? Uni [D] Verse
{c: Pre chorus}
[D7] Here comes the chorus:
{soc}
{c: Chorus}
[C] Uni [E7] verse Uni [F] verse
[F] Uni [Fm] verse Uni [C] verse
{eoc}
{sob}
{c: Bridge}
[C] That's [G] it
{eob}
{c: Coda}
[F] Sorry the song's [C] so terse
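To give a rough idea of how easy the format is to process, the brace directives and bracketed chords can be pulled apart with a couple of regular expressions. This is just a sketch, not the program described below; parse_chordpro is a hypothetical name:

```python
import re

def parse_chordpro(text):
    """Split a chordpro song into {directives}, inline [chords], and bare lyrics."""
    directives = dict(re.findall(r'\{(\w+):\s*([^}]*)\}', text))
    chords = re.findall(r'\[([^\]]+)\]', text)
    lyrics = re.sub(r'\{[^}]*\}|\[[^\]]+\]', '', text)
    return directives, chords, lyrics

d, chords, lyrics = parse_chordpro(
    "{title: Universe} {key: C} [C] This is a song about [E7] everything")
print(d['title'], chords)  # → Universe ['C', 'E7']
```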

There are lots of software packages for managing chordpro songs, printing them out, showing them on your church projector, organising them on your iPad, transposing them between keys and so on, but none of the software did quite what I wanted in the way I wanted. For example, most of the packages are designed to put chords above the lyrics, but I prefer leaving the chords inline, the way Rob Weule did it in the Ukulele Club Songbook and Richard G does it in his Ukulele songs; it’s more compact, for one thing, not to mention easier to copy and paste. Here’s an example of some free software online which is really well done, but not what I wanted at all:

A nice online chordpro converter at

I’ve been keeping my songs in text files in Dropbox for years and that suits me. I don’t want to have to suck all the files in to the maw of some slightly dodgy open source Java application on my laptop, or upload them somewhere, or install a web application, and it hurts me to say this after all the work I put into scholarly publishing in HTML but PDF is perfect for songs which are page-based, good for printing and good for displaying on tablets.

And I like to keep my hand in with coding and, well, I have a few hours on the train every day which I don’t always use for work stuff and, you can probably see where this is heading. I got to wondering what would happen if I ran a few songs through Pandoc, the magnificent omnivorous document conversion tool, to make PDF and HTML versions, and one thing led to another. It started innocently enough with a simple script to process chordpro declarations into markdown. But then I asked myself: how would this look as an epub ebook made up of multiple songs? And then how do I make a word doc with a table of contents and start each song on a new page? And how might I get the script to make the songs scale (pun intended) so they fill up the page, for maximum readability? Which led to experiments with LaTeX, the taste of which I still don’t have out of my mouth entirely, and a brief flirtation with the Open Document Format and the LibreOffice presentation software (we’ve dated before but it has never worked out long-term). I finally got friendly with an amazing bit of command line software called wkhtmltopdf, which can turn HTML web pages into PDF, including running any javascript they contain before doing so. This way I was able to write a script that automatically scaled up text to fit an A4 page.

I told myself “I’m not going to do chord grids”, you know, little images with fret-dots etc, cos me and my music buddies are all awesome players who know every chord ever, and if not can like totally work them out in our heads. But then I wondered if there was an existing open source library that did chords for multiple instruments that I could just, like, drop into my Python program so I could re-learn the banjo chords I’ve forgotten, and learn a bit more mandolin, and it turned out that there isn’t really, so I used a few train trips and a hot Saturday arvo to write a chord-grid-drawer. It turns this chord definition I got from the open source software at Uke Geeks (thanks Buz!):

{define: Aaug frets 2 1 1 4 fingers 2 1 1 4 add: string 1 fret 1 finger 1 add: string 4 fret 1 finger 1}

Into this:


Unlike most chord drawing software I found, which has built-in limitations, it will also cheerfully render a really silly chord like this one, which would require twelve strings and 33 playable frets, not to mention 11 fingers, like the dude in One Fish Two Fish Red Fish Blue Fish by Dr Seuss:


Look at his fingers!

One, two, three…

How many fingers

Do I see?

One, two, three, four,

Five, six, seven,

Eight, nine, ten.

He has eleven!


This is something new.

I wish I had

Eleven, too!

{define: F#stupid base-fret 22 frets 1 2 3 x 4 5 6 7 8 9 10 11 fingers 11 10 9 8 0 7 6 5 4 3 2 1}


So, I had the technology to render chords (even at grand-piano scale) but then it turned out I couldn’t find chord definition files for more instruments. There are lots of chord charts in image formats, of course, but no data that I can find, which led to working out how to compile this old C code and modify it a bit to produce chord-data files (my update to that code is not done enough to release).
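For the curious, here’s a minimal sketch of parsing those {define: ...} directives in Python (not my real parser — this one only handles the name, base-fret, frets and fingers fields, and skips the add: clauses entirely):

```python
import re

def parse_define(directive):
    """Parse a chordpro {define: ...} directive into a dict.
    'x' frets become None; fields this sketch doesn't understand
    (such as add:) are skipped."""
    body = directive.strip("{}").split(":", 1)[1].strip()
    tokens = body.split()
    chord = {"name": tokens[0], "base_fret": 1, "frets": [], "fingers": []}
    i = 1
    while i < len(tokens):
        if tokens[i] == "base-fret":
            chord["base_fret"] = int(tokens[i + 1])
            i += 2
        elif tokens[i] in ("frets", "fingers"):
            key = tokens[i]
            vals = []
            i += 1
            while i < len(tokens) and re.fullmatch(r"\d+|x", tokens[i]):
                vals.append(None if tokens[i] == "x" else int(tokens[i]))
                i += 1
            chord[key] = vals
        else:
            i += 1  # skip anything this sketch doesn't handle
    return chord

aaug = parse_define(
    "{define: Aaug frets 2 1 1 4 fingers 2 1 1 4 "
    "add: string 1 fret 1 finger 1}")
```

From a dict like that, drawing the grid is just lines and dots.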

Anyway, the above song looks like this when run through my software. Now, after a couple of months of part-time tinkering, I can type ./ -o --instrument Uke uni-verse.cho and this appears!

Uni-verse rendered for printing

Better yet, I have it set up now with a cloud server watching the Dropbox, so if you put in a text file with a .cho extension it will auto-generate a PDF for the song. That is, in our shared band folder in Dropbox, if anyone creates a new song file, a new PDF appears automagically about a second later. Drop me a line if you’d like to try it - all you have to do is share a Dropbox folder with an account of mine. This is one of my favourite deployment patterns for software, almost like a no-interface user-interface. I’ll write more about this soon.

Along the way I learned:

  • How to make books, by feeding in a list of files.
  • How to make a setlist book for a performance complete with additional performance notes, from a markdown file.
  • That even when we had a setlist book my bandmates typed up setlists in big writing to put on the floor, so I added a feature to write out the sets, one per page, at the start of the book.
  • That we really need the ability to do per-performance annotations, such as who goes first and whether we have a count-in or all-in approach to the song, so notes from the setlist such as go slow get put at the top of each song.
  • How to keep two page songs on facing pages in said books so you don’t have to turn pages.
  • How to generate chord-definitions for arbitrary instruments (still working on that bit).
  • How to transpose chord definitions from one instrument to another if they have the same relative tuning between strings - eg soprano uke chords into baritone uke, or open-G banjo tuning into the C tuning used by my baby banjo, the Goldtone Plucky.
  • How to make stand-alone one page versions of songs …
  • … automatically, whenever I drop a new one in my Dropbox, or change one.
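The transposition trick in that list boils down to shifting each chord’s root name by the tuning offset, since the shapes themselves don’t change. A minimal sketch (flat spellings like Bb aren’t handled):

```python
import re

NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def transpose_name(chord, semitones):
    """Shift a chord's root by `semitones`, keeping the quality suffix.
    For two instruments whose strings have the same relative tuning, the
    same shape simply *is* a different chord: soprano uke (GCEA) shapes
    come out five semitones lower on a baritone (DGBE), so the soprano
    'C' shape sounds as 'G'."""
    m = re.match(r"([A-G]#?)(.*)", chord)
    root, quality = m.group(1), m.group(2)
    new_root = NOTES[(NOTES.index(root) + semitones) % 12]
    return new_root + quality

transpose_name("C", -5)  # soprano shape played on a baritone -> 'G'
```

Run over a whole chord-definition file, this turns one instrument’s chord library into another’s for free.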

Now, I realise that a command line thing that’s a hassle to install and will almost certainly break the moment someone other than me tries to run it has limited appeal, but I’m releasing it to the world anyway. Even if the software is not to your taste, I think some of the things I’ve done will be useful to others. For example I:

  • Liberated Buz Carter’s chord definitions for soprano uke, which were encoded in a JavaScript program, into a stand-alone file, observing the license conditions of the software of course.
  • Likewise liberated the guitar chords from the venerable chordii software.
  • Generated chord definitions for 5 string banjo G-tuning and mandolin; these may not be the best voicings or fingerings as they were auto-generated - let me know if you can help improve them.
The future

Here are some other things I am Never Ever Going To Do With This Software. Ever. Never. Absolutely promise.

  • It will never have a Graphical User Interface.

  • It will never be a web application.

  • It will never be a dot-com startup.

  • It will never be an iOS app.


If anyone wants to help we could:

  • Build a decent library of chord shapes defined in chordpro format

  • Improve the look of the generated books a lot, if anyone knows some modern HTML and CSS.

  • Make it better.

District Dispatch: Market solves infringement problem? Yeah, right.

Mon, 2015-11-09 22:45

The NASDAQ stock market, photo via Wikimedia.

“Let the market solve the problem” is a familiar refrain, especially for those who want smaller government and fewer regulations. Frequently the market does solve the problem when the government is unable or unsuccessful. The government’s failed attempt to combat piracy—SOPA—is an example. The public roundly opposed it as overkill and a threat to security, privacy, and free expression, leading to the now famous and somewhat embarrassing internet blackout. Rights holders can resort to the “notice and takedown” provision of the Digital Millennium Copyright Act (DMCA), but liken its limitations to a game of “whack-a-mole”—take down a piracy website only to have it pop up again under a different URL.

We need what Adam Smith called the “invisible hand” of the market. Be patient, the market will decide how to fix this infringement problem. Allow innovation and experimentation, let new technologies develop, and welcome new players in the market to emerge to take on these battles.

The market has emerged.

Why fight piracy, when you can monetize it? Since piracy cannot be completely eliminated, why not capitalize on it? Rights holders can get a piece of the pie. For instance, they can hire companies to search for their content on YouTube. Instead of suing and taking down the content, a rights holder can monetize the content by leaving it on YouTube and collecting part of the advertising revenue.

Bullying is another money-maker. Copyright trolls can sweep the net, find alleged infringers and scare them just enough so they settle out of court. Cease and desist and pay a fine or else. Cha-ching! A quick $500! Now we’re talking!

Image Technologies Laboratories (ITL) is an emerging global company that uses the latest technology to find images on the web, finally addressing the unmet need of photographers who are ripped off every day. ITL is excited to move forward, announcing in a press release that not only does it have a backlog of business, but “the market has never been more saturated with the mishandling of digital content and the theft of copyrighted property,” suggesting a sustainable business.

Even better yet: allow people to invest in infringement.

Enter RightsCorp Inc., a publicly traded company. Their innovative business model—a market solution—uses the best digital crawler (patent pending) to sweep peer-to-peer sites and find alleged infringers. Using the trolling method, they collect legal damages by settling out of court. Alleged infringers cough up revenue that is then split evenly between RightsCorp and the rights holders. Next, RightsCorp took the business plan to a new level: let’s share the wealth (while recouping some start-up costs) and sell company stock. Wise investors can bet on the permanency of piracy.

The market – the ultimate problem solver – NOT!

The post Market solves infringement problem? Yeah, right. appeared first on District Dispatch.

DPLA: DPLA Announces Knight Foundation Grant to Research Potential Integration of Newspaper Content

Mon, 2015-11-09 18:50

The Digital Public Library of America has been awarded $150,000 from the John S. and James L. Knight Foundation to research the potential integration of newspaper content into the DPLA platform.

Over the course of the next year, DPLA will investigate the current state of newspaper digitization in the US. Thanks in large part to the National Endowment for the Humanities and the Library of Congress’s joint National Digital Newspaper Program (NDNP) showcased online as Chronicling America, many states in the US have digitized their historic newspapers and made them available online. A number of states, however, have made newspapers available outside of or in addition to this important program, and DPLA plans to investigate what resources it would take to potentially provide seamless discovery of the newspapers of all states and US territories, including the over 10 million pages already currently available in Chronicling America.

“We are grateful to the Knight Foundation for providing funding to DPLA that enables us to devote time and resources to investigate the potential integration of newspapers into the DPLA,” said Emily Gore, DPLA Director of Content. “We look forward to working with our current hubs, NDNP participants and other significant newspaper projects over the next year.”

Other national digital libraries including Trove in Australia and Europeana have undertaken efforts to make full-text newspaper discovery a priority. Europeana recently launched Europeana Newspapers by aggregating 18 million historic newspaper pages. The intent of the DPLA staff is to engage the state newspaper projects, as well as Trove and Europeana Newspapers, over the next year as we consider the viability of a US-based newspaper aggregation. DPLA will also engage with the International Image Interoperability Framework (IIIF) community to discuss how IIIF may play a role in centralized newspaper discovery.

At the conclusion of the yearlong planning process, DPLA will hold a summit to report out on our findings and to discuss next steps with the cultural heritage newspaper community.

Image credit: Detail from “Students reading newspapers together,” 1961. University of North Texas Libraries Special Collections via The Portal to Texas History.

Evergreen ILS: Welcome Evergreen’s newest core committer, Kathy Lussier!

Mon, 2015-11-09 18:24

I am very pleased to announce that Kathy Lussier, project coordinator for the Massachusetts Library Network Cooperative, is Evergreen’s newest core committer!

Core committers are the folks entrusted with the responsibility of pushing new code to Evergreen’s main Git repository. Consequently, they serve as one of the final lines of defense against bugs slipping in. Kathy is eminently prepared for this role: since 2012, she has tested and signed off on over 250 patches written by others. During the same time period, she authored 100 code and documentation patches, with an especial focus on TPAC.

Kathy has also been very active in a wide variety of work to make coding for Evergreen happen more smoothly. Some examples include analyzing requirements and writing specifications for various new features; helping to organize Evergreen Hack-A-Ways; helping to expand our use of automated tests; and coordinating Evergreen’s participation in the GNOME Outreach Program for Women (now Outreachy).

We are fortunate and honored that Kathy Lussier has been a part of Evergreen for years, and I look forward to what she will accomplish in her latest role as a core committer.

Cynthia Ng: Mozilla Festival Day 2: CopyBetter: Notes from Copyright Reform in the EU

Mon, 2015-11-09 16:46
Trying to explain the bureaucratic mess. Three main institutions: the European Commission (Executive, divide who to ), the Parliament (the only elected group in the EU), and the Council (representation of all member states). The three institutions have to hammer out a solution or compromise, then the parliament and council vote, then the commission implements it. Legislative Acts Directive – minimum standard for all … Continue reading Mozilla Festival Day 2: CopyBetter: Notes from Copyright Reform in the EU

Thom Hickey: More about justlinks

Mon, 2015-11-09 15:16

We had an earlier post about the 'justlinks' view of VIAF clusters, but I thought it would be worthwhile to explore how that can combine with other VIAF functionality.

First a reminder of how the justlinks view works. While the default view of clusters in Web browsers is the HTML interface, VIAF clusters can be displayed in several ways, including raw XML, RDF XML, MARC-21 and justlinks JSON. Here’s a request for justlinks.json:

https://viaf.org/viaf/36978042/justlinks.json

which returns:

{
  "viafID": "36978042",
  "B2Q": ["0000279733"],
  "BAV": ["ADV11117013"],
  "BNE": ["XX904401"],
  "BNF": [""],
  "DNB": [""],
  "ISNI": ["000000010888091X"],
  "LAC": ["0064G7865"],
  "LC": ["n90602202"],
  "LNB": ["LNC10-000054199"],
  "N6I": ["vtls000101241"],
  "NKC": ["js20080511012"],
  "NLA": ["000035338539"],
  "NLI": ["000501536"],
  "NLP": ["a11737736"],
  "NSK": ["000051380"],
  "NTA": ["073902861"],
  "NUKAT": ["vtls000205390"],
  "PTBNP": ["70922"],
  "SELIBR": ["256753"],
  "SUDOC": ["031580661"],
  "WKP": ["Q6678817"],
  "XA": ["2219"],
  "ORCID": [""],
  "Wikipedia": [""]
}

Ralph LeVan came up with this and we think it is pretty neat!  But wait, it gets even better!

Each of the IDs in this record that is a 'source record' ID to VIAF (in this case everything except the ORCID ID and the en.wikipedia URI) can be used to retrieve the cluster. Here’s how to pull justlinks.json using the LC ID:

http://viaf.org/viaf/sourceID/LC|n90602202/justlinks.json

HTTPS works too:

https://viaf.org/viaf/sourceID/NSK|000051380/justlinks.json

All the different views of the clusters can be requested either through the explicit URIs shown here or through HTTP headers, and they in turn can be combined with sourceID redirection.
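As a quick illustration of scripting against this, here is a sketch that turns a justlinks record into sourceID-style justlinks URLs (the record is abridged, and the https URL pattern is an assumption based on the examples above — not every source necessarily supports sourceID lookup):

```python
import json

# Abridged justlinks record, from the example above
JUSTLINKS = ('{"viafID": "36978042", "LC": ["n90602202"], '
             '"NSK": ["000051380"], "BNF": [""]}')

def source_urls(justlinks_json):
    """Build sourceID-style justlinks URLs for each source in a cluster,
    following the viaf.org/viaf/sourceID/<SOURCE>|<ID>/justlinks.json
    pattern. The viafID itself and blank IDs are skipped."""
    cluster = json.loads(justlinks_json)
    urls = {}
    for source, ids in cluster.items():
        if source == "viafID" or not isinstance(ids, list):
            continue
        for sid in ids:
            if sid:  # some sources carry empty IDs; skip them
                urls.setdefault(source, []).append(
                    "https://viaf.org/viaf/sourceID/%s|%s/justlinks.json"
                    % (source, sid))
    return urls

urls = source_urls(JUSTLINKS)
```

Fetching any one of those URLs (e.g. with urllib) should redirect back to the same cluster’s justlinks view.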


OCLC Dev Network: Change to Terminology Services

Mon, 2015-11-09 15:00

OCLC Research will be ending support for the Terminology Services prototype on 20 November 2015.

Open Knowledge Foundation: Join the School of Data team: Technical Trainer wanted

Mon, 2015-11-09 14:09

The mission of Open Knowledge International is to open up all essential public interest information and see it utilized to create insight that drives change. To this end we work to create a global movement for open knowledge, supporting a network of leaders and local groups around the world; we facilitate coordination and knowledge sharing within the movement; we build collaboration with other change-making organisations both within our space and outside; and, finally, we prototype and provide a home for pioneering products.

A decade after its foundation, Open Knowledge International is ready for its next phase of development. We started as an organisation that led the quest for the opening up of existing data sets – and in today’s world most of the big data portals run on CKAN, an open source software product developed first by us.

Today, it is not only about opening up of data; it is making sure that this data is usable, useful and – most importantly – used, to improve people’s lives. Our current projects (School of Data, OpenSpending, OpenTrials, and many more) all aim towards giving people access to data, the knowledge to understand it, and the power to use it in our everyday lives.

The School of Data is growing in size and scope, and to support this project – alongside our partners – we are looking for an enthusiastic Technical Trainer (flexible location, part time).

School of Data is a network of data literacy practitioners, both organisations and individuals, implementing training and other data literacy activities in their respective countries and regions. Members of the School of Data work to empower civil society organizations (CSOs), journalists, governments and citizens with the skills they need to use data effectively in their efforts to create better, more equitable and more sustainable societies. Over the past four years, School of Data has succeeded in developing and sustaining a thriving and active network of data literacy practitioners in partnership with our implementing partners across Europe, Latin America, Asia and Africa.

Our local implementing partners are Social TIC, Code for Africa, Metamorphosis, and several Open Knowledge chapters around the world. Together, we have produced dozens of lessons and hands-on tutorials on how to work with data published online, benefitting thousands of people around the world. Over 4500 people have attended our tailored training events, and our network has mentored dozens of organisations to become tech savvy and data driven. Our methodologies and approach for delivering hands-on data training and data literacy skills – such as the data expedition – have now been replicated in various formats by organisations around the world.

One of our flagship initiatives, the School of Data Fellowship Programme, was first piloted in 2013 and has now successfully supported 26 fellows in 25 countries to provide long-term data support to CSOs in their communities. School of Data coordination team members are also consistently invited to give support locally to fellows in their projects and organisations that want to become more data-savvy.

In order to give fellows a solid point of reference in terms of content development and training resources, and also to have a point person to give capacity building support for our members and partners around the world, School of Data is now hiring an outstanding trainer/consultant who’s familiar with all the steps of the Data Pipeline and School of Data’s innovative training methodology to be the all-things-content-and-training for the School of Data network.


The hired professional will have three main objectives:

  • Technical Trainer & Data Wrangler: represent School of Data in training activities around the world, either supporting local members through our Training Dispatch or delivering the training themselves;
  • Data Pipeline & Training Consultant: give support for members and fellows regarding training (planning, agenda, content) and curriculum development using School of Data’s Data Pipeline;
  • Curriculum development: work closely with the Programme Manager & Coordination team to steer School of Data’s curriculum development, updating and refreshing our resources as novel techniques and tools arise.
Terms of Reference
  • Attend regular (weekly) planning calls with School of Data Coordination Team;
  • Work with current and future School of Data funders and partners in data-literacy related activities in an assortment of areas: Extractive Industries, Natural Disaster, Health, Transportation, Elections, etc;
  • Be available to organise and run in-person data-literacy training events around the world, sometimes on short notice (agenda, content planning, identifying data sources, etc);
  • Provide reports of training events and support given to members and partners of School of Data Network;
  • Work closely with all School of Data Fellows around the world to aid them in their content development and training events planning & delivery;
  • Write for the School of Data blog about curriculum and training events;
  • Take ownership of the development of curriculum for School of Data and support training events of the School of Data network;
  • Work with Fellows and other School of Data Members to design and develop their skillshare curriculum;
  • Coordinate support for the Fellows when they do their trainings;
  • Mentor Fellows including monthly point person calls, providing feedback on blog posts and curriculum & general troubleshooting;
  • The position reports to School of Data’s Programme Manager and will work closely with other members of the project delivery team;
  • This part-time role is paid by the hour. You will be compensated with a market salary, in line with the parameters of a non-profit-organisation;
  • We offer employment contracts for residents of the UK with valid permits, and services contracts to overseas residents
Deliverables
  • A lightweight monthly report of performed activities with Fellows and members of the network;
  • A final narrative report at the end of the first period (6 months) summarising performed activities;
  • Map the current School of Data curriculum to diagnose potential areas of improvement and to update;
  • Plan and suggest a curriculum development & training delivery toolkit for Fellows and members of the network
Requirements
  • Be self-motivated and autonomous;
  • Fluency in written and spoken English (Spanish & French are a plus);
  • Reliable internet connection;
  • Outstanding presentation and communication skills;
  • Proven experience running and planning training events;
  • Proven experience developing curriculum around data-related topics;
  • Experience working remotely with workmates in multiple timezones is a plus;
  • Experience in project management;
  • Major in Journalism, Computer Science, or related field is a plus

We strive for diversity in our team and encourage applicants from the Global South and from minorities.

Duration
Six months to one year: from November 2015 (as soon as possible) to April 2016, with the possibility to extend until October 2016 and beyond, at 10-12 days per month (8 hours/day).

Application Process

Interested? Then send us a motivational letter and a one page CV via

Please indicate your current country of residence, as well as your salary expectations (in GBP) and your earliest availability.

Early application is encouraged, as we are looking to fill the positions as soon as possible. These vacancies will close when we find a suitable candidate.

Interviews will be conducted on a rolling basis and may be requested on short notice.

If you have any questions, please direct them to jobs [at]