Feed aggregator

Riley Childs: TTS Video

planet code4lib - Fri, 2014-10-31 15:59

(Video is on its way; there is an issue with the camera on my laptop.)

Hello, I am Riley Childs, a 17-year-old student at Charlotte United Christian Academy. I am deeply involved there and am in charge of (and support) our network, *nix servers, viz servers, library stuff and of course end-user computers. One of my favorite things to do is work in the library and administer IT awesomeness. I also work in the theater at CPCC as an electrician. Another thing that I love to do is participate in a community called code4lib, where I assist others and post about library technology. I also post to the Koha mailing list, where I help out others who have issues with Koha. Overall I love technology, and I believe in the freedom of information; that is why I love librarians, because they are all about the distribution of information. In addition to all this indoor stuff I also enjoy a good day hike and like to go backpacking every once in a while.
Once again I am very sorry that this isn’t a video; I will try and post one soon (I kinda jumped the gun on submitting my app!).
Thanks
//Riley

The post TTS Video appeared first on Riley's blog at https://rileychilds.net.

OCLC Dev Network: Learn More About Software Development Practices at November Webinars

planet code4lib - Fri, 2014-10-31 15:15

We're excited to announce two new webinars based on our recent popular blog series covering some of our favorite software development practices. Join Karen Coombs as she walks you through a collection of tools designed to close communication gaps throughout the development process. Registration for both 1-hour webinars is free and now open.

David Rosenthal: This is what an emulator should look like

planet code4lib - Fri, 2014-10-31 15:00
Via hackaday, [Jörg]'s magnificently restored PDP10 console, connected via a lot of wiring to a BeagleBone running the SIMH PDP10 emulator. He did the same for a PDP11. Two computers that gave me hours of harmless fun back in the day!

Kids today have no idea what a computer should look like. But even they can run [Jörg]'s Java virtual PDP10 console!

Islandora: Islandora 7.x-1.4 Release Announcement

planet code4lib - Fri, 2014-10-31 13:42

I am extremely pleased to announce the release of Islandora 7.x-1.4!

This is our second community release, and I couldn't be more happy with how much we've grown and progressed as a community. This software has continued to improve because of you!

We have an absolutely amazing team to thank for this:

Adam Vessey
Alan Stanley
Dan Aiken
Donald Moses
Ernie Gillis
Gabriela Mircea
Jordan Dukart
Kelli Babcock
Kim Pham
Kirsta Stapelfeldt
Lingling Jiang
Mark Jordan
Melissa Anez
Nigel Banks
Paul Pound
Robin Dean
Sam Fritz
Sara Allain
Will Panting
 

Now for the release info!

Release notes and download links are here along with updated documentation, and you can grab an updated VM here (sandbox.islandora.ca will be updated soon).

I'd like to highlight a few things. This release includes 48 bug fixes and 23 documentation improvements since the last release. Along with those improvements, we have two new modules: Islandora Videojs (an Islandora viewer module using Video.js) and Islandora Solr Views (which exposes Islandora Solr search results as a Drupal view).

Our next release will be out in April. If you would like to be a part of the release team (you'll get an awesome t-shirt!!!), keep an eye out on the list for a call for 7.x-1.5 volunteers. We'll need folks as component managers, testers, and documenters.

That's all I have for now.

cheers!

-nruest

Library of Congress: The Signal: An Online Event & Experimental Born Digital Collecting Project: #FolklifeHalloween2014

planet code4lib - Fri, 2014-10-31 12:14

If you haven’t heard, as the title of the press release explains, the Library of Congress Seeks Halloween Photos For American Folklife Center Collection. As of this morning, there are 288 photos shared on Flickr with the #folklifehalloween2014 tag. If you browse through the results, you can see a range of ways folks are experiencing, seeing, and documenting Halloween and Dia de los Muertos. Everyone has until November 5th to participate. So send this, or some of the links in this post, along to a few other people to spread the word.

Svayambhunath Buddha O’Lantern, Shared by user birellsalsh on Flickr

Because of the nature of this event, you can follow along in real time and see how folks are responding to this in the photostream. See the American Folklife Center’s blog posts on this for a more in-depth explanation and some additional context for this project, and a set of step-by-step directions about how people can participate. As this is still a live and active event, I wanted to make sure we had a post up about it today for people to share these links with others.

Consider emailing a link to this to any shutterbug friends and colleagues you have. In particular, there is an explicit interest in photos that document the diverse range of communities’ experiences of the holiday. So if you are part of an often underrepresented community it would be great to see that point of view in the photo stream. With that noted, I also wanted to take this opportunity to highlight some of the things about this event that I think are relevant to the digital collecting and preservation focus of The Signal.

Rapid Response Collecting & a Participatory Online Event

Aside from the fun of this project (I mean, it’s people’s Halloween photos!) I am interested to see how it plays out as a potential mode of contemporary collecting. I think there is a potential for this kind of public event focused on documenting our contemporary world to fit in with ideas like “rapid response collecting” that the Victoria and Albert Museum has been forwarding, as well as notions of shared historical authority and conceptions of public participation in collection development.

We can’t know how this will end up playing out over the next few days of the event. However, I can already see how something like this could serve cultural institutions as a means to work with communities to document, interpret and share our perspectives on themes and issues that cultural heritage organizations document and collect in.

Oh and just a note of thanks to Adriel Luis, who shared a bit of wisdom and lessons learned from his work at the Asian Pacific American Center on the Day in the Life of Asian Pacific American event.

So, consider helping to spread the word and sharing some photos!

LibUX: Mobile Users are Demanding

planet code4lib - Fri, 2014-10-31 10:24

As library (public and academic) and higher ed websites approach their mobile moment, it is more crucial than ever that new sites, services, redesigns, whatever are optimized for performance. I would even go so far as to say that speed is more important than a responsive layout, but it’s obviously better to improve the former by optimizing the latter all in one go.

There are caveats: it may take more time upfront to develop a performant mobile-first responsive website. This is an important distinction. As I mentioned in this ACRL TechConnect article, not all responsive websites are created equal.

38% of smartphone users have screamed at, cursed at, or thrown their phones when pages take too long to load.

Anticipate the user trying to check library hours from the road in peak-time rush hour traffic on an iPhone 4S over 3G. Latency alone (the time it takes just to communicate with the server) will take 2 seconds.

Your website has just 2 seconds to load at your patron’s point of need before a certain percentage will give up, which may literally affect your foot traffic. Rather than chance the library being closed, your potential patron may change plans. After 10 seconds, 30% will never return to the site.

This data is from Radware’s 2014 State of the Union: Ecommerce Page Speed & Performance

The post Mobile Users are Demanding appeared first on LibUX.

Casey Bisson: Unit test WordPress plugins like a ninja (in progress)

planet code4lib - Fri, 2014-10-31 03:53

cc-by Zach Dischner

Unit testing a plugin can be easy, but if the plugin needs dashboard configuration or has dependencies on other plugins, it can quickly go off the tracks. And if you haven’t set up Travis integration, you’re missing out.

Activate Travis CI

To start with, go sign in to Travis now and activate your repos for testing. If you’re not already using Github to host the plugin, please start there.

Set the configuration

If your plugin needs options to be set that are typically set in the WP dashboard, do so in tests/bootstrap.php.

In bCMS, I’m doing a simple update_option( 'bcms_searchsmart', '1' ) just before loading the plugin code. For that plugin, that option is checked when loading the components. That’s not ideal, but it works for this plugin (until I refactor the plugin to solve the shortcomings this exposes).
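For anyone who hasn't seen one, here is a minimal sketch of what that kind of bootstrap can look like, assuming the standard WordPress test scaffold (the WP_TESTS_DIR test library and its tests_add_filter() helper). The option name comes from the paragraph above; the main plugin file name bcms.php is just a stand-in.

<?php
// tests/bootstrap.php -- a minimal sketch, assuming the standard WordPress test scaffold.
// The plugin file name (bcms.php) is a placeholder; adjust it for your own plugin.

$_tests_dir = getenv( 'WP_TESTS_DIR' );
if ( ! $_tests_dir ) {
    $_tests_dir = '/tmp/wordpress-tests-lib';
}

// Provides tests_add_filter() before WordPress itself loads.
require_once $_tests_dir . '/includes/functions.php';

function _manually_load_plugin() {
    // Set any options the plugin reads at load time, standing in for dashboard configuration.
    update_option( 'bcms_searchsmart', '1' );

    require dirname( __DIR__ ) . '/bcms.php';
}
tests_add_filter( 'muplugins_loaded', '_manually_load_plugin' );

// Boots WordPress and the PHPUnit test case classes.
require $_tests_dir . '/includes/bootstrap.php';

Because the option is written at the muplugins_loaded hook, it is in place before the plugin's own load-time checks run.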

Download and activate dependencies

Some plugins depend on others. An example is bStat, which depends on libraries from GO-UI. The dependency in that case is appropriate, but it can add frustration to unit testing. To solve that problem, I’ve made some changes to download the plugins in the Travis environment and activate them in all environments.

It starts with the tests/dependencies-array.php, where I’ve specified the plugins and their repo paths. That file is used both by bin/install-dependencies.php, which downloads the plugin in Travis, and tests/bootstrap.php, where the plugins are included at runtime.
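I haven't reproduced the real files here, but the shape of the arrangement is roughly this; the array layout and the go-ui entry are hypothetical, while the file names are the ones mentioned above.

<?php
// tests/dependencies-array.php -- hypothetical shape, not the actual file from bCMS/bStat.
// One list maps each dependency's main plugin file to the GitHub repo it can be fetched from,
// so bin/install-dependencies.php can download it in Travis and tests/bootstrap.php can load it.
return array(
    'go-ui/go-ui.php' => 'https://github.com/GigaOM/go-ui',
);

In tests/bootstrap.php the same list can then be walked just before the plugin under test is loaded, along these lines:

foreach ( require __DIR__ . '/dependencies-array.php' as $plugin_file => $repo ) {
    // Include each dependency before the plugin under test; Travis has already downloaded it.
    require_once WP_PLUGIN_DIR . '/' . $plugin_file;
}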

Of course, if those additional plugins need configuration settings, then do that in the tests/bootstrap.php as in the section above.

DuraSpace News: The Islandora Foundation Releases Islandora 7.1.4

planet code4lib - Fri, 2014-10-31 00:00

From Melissa Anez, Islandora Foundation

Charlottetown, Prince Edward Island, CA – The Islandora Foundation is extremely pleased to announce the release of Islandora 7.x-1.4.

District Dispatch: After privacy glitch, the ball is now in our court

planet code4lib - Thu, 2014-10-30 21:28

Photo by John Leben Art Prints via Deviant Art

Last week, Adobe announced that with its software update (Digital Editions 4.0.1), the collection and transmission of user data has been secured. Adobe was true to its word that a fix correcting this apparent oversight would be made by the week of October 20.

For those who might not know, a recap: Adobe Digital Editions is widely used software in the e-book trade for both library and commercial ebook transactions to authenticate legitimate library users, apply DRM to encrypt e-book files, and in general facilitate the e-book circulation process, such as deleting an e-book from a device after the loan period has expired. Earlier in October, librarians and others discovered that the new Adobe Digital Editions software (4.0) had a tremendous security and privacy glitch. A large amount of unencrypted data reflecting e-book loan and purchase transactions was being collected and transmitted to Adobe servers.

The collection of data “in the clear” is a hacker’s dream because it can be so easily obtained. Information about books, including publisher, title and other metadata, was also unencrypted, raising alarms about reader privacy and the collection of personal information. Some incorrectly reported that Adobe was scanning hard drives and spying on readers. After various librarians conducted a few tests, they confirmed that Adobe was not scanning or spying, but nonetheless this was clearly a security nightmare and an alleged assault on reader privacy.

ALA contacted Adobe about the breach and asked to talk to Adobe about what was going on. Conversations did take place and Adobe responded to several questions raised by librarians.

Now that the immediate problem of unencrypted data is fixed, let’s step back and consider what we have learned and ponder what to do next.

We learned that few librarians have the knowledge base to explain how these software technologies work. To a great extent, users (librarians and otherwise) do not know what is going on behind the curtain (without successfully hacking various layers of encryption).

We can no longer ensure user privacy by simply destroying circulation records, or refusing to reveal information without a court order. This just isn’t enough in the digital environment. Data collection is a permanent part of the digital landscape. It is lucrative and highly valued by some, and is often necessary to make things work.

We learned that most librarians continue to view privacy as a fundamental value of the profession, and something we should continue to support through awareness and action.

We should hold vendors and other suppliers to account—any data collected to enable services should be encrypted, retained for only as long as necessary with no personal information collected, shared or sold.

What’s next? We have excellent policy statements regarding privacy, but we do not have a handy dandy guide to help us and our library communities understand how digital technologies work and how they can interfere with reader privacy. We need a handy dandy guide with diagrams and narrative that is not too heavy on technicalese (a new word, modeled after “legalese”).

We have to inform our users that whenever they key in their name for a service or product, all privacy bets are off. We need to understand how data brokers amass boat loads of data and what they do with it. We need to know how to opt out of data collection when possible, or never opt in in the first place. We need to better inform our library communities.

A good suggestion is to collaborate with vendors and other suppliers and not just talk to one another at the license negotiating table. By working together we can renew our commitment to privacy. The vendors have extended an invitation by asking to work with us on best practices for privacy. Let’s RSVP “yes.”

The post After privacy glitch, the ball is now in our court appeared first on District Dispatch.

District Dispatch: Webinar archive available: “$2.2 Billion reasons libraries should care about WIOA”

planet code4lib - Thu, 2014-10-30 21:04

Photo by the Knight Foundation

On Monday, more than one thousand people participated in the American Library Association’s (ALA) webinar “$2.2 Billion Reasons to Pay Attention to WIOA,” an interactive webinar that focused on ways that public libraries can receive funding for employment skills training and job search assistance from the recently-passed Workforce Innovation and Opportunity Act (WIOA).

During the webinar, leaders from the Department of Education and the Department of Labor explored the new federal law. Watch the webinar.

An archive of the webinar is available now:

The Workforce Innovation and Opportunity Act allows public libraries to be considered additional One-Stop partners, prohibits federal supervision or control over selection of library resources and authorizes adult education and literacy activities provided by public libraries as an allowable statewide employment and training activity. Additionally, the law defines digital literacy skills as a workforce preparation activity.

View slides from the webinar presentation:

Webinar speakers included:

  • Susan Hildreth, director, Institute of Museum and Library Services
  • Kimberly Vitelli, chief of Division of National Programs, Employment and Training Administration, U.S. Department of Labor
  • Heidi Silver-Pacuilla, team leader, Applied Innovation and Improvement, Office of Career, Technical, and Adult Education, U.S. Department of Education

We are in the process of developing a WIOA Frequently Asked Questions guide for library leaders—we’ll publish the report on the District Dispatch shortly. Subscribe to the District Dispatch, ALA’s policy blog, to be alerted when additional WIOA information becomes available.

The post Webinar archive available: “$2.2 Billion reasons libraries should care about WIOA” appeared first on District Dispatch.

District Dispatch: Fun with Dick and Jane, and Stephen Colbert

planet code4lib - Thu, 2014-10-30 17:28

Photo by realworldracingphotog via Flickr

The Library Copyright Alliance (LCA) issued this letter (pdf) in response to Stephen Colbert’s suggestion that librarians just “make up” data. Enjoy!

The post Fun with Dick and Jane, and Stephen Colbert appeared first on District Dispatch.

Library of Congress: The Signal: Gossiping About Digital Preservation

planet code4lib - Thu, 2014-10-30 15:35

ANTI-ENTROPY by user 51pct on Flickr.

In September the Library held its annual Designing Storage Architectures for Digital Collections meeting. The meeting brings together technical experts from the computer storage industry with decision-makers from a wide range of organizations with digital preservation requirements to explore the issues and opportunities around the storage of digital information for the long-term. I always learn quite a bit during the meeting and more often than not encounter terms and phrases that I’m not familiar with.

One I found particularly interesting this time around was the term “anti-entropy.”  I’ve been familiar with the term “entropy” for a while, but I’d never heard “anti-entropy.” One definition of “entropy” is a “gradual decline into disorder.” So is “anti-entropy” a “gradual coming-together into order”? It turns out that the term has a long history in information science and is important for understanding some very important digital preservation processes regarding file storage, file repair and fixity checking.

The “entropy” we’re talking about when we talk about “anti-entropy” might also be called “Shannon Entropy” after the legendary information scientist Claude Shannon. His ideas on entropy were elucidated in a 1948 paper called “A Mathematical Theory of Communication” (PDF), developed while he worked at Bell Labs. For Shannon, entropy was the measure of the unpredictability of information content. He wasn’t necessarily thinking about information in the same way that digital archivists think about information as bits, but the idea of the unpredictability of information content has great applicability to digital preservation work.

“Entropy” in this sense represents the “noise” that begins to slip into information processes over time. It made sense that computer science would co-opt the term, and in that context “anti-entropy” has come to mean “comparing all the replicas of each piece of data that exist (or are supposed to) and updating each replica to the newest version.” In other words, what information scientists call “bit flips” or “bit rot” are examples of entropy in digital information files, and anti-entropy protocols (a subtype of “gossip” protocols) use methods to ensure that files are maintained in their desired state. This is an important concept to grasp when designing digital preservation systems that take advantage of multiple copies to ensure long-term preservability, LOCKSS being the most obvious example of this.
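To make that concrete, here is a deliberately naive sketch of an anti-entropy pass (my own illustration in PHP, not how LOCKSS or any particular system actually implements it): compute a fixity digest for every replica, treat the digest held by the majority of copies as authoritative, and repair any copy that disagrees.

<?php
// A naive anti-entropy pass over local file replicas -- an illustration only.
// Real systems like LOCKSS use sampled polling and tamper-resistant voting instead.
function anti_entropy_pass( array $replica_paths ) {
    // Fixity check: digest every replica.
    $digests = array();
    foreach ( $replica_paths as $path ) {
        $digests[ $path ] = hash_file( 'sha256', $path );
    }

    // Consensus stand-in: the most common digest wins.
    $counts = array_count_values( $digests );
    arsort( $counts );
    $good_digest = key( $counts );
    $good_copy   = array_search( $good_digest, $digests, true );

    // Repair: overwrite any replica that disagrees with the consensus copy.
    foreach ( $digests as $path => $digest ) {
        if ( $digest !== $good_digest ) {
            copy( $good_copy, $path );
        }
    }
}

Run periodically against every set of replicas, a loop like this is what lets "bit rot" be detected and healed without a human ever inspecting the files.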

gossip_bench by user ricoslounge on Flickr.

Anti-entropy and gossip protocols are the means to ensure the automated management of digital content that can take some of the human overhead out of the picture. Digital preservation systems invoke some form of content monitoring in order to do their job. Humans could do this monitoring, but as digital repositories scale up massively, the idea that humans can effectively monitor the digital information under their control with something approaching comprehensiveness is a fantasy. Thus, we’ve got to be able to invoke anti-entropy and gossip protocols to manage the data.

An excellent introduction to how gossip protocols work can be found in the paper “GEMS: Gossip-Enabled Monitoring Service for Scalable Heterogeneous Distributed Systems.”  The authors note three key parameters to gossip protocols: monitoring, failure detection and consensus.  Not coincidentally, LOCKSS “consists of a large number of independent, low-cost, persistent Web caches that cooperate to detect and repair damage to their content by voting in “opinion polls” (PDF). In other words, gossip and anti-entropy.

I’ve only just encountered these terms, but they’ve been around for a long while.  David Rosenthal, the chief scientist of LOCKSS, has been thinking about digital preservation storage and sustainability for a long time and he has given a number of presentations at the LC storage meetings and the summer digital preservation meetings.

LOCKSS is the most prominent example in the digital preservation community of the exploitation of gossip protocols, but these protocols are widely used in distributed computing. If you really want to dive deep into the technology that underpins some of these systems, start reading about distributed hash tables, consistent hashing, versioning, vector clocks and quorum in addition to anti-entropy-based recovery. Good luck!

One of the more hilarious anti-entropy analogies was recently supplied by the Register, which suggested that a new tool that supports gossip protocols “acts like [a] depressed teenager to assure data reliability” and “constantly interrogates itself to make sure data is ok.”

You learn something new every day.

LibUX: Web for Libraries: The UX Bandwagon

planet code4lib - Thu, 2014-10-30 15:26

This issue of The Web for Libraries was mailed Wednesday, October 29th, 2014. Want to get the latest from the cutting-edge web made practical for libraries and higher ed every Wednesday? You can subscribe here!

The UX Bandwagon

Is it a bad thing? Throw a stone and you’ll hit a user experience talk at a library conference (or even a whole library conference). There are books, courses, papers, more books, librarians who understand the phrase “critical rendering path,” this newsletter, this podcast, interest groups, and so on.

It is the best fad that could happen for library perception. The core concept behind capital-u Usability is continuous data-driven decision making that invests in the library’s ability to iterate upon itself. Usability testing that stops is usability testing done wrong. What’s more, libraries concerned with UX are thus concerned about measurable outward perception – marketing, which libraries used to suck at – that can be neither haphazard nor half-assed. This bandwagon values experimentation, permits change, and increases the opportunities to create delight.

Latest Podcast: A High-Functioning Research Site with Sean Hannan

Sean Hannan talks about designing a high-functioning research site for the Johns Hopkins Sheridan Libraries and University Museums. It’s a crazy fast API-driven research dashboard mashing up research databases, LibGuides, and a magic, otherworldly carousel actually increasing engagement. Research tools are so incredibly difficult to build well, especially when libraries rely so heavily on third parties, that I’m glad to have taken the opportunity to pick Sean’s brain. You can catch this and every episode on Stitcher, iTunes, or on the Web.

Top 5 Problems with Library Websites – a Review of Recent Usability Studies

Emily Singley looked at 16 library website usability studies over the past two years and broke down the biggest complaints. Can you guess what they are?

“Is the semantic web still a thing?”

Jonathan Rochkind sez: “The entire comment, and, really the entire thread, are worth a read. There seems to be a lot of energy in libraryland behind trying to produce “linked data”, and I think it’s important to pay attention to what’s going on in the larger world here.
Especially because much of the stated motivation for library “linked data” seems to have been: “Because that’s where non-library information management technology is headed, and for once let’s do what everyone else is doing and not create our own library-specific standards.” It turns out that may or may not be the case ….”

How to Run a Content-Planning Workshop

Let’s draw a line. There are libraries that blah-blah “take content seriously” enough in that they pare down the content patrons don’t care about, ensure that hours and suchlike are findable, that their #libweb is ultimately usable. Then there are libraries that dive head-first into content creation. They podcast, make lists, write blogs, etc. For the latter, the library without a content strategy is going to be a mess, and I think these suggestions by James Deer on Smashing Magazine are really helpful.

New findings: For top ecommerce sites, mobile web performance is wildly inconsistent

I’m working on a new talk and maybe even a #bigproject about treating library web services and apps as e-commerce – because, think about it, what a library website does and what a web-store wants you to do aren’t too dissimilar. That said, I think we need to pay a lot of attention to stats that come out of e-commerce. Every year, Radware studies the mobile performance of the top 100 ecommerce sites to see how they measure up to user expectations. Here’s the latest report.

These are a few gems I think particularly important to us:

  • 1 out of 4 people worldwide own a smartphone
  • On mobile, 40% will abandon a page that takes longer than 3 seconds to load
  • Slow pages are the number one issue that mobile users complain about. 38% of smartphone users have screamed at, cursed at, or thrown their phones when pages take too long to load.
  • The median page is 19% larger than it was one year ago

There is also a lot of ink dedicated to sites that serve m-dot versions to mobile users, mostly making the point that this is ultimately dissatisfying and, moreover, tablet users definitely don’t want that m-dot site.

The post Web for Libraries: The UX Bandwagon appeared first on LibUX.

Galen Charlton: Reaching LITA members: a datapoint

planet code4lib - Thu, 2014-10-30 00:00

I recently circulated a petition to start a new interest group within LITA, to be called the Patron Privacy Technologies IG.  I’ve submitted the formation petition to the LITA Council, and a vote on the petition is scheduled for early November.  I also held an organizational meeting with the co-chairs; I’m really looking forward to what we all can do to help improve how our tools protect patron privacy.

But enough about the IG, let’s talk about the petition! To be specific, let’s talk about when the signatures came in.

I’ve been on Twitter since March of 2009, but a few months ago I made the decision to become much more active there (you see, there was a dearth of cat pictures on Twitter, and I felt it my duty to help do something about it).  My first thought was to tweet the link to a Google Form I created for the petition. I did so at 7:20 a.m. Pacific Time on 15 October:

LITA members interested in the @ALA_LITA Patron Privacy Technologies IG – please sign the petition to form the IG: https://t.co/kOggjNSKYi

— Galen Charlton (@gmcharlt) October 15, 2014

Also, if you are interested in being co-chair of the LITA Patron Privacy Tech IG, please indicate on the petition or drop me a line.

— Galen Charlton (@gmcharlt) October 15, 2014

Since I wanted to gauge whether there was interest beyond just LITA members, I also posted about the petition on the ALA Think Tank Facebook group at 7:50 a.m. on the 15th.

By the following morning, I had 13 responses: 7 from LITA members, and 6 from non-LITA members. An interest group petition requires 10 signatures from LITA members, so at 8:15 on the 16th, I sent another tweet, which got retweeted by LITA:

Just a few more signatures from LITA members needed for the Patron Privacy IG formation petition: https://t.co/i4mXsJps1p @ALA_LITA

— Galen Charlton (@gmcharlt) October 16, 2014

By early afternoon, that had gotten me one more signature. I was feeling a bit impatient, so at 2:28 p.m. on the 16th, I sent a message to the LITA-L mailing list.

That opened the floodgates: 10 more signatures from LITA members arrived by the end of the day, and 10 more came in on the 17th. All told, a total of 42 responses to the form were submitted between the 15th and the 23rd.

The petition didn’t ask how the responder found it, but if I make the assumption that most respondents filled out the form shortly after they first heard about it, I arrive at my bit of anecdata: over half of the petition responses were inspired by my post to LITA-L, suggesting that the mailing list remains an effective way of getting the attention of many LITA members.

By the way, the petition form is still up for folks to use if they want to be automatically subscribed to the IG’s mailing list when it gets created.

DuraSpace News: The Society of Motion Picture and Television Engineers (SMPTE) Archival Technology Medal Awarded to Neil Beagrie

planet code4lib - Thu, 2014-10-30 00:00

From William Kilbride, Digital Preservation Coalition

Heslington, York – At a ceremony in Hollywood on October 23, 2014, the Society of Motion Picture and Television Engineers® (SMPTE®) awarded the 2014 SMPTE Archival Technology Medal to Neil Beagrie in recognition of his long-term contributions to the research and implementation of strategies and solutions for digital preservation.

District Dispatch: ALA opposes e-book accessibility waiver petition

planet code4lib - Wed, 2014-10-29 21:29

ALA and the Association of Research Libraries (ARL) renewed their opposition to a petition filed by the Coalition of E-book Manufacturers seeking a waiver from complying with disability legislation and regulation (specifically Sections 716 and 717 of the Communications Act as Enacted by the Twenty-First Century Communications and Video Accessibility Act of 2010). Amazon, Kobo, and Sony are the members of the coalition, and they argue that they do not have to make their e-readers’ Advanced Communications Services (ACS) accessible to people with print disabilities.

Why? The coalition argues that because basic e-readers (Kindle, Sony Reader, Kobo E-Reader) are primarily used for reading and have only rudimentary ACS, they should be exempt from CVAA accessibility rules. People with disabilities can buy other more expensive e-readers and download apps in order to access content. To ask the Coalition to modify their basic e-readers would impose a regulatory burden, raise consumer prices, ruin the streamlined look of basic e-readers, and inhibit innovation (I suppose for other companies and start-ups that want to make even more advanced inaccessible readers).

The library associations have argued that these basic e-readers do have ACS capability as a co-primary use. In fact, the very companies asking for this waiver market their e-readers as being able to browse the web, for example. The Amazon Webkit that comes with the basic Kindle can “render HyperText Markup Language (HTML) pages, interpret JavaScript code, and apply webpage layout and styles from Cascading Style Sheets (CSS).” The combination of HTML, JavaScript, and CSS demonstrates that this basic e-reader’s browser leaves open a wide array of ACS capability, including mobile versions of Facebook, Gmail, and Twitter, to name a few widely popular services.

We believe denying the Coalition’s petition will not only increase access to ACS, but also increase access to more e-content for more people. As we note in our FCC comments: “Under the current e-reader ACS regime proposed by the Coalition and tentatively adopted by the Commission, disabled persons must pay a ‘device access tax.’ By availing oneself of one of the ‘accessible options’ as suggested by the Coalition, a disabled person would pay at minimum $20 more a device for a Kindle tablet that is heavier and has less battery life than a basic Kindle e-reader.” Surely it is right that everyone ought to be able to buy and use basic e-readers just like everybody has the right to drink from the same water fountain.

This decision will rest on the narrow question of whether or not ACS is offered, marketed and used as a co-primary purpose in these basic e-readers. We believe the answer to that question is “yes,” and we will continue our advocacy to support more accessible devices for all readers.

The post ALA opposes e-book accessibility waiver petition appeared first on District Dispatch.

Eric Hellman: GITenberg: Modern Maintenance Infrastructure for Our Literary Heritage

planet code4lib - Wed, 2014-10-29 20:51
One day back in March, the Project Gutenberg website thought I was a robot and stopped letting me download ebooks. Frustrated, I resolved to put some Project Gutenberg ebooks into GitHub, where I could let other people fix problems in the files. I decided to call this effort "Project Gitenhub". On my second or third book, I found that Seth Woodworth had had the same idea a year earlier, and had already moved about a thousand ebooks into GitHub. That project was named "GITenberg". So I joined his email list and started submitting pull requests for PG ebooks that I was improving.

Recently, we've joined forces to submit a proposal to the Knight Foundation's News Challenge, whose theme is "How might we leverage libraries as a platform to build more knowledgeable communities?". Here are some excerpts:
Abstract

Project Gutenberg (PG) offers 45,000 public domain ebooks, yet few libraries use this collection to serve their communities. Text quality varies greatly, metadata is all over the map, and it's difficult for users to contribute improvements. We propose to use workflow and software tools developed and proven for open source software development – GitHub – to open up the PG corpus to maintenance and use by libraries and librarians. The result – GITenberg – will include MARC records, covers, OPDS feeds and ebook files to facilitate library use. Version-controlled fork and merge workflow, combined with a change-triggered back-end build environment, will allow scalable, distributed maintenance of the greatest works of our literary heritage.

Description

Libraries need metadata records in MARC format, but in addition they need to be able to select from the corpus those works which are most relevant to their communities. They need covers to integrate the records with their catalogs, and they need a level of quality assurance so as not to disappoint patrons. Because this sort of metadata is not readily available, most libraries do not include PG records in their catalogs, resulting in unnecessary disappointment when, for example, a patron wants to read Moby Dick from the library on their Kindle.

Progress

43,000 books and their metadata have been moved to the git version control software; this will enable librarians to collaboratively edit and control the metadata. The GITenberg website, mailing list and software repository have been launched at https://gitenberg.github.io/ . Software for generating MARC records and OPDS feeds has already been written.

Background

Modern software development teams use version control, continuous integration, and workflow management systems to coordinate their work. When applied to open-source software, these tools allow diverse teams from around the world to collaboratively maintain even the most sprawling projects. Anyone wanting to fix a bug or make a change first forks the software repository, makes the change, and then makes a "pull request". A best practice is to submit the pull request with a test case verifying the bug fix. A developer charged with maintaining the repository can then review the pull request and accept or reject the change. Often, there is discussion asking for clarification. Occasionally versions remain forked and diverge from each other. GitHub has become the most popular site for this type of software repository because of its well-developed workflow tools and integration hooks. The leaders of this team recognized the possibility of using GitHub for the maintenance of ebooks, and we began the process of migrating the most important corpus of public domain ebooks, Project Gutenberg, onto GitHub, thus the name GITenberg. Project Gutenberg has grown over the years to 50,000 ebooks, audiobooks, and related media, including all the most important public domain works of English language literature. Despite the great value of this collection, few libraries have made good use of this resource to serve their communities. There are a number of reasons why. The quality of the ebooks and the metadata around the ebooks is quite varied. MARC records, which libraries use to feed their catalog systems, are available for only a subset of the PG collection. Cover images and other catalog enrichment assets are not part of PG. To make the entire PG corpus available via local libraries, massive collaboration among librarians and ebook developers is essential.
We propose to build integration tools around GitHub that will enable this sort of collaboration to occur. 
  1. Although the PG corpus has been loaded into GITenberg, we need to build a backend that automatically converts the version-controlled source text into well-structured ebooks. We expect to define a flavor of MarkDown or Asciidoc which will enable this automatic, change-triggered building of ebook files (EPUB, MOBI, PDF). (MarkDown is a human-readable plain text format used on GitHub for documentation; MarkDown for ebooks is being developed independently by several teams of developers. Asciidoc is a similar format that works nicely for ebooks.) 
  2. Similarly, we will need to build a parallel backend server that will produce MARC and XML formatted records from version-controlled plain-text metadata files.
  3. We will generate covers for the ebooks using a tool recently developed by NYPL and include them in the repository.
  4. We will build a selection tool to help libraries select the records best suited to their libraries.
  5. Using a set of "cleaned up" MARC records from NYPL, and adding custom cataloguing, we will seed the metadata collection with ~1000 high quality metadata records.
  6. We will provide a browsable OPDS feed for use in tablet and smartphone ebook readers.
  7. We expect that the toolchain we develop will be reusable for creation and maintenance of a new generation of freely licensed ebooks.

The rest of the proposal is on the Knight News Challenge website. If you like the idea of GITenberg, you can "applaud" it there. The "applause" is not used in the judging of the proposals, but it makes us feel good. There are lots of other interesting and inspiring proposals to check out and applaud, so go take a look!

DPLA: Building the newest DPLA student exhibition, “From Colonialism to Tourism: Maps in American Culture”

planet code4lib - Wed, 2014-10-29 17:00

Oregon Territory, 1835. Courtesy of David Rumsey.

Two groups of MLIS students from the University of Washington’s Information School took part in a DPLA pilot called the Digital Curation Program during the 2013-2014 academic year. The DPLA’s Amy Rudersdorf worked with iSchool faculty member Helene Williams as we created exhibits for the DPLA for the culminating project, or Capstone, in our degree program. The result is the newest addition to DPLA’s exhibitions, called “From Colonialism to Tourism: Maps in American Culture.”

My group included Kili Bergau, Jessica Blanchard, and Emily Felt; we began by choosing a common interest from the list of available topics, and became “Team Cartography.” This project taught us about online exhibit creation and curation of digital objects, copyright and licensing, and took place over two quarters. The first quarter was devoted to creating a project plan and learning about the subject matter. We asked questions including: What is Cartography? What is the history of American maps? How are they represented within the DPLA collections?

Girl & road maps, Southern California, 1932. Courtesy of the University of Southern California Libraries.

As we explored the topic, the project became less about librarianship and more about our lives as historians. Cartography, or the creation of maps, slowly transformed into the cultural “maps in history” as we worked through the DPLA’s immense body of aggregated images. While segmenting history and reading articles to learn about the pioneers, the Oregon Trail, the Civil War, and the 20th Century, we also learned about the innards of the DPLA’s curation process. We learned how to use Omeka, the platform for creating the exhibitions, and completed forms for acquiring usage rights for the images we would use in our exhibit.

One of the greatest benefits of working with the team was the opportunity to investigate niche areas among the broad topics, as well as leverage each other’s interests to create one big fascinating project. With limited time, we soon had to focus on selecting images and writing the exhibit narrative. We wrote, and revised, and wrote again. We waded through hundreds of images to determine which were the most appropriate, and then gathered appropriate metadata to meet the project requirements.

Our deadline for the exhibit submission was the end of the quarter, and our group was ecstatic to hear the night of the Capstone showcase at the UW iSchool event that the DPLA had chosen our exhibit for publication. Overjoyed, we celebrated remotely, together. Two of us had been in Seattle, one in Maine, and I had been off in a Dengue Fever haze in rural Cambodia (I’m better now).

The Negro Travelers’ Green Book [Cover], 1956. Courtesy of the University of South Carolina, South Caroliniana Library via the South Carolina Digital Library.

Shortly after graduation in early June, Helene asked if I was interested in contributing further to this project: over the summer, I worked with DPLA staff to refine the exhibit and prepare it for public release. Through rigorous editing, some spinning of various themes in new directions, and a wild series of conversations over Google Hangouts about maps, maps, barbecue, maps, libraries, maps, television, movies, and more maps, the three of us had taken the exhibition to its final state.

Most experiences in higher education, be they on the undergrad or graduate levels (sans PhD), fail to capture a sense of endurance and longevity. The exhibition was powerful and successful throughout the process from many different angles. For me, watching its transformation from concept to public release has been marvelous, and has prepared me for what I hope are ambitious library projects in my future.

View this exhibition

A huge thanks to Amy Rudersdorf for coordinating the program, Franky Abbott for her work editing and refining the exhibition, Kenny Whitebloom for Omeka wrangling, and the many Hubs and their partners for sharing their resources. 


LITA: Jobs in Information Technology: October 29

planet code4lib - Wed, 2014-10-29 16:58

New vacancy listings are posted weekly on Wednesday at approximately 12 noon Central Time. They appear under New This Week and under the appropriate regional listing. Postings remain on the LITA Job Site for a minimum of four weeks.

New This Week

Head of Guin Library, Oregon State University, Newport, OR

Visit the LITA Job Site for more available jobs and for information on submitting a  job posting.

Open Knowledge Foundation: Open Access in Ireland: A case-study

planet code4lib - Wed, 2014-10-29 15:33

Following last week’s Open Access Week blog series, we continue our celebration of community efforts in this field. Today we give the microphone to Dr. Salua Nassabay from Open Knowledge Ireland in a great account from Ireland, originally posted on the Open Knowledge Ireland blog.

In Ireland, awareness of OA has increased within the research community nationally, particularly since institutional repositories have been built in each Irish university. Advocacy programmes and funder mandates (IRCSET, SFI, HEA) have had a positive effect; but there is still some way to go before the majority of Irish researchers will automatically deposit their papers in their local OA repository.

Brief Story

In summer 2004, the Irish Research eLibrary (IReL) was launched, giving online access to a wide range of key research journals. The National Principles on Open Access Policy Statement was launched on Oct 23rd 2012 at the Digital Repository of Ireland Conference by Sean Sherlock, Minister of State at the Department of Enterprise, Jobs & Innovation and the Department of Education & Skills with responsibility for Research & Innovation. The policy consists of a ‘Green way’ mandate and encouragement to publish in ’Gold’ OA journals. It aligns with the European policy for Horizon 2020. OA at the national level is managed by the National Steering Committee on OA Policy; see Table 3.

A Committee of Irish research organisations is working in partnership to coordinate activities and to combine expertise at a national level to promote unrestricted, online access to outputs which result from research that is wholly or partially funded by the State:

National Principles on Open Access Policy Statement

  • Definition of OA – Reaffirm: freedom of researchers; increase visibility and access; support international interoperability, link to teaching and learning, and open innovation.

  • Defining research outputs – “include peer-reviewed publications, research data and other research artefacts which feed the research process”.

  • General Principle (1) – All researchers to have deposit rights for an OA repository. Deposit: post-print/publisher version and metadata; peer-reviewed journal articles and conference publications; others where possible; at time of acceptance for publication; in compliance with national metadata standards.

  • General Principle (2) – Release: immediate for metadata; respect publisher copyright, licensing and embargo (not normally exceeding 6 months/12 months).

  • Green route policy – not exclusive.

  • Suitable repositories.

  • Research data linked to publications.

  • High-level principles – Infrastructure and sustainability: depositing once, harvesting, interoperability and long-term preservation. Advocacy and coordination: mechanisms for and monitoring of implementation, awareness raising and engagement for ALL. Exploiting OA and implementation: preparing metadata and national value-added metrics.

Table 1. National Principles on Open Access Policy Statement. https://www.dcu.ie/sites/default/files/communications/pdfs/PatriciaClarke2014.pdf and http://openaccess.thehealthwell.info/sites/default/files/documents/NationalPrinciplesonOAPolicyStatement.pdf

There are seven universities in Ireland (see http://www.hea.ie/en/about-hea). These Irish universities received government funding to build institutional repositories in each Irish university and to develop a federated harvesting and discovery service via a national portal. It is intended that this collaboration will be expanded to embrace all Irish research institutions in the future. OA repositories are currently available in all Irish universities and in a number of other higher education institutions and government agencies:

Higher education – institutional repositories: Dublin Business School; Dublin City University; Dublin Institute of Technology; Dundalk Institute of Technology; Mary Immaculate College; National University of Ireland Galway; National University of Ireland, Maynooth; Royal College of Surgeons in Ireland; Trinity College Dublin; University College Cork; University College Dublin; University of Limerick; Waterford Institute of Technology

Higher education – subject repository: Irish Virtual Research Library & Archive, UCD

Government agency repositories: Health Service Executive Lenus; All-Ireland electronic Health Library (AieHL); Marine Institute; Teagasc

Table 2. Currently available repositories in Ireland

OA statistics for Ireland show more than 58,859 OA publications in 13 repositories, distributed as shown in Figures 1 and 2.

Figure 1. Publications in repositories. From rian.ie (date: 16/9/2014). http://rian.ie/en/stats/overview

Some samples of Irish OA journals are:

- Crossings: Electronic Journal of Art and Technology: http://crossings.tcd.ie
- Economic and Social Review: http://www.esr.ie
- Journal of the Society for Musicology in Ireland: http://www.music.ucc.ie/jsmi/index.php/jsmi
- Journal of the Statistical and Social Inquiry Society of Ireland: http://www.ssisi.ie
- Minerva: an Internet Journal of Philosophy: http://www.minerva.mic.ul.ie//
- The Surgeon: Journal of the Royal Colleges of Surgeons of Edinburgh and Ireland: http://www.researchgate.net/journal/1479-666X_The_surgeon_journal_of_the_Royal_Colleges_of_Surgeons_of_Edinburgh_and_Ireland
- Irish Journal of Psychological Medicine: http://www.ijpm.ie/1fmul3lci60?a=1&p=24612705&t=21297075

Figure 2. Publications by document type. From rian.ie (date: 16/9/2014). http://rian.ie/en/stats/overview

Institutional OA policies:

  • Health Research Board (HRB) – Funders
    Website: http://www.hrb.ie
    Policy: http://www.hrb.ie/research-strategy-funding/policies-and-guidelines/policies/open-access/
    OA mandatory: Yes; OA infrastructure: No

  • Science Foundation Ireland (SFI) – Funders
    Website: http://www.sfi.ie
    Policy: http://www.sfi.ie/funding/grant-policies/open-access-availability-of-published-research-policy.html
    OA mandatory: Yes; OA infrastructure: No

  • Higher Education Authority (HEA) – Funders
    Website: http://www.hea.ie
    Policy: http://www.hea.ie/en/policy/research/open-access-scientific-information
    OA mandatory: No; OA infrastructure: No

  • Department of Agriculture, Food and Marine (DAFM) – Funders
    Website: http://www.agriculture.gov.ie
    Policy: http://www.agriculture.gov.ie/media/migration/research/DAFMOpenAccessPolicy.pdf
    OA mandatory: Yes (effective 2013); OA infrastructure: No

  • Environmental Protection Agency (EPA) – Funders
    Website: http://www.epa.ie/
    Policy: http://www.epa.ie/footer/accessibility/infopolicy/#.VBlPa8llwjg
    Repository: http://www.epa.ie/pubs/reports/#.VBmTVMllwjg
    OA mandatory: Yes; OA infrastructure: Yes

  • Marine Institute (MI) – Funders
    Website: http://www.marine.ie/Home/
    Policy: http://oar.marine.ie/help/policy.html
    Repository: http://oar.marine.ie
    OA mandatory: No; OA infrastructure: Yes

  • Irish Research Council (IRC) – Funders
    Website: http://www.research.ie
    Policy: http://www.research.ie/aboutus/open-access
    OA mandatory: Yes*; OA infrastructure: No

  • Teagasc – Funders
    Website: http://www.teagasc.ie
    Policy: http://t-stor.teagasc.ie/help/t-stor-faq.html#faqtopic2
    Repository: http://t-stor.teagasc.ie
    OA mandatory: No*; OA infrastructure: Yes

  • Institute of Public Health in Ireland (IPH) – Funders
    Website: http://www.publichealth.ie
    Policy: http://www.thehealthwell.info/node/628334?&content=resource&member=749069&catalogue=Policies,%20Strategies%20&%20Action%20plans,Policy&collection=none&tokens_complete=true
    OA mandatory: Yes; OA infrastructure: No

  • Irish Universities Association (IUA) – Researchers
    Representative body for Ireland’s seven universities: http://www.iua.ie
    Related links: https://www.tcd.ie/research_innovation/assets/TCD%20Open%20Access%20Policy.pdf and http://www.ucd.ie
    OA mandatory: Yes (effective 2010); OA infrastructure: Yes

  • Health Service Executive (HSE) – Researchers
    Website: http://www.hse.ie/eng/
    Policy: http://www.hse.ie/eng/staff/Resources/library/Open_Access/statement.pdf
    Repository: http://www.lenus.ie/hse/
    OA mandatory: Yes (effective 2013); OA infrastructure: Yes

  • Institutes of Technology Ireland (IOTI) – Researchers
    Website: http://www.ioti.ie
    OA mandatory: –; OA infrastructure: No

  • Dublin Institute of Technology (DIT) – Researchers
    Website: http://dit.ie
    Policy: http://arrow.dit.ie/mandate.html
    Repository: http://arrow.dit.ie
    OA mandatory: Yes*; OA infrastructure: Yes

  • Royal College of Surgeons in Ireland (RCSI) – Researchers
    Website: http://www.rcsi.ie
    Policy: http://epubs.rcsi.ie/policies.html
    Repository: http://epubs.rcsi.ie
    OA mandatory: No*; OA infrastructure: Yes

  • Consortium of National and University Libraries (CONUL) – Library and Repository
    Website: http://www.conul.ie
    Repository: http://rian.ie/en
    OA mandatory: –; OA infrastructure: Yes

  • IUA Librarians’ Group (IUALG) – Library and Repository
    Website: http://www.iua.ie
    Repository: http://rian.ie/en
    OA mandatory: –; OA infrastructure: Yes

  • Digital Repository of Ireland (DRI) – Library and Repository
    Website and Repository: http://www.dri.ie
    DRI Position Statement on Open Access for Data: http://dri.ie/sites/default/files/files/dri-position-statement-on-open-access-for-data-2014.pdf
    OA mandatory: Yes (effective 2014); OA infrastructure: Yes

  • EdepositIreland – Library and Repository
    Website: http://www.tcd.ie/Library/edepositireland/
    Policy: https://www.tcd.ie/research_innovation/assets/TCD%20Open%20Access%20Policy.pdf
    Repository: http://edepositireland.ie
    OA mandatory: Yes; OA infrastructure: Yes

*IRC: Some exceptions like books. See policy.

*Teagasc: Material in the repository is licensed under the Creative Commons Attribution-NonCommercial Share-Alike License

*DIT: Material that is to be commercialised, or which can be regarded as confidential, or the publication of which would infringe a legal commitment of the Institute and/or the author, is exempt from inclusion in the repository.

*RCSI: Material in the repository is licensed under the Creative Commons Attribution-NonCommercial Share-Alike License

Table 3. Institutional OA Policies in Ireland

Funder OA policies:

Major research funders in Ireland

Department of Agriculture, Fisheries and Food: http://www.agriculture.gov.ie/media/migration/research/DAFMOpenAccessPolicy.pdf

IRCHSS (Irish Research Council for Humanities and Social Sciences): No Open Access policies as yet.

Enterprise Ireland: No Open Access policies as yet.

IRCSET (Irish Research Council for Science, Engineering and Technology): OA Mandate from May 1st 2008: http://roarmap.eprints.org/63/

HEA (Higher Education Authority): OA Mandate from June 30th 2009: http://roarmap.eprints.org/95/

Marine Institute: No Open Access policies as yet

HRB (Health Research Board): OA Recommendations, Policy: http://roarmap.eprints.org/76/

SFI (Science Foundation Ireland): OA Mandate from February 1st 2009: http://roarmap.eprints.org/115/

Table 4. Open Access funders in Ireland.

Figure 3. Public sources of funds for Open Access. From rian.ie (date: 16/9/2014), http://rian.ie/en/stats/overview

Infrastructural support for OA:

Open Access organisations, groups, projects and initiatives (including the Open Access to Irish Research Project and associated national initiatives):

  • RIAN Steering Group / IUA (Irish Universities Association) Librarian’s Group (coordinating body). RIAN is the outcome of a project to build online open access to institutional repositories in all seven Irish universities and to harvest their content to the national portal.

  • NDLR (National Digital Learning Repository): http://www.ndlr.ie

  • National Steering Group on Open Access Policy (see Table 3)

  • RISE Group (Research Information Systems Exchange)

  • Irish Open Access Repositories Support Project Working Group (ReSupIE): http://www.irel-open.ie/moodle/

  • Repository Network Ireland, a newly formed group of repository managers, librarians and information professionals: http://rni.wikispaces.com

  • Digital Repository of Ireland (DRI), a trusted national repository for Ireland’s humanities and social sciences data: @dri_ireland

Table 5. Open Access infrastructural support.

Challenges and ongoing developments

Ireland already has considerable expertise in developing Open Access to publicly funded research, aligned with international policies and initiatives, and is now seeking to strengthen its approach to support international developments on Open Access led by the European Commission, Science Europe and other international agencies.

The greatest challenge is the increasing pressure faced by publishers in a fast-changing environment.

Conclusions

The launch of Ireland’s national Open Access policy has put Ireland ahead of many European partners. Irish research organisations are particularly successful in the following areas of research: Information and Communication Technologies, Health and Food, Agriculture, and Biotechnology.

Links

- Repository Network Ireland / http://rni.wikispaces.com
- Open Access Scholarly Publishers / http://oaspa.org/blog/
- OpenDoar – Directory of Repositories / http://www.opendoar.org
- OpenAire – Open Access Infrastructure for research in Europe / https://www.openaire.eu
- Repositories Support Ireland / http://www.resupie.ie/moodle/
- UCD Library News / http://ucdoa.blogspot.ie
- Trinity’s Open Access News / http://trinity-openaccess.blogspot.ie
- RIAN / http://rian.ie/en/stats/overview

Contact person: Dr. Salua Nassabay salua.nassabay@openknowledge.ie

https://www.openknowledge.ie; twitter: @OKFirl

CC-BY-SA-NC
