
Harvard Library Innovation Lab: Link roundup August 30, 2015

planet code4lib - Sun, 2015-08-30 17:41

This is the good stuff.

Rethinking Work

When employees negotiate, they negotiate for improved compensation, since nothing else is on the table.

Putting Elon Musk and Steve Jobs on a Pedestal Misrepresents How Innovation Happens

“Rather than placing tech leaders on a pedestal, we should put their successes”

Lamp Shows | HAIKU SALUT

Synced lamps as part of a band’s performance

Lawn Order | 99% Invisible

Jail time for a brown lawn? A wonderfully weird dive into the moral implications of lawncare

Sky-high glass swimming pool created to connect south London apartment complex

Swim through the air

DuraSpace News: Cineca DSpace Service Provider Update

planet code4lib - Sun, 2015-08-30 00:00

From Andrea Bollini, Cineca

It has been a hot and productive summer here at Cineca. We have carried out several DSpace activities, together with the go-live of the National ORCID Hub to support the adoption of ORCID in Italy [1][2].

Ed Summers: iSchool

planet code4lib - Sat, 2015-08-29 20:05

As you can see, I’ve recently changed things around here. Yeah, it’s looking quite spartan at the moment, although I’m hoping that will change in the coming year. I really wanted to optimize this space for writing in my favorite editor, and for making it easy to publish and preserve the content. WordPress has served me well over the last 10 years, and up till now I’ve resisted the urge to switch over to a static site. But yesterday I converted the 394 posts, archived the WordPress site and database, and am now using Jekyll. I haven’t been using Ruby as much in the past few years, but the tooling around Jekyll feels very solid, especially given GitHub’s investment in it.
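For readers who haven’t done a migration like this: each WordPress post ends up as a Markdown file with YAML front matter in Jekyll’s `_posts` directory. A minimal sketch of that target format (the `jekyll_post` helper and its fields are illustrative, not the actual conversion script):

```python
def jekyll_post(title, date_str, slug, body):
    """Render a post as Jekyll expects it: a filename of the form
    _posts/YYYY-MM-DD-slug.md, and YAML front matter followed by the body."""
    front_matter = "\n".join([
        "---",
        "layout: post",
        f'title: "{title}"',
        f"date: {date_str}",
        "---",
        "",
    ])
    return f"_posts/{date_str}-{slug}.md", front_matter + body

name, text = jekyll_post("iSchool", "2015-08-29", "ischool", "As you can see...\n")
# name → "_posts/2015-08-29-ischool.md"
```

Tools like jekyll-import automate this for a whole WordPress export, but the per-post output has this same shape.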

Honestly, there was something that pushed me over the edge to do the switch. Next week I’m starting in the University of Maryland iSchool, where I will be pursuing a doctoral degree. I’m specifically hoping to examine some of the ideas I dredged up while preparing for my talk at NDF in New Zealand a couple years ago. I was given almost a year to think about what I wanted to talk about – so it was a great opportunity for me to reflect on my professional career so far, and examine where I wanted to go.

After I got back I happened across a paper by Steven Jackson called Rethinking Repair, which introduced me to what felt like a very new and exciting approach to information technology design and innovation that he calls Broken World Thinking. In hindsight I can see that both of these things conspired to make returning to school at 46 years of age look like a logical thing to do. If all goes as planned I’m going to be doing this part-time while also working at the Maryland Institute for Technology in the Humanities, so it’s going to take a while. But I’m in a good spot, and am not in any rush … so it’s all good as far as I’m concerned.

I’m planning to use this space for notes about what I’m reading, papers, reflections, etc. I thought about putting my citations and notes into Evernote, Zotero, Mendeley, etc., and I may still do that. But I’m going to try to keep it relatively simple and use this space as best I can to start. My blog has always had a navel-gazey kind of feel to it, so I doubt it’s going to matter much.

To get things started I thought I’d share the personal statement I wrote for admission to the iSchool. I’m already feeling more focus than when I wrote it almost a year ago, so it will be interesting to return to it periodically. The thing that has become clearer to me in the intervening year is that I’m increasingly interested in examining the role that broken world thinking has played in both the design and evolution of the Web.

So here’s the personal statement. Hopefully it’s not too personal :-)

For close to twenty years I have been working as a software developer in the field of libraries and archives. As I was completing my Masters degree in the mid-1990s, the Web was going through a period of rapid growth and evolution. The computer labs at Rutgers University provided me with what felt like a front row seat to the development of this new medium of the World Wide Web. My classes on hypermedia and information seeking behavior gave me a critical foundation for engaging with the emerging Web. When I graduated I was well positioned to build a career around the development of software applications for making library and archival material available on the Web. Now, after working in the field, I would like to pursue a PhD in the UMD iSchool to better understand the role that the Web plays as an information platform in our society, with a particular focus on how archival theory and practice can inform it. I am specifically interested in archives of born digital Web content, but also in what it means to create a website that gets called an archive. As the use of the Web continues to accelerate and proliferate it is more and more important to have a better understanding of its archival properties.

My interest in how computing (specifically the World Wide Web) can be informed by archival theory developed while working in the Repository Development Center under Babak Hamidzadeh at the Library of Congress. During my eight years at LC I designed and built both internally focused digital curation tools as well as access systems intended for researchers and the public. For example, I designed a Web based quality assurance tool that was used by curators to approve millions of images that were delivered as part of our various digital conversion projects. I also designed the National Digital Newspaper Program’s delivery application, Chronicling America, that provides thousands of researchers access to over 8 million pages of historic American newspapers every day. In addition, I implemented the data management application that transfers and inventories 500 million tweets a day to the Library of Congress. I prototyped the Library of Congress Linked Data Service which makes millions of authority records available using Linked Data technologies.

These projects gave me hands-on, practical experience using the Web to manage and deliver Library of Congress data assets. Since I like to use agile methodologies to develop software, this work necessarily brought me into direct contact with the people who needed the tools built, namely archivists. It was through these interactions over the years that I began to recognize that my Masters work at Rutgers University was in fact quite biased towards libraries, and lacked depth when it came to the theory and praxis of archives. I remedied this by spending about two years of personal study focused on reading about archival theory and practice, with a focus on appraisal, provenance, ethics, preservation and access. I also became a participating member of the Society of American Archivists.

During this period of study I became particularly interested in the More Product Less Process (MPLP) approach to archival work. I found that MPLP had a positive impact on the design of archival processing software since it oriented the work around making content available, rather than on often time consuming preservation activities. The importance of access to digital material is particularly evident since copies are easy to make, but rendering can often prove challenging. In this regard I observed that requirements for digital preservation metadata and file formats can paradoxically hamper preservation efforts. I found that making content available sooner rather than later can serve as an excellent test of whether digital preservation processing has been sufficient. While working with Trevor Owens on the processing of the Carl Sagan collection we developed an experimental system for processing born digital content using lightweight preservation standards such as BagIt in combination with automated topic model driven description tools that could be used by archivists. This work also leveraged the Web and the browser for access by automatically converting formats such as WordPerfect to HTML, so they could be viewable and indexable, while keeping the original file for preservation.

Another strand of archival theory that captured my interest was the work of Terry Cook, Verne Harris, Frank Upward and Sue McKemmish on post-custodial thinking and the archival enterprise. It was specifically my work with the Web archiving team at the Library of Congress that highlighted how important it is for record management practices to be pushed outwards onto the Web. I gained experience in seeing what makes a particular web page or website easier to harvest, and how impractical it is to collect the entire Web. I gained an appreciation for how innovation in the area of Web archiving was driven by real problems such as dynamic content and social media. For example I worked with the Internet Archive to archive Web content related to the killing of Michael Brown in Ferguson, Missouri by creating an archive of 13 million tweets, which I used as an appraisal tool, to help the Internet Archive identify Web content that needed archiving. In general I also saw how traditional, monolithic approaches to system building needed to be replaced with distributed processing architectures and the application of cloud computing technologies to easily and efficiently build up and tear down such systems on demand.

Around this time I also began to see parallels between the work of Matthew Kirschenbaum on the forensic and formal materiality of disk based media and my interests in the Web as a medium. Archivists usually think of the Web content as volatile and unstable, where turning off a web server can result in links breaking, and content disappearing forever. However it is also the case that Web content is easily copied, and the Internet itself was designed to route around damage. I began to notice how technologies such as distributed revision control systems, Web caches, and peer-to-peer distribution technologies like BitTorrent can make Web content extremely resilient. It was this emerging interest in the materiality of the Web that drew me to a position in the Maryland Institute for Technology in the Humanities where Kirschenbaum is the Assistant Director.

There are several iSchool faculty that I would potentially like to work with in developing my research. I am interested in the ethical dimensions to Web archiving and how technical architectures embody social values, which is one of Katie Shilton’s areas of research. Brian Butler’s work studying online community development and open data is also highly relevant to the study of collaborative and cooperative models for Web archiving. Ricky Punzalan’s work on virtual reunification in Web archives is also of interest because of its parallels with post-custodial archival theory, and the role of access in preservation. And Richard Marciano’s work on digital curation, in particular his recent work with the NSF on Brown Dog, would be an opportunity for me to further my experience building tools for digital preservation.

If admitted to the program I would focus my research on how Web archives are constructed and made accessible. This would include a historical analysis of the development of Web archiving technologies and organizations. I plan to look specifically at the evolution and deployment of Web standards and their relationship to notions of impermanence, and change over time. I will systematically examine current technical architectures for harvesting and providing access to Web archives. Based on user behavior studies I would also like to reimagine what some of the tools for building and providing access to Web archives might look like. I expect that I would spend a portion of my time prototyping and using my skills as a software developer to build, test and evaluate these ideas. Of course, I would expect to adapt much of this plan based on the things I learn during my course of study in the iSchool, and the opportunities presented by working with faculty.

Upon completion of the PhD program I plan to continue working on digital humanities and preservation projects at MITH. I think the PhD program could also qualify me to help build the iSchool’s new Digital Curation Lab at UMD, or similar centers at other institutions. My hope is that my academic work will not only theoretically ground my work at MITH, but will also be a source of fruitful collaboration with the iSchool, the Library and larger community at the University of Maryland. I look forward to helping educate a new generation of archivists in the theory and practice of Web archiving.

Cherry Hill Company: Learn About Islandora at the Amigos Online Conference

planet code4lib - Fri, 2015-08-28 19:42

On September 17, 2015, I'll be giving the presentation "Bring Your Local, Unique Content to the Web Using Islandora" at the Amigos Open Source Software and Tools for the Library and Archive online conference. Amigos is bringing together practitioners from around the library field who have used open source in projects at their library. My talk will be about the Islandora digital asset management system, the fundamental building block of the Cherry Hill LibraryDAMS service.

Every library has content that is unique to itself and its community. Islandora is open source software that enables libraries to store, preserve, and present that unique content to their communities and to the world. Built atop the popular Drupal content management system and the Fedora digital object repository, Islandora powers many digital projects on the...


SearchHub: How Shutterstock Searches 35 Million Images by Color Using Apache Solr

planet code4lib - Fri, 2015-08-28 18:00
As we count down to the annual Lucene/Solr Revolution conference in Austin this October, we’re highlighting talks and sessions from past conferences. Today, we’re highlighting Shutterstock engineer Chris Becker’s session on how they use Apache Solr to search 35 million images by color.

This talk covers some of the methods they’ve used for building color search applications at Shutterstock with Solr. A couple of these applications can be found in Shutterstock Labs, notably Spectrum and Palette. The talk goes over the steps for extracting color data from images and indexing it into Solr, as well as some ways to query color data in your Solr index. It also covers issues such as what relevance means when you’re searching for colors rather than text, and how you can achieve various effects by ranking on different visual attributes.

At the time of this presentation, Chris was the Principal Engineer of Search at Shutterstock, a stock photography marketplace selling over 35 million images, where he had worked on image search since 2008. In that time he worked on all the pieces of Shutterstock’s search technology ecosystem, from the core platform to relevance algorithms, search analytics, image processing, similarity search, internationalization, and user experience. He started using Solr in 2011 and has used it for building various image search and analytics applications.

Searching Images by Color: Presented by Chris Becker, Shutterstock from Lucidworks

Join us at Lucene/Solr Revolution 2015, the biggest open source conference dedicated to Apache Lucene/Solr, on October 13-16, 2015 in Austin, Texas. Come meet and network with the thought leaders building and deploying Lucene/Solr open source search technology. Full details and registration…
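One common way to make images searchable by color (a general technique, not necessarily Shutterstock’s exact pipeline) is to quantize each pixel into a coarse color bucket and index each bucket’s share of the image as a weighted field. A minimal sketch of the extraction step, operating on raw RGB tuples:

```python
from collections import Counter

def color_buckets(pixels, levels=4):
    """Quantize RGB pixels into a coarse levels^3 grid and return each
    bucket's share of the image, largest first. Bucket keys like "3_0_0"
    could then become weighted terms or dynamic fields in a Solr document."""
    step = 256 // levels  # 64 when levels=4
    counts = Counter((r // step, g // step, b // step) for r, g, b in pixels)
    total = len(pixels)
    return {
        f"{r}_{g}_{b}": count / total
        for (r, g, b), count in counts.most_common()
    }

# A 3-pixel "image": two reddish pixels and one blue pixel.
doc = color_buckets([(250, 10, 10), (240, 30, 20), (5, 5, 250)])
# doc maps the reddish bucket "3_0_0" to 2/3 and the blue bucket "0_0_3" to 1/3
```

At query time, a color picked by the user maps to the same bucket space, and ranking can boost documents whose bucket weights best match the requested color.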

The post How Shutterstock Searches 35 Million Images by Color Using Apache Solr appeared first on Lucidworks.

DPLA: DPLA Welcomes Four New Service Hubs to Our Growing Network

planet code4lib - Fri, 2015-08-28 16:50

The Digital Public Library of America is pleased to announce the addition of four Service Hubs that will be joining our Hub network. The Hubs represent Illinois, Michigan, Pennsylvania and Wisconsin.  The addition of these Hubs continues our efforts to help build local community and capacity, and further efforts to build an on-ramp to DPLA participation for every cultural heritage institution in the United States and its territories.

These Hubs were selected from the second round of our application process for new DPLA Hubs.  Each Hub has a strong commitment to bring together the cultural heritage content in their state to be a part of DPLA, and to build community and data quality among the participants.

In Illinois, the Service Hub responsibilities will be shared by the Illinois State Library, the Chicago Public Library, the Consortium of Academic and Research Libraries of Illinois (CARLI), and the University of Illinois at Urbana-Champaign. More information about the Illinois planning process can be found here. Illinois plans to make available collections documenting coal mining in the state, World War II photographs taken by an Illinois veteran and photographer, and collections documenting rural healthcare in the state.

In Michigan, the Service Hub responsibilities will be shared by the University of Michigan, Michigan State University, Wayne State University, Western Michigan University, the Midwest Collaborative for Library Services and the Library of Michigan.  Collections to be shared with the DPLA cover topics including the history of the Motor City, historically significant American cookbooks, and Civil War diaries from the Midwest.

In Pennsylvania, the Service Hub will be led by Temple University, Penn State University, University of Pennsylvania and Free Library of Philadelphia in partnership with the Philadelphia Consortium of Special Collections Libraries (PACSCL) and the Pennsylvania Academic Library Consortium (PALCI), among other key institutions throughout the state.  More information about the Service Hub planning process in Pennsylvania can be found here.  Collections to be shared with DPLA cover topics including the Civil Rights Movement in Pennsylvania, Early American History, and the Pittsburgh Iron and Steel Industry.

The final Service Hub, representing Wisconsin, will be led by Wisconsin Library Services (WiLS) in partnership with the University of Wisconsin-Madison, Milwaukee Public Library, University of Wisconsin-Milwaukee, Wisconsin Department of Public Instruction and Wisconsin Historical Society.  The Wisconsin Service Hub will build off of the Recollection Wisconsin statewide initiative.  Materials to be made available document the American Civil Rights Movement’s Freedom Summer and the diversity of Wisconsin, including collections documenting the lives of Native Americans in the state.

“We are excited to welcome these four new Service Hubs to the DPLA Network,” said Emily Gore, DPLA Director for Content. “These four states have each led robust, collaborative planning efforts and will undoubtedly be strong contributors to the DPLA Hubs Network.  We look forward to making their materials available in the coming months.”

DPLA: The March on Washington: Hear the Call

planet code4lib - Thu, 2015-08-27 19:00

Fifty-two years ago this week, more than 200,000 Americans came together in the nation’s capital to rally in support of the ongoing Civil Rights movement. It was at that march that Martin Luther King Jr.’s iconic “I Have A Dream” speech was delivered. And it was at that march that the course of American history was forever changed, in an event that resonates with protests, marches, and movements for change around the country decades later.

Get a new perspective on the historic March on Washington with this incredible collection from WGBH via Digital Commonwealth. This collection of audio pieces, 15 hours in total, offers uninterrupted coverage of the March on Washington, recorded by WGBH and the Educational Radio Network (a small radio distribution network that later became part of National Public Radio). This type of coverage was unprecedented in 1963, and offers a wholly unique view on one of the nation’s most crucial historic moments.

In this audio series, you can hear Martin Luther King Jr.’s historic speech, along with the words of many other prominent civil rights leaders: John Lewis, Bayard Rustin, Jackie Robinson, Roy Wilkins, Rosa Parks, and Fred Shuttlesworth. There are interviews with Hollywood elite like Marlon Brando and Arthur Miller, alongside the complex views of the “everyman” Washington resident. There’s also the folk music of the movement, recorded live, from Joan Baez, Bob Dylan, and Peter, Paul, and Mary. There are the stories of some of the thousands of Americans who came to Washington, D.C. that August: teachers, social workers, activists, and even a man who roller-skated to the march all the way from Chicago.

Hear speeches made about the global nonviolence movement, the labor movement, and powerful words from Holocaust survivor Joachim Prinz. Another notable moment in the collection is an announcement of the death of W.E.B DuBois, one of the founders of the NAACP and an early voice for civil rights issues.

These historic speeches are just part of the coverage, however. There are fascinating, if more mundane, announcements, too, about the amount of traffic in Washington and issues with both marchers’ and commuters’ travel (though they reported that “north of K Street appears just as it would on a Sunday in Washington”). Another big, though less notable, issue of the day, according to WGBH reports, was food poisoning from the chicken in boxed lunches served to participants at the march. There is also information about the preparation for the press, which a member of the march’s press committee says included more than 300 “out-of-town correspondents.” This was in addition to the core Washington reporters; radio stations like WGBH; TV networks; and international stations from Canada, Japan, France, Germany and the United Kingdom. These types of minute details and logistics offer a new window into a complex historic event that brought thousands of Americans together in the nation’s capital (though, as WGBH reported, not without its transportation hurdles!).

At the end of the demonstration, you can hear for yourself a powerful pledge, recited from the crowd, to further the mission of the march. It ends poignantly: “I pledge my heart and my mind and my body unequivocally and without regard to personal sacrifice, to the achievement of social peace through social justice.”

Hear the pledge, alongside the rest of the march as it was broadcast live, in this inspiring and insightful collection, courtesy of WGBH via Digital Commonwealth.

Banner image courtesy of the National Archives and Records Administration.

A view of the March on Washington, showing the Reflecting Pool and the Washington Monument. Courtesy of the National Archives and Records Administration.

Jonathan Rochkind: Am I a “librarian”?

planet code4lib - Thu, 2015-08-27 18:42

I got an MLIS degree, received a bit over 9 years ago, because I wanted to be a librarian, although I wasn’t sure what kind. I love libraries for their 100+ year tradition of investigation and application of information organization and retrieval (a fascinating domain, increasingly central to our social organization); I love libraries for being one of the few information organizations in our increasingly information-centric society that (often) aren’t trying to make a profit off our users so can align organizational interests with user interests and act with no motive but our user’s benefit; and I love libraries for their mountains of books too (I love books).

Originally I didn’t plan on continuing as a software engineer; I wanted to be ‘a librarian’.  But through becoming familiar with the library environment, including but not limited to job prospects, I eventually realized that IT systems are integral to nearly every task staff and users perform at or with a library. I could have a job using less-than-great tech, knowing that I could make it better but having no opportunity to do so, or I could have a job making it better.  The rest is history.

I still consider myself a librarian. I think what I do — design, build, and maintain internal and purchased systems by which our patrons interact with the library and our services over the web —  is part of being a librarian in the 21st century.

I’m not sure if all my colleagues consider me a ‘real librarian’ (and my position does not require an MLIS degree).  I’m also never sure, when strangers or acquaintances ask me what I do for work, whether to say ‘librarian’, since they assume a librarian does something different than what I spend my time doing.

But David Lee King in a blog post What’s the Most Visited Part of your Library? (thanks Bill Dueber for the pointer), reminds us, I think from a public library perspective:

Do you adequately staff the busiest parts of your library? For example, if you have a busy reference desk, you probably make sure there are staff to meet demand….

Here’s what I mean. Take a peek at some annual stats from my library:

  • Door count: 797,478 people
  • Meeting room use: 137,882 people
  • Library program attendance: 76,043 attendees
  • Art Gallery visitors: 25,231 visitors
  • Reference questions: 271,315 questions asked

How about website visits? We had 1,113,146 total visits to the website in 2014. The only larger number is our circulation count (2,300,865 items)….

…So I’ll ask my question again: Do you adequately staff the busiest parts of your library?

I don’t have numbers in front of me from our academic library, but I’m confident that our ‘website’ — by which I mean to include our catalog, ILL system, link resolver, etc, all of the places users get library services over the web, the things me and my colleagues work on — is one of the most, if not the most, used ‘service points’ at our library.

I’m confident that the online services I work on reach more patrons, and are cumulatively used for more patron-hours, than our reference or circulation desks.

I’m confident the same is true at your library, and almost every library.

What would it mean for an organization to take account of this?  “Adequate staffing”, as King says, absolutely. Where are staff positions allocated?  But also in general, how are non-staff resources allocated?  How is respect allocated? Who is considered a ‘real librarian’? (And I don’t really think it’s about the MLIS degree either, even though I led with that.) Are IT professionals (and their departments and managers) considered technicians who maintain ‘infrastructure’ as precisely specified by ‘real librarians’, or are they considered important professional partners collaborating in serving our users?  Who is consulted for important decisions? Is online service downtime taken as seriously as (or more seriously than) an unexpected closure of the physical building, and are resources allocated correspondingly? Is User Experience (UX) research done in a serious way into how your online services are meeting user needs, and are resources (including but not limited to staff positions) provided for it?

What would it look like for a library to take seriously that its online services are, by far, the most used service point in a library?  Does your library look like that?

In the 21st century, libraries are Information Technology organizations. Do those running them realize that? Are they run as if they were? What would it look like for them to be?

It would be nice to start with just some respect.

Although I realize that in many of our libraries respect may not be correlated with MLIS-holders or who’s considered a “real librarian” either.  There may be some perception that ‘real librarians’ are outdated. It’s time to update our notion of what librarians are in the 21st century, and to start running our libraries recognizing how central our IT systems, and their professional development, are to our ability to serve users as they deserve.


SearchHub: Indexing Arabic Content in Apache Solr

planet code4lib - Thu, 2015-08-27 18:27
As we count down to the annual Lucene/Solr Revolution conference in Austin this October, we’re highlighting talks and sessions from past conferences. Today, we’re highlighting Ramzi Alqrainy‘s session on using Solr to index and search documents and files in Arabic.

The Arabic language poses several challenges for Natural Language Processing (NLP), largely because Arabic, unlike European languages, has a very rich and sophisticated morphological system. This talk covers some of those challenges and how to solve them with Solr, and also presents the challenges handled by OpenSooq as a real-world case in the Middle East.

Ramzi Alqrainy is one of the most recognized experts in the Artificial Intelligence and Information Retrieval fields in the Middle East. He is an active researcher and technology blogger, with a focus on information retrieval.

Arabic Content with Apache Solr: Presented by Ramzi Alqrainy, OpenSooq from Lucidworks

Join us at Lucene/Solr Revolution 2015, the biggest open source conference dedicated to Apache Lucene/Solr, on October 13-16, 2015 in Austin, Texas. Come meet and network with the thought leaders building and deploying Lucene/Solr open source search technology. Full details and registration…

The post Indexing Arabic Content in Apache Solr appeared first on Lucidworks.

LITA: August Library Tech Roundup

planet code4lib - Thu, 2015-08-27 13:00
image courtesy of Flickr user cdevers (CC BY NC ND)

Each month, the LITA bloggers will share selected library tech links, resources, and ideas that resonated with us. Enjoy – and don’t hesitate to tell us what piqued your interest recently in the comments section!

Brianna M.

Here are some of the things that caught my eye this month, mostly related to digital scholarship.

John K.

Jacob S.

  • I’m thankful for Shawn Averkamp’s Python library for interacting with ContentDM (CDM), including a Python class for editing CDM metadata via their Catcher, which makes batch editing of CDM metadata records much less of a pain.
  • I recently watched an ALA webinar where Allison Jai O’Dell presented on TemaTres, a platform for publishing linked data controlled vocabularies.

Nimisha B.

There have been a lot of great publications and discussions in the realm of Critlib lately concerning cataloging and library discovery. Here are some, and a few other things of note:

Michael R.

  • Adobe Flash’s days seem numbered as Google Chrome will stop displaying Flash adverts by default, following Firefox’s lead. With any luck, Java will soon follow Flash into the dustbin of history.
  • NPR picked up the story of DIY tractor repairs running afoul of the DMCA. The U.S. Copyright Office is considering a DMCA exemption for vehicle repair; a decision is scheduled for October.
  • Media autoplay violates user control and choice. Video of a fatal, tragic Virginia shooting has been playing automatically in people’s feeds. Ads on autoplay are annoying, but this…!

Cinthya I.

These are a bit all over the map, but interesting nonetheless!

Bill D.

I’m all about using data in libraries, and a few things really caught my eye this month.

David K.

Whitni W.

Marlon H.

  • Ever since I read an ACRL piece about library adventures with Raspberry Pi, I’ve wanted to build my own as a terminal for catalog searches and as a self-checkout machine. Adafruit user Ruizbrothers‘ example of how to Build an All-In-One Desktop using the latest version of Raspberry Pi might be just what I need to finally get that project rolling.
  • With summer session over (and with it my MSIS, yay!) I am finally getting around to planning my upgrade from Windows 8.1 to 10. Lifehacker’s Alan Henry provides quite a few good reasons to opt for a clean install over the standard upgrade option. With more and more of my programs conveniently located just a quick download away and a wide array of cloud solutions safeguarding my data, I think I’ve found my weekend project.

Share the most interesting library tech resource you found this August in the comments!

William Denton: MIME type

planet code4lib - Thu, 2015-08-27 03:03

Screenshot of an email notification I received from Bell, as viewed in Alpine:

Inside: Content-Type: text/plain; charset="ISO-8859-1"

In the future, email billing will be mandatory and email bills will be unreadable.
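For the curious, the mismatch is easy to reproduce and inspect with Python’s standard `email` module; the message below is a made-up example, not Bell’s actual notification.

```python
from email import message_from_string
from email.policy import default

# A made-up notification whose body is HTML markup but whose header
# declares plain text; the mismatch shown in the screenshot above.
raw = (
    "From: billing@example.com\n"
    "Subject: Your bill is ready\n"
    'Content-Type: text/plain; charset="ISO-8859-1"\n'
    "\n"
    "<html><body><p>Your bill is ready.</p></body></html>\n"
)

msg = message_from_string(raw, policy=default)
print(msg.get_content_type())     # text/plain
print(msg.get_content_charset())  # iso-8859-1

# A plain-text client like Alpine trusts the header, so the reader
# sees the raw HTML tags verbatim.
print(msg.get_content())
```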

District Dispatch: Tech industry association releases copyright report

planet code4lib - Wed, 2015-08-26 20:37


The Computer and Communications Industry Association (CCIA) released a white paper “Copyright Reform for a Digital Economy” yesterday that includes many ideas also supported by libraries. The American Library Association shares the same philosophy that the purpose of the copyright law is to advance learning and benefit the public. We both believe that U.S. copyright law is a utilitarian system that rewards authors through a statutorily created but limited monopoly in order to serve the public. Any revision of the copyright law needs to reflect that viewpoint and recognize that today, copyright impacts everyone, not just media companies. The white paper does get a little wonky in parts, but check out at least the executive summary (or watch the webinar) to learn why we argue for fair use, licensing transparency, statutory damages reform and a respect for innovation and new creativity.

The post Tech industry association releases copyright report appeared first on District Dispatch.

Cynthia Ng: Making the NNELS Site Responsive

planet code4lib - Wed, 2015-08-26 19:09
Honestly, making a site responsive is nothing new, not even for me. Nevertheless, I wanted to document the process (no surprise there). Since as of the date of publishing this post, the responsive version of the theme hasn’t gone live yet, you get a sneak peek. I was a little worried because we have a … Continue reading Making the NNELS Site Responsive

Jonathan Rochkind: Virtual Shelf Browse is a hit?

planet code4lib - Wed, 2015-08-26 18:55

With the semester starting back up here, we’re getting lots of positive feedback about the new Virtual Shelf Browse feature.

I don’t have usage statistics or anything at the moment, but it seems to be a hit, allowing people to do something like a physical browse of the shelves, from their device screen.

Positive feedback has come from underclassmen as well as professors. I am still assuming it is disciplinarily specific (some disciplines/departments simply don’t use monographs much), but appreciation and use does seem to cut across academic status/level.

Here’s an example of our Virtual Shelf Browse.

Here’s a blog post from last month where I discuss the feature in more detail.


Peter Murray: Registration Now Open for a Fall Forum on the Future of Library Discovery

planet code4lib - Wed, 2015-08-26 14:23

Helping patrons find the information they need is an important part of the library profession, and in the past decade the profession has seen the rise of dedicated “discovery systems” to address that need. The National Information Standards Organization (NISO) is active at the intersection of libraries, content suppliers, and service providers, smoothing out the wrinkles between these parties.

Next in this effort is a two-day meeting where these three groups will hear about the latest developments and plan activities to advance the standards landscape. Registration for this meeting has just opened, and the announcement is included below. I’ll be in Baltimore in early October to participate and offer the closing keynote, and I hope you will be able to attend in person or participate in the live stream.

NISO will host a two-day meeting to take place in Baltimore, Maryland on October 5 & 6, 2015 on The Future of Library Discovery. In February 2015, NISO published a white paper commissioned from library consultant Marshall Breeding by NISO’s Discovery to Delivery Topic Committee. The in-person meeting will be an extension of the white paper with a series of presenters and panels offering an overview of the current resource discovery environment. Attendees will then participate in several conversations that will examine possibilities regarding how these technologies, methodologies, and products might be able to adapt to changes in the evolving information landscape in scholarly communications and to take advantage of new technologies, metadata models, or linking environments to better accomplish the needs of libraries to provide access to resources.

For the full agenda, please visit:

Confirmed speakers include:

  • Opening Keynote: Marshall Breeding, Independent Library Consultant
  • Scott Bernier, Senior Vice President, Marketing, EBSCO
  • Michael Levine-Clark, Professor / Associate Dean for Scholarly Communication and Collections Services, University of Denver Libraries
  • Gregg Gordon, President & CEO, Social Sciences Research Network (SSRN)
  • Neil Grindley, Head of Resource Discovery, Jisc
  • Steve Guttman, Senior Product Manager, ProQuest
  • Karen Resch McKeown, Director, Product Discovery, Usage and Analytics, Gale | Cengage Learning
  • Jason S. Price, Ph.D., Director of Licensing Operations, SCELC Library Consortium
  • Mike Showalter, Executive Director, End-User Services, OCLC
  • Christine Stohn, Product Manager, ExLibris Group
  • Julie Zhu, Manager, Discovery Service Relations, Marketing, Sales & Design, IEEE
  • Closing Keynote: Peter Murray, Library Technologist and blogger at the Disruptive Library Technology Jester

This event is generously sponsored by: EBSCO, Sage Publications, ExLibris Group, and Elsevier. Thank you!

Early Bird rates until September! The cost to attend the two-day seminar in person for NISO Members (Voting or LSA) is only $250.00; Nonmember: $300.00; and for Students: $150.00. To register, click here.

Please visit the event page for the most up-to-date information on the agenda, speakers and registration information.

For any questions regarding your in-person or virtual attendance at this NISO event, contact Juliana Wood, Educational Programs Manager, via email or phone 301.654.2512.

We hope to see you in Baltimore in the Fall!


DuraSpace News: DuraSpace Selects Gunter Media Group, Inc. as a Registered Service Provider for VIVO

planet code4lib - Wed, 2015-08-26 00:00

Winchester, MA: Gunter Media Group, Inc., an executive management consulting firm that helps libraries, publishers and companies leverage key operational, technical, business and human assets, has become a DuraSpace Registered Service Provider (RSP) for the VIVO Project. Gunter Media Group, Inc. will provide VIVO-related services such as strategic evaluation, project management, installation, search engine optimization and integration for institutions looking to join the VIVO network.

LITA: iPads in the Library

planet code4lib - Tue, 2015-08-25 17:00

Getting Started/Setting Things Up

Several years ago we added twenty iPad 2s to use in our children’s and teen programming. They have a variety of apps on them ranging from early literacy and math apps to GarageBand and iMovie to Minecraft and Clash of Clans*. Ten of the iPads are geared towards younger kids and ten are slanted towards teen interests.

Not surprisingly, the iPads were very popular when we first acquired them. We treated app selection as an extension of our collection development policy. Both the Children’s and Adult Services departments have a staff iPad they can use to try out apps before adding them to the programming iPads.

We bought a cart from Spectrum Industries (a WI-based company; we also have several laptop carts from them) so that we had a place to house and charge the devices. The cart has space for forty iPads/tablets total. We use an Apple MacBook and the Configurator app to handle updating the iPads and adding content to them. We created a Volume Purchase Program account in order to buy multiple copies of apps and then get reimbursed for taxes after the fact. The VPP does not allow for tax exempt status but the process of receiving refunds is pretty seamless.

The only ‘bothersome’ part of updating the iPads is switching the cable from the power plug to the USB ports (see above) and then making sure that all the iPads have their power cables plugged firmly into them to make a solid connection. Once I’d done it a few times it became less awkward. The MacBook needs to be plugged into the wall or it won’t have enough power for the iPads. It also works best running on an ethernet connection versus WiFi for downloading content.

It takes a little effort to set up the Configurator** but once it’s done, all you need to do is plug the USB into the MacBook, launch the Configurator, and the iPads get updated in about ten to fifteen minutes even if there’s an iOS update.

Maintaining the Service/Adjusting to Our Changing Environment

Everything was great. Patrons loved the iPads. They were easy to maintain. They were getting used.

Then the school district got a grant and gave every student, K-12, their own iPad.

They rolled them out starting with the high school students and eventually down through the Kindergartners. The iPads are the students’ responsibility. They use them for homework and note-taking. Starting in third grade they get to take them home over the summer.

Suddenly our iPads weren’t so interesting any more. Not only that, but our computer usage plummeted. Now that our students had their own Internet-capable device they didn’t need our computers any more. They do need our WiFi and not surprisingly those numbers went up.

There are restrictions for the students. For example, younger students can’t put games on their iPads. And while older students have fewer restrictions, they don’t tend to put pay apps on their iPads. That means we have things on our iPads that the students couldn’t or didn’t have.

I started meeting with the person at the school district in charge of the program a couple times a year. We talk about technology we’re implementing at our respective workplaces and figure out what we can do to supplement and help each other. I’ll unpack this in a future post and talk about creating local technology partnerships.

Recently I formed a technology committee consisting of staff from every department in the library. One of the things we’ll be addressing is the iPads. We want to make sure that they’re being used. Also, it won’t be long before they’re out of date, and we’ll have to decide whether we’re replacing them and whether we’d recycle the old devices or repurpose them (as OPACs, potentially?).

We don’t circulate iPads but I’d certainly be open to that idea. How many of you have iPads/tablets in your library? What hurdles have you faced?

* This is a list of what apps are on the iPads as of August 2015. Pay apps are marked with a $:

  • Children’s iPads (10): ABC Alphabet Phonics, Air Hockey Gold, Bub – Wider, Bunny Fun $, Cliffed: Norm’s World XL, Dizzypad HD, Don’t Let the Pigeon Run This App! $, Easy-Bake Treats, eliasMATCH $, Escape – Norm’s World XL, Fairway Solitaire HD, Fashion Math, Go Away, Big Green Monster! $, Hickory Dickory Dock, Jetpack Joyride, Make It Pop $, Mango Languages, Minecraft – Pocket Edition $, Moo, Baa, La La La! $, My Little Pony: Twilight Sparkle, Teacher for a Day $, NFL Kicker 13, Offroad Legends Sahara, OverDrive, PewPew, PITFALL!, PopOut! The Tale of Peter Rabbit! $, Punch Quest, Skee-Ball HD Free, Sound Shaker $, Spot the Dot $, The Cat in the Hat – Dr. Seuss $, Waterslide Express
  • Teen iPads (10): Air Hockey Gold, Bad Flapping Dragon, Bub – Wider, Can You Escape, Clash of Clans, Cliffed: Norm’s World XL, Codea $, Cut the Rope Free, Despicable Me: Minion Rush, Dizzypad HD, Easy-Bake Treats, Escape – Norm’s World XL, Fairway Solitaire HD, Fashion Math, Fruit Ninja Free, GarageBand $, iMovie $, Jetpack Joyride, Mango Languages, Minecraft – Pocket Edition $, NFL Kicker 13, Ninja Saga, Offroad Legends Sahara, OverDrive, PewPew, PITFALL!, Punch Quest, Restaurant Town, Skee-Ball HD Free, Stupid Zombies Free, Temple Run, Waterslide Express, Zombies vs. Ninja

** It’s complicated but worth spelling out so I’m working on a follow-up post to explain the process of creating a VPP account and getting the Configurator set up the way you want it.

Open Knowledge Foundation: Global Open Data Index 2015 is open for submissions

planet code4lib - Tue, 2015-08-25 12:43

The Global Open Data Index measures and benchmarks the openness of government data around the world, then presents this information in a way that is easy to understand and easy to use. Each year the open data community and Open Knowledge produce a ranking of countries, peer reviewed by our network of local open data experts. The Index launched in 2012 as a tool to track the state of open data around the world: more and more governments were beginning to set up open data portals and make commitments to release open government data, and we wanted to know whether those commitments were really translating into the release of actual data.

The Index focuses on 15 key datasets that are essential for transparency and accountability (such as election results and government spending data), and those vital for providing critical services to citizens (such as maps and water quality). Today, we are pleased to announce that we are collecting submissions for the 2015 Index!

The Global Open Data Index tracks whether this data is actually released in a way that is accessible to citizens, media and civil society, and is unique in that it crowdsources its survey results from the global open data community. Crowdsourcing this data provides a tool for communities around the world to learn more about the open data available in their respective countries, and ensures that the results reflect the experience of civil society in finding open information, rather than accepting government claims of openness. Furthermore, the Global Open Data Index is not only a benchmarking tool; it also plays a foundational role in sustaining the open government data community around the world. If, for example, the government of a country does publish a dataset, but this is not clear to the public and it cannot be found through a simple search, then the data can easily be overlooked. Governments and open data practitioners can review the Index results to locate the data, see how accessible the data appears to citizens, and, in the case that improvements are necessary, advocate for making the data truly open.


Methodology and Dataset Updates

After four years of leading this global civil society assessment of the state of open data around the world, we have learned a few things and have updated both the datasets we are evaluating and the methodology of the Index itself to reflect these learnings! One of the major changes has been to run a massive consultation of the open data community to determine the datasets that we should be tracking. As a result of this consultation, we have added five datasets to the 2015 Index. This year, in addition to the ten datasets we evaluated last year, we will also be evaluating the release of water quality data, procurement data, health performance data, weather data and land ownership data. If you are interested in learning more about the consultation and its results, you can read more on our blog!

How can I contribute?

2015 Index contributions open today! We have done our best to make contributing to the Index as easy as possible. Check out the contribution tutorial in English and Spanish, ask questions in the discussion forum, reach out on twitter (#GODI15) or speak to one of our 10 regional community leads! There are countless ways to get help so please do not hesitate to ask! We would love for you to be involved. Follow #GODI15 on Twitter for more updates.

Important Dates

The Index team is hitting the road! We will be talking to people about the Index at the African Open Data Conference in Tanzania next week and will also be running Index sessions at both AbreLATAM and ConDatos in two weeks! Mor and Katelyn will be on the ground so please feel free to reach out!

Contributions will be open from August 25th, 2015 through September 20th, 2015. After the 20th of September we will begin the arduous peer review process! If you are interested in getting involved in the review, please do not hesitate to contact us. Finally, we will be launching the final version of the 2015 Global Open Data Index Ranking at the OGP Summit in Mexico in late October! This will be your opportunity to talk to us about the results and what that means in terms of the national action plans and commitments that governments are making! We are looking forward to a lively discussion!

