Feed aggregator

FOSS4Lib Recent Releases: Koha - 3.22.1, 3.20.7, 3.18.13

planet code4lib - Fri, 2015-12-25 11:37
Package: Koha
Release Date: Friday, December 25, 2015

Last updated December 25, 2015. Created by David Nind on December 25, 2015.

Monthly maintenance releases for Koha.

See the release announcements on the Koha community website for the details.

FOSS4Lib Recent Releases: Koha - 3.22

planet code4lib - Fri, 2015-12-25 11:29
Package: Koha
Release Date: Thursday, November 26, 2015

Last updated December 25, 2015. Created by David Nind on December 25, 2015.

This is the six-monthly feature release of the Koha open source integrated library system.

It was released on 26 November 2015 and includes 10 new features, 155 enhancements and 381 bug fixes.

For more information see the release notes:
https://koha-community.org/koha-3-22-released/

The recommended installation method is to use the packages for Debian and Ubuntu, rather than the tar file or git.

Open Knowledge Foundation: Unlocking Election Results Data: Signs of Progress but Challenges Still Remain

planet code4lib - Thu, 2015-12-24 06:10

This blog post was written by the NDI election team’s Michael McNulty and Benjamin Mindes.

How “open” are election results data around the world? Answering that question just became much easier. For the first time, the Global Open Data Index 2015 assessed election results data based on whether the results are made available at the polling station level. In previous years, the Index looked at whether election results were available at a higher (constituency/district) level, but not at the polling station level.

As a result, the 2015 Global Open Data Index provides the most useful global assessment to date on which countries are and are not making election results available in an open way. It also highlights specific open data principles that most countries are meeting, as well as principles that most countries are not meeting. This helps inform the reform agenda for open election data advocates in the months and years ahead.

Before we take a look at the findings and possible ways forward, let’s first consider why the Global Open Data Index’s shift from constituency/district level results to polling station results is important. This shift in criteria has shaken up the rankings this year, which has caused some discussion about why polling station-level results matter. Read on to find out!

Why are Polling Station-level Election Results Important?

Meets the open data principle of “granularity”

A commonly accepted open data principle is that data should be made available at the most granular, or “primary,” level — the level at which the source data is collected. (See the 8 Principles of Open Government Data principle on Primary, and the G8 Open Data Charter section on Quality and Quantity.) In the case of election results, the primary level refers to the location where voters cast their ballots — the polling station. (See the Open Election Data Initiative section on Granularity; polling stations are sometimes called precincts, polling streams, or tables, depending on the context.) So, if election results are not counted at the polling station level and/or are only made available in an aggregate form, such as only at the ward/constituency/district level, then that dataset is not truly open, since it does not meet the principle of granularity. (See the Open Election Data Initiative section on Election Results for more details.)

Promotes transparency and public confidence

Transparency means that each step is open to scrutiny and that there can be independent verification of the process. If results aren’t counted and made public at the polling station level, there is a clear lack of transparency, because there is no way to verify whether the higher-level tabulated results can be trusted. This makes election fraud easier to conceal and mistakes harder to catch, which can undermine public confidence in elections, distort the will of the voter, and, in a close election, even change the outcome.

For example, let’s imagine that a tabulation center is aggregating ballots from 10 polling stations. Independent observers at two of those polling stations reported several people voting multiple times, as well as officials stuffing ballot boxes. If polling station results were made available, observers could check whether the number of ballots cast exceeds the number of registered voters at those polling stations, which would support the observers’ findings of fraud. However, if polling station level results aren’t made available, the results from the two “problem” polling stations would be mixed in with the other eight polling stations. There would be no way to verify what the turnout was at the two problem polling stations, and, thus, no way to cross-check the observers’ findings with the official results.
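
To make that cross-check concrete, here is a toy sketch in Python; the station names and counts are invented, but the test is the one observers could run if polling station-level results were published:

    # Toy sketch: flag any polling station where ballots cast exceed
    # registered voters. All numbers are invented for illustration.
    stations = {
        "PS-01": {"registered": 500, "ballots": 452},
        "PS-02": {"registered": 500, "ballots": 497},
        "PS-03": {"registered": 500, "ballots": 611},  # a "problem" station
    }

    for name, c in stations.items():
        if c["ballots"] > c["registered"]:
            print(f"{name}: {c['ballots']} ballots cast but only "
                  f"{c['registered']} registered voters -- investigate")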

Reduces tension

Election observers can combine their assessment of the election day process with results data to verify or dispel rumors at specific polling stations, but only if polling station-level results are made public.

Bolsters public engagement

When voters are able to check the results in their own community (at their polling station), it can help build confidence and increase their engagement and interest in elections.

Enhances and expands the uses of election results data

Polling station-level results data can also be used to enhance participation rates. Civil society groups can use polling station-level turnout data to more precisely target their voter education and mobilization campaigns during the next elections. Likewise, if high invalid ballot rates are found in specific polling stations, political parties, candidates and civic groups can target their voter information campaigns there the next time.

Aligns with an emerging global norm

Making results available at the polling station level is rapidly becoming a global norm. In most countries, political parties, election observers, the media, and voters have come to expect nothing less than for polling station level results to be posted publicly in a timely way and shared freely.

The 2015 Open Data Index shows how common this practice has become. Of the 122 countries studied, 71 (58%) provide election results data (including results, registered voters, and the number of invalid and spoiled ballots) at the polling station level. There are some significant differences across regions, however. Sub-Saharan Africa and Asia had the lowest percentage of countries providing polling station-level results data (42% and 41%, respectively). Eastern Europe and Latin America had the highest, at 71% each.

What Does the Index Tell Us about How to Open Up and Use Election Data?

Drawing on the 2015 Global Open Data Index findings and on open election data initiatives at the global, regional and national levels, we’ve highlighted some key priorities below.

1. Advocacy for making polling-station level results publicly available

While most countries make polling station-level results available, over 40% of the 122 countries researched in the Global Open Data Index still do not. At a regional level, Sub-Saharan Africa, Asia and the Middle East & North Africa have the furthest to go.

2. Ensuring election results data is truly open

Making election data available is a good first step, but it can’t be freely and easily used and redistributed by the public if it is not truly “open.” Election data is open when it is released in a manner that is granular, available for free online, complete and in bulk, analyzable (machine-readable), non-proprietary, non-discriminatory and available to anyone, license-free, and permanent. Equally important, election data must be released in a timely way. For election results, this means near real-time publication of provisional polling station results, with frequent updates.

The Global Open Data Index assesses many of these criteria, and the 2015 findings help highlight which criteria are more and less commonly met across the globe. On the positive side, of the 71 countries that make polling-station level results available, nearly all of them provide the data in a digital (90%), online (89%), public (89%) and free (87%) manner. In addition, 92% of those 71 countries have up-to-date data.

However, there are some significant shortcomings across most countries. Only 46% of the 71 countries provided data that was analyzable (machine-readable). Similarly, only 46% of countries studied provided complete, bulk data sets. Western Europe (67%) had the highest percentage of countries providing complete, bulk data, while the Middle East & North Africa and Sub-Saharan Africa (both 38%) had the lowest.

3. Not just election results: Making other types of election data open

While election results often get the most attention, election data goes far beyond results. It involves information relating to all aspects of the electoral cycle, including the legal framework, decision-making processes, electoral boundaries, polling stations, campaign finance, the voter registry, ballot qualification, procurement, and complaints and disputes resolution. All of these categories of data are essential for assessing the integrity of elections, and open data principles should be applied to all of them.

4. Moving from transparency to accountability

Opening election data helps make elections more transparent, but that’s just the beginning. To unlock the potential of election data, people need to have the knowledge and skills to analyze and use the data to promote greater inclusiveness and public engagement in the process, as well as to hold electoral actors, such as election management bodies and political parties, accountable. For example, with polling station data, citizen election observer groups around the world have used statistics to deploy observers to a random, representative sample of polling stations, giving them a comprehensive, accurate assessment of election day processes. With access to the voters list, many observer groups verify the accuracy of the list and highlight areas for improvement in the next elections.

Despite the increasing availability of election data, in most countries parties, the media and civil society do not yet have the capacity to take full advantage of the possibilities. The National Democratic Institute (NDI) is developing resources and tools to help equip electoral stakeholders, particularly citizen election observers, to use and analyze election data. We encourage more efforts like this so that the use of election data can reach its full potential.

For more on NDI’s Open Election Data Initiative, check out the website (available in English, Spanish and Arabic) and like us on Facebook.

David Rosenthal: Signposting the Scholarly Web

planet code4lib - Wed, 2015-12-23 16:00
At the Fall CNI meeting, Herbert Van de Sompel and Michael Nelson discussed an important paper they had just published in D-Lib, Reminiscing About 15 Years of Interoperability Efforts. The abstract is:
Over the past fifteen years, our perspective on tackling information interoperability problems for web-based scholarship has evolved significantly. In this opinion piece, we look back at three efforts that we have been involved in that aptly illustrate this evolution: OAI-PMH, OAI-ORE, and Memento. Understanding that no interoperability specification is neutral, we attempt to characterize the perspectives and technical toolkits that provided the basis for these endeavors. With that regard, we consider repository-centric and web-centric interoperability perspectives, and the use of a Linked Data or a REST/HATEOAS technology stack, respectively. We also lament the lack of interoperability across nodes that play a role in web-based scholarship, but end on a constructive note with some ideas regarding a possible path forward.

They describe their evolution from OAI-PMH, a custom protocol that used the Web simply as a transport for remote procedure calls, to Memento, which uses only the native capabilities of the Web. They end with a profoundly important proposal they call Signposting the Scholarly Web which, if deployed, would be a really big deal in many areas. Some further details are on GitHub, including this somewhat cryptic use case:

Use case like LOCKSS is the need to answer the question: What are all the components of this work that should be preserved? Follow all rel="describedby" and rel="item" links (potentially multiple levels perhaps through describedby and item).

Below the fold I explain what this means, and why it would be a really big deal for preservation.

Much of the scholarly Web consists of articles, each of which has a Digital Object Identifier (DOI). Herbert and Michael's paper's DOI is 10.1045/november2015-vandesompel. You can access it by dereferencing this link: http://dx.doi.org/10.1045/november2015-vandesompel. CrossRef's DOI resolver will redirect you to the current location of the article, providing location-independence. The importance of location-independent links, and the fact that they are frequently not used, was demonstrated by Martin Klein and a team from the Hiberlink project in Scholarly Context Not Found: One in Five Articles Suffers from Reference Rot. I discussed this article in The Evanescent Web.
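
To see that location-independence in action, here is a minimal Python sketch (using the third-party requests library) that dereferences the DOI above and prints each redirect hop on the way to the article's current location:

    import requests

    # Dereference the DOI; requests follows the resolver's redirects.
    resp = requests.get("http://dx.doi.org/10.1045/november2015-vandesompel")
    for hop in resp.history:           # each intermediate redirect
        print(hop.status_code, hop.url)
    print("landed at:", resp.url)      # the article's current location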

But Herbert and Michael's paper is an anomaly. The DOI resolution service redirects you to the full text HTML of the paper. This is not what usually happens. A more representative but very simple example is: http://dx.doi.org/10.2218/ijdc.v8i1.248. You are redirected to a "landing page" that contains the abstract, some information about the journal, and a lot of links. Try "View Source" to get some idea of how complex this simple example is; it links to 36 other resources. Some, such as stylesheets, should be collected for preservation. Others, such as the home pages of the journal's funders, should not be. Only one of the linked resources is the PDF of the article, which is the resource most needing preservation.

If a system is asked to ingest and preserve this DOI, it needs to be sure that, whatever else it got, it did get the PDF. In this very simple, well-organized case there are two ways to identify the link leading to the PDF:
  • The link the reader would click on to get the PDF whose target is http://www.ijdc.net/index.php/ijdc/article/view/8.1.107/300 and whose anchor text is "PDF".
  • A meta-tag with name="citation_pdf_url" content="http://www.ijdc.net/index.php/ijdc/article/view/8.1.107/300".
So we have two heuristics for finding the PDF: the anchor text and the citation_pdf_url meta-tag. Others might include anchor text such as "Download" or "Full Text". Similarly, the system needs to use heuristics to decide which links, such as those to the funders' home pages, not to follow. Sites vary a lot, and in practice preservation crawlers need a range of such heuristics. Most landing pages are far more complex than this example.
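
A minimal sketch of those two heuristics in Python, using requests and BeautifulSoup; real preservation crawlers layer many more rules on top of this:

    import requests
    from bs4 import BeautifulSoup

    landing = requests.get("http://dx.doi.org/10.2218/ijdc.v8i1.248")
    soup = BeautifulSoup(landing.text, "html.parser")

    # Heuristic 1: the citation_pdf_url meta-tag.
    meta = soup.find("meta", attrs={"name": "citation_pdf_url"})
    if meta:
        print("meta-tag:", meta["content"])

    # Heuristic 2: anchor text such as "PDF", "Download" or "Full Text".
    for a in soup.find_all("a", href=True):
        if a.get_text(strip=True) in ("PDF", "Download", "Full Text"):
            print("anchor:", a["href"])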

Top half of Figure 5 (from the paper).

The LOCKSS system's technology for extracting metadata, such as which URLs are articles, abstracts, figures, and so on, was outlined in a talk at the IIPC's 2013 GA, and detailed in one of the documents submitted for the CLOCKSS Archive's TRAC audit. It could be much simpler and more reliable if Herbert and Michael's proposal were adopted. Figure 5 in their paper shows two examples of signposting; the relevant one is the top half. It shows the normal case of accessing an article via a DOI. The DOI redirects to a landing page whose HTML text, as before, links to many resources. Some, such as A, are not part of the article. Some, such as the PDF, are. These resources are connected by typed links, as shown in the diagram. These typed links are implemented as link HTTP headers whose rel attribute expresses the type of the link using an IANA-registered type such as describes or item.

Now, when the preservation crawler is redirected to and fetches the landing page, the HTTP headers contain a set of link entries. Fetching each of them ensures that all the resources the publisher thinks are part of the article are collected for preservation. No heuristics are needed; there is no need even to parse the landing page HTML to find links to follow.
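
Under the proposal, the crawler's job reduces to something like the following sketch. The requests library already parses Link headers into resp.links, keyed by rel type; note that the example DOI does not actually emit these headers today, so this shows what would happen once a publisher implements signposting:

    import requests

    # Fetch the landing page; signposting puts typed links in the
    # HTTP Link: header, which requests exposes as resp.links.
    resp = requests.get("http://dx.doi.org/10.2218/ijdc.v8i1.248")
    for rel, link in resp.links.items():
        if rel in ("item", "describedby"):   # parts of the article
            part = requests.get(link["url"])
            print("collected", rel, link["url"], len(part.content), "bytes")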

Of course, this describes an ideal world. Experience with the meta-tags that publishers use to include bibliographic metadata suggests some caution in relying solely on these link headers. Large publishing platforms could be expected to get them right most of the time, headers on smaller platforms might be less reliable. Some best practices would be needed. For example, are script tags enough to indicate JavaScript that is part of the article, or do the JavaScript files that are part of the article need a separate link header?

Despite these caveats it is clear that even if this way of unambiguously defining the boundaries of the artefact identified by a DOI was not universal, it would significantly reduce the effort needed to consistently and completely collect these artefacts for preservation. Ingest is the major cost of preservation, and "can't afford to collect" is the major cause of content failing to reach future readers. Thus anything that can significantly reduce the cost of collection is truly a big deal for preservation.

Villanova Library Technology Blog: Foto Friday: A last word from Dickens

planet code4lib - Wed, 2015-12-23 15:48

It was always said of him, that he knew how to keep Christmas well, if any man alive possessed the knowledge. May that be truly said of us, and all of us! And so, as Tiny Tim observed, God Bless Us, Every One!

A Christmas Carol – Charles Dickens, 1843.

Laura Hutelmyer is the photography coordinator for the Communication and Service Promotion Team and Special Acquisitions Coordinator in Resource Management.



Library of Congress: The Signal: The Top 10 Blog Posts of 2015 on The Signal

planet code4lib - Wed, 2015-12-23 15:13

Mummers Parade on New Year’s day, Philadelphia, Pennsylvania. Photo by Carol M. Highsmith, Jan 1, 2011. Carol M. Highsmith Archive, Library of Congress Prints and Photographs Division.

It’s the end of the year on The Signal, and it gives us the chance to look back at our most popular posts of the year.

As we have in past years, we were thrilled to share projects and updates that are happening in the community or for the community. Digital stewardship on a national scale requires engaging many communities, and here at The Signal we’re pleased to share work happening at the Library and at other organizations.

As I have, I hope you take a quick read back through these posts. They are a great reflection of the diversity of digital stewardship topics and the interest in them: standards, workflows, tools, and networking and collaboration between Library partners and practitioners working in the field. We’re looking forward to 2016, when we hope to share more of the same, as well as activities and projects that highlight national and international digital library initiatives.

Thanks to all of our contributors and readers for a great blogging year!  Here’s the entire list of top 10 posts of 2015, ranked by page views based on data as of December 22:

  1. The Personal Digital Archiving 2015 Conference
  2. Tracking Digital Collections at the Library of Congress, from Donor to Repository
  3. Mapping Libraries: Creating Real-time Maps of Global Information
  4. All in the (Apple ProRes 422 Video Codec) Family
  5. Creating Workflows for Born-Digital Collections: An NDSR Project Update
  6. A New Interface and New Web Archive Content at Loc.gov
  7. Introducing the Federal Web Archiving Working Group
  8. Reaching Out and Moving Forward: Revising the Library of Congress’ Recommended Format Specifications
  9. Digital Forensics and Digital Preservation: An Interview with Kam Woods of BitCurator
  10. Cultural Institutions Embrace Crowdsourcing

Is your favorite blog post on the list? Did you have a favorite one that didn’t make the list? Share it in the comments below!

Villanova Library Technology Blog: Available for proofreading: How to Become an Actor

planet code4lib - Tue, 2015-12-22 21:14

Our latest Distributed Proofreaders project is another vintage “how to” manual from publisher Frank Tousey. How to Become an Actor, as the title suggests, deals with theatrical matters, and like many books in this series, it is quite ambitious for its brief length, covering not just acting but also makeup, set design and other technical matters. As if that were not enough, it also includes several short plays.

The modern reader is unlikely to learn many useful skills from this text, but it does provide considerable insight into the popular entertainments of its time. To help make the book even more accessible through the creation of a new electronic edition, you can read this previous blog post to learn about the proofreading process, then join in the work at the project page.



Villanova Library Technology Blog: Falvey Hosts Stress Busters Open House

planet code4lib - Tue, 2015-12-22 20:40

Stress Busters, an open house sponsored by Falvey Memorial Library and the Villanova Electronic Enthusiasts Club, was held on Dec. 10, from 1 to 6 p.m. Soft pretzels, hot drinks, games, Star Wars themed coloring books, cootie catchers, a floor puzzle and a special appearance by Will D. Cat were featured. In keeping with the Star Wars theme, Han Solo (Sarah Wingo, liaison librarian for English and Theatre) and Darth Vader (Michelle Callaghan, Communication and Service Promotion team graduate assistant) attended the open house and also roamed the campus inviting students to the Stress Busters open house.

Students enjoy soft pretzels and other free snacks

Han Solo dueling with Darth Vader

Will D. Cat and student play with cootie catcher

Student coloring in a Star Wars coloring book

Will D. Cat, Rob LeBlanc and students playing video game

On Friday, Dec. 11, Stress Free Happy Healthy Hours were held from 10 a.m. until 4 p.m. in room 205 in Falvey. Each hour featured a different activity, such as grown-up coloring books and make-your-own stress balls, along with snacks and drinks. From noon until 2 p.m. visiting therapy dogs were available for petting.

Photos by Alice Bampton



Villanova Library Technology Blog: Librarians 'recycle' snappy mnemonic aid for student information literacy

planet code4lib - Tue, 2015-12-22 20:37

Rob LeBlanc, first-year experience/humanities librarian, and Barbara Quintiliano, nursing/life sciences and instructional services librarian, recently published an article, “Recycling C.R.A.P.: Reframing a Popular Mnemonic for Library Instruction,” in Pennsylvania Libraries: Research and Practice, volume 3, number 2 (Fall 2015).

Librarians Barbara Quintiliano and Rob LeBlanc, with their manuscript

Quintiliano and LeBlanc were interested in applying the new Framework for Information Literacy for Higher Education that was adopted by the ACRL (Association of College and Research Libraries) in 2015. This Framework replaced the previous Information Literacy Competency Standards for Higher Education. The two librarians were reshaping their information literacy programs to incorporate the new Framework.

Quintiliano explains, “Rob and I were tossing around ideas one day about how the new Framework could be applied, and we thought of the C.R.A.P. acronym which had been used by instruction librarians … to teach students how to evaluate information, especially information that they find on the web. With a bit of imagination and prestidigitation, we were able to transform the acronym into a concise, snappy way of conveying the Framework concepts to first-year students. As first-year librarian, Rob immediately started to put it into practice.”

Quintiliano and LeBlanc originally hoped to present a session on the topic at the fall 2015 Pennsylvania Library Association conference. That conference, however, already had more proposals than time slots available, so the organizers suggested the topic would make an interesting article for Pennsylvania Libraries: Research and Practice. Consequently, the two collaborated on the article, which was accepted for publication.

What is C.R.A.P. in the context of library instruction? According to LeBlanc and Quintiliano it stands for “Conversation, Revision, Authority and Property.” These concepts are taught by the authors so that students can properly evaluate information needed to write college-level research papers. The full article can be accessed here.



Villanova Library Technology Blog: The Highlighter: A Christmas present – Christmases Past

planet code4lib - Tue, 2015-12-22 20:34

Photos from Falvey Christmas parties featuring staff we remember fondly:

For “How to” videos about the Library, click the “Help” button on Falvey’s homepage.



Villanova Library Technology Blog: Farewell to Librarian Kristyna Carroll

planet code4lib - Tue, 2015-12-22 20:32

Falvey held a reception on Dec. 2 to say farewell to Kristyna Carroll, a research support librarian. Jutta Seibert, team leader for Academic Integration, thanked Carroll for her five years of service to Falvey Memorial Library. Carroll, a 2007 graduate of Villanova, came to Falvey as a librarian in 2010 after graduating from Drexel University with a master’s degree in library and information science. She is leaving to spend more time with her family.

Kristyna Carroll’s cake

Kristyna Carroll cutting her cake

Librarians and staff enjoying the reception

Villanova Library Technology Blog: The Curious ‘Cat: The first thing for fun?

planet code4lib - Tue, 2015-12-22 20:25

This week, the Curious ‘Cat asks Villanova students, “After your final final, what’s the first thing you want to do for fun?”

Nina Rossiello—“Nap!” 

Gracie Kim—“Probably go out after finals to eat with some friends”

Timothy Chobot—“Watch Monday Night Football”

Taylor Wright—“I’ll probably watch about six hours of Netflix in a row.” 

Sarah DeAngelis—“I’ll probably go out to dinner with my roommates as a little celebration, and eat good off-campus food. Then probably pack up because I’m going abroad next semester so I’ll say goodbye to all my friends. I’m going to Dublin; I’m super excited.”


Robert Hurlbut—“play NHL with my roommate—it’s an Xbox game.”



Villanova Library Technology Blog: 'Cat in the Stacks: Have a Mindful Christmas

planet code4lib - Tue, 2015-12-22 20:23

I’m Michelle Callaghan, a second-year graduate student at Villanova University. This is our column, “‘Cat in the Stacks.” I’m the ‘cat. Falvey Memorial Library is the stacks. I’ll be posting about living that scholarly life, from research to study habits to embracing your inner-geek, and how the library community might aid you in all of it.

I have been on a mindfulness kick – that I’m hoping is more than just a kick – and I’m going to try to bring the practice into the Christmas season. Here’s why I think you should, too!

Let me preface this by saying I’m not by nature a mindful person – that is, I don’t always keep my brain and heart where my feet are. My mind is usually on a hundred different things, especially now that I’ve been in grad school for a few semesters, and I’m very rarely “in the moment,” as they say. But what if I try? Even for just a couple of minutes a day, what if I make the effort – even if it’s the last thing I want to be doing in my brief windows of spare time?

Only good things would happen, of course.

Let me also admit that doing mindfulness practices – noticing the weather, noticing colors, actually listening, enjoying food – has been pretty challenging for me and, no, I’m not always (rarely) successful, but I do think the effort is worth it for the few times that I am! And on that note, I read an article about mindfulness during Christmastime and how that lack of “Christmas magic” you feel as an adult is probably because you don’t even really notice or remember it’s Christmastime. Life is busy! Hours and days go by fast! Then, poof, just like that, the holiday season has passed and then it’s just cold without the pretty lights.

Photo © Nevit Dilmen

If you’re religious or spiritual then it might be useful to rely on that as a way of staying in the moment and staying centered in the vibe of the season. But staying centered can be as simple as appreciating the effort your neighbors put into the decorating this year. It can be tasting, really tasting your friend’s homemade cookies. It can be really wanting to give a gift to a loved one for no other reason but to see them happy. It can be as simple as slowing down, if only for five minutes, as a gift to yourself.

It can be reading a really good book this winter break.

Have a fantastic holiday season, Villanova. Warmth and light!

Article by Michelle Callaghan, graduate assistant on the Communication and Service Promotion team. She is currently pursuing her MA in English at Villanova University.



Eric Hellman: xISBN: RIP

planet code4lib - Tue, 2015-12-22 18:31

When I joined OCLC in 2006 (via acquisition), one thing I was excited about was the opportunity to make innovative uses of OCLC's vast bibliographic database. And there was an existence proof that this could be done, it was a neat little API that had been prototyped in OCLC's Office of Research: xISBN.

xISBN was an example of a microservice: it offered a small piece of functionality and did it very fast. Throw it an ISBN, and it would give you back a set of related ISBNs. Ten years ago, microservices and mashups were all the rage. So I was delighted when my team was given the job of "productizing" the xISBN service: moving it out of research and into the marketplace.

Last week,  I was sorry to hear about the imminent shutdown of xISBN. But it got me thinking about the limitations of services like xISBN and why no tears need be shed on its passing.

The main function of xISBN was to say "Here's a group of books that are sort of the same as the book you're asking about." That summary instantly tells you why xISBN had to die, because any time a computer tells you something "sort of", it's a latent bug: where you draw the line between something that's the same and something that's different is a matter of opinion and depends on the use you want to make of the distinction. For example, if you ask for A Study in Scarlet, you might be interested in a version in Chinese, in a paperback version, or in Sherlock Holmes compilations that include A Study in Scarlet. For each question you want a slightly different answer. If you are a developer needing answers to these questions, you would combine xISBN with other information services to get what you need.

Today we have better ways to approach this sort of problem. Serious developers don't want a microservice, they want rich "Linked Data". In 2015, most of us can afford our own data-crunching big-data-stores-in-the-cloud and we don't need to trust algorithms we can't control. OCLC has been publishing rather nice Linked Data for this purpose. So, if you want all the editions for Cory Doctorow's Homeland, you can "follow your nose" and get all the data you need (a code sketch follows the steps below):

  1. First you look up the isbn at http://www.worldcat.org/isbn/9780765333698
  2. which leads you to http://www.worldcat.org/oclc/795174333.jsonld (containing a few more ISBNs);
  3. you can follow the associated "work" record: http://experiment.worldcat.org/entity/work/data/1172568223
  4. which yields a bunch more ISBNs.
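
Here is a rough Python sketch of that walk. The property names and graph shapes are simplified assumptions about WorldCat's JSON-LD, so treat it as an outline rather than working code for every record:

    import requests

    def graph(url):
        """Fetch a URL as JSON-LD and return its list of entities."""
        doc = requests.get(url, headers={"Accept": "application/ld+json"}).json()
        return doc.get("@graph", [doc])

    def values(entities, key):
        """Yield every value of key across the entities."""
        for e in entities:
            v = e.get(key, [])
            yield from (v if isinstance(v, list) else [v])

    # Steps 1-2: the ISBN URI redirects to the OCLC record.
    record = graph("http://www.worldcat.org/isbn/9780765333698")

    # Step 3: follow the record's link to the associated work entity.
    work_uri = next(values(record, "exampleOfWork"), None)

    # Step 4: harvest every ISBN mentioned in the work's graph.
    if work_uri:
        print(sorted(set(values(graph(work_uri), "isbn"))))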

It's a lot messier than xISBN, but that's mostly because the real world is messy. Every application requires a different sort of cleaning up, and it's not all that hard.

If cleaning up the mess seems too intimidating, and you just want light-weight ISBN hints from a convenient microservice, there's always "thingISBN". ThingISBN is a data exhaust stream from the LibraryThing catalog. To be sustainable, microservices like xISBN need to be exhaust streams. The big cost to any data service is maintaining the data, so unless maintaining that data is in the engine block of your website, the added cost won't be worth it. But if you're doing it anyway, dressing the data up as a useful service costs you almost nothing and benefits the environment for everyone. Let's hope that OCLC's Linked Data services are of this sort.

In thinking about how I could make the data exhaust from Unglue.it more ecological, I realized that a microservice connecting ISBNs to free ebook files might be useful. So with a day of work, I added the "Free eBooks by ISBN" endpoint to the Unglue.it api.

xISBN, you lived a good micro-life. Thanks.

Library of Congress: The Signal: Plans for Assessing Preservation Storage Options and Lifecycles at MIT Libraries: An NDSR Project Update

planet code4lib - Tue, 2015-12-22 16:19

The following is a guest post by Alexandra Curran, National Digital Stewardship Resident at MIT Libraries.  She participates in the NDSR-Boston cohort.

Hello readers, and happy holidays!

Alexandra Curran, NDSR-Boston Resident at MIT Libraries

Looking back at the last few months of my residency working in collaboration with the Digital Preservation Unit (DPU) at MIT Libraries and especially their Lead for Digital Preservation, Nancy McGovern, I realize that I have had a tremendous opportunity to learn from experts in the field. My previous experiences focused primarily on digital collections from an access rather than preservation perspective.  They stemmed from my interest in data management and compositing during my film school days and continued into my library work. Throughout my work and education, I utilized best practices and standards to manage and create workflows and lifecycles in order to make content available. They were inherent in everything I did, but I knew that I still had much to learn in regards to digital preservation.

MIT Libraries is the host of the Digital Preservation Management (DPM) workshop series. The five organizational stages of the DPM model, co-developed by Anne R. Kenney and Nancy Y. McGovern and published in 2003 as "The Five Organizational Stages of Digital Preservation," are:

  1. Acknowledge: Understanding that digital preservation is a local concern;
  2. Act: Initiating digital preservation projects;
  3. Consolidate: Seguing from projects to programs;
  4. Institutionalize: Incorporating the larger environment; and
  5. Externalize: Embracing inter-institutional collaboration and dependency.

My project is contributing to the transition by MIT Libraries to Consolidate (stage 3). I have already attended my first DPM topical workshop, on digital forensics, and we are using the DPM model as a frame for my preservation storage project.

My project outcomes and deliverables will consider preservation storage using the three legs of the DPM stool: organizational infrastructure, technological infrastructure, and resources framework. My results will contribute to future policies about preservation storage at MIT Libraries; the Libraries' Digital Preservation Unit is promulgating "preservation storage" as a more holistic term than OAIS's Archival Storage.

A big part of this project is contributing to the collaborative assessment process with the DPU, members of MIT Libraries IT staff, and content curators within the Libraries.  In support of the evaluation, I have been studying relevant digital preservation standards, such as Reference Model for an Open Archival Information System (OAIS) and Trustworthy Repositories Audit & Certification: Criteria and Checklist (TRAC). These will guide the review of potential preservation storage services and options for MIT Libraries. Some factors under consideration during the assessment include:

  • Whether the service is open source or proprietary;
  • How storage nodes are managed;
  • What type of preservation security services/mechanisms they use and how they work (fixity checking, sketched after this list, is one example);
  • What disaster recovery policies and procedures they have implemented; and
  • How their exit strategy, if they have one, works.
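
As one concrete illustration of the security mechanisms item above, fixity checking is the audit most preservation storage services start with. A minimal Python sketch, with an invented manifest format and file paths:

    import hashlib
    import json
    import pathlib

    # Invented manifest format: {"masters/file001.tif": "<sha256 hex>", ...}
    manifest = json.loads(pathlib.Path("manifest.json").read_text())

    for name, expected in manifest.items():
        actual = hashlib.sha256(pathlib.Path(name).read_bytes()).hexdigest()
        print(name, "ok" if actual == expected else "FIXITY FAILURE")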

Preservation Storage in the Management of Digital Content Workflow at MIT Libraries

In addition to reviewing standards, I will work with content curators at MIT Libraries to identify the digital content they intend to preserve that is not currently in preservation storage. Next I will develop a plan for how that content might be moved to preservation storage. I will review the Libraries' digital content workflow to determine if additions will be necessary to move existing digital content into preservation storage. I am looking forward to working with researchers in the Libraries' very cool Digital Sustainability Lab to examine the ways in which potential preservation storage options might function and how they might work with current tools. Based on the results of the assessment, I will create recommendations about good enough, good, and optimal options for the Libraries to consider when choosing preservation storage services and lifecycles. I plan to share updates and outcomes from the project through the NDSR Boston blog, the digital preservation website at MIT, and presentations at NDSR Boston and other events.

The project is only one part of the residency requirements. The residents also have the opportunity to identify and engage in a range of professional development activities. So far, I have used these as a way to learn about the community and to introduce myself. I presented at the New England National Digital Stewardship Alliance (NE NDSA) conference in September. In November I volunteered at iPRES at the University of North Carolina (UNC), where I learned about fascinating recent developments in digital preservation. During iPRES I particularly enjoyed the Preservation Storage Community Discussion, a very relevant session for my project. Many repositories are grappling with digital preservation storage and it was great to be there for the discussion. The Policy and Documentation Clinic was also helpful and insightful, because the outcomes and implications of my project will, I hope, inform the preservation storage policy and planning here at MIT Libraries.

I look forward to returning to UNC in January for CurateGear 2016. The NDSR Boston residents will be among the participants in the Preservation Administrators Interest Group (PAIG) presentations at ALA Midwinter on January 9th at 9 a.m. And we will host our mid-year event on January 26th from 3 to 5 p.m. at Harvard. I hope to see you there!

LibUX: 030 – The Future of WordPress is JavaScript

planet code4lib - Tue, 2015-12-22 14:26

Amanda and Michael talk about what it means now that the architecture for making API endpoints easier was added to WordPress 4.4 alongside the announcement of Calypso, the JavaScript front-end of WordPress-dot-com websites.

It’s a big deal. This episode has more technical jargon than normal, but we try to explain what this means for higher ed and libraries in the big picture. Namely, WordPress is becoming a more powerful, malleable, front-end-framework-agnostic option that can power robust sites and applications – especially in terms of cost-effectiveness. This affords libraries more negotiating power and options in the vendor space.
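
By way of illustration, reading a site's posts becomes a plain HTTP call from any front end, Calypso included. WordPress 4.4 ships the REST API plumbing, and at the time the companion WP REST API plugin supplies the /wp/v2 content routes; the site URL below is a placeholder:

    import requests

    # Read the five most recent posts over the WordPress REST API.
    posts = requests.get("https://example-library.edu/wp-json/wp/v2/posts",
                         params={"per_page": 5}).json()
    for post in posts:
        print(post["date"], post["title"]["rendered"])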

We also talk about issues around learning JavaScript, the ugggh-factor of having to learn a new thing — at least, that’s what Michael is saying: #libweb and #heweb folk need to learn JavaScript — and the like.

If you like you can download the MP3.

You can subscribe to LibUX on Stitcher, iTunes, or plug our feed right into your podcatcher of choice. Help us out and say something nice. You can find every podcast on www.libux.co.

The post 030 – The Future of WordPress is JavaScript appeared first on LibUX.

LibUX: 029 – Brianna Marshall and Cameron Cook

planet code4lib - Tue, 2015-12-22 14:10

This episode of LibUX was published through podcatchers of your choice last week, but — yikes! — I forgot to post it to the website. I — Michael — interview Brianna Marshall and Cameron Cook about the redesign of UW’s Research Data Services ( see the original on the Wayback Machine ).

Just look at that pun

We recorded just a day or two after LITA Forum, so toward the end I ask Brianna and Cameron about their favorite talks and takeaways.

If you like you can download the MP3.

You can subscribe to LibUX on Stitcher, iTunes, or plug our feed right into your podcatcher of choice. Help us out and say something nice. You can find every podcast on www.libux.co.

The post 029 – Brianna Marshall and Cameron Cook appeared first on LibUX.

Information Technology and Libraries: Editorial Board Thoughts: Library Analytics and Patron Privacy

planet code4lib - Tue, 2015-12-22 05:14

Information Technology and Libraries: Reference is Dead, Long Live Reference: Electronic Collections in the Digital Age

planet code4lib - Tue, 2015-12-22 05:14

In a literature survey of how reference collections have changed to accommodate patrons’ web-based information-seeking behaviors, one notes a marked “us vs. them” mentality — a fear that the Internet might render reference irrelevant. These anxieties are oft-noted in articles urging libraries to embrace digital and online reference sources. Why all the ambivalence? Citing existing research and literature, this essay explores myths about the supposed superiority of physical reference collections and how patrons actually use them, potential challenges associated with electronic reference collections, and how providing vital e-reference collections benefits the library as well as its patrons.
