Planet Code4Lib - http://planet.code4lib.org

District Dispatch: Department of Labor announces training and child care grants

Tue, 2015-12-29 20:37

The Department of Labor announced recently that up to $25 million in grants will be available through the Strengthening Working Families Initiative. Their press release states the following:

The grants will support public-private partnerships that bridge gaps between local workforce development and child-care systems. In addition to addressing these systemic barriers, funded programs will enable parents to access training and the customized supportive services needed for jobs in IT, health care, advanced manufacturing, and other fields. All participants in grant-funded programs must be custodial parents, legal guardians, foster parents, or others standing in loco parentis with at least one dependent. Up to 25 percent of the grantee's total budget may be used to provide quality, affordable care and other services to support their participation in training.

“For too many working parents, access to quality, affordable child care remains a persistent barrier to getting the training and education they need to move forward on a stronger, more sustainable career path,” said U.S. Secretary of Labor Thomas E. Perez. “Our economy works best when we field a full team. That means doing everything we can to provide flexible training options and streamlined services that can help everyone in America realize their dreams.”

Grants up to $4 million will be awarded to partnerships that include the public workforce system, education and training providers, business entities, and local child-care or human-service providers. In addition, all partnerships must include at least three employers. Grantees will also be required to secure an amount equal to at least 25 percent of the total requested funds through outside leveraged resources.

These grants will be awarded in spring 2016 for programs beginning in July 2016. More information about this funding is available at grants.gov.

The post Department of Labor announces training and child care grants appeared first on District Dispatch.

LITA: LITA at ALA Midwinter – Boston

Tue, 2015-12-29 16:56

If you’re going to ALA Midwinter in Boston, don’t miss these excellent LITA activities.  Click the links for more information.  And check out the entire:

LITA at ALA Midwinter schedule

Friday, January 8, 2016, 8:30 am to 4:30 pm

LITA “Makerspaces: Inspiration and Action” tour at Midwinter!

How do you feel about 40,000 square feet full of laser cutters, acetylene torches, screen presses, and sewing machines? Or community-based STEAM programming for kids? Or lightsabers? If these sound great to you, this tour is for you.

Register Now

Saturday, January 9, 2016, 10:30 am to 11:30 am, Boston Convention & Exhibition Center – 104 BC

All Committees, and all Interest Groups, meetings

This is where and when all the face-to-face meetings happen. If you want to become involved in working with LITA, show up, volunteer, meet your colleagues, express your interests, and share your skills.

Sunday, January 10, 2016, 10:30 am to 11:30 am, Boston Convention & Exhibition Center – 253 A

Top Technology Trends Discussion Session

Part of the ALA News You Can Use series, this is LITA's premier program on changes and advances in technology. Top Technology Trends features our ongoing roundtable discussion about trends and advances in library technology by a panel of LITA technology experts and thought leaders. The panelists for this session include:

  • Moderator: Lisa Bunker, Pima County Public Library
  • Jason Griffey, Berkman Center for Internet & Society at Harvard University
  • Jim Hahn, University of Illinois at Urbana-Champaign
  • Jamie Hollier, Anneal, Inc. and Commerce Kitchen
  • Alex Lent, Millis Public Library
  • Thomas Padilla, Michigan State University
  • Rong Tang, Simmons College
  • Ken Varnum, University of Michigan

Sunday, January 10, 2016, 4:30 pm to 5:30 pm, Seaport Hotel, Room Harborview 2

LITA Open House

LITA Open House is an opportunity for current and prospective members to talk with Library and Information Technology Association (LITA) leaders, committee chairs, and interest group participants. Share information, encourage involvement in LITA activities, and help attendees build professional connections.

Sunday, January 10, 2016, 6:00 pm to 8:00 pm, MIJA Cantina & Tequila Bar Quincy Market – 1 Faneuil Hall Marketplace – Boston, MA

LITA Happy Hour

Please join the LITA Membership Development Committee and members from around the country for networking, good cheer, and great fun! Expect lively conversation and excellent drinks. Cash Bar. Map the location.

Monday, January 11, 2016, 8:30 am to 10:00 am, Boston Convention & Exhibition Center – 104 BC

LITA Town Meeting

Join your fellow LITA members for breakfast and a discussion led by President-elect Aimee Fifarek about LITA's strategic path. We will focus on how LITA's goals–collaboration and networking; education and sharing of expertise; advocacy; and infrastructure–help our organization serve you and the broader library community. This Town Meeting will help us turn those goals into plans that will guide LITA going forward.


Questions or Comments?

For all other questions or comments related to LITA at ALA Midwinter Boston, contact LITA at (312) 280-4268 or Mark Beatty, mbeatty@ala.org

FOSS4Lib Recent Releases: veraPDF - 0.8

Mon, 2015-12-28 17:13

Last updated December 28, 2015. Created by Peter Murray on December 28, 2015.

Package: veraPDF
Release Date: Tuesday, December 22, 2015

FOSS4Lib Recent Releases: Metadata Hopper - 1.0

Mon, 2015-12-28 17:09

Last updated December 28, 2015. Created by Peter Murray on December 28, 2015.

Package: Metadata Hopper
Release Date: Tuesday, December 22, 2015

FOSS4Lib Updated Packages: Metadata Hopper

Mon, 2015-12-28 17:05

Last updated December 28, 2015. Created by Peter Murray on December 28, 2015.

Metadata Hopper is a tool built by the University of Illinois at Chicago Library in cooperation with Chicago Collections. It is designed to work with the eXtensible Text Framework digital library platform from California Digital Library.

Metadata Hopper allows users to contribute content to an XTF repository and to enhance that content through shared navigation facets or 'tags.' It also generates a Dublin Core metadata file that works alongside the original metadata file to create a consistent search and browse interface.

Package Type: Metadata Manipulation
License: BSD Revised
Development Status: Production/Stable
Operating System: Browser/Cross-Platform
Programming Language: Python
Database: PostgreSQL
Works well with: eXtensible Text Framework

David Rosenthal: Annotation progress from Hypothes.is

Mon, 2015-12-28 16:00
I've blogged before on the importance of annotation for scholarly communication and the hypothes.is effort to implement it. At the beginning of December Hypothesis made a major announcement:

On Tuesday, we announced a major new initiative to bring this vision to reality, supported by a coalition of over 40 of the world’s essential scholarly organizations, such as JSTOR, PLOS, arXiv, HathiTrust, Wiley and HighWire Press, who are linking arms to establish a new paradigm of open collaborative annotation across the world’s knowledge.

Below the fold, more details on this encouraging development.

There is now an official W3C Working Group to standardize annotation. The initiative laid out their near-term goals:
Our goal is that within three years, annotation can be deployed across much of scholarship. Today, coalition members are in different phases of engagement. Some have already implemented annotation natively and are working to increase adoption and develop new uses, others are at the very beginning of the process. Objectives for the first year, in 2016, are to begin socializing the progress that’s been made, to identify opportunities and discuss potential challenges, lay out a common roadmap, and for most to begin the process of design and experimentation necessary to implement annotation in their own context.

Getting the various sites' commenting and annotation systems to converge on Web standards to provide interoperability will be critical. The announcement continues:
More information about this initiative and the coalition members is available here, including a video with interviews of key members. Nature News covered it here and we also blogged about what led to the formation of this coalition. Here is the reaction that it generated.

Over the last year, we've already seen integration in a number of scholarly publishing platforms and portals, such as USC Scalar and Ubiquity Press. University of Michigan Press has been using Hypothesis for both pre- and post-publication discussion of texts, including this ”annotation event” around the publication of James Brown’s Ethical Programs: Hospitality and the Rhetorics of Software.

It will be very interesting to see how this progresses. As with Memento and Signposting, getting the technology to work is the first but easier step. Getting it widely deployed and thus useful is much harder.
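For a sense of what converging on a Web standard looks like in practice, the W3C group's data model represents each annotation as a JSON-LD document with a body and a target. A minimal sketch in Python; the target URL, quoted text, and comment are invented purely for illustration:

```python
import json

# An annotation in the shape of the W3C Web Annotation data model:
# a body (the comment) attached to a target (a selection in a document).
annotation = {
    "@context": "http://www.w3.org/ns/anno.jsonld",
    "type": "Annotation",
    "body": {"type": "TextualBody", "value": "This claim needs a citation."},
    "target": {
        "source": "http://example.org/article.html",
        "selector": {"type": "TextQuoteSelector", "exact": "the disputed sentence"},
    },
}
print(json.dumps(annotation, indent=2))
```

Because the model is plain JSON-LD, any site's commenting system can emit or consume it without sharing code, which is what makes cross-platform interoperability plausible.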

Patrick Hochstenbach: Doing my daily Sktchy portraits

Mon, 2015-12-28 12:10
Filed under: Doodles, Sketchbook Tagged: art, art model, illustration, portait, rotring, sketchbook, sketches, sktchy

Meredith Farkas: My year in reading, 2015

Sun, 2015-12-27 17:20

I’ve learned over time that work/life balance not only looks different for every person, but looks very different for an individual from one moment to the next. The needs you and your loved ones have and the things that give you the most pleasure can radically change over time. This year, I really tried to make more time in my life for reading. While reading helped transport me to other places when I was dealing with horrible work stress that no longer exists in my current (wonderful) job, I’ve found the practice of reading every evening psychologically beneficial. Reading centers me… puts me in a good mental space. I still have some nights when I’m so worn out I just want to watch guilty pleasure TV (and do), but I’ve found myself turning to books more often than not this year.

Here’s a list of books I’ve read in 2015. The books noted with an asterisk were ones I couldn’t get through. Ones in bold were books I REALLY enjoyed or found particularly moving (it’s hard to say you enjoyed a book that emotionally put you through the wringer). I put what I’d probably say was my favorite book read this year as the featured image for this post.

This year, I’ve included some books that I read to my son, though only the long ones and only those we’ve read since summer. We are pretty religious about our approximately one hour of reading to our son each night (though now he has to do 15-20 minutes of reading to us as well), so we’ve covered a lot of territory over time. Getting into the Harry Potter books this Fall has been such a great pleasure for me (I’m the only one allowed to read those because my husband can’t do a British accent). It’s so fun to see the books through his eyes this time around.

The Martian: A Novel – Andy Weir – a rare case where the movie was better than the book. Great story, but badly written IMHO.

The Residue Years* – Mitchell Jackson

Yes, Please – Amy Poehler

Americanah – Chimamanda Ngozi Adichie – I wanted to keep following the compelling story of the two Nigerian protagonists long after the book ended.

Everything I Never Told You – Celeste Ng – about growing up under the weight of parental expectations. A tragedy.

Dept. of Speculation – Jenny Offill – a book that defies description. Almost stream of consciousness, but not in an artsy pretentious way.

Juliet, Naked – Nick Hornby

Matterhorn: A Novel of the Vietnam War – Karl Marlantes – this book shook me to my core.

All the Light We Cannot See – Anthony Doerr – simply beautiful

The Invisible Circus – Jennifer Egan

Fangirl – Rainbow Rowell

The Life Changing Magic of Tidying Up – Marie Kondo – a bit dogmatic, but my house is so much less cluttered and better organized now!!!

The Book of Unknown Americans – Cristina Henríquez

Find Me – Laura Van Den Berg

Lean In – Sheryl Sandberg – I liked this much more than I’d expected to given that I don’t feel I really lean in.

The Devil in the White City – Erik Larson

So You’ve Been Publicly Shamed – Jon Ronson

The Girl on the Train – Paula Hawkins

Our Souls at Night – Kent Haruf – I was so sad for it to end, knowing I’d never read another new book by one of my very favorite authors. RIP.

Funny Girl – Nick Hornby

The Phantom Bully (Star Wars – Jedi Academy 3) – Jeffrey Brown

Last Things – Jenny Offill

Euphoria – Lily King – easily the most overrated book of the past year IMHO

Harry Potter and the Chamber of Secrets – J. K. Rowling

Harry Potter and the Prisoner of Azkaban – J. K. Rowling

Harry Potter and the Goblet of Fire – J. K. Rowling

Ramona the Pest – Beverly Cleary

The Children Act – Ian McEwan

Disclaimer – Renée Knight

A Spool of Blue Thread* – Anne Tyler

Henry and Ribsy – Beverly Cleary

Attachments: A Novel – Rainbow Rowell

Henry and Beezus – Beverly Cleary

I Don’t Know Where You Know Me From: My Life as a Co-Star* – Judy Greer

The Narrow Road to the Deep North* – Richard Flanagan – I was surprised I couldn’t get into this book

Annihilation: A Novel – Jeff VanderMeer – I’d expected to like this, but it was a real disappointment

In the Unlikely Event – Judy Blume

Modern Romance – Aziz Ansari

Paper Towns – John Green

A Window Opens – Elisabeth Egan

Mothers, Tell Your Daughters – Bonnie Jo Campbell – female characters so vulnerable, real, and raw it’s painful to read. But also such compelling short stories

Ed Summers: Preservation

Sat, 2015-12-26 05:00

Some (lighter) winter break reading from Hagen (1997):

Vitality consists of this birth and death. This impermanence, this constant arising and fading away, are the very things that make our lives vibrant, wonderful and alive.

Yet we usually want to keep things from changing. We want to preserve things, to hold onto them. As we shall see, this desire to hold on, to somehow stop change in its tracks, is the greatest source of woe and horror and trouble in our lives. (p. 21)

Hagen, S. (1997). Buddhism plain and simple. Charles E. Tuttle Co.

Ed Summers: Dreamtime

Sat, 2015-12-26 05:00

A snippet from some winter break reading from Morton (2013) about what he calls phasing in his study of Hyperobjects:

Napangati’s Untitled 2011 is a phase of Dreamtime, a phasing painting whose waves undulate like Hendrix’s guitar. The painting itself forces me to see higher and higher dimensions of itself, as if layers of phase space were being superimposed on the other layers. These layers appear deep, as if I could reach my arms into them. They float in front of the picture surface. They move. The painting holds me, spellbound. The painting looks like a map or a plot in phase space, which is just what it is, in one sense: a map of how women walked across some sand hills. Yet what appears to be a map turns out to be a weapon. The painting emits spacetime, emits an aesthetic field. The painting is a unit, a quantum that executes a function. It is a device, not just a map but also a tool, like a shaman’s rattle or a computer algorithm. The function of the painting seems to be to imprint me with the bright red shadow of a hyperobject, the Australian Outback, the Dreamtime, the long history of the Pintupi Nine, the Lost Tribe, some of the last Neolithic humans on Earth. (p. 74-75)

Morton, T. (2013). Hyperobjects: Philosophy and ecology after the end of the world. University of Minnesota Press.

FOSS4Lib Recent Releases: Koha - 3.22.1, 3.20.7, 3.18.13

Fri, 2015-12-25 11:37
Package: Koha
Release Date: Friday, December 25, 2015

Last updated December 25, 2015. Created by David Nind on December 25, 2015.

Monthly maintenance releases for Koha.

See the release announcements for the details:

FOSS4Lib Recent Releases: Koha - 3.22

Fri, 2015-12-25 11:29
Package: Koha
Release Date: Thursday, November 26, 2015

Last updated December 25, 2015. Created by David Nind on December 25, 2015.

This is the six-monthly feature release of the Koha open source integrated library system.

It was released on 26 November 2015 and includes 10 new features, 155 enhancements and 381 bug fixes.

For more information see the release notes:
https://koha-community.org/koha-3-22-released/

The recommended installation method is to use the packages for Debian and Ubuntu, rather than the tar file or git.

Open Knowledge Foundation: Unlocking Election Results Data: Signs of Progress but Challenges Still Remain

Thu, 2015-12-24 06:10

This blog post was written by the NDI election team: Michael McNulty and Benjamin Mindes.

How “open” are election results data around the world? Answering that question just became much easier. For the first time, the Global Open Data Index 2015 assessed election results data based on whether the results are made available at the polling station level. In previous years, the Index looked at whether election results were available at a higher (constituency/district) level, but not at the polling station level.

As a result, the 2015 Global Open Data Index provides the most useful global assessment to date on which countries are and are not making election results available in an open way. It also highlights specific open data principles that most countries are meeting, as well as principles that most countries are not meeting. This helps inform the reform agenda for open election data advocates in the months and years ahead.

Before we take a look at the findings and possible ways forward, let’s first consider why the Global Open Data Index’s shift from constituency/district level results to polling station results is important. This shift in criteria has shaken up the rankings this year, which has caused some discussion about why polling station-level results matter. Read on to find out!

Why are Polling Station-level Election Results Important?

Meets the open data principle of “granularity”

A commonly accepted open data principle is that data should be made available at the most granular, or “primary,” level — the level at which the source data is collected. (See the 8 Principles of Open Government Data principle on Primary, and the G8 Open Data Charter section on Quality and Quantity.) In the case of election results, the primary level refers to the location where voters cast their ballots — the polling station. (See the Open Election Data Initiative section on Granularity; polling stations are sometimes called precincts, polling streams, or tables depending on the context.) So, if election results are not counted at the polling station level and/or only made available in an aggregate form, such as only at the ward/constituency/district level, then that dataset is not truly open, since it does not meet the principle of granularity. (See the Open Election Data Initiative section on Election Results for more details.)

Promotes transparency and public confidence

Transparency means that each step is open to scrutiny and that there can be independent verification of the process. If results aren’t counted and made public at the polling station level, there is a clear lack of transparency, because there is no way to verify whether the higher-level tabulated results can be trusted. This makes election fraud easier to conceal and mistakes harder to catch, which can undermine public confidence in elections, distort the will of the voter, and, in a close election, even change the outcome.

For example, let’s imagine that a tabulation center is aggregating ballots from 10 polling stations. Independent observers at two of those polling stations reported several people voting multiple times, as well as officials stuffing ballot boxes. If polling station results were made available, observers could check whether the number of ballots cast exceeds the number of registered voters at those polling stations, which would support the observers’ findings of fraud. However, if polling station level results aren’t made available, the results from the two “problem” polling stations would be mixed in with the other eight polling stations. There would be no way to verify what the turnout was at the two problem polling stations, and, thus, no way to cross-check the observers’ findings with the official results.
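The cross-check in this example is simple arithmetic once station-level results are published. A minimal sketch in Python; the station identifiers and counts are invented for illustration:

```python
# Flag polling stations where ballots cast exceed registered voters --
# a basic integrity check that is only possible with station-level data.
stations = [
    {"id": "PS-001", "registered": 450, "ballots_cast": 312},
    {"id": "PS-002", "registered": 380, "ballots_cast": 401},  # over 100% turnout
    {"id": "PS-003", "registered": 520, "ballots_cast": 497},
]

def flag_overvotes(stations):
    """Return ids of stations whose ballots cast exceed registration."""
    return [s["id"] for s in stations if s["ballots_cast"] > s["registered"]]

print(flag_overvotes(stations))  # → ['PS-002']
```

With only the aggregate (1210 ballots against 1350 registered voters), the anomaly at PS-002 would be invisible, which is precisely the point the paragraph above makes.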

Reduces tension

Election observers can combine their assessment of the election day process with results data to verify or dispel rumors at specific polling stations, but only if polling station-level results are made public.

Bolsters public engagement

When voters are able to check the results in their own community (at their polling station), it can help build confidence and increase their engagement and interest in elections.

Enhances and expands the uses of election results data

Polling station-level data can also help enhance participation rates. Civil society groups can use polling station-level turnout data to more precisely target their voter education and mobilization campaigns during the next elections. Similarly, if high invalid-ballot rates are found at specific polling stations, political parties and candidates can target their voter information campaigns there the next time.

Aligns with an emerging global norm

Making results available at the polling station level is rapidly becoming a global norm. In most countries, political parties, election observers, the media, and voters have come to expect nothing less than for polling station level results to be posted publicly in a timely way and shared freely.

The 2015 Open Data Index shows how common this practice has become. Of the 122 countries studied, 71 (58%) provide election results (including results, registered voters, and number of invalid and spoiled ballots) at the polling station level. There are some significant differences across regions, however. Sub-Saharan Africa and Asia had the lowest percentage of countries providing polling station-level results data (42% and 41%, respectively). Eastern Europe and Latin America have the highest, at 71% each.

What Does the Index Tell Us about How to Open Up and Use Election Data?

Drawing on the 2015 Global Open Data Index findings and on open election data initiatives at the global, regional and national levels, we’ve highlighted some key priorities below.

1. Advocacy for making polling-station level results publicly available

While most countries make polling station-level results available, over 40% of the 122 countries researched in the Global Open Data Index still do not. At a regional level, Sub-Saharan Africa, Asia, and the Middle East & North Africa have the furthest to go.

2. Ensuring election results data is truly open

Making election data available is a good first step, but it can’t be freely and easily used and redistributed by the public if it is not truly “open.” Election data is open when it is released in a manner that is granular, available for free online, complete and in bulk, analyzable (machine-readable), non-proprietary, non-discriminatory and available to anyone, license-free and permanent. Equally as important, election data must be released in a timely way. For election results, this means near real-time publication of provisional polling station results, with frequent updates.

The Global Open Data Index assesses many of these criteria, and the 2015 findings help highlight which criteria are more and less commonly met across the globe. On the positive side, of the 71 countries that make polling-station level results available, nearly all of them provide the data in a digital (90%), online (89%), public (89%) and free (87%) manner. In addition, 92% of those 71 countries have up-to-date data.

However, there are some significant shortcomings across most countries. Only 46% of the 71 countries provided data that was analyzable (machine readable). Similarly, only 46% of countries studied provided complete, bulk data sets. Western Europe (67%) had the highest percentage of countries providing complete, bulk data, while Middle East & North Africa and Sub Saharan Africa (both 38%) had the lowest percentage of countries doing so.

3. Not just election results: Making other types of election data open

While election results often get the most attention, election data goes far beyond results. It involves information relating to all aspects of the electoral cycle, including the legal framework, decision-making processes, electoral boundaries, polling stations, campaign finance, voter registry, ballot qualification, procurement, and complaints and disputes resolution. All of these categories of data are essential for assessing the integrity of elections, and open data principles should be applied to all of them.

4. Moving from transparency to accountability

Opening election data helps make elections more transparent, but that’s just the beginning. To unlock the potential of election data, people need to have the knowledge and skills to analyze and use the data to promote greater inclusiveness and public engagement in the process, as well as to hold electoral actors, such as election management bodies and political parties, accountable. For example, with polling station data, citizen election observer groups around the world have used statistics to deploy observers to a random, representative sample of polling stations, giving them a comprehensive, accurate assessment of election day processes. With access to the voters list, many observer groups verify the accuracy of the list and highlight areas for improvement in the next elections.

Despite the increasing availability of election data, in most countries parties, the media, and civil society do not yet have the capacity to take full advantage of the possibilities. The National Democratic Institute (NDI) is developing resources and tools to help equip electoral stakeholders, particularly citizen election observers, to use and analyze election data. We encourage more efforts like this so that the use of election data can reach its full potential.

For more on NDI’s Open Election Data Initiative, check out the website (available in English, Spanish and Arabic) and like us on Facebook.

David Rosenthal: Signposting the Scholarly Web

Wed, 2015-12-23 16:00
At the Fall CNI meeting, Herbert Van de Sompel and Michael Nelson discussed an important paper they had just published in D-Lib, Reminiscing About 15 Years of Interoperability Efforts. The abstract is:
Over the past fifteen years, our perspective on tackling information interoperability problems for web-based scholarship has evolved significantly. In this opinion piece, we look back at three efforts that we have been involved in that aptly illustrate this evolution: OAI-PMH, OAI-ORE, and Memento. Understanding that no interoperability specification is neutral, we attempt to characterize the perspectives and technical toolkits that provided the basis for these endeavors. With that regard, we consider repository-centric and web-centric interoperability perspectives, and the use of a Linked Data or a REST/HATEAOS technology stack, respectively. We also lament the lack of interoperability across nodes that play a role in web-based scholarship, but end on a constructive note with some ideas regarding a possible path forward.They describe their evolution from OAI-PMH, a custom protocol that used the Web simply as a transport for remote procedue calls, to Memento, which uses only the native capabilities of the Web. They end with a profoundly important proposal they call Signposting the Scholarly Web which, if deployed, would be a really big deal in many areas. Some further details are on GitHub, including this somewhat cryptic use case:
Use case like LOCKSS is the need to answer the question: What are all the components of this work that should be preserved? Follow all rel="describedby" and rel="item" links (potentially multiple levels perhaps through describedby and item).Below the fold I explain what this means, and why it would be a really big deal for preservation.

Much of the scholarly Web consists of articles, each of which has a Digital Object Identifier (DOI). Herbert and Michael's paper's DOI is 10.1045/november2015-vandesompel. You can access it by dereferencing this link: http://dx.doi.org/10.1045/november2015-vandesompel. CrossRef's DOI resolver will redirect you to the current location of the article, providing location-independence. The importance of location-independent links, and the fact that they are frequently not used, was demonstrated by Martin Klein and a team from the Hiberlink project in Scholarly Context Not Found: One in Five Articles Suffers from Reference Rot. I discussed this article in The Evanescent Web.

But Herbert and Michael's paper is an anomaly. The DOI resolution service redirects you to the full text HTML of the paper. This is not what usually happens. A more representative but very simple example is: http://dx.doi.org/10.2218/ijdc.v8i1.248. You are redirected to a "landing page" that contains the abstract, some information about the journal, and a lot of links. Try "View Source" to get some idea of how complex this simple example is; it links to 36 other resources. Some, such as stylesheets, should be collected for preservation. Others, such as the home pages of the journal's funders, should not be. Only one of the linked resources is the PDF of the article, which is the resource most needing preservation.

If a system is asked to ingest and preserve this DOI, it needs to be sure that, whatever else it got, it did get the PDF. In this very simple, well-organized case there are two ways to identify the link leading to the PDF:
  • The link the reader would click on to get the PDF whose target is http://www.ijdc.net/index.php/ijdc/article/view/8.1.107/300 and whose anchor text is "PDF".
  • A meta-tag with name="citation_pdf_url" content="http://www.ijdc.net/index.php/ijdc/article/view/8.1.107/300".
So we have two heuristics for finding the PDF: the anchor text and the citation_pdf_url meta-tag. Others might include anchor text "Download" or "Full Text". Similarly, the system needs heuristics to decide which links, such as those to the funders' home pages, not to follow. Sites vary a lot, and in practice preservation crawlers need a range of such heuristics. Most landing pages are far more complex than this example.
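The two heuristics can be sketched with Python's standard-library HTML parser. The miniature landing page below reuses the two markers from the example, but is otherwise invented; a real crawler would face far messier markup:

```python
from html.parser import HTMLParser

class PdfLinkFinder(HTMLParser):
    """Collect candidate PDF URLs via the citation_pdf_url meta-tag
    and via anchors whose text is 'PDF', 'Download', or 'Full Text'."""
    def __init__(self):
        super().__init__()
        self.candidates = []
        self._href = None          # href of the <a> we are currently inside
    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and attrs.get("name") == "citation_pdf_url":
            self.candidates.append(attrs.get("content"))
        elif tag == "a":
            self._href = attrs.get("href")
    def handle_data(self, data):
        if self._href and data.strip() in ("PDF", "Download", "Full Text"):
            self.candidates.append(self._href)
    def handle_endtag(self, tag):
        if tag == "a":
            self._href = None

page = '''<meta name="citation_pdf_url"
  content="http://www.ijdc.net/index.php/ijdc/article/view/8.1.107/300">
<a href="http://www.ijdc.net/index.php/ijdc/article/view/8.1.107/300">PDF</a>'''
finder = PdfLinkFinder()
finder.feed(page)
print(finder.candidates)
```

Here both heuristics agree on the same URL, which raises confidence; when they disagree, or when neither fires, the crawler is back to guesswork, which is exactly the fragility signposting removes.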

Top half of Figure 5.

The LOCKSS system's technology for extracting metadata, such as which URLs are articles, abstracts, figures, and so on, was outlined in a talk at the IIPC's 2013 GA, and detailed in one of the documents submitted for the CLOCKSS Archive's TRAC audit. It could be much simpler and more reliable if Herbert and Michael's proposal were adopted. Figure 5 in their paper shows two examples of signposting; the relevant one is the top half. It shows the normal case of accessing an article via a DOI. The DOI redirects to a landing page whose HTML text, as before, links to many resources. Some, such as A, are not part of the article. Some, such as the PDF, are. These resources are connected by typed links, as shown in the diagram. These typed links are implemented as Link HTTP headers whose rel attribute expresses the type of the link using an IANA-registered type such as describes or item.

Now, when the preservation crawler is redirected to and fetches the landing page, the HTTP headers contain a set of link entries. Fetching each of them ensures that all the resources the publisher thinks are part of the article are collected for preservation. No heuristics are needed; there is no need even to parse the landing page HTML to find links to follow.
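The crawler's side of this can be sketched in a few lines. The parser below is a deliberately minimal reading of the Link-header syntax (a real crawler should use a full RFC 8288 parser), and the header value is an invented example of what a signposted landing page might send:

```python
import re

def parse_link_header(value):
    """Minimal Link-header parser: returns (url, rel) pairs.
    Naive comma-splitting; fine for a sketch, not for URLs containing commas."""
    links = []
    for part in value.split(","):
        url = re.search(r'<([^>]+)>', part)
        rel = re.search(r'rel="([^"]+)"', part)
        if url and rel:
            links.append((url.group(1), rel.group(1)))
    return links

# Hypothetical Link header from a signposted landing page.
link_header = ('<http://example.org/article.pdf>; rel="item", '
               '<http://example.org/article.xml>; rel="item", '
               '<http://example.org/doi-landing>; rel="describedby"')

# Fetch everything the publisher marks as part of the article.
to_preserve = [url for url, rel in parse_link_header(link_header)
               if rel in ("item", "describedby")]
print(to_preserve)
```

The crawler never inspects the landing page's HTML at all; the boundary of the artefact is declared entirely in the HTTP headers, which is what makes the approach robust against page redesigns.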

Of course, this describes an ideal world. Experience with the meta-tags that publishers use to include bibliographic metadata suggests some caution in relying solely on these link headers. Large publishing platforms could be expected to get them right most of the time; headers on smaller platforms might be less reliable. Some best practices would be needed. For example, are script tags enough to indicate JavaScript that is part of the article, or do the JavaScript files that are part of the article need a separate link header?

Despite these caveats it is clear that even if this way of unambiguously defining the boundaries of the artefact identified by a DOI were not universal, it would significantly reduce the effort needed to consistently and completely collect these artefacts for preservation. Ingest is the major cost of preservation, and "can't afford to collect" is the major cause of content failing to reach future readers. Thus anything that can significantly reduce the cost of collection is truly a big deal for preservation.

Villanova Library Technology Blog: Foto Friday: A last word from Dickens

Wed, 2015-12-23 15:48

It was always said of him, that he knew how to keep Christmas well, if any man alive possessed the knowledge. May that be truly said of us, and all of us! And so, as Tiny Tim observed, God Bless Us, Every One!

A Christmas Carol – Charles Dickens, 1843.

Laura Hutelmyer is the photography coordinator for the Communication and Service Promotion Team and Special Acquisitions Coordinator in Resource Management



Library of Congress: The Signal: The Top 10 Blog Posts of 2015 on The Signal

Wed, 2015-12-23 15:13

Mummers Parade on New Year’s day, Philadelphia, Pennsylvania. Photo by Carol M. Highsmith, Jan 1, 2011. Carol M. Highsmith Archive, Library of Congress Prints and Photographs Division.

It’s the end of the year on The Signal, and it gives us the chance to look back at our most popular posts of the year.

As we have in past years, we were thrilled to share projects and updates that are happening in the community or for the community. Digital stewardship on a national scale requires engaging many communities, and here at The Signal we’re pleased to share work happening at the Library and at other organizations.

I hope you take a quick read back through these posts, as I have. They reflect the diversity of, and interest in, digital stewardship topics such as standards, workflows, tools, and networking and collaboration between Library partners and practitioners working in the field. We're looking forward to 2016, when we hope to share more of the same, as well as activities and projects that highlight national and international digital library initiatives.

Thanks to all of our contributors and readers for a great blogging year!  Here’s the entire list of top 10 posts of 2015, ranked by page views based on data as of December 22:

  1. The Personal Digital Archiving 2015 Conference
  2. Tracking Digital Collections at the Library of Congress, from Donor to Repository
  3. Mapping Libraries: Creating Real-time Maps of Global Information
  4. All in the (Apple ProRes 422 Video Codec) Family
  5. Creating Workflows for Born-Digital Collections: An NDSR Project Update
  6. A New Interface and New Web Archive Content at Loc.gov
  7. Introducing the Federal Web Archiving Working Group
  8. Reaching Out and Moving Forward: Revising the Library of Congress’ Recommended Format Specifications
  9. Digital Forensics and Digital Preservation: An Interview with Kam Woods of BitCurator
  10. Cultural Institutions Embrace Crowdsourcing

Is your favorite blog post on the list? Did you have a favorite one that didn’t make the list? Share it in the comments below!

Villanova Library Technology Blog: Available for proofreading: How to Become an Actor

Tue, 2015-12-22 21:14

Our latest Distributed Proofreaders project is another vintage “how to” manual from publisher Frank Tousey. How to Become an Actor, as the title suggests, deals with theatrical matters, and like many books in this series, it is quite ambitious for its brief length, covering not just acting but also makeup, set design and other technical matters. As if that were not enough, it also includes several short plays.

The modern reader is unlikely to learn many useful skills from this text, but it does provide considerable insight into the popular entertainments of its time. To help make the book more accessible through the creation of a new electronic edition, you can read this previous blog post to learn about the proofreading process, then join in the work at the project page.



Villanova Library Technology Blog: Falvey Hosts Stress Busters Open House

Tue, 2015-12-22 20:40

Stress Busters, an open house sponsored by Falvey Memorial Library and the Villanova Electronic Enthusiasts Club, was held on Dec. 10, from 1 to 6 p.m. Soft pretzels, hot drinks, games, Star Wars-themed coloring books, cootie catchers, a floor puzzle and a special appearance by Will D. Cat were featured. In keeping with the Star Wars theme, Han Solo (Sarah Wingo, liaison librarian for English and Theatre) and Darth Vader (Michelle Callaghan, Communication and Service Promotion team graduate assistant) attended the open house and also roamed the campus inviting students to it.

Students enjoy soft pretzels and other free snacks

Han Solo dueling with Darth Vader

Will D. Cat and student play with cootie catcher

Student coloring in a Star Wars coloring book

Will D. Cat, Rob LeBlanc and students playing video game

On Friday, Dec. 11, Stress Free Happy Healthy Hours were held from 10 a.m. until 4 p.m. in room 205 in Falvey. Each hour featured a different activity, such as grown-up coloring books and making your own stress balls, along with snacks and drinks. From noon until 2 p.m., visiting therapy dogs were available for petting.

Photos by Alice Bampton



Villanova Library Technology Blog: Librarians 'recycle' snappy mnemonic aid for student information literacy

Tue, 2015-12-22 20:37

Rob LeBlanc, first-year experience/humanities librarian, and Barbara Quintiliano, nursing/life sciences and instructional services librarian, recently published an article, “Recycling C.R.A.P.: Reframing a Popular Mnemonic for Library Instruction,” in Pennsylvania Libraries: Research and Practice, volume 3, number 2 (Fall 2015).

Librarians Barbara Quintiliano and Rob LeBlanc, with their manuscript

Quintiliano and LeBlanc were interested in applying the new Framework for Information Literacy for Higher Education that was adopted by the ACRL (Association of College and Research Libraries) in 2015. This Framework replaced the previous Information Literacy Competency Standards for Higher Education. The two librarians were reshaping their information literacy programs to incorporate the new Framework.

Quintiliano explains, “Rob and I were tossing around ideas one day about how the new Framework could be applied, and we thought of the C.R.A.P. acronym which had been used by instruction librarians … to teach students how to evaluate information, especially information that they find on the web. With a bit of imagination and prestidigitation, we were able to transform the acronym into a concise, snappy way of conveying the Framework concepts to first-year students. As first-year librarian, Rob immediately started to put it into practice.”

Quintiliano and LeBlanc originally hoped to present a session on the topic at the fall 2015 Pennsylvania Libraries Association conference. That conference, however, already had more proposals than time slots available, so the organizers suggested the topic would make an interesting article for Pennsylvania Libraries: Research and Practice.  Consequently, the two collaborated on the article, which was accepted for publication.

What is C.R.A.P. in the context of library instruction? According to LeBlanc and Quintiliano it stands for “Conversation, Revision, Authority and Property.” These concepts are taught by the authors so that students can properly evaluate information needed to write college-level research papers. The full article can be accessed here.


