Feed aggregator

District Dispatch: What’s new in the library ebook lending market?

planet code4lib - Wed, 2016-06-08 06:29

A young boy enjoys reading an ebook on his tablet. Courtesy: Milltown Public Library.

What has changed in the library ebook lending environment in the past year? A panel of library and publishing experts will provide an update on the library ebook lending market and discuss the best ways for libraries to advance access to digital content at the 2016 American Library Association (ALA) Annual Conference in Orlando, Fla. The session, “Digital Content Working Group—Update and Future Directions,” takes place from 8:30 to 10:00 a.m. on Sunday, June 26, 2016, in room W205 of the Orange County Convention Center.

Library leaders from ALA’s Digital Content Working Group (DCWG) will provide an update on the DCWG’s activities. The event features an expert panel that focuses on future directions. The ALA Digital Content Working Group was established by ALA leadership to address the greatest digital opportunities and challenges for libraries.

During the session, participants will hear from a number of library ebook lending experts, including Carolyn Anthony, director of the Skokie Public Library, and co-chair of the American Library Association’s Digital Content Working Group; Michael Blackwell, director of St. Mary’s County Library in Leonardtown, Md.; Erika Linke, associate dean of the University Libraries and director of Research & Academic Services for Carnegie Mellon University and co-chair, American Library Association Digital Content Working Group; Trevor Owens, senior program officer of the Institute for Museum and Library Services’ National Digital Platform.

Want to attend other policy sessions at the 2016 ALA Annual Conference? View all ALA Washington Office sessions

The post What’s new in the library ebook lending market? appeared first on District Dispatch.

District Dispatch: Revolutionary ways to offer e-books to the print-disabled

planet code4lib - Wed, 2016-06-08 06:06

There has been a shift in the way people access information: E-books and the widespread use of graphics to convey information have created a “new normal” for how we read and learn. While these resources are readily available, too many of them are not accessible for the print-disabled. As a result, people with disabilities, such as vision impairments, physical limitations and severe learning disabilities, often face barriers to information.

Photo by Elio-Rojano via Flickr

During the session “Accessible Books for All” at the 2016 American Library Association (ALA) Annual Conference, a panel of e-books and accessibility experts will discuss the successful partnership between Benetech/Bookshare, the New York Public Library and others to provide free access to over 400,000 books, periodicals and more to qualified library patrons.

The conference session takes place on Monday, June 27, 2016, 10:30-11:30 a.m., in room W105A of the Orange County Convention Center. Session speakers include Jill Rothstein, managing librarian, Andrew Heiskell Braille and Talking Book Library, New York Public Library (NYPL); and Lisa Wadors Verne, program manager, Education, Research and Partnerships, Benetech.

Want to attend other policy sessions at the 2016 ALA Annual Conference? View all ALA Washington Office sessions

The post Revolutionary ways to offer e-books to the print-disabled appeared first on District Dispatch.

Ed Summers: Baltimore Stories

planet code4lib - Wed, 2016-06-08 04:00

I’m in Baltimore today to participate in a public event in the Baltimore Stories series sponsored by the University of Maryland and the Maryland Humanities Council with support from the National Endowment for the Humanities. Check out the #bmorestories hashtag and the schedule for information about other events in the series. I’m particularly honored and excited to take part because of MITH’s work in building a research archive of tweets related to the protests in Baltimore last year, and a little bit of collaboration with Denise Meringolo and Joe Tropea to connect their BaltimoreUprising archive with Twitter. And of course there is my involvement in the Documenting the Now project, where the role of narrative and its place in public history is so key.

Since it’s a public event I’m not really sure who is going to show up tonight. The event isn’t about talking heads and powerpoints. It’s an opportunity to build conversations about the role of narrative in community and cultural/historical production. This blog post isn’t the text of a presentation, it’s just a way for me to organize some thoughts and itemize a few things about narrative construction in social media that I hope to have a chance to talk about with others who show up.

Voice

To give you a little bit more of an idea about what Baltimore Stories is aiming to do here’s a quote from the NEH proposal:

The work we propose for this project will capitalize on the public awareness and use of the humanities by bringing humanities scholars and practitioners into conversation with the public on questions that are so present in the hearts and minds of Baltimoreans today: Who are we, Baltimore? Who owns our story? Who tells our stories? What stories have been left out? How can we change the narrative? We imagine the exploration of narratives to be of particular interest to Baltimore, and to other cities and communities affected by violence and by singular narratives that perpetuate violence and impede understanding and cooperation. We believe that the levels of transdisciplinarity and collaboration with public organizations in this project are unprecedented in a city-wide collaboration of the humanities and communities.

In one of the previous Baltimore Stories events, filmmaker and multimedia artist Ralph Crowder talked about his work documenting the events in Ferguson, Missouri last year. I was particularly struck by his ability to connect with the experience of the high school students in the audience and relate it to his work as an artist. One part of this conversation really stuck with me:

Don’t ever be in a situation where you feel like you don’t have a voice. Don’t ever be in a situation where someone else is talking for your experience. You be the person who does that for you. Because if you can’t do that someone else is going to come around and they’re going to talk for what you are going through, and many times they are going to get paid some money off of your struggle.

As someone who received a grant from a large foundation to help build tools and community around social media archiving and events like those in Ferguson, Missouri and those in Baltimore, I can’t help but feel implicated by these words, and recognize their truth. I grew up in the middle-class suburbs of New Jersey, where I was raised by both my Mom and Dad. I shouldn’t have been, but I was shocked by how many kids raised their hands when Crowder asked how many were being raised by only one parent, and how many were being raised by no parent at all. I know I am coming from a position of privilege, and that the mechanics of this privilege afford yet more privilege. But getting past myself for a moment I can see that what Crowder is saying here is important. What he is saying is in fact fundamental to the work we are doing on Documenting the Now, and the value proposition of the World Wide Web itself.

For all its breathless techno-solutionist promises and a landscape fraught with abuse and oppression, social media undeniably expands individuals’ ability to tell their own stories in their own voice to a (potentially) worldwide audience. This expansion is of course relative to individuals’ ability to speak to that same audience in earlier mediums like letters, books, magazines, journals, television and radio. The Web itself made it much easier to publish information to a large audience. But really the ability to turn the crank of Web publishing was initially limited to people with a very narrow set of technical skills. What social media did was greatly expand the number of people who could publish on the Web, and share their voices with each other, and with the world.

Stories

The various social media platforms (Twitter, Instagram, YouTube, Google+, Facebook, Tumblr, etc) condition what can be said, who it can be said to, and who can say things back. This is an incredibly deep topic that people have written books about. But to get back to Crowder’s advice, consider the examples of DeRay McKesson, Devin Allen, Johnetta Elzie, Alicia Garza and countless others who used social media platforms like Twitter, Facebook and Instagram to document the BlackLivesMatter movement, and raise awareness about institutionalized racism. Check out DeRay’s Ferguson Beginnings where he has been curating his own tweets from Ferguson in 2014. Would people like me know about Ferguson and BlackLivesMatter if people like DeRay, Devin, Johnetta and Alicia weren’t using social media to document their experience?

It’s not my place to tell this story. But as someone who studies the Web and builds things for the Web, here are a few things to consider when deciding how and where to tell your story on the Web.

Hashtags

Hashtags matter. Bergis and I saw this when we were collecting the #BaltimoreRiots and #BaltimoreUprising tweets. We saw the narrative tension between the two hashtags, and the attempts to reframe the protests that were going on here in Baltimore.

Take a look at a sampling of random tweets with media using the #BaltimoreRiots and #BaltimoreUprising hashtags. Do you get a sense of how they are being used differently? #BlackLivesMatter itself is emblematic of the power of naming in social media. Choosing a hashtag is choosing your audience.

Visibility

When you post to a social media platform be aware of who gets to see it and the trade-offs associated with that decision. For example, by default when you publish a tweet you are publishing a message for the world. Only people who follow you get an update about it, but anyone in the world can see it if they have the URL for your tweet. You can delete a tweet, which will remove it from the Web, but deletion won’t pull it back from the clients and other places that may have stored it. You can choose to make your account protected, which means it is only viewable by people you grant access to. The different social media platforms have different controls for who gets to see your content; try to understand the controls that are available. When you share things publicly they obviously can be seen by a wider audience, which gives your content more reach. But publishing publicly also means your content will be seen by all sorts of actors, which may have other ramifications.

Surveillance

One of the ramifications is that social media is being watched by all sorts of actors, including law enforcement. We know from Edward Snowden and Glenn Greenwald that the National Security Agency is collecting social media content. The full dimensions of this work are hidden, but just google for police Facebook evidence or read this Wikipedia article and you’ll find lots of stories of how local law enforcement are using social media. Even if you aren’t doing anything wrong, be aware of how your content might be used by these actors. It can be difficult, but don’t let this surveillance activity have a chilling effect on exercising your right to freedom of speech, and using your voice to tell your story.

Harassment

When you publish your content publicly on the Web you open yourself up to harassment from individuals who may disagree with you. For someone like DeRay and many other public figures on social media this can mean having to block thousands of users because some of them send death threats and other offensive material. This is no joke. Learn how to leverage your social media tools to ignore or avoid these actors. Learn strategies for working with this content. Perhaps you want to have a friend look at your messages for you, to get some distance. Perhaps you can experiment with BlockBots and other collaborative ways of ignoring bad actors on the Web. If there are controls for reporting spammers and haters, use them.

Ownership

If you look closely at the Terms of Service for major social media platforms like Twitter, Instagram and Facebook you will notice that they very clearly state that you are the owner of the content you put there. You also often grant the social media platform a right to redistribute and share that content as well, but ultimately it is yours. You can put your content on multiple platforms such as YouTube, Vine and Facebook. You may want to use a site like Wikimedia Commons or Flickr that allows you to attach a Creative Commons license to your work. Creative Commons provides a set of licenses that let you define how your content can be shared on the Web. If you are using a social media platform like Tumblr that lets you change the layout of the website, you can add a Creative Commons license to your site. Ultimately this is the advantage of creating a blog on WordPress or Medium, or hosting it yourself, since you can claim copyright and license your material as you see fit. However, it is a trade-off since it may be more difficult to get the exposure that you will see on platforms like Instagram, Vine or Facebook. If you want, you could give low-resolution versions to social media outlets with links to high-resolution versions you publish yourself with your own license.

Archive

Many social media platforms like Facebook, Twitter, Medium and Instagram allow you to download an archive of all the content you’ve put there. When you are selecting a social media platform, make sure you can see how to get your content out again. This could be useful if you decide to terminate your account for whatever reason but retain a record of your work. Perhaps you want to move your content to another social media platform. Or perhaps you are creating a backup in case the platform goes away. Maybe, just maybe, you are donating your work to a local library or archive. Being able to download your content is key.

District Dispatch: Emily Sheketoff to join advocacy panel at ALA Annual

planet code4lib - Tue, 2016-06-07 18:51

With years of hard-won experience under their belts, retired librarians and library workers are well-positioned to put their advocacy skills to use for libraries. Their years in the field have given them a wealth of knowledge and stories that need to be shared with legislators, and in retirement they have the added advantage of being able to speak up as private citizens without job-induced time constraints. This year, attendees at ALA Annual will have the opportunity to learn how they can leverage their time and experience to protect the libraries they love.

Photo credit: Howard Lake

Join the Retired Members Round Table and the Federal Legislation Advocacy Group (FLAG) to learn a few simple ways you can promote libraries on a federal, state, and local level. Emily Sheketoff, Executive Director of the ALA Washington office, will cover ways to have an impact on federal elected officials. She will be joined by Marci Merola, Director of the ALA Office for Library Advocacy, who will cover matters of a more local nature. The third panelist, Jan Sanders, Director of the Pasadena Public Library, will relate several successful advocacy projects she implemented at her library and share the insights she gained.

Program details:

Fast and Easy: Advocacy That YOU Can Do!

Sunday, June 26, 2016

1:00 – 2:30pm

OCCC, Room W106

The post Emily Sheketoff to join advocacy panel at ALA Annual appeared first on District Dispatch.

David Rosenthal: The Need For Black Hats

planet code4lib - Tue, 2016-06-07 15:00
I was asked to provide some background for a panel on "Security" at the Decentralized Web Summit held at the Internet Archive. Below the fold is a somewhat expanded version.

Nearly 13 years ago my co-authors and I won Best Paper at SOSP for the peer-to-peer anti-entropy protocol that nodes in a LOCKSS network use to detect and repair damage to their contents. The award was for showing a P2P network that failed gradually and gracefully under attack from a very powerful adversary. Its use of proof-of-work under time constraints is related to ideas underlying blockchains.

The paper was based on a series of simulations of 1000-node networks, so we had to implement both sides, defence and attack. In our design discussions we explicitly switched between wearing white and black hats; we probably spent more time on the dark side. This meant that we ended up with a very explicit and very pessimistic threat model, which was very helpful in driving the design.

The decentralized Web will be attacked, in non-obvious ways. Who would have thought that IP's strength, the end-to-end model, would also bring one of its biggest problems, pervasive surveillance? Or that advertising would be the death of Tim Berners-Lee's Web?

I'd like to challenge the panelists to follow our example, and to role-play wearing black hats in two scenarios:
  • Scenario 1. We are the NSA. We have an enormous budget, no effective oversight, taps into all the major fiber links, and a good supply of zero-days. How do we collect everyone's history of browsing the decentralized Web? (I guarantee there is a team at NSA/GCHQ asking this question).
  • Scenario 2. We are the Chinese government. We have an enormous budget, an enormous workforce, a good supply of zero-days, and total control over our country's servers and its connections to the outside world. How do we upgrade the Great Firewall of China to handle the decentralized Web, and how do we censor our citizens' use of it? (I guarantee there is a team in China asking these questions).
I'll kick things off by pointing out one common factor between the two scenarios: the adversaries have massive resources. Massive resources are an inescapable problem for decentralized systems, and the cause is increasing returns to scale, or network effects. Increasing returns are the reason why the initially decentralized Web is now dominated by a few huge companies like Google and Facebook. They are the reason that Bitcoin's initially decentralized blockchain recently caused Mike Hearn to write this:
the block chain is controlled by Chinese miners, just two of whom control more than 50% of the hash power. At a recent conference over 95% of hashing power was controlled by a handful of guys sitting on a single stage.

One necessary design goal for networks such as Bitcoin is that the protocol be incentive-compatible, or as Ittay Eyal and Emin Gun Sirer express it:
the best strategy of a rational minority pool is to be honest, and a minority of colluding miners cannot earn disproportionate benefits by deviating from the protocol

They show that the Bitcoin protocol was, and still is, not incentive-compatible. More recently, Sirer and others have shown that the Distributed Autonomous Organization based on Ethereum isn't incentive-compatible either. Even if these protocols were, increasing returns to scale would drive centralization and thus ensure attacks with massive resources, whether from governments or large corporations. And let's not forget that attacks can be mounted using botnets.
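
Eyal and Sirer's selfish-mining result is easy to reproduce numerically. Here is a minimal Monte Carlo sketch of the strategy they analyzed (my own simplified state accounting in Python, not their code): alpha is the attacker's share of hash power, gamma the fraction of honest miners that build on the attacker's block during a tie. With gamma = 0.5 the break-even point is around a 25% share; above it the attacker's fraction of the final chain exceeds alpha, which is exactly what "not incentive-compatible" means here.

```python
import random

def selfish_share(alpha, gamma=0.5, blocks=500_000, seed=1):
    """Fraction of main-chain blocks a selfish miner earns.
    lead = the private chain's lead over the public chain; lead == -1
    marks a 1-vs-1 fork. Any unpublished lead left at the end is
    ignored, which is negligible over a long run."""
    rng = random.Random(seed)
    a_rev = h_rev = 0
    lead = 0
    while a_rev + h_rev < blocks:
        if rng.random() < alpha:            # attacker finds the next block
            if lead == -1:
                a_rev += 2                  # extends own fork branch, wins both
                lead = 0
            else:
                lead += 1                   # keeps the new block private
        else:                               # honest network finds the next block
            if lead == 0:
                h_rev += 1
            elif lead == -1:                # fork resolved by an honest block...
                if rng.random() < gamma:    # ...built on the attacker's branch
                    a_rev += 1
                    h_rev += 1
                else:
                    h_rev += 2
                lead = 0
            elif lead == 1:
                lead = -1                   # attacker publishes: 1-vs-1 race
            elif lead == 2:
                a_rev += 2                  # publishes both, orphans honest block
                lead = 0
            else:
                a_rev += 1                  # publishes one block from its lead
                lead -= 1
    return a_rev / (a_rev + h_rev)

for alpha in (0.10, 0.25, 0.33, 0.40):
    print(f"alpha={alpha:.2f}  chain share={selfish_share(alpha):.3f}")
```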

Massive resources enable Sybil attacks. The $1M attack CMU mounted in 2014 against the Tor network used both traffic confirmation and Sybil attacks:
The particular confirmation attack they used was an active attack where the relay on one end injects a signal into the Tor protocol headers, and then the relay on the other end reads the signal. These attacking relays were stable enough to get the HSDir ("suitable for hidden service directory") and Guard ("suitable for being an entry guard") consensus flags. Then they injected the signal whenever they were used as a hidden service directory, and looked for an injected signal whenever they were used as an entry guard.

Traffic confirmation attacks don't need to inject signals; they can be based on statistical correlation. Correlations in the time domain are particularly hard for interactive services, such as Tor and the decentralized Web, to disguise.
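
To make the statistical version concrete, here is a toy passive correlation test (my illustration, not the attack CMU ran): bin the packet timestamps seen at two observation points and correlate the per-bin counts. Real attacks are more sophisticated, but interactive traffic makes even this crude test effective.

```python
import numpy as np

def timing_correlation(entry_times, exit_times, window=600, bin_s=1.0):
    """Bin packet timestamps (seconds) from two vantage points and
    return the Pearson correlation of the per-bin counts; values
    near 1.0 suggest both flows belong to the same circuit."""
    bins = np.arange(0, window + bin_s, bin_s)
    a, _ = np.histogram(entry_times, bins=bins)
    b, _ = np.histogram(exit_times, bins=bins)
    if a.std() == 0 or b.std() == 0:
        return 0.0
    return float(np.corrcoef(a, b)[0, 1])

# toy demo: one exit flow is the entry flow shifted by network latency
rng = np.random.default_rng(0)
entry = np.sort(rng.uniform(0, 600, 500))
exit_same = entry + rng.normal(0.3, 0.05, entry.size)   # ~300 ms latency
exit_other = np.sort(rng.uniform(0, 600, 500))          # unrelated flow
print(timing_correlation(entry, exit_same))    # high
print(timing_correlation(entry, exit_other))   # near zero
```
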
Then the second class of attack they used, in conjunction with their traffic confirmation attack, was a standard Sybil attack — they signed up around 115 fast non-exit relays, all running on 50.7.0.0/16 or 204.45.0.0/16. Together these relays summed to about 6.4% of the Guard capacity in the network. Then, in part because of our current guard rotation parameters, these relays became entry guards for a significant chunk of users over their five months of operation.

Sybil attacks are very hard for truly decentralized networks to defend against, since no-one is in a position to do what the Tor project did to CMU's Sybils:
1) Removed the attacking relays from the network.

Richard Chirgwin at The Register reports on Philip Winter et al's Identifying and characterizing Sybils in the Tor network. Their sybilhunter program found the following kinds of Sybils:
  • Rewrite Sybils – these hijacked Bitcoin transactions by rewriting their Bitcoin addresses;
  • Redirect Sybils – these also attacked Bitcoin users, by redirecting them to an impersonation site;
  • FDCservers Sybils – associated with the CMU deanonymisation research later subpoenaed by the FBI;
  • Botnets of Sybils – possibly misguided attempts to help drive up usage;
  • Academic Sybils – they observed the Amazon EC2-hosted nodes operated by Biryukov, Pustogarov, and Weinmann for this 2013 paper; and
  • The LizardNSA attack on Tor.
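
Most of sybilhunter's heuristics boil down to looking for relays that behave in lockstep. Here is a toy version of that idea (my illustration, not sybilhunter itself): treat each relay's uptime as the set of consensus hours in which it appeared, and flag pairs whose uptime patterns are nearly identical.

```python
from itertools import combinations

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a or b else 0.0

def sybil_candidates(uptime: dict, threshold=0.95):
    """uptime maps a relay fingerprint to the set of consensus hours
    in which the relay was listed. Pairs that appear and disappear
    in near lockstep are Sybil candidates worth manual review."""
    return [(r1, r2, round(jaccard(uptime[r1], uptime[r2]), 3))
            for r1, r2 in combinations(sorted(uptime), 2)
            if jaccard(uptime[r1], uptime[r2]) >= threshold]

# toy data: relays A and B joined and left together; C is independent
hours = {"A": set(range(100, 200)),
         "B": set(range(101, 200)),
         "C": set(range(0, 500, 3))}
print(sybil_candidates(hours))   # [('A', 'B', 0.99)]
```
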
The Yale/UT-Austin Dissent project is an attempt to use cryptographic techniques to provide anonymity while defending against both Sybil and traffic analysis attacks, but they believe there are costs in doing so:
We believe the vulnerabilities and measurability limitations of onion routing may stem from an attempt to achieve an impossible set of goals and to defend an ultimately indefensible position. Current tools offer a general-purpose, unconstrained, and individualistic form of anonymous Internet access. However, there are many ways for unconstrained, individualistic uses of the Internet to be fingerprinted and tied to individual users. We suspect that the only way to achieve measurable and provable levels of anonymity, and to stake out a position defensible in the long term, is to develop more collective anonymity protocols and tools. It may be necessary to constrain the normally individualistic behaviors of participating nodes, the expectations of users, and possibly the set of applications and usage models to which these protocols and tools apply.

They note:
Because anonymity protocols alone cannot address risks such as software exploits or accidental self-identification, the Dissent project also includes Nymix, a prototype operating system that hardens the user’s computing platform against such attacks.

Getting to a shared view of the threats the decentralized Web is intended to combat before implementations are widely deployed is vital. The lack of such a view in the design of TCP/IP and the Web is the reason we're in the mess we're in. Unless the decentralized Web does a significantly better job handling the threats than the current one, there's no point in doing it. Without a "black hat" view during the design, there's no chance that it will do a better job.

District Dispatch: Putting libraries front and center during the presidential election

planet code4lib - Tue, 2016-06-07 05:42

Photo by Sebastiaan ter Burg via Flickr

The presidential election is right around the corner, with the presidency, Congress, and the U.S. Supreme Court in the balance, and a new Librarian of Congress imminent. Learn about actions that the American Library Association (ALA) is taking to prepare for the coming opportunities and challenges at the 2016 ALA Annual Conference in Orlando, Fla. Join political and library leaders at the conference session “Taking Libraries Transform and the Policy Revolution! to the New Presidential Administration,” where experts will discuss strategic efforts to influence federal policy initiatives in Washington, D.C., and how these efforts transfer to the state and local levels. The session takes place on Saturday, June 25, 2016, 10:30-11:30 a.m., in the Orange County Convention Center in room W105B.

Speakers include Susan Hildreth, former director, Institute of Museum and Library Services (IMLS); ALA Treasurer-elect; and executive director of the Peninsula (Calif.) Library System; Anthony Sarmiento, executive director of Senior Service America, Inc., member of the ALA Public Policy Advisory Council and past senior official with AFL-CIO; Alan S. Inouye, director of the American Library Association Office for Information Technology Policy (OITP); and Mark Smith, director and Librarian of the Texas State Library and Archives Commission. This conference session is sponsored by ALA’s Office for Information Technology Policy and United for Libraries.

Want to attend other policy sessions at the 2016 ALA Annual Conference? View all ALA Washington Office sessions

The post Putting libraries front and center during the presidential election appeared first on District Dispatch.

DuraSpace News: OpenVIVO: Connect, Share and Discover the VIVO Community at the 2016 VIVO Conference

planet code4lib - Tue, 2016-06-07 00:00

OpenVIVO (OpenVIVO.org) is for anyone who's interested in VIVO or the VIVO community - take a look!

Tara Robertson: alternate formats: who pays?

planet code4lib - Mon, 2016-06-06 22:44
from opensource.com

Yesterday I had a big realization. Many textbook publishers continue to publish inaccessible content and those costs are borne by the public education system through alternate format production. Publishers are not responsible for producing accessible material, and universities and colleges purchase things that aren’t accessible to all their students and then pay again to make them accessible. In BC I’d estimate that at least $1 million per year is spent on obtaining or producing alternate formats. This is an access issue, a human rights issue, and it’s also an economics issue.

Here are some of the conversations and pieces of information that led to this observation.

Creating an Inclusive Quality Standard of Education

I was sad to miss The Guelph Accessibility Conference at University of Guelph last week. Karen McCall presented Creating an Inclusive Quality Standard of Education (PDF handouts of her slides) where she argues that access to education is a human right. At work I’m more focused on the technical workflows and had forgotten about the human rights issues around access to education. She says that “accommodation is the norm, rather than the exception” and that this keeps people with disabilities “on the periphery of society” (slide 3). She states that “what this does is shift the ‘cost’ of inclusive design and inclusive communities to the corporate sector instead of to primary, secondary and tertiary education” (slide 3).

Karen states that in the US $79 billion is spent on ICT (information communication technology) a year, so there is enough purchasing power to demand that things are accessible from the start. She argues that “the best way to ensure inclusive communities is to mandate the procurement of eAccessible only products and services” (slide 6). This would also encourage competition and innovation in the market, which would benefit everyone.

Universal design for learning workshops

Recently I’ve presented a few workshops on universal design for learning (UDL) with Amanda Coolidge and Sue Doner. These workshops build on the personas from the Accessibility Toolkit. The workshop materials are also CC-BY licensed, so feel free to use or adapt them.

 

Appendix: Redesign or Accommodation Activity Guidelines

In this workshop we also compare disability accommodation and UDL. There will always be a need for disability accommodation, but we argue that using the UDL principles can solve many of the common access issues (videos without captions, images that lack descriptions, poor organization of information and concepts).

Disability accommodation:
  • reactive
  • accommodation is for one student who has appropriate documentation
  • for many students there is a stigma in accessing disability services

Universal design for learning:
  • proactive
  • improves accessibility for many students: students with disabilities; students who have a disability and lack the documentation; students with a disability for whom the stigma in accessing services is too great; students for whom English is not their first language; and students with a variety of learning styles
  • the onus is on the instructor to think about how they are teaching rather than on the individual student to request a retrofit

Jennifer LeVecque, from Camosun’s Disability Services Department, pointed out that for print coursepacks from the campus bookstore it’s possible that the publisher gets paid more than once. First, the library might already be paying to license journal article databases that have those articles. Second, the bookstore (or the copyright office) might be paying the publisher for the rights to produce the coursepack, then passing those costs on to the student. When most academic libraries opted out of the Access Copyright tariff in 2012, many worked to change the workflow for producing and licensing coursepacks, encouraging faculty to link directly to the articles that the library had licensed. This is also a UDL best practice as it supports multiple means of representation and allows students who have print disabilities to access these digital files using whatever assistive technology they use.

CAPER-BC Advisory Committee meeting

At the CAPER BC Advisory Committee meeting there were questions about why publishers are producing new e-textbooks that are not accessible. Jewelles Smith, BC Director for NEADS, suggested that it would be useful to collaborate in assessing the accessibility of specific publisher e-textbook platforms, or of common e-textbook titles that are being used. Last month Benetech published their Buy Accessible guidelines, which is a list of specific questions for people who are selecting and purchasing textbooks to ask publishers and vendors.

So what?

Many for-profit textbook publishers continue to publish content that is inaccessible, and the public education system spends money to remediate these textbooks to make them accessible. Textbook publishers make a lot of money and have shrugged off their ethical and legal (depending on where you live) responsibilities to students with disabilities, yet faculty keep choosing to use these textbooks, and bookstores keep buying them. Then Disability Service Offices and organizations like where I work spend a lot of time and money retrofitting. This is not a financially sustainable model.

Solutions

We need to build language around accessibility into procurement policies at universities and colleges. Where things are not accessible we need to make the cost of retrofit explicit and charge that cost back to the publisher. With digital workflows publishers have the opportunity to make fully accessible digital versions of textbooks available for students to buy. Right now alternate format production is a market externality to publishers, so they bear no cost for failing to meet accessibility guidelines and have no financial incentive to do so. If we believe that education is a human right for all, then we need procurement policies and laws that reflect this.

HangingTogether: “Ground Truthing” MARC

planet code4lib - Mon, 2016-06-06 20:00

Although the thought was revolutionary back in 2002, librarians now widely recognize that our metadata requirements have outgrown the MARC standard. After 50 years of service it’s time to make library data more actionable and linkable on the web. But to do that we need to bring our existing data assets into some sort of new regime. And doing that well is going to take understanding what we have to work with.

Before I go on I need to make it clear that when I say “MARC” I’m really conflating a number of things that attempt to describe and control which data elements are recorded and how.  For the purposes of this piece, MARC comprises the various flavors of the MARC standard (primarily MARC21), cataloging rules as expressed in AACR (first edition 1, then edition 2), and even punctuation rules as described by ISBD. These are the foundational elements of the library bibliographic universe as it has been developing since the 1960s.

In a perfect world these standards would have described exactly the right things to do and the right ways to accomplish them and they would have been widely understood and uniformly applied. This is not a perfect world.

So as the profession moves forward to create a new bibliographic infrastructure that is web-ready, linked and linkable, an important — indeed vital — question that must be answered is: What do we have to work with? That is, if we want to create canonical linked data entities for elements like authors, subjects, works, etc. we need data that is largely machine actionable in some critical ways. In other words, instead of looking at our data from afar, as satellites look at the Earth, we must “ground truth” our data just as those working with remote sensing do — we need to check our perceptions on the ground to make sure they are accurate. This is what my project, “MARC Usage in WorldCat” has been doing for the last 3 years.

For example, look at how many ways we have recorded the fact that an item includes “illustrations”. It’s clear that processing any of our data will require a lot of normalization, at minimum, to be able to understand that “ill.” and “illustrations” and “ilustrations” and “ilustr.” and “il.” and whatever, all mean the same thing.
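
A first pass at that normalization can be as simple as a token lookup. Here is a minimal sketch; the variant list comes from the examples above plus common abbreviations, while a real normalizer would be driven by the frequency tables the project publishes:

```python
import re

# variants named above plus common abbreviations (illustrative only)
VARIANTS = {"ill", "ill.", "illus", "illus.", "il", "il.",
            "ilustr", "ilustr.", "ilustrations", "illustrations"}

def normalize_300b(value: str) -> str:
    """Collapse 'illustrations' variants in a MARC 300$b string."""
    tokens = re.split(r"([\s,;:]+)", value)   # split but keep delimiters
    return "".join("illustrations" if t.lower() in VARIANTS else t
                   for t in tokens)

print(normalize_300b("ill. ; 24 cm"))   # -> illustrations ; 24 cm
```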

This is not the only use for the site, as Reinhold Heuvelmann at the German National Library (DNB) points out:

Mapping an element from an internal metadata format, e.g. Pica+, to MARC 21 (see http://www.heuvelmann.de/Pica_Turntable.jpg ) sometimes needs grounded discussions and informed decisions.  In cases of doubt, apart from the MARC 21 standard and its documentation, the reality of MARC 21 data elements used in OCLC’s WorldCat provides good guidance in choosing the right target element, to prevent theoretically available, but exotic options, and thus to enhance the visibility of bibliographic data and the resources described.

As Reinhold points out, the standards are one thing, but another is how they have been used “on the ground”. There are often decisions that must be made between competing options and it can be helpful to know what decisions others have made.

Thus this project is unlike most of what OCLC Research does, as it does not come to any conclusions, but rather is a tool that can be used to help you reach your own conclusions about library bibliographic data and how it continues to evolve over time. A side benefit has also been that occasionally we catch errors that can be corrected.

About Roy Tennant

Roy Tennant works on projects related to improving the technological infrastructure of libraries, museums, and archives.


District Dispatch: CopyTalk webinar on state government overreach now available

planet code4lib - Mon, 2016-06-06 17:36

From Lotus Head

In recent years entities of state government have attempted to rely on copyright as a means to suppress the dissemination of taxpayer-funded research and to chill criticism, but they have failed in the courts due to a lack of copyright authority. Ernesto Falcon, legislative counsel with the Electronic Frontier Foundation, reviews the status of pending California legislation, the court decisions that led to its creation, and the debate that now faces the California legislature in this CopyTalk webinar.

 

The post CopyTalk webinar on state government overreach now available appeared first on District Dispatch.

Islandora: Registration now open: iCampMO 2016

planet code4lib - Mon, 2016-06-06 16:31

Registration is now open for the last Islandora Camp of 2016: Islandora Camp Missouri, taking place in Kansas City from October 12 - 14. You can save some money with the Early Bird rate until August 15th. We also have a Call for Proposals open until August, so you can share your own work with Islandora and related systems. 

Interested in becoming a sponsor? Have questions about the event? Contact us.

Big thanks to our host and sponsor:

Villanova Library Technology Blog: The Community Bibliography – A Falvey Memorial Library Project

planet code4lib - Mon, 2016-06-06 13:00

The Community Bibliography is “[a] celebration of Villanova University community authors and scholars past, present and future.” It is “an open repository of the entire published output of the Villanova University community.” The goal is to digitally preserve “our proud scholarly heritage, from our community’s historical publications of the 19th century to the cutting edge research of today.” Community is defined as any individual (faculty, staff, student, alumnus, Augustinian, administrator) affiliated with Villanova University.

This Bibliography may be of interest to Villanova alumni returning for Reunion 2016 (Thursday, June 9 – Sunday, June 12). The Community Bibliography hosts citations for alumni authors from the Class of 1920 through the Class of 2015. Here is an opportunity to check out what your classmates have accomplished.

The Community Bibliography evolved from discussions among Library Director (at the time) Joe Lucia; Darren Poley, Theology/Outreach librarian; Michael Foight, Special Collections and Digital Library coordinator;  and Andrew Nagy, a former Falvey technical developer. Poley explains, “The idea was to use the citation management software Andrew developed for the Finding Augustine project to manage a comprehensive list of published artifacts by anyone affiliated with Villanova since the inception of the University. Michael and I agreed that his team would manage the image management associated with creating an institutional repository, while my Outreach team would oversee the development and maintain a bibliography that would be fully searchable on the Web and that [we] would not need to worry about copyright issues since it would only be supplying the citations.”

A data entry pilot project began in January 2007 and that was a pivotal year for the Community Bibliography. In May the project officially came under the supervision of the Outreach team and, three months later, the project gained momentum with increased multi-faceted data gathering. Later that year Falvey personnel began talking to people outside of Falvey about inter-operability. In November a content review produced procedural and system refinements.

The Community Bibliography was unveiled to the University’s academic leaders at a March 1, 2008, gala dinner in Falvey. There, Poley said, “Our Community Bibliography specifically allows for all works, popular and scholarly, to be documented, but why bother? This information is already gathered both formally and informally. Professors keep track of works for Curriculum Vitae, offices and departments monitor faculty and staff publications. But how does one know altogether what Villanova as a community has published? The problem is that there is no one place where information on all of these works is available … Our Community Bibliography becomes the device for allowing ourselves and others to see in a measurable way what our community has produced.”

A February 2008 newsletter article, “The ‘institutional repository’ rethought:  Community Bibliography debuts,” not only explains the significance of the project, but also tells how it relates to the Faculty Fulltext project created by the Digital Library.

Stephen Spatz, assistant Outreach and Research Support librarian, does most of the day-to-day work on the Bibliography. He gathers and uploads citations of works by Villanova University community members; he searches mostly Falvey’s database collection, but also occasionally locates materials in faculty and departmental webpages and “even in a few cases, typewritten bibliographies, both published and unpublished.” He says, “There are currently about 12,000 citations in the database, most of which cover the most recent scholarly output of the VU community, but about 5% predate 1980 and, even in some cases, stretch back into the 19th century.” Spatz also maintains the Digital Library’s Faculty Fulltext database “which aims to parallel the citation-only content of the Community Bibliography with full-text versions of the most recent scholarly output of VU faculty.” Spatz also supervises students who do some of the data entry.

The two projects, Community Bibliography and Faculty Fulltext, developed from an academic movement to counter the commercialization of intellectual property, making information freely available as a means of sharing and promoting scholarship. Falvey’s early creation of these two projects puts it on the cutting edge of new ways of using technology to share scholarly information.

For more information contact communitybibliography@villanova.edu

 

Darren Poley, Stephen Spatz and Michael Foight generously contributed information for this article.



Terry Reese: MarcEdit Updates

planet code4lib - Mon, 2016-06-06 05:27

This update has been a little while coming and represents a significant number of updates, bug fixes and enhancements. On the Mac side, the two largest updates were the implementation of the Delimited Text Translator and the OAI Harvester; on all platforms, Alma Integration. You can find notes about the Alma integration here: http://blog.reeset.net/archives/1950

Please see the full list of updates below.  Downloads can be picked up through the automatic update mechanism in MarcEdit or via the downloads page at: http://marcedit.reeset.net/downloads

–tr

MarcEdit Windows/Linux Updates:

* Bug Fix: ILS Integration: Local Integration — corrected display rendering and search for keyword
* Bug Fix: Add/Delete Records — Corrected problem when using the Add field only if not a duplicate option
* Enhancement: Validate Headings — added dynamic caching
* Enhancement: Build Links — added dynamic caching
* Enhancement: ILS Integration — First version of Alma integration
* Bug Fix: Math conversion — Degree/minute/seconds to Degrees correction
* Settings Change: Updated the RDA Field conversion to limit abbreviation checking in the 245 field to the 245$c
* Enhancement: RDA Abbreviations — new abbreviations added
* Enhancement: Select/Delete MARC Records — added option to expose specialized search options like Field # searching, Range Searching, File Searching and Size searching.
* Bug Fix: OAI Harvester — Debug URL wasn’t correct when adding date values.
* Bug Fix: RDA Helper — Added Data validation to ensure that invalid 008 data doesn’t cause a data crash.
* Enhancement: Delimited Text Translator — Added more preview options
* Enhancement: Delimited Text Translator — Added Holdings LDR/008 values
* Enhancement: UI Improvements — a large number of textboxes that accept file paths now support drag and drop.

MarcEdit Mac Updates:

* Bug Fix: ILS Integration: Local Integration — corrected display rendering and search for keyword
* Bug Fix: Add/Delete Records — Corrected problem when using the Add field only if not a duplicate option
* Enhancement: Validate Headings — added dynamic caching
* Enhancement: Build Links — added dynamic caching
* Enhancement: ILS Integration — First version of Alma integration
* Bug Fix: Math conversion — Degree/minute/seconds to Degrees correction
* Settings Change: Updated the RDA Field conversion to limit abbreviation checking in the 245 field to the 245$c
* Enhancement: RDA Abbreviations — new abbreviations added
* Enhancement: Select/Delete MARC Records — added option to expose specialized search options like Field # searching, Range Searching, File Searching and Size searching.
* Bug Fix: RDA Helper — Added Data validation to ensure that invalid 008 data doesn’t cause a data crash.
* Enhancement: UI Improvements — a large number of textboxes that accept file paths now support drag and drop.
* Enhancement: OAI Harvester Implemented
* Enhancement: Delimited Text Translator implemented

Terry Reese: MarcEdit Alma Integration

planet code4lib - Mon, 2016-06-06 05:27

Over the past month, I’ve been working with ExLibris (thank you to Ori Miller at ExLibris) and Boston College (thanks to Margaret Wolfe) to provide direct integration between MarcEdit and Alma via the Alma APIs.  Presently, the integration allows users to search, create, and update records.  Setup is pretty easy (I think), and once you have your API access set up correctly you should be off and running.  But it will be interesting to see if that’s the case as more people play around with this in their sandboxes.

Setting up integration

MarcEdit Alma integration requires that you configure an API key with Alma that supports the bib api and the user api.  The bib api represents the endpoints where the record editing and retrieval happen, while the user api is used to provide a thin layer of authentication before MarcEdit attempts to run an operation (since Alma doesn’t have its own authentication process separate from having a key). 
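
Under the hood these are ordinary REST calls. Here is a minimal sketch of the two Bib API operations MarcEdit wraps (the gateway host varies by region, and the key is a placeholder for your own):

```python
import requests

API = "https://api-na.hosted.exlibrisgroup.com/almaws/v1"  # region-specific
KEY = "your-sandbox-api-key"                               # placeholder

def get_bib(mms_id: str) -> str:
    """Fetch one bib record as MARCXML wrapped in Alma's <bib> element."""
    r = requests.get(f"{API}/bibs/{mms_id}",
                     params={"apikey": KEY},
                     headers={"Accept": "application/xml"})
    r.raise_for_status()
    return r.text

def update_bib(mms_id: str, bib_xml: str) -> str:
    """PUT the full <bib> document back; Alma replaces the record."""
    r = requests.put(f"{API}/bibs/{mms_id}",
                     params={"apikey": KEY},
                     headers={"Content-Type": "application/xml"},
                     data=bib_xml.encode("utf-8"))
    r.raise_for_status()
    return r.text
```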

I’d recommend testing this first in your Sandbox.  To do this, you’ll need to know your sandbox domain, and be able to configure the API accordingly.  If you don’t know how to do this, you’ll want to contact ExLibris. 

Once you have your API key, open MarcEdit’s main window and click the Preferences icon.

This will open the Preferences window.  Select the ILS Integration Link, and then check the Enable ILS Integration Checkbox, select Alma from the listbox and then enter the domain for your sandbox.  Alma’s API doesn’t require a username, so leave that blank, but enter your API key into the Password Textbox.  Finally, you’ll need to have set up a Z39.50 connection to your instance.  This is how MarcEdit searches Alma for record retrieval.  If you haven’t set up a Z39.50 Connection, you can do that here, or you can open the Z39.50 Client, select Modify Databases, add a new Z39.50 Server, and enter the information for your Alma Instance.  Here’s an example configuration (minus the username and password) for Boston College’s Sandbox.
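
If you’d like to verify those connection details outside of MarcEdit, the same search can be issued from any Z39.50 client. A quick sanity check using the PyZ3950 library (host, port and database name below are placeholders for your instance’s values; user and password options can be set the same way if your server requires them):

```python
from PyZ3950 import zoom

# placeholders: use the host, port and database from your Alma Z39.50 setup
conn = zoom.Connection('your-alma-z3950-host', 210)
conn.databaseName = 'your_database'
conn.preferredRecordSyntax = 'USMARC'

query = zoom.Query('CCL', 'ti="boston"')   # title keyword search
results = conn.search(query)
print(len(results), "hits")
for rec in results[:5]:
    print(rec)                             # raw MARC for each hit
conn.close()
```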

With your Z39.50 Server configured and selected, the ILS Integration Preferences window will look something like this:

Save these settings.  Now, when you open the MarcEditor, you’ll see a new menu item:

This menu item will allow you to search and update/create records.  To find items, click on the menu and select Search.  You’ll get the following window:

If I run a search for Boston, I’ll retrieve 5 results based on the limit set in the Limit textbox:

You can either download all the items by clicking the Download All Items button, or you can select the items individually that you want to download and right-click on the results.  This will give you a menu allowing you to download the records. 

When downloaded, the record will be opened into MarcEdit like the below:

A couple of notes about the download: if the download includes an 852 (and it can), you’ll want to delete that field, otherwise the field will get duplicated.  Right now, I’m trying to figure out if MarcEdit should just remove the value, or if there is an applicable use case for keeping it. 

Download the record, make the edits that you want to make to the record, and then click the Update/Create option from the Alma window.

When you click the Update/Create, the tool will upload your data to your Alma server.  If there is an error, you’ll receive the returned error message.  If the process was successful, you’ll get a message telling you that the data has been processed.  If you are interested in seeing the resulting XML output, MarcEdit automatically copies the data to the clipboard. 

A couple of notes about the process: in my testing, I found that updating Serials records was spotty.  I’m thinking this might have something to do with permissions, but I’m not positive about that.  I’m hoping to do a bit more investigation, but I wanted to get this out for folks to start playing with it and maybe providing some feedback.

Secondly, there is a holdings API – it would be possible to allow users to modify holdings data via MarcEdit, but I’d need use-cases in order to see how it fits into this process.

I’m sure this will be a process that I’ll be refining over the next few weeks, but in the meantime, I’d welcome any and all comments. 

–tr

* I’ll be posting a short YouTube video and will update the URL here.

District Dispatch: Ask away: Get your copyright questions answered

planet code4lib - Mon, 2016-06-06 05:26

Photo by Teddy Mafia via Flickr

Have a question about copyright policies? Library copyright experts will be available during the 2016 American Library Association (ALA) Annual Conference in Orlando, Fla. to respond to vexing copyright questions about licensing, fair use, electronic reserves, using music, images and video content, and more. Join copyright leaders during the interactive session “Ask Us Anything: Copyright Open House,” at which participants will have the opportunity to engage copyright experts on all of their copyright concerns. The session takes place on Sunday, June 26, 2016, from 1:00–2:30 p.m. in the Orange County Convention Center, room S329.

The program will include a late breaking copyright policy update from copyright leaders. The session will be a great opportunity to meet copyright geeks keen on helping academic, public and school librarians. The session is co-sponsored by the ALA Committee on Legislation (COL) Copyright Subcommittee. Participants will hear from a number of dynamic copyright experts, including Michael Brewer, head of the Research & Learning Department at the University of Arizona Libraries; Chris Le Beau, assistant teaching professor of the School of Information Science & Learning Technologies, University of Missouri; Laura Quilter, Copyright and Information Policy Librarian at the University of Massachusetts, Amherst; Carrie Russell, program director of the Public Access to Information for the American Library Association’s Office for Information Technology Policy; and Peggy Tahir, Education & Copyright Librarian at the University of California–San Francisco (UCSF) Library.

Want to attend other policy sessions at the 2016 ALA Annual Conference? View all ALA Washington Office sessions

The post Ask away: Get your copyright questions answered appeared first on District Dispatch.

LibUX: Tim Broadwater, UX Architect

planet code4lib - Mon, 2016-06-06 04:44

Tim is an artist and front-end developer, presently the UX Architect at West Virginia University Libraries and several times certified by the Nielsen Norman Group. He’s written two articles for LibUX ( “Value vs. Feasibility” / “Why am I doing this to our users?” ), and he’s been super amazing to work with.

I asked to pick his brain about his experience in NN/g’s certification programs and the burgeoning UX degree field – and I am left feeling pretty good about the state of library user experience design.

If you like, you can download the MP3 or subscribe to LibUX on Stitcher, iTunes, Google Play Music, or just plug our feed straight into your podcatcher of choice. Help us out and say something nice. Your sharing and positive reviews are the best marketing we could ask for.

Here are the pulls
  • 3:50 – UX Certifications and the burgeoning UX degree field
  • 9:22 – Are we at peak UX?

I know a handful of professionals who are great web developers and great designers …, and they refer to themselves as UX designers or UX architects, however they have never once conducted any type of usability study or intercept or any type of evaluation that involves their users – they don’t meet their users, ever. I think the term gets broadly applied to where it becomes a buzzword. Tim Broadwater

  • 15:55 – Pitching user research to stakeholders
  • 16:35 – Tim’s case study

We can shoot from the hip over and over and over again and sometimes we get an “okay” success, but … most of the time we get an absolute failure. How do we go forward? We have to make decisions based on user data. … Our target audience is constantly changing so we have to always be able to take the pulse. Tim Broadwater

  • 20:21 – We — Michael and Tim — love the hamburger menu. Unashamedly. And it’s going to be around for years.

I can’t deny [the hamburger menu] affords a certain amount of convenience in terms of design because of the … complexity of maintaining a front-end framework that must be as malleable [as a library’s must be] to adapt to so many different kinds of applications and so many different kinds of users. Michael Schofield

  • 28:50 – This has become Navigation UX Talk with Tim and Mike.
  • 34:03 – Left navigation? Ugh! As if!

I think left-hand navigation is kind of a lazy way to deal with your secondary tier navigation. There are so many different options now that are out there. I think what we’re seeing now is that with long scrolling pages and different kinds of navigation items or navigations that are sticky, staying on the page, … there are different ways to get to the same information and it’s more important to evaluate what works best for you or your users, as opposed to playing it safe or going with your peers. Tim Broadwater

Why do all higher-education websites look the same? Because we’re all looking at each other’s for peer research! No one is looking at apartments.com, which has this great search box functionality and I would argue that’s a perfect example for a library website … – and it uses the hamburger icon as well. Tim Broadwater

The post Tim Broadwater, UX Architect appeared first on LibUX.

Patrick Hochstenbach: Portrait of Anna Boulais

planet code4lib - Mon, 2016-06-06 04:29
Filed under: Figure Drawings, portaits Tagged: fountainpen, ink, paper, sktchy, twsbi

Hydra Project: Hydra Virtual Connect 2016

planet code4lib - Sun, 2016-06-05 12:19
What is Hydra Virtual Connect?

Hydra Virtual Connect (HVC) is an opportunity for Hydra Project participants to gather online to touch base on the progress of community efforts at a roughly halfway point between face-to-face Hydra Connect meetings. Hydra is a growing, active community with many initiatives taking place across interest groups, working groups, local and collaborative development projects, and other efforts, and it can be difficult for community members to keep up with all of this activity on a regular basis. HVC will give the Hydra community a chance to come together to catch up on developments, make new connections, and re-energize itself towards Hydra Connect 2016 in Boston in October.

Suggestions for an event such as this have come from a number of members of the Hydra community, and the idea was further discussed and refined at the Hydra Power Steering meeting in March 2016.

When will Hydra Virtual Connect take place?

Hydra Virtual Connect 2016 will take place on Thursday, July 7 from 11:00 AM – 2:00 PM EDT / 8:00 AM – 11:00 AM PDT / 16:00-19:00 BST / 15:00-18:00 UTC.  Reserve the time slot!!!

Further details

Further details can be found on the HVC wiki page here.

Booking details for the face-to-face Hydra Connect in Boston this October will be announced shortly.

David Rosenthal: He Who Pays The Piper

planet code4lib - Fri, 2016-06-03 22:00
As expected, the major publishers have provided an amazingly self-serving response to the EU's proposed open access mandate. My suggestion for how the EU should respond in turn is:
When the EU pays for research, the EU controls the terms under which it is to be published. If the publishers want to control the terms under which some research is published, publishers should pay for that research. You can afford to. ;-)
