Feed aggregator

District Dispatch: Copyright services at universities

planet code4lib - Mon, 2016-04-25 16:05

[Image credit: Lotus Head]

It happens more often than you think. One day your administrator tells you that you are now the Library’s copyright librarian. But why? You are not an expert on copyright law… though you soon will be, once you seek out the advice and best practices of veteran copyright librarians. This month’s webinar is back by popular demand!

Higher education copyright programs and services

May 5, 2016 at 11am Pacific/2pm Eastern

Universities and their libraries provide copyright information to the members of their communities in different ways. Hear three copyright and scholarly communication librarians describe the services they offer regarding copyright to their faculty, staff, and students. Our presenters will include Sandra Enimil, Program Director, Copyright Resources Center at Ohio State University Libraries, Sandy De Groote, Scholarly Communications Librarian from the University of Illinois at Chicago, and Cindy Kristof, Head of Copyright and Document Services from Kent State University.
Please join us at the webinar URL. Enter as a guest; no password is required. We are limited in the number of concurrent viewers we can have, so we ask that you watch with others at your institution if at all possible. The presentation will be recorded and available online soon afterward. Oh yeah – it’s free!

The post Copyright services at universities appeared first on District Dispatch.

Code4Lib Journal: Editorial Introduction: People

planet code4lib - Mon, 2016-04-25 15:56
by Meghan Finch Two issues ago, coordinating editor Carol Bean identified a focus on data, in our profession and in the Issue 30 articles, and recognized that, for information professionals, it goes beyond the data itself to the conventions and standards necessary for working with it. [1] I’d like to offer a similar sentiment […]

Code4Lib Journal: An Open-Source Strategy for Documenting Events: The Case Study of the 42nd Canadian Federal Election on Twitter

planet code4lib - Mon, 2016-04-25 15:56

This article examines the tools, approaches, collaboration, and findings of the Web Archives for Historical Research Group around the capture and analysis of about 4 million tweets during the 2015 Canadian Federal Election. We hope that national libraries and other heritage institutions will find our model useful as they consider how to capture, preserve, and analyze ongoing events using Twitter.

While Twitter is not a representative sample of broader society – Pew Research’s study of US users shows that it skews young, college-educated, and affluent (above $50,000 household income) – Twitter still represents an enormous increase in the amount of information generated, retained, and preserved from 'everyday' people. Therefore, when historians study the 2015 federal election, Twitter will be a prime source.

On August 3, 2015, the team initiated both a Search API and Stream API collection with twarc, a tool developed by Ed Summers, using the hashtag #elxn42. The hashtag referred to the election being Canada's 42nd general federal election (hence 'election 42' or elxn42). Data collection ceased on November 5, 2015, the day after Justin Trudeau was sworn in as Prime Minister following the 42nd general election. We collected for a total of 102 days, 13 hours and 50 minutes.

To analyze the data set, we took advantage of a number of command-line tools and utilities available within twarc, twarc-report, and jq. In accordance with the Twitter Developer Agreement & Policy, and after the ethical deliberations discussed below, we made the tweet IDs and other derivative data available in a data repository. This allows other people to use our dataset, cite it, and enhance their own research projects by drawing on #elxn42 tweets.
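A minimal sketch of this kind of pipeline, assuming twarc 1.x's command-line interface (after twarc configure has stored your API keys) and standard Twitter v1.1 JSON fields; the file name is illustrative:

# Collect tweets matching the hashtag from the Search API
# into line-delimited JSON (one tweet per line).
twarc search '#elxn42' > elxn42-search.jsonl

# Count tweets per day to track change over time;
# created_at looks like "Mon Oct 19 22:15:00 +0000 2015".
jq -r '.created_at' elxn42-search.jsonl | awk '{print $2, $3}' | sort | uniq -c

# Tally expanded URLs for comparison against web archive holdings.
jq -r '.entities.urls[].expanded_url' elxn42-search.jsonl | sort | uniq -c | sort -rn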

Our analytics included:

  • breaking tweet text down by day to track change over time;
  • client analysis, allowing us to see how the scale of mobile devices affected medium interactions;
  • URL analysis, comparing both to Archive-It collections and the Wayback Availability API to add to our understanding of crawl completeness;
  • and image analysis, using an archive of extracted images.

Our article introduces our collecting work, ethical considerations, the analysis we have done, and provides a framework for other collecting institutions to do similar work with our off-the-shelf open-source tools. We conclude by ruminating about connecting Twitter archiving with a broader web archiving strategy.

Code4Lib Journal: How to Party Like it’s 1999: Emulation for Everyone

planet code4lib - Mon, 2016-04-25 15:56
Emulated access to complex media has long been discussed, but there are very few instances in which complex, interactive, born-digital emulations are available to researchers. New York Public Library has made 1980s-90s era video games from 5.25" floppy disks in the Timothy Leary Papers accessible via a DOSBox emulator. These games appear in various stages of development and display the work of at least four of Leary's collaborators on the games. 56 disk images from the Leary Papers are currently emulated in the reading room. New York University has made late-1990s to mid-2000s era Photoshop files from the Jeremy Blake Papers accessible to researchers. The Blake Papers include over 300 pieces of media. Cornell University Library was awarded a grant from the NEH to analyze approximately 100 born-digital artworks created for CD-ROM from the Rose Goldsen Archive of New Media Art, in order to develop preservation workflows, access strategies, and metadata frameworks. Rhizome has undertaken a number of emulation projects as a major part of its preservation strategy for born-digital artworks. In cooperation with the University of Freiburg in Germany, Rhizome recently restored several digital artworks for public access using a cloud-based emulation framework. This framework (bwFLA) has been designed to facilitate the reenactment of software on a large scale, for internal use or public access. This paper will guide readers through how to implement emulation. Each of the institutions weighs in on the oddities and idiosyncrasies it encountered throughout the process — from accession to access.
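To make the mechanics concrete, reading-room access of this kind can be as simple as launching an emulator against a disk image. A minimal DOSBox sketch (the image file name is hypothetical; the institutional setups described above are more involved):

# Boot DOSBox, mount a floppy disk image as drive A:,
# switch to it, and list its contents.
dosbox -c "imgmount a leary-disk01.img -t floppy" -c "a:" -c "dir"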

Code4Lib Journal: How We Went from Worst Practices to Good Practices, and Became Happier in the Process

planet code4lib - Mon, 2016-04-25 15:56
Our application team was struggling. We had good people and the desire to create good software, but the library as an organization did not yet have experience with software development processes. Work halted. Team members felt unfulfilled. The once moderately competent developer felt frustrated, ashamed, helpless, and incompetent. Then, miraculously, a director with experience in software project management and an experienced and talented systems administrator were hired and began to work with the team. People in the group developed a sense of teamwork that they had not experienced in their entire time at the library. Now we are happy, excited, and energetic. We hope that you will appreciate our “feel-good” testimony of how excellent people and appropriate processes transformed an unhealthy work environment into a fit and happy team.

Code4Lib Journal: Shining a Light on Scientific Data: Building a Data Catalog to Foster Data Sharing and Reuse

planet code4lib - Mon, 2016-04-25 15:56
The scientific community's growing eagerness to make research data available to the public provides libraries, with our expertise in metadata and discovery, an interesting new opportunity. This paper details the in-house creation of a "data catalog" which describes datasets ranging from population-level studies like the US Census to small, specialized datasets created by researchers at our own institution. Based on Symfony2 and Solr, the data catalog provides a powerful search interface to help researchers locate the data that can help them, and an administrative interface so librarians can add, edit, and manage metadata elements at will. This paper will outline the successes, failures, and total redos that culminated in the current manifestation of our data catalog.

Code4Lib Journal: Creation of a Library Tour Application for Mobile Equipment using iBeacon Technology

planet code4lib - Mon, 2016-04-25 15:56
We describe the design, development, and deployment of a library tour application utilizing Bluetooth Low Energy devices known as iBeacons. The tour application will serve as library orientation for incoming students. The students visit stations in the library with mobile equipment running a special tour app. When the app detects a beacon nearby, it automatically plays a video that describes the current location. After the tour, students are assessed according to the defined learning objectives. Special attention is given to issues encountered during development, deployment, content creation, and testing of this hardware-dependent application, and to the necessity of appointing a project manager to limit scope, define priorities, and create an actionable plan for the experiment.

Code4Lib Journal: Measuring Library Vendor Cyber Security: Seven Easy Questions Every Librarian Can Ask

planet code4lib - Mon, 2016-04-25 15:56

This article is based on an independent cyber security risk management audit of a public library system that the authors completed in early 2015, itself based on a research paper by the same group at Clark University in 2014. We stress that cyber security must include raising public knowledge of cyber security issues and resources, and libraries are indeed the perfect place to disseminate this knowledge. But librarians are also in a unique position as the gatekeepers of information services provided to the public, and should conduct internal audits to ensure that our content partners and IT vendors take cyber security as seriously as the library and its staff do.

One way to do this is through periodic reviews of existing vendor relationships. To this end, the authors created a simple grading rubric you can adopt or modify to help take this first step towards securing your library data. It is intended to be used by both technical and non-technical staff as a simple measurement of what vendor agreements currently exist and how they rank, while at the same time providing a roadmap for which security features or policy statements the library can or should require moving forward.

Code4Lib Journal: Building Bridges with Logs: Collaborative Conversations about Discovery across Library Departments

planet code4lib - Mon, 2016-04-25 15:56
This article describes the use of discovery system search logs as a vehicle for encouraging constructive conversations across departments in an academic library. The project focused on bringing together systems and teaching librarians to evaluate the results of anonymized patron searches in order to improve communication across departments, as well as to identify opportunities for improvement to the discovery system itself.

FOSS4Lib Recent Releases: VuFind - 3.0

planet code4lib - Mon, 2016-04-25 14:50

Last updated April 25, 2016. Created by Demian Katz on April 25, 2016.

Package: VuFind
Release Date: Monday, April 25, 2016

FOSS4Lib Recent Releases: VuFind - 2.5.4

planet code4lib - Mon, 2016-04-25 14:50
Package: VuFind
Release Date: Monday, April 25, 2016

Last updated April 25, 2016. Created by Demian Katz on April 25, 2016.

Maintenance release (for PHP 7 / Ubuntu 16.04 compatibility)

Patrick Hochstenbach: Brush Inking Exercise

planet code4lib - Sun, 2016-04-24 07:44
Filed under: portraits, Sketchbook Tagged: brush, ink, Photoshop, portrait, sketchbook

Open Library Data Additions: Amazon Crawl: part by

planet code4lib - Sat, 2016-04-23 06:56

Part by of Amazon crawl.

This item belongs to: data/ol_data.

This item has files of the following types: Data, Data, Metadata, Text

Nicole Engard: Bookmarks for April 22, 2016

planet code4lib - Fri, 2016-04-22 20:30

Today I found the following resources and bookmarked them on Delicious.

Digest powered by RSS Digest

The post Bookmarks for April 22, 2016 appeared first on What I Learned Today....

Related posts:

  1. Digital Cameras – I’m up for Suggestions
  2. First Big Present
  3. Google Homepage Themes

District Dispatch: A Policy Revolution! can happen only if we work together

planet code4lib - Fri, 2016-04-22 19:53

I was quite happy to see last week’s announcement of awardees of IMLS grants for the National Leadership Grants for Libraries Program and the Laura Bush 21st Century Librarian Program. No; I didn’t receive a grant. But we are named collaborators on three of them. Such cooperative efforts are key to our policy work, as there is only so much that the Office for Information Technology Policy can achieve on its own. But by working with talented and effective partners, we expand our reach and impact considerably.

We look forward to working with Professor Mega Subramaniam of the University of Maryland on her effort to develop and deliver a post-master’s certificate in Youth Experience (YX) design. Even better, ALA’s Young Adult Library Services Association is another project partner. The project centers on a 12-credit online post-master’s certificate program grounded in the learning sciences, covering topics like adult mentorship, participatory design, and design thinking. Mega is also part of the advisory committee on our recently announced Libraries Ready to Code project, a collaboration between Google and ALA.

One project examines how rural libraries address the challenges of Internet connectivity with hotspot lending programs. Research outcomes will address the role of rural libraries in local information ecosystems, the impact of hotspot lending programs on users’ quality of life and digital literacy, community outcomes of these programs, and practical requirements for offering hotspot lending programs. We look forward to supporting the efforts of Professor Sharon Strover at the University of Texas and her team.

Finally, we are pleased to be associated with principal investigator Iris Xie, Professor, University of Wisconsin-Milwaukee, and her effort to develop digital library design guidelines on accessibility, usability, and utility for blind and visually impaired (BVI) users. This project will generate three products: 1) digital library design guidelines, organized by types of help-seeking situations associated with accessibility, usability, and utility; 2) a report on the current status of how digital libraries satisfy BVI users’ help needs; and 3) a methodology that can be applied to other underserved user groups to develop similar guidelines. Our involvement with this project will complement our other work on improving access to information resources for people with disabilities.

Congratulations to the grant recipients; we look forward to productive and interesting work ahead.

The post A Policy Revolution! can happen only if we work together appeared first on District Dispatch.

Evergreen ILS: Evergreen Community Releases 2015 Annual Report

planet code4lib - Fri, 2016-04-22 14:12

The Evergreen community released its 2015 Annual Report this morning during the 2016 International Evergreen Conference in Raleigh, North Carolina.

The annual report highlights a busy year for Evergreen, with 60 new library locations moving to the system, bringing the total number of known Evergreen libraries to nearly 1,800. In addition to two new feature releases, 2015 also saw substantial progress on the new web-based staff client, which is scheduled to replace the current staff client in Spring 2017.

The annual report is available from the Evergreen web site at https://evergreen-ils.org/wp-content/uploads/2016/04/Evergreen%20Annual%20Report%202015%20Max%20Resolution.pdf

OCLC Dev Network: Federated Queries with SPARQL

planet code4lib - Fri, 2016-04-22 13:00

Learn about blending data from different SPARQL endpoints using federated queries.
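As a generic illustration (the local endpoint URL and vocabulary below are placeholders, not OCLC's), a federated query uses the SPARQL 1.1 SERVICE keyword to join local triples against a remote endpoint inside a single query:

# Query a local endpoint, pulling birth dates from Wikidata via SERVICE.
# The join is only useful if local ?creator URIs are Wikidata entity URIs.
curl -G 'https://example.org/sparql' \
    -H 'Accept: application/sparql-results+json' \
    --data-urlencode query='
PREFIX dcterms: <http://purl.org/dc/terms/>
PREFIX wdt: <http://www.wikidata.org/prop/direct/>
SELECT ?book ?creator ?birth WHERE {
  ?book dcterms:creator ?creator .                # local data
  SERVICE <https://query.wikidata.org/sparql> {   # remote endpoint
    ?creator wdt:P569 ?birth .                    # P569 = date of birth
  }
}
LIMIT 10'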

Eric Hellman: Using Let's Encrypt to Secure an Elastic Beanstalk Website

planet code4lib - Thu, 2016-04-21 20:10
Since I've been pushing the library and academic publishing community to implement HTTPS on all their information services, I was really curious to see how the new Let's Encrypt (LE) certificate authority is really working, with its "general availability" date imminent. My conclusion is that "general availability" will not mean "general usability" right away; its huge impact will take six months to a year to arrive. For now, it's really important for the community to put our developers to work on integrating Let's Encrypt into our digital infrastructure.

I decided to secure the www.gitenberg.org website as my test example. It's still being developed, and it's not quite ready for use, so if I screwed up it would be no disaster. Gitenberg.org is hosted using Elastic Beanstalk (EB) on Amazon Web Services (AWS), which is a popular and modern way to build scalable web services. The servers that Elastic Beanstalk spins up have to be completely configured in advance: you can't just log in and write some files. And EB does its best to keep servers serving; it's no small matter to shut down a server and run a temporary one in its place, because EB will spin up another server to handle the rerouted traffic. These characteristics of Elastic Beanstalk exposed some of the present shortcomings and future strengths of the Let's Encrypt project.

Here's the mission statement of the project:
"Let’s Encrypt is a free, automated, and open certificate authority (CA), run for the public’s benefit."

While most of us focus on the word "free", the more significant word here is "automated":

"Automatic: Software running on a web server can interact with Let’s Encrypt to painlessly obtain a certificate, securely configure it for use, and automatically take care of renewal."

Note that the objective is not to make it painless for website administrators to obtain a certificate, but to enable software to get certificates. If the former is what you want, in the near term, then I strongly recommend that you spend some money with one of the established certificate authorities. You'll get a certificate that isn't limited to 90 days, as the LE certificates are, you can get a wildcard certificate, and you'll be following the manual procedure that your existing web server software expects you to be following.

The real payoff for Let's Encrypt will come when your web server applications start expecting you to use the LE methods of obtaining security certificates. Then, the chore of maintaining certificates for secure web servers will disappear, and things will just work. That's an outcome worth waiting for, and worth working towards today.

So here's how I got Let's Encrypt working with Elastic Beanstalk for gitenberg.org.

The key thing to understand here is that before Let's Encrypt can issue me a certificate, I have to prove to them that I really control the hostname that I'm requesting a certificate for. So the Let's Encrypt client has to be given access to a "privileged" port on the host machine designated by DNS for that hostname. Typically, that means I have to have root access to the server in question.

In the future, Amazon should integrate a Let's Encrypt client with their Beanstalk Apache server software so all this is automatic, but for now we have to use the Let's Encrypt "manual mode". In manual mode, the Let's Encrypt client generates a cryptographic "challenge/response", which then needs to be served from the root directory of the gitenberg.org web server.

Even running Let's Encrypt in manual mode required some jumping through hoops. It won't run on Mac OS X. It doesn't yet support the flavor of Linux used by Elastic Beanstalk, so it does no good configuring Elastic Beanstalk to install it there. Instead I used the Let's Encrypt Docker container, which works nicely, and I ran it in a docker-machine VM inside VirtualBox on my Mac.

Having configured Docker, I ran:

docker run -it --rm -p 443:443 -p 80:80 --name letsencrypt \
    -v "/etc/letsencrypt:/etc/letsencrypt" \
    -v "/var/lib/letsencrypt:/var/lib/letsencrypt" \
    quay.io/letsencrypt/letsencrypt:latest -a manual -d www.gitenberg.org \
    --server https://acme-v01.api.letsencrypt.org/directory auth

(The --server option requires your domain to be whitelisted during the beta period.) After paging through some screens asking for my email address and permission to log my IP address, the client responded with:

Make sure your web server displays the following content at http://www.gitenberg.org/.well-known/acme-challenge/8wBDbWQIvFi2bmbBScuxg4aZcVbH9e3uNrkC4CutqVQ before continuing:

8wBDbWQIvFi2bmbBScuxg4aZcVbH9e3uNrkC4CutqVQ.hZuATXmlitRphdYPyLoUCaKbvb8a_fe3wVj35ISDR2A

To do this, I configured a virtual directory "/.well-known/acme-challenge/" in the Elastic Beanstalk console, mapping it to a "letsencrypt/" directory in my application (configuration page, software configuration section, static files section). I then made a file named "8wBDbWQIvFi2bmbBScuxg4aZcVbH9e3uNrkC4CutqVQ" with the specified content in my letsencrypt directory, committed the change with git, and deployed the application with the Elastic Beanstalk command line interface. After waiting for the deployment to succeed, I checked that http://www.gitenberg.org/.well-known/acme-challenge/8wBD... responded correctly, and then hit <enter>. (Though the LE client tells you that the MIME type "text/plain" MUST be sent, Elastic Beanstalk sets no MIME header, which is allowed.)
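An aside: if you script deployments rather than clicking through the console, the same static-file mapping can in principle be set with the AWS CLI. A sketch, with a placeholder environment name, using the option namespace that Elastic Beanstalk's Python platforms expose for static files:

# Map the ACME challenge path to the letsencrypt/ directory in the app,
# equivalent to the console's "static files" setting on Python platforms.
aws elasticbeanstalk update-environment \
    --environment-name <my-eb-environment> \
    --option-settings Namespace=aws:elasticbeanstalk:container:python:staticfiles,OptionName=/.well-known/acme-challenge/,Value=letsencrypt/

Either way, once the challenge file is being served, you can continue.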

And SUCCESS!
IMPORTANT NOTES:
 - Congratulations! Your certificate and chain have been saved at /etc/letsencrypt/live/www.gitenberg.org/fullchain.pem. Your cert will expire on 2016-02-08. To obtain a new version of the certificate in the future, simply run Let's Encrypt again.

...except since I was running Docker inside VirtualBox on my Mac, I had to log into the docker machine and copy three files out of that directory (cert.pem, privkey.pem, and chain.pem). I put them in my local <.elasticbeanstalk> directory. (See this note for a better way to do this.)

The final step was to turn on HTTPS in Elastic Beanstalk. But before doing that, I had to upload the three files to my AWS Identity and Access Management console. To do this, I needed to use the aws command line interface, configured with admin privileges. The command was:

aws iam upload-server-certificate \
    --server-certificate-name gitenberg-le \
    --certificate-body file://<.elasticbeanstalk>/cert.pem \
    --private-key file://<.elasticbeanstalk>/privkey.pem \
    --certificate-chain file://<.elasticbeanstalk>/chain.pem

One more trip to the Elastic Beanstalk configuration console (network/load balancer section), and gitenberg.org was on HTTPS.
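A quick sanity check, assuming you have openssl installed locally, is to inspect the certificate the load balancer actually serves:

# Fetch the served certificate (sending SNI) and print its subject,
# issuer, and validity dates.
echo | openssl s_client -connect www.gitenberg.org:443 -servername www.gitenberg.org 2>/dev/null \
    | openssl x509 -noout -subject -issuer -dates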


Given that my sys-admin skills are rudimentary, the fact that I was able to get Let's Encrypt to work suggests that they've done a pretty good job of making the whole process simple. However, the documentation I needed was non-existent, apparently because the LE developers want to discourage the use of manual mode. Figuring things out required a lot of error-message googling. I hope this post makes it easier for people to get involved to improve that documentation or build support for Let's Encrypt into more server platforms.

(Also, given that my sys-admin skills are rudimentary, there are probably better ways to do what I did, so beware.)

If you use web server software developed by others, NOW is the time to register a feature request. If you are contracting for software or services that include web services, NOW is the time to add a Let's Encrypt requirement into your specifications and contracts. Let's Encrypt is ready for developers today, even if it's not quite ready for rank and file IT administrators.

Update (11/12/2015):
I was alerted to the fact that while https://www.gitenberg.org was working, https://gitenberg.org was failing authentication. So I went back and did it again, this time specifying both hostnames. I had to guess at the correct syntax. I also tested out the suggestion from the support forum to get the certificates saved in my Mac's filesystem. (It's worth noting here that the community support forum is an essential and excellent resource for implementers.)

To get the multi-host certificate generated, I used the command:
docker run -it --rm -p 443:443 -p 80:80 --name letsencrypt \
    -v "/Users/<my-mac-login>/letsencrypt/etc/letsencrypt:/etc/letsencrypt" \
    -v "/Users/<my-mac-login>/letsencrypt/etc/letsencrypt/var/lib/letsencrypt:/var/lib/letsencrypt" \
    -v "/Users/<my-mac-login>/letsencrypt/var/log/letsencrypt:/var/log/letsencrypt" \
    quay.io/letsencrypt/letsencrypt:latest -a manual \
    -d www.gitenberg.org -d gitenberg.org \
    --server https://acme-v01.api.letsencrypt.org/directory auth

This time, I had to go through the challenge/response procedure twice, once for each hostname.

With the certs saved to my filesystem, the upload to AWS was easier:

aws iam upload-server-certificate \
    --server-certificate-name gitenberg-both \
    --certificate-body file:///Users/<my-mac-login>/letsencrypt/etc/letsencrypt/live/www.gitenberg.org/cert.pem \
    --private-key file:///Users/<my-mac-login>/letsencrypt/etc/letsencrypt/live/www.gitenberg.org/privkey.pem \
    --certificate-chain file:///Users/<my-mac-login>/letsencrypt/etc/letsencrypt/live/www.gitenberg.org/chain.pem

And now, traffic on both hostnames is secure!

Resources I used:

Update 12/6/2015:  Let's Encrypt is now in public beta, anyone can use it. I've added details about creating the virtual directory in response to a question on twitter.

Update 4/21/2016: When it came time for our second renewal, Paul Moss took a look at automating the process. If you're interested in doing this, read his notes.

Library of Congress: The Signal: Closing the Gap in Born-Digital and Made-Digital Curation

planet code4lib - Thu, 2016-04-21 18:06

This is a guest post by Jessica Tieman.

Jessica Tieman. Photo from the University of Illinois at Urbana-Champaign.

As part of the National Digital Stewardship Residency program, the 2015-2016 Washington, D.C. cohort will present their year-end symposium, entitled “Digital Frenemies: Closing the Gap in Born-Digital and Made-Digital Curation,” on Thursday, May 5th, 2016 at the National Library of Medicine. Since June, our colleague Nicole Contaxis has worked with NLM to create a pilot workflow for the curation, preservation and presentation of historically valuable software products developed by NLM.

Why “Digital Frenemies”? Our group has observed trends in digital stewardship that divide field expertise into “made digital” and “born digital.” We believe the landscape of the digital preservation field shouldn’t be so divided. Rather, the future will be largely defined by the symbiotic relationship between content creation and format migration. It will depend on those endeavors where our user communities intersect, rather than lead us to focus on challenges specific to our individual areas of the field.

The symposium will showcase speakers from cultural heritage and academic institutions, who will address the relationship between digitized and born-digital material. Guest speakers will explore topics such as preserving complex software and game technologies through emulation, creating cultural digital collections through mobile public library labs, collecting and curating data and much more. Featured sessions will be presented by Jason Scott of the Archive Team; Mercè Crosas, chief data science and technology officer of the IQSS at Harvard University; and Caroline Catchpole from Culture in Transit.

The event is free but registration is required as space is limited. We encourage those interested in attending the event or following along on social media to visit our website.
