Feed aggregator

Open Knowledge Foundation: Seeking new Executive Director at Open Knowledge

planet code4lib - Tue, 2014-11-11 09:45

Today we are delighted to put out our formal announcement for a new Executive Director. In our announcement about changes in leadership in September we had already indicated we would be looking to recruit a new senior executive and we are now ready to begin the formal process.

We are very excited to have this opportunity to bring someone new on board. Please do share this with your networks, especially with anyone you think would be interested. We emphasize that we are conducting a world-wide search for the very best candidates, although the successful candidate would ideally be able to commute to London or Berlin as needed.

Full role details are below – to apply or to download further information on the required qualifications, skills and experience for the role, please visit http://www.perrettlaver.com/candidates quoting reference 1841. The closing date for applications is 9am (GMT) on Monday, 8th December 2014.

Role Details

Open Knowledge is a multi-award winning international not-for-profit organisation. We are a network of people passionate about openness, using advocacy, technology and training to unlock information and enable people to work with it to create and share knowledge. We believe that by creating an open knowledge commons and developing tools and communities around this we can make a significant contribution to improving governance, research and the economy. We’re changing the world by promoting a global shift towards more open ways of working in government, arts, sciences and much more. We don’t just talk about ideas, we deliver extraordinary software, events and publications.

We are currently looking for a new Executive Director to lead the organisation through the next exciting phase of its development. Reporting into the Board of Directors, the Executive Director will be responsible for setting the vision and strategic direction for the organisation, developing new business and funding opportunities and directing and managing a highly motivated team. S/he will play a key role as an ambassador for Open Knowledge locally and internationally and will be responsible for developing relationships with key stakeholders and partners.

The ideal candidate will have strong visionary and strategic skills, exceptional personal credibility, a strong track record of operational management of organisations of a similar size to Open Knowledge, and the ability to influence at all levels both internally and externally. S/he will be an inspiring, charismatic and engaging individual, who can demonstrate a sound understanding of open data and content. In addition, s/he must demonstrate excellent communication and stakeholder management skills as well as a genuine passion for, and commitment to, the aims and values of Open Knowledge.

To apply or to download further information on the required qualifications, skills and experience for the role, please visit http://www.perrettlaver.com/candidates quoting reference 1841. The closing date for applications is 9am (GMT) on Monday, 8th December 2014.

The role is flexible in terms of location but ideally will be within commutable distance of London or Berlin (relocation is possible) and the salary will be competitive with market rate.

PeerLibrary: Check out our brand new screencast video of PeerLibrary 0.3!

planet code4lib - Tue, 2014-11-11 03:39

We are proud to announce an updated screencast which demos the increased functionality and updated user interface of the PeerLibrary website. This screencast debuted at the Mozilla Festival in October as part of our science fair presentation. The video showcases an article by Paul Dourish and Scott D. Mainwaring entitled “Ubicomp’s Colonial Impulse” as well as the easy commenting and discussion features which PL emphasizes. One of the MozFest conference attendees actually recognized the article which drew him towards our booth and into a conversation with our team. Check out the new screencast and let us know what you think!

PeerLibrary: PeerLibrary Heads to London for MozFest 2014!

planet code4lib - Tue, 2014-11-11 03:38

Mozilla Festival brings developers, educators, and tech enthusiasts from a variety of fields together with the common goal of promoting and building the open web. The sessions most relevant to PeerLibrary’s goals included “Community Building” and “Science and the Web”, among others. A delegation from the PeerLibrary team presented at the science fair on the first evening of the conference. This provided an opportunity to reconnect with some of our UK-based supporters and contributors as well as introduce the platform to hundreds of MozFest attendees. We received valuable feedback from the web dev community and have a slew of new features and improvements to consider implementing in the coming months. Another phenomenal conference, and we’re already looking forward to MozFest 2015!

DuraSpace News: Fedora 4 at the 2014 eResearch Australasia Conference

planet code4lib - Tue, 2014-11-11 00:00

Winchester, MA: More than 400 delegates made the trip to Melbourne, Victoria, Australia in October to learn about current best practices in research support and to share innovative examples and ideas at the eResearch Australasia Conference. The annual Conference focuses on how information and communications technologies help researchers collect, manage and reuse information.

FOSS4Lib Upcoming Events: Islandora Camp BC

planet code4lib - Mon, 2014-11-10 19:56
Date: Monday, February 16, 2015 - 08:00 to Wednesday, February 18, 2015 - 17:00
Supports: Islandora

Last updated November 10, 2014. Created by Peter Murray on November 10, 2014.

The first Islandora Camp of 2015 will be in Vancouver, BC from February 16 - 18, for our West Coast Islandorians and anyone else who would like to see beautiful British Columbia while learning about Islandora.  Many thanks to our sponsor Simon Fraser University for making this camp possible!
If you have any questions about this or future camps, please contact us.

Harvard Library Innovation Lab: Hiring! We want your design energy.

planet code4lib - Mon, 2014-11-10 19:31

 

The Harvard Library Innovation Lab and the Berkman Center for Internet & Society are looking for a web designer to help us build tools to explore the open Internet and define the future of libraries.

Our projects range in scope from fast-moving prototypes to long-term innovations. The best way to get a feel for what we do is by looking at some of our current efforts.

 


  • Perma.cc, a web archiving service that is powered by libraries
  • H2O, a platform for creating, sharing and adapting open course materials
  • Amber, a server side plugin to keep links working on blogs and websites

 

What you’ll do

  • Work with our multi-disciplinary team to build elegant web tools
  • Contribute to our broad vision for the Internet, libraries, and society
  • Rely on your good design sense and user-centricity
  • Create beautiful graphics and use modern web technologies to share them
  • Have fun while producing meaningful work with fantastic folks

This is a term-limited position running through the spring and summer semesters (January-August 2015).

Find details and apply for the position by searching for 34346BR in the Harvard Recruitment Management System. If you have questions, email us directly at lil@law.harvard.edu.

 

Code4Lib: Keynote voting for the 2015 conference is now open!

planet code4lib - Mon, 2014-11-10 19:04

All nominees have been contacted, and the 19 (!) included in this election are all potentially available to speak. The top two available vote recipients will be invited to be our keynote speakers this year. Voting will end on Tuesday, November 18th, 2014 at 20:00 PDT.

http://vote.code4lib.org/election/31

When rating nominees, please consider whether they are likely to be an excellent contributor in each of the following areas:

1) Appropriateness. Is this speaker likely to convey information that is useful to many members of our community?
2) Uniqueness. Is this speaker likely to cover themes that may not commonly appear in the rest of the program?
3) Contribution to diversity. Will this person bring something rare, notable, or unique to our community, through unusual experience or background?

http://vote.code4lib.org/election/31

If you have any issues with your code4lib.org account, please contact Ryan Wick at ryanwick@gmail.com.

Eric Lease Morgan: My first R script, wordcloud.r

planet code4lib - Mon, 2014-11-10 18:50

This is my first R script, wordcloud.r:

#!/usr/bin/env Rscript

# wordcloud.r - output a wordcloud from a set of files in a given directory
# Eric Lease Morgan <eric_morgan@infomotions.com>
# November 8, 2014 - my first R script!

# configure
MAXWORDS    = 100
RANDOMORDER = FALSE
ROTPER      = 0

# require
library( NLP )
library( tm )
library( methods )
library( RColorBrewer )
library( wordcloud )

# get input; needs error checking!
input <- commandArgs( trailingOnly = TRUE )

# create and normalize corpus
corpus <- VCorpus( DirSource( input[ 1 ] ) )
corpus <- tm_map( corpus, content_transformer( tolower ) )
corpus <- tm_map( corpus, removePunctuation )
corpus <- tm_map( corpus, removeNumbers )
corpus <- tm_map( corpus, removeWords, stopwords( "english" ) )
corpus <- tm_map( corpus, stripWhitespace )

# do the work
wordcloud( corpus, max.words = MAXWORDS, random.order = RANDOMORDER, rot.per = ROTPER )

# done
quit()

Given the path to a directory containing a set of plain text files, the script will generate a wordcloud.

Like Python, R has a library well-suited for text mining: tm. Its approach to text mining (or natural language processing) is both similar and dissimilar to Python’s. The two are similar in that both aim to provide a means for analyzing large volumes of text. They are dissimilar in the underlying data structures they use to get there. R may be more for the analytic person. Think statistics. Python may be more for the “literal” person, all puns intended. I will see if I can exploit the advantages of both.
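For comparison, here is a rough NLTK counterpart to the tm pipeline above. It is only a sketch: the script name is made up, it assumes NLTK’s punkt and stopwords data have already been downloaded, and it stops at a frequency list rather than drawing a cloud.

#!/usr/bin/env python
# freqlist.py - a rough NLTK counterpart to the tm pipeline above (illustrative only)

import os
import sys
import nltk
from nltk.corpus import stopwords

# get input; needs error checking, just like the R version
directory = sys.argv[1]

# read and concatenate all plain text files in the given directory
text = ''
for name in os.listdir(directory):
    text += open(os.path.join(directory, name)).read().lower()

# tokenize, then drop punctuation, numbers and stopwords, much as the tm_map calls do
stops = set(stopwords.words('english'))
words = [w for w in nltk.word_tokenize(text) if w.isalpha() and w not in stops]

# tally what remains; the top of this list is what a wordcloud would draw largest
frequencies = nltk.FreqDist(words)
print(frequencies.most_common(25))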

David Rosenthal: Gossip protocols: a clarification

planet code4lib - Mon, 2014-11-10 18:11
A recent post on the Library of Congress' Digital Preservation blog discusses one of the author's take-aways from the Library's Designing Storage Architectures workshop: the importance of anti-entropy protocols for preservation. He talks about these as "a subtype of 'gossip' protocols" and cites LOCKSS as an example, saying:
Not coincidentally, LOCKSS "consists of a large number of independent, low-cost, persistent Web caches that cooperate to detect and repair damage to their content by voting in 'opinion polls'" (PDF). In other words, gossip and anti-entropy.

The main use for gossip protocols is to disseminate information in a robust, randomized way, by having each peer forward information it receives from other peers to a random selection of other peers. As the function of LOCKSS boxes is to act as custodians of copyrighted information, this would be a very bad thing for them to do.

It is true that LOCKSS peers communicate via an anti-entropy protocol, and it is even true that the first such protocol they used, the one I implemented for the LOCKSS prototype, was a gossip protocol in the sense that peers forwarded hashes of content to each other. Alas, that protocol was very insecure. Some of the ways in which it was insecure related directly to its being a gossip protocol.

An intensive multi-year research effort in cooperation with Stanford's CS department to create a more secure anti-entropy protocol led to the current protocol, which won "Best Paper" at the 2003 Symposium on Operating Systems Principles. It is not a gossip protocol in any meaningful sense (see below the fold for details). Peers never forward information they receive from other peers; all interactions are strictly pair-wise and private.

For the TRAC audit of the CLOCKSS Archive we provided an overview of the operation of the LOCKSS anti-entropy protocol; if you are interested in the details of the protocol this, rather than the long and very detailed paper in ACM Transactions on Computer Systems (PDF), is the place to start.

According to Wikipedia:
a gossip protocol is one that satisfies the following conditions:
  • The core of the protocol involves periodic, pairwise, inter-process interactions.
  • The information exchanged during these interactions is of bounded size.
  • When agents interact, the state of at least one agent changes to reflect the state of the other.
  • Reliable communication is not assumed.
  • The frequency of the interactions is low compared to typical message latencies so that the protocol costs are negligible.
  • There is some form of randomness in the peer selection. Peers might be selected from the full set of nodes or from a smaller set of neighbors.
  • Due to the replication there is an implicit redundancy of the delivered information.
The current LOCKSS anti-entropy protocol does not meet this definition. Peer communications are periodic and pairwise, but each pairwise communication forms part of a poll (an anti-entropy operation), not the whole of one. When peers communicate, their state may change, but the new state may not be a reflection of the state of the other. There is no implicit redundancy of the delivered information; the information exchanged between two peers is specific to that pair and is never shared with any other peer.
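For contrast, here is a toy example (in Python, purely illustrative) of the dissemination pattern the Wikipedia definition describes: peers that hold a piece of information repeatedly forward it to a randomly chosen subset of other peers. This is precisely the forwarding behaviour the LOCKSS protocol avoids.

import random

# a toy gossip round: every peer that knows a rumor forwards it to FANOUT random peers
PEERS  = list(range(20))   # peer identifiers
FANOUT = 3

def gossip(initially_informed, rounds=5):
    informed = set(initially_informed)
    for _ in range(rounds):
        newly = set()
        for peer in informed:
            # random peer selection plus forwarding: the defining gossip behaviour
            newly.update(random.sample(PEERS, FANOUT))
        informed |= newly
    return informed

# starting from a single peer, most of the 20 peers typically know the rumor after a few rounds
print(len(gossip([0])))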

The redundancy of preserved content in a LOCKSS network is a higher-level concept than the details of individual peer communication. The current protocol is a peer-to-peer consensus protocol.

OCLC Dev Network: Learning Linked Data: Some Handy Tools

planet code4lib - Mon, 2014-11-10 17:30

I’ve been working with Linked Data off and on for a while now, but the last year has really been my deepest dive into it. Much of that dive involved writing a PHP library to interact with the WorldCat Discovery API. Since I started seeing how much could be done with Linked Data in discovery, I’ve been re-adjusting my worldview and acquiring a new skill set to work with Linked Data. This meant understanding the whole concept of triples and the subject, predicate, object nomenclature. In our recent blog posts on the WorldCat Discovery API, we touched on some of the basics of Linked Data. We also mentioned some tools for working with Linked Data in Ruby.
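The library described in the post is PHP (with Ruby tools also mentioned), but the subject, predicate, object idea itself is easy to sketch. The snippet below uses Python’s rdflib as a stand-in; the book URI and title are invented for illustration and are not WorldCat Discovery data.

from rdflib import Graph, Literal, Namespace, URIRef

# one triple: subject, predicate, object (the identifiers here are made up)
SCHEMA = Namespace("http://schema.org/")

g = Graph()
g.add((
    URIRef("http://example.org/book/moby-dick"),   # subject: the thing being described
    SCHEMA.name,                                   # predicate: the property
    Literal("Moby Dick")                           # object: the value
))

# serialize the graph as Turtle to see the triple written out
print(g.serialize(format="turtle"))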

Library of Congress: The Signal: Digital Preservation Capabilities at Cultural Heritage Institutions: An Interview With Meghan Banach Bergin

planet code4lib - Mon, 2014-11-10 16:55

Meghan Banach Bergin, Bibliographic Access and Metadata Coordinator, University of Massachusetts Amherst Libraries.

The following is a guest post by Jefferson Bailey of Internet Archive and co-chair of the NDSA Innovation Working Group.

In this edition of the Insights Interview series we talk with Meghan Banach Bergin, Bibliographic Access and Metadata Coordinator, University of Massachusetts Amherst Libraries. Meghan is the author of a Report on Digital Preservation Practices at 148 Institutions. We discuss the results of her research and the implications of her work for digital preservation policies, both in general and at her institution in particular.

Jefferson: Thanks for talking with us today. Tell us about your sabbatical project.

Meghan: Thank you, I’m honored to be interviewed for The Signal blog.  The goal of my sabbatical project last year was to investigate how various institutions are preserving their digital materials.  I decided that the best way to reach a wide range of institutions was to put out a web-based survey. I was thrilled at the response. I received responses from 148 institutions around the world, roughly a third each were large academic libraries, smaller academic libraries and non-academic institutions (including national libraries, state libraries, public libraries, church and corporate archives, national parks archives, historical societies, research data centers and presidential libraries).

It was fascinating to learn what all of these different institutions were doing to preserve our cultural heritage for future generations.  I also conducted phone interviews with 12 of the survey respondents from various types of institutions, which gave me some additional insight into the issues involved in the current state of the digital preservation landscape.

Jefferson: What made you choose this topic for your sabbatical research? What specific conclusions or insight did you hope to gain in conducting the survey?

Meghan: We have been working to build a digital preservation program over the last several years at the University of Massachusetts Amherst Libraries and I thought I could help move it forward by researching what other institutions are doing in terms of active, long-term preservation of digital materials.  I was hoping to find systems or models that would work for us at UMass or for the Five Colleges Consortium.

Jefferson: How did you go about putting together the survey? Were there any specific areas that you wanted to focus on?

Meghan: I had questions about a lot of things, so I brainstormed a list of everything I wanted to know.  When I reviewed the resulting list, four main areas of inquiry emerged: solutions, services, staffing and cost.  I wanted to know what systems were being used for digital preservation and what digital preservation services were being offered, particularly at academic institutions.  Here at UMass we currently offer research data curation services and digital preservation consulting services, but we don’t have a specific budget or staff devoted to digital preservation, which was why I also wanted to know what kind of staffing other institutions had devoted to their digital preservation programs and the cost of those programs.

Jefferson: What surprised you about the responses? Or what commonalities did you find in the answers that you hadn’t considered when writing the questions?

Meghan: I was surprised at the sheer volume and variety of tools and technologies being used to preserve digital materials.  I think this shows that we are in an experimental phase and that everyone is trying to figure out what solutions will work best for different kinds of digital collections and materials, as well as what solutions will work best given the available staffing, skill sets and resources at their institutions.  It also shows that there is a lot of development happening right now, and this makes me feel optimistic about the future of the digital preservation field.

Jefferson: Did any themes or trends emerge from reading people’s responses?

Meghan: Some common themes did emerge.  Most people reported that budgets are tight and that they are trying to manage digital preservation with existing staff that also have other primary job responsibilities aside from digital preservation. Almost everyone I talked to said that they thought they needed additional staff.  Also, most of those interviewed were not completely satisfied with the systems and tools they were using. One person said, “No system is perfect right now. It’s a matter of getting a good enough system.” Others mentioned various issues such as difficulties with interoperability between systems and tools, lack of functionality such as the ability to capture technical or preservation metadata or to migrate file formats, and struggles with implementation and use of the systems. People were using multiple systems and tools in an effort to get all of the different functionality they were looking for. One archivist described their methods as “piecemeal” and said that “It would be good if we could make these different utilities more systematic. Right now every collection is its own case and we need an overall solution.”

Jefferson: Your summary report does a nice job balancing the technical and managerial issues involved with digital preservation. Could you tell us a little bit more about what those are and what your survey revealed in these areas?

Meghan: The survey, and the follow-up phone interviews, highlighted the fact that people are dealing with a wide range of technical issues, including storage cost and capacity, the complexities of web archiving and video preservation, automating processes, the need for a technical infrastructure to support long-term digital preservation, the complexity of preserving a wide variety of formats, and keeping up with standards, trends, and technology, especially when there aren’t overall agreed-upon best practices.  The managerial issues mainly centered around staffing levels, staff skill sets and funding.

Jefferson: I was curious to see that while 90% of respondents had “undertaken efforts to preserve digital materials” only 25% indicated they had a “written digital preservation policy.” What do you think accounts for this discrepancy? And, having recently contributed to writing a policy yourself, what would you say to those just starting to consider it?

Meghan: We were inspired to write our policy by Nancy McGovern’s Digital Preservation Management workshop, and we used an outline she provided at the workshop.  It was time consuming, and I think that’s why a lot of institutions have decided to skip writing a policy and just proceed straight to actually doing something to preserve their digital materials.  This approach has its merits, but we felt like writing the policy gave us the opportunity to wrap our heads around the issues, and having the policy in place provides us with a clearer path forward.

Some of the things we felt were important to define in our policy were the scope of what we wanted to preserve and the roles and responsibilities of the various stakeholders.  To those who are just starting to consider writing a digital preservation policy, I would recommend forming a small group to talk through the issues and looking at lots of examples of policies from other institutions.  Also, I would suggest looking at Library of Congress Junior Fellow Madeline Sheldon’s report Analysis of Current Digital Preservation Policies: Archives, Libraries and Museums.

Cover page of Staffing for Effective Digital Preservation: An NDSA Report

Jefferson: Your survey also delved into both staffing and services being provided by institutions. Tell us a bit about some of your findings in those areas (and, for staffing, how they compare to the NDSA Staffing Survey (PDF)).

Meghan: Almost everyone said that they didn’t have enough staff.  One librarian said, “No one is dedicated to working on digital preservation. It is hard to fulfill my main job duties and still find time to devote to working on digital preservation efforts.” Another stated that, “Digital preservation gets pushed back a lot, because our first concern is patron requests, getting collections in and dealing with immediate needs.”  My survey results echoed the NDSA staffing survey findings in that almost every institution felt that digital preservation was understaffed, and that most organizations are retraining existing staff to manage digital preservation functions rather than hiring new staff.  As far as services, survey respondents reported offering various digital preservation services such as consulting, education and outreach.  However, most institutions are at the stage of just trying to raise awareness about the digital preservation services they offer.

Jefferson: Your conclusion poses a number of questions about the path forward for institutions developing digital preservation programs. How does the future look for your institution and what advice would you give to institutions in a similar place as far as program development?

Meghan: I think the future of our digital preservation program at UMass Amherst looks very positive.  We have made great advances toward digital preservation over the last decade.  We have implemented an institutional repository to manage and provide access to the scholarly output of the University, created a digital image repository to replace the old slide library and developed a Fedora-based repository system to manage and preserve our digital special collections and archives. We wrote our digital preservation policy to guide us in our path forward.

We are planning to join a LOCKSS PLN to preserve the content in our institutional repository; we just joined the Hathi Trust which should provide digital preservation for the materials we have digitized through the Internet Archive; and we are working with the Five Colleges to test and possibly implement new digital preservation tools and technologies.  It helps to have the support of the administration at your institution, which we are very fortunate to have.  My guess is that we will see an increase in collaboration in the future, so my advice would be to pay attention to the development of national-level collaborative digital preservation initiatives and to think about state or regional opportunities to work together on digital preservation efforts.

Jefferson: Finally, after conducting the survey and writing your sabbatical report, how do you feel about the current state of digital preservation?

Meghan: I think it’s really encouraging to see institutions trying different technologies and experimenting with what will work even with limited resources and uncertainty over what the best solution might be.  Despite the many challenges, we aren’t just throwing our hands up in the air and doing nothing.  We are trying different things, sharing the results of our efforts with each other, and learning as a community.  It’s an exciting time of innovation in the digital preservation field!

Eric Lease Morgan: My first Python script, concordance.py

planet code4lib - Mon, 2014-11-10 16:43

Below is my first Python script, concordance.py:

#!/usr/bin/env python2

# concordance.py - do KWIC search against a text
#
# usage: ./concordance.py <file> <word>
# Eric Lease Morgan <emorgan@nd.edu>
# November 5, 2014 - my first real python script!

# require
import sys
import nltk

# get input; needs sanity checking
file = sys.argv[ 1 ]
word = sys.argv[ 2 ]

# do the work
text = nltk.Text( nltk.word_tokenize( open( file ).read( ) ) )
text.concordance( word )

# done
quit()

Given the path to a plain text file as well as a word, the script will output no more than twenty-five lines containing the given word. It is a keyword-in-context (KWIC) search engine, one of the oldest text mining tools in existence.

The script is my first foray into Python scripting. While Perl is cool (and “kewl”), it behooves me to learn the language of others if I expect good communication to happen. This includes others using my code and me using the code of others. Moreover, Python comes with a library (module) called the Natural Language Toolkit (NLTK), which makes it relatively easy to get my feet wet with text mining in this environment.

Islandora: iCampBC T-Shirt Logo Contest

planet code4lib - Mon, 2014-11-10 16:23

One of the features of Islandora Camp is the camp t-shirt given to all attendees. Every camp has its own logo. This is the logo that won a free registration for our last Islandora Camp, in Denver:


 

 We want to give a free registration and a couple of extra t-shirts to the iCampBC attendee who comes up with the best logo to represent our first trip to western Canada.

Entries will be accepted through January 3rd, 2015. Entries will be put up on the website for voting and a winner will be selected and announced January 10th, 2015.

Here are the details to enter:

The Rules:
  • Camp Registration is not necessary to enter; anyone with an interest in Islandora is welcome to send in a design - however, the prize is a free registration, so you'll have to be able to come to camp to claim it.
  • Line art and text are acceptable; photographs are not.
  • You are designing for the front of the shirt for an area up to 12 x 12 inches. Your design must be a single image.
  • Your design may be up to four colours. The t-shirt colour will be determined in part by the winning design.
  • By entering the contest you agree that your submission is your own work. The design must be original, unpublished, and must not include any third-party logos (other than the Islandora logo, which you are free to use in your design) or copyrighted material.
The Prizes:
  • One free registration to Islandora Camp BC (or a refund if you are already registered)
  • An extra t-shirt with your awesome logo
  • Bragging rights
How to Enter:
  • Please submit the following by email to community@islandora.ca:
    • Your full name
    • A brief explanation of your logo idea
    • Your logo entry as an attachment. Minimum 1000 x 1000 pixels. High-resolution images in .eps or .ai format are preferred. We will accept .png and .jpg for the contest, but the winner must be able to supply a high resolution VECTOR art version of their entry if it is selected as the winner. Don't have a vector program? Try Inkscape - it's free!
  • Entries will be accepted through January 3rd, 2015.
Details:
  • Multiple entries allowed.
  • Submissions will be screened by the Islandora Team before posting to the website for voting.
  • By submitting your design, you grant permission for your design to be used by the Islandora project, including but not limited to website promotions, printed materials and (of course) t-shirt printing.
  • We reserve the right to alter your image as necessary for printing requirements and/or incorporate the name and date of the camp into the final t-shirt design. You are free to include these yourself as part of your logo.
  • The Islandora Team reserves the right to make the final decision.
Previous Camp Logos

Thank you and good luck!

OCLC Dev Network: November Release Update

planet code4lib - Mon, 2014-11-10 14:30

Deployment of the latest software release was unsuccessful. We have rolled the software back to the current release. We will let you know when another installation date is established.


Islandora: Islandora Show and Tell: Barnard College

planet code4lib - Mon, 2014-11-10 14:23

Today marks the launch of a new regular blog item for islandora.ca: Islandora Show and Tell. Credit for the idea belongs to the crowd at iCampCO, who reacted to our usual impromptu parade of Islandora site demos from the community by suggesting that this sort of thing should happen far more often. Colorado Alliance's Robin Dean coined the name "Islandora Show and Tell," and here we are.

The launch of Show and Tell coincided handily with the launch of a particularly innovative and beautifully designed Islandora site: Barnard Digital Collections.

Right off the bat, the site stands out for its striking home page, with a full photo background and simple search/browse box as a gateway to the collection:

Other customizations include landing pages for the collection (with new thumbnails), school newspaper, and yearbook; modifications to the OpenSeadragon viewer to add thumbnails; and a custom content model for digital exhibits that pulls in Islandora objects based on PID. If you want to see how the pieces work, you can check out Barnard's custom code on GitHub. They have also shared a detailed MODS form for photos with our Islandora Ingest Forms repo.

The collection itself is a delight, especially the newspaper and yearbooks. I always start any visit to a new Islandora site by dropping "cats" into simple search, because that's how my head works and odd words reveal interesting objects. In Barnard's digital collection, it helped me learn about 1938's Playful Play Day, a 1976 assurance from Allied Chemical that stockholders are people too, and a comic strip that captures the true spirit of 1991:

I highly recommend taking your own tour of the collection and seeing what you can discover. Even if you're not a Barnard alum, it's a fascinating and very accessible collection, and you can always share cool finds with the rest of us on #islandora. You can also check out the story of Barnard College's site development from the point of view of discoverygarden, Inc., who published a recent case study.

Now, for the actual Show and Tell. Martha Tenney, Digital Archivist at Barnard, agreed to answer some questions about their site and how it came together:

What is the primary purpose of your repository? Who is the intended audience?

The Barnard Digital Collections feature materials from the Barnard Archives and Special Collections. We currently have three collections of digitized materials--the school newspaper, the yearbook, and photographs--but we hope to grow the collections substantially to include other digitized as well as born-digital materials. The intended audience is primarily the Barnard community--students, staff, alums, and faculty--as well as researchers and anyone with an interest in the history of Barnard and/or our special collections.

Why did you choose Islandora?

I came into my position with a strong inclination towards using Fedora and open-source software in general, but I wanted to do my due diligence and completed an environmental scan of various repository software solutions, both open-source and commercial. Islandora seemed to have the most features that we wanted, and I was excited about the active user community populated by other small institutions. (In particular, Joanna DiPasquale, at Vassar, was a tremendous help, providing us with guidance and advice throughout this process. I also talked with folks at the University of New Hampshire, Hamilton, and Grinnell, and they all provided great advice and guidance about the technical and administrative infrastructure that we would need to have to make Islandora work for us.) I would add that we chose Islandora over Hydra--another open-source Fedora-based repository stack that I think is really exciting--because we needed a more turnkey approach, and we felt that it would be easier to hire and train for the expertise needed to support Islandora in-house. 

Which modules or solution packs are most important to your repository?

We lean a lot on the different solution packs for the various formats we have in the collections--the book solution pack, the newspaper solution pack, and the large image solution pack--as well as their dependencies. I also use the Solr module quite a bit, coupled with the form builder and the simple workflow module, to configure the search interface and manage the process of undergraduates ingesting and creating metadata for photographs.

What feature of your repository are you most proud of?

I love our front page. And I'm really proud of all the batch ingesting and metadata scripting that Dillon Savage (our applications developer) did to make the newspapers and yearbooks accessible. 
 
Who built/developed/designed your repository (i.e, who was on the team?)

Lisa Norberg, Dean of the Barnard Library, had the initial vision for the digital collections and put the pieces into place so that I could be hired and so that we could bring on Dillon. Shannon O'Neill, Associate Director of the Barnard Archives and Special Collections, helped to make the case for an open-source solution and supported me as I worked with many of Barnard's IT staff--particularly Rodolfo Nunez and Laura Hopwood from our systems group--to make sure we had the infrastructure required for Islandora and maintain our installation. On a day-to-day basis, Dillon Savage and I do the most work on the repository. Dillon works on custom development, fixing bugs, and scripts and batch ingests, while I work more on metadata, design, and overseeing individual ingests. Many undergraduates who work in the Archives, as well as other library staff, contribute to the collections by ingesting and describing photographs and creating digital exhibits. We've received input from many other folks at Barnard as well, and I hope the collections will become an even more collaborative project--bringing in faculty, students, staff, and alums--in the future. Finally, we contracted with discoverygarden to do our install and support; their expertise has been indispensable.

Do you have plans to expand your site in the future?

The photograph collection is still growing and will continue to grow quite a bit, and we hope to add more collections soon--likely starting with some manuscript collections. We'll continue to add items to the collections individually and do larger-scale digitization projects when time and funds allow. I'm also excited to see how we can use Islandora for born-digital materials, and Dillon and I are working all the time to improve searchability and add new features to the site.

What is your favourite object in your collection to show off?

I love the issue of the school newspaper, the Barnard Bulletin, from February 25th, 1965: a brief report on Malcolm X's final speech, delivered at Barnard three days before his assassination, is next to a story about the new editor of the Bulletin and an announcement about a campus SNCC meeting. I think it encapsulates the breadth of the collections--how they speak not only to the history of Barnard but also to the trajectory of women's higher education and to broader historical narratives.

 

Many thanks to Martha Tenney and Barnard College for kicking off Islandora Show and Tell. Stay tuned for more great islandora sites in the weeks to come!


Patrick Hochstenbach: Homework assignment #9 Sketchbookskool

planet code4lib - Mon, 2014-11-10 07:44
“Spend a few hours drawing one person. Follow them around, make drawings.” I wanted to draw my wife, but she complained and said that I should draw the cat. But where was that animal? I grabbed some cat candy

John Miedema: Using Orlando and Watson Named Entities to analyze literature from Open Library. A simple example.

planet code4lib - Mon, 2014-11-10 03:10

Jane Austen’s Letters are a collection of Austen’s personal observations about her family, friends, and life. Great stuff for a literary researcher. The Letters are in the public domain. Public domain books provide a corpus of unstructured content for literary analysis. I am very grateful to Jessamyn West and Open Library for obliging my request for a download of public domain novels and related literary works, over 2,100 titles. It allows this first simple example of how Orlando metadata and IBM Watson technology can work together to analyze literature.

In Figure 1, I observe in Watson Content Analytics (WCA) that there are 129 works from Open Library matching on the Orlando entry for Jane Austen. I could continue to explore the Orlando relationships available as facets here, but for this example I just add the Jane Austen entry to the search filter.

In Figure 2, I look at the WCA Named Entity Recognition (NER) annotators for Person. NER is automatic annotation of content by Person, Location and Organization. It is enabled with a simple switch in WCA. In this view, I suppose I am interested in Austen’s publisher, Frank S. Holby, who matches on 28 of the 128 works. Note that this Person was not Orlando metadata but rather discovered from the body of works by NER. I add Holby’s name to my search criteria.

In Figure 3, I switch to the WCA Documents view to begin inspecting the search results. I see a number of works, the Letters, highlighting the Orlando match on Jane Austen and the NER match on Frank S. Holby.
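The entity annotators used here are part of WCA itself, but the general idea of Named Entity Recognition can be sketched with NLTK, which appears elsewhere in this feed. The sentence below is invented, and the snippet assumes NLTK’s tokenizer, tagger and chunker data are installed.

import nltk

# a toy NER pass: tokenize, part-of-speech tag, then chunk named entities
# (requires the punkt, averaged_perceptron_tagger, maxent_ne_chunker and words data)
sentence = "Jane Austen's letters were later edited and published in London."
tokens = nltk.word_tokenize(sentence)
tagged = nltk.pos_tag(tokens)
tree = nltk.ne_chunk(tagged)

# list the PERSON, GPE (location) and ORGANIZATION chunks that were found
for subtree in tree.subtrees():
    if subtree.label() in ('PERSON', 'GPE', 'ORGANIZATION'):
        print(subtree.label() + ': ' + ' '.join(word for word, tag in subtree.leaves()))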

 

Nicole Engard: Bookmarks for November 9, 2014

planet code4lib - Sun, 2014-11-09 20:30

Today I found the following resources and bookmarked them:

  • Shareabouts Shareabouts is a flexible tool for gathering public input on a map.
  • Blueimp's AJAX Chat AJAX Chat is a free and fully customizable open source web chat implemented in JavaScript, PHP and MySQL
  • Firechat – open source chat built on Firebase Firechat is an open-source, real-time chat widget built on Firebase. It offers fully secure multi-user, multi-room chat with flexible authentication, moderator features, user presence and search, private messaging, chat invitations, and more.
  • Live helper chat Live Support chat for your website.

Digest powered by RSS Digest

