Feed aggregator

David Rosenthal: Talk at Storage Valley Supper Club

planet code4lib - Fri, 2014-11-14 16:00
I gave a very short talk to the Storage Valley Supper Club's 8th meeting. Below the fold, an edited text with links to the sources.

I'm David Rosenthal and I'm a customer. This will be a very short talk making one simple point, which is the title of the talk:
Storage Will Be
Much Less Free
Than It Used To Be
My five minutes of fame happened last Monday when Chris Mellor at The Register published this piece, with a somewhat misleading title. It is based on work I had been blogging about since at least 2011, ever since a conversation at the Library of Congress with Dave Anderson of Seagate. For the last 16 years I've been working at Stanford Library's LOCKSS Program on the problem of keeping data safe for the long term. There are technical problems, but the more important problems are economic. How do you fund long-term preservation?

Working with students at UC Santa Cruz's Storage Systems Research Center I built an economic model of long-term storage. Here is an early version computing the net present value of the expenditures through time to keep an example dataset for 100 years (the endowment, for short) as the rate at which storage gets cheaper (the Kryder rate, for short) varies. The different lines reflect media service lives of 1 to 5 years.
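
To make the shape of that curve concrete, here is a minimal sketch of such an endowment calculation in Python. It is not the actual UCSC model; the purchase cost, media life, discount rate and horizon are illustrative assumptions.

    # Minimal sketch of the "endowment" idea: the net present value of
    # re-buying storage media every few years for a century, as the Kryder
    # rate (the annual drop in $/GB) varies. All parameter values are
    # illustrative assumptions, not figures from the UCSC model.

    def endowment(purchase_cost=1000.0, kryder_rate=0.30, media_life_years=4,
                  discount_rate=0.05, horizon_years=100):
        """Discounted total cost of replacing the media every media_life_years."""
        total = 0.0
        for year in range(0, horizon_years, media_life_years):
            price = purchase_cost * (1 - kryder_rate) ** year   # media gets cheaper
            total += price / (1 + discount_rate) ** year        # discount to present
        return total

    for rate in (0.40, 0.30, 0.20, 0.10):
        print(f"Kryder rate {rate:.0%}: endowment ~ ${endowment(kryder_rate=rate):,.0f}")

Even in this toy version the qualitative behavior appears: at high Kryder rates the replacement terms die off quickly and the endowment is dominated by the first few purchases, while at low rates the later purchases stay expensive and the total grows sharply.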

At the historic 30-40%/year we are in the flat part of the graph, where the endowment is low and it doesn't vary much with the Kryder rate. This meant that long-term storage was effectively free; if you could afford to store the data for a few years, you could afford to store it "for ever" because the cost of storing it for the rest of time would have become negligible.

But suppose the Kryder rate drops below about 20%/year. We are in the steep part of the graph where the endowment needed is much higher and depends strongly on the precise Kryder rate. Which, of course, we are not going to know, so the cost of long-term storage becomes much harder to predict.

We don't have to suppose. This graph, from Preeti Gupta at UCSC, shows that in 2010, before the floods in Thailand, the Kryder rate had dropped. Right now, disk is about 7 times as expensive as would have been predicted in 2010. The red lines show the range of industry projections going forward, 10-20%/year. In 2020 disk is projected to be between 100 and 300 times as expensive as would have been projected in 2010. As my first graph showed, this is a big deal for anyone who needs to keep data for the long term.

No-one should be surprised that in the real world exponential curves can't go on for ever. Here is Randall Munroe's explanation. In the real world exponential growth is always the first part of an S-curve.

Why has the Kryder rate slowed? This 2009 graph from Seagate shows that what looks like a smooth Kryder graph is actually the superimposition of a series of S-curves, one for each technology. One big reason for the slowing is technical, each successive technology transition gets harder - the long delay in getting HAMR into production is the current example. But this has economic implications. Each technology transition is more expensive, so the technology needs to remain in the market longer to earn a return on the investment. And the cost of the transition drives industry consolidation, so we now have only a little over 2 disk manufacturers. This has transformed disks from a very competitive, low-margin business into a stable 2-vendor one with reasonably good margins. Increasing margins slows the Kryder rate.

This isn't about technology "hitting a wall" and the increase in bit density stopping. It is about the interplay of technological and business factors slowing the rate of decrease in $/GB. For people who look only at the current cost of storage, this is irritating. For those of us who are concerned with the long-term cost of storage, it is a very big deal.

Library of Congress: The Signal: Presenting the NDSR Boston Residents, and their Projects!

planet code4lib - Fri, 2014-11-14 14:57

The following is a guest post by the entire cohort of the NDSR Boston class of 2014-15.

The first ever Boston cohort of the National Digital Stewardship Residency kicked off in September, and the five residents have been busy drinking from the digital preservation firehose at our respective institutions. You can look forward to individual blog posts from each resident as this 9-month residency goes on, but we decided to start with a group post to outline each of our projects as they’ve developed so far. (To keep up with us on a more regular basis, keep an eye on our digital preservation test kitchen blog.)

Sam DeWitt – Tufts University

I will be at Tufts’ Tisch Library during my residency, looking at ways that the university might better understand the research data it produces. The National Science Foundation has required data management plans from grant-seekers for several years now and some scholarly journals have followed suit by mandating that researchers submit their data sets along with accepted work. These dictates play a significant role in the widespread movement toward sharing research data.

Data sharing, as a concept, is particularly trendy right now (try adding ‘big data’ to the term ‘data sharing’ in a Google search) but the practice is open to debate. Its advantages and disadvantages are articulated quite nicely here. As someone who works in the realm of information science, I generally believe research is meant to be shared and that concerns can be mitigated by policy. But that is easier said than done, as Christine Borgman so succinctly argues in “The Conundrum of Sharing Research Data”: “The challenges are to understand which data might be shared with whom, under what conditions, why, and to what effects. Answers to these questions will inform data policy and practice.”

I hope that in these few months I can gain a broader understanding of the data Tufts produces while I continue to examine the policies, practices and procedures that aid in their curation and dissemination.

Rebecca Fraimow – WGBH

My project is designed a little differently from the ones that my NDSR peers are undertaking; instead of tackling a workflow from the top down, I’m starting with the individual building blocks and working up.  Over the course of my residency, my job is to embed myself into the different aspects of daily operations within the WGBH Media, Library and Archives department.  Everything that I find myself banging my head into as I go along, I document and make part of the process for redesigning the overall workflow.

Since WGBH MLA is currently in the process of shifting over to a Fedora-based Hydra repository — a major shift from the previous combination of Filemaker databases and proprietary Artesia DAM — it’s the perfect time for the archives to take a serious look at reworking some legacy practices, as well as designing new processes and procedures for securing the longevity of a growing ingest stream that is still shifting from primarily object-based to almost entirely file-based.

At the end of the residency, I’ll be creating a webinar in order to share some best practices (or, at least, working practices) with the rest of the public broadcasting world.  Many broadcasting organizations are struggling through archival workflow problems without having the benefit of WGBH’s strong archiving department.  It’s exciting to know that the work I’m doing is going to have a wider outward-facing impact — after all, sharing knowledge is kind of what public broadcasting is all about.

Joey Heinen – Harvard University

As has been famously outlined by the Library of Congress, digital formats are just as susceptible to obsolescence as analog formats due to any number of factors. At Harvard Library, my host for the NDSR, we are grappling with format migration frameworks at a broad level, while also looking to implement a plan for three specific, now-obsolete formats — Kodak PhotoCD, RealAudio and SMIL Playlists. So far my work has involved an examination of the biggest challenges for each format.

For example, Kodak PhotoCD incorporates a form of chroma subsampling (Photo YCC) based on the Rec. 709 standard for digital video rather than the various RGB or CIE profiles more typical for still images. Photo YCC captures color information beyond what is perceptible to the human eye and well beyond the confines of color profiles such as RGB (an example of format attributes that drive the migration process so as not to lose fundamental content and information from the original).
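
As a toy illustration of why that matters for migration (hypothetical values, using the generic Rec. 709 luma weights as a stand-in rather than the actual Photo YCC constants and scaling), decoding a YCC-style value can produce RGB components outside the displayable range, which a naive conversion would simply clip:

    # Toy illustration (not the actual Kodak Photo YCC math): a YCC-style
    # encoding built on the Rec. 709 luma weights can hold colors that fall
    # outside the RGB unit cube, so a naive migration to plain RGB clips
    # part of the original signal.

    KR, KG, KB = 0.2126, 0.7152, 0.0722  # Rec. 709 luma weights

    def ycc_to_rgb(y, cb, cr):
        """Invert a generic Y'CbCr transform derived from KR/KG/KB."""
        r = y + 2 * (1 - KR) * cr
        b = y + 2 * (1 - KB) * cb
        g = (y - KR * r - KB * b) / KG
        return r, g, b

    decoded = ycc_to_rgb(0.9, 0.05, 0.25)          # a bright, saturated sample
    clipped = tuple(min(1.0, max(0.0, c)) for c in decoded)
    print("decoded RGB:", tuple(round(c, 3) for c in decoded))
    print("clipped RGB:", clipped)                  # red channel > 1.0 is lost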

Other challenges that impact a project such as this are managing the human components (stakeholder roles and arriving upon shared conclusions about the format’s most noteworthy characteristics) as well as ensuring that existing tools for converting, validating and characterizing are correctly managing and reporting on the format (I explored some of these issues here). A bibliography (PDF) that I compiled is guiding this process, the contents of which have allowed me to approach the systems at Harvard in order to find the right partners and technological avenues for developing a framework. Look for more updates on the NDSR-Boston website (as well as my more substantive project update on “The Signal” in April 2015).

Jen LaBarbera – Northeastern University

My residency is at Northeastern University’s Archives and Special Collections, though as with a lot of digital preservation projects and/or programs, my work spans a number of other departments — library technology services, IT, Digital Scholarship Group and metadata management.

My project at Northeastern relies heavily on the new iteration of Northeastern’s Fedora-based digital repository (DRS), which is currently in its soft-launch phase and is set to roll out in a more public way in early 2015. My projects at Northeastern are best summed up by the following three goals: 1) create a workflow for ingesting recently born-digital content to the new DRS, 2) create a workflow for ingesting legacy born-digital (obsolete format) content to the new DRS, and 3) help Northeastern Libraries develop a digital preservation plan.

I’m starting with the first goal, ingesting recently born-digital content. As a test case to help us create a more general workflow, we’re working on ingesting the content of the Our Marathon archive. Our Marathon is a digital archive created as a digital humanities project following the bombing at the 2013 Boston Marathon. The goal is to transfer all materials (in a wide variety of formats) from their current states/platforms (Omeka, external hard drives, Google Drive, local server) to the new DRS. I’ve spent the first part of this residency drinking in all the information I can about the DRS, digital humanities projects (in general and at Northeastern), and wrapping my brain around these projects; now, the real fun begins!

Tricia Patterson – MIT Libraries

My residency is within MIT’s Lewis Music Library, a subject-specific library at MIT that is much-loved by students, faculty, and alumni. They are currently looking at digitizing and facilitating access to some of their analog audio special collections of MIT music performances, which has also catalyzed a need to think about their digital preservation. The “Music at MIT” digital audio project was developed in order to inventory, digitize, preserve, and facilitate access to audio content in their collections. And since audio content is prevalent throughout MIT collections, the “Making Music Last” initiative was designed to extend the work of the “Music at MIT” digital audio project and develop an optimal, detailed digital preservation workflow – which is where I came in!

Through the completion of a gap analysis of the existing workflow, a broad review of other fields’ workflow methodologies, and collaborations with stakeholders across the board, our team is working on creating a high and low-level life cycle workflow, calling out a digital audio use case, and evaluating suitable options for an access platform. This comprehensive workflow will contribute to the overall institutional knowledge instead of limiting important information to one stakeholder and clarify roles between individuals throughout the process, improving engagement and communication. Finally, mapping out the work process enhances our understanding of requirements for tools – such as Archivematica or BitCurator – that should be adopted and incorporated with a high degree of confidence for success. As the process moves from design to implementation and testing, the detailed workflow also ensures reliability and repeatable quality in our processes. It’s been a highly collaborative and educational process so far – stay tuned for how it pans out!

Islandora: 2015 Islandora Camps

planet code4lib - Fri, 2014-11-14 14:53

Another year, another lineup of Islandora Camps to bring the community together. We have a great roster of camps for 2015, hopefully providing all of you out there with at least one that's close and convenient so you can partake in Islandora's secret sauce.

Dates are not quite set yet for the latter events, but here's the general schedule so you can plan ahead:

Islandora Camp BC - Vancouver, BC - February 16 - 18
Islandora Camp EU2 - Madrid, Spain - May 27 - 29
Islandora Conference (way more info on this in days to come) - August
Islandora Camp CT - Hartford, CT - Late October or Early November

See you at Islandora Camp!

Open Knowledge Foundation: Global Witness and Open Knowledge – Working together to investigate and campaign against corruption related to the extractives industries

planet code4lib - Fri, 2014-11-14 11:34

Sam Leon, one of Open Knowledge’s data experts, talks about his experiences working as a School of Data Embedded Fellow at Global Witness.

Global Witness are a Nobel Peace Prize-nominated not-for-profit organisation devoted to investigating and campaigning against corruption related to the extractives industries. Earlier this year they received the TED Prize and were awarded $1 million to help fight corporate secrecy, on the back of which they launched their End Anonymous Companies campaign.

In February 2014 I began a six month ‘Embedded Fellowship’ at Global Witness, one of the world’s leading anti-corruption NGOs. Global Witness are no strangers to data. They’ve been publishing pioneering investigative research for over two decades now, piecing together the complex webs of financial transactions, shell companies and middlemen that so often lie at the heart of corruption in the extractives industries.

Like many campaigning organisations, Global Witness are seeking new and compelling ways to visualise their research, as well as to make more effective use of the large amounts of public data that have become available in the last few years.

“Sam Leon has unleashed a wave of innovation at Global Witness”
-Gavin Hayman, Executive Director of Global Witness

As part of my work, I’ve delivered data trainings at all levels of the organisation – from senior management to the front line staff. I’ve also been working with a variety of staff to use data collected by Global Witness to create compelling infographics. It’s amazing how powerful these can be to draw attention to stories and thus support Global Witness’s advocacy work.

The first interactive we published on the sharp rise of deaths of environmental defenders demonstrated this. The way we were able to pack some of the core insights of a much more detailed report into a series of images that people could dig into proved a hit on social media and let the story travel further.

See here for the full infographic on Global Witness’s website.

But powerful visualisation isn’t just about shareability. It’s also about making a point that would otherwise be hard to grasp without visual aids. Global Witness regularly publish mind-boggling statistics on the scale of corruption in the oil and gas sector.

“The interactive infographics we worked on with Open Knowledge made a big difference to the report’s online impact. The product allowed us to bring out the key themes of the report in a simple, compelling way. This allowed more people to absorb and share the key messages without having to read the full report, but also drew more people into reading it.”
-Oliver Courtney, Senior Campaigner at Global Witness

Take for instance, the $1.1 billion that the Nigerian people were deprived of due to the corruption around the sale of Africa’s largest oil block, OPL 245.

$1.1 billion doesn’t mean much to me, it’s too big of a number. What we sought to do visually was represent the loss to Nigerian citizens in terms of things we could understand like basic health care provision and education.

See here for the full infographic on Shell, ENI and Nigeria’s Missing Millions.


In October 2014, to accompany Global Witness’s campaign against anonymous company ownership, we worked with developers from data journalism startup J++ on The Great Rip Off map.

The aim was to bring together and visualise the vast number of corruption case studies involving shell companies that Global Witness and its partners have unearthed in recent years.


It was a challenging project that required input from designers, campaigners, developers, journalists and researchers, but we’re proud of what we produced.

Open data principles were followed throughout as Global Witness were committed to creating a resource that its partners could draw on in their advocacy efforts. The underlying data was made available in bulk under a Creative Commons Attribution Sharealike license and open source libraries like Leaflet.js were used. Other parties were also invited to submit case studies to the database.

“It’s transformed the way we work, it’s made us think differently how we communicate information: how we make it more accessible, visual and exciting. It’s really changed the way we do things.”
-Brendan O’Donnell, Campaign Leader at Global Witness

For more information on the School of Data Embedded Fellowship Scheme, and to see further details on the work we produced with Global Witness, including interactive infographics, please see the full report here.

ZBW German National Library of Economics: Publishing SPARQL queries live

planet code4lib - Thu, 2014-11-13 23:00

SPARQL queries are a great way to explore Linked Data sets - be it our STW with its links to other vocabularies, the papers of our repository EconStor, or persons or institutions in economics as authority data. ZBW has therefore offered public endpoints for a long time. Yet, it is often not so easy to figure out the right queries. The classes and properties used in the data sets are unknown, and the overall structure requires some exploration. Therefore, we have started collecting queries in our new SPARQL Lab - queries which are in use at ZBW, and which could serve others as examples of how to work with our datasets.

A major challenge was to publish queries in a way that allows not only their execution, but also their modification by users. The first approach to this was pre-filled HTML forms (e.g. http://zbw.eu/beta/sparql/stw.html). Yet that couples the query code with that of the HTML page, and with a hard-coded endpoint address. It does not scale to multiple queries on a diversity of endpoints, and it is difficult to test and to keep in sync with changes in the data sets. Besides, offering a simple text area without any editing support makes it quite hard for users to adapt a query to their needs.

And then came YASGUI, an "IDE" for SPARQL queries. Accompanied by the YASQE and YASR libraries, it offers a completely client-side, customizable, JavaScript-based editing and execution environment. Particular highlights from the libraries' descriptions include:

  • SPARQL syntax highlighting and error checking
  • Extremely customizable: All functions and handlers from the CodeMirror library are accessible
  • Persistent values (optional): your query is stored for easier reuse between browser sessions
  • Prefix autocompletion (using prefix.cc)
  • Property and class autocompletion (using the Linked Open Vocabularies API)
  • Can handle any valid SPARQL resultset format
  • Integration of preflabel.org for fetching URI labels

With a few lines of custom glue code, and with the friendly support of Laurens Rietveld, author of the YASGUI suite, it is now possible to load any query stored on GitHub into an instance on our beta site and execute it. Check it out - the URI

http://zbw.eu/beta/sparql-lab/?queryRef=https://api.github.com/repos/jneubert/sparql-queries/contents/class_overview.rq&endpoint=http://data.nobelprize.org/sparql

loads, views and executes the query stored at https://github.com/jneubert/sparql-queries/blob/master/class_overview.rq on the endpoint http://data.nobelprize.org/sparql (which is CORS enabled - a requirement for queryRef to work).
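
As a rough sketch of what happens behind that link (plain Python and HTTP rather than the YASGUI JavaScript itself; endpoint result-format support may vary), the same stored query can be fetched from GitHub and executed like this:

    # Rough re-creation of what the SPARQL Lab page does in the browser:
    # fetch the stored query from the GitHub contents API, then send it to
    # the endpoint via the standard SPARQL protocol.
    import requests

    QUERY_REF = ("https://api.github.com/repos/jneubert/sparql-queries/"
                 "contents/class_overview.rq")
    ENDPOINT = "http://data.nobelprize.org/sparql"

    # Ask GitHub for the raw file rather than the JSON wrapper.
    query = requests.get(QUERY_REF,
                         headers={"Accept": "application/vnd.github.v3.raw"}).text

    response = requests.get(ENDPOINT, params={"query": query},
                            headers={"Accept": "application/sparql-results+json"})
    for binding in response.json()["results"]["bindings"]:
        print(binding)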

Links like this, with descriptions of each query's purpose, grouped according to tasks and datasets, and ordered in a sensible way, may provide a much more accessible repository and starting point for explorations than just a directory listing of query files. For ongoing or finished research projects, such a repository - together with versioned data sets deployed on SPARQL endpoints - may offer an easy-to-follow and traceable way to verify presented results. GitHub provides an infrastructure for publicly sharing the version history, and makes contributions easy: Changes and improvements to the published queries can be proposed and integrated via pull requests, and an issue queue can handle bugs and suggestions. Links to queries authored by contributors, which may be saved in different repositories and project contexts, can be added straightaway. We would be very happy to include such contributions - please let us know.


HangingTogether: Reordering Ranganathan: the evolution of a research project

planet code4lib - Thu, 2014-11-13 21:08

In early 2012, I started on the report that became Reordering Ranganathan: Shifting User Behaviors, Shifting Priorities with Lynn Silipigni Connaway. Back then we called it the User Behavior Report. Not a catchy title, but it broadly reflected what we both studied. Our intention was to learn about each other's research and bring our experiences, perspectives, and research together under one umbrella.

You may be wondering why we had to learn about each other's research given we worked for the same organization. I actually started at OCLC Research just 6 months prior in 2011. Lynn and I had very disparate experiences, perspectives, and paths to OCLC. I earned a Ph.D. in Business Administration – Information Systems; Lynn earned her Ph.D. in Library and Information Science with a minor in Public Policy. Before beginning a research career, I worked in tech companies and Lynn worked in school, academic, and public libraries.

As colleagues, we wanted to explore how our research interests overlapped and begin to think about collaborative user behavior projects.  We wanted to develop a common set of ideas we could collectively contribute to through our research.  We also wanted to describe the ideas in ways that would be relevant to our intended audiences– librarians, library researchers, information scientists.

In studying user behavior, we both are interested in how people discover, access, and use/reuse content.  In an early outline for our report we wrote “We want to know how people are getting their information, why they are making these choices, and what information or sources are meeting their needs.”

At one of our meetings, Lynn suggested using Ranganathan’s five laws as a framework for our report.  I was intrigued.  Given my background, I never had heard of them.  But as we began reviewing the laws and literature about them, it was interesting for me to think about them in the context of my research interests.

Over the course of several meetings we discussed our understanding of each law and thought about how our research areas applied. In doing so we began to stretch, adapt, and change each law’s wording to help us more clearly articulate to each other why we thought our research fit.

Take the first law, “books are for use.”  Like many researchers, our interests extend beyond books to other physical and digital materials in the library and more generally on the Web.  Moreover, we are interested in “how people are getting their information.”  Our interpretation of the law reflects these overlapping interests – develop the physical and technical infrastructure needed to deliver physical and digital materials.  Our interpretations of the other laws developed in similar ways.

Discussions with a colleague, Andy Havens, prompted us to reorder the laws as well.  When we thought about it, we agreed that scarcity of time not content is the challenge for people these days.  Inundated with information, we want not only quick, but also convenient ways to find, get, and use what we need. And with that the reordering began.

We organized the report so that each chapter could stand on its own.  In each chapter, we examine the law in today’s environment given scholars’ interpretations and research in our areas of interest. We also discuss some ideas about how to apply our interpretations of the law given findings from the research.

Although the project began as a means to help us think about the purpose and scope of our research and how our interests overlapped, we also were interested to see what libraries were doing in practice when it came to our interpretation of Ranganathan’s five laws.  Could we find examples of what we described?

We found a number of exciting, interesting ways the laws are currently unfolding in practice. We could include only a small fraction, but our hope is that reading the report or listening to the webinar will not only spark new initiatives, but also encourage you to share your current ones.

About Ixchel Faniel

Ixchel Faniel is Associate Research Scientist for OCLC Research. She is currently working on projects examining data reuse within academic communities to identify how contextual information about the data that supports reuse can best be created and preserved. She also examines librarians' early experiences designing and delivering research data services with the objective of informing practical, effective approaches for the larger academic community.


Islandora: Islandora Camp BC - Instructors Announced!

planet code4lib - Thu, 2014-11-13 20:36

We are very pleased to share the roster of workshop instructors for the upcoming Islandora Camp in Vancouver, BC. Camp will, as usual, split up into two groups for hands-on Islandora time on the second day: one group exploring the front-end in the Admin track, and the other looking at code in the Developer track. Here are your instructors:

Developer Track

Mark Jordan is the Head of Library Systems at Simon Fraser University. He has been developing in Drupal since 2007 and is currently leading the effort to migrate SFU Library's digital collections to Islandora. He is a member of the Islandora 7.x-1.3 and 7.x-1.4 release teams and is component manager for several Islandora modules that deal with digital preservation (and developer of several other Islandora-related tools available at his GitHub page). He is also author of Putting Content Online: A Practical Guide for Libraries (Chandos, 2006). Mark taught in the Developer track at iCampCA in 2014.

Mitch MacKenzie is a Solution Architect at discoverygarden where he manages the execution of Islandora projects for institutions across North America and Europe. Mitch has been developing Islandora tools for three years and has been building with Drupal since 2006. His development contributions include the initial work on the Islandora Compound Solution Pack, Islandora Sync, Simple Workflow, and porting the XML Forms family of modules to Islandora 7. This is Mitch's first Islandora Camp as an instructor.

Admin Track

Melissa Anez has been working with Islandora since 2012 and has been the Community and Project Manager of the Islandora Foundation since it was founded in 2013. She spends her time arranging Islandora events, doing what she can to keep the Islandora community ticking along, and writing about herself in the third person in blog posts. Melissa taught in the Admin Track at several previous camps.

Erin Tripp is a librarian, journalist, and business development manager for an open source software services company. Personally, Erin believes in investing in people and ideas – making the open source software space a natural fit. Since 2011, Erin’s been involved in the Islandora project and has served as project manager on close to 30 different Islandora projects. The projects ranged from consulting and installation to custom development and data migrations. This is Erin's first Islandora Camp as an instructor.

The rest of Camp will be filled with sessions, and we want you to get up and lead the room. A Call for Proposals is open until December 15th. You can check out the slides linked on schedules from previous camps to see what kinds of presentations the community has given in the past.

Cherry Hill Company: Using Islandora for Digital Content Delivery at LITA Forum 2014

planet code4lib - Thu, 2014-11-13 19:32

The 2014 LITA Forum took place in Albuquerque, NM the first week of November. I had the opportunity to present on Islandora and what we accomplished with the Detroit Public Library Digital Collections site that we built.

Presentation

The slides for the presentation are available on SlideShare and on ALA Connect.

At the beginning of my presentation I took some time to answer the question, "Why use a DAMS?" instead of building an image gallery on your website or using a web service. (We will have a blog post up next week expanding on this topic.)

About half the audience had heard of Islandora, but only a few were using it in their libraries. Most of them did not know the details about what Islandora is and what...

Read more »

LITA: Digital Curation Tools I Want to Learn

planet code4lib - Thu, 2014-11-13 13:00

When I first started my job as Digital Curation Coordinator in June, I didn’t quite know what I would be doing. Then I started figuring it out.  As I’ve gotten settled, I’ve realized that I want to be more proactive in identifying tools and platforms that the researchers I’m working with are using so that I can connect with their experience more easily.

However, the truth is that I find it hard to know what tools I should focus on. What usually happens when I learn about a new tool is a cursory read through the documentation… I familiarize myself well enough to share in a few sentences what it does, but most of the time I don’t become incredibly familiar. There are just soooo many tools out there. It’s daunting.

Knowing my tendencies, I decided it would be a good challenge for me to dig deeper into three areas where I am more ignorant than I’d like to be.

Data analysis programs R, SPSS, & SAS

I don’t know a lot about data analysis but I think it will be critical in terms of how well I can understand researchers. Of the three, I’m most familiar with SPSS already and I’ll probably devote the most time to learning R (perhaps through this data science MOOC, which fellow LITA blog writer Bryan pointed out). With SAS, I’m mostly interested in learning how it differs from the others rather than delving too deep.

Metadata editors Colectica & Morpho

Why these two? It’s pretty arbitrary, I guess: I learned about them in a recent ecology data management workshop I was presenting at. As is often the case, I learned a lot from the other presenters! A big part of my job is figuring out how to help researchers manage their data – and a big barrier to that is the painstaking work of creating metadata.

Digital forensics tool BitCurator

I was lucky enough to be able to attend a two-day workshop at my institution, so I have played around with this in the past. BitCurator is an impressive suite of tools that I’m convinced I need to find a use case to explore further. This is a perfect example of a tool I know decently already – but I really want to know better, especially since I already have people bringing me obsolete media and asking what I can do about it.

What tools do you want to learn? And for anyone who helps researchers with data management in some capacity, what additional tools do you recommend I look into?

Hydra Project: OR2015 – Call for Proposals

planet code4lib - Thu, 2014-11-13 09:46

Of interest to many Hydranauts:

The Tenth International Conference on Open Repositories, OR2015, will be held on June 8-11, 2015 in Indianapolis (Indiana, USA). The organizers are pleased to invite you to contribute to the program. This year’s conference theme is:

LOOKING BACK, MOVING FORWARD: OPEN REPOSITORIES AT THE CROSSROADS

OR2015 is the tenth OR conference, and this year’s overarching theme reflects that milestone: Looking Back/Moving Forward: Open Repositories at the Crossroads. It is an opportunity to reflect on and to celebrate the transformative changes in repositories, scholarly communication and research data over the last decade. More critically however, it will also help to ensure that open repositories continue to play a key role in supporting, shaping and sharing those changes and an open agenda for research and scholarship.

The full call for proposals can be found at http://www.or2015.net/call-for-proposals/

DuraSpace News: Announcing DuraCloud™ @TDL for Digital Preservation

planet code4lib - Thu, 2014-11-13 00:00

From Kristi Park, Marketing Manager, Texas Digital Library

Austin, Texas – The Texas Digital Library (TDL) is pleased to announce the development of cost-effective, easy-to-use digital preservation storage for its member institutions through DuraCloud™@TDL. With DuraCloud™@TDL, members can accurately plan preservation costs, enjoy predictability of service, and rely on known, durable technologies for ensuring the integrity of their digital collections.

DuraSpace News: CALL for Proposals: Tenth International Conference on Open Repositories 2015

planet code4lib - Thu, 2014-11-13 00:00

The Tenth International Conference on Open Repositories, OR2015, will be held on June 8-11, 2015 in Indianapolis (Indiana, USA). The organizers are pleased to invite you to contribute to the program. This year's conference theme is: 

LOOKING BACK, MOVING FORWARD: OPEN REPOSITORIES AT THE CROSSROADS

DPLA: From the DPLA Collections: Top 10 Mustachioed Men of the Civil War

planet code4lib - Wed, 2014-11-12 19:36

Happy Movember, DPLA friends! The month of November brings about a great many things—Thanksgiving, brisk breezes, falling leaves—including ditching the razor for a good cause. Movember encourages participants to grow out mustaches and beards to raise awareness for men’s health issues.

In celebration, we’re providing some historic grooming inspiration. Check back once a week for a selection of some of the best beards and mustaches from the DPLA collection, and up your “Movember” game!

This week, we’re featuring the one thing the North and South could unite around: excellent facial hair.

Major General John M. Schofield, Officer of the Federal Army
Captain William Harris Northrup
John F. Mackie, Medal of Honor Recipient
Lewis C. Shepard, Medal of Honor Recipient
Portrait of Captain George E. Dolphin, of Minnesota
Portrait of Andrew Anderson, of Minnesota
Portrait of Captain Asgrim K. Skaro [?], of Minnesota
Portrait of Jacob Dieter, of Minnesota
Portrait of Simon Gabert
Portrait of Jeremiah C. Donahower, of Minnesota

LITA: Jobs in Information Technology: November 12

planet code4lib - Wed, 2014-11-12 19:26

New vacancy listings are posted weekly on Wednesday at approximately 12 noon Central Time. They appear under New This Week and under the appropriate regional listing. Postings remain on the LITA Job Site for a minimum of four weeks.

New This Week

Archivist for Collection Management, University of North Carolina - Charlotte, Charlotte, NC

Deputy Director – Digital Services, Meridian Library District, Meridian, ID

Librarian – E-Learning, College of Southern Nevada, Las Vegas, NV

 

Visit the LITA Job Site for more available jobs and for information on submitting a  job posting.

 

Islandora: Islandora Documentation Survey

planet code4lib - Wed, 2014-11-12 19:23

Hello Islandora Community!

Documentation is an important aspect of any software and an essential tool for the Islandora community.

With the new release out, the Islandora Documentation Interest Group (IDIG) would love to hear about your experience using the newly updated 7.x-1.4 documentation.

By participating in this survey, you will help the IDIG to better understand and serve the needs of our users.

To contribute, please fill out the Islandora Community Survey, which should take approximately 15 minutes to complete.

If you have any questions please contact us at community@islandora.ca

Evergreen ILS: Welcome Julia to the Evergreen OPW Internship Project

planet code4lib - Wed, 2014-11-12 19:19

The Free and Open Source Software Outreach Program for Women announced the names of interns who will be participating in the program’s December 2014 – March 2015 internship round.

The Evergreen project is pleased to announce that Julia Lima of Villa Carlos Paz, Cordoba, Argentina will be working with the Evergreen community during this internship period to create a User Interface Style Guide for the new web client.

Julia is a student studying design at the Universidad Provincial de Cordoba in Argentina. She will work on the style guide during her summer break. As part of her contribution to the project this fall, Julia made three specific recommendations to improve the existing web client User Interface.

The Evergreen OPW mentors selected Julia’s proposal after reviewing proposals from nine potential candidates, several of whom worked with the community during the application period and submitted very good proposals. Bill Erickson, Grace Dunbar, and Dan Wells will serve as project mentors for the UI Style Guide.

Expect to hear more from Julia as she begins working on her project next month.

We also hope to continue hearing from the many candidates with whom we connected during the application period. They brought a lot of enthusiasm and fresh ideas to the project, and we encourage everyone to keep working with us as time allows.

I also want to extend thanks to all the mentors who worked with potential candidates during the application period and reviewed the applications; to others in the community who helped the candidates with installation, answered their questions, and provided feedback to their ideas; and to the Evergreen Oversight Board for supporting the project by funding the internship.

 

 

 

 

LITA: Why Learn Unix? My Two Cents

planet code4lib - Wed, 2014-11-12 12:00

There’s a conversation shaping up on the Code4Lib email list with the title “Why Learn Unix?”, and this is a wonderful question to ask. A lot of technical library jobs are asking for UNIX experience and as a result a lot of library schools are injecting bits and pieces of it into their courses, but without a proper understanding of the why of Unix, the how might just go in one ear and out the other. When I was learning about Unix in library school, it was in the context of an introductory course to library IT. I needed no convincing; I fell in love almost immediately and cemented my future as a command line junkie. Others in the course were not so easily impressed, and never received a satisfactory answer to the question of “Why Learn Unix?” other than a terse “Because It’s Required”. Without a solid understanding of a technology’s use, it’s nearly impossible to maintain motivation to learn it. This is especially true of something as archaic and intimidating as the Unix command line interface that looks like something out of an early 90’s hacker movie. Those who don’t know Unix get along just fine, so what’s the big deal?

The big deal is that Unix is the 800 lb. gorilla of the IT world. While desktops and laptops are usually a pretty even split between Windows and Mac, the server world is almost entirely Unix (either Linux or BSD, both of which are UNIX variants). If you work in a reasonably technical position, you have probably had to log in to one of these Unix servers before to do something. If you are in library school and looking to get a tech oriented library job after graduating, this WILL happen to you, maybe even before you graduate (a good 50% of my student worker jobs were the result of knowing Unix). As libraries move away from vendor software and externally hosted systems towards Open Source software, Unix use is only going to increase because pretty much all Open Source software is designed to run on Linux (which is itself Open Source software). The road to an Open Source future for libraries is paved with LIS graduates who know their way around a command line.

So let’s assume that I’ve convinced you to learn Unix. What now? The first step on the journey is deciding how much Unix you want to learn. Unix is deep enough that one can spend a great deal of time getting lost in its complexities (not to say that this wouldn’t be time well spent). The most important initial steps of any foray into the world of Unix should start with how to log in to the system (which can vary a lot depending on whether you are using Windows or Mac, and what Unix system you are trying to log in to). Once you have that under control, learn the basic commands for navigating around the system, copying and deleting files, and checking the built-in manual (University of Illinois has a great cheat sheet).

How to learn Unix as opposed to why is a completely separate conversation with just as many strong opinions, but I will say that learning Unix requires more courage than intelligence. The reason most people actively avoid using Unix is because it is so different from the point-and-click world they are used to, but once you get the basics under your belt you may find that you prefer it. There are a lot of things that are much easier to do via command line (once you know how), and if you get really good at it you can even chain commands together into a script that can automatically perform complex actions that might take hours (or days, or weeks, or years) to do by hand. This scriptability is where Unix systems really shine, but by no means do you have to dive in this deep to find value in learning Unix. If you take the time to learn the basics, there will come a time when that knowledge pays off. Who knows, it might even change the direction of your career path.
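
As a small taste of that scriptability, here is the kind of one-liner that becomes possible once the basics are in place (the log file names are made up for illustration):

    # Count how many lines in each export log mention "error", highest first.
    grep -c "error" /var/log/exports/*.log | sort -t: -k2 -nr | head -5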

Do you have any questions or opinions about the need for librarians to learn Unix? Are you struggling with learning Unix and want to air your grievances? Are you a wizard who wants to point out the inaccurate parts of my post? Let me know in the comments!

PeerLibrary: Check out our brand new screencast video of PeerLibrary 0.3! We...

planet code4lib - Wed, 2014-11-12 06:11


Check out our brand new screencast video of PeerLibrary 0.3!

We are proud to announce an updated screencast which demos the increased functionality and updated user interface of the PeerLibrary website. This screencast debuted at the Mozilla Festival in October as part of our science fair presentation. The video showcases an article by Paul Dourish and Scott D. Mainwaring entitled “Ubicomp’s Colonial Impulse” as well as the easy commenting and discussion features which PeerLibrary emphasizes. One of the MozFest conference attendees actually recognized the article which drew him towards our booth and into a conversation with our team. Check out the new screencast and let us know what you think!

DuraSpace News: The DSpace 5 Testathon is Underway!

planet code4lib - Wed, 2014-11-12 00:00

From Hardy Pottinger on behalf of the DSpace Committers

The DSpace 5.0 Testathon is going on right now, and will continue through November 21, 2014.

TESTATHON ESSENTIALS

• Details on how to participate: see [1]

• Details about new features, bug fixes in 5.0 and release schedule: see [2]
