Planet Code4Lib

David Rosenthal: Infrastructure for Emulation

Tue, 2015-09-08 23:00
I've been writing a report about emulation as a preservation strategy. Below the fold, a discussion of one of the ideas that I've been thinking about as I write, the unique position national libraries are in to assist with building the infrastructure emulation needs to succeed.

Less and less of the digital content that forms our cultural heritage consists of static documents, more and more is dynamic. Static digital documents have traditionally been preserved by migration. Dynamic content is generally not amenable to migration and must be preserved by emulation.

Successful emulation requires that the entire software stack be preserved. Not just the bits the content creator generated and over which the creator presumably has rights allowing preservation, but also the operating system, libraries, databases and services upon which the execution of the bits depends. The creator presumably has no preservation rights over this software, which is nevertheless necessary for the realization of their work. A creator wishing to ensure that future audiences can access their work has no legal way to do so. In fact, creators cannot even legally sell their work in any durably accessible form. They do not own an instance of the infrastructure upon which it depends; they merely have a (probably non-transferable) license to use an instance of it.

Thus a key to future scholars' ability to access the cultural heritage of the present is that in the present all these software components be collected, preserved, and made accessible. One way to do this would be for some international organization to establish and operate a global archive of software. In an initiative called PERSIST, UNESCO is considering setting up such a Global Repository of software. The technical problems of doing so are manageable, but the legal and economic difficulties are formidable.

The intellectual property frameworks, primarily copyright and the contract law underlying the End User License Agreements (EULAs), under which software is published differ from country to country. At least in the US, where much software originates, these frameworks make collecting, preserving and providing access to collections of software impossible except with the specific permission of every copyright holder. The situation in other countries is similar. International trade negotiations such as the TPP are being used by copyright interests to make these restrictions even more onerous.

For the hypothetical operator of the global software archive to identify the current holder of the copyright on every software component that should be archived, and negotiate permission with each of them for every country involved, would be enormously expensive. Research has shown that the resources devoted to current digital preservation efforts, such as those for e-journals, e-books and the Web, suffice to collect and preserve less than half of the material in their scope. Absent major additional funding, diverting resources from these existing efforts to fund the global software archive would be robbing Peter to pay Paul.

Worse, the fact that the global software archive would need to obtain permission before ingesting each publisher's software means that there would be significant delays before the collection would be formed, let alone be effective in supporting scholars' access.

An alternative approach worth considering would separate the issues of permission to collect from the issues of permission to provide access. Software is subject to copyright. In the paper world, many countries had copyright deposit legislation allowing their national library to acquire, preserve and provide access (generally restricted to readers physically at the library) to copyright material. Many countries, including most of the major software producing countries, have passed legislation extending their national library's rights to the digital domain.

The result is that most of the relevant national libraries already have the right to acquire and preserve digital works, although not the right to provide unrestricted access to them. Many national libraries have collected digital works in physical form. For example, the German National Library's CD-ROM collection includes half a million items. Many national libraries are crawling the Web to ingest Web pages relevant to their collections.

It does not appear that national libraries are consistently exercising their right to acquire and preserve the software components needed to support future emulations, such as operating systems, libraries and databases. A simple change of policy by major national libraries could be effective immediately in ensuring that these components were archived. Each national library's collection could be accessed by emulations on-site. No time-consuming negotiations with publishers would be needed.

An initial step would be for national libraries to assess the set of software components that would be needed to provide the basis for emulating the digital artefacts already in their collections, which of them were already to hand, and what could be done to acquire the missing pieces. The German National Library is working on a project of this kind with the bwFLA team at the University of Freiburg, which will be presented at iPRES2015.

The technical infrastructure needed to make these diverse national software collections accessible as a single homogeneous global software archive is already in place. Existing emulation frameworks access their software components via the Web, and the Memento protocol aggregates disparate collections into a single resource.
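To make the protocol point concrete, here is a minimal sketch of Memento (RFC 7089) datetime negotiation, the mechanism that lets a client ask a TimeGate for the copy of a resource closest to a given date. The aggregator URL and the resource URI below are illustrative assumptions, not a reference to any particular library's infrastructure:

```java
import java.net.HttpURLConnection;
import java.net.URL;

// Minimal sketch of Memento (RFC 7089) datetime negotiation against a
// TimeGate. The TimeGate prefix and resource URI are placeholders.
public class MementoLookup {
    public static void main(String[] args) throws Exception {
        String timeGate = "http://timetravel.mementoweb.org/timegate/"
                + "http://example.org/some/software/component";

        HttpURLConnection conn =
                (HttpURLConnection) new URL(timeGate).openConnection();
        conn.setRequestMethod("HEAD");
        // Ask for the memento closest to this datetime.
        conn.setRequestProperty("Accept-Datetime",
                "Mon, 07 Sep 2015 00:00:00 GMT");
        conn.setInstanceFollowRedirects(false);

        // A compliant TimeGate redirects to the selected memento and
        // returns a Link header pointing at the TimeMap and the original.
        System.out.println("Status:   " + conn.getResponseCode());
        System.out.println("Location: " + conn.getHeaderField("Location"));
        System.out.println("Link:     " + conn.getHeaderField("Link"));
    }
}
```

An emulation framework could use the same negotiation to resolve a needed software component to whichever collection happens to hold it, without knowing in advance where that collection lives.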

Of course, absent publisher agreements it would not be legal for national libraries to make their software collections accessible in this way. But negotiations about the terms of access could proceed in parallel with the growth of the collections. Global agreement would not be needed; national libraries could strike individual, country-specific agreements which would be enforced by their access control systems.

Incremental partial agreements would be valuable. For example, agreements allowing scholars at one national library to access preserved software components at another would reduce duplication of effort and storage without posing additional risk to publisher business models.

By breaking the link that makes building collections dependent on permission to provide access, by basing collections on the existing copyright deposit legislation, and by making success depend on the accumulation of partial, local agreements instead of a few comprehensive global agreements, this approach could cut the Gordian knot that has so far prevented the necessary infrastructure for emulation being established.

Bohyun Kim: From Programmable Biology to Robots and Bitcoin – New Technology Frontier

Tue, 2015-09-08 20:50

A while ago, I gave a webinar on the topic of the new technology frontier for libraries. This webinar was given for the South Central Regional Library Council Webinar Series.  I don’t often get asked to pick technologies that I think are exciting for libraries and library patrons. So I went wild! These are the six technology trends that I picked.

  • Maker Programs
  • Programmable Biology (or Synthetic Biology)
  • Robots
  • Drones
  • Bitcoin (Virtual currency)
  • Gamification (or Digital engagement)

OK, actually the maker programs, drones, and gamification are not too wild, I admit. But programmable biology, robots, and bitcoin were really fun to talk about.

I did not necessarily pick the technologies that I thought would be widely adopted by libraries, as you can guess pretty well from bitcoin. Instead, I tried to pick technologies that are tackling interesting problems whose solutions are likely to have a great impact on our future and our library patrons’ lives. It is important to note not only what a new technology is and how it works but also how it can influence our lives, and therefore, ultimately, library patrons and libraries.

Below are my slides. And if you must, you can watch the webinar recording on YouTube as well. Which of these technologies would you pick if you got to pick your own? If not one of these, what else would it be?

Back to the Future Part III: Libraries and the New Technology Frontier

Eric Hellman: Hey, Google! Move Blogspot to HTTPS now!

Tue, 2015-09-08 15:35
Since I've been supporting a Library Privacy Pledge to implement HTTPS, I've made an inventory of the services I use myself, to make sure that they will all be HTTPS by the end of 2016. The main outlier: THIS BLOG!

This is odd, because Google, the owner of Blogger and Blogspot, has made noise about moving its services to HTTPS, marking HTTP pages as non-secure, and is even giving extra search engine weight to webpages that use HTTPS.

I'd like to nudge Google, now that it's remade its logo and everything, to get its act together on providing secure service for Blogger. So I set the "description" of my blog to "Move Blogspot to HTTPS NOW." If you have a blog on Blogspot, you can do the same. Go to your control panel and click settings; "description" is the second setting at the top. Depending on the design of your page, it will look like this:

So Google, if you want to avoid a devastating loss of traffic when I move Go-To-Hellman to another platform on January 1, 2017, you better get cracking. Consider yourself warned.

Library of Congress: The Signal: The National Digital Platform for Libraries: An Interview with Trevor Owens and Emily Reynolds from IMLS

Tue, 2015-09-08 14:01

I had the chance to ask Trevor Owens and Emily Reynolds at the Institute of Museum and Library Services (IMLS) about the national digital platform priority and current IMLS grant opportunities.  I was interested to hear how these opportunities could support ongoing activities and research in the digital preservation and stewardship communities.

Erin: Could you give us a quick overview of the Institute of Museum and Library Services national digital platform? In what way is it similar or different from how IMLS has previously funded research and development for digital tools and services?

Trevor Owens, IMLS senior program officer.

Trevor: The national digital platform has to do with the digital capability and capacity of libraries across the U.S. It is the combination of software applications, social and technical infrastructure, and staff expertise that provide library content and services to all users in the US. The idea for the platform has been developed in dialog with a range of stakeholders through annual convenings. For more information on those, you can see the notes (PDF) and videos from our 2014 and 2015 IMLS Focus convenings.

As libraries increasingly use digital infrastructure to provide access to digital content and resources, there are more opportunities for collaboration around the tools and services used to meet their users’ needs. It is possible for every library in the country to leverage and benefit from the work of other libraries in shared digital services, systems, and infrastructure. We need to bridge gaps between disparate pieces of the existing digital infrastructure for increased efficiencies, cost savings, access, and services.

IMLS is focusing on the national digital platform as an area of priority in the National Leadership Grants to Libraries and the Laura Bush 21st Century Librarian grant programs. Both of these programs have October 1st deadlines for two-page preliminary proposals and will have another deadline for proposals in February. It is also relevant to the Sparks! Ignition Grants for Libraries program.

Erin: One of the priorities identified in the 2015 NDSA National Agenda for Digital Stewardship (PDF) centers around enhancing staffing and training, and the report on the recent national digital platform convening (PDF) stresses issues in supporting professional development and training.  There’s obvious overlap here; how do you see the current education and training opportunities in the preservation community contributing to the platform?  How would you like to see them expanded?

Emily Reynolds, IMLS program specialist and 2014 Future Steward NDSA Innovation Awardee.

Emily: We know that there are many excellent efforts that support digital skill development for librarians and archivists. Since so much of this groundwork has been done, with projects like POWRR, DigCCurr, and the Digital Preservation Management Workshops, we’d love to see collaborative approaches that build on existing curricula and can serve as stepping stones or models for future efforts. That is to say, we don’t need to keep reinventing the wheel! Increasing collaboration also broadens the opportunities for updating training as time passes and desirable skills change.

The impact that the education and training component has on the national digital platform as a whole is tremendous. Even for projects without a specific focus on professional development or training, we’re emphasizing things like documentation and outreach to professional staff. After all, what good is all of this infrastructure if the vast majority of librarians can’t use it? We need to make sure that the tools and systems being used nationally are available and usable to professionals at all types of organizations, even those with fewer resources, and training is a big part of making that happen.

Erin:  Another priority identified in the Agenda is supporting content selection at scale.  For example, there are huge challenges in collecting and preserving the large amounts of digital content that libraries and archives may be interested in for their users, patrons, or researchers.  One of those challenges is knowing what has been created, what is being collected, and what is available for access.  Do you see the national digital platform supporting any activities or research around digital content selection?

Trevor: Yes, content selection at scale fits squarely in a broader need for using computational methods to scale up library practices in many different areas. One of the panels at the national digital platform convening this year focused directly on scaling up practice in libraries and archives. Broadly, this included discussions of crowdsourcing, linked data, machine learning, natural language processing and data mining. All of these have considerable potential to move further away from doing things one at a time and duplicating effort.

As an example that directly addresses the issue of content selection at scale, in the first set of grants awarded through the national digital platform, one focuses directly on this issue for web archives. In Combining Social Media Storytelling with Web Archives (LG-71-15-0077) (PDF), Old Dominion University and the Internet Archive are working to develop tools and techniques for integrating “storytelling” social media and web archiving. The partners will use information retrieval techniques to (semi-)automatically generate stories summarizing a collection and mine existing public stories as a basis for librarians, archivists, and curators to create collections about breaking events.

Erin: Supporting interoperability seems to be a strong and necessary component of the platform.  Could you discuss broadly and specifically what role interoperable tools or services could fill for the platform? For example, IMLS recently funded the Hydra-in-a-Box project, an open source digital repository, so it would be interesting to hear how you see the digital preservation community’s existing and developing tools and services working together to benefit the platform.

“Defining and Funding the National Digital Platform” panel (James G. Neal, Amy Garmer, Brett Bobley, Trevor Owens). Courtesy of IMLS.

Trevor: First off, I’d stress that the platform already exists, it’s just not well connected and there are lots of gaps where it needs work. The Platform is the aggregate of the tools and services that libraries, archives and museums build, use and maintain. It also includes the skills and expertise required to put those tools and services into use for users across the country. Through the platform, we are asking the national community to look at what exists and think about how they can fill in gaps in that ecosystem. From that perspective, interoperability is a huge component here. What we need are tools and services that easily fit together so that libraries can benefit from the work of others.

The Hydra-in-a-box project is a great example of how folks in the library and archives community are thinking. The full name of that project, Fostering a New National Library Network through a Community-­Based, Connected Repository System (LG-70-15-0006) (PDF), gets into more of the logic going on behind it. What I think reviewers found compelling about this project is how it brought together a series of related problems and initiatives, and is working to bridge different, but related, library communities.

On one hand, the Digital Public Library of America is integrating with a lot of different legacy systems, from which it’s challenging to share collection data. The Fedora Hydra open source software community has been growing significantly across academic libraries. There is a barrier for entrants to start using Hydra. Large academic libraries that often have several developers working on their projects are the ones who are able to use and benefit from Hydra at this point. By working together, these partners can create and promulgate a solution that makes it easier for more organizations to use Hydra. When more organizations can use Hydra, more organizations can then become content hubs for the DPLA. The partnership with DuraSpace brings their experience in sustaining digital projects, and the possibility of establishing hosted solutions for a system that could provide Hydra to smaller institutions.

“The State of Distributed National Capacity” panel (James Shulman, Sibyl Schaefer, Evelyn McLellan, Dan Cohen, Tom Scheinfeldt) Courtesy of IMLS.

Erin: IMLS hosted Focus Convenings on the national digital platform in April 2014 and April 2015.  Engaging communities and end users at the local level seemed to be a recurring theme at both meetings, but also how to encourage involvement and share resources at the national level.  What are some of the opportunities the digital preservation community could address related to engagement activities to support this theme?

Emily: I think this is a question we’re still actively trying to figure out, and we are interested in seeing ideas from libraries and librarians on how we can help in these areas. We know that there are communities whose records and voices aren’t equally represented in a range of national efforts, and we know that in many cases there are unique issues around cultural sensitivity. Addressing those issues requires direct and sustained contact with, and understanding of, the groups involved.  For example, one of the reasons Mukurtu CMS has been so successful with Native communities is because of how embedded in the project those communities’ concerns are. Those relationships have allowed Mukurtu to create a national network of collections while still encouraging individual repositories to maintain local perspectives and relationships.

Engaging communities to participate in national digital platform activities is another way to address concerns about local involvement. We’ve seen great success with the Crowd Consortium, for example, and the tools and relationships that are being developed around crowdsourcing. Various institutions have also done a great deal of work in this area through use of HistoryPin and similar tools. Crowdsourcing and other opportunities for community engagement in digital collections have the unique capacity to solicit and incorporate the viewpoints and input of a huge range of participants.

Erin: Do you have any thoughts on what would make a proposal compelling? Either a theme or project-related topic that fits with the national digital platform priority?

Participants at IMLS Focus: The National Digital Platform. Courtesy of IMLS.

Trevor: The criteria for evaluating proposals for any of our programs are spelled out in the relevant IMLS Notice of Funding Opportunity. The good news is that there aren’t any secrets to this. The proposals likely to be the most compelling are going to be the ones that best respond to the criteria for any individual program. Across all of the programs, applicants need to make the case that there is a significant need for the work they are going to engage in. Things like the report from the national digital platform convening are a great way to establish the case for the need for the work an applicant wants to do.

I’m also happy to offer thoughts on some points in proposals that aren’t quite as competitive. For the National Leadership Grants, I can’t stress enough the words National and Leadership. This is a very competitive program and the things that rise to the top are generally going to be the things that have a clear, straightforward path to making a national impact. So spend a lot of time thinking about what that national impact would be and how you would measure the change a project could make.

Emily: The Laura Bush 21st Century Librarian Program focuses on building human capital capacity in libraries and archives, through continuing education, as well as through formal LIS master’s and doctoral programs. Naturally, when we talk about “21st century skills” in this program, a lot of capabilities related to technology and the national digital platform surface. Projects in this program are most successful when they show awareness of work that has come before, and explain how they are building upon that previous work. Similarly, and as with all of our programs, reviewers are looking to see how the results of the project will be shared with the field.

For example, the National Digital Stewardship Residency (NDSR) has been very successful with Laura Bush peer reviewers. The original Library of Congress NDSR built on the Library’s existing DPOE curriculum. Subsequently, the New York and Boston NDSR programs adapted the Library of Congress’s model based on resident feedback and other findings. Now we’re seeing a new distributed version of the model being piloted by WGBH. This is a great example of a project that is replicable and iterative. Each organization modified it based on their specific situation, contributing to an overall vision of the program and increasing the impact of IMLS funding.

The Sparks! grants are a little different than the grants of other programs because the funding cap for this program is much lower, at $25,000, and has no cost share requirement. Sparks! is intended to fund projects that are innovative and potentially somewhat risky. It’s a great opportunity for prototyping new tools, exploring new collaborations, and testing new services. As a special funding opportunity within the IMLS National Leadership Grants for Libraries program, Sparks! guidelines also call for potential for broad impact and innovative approaches. Funded projects are required to submit a final report in the form of a white paper that is published on the IMLS website, in order to ensure that these new approaches are shared with the community.

Maura Marx, Acting Director of IMLS, wrapping up at IMLS Focus. Courtesy of IMLS.

Erin: I’m sure many of our readers have applied for IMLS grants in previous cycles. Could you talk a bit about the current proposal process?  Is there any other info you’d like to share with our readers about it?

Emily: The traditional application process, and the one currently used in the Sparks! program, is that applicants submit a full proposal at the time of the application deadline. This includes a narrative, a complete budget and budget justification, staff resumes, and a great deal of other documentation. With Sparks!, these applications are sent directly to peer reviewers in the field, and funding decisions are made based on their scores.

We’ve made some significant changes to the National Leadership Grants and Laura Bush 21st Century Librarian program. For FY16, both programs will require the submission of only a two-page preliminary proposal, along with a couple of standard forms. The preliminary proposals will be sent to peer reviewers, and IMLS will hold a panel meeting with the reviewers to select the most promising proposals. That subset of applicants is then invited to submit full proposals, with a deadline six to eight weeks later. The full proposals go through another round of panel review before funding decisions are made. We’re also adding a second annual application deadline for each program, currently slated for February 2016.

This process was piloted with the National Leadership Grants this past year, and we’ve seen a number of substantial benefits for applicants. Of course, the workload of creating a two-page preliminary proposal is much less than for the full proposal. But applicants who are invited to submit a full proposal also gain the peer reviewers’ comments to help them strengthen their applications. And for unsuccessful applicants, the second deadline makes it possible to revise and resubmit their proposal. We’ve found that the resulting full proposals are much more competitive, and reviewers are still able to provide substantial feedback for unsuccessful applicants.

Erin: Now for the quintessential interview question: where do you see the platform in five years?

Trevor: I think we can make a lot of progress in five years. I can see a series of interconnected national networks and projects where different libraries, archives, museums and related non-profits are taking the lead on aspects directly connected to the core of their missions, but benefiting from the work of all the other institutions, too. The idea that there is one big library with branches all over the world is something that I think can increasingly become a reality. In sharing that digital infrastructure, we can build on the emerging value proposition of libraries identified in the Aspen Institute’s report on public libraries (PDF).  By pooling those efforts, and establishing and building on radical collaborations, we can turn the corner on the digital era. We can stop playing catch up and have a seat at the table. We can make sure that our increasingly digital future is shaped by values at the core of libraries and archives around access, equity, openness, privacy, preservation and the integrity of information.

Islandora: Islandora Contributor Licence Agreements

Tue, 2015-09-08 12:59

We are now making a concerted effort to collect Contributor License Agreements (CLAs) from all project contributors. The CLAs are based on Apache's agreements; they give the Islandora Foundation non-exclusive, royalty free copyright and patent licenses for contributions. They do not transfer intellectual property ownership to the project from the contributor, nor do they otherwise limit what the creator can do with their contributions. This license is for your protection as a contributor as well as the protection of the Foundation and its users; it does not change your rights to use your own contributions for any other purpose.

The CLAs are here:

Current CLAs on file are here.

We are seeking corporate CLAs (cCLA) from all institutions that employ Islandora contributors. We are also seeking individual CLAs (iCLAs) from all individual contributors, in addition to the cCLA. (In most cases the cCLA is probably sufficient, but getting iCLAs in addition helps the project avoid worrying about whether certain contributions were "work for hire", and also helps provide continuity in case a developer continues to contribute even after changing employment.)

All Foundation members and individual contributors will soon be receiving a direct email request to sign the CLAs, along with instructions on how to submit them. At a certain point later this year, we will no longer accept code contributions that are not covered by a CLA and will look to excise any legacy code that isn't covered by an agreement.

If you have any questions, please don't hesitate to ask on the Islandora list, or to send an email to


SearchHub: Search-Time Parallelism at Etsy: An Experiment With Apache Lucene

Tue, 2015-09-08 08:52
As we count down to the annual Lucene/Solr Revolution conference in Austin this October, we’re highlighting talks and sessions from past conferences. Today, we’re highlighting Shikhar Bhushan’s experiments at Etsy with search-time parallelism.

Is it possible to gain the parallelism benefit of sharding your data into multiple indexes, without actually sharding? Isn’t your Lucene index already composed of shards, i.e. segments? This talk presents an experiment in parallelizing Lucene’s guts: the collection protocol. An express goal was to do this in a lock-free manner using divide-and-conquer. Changes to the Collector API were necessary, such as orienting it to work at the level of child “leaf” collectors so that segment-level state could be accumulated in parallel. I will present technical details learned along the way, such as how Lucene’s TopDocs collectors are implemented using priority queues and custom comparators. Then, on to the parallelizability of collectors: how some collectors, like hit counting, are embarrassingly parallelizable; how some, like DocSet collection, were a delightful challenge; and others where the space-time tradeoffs need more consideration. Performance testing results, which currently span from worse to exciting, will be discussed.

Shikhar works on Search Infrastructure at Etsy, the global handmade and vintage marketplace. He has contributed patches to Solr/Lucene, and maintains several open-source projects such as a Java SSH library and a discovery plugin for Elasticsearch. He previously worked at Bloomberg, where he delivered talks introducing developers to Python and internal Python tooling. He has a special interest in JVM technology and distributed systems.

Search-time Parallelism: Presented by Shikhar Bhushan, Etsy from Lucidworks

Join us at Lucene/Solr Revolution 2015, the biggest open source conference dedicated to Apache Lucene/Solr, on October 13-16, 2015 in Austin, Texas. Come meet and network with the thought leaders building and deploying Lucene/Solr open source search technology. Full details and registration…
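The abstract describes Etsy's changes to Lucene's Collector protocol. As a rough illustration of the same per-segment idea using stock Lucene rather than Etsy's patch, here is a sketch built on the CollectorManager API that ships with Lucene 5.1 and later; class and method names are from that API, but exact signatures vary across Lucene versions:

```java
import java.io.IOException;
import java.util.Collection;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.search.CollectorManager;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.search.TopScoreDocCollector;
import org.apache.lucene.store.Directory;

// Per-slice parallel collection: each slice of segments gets its own
// collector (no shared mutable state while collecting) and the per-slice
// priority queues are merged once in reduce().
public class ParallelTopHits {

    static TopDocs searchInParallel(Directory dir, Query query, int numHits)
            throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        try (DirectoryReader reader = DirectoryReader.open(dir)) {
            // Passing an executor makes IndexSearcher fan segment slices
            // out to the pool.
            IndexSearcher searcher = new IndexSearcher(reader, pool);

            CollectorManager<TopScoreDocCollector, TopDocs> manager =
                    new CollectorManager<TopScoreDocCollector, TopDocs>() {
                        @Override
                        public TopScoreDocCollector newCollector() {
                            return TopScoreDocCollector.create(numHits);
                        }

                        @Override
                        public TopDocs reduce(Collection<TopScoreDocCollector> collectors)
                                throws IOException {
                            TopDocs[] perSlice = new TopDocs[collectors.size()];
                            int i = 0;
                            for (TopScoreDocCollector c : collectors) {
                                perSlice[i++] = c.topDocs();
                            }
                            // Merge the per-slice top hits into one ranking.
                            return TopDocs.merge(numHits, perSlice);
                        }
                    };

            return searcher.search(query, manager);
        } finally {
            pool.shutdown();
        }
    }
}
```

Each slice is collected independently, so collection stays lock-free, and the results are combined in a single reduce step, the same divide-and-conquer shape the talk describes.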

The post Search-Time Parallelism at Etsy: An Experiment With Apache Lucene appeared first on Lucidworks.

Terry Reese: MarcEdit 6.1 (Windows/Linux)/MarcEdit Mac (1.1.25) Update

Tue, 2015-09-08 02:23

So, this update is a bit of a biggie.  If you are a Mac user, the program officially moves out of Preview and into release, and this version brings the following changes:

** 1.1.25 ChangeLog

  • Bug Fix: MarcEditor — changes may not be retained after save if you make manual edits following a global update.
  • Enhancement: Delimited Text Translator completed.
  • Enhancement: Export Tab Delimited complete
  • Enhancement: Validate Headings Tool complete
  • Enhancement: Build New Field Tool Complete
  • Enhancement: Build New Field Tool added to the Task Manager
  • Update: Linked Data Tool — Added Embed OCLC Work option
  • Update: Linked Data Tool — Enhance pattern matching
  • Update: RDA Helper — Updated for parity with the Windows Version of MarcEdit
  • Update: MarcValidator — Enhancements to support better checking when looking at the mnemonic format.

If you are on the Windows/Linux version – you’ll see the following changes:

* 6.1.60 ChangeLog

  • Update: Validate Headings — Updated patterns to improve the process for handling heading validation.
  • Enhancement: Build New Field — Added a new global editing tool that provides a pattern-based approach to building new field data.
  • Update: Added the Build New Field function to the Task Management tool.
  • UI Updates: Specific to support Windows 10.

The Windows update is a significant one.  A lot of work went into the Validate Headings function, which impacts the Linked Data tools and the underlying linked data engine.  Additionally, the Build New Field tool provides a new global editing function that should simplify complex edits.  If I can find the time, I’ll try to put together a YouTube video demoing the process.

You can get the updates from the MarcEdit downloads page, or if you have MarcEdit configured to check for automatic updates, the tool will notify you of the update and provide a method for you to download it.

If you have questions – let me know.


DuraSpace News: Telling DSpace Stories at University of Texas Libraries with Colleen Lyon

Tue, 2015-09-08 00:00

“Telling DSpace Stories” is a community-led initiative aimed at introducing project leaders and their ideas to one another while providing details about DSpace implementations for the community and beyond. The following interview includes personal observations that may not represent the opinions and views of the University of Texas or the DSpace Project.

William Denton: OLITA lending library report on BKON beacon

Mon, 2015-09-07 23:19

In June I borrowed a BKON A-1 from the OLITA technology lending library. It’s a little black plastic box with a low energy Bluetooth transmitter inside, and you can configure it to broadcast a URL that can be detected by smartphones. I was curious to see what it was like, though I have no use case for it. If you borrow something from the library you’re supposed to write it up, so here’s my brief review.

  1. I took it out of its box and put two batteries in.
  2. I installed a beacon detector on my phone and scanned for it.
  3. I saw it:
  4. I followed the instructions on the BKON Quick Start Guide.
  5. I set up an account.
  6. I couldn’t log in. I tried two browsers but for whatever unknown reason it just wouldn’t work.
  7. I took out the two batteries and put it back in its box.

I’ll give it back to Dan Scott, who said he’s going to ship it back to the manufacturer so they can install the new firmware. I wish better luck to the next borrower.

Access Conference: Watch out for the Livestream!

Mon, 2015-09-07 18:09

Cast your FOMO feelings aside: a livestream of the conference will be on the website Wednesday to Friday HERE. An archived copy will be available on YouTube after the conference as well!

Terry Reese: MarcEdit Mac–Release Version 1 Notes

Mon, 2015-09-07 02:45

This has been a long time coming – the product of countless hours and the generosity of a great number of people who tested and provided feedback (not to mention the folks that crowd-sourced the purchase of a Mac) – but MarcEdit’s Mac version is coming out of Preview and will be made available for download on Labor Day.  I’ll be putting together a second post officially announcing the new versions (all versions of MarcEdit are getting an update over Labor Day), so if this interests you – keep an eye out.

So exactly what is different from the Preview versions?  Well, at this point, I’ve completed all the functions identified for the first set of development tasks – and then some.  New to this version will be the new Validate Headings tool just added to the Windows version of MarcEdit, the new Build New Field utility (and inclusion into the Task Automation tool), updates to the Editor for performance, updates to the Linking tool due to the validator, inclusion of the Delimited Text Translator and the Export Tab Delimited Text Translator – and a whole lot more.

At this point, the build is made and the tests have been run – so keep an eye out tomorrow – I’ll definitely be making it available before the Ohio State/Virginia Tech football game (because everything is going to stop here once that comes on).

To everyone that has helped along the way, providing feedback and prodding – thanks for the help.  I’m hoping that the final result will be worth the wait and be a nice addition to the MarcEdit family.  And of course, this doesn’t end the development on the Mac – I have 3 additional sprints planned as I work towards functional parity with the Windows version of MarcEdit.


William Denton: A Catalogue of Cuts

Mon, 2015-09-07 00:36

I wrote a short piece for the newsletter of the York University Faculty Association: York University Libraries: A Catalogue of Cuts. We’ve had year after year of budget cuts at York and York University Libraries, but we in the library don’t talk about them in public much. We should.

(Librarians at York University are members of YUFA and have academic status. I am in the final year of my second term as a steward for the Libraries chapter of YUFA. Patti Ryan is my fellow steward.)

Nicole Engard: Bookmarks for September 6, 2015

Sun, 2015-09-06 20:30

Today I found the following resources and bookmarked them on Delicious.

  • Gimlet: Your library’s questions and answers put to their best use. Know when your desk will be busy. Everyone on your staff can find answers to difficult questions.

Digest powered by RSS Digest

The post Bookmarks for September 6, 2015 appeared first on What I Learned Today....

Related posts:

  1. What makes a librarian?
  2. RSS
  3. Tech Savvy Staff and Patrons

Open Knowledge Foundation: Event Guide, 2015 Open Data Index

Sun, 2015-09-06 18:46

Getting together at a public event can be a fun way to contribute to the 2015 Global Open Data Index. It can also be a great way to engage and organize people locally around open data. Here are some guidelines and tips for hosting an event in support of the 2015 Index and getting the most out of it.

Hosting an event around the Global Open Data Index is an excellent opportunity to spread the word about open data in your community and country, not to mention a chance to make a contribution to this year’s Index. Ideally, your event would focus broadly on open data themes, possibly even identifying the status of all 15 key datasets and completing the survey. Set a reasonable goal for yourself based on the audience you think you can attract. You may choose to not even make a submission at your event, but just discuss the state of open data in your country, that’s fine too.

It may make sense to host an event focused around one or more of the datasets. For instance, if you can organize people around government spending issues, host a party focused on the budget, spending, and procurement tender datasets. If you can organize people around environmental issues, focus on the pollutant emissions and water quality datasets. Choose whichever path you wish, but it’s good to establish a focused agenda, a clear set of goals and outcomes for any event you plan.

We believe the datasets included in the survey represent a solid baseline of open data for any nation and any citizenry; you should be prepared to make this case to the participants at your events. You don’t have to be an expert yourself, or even have topical experts on hand to discuss or contribute to the survey. Any group of interested and motivated citizens can contribute to a successful event. Meet people where they are at, and help them understand why this work is important in your community and country. It will set a good tone for your event by helping participants realize they are part of a global effort and that the outcomes of their work will be a valuable national asset.

Ahmed Maawy, who hosted an event in Kenya around the 2014 Index, sums up the value of the Index with these key points that you can use to set the stage for your event:

  • It defines a benchmark to assess how healthy and helpful our open datasets are.
  • It allows us to make comparisons between different countries.
  • It allows us to assess what countries are doing right and what countries are doing wrong, and to learn from each other.
  • It provides a standard framework that allows us to identify what we need to do, or even how to implement or make use of open data in our countries, and to identify what we are strong at and what we are weak at.
What to do at an Open Data Index event

It’s great to start your event with an open discussion so you can gauge the experience in the room and how much time you should spend educating and discussing introductory materials. You might not even get around to making a contribution, and that’s ok. Introducing the Index in any way will put your group on the right path.

If you’re hosting an event with mostly newcomers, it’s always a good idea to look to the Open Definition and the Open Data Handbook for inspiration and basic information.

  • If your group is more experienced, everything you need to contribute to the survey can be found in this year’s Index contribution tutorial.
  • If you’re actively contributing at an event, we recommend splitting into teams, assigning one or more datasets to each group, and having them use the Tutorial as a guide. There can only be one submission per dataset, so be sure not to have teams working on the same task.
  • Pair more experienced people with less experienced people so teams can better rely on themselves to answer questions and solve problems.

More practical tips can be found at the 2015 Open Data Index Event Guide.

Photo credits: Ahmed Maawy

Ranti Junus: Phone’s cracked screen, replaced.

Sat, 2015-09-05 23:56

I usually am quite careful when it comes to my phone: use a phone case, apply a screen protector, things like that. But I suppose accidents happen regardless. So, during the first week of August, I accidentally dropped a big screwdriver on the phone (don’t ask why) and heard a “crack” sound. Uugghh… my heart dropped when I saw the crack. Really bad.

The phone with the cracked screen. Looks scary.

Hoping the screen protector was strong enough to protect the touchscreen (after all, I used a tempered glass screen protector), I turned it on and, bummer, the touchscreen was completely borked. Fortunately, the hard drive was not affected, so the software worked fine. However, I could not interact with the apps, even when I tried to shut down the phone. So, the only thing I could do was let the phone run until it ran out of battery and shut down on its own.

The software works just fine, but since the touch display is damaged, I cannot interact with it at all.

I checked the company’s website and their user forum, and found out one could send the phone back to the company in China and get charged $150 (apparently this kind of physical damage isn’t covered by the warranty), or spend about $50 for the screen/touch display and replace it oneself. Being the tinkerer I am, always wanting to see the guts of any electronic device, I decided to risk it and do the screen replacement myself. The downside: opening up the phone means voiding the warranty. But, at this point, the warranty means little to me if I have to spend big bucks anyway to have the phone fixed. Besides, I am going to learn something new here. Worst case scenario: I fail. But then I can always sell the phone as parts on eBay. So, nothing really to lose here. Besides, I still have my Moto X phone as a backup phone.

YouTube provides various instructions on DIY phone screen replacement. I found two videos that really helped me to understand the ins and outs of replacing the screen.

The first video below nicely showed how to remove the damaged screen and put the replacement back. He showed which areas we need to pay attention to so we won’t damage the component.

The second video was created by a professional technician, so his method is very structured. The tools he used helped me to figure out the tools I need.

I basically watched those two videos probably a dozen times or so to make sure I didn’t miss anything (and, yes, I donated to their Paypal account as my thanks.)

It took me a while to finally finish the screen replacement work. I removed the cracked screen first, and then had to wait for about 3 weeks to receive the screen replacement. I just used whatever online store they recommended to get the parts that I needed.

Below is a set of thumbnails with captions explaining my work. Each thumbnail is clickable to its original image.


  1. Phone with its cracked screen. Ready to be worked on for screen replacement.
  2. The back of the phone. The SIM card is removed and the back cover is ready to be opened.
  3. The phone with back cover removed. The battery occupies most of the section. There’s a white dot sticker on the top right corner covering one of the screws. Removing that screw will void the warranty.
  4. The top part of the phone that covers the hard disk, camera lens, and SIM card reader is removed. There’s a white, square sticker on the top left corner. It will turn pink if the phone is exposed to moisture (dropped into a puddle of water, etc.)
  5. Bottom part of the phone is removed. It houses the USB port, the touch capacity, and the antenna.
  6. The battery is removed. It took me quite a while to work on this because the glue was so strong and I was so worried I might bend the battery too much and damage it.
  7. All the components that needed to be removed had been removed. The hard disk, the main cable, the touch capacity/USB port/antenna part. Looking good.
  8. The video instruction from ModzLink suggested using heat to loosen up the glue. Good thing I have a blow dryer with a nozzle that allows me to focus the hot air on a certain section of the screen. The guitar pick was used to tease out the glass part once the surface was hot enough.
  9. It took me about 20 minutes to finally get the screen hot enough and the glue loosened up. By the way, I vacuumed the screen first to remove glass debris so the blow dryer wouldn’t blow it all over the place.
  10. I used the magnifying glass from my soldering station to make sure all glue and loose debris were gone.
  11. The screen replacement, on the left, finally arrived. Even though they said it’s an original screen, I’m not really sure, considering the original one has extra copper lines on the sides.
  12. The casing is clean, so all I need to do is insert the screen replacement in it.
  13. Carefully putting the adhesive strips on the sides of the casing.
  14. New screen in place. I had to redo it because I forgot to put the speaker grille on the top the first time.
  15. Added new adhesive strips so the battery will stick to them. Put the rest of the components back.
  16. Added a new tempered glass screen protector, put the SIM card back in, and turned on the phone.


Success. I got my favorite phone back.

It was scary the first time I worked on the phone, mostly because I didn’t want to break things. But I eventually felt comfortable dealing with the components and, should a similar thing happen again (knock on wood it won’t), I at least know what to do now.


Jonathan Rochkind: Memories of my discovery of the internet

Sat, 2015-09-05 14:12

As I approach 40 years old, I find myself getting nostalgic and otherwise engaged in memories of my youth.

I began high school in 1989. I was already a computer nerd, beginning from when my parents sent me to a Logo class for kids sometime in middle school; I think we had an Apple IIGS at home then, with a 14.4 kbps modem. (Thanks Mom and Dad!).  Somewhere around the beginning of high school, maybe the year before, I discovered some local dial-up multi-user BBSs.

Probably from information on a BBS, somewhere probably around 1994, me and a friend discovered Michnet, a network of dial-up access points throughout the state of Michigan, funded, I believe, by the state department of education. Dialing up Michnet, without any authentication, gave you access to a gopher menu. It didn’t give you unfettered access to the internet, but just to what was on the menu — which included several options that would require Michigan higher ed logins to proceed, which I didn’t have. But also links to other gophers which would take you to yet other places without authentication. Including a public access unix system (which did not have outgoing network connectivity, but was a place you could learn unix and unix programming on your own), and ISCABBS. Over the next few years I spent quite a bit of time on ISCABBS, a bulletin board system with asynchronous message boards and a synchronous person-to-person chat system, which at that time routinely had several hundred simultaneous users online.

So I had discovered The Internet. I recall trying to explain it to my parents, and that it was going to be big; they didn’t entirely understand what I was explaining.

When visiting colleges to decide on one in my senior year, planning on majoring in CS, I recall asking at every college what the internet access was like there, if they had internet in dorm rooms, etc. Depending on who I was talking to, they may or may not have known what I was talking about. I do distinctly recall the chair of the CS department at the University of Chicago telling me “Internet in dorm rooms? Bah! The internet is nothing but a waste of time and a distraction of students from their studies, they’re talking about adding internet in dorm rooms but I don’t think they should! Stay away from it.” Ha. I did not enroll at the U of Chicago, although I don’t think that conversation was a major influence.

Entering college in 1993, in my freshman year in the CS computer lab, I recall looking over someone’s shoulder and seeing them looking at a museum web page in Mozilla — the workstations in the lab were unix X-windows systems of some kind, I forget what variety of unix. I had never heard of the web before. I was amazed, I interrupted them and asked “What is that?!?”. They said “it’s the World Wide Web, duh.”  I said “Wait, it’s got text AND graphics?!?”  I knew this was going to be big. (I can’t recall the name of the fellow student a year or two ahead who first showed me the WWW, but I can recall her face. I do recall Karl Fogel, who was a couple years ahead of me and also in CS, kindly showing me things about the internet on other occasions. Karl has some memories of the CS computer lab culture at our college at the time here, I caught the tail end of that).

Around 1995, the college IT department hired me as a student worker to create the first-ever experimental/prototype web site for the college. The IT director had also just realized that the web was going to be big, and while the rest of the university hadn’t caught on yet, he figured they should do some initial efforts in that direction. I don’t think CSS or JS existed yet then, or at any rate I didn’t use them for that website. I did learn SQL on that job.  I don’t recall much about the website I developed, but I do recall one of the main features was an interactive campus map (probably using image maps).  A year or two or three later, when they realized how important it was, the college Communications unit (ie, advertising for the college)  took over the website, and I think an easily accessible campus map disappeared not to return for many years.

So I’ve been developing for the web for 20 years!

Ironically (or not), some of my deepest nostalgia these days is for the pre-internet, pre-cell-phone society; even most of my university career pre-dated cell phones. If you wanted to get in touch with someone you called their dorm room, maybe left a message on their answering machine.  The internet, and then cell phones, eventually combining into smart phones, have changed our social existence truly immensely, and I often wonder these days if it’s been mostly for the better or not.

Filed under: General

Ed Summers: Seminar Week 1

Sat, 2015-09-05 01:59

These are some notes for the readings from my first Seminar class. It’s really just a test to see if my BibTeX/Jekyll/Pandoc integration is working. More about that in a future post hopefully…

(Shera, 1933) was written in the depths of the Great Depression … and it shows. There is a great deal of concern about fiscal waste in libraries and a strong push for centralization, in line with FDR’s New Deal. The paper sees increasing cultural homogenization and a blurring of the rural and the urban that hasn’t seemed to come to pass. His thoughts about the television apparatus at the elbow seems almost memex like in its vision of the future. I must admit given all of what he gets wrong, I really like his idea of looking at the current state of our social situation and relations for the seeds of what tomorrow might look like. But at the same time I have trouble understanding how else you could meaningfully try to predict future trends. There is a tension between his desire for centralization of control, while allowing for decentralization, that seems quintessentially American.

(Taylor, 1962) muses about the nature of questions, how they progress in an almost Freudian way from the unconscious to a fully sublimated formal question of an information system. One thing that is particularly interesting is his formulation about how questions themselves are only fully understood in the context of an accepted answer. It’s almost as if the causal chain of question/answer is inverted, with the question being determined by the answer, and time running backwards. I know this is a flight of fancy on my part, but it seemed like a quirky and fun interpretation. The paper is deeply ironic because it opens up new vistas of future information science research by asking a lot of questions about questions. The method is admittedly rhetorical, and the paper is largely a philosophical meditation on how people with questions fit into information systems, rather than a methodological qualitative or quantitative study of some kind. It makes me wonder about the information system his questions are aimed at. Is scientific inquiry an information system? Also, perhaps this is heretical, but is there really such a thing as an information need? Don’t we have needs/desires for particular outcomes which information can help us realize: information as tool for achieving something, not as an object that is needed? I guess this could be considered a pragmatist critique of a particular strand of information science. I guess this would be a good place to invoke Maslow’s Hierarchy of Needs.

(Borko, 1968) attempts to define what information science is in the wake of the American Documentation Institute changing its name to the American Society for Information Science. He explicitly calls out the definition of Robert Taylor, who was instrumental in helping create the Internet at DARPA.

He summarizes information science as the interdisciplinary study of information behavior. It’s kind of strange to think of information behaving independent of humans isn’t it? Are we really studying the behavior of people as reflected in their information artifacts, or is the behavior of information really something that happens independent of people? This question makes me think of Object Oriented Ontology a bit. A key part of his definition is the feedback loop where the traditional library and archive professions apply the theories of information science, which in turn are informed by practice. This relationship between theory and practice is a significant dimension to his definition. It seems like perhaps today many of the disciplines he identified have been subsumed into computer science departments? But it seems information science has a way of tying different disciplines together that were previously siloed?

(Bush, 1945) is a classic in the field of computing, cited mostly for its prescience in anticipating the hyperlink, and the World Wide Web. He is quite gifted at connecting scientific innovation with tools that are graspable by humans. One disquieting thing is the degree to which women, or as he calls them, “girls” are made part of the machinery of computation. To what extent are people unwittingly made part of this machinery of war that Bush assembled in the form of the Manhattan Project. Who does this machinery serve? Does it inevitably serve those in power? If we fast forward to today, what machinery are we made part of, by the transnational corporations that run our elections, and deliver us our information? Can this information system resist the forms of tyranny that it was created by? Ok, enough crazy talk for now :-)

Borko, H. (1968). Information science: What is it? American Documentation, 3–5.

Bush, V. (1945). As we may think. Atlantic. Retrieved from

Shera, J. H. (1933). Recent social trends and future library policy. Library Quarterly, 3, 339–353.

Taylor, R. S. (1962). The process of asking questions. American Documentation, 391–396.

Erin White: Back-to-school mobile snapshot

Fri, 2015-09-04 19:40

This week I took a look at mobile phone usage on the VCU Libraries website for the first couple weeks of class and compared that to similar time periods from the past couple years.


First, here’s this year’s data, from the first week of class through today.

Note that mobile is 9.2% of web traffic. To round some numbers, 58% of those devices are iPhones/iPods and 13% are iPads. So we’re looking at about 71% of mobile traffic (about 6.5% of all web traffic) from Apple devices. Dang. After that, it’s a bit of a long tail of other device types.
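Purely as a sanity check on that arithmetic, here’s a tiny Python sketch using the rounded figures above (so the results are approximate):

```python
# Back-of-the-envelope check of the device shares quoted above.
# All inputs are the rounded figures from the post, so results are approximate.
mobile_share_of_all = 0.092            # mobile = 9.2% of all web traffic

apple_share_of_mobile = 0.58 + 0.13    # iPhones/iPods (58%) + iPads (13%)
apple_share_of_all = apple_share_of_mobile * mobile_share_of_all

print(f"Apple share of mobile traffic: {apple_share_of_mobile:.0%}")   # ~71%
print(f"Apple share of all web traffic: {apple_share_of_all:.1%}")     # ~6.5%
```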

To give context, about 7.2% of our overall traffic came from the Firefox browser. So we have more mobile users than Firefox users.


Last year, mobile jumped to 9% of all traffic. This was partially due to our retiring our mobile-only website in favor of a responsive web design. As in the other years, at least two-thirds of the mobile traffic came from iOS devices.


Two years ago, mobile was 4.7% of all traffic; iOS devices were 74% of mobile traffic; and tablets, amazingly, were 32% of all mobile traffic.

I have one explanation for the relatively low traffic from iPhones: at the time, we had a separate mobile website that caught a lot of traffic from handheld devices. Most phone users were automatically redirected there.

Observations

Browser support

Nobody’s surprised that people are using their phones to access our sites. When we launched the new VCU Libraries website last January, the web team built it with a responsive design that could accommodate browsers of many shapes and sizes. At the same time, we decided which desktop browsers to leave behind, like Internet Explorer 8 and below, which we stopped fully supporting when the new site went live. Looking at stats like this helps us figure out which devices to prioritize and test most with our design.
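As a rough illustration of how usage stats can drive that triage, here’s a minimal Python sketch; the browser names, traffic shares, and the 1% cutoff are made-up placeholders, not our actual analytics or support policy:

```python
# Hypothetical example of triaging browser support from analytics data.
# The browser names and traffic shares below are placeholders, not real stats.
SUPPORT_THRESHOLD = 0.01  # example cutoff: fully support anything above 1% of traffic

browser_shares = {
    "Chrome": 0.43,
    "Mobile Safari": 0.065,
    "Firefox": 0.072,
    "Internet Explorer 8": 0.004,
}

fully_supported = sorted(b for b, s in browser_shares.items() if s >= SUPPORT_THRESHOLD)
best_effort = sorted(b for b, s in browser_shares.items() if s < SUPPORT_THRESHOLD)

print("Fully supported:", ", ".join(fully_supported))
print("Best effort only:", ", ".join(best_effort))
```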

Types of devices

Though it’s impossible to test on every device, we have focused most of our mobile development on iOS devices, which seems to be a direction we should keep going, since it covers the majority of our mobile users. It would also be useful for us to look at larger-screen Android devices, though (any takers?). With virtual testing platforms like BrowserStack at our disposal we can test on many types of devices. But we should also look at ways to test with real devices and real people.
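For the virtual-device piece, services like BrowserStack are typically driven through a standard Selenium Remote WebDriver session. The sketch below is only a rough illustration: it uses the older Selenium 3-style desired_capabilities argument (newer Selenium releases expect browser-specific options objects instead), and the hub URL, capability names, device string, example URL, and USERNAME/ACCESS_KEY are placeholders to be checked against the provider’s documentation.

```python
# Rough sketch of a remote cross-device test session (e.g., via BrowserStack).
# NOTE: uses the older Selenium 3-style desired_capabilities argument; newer
# Selenium releases expect browser-specific options objects instead.
# The hub URL, capability names, device string, and credentials are placeholders.
from selenium import webdriver

HUB_URL = "https://USERNAME:ACCESS_KEY@hub.browserstack.com/wd/hub"  # placeholder credentials

capabilities = {
    "browserName": "iPhone",  # assumed capability names/values; check provider docs
    "device": "iPhone 6",
}

driver = webdriver.Remote(command_executor=HUB_URL, desired_capabilities=capabilities)
try:
    driver.get("https://www.library.vcu.edu/")  # example URL
    print(driver.title)  # quick smoke test that the page loaded and has a title
finally:
    driver.quit()
```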

Mobile strategy

Thinking broadly about strategy, making special mobile websites/m-dots doesn’t make sense anymore. People want the full functionality of the web, not an oversimplified version with only so-called “on-the-go” information. Five years ago, when we debuted our mobile site, that might have been the case. Now people are doing everything with their phones, including writing short papers, according to our personas research from a couple of years ago. So we should keep pushing to make everything usable no matter the screen.

District Dispatch: Library groups keep up fight for net neutrality

Fri, 2015-09-04 14:55


Co-authored by Larra Clark and Kevin Maher

Library groups are again stepping to the front lines in the battle to preserve an open internet. The American Library Association (ALA), Association of College and Research Libraries (ACRL), Association of Research Libraries (ARL) and the Chief Officers of State Library Agencies (COSLA) have requested the right to file an amici curiae brief supporting the respondent in the case of United States Telecom Association (USTA) v. Federal Communications Commission (FCC) and United States of America. The brief would be filed in the US Court of Appeals for the District of Columbia Circuit—which has also decided two previous network neutrality legal challenges. ALA also is opposing efforts by congressional appropriators to defund FCC rules.

Legal brief to buttress FCC rules, highlight library values

The amici request builds on library and higher education advocacy throughout the last year supporting the development of strong, enforceable open internet rules by the FCC. As library groups, we decided to pursue our own separate legal brief to best support and buttress the FCC’s strong protections, complement the filings of other network neutrality advocates, and maintain visibility for the specific concerns of the library community. Each of the amici parties will have quite limited space to make its arguments (likely 4,000-4,500 words), so particular library concerns (rather than broad shared concerns related to free expression, for instance) are unlikely to be addressed by other filers and demand a separate voice. The FCC also adopted in its Order a standard that library and higher education groups specifically brought forward: a future conduct standard that reflects the dynamic nature of the internet and of internet innovation, extending protections against questionable practices on a case-by-case basis.

Based on conversations with the FCC general counsel and lawyers for aligned advocates, we plan to focus our brief on supporting the future conduct standard (formally referenced starting at paragraph 133 of the Order as the “no unreasonable interference or unreasonable disadvantage standard for internet conduct”) and explaining why it is important to our community; to re-emphasize the negative impact of paid prioritization on our community and our users if the bright-line rules adopted by the FCC are not sustained; and ultimately to make our arguments through the lens of the library mission and of our research and learning activities.

As the library group motion states, we argue that FCC rules are “necessary to protect the mission and values of libraries and their patrons, particularly with respect to the rules prohibiting paid prioritization.” Also, the FCC’s general conduct standard is “an important tool in ensuring the open character of the Internet is preserved, allowing the Internet to continue to operate as a democratic platform for research, learning and the sharing of information.”

USTA and amici opposed to FCC rules filed their briefs July 30, and the FCC filing is due September 16. Briefs supporting the FCC must be filed by September 21.

Congress threatens to defund FCC rules

ALA also is working to oppose Republican moves to insert defunding language in appropriations bills that could effectively block the FCC from implementing its net neutrality order. Under language included in both the House and Senate versions of the Financial Services and General Government Appropriations Bill, the FCC would be prohibited from spending any funds towards implementing or enforcing its net neutrality rules during FY2016 until specified legal cases and appeals (see above!) are resolved. ALA staff and counsel have been meeting with Congressional leaders to oppose these measures.

The Obama Administration criticized the defunding move in a letter from Office of Management and Budget (OMB) Director Shaun Donovan, stating, “The inclusion of these provisions threatens to undermine an orderly appropriations process.” While not explicitly threatening a presidential veto, the letter raises concerns with appropriators’ attempts at “delaying or preventing implementation of the FCC’s net neutrality order, which creates a level playing field for innovation and provides important consumer protections on broadband service…”

Neither the House nor the Senate version of the funding measure has received floor consideration. The appropriations process faces a bumpy road in the coming weeks as House and Senate leaders seek to iron out differing funding approaches and thorny policy issues before the October 1 start of the new fiscal year. Congress will likely need to pass a short-term continuing resolution to keep the government open while discussions continue. House and Senate Republican leaders have indicated they will work to avoid a government shutdown. Stay tuned!


DPLA: DPLA Archival Description Working Group

Fri, 2015-09-04 14:55

The Library, Archives, and Museum communities have many shared goals: to preserve the richness of our culture and history, to increase and share knowledge, to create a lasting record of human progress.

However, each of these communities approaches these goals in different ways. For example, description standards vary widely among these groups. The library typically adopts a 1:1 model where each item has its own descriptive record. Archives and special collections, on the other hand, usually describe materials in the aggregate as a collection. A single record, usually called a “finding aid,” is created for the entire collection. Only the very rare or special item typically warrants a description all its own. So the archival data model typically has one metadata record for many objects (or a 1:n ratio).

At DPLA, our metadata application profile and access platform have been centered on an item-centric library model for description: one metadata record for each individual digital object. While this method works well for most of the items in DPLA, it doesn’t translate to the way many archives are creating records for their digital objects. Instead, these institutions are applying an aggregate description to their objects.

Since DPLA works with organizations that use both the item-level and aggregation-based description practices, we need a way to support both. The Archival Description Working Group will help us get there.
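To make the two description models concrete, here is a minimal sketch. The record classes, field names, and the expansion step are illustrative assumptions for this post, not DPLA’s actual metadata application profile or ingestion code:

```python
# Illustrative only: record classes and field names are made up for this sketch,
# not DPLA's metadata application profile or ingestion code.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ItemRecord:
    """Item-level (1:1) description: one record per digital object."""
    identifier: str
    title: str
    description: str

@dataclass
class CollectionRecord:
    """Aggregate (1:n) description: one finding-aid-style record for many objects."""
    identifier: str
    title: str
    scope_note: str
    object_identifiers: List[str] = field(default_factory=list)

def expand_collection(coll: CollectionRecord) -> List[ItemRecord]:
    """One possible (lossy) way to derive item-level stubs from an aggregate record,
    so that both models can sit side by side in an item-centric index."""
    return [
        ItemRecord(
            identifier=obj_id,
            title=coll.title,             # items inherit the collection title...
            description=coll.scope_note,  # ...and its scope note as a description
        )
        for obj_id in coll.object_identifiers
    ]

finding_aid = CollectionRecord(
    identifier="coll-001",
    title="Example Family Papers",
    scope_note="Correspondence and photographs, 1900-1950.",
    object_identifiers=["obj-1", "obj-2", "obj-3"],
)

for item in expand_collection(finding_aid):
    print(item.identifier, "-", item.title)
```

The obvious trade-off in a mapping like this is that the derived item records carry only collection-level context, which is exactly the descriptive gap the working group will be exploring.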

The group will explore solutions to support varying approaches to digital object description and access and will produce a whitepaper outlining research and recommendations. While the whitepaper recommendations will be of particular use to DPLA or other large-scale aggregators, any data models or tools advanced by the group will be shared with the community for further development or adoption.

The group will include representatives from DPLA Hubs and Contributing Institutions, as well as national-level experts in digital object description and discovery. Several members of the working group have been invited to participate, but DPLA is looking for a few additional members to volunteer. As a member of the working group, active participation in conference calls is required, as well as a willingness to assist with research and writing.

If you are interested in being part of the Archival Description Working Group, please fill out the volunteer application form by 9/13/15. Three applicants will be chosen to be a part of the working group, and others will be asked to be the first reviewers of the whitepaper and any deliverables. An announcement of the full group membership will be made by the end of the month.