Today I found the following resources and bookmarked them on Delicious.
Lisa Hart, CrossRef's Director of Finance and Operations, celebrated her 15-year CrossRef anniversary on April 1! She was also appointed to the American Society of Association Executives' (ASAE) Finance & Business Council for a one-year term, and serves on ORCID's audit committee.
Other anniversaries celebrated in April include Chuck Koscher, CrossRef's Director of Technology, celebrating his 13th year; Patricia Feeney, Product Support Manager, celebrating her 8th year; and Amy Bosworth, Accounts Receivable Administrator, celebrating her 3rd year at CrossRef.
Additionally, Lindsay Russell, CrossRef's Payroll and Benefits Coordinator, has completed the New England Society of Association Executives' (NESAE) Rising Professional program and will be recognized as a Rising Professional at NESAE's annual meeting.
Lastly, Susan Collins, CrossRef's Member Services Coordinator is running in the Boston Marathon this Monday, April 20th. We wish her the best of luck.
Congratulations to all and go Susan!
Today I had the pleasure of attending a talk by Gina Likins from Red Hat at the 2015 Consortium for Computing Sciences in Colleges (CCSC): South Central conference about teaching open source.
Gina started by asking the audience how many people in the room taught open source already – and no one raised their hands!! That means Gina had to start with the background of what open source is. Gina says open source is a cookie (yum). When you bake a cookie you can share the cookies and the recipe with your friends and family. If one of the people you share with is allergic to nuts and the recipe calls for nuts then that person can alter the recipe to make it so that it doesn’t kill them. There is also the potential for people to take the recipe and improve upon it. Now – of course you can go to the store and buy some cookies – but you don’t really know what’s in them and you can’t replicate them. You can try … but then you get sued for sharing those proprietary cookies.
Another open source example – you wouldn’t buy a car with the hood welded shut … so why do we buy proprietary software? If you can’t see what’s going on and see what’s happening under the hood then you’re stuck with the car exactly the way it is and that might not be so great. While some people are fine with that, computer geeks shouldn’t be. We should want to get in there and tinker with it.
Next, some legal terminology. It's important to understand copyright. Gina shared a pretty flower drawing she had made; the moment it went up it was copyrighted (so of course she added a Creative Commons license to the picture).
So what's the difference between open source and free and open source software? The difference is that free (copyleft) licenses always require that you share what you build under the same free license. Under a permissive open source license you can make a modification and change the license; under a free license you don't have that option: the license must stay the same.
So now, a bit of history, because it's important to know where the magic comes from. In the 1950s software and hardware came bundled together. In the 1960s that changed, because the DOJ considered bundling hardware and software monopolistic. In 1983 Richard Stallman launched the GNU project, the beginning of the open source movement as a thing. In 1989 the first GPL was released. This history and intro is the minimum that every computer science graduate should know, especially the licensing part, because students need to know what rights people have to use their software.
90% of Fortune 500 companies use open source software!! The governments of all 50 states use open source software, and 76% of today's developers use open source. Students need to know about this so that they're ready when they're looking for a job. By learning open source you learn code from others, working in a virtual team and collaborating. By working on an open source project you learn how to learn: because no one is there to sit down and teach you, you have to learn a lot yourself. This is how students learn to problem-solve, ask smart questions, and read documentation.
By teaching open source and using open source students get to work on real code, fork that code, and talk about why that was a good/bad idea. As a side note – I personally don’t remember any of the programs I wrote in my computer science classes – none of them had any benefit to me or were saved for me to go back and look at. Students working on open source get to know that they’re working on real code being used by real people. If you’re looking for a project – take a look at Humanitarian Free and Open Source (HFOSS) projects because these attract a more diverse audience – this is a great way to get more women in your classes.
Working on a project is an important skill to teach students because you're never ever going to work alone in the real world. Furthermore, the likelihood that you'll be writing your own code from scratch is very tiny!! Usually you'll be adding to a project that already exists, and learning how to communicate with other developers is key for this. Working on open source also lets students make connections with actual industry professionals that they can use when it's time to find a job. It's a way for students to prove themselves!
Given all that, how do we differentiate open source from proprietary software? We already talked about licenses, but there are other things to know about: first the open source principles, and second the community!
The principles include:
- Open exchange: Communication is transparent
- Participation: When we are free to collaborate, we create
- Rapid prototyping: Can lead to rapid failures, but that leads to better solutions
- Meritocracy: The best ideas win
- Community: Together, we can do more
All that sounds awesome right? Well, there are some ‘gotchas’.
First off, as academics you're used to knowing everything about the thing you're teaching. Open source projects are scary because you're not going to know them inside and out. There's an opportunity here, though: by putting yourself in this role you teach your students that it's okay not to know everything, and you show them how to ask the right questions and learn how to learn. This is how we grow; even if your code isn't accepted, you grow. Learning that means students can learn any system.
Next you’ll be a stranger in a strange land. There is no manager or single person in charge – it’s a bit of the wild wild west. This is not an environment you can control – you will be a guest. It won’t be like stepping in to a classroom and saying this is what we’re doing today.
Open source can occasionally be aggressive. With freedom and transparency comes opinions – and sometimes those opinions are not expressed politely. If there were an HR department in open source then some of these things wouldn’t happen – but that’s not how open source works. It’s the Internet – it’s all open and anyone can say anything. The good thing is that instructors are helping students in these situations – hopefully to tell them what is proper etiquette and what isn’t. Hopefully teaching open source in schools will prevent some of this. Learn more about etiquette in open source projects from Gina’s ApacheCon Keynote.
Even with all that it’s extremely important!! Students need to learn what open source is and how to contribute.
Quote from Gina: “It’s amazing how wonderful scary things can be”
How do we move forward? Check out POSSE, which teaches professors what they need to know to teach open source in their classes. You can also look at TeachingOpenSource.org and sign up for the mailing list. Finally, be sure to look into OpenHatch, which provides tools for building your curriculum and/or learning what open source is like.
DPLA: The Digital Public Library of America Announces New Partnerships, Initiatives, and Milestones at DPLAfest 2015
Indianapolis, IN — On the second anniversary of the Digital Public Library of America’s launch, DPLA announced a number of new partnerships, initiatives, and milestones that highlight its rapid growth, and prepare it to have an even larger impact in the years ahead. At DPLAfest 2015 in Indianapolis, hundreds of people from DPLA’s expanding community gathered to discuss DPLA’s present and future. Announcements included:

Content milestones
Over 10 Million Items from 1,600 Contributing Institutions
On the second anniversary of its launch, the Digital Public Library of America surpassed a remarkable 10,000,000 items in its aggregated collection of openly available books, photographs, maps, artworks, manuscripts, audio, video, and material culture. This represents a quadrupling of the original collection at launch, which stood at 2.4 million items.
DPLA now has 1,600 contributing institutions from across the country, including libraries, archives, museums, and cultural heritage sites. Included within this wide-ranging collaboration are small rural public libraries and historical societies, large universities and community colleges, federal, state, and local government agencies, corporations, independent collections, and many more organizations of all stripes. In April 2013 there were just 500 contributing institutions.
This tremendous growth can be attributed, in part, to existing partners whose collections are newly available this week, including the Empire State Digital Network and the California Digital Library. Minnesota Digital Library, a partner since DPLA’s inception, is making available nearly half a million new records, an incredible 900% increase in just the past few months.
New Hub Partnerships
With Indiana’s bicentennial coming up in 2016, DPLA is delighted to announce that close to 50,000 items from Indiana Memory were added to DPLA’s collection in the last week, including postcards, photographs, and other unique and compelling documents from Indiana’s rich history.
Joining Indiana as newly covered states in early 2015 are Tennessee, Maine, and Maryland, which are forming Service Hubs for collections in their states. DPLA expects to have new content from those states, as well as ongoing contributions from our many other states, in the coming months. In addition, DPLA added the Digital Library of the Caribbean as a hub partner, which will be contributing a vast array of materials from that region.
DPLA now has 15 Service Hubs, covering 19 states. Recent grants from the National Endowment for the Humanities and the Institute of Museum and Library Services are targeted toward coverage of additional states in a succession of application phases that has already begun.

Education
The Digital Public Library of America and PBS LearningMedia are excited to announce today a major collaboration, bringing together the complementary strengths, networks, and content of our two nationwide organizations to better serve teachers, students, and the general public. By interweaving PBS’s unparalleled media resources and connections to the world of education and lifelong learning with DPLA’s vast and growing storehouse of openly available materials and community of librarians, archivists, and curators, the partners hope to make rich new resources accessible and discoverable for all.
In support of our respective organizations’ mutual interests in education, DPLA and PBS plan to work together to bring the high-quality DPLA digital content to as many teachers and students as possible. In the future, PBS and DPLA will explore additional, related ideas, such as professional development resources for teachers, the possible inclusion of PBS media within DPLA, and fostering local relationships between PBS’s affiliates and DPLA’s state-based service hubs.
Learning Registry Collaboration
Beginning today, the Digital Public Library of America’s exhibitions will be discoverable through the Learning Registry, which distributes top educational resources to states and schools around the country. The U.S. Departments of Education and Defense launched the Learning Registry in 2011 as an open source community and technology designed to improve the quality and availability of learning resources in education. By connecting DPLA’s metadata with the Learning Registry’s digital platform, schools, teachers, and students will more easily find the rich and open resources within DPLA’s collections. Since they cover major themes in American history and culture, DPLA’s exhibitions are already widely used in education, and this partnership will ensure an even broader audience for them, and set the stage for other DPLA resources to be more widely discoverable in the future.

Ebooks-related announcements and remarks
Sloan Foundation-funded Work on Ebooks
DPLAfest marks a key moment in the Digital Public Library of America’s work on improving access to ebooks. With generous funding from the Alfred P. Sloan Foundation, dozens of librarians, authors, publishers, and readers have gathered in Indianapolis to discuss the current landscape of this complex challenge. Our goals are to identify community leaders, scalable infrastructure, and avenues of participation that have the potential to transform libraries’ and librarians’ contributions and roles. The expectation is that we will come away with the framework for a possible demonstration effort, as well as a means to more closely unite strong contributors in the space around a common goal.
Collaboration with HathiTrust for Open Ebooks
In a related development, the Digital Public Library of America and HathiTrust plan to highlight how they will work together to help exciting new initiatives that open up access to books. The Humanities Open Book grant program, a joint initiative of the National Endowment for the Humanities and the Andrew W. Mellon Foundation, for instance, will award grants to publishers to identify select previously published books and acquire the appropriate rights to produce an open access ebook edition available under a Creative Commons license. Participants in the program must deposit an EPUB version of the book in a trusted preservation service to ensure future access. DPLA and HathiTrust are well-prepared to accept these books and provide a wider distribution point for them.

Governance
New Board Chair and New Board Member Announced
In advance of DPLAfest 2015, the Board of Directors of the Digital Public Library of America announced the appointment of Amy Ryan, President of the Boston Public Library, as its next chair, effective for two years. Ryan will succeed the current chair, John Palfrey, Head of School at Phillips Academy in Andover, Massachusetts. Palfrey has been a central figure in DPLA’s history, from his co-leadership of the Secretariat during DPLA’s planning phase, and subsequently as founding chair of the DPLA Board.
Ryan has over 35 years of public library experience. Before being named to lead the Boston Public Library, she was the director of the nationally recognized Hennepin County Library in Minnesota, and prior to that Ryan served in leadership positions for over 28 years with Minneapolis Public Library.
In addition, at DPLAfest the Board announced the appointment of Jennifer 8. Lee as a new member, effective July 2015. A former New York Times reporter, Jennifer 8. Lee is an author, journalist and digital media entrepreneur. She is the co-founder and CEO of Plympton, a publisher of serialized fiction on digital platforms. Lee is the author of the New York Times-bestselling book, The Fortune Cookie Chronicles, and serves on the boards of the Nieman Foundation, the Center for Public Integrity, the Asian American Writers’ Workshop, Hacks/Hackers, Awesome Foundation and the Robert F. Kennedy journalism awards. She is a member of the New York Public Library Young Lions Committee. Jenny graduated with a degree in Applied Math and Economics from Harvard, where she was vice president of The Harvard Crimson.
Lee will be replacing Cathy Casserly on the board. Like Palfrey, Casserly has been enormously helpful to DPLA in its inception and growth as an organization. Her unparalleled experience with Creative Commons and Open Educational Resources, and her keen sense of nonprofit management, have been a boon to the young organization.

Technology
The Digital Public Library of America (DPLA), Stanford University, and the DuraSpace organization announced this week that their collaboration has been awarded a $2 million National Leadership Grant from the Institute of Museum and Library Services (IMLS). Nicknamed Hydra-in-a-Box, the project aims to foster a new national library network through a community-based repository system, enabling discovery, interoperability and reuse of digital resources by people from this country and around the world.
The partners will engage with libraries, archives, and museums nationwide, especially current and prospective DPLA hubs and the Hydra community, to systematically capture the needs for a next-generation, open source, digital repository. They will collaboratively extend the existing Hydra project codebase to build, bundle, and promote a feature-complete, robust digital repository that is easy to install, configure, and maintain—in short, a next-generation digital repository that will work for institutions large and small, and is capable of running as a hosted service. Finally, starting with DPLA’s own metadata aggregation services, the partners will work to ensure that these repositories have the necessary affordances to support networked aggregation, discovery, management and access to these resources, producing a shared, sustainable, nationwide platform.
For more information, please see the full press release.
DPLA Becomes an Official Hydra Project Partner
In concert with the Hydra-in-a-Box project, the Digital Public Library of America became an official Hydra Project partner. Hydra is a repository solution that is being used by institutions worldwide to provide access to their digital content. A large, multi-institutional collaboration, the project gives like-minded institutions a mechanism to combine their individual repository development efforts into a collective solution with breadth and depth that exceeds the capacity of any individual institution to create, maintain or enhance on its own. The motto of the project partners is “if you want to go fast, go alone. If you want to go far, go together.” Hydra is open source, and enables advanced, modern, flexible user and administrative interfaces. Mark A. Matienzo, DPLA’s Director of Technology, notes that “by becoming a Hydra partner, DPLA is expressing its commitment to contributing and furthering a vibrant open source community.” The Hydra project has over 25 partners, including academic libraries, public libraries, and non-profit organizations.

Sponsors
The Digital Public Library of America wishes to thank its generous DPLAfest Sponsors:
- The Alfred P. Sloan Foundation
- Anonymous Donor
- Digital Library Federation
- Digital Library Systems Group at Image Access
DPLA also wishes to thank its gracious hosts for DPLAfest 2015:
- Indianapolis Public Library
- Indiana State Library
- Indiana Historical Society
- IUPUI University Library
Ruby 2.2 finally introduces a `#unicode_normalize` method on strings. It defaults to `:nfc`, but you can also normalize to other Unicode normalization forms such as `:nfd`, `:nfkc`, and `:nfkd`:

`some_string.unicode_normalize(:nfc)`
Unicode normalization is something you often have to do when dealing with Unicode, whether you know it or not. Prior to Ruby 2.2, you had to install a third-party gem to do this, adding another gem dependency. Of the gems available, some monkey-patched String in ways I wouldn’t have preferred, some worked only on MRI and not JRuby, some had unpleasant performance characteristics, etc. Here’s some benchmarks I ran a while ago on available gems providing Unicode normalization, although since I ran those benchmarks new options have appeared and performance characteristics have changed. But now we don’t need to deal with any of that: just use the stdlib.
One thing I can’t explain: the only Ruby stdlib documentation I can find on this suggests the method should be called just `normalize`. But nope, it’s actually `unicode_normalize`. Okay. Can anyone explain what’s going on here?
`unicode_normalized?` (not just `normalized?`) is also available, also taking a normalization form argument.
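To make the forms concrete, here’s a minimal sketch (Ruby 2.2+) comparing a precomposed character with its decomposed equivalent:

```ruby
# "é" exists both as a single precomposed codepoint (NFC form) and as
# "e" plus a combining acute accent (NFD form); the two strings render
# identically but are not equal codepoint-for-codepoint.
nfc = "\u00E9"    # "é", precomposed
nfd = "e\u0301"   # "e" + U+0301 combining acute accent

nfc == nfd                          # => false
nfc.unicode_normalize(:nfd) == nfd  # => true
nfd.unicode_normalize == nfc        # => true (defaults to :nfc)
nfd.unicode_normalized?(:nfc)       # => false
```

Normalizing both sides before comparison is the usual fix for “identical-looking strings that aren’t equal” bugs.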
The next major release of Rails, Rails 5, is planned to require Ruby 2.2, and I think a lot of other open source projects will follow that lead. I’m considering switching some of my projects over to require Ruby 2.2 as well, to take advantage of new stdlib features like this, although I’d probably wait until JRuby 9k comes out, which is planned to support the 2.2 stdlib among other changes. Hopefully soon. In the meantime, I might write some code that uses #unicode_normalize when it’s present, and otherwise monkey-patches in a #unicode_normalize method implemented with some other gem (although that still requires making the other gem a dependency). I’ll admit there are some projects of mine that really should be Unicode-normalizing in some places, but I could barely get away without it, and I skipped it because I didn’t want to deal with the dependency. Or I could require MRI 2.2 or the latest JRuby, and on JRuby simply monkey-patch a pure-Java #unicode_normalize if String.instance_methods doesn’t include :unicode_normalize.
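A sketch of that conditional monkey-patch approach. The fallback gem here is an assumption (I’m using the unf gem and its UNF::Normalizer as the stand-in); substitute whichever normalization gem your project already carries:

```ruby
# Use the stdlib method when it exists (MRI >= 2.2); otherwise
# monkey-patch a compatible #unicode_normalize backed by a gem.
# NOTE: `unf` and UNF::Normalizer.normalize are assumptions here,
# standing in for whatever normalization gem you actually depend on.
unless String.method_defined?(:unicode_normalize)
  require "unf"
  class String
    def unicode_normalize(form = :nfc)
      UNF::Normalizer.normalize(self, form)
    end
  end
end

"e\u0301".unicode_normalize(:nfc)  # same call on either side of the 2.2 divide
```

On Ruby 2.2+ the guard is false and the require never happens, so the gem stays optional for modern rubies.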
Emscripten comes with its own SDK, which bundles the specific versions of clang and node that it needs.
Install the Emscripten SDK and follow the instructions for setting it up.
Run `source ./emsdk_env.sh` to set the PATH variable (you need to do this each time you want to use Emscripten).

xml.js
xml.js is an Emscripten port of libxml2’s xmllint command, making it usable in a web browser.
Clone xml.js (and set up the submodules, if not done automatically).
Run `npm install` to install gulp.
Compile xmllint.js by running `gulp clean`, then `gulp libxml2` (compiles libxml2), and finally `gulp compile` (compiles xmllint.js itself).
Start a web server in the xml.js directory and open test/test.html to test it.

Importing multiple schema files
I’ve made a fork of xml.js which a) allows all the command-line arguments to be specified, so it can be used for validating against a DTD rather than an XML schema, and b) allows a list of files to be specified, which are imported into the virtual filesystem so that xmllint can access them. This makes running xmllint in the browser much more like running xmllint on the command line.
There is one caveat: this version of xmllint still seems to try to fetch the DTD from the URL in the XML’s doctype declaration rather than the one specified with the `--dtdvalid` argument, so the doctype needs to be edited to match the local file path to the DTD.
Updated April 13, 2015
Aletheia - Associacao Cientifica e Cultural
Bowen Publishing Company
Indonesian Economist Association
International Neuroscience Institute
KIMS Foundation and Research Center
NPP Polis (Political Studies)
W.E. Upjohn Institute for Employment Research
Wydawnictwo Uniwersytetu Marii Curi-Sklodowskiej w Lublinie
Association Culturelle Franco-Coreenne
Journal of Istanbul Faculty of Medicine
Journal of Natural Sciences
Korean Association for Political Economy
Kyung Hee University Management Research Institute
Russian Ilizarov Scientific Centre Restorative Traumatology and Orthopaedics
The Institute for Legal Studies
The Institute for Northeast Asia Research
The Institute of the History of Christianity in Korea
The Korean Association for Saramdaum Education
The Korean Society for Early Childhood Education
Last Update April 6, 2015
Academic and Educational Forum on International Relations
Alexander Graham Bell Association for the Deaf and Hard of Hearing
American Academy of Insurance Medicine
Australasian Association for Information and Communication Technology
Austrian Statistical Society
Cancer Research Frontiers
Friends Science Publishers
Indonesian Society Fisheries Product Processing
Journal of Experimental and Agricultural Sciences
Orthopaedic Section, APTA, Inc.
Penerbit Universiti Kebangsaan Malaysia (UKM Press)
Prompt Scientific Publishing
Tobacco Regulatory Science Group
Universidade da Coruna
University of Dubrovnik
Adiyaman Universitesi Egitim Bilimleri Dergisi
Aufklarung Journal of Philosophy
Bulletin of Legal Medicine
CBCD Colegio Brasileiro de Cirugia Digestiva
International Cardiovascular Forum Journal
International Journal of Academic Research in Education
Journal of Computer and Education Research
Korea Research Academy of Distribution and Management
Korean Association for Psychodrama and Sociodrama
Korean Society for Environmental Education
Korean Society for Medical Mycology
Korean Society of Mechanical Technology
PE Polunina Elizareta Gennadievna
Society for Korea Classical Chinese Education
The Center for Social Welfare Research Yonsei University
The English Language Linguistics Society of Korea
The Korean Liver Cancer Study Group
The Korean Society for German History
The Korean Society of Christian Religious Education
The Phonology-Morphology Circle of Korea
The Study of History Education
Universidade Estadual Paulista - Campus de Tupa
Updated April 13, 2015
- Total no. participating publishers & societies: 6,087
- Total no. voting members: 3,308
- % of non-profit publishers: 57%
- Total no. participating libraries: 1,943
- No. journals covered: 38,609
- No. DOIs registered to date: 73,195,928
- No. DOIs deposited in previous month: 483,190
- No. DOIs retrieved (matched references) in previous month: 59,784,568
- DOI resolutions (end-user clicks) in previous month: 124,765,975
Congratulations to Ed Pentz who celebrated 15 years in January as Executive Director of CrossRef.
In addition to Ed, Applications Developer Jon Stark celebrated 11 years, and Susan Collins, our Member Services Coordinator, her 7th anniversary.
Paula Dwyer, our Controller, and Vaishali Patel, our Technical Support Analyst, both celebrate 4 years at CrossRef.
Chris Cocci, our Staff Accountant, Amy Kelley, our Operations Administrator, and Penny Martin, our part-time UK Office Manager, have all been with us for 1 year.
Congratulations to all!
A new Library Explorers is out: Wearables in the library.
It’s spring! Sit at the picnic table and read some rounded up links.
Icon-based labels are visually appealing, but often don’t clearly express their meaning. The power of text.
The diaspora of the 1×1 gif.
Botanical manufacturing molds growing trees into furniture
3D print a tiny planter
Recipes developed by a supercomputer and its algorithm. Judged for Pleasantness, Surprise, and Synergy.
This is a guest blog post by Matt Smith, who is a learning technologist at UCL. He is interested in how technology can be used to empower communities.

Introduction
Fantasy Frontbench is a not-for-profit and openly licensed project aimed at providing the public with an engaging and accessible platform for directly comparing politicians.
A twist on the popular fantasy football concept, the site uses open voting history data from Public Whip and They Work For You. This allows users to create their own fantasy ‘cabinet’ by selecting and sorting politicians on how they have voted in Parliament on key policy issues such as EU integration, Updating Trident, Same-sex marriage and NHS reform.
Once created, users can see how their fantasy frontbench statistically breaks down by gender, educational background, age, experience and voting history. They can then share and debate their selection on social media.
The site is openly licensed and we hope to make datasets of user selections available via figshare for academic inquiry.

Aim of the project
Our aim is to present political data in a way that is engaging and accessible to those who may traditionally feel intimidated by political media. We wish to empower voters through information and provide them with the opportunity to compare politicians on the issues that most matter to them. We hope the tool will encourage political discourse and increase voter engagement.
The site features explanations of the electoral system and will hopefully help learners to easily understand how the cabinet is formed, the roles and responsibilities of cabinet ministers and the primary processes of government. Moreover, we hope as learners use the site, it will raise questions surrounding the way in which MPs vote in Parliament and the way in which bills are debated and amended. Finally, we host a gallery page which features a number of frontbenches curated by our team. This allows learners to see how different groups and demographics of politicians would work together. Such frontbenches include an All Female Frontbench, Youngest Frontbench, Most Experienced Frontbench, State Educated Frontbench, and a Pro Same-sex Marriage Frontbench, to name but a few.

Development
Over the coming weeks, we will continue to develop the site, introducing descriptions of the main political parties, adding graphs which will allow users to track or ‘follow’ how politicians are voting, as well as adding historical frontbenches to the gallery e.g. Tony Blair’s 1997 Frontbench, Margaret Thatcher’s 1979 Frontbench and Winston Churchill’s Wartime Frontbench.
For further information or if you would like to work with us, please contact firstname.lastname@example.org or tweet us at [@FantasyFbench](http://twitter.com/FantasyFbench).

Acknowledgements
Fantasy Frontbench is a not-for-profit organisation and is endorsed and funded by the Joseph Rowntree Reform Trust Ltd.
Javiera Atenas provided advice on open licensing and open data for the project.
This week, the Senate Committee on Health, Education, Labor and Pensions (aka “HELP Committee”) met to mark-up (debate, amend and vote on) the Every Child Achieves Act of 2015, a bill that would reauthorize the Elementary and Secondary Education Act (ESEA), formerly known as No Child Left Behind.
The American Library Association (ALA) sought amendments to require that every student have access to an “effective school library program,” defined in statute to require that every school library be staffed by a certified librarian, equipped with up-to-date materials and technology, and enriched by a curriculum jointly developed by a grantee school’s librarians and classroom teachers, and to codify the currently funded Innovative Approaches to Literacy (IAL) program under ESEA.
While we did not get all we had hoped for, the Committee did adopt Sen. Sheldon Whitehouse’s (with co-sponsors: Sens. Bob Casey, Susan Collins, and Elizabeth Warren) amendment to amend Title V of ESEA establishing “effective school library programs” as an eligible use of funds under a program for literacy and arts education. Passed by unanimous consent as part of Chairman Sen. Lamar Alexander’s “manager’s amendment” package, this provision would allow grants to be awarded to low-income communities for “developing and enhancing effective school library programs, which may include providing professional development for school librarians, books, and up-to-date materials to low-income schools.”
The bill that the Committee marked up and passed will next be taken up by the full Senate, although we don’t yet know when. Our champion, Senator Jack Reed, intends to propose a stronger amendment on the Senate floor than the one adopted by the HELP Committee to broadly provide dedicated funding for school libraries and librarians in ESEA.
We would like to thank all of the library advocates who reached out to their senators and representatives to demand that Congress support effective school library programs. As we move forward in the advocacy process, there is more work to do. Stay tuned as we await further word!
The following is a guest post by Joey Heinen, National Digital Stewardship Resident at Harvard University Library.
As has been famously outlined by the Library of Congress on their website on sustainability factors for digital formats, digital material is just as susceptible to obsolescence as analog formats. Within digital preservation there are a number of strategies that can be employed in order to protect your data including refreshing, emulation or migration, to name a few. As the National Digital Stewardship Resident at Harvard Library, I am responsible for developing a format migration framework which can be continuously adapted for migration projects at Harvard.
In order to test the viability of this framework, I am also planning for migration of three obsolete formats within the Digital Repository Service (DRS) – Kodak PhotoCD, SMIL playlists and RealAudio. While each format will have its own challenges for a standard workflow, there are certain processes which will always be incorporated into the overall migration framework. In a sense I am helping to create a series of incantations that must be uttered in order to raise these much-cherished digital materials back from the dead. No sage-burning necessary.
Migration is the chosen digital preservation strategy for this project since the aim of migration is to move content from its previously tenuous origins to a format with much greater promise in terms of support and usage. Our overall goal is to continue to provide remote access on modern platforms in a way that best matches the original format.
A Framework Emerges – First Steps
I began my residency by performing a broad literature review on the status of migration projects across the library field. This was a great way to acquaint myself with the terrain, but greater depth would be needed by using some real examples and understanding the institutional context of Harvard – its staff structure, its resources, its policies and its digital repository. Bouncing back and forth between the broader framework and the individual format plans, some patterns began to emerge. After further processing, we have arrived at some core attributes that will inform the overall framework. The specifics of this framework are still in development and are much too large to narrate here, but I’ll discuss some of the most distinct themes.
The mention of “stakeholder involvement” first is deliberate – without gaining a sense for the “who,” the project cannot commence. Depending on the type of content, the exact cast of characters may vary but the types of roles will stay somewhat consistent. For the framework, we identified the following key areas of responsibility and corresponding responsible parties:
- Project Management (that’s me!).
- Technical Guidance/Format Experts (those who understand the format best).
- Documentation (that’s me too! Though gathering provenance and creation of documentation throughout the migration may originate from other departments, depending).
- Quality Assurance/Plan Approval (that’s pretty much everyone but at different points in the process).
- Systems Conformance/Technical Infrastructure (almost always our friends in Library IT and Metadata, who inform us of how the plan does or does not comply with current technological procedures and infrastructure).
- Content Ownership (curators or collection managers; their involvement is generally just to be informed of major decisions).
Defined Project Phases
In general, our migration plans can be broken down into these essential phases:
- Planning for the Test.
- Refining the Plan.
- Executing the Plan.
- Verifying Results and Project Wrap-Up.
From these project phases, we then defined the following within each phase:
- Workflow Activities – essential steps in the migration workflow.
- Workflow Components – ways of grouping the more granular activities.
- Project Deliverables – these could take the form of: the migrated content itself; documentation or metadata generated along the way; diagrams of the workflow and the migration path (e.g. how the content in relation to the Harvard repository will change from pre- to post-migration); or new revelations in digital preservation policies (e.g. storage and retention plans).
Last but not least, we want to consider how other projects within the library might impact the migration plan, whether in terms of timing and staff availability or in terms of the infrastructure upon which migration is supported. For example, the metadata from Harvard’s DRS is being migrated to a new version of the DRS which includes changes to how relationships between files and objects are described. The relationship structure of still image objects will be completely different before and after this metadata migration, so a plan to migrate the Kodak PhotoCD files will need to take this into consideration.
Format Specifics – Examples
In terms of how this framework has been used on the actual formats, we have made the most progress on Kodak PhotoCD, mostly because it’s less complex and less staff intensive than the SMIL/RealAudio formats. So far we have completed the analysis, the creation of the test, and the testing itself, and are beginning to define how the old image objects will be changed relative to the inclusion of migrated content, additional artifacts (e.g. metadata) and the new content model structuring. The details of our decisions around successfully migrating PhotoCD content are too lengthy for this post (though more information can be found on the NDSR blog). However, the Migration Workflow and Migration Pathway diagrams shown here help to show “how the sausage is made.”
The Migration Workflow demonstrates every step of the process, from gathering documentation for initial analysis to ingest of the migrated content into the repository. In the example at left, we see the first two components of Phase 1 of the Migration Workflow – Format/Tools Research and Confirming Migration Criteria. As shown in the corresponding legend, stakeholder involvement is indicated by a colored box naming the stakeholder group within each component. These roles were designed based on the RACI Responsibility Assignment Matrix, which defines four levels of responsibility.
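To make the idea concrete, a RACI assignment can be represented as a simple lookup table mapping each workflow component to its stakeholders' responsibility levels. This is an illustrative sketch only, not Harvard's actual tooling; the stakeholder and component names are hypothetical examples drawn loosely from the roles above.

```python
# Minimal sketch of a RACI Responsibility Assignment Matrix as a lookup
# table. Component and stakeholder names are hypothetical illustrations.

RACI_LEVELS = {"R": "Responsible", "A": "Accountable",
               "C": "Consulted", "I": "Informed"}

# component -> {stakeholder: RACI level}
raci_matrix = {
    "Format/Tools Research": {
        "Project Management": "R",
        "Format Experts": "C",
        "Library IT": "C",
        "Content Owners": "I",
    },
    "Confirming Migration Criteria": {
        "Project Management": "A",
        "Format Experts": "R",
        "Library IT": "C",
        "Content Owners": "I",
    },
}

def stakeholders_with_level(component, level):
    """Return the stakeholders holding a given RACI level for a component."""
    return sorted(name for name, lvl in raci_matrix[component].items()
                  if lvl == level)

print(stakeholders_with_level("Format/Tools Research", "C"))
# → ['Format Experts', 'Library IT']
```

Keeping the matrix in a structured form like this makes it easy to answer questions such as "who must sign off on this step?" at each point in the workflow.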
The Migration Pathway diagram (at right) shows how content will be transformed by a migration. A diagram is produced for each “bucket” of content for which the same tools, settings and outputs can be used uniformly based on shared technical characteristics. This example, from the Horblit Collection, a collection of daguerreotypes initially digitized in PhotoCD form, shows the ways in which the original PhotoCD content as found within the DRS will be converted, newly packaged and ingested into the repository. It considers how the image objects look now (DRS1), how they will look after the metadata migration (DRS2) and how the object will look after the content is migrated.
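The bucketing step described above can be sketched in a few lines: files are grouped by the technical characteristics that determine which tools and settings apply, so each resulting bucket can be migrated with one uniform pathway. The file records and attribute names below are hypothetical, not the actual DRS metadata schema.

```python
# Illustrative sketch: grouping files into migration "buckets" by shared
# technical characteristics. Records and attribute names are hypothetical.
from collections import defaultdict

files = [
    {"id": "img001", "format": "PhotoCD", "resolution": "Base*16", "color": "PCD YCC"},
    {"id": "img002", "format": "PhotoCD", "resolution": "Base*16", "color": "PCD YCC"},
    {"id": "img003", "format": "PhotoCD", "resolution": "Base*4",  "color": "PCD YCC"},
]

def bucket_key(record):
    # The characteristics that determine which tools/settings apply.
    return (record["format"], record["resolution"], record["color"])

buckets = defaultdict(list)
for record in files:
    buckets[bucket_key(record)].append(record["id"])

for key, ids in sorted(buckets.items()):
    print(key, ids)
```

Here the two Base*16 images land in one bucket and the Base*4 image in another, so each bucket gets its own Migration Pathway diagram and tool configuration.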
In the two months remaining for my residency I will be completing the overall framework, and working on the Kodak PhotoCD and SMIL/RealAudio plans (though execution of these plans will certainly fall outside of this timeline). After planning for the format-specific migration and going through several passes at the overall framework, we are getting closer to an actionable model for ongoing migration projects.
It has been fascinating to oscillate between deep analysis of the technical and infrastructural challenges faced with each format and finding ways to abstract these processes into a template that can be continuously adapted. The result will certainly be of use to Harvard, and our hope is that in sharing it with the larger digital preservation field it will be useful to others as well. For the finalized spells and incantations, check the NDSR blog or Harvard website at the end of May. Presto Change-o!
After the passage of SEA 101 (the Indiana Religious Freedom Restoration Act), many scheduled attendees of DPLAFest were conflicted about its location in Indianapolis. Emily Gore, DPLA Director for Content, captured both this conflict and the opportunity the location provides when she wrote:
We should want to support our hosts and the businesses in Indianapolis who are standing up against this law… At DPLAfest, we will also have visible ways to show that we are against this kind of discrimination, including enshrining our values in our Code of Conduct. We encourage you to use this as an opportunity to let your voice and your dollars speak.
For DPLAFest attendees, patronizing businesses that identify themselves with Open for Service is an important start, but some of us wanted to do more. During our visit to Indianapolis, we are donating money to local charities supporting the communities and values that SEA 101 threatens.
One such local charity is the Indiana Youth Group (IYG). The IYG “provides safe places and confidential environments where self-identified lesbian, gay, bisexual, transgender, and questioning youth are empowered through programs, support services, social and leadership opportunities and community service. IYG advocates on their behalf in schools, in the community and through family support services.” IYG was written up as a direct-action donation option in the New Civil Rights Movement, and they provide services and support in parts of the state with a more hostile legal environment than Indianapolis.
This kind of local, direct action effort needs our support in Indiana right now. If you can, please consider donating to the Indiana Youth Group while in Indiana for DPLAFest. There is an existing GoFundMe campaign that IYG recommended linked below. If you choose to donate via GoFundMe, please consider tagging your donation with #DPLAFest so that we can communicate the goodwill of DPLAFest attendees as a group to the charity. The GoFundMe campaign sends money directly to IYG regardless of fundraising goals.
GoFundMe for Indiana Youth Group: http://www.gofundme.com/qpkabg
You can also donate via PayPal through IYG’s website. If you choose to donate through PayPal, please consider mentioning DPLAFest in the related forms on IYG site. IYG has offered to collate those responses with donations to again communicate the positive support DPLAFest attendees give to the charity and to LGBTQ youth in the state of Indiana.
Thank you for considering joining us and other DPLAFest attendees in supporting LGBTQ communities in Indiana. We look forward to seeing you in Indianapolis.
Open Knowledge Foundation: Honouring the memory of leading Open Knowledge community member Subhajit Ganguly
It is with great sadness that we have learned that Mr. Subhajit Ganguly, an Open Knowledge Ambassador in India and a leading community member in the entire region, has suddenly and tragically passed away.
Following a short period of illness Subhajit Ganguly, who was only 30 years old, passed away on the morning of April 7, local time, in the hospital in his hometown of Kolkata, India. His demise came as a shock to his family and loved ones, as well as to his colleagues and peers in the global open data and open knowledge community.
Subhajit was known as a relentless advocate for justice and equality, and a strong proponent and community builder around issues such as open data, open science and open education, areas to which he devoted a large part of both his professional and personal time. Most recently he was the main catalyst and organiser of the India Open Data Summit, and he successfully served as project lead for the Indian Local City Census as well as being a submitter and reviewer of datasets in the Global Open Data Index, a global community-driven project that compares the openness of datasets worldwide in service of another issue most pressing to him: political transparency and accountability.
Subhajit was also instrumental in building the Open Knowledge India Local Group over the past two years, alongside volunteering his time to coordinate other groups and initiatives within the open data landscape. Just last summer he attended the Open Knowledge Festival in Berlin to join his fellow community leaders in planning the future of open knowledge and open data in India, regionally in AsiaPAC, and globally.
Ever since the news spread across the globe over the last few days, messages of praise for Subhajit’s life and work have been pouring in from community leaders and members near and far. He will be tremendously missed, and we join the many voices across the world mourning his loss.
Our thoughts and condolences go out to his family and loved ones. We hope that his work and vision will continue to stand as a significant example to follow for people around the world. May Subhajit rest in peace.
In September, I wrote a post about new collaborative technology from Crestron. We installed AirMedia in our library, and we are now looking at AirTame as a possible next generation version of collaborative technology.
AirTame works on all mobile devices. AirMedia does this too, but its tablet features have been less than ideal. AirTame was able to raise more money than expected in its crowdfunding campaign and is currently working to scale its production.
My university is also considering how collaborative technologies can be used in the classroom. This type of technology will allow for enhanced group work, enhanced presentations, and the instructor being able to move around the classroom to work with different students instead of being tied to the front of the classroom.
As technology continues to move toward mobile and wearable, the ability to show a group what is on a small screen will become more important in both education and the business world.
How is your library using collaborative technology?
How can libraries support new communication methods using collaborative technology?
DuraSpace News: Recordings Available: “Integrating ORCID Persistent Identifiers with DSpace, Fedora and VIVO.”
DuraSpace launched its 11th Hot Topics Community Webinar Series, “Integrating ORCID Persistent Identifiers with DSpace, Fedora and VIVO,” last month. Curated by ORCID’s European Regional Director, Josh Brown, this series provided detailed insights into how ORCID persistent digital identifiers can be integrated with DSpace and Fedora repositories and with the VIVO open source semantic web platform.
The Web service maintenance scheduled for Friday, April 17 has been canceled and will be rescheduled. Stay tuned to Developer Network for updates on future maintenance windows.
We apologize for any inconvenience.