
Feed aggregator

DPLA: New DPLA Job Opportunity: Ebook Project Manager

planet code4lib - Mon, 2015-08-10 17:00

Come work with us! We’re pleased to share an exciting new DPLA job opportunity: Ebook Project Manager. The deadline to apply is August 31. We encourage you to share this posting far and wide!

Ebook Project Manager

The Digital Public Library of America seeks a full-time Ebook Project Manager to assist DPLA with its new ebook initiatives. The Ebook Project Manager should be a knowledgeable, creative community leader who can move our early-stage ebook work from conversation to action. We are seeking a creative individual who demonstrates strong organizational and project management skills, with a broad knowledge of the ebook landscape. The Ebook Project Manager will work closely with the Business Development Director to develop DPLA’s ebook strategy and services, and will coordinate DPLA’s National Ebook Working Group, organize future meetings, and administer discrete pilots targeting key areas of our framework for ebooks.

Responsibilities of the Ebook Project Manager:

  • Serves as DPLA’s primary point person for service development, community engagement, and other aspects of DPLA’s developing ebook program;
  • Leads community convenings; facilitates stakeholder conversations; and synthesizes issues, decisions, and system/service requirements;
  • Organizes and directs the DPLA ebook curation group;
  • Coordinates external communications to the broader DPLA community;
  • Works with DPLA network partners to identify and curate open content for use by content distribution partners.

Requirements for the position:

  • Strong knowledge of current ebook landscape, with a preference given to candidates who demonstrate deep understanding of the public library marketplace, publisher distribution/acquisition processes, and library collection development/acquisition workflow;
  • Understanding of the technology behind ebooks, including EPUB and EPUB conversion processes, and web- and app-based display of ebooks;
  • Experience with project management, especially as it relates to large-scale digital projects;
  • MLS or equivalent experience with books, cataloguing, and metadata;
  • Demonstrated commitment to DPLA’s mission to maximize access to our shared culture.

This position is full-time and is ideally based either in DPLA’s Boston headquarters or remotely in New York, Washington, or another location in the northeast corridor, but other locations will also be considered.

Like its collection, DPLA is strongly committed to diversity in all of its forms. We provide a full set of benefits, including health care, life and disability insurance, and a retirement plan. Starting salary is commensurate with experience.

Please send a letter of interest, a resume/cv, and contact information for three references by August 31, 2015. Please put “Ebook Project Manager” in the subject line. Questions about the position may also be directed to the same address.

About DPLA

The Digital Public Library of America strives to contain the full breadth of human expression, from the written word, to works of art and culture, to records of America’s heritage, to the efforts and data of science. Since launching in April 2013, it has aggregated 11 million items from 1,600 institutions. The DPLA is a registered 501(c)(3) non-profit.

Islandora: Goodbye, Islandoracon. You were awesome.

planet code4lib - Mon, 2015-08-10 14:43

Last week marked a huge milestone for the Islandora Community as we came together for our first full-length conference, in the birthplace of Islandora at the University of Prince Edward Island in Charlottetown, PEI. With a final headcount of 80 attendees, a lineup of 28 sessions and 16 workshops, and a day-long Hackfest to finish things off, there is a lot to reflect on.

Mark Leggott opened the week with a keynote that looked back over the history of the Islandora project through the lens of evolution - from its single-celled days as an idea at UPEI to the "Futurzoic" era ahead of us. We spent the rest of Day One talking about repository strategies, how Islandora works as a community, and how Islandora can work for communities of end users. The day ended with a BBQ on the lawn of the Robertson Library and a first exposure to a variety of Canadian potato chip flavours (roast chicken flavour, anyone?).

Day Two split the conference into two tracks, which meant some tough choices among really great sessions on Islandora tools, sites, migration strategies, working with the arts and digital humanities, and the future with Fedora 4. You can find the slides from many sessions linked in the conference schedule. We ended with beer, snacks, and brutally hard bar trivia.

Day Three launched two days of 90-minute workshops in two tracks, delving into the details of Islandora with some hands-on training from Islandora experts. We covered everything from the basics of setting up the Drupal side of an Islandora site, to a detailed look at the Tuque API, to mass-migrating content with Drush scripts. The social events continued as well, with our big conference dinner at Fishbones, complete with live music and an oyster bar on Wednesday night, and a seafood tasting hosted by conference sponsor discoverygarden, Inc., where this view from the deck was augmented with an actual, literal rainbow:

Not pictured: Rainbow

After the workshops finished up on Thursday, we held the first Islandora Foundation AGM, where new Chairman Mark Jordan (Simon Fraser University) received the ceremonial screaming monkey and former Chairman Mark Leggott took a new place as the Foundation's Treasurer. There was also lively debate around the subject of individual membership in the IF (more on that in days to come). 

Finally, we had the Hackfest, which went off better than we could have hoped. In addition to some bug fixes and improvements, the teams of the Hackfest produced a whopping four new tools, one of which is so ready for use that it has been proposed for adoption in the next release. The Hackfest tools are:

With apologies to anyone whose name I've left out. It was a big crowd and everyone did great work.

From the conference planning team and the Islandora Foundation, thank you very much to our attendees for making our first conference a big success. We hope you enjoyed yourself and learned a ton. And we hope you'll join us again at our next conference!

Next up, for those who can't wait for the second Islandoracon: Islandora Camp CT in Hartford, Connecticut, October 20-23.

Shelley Gullikson: Adventures in Information Architecture Part 2: Thinking Big, Thinking Small

planet code4lib - Mon, 2015-08-10 12:45

When we last saw them in Part 1, our Web Committee heroes were stuck with a tough decision: do we shoehorn the Ottawa Room content into an information architecture that doesn’t really fit it, or do we try to revamp the whole IA?

There was much hand-wringing and arm-waving. (Okay, I did a lot of the hand-wringing and arm-waving.) Our testing showed that users were either using Summon or asking someone to get information, and that when they needed to use the navigation they were stymied. Almost no one looked at the menus. What are our menus for if no one is using them? Are they just background noise? If so, should we just try to make the background noise more pleasant? What if the IA isn’t there primarily to organize and categorize our content, but to tell our users something about our library? Maybe our menus are grouping all the rocks in one spot and all the trees in another spot and all the sky bits somewhere else and what we really need to do is build a beautiful path that leads them…

Oh, hey, (said our lovely and fabulous Web Committee heroes) why don’t you slow down there for a second? What is the problem we need to solve? We’ve already tossed around some ideas that might help, why don’t we look at those to see if they solve our problem? Yes, those are interesting questions you have, and that thing about the beautiful path sounds swell, but… maybe it can wait.

And they kindly took me by the hand — their capes waving in the breeze — and led me out of the weeds. And we realized that we had already come up with a couple of solutions. We could use our existing category of “Research” (which up to now only had course guides and subject guides in it) to include other things like the resources in the Ottawa Room and all our Scholarly Communications / Open Access stuff. We could create a new category called “In the Library” (or maybe “In the Building” is better?) and add information about the physical space that people are searching our site for because it doesn’t fit anywhere in our current IA.

The more we talked about small, concrete ideas like this, the more we realized they might also help with some of the issues left back in the weeds. The top-level headings on the main page (and in the header menu) would read: “Find Research Services In the Building.” Which is not unpleasant background noise for a library.

DuraSpace News: NOW AVAILABLE: Fedora 4.3.0—Towards Meeting Key Objectives

planet code4lib - Mon, 2015-08-10 00:00

Winchester, MA – On July 24, 2015, Fedora 4.3.0 was released by the Fedora team. Full release notes are included in this message and are also available on the wiki. This new version furthers several major objectives, including:

  • Moving Fedora towards a clear set of standards-based services

  • Moving Fedora towards runtime configurability

Terry Reese: MarcEdit 6 Wireframes — Validating Headings

planet code4lib - Sun, 2015-08-09 14:44

Over the last year, I’ve spent a good deal of time looking for ways to integrate many of the growing linked data services into MarcEdit. These services, mainly revolving around vocabularies, provide some interesting opportunities for augmenting our existing MARC data, or enhancing local systems that make use of these particular vocabularies. Examples like those at the Bentley are real-world demonstrations of how computers can take advantage of these endpoints when they are available.

In MarcEdit, I’ve been creating and testing linking tools for close to a year now, and one of the areas I’ve been waiting to explore is whether libraries could utilize linking services to build their own authorities workflows. Conceptually, it should be possible – the necessary information exists…it’s really just a matter of putting it together. So, that’s what I’ve been working on. Utilizing the linked data libraries found within MarcEdit, I’ve been working to create a service that will help users identify invalid headings and the records where those headings reside.

Working Wireframes

Over the last week, I’ve prototyped this service. The way that it works is pretty straightforward. The tool extracts the data from the 1xx, 6xx, and 7xx fields, and if they are tagged as being LC controlled, it queries the service to see what information it can learn about the heading. Additionally, since this tool is designed for work in batch, there is a high likelihood that headings will repeat – so MarcEdit is generating a local cache of headings as well – this way it can check against the local cache rather than the remote service when possible. The local cache will grow constantly, with entries set to expire after a month. I’m still toying with what to do with the local cache, expirations, and what the best way to keep it in sync might be. I’d originally considered pulling down the entire LC names and subject headings files – but for a desktop application, this didn’t make sense. Together, these files, uncompressed, consume GBs of data. Within an indexed database, this would continue to be true. And again, these files would need to be updated regularly. So, I’m looking for an approach that will give some local caching, without the need to make the user download and manage huge data files.
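
Mechanically, the lookup itself is simple enough to sketch. The following is a minimal illustration of the general approach – not MarcEdit’s actual code – assuming id.loc.gov’s known-label service, which answers a matched heading with a redirect carrying X-URI and X-PrefLabel headers, plus a naive in-memory cache standing in for the persistent one described above:

```python
# Minimal sketch: validate a heading against id.loc.gov's known-label
# lookup, caching results so repeated headings skip the network trip.
# This illustrates the approach only; it is not MarcEdit's implementation.
from urllib.parse import quote

import requests

cache = {}  # heading -> (uri, preferred_label), or None if not found

def validate_heading(heading):
    if heading in cache:
        return cache[heading]
    url = "https://id.loc.gov/authorities/names/label/" + quote(heading)
    # A matched label is answered with a redirect whose headers carry
    # the authority URI and the preferred form of the heading.
    resp = requests.head(url, allow_redirects=False, timeout=10)
    if resp.status_code in (302, 303):
        result = (resp.headers.get("X-URI"), resp.headers.get("X-PrefLabel"))
    else:
        result = None  # no match: a candidate for the "invalid" report
    cache[heading] = result
    return result

# Variant heading from the report example later in this post:
print(validate_heading("Arnim, Bettina Brentano von, 1785-1859"))
```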

Anyway – the function is being implemented as a Report.  Within the Reports menu in the MarcEditor, you will eventually find a new item titled Validate Headings.

When you run the Validate Headings tool, you will see the following window:

You’ll notice that there is a Source file. If you come from the MarcEditor, this will be prepopulated. If you come from outside the MarcEditor, you will need to define the file that is being processed. Next, you select the elements to authorize. Then click Process. The Extract button will initially be disabled until after the data run. Once completed, users can extract the records with invalid headings.

When completed, you will receive the following report:

This includes the total processing time, average response from LC’s service, total number of records, and the information about how the data validated.  Below, the report will give you information about headings that validated, but were variants.  For example:

Record #846
Term in Record: Arnim, Bettina Brentano von, 1785-1859
LC Preferred Term: Arnim, Bettina von, 1785-1859

This would be marked as an invalid heading, because the data in the record is incorrect. But the reporting tool will provide back the preferred LC label so the user can then see how the data should currently be structured. Actually, now that I’m thinking about it – I’ll likely include one more value – the URI to the dataset, so you can go straight to the authority file page from this report.

This report can be copied or printed – and as I noted, when this process is finished, the Extract button is enabled so the user can extract the data from the source records for processing. 

Couple of Notes

So, this process takes time to run – there just isn’t any way around it. For this set, there were 7702 unique items queried. Each request to LC averaged 0.28 seconds. In my testing, depending on the time of day, I’ve found that the response rate can run between 0.20 and 1.2 seconds per request. None of those times are that bad individually, but taken in aggregate against 7700 queries, it adds up. If you do the math, 7702*0.2 = 1540 seconds just to ask for the data. Divide that by 60 and you get 25.6 minutes. Given the total processing time, that means there are about 11 minutes of “other” things happening here. My guess is that the other 11 minutes are being eaten up by local lookups, character conversions (since LC requests UTF-8 and my data was in MARC8), and data normalization. Since there isn’t anything I can do about the latency between the user and the LC site, I’ll be working over the next week to try to remove as much local processing time from the equation as possible.

Questions – let me know.


Manage Metadata (Diane Hillmann and Jon Phipps): Five Star Vocabulary Use

planet code4lib - Fri, 2015-08-07 18:50

Most of us in the library and cultural heritage communities interested in metadata are well aware of Tim Berners-Lee’s five star ratings for linked open data (in fact, some of us actually have the mug).

The five star rating for LOD, intended to encourage us to follow five basic rules for linked data, is useful, but, as we’ve discussed it over the years, a basic question rises up: What good is linked data without (property) vocabularies? Vocabulary manager types like me and my peeps are always thinking like this, and recently we came across solid evidence that we are not alone in the universe.

Check out: “Five Stars of Linked Data Vocabulary Use”, published last year in the Semantic Web Journal. The five authors posit that TBL’s five star linked data is just the precondition to what we really need: vocabularies. They point out that the original 5 star rating says nothing about vocabularies, but that Linked Data without vocabularies is not useful at all:

“Just converting a CSV file to a set of RDF triples and linking them to another set of triples does not necessarily make the data more (re)usable to humans or machines.”

Needless to say, we share this viewpoint!

I’m not going to steal their thunder and list all five star categories here – you really should read the article (it’s short) – but I will note that the lowest level is a zero star rating that covers LD with no vocabularies. The five star rating is reserved for vocabularies that are linked to other vocabularies, which is pretty cool, and not easy to accomplish by the original publisher as a soloist.

These five star ratings are a terrific start to good practices documentation for vocabularies used in LOD, which we’ve had in our minds for some time. Stay tuned.

Patrick Hochstenbach: Penguin in Africa II

planet code4lib - Fri, 2015-08-07 18:01
Filed under: Comics Tagged: africa, cartoon, comic, comics cartoons, inking, kinshasa, Penguin

District Dispatch: Envisioning copyright education

planet code4lib - Fri, 2015-08-07 16:52

I have been an ALA employee for a while now, working primarily on copyright policy and education. During that time, I have worked with several librarian groups, taught a number of copyright workshops, and come to appreciate that more librarians have a better understanding of what copyright is than was true several years ago. Nonetheless, on a regular basis, librarians across the country, primarily academic but also school librarians, find themselves tasked with the assignment to be the “copyright person” for their library or educational institution. These new job responsibilities are usually unwanted, because the victims recognize that they don’t know anything about copyright. The fortunate among them make connections with more knowledgeable colleagues, or perhaps have the funding to attend a copyright workshop here or there that may be, but often is not, reliable. In short, their graduate degree in library and information science, accredited or not, has not prepared them for the assignment. Information policy course work in library school is limited to a discussion of censorship and Banned Books Week.

Sounds a bit harsh, doesn’t it?

I don’t expect or recommend that graduate students become fluent in the details of every aspect of the copyright law. What they do need to know is the purpose of the copyright law; why information professionals in particular have a responsibility for upholding balanced copyright law by representing the information rights of their communities; why information policy understanding must go hand in hand with librarianship; and, of course, what fair use is. They need to understand copyright law as a concept, not a set of dos and don’ts.

Recently, this void in library and information science education has begun to be investigated. I know several librarians who are conducting research on MLIS programs, the need for copyright education, how copyright is taught, and the requirements of those teaching information policy courses. More broadly, the University of Maryland Information School published Re-envisioning the MLS: Findings, Issues and Considerations, the first-year report of a three-year study on the future of the Masters of Library Science degree and how we prepare information professionals for their careers. If you already have your master’s degree, don’t feel left out. Look forward to new learning, knowing that not all of the old learning is for naught. The values of librarianship have survived and will continue to be at the heart of what we need to know and do.

The post Envisioning copyright education appeared first on District Dispatch.

Open Knowledge Foundation: Onwards to AbreLatAm 2015: what we learned last year

planet code4lib - Fri, 2015-08-07 14:44

This post was co-written by Mor Rubinstein and Neal Bastek. It is cross-posted and available in Spanish at the AbreLatAm blog.

AbreLatAm, for us “gringos”, is magical. Even in an age when everyone is glued to a screen, face-to-face connection is still the strongest connection humans can have; it fosters the trust that can lead to new collaborations and innovations. In the case of Latin America, however, it also creates a family. This feeling creates both a sense of solidarity and security that lets people share and consult about their open data and transparency issues with greater passion and awareness of the challenges and conditions we face daily in our own communities. It is unique, and difficult to replicate. You may not realise it, but in our experience, this feeling is not so common in other parts of the world, where the culture of work is stricter and, with all due respect for our differences, less personal. AbreLatAm therefore is a gift to the movement itself and not just to those of us lucky enough to attend.

For open data practitioners from outside of America Latina like us, AbreLatAm is a place to learn how communities evolve and how they work together. It is a place for us to listen, deeply. Our command of the Spanish language is not so great (pero es mejor que ayer!), but we don’t need Spanish to feel the atmosphere, see the sparks, and contribute, in English, with hand gestures to amplify the event. We try hard to understand the context and the words (and are grateful for the support we have from patient translators!) and to understand the unique problems in the region: for example, the high levels of corruption, the low levels of trust in government, and the highest rates of inequality in the world. Other problems, however, are universal, and we should all examine how to solve them together. The question is how?

The Open Knowledge Network has gained tremendous inspiration from AbreLatAm. What appeared early on as a good opportunity to promote the Global Open Data Index and build connections with the Latin American community has become so much more — a fertile ground for sharing and feedback. Some of the processes that we are doing now in this year’s Index, such as our methodology consultation and datasets selections, were the direct result of our participation in AbreLatAm last year.

Neal and Mor promoting the Index at last year’s AbreLatAm

We are very excited to see what we will learn this year. As AbreLatAm matures, it also receives more attention and attracts more participants. AbreLatAm was, and still is, a pioneering community participatory event. The challenges now are about scaling, and it is a mirror of similar challenges around the globe. How can we harness the energy of an un-conference with such a vast number of participants? How can we go from talking and sharing to coordinated global action?

The movement’s ability to scale will only be a success if it’s rooted in community-based, citizen-driven needs and not handed down from on high by way of intellectual and academic arguments rooted in a Eurocentric experience. AbreLatAm is an ideal setting for discovering this demand in the Latin American context and matching it and adapting it to global practices and experiences that have succeeded elsewhere – be it in the North or South! Likewise, the LATAM community has much to share in terms of their own experiences and successes, and at Open Knowledge we’re keenly interested in bringing those back to our global network for reflection and consideration.

District Dispatch: The future of the MLS: New report from the University of Maryland

planet code4lib - Fri, 2015-08-07 13:17


Last summer, the iSchool at the University of Maryland launched the Re-Envisioning the MLS initiative. The premise is that future professionals in library and library-related fields will likely need fundamentally different educational preparation than what is provided by current curricula. Based on an extensive body of research, outreach, and analysis, yesterday the iSchool released its report Re-Envisioning the MLS: Findings, Issues, and Considerations.

The Maryland initiative is important to our work in public policy—particularly through ALA’s Policy Revolution initiative and ALA’s Libraries Transform campaign—as the field needs more professionals with an outward orientation. Fundamentally, the focus of library work is evolving from internal optimization of information resources and systems within a library to collaborative efforts across libraries and with non-library entities. Thus, the role of “policy advocate” becomes a greater part of a librarian’s job, whether that advocacy occurs at the community/local level, regional level, state level, or with a national focus. The Maryland initiative is important enough to me that I’ve served on the iSchool’s MLS Advisory Board during the past year to provide input into the process and this report.

As summarized in the report release:

The findings have a number of implications for LIS education and MLS programs, including:

• Attributes of Successful Information Professionals. Successful information professionals are not those who wish to seek a quiet refuge out of the public’s view. They need to be collaborative, problem solvers, creative, socially innovative, flexible and adaptable, and have a strong desire to work with the public.
• Ensure a Balance of Competencies and Abilities. MLS programs need to ensure that students have a range of competencies, but that aptitude needs to be balanced with a progressive attitude (“can do,” “change agent,” “public service”).
• Re-Thinking the MLS Begins with Recruitment. Neither a love of books nor a love of libraries is enough for the next generation of information professionals. Instead they must thrive on change, embrace public service, and seek challenges that require creative solutions. Attracting students with a strong desire to serve the public is critical.
• Be Disruptive, Savvy, and Fearless. Through creativity, collaboration, innovation, and entrepreneurship, information professionals have the opportunity to disrupt current approaches and practices to existing social challenges. The future belongs to those who are socially innovative, entrepreneurial, and change agents who are bold, fearless, willing to take risks, go “big,” and go against convention.

The report is far from the end point of the initiative, as the next stage focuses on redesign of the curriculum with continued stakeholder engagement and, ultimately, implementation. And, of course, there is much more in the report than described here; I urge you to take a look. Background materials and other research used to produce the report are available on the initiative’s website. Feel free to provide comments, either to the University of Maryland folks or to me. I look forward to my continuing collaboration on this excellent initiative.

The post The future of the MLS: New report from the University of Maryland appeared first on District Dispatch.

LITA: If You Build It They Might Not Come

planet code4lib - Fri, 2015-08-07 13:00

I’ve felt lately that I am trying to row upstream when getting faculty and students to use our research guides. They have great content, we discuss them in instruction sessions, and we prominently feature them on our webpage. In spite of this, though, they are not used nearly as much as I think they should be.

Licensed under CC BY-SA 2.0 by Side Wages

This summer, I spent time brainstorming ways to market the guides to increase usage, and it hit me that maybe I’m going about the process all wrong. I’m trying to promote a resource to students that is outside the typical resources they use. Our students use the university’s learning management system, Moodle, extensively. It is the way they access courses and communicate with their professors and fellow classmates.

We have integrated links in Moodle directly to the library, but based on our Google Analytics, students go directly from the library homepage to the databases. They don’t frequently visit other parts of the website. So instead of rowing upstream, what if we start using Moodle? I’m still brainstorming what this could look like, but here are a few ideas:

  • Enroll students in a library course (I’ve seen this done, but I’m not sure it is the best fit for my institution)
  • Create lessons and pages in Moodle that faculty can import into their own courses
  • Work more closely with the instructional design team to include library resources in the courses

How do you use the LMS to encourage student use of the library?



Shelley Gullikson: Adventures in Information Architecture Part 1: Test what we have

planet code4lib - Thu, 2015-08-06 22:51

For a while now, Web Committee has been discussing revamping the information architecture on our library website. There are some good reasons:

  • more than half of our visitors are arriving at the site through a web search and so only have the menu — not the home page — to orient them to what our site is and does
  • the current architecture does not have an obvious home for our growing scholarly communications content
  • the current architecture is rather weak on the connection with the library building, which is a problem because:
    • people are searching the site for content about the building
    • there are more visits to the building than visits to the website

However, we also know that changing the IA is hard. Our students have already told us that they don’t like it when the website changes, so we really want to make sure that any change is a positive one. But that takes time.

And we have a pressing need to do something soon. The Library will be opening a new Ottawa Resource Room in the fall that has related web content, and we can’t decide where it fits. So: user testing! Maybe our users can see something we don’t in our current IA. (Spoiler: they can’t.)

We did guerrilla-style testing with a tablet, asking people to show us how they would find:

  • information relating to Ottawa (we asked what program they were in to try to make it relevant; for example we asked the Child Studies major about finding information related to child welfare in specific Ottawa neighbourhoods)
  • information about the Ottawa Room
  • (for another issue) how they would get help with downloading ebooks

As an aside: We’re not so naive as to think that students use the library website for all of their information needs. We made a point of asking them where on the library website they would go because we needed to put the information somewhere on the website. For the ebooks question, we also asked what they would really do if they had problems with ebooks. 6/8 people said they would ask someone at the library. Yup. They’d talk to a real person. Anyway, back to IA…

We talked to 8 different students. For information relating to Ottawa, the majority would do a Summon search. Makes sense. For information about the Ottawa Room itself, the answers were all over the place and nothing was repeated more than twice. So our users weren’t any better than we were at finding a place in our current IA for this information. (Hey, it was worth a try!)

So… we either need to shove the Ottawa Room somewhere, anywhere, in the structure we have or we need to tweak the IA sooner rather than later. So on to Web Committee for discussion and (I hope!) decisions.

District Dispatch: Massive advocacy surge forestalls cybersecurity showdown

planet code4lib - Thu, 2015-08-06 20:41

Congratulations! You, and more than 6 million other fax-firing outraged citizens, helped convince the Senate on its last day of pre-recess debate not to take up S. 754, the privacy-hostile Cybersecurity Information Sharing Act (CISA)... at least until Congress returns after Labor Day.

Kick the can down the road

While modified by its principal authors, Intelligence Committee Chair Richard Burr (R-NC) and Ranking Democrat Dianne Feinstein (D-CA), to respond to profound criticism from ALA and many other privacy advocates, S. 754 as it came to the Senate floor on Thursday remained a deeply troubling and flawed bill. Many Senators were and will again be prepared to offer amendments to blunt CISA’s sharpest anti-privacy edges.  Even if they succeed, however, the Senate’s final version of S. 754 still will have to be reconciled with the House’s very different cyber bills and the end product is unlikely to be one that civil liberties advocates, ALA among them, can support.

Summer may be a time for hammocks and naps, but not when it comes to making sure that every Senator knows just how bad a bill CISA is and just how much you want your Senator to vote “NO” when and if it returns to the Senate floor. Stay tuned to District Dispatch, and ALA’s Twitter and Facebook pages, for more on when and how best to deliver that message. For now, enjoy that lemonade in the shade; you earned it!

The post Massive advocacy surge forestalls cybersecurity showdown appeared first on District Dispatch.

Nicole Engard: Bookmarks for August 6, 2015

planet code4lib - Thu, 2015-08-06 20:30

Today I found the following resources and bookmarked them on Delicious.

  • Computer Science Learning Opportunities – We have developed a range of resources, programs, scholarships, and grant opportunities to engage students and educators around the world interested in computer science.

Digest powered by RSS Digest

The post Bookmarks for August 6, 2015 appeared first on What I Learned Today....

Related posts:

  1. Contribute to Open Source
  2. Teach Students Open Source
  3. Collaborative Teaching for More Effective Learning

Patrick Hochstenbach: Penguin in Africa

planet code4lib - Thu, 2015-08-06 03:53
Filed under: Comics Tagged: africa, cartoon, comic, hunt, hunter, hunting, Penguin

DPLA: Summer of Space Exploration

planet code4lib - Wed, 2015-08-05 19:22

This summer has been one full of space exploration. NASA’s New Horizons mission brought us new discoveries and breathtaking images of Pluto. July and August also mark the anniversaries of a host of scientific milestones, including man’s first walk on the moon, among other breakthroughs that helped pave the way for New Horizons. You can explore some of the milestones of American space exploration in the DPLA collections.

The first U.S. space probe to send back close-up images of the moon, the Ranger VII, was launched on July 28, 1964. It was also the first successful flight in the Ranger program, which had tried and failed to send a number of unmanned spacecraft to photograph the moon in the early 1960s. The Ranger VII sent more than 4,000 pictures back to Earth and helped scientists select the eventual Apollo landing sites.

Close-up images of the Moon from the Ranger VII, 1964. Courtesy of the National Archives and Records Administration.

The Ranger Lunar Probe. Courtesy of the National Air and Space Museum via Smithsonian Institution.

Five years later, NASA astronauts walked the very lunar surface that the Ranger VII photographed. The Apollo 11 spacecraft carried the first humans to the moon, with Neil Armstrong’s historic first steps being broadcast live on TV for viewers back on Earth. You can view objects (including Armstrong’s spacesuit) in the DPLA, via the National Air and Space Museum collection.

A National Security Action memo about the Apollo program, 1962. Courtesy of the National Archives and Records Administration.


Decades later, the Curiosity, an unmanned rover, explored the surface of another planet – Mars. The robotic vehicle launched in November 2011 and touched down on the surface of Mars in August 2012. It sent back striking images of the Martian surface, and videos from the Curiosity were watched across the world online. On the one-year anniversary of its landing, the rover played “Happy Birthday,” marking the first song played on another planet. Learn more about the Curiosity mission in this collection of stories from Minnesota Public Radio. You can also read NASA’s Curiosity flight data, from the United States Government Publishing Office.

The most recent space exploration discoveries come from New Horizons, a space probe which captured stunning images of Pluto this summer. Aside from the mock-up of the New Horizons probe, from the National Air and Space Museum, you can find a variety of other related space history items in the DPLA collections, too. Notably, these depictions of the solar system (this ornate 1876 quilt, and an orrery mechanical model, both from the National Museum of American History) created before the discovery of Pluto show just how far scientists have come in their astronomical discoveries.

“The Solar System,” 1869. Courtesy of David Rumsey.

LITA: Jobs in Information Technology: August 5, 2015

planet code4lib - Wed, 2015-08-05 18:51

New vacancy listings are posted weekly on Wednesday at approximately 12 noon Central Time. They appear under New This Week and under the appropriate regional listing. Postings remain on the LITA Job Site for a minimum of four weeks.

New This Week:

Manager, Information Technology, Timberland Regional Library, Olympia, WA

Vice President – Digital Services, Backstage Library Works, Bethlehem, PA

Visit the LITA Job Site for more available jobs and for information on submitting a job posting.

Open Knowledge Foundation: Beauty behind the scenes

planet code4lib - Wed, 2015-08-05 15:57

Good things can often go unnoticed, especially if they’re not immediately visible. Last month the government of Sweden, through Vinnova, released a revamped version of their open data portal, Öppnadata.se. The portal still runs on CKAN, the open data management system. It even has the same visual feeling, but the principles behind the portal are completely different. The main idea behind the new version of Öppnadata.se is automation. Open Knowledge teamed up with the Swedish company Metasolutions to build and deliver an automated open data portal.

Responsive design

In modern web development, one aspect of website automation called responsive design has become very popular. With this technique the website automatically adjusts its presentation depending on the screen size. That is, it knows how best to present the content given different screen sizes. Öppnadata.se got a slight facelift in terms of tweaks to its appearance, but the big news on that front is that it now has a responsive design. The portal looks different if you access it on a mobile phone than if you visit it on a desktop, but the content is still the same.

These changes were contributed to CKAN. They are now a part of the CKAN core web application as of version 2.3. This means everyone can now have responsive data portals as long as they use a recent version of CKAN.

New Öppnadata.se

Old Öppnadata.se

Data catalogs

Perhaps the biggest innovation of Öppnadata.se is how the automation process works for adding new datasets to the catalog. Normally with CKAN, data publishers log in and create or update their datasets on the CKAN site. CKAN has for a long time also supported something called harvesting, where an instance of CKAN goes out and fetches new datasets and makes them available. That’s a form of automation, but it’s dependent on specific software being used or on special harvesters for each source. So harvesting from one CKAN instance to another is simple. Harvesting from a specific geospatial data source is simple. Automatically harvesting from something you don’t know and that doesn’t exist yet is hard.

That’s the reality which Öppnadata.se faces. Only a minority of public organisations and municipalities in Sweden publish open data at the moment. So a decision hasn’t been made by a majority of the public entities about what software or solution will be used to publish open data.

To tackle this problem, Öppnadata.se relies on an open standard from the World Wide Web Consortium called DCAT (Data Catalog Vocabulary). The open standard describes how to publish a list of datasets, and it allows Swedish public bodies to pick whatever solution they like to publish datasets, as long as one of its outputs conforms to DCAT.

Öppnadata.se actually uses a DCAT application profile which was specially created for Sweden by Metasolutions and defines in more detail what to expect – for example, that Öppnadata.se expects to find dataset classifications according to the Eurovoc classification system.

Thanks to this effort, significant improvements have been made to CKAN’s support for RDF and DCAT. They include application profiles (like the Swedish one) for harvesting and for exposing DCAT metadata in different formats. So a CKAN instance can now automatically harvest datasets from a range of DCAT sources, which is exactly what Öppnadata.se does. The CKAN support also makes it easy for Swedish public bodies who use CKAN to automatically expose their datasets correctly so that they can be automatically harvested by Öppnadata.se. For more information, have a look at the CKAN DCAT extension documentation.
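
To make the moving parts concrete, here is a minimal sketch of what a DCAT harvester fundamentally does – read a catalog and walk its datasets. It assumes the Python rdflib library and a made-up catalog URL; the real work on Öppnadata.se is done by the ckanext-dcat extension, so this is an illustration only:

```python
# Minimal sketch of reading a DCAT catalog and listing its datasets.
# The catalog URL is made up; any endpoint serving DCAT RDF would do.
from rdflib import Graph, Namespace
from rdflib.namespace import RDF, DCTERMS

DCAT = Namespace("http://www.w3.org/ns/dcat#")

g = Graph()
g.parse("https://example.se/datasets/dcat.rdf")  # hypothetical endpoint

for dataset in g.subjects(RDF.type, DCAT.Dataset):
    print(dataset, g.value(dataset, DCTERMS.title))
```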

Dead or alive

The Web is decentralised and always changing. A link to a webpage that worked yesterday might not work today because the page was moved. When automatically adding external links, for example links to resources for a dataset, you run the risk of adding links to resources that no longer exist.

To counter that, Öppnadata.se uses a CKAN extension called Dead or alive. It may not be the best name, but that’s what it does: it checks whether a link is dead or alive. The checking itself is performed by an external service called deadoralive; the extension just serves up the set of links for the external service to check. In this way dead links are automatically marked as broken, and system administrators of Öppnadata.se can find problematic public bodies and notify them that they need to update their DCAT catalog (this is not automatic because nobody likes spam).
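
The idea behind the check is simple enough to sketch. The snippet below is a toy illustration only – not the deadoralive service itself – and the links are made up:

```python
# Toy dead-or-alive check: HEAD each link and flag anything that
# errors out or comes back with a 4xx/5xx status.
import requests

def is_alive(url, timeout=10):
    try:
        resp = requests.head(url, allow_redirects=True, timeout=timeout)
        return resp.status_code < 400
    except requests.RequestException:
        return False

links = ["https://example.org/data.csv", "https://example.org/gone.csv"]
broken = [url for url in links if not is_alive(url)]
print("Broken links:", broken)
```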

These are only the automation highlights of the new Öppnadata.se. Other changes were made that have little to do with automation but are still not immediately visible, so a lot of Öppnadata.se’s beauty happens behind the scenes. That’s also the case for other open data portals. You might just visit your open data portal to get some open data, but you might not realise the amount of effort and coordination it takes to get that data to you.

Image of Swedish flag by Allie_Caulfield on Flickr (cc-by)

This post has been republished from the CKAN blog.

Patrick Hochstenbach: Figure drawing on mondays

planet code4lib - Wed, 2015-08-05 13:15
Trying out a combination of B4 pencil and white pencil on brown paper. Bit hard to see the white lines while drawing. These were quick 4-minute sketches. Filed under: Figure Drawings Tagged: art, art model, nude model, Nudes, pencil

LITA: Learning through WordClouds: Visualizing LITA Jobs Data

planet code4lib - Wed, 2015-08-05 13:11

I am in no way attempting to create an evidence-based scholarly study on employment movements. This is an attempt to satisfy my recent fascination with data visualization and my curiosity to use them to inspire discussion. On August 4, 2015, sometime in the morning, I took data from the employment opportunities advertised on the LITA Job site in order to see some trends. The jobs are posted under the Northeastern, Southern, Midwestern, and Western regions; none were posted outside of the United States at the time of my mini-experiment. This information may be helpful to current job seekers or to folks currently employed who may be interested in areas to venture into or to complement their current repertoire. I hope these visualizations will conjure some discussion or ideas. Out of the sixty-seven total ads listed, 34 were from universities, 14 from colleges, 9 from public libraries, and 10 from other libraries such as vendors or special libraries.

Organization/Library-type employment post percentage – university, college, public, and other

Job Titles
As librarians, we master the art of keyword searching, but sometimes we may struggle with finding those specific words that can bring back the needed information. This can happen with job searching. Library, librarian, and technology as keywords can only take you so far. In the past, when looking for employment, I felt I might be unaware of exciting jobs out there due to not knowing the magic terms.

wordcloud of advertised job titles minus the words librarian, library, and university

After visualizing the job titles on the list, I discovered I like reading the more obscure words rarely used. These terms are a helpful way to understand duties, but they can also motivate you. Take, for instance, the enticing words included on some: emerging, collaborator, integrated, initiative, or innovation. I especially love the job title Data and Visualization Librarian, posted by Dartmouth College Library.

Duties and Required/ Preferred Qualifications
Out of the 67 current posts, 44 positions had this information readily available; the other 23 were filled, had a broken link, or had a link that led to the homepage or job search page of the organization.

Wordcloud of duties and required/preferred qualifications

After you get past the usual words that pop out, there may be knowledge in the smaller, more obscure words. For programmers, the usual contenders were CSS (Cascading Style Sheets), Java, XSL (EXtensible Stylesheet Language), APIs (application programming interfaces), and RDF (Resource Description Framework). I was not aware of MVC. It seems that ASP.NET MVC is a Microsoft web and app creation tool, and Microsoft has wonderful tutorials for it. Another learning experience came from a somewhat prominent acronym – RIS. RIS is a standardized tagging system used to effectively interchange citation information between platforms. XML’s XPath and D3 were also new to me. Some areas in which to possibly develop your skills are RDA (Resource Description & Access) and 3D software and printing.
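
For anyone else meeting RIS for the first time: it is a plain-text format in which each line carries a two-letter tag, two spaces, a hyphen, and a value, with ER closing the record. A tiny sketch, using a made-up record:

```python
# Parse one (made-up) RIS record into a dict of tag -> values.
ris = """TY  - JOUR
AU  - Doe, Jane
TI  - An example article
PY  - 2015
ER  - """

record = {}
for line in ris.splitlines():
    tag, sep, value = line.partition("  - ")
    if sep:  # skip anything that is not a tagged line
        record.setdefault(tag.strip(), []).append(value.strip())
print(record)  # {'TY': ['JOUR'], 'AU': ['Doe, Jane'], ...}
```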

This small exercise gave me not only a small snippet of employment information to be aware of, but also more respect for the use of word clouds.
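
For anyone who wants to try something similar, here is a minimal sketch using the Python wordcloud package; the job-title text and the stopword list are stand-ins for the real data described above:

```python
# Build a word cloud from job-title text, dropping the words the post
# excludes (librarian, library, university). Titles are stand-ins.
from wordcloud import WordCloud

titles = ("Data and Visualization Librarian, Systems Librarian, "
          "Web Services Librarian, Emerging Technologies Librarian")
cloud = WordCloud(width=800, height=400, background_color="white",
                  stopwords={"librarian", "library", "university"})
cloud.generate(titles).to_file("job_titles.png")
```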

Word Cloud Web Tools:
Word Cloud Generator:

