You are here

Feed aggregator

Galen Charlton: Testing Adobe Digital Editions 4.0.1, round 2

planet code4lib - Fri, 2014-10-24 21:04

Yesterday I did some testing of version 4.0.1 of Adobe Digital Editions and verified that it is now using HTTPS when sending ebook usage data to Adobe’s server.

Of course, because the HTTPS protocol encrypts the datastream to that server, I couldn’t immediately verify that ADE was sending only the information that the privacy statement says it is.

Emphasis is on the word “immediately”.  If you want to find out what a program is sending via HTTPS to a remote server, there are ways to get in the middle.  Here’s how I did this for ADE:

  1. I edited the hosts file to refer “” to the address of a server under my control.
  2. I used the script from openssl to create a certificate authority of my very own, then generated an SSL certificate for “” signed by that CA.
  3. I put the certificate for my new certificate authority into the trusted root certificates store on my Windows 7 desktop.
  4. I put the certificate in place on my webserver and wrote a couple of simple CGI scripts to emulate the ADE logging data collector and capture what got sent to them.
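
Step 4 can be sketched in a few lines of Python instead of CGI scripts: a minimal HTTPS server that accepts any POST and dumps the body to a log file. The certificate and key filenames here are assumptions — they would be the ones generated by the homemade CA in step 2.

```python
import ssl
from http.server import BaseHTTPRequestHandler, HTTPServer

class CaptureHandler(BaseHTTPRequestHandler):
    """Accept any POST and record its body, emulating the logging endpoint."""
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        with open("captured.log", "ab") as log:
            log.write(self.path.encode() + b"\n" + body + b"\n\n")
        self.send_response(200)
        self.end_headers()

def make_server(certfile, keyfile, port=443):
    """Wrap a plain HTTP server in TLS using the homemade certificate."""
    server = HTTPServer(("", port), CaptureHandler)
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(certfile, keyfile)
    server.socket = ctx.wrap_socket(server.socket, server_side=True)
    return server

if __name__ == "__main__":
    # Hypothetical filenames for the cert signed by the do-it-yourself CA.
    make_server("ca-signed.pem", "ca-signed.key").serve_forever()
```

Because the client trusts the homemade CA (step 3), it will complete the TLS handshake with this server and send its payload in the clear from the server’s point of view.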

I then started up ADE and flipped through a few pages of an ebook purchased from Kobo.  Here’s an example of what is now getting sent by ADE (reformatted a bit for readability):

"id":"F5hxneFfnj/dhGfJONiBeibvHOIYliQzmtOVre5yctHeWpZOeOxlu9zMUD6C+ExnlZd136kM9heyYzzPt2wohHgaQRhSan/hTU+Pbvo7ot9vOHgW5zzGAa0zdMgpboxnhhDVsuRL+osGet6RJqzyaXnaJXo2FoFhRxdE0oAHYbxEX3YjoPTvW0lyD3GcF2X7x8KTlmh+YyY2wX5lozsi2pak15VjBRwl+o1lYQp7Z6nbRha7wsZKjq7v/ST49fJL", "h":"4e79a72e31d24b34f637c1a616a3b128d65e0d26709eb7d3b6a89b99b333c96e", "d":[ { "d":"ikN/nu8S48WSvsMCQ5oCrK+I6WsYkrddl+zrqUFs4FSOPn+tI60Rg9ZkLbXaNzMoS9t6ACsQMovTwW5F5N8q31usPUo6ps9QPbWFaWFXaKQ6dpzGJGvONh9EyLlOsbJM" }, { "d":"KR0EGfUmFL+8gBIY9VlFchada3RWYIXZOe+DEhRGTPjEQUm7t3OrEzoR3KXNFux5jQ4mYzLdbfXfh29U4YL6sV4mC3AmpOJumSPJ/a6x8xA/2tozkYKNqQNnQ0ndA81yu6oKcOH9pG+LowYJ7oHRHePTEG8crR+4u+Q725nrDW/MXBVUt4B2rMSOvDimtxBzRcC59G+b3gh7S8PeA9DStE7TF53HWUInhEKf9KcvQ64=" }, { "d":"4kVzRIC4i79hhyoug/vh8t9hnpzx5hXY/6g2w8XHD3Z1RaCXkRemsluATUorVmGS1VDUToDAvwrLzDVegeNmbKIU/wvuDEeoCpaHe+JOYD8HTPBKnnG2hfJAxaL30ON9saXxPkFQn5adm9HG3/XDnRWM3NUBLr0q6SR44bcxoYVUS2UWFtg5XmL8e0+CRYNMO2Jr8TDtaQFYZvD0vu9Tvia2D9xfZPmnNke8YRBtrL/Km/Gdah0BDGcuNjTkHgFNph3VGGJJy+n2VJruoyprBA0zSX2RMGqMfRAlWBjFvQNWaiIsRfSvjD78V7ofKpzavTdHvUa4+tcAj4YJJOXrZ2hQBLrOLf4lMa3N9AL0lTdpRSKwrLTZAFvGd8aQIxL/tPvMbTl3kFQiM45LzR1D7g==" }, { "d":"bSNT1fz4szRs/qbu0Oj45gaZAiX8K//kcKqHweUEjDbHdwPHQCNhy2oD7QLeFvYzPmcWneAElaCyXw+Lxxerht+reP3oExTkLNwcOQ2vGlBUHAwP5P7Te01UtQ4lY7Pz" } ]

In other words, it’s sending JSON containing… I’m not sure.

The values of the various keys in that structure are obviously Base64-encoded, but when run through a decoder, the result is just binary data, presumably ciphertext from another layer of encryption.
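
That observation is easy to reproduce. This sketch decodes the first “d” value from the capture above; the decoded bytes use the full 0–255 range rather than printable ASCII, which is what you would expect of ciphertext rather than text.

```python
import base64

# First "d" value from the captured JSON payload above.
sample = ("ikN/nu8S48WSvsMCQ5oCrK+I6WsYkrddl+zrqUFs4FSOPn+t"
          "I60Rg9ZkLbXaNzMoS9t6ACsQMovTwW5F5N8q31usPUo6ps9Q"
          "PbWFaWFXaKQ6dpzGJGvONh9EyLlOsbJM")

raw = base64.b64decode(sample)  # decodes cleanly: it really is Base64

# A crude check: ASCII text never sets the high bit, while encrypted
# bytes are spread across the whole byte range.
high_bytes = sum(b >= 0x80 for b in raw)
print(len(raw), "bytes decoded,", high_bytes, "with the high bit set")
```

The decode succeeds, so the Base64 layer is fine — it is the decoded bytes themselves that are opaque.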

Thus, we haven’t actually gotten much further towards verifying that ADE is sending only the data they claim to.  That packet of data could be describing my progress reading that book purchased from Kobo… or it could be sending something else.

That extra layer of encryption might be done as protection against a real man-in-the-middle attack targeted at Adobe’s log server — or it might be obfuscating something else.

Either way, the result remains the same: reader privacy is not guaranteed. I think Adobe is now doing things a bit better than they were when they released ADE 4.0, but I could be wrong.

If we as library workers are serious about protecting patron privacy, I think we need more than assurances — we need to be able to verify things for ourselves. ADE necessarily remains in the “unverified” column for now.

Nicole Engard: Bookmarks for October 24, 2014

planet code4lib - Fri, 2014-10-24 20:30

Today I found the following resources and bookmarked them:

  • Klavaro Klavaro is just another free touch typing tutor program. We felt like to do it because we became frustrated with the other options, which relied mostly on some few specific keyboards. Klavaro intends to be keyboard and language independent, saving memory and time (and money).

Digest powered by RSS Digest

The post Bookmarks for October 24, 2014 appeared first on What I Learned Today....

Related posts:

  1. My new keyboard
  2. Learn a New Language
  3. Track Prices on Amazon with RSS

CrossRef: CrossRef and Inera Recognized at New England Publishing Collaboration Awards Ceremony

planet code4lib - Fri, 2014-10-24 19:53

On Tuesday evening, 21 October 2014, Bookbuilders of Boston named the winners of the first New England Publishing Collaboration (NEPCo) Awards. From a pool of ten finalists, NEPCo judges October Ivins (Ivins eContent Solutions), Eduardo Moura (Jones & Bartlett Learning), Alen Yen (iFactory), and Judith Rosen of Publishers Weekly selected the following:

  • First Place: Inera, Inc., collaborating with CrossRef

  • Second Place (Tie): Digital Science, collaborating with portfolio companies; and NetGalley, collaborating with the American Booksellers Association

  • Third Place: The Harvard Common Press, collaborating with portfolio companies

Based on an embrace of disruption and the need to transform the traditional value chain of content creation, the New England Publishing Collaboration (NEPCo) Awards showcase results achieved by two or more organizations working as partners. Other companies short-listed for the awards this year were Cenveo Publisher Services, Firebrand Technologies, Focal Press (Taylor & Francis), Hurix Systems, The MIT Press, and StoryboardThat.

Criteria for the awards included results achieved, industry significance, depth of collaboration, and presentation.

An audience voting component was also included; Digital Science was the overall winner among audience members.

Keynote speaker David Weinberger, co-author of Cluetrain Manifesto and senior researcher at the Harvard Berkman Center, was introduced by David Sandberg, co-owner of Porter Square Books.

Source: Bookbuilders of Boston

Eric Lease Morgan: Doing What I’m Not Supposed To Do

planet code4lib - Fri, 2014-10-24 18:09

I suppose I’m doing what I’m not supposed to do. One of those things is writing in books.

I’m attending a local digital humanities conference. One of the presenters described and demonstrated a program from MIT called Annotation Studio. Using this program a person can upload some text to a server, annotate the text, and share the annotations with a wider audience. Interesting!?

I then went for a walk to see an art show. It seems I had previously been to this art museum. The art was… art, but I did not find it beautiful. The themes were disturbing.

I then made it to the library where I tried to locate a copy of my one and only formally published book — WAIS And Gopher Servers. When I was here previously, I signed the book’s title page, and I came back to do the same thing. Alas, the book had been moved to remote storage.

I then proceeded to find another book in which I had written something. I was successful, and I signed the title page. Gasp! Considering that no one had opened the book in years and the pages were still glued together, I figured, “What the heck!”

Just as importantly, my contribution to the book — written in 1992 — was a short story called “A day in the life of Mr. D”. It is an account of how computers would be used in the future. In it, a young boy uses a computer to annotate a piece of text, and he gets to see the text of previous annotators. What is old is new again.

P.S. I composed this blog posting using an iPad. Functional but tedious.

OCLC Dev Network: Interlibrary Loan Policies Directory Release on October 26

planet code4lib - Fri, 2014-10-24 15:45

The Interlibrary Loan Policies Directory will be updated this weekend. We have changed the mediatype for Atom-wrapped JSON responses from "application/json" to "application/atom+json". This change is backward compatible - users can continue using “application/json” as needed for the time being - but we do recommend incorporating this mediatype change soon. 
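
For a client that already sends an explicit Accept header, incorporating the change means swapping a single value. A minimal sketch (the URL below is a placeholder, not the documented endpoint):

```python
import urllib.request

# Hypothetical request against the ILL Policies Directory; the URL is a
# placeholder for illustration, not the real service endpoint.
url = "https://example.org/illpolicies/institution/12345"

req = urllib.request.Request(
    url,
    headers={"Accept": "application/atom+json"},  # was "application/json"
)
print(req.get_header("Accept"))
```

Requests that keep sending “application/json” will continue to work for now, per the backward-compatibility note above, but new code should prefer the Atom-specific media type.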

Harvard Library Innovation Lab: Link roundup October 24, 2014

planet code4lib - Fri, 2014-10-24 14:25

Frames, computers, design, madlibs and boats. Oh my!

Building the Largest Ship In the World, South Korea

This is a huge boat, er ship, er vessel. – Pics of the world’s largest ship.

What a _________ Job: How Mad Libs Are Written | Splitsider

Really makes me want to try writing a Mad Libs

Introduction – Material Design – Google design guidelines

Google’s material design docs are worth a peruse

Disney rendered its new animated film on a 55,000-core supercomputer


Freeze Frame: Joey McIntyre and Public Garden Visitors Hop Into Huge Frames – Boston Visitors’ Guide

These frames make picture taking fun and easy. Fantastic, I bet when you’re with a group of friends. #fopg

Open Knowledge Foundation: Uncovering the true cost of access

planet code4lib - Fri, 2014-10-24 14:00

This post is part of our Open Access Week blog series to highlight great work in Open Access communities around the world.

Large amounts of public money are spent on obtaining access to published research results, amounting to billions of dollars per year.

Despite the huge amounts of public money spent on allowing researchers to access the published results of taxpayer funded research [1], there is little fiscal transparency in the scholarly publishing market and frequent examples of secrecy, where companies or brokers insert non-disclosure clauses into contracts so the cost of subscriptions remains opaque. This prevents objective analysis of the market, prevents libraries negotiating effectively with publishers for fair prices and makes it hard to ascertain the economic consequences of open access policies.

This matters. Open access campaigners are striving to make research results openly and freely available to everyone in a sustainable and cost effective manner. Without detailed data on current subscription costs for closed content and the emerging cost of article processing charges (APCs) [2], it is very difficult to accurately model and plan this transition.

Library budgets are stretched and their role within institutions is changing, making high journal costs an increasing concern.

Specifically, there are concerns that in the intervening period, publishers may be benefiting from ‘double dipping’ – offering hybrid products which incur APCs for open access articles and subscription fees for all other content which could result in higher overall income. In a market where the profit margins of several major publishers run at 35-40% and they exert monopolistic control over a large proportion of our accumulated scientific and scholarly knowledge, there is understandably a lot of anger and concern about the state and future of the market.

Over the past year, members of the Open Knowledge open science and open access working groups have joined many other advocates and concerned researchers, librarians and citizens in working tirelessly to gather information on the true cost of knowledge. Libraries do not routinely publish financial information at this level of granularity and may be constrained by contractual obligations, so the route chosen to obtain data in the UK has been Freedom of Information Act (FOI) requests. High profile mathematician and OA advocate Tim Gowers revealed the cost of Elsevier journal subscriptions at top universities. Two further rounds of FOI requests by librarian and OKFest attendee Stuart Lawson and Ben Meghreblian have given an even broader overview across five major publishers. This has been released as open data and efforts continue to enrich the dataset. Working group members in Finland and Hong Kong are working to obtain similar information for their countries and further inform open access advocacy and policy globally.

Subscription data only forms part of the industry picture. A data expedition at Oxford Open Science for Open Data Day 2014 tried to look into the business structure of academic publishers using Open Corporates and quickly encountered a high level of complexity so this area requires further work. In terms of APCs and costs to funders, the working groups contributed to a highly successful crowdsourcing effort led by Theo Andrew and Michelle Brook to validate and enrich the Wellcome Trust publication dataset for 2013-2014 with further information on journal type and cost, thus enabling a clearer view of the cost of hybrid journal publications for this particular funder and also illustrating compliance with open access policies.

Mapping open access globally at #OKFestOA. The session conclusion was that far more data is needed to present a truly global view.

This work only scratches the surface and anyone who could help in a global effort to uncover the cost of access to scholarly knowledge would be warmly welcomed and supported by those who have now built up experience in obtaining this information. If funders and institutions have datasets they could contribute this would also be a fantastic help.

Please sign up to the wiki page here and join the related discussion forum for support in making requests. We hope by Open Access Week 2015 we’ll be posting a much more informative and comprehensive assessment of the cost of accessing scholarly knowledge!


[1] A significant proportion of billions of dollars per year (estimated $9.4 billion on scientific journals alone in 2011). See STM report (PDF – 6.3MB).

[2] An open access business model where fees are paid to publishers for the service of publishing an article, which is then free to users.

Photo credits:

Money by 401(K) 2012 under CC-BY-SA 2.0

OKFest OA Map, Jenny Molloy, all copyright and related or neighboring rights waived to the extent possible under law using CC0 1.0 waiver. Published from the United Kingdom.

Library by seier+seier under CC-BY 2.0

Library of Congress: The Signal: Residency Program Success Stories, Part Two

planet code4lib - Fri, 2014-10-24 13:57

The following is a guest post by Julio Díaz Laabes, HACU intern and Program Management Assistant at the Library of Congress.

This is the second part of a two-part series on the former class of residents from the National Digital Stewardship Residency program. Part One covered four residents from the first year of the program and looked at their current professional endeavors and how the program helped them achieve success in their field. In this second part, we take a look at the successes of the remaining six residents of the 2013-2014 D.C. class.

Top (left to right): Lauren Work, Jaime McCurry and Julia Blase
Bottom (left to right): Emily Reynolds, Molly Schwartz and Margo Padilla.

Lauren Work is employed as the Digital Collections Librarian at the Virginia Commonwealth University in Richmond, VA. She is responsible for Digitization Unit projects at VCU and is involved in a newly launched open access publishing platform and repository. Directly applying her experience during the residency, Lauren is also part of a team working to develop digital preservation standards at VCU and is participating in various digital discovery and outreach projects. On her experience being part of NDSR, Lauren said, “The residency gave me the ability to participate in and grow a network of information professionals focused on digital stewardship. This was crucial to my own professional growth.” Also, the ability to interact with fellow residents gave her “a tightly-knit group of people that I will continue to look to for professional support throughout my career.”

Following her residency at the Folger Shakespeare Library, Jaime McCurry  became the Digital Assets Librarian at Hillwood Estate, Museum and Gardens in Washington, D.C. She is responsible for developing and sustaining local digital stewardship strategies and preservation policies and workflows; development of a future digital institutional repository and performing outreach services to raise understanding and interest in Hillwood digital collections. On what was the most interesting aspect of her job, Jaime said “it’s the wide range of digital activities I am able to be involved in, from digital asset management to digital preservation, to access, outreach and web development.” In line with Lauren, Jaime stated, “NDSR helped me to establish a valuable network of colleagues and professionals in the DC area and also to further strengthen my project management and public speaking skills.”

At the conclusion of NDSR, Julia Blase accepted a position with Smithsonian Libraries as Project Manager for the Field Book Project, a collaborative initiative to improve the accessibility of field book content through cataloging, conservation, digitization and online publication of digital catalog data and images. For Julia, one of the most exciting aspects of the project is its cooperative nature; it involves staff at Smithsonian Libraries, Smithsonian Archives, Smithsonian National Museum of Natural History and members and affiliates of the Biodiversity Heritage Library. “NDSR helped introduce me to the community of digital library and archivist professionals in the DC area. It also gave me the chance to present at several conferences, including CNI (Coalition for Networked Information) in St. Louis, where I met some of the people I work with today.”

Emily Reynolds is a Library Program Specialist at the Institute of Museum and Library Services, a federal funding agency. She works on discretionary grant programs including the Laura Bush 21st Century Librarian Program, which supports education and professional development for librarians and archivists (the NDSR program in Washington D.C., Boston and New York were funded through this program). “The NDSR helped in my current job because of the networking opportunities that residents were able to create as a result. The cohort model allowed us to connect with professionals at each other’s organization, share expertise with each other, and develop the networks and professional awareness that are vital for success,” she said. On the most interesting aspect of her job, Emily commented that “because of the range of grants awarded by IMLS, I am able to stay up-to-date on some of the most exciting and innovative projects happening in all kinds of libraries and archives. Every day in the office is different, given the complexities of the grant cycle and the diversity of programs we support.”

Molly Schwartz was a resident at the Association of Research Libraries. Now she is a Junior Analyst at the U.S. State Department in the Bureau of International Information Programs’ Office of Audience Research and Measurement. One of her biggest achievements is being awarded a 2014-2015 Fulbright Grant to work with the National Library of Finland and Aalto University on her project, User-Centered Design for Digital Cultural Heritage Portals. During this time, she will focus her research on the National Library of Finland’s online portal, Finna, and conduct user-experience testing to improve the portal’s usability with concepts from user-centered design.

Lastly, Margo Padilla is now the Strategic Programs Manager at the Metropolitan New York Library Council. She works alongside METRO staff to identify trends and technologies, develop workshops and services and manage innovative programs that benefit libraries, archives and museums in New York City. She is also the Program Director for NDSR-New York. “I used my experience as a resident to refine and further develop the NDSR program. I was able to base a lot of the program structure on the NDSR-DC model and the experience of the NDSR-DC cohort.” Margo also says that her job is especially rewarding “because I have the freedom to explore new ideas or projects, and leveraging the phenomenal work of our member community into solutions for the entire library, archive and museum community.”

Seeing the wide scope of positions the residents accepted after finishing the program, it is clear the NDSR has been successful in creating in-demand professionals to tackle digital preservation in many forms across the private and public sectors. The 2014-2015 Boston and New York classes are already underway and the next Washington D.C. class begins in June of 2015 (for more on that, see this recent blog post). We expect these new NDSR graduates to form the next generation of digital stewards and to reach the same level of success as those in our pilot program.


William Denton: Escape Meta Alt from Word

planet code4lib - Fri, 2014-10-24 02:45

Escape from Microsoft Word by Edward Mendelson is an interesting short post about writing in Microsoft Word compared to that old classic WordPerfect:

Intelligent writers can produce intelligent prose using almost any instrument, but the medium in which they write will always have some more or less subtle effect on their prose. Karl Popper famously denounced Platonic politics, and the resulting fantasies of a closed, unchanging society, in his book The Open Society and Its Enemies (1945). When I work in Word, for all its luxuriant menus and dazzling prowess, I can’t escape a faint sense of having entered a closed, rule-bound society. When I write in WordPerfect, with all its scruffy, low-tech simplicity, the world seems more open, a place where endings can’t be predicted, where freedom might be real.

But of course if the question is “Word or WordPerfect?” the answer is: Emacs. Everything is text.

Galen Charlton: Testing Adobe Digital Editions 4.0.1

planet code4lib - Fri, 2014-10-24 01:25

A couple hours ago, I saw reports from Library Journal and The Digital Reader that Adobe has released version 4.0.1 of Adobe Digital Editions.  This was something I had been waiting for, given the revelation that ADE 4.0 had been sending ebook reading data in the clear.

ADE 4.0.1 comes with a special addendum to Adobe’s privacy statement that makes the following assertions:

  • It enumerates the types of information that it is collecting.
  • It states that information is sent via HTTPS, which means that it is encrypted.
  • It states that no information is sent to Adobe on ebooks that do not have DRM applied to them.
  • It may collect and send information about ebooks that do have DRM.

It’s good to test such claims, so I upgraded to ADE 4.0.1 on my Windows 7 machine and my OS X laptop.

First, I did a quick check of strings in the ADE program itself — and found that it contained an instance of “” rather than “”.  That was a good indication that ADE 4.0.1 was in fact going to use HTTPS to send ebook reading data to that server.
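
That kind of strings check needs nothing more than the Unix strings tool, or a few lines of Python: pull out runs of printable ASCII from the binary and filter for URL prefixes. The blob below is a fabricated example, not actual ADE program data.

```python
import re

def printable_strings(data: bytes, min_len: int = 6):
    """Yield runs of printable ASCII at least min_len bytes long,
    roughly what the Unix `strings` tool reports."""
    for match in re.finditer(rb"[\x20-\x7e]{%d,}" % min_len, data):
        yield match.group().decode("ascii")

def embedded_urls(data: bytes):
    """Filter the printable runs down to ones containing URLs."""
    return [s for s in printable_strings(data)
            if "http://" in s or "https://" in s]

# Fabricated "binary" with one URL buried among non-printable bytes.
blob = b"\x00\x01DigitalEditions\xffhttps://example.org/log\x02\x03"
print(embedded_urls(blob))
```

Finding an https:// form of the logging hostname (and no plain http:// form) is weak but useful evidence before watching the actual traffic.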

Next, I fired up Wireshark and started ADE.  Each time it started, it contacted a server called, presumably to verify that the DRM authorization was in good shape.  I then opened and flipped through several ebooks that were already present in the ADE library, including one DRM ebook I had checked out from my local library.

So far, it didn’t send anything to the logging server. I then checked out another DRM ebook from the library (in this case, Seattle Public Library and its OverDrive subscription) and flipped through it. As it happens, it still didn’t send anything to Adobe’s logging server.

Finally, I used ADE to fulfill a DRM ePub download from Kobo.  This time, after flipping through the book, it did send data to the logging server.  I can confirm that it was sent using HTTPS, meaning that the contents of the message were encrypted.

To sum up, ADE 4.0.1’s behavior is consistent with Adobe’s claims – the data is no longer sent in the clear and a message was sent to the logging server only when I opened a new commercial DRM ePub. However, without decrypting the contents of that message, I cannot verify that it contains only information about that ebook from Kobo.

But even then… why should Adobe be logging that information about the Kobo book? I’m not aware that Kobo is doing anything fancy that requires knowledge of how many pages I read from a book I purchased from them but did not open in the Kobo native app.  Have they actually asked Adobe to collect that information for them?

Another open question: why did opening the library ebook in ADE not trigger a message to the logging server?  Is it because the fulfillmentType specified in the .acsm file was “loan” rather than “buy”? More clarity on exactly when ADE sends reading progress to its logging server would be good.
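
An .acsm file is just XML, so the fulfillmentType is easy to inspect. The fragment below is illustrative only — the element layout is an assumption for the sketch, not a verified copy of a real .acsm file, though the ADEPT namespace and loan/buy values follow the description above.

```python
import xml.etree.ElementTree as ET

# Hypothetical .acsm fragment; real files carry much more structure,
# so treat this layout as an assumption for illustration.
acsm = """\
<fulfillmentToken xmlns="http://ns.adobe.com/adept">
  <distributor>urn:uuid:placeholder</distributor>
  <fulfillmentType>loan</fulfillmentType>
</fulfillmentToken>"""

root = ET.fromstring(acsm)
ftype = root.findtext("{http://ns.adobe.com/adept}fulfillmentType")
print(ftype)
```

If the loan/buy distinction really does control logging, a diff of the .acsm files from the library checkout and the Kobo purchase would be the place to confirm it.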

Finally, if we take the privacy statement at its word, ADE is not implementing a page synchronization feature as some, including myself, have speculated – at least not yet.  Instead, Adobe is gathering this data to “share anonymous aggregated information with eBook providers to enable billing under the applicable pricing model”.  However, another sentence in the statement is… interesting:

While some publishers and distributors may charge libraries and resellers for 30 days from the date of the download, others may follow a metered pricing model and charge them for the actual time you read the eBook.

In other words, if any libraries are using an ebook lending service that does have such a metered pricing model, and if ADE is sending reading progress information to an Adobe server for such ebooks, that seems like a violation of reader privacy. Even though the data is now encrypted, if an Adobe ID is used to authorize ADE, Adobe itself has personally identifying information about the library patron and what they’re reading.

Adobe appears to have closed a hole – but there are still important questions left open. Librarians need to continue pushing on this.

DuraSpace News: Evolving Role of VIVO in Research and Scholarly Networks Presented at the Thomson Reuters CONVERISTM Global User Group Meeting

planet code4lib - Fri, 2014-10-24 00:00

Winchester, MA: Thomson Reuters hosted a CONVERIS Global User Group Meeting for current and prospective users in Hatton Garden, London, on October 1-2, 2014. About 40 attendees from the UK, Sweden, the Netherlands, European institutions from other countries, and the University of Botswana met to discuss issues pertaining to Research Information Management Systems, the CONVERIS roadmap, research analytics, and new features and functions being provided by CONVERIS.

HangingTogether: Notes from the DC-2014 Pre-conference workshop “Fonds & Bonds: Archival Metadata, Tools, and Identity Management”

planet code4lib - Thu, 2014-10-23 21:12

Earlier this month I had the good fortune to attend the “Fonds & Bonds” one-day workshop, just ahead of the DC-2014 meeting in Austin, TX. The workshop was held at the Harry Ransom Center of the University of Texas, Austin, which was just the right venue. Eric Childress from OCLC Research and Ryan Hildebrand from the Harry Ransom Center did much of the logistical work, while my OCLC Research colleague Jen Schaffner worked with Daniel Pitti of the Institute for Advanced Technology in the Humanities, University of Virginia and Julianna Barrera-Gomez of the University of Texas at San Antonio to organize the workshop agenda and presentations.

Here are some brief notes on a few of the presentations that made a particular impression on me.

The introduction by Gavan McCarthy (Director of the eScholarship Research Centre (eSRC), University of Melbourne) and Daniel Pitti to the Expert Group on Archival Description (EGAD) included a brief tour of standards development, how this led to the formation of EGAD, and noted EGAD’s efforts to develop the conceptual model for Records in Context (RIC). Daniel very ably set this work within its standards-development context, which was a great way to help focus the discussion on the specific goals of EGAD.

Valentine Charles (of Europeana) and Kerstin Arnold (from the ArchivesPortal Europe APEx project) provided a very good tandem presentation on “Archival Hierarchy and the Europeana Data Model”, with Kerstin highlighting the work of Archives Portal Europe and the APEx project. It was both reaffirming and challenging to hear that it’s difficult to get developers to understand an unexpected data model, when they confront it through a SPARQL endpoint or through APIs. We’ve experienced that in our work as well, and continue to spend considerable efforts in attempting to meet the challenge.

Tim Thompson (Princeton University Library) and Mairelys Lemus-Rojas (University of Miami Libraries) gave an overview of the Remixing Archival Metadata Project (RAMP) project, which was also presented in an OCLC webinar earlier this year. RAMP is “a lightweight web-based editing tool that is intended to let users do two things: (1) generate enhanced authority records for creators of archival collections and (2) publish the content of those records as Wikipedia pages.” RAMP utilizes both VIAF and OCLC Research’s WorldCat Identities as it reconciles and enhances names for people and organizations.

Ethan Gruber (American Numismatic Society) gave an overview of the xEAC project (Ethan pronounces xEAC as “zeek”), which he also presented in the OCLC webinar noted previously in which Tim presented RAMP. xEAC is an open-source XForms-based application for creating and managing EAC-CPF collections. Ethan is terrific at delving deeply into the possibilities of the technology at hand, and making the complex appear straight-forward.

Gavan McCarthy gave a quite moving presentation on the Find & Connect project, where we were able to see some of the previously-discussed descriptive standards and technologies resulting in something with real impact on real lives. Find & Connect is a resource for Forgotten Australians, former child migrants and others interested in the history of child welfare in Australia.

And Daniel Pitti gave a detailed presentation on the SNAC project. OCLC Research has supported this project from its early stages, providing access to NACO and VIAF authority data, and supplying the project with over 2M WorldCat records representing items and collections held by archival institutions … essentially the same data that supports most of OCLC Research’s ArchiveGrid project. The aspirations for the SNAC project are changing, moving from an experimental first phase where data from various sources was ingested, converted, and enriched to produce EAC-CPF records (with a prototype discovery layer on top of those), to the planning for a Cooperative Program which would transform that infrastructure into a sustainable international cooperative hosted by the U.S. National Archives and Records Administration. This is an ambitious and important effort that everyone in the community should be following.

The workshop was very well attended and richly informative. It provided a great way to quickly catch up on key developments and trends in the field. And the opportunity to easily network with colleagues in a congenial setting, including an hour to see a variety of systems demonstrated live, was also clearly appreciated.

About Bruce Washburn


Nicole Engard: ATO2014: Lessons from Open Source Schoolhouse

planet code4lib - Thu, 2014-10-23 20:56

Charlie Reisinger from the Penn Manor School District talked to us next about open source at his school. This was an expanded version of his lightning talk from the other night.

Penn Manor has 9 IT team members – which is a very lean staff for 4500 devices. They also do a lot of their technology in house.

Before we talk about open source we took a tangent in to the nature of education today. School districts are so stuck on the model they’re using and have used for centuries. But today kids can learn anything they would like with a simple connection to the Internet. You can be connected to the most brilliant minds that you’d like. Teachers are no longer the fountains of all knowledge. The classroom hasn’t been transformed by technology – if you walked in to a classroom 60 years ago it would look pretty much like a classroom today.

In schools that do allow students to have laptops, they lock them down. This is a terrible model for student inquiry. The reason most of us are here today is that we grew up with systems we could get into and try to break/fix/hack.

So what is Penn Manor doing differently? First off they’re doing everything with open source. They use Koha, Moodle, Linux, WordPress, Ubuntu, OwnCloud, SIPfoundry and VirtualBox.

This came to them partially out of fiscal necessity. When Apple discontinued the white MacBook, the school was stuck in a situation where they needed to replace those laptops with some sort of affordable device. Using data collected from the students’ laptops, they found that students spent most of their time in the browser or in a word processor, so they decided to install Linux on the laptops. Ubuntu was the choice because the state-level testing would work on that operating system.

This worked in the elementary schools, but they needed to scale it up to the high schools, which was much harder because each course needed different/specific software. They had to decide whether they could provide a laptop for every student.

The real guiding force in deciding to provide one laptop per student was the English department. They said they needed the best writing device that could be given to them. That knocked out the possibility of giving tablets to all students – a laptop fits this need. Not only did they give all students laptops with Linux installed – they gave them all root access. This required trust! They created policies and told the students they were trusted to use the laptops as responsible learners. How’s that working out? Charlie has had zero discipline issues associated with it. And if students get into a jam where they’ve broken the computer, maybe that isn’t such a bad thing, because now they have to learn to fix their mistake.

They started this as a pilot program for 90 of their online students before deploying to all 1700 students. These computers included not just productivity software, but Steam! That got the kids’ attention. When they deployed to everyone, though, Steam came off the computers – but the kids knew it was possible, so it forced them to figure out how to install it on Linux, which is not always self-explanatory. This prodded the kids into learning.
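Getting Steam back onto an Ubuntu laptop is a nice example of the digging the students had to do: the package normally lives in the “multiverse” repository, which is not always enabled out of the box. The helper function below is a hypothetical sketch (assuming a stock Ubuntu install with apt; exact steps vary by release), purely to illustrate the kind of check involved:

```shell
# Hypothetical helper: report whether Ubuntu's "multiverse" component
# (where the steam package lives) appears in the apt sources -- the kind
# of non-obvious detail the students had to discover on their own.
check_multiverse() {
  if grep -Rqs "multiverse" /etc/apt/sources.list /etc/apt/sources.list.d/ 2>/dev/null; then
    echo "multiverse enabled: 'sudo apt-get install steam' should find the package"
  else
    echo "multiverse missing: enable it with 'sudo add-apt-repository multiverse'"
  fi
}
check_multiverse
```

On a non-Ubuntu machine this simply reports the repository as missing; the point is only that the install path is discoverable rather than obvious.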

Charlie mentioned that he probably couldn’t have done this 5 years ago because the apps that are available today are so dense and so rich.

There was also the issue of training the staff, both on the change in software and on having all the kids with laptops. This included some training of the parents as well.

Along with the program they created a help desk course – a 4-credit, honors-level independent study for the high school students. These students spent the whole time supporting the one-to-one program (one laptop per student). They helped with the unpacking, inventorying, and imaging (using an image built by one of the students) of the laptops over 2 days. The key to the program is that the students were treated as equals. The program has also been picked up and talked about in the press.

Charlie’s favorite moment of the whole program was watching his students train their peers on how to use these laptops.

The post ATO2014: Lessons from Open Source Schoolhouse appeared first on What I Learned Today....

Related posts:

  1. ATO2014: Open Source Schools: More Soup, Less Nuts
  2. ATO2014: Women in Open Source
  3. ATO2014: The first FOSS Minor at RIT

Nicole Engard: Bookmarks for October 23, 2014

planet code4lib - Thu, 2014-10-23 20:30

Today I found the following resources and bookmarked them:

  • Nest
  • Material Design Icons – The official open-source icons featured in the Google Material Design specification.
  • SmartThings – Control and monitor your home from one simple app.

Digest powered by RSS Digest

The post Bookmarks for October 23, 2014 appeared first on What I Learned Today....

Related posts:

  1. Open Access Day in October
  2. Create Android Apps
  3. Launchy for Windows – Like Finder for Mac

Nicole Engard: ATO2014: Open sourcing the public library

planet code4lib - Thu, 2014-10-23 19:55

Phil Shapiro one of my fellow moderators talked to us next about open source and libraries.

Too many people ask what the future of libraries is, and not what it “should be”. A book that we must read is “Expect More: Demanding Better Libraries For Today’s Complex World“. If we don’t expect more of libraries, we’re not going to see libraries change. We have to change the frame of mind that libraries belong to the directors – they actually belong to the people, and they should be serving the people.

Phil asks how we get the community to participate in managing libraries. Start by looking at your library’s collection and see whether at least 1% of it is in the STEM arena. Should that percentage be more? 5%, 10%, more? There is no real answer here, but maybe we need to make a suggestion to our libraries. Maybe instead our funds should go toward empowering the community more in the technology arena. Maybe we should have co-working space in our libraries – this could even be fee based, something like $30/mo. That would be a way for libraries to help the unemployed and the community as a whole.

Libraries are about so much more than books. People head to the library because they’re wondering about something – so having people with practical skills on your staff is invaluable. Instead of pointing people to the books on a topic, giving them someone to talk to is a value-added service. What are our competitors going to be doing while we’re waiting for the transition from analog to digital to happen in libraries? We need to set some milestones for all libraries. Right now it’s only the wealthy libraries that seem to be moving this way.

A lot of the suggestions Phil had I’ve seen some of the bigger libraries in the US doing already, like hosting TED Talks, offering digital-issues lectures, etc. You could also invite kids in to talk about what they know and have learned.

Phil’s quote: “The library fulfills its promise when people of different ages, races, and cultures come together to pool their talents in creating new creative content.” One thing to think about is whether this change from analog to digital can happen in libraries without changing their names. Instead we could call them the digital commons. [I’m not sure this is necessary – I see Phil’s point – but I think we need to just rebrand libraries, market them properly, and keep their name.]

Some awesome libraries include Chattanooga Public Library which has their 4th floor makerspace. In Colorado there are the Anythink Libraries. The Delaware Department of Libraries is creating a new makerspace.

Books are just one of the tools toward helping libraries enhance human dignity – there are so many other ways we can do this.

Phil showed us one of his videos.

You can bend the universe by asking questions – so call your library and ask questions about open source or about new technologies so that we plant the seeds of change.

Further reading from Phil:

The post ATO2014: Open sourcing the public library appeared first on What I Learned Today....

Related posts:

  1. ATO2014: Open source, marketing and using the press
  2. ATO2014: How ‘Open’ Changes Products
  3. ATO2014: Open Source – The Key Component of Modern Applications

Patrick Hochstenbach: Homework assignment #5 – bis Sketchbookskool

planet code4lib - Thu, 2014-10-23 19:29
I was so happy with my new Lamy fountain pen that I drew a second version of my homework assignment: one using my favorite Arthur and Fietje Precies characters.   Filed under: Comics, Doodles Tagged: cartoon, cat, christmas, doodle, fondue,

Patrick Hochstenbach: Homework assignment #5 Sketchbookskool

planet code4lib - Thu, 2014-10-23 19:24
As a second assignment we needed to draw a fantasy image, preferably using some meta story inside the story. I was drawing monsters the whole week during my commute, so I used those drawings as inspiration. Filed under: Comics Tagged: cartoon,

Patrick Hochstenbach: Homework assignment #4 Sketchbookskool

planet code4lib - Thu, 2014-10-23 19:21
This week we were asked to draw a memory: our first day at school. I tried to find old school pictures but didn’t find anything nice I could use. I only remembered I cried a lot on my first day

Nicole Engard: ATO2014: How ‘Open’ Changes Products

planet code4lib - Thu, 2014-10-23 18:44

Next up at All Things Open was Karen Borchert talking about How ‘Open’ Changes Products.

We started by talking about the open product conundrum: a thing that happens when we create products in an open world. To understand it, we must first understand what a product is: a good, idea, method, information, or service that we want to distribute. In open source we think differently about this. We think more about tools and toolkits than packaged products, because those are more conducive to contribution and extension. “Open” products work a bit more like Ikea – you have all the right pieces and instructions, but you have to make something out of them – a table or a chair or whatever. Ikea products are toolkits for making things. When we’re talking about software, most buyers are thinking about what they get out of the box, so a toolkit is not a product to our consumers.

Open Atrium is a product that Phase2 produces, and people say a lot about it, like “It’s an intranet in a box” – but in reality it’s a toolkit. People use it in a lot of different ways – some do what you’d expect, others make it into something completely different. This is the great thing about open source, but it also causes a problem for us, because in Karen’s example a table != a bike. “The very thing that makes open source awesome is what makes our product hard to define.”

Defining a product in the open arena is simple: “Making an open source product is about doing what’s needed to start solving a customer problem on day 1.” Why are we even going down this road? Why are we creating products? Making something that is usable out of the box is what people are demanding. Products also provide a different opportunity for revenue and profit.

This comes down to three things:

  • Understanding the value
  • Understanding the market
  • Understanding your business model

The value added to open source is having something put together by someone who knows better than you do. If you have an apple, you have all you need to grow your own apples, but you’re not going to bother to do that. You’d rather (or most people would rather) leave that to the expert – the farmer. Just because anyone can take the toolkit and build whatever they want with it doesn’t mean that they will.

Markets are hard for us in open source because we have two: one that gives the product credibility and one that makes money – and often these aren’t the same market. Most of the time the community isn’t paying you for the product – they are usually other developers, or people using it to sell to their clients. You need this market because you benefit from it even if not financially. You also need to think about the people who will pay you for the product and services. You have to invest in both markets to help your product succeed.

Business models include dual licensing – two versions of the product. There is a model around paid plugins or themes that enhance a product. And sometimes you see services built around the product. These are not all of the business models, but they are a few of the options. People buy many things in open products: themes, hosting, training, content, etc.

What about services? Services can be really important in any business model. You don’t have to deliver a completely custom set of services every time you deliver. It’s not less of a product because it’s centered around services.

Questions people ask:

Is it going to be expensive to deal with an open source product? Not necessarily, but it’s not going to be free. We need to plan, budget, and invest properly.

Am I going to make money on my product this year?
Maybe – but you shouldn’t count on it. Don’t bet the farm on your product business until you’ve tested the market.

Everyone charges $10/mo for this, so I’m just going to charge that – is that cool? Nope! You need to charge what the product is worth, what people will pay for it, and what you can afford to sell it for. Think about your ROI.

I’m not sure we want to be a products company. It’s very hard to be a product company without buy-in. A lot of service companies ask this. Consider instead a pilot program and set a budget to test out the new model. Write a business plan.

The post ATO2014: How ‘Open’ Changes Products appeared first on What I Learned Today....

Related posts:

  1. ATO2014: Using Bootstrap to create a common UI across products
  2. ATO2014: Open source, marketing and using the press
  3. ATO2014: Saving the world: Open source and open science


Subscribe to code4lib aggregator