Feed aggregator

Library of Congress: The Signal: Office Opens up with OOXML

planet code4lib - Tue, 2015-02-03 14:40

The following is a guest post by Carl Fleischhauer, a Digital Initiatives Project Manager in the Office of Strategic Initiatives.

Before VisiCalc, Lotus 1-2-3, and Microsoft Excel, spreadsheets were manual although their compilers took advantage of adding machines. And there were contests, natch. This 1937 photograph from the Library’s Harris & Ewing collection portrays William A. Offutt of the Washington Loan and Trust Company. It was produced on the occasion of Offutt’s victory over 29 competitors in a speed and accuracy contest for adding machine operators sponsored by the Washington Chapter, American Institute of Banking.

We are pleased to announce the publication of nine new format descriptions on the Library’s Format Sustainability Web site. This is a closely related set, each of which pertains to a member of the Office Open XML (OOXML) family.

Readers should focus on the word Office, because these are the most recent expression of the formats associated with Microsoft’s family of “Office” desktop applications, including Word, PowerPoint and Excel. Formerly, these applications produced files in proprietary, binary formats that carried the filename extensions doc, ppt, and xls. The current versions employ an XML structure for the data and an x has been added to the extensions: docx, pptx, and xlsx.

In addition to giving the formats an XML expression, Microsoft also decided to move the formats out of proprietary status and into a standardized form (now focus on the word Open in the name). Three international organizations cooperated to standardize OOXML. Ecma International, an international, membership-based organization, published the first edition in 2006. At that time, Caroline Arms (co-compiler of the Library’s Format Sustainability Web site) served on the ECMA work group, which meant that she was ideally situated to draft these descriptions.

In 2008, a modified version was approved as a standard by two bodies who work together on information technology standards through a Joint Technical Committee (JTC 1): International Organization for Standardization and International Electrotechnical Commission. These standards appear in a series with identifiers that lead off with ISO/IEC 29500. Subsequent to the initial publication by ISO/IEC, ECMA produced a second edition with identical text. Clarifications and corrections were incorporated into editions published by this trio in 2011 and 2012.

Here’s a list of the nine:

  • OOXML_Family, OOXML Format Family — ISO/IEC 29500 and ECMA 376
  • OPC/OOXML_2012, Open Packaging Conventions (Office Open XML), ISO 29500-2:2008-2012
  • DOCX/OOXML_2012, DOCX Transitional (Office Open XML), ISO 29500:2008-2012; ECMA-376, Editions 1-4
  • DOCX/OOXML_Strict_2012, DOCX Strict (Office Open XML), ISO 29500:2008-2012; ECMA-376, Editions 2-4
  • PPTX/OOXML_2012, PPTX Transitional (Office Open XML), ISO 29500:2008-2012; ECMA-376, Editions 1-4
  • PPTX/OOXML_Strict_2012, PPTX Strict (Office Open XML), ISO 29500:2008-2012; ECMA-376, Editions 2-4
  • XLSX/OOXML_2012, XLSX Transitional (Office Open XML), ISO 29500:2008-2012; ECMA-376, Editions 1-4
  • XLSX/OOXML_Strict_2012, XLSX Strict (Office Open XML), ISO 29500:2008-2012; ECMA-376, Editions 2-4
  • MCE/OOXML_2012, Markup Compatibility and Extensibility (Office Open XML), ISO 29500-3:2008-2012, ECMA-376, Editions 1-4

Microsoft is not the only corporate entity to move formerly proprietary specifications into the realm of public standards. Over the last several years, Adobe has done the same thing with the PDF family. There seems to be a new business model here: Microsoft and Adobe are proud of the capabilities of their application software–that is where they can make money–and they feel that wider implementation of these data formats will help their business rather than hinder it.

Office work in the days before computer support. This photograph of the U.S. Copyright Office (part of the Library of Congress) was made in about 1920 by an unknown photographer. Staff members are using typewriters and a card file to track and manage copyright information. The original photograph is held in the Geographical File in the Library’s Prints and Photographs Division.

Although an aside in this blog, it is worth noting that Microsoft and Adobe also provide open access to format specifications that are, in a strict sense, still proprietary. Microsoft now permits the dissemination of its specifications for binary doc, ppt, and xls, and copies have been posted for download at the Library’s Format Sustainability site. Meanwhile, Adobe makes its DNG photo file format specification freely available, as well as its older TIFF format specification.

Both developments–standardization for Office XML and PDF and open dissemination for Office, DNG and TIFF–are good news for digital-content preservation. Disclosure is one of our sustainability factors and these actions raise the disclosure levels for all of these formats, a good thing.

Meanwhile, readers should remember that the Format Sustainability Web site is not limited to formats that we consider desirable. We list as many formats (and subformats) as we can, as objectively as we can, so that others can choose the ones they prefer for a particular body of content and for particular use cases.

The Library of Congress, for example, has recently posted its preference statements for newly acquired content. The acceptable category for textual content on that list includes the OOXML family as well as OpenDocument (aka Open Document Format or ODF), another XML-based family of office document formats. ODF was developed by the Organization for the Advancement of Structured Information Standards, an industry consortium. ODF’s standardization as ISO/IEC 26300 in 2006 predates ISO/IEC’s standardization of OOXML. The Format Sustainability team plans to draft descriptions for ODF very soon.

Nick Ruest: An Exploratory look at 13,968,293 #JeSuisCharlie, #JeSuisAhmed, #JeSuisJuif, and #CharlieHebdo tweets

planet code4lib - Tue, 2015-02-03 14:29
#JeSuisCharlie #JeSuisAhmed #JeSuisJuif #CharlieHebdo

I've spent the better part of a month collecting tweets with the #JeSuisCharlie, #JeSuisAhmed, #JeSuisJuif, and #CharlieHebdo hashtags. Last week, I pulled together all of the collection files, did some clean up, and some more analysis on the data set (76G of json!). This time I was able to take advantage of Peter Binkley's twarc-report project. According to the report, the earliest tweet in the data set is from 2015-01-07 11:59:12 UTC, and the last tweet in the data set is from 2015-01-28 18:15:35 UTC. This data set includes 13,968,293 tweets (10,589,910 retweets - 75.81%) from 3,343,319 different users over 21 days. You can check out a word cloud of all the tweets here.

First tweet in data set (numeric sort of tweet ids):

#JESUISCHARLIE pic.twitter.com/4fkcjH0yaz

— Thierry Puget (@titi1960) January 7, 2015


Hydration

If you want to experiment/follow along with what I've done here, you can "rehydrate" the data set with twarc. You can grab the Tweet ids for the data set from here (Data & Analysis tab).

% twarc.py --hydrate JeSuisCharlie-JeSuisAhmed-JeSuisJuif-CharlieHebdo-tweet-ids-20150129.txt > JeSuisCharlie-JeSuisAhmed-JeSuisJuif-CharlieHebdo-tweets-20150129.json

The hydration process will take some time. I'd highly suggest using GNU Screen or tmux, and grabbing approximately 15 pots of coffee.
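
Since twarc writes one JSON tweet per line, the hydrated file is easy to poke at with a few lines of Python. Here is a minimal sketch (assuming the output filename from the hydrate command above) that tallies total tweets, retweets, and geotagged tweets:

import json
total = retweets = geotagged = 0
with open("JeSuisCharlie-JeSuisAhmed-JeSuisJuif-CharlieHebdo-tweets-20150129.json") as tweets:
    for line in tweets:
        tweet = json.loads(line)
        total += 1
        if "retweeted_status" in tweet:  # present only on native retweets
            retweets += 1
        if tweet.get("geo"):  # null unless the tweet carries coordinates
            geotagged += 1
print("%d tweets, %d retweets, %d geotagged" % (total, retweets, geotagged))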

Map

In this data set, we have 133,970 tweets with geo coordinates available. This represents about 0.96% of the entire data set.

The map is available here in a separate page since the geojson file is 83M and will potato your browser while everything loads. If anybody knows how to stream that geojson file to Leaflet.js so the browser doesn't potato, please comment! :-)
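
One rough workaround, short of true streaming, is to shrink the GeoJSON before handing it to Leaflet by stripping the per-feature properties and writing compact output. A quick Python sketch of that idea (the filenames here are made up for illustration):

import json
with open("tweets.geojson") as source:  # hypothetical input filename
    collection = json.load(source)
slimmed = {
    "type": "FeatureCollection",
    "features": [{"type": "Feature", "geometry": f["geometry"], "properties": {}}
                 for f in collection["features"]],
}
with open("tweets-slim.geojson", "w") as target:
    json.dump(slimmed, target, separators=(",", ":"))  # compact output, no whitespace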

Users

These are the top 10 users in the data set.

  1. 35,420 tweets Promo_Culturel
  2. 33,075 tweets BotCharlie
  3. 24,251 tweets YaMeCanse21
  4. 23,126 tweets yakacliquer
  5. 17,576 tweets YaMeCanse20
  6. 15,315 tweets iS_Angry_Bird
  7. 9,615 tweets AbraIsacJac
  8. 9,318 tweets AnAnnoyingTweep
  9. 3,967 tweets rightnowio_feed
  10. 3,514 tweets russfeed

This comes from twarc-report's reportprofiler.py.

$ ~/git/twarc-report/reportprofiler.py -o text JeSuisCharlie-JeSuisAhmed-JeSuisJuif-CharlieHebdo-tweets-20150129.json

Hashtags

These are the top 10 hashtags in the data set.

  1. 8,597,175 tweets #charliehebdo
  2. 7,911,343 tweets #jesuischarlie
  3. 377,041 tweets #jesuisahmed
  4. 264,869 tweets #paris
  5. 186,976 tweets #france
  6. 177,448 tweets #parisshooting
  7. 141,993 tweets #jesuisjuif
  8. 140,539 tweets #marcherepublicaine
  9. 129,484 tweets #noussommescharlie
  10. 128,529 tweets #afp

URLs

These are the top 10 URLs in the data set. 3,771,042 tweets (27.00%) had a URL associated with them.

These are all shortened URLs. I'm working through an issue with unshorten.py.

  1. http://bbc.in/1xPaVhN (43,708)
  2. http://bit.ly/1AEpWnE (19,328)
  3. http://bit.ly/1DEm0TK (17,033)
  4. http://nyr.kr/14AeVIi (14,118)
  5. http://youtu.be/4KBdnOrTdMI (13,252)
  6. http://bbc.in/14ulyLt (12,407)
  7. http://europe1.fr/direct-video (9,228)
  8. http://bbc.in/1DxNLQD (9,044)
  9. http://ind.pn/1s5EV8w (8,721)
  10. http://srogers.cartodb.com/viz/123be814-96bb-11e4-aec1-0e9d821ea90d/embed_map (8,581)

This comes from twarc-report's reportprofiler.py.

$ ~/git/twarc-report/reportprofiler.py -o text JeSuisCharlie-JeSuisAhmed-JeSuisJuif-CharlieHebdo-tweets-20150129.json

Media

These are the top 10 media urls in the data set. 8,141,552 tweets (58.29%) had a media URL associated with them.

  1. 36,753 occurrences
  2. 35,942 occurrences
  3. 33,501 occurrences
  4. 31,712 occurrences
  5. 29,359 occurrences
  6. 26,334 occurrences
  7. 25,989 occurrences
  8. 23,974 occurrences
  9. 22,659 occurrences
  10. 22,421 occurrences

This comes from twarc-report's reportprofiler.py.

$ ~/git/twarc-report/reportprofiler.py -o text JeSuisCharlie-JeSuisAhmed-JeSuisJuif-CharlieHebdo-tweets-20150129.json

tags: #JeSuisCharlie #JeSuisAhmed #JeSuisJuif #CharlieHebdo twarc twarc-report

LITA: To Infinity (Well, LibGuides 2.0) And Beyond

planet code4lib - Tue, 2015-02-03 13:00

Ah, but do I? Credit: Buffy Hamilton

Introduction

LibGuides is a content management system distributed by Springshare and used by approximately 4800 libraries worldwide to curate and annotate resources online. Generally librarians use it to compile subject guides, but more and more libraries are also using it to build their websites. In 2014, Springshare went public with a new and improved version called LibGuides 2.0.

When my small university library upgraded to LibGuides 2.0, we went the whole hog. After migrating our original LibGuides to version 2, I redid the entire library website using LibGuides, integrating all our content into one unified, flexible content management system (CMS).

Today’s post considers my library’s migration to LibGuides 2.0 as well as assessing the product. My next post will look at how we turned a bunch of subject guides into a high-performing website.

A faculty support page built using LibGuides 2.0 (screenshot credit: Michael Rodriguez)

Decision

According to the LibGuides Community pages, 913 libraries worldwide are running LibGuides v1, 439 are running LibGuides v1 CMS, and 1005 are running some version of LibGuides 2.0. This is important because (1) a lot of libraries haven’t upgraded yet, and (2) Springshare has a virtual monopoly on the market for library resource guides. Notwithstanding, Springshare does offer a quality product at a reasonable price. My library pays about $2000 per year for LibGuides CMS, which adds enhanced customization and API features to the regular LibGuides platform.

We did consider dropping LibGuides in favor of WordPress or another open source system, but we concluded that consolidating our web services as much as possible would enhance ease-of-use and ease training and transitions among staff. Our decision was also influenced by the fact that we use LibCal 2.0 for our study room reservation system, while Florida’s statewide Ask a Librarian virtual reference service, in which we participate, is switching to LibAnswers 2.0 by summer 2015. LibGuides, LibCal, and LibAnswers now all integrate seamlessly behind a single “LibApps” login.

LibApps admin interface (screenshot credit: Michael Rodriguez)

Migration

Since the upgrade is free, we decided to migrate before classes recommenced in September 2014. We relentlessly weeded redundant, dated, or befuddling content. I deleted or consolidated four or five guides, eliminated the inconsistent tagging system, and rearranged the subject categories. We picked a migration date, and Springshare made it happen within 30 minutes of the hour we chose.

I do suggest carefully screening your asset list prior to migration, because you have the option of populating your A-Z Database List from existing assets simply by checking the box next to each link you want to add to the database list. We overlooked this stage of the process and then had to manually add 140 databases to our A-Z list post-migration. Otherwise, migration was painless. Springshare’s tech support staff were helpful and courteous throughout the process.

Check out Margaret Heller and Will Kent’s ALA TechConnect blog post on Migrating to LibGuides 2.0 or Bill Coombs’ review of LibGuides 2 for other perspectives on the product migration.

Benefits of LibGuides 2.0

Mobile responsive. All the pages automatically realign themselves to match the viewport (tablet, smartphone, laptop, or desktop) through which they are accessed. This is huge.

Modern code. LibGuides 2.0 is built in compliance with HTML5, CSS3, and Bootstrap 3.2, which is a vast improvement given that the previous version’s code seems to date from 1999.

Custom URLs. Did you know that you can work with Springshare and your IT department to customize the URLs for your Guides? Or that you can create a custom homepage with a simple redirect? My library’s website now lives at a delightfully clean URL: library.hodges.edu.

Hosting. Springshare hosts its apps on Amazon servers, so librarians can focus on building content instead of dealing with IT departments, networks, server crashes, domain renewals, or FTP.

A-Z Database List. Pool all your databases into one easily sortable, searchable master list. Sort by subject, database type, and vendor and highlight “best bets” for each category.

Customizations. Customize CSS and Bootstrap for individual guides and use the powerful new API to distribute content outside LibGuides (for LibGuides CMS subscribers only). The old API has been repurposed into a snazzy widget generator to which any LibGuides subscriber has access.

Dynamic design. New features include carousels, image galleries, and tabbed boxes.

Credit: Flickr user Neal Jennings

Gripes

Hidden code. As far as I can tell, librarians can’t directly edit the CSS or HTML as in WordPress. Instead, you have to add custom code to empty boxes in order to override the default code.

Inflexible columns. LibGuides 2.0 lacks the v1 feature wherein librarians could easily adjust the width of guides’ columns. Now we are assigned a preselected range of column widths, which we can only alter by going through the hassle of recoding certain guide templates. Grr.

Slow widgets. Putting multiple widgets on one page can slow load times, and occasionally a widget won’t load at all in older versions of IE, forcing frustrated users to refresh the page.

Closed documentation. Whereas the older documentation is available on the open web for anyone to see, Springshare has locked most of its v2 documentation behind a LibApps login wall.

No encryption. Alison Macrina’s recent blog post on why we need to encrypt the web resonated because LibGuides’ public side isn’t encrypted. Springshare can provide SSL for libguides.com domains, but they can’t help with custom domains maintained by library IT on local servers.

Proprietary software. As a huge advocate for open source, I wince at relying on a proprietary CMS rather than on WordPress or Drupal, even though Springshare beats most vendors hollow.


Conclusion

That said, we are delighted with the new and improved LibGuides. The upgrade has significantly enhanced our website’s user-friendliness, visual appeal, and performance. The next post in this two-part series will look at how we turned a bunch of subject guides into a library website.

Over to you, dear readers! What is your LibGuides experience? Any alternatives to suggest?

Open Knowledge Foundation: India’s Science and Technology Outputs are Now Under Open Access

planet code4lib - Tue, 2015-02-03 11:13

This is a cross-post from the Open Knowledge India blog, see the original here.

As a new year 2015 gift to the scholars of the world, the two departments (Department of Biotechnology [DBT] and Department of Science and Technology [DST]) under the Ministry of Science and Technology, Government of India unveiled an Open Access Policy covering all of their funded research.

The policy document dated December 12, 2014 states that “Since all funds disbursed by the DBT and DST are public funds, it is important that the information and knowledge generated through the use of these funds are made publicly available as soon as possible, subject to Indian law and IP policies of respective funding agencies and institutions where the research is performed“.

As the Ministry of Science and Technology funds basic, translational and applied scientific research in the country through various initiatives and schemes to individual scientists, scholars, institutes, start-ups, etc., this policy assumes great significance and brings almost all of the science and technology outputs (here, published articles only) generated at various institutes under Open Access.

The policy underscores the fact that providing free online access to publications is the most effective way of ensuring that publicly funded research is accessed, read and built upon.

The Ministry under this policy has set up two central repositories of its own (dbt.sciencecentral.in and dst.sciencecentral.in) and a central harvester (www.sciencecentral.in) which will harvest the full-text and metadata from these repositories and other repositories of various institutes established/funded by DBT and DST in the country.

According to the Open Access policy, “the final accepted manuscript (after refereeing, revision, etc. [post-prints]) resulting from research projects, which are fully or partially funded by DBT or DST, or were performed using infrastructure built with the support of these organizations, should be deposited“.

The policy is not only limited to the accepted manuscripts, but extends to all scholarship and data which received funding from DBT or DST from the fiscal year 2012-13 onwards.

As mentioned above, many of the research projects at various institutes in the country are funded by DBT or DST, so this policy will definitely encourage the establishment of Open Access institutional repositories by the institutes and the opening up of access to all publicly funded research in the country.

Terry Reese: MarcEdit 6 Update

planet code4lib - Tue, 2015-02-03 06:04

This MarcEdit update includes a couple fixes and an enhancement to one of the new validation components.  Updates include:

** Bug Fix: Task Manager: When selecting the Edit Subfield function, once the delete subfield checkbox is selected and saved, you cannot reopen the task to edit.  This has been corrected.
** Bug Fix: Validate ISBNs: When processing ISBNs, validation was working incorrectly.  This has been corrected.  The ISBN validator now automatically validates $a and $z of any field specified.
** Enhancement: Validate ISBNs: When selecting the field to validate — if just the field is entered, the program automatically examines the $a and $z.  However, you can specify a specific field and subfield for validation. 

 

Validate ISBNs

This is a new function (as of the last update) that utilizes the ISBN check-digit formula to determine whether the number is mathematically correct.  In the future, I’ll add functionality to enable users to ensure that the ISBN is actually in use and linked to the title referenced in the record.  To use the function, open the MarcEditor, select the Reports menu, and then Validate ISBNs.

Once selected, you will be asked to specify a field or field and subfield to process.  If just the field is selected, the program will automatically evaluate the $a and $z if present.  If the field and subfield is specified, the program will only evaluate the specified subfield.

When run, the program will output any ISBN fields that cannot be mathematically validated.
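
For the curious, the underlying check-digit math is simple enough to sketch in a few lines of Python. This is only an illustration of the standard ISBN-10 and ISBN-13 formulas, not MarcEdit's actual code:

def valid_isbn10(isbn):
    digits = isbn.replace("-", "").replace(" ", "").upper()
    if len(digits) != 10:
        return False
    total = 0
    for weight, ch in zip(range(10, 0, -1), digits):
        if ch == "X" and weight == 1:
            value = 10  # 'X' stands for 10 and is only allowed as the check digit
        elif ch.isdigit():
            value = int(ch)
        else:
            return False
        total += weight * value
    return total % 11 == 0  # weighted sum must be divisible by 11

def valid_isbn13(isbn):
    digits = isbn.replace("-", "").replace(" ", "")
    if len(digits) != 13 or not digits.isdigit():
        return False
    total = sum(int(ch) * (1 if i % 2 == 0 else 3) for i, ch in enumerate(digits))
    return total % 10 == 0  # alternating 1/3 weights; sum must be divisible by 10

print(valid_isbn10("0-306-40615-2"), valid_isbn13("978-0-306-40615-7"))  # True True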

 

To get the update, utilize the automated update utility or go to http://marcedit.reeset.net/downloads to get the current download.

–tr

William Denton: Measure the Library Freedom

planet code4lib - Tue, 2015-02-03 01:41

The winners of the Knight News Challenge: Libraries were announced a few days ago. I didn’t know about the Knight Foundation (those are the same Knights as in Knight Ridder (“Not to be confused with Knight Rider or Night Rider”)) but they’re giving out lots of money to lots of good projects. DocumentCloud got funded a few years ago, and the Internet Archive got $600,000 this round, and well deserved it is. I was struck by how two winners fit together: the Library Freedom Project, which got $244,700, and Measure the Future, which got $130,000.

The Library Freedom Project has this goal:

Providing librarians and their patrons with tools and information to better understand their digital rights by scaling a series of privacy workshops for librarians.

Measure the Future says:

Imagine having a Google-Analytics-style dashboard for your library building: number of visits, what patrons browsed, what parts of the library were busy during which parts of the day, and more. Measure the Future is going to make that happen by using simple and inexpensive sensors that can collect data about building usage that is now invisible. Making these invisible occurrences explicit will allow librarians to make strategic decisions that create more efficient and effective experiences for their patrons.

Our goal is to enable libraries and librarians to make the tools that measure the future of the library as physical space. We are going to build open tools using open hardware and open source software, and then provide open tutorials so that libraries everywhere can build the tools for themselves.

Moss is boss.

I like collecting and analyzing data, I like measuring things, I like small computers and embedded devices, even smart dust—it always comes back to Vernor Vinge, this time A Deepness In the Sky—but I must say I don’t like Google Analytics even though we use it at work. Any road up:

We will be producing open tutorials that outline both the open hardware and the open source software we will be using, so that any library anywhere will be able to purchase inexpensive parts, put them together, and use code that we provide to build their own sensor networks for their own buildings.

The people behind Measure the Future are all top in the field, but, cripes, it looks like they want to combine users, analytics, metrics, sensors, embedded devices, free software, open hardware and “library as place” into a well-intentioned ROI-demonstrating panopticon.

Delicious Mondrian cake. So moist, so geometric.

I’m not going to get all Michel Foucault on you, but I recently read The Inspection House: An Impertinent Field Guide to Modern Surveillance by Tim Maly and Emily Horne:

The panopticon is the inflexion point and the culmination point of this new regime. It is the platonic ideal of the control the disciplinary society is trying to achieve. Operation of the panopticon does not require special training or expertise; anyone (including the children or servants of the director, as Bentham suggests) can provide the observation that will produce the necessary effects of anxiety and paranoia in the prisoner. The building itself allows power to be instrumentalized, redirecting it to the accomplishment of specific goals, and the institutional architecture provides the means to achieve that end.

Measure the Future has all the best intentions and will use safe methods, but still, it vibes hinky, this idea of putting sensors all over the library to measure where people walk and talk and, who knows, where body temperature goes up or which study rooms are the loudest … and then that would get correlated with borrowing or card swipes at the gate … and knowing that the spy agencies can hack into anything unless the most extreme security measures are taken and there’s never a moment’s lapse … well, it makes me hope they’ll be in close collaboration with the Library Freedom Project.

And maybe the Library Freedom Project can ask them why, when we’re trying to help users protect themselves as their own governments try to eliminate privacy forever, we’re planting sensors around our buildings because we now think that neverending monitoring of users will help us improve our services and show our worth to our funders.

Mita Williams: Hackerspaces, Makerspaces, Fab Labs, TechShops, Incubators, Accelerators... Where do libraries fit in?

planet code4lib - Mon, 2015-02-02 22:18
[ On February 1st, I gave this presentation the American Library Association Midwinter Conference in Chicago, Illinois as part of the ALA Masters Series. Thank you, good people of ALA.]



Today’s session is going to start out as a field guide but it’s going to end with a history lesson.




We’re going to start here - with a space station called c-base that was found/ed in Berlin in 1995.



 
And then we are going travel through time and space to the present day where business start-up incubator innovation labs are everywhere including CBASE  which is the College of Business and Economics from the University of Guelph.



But before we figure out where libraries makerspaces fit in, we’re going to use the c-base space station to go back in time, just before the very first public libraries were established around the world, so we can figure out how to go back to the future we want. It is 2015, after all.  



But before we can talk about library makerspaces, we need to talk about hackerspaces.





This is the inside of c-base.

c-base is considered one of - or perhaps even - the very first hackerspaces. It was established in 1995 by self-proclaimed nerds, sci-fi fans, and digital activists who tell us that c-base was built from a reconstructed space station that fell to earth, then somehow became buried, and when it was uncovered it was found to bear the inscription: be future compatible.

The c-base is described as a system of seven concentric rings that can move in relation to each other. These rings are called core, com, culture, creative, cience, carbon and clamp.

Beyond its own many activities, c-base has become the meeting place for German Wikipedians and it’s where the German Pirate Party was first established.





Members of c-base have been known to present at events hosted by the Chaos Computer Club, which is Europe's largest association of hackers that's been around for 30 years now.

So c-base is a hackerspace that is actually inhabited by what we commonly think of as hackers.  




Some of the earliest hackerspaces were directly inspired by c-base. There is a story that goes that in August of 2007, a group of North American hackers visited Germany for Chaos Communication Camp and was so impressed that when they came back, they formed the first hackerspaces in the United States, including NYC Resistor (2007), HacDC (2007), and Noisebridge (San Francisco, 2008).

Since then, many, many more hackerspaces have been developed - there are at least a thousand - but behind these new spaces are organizations that are much less counter-cultural in their orientation than the mothership of c-base. In fact, at this moment, you could say there isn’t a clear delineation between hackerspaces and makerspaces at all.

But before we can start talking about makerspaces, I think it’s necessary to pay a visit two branches of the hackerspace evolutionary tree: TechShops and Fab Labs.





TechShop is a business that started in 2006 which provides - in return for a monthly membership - access to space that contains over half a million dollars of equipment, generally including an electronics lab, a machine shop, a wood shop, a metal working shop, etc. There are only 8 of these TechShops across the US despite earlier predictions that there would be about 20 of them by now.  They have been slow to open because the owner has stated that the business requires at least 800 people willing to pay over $100 a month in order for a TechShop to be viable.





The motto of TechShop is Build Your Dreams here. But TechShops have been largely understood as places where members dream of prototypes for their future Kickstarter projects. And such dreams have already come true: the prototype of the Square credit card processing reader, for example, was built in a Techshop. I think it's telling that the Detroit Techshop has a bright red phone in the space that connects you directly to the United States Patent and Trademark Office in case of a patent emergency.





Three out of the 8 TechShops have backing from other organizations. TechShop's Detroit center opened in 2012 in partnership with Ford, which gives its employees free membership for three months. Ford employees can claim patents for themselves or they can give them to Ford in exchange for a share in revenue generated. Ford claims that this partnership with TechShop has led to a 50% rise in the number of patentable ideas put forward by the carmaker's employees in one year.






TechShop's offices in Washington DC and Pittsburgh are being sponsored by DARPA, an agency of the Defense Department. DARPA is reported to have invested $3.5 million dollars into TechShop as part of its “broad mission to see if regular citizens can outinvent military contractors on some of its weirder projects.”  But DARPA is not just helping pay for the space, they supposedly use the space themselves. According to the Bloomberg Business Week story I read, DARPA employees arrive at midnight to work when the TechShop is closed to its regular members.

You might be surprised, but we're going to be talking about DARPA again during this talk. But before that, we need to visit another franchise-like type of makerspace called the Fab Lab.





In 1998, Neil Gershenfeld started a class at MIT called "How to make (almost) anything". Gershenfeld wanted to introduce industrial-size machines normally inaccessible to technical students. However, he found his class also attracted a lot of students from various backgrounds including artists, architects, and designers. This led to a larger collaboration which eventually resulted in the Fab Lab Project which began in 2001. Fab Lab began as an educational outreach program from MIT but the idea has since developed into an ambitious network of labs located around the world.




The idea behind Fab Lab is that the space should provide a core set of tools powered by open source software that allow novice makers to make almost anything given a brief introduction to engineering and design education. Anyone can create a recognized Fab Lab as long as it makes a strong effort to uphold the criteria of a Fab Lab, with the most important being that Fab Labs are required to be regularly open to the public for little or no cost. While it's not required, a Fab Lab is also strongly encouraged to communicate and collaborate with the 350 or so other Fab Labs around the world. The idea is that, for example, if you design and make something using Fab Lab equipment in Boston, you could send the files and documents to someone in the Cape Town Fab Lab who could make the same thing using their equipment.





The first library makerspace was a Fab Lab. It was established in 2011 in the Fayetteville Free Library in the state of New York. That's Lauren Britton pictured on screen who was a driving force that helped make that happen.

Now we don't tend to talk about Fab Labs in libraries. We talk about makerspaces. I think this is for several reasons with one of the main ones being - as admirable as I personally find the goals of international collaboration through open source and standardization - the established minimum baseline for such a Fab Lab generally costs between $25,000 and $65,000 in capital costs alone. This  means that a proper Fab Lab is out of reach for many communities and smaller organizations.

I think there's another reason why we think of makerspaces before we think of Fab Labs, TechShops or hackerspaces. And that's because of Make Magazine.





Started in 2005 from the influential source of so many essential computer books, O'Reilly Publishing, Make Magazine was going to be called Hack. But then the daughter of founder Dale Dougherty told him that hacking didn’t sound good, and she didn’t like it. Instead, she suggested he call the magazine MAKE instead, because ‘everyone likes making things’.


And there is something to be said for having a more inclusive name, and something less threatening than hackerspace. But I think there's more to it as well. There is a freedom that comes with the name of makerspace.


 
One of my favourite things about makerspaces is that most of them are open to everyone - artists, scientists, educators, hobbyists, hackers and entrepreneurs - and it is this possibility for cross-pollination of ideas that is one of the espoused benefits of these spaces for their members. In a world where there's so much specialization, makerspaces are a force that is trying to bring different groups of people together.

Here's such an example. This is i3Detroit which calls itself a DIY co-working space that is a "a collision of art, technology and collaboration".






There are also makerspaces that are more heavily arts-based.  Miss Despoinas is a salon for experimental research and radical aesthetics that hosts workshops using code in contemporary art practice. It is physically located in Hobart, Tasmania.





There are presumably makerspaces that are designed primarily for the launching of new companies, although the only one I could find was Haxlr8r. Haxlr8r is a hardware business accelerator that combines workshop space with mentorship and venture capital opportunities, with official bases in San Francisco and Shenzhen, China.





That being said, I can't help but note that most of the maker spaces I've found that are designed specifically to support start-ups have been in universities.  Pictured here is the "Industrial Courtyard" where students and recent graduates of the university where I work can have access for prototype or product development.






In some ways, this brings us full circle because it's been said that the originators of the first hackerspaces set them up deliberately outside of universities, governments, and businesses because they wanted a form of political independence and even to be a place for resistance to the bad actors of these organizations.

As Willow Brugh describes this transition from the earliest hackerspaces and hacklabs :

The commercialization of the space means more people have access to the ideals of these spaces - but just as when "Open Source" opened up the door to more participants, the blatant political statement of "Free Software" was lost - hacklabs have turned from a political statement on use of space and voice into a place for production and participation in mainstream culture.




For as neutral and benign as makerspaces seemingly are ("everyone likes to make things"), there are reasons to be mindful of the organizations behind them. For one, in 2012 Make Magazine received a grant from DARPA to establish makerspaces in 1000 U.S. high schools over the next four years.







Now it's one thing if makerspaces simply exist as a place where friends and hobbyists can meet, work and learn from each other. It's quite another if the makerspace becomes the basis of a model to address STEM anxieties in education.

As much as I appreciate how the Maker Movement is trying to bring a playful approach to learning through building, it's important to recognize that makerspaces tend to collect successful makers rather than produce them. The community who participates in hackerspaces and makerspaces is pronouncedly skewed white and male.  In 2012, Make Magazine reported that of its 300,000 in total readership, 81% are male, median age is 44, and the median household income is $106,000.




 
Lauren Britton, the librarian who was responsible for the very first Library Fab Lab/Makerspace, is now a doctoral student at Syracuse University in Information Science and Technology and a researcher for their Information Institute. She's been doing discourse analysis on the maker movement and last year she informally published some of her findings so far.  She's already tackled STEM anxiety and I'm particularly looking forward to what she has to say about gender and the makerspace movement.




But there's no time to get into all of that now, because it is now time to hop into c-base and travel through and time and space to the time before public libraries. We are going to travel up the makerspace evolutionary tree to what I like to consider the proto-species of the makerspace : The Mechanics Institute.




The world's first Mechanics' Institute was established in Edinburgh, Scotland in October 1821. Mechanics Institutes were formed to provide libraries and forms of adult education, particularly in technical subjects, to working men. As such, they were often funded by local industrialists on the grounds that they would ultimately benefit from having more knowledgeable and skilled employees. Mechanics Institutes as an institution did not last very long - the movement lasted only fifty years or so - although at their peak there were 700 of them worldwide.






What I think is so particularly poetic is that many of the buildings and core books collections of these Mechanics Institutes- especially where I'm from which is the province of Ontario in Canada - became the foundation for the very first public libraries.





There are still some Mechanics Institutes among us - like coelacanths, evolutionarily speaking - most notably Montreal's Atwater Library and San Francisco's beautiful Mechanics Institute and Chess Room.

Now, I have to admit, when I see some makerspaces, they remind me of mechanics institutes: subsidized spaces that exist to provide access to technologies to be used for potential start-ups. And if that remains their primary focus, I think their moment will pass, just like mechanics institutes. The forces that made industrial technology accessible to small groups will presumably continue to develop into consumer technology.  To live by disruption is to die by disruption.

This is one reason why I'm so happy and proud of the way so many libraries have embraced makerspaces and have made them their own.  Because by and large, libraries keep people at the centre of the space- not technology.







Librarians - by and large - have opted for accessible materials and activities in their spaces and host activities that emphasize creativity, personal expression and learning through play.

This is The Bubbler, which is a visual arts-based makerspace from the Madison Public Library. I have never been but from what I can see, they are doing many wonderful things. They host events that involve bike hacking, audio engineering, board game making, and media creation projects. I was particularly impressed by how they are working with juvenile justice programs to bring these activities and workshops to justice-involved youth.

As long as libraries can continue to focus on building a better future for all of us, then we can continue to be a space where that future can be built.

This concludes our tour through time and space. Thank you kindly for your attention.

May your libraries and your makerspaces be future compatible.

Nicole Engard: Bookmarks for February 2, 2015

planet code4lib - Mon, 2015-02-02 20:30

Today I found the following resources and bookmarked them:

  • Coggle - Coggle is about redefining the way documents work: the way we share and store knowledge. It’s a space for thoughts that works the way that people do — not in the rigid ways of computers.

Digest powered by RSS Digest

The post Bookmarks for February 2, 2015 appeared first on What I Learned Today....


District Dispatch: President Obama’s budget increases library funding

planet code4lib - Mon, 2015-02-02 20:09

President Barack Obama today transmitted to Congress the Obama Administration’s nearly $4 trillion budget request to fund the federal government for fiscal year 2016, which starts October 1, 2015. The President’s budget reflected many of the ideas and proposals outlined in his January 20th State of the Union speech.

Highlights for the library community include $186.5 million in assistance to libraries through the Library Services and Technology Act (LSTA). This important program provides funding to states through the Institute of Museum and Library Services (IMLS).

“We applaud the President for recognizing the tremendous contributions libraries make to our communities, ” said American Library Association (ALA) President Courtney Young in a statement. “The American Library Association appreciates the importance of federal support for library services around the country, and we look forward to working with the Congress as they draft a budget for the nation.

“The biggest news for the library community is the announcement of $8.8 million funding for a national digital platform for library and museum services, which will give more Americans free and electronic access to the resources of libraries, archives, and museums by promoting the use of technology to expand access to the holdings of museums, libraries, and archives. Funding for this new program will be funded through the IMLS National Leadership Grant programs for Libraries ($5.3 million) and Museums ($3.5 million).

(dollars in thousands)

Statutory Authority           FY 2010   FY 2011   FY 2012   FY 2013   FY 2014 Request   FY 2014 Enacted   FY 2015 Request
Grants to States              172,561   160,032   156,365   150,000   150,000           154,848           152,501
Native Am/Haw. Libraries        4,000     3,960     3,869     3,667     3,869             3,861             3,869
Nat. Leadership / Libraries    12,437    12,225    11,946    11,377    13,200            12,200            12,232
Laura Bush 21st Century        24,525    12,818    12,524    10,000    10,000            10,000            10,000
Subtotal, LSTA                213,523   189,035   184,704   175,044   177,069           180,909           178,602

(View the full chart on the budget cuts from IMLS.)

“With the appropriations process beginning, we look forward to working for continued support of key programs, including early childhood learning, digital literacy, and the Library Services and Technology Act.”

The post President Obama’s budget increases library funding appeared first on District Dispatch.

LITA: LITA Board Meeting Two – ALA Midwinter 2015

planet code4lib - Mon, 2015-02-02 19:33

If you would like to listen in to the LITA Board meeting at ALA Midwinter 2015, it is streaming (in audio) below:

Islandora: Islandora/Fedora 4 Project Update

planet code4lib - Mon, 2015-02-02 19:21

The Islandora 7.x/Fedora 4.x integration that we announced in December has officially begun. Work began on January 19th, our first team meeting was Friday, January 30th, and we will be meeting on the 4th Friday of every month at 1:00 PM Eastern time. Here's what's going on so far:

Project Updates

The new, Fedora 4 friendly version of Islandora is being built under the working designation of Islandora 7.x-2.x (as opposed to the 7.x-1.x series that encompasses current Fedora 3.x updates to Islandora, which are not going away any time soon). A new GitHub organization is in place for development and testing, and the Islandora Fedora 4 Interest Group has been reconvened under new Terms of Reference to act as a project group for the Fedora 4 integration. If you want to participate, please sign up as part of this group. If you don't have time to participate in regular meetings, we would still love to hear your use case. You can submit it for discussion in the issue queue of the interest group. Need help getting into the GitHub of it all? Contact us and we'll get you there.

There is also a new chef recipe in the works to quickly spin up development and testing environments with the latest for 7.x-2.x. Special thanks to MJ Suhonos and the team at Ryerson University for Islandora Chef!

The project is under the direction of Project Lead Nick Ruest (York University) and Tech Lead Danny Lamb (discoverygarden, Inc.), with participation from:

  • The University of Toronto Scarborough
  • The University of Oklahoma
  • The University of Manitoba
  • The University of Virginia
  • The University of Prince Edward Island
  • The University of Limerick
  • Simon Fraser University
  • REUNA
  • LYRASIS
  • Common Media
  • The Colorado Alliance

Special thanks goes to Aaron Coburn, whose fcrepo Camel module is going to be an integral part of our own designs for Fedora 4 and Islandora.

If you would like to talk to Nick and Danny about the project, or even offer up some help while they code away on an unofficial 'sprint,' you can meet up with them at discoverygarden's table at Code4Lib 2015 in Portland, OR February 9 - 12.

Technical Planning

Danny Lamb has kicked off the design of the next stage of Islandora with a Technical Design Doc that you should definitely read and comment on if you have any plans to use Islandora with Fedora 4 in the future. We are still at the stage of hearing use cases and making plans, so now is the time to get your needs into the mix. The opening line sums up the basic approach: Islandora version 7.x-2.x is middleware built using Apache Camel to orchestrate distributed data processing and to provide web services required by institutions who would like to use Drupal as a frontend to a Fedora 4 JCR repository. 

Some preliminary Big Ideas:

  • No more Tuque. No more GSearch. No more xml forms. The Java middleware layer will handle many things that were previously done in PHP and Drupal.
  • It will treat Drupal like any other component of the stack. There will be indexing in Drupal for display using nodes, fields, and other parts of the Drupal ecosystem.
  • It will use persistent queues, so the middleware layer can exist on separate servers.
  • The Fedora-Drupal connection comes first. An admin interface will be developed later.

And some preliminary Wild Ideas (we'd love to hear your opinions):

  • Headless Drupal 7.x
  • Make the REST API endpoints the same for Drupal 7 and Drupal 8 so migration is easier.
  • Dropbox-style ingest.
Migration

Or rather, upgration (a portmanteau of upgrade and migration, and our new favourite word). Nick Ruest and York University are working through a Fedora 3.x -> 4.x upgration path. Because York's Islandora stack is as close to generic as you can reasonably get in production, this should provide a model for a generic upgration path that others can follow - as well as keeping the needs of the Islandora community on the radar for the Fedora 4 development team, so that all of the pieces evolve to work together.

Funding

We launched the project with a funding goal of $100,000 to get a functioning prototype and Fedora 3.x -> 4.x migration path. We are very pleased to announce that we have achieved more than half of that funding goal and are well set to see things through to the end. 

Many, many thanks to our supporters, all of whom are now members of the Islandora Foundation as Partners:

  • LYRASIS
  • York University
  • McMaster University
  • University of Prince Edward Island
  • University of Manitoba
  • University of Limerick

If your institution would like to join up, whether as a $10,000 Partner or at some other level of support, please contact us.

 

State Library of Denmark: British Library and IIPCTech15

planet code4lib - Mon, 2015-02-02 13:42

For a change of pace: A not too technical tale of my recent visit to England.

The people behind IIPC Technical Training Workshop – London 2015 had invited yours truly as a speaker and participant in the technical training. IIPC stands for International Internet Preservation Consortium and I was to talk about using Solr for indexing and searching preserved Internet resources. That sounded interesting and Statsbiblioteket encourages interinstitutional collaboration, so the invitation was gladly accepted. Some time passed and British Library asked if I might consider arriving a few days early and visiting their IT development department? Well played, BL, well played.

I kid. For those not in the know, British Library made the core software we use for our Net Archive indexing project and we are very thankful for that. Unfortunately they do have some performance problems. Spending a few days, primarily talking about how to get their setup to work better, was just reciprocal altruism working. Besides, it turned out to be a learning experience for both sides.

It is the little things, like the large buses

At British Library, Boston Spa

The current net archive oriented Solr setups at British Library are using SolrCloud with live indexes on machines with spinning drives (aka harddisks) and a – relative to index size – low amount of RAM. At Statsbiblioteket, our experience tells us that such setups generally have very poor performance. Gil Hoggarth and I discussed Solr performance at length and he was tenacious in exploring every option available. Andy Jackson partook in most of the debates. Log file inspections and previous measurements from the Statsbiblioteket setups seemed to sway them in favour of different base hardware, or to be specific: Solid State Drives. The open question is how much such a switch would help or if it would be a better investment to increase the amount of free memory for caching.

  • A comparative analysis of performance with spinning drives vs. SSDs for multi-TB Solr indexes on machines with low memory would help other institutions tremendously, when planning and designing indexing solutions for net archives.
  • A comparative analysis of performance with different amounts of free memory for caching, as a fraction of index size, for both spinning drives and SSDs, would be beneficial on a broader level; this would give an idea of how to optimize bang-for-the-buck.

Illuminate the road ahead

Logistically the indexes at British Library are quite different from the index at Statsbiblioteket: They follow the standard Solr recommendation and treat all shards as a single index, both for indexing and search. At Statsbiblioteket, shards are built separately and only treated as a whole index at search time. The live indexes at British Library have some downsides, namely re-indexing challenges, distributed indexing logistics overhead and higher hardware requirements. They also have positive features, primarily homogeneous shards and the ability to update individual documents. The updating of individual documents is very useful for tracking meta-data for resources that are harvested at different times, but have unchanged content. Tracking of such content, also called duplicate handling, is a problem we have not yet considered in depth at Statsbiblioteket. One of the challenges of switching to static indexes is thus:

  • When a resource is harvested multiple times without the content changing, it should be indexed in such a way that all retrieval dates can be extracted and such that the latest (and/or the earliest?) harvest date can be used for sorting, grouping and/or faceting.

One discussed solution is to add a document for each harvest date and use Solr’s grouping and faceting features to deliver the required results. The details are a bit fluffy as the requirements are not strictly defined.

At the IIPC Technical Training Workshop, London 2015

The three pillars of the workshop were harvesting, presentation and discovery, with the prevalent tools being Heritrix, Wayback and Solr. I am a newbie in two thirds of this world, so my outsider thoughts will focus on discovery. Day one was filled with presentations, with my Scaling Net Archive Indexing and Search as the last one. Days two and three were hands-on with a lot of discussions.

As opposed to the web archive specific tools Heritrix and Wayback, Solr is a general purpose search engine: There is not yet a firmly established way of using Solr to index and search net archive material, although the work from UKWA is a very promising candidate. Judging by the questions asked at the workshop, large scale full-text search is relatively new in the net archive world and as such the community lacks collective experience.

Two large problems of indexing net archive material are analysis and scaling. As stated, UKWA has the analysis part well in hand. Scaling is another matter: Net archives typically contain billions of documents, many of them with a non-trivial amount of indexable data (webpages, PDFs, DOCs etc). Search responses ideally involve grouping or faceting, which requires markedly more resources than simple search. Fortunately, at least from a resource viewpoint, most countries do not allow harvested material to be made available to the general public: The number of users and thus concurrent requests tends to be very low.

General recommendations for performant Solr systems tend to be geared towards small indexes or high throughput, minimizing the latency and maximizing the number of requests that can be processed by each instance. Down to Earth, the bottleneck tends to be random reads from the underlying storage, easily remedied by adding copious amounts of RAM for caching. While the advice arguably scales to net archive indexes in the multiple TB-range, the cost of terabytes of RAM, as well as the number of machines needed to hold them, is often prohibitive. Bearing in mind that the typical user groups on net archives consist of very few people, the part about maximizing the number of supported requests is overkill. With net archives as outliers in the Solr world, there is very little existing shared experience to provide general recommendations.

  • As hardware cost is a large fraction of the overall cost of doing net archive search, in-depth descriptions of setups are very valuable to the community.

All different, yet the same

Measurements from British Library as well as Statsbiblioteket show that faceting on high cardinality fields is a resource hog when using SolrCloud. This is problematic for exploratory use of the index. While it can be mitigated with more hardware or software optimization, switching to heuristic counting holds promises of very large speed ups.

  • The performance benefits and the cost in precision of approximate search results should be investigated further. This area is not well-explored in Solr and mostly relies on custom implementations.
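
To make the faceting case concrete, the kind of request that causes trouble is a plain term facet over a high-cardinality field, such as the domain of each harvested URL. A rough sketch in Python using Solr's classic facet parameters (the collection name and field name here are assumptions, not any particular institution's schema):

import requests
params = {
    "q": "*:*",
    "rows": 0,  # only the facet counts are wanted, not documents
    "facet": "true",
    "facet.field": "domain",  # high-cardinality: roughly one value per harvested host
    "facet.limit": 20,
    "wt": "json",
}
response = requests.get("http://localhost:8983/solr/netarchive/select", params=params)
print(response.json()["facet_counts"]["facet_fields"]["domain"])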

On the flipside of fast exploratory access is the extraction of large result sets for further analysis. SolrCloud does not scale for certain operations, such as deep paging within facets and counting of unique groups. Certain operations, such as percentiles in the AnalyticsComponent, are not currently possible. As the alternative to using the index tends to be very heavy Hadoop processing of the raw corpus, this is an area worth investing in.

  • The limits of result set extractions should be expanded and alternative strategies, such as heuristic approximation and per-shard processing with external aggregation, should be attempted.
On a personal note

Visiting British Library and attending the IIPC workshop was a blast. Being embedded in tech talk with intelligent people for 5 days was exhausting and very fulfilling. Thank you all for the hospitality and for pushing back when my claims sounded outrageous.


DPLA: Unexpected: Balletic or Brutish? Picturing Football

planet code4lib - Sun, 2015-02-01 22:08

[This is the second post in our new series, Unexpected, which covers thematic discoveries in our collection. In case you missed it, the first post covered unusual snow removal machines.]

Bringing together over 15,000 photographs of football, from its origins after the Civil War to the Super Bowl era, and from over a thousand collections around the United States, presents an opportunity to see in one place how this uniquely American sport has been played—and imagined. Photography itself evolved in concert with the sport, from lantern slides of players to aerial shots of stadiums.

From the very beginning, however, one constant has been the tension between picturing football as balletic and gentlemanly, or chaotic and brutish.

Eadweard Muybridge’s 1887 collotype of a nude man punting a football put the sport squarely into the graceful category, showing the wide range of motion involved in a kick.

[Eadweard Muybridge. Animal locomotion: an electro-photographic investigation of consecutive phases of animal movements. 1872-1885 / published under the auspices of the University of Pennsylvania. Plates. The plates printed by the Photo-Gravure Company. Philadelphia, 1887. Image courtesy of the University of Southern California Libraries]

The fully extended leg of the punter, pictured in the upper left of Muybridge's series, became a common one in sports photography—a kickline of one:

[Image courtesy of the California Historical Society Collection via the University of Southern California Libraries]

[Image courtesy of the University of Virginia Special Collections]

Catching the football also presented the photographer with an opportunity to depict football as ballet:

[Image courtesy of the Boston Public Library via Digital Commonwealth]

Early photographs often showed football players in suits and tuxedos, as the 1869 Rutgers team wore in their team photograph after beating Princeton in the very first college game:

[Image courtesy of the New York Public Library]

For photographs of football formations, ties and jackets were sometimes worn.

[Image courtesy of the Archives and Special Collections at the University of Montana via the Mountain West Digital Library]

But the fact that football, unlike baseball, was based on contact—in many cases, extreme contact—made it clearly open to other interpretations. Faster film, which required less exposure to light, could not only capture the punter and wide receiver at work; it could capture the moment of impact, leading to distinctly different images of football.

[Images courtesy Springfield College Archives and Special Collections via Digital Commonwealth]

Many of these photographs effectively create freeze-frame sculpture, heightened with the painful knowledge of what is about to be felt by the player under assault.

[Image courtesy of the Austin History Center at the Austin Public Library, via the Portal to Texas History]

[Image courtesy of the Boston Public Library via Digital Commonwealth]

The cameras may have changed radically and film is now virtually obsolete, but you’ll undoubtedly see these two photographic styles in the coverage of today’s Super Bowl. Football: still balletic, still brutal.

 

Hugh Rundle: A measured approach

planet code4lib - Sun, 2015-02-01 21:54

As has become traditional, I’m posting again in February after a long break in the second half of last year. Hopefully in 2015 I can break my bad habit and actually continue with regular blog content all the way through the year.

I’ve spent much of the last few months obsessing over stats and analytics from the Boroondara library websites, as I developed a brief for developers to help us with a major overhaul. The experience has reinforced the advice from Matthew Reidsma to regularly analyse the way people use your website, and test and make changes immediately and incrementally. A lot of the recommendations I’ve made at Boroondara are as much about the way we produce website content as they are about the design of the sites. For example, I’ve discovered that visitors using mobile devices are most likely to visit on weekends, whilst visitors on desktop are most likely to visit on Monday and least likely to visit on Sunday. Do we need to change our posting schedules? Does this difference reflect different users or just the same visitors using different devices across the week? These are questions we would not have even thought to ask until we saw the data - and this is just one simple example. Something more intriguing (and hindsight obvious) was my discovery that visitors to our Storytimes page were more than 50% more likely to come from mobiles (about 46% compared to 30% for visits to all pages). It’s pretty easy to construct a story of busy parents checking their phone from the local park to check if the library has a storytime today - but we hadn’t really considered this behaviour until now (and of course, there could be any number of alternative explanations for why there is this difference).

What became quite clear is that we should have been doing more than simply looking at total hits and visits each month; we should have been looking deeply into our analytics on both our catalogue and our general website. I won't be doing this at Boroondara, because I finished up there last week, but if anyone at Brimbank Libraries is reading this - be prepared to become obsessed with user tracking and analytics!

Coincidentally, I recently read John O’Nolan’s post about onboarding stats at Ghost. I’ve read lots of stuff from UX experts and library website experts emphasising the usefulness of things like A/B testing and ongoing analysis of usage data, but until now I’ve never fully appreciated what they’re saying. Perhaps it’s because the Ghost Foundation is a non-profit, but I found O’Nolan’s post helped me to see how we can (carefully) use usage data to help library members get more from us. That is, libraries have the ability to actually use analytics to ‘improve the user experience’. Using data to manipulate users to act against their own best interests, as too many commercial services seem to do, isn’t the only possibility.

A couple of simple examples

Email notices

Pretty much every library sends notifications to members in one form or another. Mostly these are emails. Whilst I am stunned by the fact that several major library management systems are still only capable of sending plaintext emails and not HTML formatted emails, at this point I am going to assume you are sending HTML formatted email notices.

Ever wondered whether the wording of your notices is effective? Perhaps if you used a different subject heading or made your email text more friendly, members would have fewer overdue loans. Wouldn't it be great to test your theory scientifically? A/B testing is the way to do this. Web companies do this all the time. True A/B testing is random - on a given day a website might randomly show different users different configurations on the front page, for example. They can then test which configuration (‘configuration A’ or ‘configuration B’) resulted in more sales, or newsletter sign-ups, or whatever.

It all sounds very hard and complicated, but you can fairly easily use an analytics program like Piwik to create ‘campaigns’ and associated tracking codes. All this does is add some extra code to the URLs you use, which is identified by your analytics system when visitors use a URL with that code. You could use campaign tracking codes by sending out two batches of email notices (perhaps on two consecutive Tuesdays, for example) with a link to ‘click here to renew these items’. By comparing the number of hits on your login page from that tracking code to the number of notices sent out using it, you can measure the effectiveness of different types of approaches to subject headings, wording and layout.
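
As a rough sketch of what that looks like in practice, the snippet below builds tagged renewal links for two batches of notices. Piwik's campaign reports conventionally pick up the pk_campaign and pk_kwd URL parameters, but treat the parameter names, and certainly the login URL, as assumptions to check against your own analytics setup.

    # Hedged sketch: tag the "click here to renew" link differently for each
    # batch of overdue notices so the analytics campaign report can compare them.
    # The login URL is hypothetical; pk_campaign / pk_kwd are the campaign
    # parameters Piwik documents, but verify them for your installation.
    from urllib.parse import urlencode

    LOGIN_URL = "https://library.example.org/patron/login"  # hypothetical page

    def renewal_link(variant):
        """Build the renewal URL for one wording variant of the notice."""
        params = {
            "pk_campaign": "overdue-notice-test",  # campaign name in reports
            "pk_kwd": variant,                     # which wording this member got
        }
        return LOGIN_URL + "?" + urlencode(params)

    # Batch A keeps the current wording, batch B tries the friendlier rewrite.
    print(renewal_link("wording-a-standard"))
    print(renewal_link("wording-b-friendly"))
    # Hits on the login page per variant, divided by notices sent, is the A/B result.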

What do mobile visitors want to do?

An even simpler example comes from some of the analysis I’ve recently been doing. I had a feeling that visitors on mobile devices might show different browsing behaviour to those on desktops, but I didn’t really know. Because browsers tend to broadcast what type of browser they are, what device they are installed on, and the size of their screen, it’s pretty easy to track what type of device visitors are using. By creating a segment (about 15 seconds in your favourite analytics software), you can determine if visitors from mobile (or tablets, for that matter) behave differently from desktop users.

What I discovered was that nearly half of all mobile visitors to our website visited the Opening Hours page - making mobile users about three times more likely than desktop users to be looking for our opening hours. This has obvious ramifications for any mobile optimisation of our website - clearly opening hours need to be pretty close to the first thing they see. Of course, by claiming your branches’ Google Maps pages you can ensure that your opening hours are available right there in Google before users even hit your site. Since we’re in the business of providing information and experiences, rather than selling stuff through our websites, we’re in the fortunate position that it doesn’t actually matter if people get the information they need (in this case “Is the library open?”) without visiting our website at all.

It might strike you as obvious that people visiting a library website using a smartphone probably want to know whether the library is open, but with hard data you can actually test such intuitions. There were plenty of other ‘obvious’ assumptions that I found to be false when checking our website analytics properly. None of the things I have just described are difficult or even particularly clever. There are smart librarians who use and understand these tools in much more sophisticated ways than I ever have. Given the state of most library websites, however, it seems doubtful that these sorts of techniques are anywhere close to mainstream in libraries today.

Privacy

At this point, some of you are probably yelling at your screen “I thought you were supposed to be interested in user privacy, you hypocrite!” Indeed, I am very interested in user privacy. Whilst working on our website project I have also been busy tightening up the privacy and security of our existing catalogue. The conclusion I have come to, however, is that we can genuinely protect the privacy of library members and visitors whilst still collecting a lot of useful aggregate data. The important thing is to always consider the consequences of tracking, collecting and storing any particular piece of data before you do anything, and ensure that is how you decide whether to collect it, rather than how useful or interesting it might be.

There are a couple of practices we need to be particularly careful to avoid:

Linking web and search analytics to identified library members

Whilst it may be possible to make a link between a tracked website user and a registered member through data matching things like their IP address, this still takes time and requires a targeted effort aimed at a specific person. If, on the other hand, you set up your web analytics in a way that can easily identify the search terms used by a specific user (and, therefore, vice versa), you make it possible to provide lists of search terms associated with a specific person, or lists of specific people associated with particular search terms. It would be so easy to track actual members’ search terms and general website use that you could probably do it accidentally.

This is also worth thinking about with regard to how you track individual website users. Piwik, for example, includes ‘Visitor profiles’, which track users over time based on their IP address. This makes me very uncomfortable, especially coming from software that prides itself on being great for privacy. There are a couple of ways to reduce the privacy problems caused by this. Firstly, Piwik can be set up to simply ignore the last one, two or three bytes of an IP address. This makes it impossible to track usage geographically to particular suburbs or cities, but usually you won’t care much about that. The other feature Piwik recommends administrators use is archiving. The archive function stores usage data in aggregate in tables, then deletes the actual logs. This means you get to use old data for aggregate reports, but when the men in dark suits come knocking you don’t have any personally identifiable data to give them.

Using third parties who can see your data

The reason I have been mentioning Piwik so much, and the reason for their claim to be good for privacy, is that Piwik is a software program, not a cloud service. When you use Google Analytics, it’s not just you who has access to that data. Google can track users across the web using the javascript embedded in at least half of the world’s websites. There’s a reason Google Analytics is free of charge. The same is true for Facebook tracking pages with ‘Like us on Facebook’ buttons, Twitter with ‘Tweet this’ buttons and so on.

It’s all very well to have policies and statements about the freedom to read and how you protect member loan records, but the world has moved on. The library user who doesn’t use online services at all is almost extinct. Privacy statements are one thing, but privacy practice is another entirely. As a general rule if the data isn’t stored on-site, someone else probably has access to it. If you didn’t pay anything for the service, you can guarantee that. Eric Hellman provided a stark illustration last year of how many people and organisations have access to your users’ data if you don’t pay attention to security and privacy. Following on the heels of the Adobe Digital Editions debacle in October, it should be obvious to even the most obstinately clueless that libraries need to ask a lot more questions when third parties are providing services on our behalf.

The future

I’d like to see libraries take more action to protect user privacy and collect more and better data. I truly think it is possible for us to do both - but only if we are careful and thoughtful about how we go about it. Jason Griffey announced an exciting new project over the weekend, called ‘Measure the Future’. Led by Griffey and other library stars Gretchen Caserotti and Jenica Rogers, along with educator Jeff Branson, the project seeks to build a ‘Google Analytics for your library building’, tracking physical use of libraries just as we can track digital use. Built on open hardware and software by librarians, this has huge promise - but we need to be mindful of the same privacy concerns we have always expressed with regard to reading habits, and started to neglect as reading moves increasingly to digital environments.

Currently most libraries seem to be (accidentally) providing a huge hoard of private user data to virtually anyone who wants it, but not actually using any of it themselves. If we are to credibly claim to be defenders of intellectual freedom and responsive to our communities, we need to use data more cleverly - and protect member privacy while we do so.

John Miedema: Embedded Reading in Lila Cognitive Writing Technology [Video]

planet code4lib - Sun, 2015-02-01 16:10

Lila is a cognitive technology that extends reading and analysis capabilities for a writing project. Author content is used to generate “slips”, short units of text from unread content. Slips are visualized to allow embedded reading. Embedded means “to fix firmly and deeply in a surrounding mass.” Embedded reading is reading content in the context of other closely related content. Context is meaning. Embedded reading gives new insight and ensures completeness. It is visualized as a web of associated, clickable slips in Lila. View the video.

Ed Summers: Documenting Ferguson Emails

planet code4lib - Sun, 2015-02-01 13:11

If you are an IfThisThenThat user and are interested in archives maybe you’ll be interested in this recipe that will email you when a new item is added to the Documenting Ferguson repository. Let me know if you give it a try! I just created the recipe and it hasn’t emailed me yet. But the RSS Feed from Washington University’s Omeka instance reports that the last item was added on January 30th, 2015. So the collection is still being added to.

I thought about having it tweet, but that would involve creating a Twitter account for the project and that isn’t my place. Plus, RSS and Email are still fun Web 1.0 technologies that don’t get enough love. Well I guess Email predates the Web entirely heh, but you get my drift.
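
For anyone who would rather wire up their own Web 1.0 plumbing instead of IFTTT, here is a small sketch using the feedparser library. The feed URL is a placeholder, not the real address of the Documenting Ferguson Omeka feed, and the printing step stands in for whatever notification you prefer.

    # DIY alternative to the IFTTT recipe: poll the repository's RSS feed and
    # report anything new. The feed URL below is a placeholder, not the actual
    # Documenting Ferguson feed address.
    import feedparser

    FEED_URL = "https://omeka.example.edu/items/browse?output=rss2"  # placeholder

    def new_entries(seen_links):
        """Return feed entries whose links have not been seen before."""
        feed = feedparser.parse(FEED_URL)
        return [entry for entry in feed.entries if entry.link not in seen_links]

    seen = set()
    for entry in new_entries(seen):
        # Each entry could just as easily be emailed or tweeted from here.
        print(entry.get("published", ""), entry.title, entry.link)
        seen.add(entry.link)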

Mark E. Phillips: UNT Libraries’ Digital Collections: 2014 Review – Items Added

planet code4lib - Sat, 2015-01-31 23:18

One thing that tends to be hard in the digital library world is understanding how a given program is doing in relation to other programs throughout the country. This information can help justify funds spent locally on digital library initiatives. The same information can be used within a department to understand if workflows are on par with others throughout the country/region.

Most often the numbers that are reported are those required by membership groups such as ARL, ACRL and others who have a token question or two about digital library statistics, but most people involved with those numbers know that they are often… unclear at best.

Some of the dimensions that are available to look at include traffic to the digital library system: visitors, page views, time on site, referral traffic. Locally we use Google Analytics for this data at the repository level. How a digital library's items get used is another metric that is helpful in knowing the impact of these resources. This can be measured in a wide range of ways and there are initiatives such as Counter that provide some guidance to this sort of work, but it feels like it is more focused on “Electronic Resources” and doesn't really handle the range of cases we run into in digital library/repository land. The University of Florida Digital Collections makes its usage data for each item in the collection easily obtainable, and many modern DSpace instances also have great reporting on usage of items. I've talked a little about how UNT Libraries calculates “uses” for our digital library collections here and here. The final area that is often reported on is the collection growth of the repository, either in the number of items added, number of bytes (or GB, TB) added, or number of files added in a given year.

I think walking through some of these metrics in a series of posts will be helpful for me to articulate some of the opportunities that are available if the digital libraries/repository community openly shared more of this data.  There are of course organizations such as Hathi Trust,  the Digital Public Library of America, and others who make growth data available front and center,  but for most of our repositories it is pretty hidden.

The data that I’m showing in this post is from the UNT Libraries Digital Collections which contains three separate digital library interfaces,  The Portal to Texas History, the UNT Digital Library, and the Gateway to Oklahoma History.  All three of these interfaces are powered by the same repository infrastructure on the backend and are made searchable by a unified Solr index.  The datasets here are from that Solr instance directly.

Items added per month

From Jan 1 to Dec 31, 2014 the UNT Libraries Digital Collections added 417,645 unique digital resources to its holdings. The breakdown of the monthly additions looks like this:

Month       Items Added
January          32,074
February          9,220
March             7,758
April            11,161
May              11,475
June             32,549
July             18,503
August           67,769
September        83,916
October          25,537
November         73,404
December         44,279

A better way to look at this might be a simple chart.

UNT Libraries Digital Collections: Growth by Month in 2014

Or looked at a different way.

The average number of items added to the system in 2014 by month is 34,803.

Wait, What is an Item, Object, Resource?

A little side trip is needed so that we are on the same page. For us a “digital object” or “digital item” or “digital resource” is the intellectual unit at which a descriptive metadata record is assigned. This may be a scan of a photographic negative, front and back scans of a physical photographic print, a book, letter, pamphlet, map, or issue of a newspaper. In most instances there are multiple files/images/pages per item in our system, but here we are just talking about those larger units and not the files that make up the items themselves. Just wanted to make sure we were on the same page about that.

Items added per day

In looking at the daily data for the year, there were 215 days on which new content was processed and added to the collection, with no processing done on the other 150 days. The average number of items added per day during the year was 1,144 items. If we think about a ten-hour work day (roughly when the library is open for normal folks) that's 114 items per hour, or 1.9 new items created per minute during the work week last year.
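
The averages quoted above are easy to reproduce from the monthly table; here is a short check using only the figures as published in this post:

    # Quick check of the averages quoted in this post, computed from the
    # monthly totals in the table above.
    monthly = {
        "January": 32074, "February": 9220, "March": 7758, "April": 11161,
        "May": 11475, "June": 32549, "July": 18503, "August": 67769,
        "September": 83916, "October": 25537, "November": 73404, "December": 44279,
    }
    total = sum(monthly.values())
    print(total)                        # 417,645 items added in 2014
    print(int(total / 12))              # 34,803 items per month on average
    per_day = total / 365
    print(int(per_day))                 # 1,144 items per calendar day
    print(int(per_day / 10))            # 114 items per hour of a ten-hour day
    print(round(per_day / 10 / 60, 1))  # 1.9 items per minute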

Items by Type

I thought it might be interesting to see how the 417,645 were distributed among the various resource types that we categorize records into.  Here is that table.

Resource Type           Items
image_photo           197,133
text_newspaper        109,456
image_map              66,637
text_report            12,569
text                    9,517
text_patent             7,052
text_etd                4,449
physical-object         3,573
text_leg                1,660
text_book               1,171
text_journal            1,063
video                     804
text_article              494
image_postcard            366
collection                347
text_pamphlet             346
text_letter               235
text_legal                216
text_yearbook             180
image_presentation         96
image_artwork              44
text_clipping              44
dataset                    36
image_poster               30
image                      26
text_paper                 23
image_score                22
sound                      17
website                    13
text_review                12
text_chapter                8
text_prose                  5
text_poem                   1

As you can see the majority of all of the items added were in the category of image_photo (Photographs) or text_newspaper (Newspapers) with those two types accounting for 73% of the new additions to the system.
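
Since these datasets come straight from the unified Solr index, here is a hedged sketch of how similar monthly and per-type counts could be asked of a Solr index, using a date range facet and a field facet. The core name and the field names (date_added, resource_type) are illustrative guesses, not the actual UNT schema, and the requests package is assumed.

    # Hedged sketch: "items added per month" and "items by type" style counts
    # from a Solr index. Core and field names are assumptions for illustration.
    import requests

    SOLR_SELECT = "http://localhost:8983/solr/digital-collections/select"  # assumed

    params = {
        "q": "*:*",
        "rows": 0,
        "wt": "json",
        "facet": "true",
        # One bucket per month of 2014 on an assumed ingest-date field.
        "facet.range": "date_added",
        "facet.range.start": "2014-01-01T00:00:00Z",
        "facet.range.end": "2015-01-01T00:00:00Z",
        "facet.range.gap": "+1MONTH",
        # Plain field facet for the resource type breakdown.
        "facet.field": "resource_type",
        "facet.limit": -1,
    }

    counts = requests.get(SOLR_SELECT, params=params).json()["facet_counts"]
    monthly = counts["facet_ranges"]["date_added"]["counts"]  # [date, count, ...]
    print(list(zip(monthly[::2], monthly[1::2])))
    by_type = counts["facet_fields"]["resource_type"]
    print(list(zip(by_type[::2], by_type[1::2])))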

Closing

As I mentioned at the beginning of this post, I think knowing the metrics of other digital library programs is helpful for local initiatives in a number of ways. The UNT Libraries had a very successful year for adding new content; over the past few years we've been able to double the number of items each year. I don't think that's a rate of growth we can keep up, but it is always fun to try. How do repository systems at your institution look in relation to this? Sharing that data more broadly would be helpful to the digital library community overall, and I encourage others to take some time and make this data available.

If you have any specific questions for me let me know on twitter.

LITA: LITA Board Meeting One – ALA Midwinter 2015

planet code4lib - Sat, 2015-01-31 19:50

If you would like to listen in to the LITA Board meeting at ALA Midwinter 2015, it is streaming (in audio) below:

Code4Lib: 2015 Code of Conduct

planet code4lib - Sat, 2015-01-31 19:00

Code4Lib seeks to provide a welcoming, fun, and safe community and
conference experience as well as an ongoing community for everyone. We do not
tolerate harassment in any form. Discriminatory language and imagery
(including sexual) is not appropriate for any event venue, including talks,
or any community channel such as the chatroom or mailing list.

Harassment is understood as any behavior that threatens another person or
group, or produces an unsafe environment. It includes offensive verbal
comments or non-verbal expressions related to gender, gender identity,
gender expression, sexual orientation, disability, physical appearance,
body size, race, age, religious beliefs, sexual or discriminatory images
in public spaces (including online), deliberate intimidation, stalking,
following, harassing photography or recording, sustained disruption of
talks or other events, inappropriate physical contact, and unwelcome sexual
attention.

Conflict Resolution

  1. Initial Incident

    If you are being harassed, notice that someone else is being harassed,
    or have any other concerns, and you feel comfortable speaking with
    the offender, please inform the offender that he/she/ze has affected you
    negatively. Oftentimes, the offending behavior is unintentional, and the
    accidental offender and offended will resolve the incident by having
    that initial discussion.

    Code4Lib recognizes that there are many reasons speaking directly to
    the offender may not be workable for you (including but not limited to
    unfamiliarity with the conference or its participants, lack of spoons,
    and concerns for personal safety). If you don't feel comfortable
    speaking directly with the offender for any reason, skip straight to
    step 2.

  2. Escalation

    If the offender insists that he/she/ze did not offend, if the offender is
    actively harassing you, or if direct engagement is not a good option
    for you at this time, then you will need a third party to step in.

    If you are at a conference or other event, find an event organizer or
    staff person, who should be listed on the wiki.
    If you can't find an event organizer, there will be other staff
    available to help if the situation calls for immediate action.

    If you are in the #code4lib IRC, the zoia command to list people
    designated as channel helpers is @helpers. At most times, there is at least one helper in the channel.

    For the listserv, you have a free-for-all for public messages; however,
    the listserv does have a maintainer, Eric Lease Morgan.

  3. Wider community response to Incident:

    If the incident doesn't pass the first step (discussion reveals offense
    was unintentional, apologies said, public note or community is informed
    of resolution), then there's not much the community can do at this point
    since the incident was resolved without outside intervention.

    If incident results in corrective action, the community should support
    the decision made by the Help in Step 2 if they choose corrective action,
    like ending a talk early or banning from the listserv, as well as
    support those harmed by the incident, either publicly or privately
    (whatever individuals are comfortable with).

    If the Help in Step 2 run into issues implementing the CoC, then the
    Help should come to the community with these issues and the community
    should revise the CoC as they see fit.

    In Real Life people will have opinions about how the CoC is enforced.
    People will argue that a particular decision was unfair, and others will
    say that it didn't go far enough. We can't stop people having
    opinions, but what we could do is have constructive discussions
    that lead to something tangible (affirmation of decision, change in CoC,
    modify decision, etc.).

Sanctions

Participants asked to stop any harassing behavior are expected to comply
immediately. If a participant engages in harassing behavior, organizers may
take any action they deem appropriate, including warning the offender,
expulsion from the Code4Lib event, or banning the offender from a chatroom
or mailing list.

Specific sanctions may include but are not limited to:

  • warning the harasser to cease their behavior and that any further reports
    will result in other sanctions
  • requiring that the harasser avoid any interaction with, and physical
    proximity to, their victim for the remainder of the event
  • early termination of a talk that violates the policy
  • not publishing the video or slides of a talk that violated the policy
  • not allowing a speaker who violated the policy to give (further) talks at
    the event
  • immediately ending any event volunteer responsibilities and privileges the
    harasser holds
  • requiring that the harasser not volunteer for future Code4lib events
    (either indefinitely or for a certain time period)
  • requiring that the harasser immediately leave the event and not return
  • banning the harasser from future events (either indefinitely or for a
    certain time period)
  • publishing an account of the harassment

Code4Lib event organizers can be identified by their name badges, and will
help participants contact hotel/venue security or local law enforcement,
provide escorts, or otherwise assist those experiencing harassment to feel
safe for the duration of the event. Code4Lib IRC volunteers can be identified
by issuing the @helpers command to the #code4lib bot named "zoia".

If an incident occurs, please use the following contact information:

  • Conference organizers: Tom Johnson, 360-961-7721 or Evviva Weinraub, 617-909-2913
  • Hilton Portland & Executive Tower: 503-226-1611
  • Portland Police Department: 503-823-0000
  • Portland Women's Crisis Line (24/7): 503-235-5333 (or toll-free: 888-235-5333) 24/7
  • Radio Cab: 503-227-1212
  • IRC channel administrators: anarchivist, mistym, mjgiarlo, ruebot; or enter @helpers in the IRC channel

We expect participants to follow these rules at all conference venues,
conference-related social events, community gatherings, and online communication channels.

We value your participation in the Code4Lib community, and your efforts to
keep Code4Lib a safe and friendly space for all participants!

Licensed under CC0

Based on the example policy from the Geek Feminism wiki, created by the Ada Initiative and other volunteers.

Eric Hellman: Why GitHub is Important for Book Publishing

planet code4lib - Fri, 2015-01-30 23:13
How do you organize large numbers of people for a common purpose? For millennia, the answer has been some sort of hierarchical organization. An army, or a feudal system topped with a king. To reach global scale, these hierarchies propagated customs and codes for behavior: laws, religions, ideology. Most of what you read in history books is really the history of these hierarchies. It wasn't possible to orchestrate big efforts or harness significant resources any other way.

In the 20th century, mass media redistributed much of this organizational power. In politics, charismatic individuals could motivate millions of people independently of the hierarchies that maintain command and control. But for the most part, one hierarchy got swapped for another. In business, production innovations such as Henry Ford's assembly line needed the hierarchy to support the capital investments.

I think the history of the 21st century will be the story of non-hierarchical systems of human organization enabled by the Internet. From this point of view, Wikipedia is particularly important not only for its organization of knowledge, but because it demonstrated that thousands of people can be organized with extremely small amounts of hierarchy. Anyone can contribute, anyone can edit, and many do. Bitcoin, or whatever cryptocurrency wins out, won't be successful because of a hierarchy but rather because of a framework of incentives for a self-interested network of entities to work together. Crowdfunding will enable resources to coalesce around needs without large hierarchical foundations or financial institutions.

So let's think a bit about book publishing. Through the 20th century, publishing required a significant amount of investment in capital: printing presses, warehouses, delivery trucks, bookstores, libraries, and people with specialized skills and abilities. A few large publishing companies emerged along with big-box retailers that together comprised an efficient machine for producing, distributing and monetizing books of all kinds. The transition from print to digital has eliminated the need for the physical aspects of the book publishing machine, but the human components of that machine remain essential. It's no longer clear that the hierarchical organization of publishing is necessary for the organization of publishing's human effort.

I've already mentioned Wikipedia's conquest of encyclopedia publishing, by dint of its large scale and wide reach. But equally important to its success has been a set of codes and customs bound together in a suite of collaboration and workflow tools. Version tracking allows for easy reversion of edits. "Talk pages" and notifications facilitate communication and collaboration. (And edit-wars and page locking, but that's another bucket of fish.)

Most publishing projects have audiences that are too small or requirements too specific to support Wikipedia's anyone-can-edit-or-revert model of collaboration. A more appropriate model for collaboration in publishing  is one widely used for software development.

Modern software development requires people with different skills to work together. Book publishing is the same. Designers, engineers, testers, product managers, writers, and subject domain experts may each have an important role in creating a software application; authors, editors, proofreaders, illustrators, designers, subject experts, agents, and publicists may all work together on a book. Book publishing and software can be either open or proprietary. The team producing a book or a piece of software might number from one to a hundred. Books and programs can go into maintenance mode or be revised in new editions or versions. Translation into new languages happens for both. Assets from one project can be reused in other projects.

Open source software has been hugely successful over the past few decades. Along the way, an ecosystem of collaboration tools and practices has evolved to support both open source development and software development in general. Many aspects of this ecosystem have been captured in GitHub.

The "Git" in GitHub comes from git, an open source distributed version control system initially written by Linus Torvalds, the Linus behind Linux. It's fast, and it lets you work on a local code repository and then merge your changes with a repository stored somewhere else.

In just two sentences, I've touched on several concepts that may be foreign to many book publishing professionals. Microsoft Word's "track changes" is probably the closest that most authors get to a version control system. The big difference is that "track changes" is designed to facilitate collaboration between a maximum of two people. Git works easily with many contributors. A code "repository" holds more than just code, it can contain all the assets, documentation, and licenses associated with a project. And unlike "track changes", Git remembers the entire history of your project. Many book publishers still don't keep together all the assets that go into a book. And I'm guessing that publishers are still working on centralizing their asset stores instead of distributing them!

Git is just one of the useful aspects of GitHub. I think the workflow tools are perhaps more important. Developers talk about the workflow variants such as "git-flow" and "GitHub-flow", but the differences are immaterial to this discussion. Here's what it boils down to: Someone working on a project will first create a "feature branch", a copy of the repository that adds a feature or fixes a bug. When the new feature has been tested and is working, the changes will be "committed". Each set of changes is given an identifier and a message explaining what has been changed. The branch's developer then sends a "pull request" to the maintainers of the repository. A well crafted pull request will provide tests and documentation for the new feature. If the maintainers like the changes, they "pull" the changes into the main branch of the repository. Each of these steps is a push of a button on GitHub, and GitHub provides annotation, visualization and commenting tools that support discussions around each pull request, as well as issue lists and wiki pages.
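
As a simplified sketch of those steps, the snippet below drives the feature-branch part of the flow with git commands scripted from Python; the repository path, branch name, and file are invented for illustration, and the pull request itself still gets opened on GitHub afterwards.

    # Simplified sketch of the feature-branch workflow described above.
    # Repository path, branch and file names are invented; the pull request is
    # opened on GitHub after the push.
    import subprocess

    REPO = "/tmp/my-book"              # hypothetical local clone of the book
    BRANCH = "fix-chapter-3-typos"     # hypothetical feature branch

    def git(*args):
        """Run one git command inside the book's repository."""
        subprocess.check_call(["git", "-C", REPO] + list(args))

    git("checkout", "-b", BRANCH)      # start the feature branch
    # ... edit chapter-3.md with your editor of choice ...
    git("add", "chapter-3.md")
    git("commit", "-m", "Fix typos in chapter 3")
    git("push", "--set-upstream", "origin", BRANCH)
    # From here the contributor sends a pull request on GitHub, where the
    # maintainers review the change and pull it into the main branch.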

The reason the workflow tools and the customs surrounding their use are so important is that anyone who has used them already knows how to participate in another project. For an excellent non-programming example, take a look at the free-programming-books repository, which is a basic list of programming books available online for free. As of today, 512 different people have contributed a total of 2,854 sets of changes to the repository, have expanded it to books in 23 languages, and have added free courses, screencasts and interactive tutorials. The maintainers enforce some basic standards and make sure that the list is free of pirated books and the like.

It's also interesting that there are 7,229 "forks" of free-programming-books. Each of these could be different. If the main free-programming-books repo disappears, or if the maintainers go AWOL, one of these forks could become the main fork. Or if one group of contributors want to move the project in a different direction from the maintainers, it's easy to do.

Forking a book is a lot more common than you might think. Consider the book Robinson Crusoe by Daniel Defoe. OCLC's WorldCat lists 7,459 editions of this book, each one representing significantly more effort than a button push in a workflow system. It's common to have many editions of out-of-copyright books of course, but it's also becoming common for books developed with open processes. As an example, look at the repository for Amy Brown and Greg Wilson's Architecture of Open Source Applications.  It has 5 contributors, and has been forked 58 times. For another example of using GitHub to write a book, read Scott Chacon's description of how he produced the second edition of Pro Git. (Are you surprised that a founder of GitHub is using GitHub to revise his book about Git?)

There's another aspect of modern software engineering with GitHub support that could be very useful for book publishing and distribution. "Continuous integration" is essential for development of complex software systems because changes in one component can have unintended effects on other components. For that reason, when a set of changes is committed to a project, the entire project needs to be rebuilt and retested. GitHub supports this via "hooks". For example, a "post-commit" hook can trigger a build-test apparatus; hooks can even be used to automatically deploy the new software version into production environments. In the making of a book, the insertion of a sentence might necessitate re-pagination and re-indexing. With continuous integration, you can imagine the correction of a typo immediately resulting in changes in all the copies of a textbook for sale. (or even the copies that had already been purchased!)
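
As a hedged sketch of what such a hook could look like on the publisher's side, here is a tiny HTTP endpoint that rebuilds an ebook whenever GitHub reports a push. The "make ebook" build step is a placeholder, and a production setup would also verify the webhook's secret signature, which is omitted here for brevity.

    # Sketch of a push-hook receiver: GitHub POSTs a JSON payload to this
    # endpoint after each push, and we rebuild the ebook. The "make ebook"
    # command is a placeholder; real deployments should also verify the
    # X-Hub-Signature secret.
    import json
    import subprocess
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class PushHandler(BaseHTTPRequestHandler):
        def do_POST(self):
            length = int(self.headers.get("Content-Length", 0))
            body = self.rfile.read(length) or b"{}"
            payload = json.loads(body.decode("utf-8"))
            if self.headers.get("X-GitHub-Event") == "push":
                repo = payload.get("repository", {}).get("full_name", "unknown")
                # Placeholder: regenerate EPUB/PDF, re-paginate, re-index, etc.
                subprocess.call(["make", "ebook"])
                print("Rebuilt ebook for", repo)
            self.send_response(204)
            self.end_headers()

    if __name__ == "__main__":
        HTTPServer(("", 8000), PushHandler).serve_forever()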

A number of startups have recognized the applicability of Git and GitHub to book publishing. Leanpub, GitBook, and Penflip are supporting GitHub backends for open publishing models; so far adoption has been most rapid in author communities that already "get" GitHub, for example, software developers. The company that is best able to teach a GitHub-like toolset to non-programmers will have a good and worthy business, I think.

As more people learn and exercise the collaboration culture of GitHub, new things will become possible. Last year, I became annoyed that I couldn't fix a problem I found with an ebook from Project Gutenberg. It seemed obvious to me that I should put my contributions into a GitHub repo so that others could easily make use of my work. I created a GitHub organization for "Project GitenHub". In the course of creating my third GitenHub book, I discovered that someone named Seth Woodward had done the same thing a year before me, and he had moved over a thousand Project Gutenberg texts onto GitHub, in the "GITenberg"  organization. Since I knew how to contribute to a GitHub project, I knew that I could start sending pull requests to GITenberg to add my changes to its repositories. And so Seth and I started working together on GITenberg.

Seth has now loaded over 50,000 books from Project Gutenberg onto GitHub. (The folks at Project Gutenberg are happy to see this happening, by the way.) Seth and I are planning out how to make improved quality ebooks and metadata for all of these books, which would be impossible without a way to get people to work together. We put in a funding proposal to the Knight Foundation's NewsChallenge competition. And we were excited to learn that (as of Jan 1, 2015) the Text Creation Partnership has added 25,000 texts from EEBO (Early English Books Online) on GitHub. So it's an exciting time for books on GitHub.

There's quite a bit of work to do. Having 50,000 repositories in an organization strains some GitHub tools. We need to figure out how to explain the GitHub workflow to potential contributors who aren't software developers. We need to  make bibliographic metadata more git-friendly. And we need to create a "continuous integration system" for building ebooks.

Who knows, it might work.

Update January 30: Our NewsChallenge proposal is being funded!!!
