Planet Code4Lib
Updated: 4 hours 26 min ago

D-Lib: Collaborative Construction of Digital Cultural Heritage: A Synthesis of Research on Online Sociability Determinants

Mon, 2015-11-16 14:14
Article by Chern Li Liew, Victoria University of Wellington, New Zealand

D-Lib: Semantometrics in Coauthorship Networks: Fulltext-based Approach for Analysing Patterns of Research Collaboration

Mon, 2015-11-16 14:14
Article by Drahomira Herrmannova, KMi, The Open University and Petr Knoth, Mendeley Ltd.

D-Lib: Efficient Table Annotation for Digital Articles

Mon, 2015-11-16 14:14
Article by Matthias Frey, Graz University of Technology, Austria and Roman Kern, Know-Center GmbH, Austria

D-Lib: NLP4NLP: The Cobbler's Children Won't Go Unshod

Mon, 2015-11-16 14:14
Article by Gil Francopoulo, IMMI-CNRS + TAGMATICA, France; Joseph Mariani, IMMI-CNRS + LIMSI-CNRS, France; Patrick Paroubek, LIMSI-CNRS, France

D-Lib: Holiday Reading

Mon, 2015-11-16 14:14
Editorial by Laurence Lannom, CNRI

D-Lib: Developing Best Practices in Digital Library Assessment: Year One Update

Mon, 2015-11-16 14:14
Article by Joyce Chapman, Duke University Libraries, Jody DeRidder, University of Alabama Libraries and Santi Thompson, University of Houston Libraries

D-Lib: The OpenAIRE Literature Broker Service for Institutional Repositories

Mon, 2015-11-16 14:14
Article by Michele Artini, Claudio Atzori, Alessia Bardi, Sandro La Bruzzo, Paolo Manghi and Andrea Mannocci, Istituto di Scienza e Tecnologie dell'Informazione "A. Faedo" -- CNR, Pisa, Italy

D-Lib: In Brief: The Clipper Project

Mon, 2015-11-16 14:14

LITA: Agile Development: Building an Agile Culture

Mon, 2015-11-16 14:00

Over the last few months I have described various components of Agile development. This time around I want to talk about building an Agile culture. Agile is more than just a codified process; it is a development approach, a philosophy, one that stresses flexibility and communication. In order for a development team to successfully implement Agile, the organization must embrace and practice the appropriate culture. In this post I will briefly discuss several tips that will help you build an Agile culture.

The Right People

It all starts here: as with pretty much any undertaking, you need the right people in place, which is not necessarily the same as saying the best people. Agile development necessitates a specific set of skills that are not intrinsically related to coding mastery: flexibility, teamwork, and the ability to take responsibility for a project’s ultimate success are all extremely important. Once the team is formed, management should work to bring team members closer together and create the right environment for information sharing and investment.

Encourage Open Communication

Because of Agile’s quick pace and flexibility, and the lack of overarching structures and processes, open communication is crucial. A team must develop communication pathways and support structures so that all team members are aware of where the project stands at any one moment (the daily scrum is a great example of this). More important, however, is to convince the team to open up and conscientiously share individual progress, key roadblocks, and concerns about the path of development. Likewise, management must be proactive about sharing project goals and business objectives with the team. An Agile team is always looking for the most efficient way to deliver results, and the more information they receive about the motivation and goals that lie behind a project, the better. Agile managers must actively encourage a culture that says “we’re all in this together, and together we will find the solution to the problem.” Silos are Agile’s kryptonite.

Empower the Team

Agile only works when everyone on the team feels responsible for the success of the project, and management must do its part by encouraging team members to take ownership of the results of their work, and trusting them to do so. Make sure everyone on the team understands the ultimate organizational need, assign specific roles to each team member, and then allow team members to find their own ways to meet the stated goals. Too often in development there is a basic disconnect between the people who understand the business needs and those who have the technical know-how to make them happen. Everyone on the team needs to understand what makes for a successful project, so that wasted effort is minimized.

Reward the Right Behaviors

Too often in development organizations, management metrics are out of alignment with process goals. Hours worked are a popular metric teams use to evaluate members, although often proxies like hours spent at the office, or time spent logged into the system, are used. With Agile, the focus should be on results. As long as a team meets the stated goals of a project, the less time spent working on the solution, the better. Remember, the key is efficiency, and developing software that solves the problem at hand with as few bells and whistles as possible. If a team is consistently beating its time estimates by a significant margin, it should recalibrate its estimation procedures. Spending all night at the office working on a piece of code is not a badge of honor, but a failure of the planning process.

Be Patient

Full adoption of Agile takes time. You cannot expect a team to change its fundamental philosophy overnight. The key is to keep working at it, taking small steps towards the right environment and rewarding progress. Above all, management needs to be transparent about why it considers this change important. A full transition can take years of incremental improvement. Finally, be conscious that the steady state for your team will likely not look exactly like the theoretical ideal. Agile is adaptable and each organization should create the process that works best for its own needs.

If you want to learn more about building an Agile culture, check out the following resources:

In your experience, how long does it take for a team to fully convert to the Agile way? What is the biggest roadblock to adoption? How is the process initiated and who monitors and controls progress?

“Scrum process” image by Lakeworks (own work), licensed under the GFDL or CC BY-SA 4.0-3.0-2.5-2.0-1.0, via Wikimedia Commons.

Conal Tuohy: Taking control of an uncontrolled vocabulary

Mon, 2015-11-16 13:49

A couple of days ago, Dan McCreary tweeted:

Working on new ideas for NoSQL metadata management for a talk next week. Focus on #NoSQL, Documents, Graphs and #SKOS. Any suggestions?

— Dan McCreary (@dmccreary) November 14, 2015

It reminded me of some work I had done a couple of years ago for a project which was at the time based on Linked Data, but which later switched away from that platform, leaving various bits of RDF-based work orphaned.

One particular piece which sprung to mind was a tool for dealing with vocabularies. Whether it’s useful for Dan’s talk I don’t know, but I thought I would dig it out and blog a little about it in case it’s of interest more generally to people working in Linked Open Data in Libraries, Archives and Museums (LODLAM).

I told Dan:

@dmccreary i did a thing once with an xform using a sparql query to assemble a skos concept scheme, edit it, save in own graph. Of interest?

— Unholy Taco (@conal_tuohy) November 14, 2015

When he sounded interested, I made a promise:

@dmccreary i have the code somewhere. .. will dig it out

— Unholy Taco (@conal_tuohy) November 14, 2015

I know I should find a better home for this and the other orphaned LODLAM components, but for now, the original code can be seen here:

I’ll explain briefly how it works, but first, I think it’s necessary to explain the rationale for the vocabulary tool, and for that you need to see how it fits into the LODLAM environment.

At the moment there is a big push in the cultural sector towards moving data from legacy information systems into the “Linked Open Data (LOD) Cloud” – i.e. republishing the existing datasets as web-based sets of inter-linked data. In some cases people are actually migrating from their old infrastructure, but more commonly people are adding LOD capability to existing systems via some kind of API (this is a good approach, to my way of thinking – it reduces the cost and effort involved enormously). Either way, you have to be able to take your existing data and re-express it in terms of Linked Data, and that means facing up to some challenges, one of which is how to manage “vocabularies”.

Vocabularies, controlled and uncontrolled

What are “vocabularies” in this context? A “vocabulary” is a set of descriptive terms which can be applied to a record in a collection management system. For instance, a museum collection management system might have a record for a teacup, and the record could have a number of fields such as “type”, “maker”, “pattern”, “colour”, etc. The value of the “type” field would be “teacup”, for instance, but another piece in the collection might have the value “saucer” or “gravy boat” or what have you. These terms, “teacup”, “plate”, “dinner plate”, “saucer”, “gravy boat” etc, constitute a vocabulary.

In some cases, this set of terms is predefined in a formal list; this is called a “controlled vocabulary”. Usually each term has a description or definition (a “scope note”), and if there are links to other related terms (e.g. “dinner plate” is a “narrower term” of “plate”), as well as synonyms, including in other languages (“taza”, “plato”, etc.), then the controlled vocabulary is called a thesaurus. A thesaurus or a controlled vocabulary can be a handy guide to finding things. You can navigate your way around a thesaurus, from one term to another, to find related classes of object which have been described with those terms, or the thesaurus can be used to automatically expand your search queries without you having to do anything; you can search for all items tagged as “plate” and the system will automatically also search for items tagged “dinner plate” or “bread plate”.
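As a minimal sketch of how such query expansion might work (the item-tagging property dct:subject, the English labels and the example data are assumptions for illustration, not part of any particular system), a SPARQL query could pull in a concept and everything narrower than it:

PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
PREFIX dct:  <http://purl.org/dc/terms/>

# Find items tagged with "plate" or with any (transitively) narrower concept,
# such as "dinner plate" or "bread plate".
SELECT ?item WHERE {
  ?plate skos:prefLabel "plate"@en .   # the concept the user searched for
  ?concept skos:broader* ?plate .      # that concept, or anything narrower than it
  ?item dct:subject ?concept .         # hypothetical tagging property
}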

In other cases, though, these vocabularies are uncontrolled. They are just tags that people have entered in a database, and they may be consistent or inconsistent, depending on who did the data entry and why. An uncontrolled vocabulary is not so useful. If the vocabulary includes the terms “tea cup”, “teacup”, “Tea Cup”, etc. as distinct terms, then it’s not going to help people to find things because those synonyms aren’t linked together. If it includes terms like “Stirrup Cup” it’s going to be less than perfectly useful because most people don’t know what a Stirrup Cup is (it is a kind of cup).

The vocabulary tool

So one of the challenges in moving to a Linked Data environment is taking the legacy vocabularies which our systems use, and bringing them under control; linking synonyms and related terms together, providing definitions, and so on. This is where my vocabulary tool would come in.

In the Linked Data world, vocabularies are commonly modelled using a system called Simple Knowledge Organization System (SKOS). Using SKOS, every term (a “Concept” in SKOS) is identified by a unique URI, and these URIs are then associated with labels (such as “teacup”), definitions, and with other related Concepts.
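To make that concrete, here is a rough sketch of two SKOS concepts expressed as triples, wrapped in a SPARQL INSERT DATA for convenience (the URIs, labels and scope note are invented for this example):

PREFIX skos: <http://www.w3.org/2004/02/skos/core#>

# Hypothetical example data: a tiny concept scheme with a broader/narrower pair.
INSERT DATA {
  <http://example.org/vocab/> a skos:ConceptScheme ;
      skos:prefLabel "Crockery vocabulary"@en .

  <http://example.org/vocab/plate> a skos:Concept ;
      skos:inScheme <http://example.org/vocab/> ;
      skos:prefLabel "plate"@en .

  <http://example.org/vocab/dinner_plate> a skos:Concept ;
      skos:inScheme <http://example.org/vocab/> ;
      skos:prefLabel "dinner plate"@en ;
      skos:prefLabel "plato llano"@es ;
      skos:scopeNote "A large plate used for serving main courses."@en ;
      skos:broader <http://example.org/vocab/plate> .
}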

The vocabulary tool is built with the assumption that a legacy vocabulary of terms has been migrated to RDF form by converting every one of the terms into a URI, simply by sticking a common prefix on it and, if necessary, “munging” the text to replace or encode spaces or other characters which aren’t allowed in URIs. For example, this might produce a bunch of URIs like this:

  • etc.
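As a rough sketch of that munging step (the base URI and the terms here are hypothetical; the tool’s query, shown further down, performs the reverse transformation when it regenerates labels from URIs):

# Mint a concept URI for each plain-text term by prefixing a base URI and
# replacing spaces with underscores (hypothetical base URI and example terms).
SELECT ?term ?uri WHERE {
  VALUES ?term { "teacup" "gravy boat" "stirrup cup" }
  BIND(IRI(CONCAT("http://example.org/vocab/", REPLACE(?term, " ", "_"))) AS ?uri)
}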

What the tool then does is it finds all these URIs and gives you a web form which you can fill in to describe them and link them together. To be honest I’m not sure how far I got with this tool, but ultimately the idea would be that you would be able to organise the terms into a hierarchy, link synonyms, standardise inconsistencies by indicating “preferred” and “non-preferred” terms (i.e. you could say that “teacup” is preferred, and that “Tea Cup” is a non-preferred equivalent).

When you start the tool, you have the opportunity to enter a “base URI” – the common prefix described above – and the tool would then find every such URI which was in use, and display them on the form for you to annotate. When you had finished imposing a bit of order on the vocabulary, you would click “Save” and your annotations would be stored in an RDF graph named with that same base URI. Later, your legacy system might introduce more terms, and your Linked Data store would have some new URIs with that prefix. You would start up the form again, enter the base URI, and load all the URIs again. All your old annotations would also be loaded, and you would see the gaps where there were terms that hadn’t been dealt with; you could go and edit the definitions and click “Save” again.
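A rough sketch of the kind of update the “Save” step might issue (the graph name, concept URI and labels are hypothetical; the real form builds its update from the data instance described below):

PREFIX skos: <http://www.w3.org/2004/02/skos/core#>

# Store the annotations in a named graph whose name is the vocabulary's base URI,
# marking "teacup" as preferred and "Tea Cup" / "tea cup" as non-preferred synonyms.
INSERT DATA {
  GRAPH <http://example.org/vocab/> {
    <http://example.org/vocab/teacup> a skos:Concept ;
        skos:inScheme <http://example.org/vocab/> ;
        skos:prefLabel "teacup"@en ;
        skos:altLabel "Tea Cup"@en ;
        skos:altLabel "tea cup"@en .
  }
}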

In short, the idea of the tool was to be able to use, and to continue to use, legacy systems which lack controlled vocabularies, and actually impose control over those vocabularies after converting them to LOD.

How it works

OK here’s the technical bit.

The form is built using XForms technology, and I coded it to use a browser-based (i.e. Javascript) implementation of XForms called XSLTForms.

When the XForm loads, you can enter the common base URI of your vocabulary into a text box labelled “Concept Scheme URI”, and click the “Load” button. When the button is clicked, the vocabulary URI is substituted into a pre-written SPARQL query (replacing the <vocabulary-uri> placeholder below) and sent off to a SPARQL server. This SPARQL query is the tricky part of the whole system really: it finds all the URIs, and it loads any labels which you might have already assigned to them, and if any don’t have labels, it generates one by converting the last part of the URI back into plain text.

prefix skos: <http://www.w3.org/2004/02/skos/core#>
construct {
  ?vocabulary a skos:ConceptScheme ;
    skos:prefLabel ?vocabularyLabel .
  ?term a skos:Concept ;
    skos:inScheme ?vocabulary ;
    skos:prefLabel ?prefLabel .
  ?subject ?predicate ?object .
}
where {
  bind(<vocabulary-uri> as ?vocabulary)
  {
    optional { ?vocabulary skos:prefLabel ?existingVocabularyLabel }
    bind("Vocabulary Name" as ?vocabularyLabel)
    filter(!bound(?existingVocabularyLabel))
  }
  union
  {
    ?subject ?predicate ?term .
    bind(replace(substr(str(?term), strlen(str(?vocabulary)) + 1), "_", " ") as ?prefLabel)
    optional { ?term skos:prefLabel ?existingPrefLabel }
    filter(!bound(?existingPrefLabel))
    filter(strstarts(str(?term), str(?vocabulary)))
    filter(?term != ?vocabulary)
  }
  union
  {
    graph ?vocabulary { ?subject ?predicate ?object }
  }
}

The resulting list of terms and labels is loaded into the form as a “data instance”, and the form automatically grows to provide data entry fields for all the terms in the instance. When you click the “Save” button, the entire vocabulary, including any labels you’ve entered, is saved back to the server.

William Denton: Anthropocene librarianship

Sun, 2015-11-15 20:59

Anthropocene librarianship is the active response librarians make to the causes and effects of climate change so severe that humans are creating a new geological epoch.

(I’ve been mulling this over this week and wanted to put the idea out there because it’s giving me a good framework for thinking about things. I’m curious to know what you make of it.)

What is the Anthropocene?

The idea was first set out in Crutzen and Stoermer (2000):

Considering these and many other major and still growing impacts of human activities on earth and atmosphere, and at all, including global, scales, it seems to us more than appropriate to emphasize the central role of mankind in geology and ecology by proposing to use the term “anthropocene” for the current geological epoch. The impacts of current human activities will continue over long periods.

They end with:

Without major catastrophes like an enormous volcanic eruption, an unexpected epidemic, a large-scale nuclear war, an asteroid impact, a new ice age, or continued plundering of Earth’s resources by partially still primitive technology (the last four dangers can, however, be prevented in a real functioning noösphere) mankind will remain a major geological force for many millennia, maybe millions of years, to come. To develop a world-wide accepted strategy leading to sustainability of ecosystems against human induced stresses will be one of the great future tasks of mankind, requiring intensive research efforts and wise application of the knowledge thus acquired in the noösphere, better known as knowledge or information society. An exciting, but also difficult and daunting task lies ahead of the global research and engineering community to guide mankind towards global, sustainable, environmental management.

For more, Wikipedia has a good overview. The Working Group on the ‘Anthropocene’ (which sits inside the International Union of Geological Sciences) defines it so (with odd punctuation):

The 'Anthropocene’ is a term widely used since its coining by Paul Crutzen and Eugene Stoermer in 2000 to denote the present time interval, in which many geologically significant conditions and processes are profoundly altered by human activities. These include changes in: erosion and sediment transport associated with a variety of anthropogenic processes, including colonisation, agriculture, urbanisation and global warming. the chemical composition of the atmosphere, oceans and soils, with significant anthropogenic perturbations of the cycles of elements such as carbon, nitrogen, phosphorus and various metals. environmental conditions generated by these perturbations; these include global warming, ocean acidification and spreading oceanic 'dead zones’. the biosphere both on land and in the sea, as a result of habitat loss, predation, species invasions and the physical and chemical changes noted above.

What does Anthropocene librarianship do?

Some examples, but you will be able to think of more:

  • Collections: building collections that serve our users’ needs regarding everything about climate change; sharing resources; keeping users informed about what we have and how it’s useful; providing reader’s advisory about climate fiction.
  • Preservation: preserving materials in all forms and carriers, including knowledge, culture, the web, data, code and research; collaborating with others on preserving languages, seeds, etc.; guaranteeing long-term stability of online sources; saving libraries and special collections at risk to disasters; storing original documents and special collections about climate-related research (e.g. Harvard Library’s Papers of the Intergovernmental Panel on Climate Change).
  • Sustainability: of our buildings and architecture (the green libraries work currently underway); of our practices, processes and platforms.
  • Greenhouse gas reductions: in buildings; power usage overall; from paper and power in printers and photocopiers; purchasing; book delivery between branches; conference arrangements.
  • Preparation: preparing for droughts, storms, floods, heat waves, higher sea levels, temperature increases, changes in agriculture, extinctions, climate migrations, conflict, regulations enforcing reduced carbon emissions, etc.
  • Disaster response: providing reference services; providing telephone and internet access; lending technology; supporting crisis mapping.
  • Climate migrations: providing services for incoming migrants; preserving what they leave behind.
  • Collaborations: with libraries, associations and communities in areas under pressure or at risk; with researchers; with climate change groups.
  • Communities: hosting shelter in cool air during heat waves; making meeting spaces available to community groups.
  • Advocacy: about the science and politics; about responses and remedies; about what libraries, archives and our local communities need and can do.
  • Information literacy and climate literacy: about the science and how it is done; about the politics and how it is made; about resources to help understand and respond to climate change; dealing with climate change deniers; using climate change as an example subject in instruction; providing subject guides, workshops, classes, reference service at climate change events.
  • Research: applying library and information science methods to climate change-related disciplines, their methods, scholars, publications, practices, discourse, etc.; collaborating on and supporting work by researchers in those fields.
  • Free and open: access, data, software, research; making all work in this area freely available to everyone under the best license (Creative Commons, GPL, etc.).
  • Social justice: understanding and explaining how climate change is connected to issues about economics, law, social policy, etc.
  • Values: recognizing values shared with environmental and other groups, such as preservation, conservation, stewardship and long time frames.
  • Prefiguration: “making one’s means as far as possible identical with one’s ends” as Graeber (2014) puts it; putting into practice today what we want our work, profession, institutions and organizations to be like in the future.

The term

There is debate about whether the term “Anthropocene” is valid and if so when the interval began. Boswell (1892) quotes Dr. Johnson: “Depend upon it, Sir, when any man knows he is to be hanged in a fortnight, it concentrates his mind wonderfully.” Climate change isn’t two weeks away, it’s last year, now, and decades and centuries ahead. “Anthropocene librarianship” is meant to help concentrate our minds.

The current literature

A search in Library and Information Science Abstracts (one of the major subscription article databases in LIS; it’s run by ProQuest) turns up nothing for the word “anthropocene”.

“Climate change” is one of its subject terms, however, and that shows 17 results today.

Here they are:

  • Adamich, Tom et al. “The Gov Doc Kids Group and Free Government Information.” IFLA Journal 38.1 (2012): 68–77.
  • Dutt, Bharvi, K. C. Garg, and Archita Bhatta. “A Quantitative Assessment of the Articles on Environmental Issues Published in English-Language Indian Dailies.” Annals of Library and Information Studies 60.3 (2013): 219–226.
  • Elia, Emmanuel F., Stephen Mutula, and Christine Stilwell. “Indigenous Knowledge Use in Seasonal Weather Forecasting in Tanzania: The Case of Semi-Arid Central Tanzania.” South African Journal of Libraries and Information Science 80.1 (2014): 18–27.
  • Etti, Susanne et al. “Growing the ERM Energy and Climate Change Practice Through Knowledge Sharing.” Journal of Information & Knowledge Management 9.3 (2010): 241–250.
  • Gordon-Clark, Matthew. “Paradise Lost? Pacific Island Archives Threatened by Climate Change.” Archival Science 12.1 (2012): 51–67.
  • Hall, Richard. “Towards a Resilient Strategy for Technology-Enhanced Learning.” Campus-Wide Information Systems 28.4 (2011): 234–249.
  • Hiroshi, Hirano. “Usage details of the Earth Simulator and sustained performance of actual applications.” Journal of Information Processing and Management 48.5 (2005): 268–275.
  • Holgate, Becky. “Global Climate Change.” The School Librarian 63.2 (2015): 84.
  • Islam, Md. Shariful. “The Community Development Library in Bangladesh.” Information Development 25.2 (2009): 99–111.
  • Johansen, Bruce E. “Media Literacy and ‘Weather Wars:’ Hard Science and Hardball Politics at NASA.” SIMILE: Studies in Media & Information Literacy Education 6.3 (2006): n. pag.
  • Luz, Saturnino, Masood Masoodian, and Manuel Cesario. “Disease Surveillance and Patient Care in Remote Regions: An Exploratory Study of Collaboration among Health-Care Professionals in Amazonia.” Behaviour & Information Technology 34.6 (2015): 548–565.
  • Murgatroyd, Peter, and Philip Calvert. “Information-Seeking and Information-Sharing Behavior in the Climate Change Community of Practice in the Pacific.” Science & Technology Libraries 32.4 (2013): 379–401.
  • Mwalukasa, Nicholaus. “Agricultural Information Sources Used for Climate Change Adaptation in Tanzania.” Library Review 62.4-5 (2013): 266–292.
  • Sabou, Marta, Arno Scharl, and Michael Fols. “Crowdsourced Knowledge Acquisition: Towards Hybrid-Genre Workflows.” International Journal on Semantic Web and Information Systems 9.3 (2013): 14–41.
  • Stoss, F. W. “The Heat Is on! U.S. Global Climate Change Research and Policy.” EContent 23.4 (2000): 36–38.
  • Vaughan, K. T. L. “Science and Technology Sources on the Internet. Global Warming and Climate Change Science.” Issues in Science and Technology Librarianship 32 (2001): n. pag.
  • Veefkind, V. et al. “A New EPO Classification Scheme for Climate Change Mitigation Technologies.” World Patent Information 34.2 (2012): 106–111.

Quite a mix, from around the world, and representative of the wide range of subject matter LIS has in its scope.

But only 17? Since 2000? This certainly isn’t a full literature review, but 17 is far too few for even a quick search. We need a lot more work done.

The Journal of Anthropocene Librarianship

Perhaps we could start The Journal of Anthropocene Librarianship to focus and grow attention in our discipline, while still engaging in inter- and transdisciplinary work beyond LIS. Of course it would be fully open access.

I found three new journals on the Anthropocene: Anthropocene (Elsevier, RoMEO green, allows some self-archiving), The Anthropocene Review (Sage), and Elementa: Science of the Anthropocene (BioOne, fully open access, see author guidelines). The introductory editorial in Anthropocene by Chin et al. sets out its aim:

Anthropocene openly seeks research that addresses the scale and extent of human interactions with the atmosphere, cryosphere, ecosystems, oceans, and landscapes. We especially encourage interdisciplinary studies that reveal insight on linkages and feedbacks among subsystems of Earth, including social institutions and the economy. We are concerned with phenomena ranging over time from geologic eras to single isolated events, and with spatial scales varying from grain scale to local, regional, and global scales. Papers that address new theoretical, empirical, and methodological advances are high priority for the Journal. We welcome contributions that elucidate deep history and those that address contemporary processes; we especially invite manuscripts with potential to guide and inform humanity into the future.

A broad approach like this but tailored to LIS could work well.

On the other hand, leaping to a journal is a big step. Maybe it’s best to follow the Code4Lib model: start with a mailing list and a web site, and grow. Or, do it all at once.

What about archives?

Libraries and archives work together closely but serve different purposes, and archivists are very different from librarians, so I won’t venture into describing what Anthropocene archives might be like. However, Matthew Gordon-Clark’s “Paradise Lost? Pacific Island Archives Threatened by Climate Change” (2012; the sea level rise predictions now are worse) is a perfect example of this work. Here’s the abstract:

Over the past 10 years, a clear pattern of increasing sea-level rises has been recorded across the Pacific region. As international work progresses on climate change, it is becoming clear that the expected rise of sea levels will have significant impacts upon low-lying islands and nations. Sea-level rises of less than 0.5 m are generally suggested, although some researchers have made more drastic projections. This paper describes the second stage of research into the impacts of climate change upon the national archival collections in low-lying Pacific islands and nations. This article follows on the argument that archival collection relocation will be necessary and sets the boundaries for further research. It will summarize current research into climate change models and predicted sea-level rises, identify Pacific islands and nations that will be the focus of detailed further research by setting a range of research boundaries based on the known geography of nations within the Pacific, arguing for a specific measurement of “low-elevation”, outlining other risk factors likely to affect the survival of threatened national archival collections and naming those islands and nations that are thus deemed to be at greatest risk of flooding and thus likely to need to relocate their archives. The goal is to demonstrate how archivists might inform the governmental policy in threatened islands and nations as well as what other nations might do to offer assistance.

A web search for anthropocene archives turns up a lot of results. Archives of the Anthropocene at the Max Planck Institute for the History of Science is interesting:

Taken seriously, the Anthropocene claims that the cultural has insinuated itself so thoroughly into the natural that any notion of an objective, unhumanized record of the earth will no longer be tenable. The Anthropocene hypothesis implies that the sciences of the archives will need to reorient themselves to a new, participatory sense of macro-duration and confront the possibilities that the unaccessioned “noise” of human artifacts might dwarf any authoritative signal that we believe our archives will communicate to the distant future.

Galleries and museums also have to deal with the problem. As a group we’re called the GLAM sector: galleries, libraries, archives and museums. Together: GLAMthropocene, saving the world.

Works cited

Boswell, James. The Life of Samuel Johnson, LL.D. Together with The Journal of a Tour to the Hebrides. Vol. 3. London: George Bell & Sons, 1892.

Chin, Anne et al. “Anthropocene: Human Interactions with Earth Systems.” Anthropocene 1 (2013): 1–2. DOI: 10.1016/j.ancene.2013.10.001

Crutzen, Paul J. and Eugene F. Stoermer. “The 'Anthropocene’.” Global Change Newsletter 41 (2000): 17–18.

Gordon-Clark, Matthew. “Paradise Lost? Pacific Island Archives Threatened by Climate Change.” Archival Science 12.1 (2012): 51–67. DOI: 10.1007/s10502-011-9144-3

Graeber, David. “Anthropology and the rise of the professional-managerial class.” Journal of Ethnographic Theory 4.3 (2014): 73–88.

Mita Williams: What we've got here is failure to understand Scholarly Communication

Sun, 2015-11-15 14:34
If you follow conversations about Scholarly Communication (as I do), it is not uncommon to run into the frustrations of librarians and scholars who cannot understand why their peers continue to publish in journals that reside behind expensive paywalls. As someone who very much shares this frustration, I found this quotation particularly illuminating:

As in Latin, one dominant branch of meaning in "communication" has to do with imparting, quite apart from any notion of a dialog or interactive process. Thus communication can mean partaking, as in being a communicant (partaking in holy communication). Here "communication" suggests belonging to a social body via an expressive act that requires no response or recognition. To communicate by consuming bread and wine is to signify membership in a communion of saints both living and dead, but it is primarily a message-sending activity (except perhaps as a social ritual to please others or as a message to the self or to God). Moreover, here to "communicate" is an act of receiving, not of sending; more precisely, it is to send by receiving. A related sense is the notion of a scholarly "communication" (monograph) or a "communication" as a message or notice. Here is no sense of exchange, though some sort of audience, however vague or dispersed, is implied.

- "Speaking into the air", John Durham Peters, p.7

Hydra Project: Sufia 6.4.0 released

Fri, 2015-11-13 10:22

We are pleased to announce the release of Sufia 6.4.0

Sufia 6.4.0 includes new features for uploading files to collections, enabling suggested citation formatting, as well as a number of bugfixes and refactorings.

See the release notes [1] for the upgrade process and for an exhaustive list of the work that has gone into this release. Thanks to the 18 contributors for this release, which comprised 100 commits touching 123 files: Adam Wead, Anna Headley, Brandon Straley, Carolyn Cole, Colin Gross, Dan Kerchner, Lynette Rayle, Hector Correa, Justin Coyne, Mike Giarlo, Michael Tribone, Nation Rogers, Olly Lyytinen, Piotr Hebal, Randy Coulman, Christian Aldridge, Tonmoy Roy, and Yinlin Chen.


William Denton: Foul, rainy, muddy sloppy morning

Fri, 2015-11-13 03:12

“It was a foul, rainy, muddy, sloppy morning, without a glimmer of sun, with that thick, pervading, melancholy atmosphere which forces for the time upon imaginative men a conviction that nothing is worth anything,” —Anthony Trollope, Ralph the Heir (1871), chapter XXIX.

District Dispatch: Fan fiction webinar now available

Thu, 2015-11-12 22:26

Harry Potter enthusiasts dress as Hogwarts students (image from Wikimedia).

An archive of the CopyTalk webinar on fan fiction and copyright issues originally broadcast on Thursday, November 5, 2015 is available.

Fan-created works are in general broadly available to people at the click of a link. Fan fiction hasn’t been the subject of any litigation, but it plays an increasing role in literacy as its creation and consumption have skyrocketed. Practice on the ground can matter as much as court cases, and the explosion of noncommercial creativity is a big part of the fair use ecosystem. This presentation touched on many of the ways in which creativity has impacted recent judicial rulings on fair use, from Google Books, to putting a mayor’s face on a T-shirt, to copying a competitor’s ad for a competing ad. Legal scholar and counsel to the Organization for Transformative Works, Rebecca Tushnet enlightened us.

This was a really interesting webinar. Do check it out!

Rebecca Tushnet clerked for Chief Judge Edward R. Becker of the Third Circuit Court of Appeals in Philadelphia and Associate Justice David H. Souter of the United States Supreme Court and spent two years as an associate at Debevoise & Plimpton in Washington, DC, specializing in intellectual property. After two years at the NYU School of Law, she moved to Georgetown, where she now teaches intellectual property, advertising law, and First Amendment law.

Her work currently focuses on the relationship between the First Amendment and false advertising law. She has advised and represented several fan fiction websites in disputes with copyright and trademark owners. She serves as a member of the legal team of the Organization for Transformative Works, a nonprofit dedicated to supporting and promoting fanworks, and is also an expert on the law of engagement rings.

Our next CopyTalk is December 3rd at 2pm Eastern/11am Pacific. Our topic will be the 1201 rulemaking and this year’s exemptions. Get ready for absurdity!

The post Fan fiction webinar now available appeared first on District Dispatch.

Open Knowledge Foundation: Calling all Project Assistants: we need you!

Thu, 2015-11-12 17:55

The mission of Open Knowledge International is to open up all essential public interest information and see it utilized to create insight that drives change. To this end we work to create a global movement for open knowledge, supporting a network of leaders and local groups around the world; we facilitate coordination and knowledge sharing within the movement; we build collaboration with other change-making organisations both within our space and outside; and, finally, we prototype and provide a home for pioneering products.

A decade after its foundation, Open Knowledge International is ready for its next phase of development. We started as an organisation that led the quest for the opening up of existing data sets – and in today’s world most of the big open data portals run on CKAN, an open source software product developed first by us.

Today, it is not only about the opening up of data; it is about making sure that this data is usable, useful and – most importantly – used, to improve people’s lives. Our current projects (OpenSpending, OpenTrials, School of Data, and many more) all aim towards giving people access to data, the knowledge to understand it, and the power to use it in our everyday lives.

Now, we are looking for an enthusiastic

Project Assistant

(flexible location, part time)

to join the team to help deliver our projects around the world. We are seeking people who care about openness and have the commitment to make it happen.

We do not require applicants to have experience of project management – instead, we would like to work with motivated self-starters, able to demonstrate engagement with initiatives within the open movement. If you have excellent written and verbal communication skills, are highly organised and efficient with strong administration and analytical abilities, are interested in how projects are managed and are willing to learn, we want to hear from you.

The role includes the following responsibilities:

  • Monitoring and reporting of ongoing work progress to Project Managers and on occasion to other stakeholders
  • Research and investigation
  • Coordination of, and communication with, the project team, wider organisation, volunteers and stakeholders
  • Documentation, including creating presentations, document control, proof-reading, archiving, distributing and collecting
  • Meeting and event organisation, including scheduling, booking, preparing documents, minuting, and arranging travel and accommodation where needed
  • Project communication and promotion, including by email, blog, social media, networking online and in person
  • Liaising with staff across the organisation to offer and ask for support, e.g. public communication and finance

Projects you may be involved with include Open Data for Development, OpenTrials and OpenSpending, as well as new projects in future.

This role requires someone who can be flexible and comfortable with remote working, able to operate in a professional environment and participate in grassroots activities. Experience working as and with volunteers is advantageous.

You are comfortable working with people from different cultural, social and ethnic backgrounds. You are happy to share your knowledge with others, and you find working in transparent and highly visible environments interesting and fun.

Personally, you have a demonstrated commitment to working collaboratively, with respect and a focus on results over credit.

The position reports to the Project Manager and will work closely with other members of the project delivery team.

The role is part-time at 20 hours per week, paid by the hour. You will be compensated with a market salary, in line with the parameters of a non-profit-organisation.

This would particularly suit recent graduates who have studied a subject complementary to Open Knowledge International’s work and are looking for some experience in the workplace.

Successful applicants must have excellent English language skills in both speaking and writing.

You can work from home, with flexibility offered and required. Some flexibility around work hours is useful, and there may be some (infrequent) international travel required.

We offer employment contracts for residents of the UK with valid permits, and services contracts to overseas residents.

Interested? Then send us a motivational letter and a one page CV via Please indicate your current country of residence, as well as your salary expectations (in GBP) and your earliest availability.

Early application is encouraged, as we are looking to fill the positions as soon as possible. These vacancies will close when we find a suitable candidate.

If you have any questions, please direct them to jobs [at]

David Rosenthal: SPARC Author Addendum

Thu, 2015-11-12 16:00
SPARC has a post, “Author Rights: Using the SPARC Author Addendum to secure your rights as the author of a journal article”, announcing the result of an initiative to fix one of the fundamental problems of academic publishing, namely that in most cases authors carelessly give up essential rights by signing, unchanged, a copyright transfer agreement written by the publisher’s lawyers.

The publisher will argue that this one-sided agreement, often transferring all possible rights to the publisher, is absolutely necessary in order that the article be published. Despite their better-than-average copyright policy, ACM's claims in this regard are typical. I dissected them here.

The SPARC addendum was written by a lawyer, Michael W. Carroll of Villanova University School of Law, and is intended to be attached to, and thereby modify, the publisher's agreement. It performs a number of functions:
  • Preserves the author’s rights to reproduce, distribute, perform, and display the work for non-commercial purposes.
  • Acknowledges that the work may already be the subject of non-exclusive copyright grants to the author's institution or a funding agency.
  • Imposes as a condition of publication that the publisher provide the author with a PDF of the camera-ready version without DRM.
The kicker is the final paragraph, which requests that the publisher return a signed copy of the addendum, and makes it clear that publishing the work in any way indicates assent to the terms of the addendum. This leaves the publisher with only three choices: agree to the terms, refuse to publish the work, or ignore the addendum.

Of course, many publishers will refuse to publish, and many authors at that point will cave in. The SPARC site has useful advice for this case. The more interesting case is the third, where the publisher simply ignores the author's rights as embodied in the addendum. Publishers are not above ignoring the rights of authors, as shown by the history of my article Keeping Bits Safe: How Hard Can It Be?, published both in ACM Queue (correctly with a note that I retained copyright) and in CACM (incorrectly claiming ACM copyright). I posted analysis of ACM's bogus justification of their copyright policy based on this experience. There is more here.

So what will happen if the publisher ignores the author's addendum? They will publish the paper. The author will not get a camera-ready copy without DRM. But the author will make the paper available, and the "kicker" above means they will be on safe legal ground. Not merely did the publisher constructively agree to the terms of the addendum, but they failed to deliver on their side of the deal. So any attempt to haul the author into court, or send takedown notices, would be very risky for the publisher.

2012 data from Alex Holcombe

Publishers don't need anything except permission to publish. Publishers want the rights beyond this to extract the rents that generate their extraordinary profit margins. Please use the SPARC addendum when you get the chance.