Library of Congress: The Signal: We’re All Digital Archivists Now: An Interview with Sibyl Schaefer

planet code4lib - Wed, 2014-09-24 15:44

Sibyl Schaefer, Head of Digital Programs at the Rockefeller Archive Center

Digital was everywhere at this year’s Society of American Archivists annual meeting. What is particularly exciting is that many of these sessions were practical and pragmatic. That is, many sessions focused on exactly how archivists are meeting the challenge of born-digital records.

In one such session, Sibyl Schaefer, Head of Digital Programs at the Rockefeller Archive Center, offered exactly that kind of practical advice. I am excited to discuss some of the themes from her talk, “We’re All Digital Archivists: Digital Forensic Techniques in Everyday Practice,” here as part of the ongoing Insights Interview series.

Trevor: Could you unpack the title of your talk a bit for us? Why exactly is it time for all archivists to be digital archivists? What does that mean to you in practice?

Sibyl: We don’t all need to be digital archivists, but we do need to be archivists who work with digital materials. It’s not scalable to have one person, or one team, focus on the “digital stuff.” When I was first considering how to structure the Digital Team (or D-Team) at the RAC, it crossed my mind to mirror the structure of my organization, which is based on the main functions of an archive: collection development, accessioning, preservation, description, and access. I quickly realized that integrating digital practices into existing functions was essential.

The archivists at my institution take great pride in their knowledge of the collections, and not tapping into that knowledge would disadvantage the digital collections. We also don’t have many purely digital collections; the vast majority are hybrid. It wouldn’t make sense for one person to arrange and describe analog materials and another the digital materials. The principles of arrangement and description don’t change due to the format of the materials. Our archivists just need guidance in how to be effective in handling digital records, they need experience using tools so they feel comfortable with them, and they need someone available to ask if they have questions. So the digital archivists on my team are figuring out which software and tools to adopt, which workflows are the most efficient, and how to best educate the rest of the staff so they can do the actual archival work. The digital archivists aren’t actually doing traditional archival work and in that sense, “digital archivist” is a misnomer.

Trevor: If an archivist wants to get up to speed on the state and role of digital forensics in his or her work, what would you suggest they read/review? Further, what about these works do you see as particularly important?

Sibyl: The CLIR report, “Digital Forensics and Born-Digital Content in Cultural Heritage Collections,” is an excellent place to start. It clearly outlines what is gained by using forensics techniques in archival practice: namely the ability to capture digital archival materials in a secure manner that preserves more of their context and original order. These techniques also allow archivists to search through and review those materials without worrying about inadvertently altering them and affecting their authenticity.

I was ecstatic when I first saw Peter Chan’s YouTube video on processing born-digital materials using the Forensic ToolKit software. It was the first time I saw how functionality in FTK could be mapped to traditional processing activities: weeding duplicates, identifying Personally Identifiable Information and restricted records, arranging materials hierarchically, etc. It really answers the question of “So you have a disk image, now what do you do with it?” It also conveyed that the program could be picked up fairly easily by processing archivists.

The “From Bitstreams to Heritage: Putting Digital Forensics into Practice in Collecting Institutions” report (pdf) provides a really good overview of the recent activities in this area and a practical analysis of some of the capabilities and limitations of the forensics tools available.

Trevor: Could you tell us a bit about how the digital team works at the Rockefeller Archive Center? What kinds of roles do people take in the staff? How does the team fit into the structure of the Archive? How do you define the services you provide?

Sibyl: My team takes a user-centered approach in fulfilling our mission of leveraging technology to support all our program areas. We generally start by identifying a need for new technology, whether it be to place our finding aids online, create digital exhibits for our donors, preserve the context and authenticity of materials as they move from one physical medium to another, or increase our efficiency in managing researcher requests. We then try to involve users — both internal and external — as much as possible throughout the process. This involvement is crucial given that we usually aren’t the primary users of the software we implement.

One archivist focuses on delivery and access, which includes managing our online finding aid delivery system, as well as working very closely with our reference staff to develop and integrate tools that will help increase the efficiency of their work. Another team member focuses on digitization and metadata projects, which include things like scanning and outsourced digitization, as well as migrating from the Archivists’ Toolkit to ArchivesSpace. We just hired a new digital archivist to really delve into the digital forensics work I discussed in my presentation at SAA. She will be disk imaging and teaching our processing archivists to use FTK for description. In addition to overseeing the work of all the team members, I interface with our donor institutions, create policies and procedures, set team priorities and oversee our digital preservation system.

As I mentioned before, the RAC is divided up into five different archival functional areas: donor services, collections management, processing, reference and the digital team. Certain services, like digital preservation and digital duplication for special projects, are within our realm of responsibility, while with others we take a more advisory role. For example, we’re in the midst of an Aeon special collections management tool implementation, and although we won’t be internally hosting the server, we are helping our reference staff articulate and revise their workflows to take advantage of the efficiencies that system enables.

Our services are quite loosely defined; one of our program goals is to “leverage technology in an innovative way in support of all RAC program areas.” This gives us a lot of leeway in what we choose to do. I prioritize our preservation work based on risk and our systems work based on an evaluation of institutional priorities. For example, over the last year the RAC has been trying to increase the efficiency of our reference services, so we evaluated their workflows, replaced an unscalable method for organizing reference interactions with a user-friendly ticketing system, and are now aiding with the Aeon implementation.

Trevor: Could you tell us a bit about the workflow you have put in place to implement digital forensics in processing digital records? What roles do members of your team play and what roles do others play in that workflow?

Sibyl: My team takes care of inventorying removable media, creating disk images, running virus checks on those images, and providing them to the processing staff for analysis and description. Processing staff then identify duplicates, restricted materials, and materials that contain PII. They arrange and describe materials within FTK. When they have finished, they notify the D-Team and we add the description to the Archivists’ Toolkit (or ArchivesSpace — we’re preparing to transition over soon) and ingest those files and related metadata into Archivematica.

There are a lot of details we still need to add that will greatly increase the complexity of the process, and some of them will require actual policy decisions to be made. For example, the question of redaction comes up every time I review this process with our archivists. Redaction can be pretty straightforward with certain file formats, but definitely not with all. Also, how do we relay to our researchers that information has been redacted? We need a policy that clearly outlines when we redact information (for materials going online? for use in the reading room?), what types of information we redact, and what types of files can securely be redacted.
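To illustrate why format matters here: for plain text, redaction can be a simple pattern-based substitution, as in the hypothetical sketch below (the patterns and marker are my own inventions, not RAC policy); for opaque binary formats there is no equivalently safe one-liner. A visible marker also speaks to the question of relaying to researchers that something was removed.

```python
import re

# Toy patterns for two common kinds of PII; a real redaction policy
# would need a much more careful inventory of what counts as restricted.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
EMAIL_PATTERN = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b")


def redact_text(text, marker="[REDACTED]"):
    """Replace matched PII in plain text with a visible marker."""
    for pattern in (SSN_PATTERN, EMAIL_PATTERN):
        text = pattern.sub(marker, text)
    return text
```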

Diagram of the digital records processing workflow at RAC.

Trevor: As your process is established and refined, what do you see as the future role and place of the digital team within the archive? That is, what things are on the horizon for you and your team?

Sibyl: In the years since I joined the RAC we’ve placed our finding aids and digital objects online in an access system, architected a system for digital preservation, and configured forensics workflows. Now that we’ve got that foundation for managing and accessing our digital materials, I want to start embodying our goals to be innovative and leaders in the field. One area I think we can contribute to is integrating systems. For example, we’re launching a new project with Artefactual, the developers of Archivematica, to create a self-submission mechanism for donors to transfer records to us. Part of the project includes integrating ArchivesSpace with Archivematica. How cool would it be to have an accession record automatically created in ArchivesSpace when a donor transfers materials to our Archivematica instance?
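As a speculative sketch of what that integration might look like (the project's actual design isn't described here, and the endpoint and field names below are assumptions based on my reading of the ArchivesSpace REST API, which may differ by version): a transfer-completed hook on the Archivematica side could build a minimal accession record and POST it to ArchivesSpace.

```python
import json
from urllib import request


def build_accession(title, identifier, accession_date):
    """Assemble a minimal JSON body for an ArchivesSpace accession record."""
    return {
        "title": title,
        "id_0": identifier,                # first segment of the accession ID
        "accession_date": accession_date,  # ISO 8601 date string
    }


def post_accession(base_url, repo_id, session_token, accession):
    """POST the record to ArchivesSpace using an existing session token."""
    req = request.Request(
        f"{base_url}/repositories/{repo_id}/accessions",
        data=json.dumps(accession).encode("utf-8"),
        headers={
            "X-ArchivesSpace-Session": session_token,
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with request.urlopen(req) as resp:
        return json.load(resp)
```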

Likewise, I’ve been talking with a few people about using data in FTK to create interactive interfaces for researchers. We could use directory data captured during imaging or created during analysis (like labeling materials “restricted”) to recreate (but not necessarily emulate) the way files were originally organized, including listing deleted and duplicate files and then linking that directly to their final, archival organization. The researcher would be able to see how the files were originally organized by the donor and what is missing (or restricted) from what is presented as the final archival organization. I get giddy when I think of how we can use technology to increase the transparency of what happens during archival processing. I’m also excited about the prospect of working EAC-CPF records into our discovery interface to bolster our description.
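Something as simple as the file listing captured during imaging could seed such an interface. A toy sketch (the status labels and input format are my own illustration, not FTK's export format):

```python
def original_arrangement(entries):
    """Render flat (path, status) pairs as an indented tree.

    Deleted and restricted items stay visible, so a researcher can see
    what the files' original organization contained alongside what the
    final archival arrangement presents.
    """
    lines = []
    for path, status in sorted(entries):
        parts = path.strip("/").split("/")
        label = parts[-1] + (f"  [{status}]" if status else "")
        lines.append("  " * (len(parts) - 1) + label)
    return "\n".join(lines)
```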

We also have a great deal of less innovative but very necessary work ahead of us. We need to implement a DAMS to help corral the digitized materials that are created on request and also to provide more granular permissions to materials than what we currently have. We need to create and implement policies to fill in gaps in our policy framework and inch towards TRAC compliance. And lastly, we need to systematize our preservation planning. We have a lot of work to keep us busy! That said, it’s a really great time to be in the archival field. Digital materials may present new and complex challenges, but we also have a chance to be creative and innovative with systems design and applying traditional archival practices to new workflows.

Harvard Library Innovation Lab: Link roundup September 24, 2014

planet code4lib - Wed, 2014-09-24 14:32

The never changing web, traffic lights, art, and bots. All with a healthy dose of fun.

var t;

Understanding subjects by decomposing them into their algorithms, then implementing in code. In this case, art and JS

Build your own bot

Soft robots feel like something we’d have in a reading room. Love this effort to package DIY info on soft robots.

Interactive Dancing Traffic Light Makes Waiting to Cross the Street More Fun – My Modern Met

A dancing crosswalk sign helps people wait. Real live dancing translated into the little sign!

This Kinetic Wall Of Clocks Is Utterly Hypnotic

I want to order some clocks in bulk.

Django UI in 2005 vs Django UI in 2014

The never changing web. Django docs and admin UI are still usable and not ugly.

DPLA: Open Content Strategy Committee Call: October 1, 2:00 PM Eastern

planet code4lib - Wed, 2014-09-24 14:15

The Content Strategy Committee will hold an open call on Wednesday, October 1 at 2:00 PM EDT. To register, follow the link below and complete the short form.

  1. Recruitment of new Content Committee Co-chair
  2. Gauging interest regarding topic-specific aggregations, such as
  3. Update on DPLA/Europeana Rights work

All written content on this blog is made available under a Creative Commons Attribution 4.0 International License. All images found on this blog are available under the specific license(s) attributed to them, unless otherwise noted.

LITA: New Collaborative Technology

planet code4lib - Wed, 2014-09-24 13:00

As an academic librarian I often hear students lamenting the struggles of working in groups. Collaborating on a project is challenging, especially when everyone is working in their own place and at their own speed. At my library we have tried to provide space where students can more easily work in groups and accomplish work together.

Our first floor is dedicated collaborative space. We have a whiteboard table, comfortable seating, the coffee shop, and it gets loud. We were looking for ways to enhance this space with more technology, but we were encountering budget limitations with many of the collaborative technology pieces we considered.

An unplanned visit to a neighboring academic library led me to discover Crestron’s AirMedia.

This technology allows up to 32 people to wirelessly connect to the shared presentation. Also, up to four people can display their device on a shared screen. We had considered purchasing a large television and then buying cables, but having the capability for users to connect wirelessly was a huge selling feature for us. Also, it works with Android, iOS, and Windows. I would like to see more capabilities for tablets in the future, but this technology is just a year old and hopefully more features will be made available.

With a grant from Amigos Library Services we purchased the AirMedia device, a 55” television, and a new table and chairs that are more conducive to collaborative work than what we already had available. It is still early in the semester, but students are catching on and commenting on how “cool” it is. We have had to do some promotion, because otherwise it just looks like a big TV with a new table. Generating a list of potential uses for the technology and placing it on the wall by the station is one promotional method.

We hope to be able to purchase more collaborative technology in the future. I’d love to hear what technology others are using to help library users collaborate!

Open Knowledge Foundation: A Data Revolution that Works for All of Us

planet code4lib - Wed, 2014-09-24 12:10

Many of today’s global challenges are not new. Economic inequality, the unfettered power of corporations and markets, the need to cooperate to address global problems and the unsatisfactory levels of accountability in democratic governance – these were as much problems a century ago as they remain today.

What has changed, however – and most markedly – is the role that new forms of information and information technology could potentially play in responding to these challenges.

What’s going on?

The incredible advances in digital technology mean we have an unprecedented ability to create, share and access information. Furthermore, these technologies are increasingly not just the preserve of the rich, but are available to everyone – including the world’s poorest. As a result, we are living in a (veritable) data revolution – never before has so much data – public and personal – been collected, analysed and shared.

However, the benefits of this revolution are far from being shared equally.

On the one hand, some governments and corporations are already using this data to greatly increase their ability to understand – and shape – the world around them. Others, however, including much of civil society, lack the necessary access and capabilities to truly take advantage of this opportunity. Faced with this information inequality, what can we do?

How can we enable people to hold governments and corporations to account for the decisions they make, the money they spend and the contracts they sign? How can we unleash the potential for this information to be used for good – from accelerating research to tackling climate change? And, finally, how can we make sure that personal data collected by governments and corporations is used to empower rather than exploit us?

So how should we respond?

Fundamentally, we need to make sure that the data revolution works for all of us. We believe that key to achieving this is to put “open” at the heart of the digital age. We need an open data revolution.

We must ensure that essential public-interest data is open, freely available to everyone. Conversely, we must ensure that data about me – whether collected by governments, corporations or others – is controlled by and accessible to me. And finally, we have to empower individuals and communities – especially the most disadvantaged – with the capabilities to turn data into the knowledge and insight that can drive the change they seek.

In this rapidly changing information age – where the rules of the game are still up for grabs – we must be active, seizing the opportunities we have, if we are to ensure that the knowledge society we create is an open knowledge society, benefiting the many not the few, built on principles of collaboration not control, sharing not monopoly, and empowerment not exploitation.

In the Library, With the Lead Pipe: Locating the Library in Institutional Oppression

planet code4lib - Wed, 2014-09-24 11:01

Editor’s note: On July 16th, 2014 we published Open Source Outline: Locating the Library within Institutional Oppression, where we discussed nina de jesus’s Outline for a Paper I Probably Won’t Write and called for authors to use her open source outline as the basis for an article of their own. We are pleased that nina herself and Joshua Beatty have both taken up the challenge. Below is nina de jesus’s article based on that outline. In a first for In the Library with the Lead Pipe, we are also simultaneously publishing Joshua Beatty’s article based on the same outline.

Figure A: Oppressive Institution (source)

In Brief: An exploration into the relationship between libraries and institutional oppression. It begins with an examination of how the enlightenment provides the ideological foundation and framework for public libraries and of the historical processes that created the library as institution. It then examines this institution using the three logics of white supremacy: slavery, Indigenous genocide, and Orientalism.

Locating the Library in Institutional Oppression

1. Introduction1

Libraries are, in the hearts and minds of many people, a cherished and much beloved institution. Beyond nurturing a love of reading, libraries also embody a certain set of values. Popular author Neil Gaiman recently summarized a commonly held view of libraries:

Libraries are about freedom. Freedom to read, freedom of ideas, freedom of communication. They are about education (which is not a process that finishes the day we leave school or university), about entertainment, about making safe spaces, and about access to information.2

With libraries holding such a seemingly unassailable position within our cultural imagination, how do we begin to understand their place within institutional oppression?

Just as libraries represent notions of freedom, education, and a love of reading, we also exist within a society and culture of great disparity and oppression. In one very simplistic sense, the existence of libraries itself attests to this reality, since freely available resources wouldn’t be necessary if everyone had equal access to these resources. Libraries themselves exist to address certain disparities within our society.

This paper is an initial exploration of the ways that libraries, in attempting to address inequity, actually entrench oppression. However, this isn’t about the failures of libraries; rather, it is about the way that certain values structure libraries such that they come to embody institutional oppression rather than resist it.

I discuss both how the enlightenment created the ideological underpinnings of the library as public institution as well as the historical processes that created the library as we know it today. Next, I analyze the contemporary political location of libraries within our culture. Lastly, I explore the implications of the library as institutionalized enlightenment ideology using the three logics of white supremacy as proposed by Andrea Smith3 as a way to demonstrate that libraries cannot be distinguished either from their historical roots nor from their contemporary context within a white supremacist settler state.

2. Whence libraries?

2.1 The Enlightenment4 as Ideology

In Libraries and the Enlightenment Wayne Bivens-Tatum makes a compelling case for “the scientific and political principles of the Enlightenment provid[ing] the philosophical foundation for American academic and public libraries.”5 The Stanford Encyclopedia of Philosophy describes the enlightenment as “the period in the history of western thought and culture, stretching roughly from the mid-decades of the seventeenth century through the eighteenth century, characterized by dramatic revolutions in science, philosophy, society and politics.”6 For the sake of this discussion, I am largely considering the social and political aspects of enlightenment thought.

I focus on the enlightenment and these parts of it because:

The enduring legacy of the Enlightenment lives on in politics as much as in science, especially in the United States of America, which in some ways is the country best embodying Enlightenment principles.7

Bivens-Tatum further identifies the key themes of enlightenment8 political/social/ethical thought as “individual liberty, equal rights, religious toleration, freedom of speech, freedom of the press, the right to education and political participation in a democracy.”9 In addressing the seeming contradiction between Enlightenment ideologies and the slave-holding reality of many of its proponents (especially in the US), Bivens-Tatum claims that “their historical blindness to their own contradictions hardly negates the universal appeal of enlightened political thought.”10

However, Bivens-Tatum not only has no argument or evidence to support this claim but he also ignores the historical reality that, speaking only of the US, Indigenous (and other) peoples have been resisting the settler values of enlightenment from the very beginning of its history. As already noted, he writes that the US “is the country best embodying Enlightenment principles in theory if not always in practice” and traces this to the founding document of the US state, the “Declaration of Independence.”11

It is also the case that 1776 “initiated a land grab that drew to a close only after the United States had extended its border to the Pacific Ocean and nearly eliminated the Native American land base” with the invasion of the Cherokee just a few months after American independence.12 Why was this violent genocide needed by a state founded on ideas with universal appeal? Why are Indigenous peoples in the US _still_ resisting this state grounded on ideas with universal appeal? One would think that hundreds of years later, with the values and politics of the enlightenment firmly entrenched, they might have come to understand the appeal of enlightenment values.13

Bivens-Tatum’s hand-waving, which frames the historical evils committed by the early ideological state as a failure to live up to stated principles, doesn’t explain why resistance continues today, since it is fairly easy to argue that the US today does a much better job of living up to enlightenment ideals than it did in the past. Putting his claim in proper historical and political context helps us understand the full hegemonic force of the claim: enlightened political thought does not have universal appeal.

The US, the nation best embodying enlightenment ideals, is and was grounded on the (ongoing) genocide of Indigenous peoples. Where Bivens-Tatum and I largely diverge is that he characterizes criticism of the enlightenment via historical context as being derived by an apparent contradiction between ideal and reality. However, the source of my criticism is that there is no contradiction between the ideals of the enlightenment and the harsh reality of the ongoing Indigenous genocides, rather the two are inextricably linked within settler states.

A key theme of this paper is that libraries do, in fact, embody enlightenment values, but/and that enlightenment values are themselves steeped in and reinforce white supremacist settler state ideologies. To the extent that libraries do embody enlightenment values, they likewise contribute to ongoing colonization and are thus reasonably seen as sites of violence and oppression.

2.2 The Historical Genesis of Libraries

In discussing the purpose of public libraries, Bivens-Tatum notes that:

Public libraries began as instruments of enlightenment, hoping to spread knowledge and culture broadly to the people, who as free citizens of a democratic republic required access to that knowledge and culture to live fuller lives and to become better citizens.14

It becomes clear that the primary purpose of libraries wasn’t education (as he erroneously concludes from a claim like this)15 but was political. Education within this ideological statement is only a means to an end: creating better citizens. In this instance, ‘better’ does not simply equal ‘better educated.’ Almost nothing in this statement about the purpose of libraries is value-neutral and apolitical. And it would be difficult to unpack everything that is oppressive about this motivation to create libraries.

In a context like this, many of the current real-world examples about how libraries are ‘failing’ marginalized people become clearly not a ‘failure’ but intentional. Public libraries in America and Canada were not designed for everyone; they are, as Bivens-Tatum says, intended for citizens. And their purpose is to create better citizens. This is not a politically neutral purpose.

And, in anticipation of his arguments against ‘revisionist histories’ about the founding of public libraries,16 I note that I am, in fact, following his account of the historical motivations and ideology surrounding the formation of libraries. I do honestly think that they were motivated by enlightenment ideals. Where we significantly depart is in statements like this, “we can have a reasonable pluralism in society, but only if everyone acknowledges the authority of the public democratic institutions.”1718

If we view libraries as embodying a particular political ideology (that of enlightenment and its support for democracy) and if we understand that libraries were created to make citizens better, then the role that libraries play, as an institution, in perpetuating settler states becomes clear. As noted in the previous paragraph, libraries have a limit to the (types of) knowledge they provide access to — they cannot significantly or effectively challenge the authority of democratic institutions. Thus, libraries are implicated within institutional oppression in two ways: by having their genesis within the enlightenment ideology and by existing as a tool to perpetuate the state.

2.3 Contemporary Libraries and Liberalism

In my search for resources for this paper, the connection between liberal philosophy and the enlightenment was simply taken as fact rather than something that needed to be established. As such, I’m going to take this as a given: liberalism is the modern day embodiment of enlightenment values.

It does need to be noted, though, that liberalism as political philosophy/ideology is not really connected to political parties as they currently exist in Canada or the US. The type of liberal political ideology that was born out of the enlightenment fundamentally structures most contemporary political parties and organizations. This ideology broadly encapsulates the values of having a democratic state, freedom and inherent rights, etc. While current political parties debate about what counts as ‘freedom’ and how, exactly, the democratic state should be run, most of the larger, influential parties agree on these basic tenets.19

ALA’s Code of Ethics asserts that:

We significantly influence or control the selection, organization, preservation, and dissemination of information. In a political system grounded in an informed citizenry, we are members of a profession explicitly committed to intellectual freedom and the freedom of access to information. We have a special obligation to ensure the free flow of information and ideas to present and future generations.20

It should be fairly easy to see the parallels of this statement to what Bivens-Tatum notes was the original motivation for the creation of public libraries, at least as far as the ‘political system grounded in an informed citizenry’ is concerned.

As the Annoyed Librarian also notes:

Outside of a commitment to liberal democracy in general — which, by the way, is the only regime that supports the intellectual freedom of writers, artists, historians, philosophers, etc. — liberal institutions should take no substantive political position. A liberal library association would support intellectual freedom, access to information, and liberal democratic political institutions, but wouldn’t go on to make political statements irrelevant to libraries.21

While the Annoyed Librarian does think that ALA (or at least parts of it) is failing this liberal standard, it doesn’t make a difference to the reality that, failing or not, liberalism (and thus enlightenment) is the fundamental political philosophy informing how (at the very least) libraries and librarians think of themselves.

Last, as I began exploring in the previous section, we can see that, yes, libraries are political institutions and, from this section, they are politically liberal institutions (in the classical understanding of liberalism). This also means that regardless of what the Annoyed Librarian and ALA wish, the proposition that libraries (and librarians) be politically neutral is a self-defeating one. Claiming that libraries ought to be liberal institutions that take ‘no substantive political position’ is a political position in and of itself. And it is not a neutral one (if such a thing is even possible).

3 Libraries, democracy, and the logics of white supremacy

In locating the library in institutional oppression I’ll be focusing on only one line of criticism — white supremacy and decolonization — because of how focused my earlier sections are on the role that public libraries play (or ought to play) in maintaining a democratic (settler) state. I’m also largely depending on Andrea Smith’s understanding of how white supremacy is constituted:

We may wish to rearticulate our understanding of white supremacy by not assuming that it is enacted in a single fashion; rather, white supremacy is constituted by separate and distinct, but still interrelated, logics. I would argue that the three primary logics of white supremacy in the US context include: (1) slaveability/anti-black racism, which anchors capitalism; (2) genocide, which anchors colonialism; and (3) orientalism, which anchors war.22

Her analytic framework provides a three-lens way to view how the library, as institution, embodies and enforces one type of oppression: white supremacy.

Namely, libraries, being liberal institutions, are not ‘neutral’ in the ways that many of the sources cited in this paper either want them to be or believe they are. Rather, the explicit and expressed function of libraries, from their inception in the US and Canadian political structures to their existence today, is to create an informed citizenry for the sake of democracy. This allows us to finally locate the library in institutional oppression.

3.1 The Logic of Slavery

Andrea Smith writes:

One pillar of white supremacy is the logic of slavery. This logic renders black people as inherently enslaveable—as nothing more than property…This logic is the anchor of capitalism. That is, the capitalist system ultimately commodifies all workers: one’s own person becomes a commodity that one must sell in the labour market while the profits of one’s work are taken by somebody else. To keep this capitalist system in place—which ultimately commodifies most people—the logic of slavery applies a racial hierarchy to this system.23

Note one of the key claims in Smith’s discussion of the logic of slavery, that it ‘anchors capitalism’. Another way of understanding this is that the enslavability of Black people is a necessary and foundational part of capitalism, such that slavery is not the result of capitalism, but rather that capitalism itself is structured around this logic:

[T]he market did more than surround and detain black bodies — it also possessed them with logics of fungibility and accumulation. Under the logic of the Atlantic slave trade, the market’s arithmetic of accumulation was sutured to the flesh, inhabiting the bodies and lives it stripped down to the sum of their biological parts for sale within the freedom of the market. For the slave, economic rationality possessed every moment of life’s terror and death’s release. Liberal distinctions between the public and private, and the economic, political, and social were fabrications for the slave, illusions that depended on their erasure from the realm of the human. This erasure made possible the alchemy of the market so that with its social, economic, and discursive racial mechanisms, the market could transform a human being into an object and test the limits of that object’s biological life. The fungibility of blackness meant that slaves were money, were animals, were gold, were cotton, were rum, and on and on.24

This fungibility of Blackness also makes notions of intellectual property a fabrication when it comes to Black creative and intellectual work, a reality evidenced by the history of modern and contemporary music, in which every major movement over the past century has happened through the exploitation of Black creative labour with little benefit to its creators.25

However, because the logic of slavery structures the process of commodification within capitalism, we also see that “the overall trend in intellectual property protection is broadly correlated with the rise of capitalism. In fact, some institutional features associated with capitalism had to exist prior to the full development of intellectual property rights.”26 While it is possible for intellectual property rights to exist outside of a capitalist framework, the system we currently have exists within that framework. This means that our system of intellectual property, having arisen (at least in part) from capitalism, is necessarily structured by the logic of slavery.

All of this creates a framework through which we can begin to understand how libraries institutionalize white supremacy. Principle IV in the ALA’s Code of Ethics states “we respect intellectual property rights.”27 Of course, many people would counter this claim by saying that the manner by which libraries operate fundamentally contradicts this capitalist impulse by making ‘intellectual property’ freely accessible to the public. Except this isn’t entirely true or, rather, it doesn’t represent the entire picture.

When we look at the work of libraries, we begin to see that they play a significant role not just in ‘respecting intellectual property’ but in ensuring the stability of intellectual property itself. One mechanism through which libraries do this is the creation of ‘authority records’: “An authority record is a tool used by librarians to establish forms of names (for persons, places, meetings, and organizations), titles, and subjects used on bibliographic records.”28

While the Library of Congress (LOC) makes it clear that authority records are created with the intent to improve accessibility, the mechanism they use for this ensures that every creative work necessarily has an identifiable owner. This is necessary in a system of capital wherein everything and everyone can (and likely will) be reduced to a commodity.
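To make the mechanism concrete, here is a minimal sketch of how an authority record collocates variant forms of a name under a single authorized heading. This is an illustrative simplification, not the actual format: real LOC authority records are MARC 21 records, in which 1XX fields carry the authorized heading and 4XX fields carry “see from” variant forms, and the sample record below is hypothetical.

```python
# Minimal sketch of an authority record mapping variant name forms to one
# authorized heading. Real Library of Congress authority records are
# MARC 21 records (1XX = authorized heading, 4XX = "see from" variants);
# this dictionary is a simplified illustration, not the actual format.

AUTHORITY_RECORDS = {
    "Twain, Mark, 1835-1910": {            # authorized heading (1XX)
        "variants": [                      # "see from" references (4XX)
            "Clemens, Samuel Langhorne",
            "Clemens, S. L.",
        ],
    },
}

def authorized_heading(name: str) -> str:
    """Resolve any known form of a name to its single authorized heading."""
    for heading, record in AUTHORITY_RECORDS.items():
        if name == heading or name in record["variants"]:
            return heading
    raise KeyError(f"no authority record for {name!r}")

# Every bibliographic record citing any of these forms is collocated
# under one controlled heading, i.e., one identifiable creator.
print(authorized_heading("Clemens, Samuel Langhorne"))  # Twain, Mark, 1835-1910
```

Even in this sketch, the point made above is visible: whatever form a name takes, the record resolves it to a single controlled heading, that is, to a single identifiable owner.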

This is only one way that libraries come to be implicated via active participation in the logic of slavery, of capitalism, and of white supremacy. We can also see that libraries, regardless of their making ‘knowledge’ or ‘information’ accessible for free, do not actually challenge or resist this logic. Rather, libraries are another institution necessary for maintaining a system of intellectual property within a larger context of white supremacy that depends on the inherent enslaveability of Black people.

3.2 The Logic of Genocide

According to Andrea Smith:

This logic holds that indigenous peoples must disappear. In fact, they must always be disappearing, in order to enable non-indigenous peoples’ rightful claim to land. Through this logic of genocide, non-Native peoples then become the rightful inheritors of all that was indigenous—land, resources, indigenous spirituality, and culture. Genocide serves as the anchor of colonialism: it is what allows non-Native peoples to feel they can rightfully own indigenous peoples’ land. It is acceptable exclusively to possess land that is the home of indigenous peoples because indigenous peoples have disappeared.29

In earlier sections it was noted that libraries were created, and continue to be conceived of, as institutions designed to create an informed citizenry for the sake of democracy. This can only be established as a value in a settler state like the US or Canada if the Indigenous peoples of this region have already disappeared.

For ‘Canada’ and the ‘United States’ to continue to exist as democratic states (or for them to even be conceived as such) requires either that we understand that the Indigenous genocide is already complete or that we ensure that the genocide is ongoing. Since it is a fact that the Indigenous peoples of North America continue to exist, the ideal of libraries as liberal institutions existing to make democracy ‘better,’ thus stronger, is no less than an ideal wherein the genocide of Indigenous peoples is finally completed (putting democracy in its strongest possible position).

By and large, this is what is missing from Bivens-Tatum’s glowing account of libraries and the enlightenment. He is careful to distinguish the historical and material realities of the enlightenment from its ideas/philosophy. My argument is that the historical and material context of the enlightenment is not actually ‘historical’ at all. As we continue to grapple with the ideas of the enlightenment today, so do we grapple with the material conditions that both caused and are caused by the enlightenment (settler colonialism, white supremacy, etc).

However, we cannot be surprised by this, since the logic of genocide is “that indigenous peoples must disappear,”30 so the absence of their struggle against settler colonialism in the US (and all other settler states) is necessary. This absence, of course, extends not only to their physical disappearance, but their disappearance from history and discourse. Thus, having an ethical code “grounded in an informed citizenry” for librarians is fundamentally rooted in the ongoing Indigenous genocides.31 To put it plainly, settler states, in order to lay claim to their statehood, require the genocide of native populations. Libraries in supporting “a political system grounded in an informed citizenry” support the state and thus support genocide.

3.3 The Logic of Orientalism

As Andrea Smith states:

The logic of orientalism marks certain peoples or nations as inferior and deems them to be a constant threat to the wellbeing of empire… Consequently, orientalism serves as the anchor of war, because it allows the United States to justify being in a constant state of war to protect itself from its enemies. Orientalism allows the United States to defend the logics of slavery and genocide as these practices enable it to stay “strong enough” to fight these constant wars.32

Smith doesn’t go into the details of Orientalism, as it was developed by Edward Said,33 but libraries firmly belong to the discursive space identified by Said as “a structured set of concepts, assumptions, and discursive practices that were used to produce, interpret, and evaluate knowledge about non-European peoples.”34 It is this knowledge that informs the logic of orientalism described by Smith — the logic that allows the US to justify its ongoing wars.

Libraries disguise their Orientalism by invoking the stance of neutrality: “We distinguish between our personal convictions and professional duties and do not allow our personal beliefs to interfere with fair representation of the aims of our institutions or the provision of access to their information resources.”35 However, it is pretty easy to see that libraries are far from neutral spaces. There are many examples in the literature about the ways that collection development,36 reference,37 cataloguing,38 and many other library functions reveal deep biases in how the library as an institution exists. More importantly, as Bivens-Tatum himself writes, “we can have a reasonable pluralism in society, but only if everyone acknowledges the authority of the public democratic institutions.”39 In such a situation, it is impossible for neutrality to exist.

Thus, if we look past this claim of neutrality and understand that it is an impossible position, we begin to understand how libraries come to articulate the logic of Orientalism. Part of what made Said’s work so groundbreaking and influential is that he demonstrated the way that knowledge creation within the empire is not (and never has been) a neutral activity and so the knowledge itself cannot be neutral. One of the interesting distinctions often drawn in library literature is between ‘information’ and ‘knowledge’. Information is understood as neutral facts and knowledge is created when we understand information (or something like that). It is a tidy distinction that allows ‘neutral’ librarians to feel like we are transmitting neutral facts, all unmediated by reality.

Most criticisms of library neutrality tend to focus on the librarians and/or institution, rather than the ‘information’ or ‘knowledge’ preserved, stored, communicated, and legitimized by libraries. Said’s claims are, in part, explorations about the epistemology of empire, of colonial expansion, and of war but few criticisms of library neutrality have examined or focused on the role that libraries have within the empire and its epistemology.

When we look into the collections, the actual ‘information’ contained in libraries and how it is organized, we can see that it (surely by accident) somehow manages to construct a reality wherein whiteness is default, normal, civilized and everything else is Other. In so doing, libraries very much participate in a larger imperial project that justifies war. We see that libraries very happily fulfill this expectation of Bivens-Tatum: “we can have a reasonable pluralism in society, but only if everyone acknowledges the authority of the public democratic institutions,”40 since these ‘public democratic institutions’ he is discussing are constituent parts of the empire. Seen from this light, is it at all surprising that library collections play their happy role in the Orientalist project of creating the Other?

4 Conclusions

Looking back, now, at this paper and seeing what all I had to say during this excursion, one of the surprising themes (to me at least) is how often the concept of ‘neutrality’ came under fire, even though this was not intended. As I consider it now, it seems obvious to me that neutrality has been central to locating the library in institutional oppression. But it is a more complex concept of neutrality than is usually discussed within the literature, which tends to focus on the coherence of the neutrality of the individuals operating and working within libraries or on some of the processes and systems of libraries (like classification).

The main notion of neutrality that I challenge within this article is that of institutional neutrality. Whatever one’s feelings about the coherence of individual neutrality, many have taken it as axiomatic that libraries are neutral institutions, and that any failure of libraries to be neutral is largely the fault of individuals failing to live up to the ideals or ethics of the profession, rather than understanding the library as an institution that is fundamentally non-neutral. Libraries as institutions were created not only for a specific ideological purpose but for an ideology that is fundamentally oppressive in nature. As such, the failings of libraries can be re-interpreted not as libraries failing to live up to their ideals and values, but as symptoms and evidence of this foundational and oppressive ideology.

In tying my line of criticism to that of colonialism, settler colonialism, and white supremacy (but as a reminder: there are many other lines of criticism that can and ought to be explored when situating the library in institutional oppression), I also have the seeds of solutions, for those who want such things. The clear solution is decolonization.41 Of course, this is a difficult prospect for many within the field since it precludes any solution that is reformist in nature; no reform is possible if we understand libraries as fundamentally white supremacist institutions.

For those who find this unpalatable, there is, perhaps, a worthwhile solution in decoupling libraries from their avowed goal in propping up and strengthening settler democracies. This could allow us to preserve the institution, but would require drastic and daring changes to the overall structure and organization of libraries. Libraries, unlike other institutions of settler states (like the judicial system), have at least some emancipatory potential.

Realizing the emancipatory potential of the library as institution would require breaking and disrupting the system of intellectual property and other aspects of capitalism, especially the publishing industry. It would require disrupting the empire’s mechanisms for creating ‘knowledge’ by being more than a repository for imperial knowledge products. It would require supporting Indigenous resistance to the settler state and working towards dismantling anti-Blackness.

In so doing, perhaps libraries could begin to live up to the ideal expressed by Gaiman in the introduction. Libraries really could come to represent and embody freedom. They could become focal points for the free exchange and access of ideas, knowledge, and imagination.

Thanks to Chris Bourg and Ellie Collier for being such ridiculously awesome reviewers and for making this paper about 100000000000x better. Really. THANK YOU SO MUCH. 


5 Works Cited

Addams, Suellen S., and Kate Pierce. “Is There a Transgender Canon?: Information Seeking and Use in the Transgender Community,” 2006.

Annoyed Librarian. “Libraries as Liberal Institutions.” Accessed August 3, 2014.

Berman, Sanford. “‘Inside’ Censorship.” De Facto Censorship Implicit in Collection Decisions, no. 18 (July 15, 2001): 48–63.

Bivens-Tatum, Wayne. Libraries and the Enlightenment. Library Juice Press, 2012.

Bristow, William. “Enlightenment.” Edited by Edward N. Zalta. The Stanford Encyclopedia of Philosophy, 2011.

“Code of Ethics of the American Library Association.” Accessed August 3, 2014.

Curry, Ann. “If I Ask, Will They Answer?” Reference & User Services Quarterly 45, no. 1 (Fall 2005): 65–75.

Downey, Jennifer. “Public Library Collection Development Issues Regarding the Information Needs of GLBT Patrons.” Progressive Librarian, no. 25 (Summer 2005): 86–95.

“Frequently Asked Questions (Library of Congress Authorities).” Accessed August 3, 2014.

Gaiman, Neil. “Why Our Future Depends on Libraries, Reading and Daydreaming.” The Guardian, October 15, 2013, sec. Books.

King, Tiffany. “Labor’s Aphasia: Toward Antiblackness as Constitutive to Settler Colonialism.” Decolonization. Accessed August 9, 2014.

Klumpp, Tilman, and Paul H. Rubin. “Property Rights and Capitalism.” In The Oxford Handbook of Capitalism, edited by Dennis C. Mueller. Oxford: Oxford University Press, 2012.

Kohn, Margaret. “Colonialism.” Edited by Edward N. Zalta. The Stanford Encyclopedia of Philosophy, 2014.

“Orphan Works and Mass Digitization | U.S. Copyright Office.” Accessed August 3, 2014.

Roberto, K. R., ed. Radical Cataloging: Essays at the Front. Jefferson, N.C: McFarland & Co, 2008.

Said, Edward W. Orientalism. Penguin Classics. London: Penguin, 2003.

Saunt, Claudio. “1776: Not Just the Revolution.” The Boston Globe, July 6, 2014.

Smith, Andrea. “Indigeneity, Settler Colonialism, White Supremacy – Centre for World Dialogue.” Global Dialogue 12, no. 2 (2010).

Wolfson, Matthew. “The Origins of Globalisation.” Prospect Magazine, May 14, 2013.

  1. A note about the research/citation methodology of this article: I’ve decided to take a principled stance of citing only open access resources. The exception within this paper is monographs, which haven’t been considered by the OA movement in the same way. But as far as articles and other scholarly resources are concerned, if I wasn’t able to find a non-paywalled copy, I haven’t cited or used it within this paper. There are obvious and unfortunate limitations to strictly adhering to such a principle, since much relevant research remains locked up behind publisher paywalls.
  2. Gaiman, Neil. “Why Our Future Depends on Libraries, Reading and Daydreaming.” The Guardian, October 15, 2013, sec. Books.
  3. Smith, Andrea. “Indigeneity, Settler Colonialism, White Supremacy – Centre for World Dialogue.” Global Dialogue 12, no. 2 (2010).
  4. Bivens-Tatum is careful to note his bias towards the enlightenment on page 4 of his book when he writes “I will be discussing the principles of the Enlightenment in a positive way.” In the same spirit, I’ll state outright that I think the enlightenment is and was evil because it is the ideology of colonialism. I don’t use the word ‘evil’ lightly, but I’m hard pressed to think of any other word to describe a set of philosophical and political ideas that directly led to the deaths of millions of people and the subjugation of pretty much the entire world under white colonial powers.
  5. Bivens-Tatum, Wayne. Libraries and the Enlightenment (Library Juice Press, 2012), 185.
  6. Bristow, William. “Enlightenment.” Edited by Edward N. Zalta. The Stanford Encyclopedia of Philosophy, 2011.
  7. Bivens-Tatum, Wayne. Libraries and the Enlightenment (Library Juice Press, 2012), 12.
  8. While it is commonplace to capitalize or treat the ‘Enlightenment’ as a proper noun, my practice of not capitalizing the term is a small act of resistance to the mythology surrounding most discourse about the enlightenment.
  9. Bivens-Tatum, Wayne. Libraries and the Enlightenment (Library Juice Press, 2012), 23.
  10. Bivens-Tatum, Wayne. Libraries and the Enlightenment (Library Juice Press, 2012), 23.
  11. Bivens-Tatum, Wayne. Libraries and the Enlightenment (Library Juice Press, 2012), 12.
  12. Saunt, Claudio. “1776: Not Just the Revolution.” The Boston Globe, July 6, 2014.
  13. The articles/posts on provide a great example of contemporary Indigenous resistance to settler colonial states like the US. This is an ongoing struggle.
  14. Bivens-Tatum, Wayne. Libraries and the Enlightenment (Library Juice Press, 2012), 133.
  15. Bivens-Tatum, Wayne. Libraries and the Enlightenment (Library Juice Press, 2012), 133.
  16. Bivens-Tatum, Wayne. Libraries and the Enlightenment (Library Juice Press, 2012), 111.
  17. Bivens-Tatum, Wayne. Libraries and the Enlightenment (Library Juice Press, 2012), 112.
  18. So that my position is very clear: I don’t acknowledge the authority of public democratic institutions. This is exactly why I’m writing a paper locating the library in institutional oppression. I’m not attempting to quibble about what is or isn’t the enlightenment or what did or did not motivate the creation of public libraries. Part of my argument rests on the understanding that libraries, as an institution, are oppressive because of their relationship to a white supremacist, hetero-patriarchal settler state. And because of the exact reasons he describes: libraries are necessary for creating better citizens of a democratic state. This is one of the major reasons why, as they currently exist in Canada and the US, libraries are a tool of oppression, rather than of liberation.
  19. In Canada, for example, while the Conservative party and the Liberal party place a different emphasis on what they consider ‘freedom’ and have divergent views on economics, neither has any interest in pushing for a non-democratic Canada.
  20. “Code of Ethics of the American Library Association.” Accessed August 3, 2014.
  21. Annoyed Librarian. “Libraries as Liberal Institutions.” Accessed August 3, 2014.
  22. Smith, Andrea. “Indigeneity, Settler Colonialism, White Supremacy – Centre for World Dialogue.” Global Dialogue 12, no. 2 (2010).
  23. Smith, Andrea. “Indigeneity, Settler Colonialism, White Supremacy – Centre for World Dialogue.” Global Dialogue 12, no. 2 (2010).
  24. vitoria. “The Fungibility of Blackness.” Acceptable Society. Accessed September 5, 2014.
  25. For a recent example, do an internet search on “Miley Cyrus twerking cultural appropriation.” See Bowen, Sesali. “Let’s Get Ratchet! Check Your Privilege At The Door.” Racialicious – the Intersection of Race and Pop Culture. Accessed September 15, 2014.
  26. Klumpp, Tilman, and Paul H. Rubin. “Property Rights and Capitalism.” In The Oxford Handbook of Capitalism, edited by Dennis C. Mueller. Oxford: Oxford University Press, 2012, 11. Preprint here:
  27. “Code of Ethics of the American Library Association.” Accessed August 3, 2014.
  28. “Frequently Asked Questions (Library of Congress Authorities).” Accessed August 3, 2014.
  29. Smith, Andrea. “Indigeneity, Settler Colonialism, White Supremacy – Centre for World Dialogue.” Global Dialogue 12, no. 2 (2010).
  30. Smith, Andrea. “Indigeneity, Settler Colonialism, White Supremacy – Centre for World Dialogue.” Global Dialogue 12, no. 2 (2010).
  31. “Code of Ethics of the American Library Association.” Accessed August 3, 2014.
  32. Smith, Andrea. “Indigeneity, Settler Colonialism, White Supremacy – Centre for World Dialogue.” Global Dialogue 12, no. 2 (2010).
  33. Said, Edward W. Orientalism. Penguin Classics. London: Penguin, 2003.
  34. Kohn, Margaret. “Colonialism.” Edited by Edward N. Zalta. The Stanford Encyclopedia of Philosophy, 2014.
  35. “Code of Ethics of the American Library Association.” Accessed August 3, 2014.
  36. See Downey, Jennifer. “Public Library Collection Development Issues Regarding the Information Needs of GLBT Patrons.” Progressive Librarian, no. 25 (Summer 2005): 86–95.
  37. See Curry, Ann. “If I Ask, Will They Answer?” Reference & User Services Quarterly 45, no. 1 (Fall 2005): 65–75.
  38. See Drabinski, Emily. “Teaching the Radical Catalog.” In Radical Cataloging: Essays at the Front, edited by K. R. Roberto. Jefferson, N.C.: McFarland, 2008.
  39. Bivens-Tatum, Wayne. Libraries and the Enlightenment (Library Juice Press, 2012), 112.
  40. Bivens-Tatum, Wayne. Libraries and the Enlightenment (Library Juice Press, 2012), 112.
  41. Refer to Unsettling America for a starting point to understanding decolonization.

In the Library, With the Lead Pipe: Locating Information Literacy within Institutional Oppression

planet code4lib - Wed, 2014-09-24 11:00

Editor’s note: On July 16th, 2014 we published Open Source Outline: Locating the Library within Institutional Oppression, where we discussed nina de jesus’s Outline for a Paper I Probably Won’t Write and called for authors to use her open source outline as the basis for an article of their own. We are pleased that nina herself and Joshua Beatty have both taken up the challenge. Below is Joshua Beatty’s article based on that outline. In a first for In the Library with the Lead Pipe, we are also simultaneously publishing nina de jesus’s article based on the same outline.

Image credit: Bryce Johnson

In Brief: The ACRL’s draft Framework for Information Literacy in Higher Education represents a chance to undo the neoliberal assumptions of earlier information literacy standards. Despite some positive changes, the language of the Framework still reinforces existing structures of power. The Framework relies on a rhetoric of crisis and on the metaphors “information marketplace” and “information ecosystem.” These metaphors naturalize information resources as a series of walled gardens that might instead have been part of a larger commons.


In a January 2014 talk entitled “The Neoliberal Library: Resistance Is Not Futile,” Chris Bourg argued that “neoliberalism is toxic for higher education, but research libraries can & should be sites of resistance.” Bourg gives as examples four areas of the research library affected by neoliberalism: instruction and reference, collection development, staffing models, and assessment. It seems to me that those areas are not exclusive to libraries at our largest research institutions. Small college libraries perform all these functions, though the balance may differ. For example, I work at a self-defined “teaching library” at a four-year state college, a library which prioritizes instruction in information literacy over support for faculty research. If my library is to be a site of resistance to neoliberalism, that resistance must start in the area the library considers central to its mission.

Yet current formulations of information literacy make it difficult for any such library to resist neoliberalism. In this article I will follow Maura Seale’s analysis of the neoliberal underpinnings of existing information literacy standards to show that they also apply to the draft document soon to supersede them. I will concentrate on the rhetoric of the document, especially the way in which, to use Bourg’s terms, “market language and metaphors” have colonized the Framework. Finally, I will show how uncritically using that language has led us to naturalize the current model of production, organization, and distribution of scholarly information that we take for granted in our libraries today.


In her lecture, Bourg follows Daniel Saunders in defining neoliberalism as “a varied collection of ideas, practices, policies and discursive representations … united by three broad beliefs: the benevolence of the free market, minimal state intervention and regulation of the economy, and the individual as a rational economic actor.” She continues:

Neoliberal thinking emphasizes individual competition, and places primary value on “employability” and therefore on an individual’s accumulation of human capital and marketable skills.

A key feature of neoliberalism is the extension of market logic into previously non-economic realms – in particular into key social, political and cultural institutions.

We can see this when political candidates promote their experience running a successful business as a reason to vote for them, and in the way market language and metaphors have seeped into so many social and cultural realms.

For example, Neoliberalism is what leads us to talk about things like “the knowledge economy”, where we start to think of knowledge not as a process but as a kind of capital that an individual can acquire so that she then can sell that value to the market.

In short, neoliberalism pressures us to assume that markets and competition are an efficient way to distribute resources, to believe it necessary for individuals to self-fashion themselves as useful to the system, and to reduce all judgments of value to purely economic terms. These and similar examples play out every day in library instruction, promoted by the information-literacy standards that underlie our teaching.

Information literacy: the ACRL’s Standards and Framework

Information literacy is probably taught in as many ways as there are libraries that teach it. Though the Association of College and Research Libraries (ACRL) promotes its own Information Literacy Competency Standards for Higher Education, published in 2000, not all academic libraries follow those guidelines. But some follow them very closely. New York’s state university system, for example, has an “information management” competency in its system-wide general education requirements. The learning outcomes for the State University of New York’s (SUNY) general education program are very similar to the ACRL’s Standards, making it easier for individual colleges to adopt the details of the Standards when drafting their school’s specific information literacy requirements.1

Yet from the perspective of resistance to neoliberalism, any such institutional literacy program must be flawed from its beginning. Maura Seale has shown that the Standards is an intimately neoliberal document. It emphasizes measurable learning outcomes, which lead to a commodification of education. It sets as its goal the creation of the “information-literate student,” a concept similar to the neoliberal homo oeconomicus in that it erases all sociocultural context. It emphasizes “authoritative” sources too easily equated with the productions of for-profit publishers. And it places an inordinate emphasis on the end result of gaining these skills as being merely job training.2

Seale argues that the Standards is merely the latest in a long line of information literacy documents that embrace neoliberal assumptions. She finds that the library profession has been unwilling to engage with critiques of neoliberalism from the fields of education and critical theory. Information literacy discourse is a “closed system.” Even when information literacy discourse does open to a new concept, it removes that concept from any outside context and folds it back into the closed system. An example in recent years has been information literacy’s embrace of transliteracy.3

The Standards are likely irredeemable. But we are at a moment in which we might reclaim ACRL’s information literacy guidelines from neoliberalism. This year the ACRL has proposed a new set of guidelines to replace the Standards: the Framework for Information Literacy for Higher Education.4

This Framework as presently constituted is not just a revision of the Standards, but represents a new approach to teaching information literacy. Information literacy is defined around six “frames.” These frames are titled “Scholarship is a Conversation,” “Research as Inquiry,” “Authority is Constructed and Contextual,” “Format as a Process,” “Searching as Exploration,” and (added in the June draft) “Information has Value.” Each frame combines a “threshold concept” with “knowledge practices / abilities and dispositions.” Threshold concepts are defined as “those ideas in any discipline that are passageways or portals to enlarged understanding or ways of thinking and practicing within that discipline.” Knowledge practices then demonstrate how learners increase their understanding, while dispositions define the values held by a learner who has passed the threshold.

The six frames take up nine pages. But they are surrounded by another twenty-five pages of supporting material, including a cover letter outlining changes since the last draft, an introduction with suggestions on how to use the document, sample assignments, a glossary and bibliography, and three appendices. The appendices include an earlier introduction under the title “Setting the Context” and an “Introduction for Faculty and Administrators.” The presence of this last suggests the Framework is a political document, its ideas (if not the document itself) intended to be presented to other interests on campus as well as to serve as a guide for librarians. It is no coincidence that the vision presented by the Framework embeds information literacy within every aspect of the curriculum, as an “overarching set of abilities in which students are both consumers and creators of information in multiple formats.”

From the perspective of the librarian attuned to critical information literacy issues, there are many ways in which the Framework significantly improves on the Standards. In the Framework, training the “information-literate student” is less important than creating habits of “lifelong learning,” and “learning outcomes” are paralleled by more flexible “abilities” and “dispositions.” Learning outcomes themselves are left to the individual libraries to decide upon.

These advances reflect and incorporate the critiques of critical information literacy practitioners.5 Such librarians have acknowledged many aspects of the Framework as improving on the Standards but also expressed concern that these advances were insufficient. The Framework, they argued, should also emphasize “social inclusion; cultural, historical, and socioeconomic contexts; access issues; critical awareness of the mechanisms of establishing authority, including academic authority; and civic and community engagement” as well as the growing critical information literacy movement itself. If incorporated, these recommendations will carve out small spaces of resistance to neoliberalism within the larger document.6

Crisis rhetoric

But a close reading of the Framework suggests that this critique does not go far enough. Key rhetorical measures deployed within the Framework serve to reinforce neoliberal notions, and creating spaces for resistance within the document leaves those intact. These measures include the rhetoric of crisis, the metaphor of the “information ecosystem,” and the metaphor of the “information marketplace.”

Under neoliberalism elites feel justified in using — and even creating — uncertainty and crisis in order to amass power. David Harvey has described how politicians and financiers take advantage of financial upheaval in order to transfer wealth from the poor to the rich — and sometimes have even created such crises on purpose. At universities and their libraries, administrators use the excuse of financial crises to demand reform, a process so common that it is called simply “austerity.” Libraries lose resources, and that money is shifted upwards to fund administrators’ priorities (and salaries). So it is with suspicion that we should look upon invocations of crisis for any new or revamped program in our libraries.7

It is precisely this rhetoric of crisis and reform that characterizes the 2014 Framework. On the very first page of the document, the authors explain that the Framework is a response to a “rapidly changing higher education environment, along with the dynamic and often uncertain information ecosystem in which all of us work and live, [that] require new attention to foundational ideas about that ecosystem.” Gilles Deleuze has written of such rhetoric that “the administrations in charge never cease announcing supposedly necessary reforms: to reform schools, to reform industries, hospitals, the armed forces, prisons.”8 Warnings of rapid change, dynamism, and uncertainty are thus meant to effect compliance from the subject rather than provoke critical thinking.

This kind of language has been used to justify the ACRL’s information literacy programs since their inception. The 2000 Standards insist that information literacy is particularly important “in the contemporary environment of rapid technological change and proliferating information resources.” The 1998 Progress Report on Information Literacy justifies information literacy through the “amount and variety of information” available both digitally and in print. The volume of information is now “staggering … [and] has mushroomed beyond everyone’s wildest imagination.” And the 1989 Presidential Committee on Information Literacy: Final Report explains the need for information literacy thus: “Information is expanding at an unprecedented rate, and enormously rapid strides are being made in the technology for storing, organizing, and accessing the ever growing tidal wave of information.”

The ACRL, then, for twenty-five years has periodically panicked about technological change to justify more comprehensive information literacy programs. Randall Munroe of XKCD provides us with one reasonable response:

As with neoliberal politicians and businessmen, when librarians use this rhetoric it serves to inflate the proposal’s importance and to mask specific agendas under a guise of rational common-sense thinking, rather than identifying any truly disruptive historical moment.9

Ecosystem rhetoric

The opening paragraph of the 2014 Framework, then, emphasizes change. But what is it that is changing? It is a “higher education environment” and an “information ecosystem in which we all work and live”; we must revise our “foundational ideas about that ecosystem.” The concept “information ecosystem” is the fulcrum of the Framework; it is used fifteen times, while “information environment” occurs another six.

The phrase “information ecosystem” seems innocuous enough at first glance. It evokes images of connections, of interdependence, of a landscape known in full and thus made harmless, even appealing. But the concept has a history that belies those images — a history that reveals the phrase’s origins in the business literature of the 1990s tech bubble. By using this ecological metaphor in the Framework the authors thus reinforce the neoliberal ethos underlying that crisis.

A Google ngram shows that the phrase “information ecosystem” or “information ecology” saw a significant increase in use in the mid-1990s. To understand why the phrase appeared at that time — why librarians and others began to envision information with an environmental metaphor — we have to look at an article from 1993 that brought ecological imagery to the business world.
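The trend described above can be probed computationally. As a minimal sketch, here is one way to locate the year a phrase's usage began to climb in a year-to-frequency series; the frequencies below are invented for illustration (they are not real Ngram Viewer data), and `steepest_rise` is a hypothetical helper, not part of any Google API:

```python
def steepest_rise(series):
    """Return the year with the largest year-over-year increase.

    series: dict mapping year -> relative frequency of a phrase.
    """
    years = sorted(series)
    best_year, best_delta = None, float("-inf")
    # Compare each consecutive pair of years and track the biggest jump.
    for prev, curr in zip(years, years[1:]):
        delta = series[curr] - series[prev]
        if delta > best_delta:
            best_year, best_delta = curr, delta
    return best_year

# Hypothetical frequencies for "information ecosystem" (illustrative only).
sample = {1990: 0.1, 1992: 0.12, 1994: 0.2, 1996: 0.9, 1998: 1.4, 2000: 1.6}
print(steepest_rise(sample))  # → 1996
```

In practice one could export a phrase's actual yearly frequencies from the Ngram Viewer and feed them to a function like this to date the uptick precisely.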

That year James F. Moore published “Predators and Prey: A New Ecology of Competition” in the Harvard Business Review. The article won the HBR’s award for best article of the year, and the author expanded it into a book, The Death of Competition: Leadership and Strategy in the Age of Business Ecosystems. Moore argued that the old model of understanding competition between businesses, as a simple head-to-head fight for market share within an industry, was outmoded. Instead, businesses should be thought of as parts of a “business ecosystem” that cuts across many industries. Within this “business ecosystem,” companies “co-evolve capabilities around a new innovation: they work cooperatively and competitively to support new products, satisfy customer needs, and eventually incorporate the next round of innovations.” At times, the ecosystems themselves might come into competition. Moore likened this to the border between a hardwood forest and a grassland.10

It is innovation that takes the place of evolutionary changes in Moore’s business ecosystems. Moore argues that we have to accept the collapse of business ecosystems as a fact of life. Instead of propping up old ecosystems, we should help those individuals affected make their way into newer, healthier ecosystems. The key to making this transition work is laissez-faire capitalism: “it’s only essential that competition among them is fierce and fair — and that the fittest survive.”

This last phrase is a tell. “Survival of the fittest” is a famous phrase coined by Herbert Spencer, and not Charles Darwin himself. Spencer, a philosopher, took up Darwinian thinking to argue for the application of evolutionary ideas to society and politics. “Social Darwinism,” as it later came to be called, was the intellectual justification for decades of foreign colonization and internal racial oppression in the late nineteenth and early twentieth centuries.

Nor was Darwin averse to the use of evolutionary ideas in this manner. Gregory Claeys has argued that both Darwin and Spencer were influenced by the Victorian intellectual culture of their time, which viewed society through a lens crafted by the Enlightenment thinker Thomas Malthus.

According to Claeys,

Malthus viewed society in terms of an organic metaphor in which similar laws governed both animal and human worlds. He strongly distinguished between people who benefitted society (as defined in terms of productivity) and those who did not, and he defined rights as derived solely from productivity, competition-as-natural-selection dictated the survival of the “fittest,” and the starvation of the less successful, unless other factors intervened. We do not, of course, have a theory of inherited characteristics in which this “fitness” is transmitted, but we do very nearly have the symbolic imagery, so suitable to an age that prized usefulness above all else, in which such a concept functioned not as science, but as social theory.11

Our age, too, prizes “usefulness” above all else. Evolutionary metaphors are everywhere in our culture. They’re so pervasive that in an article about the dark underpinnings of evolutionary metaphors Claeys (to all appearances unwittingly) used one himself, referring to “intellectual historians concerned with how ideas themselves evolve.” For Moore to use a fairly complex evolutionary metaphor to describe the world of business was no more than tycoons and corporate thinkers had been doing since the Gilded Age. But the ecological metaphor would itself become pervasive, creeping into the library world via the high-tech business press.

Moore’s particular interest was in high-tech companies. The running narrative throughout the article was the rise of the personal computer industry. Apple, IBM and Tandy were discussed in great detail, while Wal-Mart and the automobile industry were relegated to sidebars. And the technology industry embraced Moore’s analysis. The tech press quickly adopted Moore’s terminology of ecologies that cut across industries. In 1997 another business thinker, Thomas Davenport, adapted Moore’s ideas to corporate information systems. His book, Information Ecology: Mastering the Information and Knowledge Environment, was excerpted in CIO magazine’s May 1997 issue.

Davenport’s “information ecology”:

emphasizes an organization’s entire information environment. It addresses all of a firm’s values and beliefs about information (culture); how people actually use the information and what they do with it (behavior and work processes); the pitfalls that can interfere with information sharing (politics); and what information systems are already in place (yes, finally, technology).

Davenport’s information ecology focused on the machines behind the information as much as the information itself. So too did the articles in a 1998 special issue of Cultural Resource Management newsletter, which marked the spot at which “information ecology” crossed over into the world of libraries and archives as the “information ecosystem.” Diane Vogt-O’Connor’s introductory article “The Information Ecosystem” took Davenport’s definition of “information ecology” as an epigraph. She echoed neoliberal crisis rhetoric, stating that the “Cultural Resource Information Ecosystem is imperiled by increasing costs, decreased budgets, fewer staff, more users, burgeoning information, increasingly unstable information formats, changing professional information standards and practices, revised laws on fair use and copyright, and institutional restructuring and instability.” Vogt-O’Connor put the journal’s core audience on notice that they were now squarely placed in the new economy. “At the end of the 20th century,” she wrote, “cultural resource managers have become knowledge workers.” Richard Pearce-Moses, in an article on “The Information Ecology of Archives,” echoed this sentiment, explaining that archivists must now work together with IT staff — but in a workplace more like the IT staffer’s than the archive: “I believe that in the evolving high-tech information ecosystem, a savvy manager will look at the strengths of these two disciplines and forge a new alliance between them.”

What we can conclude from this history is that “ecosystem” or “ecology” is a near-infinitely malleable metaphor. That malleability has made it perfect for inserting into discussions of the unknown. From the mid-1990s on, it has been used as a way of signaling that, though there is a seemingly limitless amount of information, here is a way to think about that information as a whole. By describing information as within an ecosystem, we have defined its characteristics and its boundaries, and we understand the connections among its components. We don’t necessarily control all the aspects of the ecosystem — but we can model them.

That acknowledgement that we don’t control the network has another implication: that there is change in the “information ecosystem,” change driven over time by evolutionary processes. The environmental conditions change, and a given organism either has the right traits to thrive in the new environment, or it does not, and dies. Information, then, has an evolutionary value; higher-valued information survives at the cost of lower-valued.

Marketplace rhetoric

Perhaps not coincidentally, “Information has Value” is now a part of the 2014 Framework, added for the second draft published in May. “Information has Value” is one of the six frames around which the new model of information literacy is built. In this section, the phrase “information ecosystem” does not appear. Instead, the metaphor used is “information marketplace.” The neoliberal connotations of the “information marketplace,” in a section titled “Information has Value,” are too obvious to require much discussion.

Neoliberal discourse tends to reduce everything to markets, and information is no exception. Indeed, both “information marketplace” and “information ecosystem” work in very similar ways. Both “marketplace” and “ecosystem” are vague metaphors suggesting an ability to model the interaction of their contents, if not see it all at once. Both suggest that within that space interactions are continually taking place, the results of which define the value of the interacting components. In an “information marketplace,” information has value only to the point that the owner and the purchaser agree it does; moreover, that value is constantly compared to the value of other pieces of information. In an “information ecosystem” information has value only to the point that it is adapted to the current environmental conditions. In both, value is defined by comparison and equivalence; without those there is no way to define value.

The compatibility of these two models — the environment and the marketplace — is well-documented in historical literature. For elites, the evolutionary model has served to retroactively justify the hierarchy of society — the wealthiest and the most successful must have been the most fit, while those in ranks below were progressively less fit. Similarly, a marketplace rewards, impartially, the most valuable goods. These two models have popped up throughout the twentieth and twenty-first centuries whenever someone wanted to justify existing conditions as natural and proper. And even Herbert Spencer saw them as intimately connected. He believed that civilized societies eventually ceased to struggle through warfare and instead competed in the marketplace. The market was thus a natural outgrowth of evolutionary processes.12

Such an uncritical approach to how we describe information thus serves to justify its current state. Information today is largely a commodity. We have an internet that continually walls off portions: newspaper subscriptions, digital versions of books, and especially scholarly publications. The portions that are free we tell students to look upon with suspicion. Consider this: an encyclopedia exists on the internet, free to access, free for anyone to correct or to comment upon, and in many different languages. We view it with suspicion precisely because it is open and free.

Suspicion of free sources extends beyond Wikipedia. At an information literacy instruction workshop for teachers I attended this summer, the instructors wanted to discuss “authority” and how we teach it to students. They gave us two items about wind turbines: one the front page of a scholarly journal article, the other a printout of someone’s blog about the health effects they’d experienced from living near a wind farm. You didn’t need to read the text; the layout alone told you that the former was the “authoritative” one. The blog’s design, in contrast, would be familiar to anyone who was on Blogger circa 2008: blog title and description in a too-large rectangular box at the top of the page, fonts suitable for viewing on low-resolution devices like a budget mid-2000s PC monitor.

Read the blog. Read a few posts from 2009 or so, when the author was posting regularly. Put yourself in their shoes. Some quick impressions: they live in the Ontario countryside, they have a wind turbine near them and they don’t understand it. They try to communicate, not just through the blog but in other ways, but the very medium of their communication marks them as not to be taken seriously.

Consider just this post: A neighbor has written to the local government and “Nik,” with the neighbor’s permission, reprints the piece. The neighbor writes at the end “Please don’t get us wrong we are all for green energy anything to help the planet it has been damaged enough, but when do we say wait a minute our health and way of life comes first.” They understand the justification for the wind turbines, but they don’t have a voice beyond their neighbor’s blog and a letter that may or may not have been read by local officials.

Nik and his neighbor have run afoul of ideas we take for granted about the relative value of information. Because the format and the context of their writing are irregular, the content is automatically discounted. Scholarly articles have passed a competitive process of peer review. The precisely-formatted pages of the journal become, in theory, the outward sign of the article’s innate value. But in an ecosystem or a marketplace of information it is precisely the format and context that is valued. Because something is on a free blog whose template hasn’t been changed since 2008, it is automatically of less worth than a paywalled academic article. “Marketplaces” and “ecosystems” of information thus serve to justify existing inequalities of access, both to content and to publication.13


We know that by privileging design and credentials over content we obscure these power relations, and that reliance on appeals to “expertise” and “authority” is an important feature of neoliberal rhetoric. And yet, teaching students to identify the outward forms of reliable information is key to the whole concept of information literacy. Since students cannot yet judge the contents of a scholarly work without prior experience in that field, we show them how to first identify the credentials that indicate that a work is scholarly.

The 2014 Framework, like any other information literacy standards, must confront this paradox. The “Authority is Constructed and Contextual” frame takes on the problem in the most direct way. It acknowledges credentialism to be only a substitute for expertise: “The novice researcher may need to rely on superficial indicators of authority such as type of publication or author credentials where experts recognize schools of thought or discipline-specific paradigms.”

To equate expertise with “recognizing schools of thought or discipline-specific paradigms” is just to make it a slightly more sophisticated form of credentialism. To be sure, it’s a step towards expertise. Real academic expertise is born from immersion in a subject to the point that the meanings of these labels break down. What this threshold concept offers is not expertise, but the credentialism of first-year graduate students establishing their internal pecking order over a pitcher of beer.

Together the six frames describe the signs by which we know that a student has passed from “novice” to “expert.” But “expert” is never defined. Instead, each frame contains a brief description of how an expert understands information. Among these descriptions are:

  • “The expert understands that there may not be a single uncontested answer to a query and, hence, is inclined to seek out the many perspectives in a scholarly conversation, not merely the one with which the expert already agrees.”
  • “Experts see inquiry as a process that focuses on problems or questions in a discipline or between disciplines that are open or unresolved.”
  • “Experts understand that authority is the degree of trust that is bestowed and as such, authority is both contextual and constructed.”
  • “The expert understands that the quality and usefulness of a given piece of information is determined by the processes that went into making it.”

These descriptions represent an improvement over how many incoming college students understand information. Yet the “expert” described in the 2014 Framework is really no more competent than the “information-literate student” that is the subject of the 2000 Standards. The difference is in the verbs: the “expert” understands, the “information-literate student” merely does. The expertise offered by the Framework is at best a first step.

Further, the Framework still neglects the power relations that govern access to the resources necessary to take part in the process of becoming expert. Seale’s argument about the Standards holds true for the Framework: without discussion of the causes of specific inequalities, the discussion slides towards a blame of the individual for not taking advantage of the opportunity to become information-literate.

The Framework and the Walled Garden

This article has so far highlighted the continuities between the Framework and earlier standards for information literacy. But there is one particular difference that I would like to explore. All the ACRL’s information literacy documents since 1989 have invoked the threat of a looming crisis in order to spur action. In previous documents this crisis has been a crisis of overabundance: the “proliferating information resources” of the Standards or the “mushroom[ing] beyond everyone’s wildest imagination” of the 1998 Progress Report.

The Framework, in contrast, threatens us not with overwhelming information, but with merely a “rapidly changing higher education environment” and “dynamic and often uncertain information ecosystem.” This change in rhetoric likely results from the segregation of “reliable” resources from the rest of the Internet over the past fifteen years. Scholarly journals are hidden from public view within subscription databases. Major newspapers put their archives in those same databases and their current articles behind paywalls. Librarians make of these databases a virtue, telling students that by using them they will find only reliable sources. And when we do admit that reliable work can be found beyond our databases, we show them Google Scholar, itself designed to search only for scholarly sources.

The Framework thus not only assumes but is predicated upon the continuance of the current system of walled gardens. Its conception of information literacy is about knowing not how but where to find gold untainted by dross. It redefines expertise as little more than knowing how to find one’s way around these walled gardens, and to identify when one has stepped outside. In short, the “information ecosystem” and “information marketplace” metaphors naturalize the enclosure of what might instead have been a commons.


I was inspired to write this article by nina de jesus’s original outline for “Locating the Library within Institutional Oppression.” At the conclusion of that outline, de jesus argues that libraries are potentially key tools of oppression because they target the mind. I believe that relative to its overall place within the library, information literacy is of outsize importance as a potential tool of oppression. Information literacy does not merely target the contents of the mind but consciously tries to change individuals’ cognitive processes. This is especially true of the 2014 Framework, which hinges on “threshold concepts” — “those ideas in any discipline that are passageways or portals to enlarged understanding or ways of thinking and practicing within that discipline.”

In this article I have tried to show why it is important for librarians to resist the neoliberal rhetoric of information literacy, and the particulars of that rhetoric deployed by the Framework. The Framework insists on its own necessity due to a supposed crisis. By describing information as embedded in an “ecosystem” or a “marketplace” it naturalizes the present condition of information scarcity. And it makes of that scarcity a virtue by using it as a credential of authority. The alternative is to resist, for ourselves and for our students, by insisting on the possibility of a true commons of information, and by denying the supposed inevitability of neoliberal values and neoliberal librarianship.

The author gratefully acknowledges the efforts of publishing editor Cecily Walker, internal peer reviewer Ellie Collier, and external peer reviewers Nate Enright and Maura Seale.

Works Cited

American Library Association. “Information Literacy Competency Standards for Higher Education,” 2000.

Association of College and Research Libraries. “A Progress Report on Information Literacy: An Update on the American Library Association Presidential Committee on Information Literacy: Final Report,” March 1998.

———. “Framework for Information Literacy for Higher Education (revised draft),” June 2014.

———. “Presidential Committee on Information Literacy: Final Report.” Accessed September 19, 2014.

Bourg, Chris. “The Neoliberal Library: Resistance Is Not Futile.” Feral Librarian, January 16, 2014.

Claeys, Gregory. “The ‘Survival of the Fittest’ and the Origins of Social Darwinism.” Journal of the History of Ideas 61, no. 2 (April 2000): 223. doi:10.2307/3654026.

Cowan, Susanna M. “Information Literacy: The Battle We Won That We Lost?” Portal: Libraries and the Academy 14, no. 1 (2014): 23–32.

Davenport, Thomas H. “The Bigger Picture.” CIO, May 15, 1997.

Davies, Will, and Tom Mills. “Neoliberalism and the End of Politics.” New Left Project, August 22, 2014.

Deleuze, Gilles. “Postscript on the Societies of Control.” October 59 (Winter 1992): 3–7.

Fister, Barbara. “The Illogical Complexity of the Walled-Garden Library.” Library Babel Fish, September 19, 2013.

Harvey, David. “Neoliberalism as Creative Destruction.” The Annals of the American Academy of Political and Social Science 610, no. 1 (March 1, 2007): 21–44. doi:10.1177/0002716206296780.

Hawkins, Mike. Social Darwinism in European and American Thought, 1860-1945: Nature as Model and Nature as Threat. New York: Cambridge University Press, 1997.

Jacobs, Heidi L. M. “Minding the Gaps.” Communications in Information Literacy 7, no. 2 (2013).

Moore, James F. “Predators and Prey: A New Ecology of Competition.” Harvard Business Review 71, no. 3 (1993): 75–86.

Nik. “My Next Door Neighbour Is a Wind Turbine,” 2009-2013.

“Petition on the ACRL Framework,” 2014.

Pearce-Moses, Richard. “The Information Ecology of Archives.” CRM 21, no. 6 (1998): 29–33.

Saunders, Daniel B. “Neoliberal Ideology and Public Higher Education in the United States.” Journal for Critical Education Policy Studies 8, no. 1 (2010): 41–77.

Seale, Maura. “The Neoliberal Library.” In Information Literacy and Social Justice: Radical Professional Praxis, edited by Lua Gregory and Shana Higgins, 39–61. Sacramento, Calif.: Library Juice Press, 2013.

Vogt-O’Connor, Diane. “The Information Ecosystem.” CRM 21, no. 6 (1998): 3–6.

  1. As information literacy standards are added to general education and college accreditation requirements, librarians gain a voice on the committees that shape those standards. Yet their very prominence also means that information literacy can come to define the library’s relationship with the larger institution. See Susanna M. Cowan, “Information Literacy: The Battle We Won That We Lost?,” Portal: Libraries and the Academy 14, no. 1 (2014): 23–32. Thanks to Maura Seale for pointing out that not all academic library instruction programs are so tied to the ACRL’s standards.
  2. Maura Seale, “The Neoliberal Library,” in Information Literacy and Social Justice: Radical Professional Praxis, ed. Lua Gregory and Shana Higgins (Sacramento, Calif.: Library Juice Press, 2013), 51. Homo oeconomicus refers to the model of the human actor used in many modern economic theories, always acting with rational self-interest to better its own state. Seale draws on Daniel B. Saunders, “Neoliberal Ideology and Public Higher Education in the United States,” Journal for Critical Education Policy Studies 8, no. 1 (2010): 41–77.
  3. Seale, “The Neoliberal Library,” 40-46.
  4. The Framework is as of this writing (September 2014) still a work in progress. A first draft was released in February 2014, a second draft in June, and a third draft is scheduled to appear in November.
  5. In particular, Heidi Jacobs has advocated an information literacy that focuses not on outcomes but on “habits of mind,” parallels to which can be seen in the “dispositions” and “threshold concepts” of the Framework. Jacobs argues that teaching habits of mind is potentially democratizing. See Heidi L. M. Jacobs, “Minding the Gaps,” Communications in Information Literacy 7, no. 2 (2013): 103.
  6. Disclaimer: I have signed the petition linked in this paragraph.
  7. David Harvey, “Neoliberalism as Creative Destruction,” The Annals of the American Academy of Political and Social Science 610, no. 1 (March 1, 2007): 37, doi:10.1177/0002716206296780.
  8. Gilles Deleuze, “Postscript on the Societies of Control,” October 59 (Winter 1992): 4.
  9. See also.
  10. James F. Moore, “Predators and Prey: A New Ecology of Competition,” Harvard Business Review 71, no. 3 (1993): 76, 79.
  11. Gregory Claeys, “The ‘Survival of the Fittest’ and the Origins of Social Darwinism,” Journal of the History of Ideas 61, no. 2 (April 2000): 223, doi:10.2307/3654026.
  12. Mike Hawkins, Social Darwinism in European and American Thought, 1860-1945: Nature as Model and Nature as Threat (New York: Cambridge University Press, 1997): 86. Hawkins also notes (p.153) that Karl Marx, a rather well-known critic of the marketplace, rejected the possibility that biological laws could be fruitfully applied to the study of human societies. Thanks to Ellen Adams for the reference.
  13. This also serves to explain the failure of institutional repositories to gain faculty support. Many publishers only allow preprints or postprints to be uploaded. A Word document in twelve-point double-spaced Times New Roman must be unconsciously undervalued by scholars compared to a traditionally-formatted journal article. (Note: The author manages an institutional repository, and has had no more luck than anyone else in overcoming this bias).

Karen Coyle: The book you scroll

planet code4lib - Tue, 2014-09-23 21:32
I was traveling in Italy where I spend a lot of time in bookstores. I'm looking not only for books to read, but to discover new authors, since Italian bookstores are filled with translations of authors that I rarely see in the few bookstores remaining in my home town of Berkeley, CA. While there I came across something that I find fascinating: the flipback book. These books are small - the one I picked up is about 4 3/4" x 3 1/4". It feels like a good-sized package of post-it notes in your hand.

 From the outside, other than its size, it looks "normal," although the cover design is in landscape rather than portrait orientation.

The surprise is when you open the book. The first thing you notice is that you read the book top-to-bottom across two pages. It's almost like scrolling on a web page, because you move the pages up, not across.

The other thing is that they are incredibly compact. The paper is thin, and some of the books contain entire trilogies while being only about 2 to 2.5 inches thick.

Because there is no gutter between the two pages, you essentially get a quantity of text equal to what you get on a regular book page. Oddly, the two contiguous pages are numbered as separate pages, although only the odd numbers are actually printed, so you have pages 37, 39, 41, etc. However, the actual number of page openings is about the same as in the paperback book.

The font is a sans serif, similar to many used online, so the whole thing feels like a paper book imitating a computer screen.

I haven't read the book yet, so I don't know if the reading experience is pleasing. But I am amazed that someone has found a way to reinvent the print book after all these years. Patented, of course.

Only a handful of titles are available so far, but an Amazon search on "flipback" brings up a few.

District Dispatch: Lebanese librarians visit ALA Washington Office

planet code4lib - Tue, 2014-09-23 14:19

ALA staff with the Lebanese librarians.

Last week, the American Library Association (ALA) Washington Office hosted librarians from Lebanon who are visiting the United States to learn about library practices and futures. Our visitors, May El Okaily Ep Riad Hassan (Baakleen National Library) and Carole Sahyoun (Library of the Municipality of Zahleh), are participants in the U.S. State Department’s International Visitor Leadership Program. Through short-term visits to the U.S., foreign leaders in a variety of fields experience our country firsthand and cultivate professional relationships.

The visitors’ agenda was wide-ranging, with particular interest in digital content and technology. Topics included ebooks, online databases, webinars, digitization, maker spaces, 3D printing, and libraries as publishers. Library advocacy and various aspects of ALA were also discussed.

Adam Eisgrau, Carrie Russell, Charlie Wapner, and I represented ALA. Hosting visitors from abroad is a regular responsibility of the Office, and we’ve met with librarians from many other countries around the world.

The post Lebanese librarians visit ALA Washington Office appeared first on District Dispatch.

LITA: An Interview with LITA Emerging Leader Kyle Denlinger

planet code4lib - Tue, 2014-09-23 13:00

1. Tell us about your library job.  What do you love most about it?

My job as eLearning Librarian is equal parts project manager, instructional designer, information literacy teacher, and instructional technologist, with some multimedia producer and reference librarian thrown in to keep things interesting. My main initiative right now is the continuing development of ZSRx, my library’s series of open online courses for Wake Forest alumni and parents. What I love most about my job is that I’m empowered to act on big ideas, I get to do a bunch of creative work, and I get to do it all alongside some of the best coworkers and faculty colleagues you could find anywhere.

2. Where do you see yourself going from here?

I would *love* to eventually head up a team that serves as a resource for faculty who want to better integrate technology and library resources into their teaching in effective and creative ways. This team would handle everything from software training to multimedia production to instructional design for online, blended, and face-to-face courses.

3. Why did you apply to be an Emerging Leader?  What are your big takeaways from the ALA-level activities so far?

I applied to the EL program because so many of the people I look up to in libraries went through the program at some point in their career, and their experiences seem to have served them well. I can see why–I’ve already made some excellent ALA buddies through EL and have had a few doors open to me since being accepted to the program. My biggest takeaway so far is that decisions are made by those that show up. Big as they are, ALA, and LITA in particular, are really accessible organizations for those that wish to get involved at any level–you just have to show up and be willing to do the work.

4. What have you learned about LITA governance and activities so far?

It was great to be able to sit in on a LITA board meeting and to help plan the #becauseLITA stuff surrounding the Town Meeting at Midwinter. LITA’s emphasis on openness and camaraderie, and the fun-by-default nature of most LITA activities, make me happy that it’s my professional home. I can’t say I’m an expert on LITA governance (yet), but I do know that I’m able to be involved at even the highest levels if I so choose.

5. What’s your favorite LITA moment?  What would you like to do next in the organization?

My favorite LITA moment comes from my least-favorite LITA moment (or, rather, a LITA non-moment). At the Top Tech Trends panel at Annual in Chicago, Char Booth gave me and a project I’d been working on a very prominent shout-out in front of a full room. This was great, but it would have been even better if I’d, you know, ATTENDED THE PANEL. I’d decided to skip it to get an early dinner with a friend. I found out through a small flood of excited texts from friends who were there, and at the LITA Happy Hour that evening, almost everyone I knew was super excited for me. I think someone bought me a beer. Such is LITA.

The thing I’m excited about getting involved in next is the shiny new User Experience IG, which everyone should join. Shameless plug:

ZBW German National Library of Economics: Other editions of this work: An experiment with OCLC's LOD work identifiers

planet code4lib - Tue, 2014-09-23 11:39

Large library collections, and even more so portals or discovery systems aggregating data from diverse sources, face the problem of duplicate content. Wouldn't it be nice if every edition of a work could be collected under one entry in a result set?

The WorldCat catalogue, provided by OCLC, holds more than 320 million bibliographic records. Since early 2014, OCLC has shared its 197 million work descriptions as Linked Open Data: "A Work is a high-level description of a resource, containing information such as author, name, descriptions, subjects etc., common to all editions of the work. ... In the case of a WorldCat Work description, it also contains [Linked Data] links to individual, oclc numbered, editions already shared in WorldCat." The works and editions carry semantic markup, in particular schema:exampleOfWork/schema:workExample for the relation from edition to work and vice versa. These properties were recently added to the spec, as suggested by the W3C Schema Bib Extend Community Group.

ZBW contributes to WorldCat and has 1.2 million oclc numbers attached to its bibliographic records. So it seemed interesting to find out how many of these editions link to works, and furthermore to other editions of the very same work.

As a basis for our experiment, we extracted the "oclc subset" from EconBiz. Roughly one third of the instances in this subset are in English (and therefore most likely to be included in WorldCat from other sources as well), one third are in other languages (mostly German), and for one third the language is unknown. We randomly selected 100,000 instances from the 1.2 million "oclc subset". We looked up the work id for each of the attached oclc numbers, and in turn the editions of this work.

An example for the work/edition linking

We start with oclc number 247780068, which represents an edition of the book "Changes in the structure of employment with economic development" by Amarjit S. Oberai (WorldCat, EconBiz).

A lookup via

curl -LH "Accept: application/ld+json"

returns data in JSON-LD format. Here is a heavily shortened version (the full data of the example is available on GitHub):

    {
      "exampleOfWork" : "",
      "schema:name" : "Changes in the structure of employment with economic development",
      "schema:datePublished" : "1978",
      "@id" : "",
      "workExample" : [ ... ]
    }

We go up the hierarchy by picking the "exampleOfWork" URI:

curl -LH "Accept: application/ld+json"

and get the data for the work (heavily shortened again)

    {
      "schema:name" : [
        "Changes in the structure of employment with economic development",
        { "@value" : "Changes in the structure of employment with economic development",
          "@language" : "en" },
        { "@value" : "Changes in the structure of employment with economic development /",
          "@language" : "en" },
        "Changes in the structure of employment with economic development /",
        "Changes in the structure of employment with economic development."
      ],
      "@id" : "",
      "workExample" : [ ... ]
    }

As we can observe, different forms of the title of the work (with and without a language tag) are collected in the "schema:name" property - WorldCat does not try to determine an authoritative title for the work. Data from different editions is also collected for authors or subjects. Sometimes literal values are complemented by URIs, for example to VIAF (persons) or LCSH (subjects).

We now can look up other editions of this work, given in a "workExample" property, e.g.

curl -LH "Accept: application/ld+json"

which reveals a later edition of the same work:

    {
      "exampleOfWork" : "",
      "schema:name" : "Changes in the structure of employment with economic development",
      "schema:datePublished" : "1981",
      "@id" : "",
      "schema:bookEdition" : "2nd ed",
      "workExample" : ""
    }

WorldCat itself seems to use this data in its "View all editions and formats" link on the human-readable web pages for the editions. The ISBN-based "workExample" links on the edition level redirect to oclc numbers; their purpose and use seem not to be documented yet.

The experiment

The starting point for the experiment outlined above was the question to what extent such work/edition links exist for a real-world collection like EconBiz. For the randomly selected 100,000 instances (editions) with oclc numbers, we looked up each edition by its URI and extracted the related work (if such a work exists). We then looked up the work and extracted the oclc numbers of all its editions. For each oclc number of the starting set, we saved a list of oclc numbers of other editions as a JSON data structure. This took about 44 hours of runtime. (We deliberately didn't parallelize the network access to avoid overloading the server, and cached results to save some lookups.) For 15 of the lookups we got a 500 "internal server error", for 71 a 404 "not found". These errors occurred on edition as well as on work lookups, with no recognizable pattern. Some random tests revealed that normally a second lookup of the URL was successful. Due to their small number, we ignored these errors in the further course of the experiment.
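
The edition-to-work-to-editions walk just described can be sketched in Python. This is only an illustration, not the script actually used for the experiment; it assumes the JSON-LD shape of the shortened examples above (an optional "@graph" wrapper, plus exampleOfWork/workExample keys), and all URIs passed in are placeholders.

```python
import json
import urllib.request

def fetch_jsonld(uri):
    # Content negotiation, as with `curl -LH "Accept: application/ld+json"`;
    # urllib.request follows redirects by default.
    req = urllib.request.Request(uri, headers={"Accept": "application/ld+json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))

def nodes(doc):
    # JSON-LD responses may wrap their nodes in an "@graph" array.
    return doc["@graph"] if "@graph" in doc else [doc]

def as_list(value):
    # Properties may hold a single value or a list of values.
    return value if isinstance(value, list) else [value]

def work_of(edition_doc, edition_uri):
    # An edition links to its work via schema:exampleOfWork.
    for node in nodes(edition_doc):
        if node.get("@id") == edition_uri and "exampleOfWork" in node:
            return as_list(node["exampleOfWork"])[0]
    return None

def editions_of(work_doc, work_uri):
    # A work links to all of its editions via schema:workExample.
    for node in nodes(work_doc):
        if node.get("@id") == work_uri:
            return as_list(node.get("workExample", []))
    return []
```

For each oclc number of a sample, one would fetch the edition, resolve its work with work_of, fetch the work, and collect editions_of; sequentially and with caching, as in the experiment, to avoid overloading the server.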

In a second step, we evaluated the resulting data with respect to the whole of WorldCat and to the data of our collection. All code and data for the experiment (and for a prior run seven weeks earlier, which did not show significantly different results) is available on GitHub.

The results

For more than 99 % of our test data set we found valid WorldCat work ids. For a total of 880 oclc numbers we couldn't retrieve a work; in 922 cases the work did not link back to the oclc number from which we started. So in this early stage of WorldCat Linked Data (still flagged as "experimental") there seem to be some minor gaps and inconsistencies in the work/edition linkage. Yet, the results show that more than 60 % of the editions in our test set link to a work with at least one other edition within WorldCat.

    Number of editions   re. all OCLC numbers   re. 1,260,337 OCLC numbers
    per work             from WorldCat          from EconBiz
    1                    37847                  92879
    2                    13734                  4826
    3-5                  23697                  1225
    6-10                 14663                  137
    11-50                8563                   50
    51-100               443                    2
    101-9999             173                    1

When we take into account which of these other editions are in the holdings of EconBiz, the number boils down to 6.2 % of the test set.

The resulting edition clusters themselves, the clustering algorithms, and ultimately the cataloging practices they result from require further analysis and discussion. A quick glance at the largest clusters in EconBiz reveals that they result from serials: Indian village surveys, country profiles, or economic analyses for different countries. Whether these clusters in particular make sense to users seems questionable.

How could this be useful?

One aim in a larger subject portal like EconBiz, which merges several data sources, is the reduction of duplicates in the result sets presented to users. Unfortunately, only a minor part (1.2 of 8 million records) of the EconBiz holdings have oclc numbers, and only a fraction of these form clusters within these holdings. So currently the WorldCat work clusters could only be a tiny piece of the de-duplication puzzle. For the development of custom de-duplication algorithms, however, the data may provide a starting point, firstly as a pool of possible example cases, and secondly as a counterpart for statistical analysis of results. (In a recent blog entry with some answers to early questions about the OCLC work entities, Richard Wallis points to OCLC's FRBR Work-Set Algorithm, which has been described in a 2002 D-Lib Magazine article.) Some random samples revealed a situation where de-duplication even for a few instances can be highly helpful: when working papers or other sources have records with and without attached links to the full text, work clusters could be exploited to always display a link to the PDF when an instance/edition is presented to the users.
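
As a sketch of how such work clusters might feed de-duplication and full-text display: records sharing a work id collapse into one result cluster, while records without one (the majority in EconBiz) pass through unchanged. The record structure and the "work"/"fulltext" fields here are invented for illustration, not EconBiz's actual data model.

```python
from collections import defaultdict

def cluster_by_work(records):
    # Group records that share a WorldCat work URI; records without
    # one (the majority, lacking oclc numbers) each stand alone.
    clusters = defaultdict(list)
    no_work = []
    for rec in records:
        if rec.get("work"):
            clusters[rec["work"]].append(rec)
        else:
            no_work.append([rec])
    return list(clusters.values()) + no_work

records = [
    {"id": "A", "work": "w123", "fulltext": None},
    {"id": "B", "work": "w123", "fulltext": "http://example.org/a.pdf"},
    {"id": "C", "work": None},
]
clusters = cluster_by_work(records)

# Within a cluster, a full-text link on any member can be surfaced for all:
pdf_links = {rec["id"]: next((m["fulltext"] for m in cluster if m.get("fulltext")), None)
             for cluster in clusters for rec in cluster}
```

Here record A, which has no PDF link of its own, would inherit the link from its cluster-mate B when displayed.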

Another area where work clusters could be useful immediately is the ranking of search results. If we suppose that works for which multiple instances exist are more relevant, we can use that as a ranking factor (surely among others). Since it does not make a crucial difference where these editions exist, we can here base such an assumption on the whole of WorldCat, and thus add such a ranking factor for a much larger part of our existing data.
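
One possible shape for such a ranking factor, purely as an assumption rather than an implemented EconBiz feature: a log-damped boost based on the WorldCat-wide edition count, so works with many editions rank somewhat higher while the huge serial clusters mentioned above cannot dominate. The weight value is arbitrary.

```python
import math

def edition_boost(n_editions, weight=0.2):
    # 1.0 for a single-edition work; grows only logarithmically
    # with the edition count, so large clusters gain a bounded advantage.
    return 1.0 + weight * math.log1p(max(n_editions - 1, 0))

def ranked_score(base_score, n_editions):
    # Multiply the base relevance score by the boost
    # (in practice one factor among many in the ranking formula).
    return base_score * edition_boost(n_editions)
```

In such a scheme the boost would be precomputed at indexing time from the stored edition lists, rather than looked up per query.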

This does not even touch the most exciting field for exploitation: the descriptions on the work as well as on the edition level. For subject indexing and classification, this has been investigated by Magnus Pfeffer (slides) and Kai Eckert, e.g. in the UB Mannheim Linked Data Service, and continued in the Culturegraph project. Possible applications are the collection and merging of index terms or classes from different editions of a work, or perhaps also an evaluation of indexing consistency. Heidrun Wiesenmüller suggested the use of work clusters for enrichment with personal name authority data, or even enrichment of the authority data itself (slides, in German).

OCLC has announced further development of the service: "WorldCat Works will continue to be enhanced over the coming months and years.  The data will get cleaner, the descriptions will get richer, and the linking will get better."


With thanks to Kirsten Jeude, Kim Plassmeier and Timo Borst for hints and discussions.


Open Knowledge Foundation: Join our first Regional Community Mentoring and Skill-share Gathering

planet code4lib - Tue, 2014-09-23 10:51

We are glad to announce our first official Community Mentoring and Skillshare Gathering, to be held in Mexico City on October 3, 2014 in connection with the ConDatos and AbreLatam conferences. The event will kick off a series of similar regional events on other continents later this year and into next, and will serve to enhance our virtual skill sharing and mentoring activities.

The Community Mentoring and Skillshare Gathering is a 1-day event scheduled to take place right after the ConDatos and AbreLatam conferences in Mexico next week. Open Knowledge community members from Latin America will join other grassroots open activists from across the region to build relationships, share skills, and find mentors.

The event is a pilot that will explore new ways of supporting the global (and often virtual) open knowledge community by organising face-to-face skill sharing and mentoring activities around relevant regional open events. The intention is to use these gatherings to jumpstart a community-led mentorship programme, an idea that we have been discussing with community members for a few months (see here for more details). The mentorship programme is intended to be largely self-sustainable, community/peer-to-peer driven and of benefit to both newcomers and more experienced community members. The programme should run on a volunteer basis, to ensure broad commitment and inclusivity. We are honoured to be able to experiment with this idea in collaboration with our community in and around Latin America following AbreLatam/ConDatos this month in Mexico, and hope to learn a lot about the needs and desires of community members seeking mentorship – as well as how we can make the most of in-person gatherings to strengthen both our skills and community.

Powered by the Partnership for Open Data

The series of events is organized in close collaboration with the Partnership for Open Data and in partnership with SocialTIC. One goal of the Partnership for Open Data is to support the development of strong open knowledge communities around the world, and the aim of the community summit will be to run a number of peer-to-peer skillshares designed to strengthen the open community’s ability to continue to grow and diversify.

A day full of activities

Activities at the event will include a mentoring brainstorming session, where we will discuss how and why mentoring is needed in the network, actual skill sharing sessions as well as some time dedicated to discussing how we continue to support and teach each other online after we return to our respective cities and countries.

In this same spirit of peer-to-peer support, the Partnership for Open Data and Open Knowledge will host a skillshare corner at ConDatos. One of the activities that we will be running is an Open Data Census community sprint, in which we will try to expand the community of contributors to the open data census.

Community building of the programme and upcoming community calls

In order to ensure that we make the most of the time we have all together and put together a programme that suits the needs of the Latin American open community, we would like to invite you to participate in one of the following community calls to discuss the ideas mentioned above:

  • Tuesday, 23rd of September, 6 pm CET/12 pm EDT (HANGOUT LINK)
  • Thursday, 25th of September, 9 pm CET/3 pm EDT (HANGOUT LINK)

If you are unable to attend one of the above calls but would like to suggest ideas, we would love to hear from you via this idea submission form or on email local (at) okfn (dot) org.

How to join

If you are in Mexico next week and would like to participate, please drop us a line at local [at] okfn [dot] org.

Lastly, we would like to extend a warm thank you to our friends at SocialTIC for helping to make this happen! We are looking forward to seeing you all next week in Mexico!

Terry Reese: MarcEdit 6.0 Update

planet code4lib - Tue, 2014-09-23 04:39

This update is coming a little later than I’d hoped, but I’ve been busying myself with a couple of projects that have been consuming my off hours. Today’s update deals with a handful of issues and also provides some new functionality.


  • Bug Fix: Edit Field Function: Field recursion switch (/r) was broken in the last update.  This has been corrected.
  • Enhancement: Edit Field Function: LDR editing support has been added to the function.
  • Enhancement: MarcEditor: Keyboard shortcut for jump to page and jump to records have been added.
  • Enhancement: RDA Helper:  Added a new option to the 260/264 translation that enables users to always utilize a copyright or phonograph symbol.
  • Enhancement: RDA Helper:  Updated the RDA Helper to support the manufacturer or distributor subfields.  When the program encounters these in the 260, the appropriate 264 with second indicator 2 or 3 will be created.
  • Enhancement: RDA Helper:  The new option has been added to the task list.
  • Enhancement: Linked Records Tool:  I’ve added a new option to the Linked Records tool allowing the program to embed $0 links to VIAF.
  • Enhancement: MARCSplit:  The save directory now automatically sets to the desktop rather than the root drive.

You can get the updates via MarcEdit’s automated update tool or at:


DuraSpace News: CINECA and DSpace

planet code4lib - Tue, 2014-09-23 00:00

Bologna, Italy

In August Cineca started work on two different projects related to DSpace:

1) As a DSpace Registered Service Provider, the Consortium worked on the update to DSpace 4.1 of the Institutional Repository of the National Institute of Education of Singapore (NIE);

Eric Hellman: Attribution Meets Open Access

planet code4lib - Mon, 2014-09-22 18:24
Credits Dancer (see on YouTube)

It drives my kids crazy, but I always stay for the credits after the movie. I'm writing this while on a plane over the Atlantic, and I just watched Wes Anderson's Grand Budapest Hotel. Among the usual credits for the actors, the producers, the directors, writers, editors, composers, designers, musicians, key grips, best boys, animators, model makers and the like, Michael Taylor is credited as the painter of "Johannes von Hoytl's Boy with Apple" along with his model, Ed Munro. "The House of Waris" is credited for "Brass Knuckle-dusters and Crossed Key Pins". There's a "Drapesmaster", a Millener and two "Key Costume Cutters". There are even "Photochrom images courtesy of The Library of Congress". To reward me for watching to the end there's a funny Russian dancer over the balalaika chorus.

It says a lot about the movie industry that so much work has gone into the credits. They are a fitting recognition of the miracle of a myriad of talents collaborating to result in a Hollywood movie. But the maturity of the film industry is also reflected in the standardization of the form of this attribution.

The importance of attribution is similarly reflected by its presence in each of the Creative Commons licenses. But many of the digital media that have adopted Creative Commons licensing have not reached the sort of attribution maturity seen in the film industry. The book publishing industry, for example, hides the valuable contributions of copy editors, jacket designers, research assistants and others. It's standard practice to attribute a work to the author alone. If someone spends time to make an ebook work well, that generally doesn't get a credit alongside the author.

The Creative Commons licenses require attribution, but don't specify much about how the attribution is to be done, and it's taken a while for media specific conventions to emerge. It seems to be accepted practice, for example, that CC licensed blog posts require a back-link to the original blog post. People who use CC licensed photos to illustrate a slide presentation typically have a credits page with links to the sources at the end.

Signs of maturation were omnipresent at the 6th Conference for Open Access Scholarly Publishing, which I'm just returning from. Prominent in the list of achievements was the announcement of a "Shared Statement and Community Principles on Expectations of Scholarly Standards on Attribution", a set of attribution principles for open access scholarly publications, signed by all the important open access scholarly publishers.

The four agreed-upon principles are as follows:

  1. Researchers choosing Open Access and using liberal licenses do so because they wish to maximise access to and re-use of their work. We acknowledge the tradition of both freely giving knowledge to our communities and also the expectation that contributions will be respected and that full credit is given according to scholarly norms.
  2. Authors choose Creative Commons licenses in part to ensure attribution and the assignment of credit. The community expects that where a work is reprinted, collected, aggregated or otherwise re-used substantially as a whole that the original source, location and free availability of the original version will be both made explicit and emphasised.
  3. The community expects that where modifications have been made to an article that this will be made explicit and every practicable effort will be made to make the nature and scope of modifications explicit. Where a derivative is digital all practicable efforts should be made to make comparison with the original version as easy as possible for the user.
  4. The community assumes, consistent with the terms of the Creative Commons licenses, that unless noted otherwise authors have not endorsed any republication or modification of their original work. Where authors have explicitly endorsed the republication or modified version this should be made explicit in a way which is separate to the attribution.

These principles, and the implementation guidelines that will result from further consultations, are particularly needed because many scholars, while supporting the reuse enabled by CC BY licenses, are concerned about possible misuse. The principles reinforce that when a work is modified, the substance of the modifications should be made clear to the end user, and that further, there must be no implication that republication carries any endorsement by the original authors.

One thing that is likely to emerge from this process is the use of CrossRef DOIs as attribution URLs. DOIs can be resolved (via redirection) to an authoritative web page and can be maintained by the publisher, so that links needn't break when content moves.

As scholarly content gets remixed, revised and repurposed, there will increasingly be a need to track contributions every bit as elaborate as for Grand Budapest Hotel. Imagine a paper by Alice analyzing data from Bob on a sample by Carol, with later corrections by Eve. Luckily we live in the future and there's already a technology and user framework that shows how it can be done. That technology, the future of attribution (I hope), is Distributed Version Control. A subsequent post will discuss why every serious publisher needs to understand GitHub.

The emphasis on community in the "Shared Statement" is vitally important. With consultation and shared values, we'll soon all be dancing at the end of the credits.

Manage Metadata (Diane Hillmann and Jon Phipps): Late to the party?

planet code4lib - Mon, 2014-09-22 17:01

In my post last week, I mentioned a paper that Gordon Dunsire, Jon Phipps and I had written for the IFLA Satellite Meeting in Paris last month “Linked Data in Libraries: Let’s make it happen!” (note the videos!). I wanted to talk about the paper and why we wrote it, but I’m not just going to summarize it–I wouldn’t want to spoil the paper for anyone!

The paper, “Versioning Vocabularies in a Linked Data World”, was written in part because we’d seen far too many examples of vocabulary management and distribution that paid little or no attention to the necessity to maintain vocabularies over time and to make them available (over and over again, of course) to the data providers using them. It goes without saying that the vocabularies were expected to change over time, but in too many cases, vocabulary owners distributed changes in document form, or as files with new data embedded but no indication of what had changed, or worse: nothing.

We have been thinking about this problem for a long time. Even the earliest instance of the NSDL Registry (precursor of the current Open Metadata Registry, or OMR, as we like to call it) incorporated a ‘history’ view of the data, basically the ‘who, what, when’ of every change made in every vocabulary. Later on, we added the ability to declare ‘versions’ of the vocabularies themselves, taking advantage of that granular history data, for those trying to manage the updating of their ‘product’ in a rational manner. Sadly enough, not very many of our users took advantage of that feature, and we’re not entirely sure why not, but there it was. Jon has always been frustrated with our first passes at this problem, and after Gordon and I discussed the problem with others at DC-2013 last year, and my rant about the lack of version control on came out, it seemed time to think about the issue again.

At that point we were also planning our own big time versioning event: the unpublished first version of the RDA Element Sets were about to make their re-debut in ‘published’ form, reorganized, and with new URIs. Jon was also working on the GitHub connection with the OMR underlying the new RDA Registry site, working in a more automated mode as planned. He and Gordon and I had been discussing a new approach for some time, based on the way software is versioned and distributed, which is well-supported in Git and GitHub. So, as we drove back from ALA Midwinter in Philadelphia in January of last year, Jon and I blocked out the paper we’d agreed to do with Gordon on how we thought versioning should work in the semantic vocabulary world.

Consider: how do all of us computer nerds update our applications? Do we have to go to all sorts of websites (sometimes, but not always, prompted by an email) to determine which applications have changed and invoke an update? Well, sure, sometimes we do (particularly when they want more money!), but since the advent of the App Store and Google Play we can do our updates much more easily. For the most part those updates are ‘pushed’ to us: we decide whether we want to update or not, we are told in a general way what has changed, and we click … and it’s done.

This is the way updates should happen in the Semantic Web data world, increasingly dependent on element sets and value vocabularies to provide descriptions of products of all kinds in order to provide access, drive sales or eyeballs, or support effective connections between resources. Now that we’re all reconciled to using URIs instead of text (even if our data hasn’t yet made that transition), shouldn’t we consider an important upside of that change, a simpler and more useful way to update our data?

So, I’ll quit there–go read the paper and let us know what you think. Don’t miss Gordon’s slides from Paris, available on his website. Note especially the last question on his final slide: “Is it time to get serious about linked data management?” We think it’s past time. After all, ‘management’ is our middle name.

LITA: LITA Members: take the LITA Education Survey

planet code4lib - Mon, 2014-09-22 16:42

LITA members, please participate in the LITA Education Survey. The survey was first sent out two weeks ago to all current LITA members. Another reminder will appear in LITA members’ email inboxes soon, or you can click the links in this posting. The survey should take no more than 10 minutes of your time and will help your LITA colleagues develop continuing education programs that meet your needs.

LITA Education Survey 2014

In our continuing efforts to make LITA education offerings meet the needs and wishes of our membership, we ask that you, the LITA members, take a few minutes to fill out the linked survey. We are looking for information on education offerings you have participated in recently and would like to know what topics, methods and calendar times work best for you.

The more responses we get, the better chance we have to create education offerings that provide excellent value to you, the LITA membership. We appreciate you taking 10 minutes of your time to complete the LITA Education Survey 2014.

Thank you for your time and input.

LITA Education Committee

Library of Congress: The Signal: 18 Years of Kairos Webtexts: An interview with Douglas Eyman & Cheryl E. Ball

planet code4lib - Mon, 2014-09-22 14:05

Cheryl E. Ball, associate professor of digital publishing studies at West Virginia University, is editor of Kairos

Since 1996 the electronic journal Kairos has published a diverse range of webtexts, scholarly pieces made up of a range of media and hypermedia. The journal’s 18 years of digital texts are interesting both in their own right and as a collection of complex works of digital scholarship that illustrate a range of sophisticated issues in ensuring long-term access to new modes of publication. Douglas Eyman, Associate Professor of Writing and Rhetoric at George Mason University, is senior editor and publisher of Kairos. Cheryl E. Ball, associate professor of digital publishing studies at West Virginia University, is editor of Kairos. In this Insights Interview, I am excited to learn about the kinds of issues that this body of work exposes for considering long-term access to born-digital modes of scholarship. [There was also a presentation on Kairos at the Digital Preservation 2014 meeting.]

Trevor: Could you describe Kairos a bit for folks who aren’t familiar with it? In particular, could you tell us a bit about what webtexts are and how the journal functions and operates?

Doug: Webtexts are texts that are designed to take advantage of the web-as-concept, web-as-medium, and web-as-platform. Webtexts should engage a range of media and modes and the design choices made by the webtext author or authors should be an integral part of the overall argument being presented. One of our goals (that we’ve met with some success I think) is to publish works that can’t be printed out — that is, we don’t accept traditional print-oriented articles and we don’t post PDFs. We publish scholarly webtexts that address theoretical, methodological or pedagogical issues which surface at the intersections of rhetoric and technology, with a strong interest in the teaching of writing and rhetoric in digital venues.

Douglas Eyman, Associate Professor of Writing and Rhetoric at George Mason University, is senior editor and publisher of Kairos

(As an aside, there was a debate in 1997-98 about whether or not we were publishing hypertexts, which then tended to be available in proprietary formats and platforms rather than freely on the WWW; founding editor Mick Doherty argued that we were publishing much more than only hypertexts, so we moved from calling what we published ‘hypertexts’ to ‘webtexts’; Mick tells that story in the 3.1 loggingon column.)

Cheryl: WDS (What Doug said). One of the ways I explain webtexts to potential authors and administrators is that the design of a webtext should, ideally, enact the authors’ scholarly arguments, so that the form and content of the work are inseparable.

Doug: The journal was started by an intrepid group of graduate students, and we’ve kept a fairly DIY approach since that first issue appeared on New Year’s day in 1996. All of our staff contribute their time and talents and help us to publish innovative work in return for professional/field recognition, so we are able to sustain a complex venture with a fairly unique economic model where the journal neither takes in nor spends any funds. We also don’t belong to any parent organization or institution, and this allows us to be flexible in terms of how the editors choose to shape what the journal is and what it does.

Cheryl: We are lucky to have a dedicated staff who are scattered across (mostly) the US: teacher-scholars who want to volunteer their time to work on the journal, and who implement the best practices of pedagogical models for writing studies into their editorial work. At any given time, we have about 25 people on staff (not counting the editorial board).

Doug: Operationally, the journal functions much like any other peer-reviewed scholarly journal: we accept submissions, review them editorially, pass on the ones that are ready for review to our editorial board, engage the authors in a revision process (depending on the results of the peer-review) and then put each submission through an extensive and rigorous copy-, design-, and code-editing process before final publication. Unlike most other journals, our focus on the importance of design and our interest in publishing a stable and sustainable archive mean that we have to add those extra layers of support for design-editing and code review: our published webtexts need to be accessible, usable and conform to web standards.

Trevor: Could you point us to a few particularly exemplary works in the journal over time for readers to help wrap their heads around what these pieces look like? They could be pieces you think are particularly novel or interesting or challenging or that exemplify trends in the journal. Ideally, you could link to it, describe it and give us a sentence or two about what you find particularly significant about it.

Cheryl: Sure! We sponsor an award every year for Best Webtext, and that’s usually where we send people to find exemplars, such as the ones Doug lists below.

Doug: From our peer-reviewed sections, we point readers to the following webtexts (the first two are especially useful for their focus on the process of webtext authoring and editing):

Cheryl: From our editorially (internally) reviewed sections, here are a few other examples:

Trevor: Given the diverse range of kinds of things people might publish in a webtext, could you tell us a bit about the kinds of requirements you have enforced upfront to try and ensure that the works the journal publishes are likely to persist into the future? For instance, any issues that might come up from embedding material from other sites, or running various kinds of database-driven works or things that might depend on external connections to APIs and such.

Doug: We tend to discourage work that is in proprietary formats (although we have published our fair share of Flash-based webtexts) and we ask our authors to conform to web standards (XHTML or HTML5 now). We think it is critical to be able to archive any and all elements of a given webtext on our server, so even in cases where we’re embedding, for instance, a YouTube video, we have our own copy of that video and its associated transcript.

One of the issues we are wrestling with at the moment is how to improve our archival processes so we don’t rely on third-party sites. We don’t have a streaming video server, so we use YouTube now, but we are looking at other options because YouTube allows large corporations to apply bogus copyright-holder notices to any video they like, regardless of whether there is any infringing content (as an example, an interview with a senior scholar in our field was flagged and taken down by a record company; there wasn’t even any background audio that could account for the notice. And since there’s a presumption of guilt, we have to go through an arduous process to get our videos reinstated.) What’s worse is when the video *isn’t* taken down, but the claimant instead throws ads on top of our authors’ works. That’s actually copyright infringement against us that is supported by YouTube itself.

Another issue is that many of the external links in works we’ve published (particularly in older webtexts) tend to migrate or disappear. We used to replace these, where we could, with links to (aka The Wayback Machine), but we’ve discovered that their archive is corrupted because they allow anyone to remove content from their archive without reason or notice.[1] So, despite its good intentions, it has become completely unstable as a reliable archive. But we don’t, alas, have the resources to host copies of everything that is linked to in our own archives.

Cheryl: Kairos holds the honor within rhetoric and composition of being the longest-running, and most stable, online journal, and our archival and technical policies are a major reason for that. (It should be noted that many potential authors have told us how scary those guidelines look. We are currently rewriting the guidelines to make them more approachable while balancing the need to educate authors on their necessity for scholarly knowledge-making and -preservation on the Web.)

Of course, being that this field is grounded in digital technology, not being able to use some of that technology in a webtext can be a rather large constraint. But our authors are ingenious and industrious. For example, Deborah Balzhiser et al created an HTML-based interface to their webtext that mimicked Facebook’s interface for their 2011 webtext, “The Facebook Papers.” Their self-made interface allowed them to do some rhetorical work in the webtext that Facebook itself wouldn’t have allowed. Plus, it meant we could archive the whole thing on the Kairos server in perpetuity.

Trevor: Could you give us a sense of the scope of the files that make up the issues? For instance, the total number of files, the range of file types you have, the total size of the data, and or a breakdown of the various kinds of file types (image, moving image, recorded sound, text, etc.) that exist in the run of the journal thus far?

Doug: The whole journal is currently around 20 GB; newer issues are larger in terms of data size because there has been an increase in the use of audio and video (luckily, HTML and CSS files don’t take up a whole lot of room, even with a lot of content in them). At last count, there are 50,636 files residing in 4,545 directories (this count includes things like all the system files for WordPress installs and so on). A quick summary of primary file types:

  • HTML: 12247
  • CSS: 1234
  • JPG: 5581
  • PNG: 3470
  • GIF: 7475
  • MP2/3/4: 295
  • MOV: 237
  • PDF: 191

Cheryl: In fact, our presentation at Digital Preservation 2014 this year [was] partly about the various file types we have. A few years ago, we embarked on a metadata-mining project for the back issues of Kairos. Some of the fields we mined included Dublin Core standards such as MIMEtype and DCMIType. DCMIType, for the most part, didn’t reveal too much of interest from our perspective (although I am sure librarians will see it differently!), but the MIMEtype search revealed both the range of filetypes we had published and how that range has changed over the journal’s history. Every webtext has at least one HTML file. Early webtexts (from 1996-2000ish) that have images generally have GIFs and, less prominently, JPEGs. But since PNGs rose to prominence (becoming an international standard in 2003), we began to see more and more of them. The same goes for CSS files around 2006, after web-standards groups started enforcing their use elsewhere on the Web. As we have all this rich data about the history of webtextual design, and too many research questions to cover in our lifetimes, we’ve released the data in Dropbox (until we get our field-specific data repository completed).
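A census like Doug’s is easy to reproduce for any archive directory. The short sketch below is my own illustration, not the script the Kairos team actually used; it counts files by extension using nothing but the Python standard library:

```python
from collections import Counter
from pathlib import Path

def file_census(root):
    """Count files under `root`, recursively, keyed by lowercased extension."""
    return Counter(
        p.suffix.lower().lstrip(".") or "(no extension)"
        for p in Path(root).rglob("*")
        if p.is_file()
    )

# Example: print the most common types, largest first.
# for ext, n in file_census("/path/to/archive").most_common():
#     print(ext, n)
```

Mining MIME types and Dublin Core fields takes more work than this, but even a raw extension count gives a quick profile of how a collection’s formats have shifted over time.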

Trevor: In the 18 years that have transpired since the first issue of Kairos a lot has changed in terms of web standards and functionality. I would be curious to know if you have found any issues with how earlier works render in contemporary web browsers. If so, what is your approach to dealing with that kind of degradation over time?

Cheryl: If we find something broken, we try to fix it as soon as we can. There are lots of 404s to external links that we will never have the time or human resources to fix (anyone want to volunteer??), but if an author or reader notifies us about a problem, we will work with them to correct the glitch. One of the things we seem to fix often is repeating backgrounds. lol. “Back in the day…” when desktop monitors were tiny and resolutions were tinier, it was inconceivable that a background set to repeat at 1200 pixels would ever actually repeat. Now? Ugh.

But we do not change designs for the sake of newer aesthetics. In that respect, the design of a white-text-on-black-background from 1998 is as important a rhetorical point as the author’s words in 1998. And, just as the ideas in our scholarship grow and mature as we do, so do our designs, which have to be read in the historical context of the surrounding scholarship.

Of course, with the bettering of technology also comes our own human degradation in the form of aging and poorer eyesight. We used to mandate webtexts not be designed over 600 pixels wide, to accommodate our old branding system that ran as a 60-pixel frame down the left-hand side of all the webtexts. That would also allow for a little margin around the webtext. Now, designing for specific widths — especially ones that small — seems ludicrous (and too prescriptive), but I often find myself going into authors’ webtexts during the design-editing stage and increasing their typeface size in the CSS so that I can even read it on my laptop. There’s a balance I face, as editor, of retaining the authors’ “voice” through their design and making the webtext accessible to as many readers as possible. Honestly, I don’t think the authors even notice this change.

Trevor: I understand you recently migrated the journal from a custom platform to the Open Journal System platform. Could you tell us a bit about what motivated that move and issues that occurred in that migration?

Doug: Actually, we didn’t do that.

Cheryl: Yeah, I know it sounds like we did from our Digital Preservation 2014 abstract, and we started to migrate, but ended up not following through for technical reasons. We were hoping we could create plug-ins for OJS that would allow us to incorporate our multimedia content into its editorial workflow. But it didn’t work. (Or, at least, wasn’t possible with the $50,000 NEH Digital Humanities Start-Up Grant we had to work with.) We wanted to use OJS to help streamline and automate our editorial workflow (you know, the parts about assigning reviewers and copy-editors, etc.) and as a way to archive those processes.

I should step back here and say that Kairos has never used a CMS; everything we do, we do by hand: manually SFTPing files to the server, manually making copies of webtext folders in our kludgy way of version control, using YahooGroups (because it was the only thing going in 1998 when we needed a mail system to archive all of our collaborative editorial board discussions) for all staff and reviewer conversations, and so on. Not because we like being old school, but because there were always too many significant shortcomings with any out-of-the-box systems given our outside-the-box journal. So the idea of automating, and archiving, some of these processes in a centralized database such as OJS was incredibly appealing. The problem is that OJS simply can’t handle the kinds of multimedia content we publish. And rewriting the code-base to accommodate any plug-ins that might support this work was not in the budget. (We’ve written about this failed experiment in a white paper for NEH.)

[1] The Internet Archive will obey robots.txt files if they ask not to be indexed. So, for instance, early versions of Kairos itself are no longer available in the Wayback Machine because such a file is on the Texas Tech server where the journal lived until 2004. We put that file there because we want Google to point to the current home of the journal, but we actually would like that history to be in the Internet Archive. You can think of this as just a glitch, but here’s the more pressing issue: if I find someone has posted a critical blog post about my work, and I ever get ahold of the domain where it was originally posted, I can take the post down there *and* retroactively make it unavailable in the Wayback Machine, even if it used to show up there. Even without such nefarious purposes, just the constant trade in domains and site locations means that no researcher can trust that archive when using it for history or any kind of digital scholarship.

LITA: Taking the Edge Off of Tech

planet code4lib - Mon, 2014-09-22 13:00
Image courtesy of Tina Franklin. Flickr 2013.

E-readers and tablets have become an increasingly popular way for patrons to access digital media. Mobile technology has altered the landscape of the types of services offered to public library patrons. Digital media services and distributors (e.g., iBookstore, Audible, Overdrive and Hoopla) allow patrons to download and stream ebooks, audiobooks, video and music. After happening upon the article “Shape Up Your Skills and Shake Up Your Library” by Marshall Breeding for Computers in Libraries, I’m reminded that information professionals in public libraries must sharpen their tech skills in order to better serve their patrons. If you belong to a library that subscribes to a digital media distributor, such as Overdrive or Hoopla, you are most likely first-tier technical support for issues concerning both the application and the device itself. For patrons who are not familiar with tablets and e-readers, their expectation of your assistance goes beyond navigating the library’s subscription service. You may find yourself giving instruction on where to find the wireless settings or how to properly turn the device off. It is natural to become intimidated by the technology when you’re sitting with a patron desperately attempting to figure out what the issue could be.

Not all public libraries are fortunate enough to have e-readers and tablets to train with. In that case, I suggest looking into alternative forms of instruction. Though I cannot promise a complete tutorial, I’ll attempt to help you brush up on the light technical skills you’ll need before having to phone the professionals.

Familiarity is key
The first step in getting a better understanding of the technology is to become familiar with the exact services that your library is subscribed to. In the case of Overdrive and Hoopla, their services can be accessed using a computer. That is a great opportunity to explore the different features of the service. Be ready to answer certain questions: Does the library offer downloadable ebooks, audiobooks or video? What formats are they available in? What devices can be used with the service? If all else fails, you can always contact the service provider and ask for training materials or frequently asked questions and answers. If not already available, you can create instructional handouts for use by colleagues and patrons.

Take advantage of free services
To add some edge to your skills, consider utilizing the live product displays at electronics stores.
• Visit the Apple Store to use their iPads, iPad mini, etc.
• Best Buy has live displays of various Android OS tablets
• Target stores often feature Kindles and iPads
• Barnes and Noble stores have Nook displays

There are plenty of other stores to consider. The key is to poke around with the technology until you’re comfortable with its features. You want to know where the settings are located on each device, because that knowledge will be useful at some point. And while you’re there, don’t be afraid to ask the salesperson questions about the device’s functionality. There is a high chance you will ask a question that will later be asked of you.

I make no assumptions here. Not all libraries have access to instructional materials or handouts for patrons. My aim is to create a starting point for self-training and instruction that is free and can be passed along to colleagues and patrons.

David Rosenthal: Three Good Reads

planet code4lib - Mon, 2014-09-22 12:11
Below the fold I'd like to draw your attention to two papers and a post worth reading.

Cappello et al have published an update to their seminal 2009 paper Towards Exascale Resilience, called Towards Exascale Resilience: 2014 Update. They review progress in some critical areas over the past five years. I've referred to the earlier paper as an example of the importance of, and the difficulty of, fault-tolerance at scale. As scale increases, faults become part of the normal state of the system; they cannot be treated as an exception. It is nevertheless disappointing that the update, like its predecessor, deals only with exascale computation, not with exascale long-term storage. Their discussion of storage is limited to the performance of short-term storage in checkpointing applications. This is a critical issue, but a complete exascale system will need a huge amount of longer-term storage. The engineering problems in providing it should not be ignored.

Dave Anderson of Seagate first alerted me to the fact that, in the medium term, manufacturing economics make it impossible for flash to displace hard disk as the medium for most long-term near-line bulk storage. Fontana et al from IBM Almaden have now produced a comprehensive paper, The Impact of Areal Density and Millions of Square Inches (MSI) of Produced Memory on Petabyte Shipments of TAPE, NAND Flash, and HDD Storage Class Memories that uses detailed industry data on flash, disk and tape shipments, technologies and manufacturing investments from 2008-2012 to reinforce this message. They also estimate the scale of investment needed to increase production to meet an estimated 50%/yr growth in data. The mismatch between the estimates of data growth and the actual shipments of media on which to store it is so striking that they are forced to cast doubt on the growth estimates. It is clear from their numbers that the industry will not make the mistake of over-investing in manufacturing capacity, driving prices, and thus margins, down. This provides significant support for our argument that Storage Will Be Much Less Free Than It Used To Be.

Henry Newman has a post up at Enterprise Storage entitled Ensuring the Future of Data Archiving discussing the software architecture that future data archives require. Although I agree almost entirely with Henry's argument, I think he doesn't go far enough. We need to fix the system, not just the software. I will present my, much more radical, view of future archival system architecture in a talk at the Library of Congress' Designing Storage Architectures workshop. The text will go up here in a few days.


Subscribe to code4lib aggregator