Evergreen 2.10.1 is now available for download.
This is a bugfix release that fixes the following significant bug:
- LP#1560174: Importing MARC records can fail in database upgraded to 2.10.0
This bug affected only databases that were upgraded to 2.10.0 from a previous version; fresh installations of 2.10.0 are not affected.
Evergreen users who prefer not to perform a full upgrade from 2.10.0 to 2.10.1 can fix the bug by applying the database update script 2.10.0-2.10.1-upgrade-db.sql (found in the source directory Open-ILS/src/sql/Pg/version-upgrade).
For more information about what’s in the release, check out the release notes.
The following year Ian Adams and Ethan Miller of UC Santa Cruz's Storage Systems Research Center and I looked at this possibility more closely in a Technical Report entitled Using Storage Class Memory for Archives with DAWN, a Durable Array of Wimpy Nodes. We showed that it was indeed plausible that, even at then current flash prices, the total cost of ownership over the long term of a storage system built from very low-power system-on-chip technology and flash memory would be competitive with disk while providing high performance and enabling self-healing.
Although flash remains more expensive than hard disk, since 2011 the gap has narrowed from a factor of about 12 to about 6. Pure Storage recently announced FlashBlade, an object storage fabric composed of large numbers of blades, each equipped with:
- Compute – 8-core Xeon system-on-a-chip – and Elastic Fabric Connector for external, off-blade, 40GbitE networking,
- Storage – NAND storage with 8TB or 52TB of raw capacity, on-board NV-RAM with a super-capacitor-backed write buffer, plus a pair of ARM CPU cores and an FPGA,
- On-blade networking – PCIe card to link compute and storage cards via a proprietary protocol.
FlashBlade clearly isn't DAWN. Each blade is much bigger, much more powerful and much more expensive than a DAWN node. No-one could call a node with an 8-core Xeon, 2 ARMs, and 52TB of flash "wimpy", and it'll clearly be too expensive for long-term bulk storage. But it is a big step in the direction of the DAWN architecture.
DAWN exploits two separate sets of synergies:
- Like FlashBlade, it moves the computation to where the data is, rather than moving the data to where the computation is, reducing both latency and power consumption. The further data moves on wires from the storage medium, the more power and time it takes. This is why Berkeley's Aspire project's architecture is based on optical interconnect technology, which when it becomes mainstream will be both faster and lower-power than wires. In the meantime, we have to use wires.
- Unlike FlashBlade, it divides the object storage fabric into a much larger number of much smaller nodes, implemented using the very low-power ARM chips used in cellphones. Because the power a CPU needs tends to grow faster than linearly with performance, the additional parallelism provides comparable performance at lower power.
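The power argument above can be sketched with back-of-the-envelope arithmetic. This is an illustrative model only, assuming dynamic CPU power grows roughly with the cube of clock frequency (P ∝ f·V², with voltage tracking frequency); the numbers describe no particular chip:

```python
def relative_power(freq):
    """Power relative to a baseline core, under the rough model P ~ f^3."""
    return freq ** 3

# One fast core at 2x frequency vs. four wimpy cores at 0.5x frequency:
# both deliver ~2x aggregate throughput (frequency x core count), but
# the power draw differs dramatically.
fast_core = relative_power(2.0)        # 8.0 power units
wimpy_array = 4 * relative_power(0.5)  # 0.5 power units

print(fast_core / wimpy_array)  # 16.0 - the wimpy array wins on power
```

Under this admittedly crude scaling law, spreading the same throughput across more, slower cores always reduces total power, which is the synergy DAWN exploits.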
This is a guest post by John Caldwell.
On Friday, January 29, 2016, I welcomed my fellow National Digital Stewardship residents, their mentors, and the NDSR program staff to our cohort’s first enrichment session at the US Senate.
The morning started with two presentations. First, Mark Evans, Director of Digital Archives and Information Resources Management Services at History Associates talked about the challenges of preserving a Senator’s digital legacy. Second, Brandon Hirsch, IT Specialist at the Center for Legislative Archives, a division of the National Archives, shared with us the difficulty in preserving the permanent electronic records of Congress.
Mark talked about how History Associates performs a digital assessment and how one of the important elements of the assessment is to use digital preservation tools such as DROID to identify file formats. A graphical representation can show you where to focus future preservation and description efforts. Unfortunately, the tail can also be misleading; in this collection, there were only 116 email files, but email can be a treasure trove of digital information.
Brandon shared an experience where, in order to extract permanent records that were transferred to the National Archives in proprietary software, Center for Legislative Archives’ staff leveraged virtualization to build a temporary instance of the proprietary application. The recovery operation allowed Center staff to extract and preserve the records in their native format, dissociated from the proprietary container. While this project demonstrated a significant achievement for the Center in terms of preservation, it is not a viable strategy for ongoing and future preservation work. Aside from the increased staff resources devoted to this single operation, the underlying technology used for virtualization changes rapidly. In addition to technological changes, business strategies also change and may alter the long-term support for virtualization products and formats.
This not only introduces additional format sustainability problems, but it seems in the world of digital preservation, archivists, curators and librarians are at the hands of the technology sector. In a short amount of time, tools and systems we rely on can disappear or cease to be supported.
Some more general discussion topics included the idea that considerations for preservation should be baked in at the point of creation; concern over the growing volume of electronic records, especially in government records (the Center doubled its holdings between FY14 and FY15); and how digital preservation concerns will be managed by emulation or migration.
After the presentations, Mark and Brandon demonstrated digital preservation tools and we talked about how these tools can be integrated into digital preservation workflows (for a longer discussion of digital preservation tools, see my Signal blog post from November 2015).
Earlier this month, ALA announced the opening of the application process for the 2016 Google Policy Fellowship program. Consider this post a friendly reminder that applications for the program are due on Friday, March 25th (that’s right, just three days from today).
The program is a great opportunity for graduate students to gain experience working on information policy within the dynamic beltway ecosystem. As we mentioned in the announcement, Fellows work on a wide gamut of issues that may include digital copyright, e-book licenses and access, future of reading, international copyright policy, broadband deployment, online privacy, telecommunications policy (including e-rate and network neutrality), digital divide, open access to information, free expression, digital literacy, the future of libraries generally, and more.
This summer, the selected fellow will spend 10 weeks learning about national policy from ALA’s Washington Office staff, and completing a major project. Google provides the $7,500 stipend for the summer, but the work agenda is determined by the ALA and the selected fellow. Throughout the summer, Google’s Washington Office will provide an educational program for all of the fellows, such as lunchtime talks and interactions with Google Washington staff.
Johnna Percell, a graduate of the College of Information Studies at the University of Maryland, served as our fellow last summer. Margaret Kavaras, our fellow from the summer of 2014, now serves as an Office for Information Technology Policy Research Associate.
ALA encourages all interested graduate students – and especially those in the library science and information fields – to apply for the program. Further information on the program is available here.
The post Google Policy Fellowship applications due this Friday (3/25)! appeared first on District Dispatch.
Alan Stanley taught me this trick at an Islandora Camp a few years ago, and when trying to remember it this morning I messed up one critical piece. So I’ll post it here so I have something to refer back to when I need to do this again.
The Drupal Devel module includes a menu item for executing arbitrary PHP code on the server. (This is, of course, something you want to set permissions on very tightly because it can seriously wreak havoc on your day if someone uses it to do bad things.) Navigate to /devel/php on your Islandora website (with the Devel module enabled), and you’ll get a nice, big <textarea> and an “Execute” button:
In this case, I’m generating the TECHMD datastream using the FITS module and displaying the results of the function call on the HTML page using the Devel module’s dpm() function:

include drupal_get_path('module', 'islandora_fits') . '/includes/derivatives.inc';
$object = islandora_object_load('demo:6');
$results = islandora_fits_create_techmd($object, FALSE, array('source_dsid' => 'OBJ'));
dpm($results);
Works like a charm!
That was the topic discussed recently by OCLC Research Library Partners metadata managers, initiated by Chew Chiat Naun of Cornell University. Using identifiers now to point to “things” rather than relying on text strings will facilitate transforming legacy data into linked data later. By linking to authoritative sources through identifiers, libraries can reduce the need for local maintenance of authority files. A number of institutions have already started adding identifiers to their catalog records, including the national libraries of France and Germany.
A Program for Cooperative Cataloging task group is developing a plan to incorporate identifiers (URIs) in MARC bibliographic and authority records as mainstream practice. One challenge is to differentiate real-world objects from descriptions about real-world objects. This distinction may be difficult for catalogers to make, but maybe tools could be created to make this differentiation easier. The goal is to align library practices with those of the semantic web. The task group has focused on MARC fields that support the $0 for identifiers, and the British Library is preparing a proposal to use $4 to specify relationships with identifiers.
These identifiers could point to non-library resources. For example, Wikidata already has identifiers for such roles as trumpeter, violinist, translator, librettist, and narrator. The task group’s focus has been on identifiers in bibliographic records because all catalogers can create bibliographic records while only a much smaller subset can create authority records. In some countries, only the national libraries create national authority records. Opportunities for batch enhancement of authorities are limited currently. Ideally, the bibliographic record would have a $0 URI pointing to a real-world object described by an authority record.
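As a purely hypothetical illustration (the name and URI below are invented, not drawn from any authority file), a bibliographic 100 field carrying such a $0 might look like:

```
100 1# $a Smith, Jane, $d 1950- $0 http://example.org/authority/12345
```

The text string in $a remains for display, while the $0 URI gives machines an unambiguous handle on the real-world person it identifies.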
OCLC’s recent Person Entity Lookup pilot indicated how identifiers might impact authority workflows. By looking up a person and retrieving a number of identifiers, libraries could aggregate associated information from other authorities or sources having a “same as” relationship. For example, Wikidata shows that Noam Chomsky is affiliated with MIT, information that neither the LC/NAF authority file nor VIAF (Virtual International Authority File) includes. One of the most important—and powerful—aspects of adding identifiers is to reduce the amount of copying/pasting in the library environment when the identifier is stewarded elsewhere. Identifiers could provide a bridge between MARC and non-MARC environments and to non-library resources. Librarians wouldn’t have to be the experts in all domains.
Other potential areas of impact:
- Much journal literature is described by non-library agencies. Identifiers could link the forms of name in journal articles vs. scholarly profiling services vs. library catalogs, thus transcending currently siloed domains. This should also help catalogers disambiguate names more easily.
- Identifiers could provide links to digital collections and other resources that are not under authority control currently.
- Identifiers linking to other sources could allow us to present users with labels in non-Latin scripts for entities that are represented only by romanization in our current authority files.
- In a linked data environment, identifiers could bypass authority records. Content negotiation could determine the preferred labels to display to the user. Ultimately, there could be much less emphasis on establishing an “authoritative text string”.
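The content-negotiation idea in the last point can be sketched in a few lines of Python. Everything here (the labels, the function, the fallback rule) is invented for illustration, not a description of any deployed system:

```python
# Labels for one entity, keyed by language tag.
labels = {"en": "Confucius", "zh": "孔子", "de": "Konfuzius"}

def preferred_label(labels, accept_languages):
    """Return the label for the first language the client accepts,
    falling back to English when nothing matches."""
    for lang in accept_languages:
        if lang in labels:
            return labels[lang]
    return labels.get("en")

print(preferred_label(labels, ["zh", "en"]))  # 孔子
print(preferred_label(labels, ["fr"]))        # Confucius (fallback)
```

The point is that no single “authoritative text string” need be stored; the application picks the label that suits the user.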
Tools mentioned during the discussions:
- Terry Reese’s MARCEdit (“Build Links Data” enhancements) and the editor produced by LC for its BIBFRAME project include lookups of remote authority services that allow incorporating a range of identifier schemes into cataloging workflows.
- The RIMMF (RDA in Many Metadata Formats) tool captures various attributes of an entity. Its focus is on concatenating elements the cataloger has selected rather than establishing an authorized access point. The application can decide what data to extract or display, such as an English or a Chinese language version.
- W3C SHACL (Shapes Constraint Language) helps define the shapes that our data will need, such as what types of descriptions we’ll want for various attributes. These could include the attributes catalogers might want to add to enhance an entity such as a missing birthplace, institutional affiliation or discipline.
- Catmandu is a data processing toolkit developed to build up digital libraries and research services.
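As a rough sketch of the SHACL idea above (all ex: names are invented placeholders, not a published vocabulary), a shape stating that a person description needs a birthplace and may carry an IRI-valued affiliation might look like:

```turtle
@prefix sh: <http://www.w3.org/ns/shacl#> .
@prefix ex: <http://example.org/> .

ex:PersonShape
    a sh:NodeShape ;
    sh:targetClass ex:Person ;
    sh:property [
        sh:path ex:birthPlace ;
        sh:minCount 1 ;        # a birthplace is required
    ] ;
    sh:property [
        sh:path ex:affiliation ;
        sh:nodeKind sh:IRI ;   # affiliations, if present, must be IRIs
    ] .
```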
Many challenges lie ahead. We’re going to need a larger vocabulary of relationships between entities. Libraries will want their book vendors to also include identifiers in the records they supply. We will still have many name authorities without dates or other attributes that cannot be matched by algorithms alone, still requiring human curation. It is unclear how libraries—or their support systems—will deal with multiple identifiers referring to the same object or resource. We need more editing tools that add URIs in the process of editing records. Libraries must educate their local systems vendors on the need for identifiers for both cataloging and discovery to avoid their stripping out the data added. Identifiers’ impact on authority workflows will depend on tools that don’t exist yet.

About Karen Smith-Yoshimura
Karen Smith-Yoshimura, senior program officer, works on topics related to creating and managing metadata with a focus on large research libraries and multilingual requirements.
For the third consecutive year, Open Data Peru organised the #OpenDataDay 2016, an international event about #OpenData.
Currently, open data is becoming a trend adopted by governments to provide information about public spending, budgets, etc. in open formats, free to use and available to any citizen. In this way it seeks to create a more transparent and participatory system. Because the data is released under an open license, any citizen can access and use the information to build distribution platforms, data visualizations, and so on. This not only benefits citizens; it also allows specialists, academics, journalists and organizations to process this information to generate research, articles, and much more complete applications.
On March 5, Open Data Peru invited various specialists, citizens and organizations to discuss and learn about the use of and contribution to open data. During the morning, attendees heard various talks on the implementation of open data in Peru. Several initiatives were presented, such as the collective @QDatosAbiertos (We Want Open Data), which seeks to inform and engage citizens through communication campaigns and workshops demonstrating how simple technologies allow this data to be used without specialist skills. The group Ciudadanos al Día presented an initiative for Best Practices of Open Data in the public sector, which aims to reward public institutions that publish information as open data.
Another great presentation was that of the Municipality of San Isidro, which has been promoting a culture of technology and innovation since 2015. They implemented an open data portal and organized a hackathon. They also signed up to the International Open Data Charter.
Open Data Peru presented a summary of the work carried out during 2015 (workshops, hackathons, the Dataton, etc.). One of the main activities was the National Scholars Program, which focused on the decentralization of open data at the national level. Throughout this program, we worked with technology communities in different departments of Peru and selected leaders to become data trainers. With this work, Open Data Peru seeks to create a network of trainers and specialists who can work steadily and advise on creating platforms and applications using open data, and create a space for experimentation and citizen participation.
During the afternoon, we simultaneously held workshops with different specialists. Participants were able to learn more about data journalism work, visualizations, narrative, semantic web, usability and internet governance.
The #OpenDataDay 2016 in Peru closed with dynamic lightning talks during #PiscoyDatos.
Open Data Peru is constantly in search of volunteers to work on open data technology projects, train more journalists in the dynamics of working with data, and promote a more transparent system through the release of open data. We continue to improve and contribute open data on our platform, d.odpe.org.
We thank all the communities and organizations that were part of #OpenDataDay: StoryCode, OjoPúblico, Hiperderecho, Hack IT Labs, Ciudadanos al Día and each of the speakers.
The event was sponsored by Hack IT Labs, the Municipality of San Isidro, the Peruvian Press Council and the Latin American Open Data Initiative (ILDA); thank you for your contribution and support of the event.
In my humble opinion, what we have here is a failure to communicate.
Libraries, especially larger libraries, are increasingly made up of many different departments, including but not limited to cataloging, public services, collections, preservation, archives, and nowadays departments of computer staff. From my point of view, these various departments fail to see the similarities between themselves, and instead focus on their differences. This focus on the differences is amplified by the use of dissimilar vocabularies and subdiscipline-specific jargon. This use of dissimilar vocabularies causes a communications gap and, left unresolved, ultimately creates animosity between groups. I believe this is especially true between the more traditional library departments and the computer staff. This communications gap is an impediment to achieving the goals of librarianship, and any library — whether it be big or small — needs to address these issues lest it waste both its time and money.
Here are a few examples outlining failures to communicate:
- MARC – MARC is a data structure. The first 24 characters are called the leader. The second section is called the directory, and the third section is intended to contain bibliographic data. The whole thing is sprinkled with ASCII characters 30, 31, and 29, denoting the ends of fields, the starts of subfields, and the end of the record itself. MARC does not denote the kinds of data it contains. Yet, many catalogers say they know MARC. Instead, what they really know are sets of rules defining what goes into the first and third sections of the data structure. These rules are known as AACR2/RDA. Computer staff see MARC (and MARCXML) as a data structure. Librarians see MARC as the description of an item akin to a catalog card.
- Databases & indexes – Databases & indexes are two sides of the same information retrieval coin. “True” databases are usually relational in nature and normalized accordingly. “False” databases are flat files — simple tables akin to Excel spreadsheets. Librarians excel (no pun intended) at organizing information, and this usually manifests itself through the creation of various lists. Lists of books. Lists of journals. Lists of articles. Lists of authoritative names. Lists of websites. Etc. In today’s world, the most scalable way to maintain lists is through the use of a database, yet most librarians wouldn’t be able to draw an entity relationship diagram — the literal illustration of a database’s structure — to save their lives. With advances in computer technology, the problem of find is no longer solved through the searching of databases but instead through the creation of an index. In reality, modern indexes are nothing more than enhancements of traditional back-of-the-book indexes — lists of words and associated pointers to where those words can be found in a corpus. Computer staff see databases as MySQL and indexes as Solr. Librarians see databases as a matrix of rows & columns, and the searching of databases in the light of licensed content such as JSTOR, Academic Search Premier, or the New York Times.
- Collections – Collections, from the point of view of a librarian, are sets of curated items with a common theme. Taken as a whole, these collections embody a set of knowledge or a historical record intended for use by students & researchers for the purposes of learning & scholarship. The physical arrangement of the collection — especially in archives — as well as the intellectual arrangement of the collection is significant because they bring together like items or represent the development of an idea. This is why libraries have classification schemes and archives physically arrange their materials in the way they do. Unfortunately, computer staff usually do not understand the concept of “curation” and usually see the arrangements of books — classification numbers — as rather arbitrary.
- Services – Many librarians see the library profession as being all about service. These services range from literacy programs to story hours. They range from the answering of reference questions to the circulation of books. They include social justice causes, stress relievers during exam times, and free access to computers with Internet connections. Services are important because they provide the means for an informed public, teaching & learning, and the improvement of society in general. Many of these concepts are not in the forefront of the minds of computer staff. Instead, their idea of service is making sure the email system works, people can log into their computers, computer hardware & software are maintained, and the connections to the Internet are continual.
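The MARC structure described in the first bullet above can be made concrete with a short sketch. This is a toy example, assuming a hand-built fragment rather than a real catalog record, and it ignores the leader and directory:

```python
# MARC-21 structural delimiters: 0x1E (30) ends a field, 0x1F (31)
# starts a subfield, 0x1D (29) ends the record.
FIELD_TERMINATOR = "\x1e"
SUBFIELD_DELIMITER = "\x1f"
RECORD_TERMINATOR = "\x1d"

# A single 245 field's data: $a title, $c statement of responsibility.
field = (SUBFIELD_DELIMITER + "aMoby Dick /" +
         SUBFIELD_DELIMITER + "cHerman Melville." +
         FIELD_TERMINATOR)

# Nothing in the structure says what the data *means* - the parser just
# splits on delimiters; AACR2/RDA govern what catalogers put inside.
subfields = {chunk[0]: chunk[1:]
             for chunk in field.rstrip(FIELD_TERMINATOR).split(SUBFIELD_DELIMITER)
             if chunk}

print(subfields)  # {'a': 'Moby Dick /', 'c': 'Herman Melville.'}
```

This is the sense in which computer staff see MARC: a container to be parsed, entirely separate from the cataloging rules that fill it.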
As a whole, what the profession does not understand is that everybody working in a library has more things in common than differences. Everybody is (supposed to be) working towards the same set of goals. Everybody plays a part in achieving those goals, and it behooves everybody to learn & respect the roles of everybody else. A goal is to curate collections. This is done through physical, intellectual, and virtual arrangement, but it also requires the use of computer technology. Collection managers need to understand more of the computer technology, and the technologists need to understand more about curation. The application of AACR2/RDA is an attempt to manifest inventory and the dissemination of knowledge. The use of databases & indexes also manifests inventory and the dissemination of knowledge. Catalogers and database administrators ought to communicate on similar levels. Similarly, there is much more to the preservation of materials than putting bits on tape.
What is the solution to these problems? In my opinion, there are many possibilities, but the solution ultimately rests with individuals willing to take the time to learn from their co-workers. It rests in the ability to respect — not merely tolerate — another point of view. It requires time, listening, discussion, reflection, and repetition. It requires getting to know other people on a personal level. It requires learning what others like and dislike. It requires comparing & contrasting points of view. It demands “walking a mile in the other person’s shoes”, and can be accomplished by things such as the physical intermingling of departments, cross-training, and simply by going to coffee on a regular basis.
Again, all of us working in libraries have more similarities than differences. Learn to appreciate the similarities, and the differences will become insignificant. The consequence will be a more holistic set of library collections and services.
DuraSpace News: Recordings Available: “VIVO plus SHARE: Closing the Loop on Tracking Scholarly Activity”
Austin, TX DuraSpace launched its 14th Hot Topics Community Webinar Series, “VIVO plus SHARE: Closing the Loop on Tracking Scholarly Activity” last month. Curated by Rick Johnson, Program Co-Director, Digital Initiatives and Scholarship Head, Data Curation and Digital Library Solutions Hesburgh Libraries, University of Notre Dame and Visiting Program Officer for SHARE at the Association of Research Libraries, this series explored how the effort to link VIVO and SHARE together will bring us closer to a wider picture of today’s scholarship.
I’ve been working with my team on a vision document for what we want our future digital library platform to look like. This exercise keeps bringing us back to defining the library of the future. And that means addressing the very use of the term, ‘Library.’
When I first exited my library (and information science) program, I was hired by Adobe Systems to work in a team of other librarians. My manager warned us against using the word ‘Librarian’ among our non-librarian colleagues. I think the gist was: too much baggage there.
So, we used the word ‘Information Specialist.’
Fast forward a few years to my time in an academic environment at DePaul University Library, where this topic came up in the context of services the library provided. Faculty and students associated the library with very traditional images: a quiet, book-filled space. But the way they used the library was changing despite the lag in their semantic understanding.
The space and the virtual tools we put in place online helped users not only find and evaluate information, but also create, organize and share information. A case in point was our adoption of digital publishing tools like Bepress and Omeka, but also the Scholar’s Lab.
I’m seeing a similar contradiction in the public library space. Say library and people think books. Walk into a public library and people do games, meetings, trainings and any number of online tasks.
This disconnect between what the word ‘Library’ evokes in the mind’s eye and what it means in practice is telling. We’ve got a problem with our brand.
In fact, we may need a new word.
Taken literally, a library has been a word for a physical collection of written materials. The Library of Alexandria held scrolls for example. Even code developers rely on ‘libraries’ today, which are collections of materials. In every case, the emphasis is on the collection of things.
Now, I’m not suggesting that we move away from books. Books are vessels for ideas and libraries will always be about ideas.
In fact, this focus on ideas rather than any one mode for transmitting ideas is key. In today’s libraries, people not only read about ideas; they meet to discuss ideas and they brainstorm ideas.
I don’t pretend to have the magic word. In fact, maybe it’s taking so long for us to drop ‘Library’ because there is not a good word in existence. Maybe we need to create a new one.
One tactic that comes to mind as we navigate this terminological evolution is to retain the library, but subsume it inside of something new. I’ve seen this done to various degrees in other libraries. For example, Loyola University in Chicago built an entirely new building adjacent to the book-filled library. Administratively, the building is run by the library, but it is called the Klarchek Information Commons. In that rather marvelous space looking out over Lake Michigan, you’ll find the modern ‘library’ in all its glory: computers, collaboration booths, etc. I like this model for fixing our identity problem and I think it would work without throwing the baby out with the bathwater.
However it’s done, one thing is for sure. Our users have moved on from ‘the library’ and are left with no accurate way to describe that place that they love to go to when they want to engage with ideas. Let’s put our thinking caps on and put a word on their lips that does justice to what the old library has become. Let’s get past the L Word.
We are three lessons into our series of webinars detailing how to develop in Islandora CLAW, led by CLAW Committer Diego Pino (METRO.org). If you haven't been attending, you've missed out on some great expressions of the CLAW stack via colorful doodles, such as the difference between the Islandora 7.x-1.x hamburger and the Islandora 7.x-2.x lobster chimera.
If that doesn't make any sense to you, then good news! You can catch up on what you've missed by viewing the recorded sessions:
Week One: Intro to Fedora 4.x
Week Two: Hands-on Creating Fedora 4.x Resources
Week Three: Data Flow in the CLAW
Once you have caught up, why not join us for the rest of the lessons in real time? They will continue on for another six weeks, every Tuesday at 11AM Eastern time, on Adobe Connect. Here's what's in store:

General Outline:
- Basic Notions of Fedora 4 (sessions 1 and 2)
  - How Fedora 4 Works - General Intro and differences between Fedora 3 and 4
    - RDF instead of XML
    - Fedora 4 REST API
  - Introduction to CLAW
- How Data Flows (session 3)
- Sync Gateway (how to trigger the sync) (session 3)
- Basics of Camel (session 4)
- Adding/creating new content type (sessions 5 - 6)
- PHP Microservices (sessions 7 - 8)
  - Basics of Silex
  - Dissecting a Service
  - Interacting with Fedora via Microservices
- How to Join a Sprint (session 9)
With thanks to our hosts:
We all know that Access is one of the best deals around for a tech conference. There are great speakers, great activities and great food, all for one amazingly low price. You get the hackfest, two and a half days of our single-stream conference and the workshop for one great price of $450 Canadian (I know, right?).
The keeners and well-organized won’t miss out on the Early Bird tickets, which should hit store shelves in June for an incredible $350. That translates into less than $100 per day of awesomeness and we feed you. Shut the front door!
If you are working on an even tighter budget, there are still some options for you:
- Students – if you are a full-time student and trying to save for Access, we are also making available 25 deeply discounted tickets just for you at the rock-bottom price of $200.
- Be a Presenter – submit a proposal, rock our world and we’ll hook you up for $300. That’s a 33.33333% savings for sharing your awesome project, idea or words of wisdom with your peers. What can go wrong?
Finally, if attending Access is still a stretch for your budget, we will once again have two Diversity Scholarships available. To qualify, you need to be from a “traditionally underrepresented and/or marginalized group,” be unable to attend the conference without some financial assistance and must not have received a scholarship to attend either of the previous two conferences. Meet the criteria and you’ll be eligible for a draw for one of our $1000 Diversity Scholarships to help you attend the conference.
We hope to see you in Fredericton.
From the NYTimes, “Apple Encryption Engineers, if Ordered to Unlock iPhone, Might Resist”
Apple employees are already discussing what they will do if ordered to help law enforcement authorities. Some say they may balk at the work, while others may even quit their high-paying jobs rather than undermine the security of the software they have already created, according to more than a half-dozen current and former Apple employees.
Do software engineers have professional ethical responsibilities to refuse to do some things even if ordered by their employers?
From Tim Donohue, DSpace Tech Lead on behalf of the DSpace developers
Austin, TX – DSpace 5.5 is now available, providing security fixes to both the XMLUI and JSPUI, along with bug fixes to the DSpace 5.x platform.
From Mike Conlon, VIVO Project Director
I learned this week that Reveal Digital has digitized On Our Backs (OOB), a lesbian porn magazine that ran from 1984-2004. This is a part of the Independent Voices collection that “chronicles the transformative decades of the 60s, 70s and 80s through the lens of an independent alternative press.” For a split second I was really excited — porn that was nostalgic for me was online! Then I quickly thought about friends who appeared in this magazine before the internet existed. I am deeply concerned that this kind of exposure could be personally or professionally harmful for them.
While Reveal Digital went through the proper steps to get permission from the copyright holder, there are ethical issues with digitizing collections like this. Consenting to a porn shoot that would be in a queer print magazine is a different thing to consenting to have your porn shoot be available online. I’m disappointed in my profession. Librarians have let down the queer community by digitizing On Our Backs.
Why is this collection different?
The nature of this content makes it different from digitizing textual content or non-pornographic images. We think about porn differently than other types of content.
Most of the OOB run was published before the internet existed. Consenting to appear in a limited-run print publication is very different from consenting to have one’s sexualized image be freely available on the internet. Who in the early 90s could imagine what the internet would look like in 2016?
In talking to some queer pornographers, I’ve learned that some of their former models are now elementary school teachers, clergy, professors, child care workers, lawyers, mechanics, health care professionals, bus drivers and librarians. We live and work in a society that is homophobic and not sex positive. Librarians have an ethical obligation to steward this content with care for both the object and the people involved in producing it.
How could this be different?
Reveal Digital does not have a clear takedown policy on their website. A takedown policy describes the mechanism for someone to request that digital content be taken off a website or digital collection. HathiTrust’s takedown policy is a good example of a policy around copyright. When I spoke to Peggy Glahn, Program Director for Reveal Digital, she explained there isn’t a formal takedown policy. Someone could contact the rights holder (the magazine publisher, the photographer, or the person who owns the copyright to the content) and have them make the takedown request to Reveal Digital. Even for librarians, it’s sometimes tricky to track down the copyright holder of a magazine that’s no longer being published. Because they are stewards of this digital content, I believe Reveal Digital has an ethical obligation to make this process clearer.
I noticed that not all issues are available online. Peggy Glahn said that they digitized copies from Sallie Bingham Center for Women’s History & Culture at Duke University and Charles Deering McCormick Library of Special Collections at Northwestern University but they are still missing many of the later issues. More issues should not be digitized until formal ethical guidelines have been written. This process should include consultation with people who appeared in OOB.
There are ways to improve access to the content through metadata initiatives. I’m really, really excited by Bobby Noble and Lisa Sloniowski’s proposed project exploring linked data in relation to Derrida and feminism. I’ve loved hearing how Lisa’s project has shifted from a physical or digital archive of feminist porn to a linked data project documenting the various relationships between different people. I think the current iteration avoids dodgy ethics while exploring new ways of thinking about the content and people through linked data. Another example of this is Sarah Mann’s index of the first 10 years of OOB for the Canadian Gay and Lesbian Archive.
We need to have an in-depth discussion about the ethics of digitization in libraries. The Zine librarian’s Code of Ethics is the best discussion of these issues that I’ve read. The two ideas most relevant to my concerns are consent and balancing access to the collection with respect for individuals.
Whenever possible, it is important to give creators the right of refusal if they do not wish their work to be highly visible.
Because of the often highly personal content of zines, creators may object to having their material being publicly accessible. Zinesters (especially those who created zines before the Internet era) typically create their work without thought to their work ending up in institutions or being read by large numbers of people. To some, exposure to a wider audience is exciting, but others may find it unwelcome. For example, a zinester who wrote about questioning their sexuality as a young person in a zine distributed to their friends may object to having that material available to patrons in a library, or a particular zinester, as a countercultural creator, may object to having their zine in a government or academic institution.
Consent is a key feminist and legal concept. Digitizing a feminist porn publication without consideration for the right to be forgotten is unethical.
The Zine librarian’s Code of Ethics does a great job of articulating the tension that sometimes exists between making content available and the safety and privacy of the content creators:
To echo our preamble, zines are “often weird, ephemeral, magical, dangerous, and emotional.” Dangerous to whom, one might ask? It likely depends on whom one asks, but in the age of the Internet, at least one prospectively endangered population are zinesters themselves. Librarians and archivists should consider that making zines discoverable on the Web or in local catalogs and databases could have impacts on creators – anything from mild embarrassment to the divulging of dangerous personal information.
Zine librarians/archivists should strive to make zines as discoverable as possible while also respecting the safety and privacy of their creators.
I’ve heard similar concerns with lack of care by universities when digitizing traditional Indigenous knowledge without adequate consultation, policies or understanding of cultural protocols. I want to learn more about Indigenous intellectual property, especially in Canada. It’s been a few years since I’ve looked at Mukurtu, a digital collection platform that was built in collaboration with Indigenous groups to reflect and support cultural protocols. Perhaps queers and other marginalized groups can learn from Indigenous communities about how to create culturally appropriate digital collections.
Librarians need to take more care with the ethical issues, which go far beyond simple copyright clearance, when digitizing content and putting it online.
I just had a wonderful stroke of luck that bailed me out of a big ole boneheaded error I made yesterday. It is the kind of error that I have a certain notoriety for — not all the time, just once in a while, when I am on overload and stop reading email all the way through, forget to review checklists, and otherwise put myself in a dangerous position with decision-making. The stroke of luck was due to someone who had a solid sixth sense that something was not quite right.
This error reminded me of my most illustrious “did not read the memo” gaffe, which I share here for the first time ever.
At my last university, I was invited to participate in a university president’s inauguration ceremony and quickly scanned the invitational email. Wear regalia and process to a stage? Sounds easy enough! Ok, on to the next problem!
But after we were seated (on a large, brightly-lit stage facing an audience of, oh, several hundred), I gradually realized that everyone else on stage was getting up one by one and giving a speech. My hands started trembling. I had no speech. I looked out into the audience. There were the other library people, gazing calmly at their fearless leader. I mean, if anyone likes to give a speech and can knock one out of the park, it would be me, right? The woman who has presented seventy-bazillion times?
My mouth turned to ancient parchment and I could feel cold perspiration wending its way down my torso. I suspect if you had been able to see my eyes, they would have been two fully-dilated orbs in my panicked face. I could feel the hair on my head whitening.
Out of about two dozen people on stage, I could see that I was scheduled to go next to last. The speakers walked to the podium one by one. What to do, what to do?
Breathe. What tools did I have at hand? Breathe. I have a small paper program for the inauguration. Breathe. What is going on with the speeches? Breathe. Observation: the speeches are mostly too long. Breathe. Try to still my hands. Notice that the audience is getting restless. Breathe. Smile out at the audience. Breathe.
It was my turn–a turn that for once in my life came far too quickly. I walked to the podium, looked out at the audience, and smiled. I slowly unfolded the small program and frowned at it for a moment as if it were my speaking notes while I mentally rehearsed the two or three points I would make. I began with a joke about not wanting to speak too long. Other words, now forgotten, ensued, as I winged it onstage. I could hear laughter and appreciative rustling, though I was so anxious my vision was too blurred to see past the lectern for the next two or three minutes. I summed up my speech by noting that the university, like our library, was small and mighty, a joke which if you know me has a visual cue as well.
As soon as I was outside, I owned up to that mistake to my team. Not to brag about getting through a disastrous mistake unscathed (well, maybe a little), but also to fully claim my error. This situation was awful and funny and educational, all at once. It was about my strengths, but also about my weaknesses. I believe I slept 14 hours that night. It became part of our library lore.
There were many clues that I was in the vulnerability zone for error yesterday. Distraction, overflowing email, too many simultaneous “channels”; I had even remarked the previous week that I was trying hard, but sometimes not succeeding, at not responding to email messages while I was in a face-to-face meeting. The people I was interacting with were equally busy and besides, it wasn’t their job to see that the conditions for making major errors had become highly favorable. That was my job, as the senior mechanic in charge of this project, and I wasn’t doing it. Clues abounded, but as my overload factor increased, I missed them — a classic case of being unaware that I was unaware. And I ignored the checklist sitting in front of me just waiting to help me, if only I would let it do so.
I had excellent training in the Air Force about the value of using checklists, and I have touted their use in libraries. People often need convincing that checklists work and that checklists are not an indication that they are somehow dumb or stupid for not being able to extemporize major tasks, even though there is a preponderance of evidence underscoring their utility. In aircraft maintenance, failure to follow checklists could, and sometimes did, cost lives; even when lives were not at stake, failure to follow checklists sometimes led to expensive errors. And yes, for yesterday’s mistake, there was a perfectly reasonable checklist, but I didn’t review it. Just as there were email messages I didn’t read all the way through, and just as I didn’t catch that I wasn’t shifting my attention to where it needed to be.
As I reflected today about awareness, checklists, and stumbling toward errors, I looked outward and thought, this is what this presidential campaign feels like to me. There are cues and signs swirling around us, and an abundance of complementary cautionary tales spanning the entire history of human civilization. Anger, vulgarity, and veiled hints at violence abound. The standards for public discourse have declined to the point where children are admonished not to listen to possible future leaders. We worry, with half a mind, that what looked like a lame but forgettable joke a few months back is simultaneously surfacing and fomenting an ugliness that has been burbling under the body politic for some time now. We watch people dragged away and sucker-punched at rallies as they clumsily try to be an early-warning system for what they fear lies ahead. We have all learned what “dog-whistle” means–and yet as the coded words and actions fly around us, we still do not understand why this is happening. We sit on this stage, programs wadded in our sweating hands, watching and watched by the restive audience until our vision blurs; and we do not have a checklist, but we do have our sixth sense.