Planet Code4Lib - http://planet.code4lib.org

David Rosenthal: Bruce Schneier on the IoT

Thu, 2016-06-16 15:00
John Leyden at The Register reports that "Government regulation will clip coders' wings, says Bruce Schneier". He spoke at Infosec 2016:
Government regulation of the Internet of Things will become inevitable as connected kit in arenas as varied as healthcare and power distribution becomes more commonplace, ... “Governments are going to get involved regardless because the risks are too great. When people start dying and property starts getting destroyed, governments are going to have to do something,” ... The trouble is we don’t yet have a good regulatory structure that might be applied to the IoT. Policy makers don’t understand technology and technologists don’t understand policy. ... “Integrity and availability are worse than confidentiality threats, especially for connected cars. Ransomware in the CPUs of cars is gonna happen in two to three years,” ... technologists and developers ought to design IoT components so they worked even when they were offline and failed in a safe mode.

Not to mention the problem that the DMCA places researchers who find vulnerabilities in the IoT at risk of legal sanctions, despite the recent rule change. So much for the beneficial effects of government regulation.

This post will take over from Gadarene swine as a place to collect the horrors of the IoT. Below the fold is a list of some of the IoT lowlights from the 17 weeks since then.

Schneier pointed to cars as vulnerable, and indeed both the Nissan Leaf:
when Nissan put together the companion app for its Leaf electric vehicle—the app will turn the climate control on or off—it decided not to bother requiring any kind of authentication. When a Leaf owner connects to their car via a smartphone, the only information that Nissan's APIs use to target the car is its VIN—the requests are all anonymous.

and the Mitsubishi Outlander:
the Outlander uses wifi to connect the car directly with a smartphone, which is less secure and allowed Munro to disable the alarm and then open the car. Describing the hack methodology and solutions, Munro speculates that the car’s insecure software system was probably a result of cost-cutting by Mitsubishi. “I assume that it’s been designed like this to be much cheaper for Mitsubishi than [the more secure] GSM/web service/mobile app based solution,”

failed to include any security at all in their connected car systems. In both cases the researchers had to go public before the company admitted that they had a problem.
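
To make the class of flaw concrete, here is a minimal sketch of what a VIN-keyed, unauthenticated request looks like; the endpoint and parameter names are hypothetical stand-ins, not Nissan's actual API:

    # Sketch of an unauthenticated, VIN-keyed API call of the kind the
    # researchers described; the URL and parameter names are hypothetical.
    import requests

    # VINs are not secrets: they are visible through every windshield,
    # and long runs of them are sequential, so they can be enumerated.
    vin = "SJNFAAZE0U60XXXXX"

    # No credentials, session, or signature accompany the request; the
    # VIN is the only thing identifying (and "authorizing") it.
    resp = requests.get(
        "https://telematics.example.com/remote/hvac/start",
        params={"VIN": vin},
    )
    print(resp.status_code, resp.text)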
Denial is not a good strategy:

Only one in four respondents to the survey could remember an incidence of car hacking occurring in the last year. That’s a dramatic drop from just a few months earlier, when a survey by the same firm performed just days after WIRED’s car hacking exposé in July found that 72 percent of ... consumers—were aware of the Jeep hack when asked about it specifically.

"Only" a quarter of car buyers remembered that Jeeps were hackable a year later. It'd take a lot of advertising dollars to be that effective. Among the authors commenting on the risks of connected cars were Jean-Louis Gassée, Jonathan Gitlin and Josh Corman at the Building IoT conference:
Corman zeroed in on our increasingly connected cars and medical devices as key targets. The consequences of mass compromising of connected vehicles, for example, would be [a loss of] confidence in vehicle manufacturers, transport infrastructure and knock-on effects at the GDP level.

Speaking of medical devices, Cory Doctorow at BoingBoing reported on a paper in World Neurosurgery that discusses the dystopian security issues posed by brain implants. He also reported that "Automated drug cabinets have 1400+ critical vulns that will never be patched."

Connected homes were equally problematic. Thermostats:

More than 30 users of Hive, which is owned by British Gas, have complained their heating has been turned up to the maximum level by the iPhone app without their instruction, the Daily Mail reports.

lightbulbs:

Matthew Garrett "bought some awful light bulbs so you don't have to." And you really, really shouldn't buy the iRainbow light bulb set: the controller box runs all sorts of insecure services, including an open WiFi hotspot that lets anyone into your home network.

more thermostats:

Nest in fact pushed out a buggy software update for its Learning Thermostat in January 2016 that led to some of the devices not maintaining temperature.

home automation hubs:

The extraordinary decision of Nest to brick its $300 Revolv home automation hub has served as a wake-up call to the tech industry. Both customers and the broader internet of things (IoT) industry were appalled when Nest removed all support for the device, making it as useful as a tub of hummus, as one angry consumer memorably noted. The result has been a series of articles, blog posts and public discussions over how to ensure that the next generation of internet and smart-home products continues to work in an open environment and are not locked down to specific companies.

entire home automation systems such as Samsung's SmartThings ecosystem - two separate vulnerabilities discovered by researchers at the University of Michigan give the bad guys capabilities such as:

unlock doors, modify home access codes, create false smoke detector alarms, or put security and automation devices into vacation mode.

security cameras:

The IP cameras that you bought to secure your physical space suddenly turn into a vast cloud network designed to share your pictures and videos far and wide. The best part? It’s all plug-and-play, no configuration necessary!

and of course the home routers without which they wouldn't function:

the US Federal Trade Commission settled charges that alleged the hardware manufacturer failed to protect consumers as required by federal law. The settlement resolves a complaint that said the 2014 mass compromise was the result of vulnerabilities that allowed attackers to remotely log in to routers and, depending on user configurations, change security settings or access files stored on connected devices.

all featured in the roll of dishonor. Were their manufacturers grateful for the help security researchers gave them in making their products less insecure? In some cases yes; in others they responded by hurling legal threats at the researchers.

District Dispatch: Caribbean librarians visit the Washington Office

Thu, 2016-06-16 13:45

Visiting librarians from the Caribbean with ALA Washington Office staff.

On Tuesday, the American Library Association (ALA) was pleased to receive a delegation of librarians and archivists from the Caribbean. These visitors are invited to the United States under the auspices of the International Visitor Leadership Program of the U.S. Department of State. The delegation included:

  • Ryllis Mannix, Antigua and Barbuda
  • Joseph Prosper, Antigua and Barbuda
  • Junior Browne, Barbados
  • Grace Haynes, Barbados
  • Vernanda Raymond, Dominica
  • Claudette Paula Bartholomew Frederick, Grenada
  • Evauntay Bridgewater, Saint Kitts and Nevis
  • Petrine Clarke Whyte, Saint Kitts and Nevis
  • Donna Mason Mclean, Saint Vincent and the Grenadines

Accompanying the delegation were international visitor liaisons Mr. Jason Brown and Ms. Elka Charren.

The central interest of the visitors concerned intellectual property, and we did indeed have an energetic discussion of those issues. We touched on the Google Books and Georgia State cases, as well as the intellectual property issues surrounding the digitization of local content. Not surprisingly, a number of the policy issues are actually not so different—we heard very familiar challenges and themes.

The delegation will spend several weeks in the United States, including a visit to the ALA Annual Conference in Orlando—so perhaps you’ll see them there!

The ALA representatives in this meeting were Alan S. Inouye, Carrie Russell, and Brian Clark. We thoroughly enjoyed the time together and look forward to future meetings with representatives from around the world as we fulfill one of the responsibilities of the Washington Office—to represent ALA and U.S. libraries with international delegations.

The post Caribbean librarians visit the Washington Office appeared first on District Dispatch.

Islandora: Islandoracon 2017!

Thu, 2016-06-16 13:18

The Islandora Foundation is thrilled to announce the second Islandoracon, to be held at the lovely LIUNA Station in Hamilton, Ontario. Islandoracon 2017 is sponsored in part by our local host, McMaster University. We will have a lot more information for you in the weeks and months to come, but for now, please save the date so you can join us.

 

FOSS4Lib Recent Releases: Avalon Media System - 5.0

Thu, 2016-06-16 12:07

Last updated June 16, 2016. Created by Peter Murray on June 16, 2016.

Package: Avalon Media System
Release Date: Monday, June 13, 2016

FOSS4Lib Recent Releases: Islandora - 7.x-1.7

Thu, 2016-06-16 08:01

Last updated June 16, 2016. Created by Peter Murray on June 16, 2016.

Package: Islandora
Release Date: Wednesday, June 15, 2016

Evergreen ILS: Evergreen 2.9.6 and 2.10.5 released

Thu, 2016-06-16 03:21

We are pleased to announce the release of Evergreen 2.9.6 and 2.10.5, both bugfix releases.

Evergreen 2.9.6 fixes the following issues:

  • Emails sent using the Action Trigger SendEmail reactor now always MIME-encode the From, To, Subject, Bcc, Cc, Reply-To, and Sender headers. As a consequence, non-ASCII characters in those fields are more likely to be displayed correctly in email clients (see the encoding sketch after this list).
  • Fixes the responsive view of the My Account Items Out screen so that Title and Author are now in separate columns.
  • Fixes an incorrect link for the MVF field definition and adds a new link to BRE in fm_IDL.xml.
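
For readers unfamiliar with the technique, the sketch below shows what MIME encoded-word encoding (RFC 2047) of a non-ASCII header value produces. Python is used purely for illustration; this is not Evergreen's actual Perl implementation.

    # RFC 2047 encoded-word encoding, the technique the header fix applies.
    # A Python stand-in for illustration only, not Evergreen's Perl code.
    from email.header import Header

    # A Subject line containing non-ASCII characters, e.g. a German
    # overdue notice.
    subject = Header("Überfällige Medien zurückgeben", "utf-8")

    # encode() yields an ASCII-only encoded-word, safe to put in a mail
    # header; compliant clients decode and display it correctly.
    print(subject.encode())  # prints something like =?utf-8?b?...?=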

Evergreen 2.10.5 fixes the following issues:

  • Fixes SIP2 failures with patron information messages when a patron has one or more blocking penalties that are not otherwise ignored.
  • Recovers a previously existing activity log entry that logged the username, authtoken, and workstation (when available) for successful logins.
  • Fixes an error that occurred when the system attempted to display a translated string for the “Has Local Copy” hold placement error message.
  • Fixes an issue where the Show More/Show Fewer Details button didn’t work in catalogs that default to showing more details.
  • Removes Social Security Number as a stock patron identification type for new installations. This fix does not change patron identification types for existing Evergreen systems.
  • Adds two missing link fields (patron profile and patron home library) to the fm_idl.xml for the Combined Active and Aged Circulations (combcirc) reporter source.
  • Adds a performance improvement for the “Clear Holds Shelf” checkin modifier.

Please visit the downloads page to retrieve the server software and staff clients.

Cynthia Ng: Accessibility June Meetup (Vancouver) Notes

Thu, 2016-06-16 02:38
Notes from the June Accessibility Meetup presentations.

AT-BC (Accessible Technology of BC): Providing assistive technology resources to make learning and working environments usable for people with disabilities. Examples of technology:

  • “handshake” mouse
  • microphone with direct-to-headphones setup
  • microphone with sound amplification/speaker behind audience

Tend to be more relaxed by decreasing stress … Continue reading Accessibility June Meetup (Vancouver) Notes

Terry Reese: MarcEdit Update

Wed, 2016-06-15 21:36

Last night, I posted an update squashing a couple bugs and adding some new features.  Here’s the change log:

* Bug Fix: Merge Records Tool: If the user defined field is a title, the merge doesn’t process correctly.
* Bug Fix: Z39.50 Batch Processing: If the source server provides data in UTF8, characters from multi-byte languages may be flattened.
* Bug Fix: ILS Integration..Local: In the previous version, one of the libraries’ versions didn’t get updated, and early beta testers had some trouble.
* Enhancement: Join Records — option added to process subdirectories.
* Enhancement: Batch Processing Tool — option added to process subdirectories.
* Enhancement: Extract Selected Records — Allowing regular expressions as an option when processing file data.
* Enhancement: Alma Integration UI Improvements

Downloads can be picked up via the automated updating tool or via the downloads (http://marcedit.reeset.net/downloads) page.

 

–tr

LITA: Jobs in Information Technology: June 15, 2016

Wed, 2016-06-15 19:35

New vacancy listings are posted weekly on Wednesday at approximately 12 noon Central Time. They appear under New This Week and under the appropriate regional listing. Postings remain on the LITA Job Site for a minimum of four weeks.

New This Week

Midwestern University, Library Manager, Glendale, AZ

Visit the LITA Job Site for more available jobs and for information on submitting a job posting.

David Rosenthal: What took so long?

Wed, 2016-06-15 15:00
More than ten months ago I wrote Be Careful What You Wish For which, among other topics, discussed the deal between Elsevier and the University of Florida:
And those public-spirited authors who take the trouble to deposit their work in their institution's repository are likely to find that it has been outsourced to, wait for it, Elsevier! The ... University of Florida, is spearheading this surrender to the big publishers.

Only now is the library community starting to notice that this deal is part of a consistent strategy by Elsevier and other major publishers to ensure that they, and only they, control the accessible copies of academic publications. Writing on this recently we have:
Barbara Fister writes:
librarians need to move quickly to collectively fund and/or build serious alternatives to corporate openwashing. It will take our time and money. It will require taking risks. It means educating ourselves about solutions while figuring out how to put our values into practice. It will mean making tradeoffs such as giving up immediate access for a few who might complain loudly about it in order to put real money and time into long-term solutions that may not work the first time around. It means treating equitable access to knowledge as our primary job, not as a frill to be worked on when we aren’t too busy with our “real” work of negotiating licenses, fixing broken link resolvers, and training students in the use of systems that will be unavailable to them once they graduate.

Amen to all that, even if it is 10 months late. If librarians want to stop being Elsevier's minions they need to pay close, timely attention to what Elsevier is doing. Such as buying SSRN. How much would arXiv.org cost them?

DPLA: Reflections on Community Currents at #DPLAfest

Wed, 2016-06-15 14:33

This guest post was written by T-Kay Sangwand, Librarian for Digital Collection Development, Digital Library Program, UCLA and DPLA + DLF ‘Cross-Pollinator.’ (Twitter: @tttkay)

As an information professional committed to social justice and employing a critical lens to examine the impact of our work, I always look forward to seeing how these principles and issues of diversity and representation of the profession and historical record are more widely discussed in national forums. In my new role as Librarian for Digital Collection Development at UCLA’s Digital Library Program, I grapple with how our work as a digital library can serve our predominantly people of color campus community within the larger Los Angeles context, a city also predominantly comprised of people of color. As a first-time attendee to DPLAfest, I was particularly interested in how DPLA frames itself as a national digital library for a country that is projected to have a majority person of color population by 2060. I observed that the DPLAfest leadership did not yet reflect the country’s changing demographics. The opening panel featured eight speakers, yet there was only one woman and two people of color.

The opening panel of DPLAfest was filled with many impressive statistics – over 13 million items in DPLA, over 1900 contributors, over 30 partners, over 100 primary source sets, with all 50 states represented by the collections. While these accomplishments merit celebration, I appreciated Dr. Kim Christen Withey’s Twitter comment that encourages us to consider alternate frameworks of success:

#DPLAfest lots of talk of numbers–presumably the bigger the better–how else can we think about success? esp in the digital content realm?

— Kim Christen Withey (@mukurtu) April 14, 2016

“Tech Trends in Libraries” panelists Carson Block, Alison Macrina, and John Resig discuss ‘big data’ and libraries. Photo by Jason Dixson

While the amount of materials or information we have access to is frequently used as a measure of success, several panels such as The People’s Archives: Communities and Documentation Strategy, Wax Works in the Age of Digital Reproduction: The Futures of Sharing Native/First Nations Cultural Heritage, and Technology Trends in Libraries encouraged nuanced discussions of success through their discussions of the complexities of access. The conversation between Alison Macrina of Library Freedom Project and John Resig of Khan Academy critically interrogated the celebration of big data. Macrina reminds libraries to ask the questions: Who owns big data? What is the potential for exploitation? Who has access? How do we negotiate questions of privacy for individuals yet not allow institutions to escape accountability?

The complexities of access and privacy were further explored in the community archives sessions. Community archivists Carol Steiner and Keith Wilson from the People’s Archive of Police Violence in Cleveland spoke on storytelling as a form of justice in the face of impunity but also the real concerns of retribution for archiving citizen stories of police abuse. Dr. Kim Christen Withey spoke on traditional knowledge labels and the Mukurtu content management system that privileges indigenous knowledge about their own communities and enables a continuum of access instead of a binary open/closed model of access. In both of these cases, exercising control over one’s self and community representation constitutes a form of agency in the face of symbolic annihilation that traditional archives and record keeping have historically wreaked on marginalized communities. Additionally, community investment in these documentation projects outside traditional library and archive spaces has been key to their sustainability. In light of this, Bergis Jules raised the important question of “what is or should be the role of large scale digital libraries, such as DPLA, in relation to community archives?” First and foremost, I think our role as information professionals is to listen to communities’ vision(s) for their historical materials; it’s only then that we may be able to contribute to and support communities’ agency in documentation and representation. I’m grateful that participants created space within DPLA to have these nuanced discussions and I’m hopeful that community-driven development can be a guiding principle in DPLA’s mission.

For a closer read of the aforementioned panels, see my Storify: Community Archives @ DPLAfest.

Special thanks to the Digital Library Federation for making the DPLAfest Cross-Pollinator grant possible.

Open Knowledge Foundation: Introducing The New Proposed Global Open Data Index Survey

Wed, 2016-06-15 11:00

The Global Open Data Index (GODI) is one of the core projects of Open Knowledge International. Originally launched in 2013, it has quickly grown and now measures open data publication in 122 countries. GODI is a community tool, and throughout the years the open data community have taken an active role in shaping it by reporting problems, discussing issues on GitHub and in our forums as well as sharing success stories. We welcome this feedback with open arms and in 2016, it has proved invaluable in helping us produce an updated set of survey questions.

In this blogpost we are sharing the first draft of the revised GODI survey. Our main objective in updating the survey this year has been to improve the clarity of the questions and provide better guidance to submitters in order to ensure that contributors understand what datasets they should be evaluating and what they should be looking for in those datasets. Furthermore, we hope the updated survey will help us to highlight some of the tangible challenges to data publication and reuse by paying closer attention to the contents of datasets.

Our aim is to adopt this new survey structure for future editions of GODI as well as the Local Open Data Index and we would love to hear your feedback! We are aware that some changes might affect the comparability with older editions of GODI and it’s for this reason that your feedback is critical. We are especially curious to hear the opinion of the Local Open Data Index community. What do you find positive? Where do you see issues with your local index? Where could we improve?

In the following we would like to present our ideas behind the new survey. You will find a detailed comparison of old and new questions in this table.

A brief overview of the proposed changes:

  • Better measure and document how easy it is to find government data online
  • Enhance our understanding of the data we measure
  • Improve the robustness of our analysis

 

  1. Better measure and document how easy or difficult it is to find government data online

Even if governments are publishing data, if potential users cannot find it, then it goes without saying that they will not be able to use it. In our revised version of the survey, we ask submitters to document where they found a given dataset as well as how much time they needed to find it. We recognise this to be an imperfect measure, as different users are likely to vary in their capacity to find government data online. However, we hope that this question will help us to extract critical information about usability challenges that are not easily captured by a legal and technical analysis of a given dataset, even if it would be difficult to quantify the results and therefore use them in the scoring.

  2. Enhance our understanding of the data we measure

It is common for governments to publish datasets in separate files and places. Contributors might find department spending data scattered across different department websites or, even when made available in one place such as a portal, split across multiple files. Some portion of this data might be openly licensed, another portion machine-readable, while others are in PDFs. Sometimes non-machine-readable data is available without charge, while machine-readable files are available for a fee. In the past, this has proven to be an enormous challenge for the Index, as submitters are forced to decide what data should be evaluated (see this discussion in our forum).

The inconsistent publication of government data leads to confusion among our submitters and negatively impacts the reliability of the Index as an assessment tool. Furthermore, we think it is safe to say that if open data experts are struggling to find or evaluate datasets, potential users will face similar challenges, and as such the inconsistent and sporadic data publication policies of governments are likely to affect data uptake and reuse. In order to ensure that we are comparing like with like, GODI assesses the openness of clearly defined datasets. These dataset definitions are what we have determined, in collaboration with experts in the field, to be essential government data – data that contains crucial information for society at large. If a submitter only finds parts of this information in a file or scattered across different files, rather than assessing the openness of key datasets, we end up assessing a partial snapshot that is unlikely to be representative. There is more at stake than our ability to assess the “right” datasets – incoherent data publication significantly limits the capacity of civil society to tap into the full value of government data.

  3. Improve the robustness of our analysis

In the updated survey, we will determine whether datasets are available from one URL by asking “Are all the data downloadable from one URL at once?” (formerly “Available in bulk?”). To respond in the affirmative, submitters would have to demonstrate that all required data characteristics are made available in one file. If the data cannot be downloaded from one URL, or if submitters find multiple files on one URL, they will be asked to select one dataset, from one URL, which meets the most requirements and is available free of charge. Submitters will document why they’ve chosen this dataset and data source in order to help reviewers understand the rationale for choosing a given dataset and to aid in verifying sources.

The subsequent question, “Which of these characteristics are included in the downloadable file?”, will help us verify that the dataset submitted does indeed contain all the requisite characteristics. Submitters will assess the dataset by selecting each individual characteristic contained within it (a minimal sketch of this completeness check follows). Not only will this prompt contributors to really verify that all the established characteristics are met, it will also allow us to gain a better understanding of the common components missing when governments publish data, thus giving civil society a better foundation to advocate for publishing the crucial data. In our results we will more explicitly flag which elements are missing and declare only those datasets fully open that match all of our dataset requirements.
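
In other words, scoring becomes a comparison between the required characteristics and those the submitter ticked. The sketch below illustrates the idea; the dataset characteristics and function names are illustrative assumptions, not GODI's actual schema or code:

    # Illustrative completeness check; the required characteristics below
    # (loosely modelled on a spending dataset) are assumptions, not GODI's
    # actual dataset definitions.
    REQUIRED = {"transaction date", "amount", "supplier", "department"}

    def missing_characteristics(selected):
        # Return the required characteristics absent from the submission.
        return REQUIRED - set(selected)

    submission = ["transaction date", "amount", "department"]
    missing = missing_characteristics(submission)

    # Only a dataset with no gaps can be declared fully open.
    fully_open_candidate = not missing
    print(missing)               # {'supplier'}
    print(fully_open_candidate)  # False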

 

This year, we are committed to improving the clarity of the survey questions: 

  1. “Does the data exist?” – The first question in previous versions of the Index was often confusing for submitters and has been reformulated to ask: “Is the data published by government (or a third-party related to government)?” If the response is no, contributors will be asked to justify their response. For example, does the collection, and subsequent publication, of this data fall under the remit of a different level of government? Or perhaps the data is collected and published (or not) by a private company? There are a number of legal, social, technical and political reasons that might mean that the data we are assessing simply does not exist, and the aim of this question is to help open data activists advocate for coherent policies around data production and publication (see past issues with this question here and here).
  2. “Is data in digital form?” – The objective of this question was to cover cases where governments provided large data on DVDs, for example. However, users have commented that we should not ask for features that do not make data more open. Ultimately, we have concluded that if data is going to be usable for everyone, it should be online. We have therefore deleted this question.
  3. “Publicly Available?” – We merged “Publicly available?” with “Is the data available online?”. The reason is that we only want to reward data that is publicly accessible online without mandatory registration (see for instance discussions here and here).
  4. “Is the data machine-readable?” – There have been a number of illuminating discussions regarding what counts as a machine-readable format (see for example discussions here and here). We found that the question “Is the data machine-readable?” was overly technical. Now we simply ask users “In which file formats are the data?”. When submitters enter the format, our system automatically recognises whether the format is machine-readable and open (a sketch of such a lookup follows this list).
  5. “Openly licensed” – Some people argued that the question “Openly licensed?” does not adequately take into account the fact that some government data are in the public domain and not under the protection of copyright. As such, we have expanded the question to “Is the data openly licensed/in the public domain?”. If data are not under the protection of copyright, they do not necessarily need to be openly licensed; however, a clear disclaimer must be provided informing users about their copyright status (which can be in the form of an open licence). This change is in line with the Open Definition 2.1. (See discussions here and here.)
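
As a rough illustration of the automatic recognition mentioned in point 4, the sketch below maps file extensions to the two properties the survey cares about. The mapping and names are illustrative assumptions, not GODI's actual format registry:

    # Illustrative sketch of automatic format recognition; the mapping and
    # function names are assumptions, not GODI's actual implementation.
    FORMAT_PROPERTIES = {
        "csv":  {"machine_readable": True,  "open_format": True},
        "json": {"machine_readable": True,  "open_format": True},
        "xls":  {"machine_readable": True,  "open_format": False},
        "pdf":  {"machine_readable": False, "open_format": True},
        "doc":  {"machine_readable": False, "open_format": False},
    }

    def assess_format(extension):
        # Look up the openness properties for the format a submitter
        # entered, defaulting to "neither" for unknown formats.
        return FORMAT_PROPERTIES.get(
            extension.lower().lstrip("."),
            {"machine_readable": False, "open_format": False},
        )

    print(assess_format("CSV"))   # {'machine_readable': True, 'open_format': True}
    print(assess_format(".pdf"))  # {'machine_readable': False, 'open_format': True}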

We look forward to hearing your thoughts on the forum or in the comments on this post!

Islandora: The Islandora Long Tail is now Awesome

Wed, 2016-06-15 10:03

I've been posting about the Long Tail of Islandora for a while now, putting a spotlight on Islandora modules developed and shared by members of our community. It's a good way to find new tools and modules that might answer a need you have on your site (so you don't have to build your own from scratch). We've also kept an annotated list of community-developed modules in our Resources section, but it had a tendency to get a little stale and sometimes missed great work happening in places we didn't expect.

Enter the concept of the Awesome List, a curated list of awesome lists, complete with helpful guidelines and policies that we could crib from to make our own list of all that is awesome for Islandora. It now lives in our Islandora Labs GitHub organization, and new contributions are very welcome. You can share your own work, your colleagues' work, or any public Islandora resource that you think other Islandorians might find useful. If you have something to add, please put in a pull request or email me.

Awesome Islandora

pinboard: Google Groups

Wed, 2016-06-15 02:48
Hey #Code4Lib Southeastern folk - #C4LSE is reopening a regional dialogue. Join us?
