Manage Metadata (Phipps and Hillmann): Wake-up Call for CC:DA

planet code4lib - Wed, 2014-02-05 21:23

Presentations on innovative ways to gather data outside the library silo are happening all over ALA–generally hosted by committees and interest groups using speakers already planning to be at the conference. A great example of the kind of presentation I’m talking about was the Sunday session sponsored by the ALCTS CaMMS Cataloging & Classification Research Interest Group and produced by the ProMusicaDB project, with founder Christy Crowl and metadata librarian Kimmy Szeto. They provided a veritable feast of slides and stories, all of them illustrating the new ways that we’ll all be operating in the very near future. Their slides should be available on the ALCTS Cataloging and Classification Research IG site sometime soon. [Full disclosure: I spoke at that session too; see the previous blog post for more details.]

On the Saturday of Midwinter, I attended two parts of the CC:DA meeting (I had to leave to do a presentation to another group in the middle), but I dutifully returned for the last part. It was probably a mistake–my return occurred during the last gasp of a perfectly awful discussion. I had a brief chat with Peter Rolla (the current chair) after the meeting, and continued to think about why I was so appalled during the last part of the meeting. Later, when held hostage in a meeting by a conversation in which I had little interest, I wrote up some of my thoughts.

I would describe the discussion as one of the endless number of highly detailed conversations on improving the RDA rules that have been a “feature” of CC:DA meetings for the past few years. To be honest, I have a limited tolerance for such discussions, though I usually enjoy some of the ones at a less excruciating level of detail.

Somehow this discussion struck me as even more circular than most, and seemed to be aimed at “improving” the rules by limiting the choices allowed to catalogers–in a sense by mechanizing the descriptive process to an extreme degree. Now, I’m no foe of using automated means to create descriptive metadata, either as a sole technique or (preferably) for submission to catalogers or editors to complete. I think we ought to know a lot more about what can be done using technology rather than continue to flog any remaining potential for rule changes intended to push catalogers to supply a level of consistency that isn’t really achievable for humans. If you want consistency–particularly in transcription–use machines. Humans are far better utilized for reviewing the product, correcting errors, and adding information to improve its usefulness.

But in cataloging circles, discussing the use of automated methods is generally considered off-topic. When the [technological] revolution comes, catalogers will be the first to go, or so it is too often believed. Copy cataloging and other less ‘professional’ means of cutting costs and increasing productivity are not happy topics of conversation for this group.

But, looking ahead, I see no letup in this trajectory without some changes. Catalogers love rules, and rules are endlessly improvable, no? Maybe, maybe not, but just put a tech services administrator in the room for some of these discussions, and you’re likely to get a reaction pretty close to mine. But to my mind, the total focus on rules rather than a more practical approach to address the inevitability of change in the business of cataloging is doing more towards ensuring that the human role in the process will be limited in ways that make little sense, except monetarily.

What we need here is to change the conversation, and no group is more qualified to do that than CC:DA. To do that it’s absolutely necessary that its membership become more knowledgeable about what is now possible in automating metadata creation. Without that kind of awareness, it’s impossible to start thinking and discussing how to focus less of CC:DA’s efforts on that part of the cataloging process which should be done by machines, and more on what still needs humans to accomplish. There are several ways to do this. One is by dedicating some of CC:DA’s conference time to bringing in those folks who understand the technology issues to demonstrate, discuss, and collaborate.

Catalogers and their roles have been changing greatly over the past few years, and promises of more change must be taken seriously. Then the ultimate question might be asked: if resistance is futile (and it surely is), how can catalogers learn enough to help frame that change?

LITA: Jobs in Information Technology: Feb. 5

planet code4lib - Wed, 2014-02-05 18:09

New vacancy listings are posted weekly on Wednesday at approximately 12 noon Central Time. They appear under New This Week and under the appropriate regional listing. Postings remain on the LITA Job Site for a minimum of four weeks.

New This Week

IT Systems Analyst, University of Maryland College Park – Libraries, College Park, MD

Visit the LITA Job Site for more available jobs and for information on submitting a job posting.

LITA: Four Local Libraries Honored for Offering Cutting-edge Services

planet code4lib - Tue, 2014-02-04 23:49

Today, the American Library Association (ALA) recognized four libraries for offering cutting-edge technologies in library services, honoring programs in Edmonton, Alberta, Canada; Bridgewater, New Jersey; Raleigh, North Carolina; and University Park, Pennsylvania.

The recognition, which is presented by the ALA Office for Information Technology Policy and the Library & Information Technology Association (LITA), showcases libraries that are serving their communities using novel and innovative methods. Selected libraries or library service areas will be highlighted through various ALA publications and featured in a program at the 2014 ALA Annual Conference in Las Vegas, June 26-July 1, 2014.

“This was a very competitive year for cutting-edge applicants. Those recognized today stood out in the ways they creatively solved problems, engaged library patrons, and strengthened library services and visibility,” said Marc Gartler of Madison Public Library (WI), who chaired the selection subcommittee. “We are excited to recognize these four projects, several of which already have proven their potential to be successfully replicated by libraries around the globe.”

*Cut-rate Digital Signboards, Somerset County Library System, Bridgewater, NJ.

Somerset County Library System developed a more dynamic and cost-effective way to promote programs and resources in high-traffic areas of the library. The creative solution brings together a Raspberry Pi computer, large-screen monitors, WiFi, and Google Docs Presentations to reduce digital signboard costs by almost $1,000 per display. The project also reduced poster printing costs and actually made it easier for staff to remotely update and push new content to their customers.
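
The announcement doesn’t include the library’s actual scripts, but the general pattern is easy to sketch: the Pi boots, opens a browser in full-screen kiosk mode, and points it at a slide deck that staff publish to the web from Google Docs, so updating the sign is just editing the deck. Below is a minimal hypothetical launcher in Python; the deck URL is a placeholder, and the Chromium flags reflect a common kiosk setup rather than Somerset County’s actual configuration.

    #!/usr/bin/env python3
    # Hypothetical digital-signboard launcher for a Raspberry Pi (a sketch,
    # not Somerset County's code). Assumes Chromium is installed and the
    # slide deck has been published to the web from Google Docs.
    import subprocess

    # Placeholder "publish to the web" URL for the signage deck.
    SIGNAGE_URL = ("https://docs.google.com/presentation/d/EXAMPLE_DECK_ID/"
                   "embed?loop=true&delayms=10000")

    def launch_signboard(url: str) -> None:
        # The published deck loops and advances on its own, so the display
        # needs no local interaction after launch.
        subprocess.run(
            [
                "chromium-browser",  # Chromium as shipped on Raspbian
                "--kiosk",           # full screen, no window chrome
                "--noerrdialogs",    # hide crash/restore dialogs on the sign
                url,
            ],
            check=True,
        )

    if __name__ == "__main__":
        launch_signboard(SIGNAGE_URL)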

*“Me Card,” Edmonton Public Library, Edmonton, Alberta, Canada.

Edmonton Public Library’s Me Card technology allows customers with a library card from one library to create an account with and access collections at another library with no staff intervention or additional library cards. The Me Card can work with any integrated library system (ILS) and does not require a shared ILS among participating libraries. More than 1,500 customers accessed the web-based service and registered for membership in the first two months of operation.

*My #HuntLibrary, North Carolina State University (NCSU), Raleigh, NC.

NCSU ensured that the story of their new library’s opening would be told through the words and images of the people who use it every day. The NCSU Libraries used Instagram’s API to develop an app that captured photos tagged with #HuntLibrary and displayed them online and in the library. Both a user engagement tool and a digital preservation effort, the project collected more than 3,200 images from more than 1,300 different users and recorded more than 235,000 page views.

*One Button Studio, Penn State University Libraries, University Park, PA.

Penn State University Libraries, in partnership with Information Technology Services, enabled easy video creation for faculty and students across Penn State campuses. With only a flash drive and the push of a single button, users can activate a video camera, microphone and lights to begin recording. In its first year of use, 4,200 people created more than 270 hours of video. The studio also reduces production costs due to changes in the type of equipment needed, as well as the number of staff.

Additional information is available at http://www.ala.org/offices/oitp/cuttingedge

About the ALA Office for Information Technology Policy (OITP)

The Office for Information Technology Policy advances ALA’s public policy activities by helping secure information technology policies that support and encourage efforts of libraries to ensure access to electronic information resources as a means of upholding the public’s right to a free and open information society. For more information, visit www.ala.org/oitp.

ALA Equitable Access to Electronic Content: Mr. President: Where Have Libraries Gone?

planet code4lib - Tue, 2014-02-04 21:22

Photo by NBC Washington.

It was my pleasure to be in the audience today for President Barack Obama’s speech about the ConnectED initiative at Buck Lodge Middle School in Maryland. I found myself thinking back to a speech I attended by then-Senator Barack Obama in 2005, where he credited libraries with helping him land his first job as a community organizer.

“More than a building that houses books and data, the library represents a window to a larger world, the place where we’ve always come to discover big ideas and profound concepts that help move the American story forward,” he said. “At the moment that we persuade a child, any child to cross that threshold into a library, we change their lives forever, for the better.”

Ninety-four percent of parents agree libraries are important, so I was disappointed to find libraries conspicuously absent in President Obama’s vision of connecting our students to world-class learning.

The President opened his remarks with his commitment to significant investments in education. But he missed the mark in a few key ways. First, he failed to recognize the importance of an effective school library program. ConnectED must include professional development and support for school librarians, in addition to broadband access and devices, to ensure students have the digital literacy and research skills necessary to effectively use those devices.

At a broader level, U.S. school, public and higher-education libraries complete education and help jumpstart employment in every community in this country. Afterschool WiFi use in public libraries spikes at 3:01 p.m. when students bring their devices and homework assignments to one of more than 16,000 library locations. New digital learning labs in libraries are seed beds for people to create content, as well as consume it—demanding upload speeds that rival download. And videoconferencing shrinks distances and empowers uses that range from virtual field trips with NASA in Maine to distance education and professional development for high school principals in Oklahoma.

Sixty-two percent of libraries report they provide the only free access to computers and the internet in their communities—for rural areas, this percentage climbs to 70 percent. With one-third of Americans lacking home broadband access, libraries provide a digital lifeline that supports essential education, employment and e-government needs. From the Department of Labor to the Department of Health and Human Services, federal agencies have called on OneStops and Head Start programs to coordinate with their local libraries to achieve their missions.

When one must have digital access and skills to find a job, apply for unemployment benefits, or enroll in a health exchange, libraries are the one place for all.

Library needs for high-capacity broadband are clear: fewer than 10 percent of libraries have internet speeds of 100 Mbps or higher, and one in five libraries has speeds of 1.5 Mbps or slower. One internet user participating in an interactive distance learning program can cripple access for dozens of other learners in the library. This is not acceptable.

If we’re serious about learning, then we must be serious about libraries. The original framers of the E-rate program understood this, and we hope the President will recognize and engage the power of libraries and librarians in connecting communities and achieving the vision for 21st century education and a globally competitive economy.

The post Mr. President: Where Have Libraries Gone? appeared first on District Dispatch.

ALA Equitable Access to Electronic Content: Apply now: Library of Congress offering $150,000 for literacy award

planet code4lib - Tue, 2014-02-04 20:19

The Library of Congress Center for the Book is now accepting applications for its 2014 Library of Congress Literacy Awards Program, an award that honors three organizations that have made outstanding contributions to increasing literacy in the United States and abroad. The three award winners will be announced during the National Book Festival on August 30, 2014, followed by an awards ceremony and formal presentations by the winners at the Library of Congress in October.

More on the awards:

  • $150,000: The David M. Rubenstein Prize will be awarded to an organization that has made outstanding and measurable contributions to increasing literacy levels and has demonstrated exceptional and sustained depth and breadth in its commitment to the advancement of literacy. The organization will meet the highest standards of excellence in its operations and services. This award may be given to any organization based either inside or outside the United States.
  • $50,000: The American Prize will be awarded to an organization that has made a significant and measurable contribution to increasing literacy levels or the national awareness of the importance of literacy. This award may be given to any organization that is based in the United States.
  • $50,000: The International Prize will be awarded to an organization or national entity that has made a significant and measurable contribution to increasing literacy levels. This award may be given to any organization that is based in a country outside the United States.

The program is accepting applications from now until March 31, 2014. To learn more, download the application. We hope that you will share this information with any groups that might be interested and consider either applying on behalf of your own organization or nominating another group.

The Library of Congress Literacy Awards Program is administered by the Center for the Book. Email literacyawards@loc.gov with questions.

The post Apply now: Library of Congress offering $150,000 for literacy award appeared first on District Dispatch.

ALA Equitable Access to Electronic Content: Want to teach an online course? Brush up on your copyright law first

planet code4lib - Tue, 2014-02-04 20:08

Do you want to teach an online course? Have you ever created, or thought about creating, a massive open online course (also referred to as a “MOOC”)? Ever wonder about the copyright issues involved in doing this? The attention being paid to MOOCs and the sudden interest in global online education has created remarkable new situations for faculty, administrators and librarians. One area of both uncertainty and opportunity is copyright, especially because traditional copyright exceptions do not seem to apply comfortably in the MOOC environment.

Join the Copyright Education Subcommittee of the American Library Association’s Office for Information Technology Policy for a new series of presentations intended to educate users about copyright and encourage a discussion about a balanced approach to copyright law.

The subcommittee will host the free webinar “What a Difference a MOOC Makes: Copyright Management for Online Courses,” on February 6, 2014, from 2:00 to 4:00 p.m. Eastern time (11:00 a.m. to 1:00 p.m. Pacific time). This webinar will examine how institutions need to think about copyright compliance for online teaching, what level of guidance for teaching faculty is appropriate, and what kinds of services may be needed to support the MOOC phenomenon.

The webinar will be hosted by Kevin Smith, Duke University’s first Scholarly Communications Officer. Smith teaches and advises faculty, administrators and students about copyright, intellectual property licensing and scholarly publishing. The Scholarly Communications Officer is both a librarian and an attorney experienced in copyright and technology law.

Smith holds a Master of Library Science from Kent State University and has worked as an academic librarian in both liberal arts colleges and specialized libraries. His strong interest in copyright law began in library school, and he received a law degree from Capital University in 2005. Before moving to Duke in 2006, Smith served as the Director of the Pilgrim Library at Defiance College in Ohio, where he also taught Constitutional Law. He is admitted to the bar in Ohio and North Carolina.

Register for the webinar

The post Want to teach an online course? Brush up on your copyright law first appeared first on District Dispatch.

Ribaric, Tim: OLA SuperConference Presentation Material

planet code4lib - Tue, 2014-02-04 19:01

I had the opportunity to present at OLA SuperConference this year. I'm calling it part two in my series of presentations on Screen Scraping (Part 1 was Computers in Libraries last year). As always, it was a great experience.

Open Knowledge Foundation: Announcing the Local Open Data Census

planet code4lib - Tue, 2014-02-04 11:43

Let’s explore local open data around the world!

Local data is often the most relevant to citizens on a daily basis – be it rubbish collection times, local tax rates or zoning information. However, at the moment it’s difficult to know which key local datasets are openly available and where. Now, you can help change that.

We know there is huge variability in how much local data is available not just across countries but within countries, with some cities and municipalities making major open data efforts, while in others there’s little or no progress visible. If we can find out what open data is out there, we can encourage more cities to open up key information, helping businesses and citizens understand their cities and making life easier.

We’ve created the Local Open Data Census to survey and compare the progress made by different cities and local areas in releasing Open Data. You can help by tracking down Open Data from a city or region where you live or that you’re interested in. All you need to do is register your interest and we’ll get your Local Open Data Census set up and ready to use.

Get in touch about surveying open data in your city or region »

Investigate your local open data on Open Data Day

Open Data Day is coming – it’s on 22 February 2014 and will involve Open Data events around the world where people can get involved with open data. If you’re organising an open data event, why not include a Census-a-thon to encourage people to track down and add information about their city?

A Local Open Data Census for your city will help:

  • new people learn about open data by exploring what’s available and relevant to them;
  • you compare open data availability in your city with other cities in your country;
  • local government identify data that local people and businesses are interested in using;
  • and more data get opened up everywhere!

It’s really easy to contribute to an Open Data Census: there’s lots of documentation for them and a truly global community creating and using them. A City Census is a great way to get involved with open data for the first time, as the information is about things city residents really care about. Or if you’re more interested in regions, counties or states, you can take part in a regional Census. (Some countries will have both regional and city Censuses, because of the way their local government information is organised.)

Sign up now to ensure your city and country have a Local Open Data Census up and running before Open Data Day, and let’s see how much open data about open data we can create this month! We’ll have more tips on how to run a successful Census-a-thon coming soon.

Register your interest in a local census »

The history behind the Local Open Data Census

In 2012 we started an Open Data Census to track the state of country-level open data around the world. The 2013 results, published as the first ever Open Data Index last autumn, covered 700 datasets across 70 countries and have already proved useful in driving open data release around the world. We’re looking forward to updating the Census for 2014 later this year.

However, a lot of data that is most relevant to citizens’ everyday lives is at the local level. That’s why last year we ran a separate pilot, to measure release of open data at the local, city level – the City Open Data Census. We’ve learnt a lot from the experience and from the community who used the pilot, and we are now ready to offer a full Local Open Data Census to everyone, everywhere.

You can find out more on the new Census “Meta” site »

And there’s more: Topical Open Data Censuses

We also know that people will want to run their own specific Open Data Censuses focused on particular topics or datasets. If you’ve been wondering about the openness of pollution data, legal information, public finances or any other topic, we can set up a special Census to survey the datasets you care about, on a national or regional scale.

A Topical Census uses the platform built for the Open Data Census to run a similar, customised census, and publish the results in a simple and visually appealing way. The questionnaires, responses and results can be hosted by the Open Knowledge Foundation, so you don’t have to worry about the technical side. If you are interested in running a Topical Open Data Census, get in touch with the Census team.

Note that we expect quite a bit of demand for local Censuses in the next few weeks. We will prioritise requests for Topical Censuses from groups who have more people ready to get involved, such as existing networks, working groups or interest groups around the topic, so please let us know a little about yourselves when you get in touch.

Bigwood, David: Open Publication Distribution System

planet code4lib - Tue, 2014-02-04 11:37
Well, I'm back to the weblog again because an idea has taken hold of me. I recently became aware of Open Publication Distribution System (OPDS) Catalog format, a syndication format for e-pubs based on Atom & HTTP. It is something like an RSS feed for e-books. People are using it to find and acquire books. It sounds like a natural fit for library digitization projects. An easy way for folks to know what's new and grab a copy if they like.

So is anyone using this? Is it built into Omeka, Greenstone, DSpace or any of our tools? If you do use it, do you have separate feeds for different projects? Say, one for dissertations, another for the local history project and another for books by state authors. Or do you have just one large feed? Is it being used by the DPLA or Internet Archive? How's it working for you?

We have plenty of documents we have scanned as well as our own publications. Might this be a good way to make them more discoverable?
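
Since OPDS is ordinary Atom served over HTTP, any generic feed parser can read a catalog. As a rough sketch of how a digitization project's feed might be consumed, the following Python uses the third-party feedparser library; the catalog URL is a placeholder, and the acquisition link relation is the one defined in the OPDS specification.

    # Sketch: list the entries of an OPDS catalog with feedparser
    # (third-party; pip install feedparser). The URL is a placeholder.
    import feedparser

    CATALOG_URL = "https://example.org/opds/new"  # hypothetical feed of new titles

    feed = feedparser.parse(CATALOG_URL)
    print(feed.feed.get("title", "(untitled catalog)"))

    for entry in feed.entries:
        # OPDS marks downloadable copies with the acquisition link relation.
        downloads = [link["href"] for link in entry.get("links", [])
                     if link.get("rel") == "http://opds-spec.org/acquisition"]
        print("-", entry.title, "->", ", ".join(downloads) or "no acquisition link")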

James Cook University, Library Tech: Creating your own ebook #vala14 #bcb

planet code4lib - Tue, 2014-02-04 06:35
Chris Cormack, with throbbing leg and nauseous belly, delivered a session bursting with so many ideas that I'm still putting together all the resources he provided. After a greeting in Māori, the author of Koha delivered a measured rant that put him more in the Aaron Swartz camp than the big publisher camp. My interpretation of his thesis is that digital publishing completely changes the 'book'.

Morgan, Eric Lease: Beginner’s glossary to linked data

planet code4lib - Tue, 2014-02-04 01:47

This is a beginner’s glossary to linked data. It is a part of the yet-to-be-published LiAM Guidebook on linked data in archives.

  • API – (see application programmer interface)
  • application programmer interface (API) – an abstracted set of functions and commands used to get output from remote computer applications. These functions and commands are not necessarily tied to any specific programming language and therefore allow programmers to use a programming language of their choice.
  • content negotiation – a process whereby a user-agent and HTTP server mutually decide what data format will be exchanged during an HTTP request. In the world of linked data, content negotiation is very important when URIs are requested by user-agents because content negotiation helps determine whether or not HTML or serialized RDF will be returned.
  • extensible markup language (XML) – a standardized data structure made up of a minimum of rules that can easily be used to represent everything from tiny bits of data to long narrative texts. XML is designed to be read by people as well as computers, but because of this it is often considered verbose and, ironically, difficult to read.
  • HTML – (see hypertext markup language)
  • HTTP – (see hypertext transfer protocol)
  • hypertext markup language (HTML) – an XML-like data structure intended to be rendered by user-agents whose output is for people to read. For the most part, HTML is used to mark up text and denote a text’s stylistic characteristics such as headers, paragraphs, and list items. It is also used to mark up the hypertext links (URLs) between documents.
  • hypertext transfer protocol (HTTP) – the formal name for the way the World Wide Web operates. It begins with one computer program (a user-agent) requesting content from another computer program (a server) and getting back a response. Once received, the response is formatted for reading by a person or for processing by a computer program. The shape and content of both the request and the response are what make-up the protocol.
  • Javascript object notation (JSON) – like XML, a data structure allowing arbitrarily large sets of values to be associated with an arbitrarily large set of names (variables). JSON was first natively implemented as a part of the Javascript language, but has since become popular in other computer languages.
  • JSON – (see Javascript object notation)
  • linked data – the stuff and technical process for making real the ideas behind the Semantic Web. It begins with the creation of serialized RDF and making the serialization available via HTTP. User agents are then expected to harvest the RDF, combine it with other harvested RDF, and ideally use it to bring to light new or existing relationships between real world objects — people, places, and things — thus creating and enhancing human knowledge.
  • linked open data – a qualification of linked data whereby the information being exchanged is expected to be “free” as in gratis.
  • ontology – a highly structured vocabulary, and in the parlance of linked data, used to denote, describe, and qualify the predicates of RDF triples. Ontologies have been defined for a very wide range of human domains, everything from bibliography (Dublin Core or MODS), to people (FOAF), to sounds (Audio Features).
  • RDF – (see resource description framework)
  • representational state transfer (REST) – a process for querying remote HTTP servers and getting back computer-readable results. The process usually employs denoting name-value pairs in a URL and getting back something like XML or JSON.
  • resource description framework (RDF) – the conceptual model for describing the knowledge of the Semantic Web. It is rooted in the notion of triples whose subjects and objects are literally linked with other triples through the use of actionable URIs.
  • REST – (see representational state transfer)
  • Semantic Web – an idea articulated by Tim Berners-Lee whereby human knowledge is expressed in a computer-readable fashion and made available via HTTP so computers can harvest it and bring to light new information or knowledge.
  • serialization – a manifestation of RDF; one of any number of textual expressions of RDF triples. Examples include but are not limited to RDF/XML, RDFa, N3, and JSON-LD.
  • SPARQL – (see SPARQL protocol and RDF query language)
  • SPARQL protocol and RDF query language (SPARQL) – a formal specification for querying and returning results from RDF triple stores. It looks and operates very much like the structured query language (SQL) of relational databases complete with its SELECT, WHERE, and ORDER BY clauses.
  • triple – the atomistic facts making up RDF. Each fact is akin to a rudimentary sentence with three parts: 1) subject, 2) predicate, and 3) object. Subjects are expected to be URIs. Ideally, objects are URIs as well, but can also be literals (words, phrases, or numbers). Predicates are akin to the verbs in a sentence and they denote a relationship between the subject and object. Predicates are expected to be a member of a formalized ontology.
  • triple store – a database of RDF triples, usually accessible via SPARQL.
  • uniform resource identifier (URI) – a unique pointer to a real-world object or a description of an object. In the parlance of linked data, URIs are expected to have the same shape and function as URLs, and if they do, then the URIs are often described as “actionable”.
  • uniform resource locator (URL) – an address denoting the location of something on the Internet. These addresses usually specify a protocol (like http), a host (or computer) where the protocol is implemented, and a path (directory and file) specifying where on the computer the item of interest resides.
  • URI – (see uniform resource identifier)
  • URL – (see uniform resource locator)
  • user agent – the formal name for a program that requests content over HTTP and consumes the response. Web browsers are the most familiar user agents, and their output is read by people; in the world of linked data, many user agents are programs whose “readers” are other computer programs.
  • XML – (see extensible markup language)

For a more complete and exhaustive glossary, see the W3C’s Linked Data Glossary.
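
To make a few of these terms concrete (triple, ontology, serialization, triple store, SPARQL), here is a short illustrative sketch using rdflib, a third-party Python library; the subject URI is invented for the example.

    # Illustrative only: build one triple, serialize it, and query it
    # with rdflib (third-party; pip install rdflib).
    from rdflib import Graph, Literal, Namespace, URIRef

    DC = Namespace("http://purl.org/dc/elements/1.1/")  # a Dublin Core ontology

    g = Graph()  # an in-memory triple store

    # A triple: subject (a URI), predicate (from an ontology), object (a literal).
    g.add((URIRef("http://example.org/item/1"), DC.title, Literal("Finding aid")))

    # Serialization: the same triple expressed as text, here in Turtle.
    print(g.serialize(format="turtle"))

    # SPARQL: query the triple store, much as SQL queries a relational database.
    query = """
        SELECT ?s ?title
        WHERE { ?s <http://purl.org/dc/elements/1.1/title> ?title }
    """
    for row in g.query(query):
        print(row.s, row.title)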
