
Open Knowledge Foundation: Presenting public finance just got easier

planet code4lib - Fri, 2015-03-20 12:40

This blog post is cross-posted from the CKAN blog.

CKAN 2.3 is out! The world-famous data handling software suite, which powers numerous open data portals across the world, has been significantly upgraded. How can this version open up new opportunities for existing and future deployments? Read on.

One of the new features of this release is the ability to create extensions that get called before and after a file is uploaded, updated, or deleted on a CKAN instance.
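In code terms, these hooks follow CKAN’s plugin interface (methods such as after_create and after_update on a resource controller). The sketch below simulates the dispatch without a CKAN install; the BudgetWatcher class and the notify dispatcher are illustrative stand-ins, not CKAN’s actual plumbing.

```python
# Hedged sketch: CKAN 2.3-style resource lifecycle hooks, simulated
# without a CKAN install. The hook names mirror CKAN's interface;
# the dispatcher below is our own stand-in, not CKAN code.

class BudgetWatcher:
    """An 'extension' that records resource lifecycle events."""
    def __init__(self):
        self.events = []

    def after_create(self, resource):
        self.events.append(("create", resource["url"]))

    def after_update(self, resource):
        self.events.append(("update", resource["url"]))

def notify(extensions, hook, resource):
    """Stand-in for CKAN calling each registered extension's hook."""
    for ext in extensions:
        handler = getattr(ext, hook, None)
        if handler:
            handler(resource)

watcher = BudgetWatcher()
notify([watcher], "after_create", {"url": "budget-2015.csv"})
notify([watcher], "after_update", {"url": "budget-2015.csv"})
```

In a real deployment the extension would register through CKAN’s plugin system and receive these calls from the portal itself.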

This may not sound like a major improvement, but it creates a lot of new opportunities. Now it’s possible to analyse the files (which are called resources in CKAN) and put them to new uses based on that analysis. To showcase how this works, Open Knowledge, in collaboration with the Mexican government, the World Bank (via the Partnership for Open Data), and the OpenSpending project, has created a new CKAN extension which uses this new feature.

It’s actually two extensions. One, called ckanext-budgets, listens for creation and updates of resources (i.e. files) in CKAN, and when that happens the extension analyses the resource to see if it conforms to the data file part of the Budget Data Package specification. The Budget Data Package is a relatively new specification for budget publications, designed for comparability, flexibility, and simplicity. It’s similar to data packages in that it provides metadata around simple tabular files, like a CSV file. If the CSV file (a resource in CKAN) conforms to the specification (i.e. the columns have the correct titles), then the extension automatically creates the Budget Data Package metadata based on the CKAN resource data and makes the complete Budget Data Package available.
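As a rough illustration of that conformance check, the sketch below inspects a CSV header and, only if the expected columns are present, emits data-package-style metadata. The required column names and the metadata shape are placeholders for illustration, not the actual Budget Data Package specification.

```python
import csv
import io

# Placeholder for the spec's required columns -- illustrative only,
# not the real Budget Data Package field list.
REQUIRED_COLUMNS = {"id", "amount", "admin", "cofog"}

def budget_metadata(csv_text, resource_name):
    """Return data-package-style metadata if the CSV header conforms,
    else None (mirroring how a non-conforming resource is ignored)."""
    header = next(csv.reader(io.StringIO(csv_text)))
    if not REQUIRED_COLUMNS.issubset(h.strip().lower() for h in header):
        return None
    return {
        "name": resource_name,
        "resources": [{
            "path": resource_name + ".csv",
            "schema": {"fields": [{"name": h} for h in header]},
        }],
    }

meta = budget_metadata("id,amount,admin,cofog\n1,100,Health,07", "budget-2015")
```

A non-conforming file (say, with columns a and b) yields None, and no package is published.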

It might sound very technical, but it really is very simple. You add or update a CSV file resource in CKAN, and it automatically checks whether the file contains budget data in order to publish it in a standardised form. In other words, CKAN can now automatically produce standardised budget resources, which makes integration with other systems a lot easier.

The second extension, called ckanext-openspending, shows how easy such an integration around standardised data is. The extension takes the published Budget Data Packages and automatically sends them to OpenSpending. From there, OpenSpending does its own thing: it analyses the data, aggregates it, and makes it very easy to use for those who use OpenSpending’s visualisation library.
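A minimal sketch of such a hand-off, assuming a hypothetical OpenSpending endpoint: wrap the published descriptor in a submission payload and POST it. The URL and payload shape here are placeholders; the real extension’s API details are not documented in this post.

```python
import json

# Placeholder endpoint -- the real extension's target URL is an assumption.
OPENSPENDING_URL = "https://openspending.example/api/packages"

def build_submission(descriptor, resource_url):
    """Wrap a Budget Data Package descriptor for submission."""
    return {"datapackage": descriptor, "source_url": resource_url}

payload = build_submission({"name": "budget-2015"}, "https://demo.ckan.org/r/1")
body = json.dumps(payload)
# A real extension would then POST, e.g.:
#   requests.post(OPENSPENDING_URL, data=body,
#                 headers={"Content-Type": "application/json"})
```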

So thanks to a seemingly insignificant extension feature in CKAN 2.3, getting beautiful and understandable visualisations of budget spreadsheets is now only an upload to a CKAN instance away (and it will only get easier as the two extensions improve).

To learn even more, see this report about the CKAN and OpenSpending integration efforts.

Open Knowledge Foundation: Open Data Day report #3: Data Fiesta in Latin America and the Caribbean

planet code4lib - Fri, 2015-03-20 12:28

(This post was co-written by Open Knowledge and Fabrizio Scrollini from ILDA)

In our follow-up series about Open Data Day 2015, which took place on February 21 across the world, we now highlight some of the great events that took place across Latin America and the Caribbean. See our previous post about Asia-Pacific and Europe.

The Americas saw a lot of activity during this Open Data Day. Events, hackathons, formal and informal discussions were some of the activities in which the continent engaged through the day. There is an emerging movement with different levels of experience, maturity and resources but with lots of enthusiasm and great perspectives for the future.


The Argentine open data community came together to participate in a full day of activities in Buenos Aires, organized by Open Knowledge ambassador Yamila Garcia. Different members of the community presented their experience in lightning talks that took place throughout the day. Presenters shared experiences about their work in the federal government, the Buenos Aires municipality, media, hackerspaces and advocacy. In a different room, round tables were set up to deliberate and plan the future of open data in Argentina, on subjects like innovation and the upcoming elections. Hopefully, these ideas will be taken forward and will help shape the open data ecosystem in Argentina. Read more about the event here.


Brazil held 6 events in 6 different cities this Open Data Day. In the small city of São Carlos, there was a roundtable discussion about open data policies. The group tried to convince local authorities of the importance of having open data policies that can help build a more transparent and open political process. In Teresina, the local group took a more hands-on approach: the local hacker club organised a hackathon, and all the outcomes were shared under an open licence. São Paulo was also in a coding mood and organised a hackathon with LabHacker, PoliGNU, Thacker and the Comptroller General of São Paulo (SP-CGM) to promote the use of datasets and the creation of new apps in the fields of water, health and transport (see a summary of the event here).

Read more about ODD 2015 in Brazil on the Open Knowledge Brasil blog.


In Chile, Fundación Ciudadano Inteligente took a different approach and moved open data from the virtual space to the streets. As Felipe Alvarez mentions in his blog post, members of Ciudadano Inteligente got out of the office in Santiago and went to meet the citizens in Valparaíso, the second biggest city in Chile. Their objective was simple – to engage with citizens and learn what data they would like to see opened. Some interesting topics came up, such as expenditure on conservation efforts and local festivals, as well as civil rights issues. In addition, the team invited participants to join AbreLATAM, the regional un-conference that will take place this year in Santiago, Chile.


The Uruguayan community was busy around a cup of fresh coffee, plotting how to use local government data to visualize women’s rights issues. For four hours, engineers, designers, communications people and policy wonks tried out ideas and looked for available open data on this topic. With available geographical data and Wikipedia, the team took on the task of visualizing streets that are named after notable women in Montevideo. The results were a bit discouraging but expected: only 100 out of 5000 streets in Montevideo are named after women. This result catalysed a small community that worked for two weeks on developing this interactive website that acknowledges and explains the role these women played in Uruguayan history. No doubt, this was a very productive Open Data Day for the country!


One day was not enough for the Paraguayans, so they held a whole week of open data activities! Groups like TEDIC took to the streets hand in hand with government officials to paint murals with data. The government also launched the national open data portal, among other initiatives that are fostering the nascent open data scene in Paraguay.

Costa Rica

In Central America, “Ticos” are building their open data community. Abriendo Datos Costa Rica is a nascent initiative which co-organised the Costa Rica Open Data Day, reaching out to other civil society stakeholders. The event had a wide variety of participants and topics, focusing mostly on which data relevant to society the government should open next. Hopefully, Open Data Day is just the beginning of more activities in the country. You can see some of their pics here.

El Salvador

In El Salvador, a group of civil society organisations held a roundtable to discuss uses of open data in journalism. Taking a deep look into journalistic practices in El Salvador and Costa Rica, the group discussed how to use open data in their day-to-day assignments. El Salvador has one of the few open data portals in the region that is run by civil society.


Peru saw an epic event organised by the Open Data Peru community in Lima, where local specialists shared knowledge and developed projects together during the whole day. The event galvanised the local open data community, which is now spreading to other communities in Peru. Peruvians are very keen to work on several projects, and this event may be a big stepping stone towards a more sustainable and diverse open data community in Peru.


In Guatemala, Accion Ciudadana organised a day based on exploring the community’s needs and its understanding of open data. Participants identified how open data could help their community in daily and strategic issues. The event showed the different levels of understanding participants had about open data. SocialTIC Executive Director Juan Manuel Casanueva delivered a training based on the community’s perceptions of open data. This was one of the first open data activities in Guatemala, and probably one of many to come!


As usual, the Mexican community knows how to party with data. In Mexico City, a hundred people came to celebrate open data accompanied by parrilla and beer. Participants could choose to attend one of the four workshops on offer, participate in a hackathon for sustainability, or discover new findings in a data expedition. In addition, a conversation around the state of openness in Mexico developed after the presentation of the Mexican local and global index results, and participants raised ideas for how to grow the local community.


In Panama, IPANDETEC organized an awesome day which involved a hackathon, documentary screenings, conferences and workshops. The event was set up in collaboration with the Panama chapters of FLOSS, Wikimedia, Mozilla, Fedora and Creative Commons.


In Medellín, Fundación Gobierno Abierto invited School of Data fellows to deliver trainings on scraping data. Furthermore, the community spent time reflecting on the regional and local open data scene, as well as developing ideas for further action, advocating for open data across Colombia, both nationally and locally.


[credit to Mona School of Business & Management]

In the Caribbean, a great event organized by the Caribbean Open Institute took place at the Mona School of Business and Management (MSBM). The day was kicked off by Dr. Maurice McNaughton, who delivered a one-hour workshop on data visualization based on online resources from School of Data. Then, fuelled by coffee and pizza, the students divided into three teams and started to develop data visualizations in three fields – the 2015-2016 budget, high school track and field data, and Development Alert!, an online tool for increasing transparency and public engagement on projects that impact the environment and public health. One of the visualizations, a dashboard for track and field data, is now available online here. All in all, it seems like a great start to many more Open Data Days in the Caribbean!


All in all, the region is showing a vibrant, evolving community, with different degrees of resources and levels of understanding of open data. This complexity also presents an opportunity to engage and support more groups. We could not support as many as we wanted to, but this Open Data Day shows that open data in the Americas is here to stay. Check out some more detailed reports in Spanish on the Open Data Day activities on the Yo Gobierno website (see also here), in this blog post from DAL and this one from BID.

Thirteen mini-grants were given to organisers in the region as part of the Open Data Day microgrant partnership, thanks to ILDA, the Caribbean Open Institute and the Partnership for Open Data. See you next year with even more exciting events and news!

District Dispatch: Time is running out for LSTA and IAL – act now!

planet code4lib - Fri, 2015-03-20 12:00

by Judit Klein

Millions in federal funding for libraries are currently hanging in the balance. To save library funding from the chopping block – particularly the Library Services and Technology Act (LSTA) and Innovative Approaches to Literacy (IAL) programs – library supporters need to contact the offices of their Representatives and Senators and ask them to show support for continued library funding by signing “Dear Appropriator” letters about LSTA and IAL, which three Members of Congress who are huge library champions have drafted to the Appropriations Committees in the House and Senate. The more Members of Congress we can get to sign these “Dear Appropriator” letters, the better the chance of preserving and securing real money for libraries so that they can continue the great work they do in their communities. The only way we can achieve this is through grassroots efforts. Members of Congress need to hear from as many voters as we can rally to action.

Please email or phone your members of Congress and ask them to sign the Dear Appropriator letter supporting LSTA and IAL, then ask all other library supporters you know to do the same by no later than March 20th.  Contact info is available on our action center (just put in your zip code in the box on the lower right side).

To see whether your Members of Congress signed the letters last year, view the FY 2015 Funding Letter Signees document (pdf). If so, please be sure to thank and remind them of that when you email or call!  More information can be found on this earlier post.

Please take a few minutes and contact your Members of Congress. You can find their contact information and talking points at our action center.

  • LSTA Letter – The Library Services and Technology Act is the primary source of federal funding for libraries in the federal budget. The bulk of the program is a population-based grant distributed to each state through the Institute of Museum and Library Services.
    • House LSTA Letter Contact: Rep. Raul Grijalva –  Norma Salazar-Ibarra (; 202-225-2435)
    • Senate LSTA Letter Contact: Sen. Jack Reed – Elyse Wasch (; 202-224-4642), or Sen. Susan Collins – Cameron O’Brien (cameron_o’
  •  IAL Letter – The Innovative Approaches to Literacy program ensures that children and schools in underserved communities have access to books and other reading materials and services. Through IAL programs, children are better prepared to succeed in high school, college, and in 21st century jobs.

The post Time is running out for LSTA and IAL – act now! appeared first on District Dispatch.

DuraSpace News: DSpaceDirect Information Sessions in 2015

planet code4lib - Fri, 2015-03-20 00:00

The DSpaceDirect Information Session series will kick off in 2015 on Wednesday, March 25th! Registration is free and open to the public.

Open Knowledge Foundation: We’re Hiring at Open Knowledge: Project Managers, Developers and Data Wranglers

planet code4lib - Thu, 2015-03-19 22:52

Open Knowledge are delighted to advertise several new open positions:

  • Project Manager – Open Data
  • Python Developer
  • Frontend Developer
  • Data Wrangler

A brief summary on each post can be found below. Full details and application forms can be found on

Project Manager – Open Data

We are looking for a professional and dynamic hands-on project manager to manage a portfolio of projects at Open Knowledge – an international, world-leading non-profit working on open data. The project management style will need to fit within our creative and innovative atmosphere, and should help us retain organisational flexibility and agility.

The projects requiring management will vary, but in general will range from £25k/several months/1-2 team members, to £500k+/several years/4-6 team members. Some projects will involve substantial software design and delivery and require good technical understanding and the ability to manage a technology delivery project. In general the project teams are made up of specialists who have a good sense of the area of work and may be able to be the public face of the project; the key role of the project manager is to ensure planning, delivery, tracking and reporting occurs reliably and to a good standard.

Open Knowledge’s partners and clients include national government bodies, NGOs, and organisations such as the World Bank. Projects funded by grants are delivered for philanthropic foundations, the European Commission (eg. through the FP7/H2020 programme) and others.

Find out more or apply »

Python Developer

Working in the fast-growing area of open data, we build open source tools to drive transparency, accountability and data-driven insight. Our flagship product CKAN runs official national data portals from the UK to Brazil, US to Australia and hundreds more around the world. We also build a variety of other open source software to help people access information and turn data into insight.

We’re looking for a Python developer to join our team who is smart, quick to learn and interested in contributing to a fast-growing and exciting area. You can be based anywhere – we operate as a virtual, online team – and we can offer a flexible working structure – we care about delivery, not being 9-5 at your desk!

Find out more or apply »

Data Wrangler

Work on cutting edge data-driven, high impact open knowledge projects with a world-leading non-profit in areas ranging from government finances to health-care.

We are looking for someone with good experience in “small-to-medium” data wrangling (e.g. you’ve been scraping in Python for a while, have a deep love for CSV … ). You must be a self-starter, capable of working remotely and taking initiative, as well as working effectively in a team with others.

Find out more or apply »

Frontend Developer

We are looking for talented front-end developers to work with us on an ongoing, freelance basis on a variety of open-source data-driven projects ranging from healthcare to illegal logging.

Find out more or apply »

Nicole Engard: Bookmarks for March 19, 2015

planet code4lib - Thu, 2015-03-19 20:30

Today I found the following resources and bookmarked them on Delicious.

  • Calaos Open Source Home Automation

Digest powered by RSS Digest

The post Bookmarks for March 19, 2015 appeared first on What I Learned Today....

Related posts:

  1. Automation Survey
  2. Open Education Spreading
  3. Amazing OSS Podcast

SearchHub: Lucidworks Fusion 1.3 Now Available

planet code4lib - Thu, 2015-03-19 20:11
Lucidworks Fusion 1.3 STS is now available for download.

New Connectors

For this release, our team concentrated on getting data into Fusion and processing it nicely. This means a few new connectors to new sources of data:
  • Any object, including Accounts, Cases and Case Notes, Activities and Activity History, Opportunities, Leads, and Contacts
  • Drupal: Site and site content
  • GitHub: repositories and commits
With these, we’ve made it a lot easier to build new search-based applications. For example, if you index data, you could apply our text analytics to Cases to help you understand what types of problems you’re working on the most and where you could be more effective.

New Processing Stages

We’re also expanding the list of processing stages we include to help you analyze and index content as it comes in:
  • Natural language processing stages, including:
    • Sentence detection and splitting
    • Part-of-speech tagging
  • Structured document parsing stages, including:
    • CSV
    • JSON
  • Change to the Tika stage for CSV and JSON parsing
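As a toy illustration of what a sentence detection stage produces (real stages use trained models, not a regex; nothing here reflects Fusion’s internals):

```python
import re

def split_sentences(text):
    """Naive sentence detection: split on whitespace that follows
    terminal punctuation. Decimal points like '1.3' are untouched
    because no whitespace follows them."""
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

sentences = split_sentences("Fusion 1.3 is out. It adds new stages!")
```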
NLP stages can be invaluable in analyzing human-generated text, per the concept of indexing and analyzing your support cases.

Easier Event Processing and Signals Extraction

We’re also making it easier to work with our Event processing and Signals extraction features. These use the activity of the users in your application as feedback and context to improve search result quality. By taking into account the real-world behavior of users, Fusion can give back much better and much more relevant results on future searches. Extensions to Signals in Fusion 1.3 include:
  • Pre-built Index-time and query-time stages for aggregating Events and Signals
  • A query stage for applying processed and aggregated Signals to improve results
You’ll be seeing quite a bit more of Signals surfaced, as well as new processing and analytics capabilities.

New Release Model

With the release of Lucidworks Fusion 1.3 STS, we are moving to a support model consisting of two types of releases. Both types of releases go through our full QA/testing process and are available to any Fusion customer.
  • Short-Term Support (STS) releases are for teams willing to update more frequently to receive the newest features as they become available. STS releases receive support and maintenance updates until the next release is available. Fusion 1.3 is an STS release.
  • Long-Term Support (LTS) releases are intended for teams that need more time to plan updates across their infrastructure and don’t need immediate access to new features as they become available. LTS releases receive support and critical maintenance updates for at least 18 months after the release date. Our previous release, Fusion 1.2, is an LTS release.
This type of release strategy is similar to that of other platforms and apps like Mozilla Firefox, the Linux kernel, or Ubuntu. If you have any questions about this new support policy, please contact support.

In addition to the above, we’ve continued applying design tweaks, incremental improvements, and usability enhancements to pipeline stages. And as usual, there’s work under the hood to improve performance, security, and manageability, as well as foundational work for features to come. For now, check it out! Download Fusion today. Full release notes for Fusion 1.3.

The post Lucidworks Fusion 1.3 Now Available appeared first on Lucidworks.

OCLC Dev Network: Web Services Maintenance March 22

planet code4lib - Thu, 2015-03-19 19:30

All OCLC Web services that use WSKey authentication will be unavailable for systems maintenance to the WSKey infrastructure for an estimated 45 minutes beginning at 2:00 AM Eastern Daylight Time (EDT) USA, Sunday March 22nd.

District Dispatch: What to expect: photos of National Library Legislative Day 2014

planet code4lib - Thu, 2015-03-19 16:31

In a few weeks, library advocates from all over the country will travel to Washington, D.C. to meet with their members of Congress to champion library issues. Registration for the event, National Library Legislative Day (NLLD), is currently open. To help first-time attendees prepare for the annual advocacy day, we created a photo essay using photos from past NLLD events (photos via flickr).

Participants in National Library Legislative Day spend their first day in the District receiving training and briefings in preparation for their meetings with their members of Congress the following day.

NLLD Participants on Briefing Day

Former ALA President Barbara Stripling listens to a briefing

Awards, like the White House Conference on Library and Information Services Taskforce Award (WHCLIST), are presented. The WHCLIST award is presented to a first-time, non-librarian advocate.

Former ALA President Barbara Stripling presents the WHCLIST Award to advocate Mary Lynn Collins.

Sen. Patrick Leahy addresses the advocates after accepting the 2014 Public Service Award

After the briefing, attendees are invited to a reception at the Capitol Building.

On day two, advocates are sent to the congressional buildings with their delegations. They are given one-pagers and other materials they can leave behind with Members of Congress and their staff after their meetings conclude.

Lisa Rice visits with Rep. Brett Guthrie (R-KY)

This year, National Library Legislative Day will be held May 4-5, 2015. Additional questions regarding the event can be directed to Lisa Lindle.


The post What to expect: photos of National Library Legislative Day 2014 appeared first on District Dispatch.

DPLA: Announcing the 2015 DPLA + DLF Cross-Pollinators

planet code4lib - Thu, 2015-03-19 15:00

We are pleased to announce the recipients of the 2015 DPLA + DLF Cross-Pollinator Travel Grants. These grants are the first of a series of collaborations between CLIR/DLF and the Digital Public Library of America. It is our belief that the sustainability of large-scale national efforts requires robust community support. Connecting DPLA’s work to the energetic and talented DLF community is a positive way to increase serendipitous collaboration around this shared digital platform.

The goal of the DPLA + DLF Travel Grants is to bring cross-pollinators—active DLF community contributors who can provide unique perspectives and share their vision of DPLA—to the conference. The travel grants include DPLAfest 2015 conference registration, travel costs, meals, and lodging in Indianapolis.


Meet the 2015 DPLA + DLF Cross-Pollinators


Benjamin Armintor
Programmer/Analyst, Columbia University Libraries

I have a tendency to get mixed up in a lot of projects, which makes a multilayered, open-source effort like DPLA a natural area of interest. I’m excited to hear about what people are doing with open data GLAM platforms like DPLA. I hope to find some like-minded folks in Indianapolis who would be willing to hack on a DPLA/Blacklight project.


Sabra Statham, Ph.D.
Project Coordinator: The People’s Contest
Office of Scholarly Communications, Penn State University

I am a scholar of American music and have been involved in digitization outreach activities in small local archives in Pennsylvania for the past five years. I am looking forward to hearing more about the new Service Hub initiative.



Laura Wrubel
E-Resources Content Manager, The George Washington University

As a librarian managing e-resources, I’m involved in the interesting and challenging work of improving discovery, search, and access to a wide range of web resources and data. Recently, I’ve been experimenting with the DPLA API as part of a research leave project to build up my coding skills and understand how DPLA’s resources might be explored alongside other digital collections. I’m looking forward to participating in DPLAfest and joining others to find new ways to connect researchers and students with DPLA’s growing database and platform.
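Experiments like the one Laura describes typically start with a simple GET against DPLA’s v2 items endpoint; the sketch below only builds the request URL (the API key value is a placeholder, while the endpoint and parameter names follow DPLA’s public API documentation).

```python
from urllib.parse import urlencode

DPLA_ITEMS = "https://api.dp.la/v2/items"

def dpla_search_url(query, api_key, page_size=10):
    """Build a DPLA item-search URL; a client would GET it and read
    the JSON 'docs' array from the response."""
    params = {"q": query, "api_key": api_key, "page_size": page_size}
    return DPLA_ITEMS + "?" + urlencode(params)

url = dpla_search_url("indianapolis", "YOUR_API_KEY")
```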



Scott W. H. Young

Digital Initiatives Librarian, Montana State University Library

I design and develop the web from a user experience perspective. At Montana State University Library, I am a member of the Informatics & Computing Department and lead UX research and web analytics reporting. My work focuses on front-end web design, digital library development, information architecture, search engine optimization, and social media community building.

@hei_scott |

LITA: Further Thoughts On Tech Roles + Librarianship

planet code4lib - Thu, 2015-03-19 13:00

Image courtesy of Flickr user deanj. CC BY.

Given the overwhelming response to Bryan’s post, “What is a Librarian?” and Michael’s follow up post, “Librarians: We Open Access,” a few more of the LITA bloggers thought we’d weigh in on our roles and how they fit within the profession. We hope you’ll share your story in the comments!

Lindsay C.

I remember, when I accepted my job working at LYRASIS, feeling fearful that I wasn’t being a traditional librarian. Yes, I, Lindsay Cronk, unapologetic weirdo and loud lady, worried about traditional professional roles. I was concerned that my peers wouldn’t accept my work as librarianship. I got all kinds of high-school-lunch-table self-conscious about it. I was being narrow-minded.

Most of us understand that the MLIS grad who works for a retail website providing taxonomy support and the MLIS grad who works in the academic library cataloging scholarly monographs are both librarians, and indeed peers (albeit with different job titles) who could probably give one another tips on running macros. Let’s get real. I have never worked in a library. I work for a library services organization. That said, all I ever do is troubleshoot access issues, provide outreach and education, promote services and use. My work is dedicated to advocacy for and of libraries. I’m a librarian’s librarian.

Grace T.

Currently in my first year of the MLS/MIS program, a ten hour drive away from my closest family and friends, and running from class to job to research to job, I’ve asked myself this same question. Where does this lead? Easy, librarianship. But what does that mean?

At both the University of Nebraska-Lincoln and Indiana University, I’ve noticed the traditional, stuffy academic library transitioning. These libraries are moving stacks to auxiliary facilities, making room for “space.” Study space, makerspace, cafe space, meeting space. The library is, has always been, and will always be, a third space. It isn’t work and it isn’t home. It is the place you go to think, research, write, build, and further your knowledge.

Librarians are the keepers of this third space. They are there to support and guide the thinker/researcher/writer/builder. They are there to say, “Have you considered looking at this?” upon which a thesis is born. They build digital infrastructure so that patrons have even more to access and learn. They are fluid and adaptable, changing as the third space and its people continue to evolve.

Brittney F.

Before I began graduate studies in Library and Information Science a few years ago, I read a list on about the worst Master’s degrees for job placement. Surprisingly, Library and Information Science was ranked as the worst. At the time I couldn’t believe it, because I had done my research into the degree program and knew it was interdisciplinary. The image on is of a woman helping children. The mid-career average is $58k a year, with an 8.5% turnover average by 2020. I thought this interpretation of the degree and of librarianship was inaccurate: an attempt to associate the name of the degree with a specific job description and to quantify the worth of our contribution to the field.

The MLIS degree prepares both practitioners and scholars for a range of information-provision environments. In 2011, Mia Breitkopf, then a Syracuse University iSchool graduate student, wrote an article called 61 Non-Librarian Jobs for LIS Grads. It is a non-exhaustive list of job titles that do not fit the traditional image of the librarian profession. Some of the titles include research coordinator, web analytics manager and business information specialist. Whoever compiled that list for did not take into account the breadth of skills an MLIS student is exposed to in the graduate environment. They certainly didn’t recognize the necessity for information scientists in all types of organizations. Name any work environment and I will point out an information manager. We wear many hats.

I find solace in the fact that I love technology and history, and that I could parlay those interests into the archives field working with digital media. The ranking has also been somewhat enlightened: the Master of Library and Information Science is now only the third worst master’s degree according to the 2014 list.

Brianna M.

I am a recent graduate with a Master of Library Science and a Master of Information Science. I remember hearing from people who graduated before me that they felt odd about applying for non-librarian jobs in the first place, then a little bittersweet if they got them. They had an emotional tie to the idea of librarianship. Of course, you haven’t won if you get a librarian job; you haven’t sold out if you take a non-traditional role. It’s unlikely that anyone would ever vocalize anything like this but I do think that a lot of people come to library school with very specific, idealized visions of what they will do when they leave.

Hearing these things made me realize that I needed to think long and hard about what I wanted. I recognized that it was time for me to battle my own assumptions. I wanted to actively choose libraries, not flock to them as a knee-jerk reaction simply because I had made the decision to go to library school when I was 18. I’ve found that adopting this state of reevaluation actually keeps me more engaged within the field. I am more critical, less patient – and honestly, I think libraries need a lot more of both qualities.

So where did I end up? Technically, I am not a librarian, I’m a coordinator – an IT role within a library where I do librarian-ish things. I’m not sure where my career will take me and I’m unconcerned about what my future job titles may be. The more interesting question, unpacked much more eloquently than I could by the Library Loon, is how institutions can support these messy new roles. I would love to hear more discussion between administrators and those of us with these odd, hard to classify roles about how we can increase our chances of success in uncharted territory.

Michael R.

Following up on my February 27 post, “Librarians: We Open Access,” I’d like to reprint below, in edited and expanded form, some of my responses to comments on the original post.

First, opening access is not gatekeeping. I’d rather dismantle the gate than be its keeper. Rather, the goal is to grow the people’s information commons. Note that I use the term “grow” rather than “build,” a premeditated word choice whose reasoning reflects Debbie Chachra’s Atlantic article “Why I Am Not a Maker.” Opening access might, for example, entail liberating libraries and users from our current dependence on price-gouging, privacy-breaching vendors and publishers.

Knowledge creation, one of the new buzzwords of librarianship, nicely complements open access. Knowledge creation presumably requires access to existing knowledge, and targeting that access to our local and global communities and their needs is essential. Once knowledge is created, librarians ought to be providing gratis access to that new knowledge and guidance on its uses, which will hopefully engender more knowledge making, a more open society, and a growing information commons that acknowledges the “think globally, act locally” approach. This is a cycle, and access is the first spoke of the wheel, though by no means the wheel itself.

In her comment on my original post, Brianna mentioned big tent librarianship. I like it. Inclusivity doesn’t diffuse our energies. We need unabashed militancy in pursuit of core values to which information specialists can rally. Intellectual freedom has long been one of these core ideologies. To continue my post’s running metaphors of imprisoning walls and growth versus construction, surely opening access can become one of those metaphorical fetters hammered into plowshares.

Do you consider yourself a librarian? A technologist? Both? Tell us about your role in the comments!

Open Library Data Additions: Amazon Crawl: part bw

planet code4lib - Thu, 2015-03-19 06:32

Part bw of Amazon crawl..

This item belongs to: data/ol_data.

This item has files of the following types: Data, Data, Metadata, Text

DuraSpace News: Hydra in the House

planet code4lib - Thu, 2015-03-19 00:00

Winchester, MA  DuraSpace and The Hydra Project are pleased to announce that DuraSpace will serve as a fiscal sponsor for the Hydra Project in 2015 in order to facilitate banking, legal and administrative functions. The Hydra Project is strategically aligned with DuraSpace in meeting community needs by providing a flexible front end for Fedora, and potentially a platform that may serve the needs of community members looking to migrate content from other repository platforms.

DuraSpace News: Registration Available for DuraSpace / ORCID Webinar Series

planet code4lib - Thu, 2015-03-19 00:00

Registration is available for our upcoming Hot Topics: The DuraSpace Community Webinar Series:

"Integrating ORCID Persistent Identifiers with DuraSpace DSpace, Fedora and VIVO"

Curated by: Josh Brown, Regional Director Europe, ORCID

Nicole Engard: Bookmarks for March 18, 2015

planet code4lib - Wed, 2015-03-18 20:30

Today I found the following resources and bookmarked them on Delicious.

  • Meerkat Tweet live video
  • 23andMe Genetic Testing for Ancestry
  • Find My Past Trace your Family Tree Online – Genealogy & Ancestry

Digest powered by RSS Digest

The post Bookmarks for March 18, 2015 appeared first on What I Learned Today....

Related posts:

  1. SxSW: We’re All Related. The (Big) Data Proves It
  2. 10 Search Tips
  3. Search all U.S. Censuses free

LITA: LITA Updates, March 2015

planet code4lib - Wed, 2015-03-18 20:10

This is one of our periodic messages sent to all LITA members. This update provides

  • Election details
  • An urgent call to action from the Washington Office
  • Current Online Learning Opportunities

Election details

ALA Candidates who are LITA members include:

  • Presidential candidate:
    • Joseph Janes
  • Council candidates:
    • Brett Bonfield
    • Megan Drake
    • Henry Mensch
    • Colby Mariva Riggs
    • Jules Shore
    • Eric Suess
    • Joan Weeks

LITA Division Candidates include

  • President Candidates:
    • Aimee Fifarek
    • Nancy Colyar
  • Director-at-large candidates:
    • Ken Varnum
    • Susan Sharpless Smith
    • Martin Kalfatovic
    • Frank Cervone

“Voting will begin at 9 a.m. Central time on March 24. Between March 24 and March 26, ALA will notify voters by email, providing them with their unique passcodes and information about how to vote online. To ensure receipt of their ballot, members should watch for emails from the ALA Election Coordinator. The subject line will be “ALA 2015 election login information below.” The polls will close on Friday, May 1, at 11:59 p.m. Central time.

For the seventh year in a row, ALA is holding its election exclusively online. To be eligible to vote, individuals must be members in good standing as of January 31, 2015. Although the election is being conducted online, there remains one exception: Members with disabilities and without internet access may obtain a paper ballot by contacting ALA customer service at 1-800-545-2433, ext. 5. Those without internet access at home or work can easily access the election site by visiting their local public (or in many instances academic or school) library.

Voters will receive email reminders on April 7 and April 14. Voting may be completed in one sitting, or an individual may park their ballot and return at a later date; however, a ballot is not cast until the “submit” button is clicked. Anyone with a parked ballot will receive an email reminder to complete the voting process before May 1.”

Please take 60 seconds to help libraries by March 20, 2015
Emily Sheketoff, director, ALA Washington Office

“Millions in federal funding for libraries is currently hanging in the balance. In order to save library funding from the chopping block – particularly the Library Services Technology Act (LSTA) and Innovative Approaches to Literacy (IAL) programs —library supporters need to contact offices of their Representative and Senator and ask them to show support for continued library funding by signing “Dear Appropriator” letters about LSTA & IAL that three Members of Congress who are huge library champions have drafted to the Appropriations Committees in the House and Senate. The more Members of Congress that we can get to sign these “Dear Appropriator” letters, the better the chance of preserving and securing real money for libraries so that libraries can continue to do all the great work they do in their communities. The only way we can achieve this is through grassroots efforts. Members of Congress need to hear from as many voters as we can rally to action.

Please email or phone your members of Congress and ask them to sign the Dear Appropriator letter supporting LSTA and IAL, then ask all other library supporters you know to do the same by no later than March 20th.

Contact info is here: (just put in your zip code in the box on the lower right side).

You are welcome to forward this email to local, state or regional library listservs.

To see whether your Members of Congress signed the letters last year, view the FY 2015 Funding Letter Signees document (pdf). If so, please be sure to thank and remind them of that when you email or call! More information can be found on District Dispatch and here’s some helpful background information:



LSTA is the only source of funding for libraries in the federal budget. The bulk of this funding is returned to states through a population-based grant program through the Institute of Museum and Library Services (IMLS). Libraries use LSTA funds to, among other things, build and maintain 21st century collections that facilitate employment and entrepreneurship, community engagement, and individual empowerment. For more information on LSTA, check out this document LSTA Background and Ask (pdf)

HOUSE STAFF/ CHAMPION Norma Salazar (Representative Raul Grijalva)

SENATE STAFF/ CHAMPION Elyse Wasch (Senator Jack Reed)


IAL is the only federal program supporting literacy for underserved school libraries and has become the primary source for federal funding for school library materials. Focusing on low income schools, these funds help many schools bring their school libraries up to standard. For more information on IAL, view School Libraries Brief (pdf).

HOUSE STAFF/ CHAMPION Don Andres (Representative Eddie Bernice Johnson)

SENATE STAFF/ CHAMPION James Rice (Senator Charles Grassley)

Current Online Learning Opportunities

Beyond Web Page Analytics: Using Google tools to assess user behavior across web properties
Presenters: Ed Sanchez, Rob Nunez and Keven Riggle
Offered: March 31, 2015
Currently sold out. To be placed on the wait list send an email to

Taking the Struggle Out of Statistics web course
Presenter: Jackie Bronicki
Offered: April 6 – May 3, 2015
Currently sold out. To be placed on the wait list send an email to

Yes, You Can Video: A how-to guide for creating high-impact instructional videos without tearing your hair out
Presenters: Anne Burke and Andreas Orphanides
Offered: May 12, 2015
Register Online page arranged by session date (login required)

I encourage you to connect with LITA by:

  1. Exploring our web site.
  2. Subscribing to LITA-L email discussion list.
  3. Visiting the LITA blog and LITA Division page on ALA Connect.
  4. Connecting with us on Facebook and Twitter.
  5. Reaching out to the LITA leadership at any time.

Please note: the Information Technology and Libraries (ITAL) journal is available to you and to the entire profession. ITAL features high-quality articles that undergo rigorous peer-review as well as case studies, commentary, and information about topics and trends of interest to the LITA community and beyond. Be sure to sign up for notifications when new issues are posted (March, June, September, and December).

If you have any questions or wish to discuss any of these items, please do let me know.

All the best,


Mary Taylor, Executive Director
Library and Information Technology Association (LITA)
50 E. Huron, Chicago, IL 60611
800-545-2433 x4267
312-280-4267 (direct line)
312-280-3257 (fax)
mtaylor (at)

Join us in Minneapolis, November 12-15, 2015 for the LITA Forum.

FOSS4Lib Recent Releases: VuFind - 2.4

planet code4lib - Wed, 2015-03-18 19:28

Last updated March 18, 2015. Created by Demian Katz on March 18, 2015.

Package: VuFind
Release Date: Monday, March 23, 2015

Meredith Farkas: Read your contract: Being OA isn’t enough

planet code4lib - Wed, 2015-03-18 17:57

So, I missed writing this for Open Access Week, Fair Use Week, and Open Education Week, but I think these are topics we should be focusing on every day of our professional lives, not just three weeks of the year.

Imagine for a moment that you’re doing an ego search (not that I would ever do that) and you find that someone is selling an article you wrote (with your name on it) as part of a book or journal that you never contracted with. Sure, you published the article, but for a completely different publisher. Now you find that some random company is making money off your work. You contact them and demand that they remove your article because what they’re doing is illegal, but they insist they are in the right.

Sound implausible? Did you sign away your copyright to the publisher? Or is your book chapter or article licensed under Creative Commons CC-BY? Then what they might be doing is perfectly legal based on what you agreed to. You, like most of us, just didn’t understand the implications of what you were signing.

Making your work open access is a fantastic thing to do. Our goals as faculty should be to promote and share knowledge as widely as possible, and the fruits of our research will be much more likely to benefit society when they are freely available for anyone to access. Too many authors completely sign away any rights to their work, often forcing libraries at their institutions to pay for students and faculty to use something the institution has essentially already funded. Sometimes they feel they have to in order to get tenure, but some are just unwilling or ill-equipped to read the terms of their contract. I still remember an instructor being annoyed with me that we didn’t have access to her Springer book chapter because PSU had “already paid for my research.” Sorry, I wasn’t the one who blindly signed a contract that didn’t even allow for depositing a preprint into an institutional repository.

When we were getting PDXScholar off the ground at Portland State, I talked to many faculty in my disciplines about getting their work into the repository. So many had no idea what the contract they signed did or did not allow them to do with their work, or what it did or did not allow the publisher to do. I think we focus so much on the research and then the writing, and then we spend months going through the peer-review process, so that by the time we receive an author agreement, we mentally feel like our work is already done. I’ll admit that I wasn’t too savvy about this myself early on, but I’ve always made sure that I could make my articles and book chapters freely available online in some way, shape, or form. I wrote about this back in 2013 when I was on the tenure track at PSU.

Now, I think the work we do in terms of negotiating the contract is as important as all the work that came before it. If few people can access your research, what was the point of doing it in the first place?

But it is also short-sighted to only think about whether or not we can make our work open to the public. We should also be concerned with what the publisher can do with our work. We usually think that once the work is published in the journal that the publisher is done with it, but we are sometimes signing contracts that allow them to do much, much more with our intellectual output. I once signed a contract for a book chapter that essentially said I could do anything I wanted with the work (in terms of republication), but so could the publisher. It was better than the first contract I was offered, which gave me no rights to do anything, even put the chapter in our repository, but it gave them the ability to republish my chapter in any other publications in the future.

How would you feel if you found your work published in a book that you knew nothing about? How would you feel to know that random people were making money off work you didn’t see a dime for (even originally)?

Several articles from The Scholarly Kitchen blog have made the point that just because something is published OA doesn’t necessarily mean that it can’t be reproduced for profit:

Many OA journals make their work available through the Creative Commons CC-BY license, which allows for the maximum reuse, including the creation of derivative works and selling the work commercially, so long as the creator is given credit. I could take a bunch of articles with CC-BY licenses, package them into an anthology, and sell that anthology. All I’d have to do to stay within the license is to credit each author. But I could sell their work and not give them a penny of the profit. So could any publisher.

I would be deeply uncomfortable with the idea of licensing any of my work under a Creative Commons CC-BY license. I’m not ok with random people with whom I have no relationship making money off my work. I would guess that many people feel that way.

I’ve published in two open access journals in the past 18 months, both of which had Creative Commons non-commercial licenses. Collaborative Librarianship licenses the work under CC-BY-NC-ND, which allows people to share the work, with credit, so long as they don’t make money off it, but people also can’t make derivative works. I’m ok with that. The idea of someone being able to change and republish my article in some way I hadn’t intended does make me slightly uncomfortable, though I have no problem with my work being open to anyone to read, share, and benefit from. College and Research Libraries’ default license is CC-BY-NC (which does allow for derivative works), but I love that C&RL allows the author to specify a different license for their work in the author agreement, giving the ultimate freedom to the author to define how their work can be used.

While I’d rather my articles be CC-BY-NC-ND, there are other materials I create of which I would be happy to allow derivative works to be created. Those include tutorials, presentation slides, LibGuides, and perhaps some course materials. For those, a CC-BY-NC license should do the trick.

My blog is licensed under a CC-BY-NC-SA license (SA=Share Alike), which seriously limits the use of my content. People who use it must not only be non-commercial entities, but must license what they create from my work using the same license. That means that whatever they add to my work must be licensed in the exact same way. I feel ok about having that requirement with my blog content.

When we were looking at what license to use for our LibGuides at PCC, we toyed with the idea of a share-alike license in the spirit of “we want everyone to share their stuff.” Ultimately, we went with a CC-BY-NC license because we know that many libraries do not have the ability to put any sort of license on their LibGuides (due to college/university rules) and this would limit their use more than having no license at all. We want to make it clear that we welcome other librarians grabbing our content. Why reinvent the wheel?

But we need to consider more than just under what license our content is being released. If the publisher retains copyright of your work, they can ultimately do whatever they want with it. Just because it has a non-commercial license doesn’t mean that the publisher can’t allow another publisher to use your work for their profit. The Creative Commons license just tells people what they can do without needing to ask permission. Ultimately the copyright holder has the right to do what they want with the content unless your contract specifically spells out limitations. With the exception of the two articles I published in Emerald Journals, which I’m pretty sure I dropped the ball on, I’ve retained copyright on all of my publications, including my book Social Software in Libraries.

I’m not a lawyer or any kind of expert on contracts, but, ultimately, there are four key things I look for in any contract these days:

1. Who will hold the copyright?
2. What rights are we giving to the publisher and what could they consequently do with our work in a worst case scenario?
3. What rights do we have to the work and can we make it available, in some way, freely online (if it isn’t already through the journal)?
4. If the work is open access, under what license is it made available to the public?

Whether a contract says it or not, this is our intellectual property. It came from our minds and our considerable efforts. We should work to make sure we have some agency over how our work is made available and who benefits financially from it.

What success stories have you had in dealing with publishers? What frustrations? Any tips for those new to the universe of scholarly publishing?

Update: Just after posting this, Micah Vandergrift shared with me Bethany Nowviskie’s post which came to a very different conclusion about Creative Commons licensing. I think that’s great! Whatever decision we come to as individuals about how we’d like our work to be used, let it be well-considered.

David Rosenthal: More Is Not Better

planet code4lib - Wed, 2015-03-18 16:58
Hugh Pickens at /. points me to Attention decay in science, providing yet more evidence that the way the journal publishers have abdicated their role as gatekeepers is causing problems for science. The abstract claims:

"The exponential growth in the number of scientific papers makes it increasingly difficult for researchers to keep track of all the publications relevant to their work. Consequently, the attention that can be devoted to individual papers, measured by their citation counts, is bound to decay rapidly. ... The decay is ... becoming faster over the years, signaling that nowadays papers are forgotten more quickly. However, when time is counted in terms of the number of published papers, the rate of decay of citations is fairly independent of the period considered. This indicates that the attention of scholars depends on the number of published items, and not on real time."

Below the fold, some thoughts.

Their analysis is similar to many earlier analyses of the attention decay of general online content, except that their statistics aren't as good:

"one cannot count on the high statistics available for online contents: the number of tweets posted on a single popular topic may exceed the total number of scientific publications ever made."

Nevertheless, the similarity between the attention decay of papers and that of online content in general is striking. They argue:

"Hence, the process of attention gathering needs to take into account the increasing competition between scientific products. With the increase of the number of journals and increasing number of publications in each journal ..., a scientist inevitably needs to filter where to allocate its attention, i.e. which papers to cite, among an extremely broad selection. This may also question whether a scientist is actually fully aware of all the relevant results available in scientific archives. Even though this effect is partially compensated by the increase of the average number of references, one needs to consider the impact of increasing publication volume on the attention decay."

They conclude:

"The existence of many time-scales in citation decay and our ability to construct an ultrametric space to represent this decay, leads us to speculate that citation decay is an ultradiffusive process, like the decay of popularity of online content. Interestingly, the decay is getting faster and faster, indicating that scholars 'forget' papers more easily now than in the past. We found that this has to do with the exponential growth in the number of publications, which inevitably accelerates the turnover of papers, due to the finite capacity of scholars to keep track of the scientific literature. In fact, by measuring time in terms of the number of published works, the decay appears approximately stable over time, across disciplines, although there are slight monotonic trends for Medicine and Biology."

Clearly, the response to this problem should not be for publishers to return to their role as gatekeepers, publishing only the good stuff; research has conclusively shown that they are unable to recognize the good stuff well enough. Rather, in a world where everything gets published, where the only question is which venue, and where the venue is not a reliable indicator of quality, we need to stop paying publishers vast sums for minimal value add and devote the funds to better search, annotation, and reputation tools.
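The paper's central claim can be made concrete with a toy model. Assume (these are my own illustrative assumptions, not the paper's fitted parameters) that publication volume grows exponentially and that a paper's citation rate decays at a constant rate per newly published paper. Then forgetting accelerates in calendar time even though the per-paper decay never changes:

```python
import math

# Toy model, not the paper's actual fit: all constants below are
# illustrative assumptions chosen only to show the qualitative effect.
GROWTH = 0.05           # assumed annual growth rate of publication volume
PER_PAPER_DECAY = 1e-6  # assumed attention lost per newly published paper
BASE_RATE = 100_000     # assumed papers published per year at year 0

def papers_between(t0, t1):
    """Papers published between years t0 and t1 under exponential growth."""
    return BASE_RATE / GROWTH * (math.exp(GROWTH * t1) - math.exp(GROWTH * t0))

def attention(pub_year, age):
    """Relative citation rate of a paper `age` years after publication.

    Attention depends only on how many papers appeared in the interval,
    not on how much calendar time elapsed.
    """
    return math.exp(-PER_PAPER_DECAY * papers_between(pub_year, pub_year + age))

# In calendar time, a cohort published 30 years later is forgotten faster,
# purely because more papers now appear per year:
old, new = attention(0, 5), attention(30, 5)
print(f"attention after 5 years: old cohort {old:.3f}, new cohort {new:.3f}")
```

The sketch shows why, when "time" is measured as the count of subsequently published papers, the decay curve is identical for every cohort by construction: the same decay constant produces accelerating forgetting in calendar time only because the publication rate grows.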


Subscribe to code4lib aggregator