planet code4lib

Planet Code4Lib - http://planet.code4lib.org

Evergreen ILS: Evergreen 2.8-beta released

Wed, 2015-03-04 18:00

The beta release of Evergreen 2.8 is available to download and test!

New features and enhancements of note in Evergreen 2.8 include:

  • Acquisitions improvements to help prevent the creation of duplicate orders and duplicate purchase order names.
  • In the select list and PO view interfaces, beside the line item ID, the number of catalog copies already owned is now displayed.
  • A new Apache access handler that allows resources on an Evergreen web server, or resources proxied via an Evergreen web server, to be authenticated using a user’s Evergreen credentials.
  • Copy locations can now be marked as deleted. This allows information about disused copy locations to be retained for reporting purposes without cluttering up location selection drop-downs.
  • Support for matching authority records during MARC import. Matches can be made against MARC tag/subfield entries and against a record’s normalized heading and thesaurus.
  • Patron message center: a new mechanism via which messages can be sent to patrons for them to read while logged into the public catalog.
  • A new option to stop billing activity on zero-balance billed transactions, which will help reduce the incidence of patron accounts with negative balances.
  • New options to void lost item and long overdue billings if a loan is marked as claims returned.
  • The staff interface for placing holds now offers the ability to place additional holds on the same title.
  • The active date of a copy record is now displayed more clearly.
  • A number of enhancements have been made to the public catalog to better support discoverability by web search engines.
  • There is now a direct link to “My Lists” from the “My Account” area in the upper-right part of the public catalog.
  • There is a new option for TPAC to show more details by default.

For more information about what’s in the release, check out the draft release notes.

Note that the release was built yesterday before 2.7.4 existed, so the DB upgrade script applies to a 2.7.3 database. To apply to a 2.7.4 test database, remove updates 0908, 0913, and 0914 from the upgrade file, retaining the final commit. The final 2.8.0 DB upgrade script will be built from 2.7.4 instead.

LITA: LITA Webinar: Beyond Web Page Analytics

Wed, 2015-03-04 17:38

Or how to use Google tools to assess user behavior across web properties.

Tuesday March 31, 2015
11:00 am – 12:30 pm Central Time
Register now for this webinar

This brand new LITA Webinar shows how Marquette University Libraries have installed custom tracking code and meta tags on most of their web interfaces including:

  • CONTENTdm
  • Digital Commons
  • Ebsco EDS
  • ILLiad
  • LibCal
  • LibGuides
  • WebPac, and the
  • General Library Website

The data retrieved from these interfaces is gathered into Google’s

  • Universal Analytics
  • Tag Manager, and
  • Webmaster Tools

When used in combination these tools create an in-depth view of user behavior across all these web properties.

For example, Google Tag Manager can grab search terms, which can be tied to a specific collection within Universal Analytics and related to a particular demographic. The current versions of these tools make system setup an easy process, with little or no programming experience required. Making sense of the volume of data retrieved, however, is more difficult.

  • How does Google data compare to vendor stats?
  • How can the data be normalized using Tag Manager?
  • Can this data help your organization make better decisions?

Join

  • Ed Sanchez, Head, Library Information Technology, Marquette University Libraries
  • Rob Nunez, Emerging Technologies Librarian, Marquette University Libraries and
  • Keven Riggle, Systems Librarian & Webmaster, Marquette University Libraries

in this webinar as they explain their new processes and explore these questions. Check out their program outline: http://libguides.marquette.edu/ga-training/outline

Then register for the webinar

Full details
Can’t make the date but still want to join in? Registered participants will have access to the recorded webinar.
Cost:

  • LITA Member: $39
  • Non-Member: $99
  • Group: $190

Registration Information

Register Online page arranged by session date (login required)
OR
Mail or fax form to ALA Registration
OR
Call 1-800-545-2433 and press 5
OR
email registration@ala.org

Questions or Comments?

For all other questions or comments related to the course, contact LITA at (312) 280-4269 or Mark Beatty, mbeatty@ala.org.

FOSS4Lib Updated Packages: digilib

Wed, 2015-03-04 15:40

Last updated March 4, 2015. Created by Peter Murray on March 4, 2015.

  • digilib is a web based client/server technology for images. The image content is processed on-the-fly by a Java Servlet on the server side so that only the visible portion of the image is sent to the web browser on the client side.
  • digilib supports a wide range of image formats and viewing options on the server side while only requiring an internet browser with Javascript and a low bandwidth internet connection on the client side.
  • digilib enables the very detailed work on an image required by scholars, with elaborate viewing features such as an option to show images on screen at their original size.
  • digilib facilitates cooperation of scholars over the internet and novel uses of source material by image annotations and stable references that can be embedded in URLs.
  • digilib facilitates federation of image servers through a standards compliant IIIF image API.
  • digilib is Open Source Software under the Lesser General Public License, jointly developed by the Max Planck Institute for the History of Science, the Bibliotheca Hertziana, the University of Bern and others.
Package Type: Image Display and Manipulation
License: LGPL v2.1
Development Status: Production/Stable
Operating System: Browser/Cross-Platform
Programming Language: Java, JavaScript
Open Hub Link: https://www.openhub.net/p/digilib

FOSS4Lib Updated Packages: Mirador

Wed, 2015-03-04 15:24

Last updated March 4, 2015. Created by Peter Murray on March 4, 2015.

An open-source, web-based 'multi-up' viewer that supports zoom-pan-rotate functionality, ability to display/compare simple images, and images with annotations.

Package Type: Image Display and Manipulation
License: Apache 2.0
Operating System: Browser/Cross-Platform
Programming Language: JavaScript
Open Hub Link: https://www.openhub.net/p/mirador
Works well with: IIPImage

FOSS4Lib Updated Packages: IIPMooViewer

Wed, 2015-03-04 15:19

Last updated March 5, 2015. Created by Peter Murray on March 4, 2015.

IIPMooViewer is a high performance light-weight HTML5 Ajax-based javascript image streaming and zooming client designed for the IIPImage high resolution imaging system. It is compatible with Firefox, Chrome, Internet Explorer (Versions 6-10), Safari and Opera as well as mobile touch-based browsers for iOS and Android. Although designed for use with the IIP protocol and IIPImage, it has multi-protocol support and is additionally compatible with the Zoomify, Deepzoom, Djatoka (OpenURL) and IIIF protocols.

Version 2.0 of IIPMooViewer is HTML5/CSS3 based and uses the Mootools javascript framework (version 1.5+).

Package Type: Image Display and Manipulation
License: GPLv3
Development Status: Production/Stable
Operating System: Browser/Cross-Platform
Open Hub Link: https://www.openhub.net/p/iipmooviewer
Works well with: IIPImage

FOSS4Lib Upcoming Events: Sharing Images of Global Cultural Heritage

Wed, 2015-03-04 15:09
Date: Tuesday, May 5, 2015 - 08:30 to 17:00
Supports: IIPImage, OpenSeadragon, Djatoka JPEG2000 Image Server, Loris

Last updated March 4, 2015. Created by Peter Murray on March 4, 2015.

The International Image Interoperability Framework community (http://iiif.io/) is hosting a one day information sharing event about the use of images in and across Cultural Heritage institutions. The day will focus on how museums, galleries, libraries and archives, or any online image service, can take advantage of a powerful technical framework for interoperability between image repositories.

FOSS4Lib Upcoming Events: Hydra Camp London

Wed, 2015-03-04 15:01
Date: Monday, April 20, 2015 - 08:00 to Thursday, April 23, 2015 - 13:00
Supports: Hydra, Fedora Repository

Last updated March 4, 2015. Created by Peter Murray on March 4, 2015.

Hydra Camp London - a training event enabling technical staff to learn about the Hydra technology stack so they can establish their own implementation

Monday 20th April - lunchtime Thursday 23rd April 2015

FOSS4Lib Upcoming Events: Hydra Europe Symposium

Wed, 2015-03-04 14:58
Date: Thursday, April 23, 2015 - 10:30 to Friday, April 24, 2015 - 15:30
Supports: Hydra, Fedora Repository

Last updated March 4, 2015. Created by Peter Murray on March 4, 2015.

Hydra Europe Symposium - an event for digital collection managers, collection owners and their software developers that will provide insights into how Hydra can serve your needs

Thursday 23rd April - Friday 24th April 2015

This event is free of charge. Lunch and refreshments will be provided on both days.

LITA: Agile Development: Estimation and Scheduling

Wed, 2015-03-04 14:00

Image courtesy of Wikipedia

In my last post, I discussed the creation of Agile user stories. This time I’m going to talk about what to do with them once you have them. There are two big steps that need to be completed in order to move from user story creation to development: effort estimation and prioritization. Each poses its own problems.

Estimating Effort

Because Agile development relies on flexibility and adaptation, creating a bottom-up effort estimation analysis is both difficult and impractical. You don’t want to spend valuable time analyzing a piece of functionality up front only to have the implementation details change because of something that happens earlier in the development process, be it a change in another story, customer feedback, etc. Instead, it’s better to rely on your development team’s expertise and come up with top-down estimates that are accurate enough to get the development process started. This may at times make you feel uncomfortable, as if you’re looking for groundwater with a stick (it’s called dowsing, by the way), but in reality it’s about doing the minimum work necessary to come up with a reasonably accurate projection.

Estimation methods vary, but the key is to discuss story size in relative terms rather than assigning a number of hours of development time. Some teams find a story that is easy to estimate and calibrate all other stories relative to it, using some sort of relative “story points” scale (powers of 2, the Fibonacci sequence, etc.). Others create a relative scale and tag each story with a value from it: this can be anything from vehicles (this story is a car, this one is an aircraft carrier, etc.), to t-shirt sizes, to anything that is intuitive to the team. Another method is planning poker: the team picks a set of sizing values, and each member of the team assigns one of those values to each story by holding up a card with the value on it; if there’s significant variation, the team discusses the estimates and comes up with a compromise.  What matters is not the method, but that the entire team participate in the estimation discussion for each story.
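The planning-poker round described above can be sketched in a few lines of code. This is a minimal illustration of my own, not an excerpt from any team's tooling; the story names, the Fibonacci-style scale, and the "significant variation" threshold are all invented for the example:

```python
# Minimal planning-poker sketch: flag stories whose estimates diverge
# enough that the team should discuss them before settling on a value.
FIB_SCALE = [1, 2, 3, 5, 8, 13, 21]  # illustrative story-point scale

def needs_discussion(votes, max_spread=2):
    """A round 'converges' when all votes fall within max_spread
    adjacent positions on the scale; otherwise the team talks it out."""
    positions = sorted(FIB_SCALE.index(v) for v in votes)
    return positions[-1] - positions[0] > max_spread

# Hypothetical stories and the cards each team member held up
votes = {"search facets": [3, 5, 5, 8], "SSO login": [2, 13, 5, 8]}
for story, cards in votes.items():
    status = "discuss" if needs_discussion(cards) else "accept median"
    print(story, "->", status)
```

The threshold is arbitrary; what the code captures is only the rule that wide disagreement triggers discussion, while clustered estimates can be accepted as-is.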

Learn more about Agile estimation here and here.

Prioritizing User Stories

The other piece of information we need in order to begin scheduling is the importance of each story, and for that we must turn to the business side of the organization. Prioritization in Agile is an ongoing process (as opposed to a one-time ranking) that allows the team to understand which user stories carry the biggest payoff at any point in the process. Once they are created, all user stories go into the product backlog, and each time the team plans a new sprint it picks stories off the top of the list until its capacity is exhausted, so it is very important that the Product Owner maintain a properly ordered backlog.

As with estimation, methods vary, but the key is to follow a process that evaluates each story on the value it adds to the product at any point. If I just rank the stories numerically, the ordering does not explain why each story sits where it does, which will confuse the team (and me as well, as the backlog grows). Most teams adopt a ranking system that scores each story individually; here’s a good example. This method uses two separate criteria: urgency and business value. Business value measures the positive impact of a given story on users. Urgency captures how important it is to complete a story earlier rather than later in the development process, taking into account dependencies between user stories, contractual obligations, complexity, etc. Basically, Business Value represents the importance of including a story in the finished product, and Urgency tells us how much it matters when that story is developed (understanding that a story’s likelihood of being completed decreases the later in the process it is slotted). Once the stories have been evaluated along the two axes (a simple 1-5 scale works for each), multiplying the two values yields the final priority score. The backlog is then ordered by this value.
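The value-times-urgency scheme is simple enough to show in code. This is a minimal sketch with invented story names and scores, not an excerpt from the linked example:

```python
# Order a backlog by priority = business_value * urgency,
# each scored on a 1-5 scale as described above.
stories = [
    {"name": "patron self-checkout", "value": 5, "urgency": 3},
    {"name": "admin report export",  "value": 3, "urgency": 2},
    {"name": "fix login timeout",    "value": 4, "urgency": 5},
]

for s in stories:
    s["priority"] = s["value"] * s["urgency"]

# Highest priority first: this ordering IS the backlog.
backlog = sorted(stories, key=lambda s: s["priority"], reverse=True)
for s in backlog:
    print(f'{s["priority"]:>3}  {s["name"]}')
```

Because the score is recorded per story, the ordering stays explainable: anyone reading the backlog can see whether a story ranks high because of its business value, its urgency, or both.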

As the example in the link shows, a Product Owner can also create priority bands that describe stories at a high level: must-have, nice to have, won’t develop, etc. This provides context for the priority score and gives the team information about the PO’s expectations for each story.

I’ll be back next month to talk about building an Agile culture. In the meantime, what methods does your team use to estimate and prioritize user stories?

Open Knowledge Foundation: New research project to map the impact of open budget data

Wed, 2015-03-04 12:01

I’m pleased to announce a new research project to examine the impact of open budget data, undertaken as a collaboration between Open Knowledge and the Digital Methods Initiative at the University of Amsterdam, supported by the Global Initiative for Financial Transparency (GIFT).

The project will include an empirical mapping of who is active around open budget data around the world, and what the main issues, opportunities and challenges are according to different actors. On the basis of this mapping it will provide a review of the various definitions and conceptions of open budget data, arguments for why it matters, best practises for publication and engagement, as well as applications and outcomes in different countries around the world.

As well as drawing on Open Knowledge’s extensive experience and expertise around open budget data (through projects such as Open Spending), it will utilise innovative tools and methods developed at the University of Amsterdam to harness evidence from the web, social media and collections of documents to inform and enrich our analysis.

As part of this project we’re launching a collaborative bibliography of existing research and literature on open budget data and associated topics which we hope will become a useful resource for other organisations, advocates, policy-makers, and researchers working in this area. If you have suggestions for items to add, please do get in touch.

This project follows on from other research projects we’ve conducted around this area – including on data standards for fiscal transparency, on technology for transparent and accountable public finance, and on mapping the open spending community.

Financial transparency field network with the Issuecrawler tool based on hyperlink analysis starting from members of Financial Transparency Coalition, 12th January 2015. Open Knowledge and Digital Methods Initiative.

LibUX: The Inter[mediate]face

Wed, 2015-03-04 01:01

This post — The Battle Is For The Customer Interface by Tom Goodwin — captured my imagination. The fastest-growing companies in the world occupy the space between the product and the person. Uber doesn’t own any vehicles; Facebook doesn’t create any media; Airbnb doesn’t own any real estate. What they control is the interface.

They facilitate access — just like us.

The Library Interface

The trumped-up value of the library isn’t the dog-eared six-dollar paperbacks in its collection, nor can we squander the credit for the research behind the vendor paywall. Instead, its value continues to be what it has always been – as gatekeeper, the access point. A library is the intermediary touchpoint between the user and the content the user seeks.

We have talked before about how one of the most important features for a library website is that it stays out of the way; the most successful are — as Tom wrote — “thin layers that sit on top of vast supply systems.” In this way, libraries curate access points which are desirable to patrons because

  • they eliminate paywalls,
  • curate the signals from the noise,
  • and are delightful.

These are the core features of the library interface. Libraries absorb the community-wide cost to access information curated by knowledge-experts that help sift through the Googleable cruft. They provide access to a repository of physical items users want and don’t want to buy (books, tools, looms, 3d printers, machines). A library is, too, where community is accessed. In the provision of this access anywhere on the open web and through human proxies, the library creates delight.

The post The Inter[mediate]face appeared first on LibUX.

Library Tech Talk (U of Michigan): How to Create (and Keep Creating) a Digitization Workflow

Wed, 2015-03-04 00:00

It’s possible we should have written this blog post years ago, when we first created our workflow for how we shepherd digitization projects through our Digital Library. Well, we were busy creating it, that’s our excuse. Three years later, we’re on our third iteration.

DuraSpace News: VIVO Strategic Plan Lays Foundation for 2015-2016

Wed, 2015-03-04 00:00

Winchester, MA  During the past two and a half months, the VIVO Strategic Planning Group has developed a prioritized written strategy document for the VIVO project. The plan highlights key goals and recommendations that specifically focus on increasing the engagement of the VIVO community, hiring a full-time VIVO Technical Lead to make the open source development process more inclusive and transparent, and implementing a framework to increase productivity.

Evergreen ILS: SECURITY RELEASES: Evergreen 2.7.4, 2.6.7, and 2.5.9

Tue, 2015-03-03 22:55

On behalf of the Evergreen contributors, the 2.7.x release maintainer (Ben Shum) and the 2.6.x and 2.5.x release maintainer (Dan Wells), we are pleased to announce the release of Evergreen 2.7.4, 2.6.7, and 2.5.9.

The new releases can be downloaded from:

http://evergreen-ils.org/egdownloads/

THESE RELEASES CONTAIN SECURITY UPDATES, so you will want to upgrade as soon as possible.

In particular, the following security issues are fixed:

  • Bug 1424755: This bug allows unauthorized remote access to the value of certain library settings that are meant to be confidential.
  • Bug 1206589: This bug allows unauthorized remote access to the log of changes to library settings, including ones meant to be confidential.

All prior supported releases are vulnerable to these bugs.

All three of these new releases also contain bugfixes that are not related to the security issues. For more information on the changes in these releases, please consult their change logs:

Please note that 2.5.9 is the last release expected in the 2.5.x series.

It is recommended that all Evergreen sites upgrade to one of the new releases as soon as possible.

If you cannot do a full upgrade at this time, it is extremely important that you patch your Evergreen system to protect against these exploits. To that end, two patches are available, one for bug 1424755 and one for bug 1206589, that you can download and apply to a running system.

In order to secure your system, you must download the two patches and copy them to each of your Evergreen servers — in particular, any that run the open-ils.actor and/or open-ils.pcrud services. You will need to perform the following steps on each server to completely patch your system.

First, you must find where the Actor.pm module is located. This is usually under /usr/local somewhere. The following command will find it for you:

find /usr/local -name Actor.pm

On an Ubuntu 12.04 system, the above prints out /usr/local/share/perl/5.14.2/OpenILS/Application/Actor.pm, so we will use that as our example; just be sure that when you do this for real, you use the actual path printed by the command above. If it prints nothing, you will need to check other locations.

Once you have the path, you can run the patch command. Assuming that you are in the directory where you put the patch file, the following command should apply the patch:

sudo patch -b /usr/local/share/perl/5.14.2/OpenILS/Application/Actor.pm lp1424755.patch

Unless you have made local edits to the affected file, the patch should apply cleanly.

Next, you will need to apply the patch for bug 1206589. This can be done as the opensrf user:

patch -b /openils/conf/fm_IDL.xml lp1206589.patch

After you have applied the patches, you will need to restart the open-ils.actor and open-ils.pcrud services. You do this by running osrf_control with the appropriate options:

osrf_control [--localhost] --restart --service open-ils.actor
osrf_control [--localhost] --restart --service open-ils.pcrud

The --localhost is in brackets because you may or may not need it. Your system administrator should know if you do or not. If you do need it, remove the brackets. If you don’t need it, then omit the option entirely.

DPLA: Board Governance Committee Open Call: March 11, 2015, 1:00 PM Eastern

Tue, 2015-03-03 15:40

The DPLA Board of Directors’ Governance Committee will hold a conference call on Wednesday, March 11, 2015 at 1:00 PM Eastern. The call is open to the public.

Agenda

Public session

  • Rethinking DPLA open committee calls
  • Questions/comments from the public

Executive session

  • Update and next steps for Board Nominating Committee
Dial-in

District Dispatch: 3D printing technologies in libraries: intellectual property right issues

Tue, 2015-03-03 15:23

Photo by Subhashish Panigrahi

Join us for our next installment of CopyTalk, March 5th at 2pm Eastern Time. In the past, the use of photocopying, printing, scanning and related technologies in libraries raised copyright issues alone. A new technology is making its way into libraries: 3D printing now allows a patron to create (print) three-dimensional objects as well. Patrons can now “print” entire mechanical devices or components of other devices, from something as simple as a corkscrew to parts of a prosthetic limb. Objects of all sorts can be created in library maker spaces. These technologies raise not only copyright issues but also patent issues (including design patents) and trademark issues (including trade dress). Learn about the legal issues involved, how the library can protect itself from liability when patrons use these technologies in library spaces, and how to raise awareness of such issues among patrons.

Speakers

Professor Tomas Lipinski completed his Juris Doctor (J.D.) from Marquette University Law School, Milwaukee, Wisconsin, received the Master of Laws (LL.M.) from The John Marshall Law School, Chicago, Illinois, and the Ph.D. from the Graduate School of Library and Information Science, University of Illinois at Urbana-Champaign. Mr. Lipinski has worked in a variety of legal settings including the private, public and non-profit sectors. He is the author of numerous articles and book chapters and has been a visiting professor in summers at the University of Pretoria-School of Information Technology (Pretoria, South Africa) and at the Graduate School of Library and Information Science, University of Illinois at Urbana-Champaign. Professor Lipinski was the first named member of the Global Law Faculty, Faculty of Law, University of Leuven (Katholieke Universiteit Leuven), Belgium, in Fall of 2006 where he continues to lecture annually at its Centers for Intellectual Property Rights and Interdisciplinary Center for Law and ICT. In October he returned to the University of Wisconsin—Milwaukee to serve as Professor and Dean of its i-School, the School of Information Studies. He serves as a member of the IFLA Copyright and other Legal Matters Committee and an IFLA delegate to the WIPO Standing Committee on Copyright and Other Rights. His current project is a book on legal issues in maker spaces in libraries with Mary Minow and Gretchen McCord that should be available this summer or fall.

As OITP’s Information Policy Analyst, Charlie Wapner provides analytical, organizational, and logistical support to the ALA Washington Office as part of a team developing and implementing a national information policy agenda for America’s public libraries. He also leads OITP’s work on the policy implications of 3D printing. Prior to working at ALA, Charlie spent two-and-a-half years providing policy and communications support to members of the U.S. House of Representatives. He worked first for Congressman Mark Critz of Pennsylvania and then for Congressman Ron Barber of Arizona. Charlie holds a B.A. in diplomatic history from the University of Pennsylvania and an M.S. in public policy and management from Carnegie Mellon University.

There is no need to pre-register! Just show up on March 5, 2015, at 2:00 p.m. Eastern by clicking here.

The post 3D printing technologies in libraries: intellectual property right issues appeared first on District Dispatch.

LITA: Join LITA’s Imagineering IG at ALA Annual

Tue, 2015-03-03 13:00

Editor’s note: This is guest post by Breanne Kirsch.

During the upcoming 2015 ALA Annual Conference, LITA’s Imagineering Interest Group will host the program “Unknown Knowns and Known Unknowns: How Speculative Fiction Gets Technological Innovation Right and Wrong.” A panel of science fiction and fantasy authors will discuss their work and how it connects with technological developments that were never invented and those that came about in unimagined ways. Tor is sponsoring the program and bringing authors John Scalzi, Vernor Vinge, Greg Bear, and Marie Brennan. Baen Books is also sponsoring the program by bringing Larry Correia to the author panel.

John Scalzi wrote the Old Man’s War series and more recently, Redshirts, which won the 2013 Hugo Award for Best Novel. Vernor Vinge is known for his Realtime/Bobble and Zones of Thought Series and a number of short fiction stories. Greg Bear has written a number of series, including Darwin, The Forge of God, Songs of Earth and Power, Quantum Logic, and The Way. He has also written books for the Halo series, short fiction, and standalone books, most recently, War Dogs as well as the upcoming novels Eternity and Eon. Marie Brennan has written the Onyx Court series, a number of short stories, and more recently the Lady Trent series, including the upcoming Voyage of the Basilisk. Larry Correia has written the Monster Hunter series, Grimnoir Chronicles, Dead Six series, and Iron Kingdoms series. These authors will consider the role speculative fiction plays in fostering innovation and bringing about new ideas.

Please plan to attend the upcoming ALA Annual 2015 Conference and add the Imagineering Interest Group program to your schedule! We look forward to seeing you in San Francisco.

Breanne A. Kirsch is the current Chair of the Imagineering Interest Group as well as the Game Making Interest Group within LITA. She works as a Public Services Librarian at the University of South Carolina Upstate and is the Coordinator of Emerging Technologies. She can be contacted at bkirsch@uscupstate.edu or @breezyalli.

Open Knowledge Foundation: New Open Knowledge Local Groups in Macedonia, Pakistan, Portugal and Ukraine

Tue, 2015-03-03 12:45

It’s once again time for us to proudly announce the establishment of a new batch of Open Knowledge Local Groups, founded by community leaders in Macedonia, Pakistan, Portugal and Ukraine, which we hereby welcome warmly into the ever-growing family of Local Groups. This brings the total number of Local Groups and Chapters up to a whopping 58!

In this blog post we would like to introduce the founders of these new groups and invite everyone to join the community in these countries.

MACEDONIA

In Macedonia, the Local Group has been founded by Bardhyl Jashari, who is the director of the Metamorphosis Foundation. His professional interests are mainly in the sphere of new technologies, media, civic activism, e-government and participation. Previously he worked as Information Program Coordinator of the Foundation Open Society – Macedonia. In both capacities, he has run national and international-scope projects involving close cooperation with other international organizations, governmental bodies, and the business and civic sectors. He is a member of the National Council for Information Society of Macedonia and the National Expert for Macedonia of the UN World Summit Award. In the past he was a member of the Task Force for the National Strategy for Information Society Development and served as a commissioner at the Agency for Electronic Communication (2005-2011). Bardhyl holds a master’s degree from Paris 12 University, Faculty of Public Administration (France), and an Information System Designer degree from the University of Zagreb (Croatia).

To get in touch with Bardhyl and connect with the community in Macedonia, head here.

PAKISTAN

The new Local Group in Pakistan is founded by Nouman Nazim. Nouman has worked for 7+ years with leading public-sector as well as non-governmental organizations in Pakistan and performed a variety of roles related to administration, management, monitoring, etc. He has worn many other hats in his career, including programmer, writer, researcher, manager, marketer and strategist. As a result, he has developed unique abilities to manage multi-disciplinary tasks and projects and to navigate complex challenges. He has a Bachelor’s degree in Information Sciences and is currently pursuing a Master’s degree in Computer Science besides working on his own startup outside of class. He believes open data lets us achieve what we could normally never be able to, and that it has the potential to positively change millions of lives.

In the Open Knowledge Pakistan Local Group, Nouman is supported by Sher Afgun Usmani and Shaigan Rana. Sher has studied computer sciences and is an entrepreneur, co-founder of Yum Solutions and Urducation (an initiative to promote technical education in Urdu). He has been working for 4+ years in the field of software development. Shaigan holds an MBA degree in Marketing and is now pursuing a post-graduate degree in internet marketing from Iqra University Islamabad, Pakistan. His research focuses on entrepreneurship, innovation and open access to international markets. He is co-founder of printingconcern.com and Yum Solutions. He has an interest and several years’ experience in internet marketing, content writing, business development and direct sales.

To get in touch with Nouman, Sher and Shaigan and connect with the community in Pakistan, head here.

PORTUGAL

Open Knowledge Portugal is founded in unison by Ricardo Lafuente and Olaf Veerman.

Ricardo co-founded and facilitates the activities of Transparência Hackday Portugal, Portugal’s open data collective. Coming from a communications design background and an MA in Media Design, he has been busy developing tools and projects spanning the fields of typography, open data, information visualization and web technologies. He also co-founded the Porto office of Journalism++, the data-driven journalism agency, where he takes the role of designer and data architect along with Ana Isabel Carvalho. Ana and Ricardo also run the Manufactura Independente design research studio, focusing on libre culture and open design.

Olaf Veerman leads the Lisbon office of Development Seed and their efforts to contribute to the open data community in Europe, concretely by leading project strategy and implementation through full project cycles. Before joining Development Seed, Olaf lived throughout Latin America, where he worked with civil society organizations to create social impact through the use of technology. He came over from Flipside, the Lisbon-based organization he founded after returning to Portugal from his last stay in the Southern hemisphere. Olaf is fluent in English, Dutch, Portuguese, and Spanish.

To get in touch with Ricardo and Olaf – and connect with the community in Portugal, head here.

UKRAINE

Denis Gursky is the founder of the new Open Knowledge Local Group in Ukraine. He is also the founder of SocialBoost, a set of innovative instruments, including the open data movement in Ukraine, that improves civic engagement and makes government more digitalized, and thus more accountable, transparent and open. He is furthermore a digital communications and civic engagement expert and works on complex strategies for government and the commercial sector. He is one of the leaders of the open government data movement in Ukraine, supported by government and hacktivists, and is currently developing the Official Open Government Data Portal of Ukraine and an Open Data Law.

To get in touch with Denis and connect with the community in Ukraine, head here.

Photo by flipside.org, CC BY-SA.

Raffaele Messuti: Epub linkrot

Tue, 2015-03-03 10:00

Linkrot also affects epub files (who would have thought! :)).
Here is how to check the health of external links in epub books (required tools: a shell, atool, pup, GNU parallel).
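The post's actual shell pipeline (atool, pup, GNU parallel) isn't reproduced here; as a rough illustration of the same idea, here is a minimal Python sketch. It is an assumption-laden stand-in, not the author's script: it treats the epub as the zip archive it is, scrapes `href` attributes with a crude regex, and issues HEAD requests.

```python
import re
import zipfile
from urllib.request import Request, urlopen

# Very rough href extractor; a real checker would use an HTML parser.
HREF_RE = re.compile(r'href="(https?://[^"]+)"')

def external_links(html_text):
    """Collect external http(s) hrefs from a chunk of (X)HTML."""
    return set(HREF_RE.findall(html_text))

def check_epub(path):
    """Report the HTTP status (or error) of every external link in an epub."""
    links = set()
    with zipfile.ZipFile(path) as epub:  # an epub is just a zip archive
        for name in epub.namelist():
            if name.endswith((".html", ".xhtml", ".htm")):
                links |= external_links(epub.read(name).decode("utf-8", "replace"))
    for url in sorted(links):
        try:
            result = urlopen(Request(url, method="HEAD"), timeout=10).status
        except Exception as exc:
            result = exc
        print(result, url)

# usage: check_epub("book.epub")
```

Unlike the parallelized shell version, this checks links sequentially; swapping in a thread pool would recover the concurrency that GNU parallel provides.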

Dan Scott: Library and Archives Canada: Planning for a new union catalogue

Tue, 2015-03-03 03:46

I attended a meeting with Library and Archives Canada today in my role as an Ontario Library Association board member to discuss the plans around a new Canadian union catalogue based on OCLC's hosted services. Following are some of the thoughts I prepared in advance of the meeting, based on the relatively limited materials to which I had access. (I will update this post once those materials have been shared openly; they include rough implementation timelines, perhaps the most interesting being that the replacement system is not expected to be in production until August 2016.) Let me say at the outset that there were no solid answers on potential costs to participating libraries, other than that LAC is striving to keep the costs as low as possible.

Basic question: What form does LAC envision the solution taking?

Will it be:

  • "Library and Archives Canada begins adding records and holdings to WorldCat" as listed for many other countries in http://www.oclc.org/worldcat/catalog/national/timeline.en.html;
  • Or a separate, standalone but openly searchable WorldCat Local catalogue that Canadians can use like the Dutch or United Kingdom union catalogues (which lack significant functionality that standard WorldCat possesses, like the integrated schema.org discovery markup)?
  • Or a separate, standalone but closed catalogue like the Dutch union catalogue GGC and the Combined Regions UnityUK that require a subscription to access?

The answer was "yes, we will be adding records and holdings to WorldCat, and yes, you will be able to search a WorldCat Local instance for both LAC-specific records and AMICUS as a whole" - but they're still working out the exact details. Later we determined that it will actually be WorldCat Discovery (essentially a rewrite of WorldCat Local), which assuaged some of my concerns about the current examples we can see of other OCLC-based union catalogues.

Privacy of Canadian citizens

The "Canadian office and data centre locations" requirement does not mean that usage data is exempt from Patriot Act concerns. Specifically, OCLC is an American company and thus the USA Patriot Act "allows US authorities to obtain records from any US-linked company operating in Canada" (per a 2004 brief submitted to the BC Privacy Commissioner by CIPPIC). Canadians should not be subject to this invasion of their privacy by the agents of another nation simply to use their own national union catalogue.

The response: The Justice, Agriculture, and NRCan agencies use US-hosted library systems (Evergreen, hosted by Equinox). However, one of the other participants from a federal agency reported that they had been trying to upgrade from Millennium to Sierra but have been stalled for two years, because whatever policy allowed them to go live with the US-hosted Millennium is not being allowed now.

LAC claimed that, due to NAFTA, they are not allowed to insist that data be held in Canada unless it is for national security reasons. They noted that any usage data collected wouldn't be the same volume of patron data that would be seen in public libraries. They did point out that the Netherlands sends anonymized data to OCLC, but that costs money and impacts response time. Apparently, according to the OCLC web site, they claim not to have had a request under the Patriot Act.

Privacy of Canadian citizens, part 2

I didn't get the chance to bring this up during the call...

LAC noted in their background that modern systems have links to social media, and apparently want this as part of a new AMICUS. This would also open up potential privacy leaks; see Eric Hellman on this topic, for example; it is also an area of interest for the recently launched ALA Patron Privacy Technologies Interest Group.

Open data

Opening up access to data is part of the federal government's stated mission. Canada's Action Plan on Open Government 2014-16 says "Open Government Foundation - Open By Default" is a keystone of its plan; "Eligible data and information will be released in standardized, open formats, free of charge, and without restrictions on reuse" under the Open Government Licence - Canada 2.0. I therefore asserted:

  • A relaunched National Union Catalogue should therefore support open data per the federal initiative from launch.
  • The open data should include bibliographic, authority, and holdings records. Guy Berthiaume's reply to CLA and CAPAL that libraries can use the Z39.50 protocol to try to access records from individual libraries' Z39.50 servers ignores one of the primary purposes of a union catalogue, which is to avoid that time-consuming search across the various Z39.50 servers of the institutions that contributed their data to the union catalogue in the first place.

The response: The ACAN requirements document indicated a requirement that the data be made available under an ODC-BY license (matching OCLC's general WorldCat license); and LAC needs to get the data back to support their federated search tool.

I asked if they had checked to see if the ODC-BY and Open Government Licence - Canada 2.0 licenses are compatible; they responded that that was something they would need to look into. Happily, the CLIPol tool indicates that the ODC-BY 1.0 and Open Government Licence - Canada 2.0 licenses are mostly compatible.

Contemporary features: are we achieving the stated goals?

The backgrounder benefits/objectives section stated: "In the current AMICUS-based context, the NUC has not kept pace with new technological functions, capabilities, and client needs. Contemporary features such as a user-oriented display and navigation, user customization, links to social media, and linked open data output were not available when AMICUS was implemented in the 1990s."

Canadian resource visibility

To preserve and promote our unique national culture, we want Canadian library resources to be as visible as possible on the web. This is generally accomplished by publishing a sitemap (a list of the web pages for a given web site, along with when each page was last updated) and allowing search engines like Google, Bing, and Yahoo to crawl those web pages and index their data.

To maximize the visibility of Canadian library resources on the open web, we need our union catalogue to generate a sitemap that points to only the actual records with holdings for Canadian libraries, not just WorldCat.org in general. For example, http://adamnet.worldcat.org/robots.txt simply points to the generic http://www.worldcat.org/libraries/sitemap_index.xml, not a specific sitemap for the Dutch union catalogue.
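Generating such a sitemap from the catalogue's own record database is straightforward. The following minimal Python sketch illustrates the shape of the output; the base URL and the `(record_id, last_modified)` tuple format are invented for illustration, not anything LAC or OCLC has specified:

```python
from datetime import date
from xml.sax.saxutils import escape

def sitemap(records, base="https://nuc.example.ca/record/"):
    """Build sitemap XML from (record_id, last_modified) pairs."""
    entries = "\n".join(
        "  <url><loc>{}{}</loc><lastmod>{}</lastmod></url>".format(
            base, escape(str(record_id)), modified.isoformat())
        for record_id, modified in records
    )
    return ('<?xml version="1.0" encoding="UTF-8"?>\n'
            '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
            + entries + "\n</urlset>")

print(sitemap([(12345, date(2015, 3, 1))]))
```

The point of the `lastmod` element is that crawlers can skip unchanged records, which matters at union-catalogue scale.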

Our union catalogue should publish schema.org metadata to improve the discoverability of our resources in search engines (which initiated the schema.org standard for that purpose). WorldCat includes schema.org metadata, but WorldCat Local instances do not.
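For illustration, the kind of schema.org description a record page could embed as JSON-LD (inside a `<script type="application/ld+json">` element) might look like the following sketch; the sample record is invented, and `Book`, `author`, and `isbn` are standard schema.org terms:

```python
import json

# A hypothetical bibliographic record expressed with schema.org vocabulary.
record = {
    "@context": "http://schema.org",
    "@type": "Book",
    "name": "Anne of Green Gables",
    "author": {"@type": "Person", "name": "L. M. Montgomery"},
    "isbn": "9780553213133",
}
print(json.dumps(record, indent=2))
```

Search engines that parse this markup can surface the record directly in results, which is the discoverability gain at stake here.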

The response: There was some confusion about schema.org, and they asked if I didn't think that OCLC's syndication program was sufficient for enabling web discoverability. I replied in the negative.

Standards support (MARC21, RDA, ISO etc.)

I didn't get a chance to raise these questions.

What standards, exactly, are meant by this?

"Technical requirements including volumetrics and W3C compliance" is also very broad and vague. With respect to "W3C compliance", the W3C's list of published standards is just the start of many standards.

  • Presumably there will be WCAG compliance for accessibility - but to what extent?
  • Both the adamnet and fablibraries instances' landing pages state that their canonical URL is www.worldcat.org, which effectively hides them from search engines.

Mobile support

The W3C Standards page mentions mobile friendliness as part of its standards.

WorldCat.org itself is not mobile friendly. It uses a separate website with different URLs to serve up mobile web pages, and does not automatically detect mobile browsers; the onus is on the user to find the "WorldCat Mobile" page, and that has been in a "Beta" state since 2009. That "beta" contravenes the stated requirement that the AMICUS replacement service not be an alpha or beta (unless you choose to ignore the massive adoption of mobile devices for searching and browsing), and the beta mobile experience lacks functionality compared to the desktop version.

The adamnet and fablibraries WorldCat Local instances don't advertise the mobile option, which is slightly different than the standard WorldCat Mobile version (for example, it offers record detail pages), but the navigation between desktop and mobile is sub-par. If you have bookmarked a page on the desktop, then open that bookmark on your synchronized browser on a mobile device, you can only get the desktop view.

Linked open data

Linked open data around records, holdings, and participating libraries has arguably been a standard since the W3C Library Linked Data Incubator Group issued its final report in 2011.

  • Data--including library holdings--should be available both as bulk downloads and as linked open data
  • Records need to be linked to libraries and holdings. For humans, that missing link in WorldCat is supplied by a JavaScript lookup based on geographic location info that the human supplies. This prevents other automated services from aggregating the data and creating new services based on it (including entirely Canadian-built and hosted services which would then protect Canadians from USA Patriot Act concerns).
  • MARC records should be one of the directly downloadable formats via the web. Currently download options are limited to experimental & incomplete ntriple, turtle, JSON-LD, and RDF-XML formats.
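As a sketch of how an automated client would consume such linked data if it were exposed properly, a content-negotiation request for a record's RDF can be built with nothing more than an `Accept` header. The record URI below is only an example, and whether WorldCat actually honours this negotiation is exactly the open question raised above:

```python
from urllib.request import Request

def rdf_request(record_uri, rdf_type="text/turtle"):
    """Ask for a record's linked-data representation via the Accept header."""
    return Request(record_uri, headers={"Accept": rdf_type})

# urlopen(rdf_request("http://www.worldcat.org/oclc/155131850")) would then
# retrieve Turtle rather than HTML, if the server honours the header.
req = rdf_request("http://www.worldcat.org/oclc/155131850")
print(req.get_header("Accept"))
```

The same pattern serves JSON-LD or RDF-XML by swapping the media type, which is why machine-readable formats matter more than a human-facing JavaScript lookup.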

Application programming interface (API)

I didn't get the chance to bring this up during the call...

OCLC offers the xID API in a very limited fashion to non-members, which is one of the only ways to match ISBN, LCCN, and OCLC numbers. LAC should ensure that Canadian libraries have access to some similarly efficient means of finding matching records without having to become full OCLC Cataloguing members.

Updating the NUC

I didn't get the chance to bring this up during the call...

In an ideal world, the NUC would adopt the standard web indexing practice of checking sitemaps (for those libraries that produce them) on a regular (daily or weekly basis) and add/replace any new/modified records & holdings from the contributing libraries accordingly, rather than requiring libraries to upload their own records & holdings on an irregular basis.
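The filtering step of such a harvester is simple; here is a minimal sketch that selects the URLs a sitemap reports as modified since the last harvest. The element names come from the sitemaps.org schema; everything else (sample URLs, cutoff handling) is assumed for illustration:

```python
import xml.etree.ElementTree as ET
from datetime import date

NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

def changed_since(sitemap_xml, cutoff):
    """Yield the URLs whose <lastmod> is on or after the cutoff date."""
    root = ET.fromstring(sitemap_xml)
    for url in root.findall("sm:url", NS):
        lastmod = url.findtext("sm:lastmod", namespaces=NS)
        if lastmod and date.fromisoformat(lastmod[:10]) >= cutoff:
            yield url.findtext("sm:loc", namespaces=NS)
```

A nightly job would fetch each contributing library's sitemap, feed it through a filter like this, and re-harvest only the changed records, replacing the current irregular bulk uploads.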
