
FOSS4Lib Upcoming Events: Sharing Images of Global Cultural Heritage

planet code4lib - Wed, 2015-03-04 15:09
Date: Tuesday, May 5, 2015 - 08:30 to 17:00. Supports: IIPImage, OpenSeadragon, Djatoka JPEG2000 Image Server, Loris

Last updated March 4, 2015. Created by Peter Murray on March 4, 2015.

The International Image Interoperability Framework community (http://iiif.io/) is hosting a one day information sharing event about the use of images in and across Cultural Heritage institutions. The day will focus on how museums, galleries, libraries and archives, or any online image service, can take advantage of a powerful technical framework for interoperability between image repositories.

FOSS4Lib Upcoming Events: Hydra Camp London

planet code4lib - Wed, 2015-03-04 15:01
Date: Monday, April 20, 2015 - 08:00 to Thursday, April 23, 2015 - 13:00. Supports: Hydra, Fedora Repository


Hydra Camp London - a training event enabling technical staff to learn about the Hydra technology stack so they can establish their own implementation.

Monday 20th April - lunchtime Thursday 23rd April 2015

FOSS4Lib Upcoming Events: Hydra Europe Symposium

planet code4lib - Wed, 2015-03-04 14:58
Date: Thursday, April 23, 2015 - 10:30 to Friday, April 24, 2015 - 15:30. Supports: Hydra, Fedora Repository


Hydra Europe Symposium - an event for digital collection managers, collection owners and their software developers that will provide insights into how Hydra can serve your needs.

Thursday 23rd April - Friday 24th April 2015

This event is free of charge. Lunch and refreshments will be provided on both days.

LITA: Agile Development: Estimation and Scheduling

planet code4lib - Wed, 2015-03-04 14:00

Image courtesy of Wikipedia

In my last post, I discussed the creation of Agile user stories. This time I’m going to talk about what to do with them once you have them. There are two big steps that need to be completed in order to move from user story creation to development: effort estimation and prioritization. Each poses its own problems.

Estimating Effort

Because Agile development relies on flexibility and adaptation, creating a bottom-up effort estimation analysis is both difficult and impractical. You don’t want to spend valuable time analyzing a piece of functionality up front only to have the implementation details change because of something that happens earlier in the development process, be it a change in another story, customer feedback, etc. Instead, it’s better to rely on your development team’s expertise and come up with top-down estimates that are accurate enough to get the development process started. This may at times make you feel uncomfortable, as if you’re looking for groundwater with a stick (it’s called dowsing, by the way), but in reality it’s about doing the minimum work necessary to come up with a reasonably accurate projection.

Estimation methods vary, but the key is to discuss story size in relative terms rather than assigning a number of hours of development time. Some teams find a story that is easy to estimate and calibrate all other stories relative to it, using some sort of relative “story points” scale (powers of 2, the Fibonacci sequence, etc.). Others create a relative scale and tag each story with a value from it: this can be anything from vehicles (this story is a car, this one is an aircraft carrier, etc.), to t-shirt sizes, to anything that is intuitive to the team. Another method is planning poker: the team picks a set of sizing values, and each member of the team assigns one of those values to each story by holding up a card with the value on it; if there’s significant variation, the team discusses the estimates and comes up with a compromise.  What matters is not the method, but that the entire team participate in the estimation discussion for each story.
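The "significant variation" check in planning poker is easy to mechanize. As a toy sketch (the story names, estimates, and factor-of-two threshold are all my own invention), a few lines of awk can flag the stories worth discussing:

```shell
# estimates.csv: story name, then one column per team member's story-point estimate
cat > /tmp/estimates.csv <<'EOF'
search-box,2,3,3,2
bulk-import,3,8,13,5
EOF

# Flag a story for discussion when the largest estimate is more than twice
# the smallest (the factor-of-two threshold is arbitrary; tune it to your team).
awk -F, '{
  min = $2; max = $2
  for (i = 3; i <= NF; i++) { if ($i < min) min = $i; if ($i > max) max = $i }
  if (max > 2 * min) print $1 ": estimates range " min "-" max ", discuss"
}' /tmp/estimates.csv
```

Anything the script flags goes back to the team for a conversation; the point is to surface disagreement, not to average it away.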

Learn more about Agile estimation here and here.

Prioritizing User Stories

The other piece of information we need in order to begin scheduling is the importance of each story, and for that we must turn to the business side of the organization. Prioritization in Agile is an ongoing process (as opposed to a one-time ranking) that allows the team to understand which user stories carry the biggest payoff at any point in the process. Once they are created, all user stories go into the product backlog, and each time the team plans a new sprint it picks stories off the top of the list until its capacity is exhausted, so it is very important that the Product Owner maintain a properly ordered backlog.

As with estimation, methods vary, but the key is to follow a process that evaluates each story on the value it adds to the product at any point. A bare numerical ranking does not explain why one story outranks another, which will confuse the team (and me as well, as the backlog grows). Most teams adopt a ranking system that scores each story individually; here’s a good example. This method uses two separate criteria: urgency and business value. Business value measures the positive impact of a given story on users. Urgency captures how important it is to complete a story earlier rather than later in the development process, taking into account dependencies between user stories, contractual obligations, complexity, etc. Basically, business value represents the importance of including a story in the finished product, and urgency tells us how much it matters when that story is developed (understanding that a story’s likelihood of being completed decreases the later in the process it is slotted). Once a story has been scored along both axes (a simple 1-5 scale works for each), its final priority is the product of the two values, and the backlog is ordered by that score.
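That scoring scheme is simple enough to sketch in a few lines of shell (the stories and scores below are invented for illustration):

```shell
# backlog.csv: story,business_value,urgency (each scored on the 1-5 scale above)
cat > /tmp/backlog.csv <<'EOF'
patron-login,5,4
csv-export,3,2
dark-theme,2,1
EOF

# priority = business value x urgency; order the backlog highest-first
awk -F, '{ printf "%d %s\n", $2 * $3, $1 }' /tmp/backlog.csv | sort -rn
```

Ordering the backlog is then just a sort on the computed score; the Product Owner still owns the judgment calls that feed the two inputs.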

As the example in the link shows, a Product Owner can also create priority bands that describe stories at a high level: must-have, nice to have, won’t develop, etc. This provides context for the priority score and gives the team information about the PO’s expectations for each story.

I’ll be back next month to talk about building an Agile culture. In the meantime, what methods does your team use to estimate and prioritize user stories?

Open Knowledge Foundation: New research project to map the impact of open budget data

planet code4lib - Wed, 2015-03-04 12:01

I’m pleased to announce a new research project to examine the impact of open budget data, undertaken as a collaboration between Open Knowledge and the Digital Methods Initiative at the University of Amsterdam, supported by the Global Initiative for Financial Transparency (GIFT).

The project will include an empirical mapping of who is active around open budget data around the world, and what the main issues, opportunities and challenges are according to different actors. On the basis of this mapping it will provide a review of the various definitions and conceptions of open budget data, arguments for why it matters, best practises for publication and engagement, as well as applications and outcomes in different countries around the world.

As well as drawing on Open Knowledge’s extensive experience and expertise around open budget data (through projects such as Open Spending), it will utilise innovative tools and methods developed at the University of Amsterdam to harness evidence from the web, social media and collections of documents to inform and enrich our analysis.

As part of this project we’re launching a collaborative bibliography of existing research and literature on open budget data and associated topics which we hope will become a useful resource for other organisations, advocates, policy-makers, and researchers working in this area. If you have suggestions for items to add, please do get in touch.

This project follows on from other research projects we’ve conducted around this area – including on data standards for fiscal transparency, on technology for transparent and accountable public finance, and on mapping the open spending community.

Financial transparency field network with the Issuecrawler tool based on hyperlink analysis starting from members of Financial Transparency Coalition, 12th January 2015. Open Knowledge and Digital Methods Initiative.

LibUX: The Inter[mediate]face

planet code4lib - Wed, 2015-03-04 01:01

This post, The Battle Is For The Customer Interface by Tom Goodwin, captured my imagination. The fastest-growing companies in the world occupy the space between the product and the person. Uber doesn’t own any vehicles; Facebook doesn’t create any media; Airbnb doesn’t own any real estate. What they control is the interface.

They facilitate access — just like us.

The Library Interface

The trumped-up value of the library isn’t the dog-eared six-dollar paperbacks in its collection, nor can we squander the credit for the research behind the vendor paywall. Instead, its value continues to be what it has always been – as gatekeeper, the access point. A library is the intermediary touchpoint between the user and the content the user seeks.

We have talked before about how one of the most important features for a library website is that it stays out of the way; the most successful are — as Tom wrote — “thin layers that sit on top of vast supply systems.” In this way, libraries curate access points which are desirable to patrons because

  • they eliminate paywalls,
  • they curate the signal from the noise,
  • and they are delightful.

These are the core features of the library interface. Libraries absorb the community-wide cost to access information, curated by knowledge-experts who help sift through the Googleable cruft. They provide access to a repository of physical items users want but don’t want to buy (books, tools, looms, 3D printers, machines). A library is, too, where community is accessed. In the provision of this access anywhere on the open web and through human proxies, the library creates delight.

The post The Inter[mediate]face appeared first on LibUX.

Library Tech Talk (U of Michigan): How to Create (and Keep Creating) a Digitization Workflow

planet code4lib - Wed, 2015-03-04 00:00

It’s possible we should have written this blog post years ago, when we first created our workflow for how we shepherd digitization projects through our Digital Library. Well, we were busy creating it, that’s our excuse. Three years later, we’re on our third iteration.

DuraSpace News: VIVO Strategic Plan Lays Foundation for 2015-2016

planet code4lib - Wed, 2015-03-04 00:00

Winchester, MA – During the past two and a half months, the VIVO Strategic Planning Group has developed a prioritized written strategy document for the VIVO project. The plan highlights key goals and recommendations that specifically focus on increasing the engagement of the VIVO community, hiring a full-time VIVO Technical Lead to make the open source development process more inclusive and transparent, and implementing a framework to increase productivity.

Evergreen ILS: SECURITY RELEASES: Evergreen 2.7.4, 2.6.7, and 2.5.9

planet code4lib - Tue, 2015-03-03 22:55

On behalf of the Evergreen contributors, the 2.7.x release maintainer (Ben Shum) and the 2.6.x and 2.5.x release maintainer (Dan Wells), we are pleased to announce the release of Evergreen 2.7.4, 2.6.7, and 2.5.9.

The new releases can be downloaded from:

http://evergreen-ils.org/egdownloads/

THESE RELEASES CONTAIN SECURITY UPDATES, so you will want to upgrade as soon as possible.

In particular, the following security issues are fixed:

  • Bug 1424755: This bug allows unauthorized remote access to the value of certain library settings that are meant to be confidential.
  • Bug 1206589: This bug allows unauthorized remote access to the log of changes to library settings, including ones meant to be confidential.

All prior supported releases are vulnerable to these bugs.

All three of these new releases also contain bugfixes that are not related to the security issues. For more information on the changes in these releases, please consult their change logs.

Please note that 2.5.9 is the last release expected in the 2.5.x series.

It is recommended that all Evergreen sites upgrade to one of the new releases as soon as possible.

If you cannot do a full upgrade at this time, it is extremely important that you patch your Evergreen system to protect against these exploits. To that end, two patches are available, one for bug 1424755 and one for bug 1206589, that you can download and apply to a running system.

In order to secure your system, you must download the two patches and copy them to each of your Evergreen servers — in particular, any that run the open-ils.actor and/or open-ils.pcrud services. You will need to perform the following steps on each server to completely patch your system.

First, you must find where the Actor.pm module is located. This is usually under /usr/local somewhere. The following command will find it for you:

find /usr/local -name Actor.pm

On an Ubuntu 12.04 system, the above prints out /usr/local/share/perl/5.14.2/OpenILS/Application/Actor.pm, so we will use that as our example. When you do this for real, be sure to use the actual path printed by the command. If it prints nothing, you will need to check other locations.

Once you have the path, you can run the patch command. Assuming that you are in the directory where you put the patch file, the following command should apply the patch:

sudo patch -b /usr/local/share/perl/5.14.2/OpenILS/Application/Actor.pm lp1424755.patch

Unless you have made local edits to the affected file, the patch should apply cleanly.
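If you have not used patch's -b option before, here is a self-contained demonstration of what the command above does, using a throwaway file in /tmp in place of Actor.pm and an invented two-line diff in place of lp1424755.patch: the diff is applied in place, and the original is kept as a .orig backup you can restore if something goes wrong.

```shell
mkdir -p /tmp/patch-demo && cd /tmp/patch-demo

# A throwaway file standing in for Actor.pm
printf 'line one\nline two\n' > example.txt

# A minimal unified diff standing in for lp1424755.patch
cat > example.patch <<'EOF'
--- example.txt
+++ example.txt
@@ -1,2 +1,2 @@
 line one
-line two
+line 2
EOF

patch -b example.txt example.patch
grep 'line 2' example.txt         # the patched content
grep 'line two' example.txt.orig  # the automatic backup made by -b
```

Restoring is just `mv example.txt.orig example.txt`; the same holds for Actor.pm if a patch ever misbehaves.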

Next, you will need to apply the patch for bug 1206589. This can be done as the opensrf user:

patch -b /openils/conf/fm_IDL.xml lp1206589.patch

After you have applied the patches, you will need to restart the open-ils.actor and open-ils.pcrud services. You do this by running osrf_control with the appropriate options:

osrf_control [--localhost] --restart --service open-ils.actor
osrf_control [--localhost] --restart --service open-ils.pcrud

The --localhost is in brackets because you may or may not need it. Your system administrator should know if you do or not. If you do need it, remove the brackets. If you don’t need it, then omit the option entirely.

DPLA: Board Governance Committee Open Call: March 11, 2015, 1:00 PM Eastern

planet code4lib - Tue, 2015-03-03 15:40

The DPLA Board of Directors’ Governance Committee will hold a conference call on Wednesday, March 11, 2015 at 1:00 PM Eastern. The call is open to the public.

Agenda

Public session

  • Rethinking DPLA open committee calls
  • Questions/comments from the public

Executive session

  • Update and next steps for Board Nominating Committee
Dial-in

District Dispatch: 3D printing technologies in libraries: intellectual property right issues

planet code4lib - Tue, 2015-03-03 15:23

Photo by Subhashish Panigrahi

Join us for our next installment of CopyTalk, March 5th at 2pm Eastern Time. In the past, the use of photocopying, printing, scanning and related technologies in libraries raised copyright issues alone. A new technology is making its way into libraries: 3D printing now allows a patron to create (print) three-dimensional objects as well. Patrons can now “print” entire mechanical devices or components of other devices, from something as simple as a corkscrew to prosthetic body parts. Objects of all sorts can be created in library maker spaces. These technologies raise not only copyright issues but also patent issues (including design patents) and trademark issues (including trade dress). Learn about the legal issues involved, how the library can protect itself from liability when patrons use these technologies in library spaces, and how to raise awareness of such issues among patrons.

Speakers

Professor Tomas Lipinski earned his Juris Doctor (J.D.) from Marquette University Law School, Milwaukee, Wisconsin, received the Master of Laws (LL.M.) from The John Marshall Law School, Chicago, Illinois, and the Ph.D. from the Graduate School of Library and Information Science, University of Illinois at Urbana-Champaign. Mr. Lipinski has worked in a variety of legal settings including the private, public and non-profit sectors. He is the author of numerous articles and book chapters and has been a visiting professor in summers at the University of Pretoria School of Information Technology (Pretoria, South Africa) and at the Graduate School of Library and Information Science, University of Illinois at Urbana-Champaign. Professor Lipinski was the first named member of the Global Law Faculty, Faculty of Law, University of Leuven (Katholieke Universiteit Leuven), Belgium, in fall of 2006, where he continues to lecture annually at its Centers for Intellectual Property Rights and Interdisciplinary Center for Law and ICT. In October he returned to the University of Wisconsin-Milwaukee to serve as Professor and Dean of its i-School, the School of Information Studies. He serves as a member of the IFLA Copyright and other Legal Matters Committee and as an IFLA delegate to the WIPO Standing Committee on Copyright and Related Rights. His current project is a book on legal issues in maker spaces in libraries, with Mary Minow and Gretchen McCord, that should be available this summer or fall.

As OITP’s Information Policy Analyst, Charlie Wapner provides analytical, organizational, and logistical support to the ALA Washington Office as part of a team developing and implementing a national information policy agenda for America’s public libraries. He also leads OITP’s work on the policy implications of 3D printing. Prior to working at ALA, Charlie spent two-and-a-half years providing policy and communications support to members of the U.S. House of Representatives. He worked first for Congressman Mark Critz of Pennsylvania and then for Congressman Ron Barber of Arizona. Charlie holds a B.A. in diplomatic history from the University of Pennsylvania and an M.S. in public policy and management from Carnegie Mellon University.

There is no need to pre-register! Just show up on March 5, 2015, at 2:00 p.m. Eastern by clicking here.

The post 3D printing technologies in libraries: intellectual property right issues appeared first on District Dispatch.

LITA: Join LITA’s Imagineering IG at ALA Annual

planet code4lib - Tue, 2015-03-03 13:00

Editor’s note: This is guest post by Breanne Kirsch.

During the upcoming 2015 ALA Annual Conference, LITA’s Imagineering Interest Group will host the program “Unknown Knowns and Known Unknowns: How Speculative Fiction Gets Technological Innovation Right and Wrong.” A panel of science fiction and fantasy authors will discuss their work and how it connects with technological developments that were never invented and those that came about in unimagined ways. Tor is sponsoring the program and bringing authors John Scalzi, Vernor Vinge, Greg Bear, and Marie Brennan. Baen Books is also sponsoring the program by bringing Larry Correia to the author panel.

John Scalzi wrote the Old Man’s War series and more recently, Redshirts, which won the 2013 Hugo Award for Best Novel. Vernor Vinge is known for his Realtime/Bobble and Zones of Thought Series and a number of short fiction stories. Greg Bear has written a number of series, including Darwin, The Forge of God, Songs of Earth and Power, Quantum Logic, and The Way. He has also written books for the Halo series, short fiction, and standalone books, most recently, War Dogs as well as the upcoming novels Eternity and Eon. Marie Brennan has written the Onyx Court series, a number of short stories, and more recently the Lady Trent series, including the upcoming Voyage of the Basilisk. Larry Correia has written the Monster Hunter series, Grimnoir Chronicles, Dead Six series, and Iron Kingdoms series. These authors will consider the role speculative fiction plays in fostering innovation and bringing about new ideas.

Please plan to attend the upcoming ALA Annual 2015 Conference and add the Imagineering Interest Group program to your schedule! We look forward to seeing you in San Francisco.

Breanne A. Kirsch is the current Chair of the Imagineering Interest Group as well as the Game Making Interest Group within LITA. She works as a Public Services Librarian at the University of South Carolina Upstate and is the Coordinator of Emerging Technologies. She can be contacted at bkirsch@uscupstate.edu or @breezyalli.

Open Knowledge Foundation: New Open Knowledge Local Groups in Macedonia, Pakistan, Portugal and Ukraine

planet code4lib - Tue, 2015-03-03 12:45

It’s once again time for us to proudly announce the establishment of a new batch of Open Knowledge Local Groups, founded by community leaders in Macedonia, Pakistan, Portugal and Ukraine, which we hereby welcome warmly into the ever-growing family of Local Groups. This brings the total number of Local Groups and Chapters up to a whopping 58!

In this blog post we would like to introduce the founders of these new groups and invite everyone to join the community in these countries.

MACEDONIA

In Macedonia, the Local Group has been founded by Bardhyl Jashari, who is the director of Metamorphosis Foundation. His professional interests are mainly in the sphere of new technologies, media, civic activism, e-government and participation. Previously he worked as Information Program Coordinator of the Foundation Open Society – Macedonia. In both capacities, he has run national- and international-scope projects involving tight cooperation with other international organizations, governmental bodies, the business and the civic sector. He is a member of the National Council for Information Society of Macedonia and National Expert for Macedonia of the UN World Summit Award. In the past he was a member of the Task Force for National Strategy for Information Society Development and served as a commissioner at the Agency for Electronic Communication (2005-2011). Bardhyl holds a master's degree from Paris 12 University's Faculty of Public Administration (France) and an Information System Designer degree from the University of Zagreb (Croatia).

To get in touch with Bardhyl and connect with the community in Macedonia, head here.

PAKISTAN

The new Local Group in Pakistan is founded by Nouman Nazim. Nouman has worked for 7+ years with leading public-sector as well as non-government organizations in Pakistan, performing a variety of roles related to administration, management, monitoring, etc. He has worn many other hats in his career, including programmer, writer, researcher, manager, marketer and strategist. As a result, he has developed unique abilities to manage multi-disciplinary tasks and projects and to navigate complex challenges. He has a Bachelor's degree in Information Sciences and is currently pursuing a Master's degree in Computer Science, besides working on his own startup outside of class. He believes open data lets us achieve what we normally never could, and that it has the potential to positively change millions of lives.

In the Open Knowledge Pakistan Local Group, Nouman is supported by Sher Afgun Usmani and Shaigan Rana. Sher studied computer science and is an entrepreneur, co-founder of Yum Solutions and Urducation (an initiative to promote technical education in Urdu). He has been working for 4+ years in the field of software development. Shaigan holds an MBA degree in Marketing and is now pursuing a post-graduate degree in internet marketing from Iqra University Islamabad, Pakistan. His research focuses on entrepreneurship, innovation and open access to international markets. He is co-founder of printingconcern.com and Yum Solutions, and has an interest and several years' experience in internet marketing, content writing, business development and direct sales.

To get in touch with Nouman, Sher and Shaigan and connect with the community in Pakistan, head here.

PORTUGAL

Open Knowledge Portugal was founded jointly by Ricardo Lafuente and Olaf Veerman.

Ricardo co-founded and facilitates the activities of Transparência Hackday Portugal, Portugal’s open data collective. Coming from a communications design background and an MA in Media Design, he has been busy developing tools and projects spanning the fields of typography, open data, information visualization and web technologies. He also co-founded the Porto office of Journalism++, the data-driven journalism agency, where he takes the role of designer and data architect along with Ana Isabel Carvalho. Ana and Ricardo also run the Manufactura Independente design research studio, focusing on libre culture and open design.

Olaf Veerman leads the Lisbon office of Development Seed and their efforts to contribute to the open data community in Europe, concretely by leading project strategy and implementation through full project cycles. Before joining Development Seed, Olaf lived throughout Latin America where he worked with civil society organizations to create social impact through the use of technology. He came over from Flipside, the Lisbon based organization he founded after returning to Portugal from his last stay in the Southern hemisphere. Olaf is fluent in English, Dutch, Portuguese, and Spanish.

To get in touch with Ricardo and Olaf – and connect with the community in Portugal, head here.

UKRAINE

Denis Gursky is the founder of the new Open Knowledge Local Group in Ukraine. He is also the founder of SocialBoost, an initiative that builds innovative tools for the open data movement in Ukraine, improving civic engagement and making government more digital, and thus more accountable, transparent and open. He is furthermore a digital communications and civic engagement expert and works on complex strategies for government and the commercial sector. He is one of the leaders of the open government data movement in Ukraine, supported by government and hacktivists, and is currently developing the Official Open Government Data Portal of Ukraine and an Open Data Law.

To get in touch with Denis and connect with the community in Ukraine, head here.

Photo by flipside.org, CC BY-SA.

Raffaele Messuti: Epub linkrot

planet code4lib - Tue, 2015-03-03 10:00

Linkrot also affects epub files (who would have thought! :)).
How to check the health of external links in epub books (required tools: a shell, atool, pup, GNU parallel).
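The post has the details, but the general shape of such a pipeline is: unpack the EPUB (it is just a zip archive, so atool's aunpack handles it), harvest the external hrefs from the XHTML files, and probe each URL. In this minimal sketch, a hand-made XHTML file stands in for a real unpacked book and grep/sed stand in for pup; only the harvesting step runs here, and the network check is left as a comment.

```shell
# Stand-in for an unpacked EPUB (aunpack book.epub would produce a tree
# of XHTML files like this one)
mkdir -p /tmp/epub-demo
cat > /tmp/epub-demo/chapter1.xhtml <<'EOF'
<html><body>
<a href="https://example.org/a">external</a>
<a href="https://example.org/a">duplicate</a>
<a href="chapter2.xhtml">internal</a>
</body></html>
EOF

# Harvest unique external links; pup would parse the HTML properly,
# grep/sed is the quick-and-dirty stand-in
grep -rhoE 'href="https?://[^"]+"' /tmp/epub-demo | sed 's/^href="//; s/"$//' | sort -u

# Then probe each one, e.g.:
#   ... | parallel -j8 'curl -s -o /dev/null -w "%{http_code} {}\n" --max-time 10 {}'
```

Note that internal links (chapter2.xhtml) are deliberately excluded; only http(s) URLs can rot.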

Dan Scott: Library and Archives Canada: Planning for a new union catalogue

planet code4lib - Tue, 2015-03-03 03:46

I attended a meeting with Library and Archives Canada today in my role as an Ontario Library Association board member to discuss the plans around a new Canadian union catalogue based on OCLC's hosted services. Following are some of the thoughts I prepared in advance of the meeting, based on the relatively limited materials to which I had access. (I will update this post once those materials have been shared openly; they include rough implementation timelines, perhaps the most interesting of which is that the replacement system is not expected to be in production until August 2016.) Let me say at the outset that there were no solid answers on potential costs to participating libraries, other than that LAC is striving to keep the costs as low as possible.

Basic question: What form does LAC envision the solution taking?

Will it be:

  • "Library and Archives Canada begins adding records and holdings to WorldCat" as listed for many other countries in http://www.oclc.org/worldcat/catalog/national/timeline.en.html;
  • Or a separate, standalone but openly searchable WorldCat Local catalogue that Canadians can use like the Dutch or United Kingdom union catalogues (which lack significant functionality that standard WorldCat possesses, like the integrated schema.org discovery markup)?
  • Or a separate, standalone but closed catalogue like the Dutch union catalogue GGC and the Combined Regions UnityUK that require a subscription to access?

The answer was "yes, we will be adding records and holdings to WorldCat, and yes, you will be able to search a WorldCat Local instance for both LAC-specific records and AMICUS as a whole", but they're still working out the exact details. Later we determined that it will actually be WorldCat Discovery (essentially a rewrite of WorldCat Local), which assuaged some of my concerns about the current examples we can see of other OCLC-based union catalogues.

Privacy of Canadian citizens

The "Canadian office and data centre locations" requirement does not mean that usage data is exempt from Patriot Act concerns. Specifically, OCLC is an American company and thus the USA Patriot Act "allows US authorities to obtain records from any US-linked company operating in Canada" (per a 2004 brief submitted to the BC Privacy Commissioner by CIPPIC). Canadians should not be subject to this invasion of their privacy by the agents of another nation simply to use their own national union catalogue.

The response: The Justice, Agriculture, and NRCan agencies use US-hosted library systems (Evergreen, hosted by Equinox). However, one of the other participants from a federal agency reported that they had been trying to upgrade from Millennium to Sierra but have been stalled for two years, because whatever policy allowed them to go live with US-hosted Millennium is not being allowed now.

LAC claimed that, due to NAFTA, they are not allowed to insist that data be held in Canada unless it is for national security reasons. They noted that any usage data collected wouldn't be the same volume of patron data that would be seen in public libraries. They did point out that the Netherlands sends anonymized data to OCLC, but that costs money and impacts response time. Apparently, per the OCLC web site, they claim not to have had a request under the Patriot Act.

Privacy of Canadian citizens, part 2

I didn't get the chance to bring this up during the call...

LAC noted in their background that modern systems have links to social media, and apparently want this as part of a new AMICUS. This would also open up potential privacy leaks; see Eric Hellman on this topic, for example; it is also an area of interest for the recently launched ALA Patron Privacy Technologies Interest Group.

Open data

Opening up access to data is part of the federal government's stated mission. Canada's Action Plan on Open Government 2014-16 says "Open Government Foundation - Open By Default" is a keystone of its plan; "Eligible data and information will be released in standardized, open formats, free of charge, and without restrictions on reuse" under the Open Government Licence - Canada 2.0. I therefore asserted:

  • A relaunched National Union Catalogue should therefore support open data per the federal initiative from launch.
  • The open data should include bibliographic, authority, and holdings records. Guy Berthiaume's reply to CLA and CAPAL that libraries can use the Z39.50 protocol to try to access records from individual libraries' Z39.50 servers ignores one of the primary purposes of a union catalogue, which is to avoid that time-consuming search across the various Z39.50 servers of the institutions that contributed their data to the union catalogue in the first place.

The response: The ACAN requirements document indicated a requirement that the data be made available under an ODC-BY license (matching OCLC's general WorldCat license); and LAC needs to get the data back to support their federated search tool.

I asked if they had checked to see if the ODC-BY and Open Government License - Canada 2.0 licenses are compatible; they responded that that was something they would need to look into. Happily, the CLIPol tool indicates that the ODC-BY 1.0 and Open Government License - Canada 2.0 licenses are mostly compatible.

Contemporary features: are we achieving the stated goals?

The backgrounder benefits/objectives section stated: "In the current AMICUS-based context, the NUC has not kept pace with new technological functions, capabilities, and client needs. Contemporary features such as a user-oriented display and navigation, user customization, links to social media, and linked open data output were not available when AMICUS was implemented in the 1990s."

Canadian resource visibility

To preserve and promote our unique national culture, we want Canadian library resources to be as visible as possible on the web. This is generally accomplished by publishing a sitemap (a list of the web pages for a given web site, along with when each page was last updated) and allowing search engines like Google, Bing, and Yahoo to crawl those web pages and index their data.

To maximize the visibility of Canadian library resources on the open web, we need our union catalogue to generate a sitemap that points to only the actual records with holdings for Canadian libraries, not just WorldCat.org in general. For example, http://adamnet.worldcat.org/robots.txt simply points to the generic http://www.worldcat.org/libraries/sitemap_index.xml, not a specific sitemap for the Dutch union catalogue.
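To make the mechanics concrete, here is a minimal sketch of generating such a sitemap; the record URLs and dates are invented for illustration:

```python
from xml.sax.saxutils import escape

def build_sitemap(records):
    """Build a minimal sitemap XML string from (url, lastmod) pairs."""
    entries = "".join(
        f"  <url><loc>{escape(url)}</loc><lastmod>{lastmod}</lastmod></url>\n"
        for url, lastmod in records
    )
    return (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
        f"{entries}"
        "</urlset>\n"
    )

# Hypothetical record pages for a Canadian union catalogue:
records = [
    ("https://nuc.example.ca/record/1001", "2015-03-01"),
    ("https://nuc.example.ca/record/1002", "2015-02-27"),
]
print(build_sitemap(records))
```

The key point is that each `<loc>` points at an actual record page with Canadian holdings, so crawlers index our records rather than a generic landing page.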

Our union catalogue should publish schema.org metadata to improve the discoverability of our resources in search engines (which initiated the schema.org standard for that purpose). WorldCat includes schema.org metadata, but WorldCat Local instances do not.
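For illustration, here is roughly what schema.org markup for a single record could look like as JSON-LD; the title, author, and URL are invented, and a real record would carry its own identifiers (and ideally holdings links as well):

```python
import json

# Illustrative schema.org description of one catalogue record as JSON-LD.
# All values here are invented examples.
record = {
    "@context": "http://schema.org",
    "@type": "Book",
    "name": "Anne of Green Gables",
    "author": {"@type": "Person", "name": "L. M. Montgomery"},
    "inLanguage": "en",
    "url": "https://nuc.example.ca/record/1001",
}

# This would be embedded in the record page inside a
# <script type="application/ld+json"> element for search engines to read.
print(json.dumps(record, indent=2))
```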

The response: There was some confusion about schema.org, and they asked if I didn't think that OCLC's syndication program was sufficient for enabling web discoverability. I replied in the negative.

Standards support (MARC21, RDA, ISO etc.)

I didn't get a chance to raise these questions.

What standards, exactly, are meant by this?

"Technical requirements including volumetrics and W3C compliance" is also very broad and vague. With respect to "W3C compliance", W3C Standards is just the start of many standards.

  • Presumably there will be WCAG compliance for accessibility - but to what extent?
  • Both the adamnet and fablibraries instances' landing pages state that their canonical URL is www.worldcat.org, which effectively hides them from search engines.
Mobile support

The W3C Standards page mentions mobile friendliness as part of its standards.

WorldCat.org itself is not mobile friendly. It uses a separate website with different URLs to serve up mobile web pages, and does not automatically detect mobile browsers; the onus is on the user to find the "WorldCat Mobile" page, which has been in a "Beta" state since 2009. Unless you choose to ignore the massive adoption of mobile devices for searching and browsing, that "beta" contravenes the stated requirement that the AMICUS replacement service not be an alpha or beta; the beta mobile experience also lacks functionality compared to the desktop version.
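Automatic detection of mobile browsers is not an exotic feature; here is a crude server-side sketch (the token list is illustrative only; real sites generally prefer responsive design or a maintained device database):

```python
# Tokens commonly found in mobile User-Agent strings. This list is an
# illustrative sample, not an exhaustive or authoritative one.
MOBILE_TOKENS = ("Mobi", "Android", "iPhone", "iPad")

def is_mobile(user_agent: str) -> bool:
    """Crude User-Agent sniff: True if any mobile token appears."""
    return any(token in user_agent for token in MOBILE_TOKENS)

ua = "Mozilla/5.0 (iPhone; CPU iPhone OS 8_0 like Mac OS X) Mobile/12A365"
print(is_mobile(ua))  # True: serve the mobile view automatically instead of
                      # making the user hunt for a separate "Mobile" page
```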

The adamnet and fablibraries WorldCat Local instances don't advertise the mobile option, which is slightly different from the standard WorldCat Mobile version (for example, it offers record detail pages), but the navigation between desktop and mobile is sub-par. If you have bookmarked a page on the desktop, then open that bookmark in a synchronized browser on a mobile device, you can only get the desktop view.

Linked open data

Linked open data around records, holdings, and participating libraries has arguably been a standard since the W3C Library Linked Data Incubator Group issued its final report in 2011.

  • Data--including library holdings--should be available both as bulk downloads and as linked open data
  • Records need to be linked to libraries and holdings. For humans, that missing link in WorldCat is supplied by a JavaScript lookup based on geographic location info that the human supplies. This prevents other automated services from aggregating the data and creating new services based on it (including entirely Canadian-built and hosted services which would then protect Canadians from USA Patriot Act concerns).
  • MARC records should be one of the directly downloadable formats via the web. Currently, download options are limited to experimental and incomplete N-Triples, Turtle, JSON-LD, and RDF/XML formats.
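As a sketch of what consuming such linked data could look like, here is how a client might use HTTP content negotiation to ask a record URI for a particular RDF serialization. The URI is hypothetical and no request is actually sent; this only shows how the Accept header drives the format returned:

```python
import urllib.request

def rdf_request(record_uri: str, rdf_type: str = "text/turtle"):
    """Prepare an HTTP request for a record URI, asking for a specific
    RDF serialization via the Accept header (content negotiation)."""
    req = urllib.request.Request(record_uri)
    req.add_header("Accept", rdf_type)
    return req

# Hypothetical record URI; a real aggregator would then urlopen() this.
req = rdf_request("https://nuc.example.ca/record/1001", "application/ld+json")
print(req.get_header("Accept"))  # application/ld+json
```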
Application programming interface (API)

I didn't get the chance to bring this up during the call...

OCLC offers the xID API in a very limited fashion to non-members, which is one of the only ways to match ISBN, LCCN, and OCLC numbers. LAC should ensure that Canadian libraries have access to some similarly efficient means of finding matching records without having to become full OCLC Cataloguing members.
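As a rough sketch, an xISBN lookup amounts to building a URL like the one below. The base URL and parameter names follow the pattern OCLC has documented for the xID service, but treat them as assumptions to verify against the current documentation:

```python
from urllib.parse import urlencode

def xisbn_url(isbn: str, method: str = "getEditions", fmt: str = "json") -> str:
    """Build an xISBN lookup URL. The endpoint and parameter names are
    based on OCLC's published xID pattern; verify before relying on them."""
    base = f"http://xisbn.worldcat.org/webservices/xid/isbn/{isbn}"
    return f"{base}?{urlencode({'method': method, 'format': fmt})}"

print(xisbn_url("0385472579"))
```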

Updating the NUC

I didn't get the chance to bring this up during the call...

In an ideal world, the NUC would adopt the standard web indexing practice of checking sitemaps (for those libraries that produce them) on a regular (daily or weekly) basis and adding/replacing any new/modified records and holdings from the contributing libraries accordingly, rather than requiring libraries to upload their own records and holdings on an irregular basis.
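A minimal sketch of that harvesting step, using an inline sitemap fragment with invented URLs, would look something like this:

```python
import xml.etree.ElementTree as ET
from datetime import date

NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

def changed_since(sitemap_xml: str, last_harvest: date):
    """Return record URLs whose <lastmod> is newer than the last harvest."""
    root = ET.fromstring(sitemap_xml)
    urls = []
    for url in root.findall("sm:url", NS):
        loc = url.findtext("sm:loc", namespaces=NS)
        lastmod = url.findtext("sm:lastmod", namespaces=NS)
        if lastmod and date.fromisoformat(lastmod) > last_harvest:
            urls.append(loc)
    return urls

# Illustrative sitemap fragment from a contributing library:
SITEMAP = """<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://library.example.ca/record/42</loc><lastmod>2015-03-01</lastmod></url>
  <url><loc>https://library.example.ca/record/43</loc><lastmod>2015-01-15</lastmod></url>
</urlset>"""

print(changed_since(SITEMAP, date(2015, 2, 1)))
```

The NUC would then fetch and re-index only the records returned, instead of waiting for an irregular bulk upload.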

DuraSpace News: Quarterly Report from Fedora, October - December 2014

planet code4lib - Tue, 2015-03-03 00:00

From The Fedora Steering Group

Fedora Development - In the past quarter, the development team released the production release of Fedora 4.0; detailed release notes are here:

SearchHub: Thoughts on “Search vs. Discovery”

planet code4lib - Mon, 2015-03-02 22:21
“Search vs discovery” is a common dichotomy in discussions about search technology: the former is about finding specific things that are either known or assumed to exist, while the latter is about using the search/browse interface to discover what content is available. A single user session may include both of these “agendas”, especially if a user’s assumption that a certain piece of content exists is not quickly verified by finding it. Findability is impaired when there are too many irrelevant or noise hits (false positives), which obscure or camouflage the intended results. This happens when metadata is poorly managed, search relevance is poorly tuned, or when the user’s query is ambiguous and no feedback is provided by the application (such as autocomplete, recommendations, or did-you-mean) to help improve it.

Content Visibility

Content visibility is important because a document must first be included in the result set to be found (obviously), but it is also critical for discovery, especially with very large content sets. User experience has shown that faceted navigation is one of the best ways to provide this visualization, especially if it includes dimensions that focus on “aboutness” and “relatedness”. However, if a document is not appropriately tagged, it may become invisible to the user once the facet that it should be included in (but is not) is selected. Data quality really matters here! (My colleague Mark Bennett has authored a Data Quality Toolkit to help with this. The venerable Lucene Index Toolbox or “Luke”, which can be used to inspect the back-end Lucene index, is also very useful. The LukeRequestHandler is bundled with Solr.) Without appropriate metadata, the search engine has no way of knowing what is related to what. Search engines are not smart in this way – the intelligence of a search application is built into its index.

Search and Content Curation

Findability and visibility are also very important when the search application is used as a tool for content curation within an organization. Sometimes the search agenda is to see if something has been created before, as a due-diligence activity before creating it. Thus, the phrase “out of sight, out of mind” becomes important: content that can’t be found tends to be re-created. This leads to unnecessary duplication, which is wasteful but also counter-productive to search, both by adding to the repository size and by increasing the possibility of obfuscation by similarity. Applying “deduplication” processes after the fact is a band-aid – we should make it easier to find things in the first place so we don’t have to do more work later to clean up the mess. We also need to be confident in our search results, so that if we don’t find something, it is likely that it doesn’t exist – see my comments on this point in Introducing Query Autofiltering. Note that this is always a slippery slope. In science, absence of evidence does not equate to evidence of absence – hence “Finding Bigfoot”! (If they ever do find “Squatch” then no more show – or they have to change the title to “Bigfoot Found!” – which would be very popular but also couldn’t be a series! That’s OK, I only watched it once to discover that they don’t actually “find” Bigfoot – hence the ‘ing’ suffix. I suppose that “Searching for” sounds too futile to tune in even once.)

Auto-classification Tuning

Auto-classification technology is a potential cure in all of the above cases, but it can also exacerbate the problem if not properly managed. Machine learning approaches, or ontologies and associated rules, provide ways to enhance the relevance of important documents and to organize them in ways that improve both search and discovery. However, in the early phases of development it is likely that an auto-classification system will make two types of errors that, if not fixed, can lead to problems of both findability and visibility. First, it will tag documents erroneously, leading to the camouflage or noise problem; second, it will not tag documents that it should, leading to a problem with content visibility. We call these “precision” and “recall” errors respectively. The recall error is especially insidious because, if not detected, it will cause documents to be dropped from consideration when a navigation facet is clicked. Also, errors of omission are more difficult to detect, and require the input of people who understand the content set well enough to know what the autoclassifier “should” do. Manual tagging, while potentially more accurate, is simply not feasible in many cases because Subject Matter Experts are difficult to outsource. Data quality analysis/curation is the key here. Many times the problem is not the search engine’s fault. Garbage in, garbage out, as the saying goes.

Data Visualization – Search-Driven Analytics

I think that one of the most exciting uses of search as a discovery tool is the combination of the search paradigm with analytics. This used to be the purview of the relational database model, which is at the core of what we call “Business Intelligence” or BI. Reports generated by analysts from relational data go under the rubric of OLAP (online analytical processing), which typically involves a data analyst who designs a set of relational queries, the output of which is then input to a graphing engine to generate a set of charts. When the data changes, the OLAP “cube” is re-executed and a new report emerges. Generating new ways to look at the data requires the development, testing, etc. of new cubes. This process by its very nature leads to stagnation – cubes are expensive to create, and this may stifle new ideas since some expert labor is required to bring them to fruition.

Search engines and relational databases are very different animals. Search engines are not as good as an RDBMS at several things – ACID transactions, relational joins, etc. – but they are much better at dealing with complex queries that include both structured and unstructured (textual) components. Search indexes like Lucene can include numerical, spatial, and temporal data alongside textual information. Using facets, they can also count things that are the output of these complex queries. This enables us to ask more interesting questions about data – questions that get to “why” something happened rather than just “what”. Furthermore, recent enhancements to Solr have added statistical analyses to the mix – we can now develop highly interactive data discovery/visualization applications which remove the data analyst from the loop. While there is still a case for traditional BI, search-driven discovery will fill the gap by allowing any user – technical or not – to ask the “what if” questions. Once an important analysis has been discovered, it can be encapsulated as an OLAP cube so that the intelligence of its questions can be productized and disseminated.

Since this section is about visualization and there are no pictures in this post, you may want to “see” examples of what I am talking about. First, check out Chris Hostetter (aka “Hoss”)’s blog post “Hey, You Got Your Facets in My Stats! You Got Your Stats In My Facets!!”, and his earlier post on pivot facets. Another way cool demonstration of this capability comes from Sam Mefford when he worked at Avalon Consulting – this is a very compelling demonstration of how faceted search can be used as a discovery/visualization tool. Bravo Sam! This is where the rubber meets the road, folks!
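To make this concrete, here is a purely illustrative sketch of a Solr request that combines pivot facets with the stats component, the mechanism behind the posts linked above. The collection name ("sales") and field names ("region", "product", "amount") are invented, and the local-params syntax should be verified against the Solr version in use:

```python
from urllib.parse import urlencode

# Illustrative Solr query: numeric stats on "amount", computed per cell of
# a region/product pivot facet. All names here are invented examples.
params = [
    ("q", "*:*"),
    ("rows", "0"),                                   # only the aggregates
    ("stats", "true"),
    ("stats.field", "{!tag=amt}amount"),             # tag the stats field
    ("facet", "true"),
    ("facet.pivot", "{!stats=amt}region,product"),   # stats inside the pivot
    ("wt", "json"),
]

url = "http://localhost:8983/solr/sales/select?" + urlencode(params)
print(url)
```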

The post Thoughts on “Search vs. Discovery” appeared first on Lucidworks.

District Dispatch: Free webinar: Expanding immigrant access through libraries

planet code4lib - Mon, 2015-03-02 22:17

Hartford Public Library

Library services to immigrants are extensive and include world language collections, multicultural programming, ESL, citizenship, computer classes, and information brokering. Learn how your library can better support immigrants in “We Belong Here: Expanding Immigrant Access to Government and Community,” a free webinar hosted by e-government service Lib2gov from the American Library Association’s Washington Office and University of Maryland’s iPAC.

This webinar will focus on e-government services that open access for immigrants, using the Hartford Public Library’s American Place Initiative as a national model for immigrant services, resources, and engagement through public libraries.

Homa Naficy, chief adult learning officer for the Hartford Public Library, will lead the interactive webinar. Naficy joined Hartford Public Library in 2000 to design and direct The American Place program for Hartford's immigrants and refugees. Born in Paris, a native of Iran, and now an American citizen, she began her library career as a reference librarian at Newark Public Library. Before joining the staff of Hartford Public Library, she served as a reference librarian at Yonkers Public Library and later as librarian for Adult Services and Outreach for the Westchester Library System.

The American Place has become a magnet for new arrivals seeking immigration information, resources for learning English, and preparation for United States citizenship. In 2010, the program was awarded two major grants: a citizenship education grant from United States Citizenship and Immigration Services (Hartford was the only library in the nation to receive such funding), and a National Leadership grant from the Institute of Museum and Library Services designed to promote immigrant civic engagement. On completion, this project will serve as a model for other libraries nationally. Hartford Public Library is also the only library in the state to receive funding for adult basic education from the Connecticut Department of Education. In 2001, Ms. Naficy received the Connecticut Immigrant of the Year Award, and in 2013 she was chosen a "Champion of Change" by The White House.

Webinar title: We Belong Here: Expanding Immigrant Access to Government and Community
Date: March 11, 2015
Time: 2:00-3:00 p.m. EST
Register now

The webinar will be archived.

The post Free webinar: Expanding immigrant access through libraries appeared first on District Dispatch.

Islandora: Islandora/Fedora 4 Project Update II

planet code4lib - Mon, 2015-03-02 20:21

On Friday, February 27th, the Fedora 4 Interest Group met for the second time to discuss the progress of our big upgration (the first meeting was back at the end of January). The full notes from the meeting are here, but I'll summarize some of the highlights:

Project Updates

The project has entered its second month with plenty accomplished. Nick was sent to Code4Lib 2015 in Portland, Oregon to work with our Technical Lead, Danny Lamb. The two worked on the proof-of-concept, and it was presented as a lightning talk (video demo). Additionally, Nick and Danny worked with the Hydra and Fedora communities on a shared data model, Hydra Works, which evolved into the Fedora Community Data Model.

After Code4Lib 2015, Nick and Danny focused on updating the Technical Design document, which covers:

  1. an understanding of the Islandora 7.x-2.x design rationale
  2. the importance of using an integration framework
  3. the use of Camel
  4. inversion of control and Camel
  5. Camel and scripting languages
  6. Islandora Sync
  7. Solr and triplestore indexing
  8. Islandora (Drupal)

Or, to sum up the new ways of Islandora in one image:

Nick and Danny also focused on the development virtual environment (DevOps) for the project. Nick decided to move away from using Chef and Berkshelf due to dependency-support issues. The DevOps setup was moved to basic bash scripts and Vagrant. Contributors to the project can now spin up a virtual development environment (which includes the proof-of-concept) in about 5 minutes with a single command: vagrant up. Instructions here.

Nick also focused on project documentation and documentation deployment. All documentation resides in the git repository for the project, in Markdown format. The documentation can be generated into a static site with MkDocs and then deployed to GitHub Pages. The documentation for the project can be viewed here, and information about how the documentation is built and deployed can be found here. There is also an outline of how you can contribute to the project here (regardless of your background; we need far more than programmers).

A new use case template makes bringing your ideas to the table much easier. Check out some of the existing use cases for examples - and add yours!

Nick, Danny, and Melissa also did an interview for Duraspace.

Upgration

The upgration portion of the project is dependent on a couple of sub-items of the project playing out, but continues in tandem.

The first sub-item is the Fedora Audit Service. The Islandora community makes use of the audit service in Fedora 3.x for PREMIS and other provenance services. It does not currently exist in Fedora 4.x, so the community has come together to plan out the service over two conference calls that will outline use cases and functional requirements, which will then translate into JIRA tickets for a Fedora code sprint in late March. Notes from the first meeting are here. Nick has been tasked with identifying whether the community should use the PROV-O ontology, the PREMIS ontology, or a combination of both. The second item is bridging the work of Mike Durbin’s migration-utils and Danny’s Apache Camel work in the Islandora & Fedora 4 project. While Nick was working to create test fixtures for Mike and Danny, he discovered a bug in Fedora 3.8.0, which will need to be resolved before any test fixtures can come out of York University's upgration pilot.

Nick and Danny will most likely focus on migration work and community contributed developer tasks in March.

Funding

The Islandora Foundation is pleased to welcome Simon Fraser University as a Partner for their support of the Fedora 4 project. Longtime member PALS has also earmarked some of their membership dues to help out the upgration. If you or your institution are interested in being financial supporters, please drop me a line.

Other News

Contributor Kevin Bowrin wrote up an account of their experience installing and trying out the work our team has done so far. Check it out.
