
planet code4lib

Planet Code4Lib - http://planet.code4lib.org

SearchHub: Erik Hatcher, Developer on Fire, Profiled by Dave Rael

Sat, 2016-01-09 00:10

Lucidworks engineer Erik Hatcher, Apache Lucene committer and co-author of Lucene in Action and Java Development with Ant, is profiled on Dave Rael’s Developer on Fire podcast.

Subscribe or listen on iTunes or via the podcast’s feed.


Galen Charlton: Whence library technology innovation?

Fri, 2016-01-08 22:30

Rob McGee has been moderating the “View from the Top” presidents [of library technology companies] seminar for 26 years. As an exercise in grilling executives, its value to librarians varies; while CEOs, presidents, senior VPs and the like show up, the discussion is usually constrained. Needless to say, it’s not common for concerns to be voiced openly by the panelists, and this year was no different. The trend of consolidation in the library automation industry continued to nobody’s surprise; that a good 40 minutes of the panel was spent discussing the nuts and bolts of who bought whom for how much did not result in any scintillating disclosures.

But McGee sometimes mixes it up. I was present to watch the panel, but ended up letting myself get plucked from the audience to make a couple comments.

One of the topics discussed during the latter half of the panel was patron privacy, and I ended up in the happy position of getting the last word in, to the effect that for 2016, patron privacy is a technology trend. With the ongoing good work of the Library Freedom Project and Eric Hellman, the release of the NISO Privacy Principles, the launch of Let’s Encrypt, and various efforts by groups within ALA doing educational and policy work related to patron privacy, lots of progress is being made in turning our values into working code.

However, the reason I ended up on the panel was that McGee wanted to stir the pot about where innovation in library technology comes from. The gist of my response: it comes from the libraries themselves and from free and open source projects initiated by libraries.

This statement requires some justification.

First, here are some things that I don’t believe:

  • The big vendors don’t innovate. Wrong: if innovation is an idea plus the ability to implement it plus the ability to convince others that the idea is good in the first place, well, the big firms do have plenty of resources to apply to solving problems. So do, of course, the likes of OCLC and, in particular, OCLC Research. On the other hand, big firms do have constraints that limit the sorts of risks they can take. It’s one thing for a library project to fail or for a startup to go bust; it’s another thing for a firm employing hundreds of people and (often) answering to venture capital to take certain kinds of technology risks: nobody is running Taos or Horizon 8, and nobody wants to be the one to propose the next big failure.
  • Libraries are the only source of innovative new ideas. Nope; lots of good ideas come from outside of libraries (although that’s no reason to think that they only originate from outside). Also, automation vendors can attain a perspective that few librarians enjoy: I submit that there are very few professional librarians outside of vendor employees who have broad experience with school libraries and public libraries and academic libraries and special libraries and national libraries. A vendor librarian who works as an implementation project manager can gain that breadth of experience in the space of three years.
  • Only developers who work exclusively in free or open source projects come up with good ideas. Or only developers who work exclusively for proprietary vendors come up with good ideas. No: technical judgment and good design sense doesn’t distribute itself that way.
  • Every idea for an improvement to library software is an innovation. Librarians are not less prone to bikeshedding than anybody else (nor are they necessarily more prone to it). However, there is undoubtedly a lot of time and money spent on local tweaks, or small tweaks, or small and local tweaks (for both proprietary and F/LOSS projects) that would be better redirected to new things that better serve libraries and their users.

That out of the way, here’s what I do believe:

  • Libraries have initiated a large number of software and technology projects that achieved success, and continue to do so. Geac, anybody? NOTIS? VTLS? ALEPH. Many ILSs had their roots in library projects that later were commercialized. For that matter, from one point of view both Koha and Evergreen are also examples of ILSs initiated by libraries that got commercialized; it’s just that the free software model provides a better way of doing it as opposed to spinning off a proprietary firm.
  • Free and open source software models provide a way for libraries to experiment and more readily get others to contribute to the experiments than was the case previously.
  • And finally, libraries have different incentives that affect not just how they innovate, but to what end. It still matters that the starting point of most library projects is better serving the needs of the library, its users, or both, not seeking a large profit in three years’ time.

But about that last point and the period of three years to profit—I didn’t pull that number out of my hat; it came from a fellow panelist who was describing the timeframe that venture capital firms care about. (So maybe that nuts-and-bolts discussion about mergers and acquisitions was useful after all).

Libraries can afford to take a longer view. More time, in turn, can contribute to innovations that last.

Villanova Library Technology Blog: Sarah Wingo and Kallie Stahl in the Classroom

Fri, 2016-01-08 20:56

Sarah Wingo and Kallie Stahl

Sarah Wingo, Humanities II team leader and subject librarian for English, literature and theatre, taught an eight-week honors course last semester. Her course, “Superheroes as Modern Mythology,” looked at comic books and their heroes as modern mythology. Wingo focused on the DC and Marvel comic books and movie franchises and also explored fan culture, history and other topics related to comic books.

When asked how a librarian with her background in Shakespeare and other early modern English playwrights became interested in pop culture comic book superheroes, Wingo answered, “[O]ne of the things that always fascinated me about Shakespeare … is that during his time Shakespeare wasn’t seen as the highbrow cultural icon that he is today. Shakespeare’s plays were a form of popular entertainment. … I’m interested in popular culture and popular entertainment, whether it be in Elizabethan England or 2015. I’m interested in what it says about us as a society and how we engage with it as a society.”

Wingo went on to explain that she had watched the Batman, Spiderman and X-Men series in the 1980s and ‘90s and more recently her partner, who is interested in comic books and related media, has stimulated her interest in comic books and superheroes. She said, “It is easy to dismiss comic books and superheroes as childish, but just like Shakespeare they are responding to their times and dealing with cultural and societal themes that are important to the society in which they are created.”

As a finale to the course, Wingo invited Kallie Stahl, a graduate assistant to Falvey’s Scholarly Outreach team, to give a presentation on her current research on fandom. Fandom, according to Stahl and the “Urban Dictionary,” consists of a “community that surrounds a TV show/movie/book, etc.” The community may include message boards, online groups and other forms of communication.

Stahl is a second year graduate student, working on a master’s degree in communication. Her interests are popular culture, new media and cultural studies. Her research on fandom focuses on “Castle,” a popular television program.



David Rosenthal: Aggregating Web Archives

Fri, 2016-01-08 19:00
Starting five years ago, I've posted many times about the importance of Memento (RFC7089), and in particular about the way Memento Aggregators in principle allow the contents of all Web archives to be treated as a single, homogeneous resource. I'm part of an effort by Sawood Alam and others to address some of the issues in turning this potential into reality. Sawood has a post on the IIPC blog, Memento: Help Us Route URI Lookups to the Right Archives that reveals two interesting aspects of this work.

First, Ilya Kreymer's oldweb.today shows there is a significant demand for aggregation:
We learned in the recent surge of oldweb.today (that uses MemGator to aggregate mementos from various archives) that some upstream archives had issues handling the sudden increase in the traffic and had to be removed from the list of aggregated archives.

Second, the overlap between the collections at different Web archives is low, as shown in Sawood's diagram. This means that the contribution of even small Web archives to the effectiveness of the aggregated whole is significant.

This is important in an environment where the Internet Archive has by far the biggest collection of preserved Web pages. It can be easy to think that the efforts of other Web archives add little. But Sawood's research shows that, if they can be effectively aggregated, even small Web archives can make a contribution.
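For a concrete sense of what an aggregator does on every lookup, here is a minimal sketch of Memento (RFC 7089) datetime negotiation against an aggregator TimeGate. The TimeGate base URL, the target URL, and the datetime below are illustrative assumptions, not details taken from Sawood's post.

```python
# Minimal sketch: ask a Memento aggregator TimeGate for an archived copy of a
# page near a given datetime. The aggregator URL and target are assumptions.
import requests

TIMEGATE = "http://timetravel.mementoweb.org/timegate/"  # assumed aggregator TimeGate
target = "http://example.com/"                            # URI we want an archived copy of

resp = requests.get(
    TIMEGATE + target,
    headers={"Accept-Datetime": "Thu, 01 Jan 2015 00:00:00 GMT"},
    allow_redirects=True,
)

# A Memento-compliant TimeGate redirects to the memento closest to the
# requested datetime; the memento carries its archival datetime in a header.
print("Memento URI:     ", resp.url)
print("Memento-Datetime:", resp.headers.get("Memento-Datetime"))
# The Link header advertises related resources (original, timemap, first/last memento).
print("Link header:     ", resp.headers.get("Link", "")[:200])
```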

LITA: Flexing your instructional muscles: using technology to extend your reach beyond the classroom

Fri, 2016-01-08 17:53

We’re in the midst of re-thinking our entire Information Literacy curriculum, and I’ve been waxing philosophical on the role technology will play in this new and uncharted land. The new Framework for Information Literacy has thrown the instructional library world into a tizzy. We are all grappling with everything from understanding the threshold concepts themselves to determining how to best teach them. We’ve done this all along, of course, with the previous Standards for Information Literacy, but there’s something about this new incarnation that seems to perplex and challenge at the same time.

For me, the biggest revelation was the idea that we could no longer rely on the traditional 50-minute one-shot to cover all of these concepts in one fell swoop. But wait, you might say, we never did that before either! That may be true, but there was something comforting and decidedly familiar in that set of neatly laid out outcomes, which one could follow almost like a recipe, making it feel as though it could all be accomplished in one sitting and, more importantly, as though the students would be able to learn it. I’m the first to admit that it was easy for me to fall into this pattern; I was so focused on making that one interaction perfect that I didn’t really think much about what happened before or after it.

And perhaps it’s purely a placebo effect at play here, but the framework turned on a light bulb for me that had previously remained unlit. Of course you cannot cover everything there is to know about Information Literacy in one session! Readers might be tearing their hair out right now and yelling at the screen that this is a very obvious observation. And perhaps it is, but it has helped me realize how important a role technology plays in helping us think beyond the one-shot.

There’s been a ton of discussion about the benefits of the flipped classroom before students even see you so that you can dispense with the more mundane elements and cover the good stuff in class. But what happens after that? What if you don’t have a chance to assess that class or ever talk to those students again? What I’m really talking about is the post instruction flip. With this model, you still retain the one-shot format (because it’s still very much part of our instructional reality), but the conceptual underpinnings of the framework can now be stretched out across the entire semester and the online content can help you maintain a presence and still deliver the needed information.

Challenges and Opportunities

Two major obstacles might present themselves in your minds at this point: the willingness of the faculty to let your virtual presence linger and a lack of resources. As I think I’ve mentioned in other posts, piloting is often the key to success. Work with a few faculty who you know will be willing to let you post or send additional learning objects after the session with their students is over and who will ensure that students take it seriously by making it an actual part of the assignment. Modular objects work well for this reason and can cover both more abstract ideas and more point-and-click skills.

Assessment will also be crucial here, and if you can compare the results of students who had access to this online content with those who didn’t, that will help you scale up this model, especially if the scores show a positive difference.

A final component to overcoming the first obstacle is integration with the course. This is where you will have to decide what needs to go online and at what point, how it should be accessed, and all the surrounding logistics. This will require collaboration, ongoing discussion, and some tailoring of content depending on the assignment and the pace of the course. This whole idea is especially important if you don’t have a large staff on hand and you want to concentrate your efforts on those very important in-person interactions and let the virtual content help with the rest.

What if you don’t have an unlimited amount of money and staff time to create an amazing tutorial? There are free tools out there that don’t require much of a learning curve; for example, Blendspace allows you to drag and drop content from other sources in the form of tiles that can be mixed and matched. In addition to the built-in quiz option, there’s a feature that allows students to tell you whether they understood the content, providing yet another opportunity for you to give feedback and clarification.

Blubbr TV is a trivia-type tool that allows you to append questions to video clips. Although it’s meant to foster team-based competition, it’s an easy way to assess comprehension of basic concepts. So if a student had watched a video on Boolean operators, you wouldn’t need to include that as part of your assessment, because you would already know how he/she did and could address issues much more quickly and directly.

I do want to make a side point here, so please bear with me. The idea is not so much that you are using these tools to help you assess student learning (there are many ways to do that, both with and without technology), but rather that you are using these resources to provide additional support for students as they go through the entire process, not simply at an arbitrary point in time chosen by the faculty member because it works with the schedule. Too often we feel compelled to create an entire tutorial that covers everything, and we get overwhelmed with the details and the potential cost in staff time or software; with this model you’re not trying to recreate the one-shot online, but rather to enrich and broaden it.

A final tip is to get to know your campus instructional technologist and/or designer. He/she can help point you in the right direction, whether it’s about a new tool you might not have seen before or simply a pedagogical approach to maximize the benefit of your online resources. More and more I find myself turning to this field for inspiration and ideas and am finding applications for instructional tools and activities I didn’t consider before simply by looking outside the library.

Conclusion

Now that I’ve thoroughly vexed you with my musings, it was all to say that technology is going to become even more important as we continue to explore the complexities of the framework and delve into its intricate layers. Using online tools will not always alleviate our time and staffing issues, but it should help us to continue working with students well beyond the time we see them and hopefully it will provide, perhaps ironically, greater individualized interaction at the point of need, and help us realize that the one-shot is not the end, but rather just the beginning.

*Images taken from Pixabay

Villanova Library Technology Blog: What could be better than two new printers?

Fri, 2016-01-08 15:08

Three new printers have replaced the two public printers on Falvey’s first floor.

Although smaller than the previous printers, their speed is about the same. Most importantly, three machines provide a much greater capacity.

If a printer needs paper, has an error message, has a paper jam, or has any other problem, please notify the Service Desk Supervisor.

Falvey staff received specialized training from the supplier on how to service these new machines. Having only trained personnel service the printers will ensure that repairs are accurate and quick and that the printers will avoid chronic problems in the future.

Library staff welcome this improvement to our services and remain committed to your success.



Villanova Library Technology Blog: Content Roundup – end of 2015 and first week of 2016

Fri, 2016-01-08 12:55

[1] p., Catholic Almanac for 1846

As the old year ends and the new year begins, take time to reflect on some treasures of the past, including a host of newly digitized historic popular literature titles such as the fictional work “The Man” by Elbert Hubbard, published under the pen name Aspasia Hobbs, which tells the “true” story of the then 300-year-old Shakespeare living in a log cabin in the woods outside of Buffalo.

Catholica

The Official Catholic directory  (1846 volume added)
[http://digital.library.villanova.edu/Item/vudl:424829]

Dime Novel and Popular Literature

Fiction

Front cover, The senator’s bride / by Mrs. Alex. McVeigh Miller

The senator’s bride / by Mrs. Alex. McVeigh Miller
[http://digital.library.villanova.edu/Item/vudl:440123]

Front cover (selection), The man : a story of to-day : with facts, fancies and faults peculiarly its own

The man: a story of to-day: with facts, fancies and faults peculiarly its own: containing certain truths heretofore unpublished concerning right relation of the sexes, etc., etc. / by Aspasia Hobbs
[http://digital.library.villanova.edu/Item/vudl:439628]

Front cover, A dangerous flirtation; or, Did Ida May sin? / by Miss Laura Jean Libbey

A dangerous flirtation; or, Did Ida May sin? / by Miss Laura Jean Libbey
[http://digital.library.villanova.edu/Item/vudl:439857]

Front cover, The Brighton boys at St. Mihiel / by Lieutenant James R. Driscoll

The Brighton boys at St. Mihiel / by Lieutenant James R. Driscoll
[http://digital.library.villanova.edu/Item/vudl:440942]

Front cover, Old Sleuth’s triumph; or, The great Bronx mystery / by “Old Sleuth”

Old Sleuth Library (9 issues added)
[http://digital.library.villanova.edu/Item/vudl:438824]
[http://digital.library.villanova.edu/Item/vudl:438862]
[http://digital.library.villanova.edu/Item/vudl:438900]
[http://digital.library.villanova.edu/Item/vudl:439246]
[http://digital.library.villanova.edu/Item/vudl:439558]
[http://digital.library.villanova.edu/Item/vudl:437298]
[http://digital.library.villanova.edu/Item/vudl:438748]
[http://digital.library.villanova.edu/Item/vudl:438786]
[http://digital.library.villanova.edu/Item/vudl:437260]

Non-Fiction

Front cover, How to tell fortunes : containing Napoleon’s Oraculum

How to tell fortunes : containing Napoleon’s Oraculum and the key to work it : also tells fortunes by cards, lucky and unlucky days, signs and omens
[http://digital.library.villanova.edu/Item/vudl:441363]

Periodicals

[1] p., Happy days, v. XXX, no. 780, September 25, 1909

Happy Days (16 issues added)
[http://digital.library.villanova.edu/Item/vudl:434548]
[http://digital.library.villanova.edu/Item/vudl:434566]
[http://digital.library.villanova.edu/Item/vudl:434584]
[http://digital.library.villanova.edu/Item/vudl:434602]
[http://digital.library.villanova.edu/Item/vudl:434620]
[http://digital.library.villanova.edu/Item/vudl:435605]
[http://digital.library.villanova.edu/Item/vudl:435623]
[http://digital.library.villanova.edu/Item/vudl:435641]
[http://digital.library.villanova.edu/Item/vudl:435659]
[http://digital.library.villanova.edu/Item/vudl:435677]
[http://digital.library.villanova.edu/Item/vudl:435695]
[http://digital.library.villanova.edu/Item/vudl:435713]
[http://digital.library.villanova.edu/Item/vudl:434638]
[http://digital.library.villanova.edu/Item/vudl:435731]
[http://digital.library.villanova.edu/Item/vudl:435749]
[http://digital.library.villanova.edu/Item/vudl:434530]

The Young Men of America (1 issue added)
[http://digital.library.villanova.edu/Item/vudl:441344]

German Society of Pennsylvania

[1] p., Deutsch-Amerika, v.2, no. 52, December 23, 1916

Deutsch-Amerika (40 issues added)
[http://digital.library.villanova.edu/Collection/vudl:428065]

Philadelphia Archdiocesan Historical Research Center

Historic Papers

Catholic Club of Philadelphia Records, 1871-1923 (52 items added)
[http://digital.library.villanova.edu/Item/vudl:428999]

Villanova Digital Collection

Falvey Memorial Library

Daily Doodles

Daily Doodle, “Stephen Hawking 71st birthday”, January 8, 2013

2013 (4 added)
[http://digital.library.villanova.edu/Item/vudl:441442]

2015 (7 added)
[http://digital.library.villanova.edu/Item/vudl:415994]

Van Houten’s Cocoa Ad, Inside Front cover, “The Man”.



FOSS4Lib Recent Releases: Senayan Library Management System (SLiMS) - 8 ( Akasia )

Fri, 2016-01-08 10:16
Package: Senayan Library Management System (SLiMS)
Release Date: Tuesday, December 1, 2015


Additional features include:
* New OPAC template, new Admin template
* System environment display for troubleshooting
* Partial RDA implementation (transitional towards full implementation)
* Inbuilt staff-patron chat system
* Generation of citations from bibliographic entries, in a variety of common formats, using a template model which can be expanded to include other styles

Karen G. Schneider: Speaking about writing: I nominate me

Fri, 2016-01-08 04:27

I have been immersed in a wonderful ordinariness: completing my first full year as dean, moving my doctoral work toward the proposal-almost-ready stage, and observing the calendar in my personal life. In November I pulled Piney III, our Christmas tree, out of his box in the garage, and he is staying up until next weekend. We missed him last year, so he gets to spend a little more time with us this season.

Meanwhile, I spent a few spare moments this week trying to wrap my head around a LibraryLand kerfuffle. An article was published in American Libraries that, according to the authors, was edited after the fact to include comments favorable to a vendor. I heard back-alley comments that this wasn’t the full story and that the authors hadn’t followed the scope, which had directed them to include this perspective, and therefore it was really their fault for not following direction and complaining, etc. And on the social networks, everyone got their knickers in a twist and then, as happens, moved on. But as someone with a long publishing history, this has lingered with me (and not only because someone had to mansplain to me, “Have you read the article?” Yes, I had read the article…).

Here’s my offer. I have been fairly low-key in our profession for a couple of years, while I deal with a huge new job, a doctoral program, family medical crises, household moves, and so on. My term on ALA Council ended last summer, and while I do plan to get involved in ALA governance again, it’s not immediate.

But once upon a time, I made a great pitch to American Libraries. I said, you should have a column about the Internet, and I should write it. I had to walk around the block four times before I screwed up enough courage to go into 50 East Huron and make that pitch (and I felt as if I had an avocado in my throat the whole time), but thus the Internet Librarian column was born, and lo it continues on to this day, two decades later.

My pitch these days is that American Libraries steal a page from the New York Times and appoint a Public Editor or, if you prefer, Ombudsman (Ombudswimmin?), and that person should be me. Why me? Because I have a strong appreciation for all aspects of publishing. Because I’ve been an author and a vendor. Because I may be an iconoclast, but most people see me as fair. Because a situation like this needs adjudication before it becomes fodder for Twitter or Facebook. Because at times articles might even need discussion when no one is discussing them. Because I came up with the idea, and admit it, it’s a really good one.

A long time ago, when I was active in Democratic Party politics in Manhattan, a politician in NY made himself locally famous for saying of another pol, “He is not for sale… but he can be rented.” One thing about me, despite two books, over 100 articles, being a Pushcart nominee, being anthologized, etc.: I am not for sale or for rent. That has at times limited my ascendancy in certain circles, but it makes me perfect for this role.

If you’re on the board of American Libraries, or you know someone who is, give this some thought. We all have a place in the universe. I feel this would be perfect for me, and a boon for the profession.


Evergreen ILS: Join us in Boston!

Thu, 2016-01-07 22:06

Are you going to ALA Midwinter in Boston this weekend? If so, we invite Evergreen users, enthusiasts, or those who are just interested in learning more about this great open-source library system to join us for a meetup this Saturday at Stan Getz Library, Berklee College of Music. The meetup is scheduled for 4:30 to 6 p.m. Saturday, January 9.

Here are some of the activities you can look forward to at the meetup:

  • We’ll look at some of the new features that have been added to Evergreen in the most-recent release.
  • 2.10 Release Manager Galen Charlton will talk about plans for the March Evergreen release.
  • We’ll share some community highlights from the past year.
  • We’ll talk about any other Evergreen or open-source issues and questions that are on people’s minds.

There are two ways to get to the Stan Getz Library:

  • Best option if coming from the convention center: Take the ALA Shuttle to the Sheraton Boston stop (Route 5). The bus will drop you off on Dalton St. Walk towards Belvidere Street, where you will take a right. Take a right when you reach Massachusetts Ave. The library is located on the right at 142 Massachusetts Avenue.
  • Public transportation: Take the MBTA green line (B, C, or D) to the Hynes Convention Center stop. As you leave the subway station, take a left on Massachusetts Avenue. The library is located on the left at 142 Massachusetts Avenue.

We also have a Google map showing walking directions from both locations.

Since the school is still on break, the library is closed, but Yamil Suarez will be available to escort everyone to the meeting room. If nobody is at the door when you arrive, call Yamil at 617-748-2617.

Feel free to send any questions along to Kathy Lussier at klussier@masslnc.org.

We look forward to seeing you at Midwinter!

 

LibX: Signed LibX Add-On Pushed for Firefox

Thu, 2016-01-07 21:08

We just pushed a signed LibX add-on for Firefox.

If you want to pull in the update immediately, open the Firefox browser, select Add-Ons, then Check for Updates. It will ask you to restart the browser.

Please let us know if you see any problems.

Thank you for your patience,
Annette & Godmar

LibX: LibX, Firefox, and Signatures

Thu, 2016-01-07 20:59

LibX is currently working in Google Chrome.

LibX is currently disabled in Firefox version 43.

We have edited the LibX code so that it passes Mozilla’s automatic verification. We can now upload code, have it checked, get it signed, and then download it. We are still working on a bug fix and the creation of an update.rdf file to push to you, our users.

We will post updates on this site.

Annette & Godmar

LibraryThing (Thingology): ALAMW 2016 in Boston (and Free Passes)!

Thu, 2016-01-07 20:20

Abby and KJ will be at ALA Midwinter in Boston this weekend, showing off LibraryThing for Libraries. Since the conference is so close to LibraryThing headquarters, chances are good that a few other LT staff members may appear, as well!

Visit Us. Stop by booth #1717 to meet Abby & KJ (and potential mystery guests!), get a demo, and learn about all the new and fun things we’re up to with LibraryThing for Libraries, TinyCat, and LibraryThing.

Get in Free. Are you in the Boston area and want to go to ALAMW? We have free exhibit only passes. Click here to sign up and get one! Note: It will get you just into the exhibit hall, not the conference sessions themselves.

Open Knowledge Foundation: Open Data goes local in Nepal: Findings of Nepal Open Data Index 2015

Thu, 2016-01-07 19:02

Nepal Open Data Index 2015 – White Paper

The Local Open Data Index Nepal 2015 is a crowdsourced survey that examines the availability of Open Data at city level. The survey was conducted for the second time in Nepal by Open Knowledge Nepal. See our previous post that announced the local index here.

Background

To decentralize power from the central authority to the district, village, and municipality levels, the Government of Nepal uses the Local Self Governance Regulation, 2056 (1999), under which Village Development Committees (VDCs) and District Development Committees (DDCs) act as both planners and program-implementing bodies of the government. Municipalities perform the same kinds of tasks at a smaller scale, which has created difficulties in understanding the layers of governing units. This overlapping of powers and roles is also found in the government data space; average citizens still don’t know which local governance units are responsible for the data they need. This highlights the importance of a survey around open data and publishing.

Global surveys such as the Global Open Data Index and Open Data Barometer have taught us that the availability of open data and participatory governance in Nepal is not reaching its full potential, in terms of everything from citizen readiness to data release and data infrastructure. Using World Wide Web Foundation terminology, in Nepal we are operating in a “capacity constrained” environment.

Furthermore, in Nepal citizen participation and the use of open data often make more sense and are more powerful at the local level, as it is local governments that handle national and international projects for citizens and generate data from them. However, open data is still a new concept in Nepal; the central government has only just started releasing data, and data is even less available at the local level.

Why do we need a Local Open Data Index in Nepal?

The Local Open Data Index is intended to help put the discrepancies at the local level on the map (literally!). Peter Drucker said, “What gets measured gets managed.” Mapping the gaps will aid strategic planning and help create a framework for action and citizen engagement at all levels.

For local governments to adopt openness, they need to understand the what, why and how of opening up their data. Governments need to learn that making data open is not only a means to make them accountable (or worse – alarmed), but also a tool to help them become more efficient and effective in their work. Governments need to understand that opening data is only the beginning of participatory governance, and to participate they need well-defined and easy-to-adopt mechanisms.

The Local Open Data Index for Nepal will help in assessing the baseline of availability and nature of open data in Nepali cities. This will help to identify gaps, and plan strategic actions to make maximum impact.

Summary

A survey was conducted in 10 major cities of Nepal by open data enthusiasts and volunteers inside and outside of Open Knowledge Nepal. The cities chosen were Kathmandu, Bhaktapur, Butwal, Chitwan, Dolakha, Dhading, Hetauda, Kavre, Lalitpur, and Pokhara. The datasets surveyed were Annual Budget, Procurement Contracts, Crime Statistics, Business Permits, Traffic Accidents, and Air Quality.

Unsurprisingly, the largest municipality and the capital of Nepal – Kathmandu – ranked highest, followed by Pokhara and Chitwan.

Various datasets were available in digital format on government websites in all 10 cities. All available datasets are free to access. However, none of the datasets were machine readable, nor were any licensed under a standard open data licence.

Datasets regarding annual budgets and procurement contracts are easily available digitally, although not open in the standard sense of the term. Datasets for air quality are virtually nonexistent. It is not clear whether data is available in categories such as Traffic Accidents or Business Permits.

The central government of Nepal has been slowly adopting open data as a policy, and has shown commitment through projects such as the Aid Management Platform, Election Data, and the interactive visualizations available on the National Planning Commission website. The enthusiasm is growing but has not yet spread to local governing authorities.

Key Findings
  1. None of the data sets are completely open. All of them lack machine readability and standard licensing.
  2. Annual budget data is publicly available in almost all cities surveyed. Air quality data is not available in any city. Other datasets fall somewhere in between.
  3. The enthusiasm and progress shown by central government in terms of open data projects has yet to catch on at the local level.

Read more about it in the official white paper.

Library of Congress: The Signal: APIs: How Machines Share and Expose Digital Collections

Thu, 2016-01-07 19:01

By DLR German Aerospace Center (Zwei Roboterfreunde / Two robot friends) [CC BY 2.0], via Wikimedia Commons.

Kim Milai, a retired school teacher, was searching on ancestry.com for information about her great grandfather, Amohamed Milai, when her browser turned up something she had not expected: a page from the Library of Congress’s Chronicling America site displaying a scan of the Harrisburg Telegraph newspaper from March 13, 1919. On that page was a story with the headline, “Prof. Amohamed Milai to Speak at Second Baptist.” The article was indeed about her great grandfather, who was an enigmatic figure within her family, but…”Professor!?,” Milai said. “He was not a professor. He exaggerated.” Whether it was the truth or an exaggeration, it was, after all, a rare bit of documentation about him, so Milai printed it out and got to add another colorful piece to the mosaic of her family history. But she might never have found that piece if it wasn’t for ancestry.com’s access to Chronicling America’s collections via an API.

Application Programming Interfaces (APIs) are not new. API-based interactions are part of the backdrop of modern life. For example, your browser, an application program, interfaces  with web servers. Another example is when an ATM screen enables you to interact with a financial system. When you search online for a flight, the experience involves multiple API relationships: your travel site or app communicates with individual airlines sites which, in turn, query their systems and pass their schedules and prices back to your travel site or app. When you book the flight, your credit card system gets involved. But all you see during the process are a few screens, while in the background, at each point of machine-to-machine interaction, servers rapidly communicate with each other, across their boundaries, via APIs. But what exactly are they?

Chris Adams, an information technology specialist at the Library of Congress, explained to me that APIs can be considered a protocol – or a set of rules governing the format of messages exchanged between applications. This allows either side of the exchange to change without affecting other parties as long as they continue to follow the same rules.
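To make the idea of an API as a set of agreed-upon rules concrete, here is a minimal sketch of a query against the Chronicling America search API that the opening anecdote relies on. The endpoint, parameters, and response field names are assumptions based on that site's public JSON API and may differ in detail.

```python
# Minimal sketch of a machine-to-machine exchange: search Chronicling America
# for newspaper pages and read the machine-readable (JSON) response.
# Endpoint, parameters, and field names are assumptions for illustration.
import requests

resp = requests.get(
    "https://chroniclingamerica.loc.gov/search/pages/results/",
    params={
        "andtext": "Amohamed Milai",  # full-text search terms
        "format": "json",             # ask for a machine-readable response
        "rows": 5,                    # limit the number of results returned
    },
)
resp.raise_for_status()
data = resp.json()

# The agreed response format is the "rule" both sides follow: a client only
# needs to know these field names, not how the server is implemented.
for item in data.get("items", []):
    print(item.get("date"), item.get("title"), item.get("id"))
```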

World Digital Library, Library of Congress.

Adams created the APIs for the World Digital Library, an international project between approximately 190 libraries, archives and museums. The World Digital Library’s APIs define what to expect from the service and how to build tools to access the WDL’s collections. Adams said, “The APIs declare that we publish all of our data in a certain format, at a certain location and ‘here’s how you can interact with it.’ ” Adams also said that an institution’s digital-collections systems can and should evolve over time, but their APIs should remain stable in order to provide reliable access to the underlying data. This gives outside users the stability needed to build tools that use those APIs, and it frequently saves time even within the same organization because, for example, the front-end or user-visible portion of a website can be improved rapidly without touching the complex back-end application running on the servers.

HathiTrust Digital Library. Hathitrust.org.

So, for us consumers, the experience of booking a flight or buying a book online just seems like the way things ought to be. And libraries, museums, government agencies and other institutions are coming around to “the way things ought to be” and beginning to implement APIs to share their digital collections in ways that consumers have come to expect.

Another example of implementation, similar to the WDL’s, is how HathiTrust uses APIs among shared collections. For example, a search of HathiTrust for the term “Civil War” queries the collections of all of its 110 or so consortium partners (the Library of Congress is among them) and the search results include a few million items, which you can filter by Media, Language, Country and a variety of other facets. Ultimately it may not matter to you which institutions you got your items from; what matters is that you got an abundance of good results for your search. To many online researchers, it’s the stuff that matters, not so much which institution hosts the collection.
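HathiTrust also exposes its holdings to machines directly, for example through its Bibliographic API. The sketch below is a hedged illustration only: the URL pattern, the example OCLC number, and the response field names are assumptions about that API rather than details from this post; consult HathiTrust's own documentation for the authoritative form.

```python
# Hedged sketch: look up HathiTrust holdings for an identifier you already
# know (here a hypothetical OCLC number) via the Bibliographic API.
import requests

oclc_number = "424023"  # hypothetical identifier used purely as an example
url = f"https://catalog.hathitrust.org/api/volumes/brief/oclc/{oclc_number}.json"

resp = requests.get(url)
resp.raise_for_status()
volumes = resp.json()

# The brief response lists the digitized items matched to the identifier,
# each tagged with its originating institution and rights status.
for item in volumes.get("items", []):
    print(item.get("orig"), item.get("rightsCode"), item.get("itemURL"))
```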

That doesn’t mean that the online collaboration of cultural institutions might diminish the eminence of any individual institution. Each object in the search results — of HathiTrust, WDL and similar resources — is clearly tagged with metadata and information about where the original material object resides, and so the importance of each institution’s collections becomes more widely publicized. APIs help cultural institutions increase their value — and their web traffic — by exposing more of their collections and sharing more of their content with the world.

The increasing use of APIs does not mean that institutions who want them are required to write code for them. David Brunton, a supervisory IT specialist at the Library of Congress, said that most people are using time-tested APIs instead of writing their own, and, as a result, standardized APIs are emerging. Brunton said, “Other people have already written the code, so it’s less work to reuse it. And most people don’t have infinite programming resources to throw at something.”

Example 1. Adding the Library of Congress search engine to Firefox.

Brunton cites OpenSearch as an example of a widely used, standardized API. OpenSearch helps search engines and clients communicate, by means of a common set of formats, to perform search requests and publish results for syndication and aggregation. He gave an example of how to view it in action by adding a Library of Congress search engine to the Firefox browser.

“In Firefox, go to www.loc.gov and look in the little search box at the top of the browser,” Brunton said. “A green plus sign (+) pops up next to ‘Search.’ If you click on the little green Plus sign, one of the things you see in the menu is ‘Add the Library of Congress search.’ [Example 1.] When you click on that, the Library’s search engine gets added into your browser and you can search the Library’s site from a non-Library page.”

As institutions open up more and more of their online digital collections, Chris Adams sees great potential in using another API, the International Image Interoperability Framework, as a research tool. IIIF enables users to, among other things, compare and annotate side-by-side digital objects from participating institutions without the need for each institution to run the same applications or specifically enable each tool used to view the items. Adams points to an example of how it works by means of the Mirador image viewer. Here is a demonstration:

  1. Go to http://iiif.github.io/mirador/ and, at the top right of the page, click “Demo.” The subsequent page, once it loads, should display two graphics side by side – “Self-Portrait Dedicated to Paul Gauguin” in the left window and “Buddhist Triad: Amitabha Buddha Seated” in the right window. [Example 2.]

    Example 2. Mirador image viewer demo.

  2. Click on the thumbnails at the bottom of each window to change the graphic in the main windows.
  3. In the left window, select the grid symbol in the upper left corner and, in the drop down menu, select “New Object.” [Example 3.]

    Example 3. Select New Object.

  4. The subsequent page should display thumbnails of sample objects from different collections at Harvard, Yale, Stanford, BnF, the National Library of Wales and e-codices. [Example 4.]

    Example 4. Thumbnails from collections.

  5. Double-click a new object and it will appear in left image viewer window.
  6. Repeat the process for the right viewer window.

To see how it could work with the WDL collections:

  1. Go to http://iiif.github.io/mirador/ and click “Demo” at the top right of the page. The subsequent page will display the page with the two graphics.
  2. Open a separate browser window or tab.
  3. Open “The Sanmai-bashi Bridges in Ueno.”
  4. Scroll to the bottom of the page and copy the link displayed under “IIIF Manifest.” The link URL is http://www.wdl.org/en/item/11849/manifest
  5. Go back to the Mirador graphics page, to the left window, select the grid symbol and in the drop down menus select “New Object.”
  6. In the subsequent page, in the field that says “Add new object from URL…” paste the IIIF Manifest URL. [Example 5.]

    Example 5. “Add new object from URL…”

  7. Click “enter/return” on your computer keyboard. “The Sanmai-bashi Bridges in Ueno” should appear at the top of the list of collections. Double-click one of the three thumbnails to add it to the left graphics viewer window.
  8. For the right window in the graphics viewer page use another sample from WDL, “The Old People Mill,” and copy its IIIF Manifest URL from the bottom of the page (http://www.wdl.org/en/item/11628/manifest).
  9. Return to the graphics viewer page, to the right window, select the grid symbol and in the drop down menus select “New Object.”
  10. In the subsequent page, in the field that says “Add new object from URL…,” paste the IIIF Manifest URL and click the “enter/return” key. “The Old People Mill” should appear at the top of the list of collections. Double-click to add it to the right graphics viewer window.

This process can be repeated using any tool which supports IIIF, such as the Universal Viewer, and new tools can be built by anyone without needing to learn a separate convention for each of the many digital libraries in the world which support IIIF.
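For a rough sense of what a IIIF-aware viewer does when you paste a manifest URL, the sketch below fetches the Presentation API manifest for the WDL item mentioned above and walks its sequences and canvases. The field names follow the IIIF Presentation API 2.x structure; any given manifest may differ, so treat this as an illustration rather than a definitive client.

```python
# Sketch: fetch a IIIF Presentation API 2.x manifest and list its canvases
# and the image resources painted onto them.
import requests

manifest = requests.get("http://www.wdl.org/en/item/11849/manifest").json()

print("Manifest label:", manifest.get("label"))

# A manifest groups canvases (views/pages) into sequences; each canvas
# typically carries one or more image annotations.
for sequence in manifest.get("sequences", []):
    for canvas in sequence.get("canvases", []):
        images = canvas.get("images", [])
        image_id = images[0]["resource"]["@id"] if images else None
        print(canvas.get("label"), "->", image_id)
```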

Adams said that implementing an API encourages good software design and data management practices. “The process of developing an API can encourage you to better design your own site,” Adams said. “It forces you to think about how you would split responsibilities.” As programmers rush to meet deadlines, they often face the temptation of solving a problem in the simplest way possible at the expense of future flexibility; an API provides a natural point to reconsider those decisions. This encourages code which is easier to develop and test, and makes it cheaper to expand server capacity as the collections grow and user traffic increases.

Meanwhile, the APIs themselves should remain unchanged, clarifying expectations on both sides, essentially declaring, “I will do this. You must do that. And then it will work.”

APIs enable a website like the HathiTrust, Digital Public Library of America or Europeana to display a vast collection of digital objects without having to host them all. APIs enable a website like Chronicling America or the World Digital Library to open up its collections to automated access by anyone. In short, APIs enable digital collections to become part of a collective, networked system where they can be enjoyed — and used — by a vast international audience of patrons.

“Offering an API allows other people to reuse your content in ways that you didn’t anticipate or couldn’t afford to do yourself,” said Adams. “That’s what I would like for the library world, those things that let other people re-use your data in ways you didn’t even think about.”

Islandora: Islandora CLAW Community Sprint 003: January 18 - 29

Thu, 2016-01-07 18:44

The Islandora community is kicking off the new year with our third volunteer sprint on the Islandora CLAW project. Continuing with our plan for monthly sprints, this third go-around will continue some of the tickets from the second sprint, put a new focus on developing a Collection service in PHP, and put more work into PCDM. To quote CLAW Committer Jared Whiklo, we shall PCDMize the paradigm.

This sprint will be developer-focused, but the team is always happy to help new contributors get up to speed if you want to take part in the project. If you have any questions about participating in the sprint, please do not hesitate to contact CLAW Project Director, Nick Ruest. A sign-up sheet for the sprint is available here, and the sprint will be coordinated via a few Skype meetings and a lot of hanging around on IRC in the #islandora channel on freenode.

Villanova Library Technology Blog: Foto Friday: Reflection

Thu, 2016-01-07 16:58

“Character is like a tree and reputation like a shadow.
The shadow is what we think of it; the tree is the real thing.”
— Abraham Lincoln

Photo and quote contributed by Susan Ottignon, research support librarian: languages and literatures team.



Villanova Library Technology Blog: A New Face in Access Services

Thu, 2016-01-07 14:33

Cordesia (Dee-Dee) Pope recently joined Falvey’s staff as a temporary Access Services specialist reporting to Luisa Cywinski, Access Services team leader. Pope described her duties as “providing superb assistance to Falvey Memorial Library’s patrons.”

Pope, a native of Philadelphia, attended the PJA School where she earned an associate’s degree in paralegal studies and business administration. She has approximately 10 years of experience as a paralegal.

When asked about her hobbies and interests, she said, “I enjoy spending time with my two children, reading books of every genre, watching movies and learning new things.”



LITA: A Linked Data Journey: Interview with Julie Hardesty

Thu, 2016-01-07 14:00

Image Courtesy of Marcin Wichary under a CC BY 2.0 license.

Introduction

This is part four of my Linked Data Series. You can find the previous posts in my author feed. I hope everyone had a great holiday season. Are you ready for some more Linked Data goodness? Last semester I had the pleasure of interviewing Julie Hardesty, metadata extraordinaire (and analyst) at Indiana University, about Hydra, the Hydra Metadata Interest Group, and Linked Data. Below is a bio and a transcript of the interview.

Bio:

Julie Hardesty is the Metadata Analyst at Indiana University Libraries. She manages metadata creation and use for digital library services and projects. She is reachable at jlhardes@iu.edu.

The Interview

Can you tell us a little about the Hydra platform?

Sure and thanks for inviting me to answer questions for the LITA Blog about Hydra and Linked Data! Hydra is a technology stack that involves several pieces of software – a Blacklight search interface with a Ruby on Rails framework and Apache Solr index working on top of the Fedora Commons digital repository system. Hydra is also referred to when talking about the open source community that works to develop this software into different packages (called “Hydra Heads”) that can be used for management, search, and discovery of different types of digital objects. Examples of Hydra Heads that have come out of the Hydra Project so far include Avalon Media System for time-based media and Sufia for institutional repository-style collections.

What is the Hydra Metadata Interest Group and your current role in the group?

The Hydra Metadata Interest Group is a group within the Hydra Project that is aiming to provide metadata recommendations and best practices for Hydra Heads and Hydra implementations so that every place implementing Hydra can do things the same way using the same ontologies and working with similar base properties for defining and describing digital objects. I am the new facilitator for the group and try to keep the different working groups focused on deliverables and responding to the needs of the Hydra developer community. Previous to me, Karen Estlund from Penn State University served as facilitator. She was instrumental in organizing this group and the working groups that produced the recommendations we have so far for technical metadata and rights metadata. In the near-ish future, I am hoping we’ll see a recommendation for baseline descriptive metadata and a recommendation for referring to segments within a digitized file, regardless of format.

What is the group’s charge and/or purpose? What does the group hope to achieve?

The Hydra Metadata Interest Group is interested in working together on base metadata recommendations, as a possible next step following the successful community data modeling effort, the Portland Common Data Model. The larger goals of the Metadata Interest Group are to identify models that may help Hydra newcomers and further interoperability among Hydra projects. The scope of this group will concentrate primarily on using Fedora 4. The group is ambitiously interested in best practices and helping with technical, structural, descriptive, and rights metadata, as well as Linked Data Platform (LDP) implementation issues.

The hope is to make recommendations for technical, rights, descriptive, and structural metadata such that the Hydra software developed by the community uses these best practices as a guide for different Hydra Heads and their implementations.

Can you speak about how Hydra currently leverages linked data technologies?

This is where keeping pace with the work happening in the open source community is critical and sometimes difficult to do if you are not an active developer. What I understand is that Fedora 4 implements the W3C’s Linked Data Platform specification and uses the Portland Common Data Model (PCDM) for structuring digital objects and relationships between them (examples include items in a collection, pages in a book, tracks on a CD). This means there are RDF statements that are completely made of URIs (subject, predicate, and object) that describe how digital objects relate to each other (things like objects that contain other objects; objects that are members of other objects; objects ordered in a particular way within other objects). This is Linked Data, although at this point I think I see it as more internal Linked Data. The latest development work from the Hydra community is using those relationships through the external triple store to send commands to Fedora for managing digital objects through a Hydra interface. There is an FAQ on Hydra and the Portland Common Data Model that is being kept current with these efforts. One outcome would be digital objects that can be shared at least between Hydra applications.
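As a toy illustration of the "internal Linked Data" Julie describes, the sketch below uses rdflib to express PCDM-style structural relationships as RDF statements whose subject, predicate, and object are all URIs. The repository URIs are made up for the example; pcdm:hasMember and the PCDM classes come from the Portland Common Data Model ontology. This is not how Hydra or Fedora stores the data internally, just an illustration of the statements involved.

```python
# Toy sketch: PCDM structural relationships (collection -> book -> page)
# expressed as RDF triples made entirely of URIs.
from rdflib import Graph, Namespace, RDF, URIRef

PCDM = Namespace("http://pcdm.org/models#")

g = Graph()
g.bind("pcdm", PCDM)

collection = URIRef("http://example.org/repository/collection/1")  # hypothetical URIs
book = URIRef("http://example.org/repository/book/1")
page = URIRef("http://example.org/repository/book/1/page/1")

g.add((collection, RDF.type, PCDM.Collection))
g.add((book, RDF.type, PCDM.Object))
g.add((page, RDF.type, PCDM.Object))

# Structural relationships: the collection has the book as a member,
# and the book has the page as a member.
g.add((collection, PCDM.hasMember, book))
g.add((book, PCDM.hasMember, page))

print(g.serialize(format="turtle"))
```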

For descriptive metadata, my understanding is that Hydra is not quite leveraging Linked Data… yet. If URIs are used in RDF statements that are stored in Fedora, Hydra software is currently still working through the issue of translating that URI to show the appropriate label in the end user interface, unless that label is also stored within the triple store. That is actually a focus of one of the metadata working groups, the Applied Linked Data Working Group.

What are some future, anticipated capabilities regarding Hydra and linked data?

That capability I was just referring to is one thing I think everyone hopes happens soon. Once URIs can be stored for all parts of a statement, such as “this photograph has creator Charles W. Cushman,” and Charles W. Cushman only needs to be represented in the Fedora triple store as a URI but can show in the Hydra end-user interface as “Charles W. Cushman” – that might spawn some unicorns and rainbows.

Another major effort in the works is implementing PCDM in Hydra. Implementation work is happening right now on the Sufia Hydra Head with a base implementation called Curation Concerns being incorporated into the main Hydra software stack as its own Ruby gem. This involves Fedora 4’s understanding of PCDM classes and properties on objects (and implementing Linked Data Platform and ordering ontologies in addition to the new PCDM ontology). Hydra then has to offer interfaces so that digital objects can be organized and managed in relation to each other using this new data model. It’s pretty incredible to see an open source community working through all of these complicated issues and creating new possibilities for digital object management.

What challenges has the Hydra Metadata Interest Group faced concerning linked data?

We have an interest in making use of Linked Data principles as much as possible since that makes our digital collections that much more available and useful to the Internet world. Our recommendations are based around various RDF ontologies due to Fedora 4’s capabilities to handle RDF. The work happening in the Hydra Descriptive Metadata Working Group to define a baseline descriptive metadata set and the ontologies used there will be the most likely to want Linked Data URIs used as much as possible for those statements. It’s not an easy task to agree on a baseline set of descriptive metadata for various digital object types, but there is precedent in both the Europeana Data Model and the DPLA Application Profile. I would expect we’ll follow along similar lines but it is a process to both reach consensus and have something that developers can use.

Do you have any advice for those interested in linked data?

I am more involved in the world of RDF than in the world of Linked Data at this point. Using RDF like we do in Hydra does not mean we are creating Linked Data. I think Linked Data comes as a next step after working in RDF. I am coming from a metadata world heavily involved in XML and XML schemas so to me this isn’t about getting started with Linked Data, it’s about understanding how to transition from XML to Linked Data (by way of RDF). I watch for reports on creating Linked Data and, more importantly, transitioning to Linked Data from current metadata standards and formats. Conferences such as Code4Lib (coming up in March 2016 in Philadelphia), Open Repositories (in Dublin, Ireland in June 2016) and the Digital Library Federation Forum (in Milwaukee in November 2016) are having a lot of discussion about this sort of work.

Is there anything we can do locally to prepare for linked data?

Recommended steps I have gleaned so far include cleaning the metadata you have now – syncing up names of people, places, and subjects so they are spelled and named the same across records; adding authority URIs whenever possible, which makes transformation to RDF with URIs easier later; and considering the data model you will move to when describing things using RDF. If you are using XML schemas right now, there isn’t necessarily a 1:1 relationship between XML schemas and RDF ontologies, so it might require introducing multiple RDF ontologies and creating a local namespace for descriptions that involve information that is unique to your institution (you become the authority). Lastly, keep in mind the difference between Linked Data and Linked Open Data and be sure if you are getting into publishing Linked Data sets that you are making them available for reuse and aggregation – it’s the entire point of the Web of Data that was imagined by Tim Berners-Lee when he first discussed Linked Data and RDF (http://www.w3.org/DesignIssues/LinkedData.html).
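One hedged sketch of the "adding authority URIs whenever possible" step: reconciling a cleaned-up name string against the Library of Congress Name Authority File via the id.loc.gov suggest service. The endpoint and its OpenSearch-Suggestions-style response shape ([query, [labels], [descriptions], [uris]]) are assumptions about that service, so verify them against id.loc.gov's current documentation before relying on this.

```python
# Hedged sketch: look up candidate LC Name Authority URIs for a name string.
import requests

def suggest_name_authority(name):
    resp = requests.get(
        "https://id.loc.gov/authorities/names/suggest/",
        params={"q": name},
    )
    resp.raise_for_status()
    # Assumed OpenSearch-Suggestions-style response: query, labels, descriptions, URIs.
    query, labels, _descriptions, uris = resp.json()
    return list(zip(labels, uris))

# Example: reconcile a cleaned-up name to candidate authority URIs, which can
# later become the object of an RDF statement.
for label, uri in suggest_name_authority("Cushman, Charles W."):
    print(label, uri)
```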

Conclusion

A big thank you to Julie for sharing her experiences and knowledge. She provided a plethora of resources during the interview, so go forth and explore! As always, please feel free to leave a comment or contact Julie/me privately. Until next time!

Ed Summers: Craft and Computation

Thu, 2016-01-07 05:00

Cheatle and Jackson’s paper provides an interesting view into how the furniture artist Wendell Castle uses 3D scanning and digital fabrication tools in his work. Usefully (for me) the description is situated in the larger fields of human-computer interaction and computer-supported work, which I’m trying to learn more about. It’s worth checking out if you are interested in a close look at how a small furniture studio (one that has built an international reputation for craftsmanship) uses 3D scanning and robotics to do its work.

One fascinating piece of the story is the work of the studio director, Marvin Pallischeck (Marv), who adapted a CNC machine designed for pick-and-place work in the US Postal Service to serve as a milling machine. This robot is fed 3D scans of prototypes created by Castle along with material (wood) and then goes to work. The end result isn’t a completed piece, but one that a woodcarver can then work with further to get it into shape. The 3D scanning is done by an offsite firm that specializes in scanning wood. They deliver a CAD file that needs to be converted to a CAM file. The CAM file then needs to be adjusted to control the types of cutters and feed speeds that are used, to fit the particular wood being worked on.

The work is also iterative: the robot successively works on the parts of the whole piece, getting closer and closer with Marv’s help. The process resists full automation:

“At the end of the day, it’s the physical properties of the material that drives our process”, says Marv as he describes the way the wood grain of a Castle piece can be read to determine the orientation of the tree’s growth within the forest. “I always say, this tree is now dead, but its wood is not - and it’s important to know that going into this.” Bryon understands this in a similar way, “There’s a lot of tension in wood. When you start cutting it up, that tension is released, free to do as it will. And form changes. Things crack, they bend, and warp”

There is also an impact on the client’s perception of the work: its authenticity and authorship. On the theoretical side, Cheatle and Jackson are drawing attention to how the people, their creative processes, the computers and the materials they are working with are all part of a network. As with Object Oriented Ontology (Bogost is cited), the lines between the human and the non-human objects begin to get fuzzy and complicated. More generally, the interviews and ethnographic work point to the work of Wanda Orlikowski.

These arguments build in turn on a broader body of work around materiality and social life growing in the organizational and social sciences. Orlikowski finds that materiality is integral to organizational life and that developing new ways of dealing with material is critical if one is to understand the multiple, emergent, shifting and interdependent technologies at the heart of contemporary practice (Orlikowski, 2007). Orlikowski sees humans and technology as bound through acts of ‘recursive intertwining’ or ‘constitutive entanglement’ that eschew pre-ordered hierarchies or dualisms. Rather, human actors and technological practices are enmeshed and co-constituted, emerging together from entangled networks that are always shifting and coemerging in time.

I think this is an angle I’m particularly interested in exploring with respect to Web archiving work: the ways in which traditional archival materials (paper, film, audio, photographs, etc.) and processes are challenged by the material of the Web. With respect to this work by Cheatle and Jackson: the ways in which our automated tools (crawlers, viewers, inventory/appraisal tools) have been designed (or not) to fit the needs of archivists; how archivists, the medium of the Web, and archival tools/processes are entangled; and how an understanding of this entanglement can inform the design of new archival tools.

Orlikowski, W. J. (2007). Sociomaterial practices: Exploring technology at work. Organization Studies, 28(9), 1435–1448. http://doi.org/10.1177/0170840607081138
