
Feed aggregator

Mita Williams: G H O S T S T O R I E S

planet code4lib - Fri, 2016-03-04 14:58
G H O S T    S T O R Y    1

It’s funny that I ended up as a librarian because my earliest memories of libraries were not entirely positive.

While the children’s section of the central branch library and the school bookmobile regularly brought me joy (largely in the form of Peanuts Parade volumes), I have distinct memories of being filled with dread every time I had to move through the towering shelves of the grown-up section of the library.

Yes, the main library was largely devoid of the sound and colour and the furious activity of the children’s section, but that wasn’t the entire reason why it gave me the creeps. I distinctly remember that when I was younger I associated all the books on the shelves of the library with the work of dead people. Each book represented a person who was now gone and they had left their books behind and the terrible thing was that, by and large, it looked like most of the books stayed on the shelves, unread.


Now, I didn’t actually think that the library was haunted. And over time the whole library became  comfortable to me. Eventually I became a librarian and now I think the library is and can be many, many things to many people.

Some years ago, I wrote this:

What if every person who worked at a library was obligated to create and leave one book that remained in the library as long as it remained. Imagine the sense of legacy and the sense of connection that could be established by the shelves of these books. Imagine the ways that those who made these books would choose to express themselves. Would they write a history? a biography? poetry? How could these books connect the people to the place to the time of the library? 
I still think of the library as a memento mori.

 
G H O S T    S T O R Y    2

#53 In The Desert February 4, 2016
“You know, there’s always that fear that an unreasonable person is going to show up.”
-- Michael Saba, on his house being The Bermuda Triangle of cell phones. 





Strangers keep coming to Mike and Christina’s house looking for their stolen cell phones. Nobody knows why. We travel to Atlanta to find out what’s going on, in our thorniest Super Tech Support yet.

G H O S T    S T O R Y    3 


Art and Math and Science, Oh My!
by sailor mercury
Technology can bring art to life.








One very literal example of art bringing technology to life is the experimental theatrical show Sleep No More: an interactive modern retelling of Macbeth where you walk around 4 floors of the set to watch and interact with the actors.
For future shows, they’re working together with the MIT media lab on making the set itself more interactive with embedded programming: mirrors that write messages to you in blood or typewriters that type out cryptic messages to you if you linger too long in front of them.

G H O S T    S T O R Y     4




In the Future, We'll All Be Harry Potter
by Jakob Nielsen on December 9, 2002




Summary: The world of magic is a world where inanimate objects come alive; it's as if they had computational power, sensors, awareness, and connectivity.
By saying that we'll one day be like Harry Potter, I don't mean that we'll fly around on broomsticks or play three-dimensional ballgames (though virtual reality will let enthusiasts play Quidditch matches). What I do mean is that we're about to experience a world where spirit inhabits formerly inanimate objects.

Much of the Harry Potter books' charm comes from the quirky magic objects that surround Harry and his friends. Rather than being solid and static, these objects embody initiative and activity. This is precisely the shift we'll experience as computational power moves beyond the desktop into everyday objects....

 G H O S T    S T O R Y     5 
  
 

After reading a book of German ghost stories, somebody suggested they each write their own. Byron's physician, John Polidori, came up with the idea for The Vampyre, published in 1819, which was the first of the "vampire-as-seducer" novels. Godwin's story came to her in a dream, during which she saw "the pale student of unhallowed arts kneeling beside the thing he had put together." Soon after that fateful summer, Godwin and Shelley married, and in 1818, Mary Shelley's horror story was published under the title, Frankenstein, Or, the Modern Prometheus.

Frankenstein lives on in the popular imagination as a cautionary tale against technology. We use the monster as an all-purpose modifier to denote technological crimes against nature. When we fear genetically modified foods we call them "frankenfoods" and "frankenfish." It is telling that even as we warn against such hybrids, we confuse the monster with its creator. We now mostly refer to Dr. Frankenstein's monster as Frankenstein. And just as we have forgotten that Frankenstein was the man, not the monster, we have also forgotten Frankenstein's real sin. Dr. Frankenstein's crime was not that he invented a creature through some combination of hubris and high technology, but rather that he abandoned the creature to itself. When Dr. Frankenstein meets his creation on a glacier in the Alps, the monster claims that it was not born a monster, but that it became a criminal only after being left alone by his horrified creator, who fled the laboratory once the horrible thing twitched to life. "Remember, I am thy creature," the monster protests, "I ought to be thy Adam; but I am rather the fallen angel, whom thou drivest from joy for no misdeed... I was benevolent and good; misery made me a fiend. Make me happy, and I shall again be virtuous."

Written at the dawn of the great technological revolutions that would define the 19th and 20th centuries, Frankenstein foresees that the gigantic sins that were to be committed would hide a much greater sin. It is not the case that we have failed to care for Creation, but that we have failed to care for our technological creations. We confuse the monster for its creator and blame our sins against Nature upon our creations. But our sin is not that we created technologies but that we failed to love and care for them. It is as if we decided that we were unable to follow through with the education of our children. - Bruno Latour

G H O S T    S T O R Y    6
[Confession: the whole point of this post is to encourage you to read this]
Our Gothic Future
The other day, after watching Crimson Peak for the first time, I woke up with a fully-fleshed idea for a Gothic horror story about experience design. And while the story would take place in the past, it would really be about the future. Why? Because the future itself is Gothic.


First, what is Gothic? Gothic (or “the Gothic” if you’re in academia) is a Romantic mode of literature and art. It’s a backlash against the Enlightenment obsession with order and taxonomy. It’s a radical imposition of mystery on an increasingly mundane landscape. It’s the anticipatory dread of irrational behaviour in a seemingly rational world. But it’s also a mode that places significant weight on secrets — which, in an era of diminished privacy and ubiquitous surveillance, resonates ever more strongly....

... Consider the disappearance of the interface. As our devices become smaller and more intuitive, our need to see how they work in order to work them goes away. Buttons have transformed into icons, and icons into gestures. Soon gestures will likely transform into thoughts, with brainwave-triggers and implants quietly automating certain functions in the background of our lives. Once upon a time, we valued big hulking chunks of technology: rockets, cars, huge brushed-steel hi-fis set in ornate wood cabinets, thrumming computers whose output could heat an office, even odd little single-purpose kitchen widgets. Now what we want is to be Beauty in the Beast’s castle: making our wishes known to the household gods, and watching as the “automagic” takes care of us. From Siri to Cortana to Alexa, we are allowing our lives and livelihoods to become haunted by ghosts without shells.


Now, I’m not at all the only person to notice this particular trend (or, more accurately, to read the trend through this particular lens). It’s central to David Rose’s book Enchanted Objects, which you all should read. This is also why FutureEverything’s Haunted Machines symposium exists....
 [you really should read the whole thing]



 G H O S T    S T O R Y    7

avocado's thoughts about ghosts and thoughts about libraries are very intertwingled rn — Avocado (@RealAvocadoFact) February 27, 2016

@copystar maybe there are things we can leave behind that are even more alive than ghosts and libraries are more like gardens than tombs — Avocado (@RealAvocadoFact) February 27, 2016

Terry Reese: MarcEdit Mid-week update

planet code4lib - Fri, 2016-03-04 07:01

Yesterday, someone let me know that there was a problem with the Add/Delete Field function. An update in the last version to allow for deduplication deletions based on subfields tripped other deletions. This was definitely problematic. It has been corrected, along with a couple of other changes.

Change log:

6.2.88

  • Bug Fix: Add/Delete Field: I introduced an element into the Delete function to allow dedup deletions to happen at the subfield level. This tripped non-dedup deletions. This has been corrected.
  • Update: Build New Links: FAST headings in the 600,611,630 weren’t being processed. I’ve updated the rules file appropriately.
  • Update: RDA Helper Abbrevs File: Add S.L. abbreviation.
  • Bug Fix: Validate Headings: The 'Check A only' option when subject checking wasn't being honored. This has been corrected.

Changes can be found on the downloads page: http://marcedit.reeset.net/downloads

 

tr

Ed Summers: Practice

planet code4lib - Fri, 2016-03-04 05:00

A few weeks ago Cliff Lampe visited UMD to give a talk about his work on citizen interaction design, connecting the University of Michigan iSchool with the City of Jackson, Michigan and other cities around Michigan. At a high level the goal of the project is to get iSchool students out in the field working with local governments to try to collaborate on solutions to problems that they have.

Lampe stressed that much of the work was in determining what problems could be effectively worked on in a semester, and jointly arriving at sustainable solutions. The sustainable part is hard, especially when the students are here one year and gone the next–leaving websites, databases and other artifacts behind that need attention, care and repair. A focus on the actual dimensions of the problem and not the technical solution is key, as is sustained support from the University and the city. I seem to remember he also highlighted the need for simple solutions: e.g. a Google spreadsheet, rather than a full blown Web application with a database. You can see a list of some of these projects here.

One thing Lampe really impressed on me was the importance of a practice orientation to this and other information studies work. He has been very active in the HCI community for a number of years, and feels like there has been a trend towards a broadened study of the processes and contexts that information systems are a part of (Practice paradigm). He said he was working on a paper to discuss this trend in HCI, but then found that Kuutti and Bannon had already written one (Kuutti & Bannon, 2014).

I’m still in the process of digesting the paper, but thought I’d just jot down some quotes that struck me as I was reading.

For the Interaction paradigm, the scope of the intervention is viewed as changing human actions by means of novel technology. For the Practice paradigm, a whole practice is the unit of intervention, not only technology, but everything related and interwoven in the performance is under scrutiny and potentially changeable, depending on the goals of the intervention. Thus the changing technology is but one of the options.

Only focusing on technology and immediate interactions isn’t enough. It’s important to decenter the technology by placing it in the larger cultural and social context. Of course that perspective can be difficult to maintain without getting completely abstracted and lost. Zooming in on actual practices seems like a useful way to avoid doing that. It feels like there might be connections to Latour’s work on Actor-Network Theory and Object-Oriented Ontology here too, to aid in this kind of study of practices–particularly regarding the interest in artifacts.

Practice theories do not locate the origin of the social in the mind, discourse, or interaction, but in ‘practices’ - routines consisting of a number of interconnected and inseparable elements: physical and mental activities of human bodies, the material environment, artifacts and their use, contexts, human capabilities, affinities and motivation. Practices are wholes, whose existence is dependent on the temporal interconnection of all these elements, and cannot be reduced to, or explained by, any one single element.

I like this idea of practices as wholes, since too often we focus on one small part of the practice and miss the larger picture. This larger picture where technology is just part involves the values and outcomes of particular practices. What do we want to happen in the world?

From a practices perspective the world is a network of performances that are durable, because the ways of doing things are coded in minds, bodies, artifacts, objects and texts, and connected together so that the result of performing one activity serves as a resource for another.

Kuutti and Bannon remind me a bit of work we’ve been doing in MITH on our Digital Incubator series, where we are focused on process rather than tool building. Many digital humanities projects are oriented around tools or a particular set of content, but often what can be really rewarding is teaching a process or practice that involves tools and content. It’s a craft thing I guess.

Anyhow as a student I really like these kinds of papers because they serve as guide posts and provide lots of useful pointers out into the literature. The paper builds on Davide Nicolini’s Practice Theory, Work, and Organization which looks like a good introduction to this area.

References

Kuutti, K., & Bannon, L. J. (2014). The turn to practice in HCI: Towards a research agenda. In Proceedings of the 32nd annual ACM Conference on Human Factors in Computing Systems (pp. 3543–3552). Association for Computing Machinery.

pinboard: GitHub - NCSU-Libraries/quick_search: QuickSearch is a toolkit for easily creating custom bento-box search applications

planet code4lib - Fri, 2016-03-04 01:13
RT @yo_bj: RT @ronallo: QuickSearch from @ncsulibraries is now open source: #code4lib

LITA: Jobs in Information Technology: March 2, 2016

planet code4lib - Thu, 2016-03-03 14:00

New vacancy listings are posted weekly on Wednesday at approximately 12 noon Central Time. They appear under New This Week and under the appropriate regional listing. Postings remain on the LITA Job Site for a minimum of four weeks.

New This Week:

Penn State University Libraries, Digital Scholarship Research Coordinator, University Park, PA

Loyola Notre Dame Library, Digital Services Coordinator, Baltimore, MD

Visit the LITA Job Site for more available jobs and for information on submitting a job posting.

Mita Williams: The Heritage Heritage Minute and The Digital Library of Canada We Lost

planet code4lib - Thu, 2016-03-03 12:45
It's time for me to write about Heritage - the ten-year digitization project being undertaken by the non-profit charity Canadiana.org with 50,000 reels of microfilmed historical material from Canada's national library, Library and Archives Canada - which has been partly funded by most of the university libraries of Canada through CRKN, the Canadian Research Knowledge Network, which raised $1.74 million toward the deal signed on June 14th.

CRKN is a licensing agency that negotiates deals with publishers and then brings those deals to its members to sign onto. For those librarians - like myself - who are not privy to the conversations of library directors or are not among the small set of subscribers to the restricted CRKN listserv, it is not unusual to hear about proposed deals only after they have been signed and committed to by their leadership.

But that didn't happen this time. What happened?  To explain, let's have a Heritage Minute.


Part One: The Heritage Heritage Minute
We don't know when Canadiana first approached CRKN, but on May 1st a proposal from Canadiana went to the library directors that make up the membership of CRKN, with a signing deadline of May 31st and a statement that the deal was under a non-disclosure agreement until June 14th, when the project was slated to go public.

Someone who had seen the proposal was concerned enough by its contents to provide a copy of the document(s) to Myron Groover, a librarian / archivist who has been following, writing about, and commenting on the declining state of affairs at Canada's national library for some time now. Myron first raised the matter of the proposal on June 6th on his blog, Bibliocracy, and then followed up by posting a transcript of the summary of the plan on June 10th.

Concerns from librarians, archivists, researchers, and citizens over the proposed deal were shared casually online on Twitter and Facebook but things really heated up when NDP MP Andrew Cash brought his concerns with the proposal to the attention of James Moore, the Minister of Canadian Heritage and Official Languages in the House of Commons on June 11th.

Confusing matters, later that same day the CBC reported that the deal in question was to be delayed until the fall, based on Moore's comment that the digitization project would resume once a new Head of Libraries and Archives was appointed. No one really knows if this was a simple misunderstanding during the confusion of Question Period or if this was the result of James Moore being unaware of the Heritage deal at that point in time.

On the next day, June 12th, the Ottawa Citizen covered this story around the leaked proposal in an article with the headline, Library and Archives Canada private deal would take millions of documents out of public domain.

(As an aside, the media coverage of this story seems to implicitly frame the controversy as a battle between archivists and librarians. Myron is referred to as an archivist (and not an archivist / librarian) and other archivists were interviewed to give the 'against' side of the story. This framing may have come about unintentionally: no librarians involved were allowed to speak on the matter, while none of the professional bodies that represent Canada's archives community were involved in the negotiations around the Heritage Project, so they were free to voice their displeasure.)

Also around this time CAUT - the labour organization that represents university faculty and librarians - started a campaign to stop the Heritage proposal from going forward.

It's worth noting that during this particularly frantic week, there was not a single official statement made publicly by CRKN on the matter. Their Twitter feed points to clarifying statements by Canadiana (No paywall, no privatization) and a short radio interview with a director general at LAC (Library and Archives responds to concerns about a new digital service). Meanwhile, an employee of Canadiana felt comfortable speaking out on the matter on his own personal blog (Good news Canadiana & LAC project spun into bad news?)

It appears that CRKN did send out an email to the members of the CRKN listserv to clarify matters (which was then posted on Bibliocracy) but, from what I can tell, they did not make the same information public.

The most important difference between the original leaked proposal and the 'clarification' from CRKN is that the language describing the digitized content as 'open access for Canadians' (which, as Heather Morrison aptly put it, is no such thing) was changed to 'under a Creative Commons licence for non-commercial use'. As an outsider, I have no way of knowing whether these changes were the result of negotiations that occurred before May 31st or afterwards, once the proposal was leaked and the objections raised, but conjecture suggests the latter.

CRKN also retweeted a particularly telling document on June 13th, the day before the deadline: a letter from CARL entitled "CARL urges Minister Moore to go forward with Héritage Project". For me, this suggests that perhaps the deal was in some jeopardy, either because it had now become controversial or because it was perhaps less palatable since the Minister of Canadian Heritage and Official Languages had gone on record stating that Canadians should not have to pay to access our archives.



A similar letter of support for this project was produced by the Ontario Council of University Libraries. These letters are not surprising since almost all of the members of these organizations were the same ones who signed on to the CRKN deal. There may have been other letters of support for the deal sent to the Minister, but the only other one that I know of was from the Canadian Urban Libraries Council [pdf].

addendum: The Canadian Library Association also wrote a letter of support [pdf]
On the afternoon of June 14th, a message was sent out to the institutions of CRKN that the deal had been signed, and this announcement - CRKN Participates in Innovative Project to Increase Access to Canadian Documentary Heritage - was posted on their website. What's curious about this public document is that the license for the digital material of Heritage is described vaguely as a 'Creative Commons' license instead of explicitly as CC-NC, as they had done earlier that week in their internal communication.

Which brings us to today. I've been writing this over the weekend of June 22nd and 23rd. Earlier this week, the Heritage site quietly launched. And there are still many unanswered questions surrounding the licensing of the work involved.  The next part of this post will explain why I think these unanswered questions are still very important.



Part Two: The Digital Library of Canada We Lost
Now, before I go further, please understand that I empathize with many of the proponents of this deal who were confused, frustrated, and even hostile to the fact that there was a group of librarians and archivists asking critical questions about this deal while they watched helplessly as these concerns were being raised in Parliament and in the press and remained unaddressed by their leadership.

I know several librarians whose professional judgement I trust who have stated that they believe that the Heritage deal is good for the partners involved and good for Canadian history. Some of them asked: libraries sign similar deals with commercial vendors all the time - what's the fuss here? What's the difference between this deal and the deal that presumably led to the creation of the digital product Early Canadiana Online?

So I'll try my best to explain my concerns.

First off, what I think is particularly damning is that - with all of the media coverage and the fact that the matter was worthy enough to be brought up in the House of Commons - the questions that critics like myself have been raising still have not been answered by the parties involved.

Many of these questions raised by those concerned have been best captured and expressed by Kevin Read in his post, Concerning the deal between LAC and Canadiana: We ask for transparency:

But there are two questions that I would like to add to his list:
1) When, if ever, does the material in Heritage turn from a CC-NC (Creative Commons Non-Commercial) licence to CC0 (public domain)? Or, to put it another way, why isn't the material being put immediately into the public domain?

2) Where and how is the Linked Open Data that was explicitly promised as part of the original proposal from Canadiana, and signed by CRKN, going to be made available?

If the documents and metadata in Heritage never make a transition to being explicitly open and unrestricted for commercial use (such as being published in a book that is subsequently offered for sale), then CRKN has indeed paid for a commercial product that Canadians will 'have to pay for twice' to use and, as feared, millions of documents will be taken out of the public domain.

We have heard nothing that contradicts these fears.

By allowing Canadiana to maintain a CC-NC licence for the materials involved, does Canadiana essentially become a licensing agency for the use of scholarly materials, just like Access Copyright? And does Canadiana even have the right to apply CC-NC in the mass digitization of microfilm? One librarian well versed in copyright measures isn't so sure.

In short, the Heritage deal may prove a good deal financially for the organizations involved, but it fails the public in some profound ways.

To explain why, let's do a thought experiment. Let's imagine that CRKN had responded to the Canadiana proposal with a counter-proposal that absorbed into the CRKN contribution the amount of money that was estimated to come in from cost-recovery measures. Let's imagine that, like a true Open Access project, the costs are not passed on to the reader. Admittedly, the project would result in less material being described, but there would be other benefits that would come from the provision that all digitized material and metadata created would be immediately placed in the public domain. Just imagine what sort of activities this new platform could support:

  • Like the British Library, libraries and archives across Canada could easily partner with organizations such as the Wikimedia Foundation to co-host events like this history-themed editathon
  • Like the Digital Public Library of America, the digital collections could be considered a platform for others to build work on. For example, once the LAC documents are geo-coded, a variety of applications could be developed to add document discovery through geolocation. This could allow anyone - researchers, students, entrepreneurs - to build web or mobile apps that present historical documents in an historical, gaming, or creative context without having to make arrangements to pay Canadiana ahead of time for use of the documents.
  • Libraries could reassure the Canadian people, as well as the current Harper government, that they - unlike companies and not-for-profit charities - exist to make information and creative works available for free to the Canadian public

I can only hope that from this controversy our leadership has learned that our reading public now has far greater literacy and expectations for matters regarding licensing and the public domain than ever before.

Case in point: Aaron Swartz is on the cover of Time Magazine this week.




I believe that in the pursuit of brute efficiency in the manifestation of the Heritage deal, something was lost. So, in the spirit of the quiet hope that is embodied in the lament of Anil Dash's The Web We Lost, I would like to write about the digital library of Canada that was not to be.


What could we have done instead? Well, I like what librarian Mike Ridley suggested over a year ago:

... we need to form a collaborative organization linking libraries, museums, and archives to operate this distributed collection and service. We need to take on the long term responsibility that this government is refusing to do. Yes I know we have no money or space or staff; we need to do it anyway.
Oh yes, he also offered this timely warning:
Shouldn’t we partner with LAC on this? OK but let’s be careful. Not being harsh here. LAC has a history of not always playing nice with others. The wonderful and visionary Alouette Canada initiative (now part of Canadiana.org; a good model for at least part of this mission BTW) was launched with strong support from LAC; they enthusiastically offered to seek federal funding for this national, collaborative project. Money they did get and it went to LAC digital projects not those of the consortium. Lesson: Don’t get fooled again.
Well, at least this time Canadiana and LAC were upfront about where all the CRKN money will go: it goes "to fund metadata creation and to build a sustainability fund to maintain the platform"... both of which, I will remind you, belong to and remain with Canadiana alone.

To imagine what else could have happened, it's useful to look at CRKN's first deal - its pilot project called CNSLP:

In January 2000, 64 universities in Canada signed a historic inter-institutional agreement that launched the Canadian National Site Licensing Project (CNSLP), a pilot project totaling Cdn$50 million over three years.
The gist of the CNSLP project was that instead of individually licensing digital products from commercial vendors, the academic libraries of Canada would band together and achieve substantial savings from bulk licensing on a nationwide basis. In Ontario, the savings gained from CNSLP were not directed into buying more products, but were instead directed into an infrastructure fund that gave rise to OCUL's Scholars Portal.

Canadian National Site Licensing Project (CNSLP)
2000 : all OCUL libraries are participants in this Canada Foundation for Innovation (CFI) funded initiative ($20M from CFI and $30M in matching funds from 64 academic institutions) to enable national licensing of electronic resources to increase access and reduce costs. Ontario Innovative Trust funding ($7.6M) will enable Ontario to initiate the Ontario Information Infrastructure to ensure rapid and ongoing access to these new resources for OCUL libraries...

Scholars Portal
2002: Scholars Portal (created with Ontario Information Infrastructure (OII) funding) is a shared technology infrastructure and shared collections for all 21 universities in Ontario. Scholars Portal Journals and Racer (Rapid Access to Collections by Electronic Requesting), an online interlibrary loan request system, are the first modules to go live.
And ten years later, unlike the National Library of Canada, we - the academic libraries of Ontario - have our own Interlibrary Loan Service and a Trusted Digital Repository, among many other cherished services provided by excellent and skilled library professionals.

And what will academic libraries get ten years from now from the Heritage project? We will have gained no increased infrastructure or additional expertise from the digitization of historical materials that we can share with our local communities. And ten years from now, I'm afraid to say that I believe we will have less capacity and smaller budgets to do the work that Canadiana now does for us.

We have outsourced ourselves. Again.



But maybe this dream of an open digital library of Canada is not completely lost.

As Russel McOrmand of Canadiana reminds us in his post Why is a license required for a Canadiana project built from public domain material?

I am a system administrator at Canadiana, and not someone involved in policy relating to licensing of the parts of this project that will be covered by Canadiana copyright. When it is a Canadiana decision, it is our Board of Directors made up of librarians and archivists, and our executive director, who ultimately are responsible for such policies.
Our library leadership sits on the board of directors of Canadiana.

What it is is up to us.

Peter Murray: How to fix a directory that Git thinks is a submodule

planet code4lib - Wed, 2016-03-02 21:53

Nuts. I added and committed a directory to my Git repository when the directory itself was another separate Git repository. Now Git thinks it’s some sort of submodule, but it doesn’t know how to deal with it:

$ git submodule update
No submodule mapping found in .gitmodules for path 'blah'

And worse, Git won’t let me remove it:

$ git rm blah
error: the following submodule (or one of its nested submodules) uses a .git directory:
    blah
(use 'rm -rf' if you really want to remove it including all of its history)

So what to do? This:

$ git rm --cached blah
$ git add blah

In my case I had a situation where there were several Git repositories-inside-a-repository, so I wanted a way to deal with them all:

$ for i in `find . -type d -name .git -print | sed 's#/.git##'`; do
>   echo $i
>   rm -rf $i/.git
>   git rm --cached $i
>   git add $i
> done

(Be careful not to run this find command at the root of your Git repository, of course, or else you will effectively destroy its usefulness as a git repo. )

LITA: 3 Exciting LITA Preconferences at ALA Annual, Orlando FL

planet code4lib - Wed, 2016-03-02 20:49
Help LITA kick off its year of 50th anniversary celebrations.

By attending any one of three exciting new preconferences at ALA Annual in Orlando FL. They will all be held on:

Friday, June 24 from 1:00 pm to 4:00 pm.

For more information and registration, check out the LITA at ALA Annual conference web page

Digital Privacy and Security: Keeping You and Your Library Safe and Secure in a Post-Snowden World

Presenters: Blake Carver, LYRASIS and Jessamyn West, Library Technologist at Open Library

Learn strategies on how to make you, your librarians and your patrons more secure & private in a world of ubiquitous digital surveillance and criminal hacking. We’ll teach tools that keep your data safe inside of the library and out — how to secure your library network environment, website, and public PCs, as well as tools and tips you can teach to patrons in computer classes and one-on-one tech sessions. We’ll tackle security myths, passwords, tracking, malware, and more, covering a range of tools from basic to advanced, making this session ideal for any library staff.

Islandora for Managers: Open Source Digital Repository Training

Presenters: Erin Tripp, Business Development Manager at discoverygarden inc. and Stephen Perkins, Managing Member of Infoset Digital Publishing

Islandora is an OAIS-adherent, open source digital repository framework. It combines the Drupal CMS and Fedora Commons repository software; together with additional open source applications, the framework delivers a wide range of functionality out of the box. The proposed workshop will provide an overview of Islandora and its community of users, and allow attendees to test drive a full Islandora installation using local virtual machines or the online Islandora sandbox.

Technology Tools and Transforming Librarianship

Presenters: Lola Bradley, Reference Librarian, Upstate University; Breanne Kirsch, Coordinator of Emerging Technologies, Upstate University; Jonathan Kirsch, Librarian, Spartanburg County Public Library; Rod Franco, Librarian, Richland Library; Thomas Lide, Learning Engagement Librarian, Richland Library

Technology envelops every aspect of librarianship, so it is important to keep up with new technology tools and find ways to use them to improve services and better help patrons. This hands-on, interactive preconference will teach six to eight technology tools in detail and show attendees the resources to find out about 50 free technology tools that can be used in all libraries. There will be plenty of time for exploration of the tools, so please BYOD! You may also want to bring headphones or earbuds.

And be sure to attend the LITA President’s Program featuring Dr. Safiya Noble

Sunday June 26, 2016 from 3:00 pm to 4:00 pm

Dr. Noble is an Assistant Professor in the Department of Information Studies in the Graduate School of Education and Information Studies at UCLA. She conducts research in socio-cultural informatics, including feminist, historical and political-economic perspectives on computing platforms and software in the public interest. Her research is at the intersection of culture and technology in the design and use of applications on the Internet.

More Information and Registration

Check out the LITA at ALA Annual conference web page.

Library of Congress: The Signal: Assessing Digital Preservation at the John F. Kennedy Presidential Library

planet code4lib - Wed, 2016-03-02 19:40

The following is a guest post by Alice Sara Prael, National Digital Stewardship Resident at the John F. Kennedy Presidential Library. She participates in the NDSR-Boston cohort.

The John F. Kennedy Presidential Library began the “Access to Legacy” project in 2007 with the goal to digitize, describe, and permanently retain millions of presidential documents, photographs, and audiovisual recordings. Since the project began the Library has accumulated over 150 terabytes of data. With this much data in our holdings, how can we preserve the digital files over the long term? That’s where our NDSR project comes into play.

Alice Sara Prael, National Digital Stewardship Resident

The goal of the project is to ‘develop a long-range digital preservation strategy’ which would address all digital archival holdings at the John F. Kennedy Presidential Library. This is a challenging goal, so we broke it down into three phases. The first was to assess current infrastructure against community standards and make brief recommendations on how to improve digital preservation practices. The second phase will explore potential solutions to address the recommendations made in the first phase. The final phase will determine a single path forward based on the solutions explored and create an action plan for how to implement that solution. I recently completed a report of my initial findings and moved on to the second phase, researching potential systems and solutions.

Since the aim of my project is to make recommendations that will be carried out after my residency, advocacy has been key. I recognized that staff input would be incredibly important early on, so I started by interviewing archivists and IT personnel about their processes and how they use the systems in place at the library. Armed with the staff perspective, I dove into researching the systems through help guides and communication with support staff. At the library we use a digital asset management system called Documentum, created and donated by EMC. For storage we use Centera servers on-site and a mirrored back-up held off site; both storage systems are managed cooperatively by IT staff at the Library and EMC. There are other systems in place, mainly for indexing and access, but Documentum and the Centera storage were the focus of my project. Since these systems are proprietary it hasn’t always been easy gaining access to documentation. I was provided with a help guide, but many of the more technical details were acquired through conversations with EMC support staff.

Systems in use at The John F. Kennedy Presidential Library

During my research into the preservation practices and systems, I regularly referred to community standards and guidelines such as ISO 14721: Reference Model for an Open Archival Information System (OAIS), ISO 16363: Audit and Certification of Trustworthy Digital Repositories, and the National Digital Stewardship Alliance (NDSA) Levels of Digital Preservation. Each gives a slightly different perspective on what is required for digital preservation. Ideally we would want our program to pass an assessment based on any standard with top marks. However, in the reality of limited resources and staff time it’s important to recognize when to aim for “good enough” digital preservation. “Good enough” can be defined by the available resources, the needs of the collection, and priorities of the institution. It will be defined differently for different scenarios so we need to find out what is good enough for us.

After the completion of the first phase I wrote a report of initial findings. I grounded the report by connecting my recommendations to the Levels of Digital Preservation created by NDSA. The NDSA Levels are not as in depth as ISO 16363 and ISO 14721, but they are easier to understand at a glance, especially for those who are less familiar with the needs of digital preservation. It’s great that there are intermediary levels so an institution can address digital preservation without an all-or-nothing mindset. It also creates a useful visual aid for identifying strengths and weaknesses.

NDSA Levels color coded to show the strengths and weaknesses of digital preservation at the John F. Kennedy Presidential Library

We are strong on file formats and weaker when it comes to storage and geographic location. Making these points clearly and early on helps with long-term advocacy. With a clear starting point, we can continue to document how we improve and address these weaknesses. Now that we have identified specific places for improvement, I know where to focus during the next phase of the project.

Since the NDSA Levels are focused on the technological requirements, I pulled from the ISO standards to address the organizational and policy needs. I found that the JFK Library, like so many cultural heritage institutions, is in need of better documentation. Some processes have never been fully documented and live exclusively in the mind of the archivist, which becomes problematic when the archivist leaves – especially if it’s a sudden departure. As a new addition to the digital archives team, part of my charge has been to ask questions about the existing policies and to fill documentation gaps where necessary.

My work has focused on the largest gap in the existing documentation for digital archives, a digital preservation policy. Since a policy is a record of decisions, my initial focus was to identify the decisions for digital preservation – those that need to be made, those that have been made but not documented, and those that have been documented elsewhere. I started by reviewing the policies on the Scalable Preservation Environment’s wiki of published preservation policies. I found frameworks that best suited the digital preservation environment at the JFK Library. Once I had an outline for how a digital preservation policy might work and a list of decisions to make, I returned to the key stakeholders. Together we have created a draft policy, but it is still a work in progress. We have come to a consensus on how to address many digital preservation challenges, but parts of the drafted policy are still aspirational. We hope that once the NDSR project is complete and we have a clear implementation plan for improved digital preservation, the policy will be a true reflection of the practices at the Library.

HangingTogether: The Books Shackleton Took to Antarctica

planet code4lib - Wed, 2016-03-02 16:44

As was recently reported, the Royal Geographical Society in London digitized a photograph of Sir Ernest Shackleton’s Antarctic library taken in 1915 by Frank Hurley. They were then able to discern the book titles, which I have linked to WorldCat and any open copy I could find. Just imagine, you can read a book that Shackleton may have read before the Endurance was crushed in the pack ice and sank.

As you review the list of books he selected to take to one of the remotest parts of our planet, keep in mind that this happened long before the author of Harry Potter was born.

Books on Shackleton’s bookshelf:

About Roy Tennant

Roy Tennant works on projects related to improving the technological infrastructure of libraries, museums, and archives.


Open Knowledge Foundation: New Initiative: Open Data for Tax Justice #OD4TJ

planet code4lib - Wed, 2016-03-02 13:07

Every year countries lose billions of dollars to tax avoidance, tax evasion and more generally to illicit financial flows. According to a recent IMF estimate around $700 billion of tax revenues is lost each year due to profit-shifting. In developing countries the loss is estimated to be around $200 billion, which as a share of GDP represents nearly three times the loss suffered by OECD countries. Meanwhile, economist Gabriel Zucman estimates that certain components of undeclared offshore wealth total above $7 trillion, implying tax losses of $200 billion annually; Jim Henry’s work for TJN suggests the full total of offshore assets may range between $21 trillion and $32 trillion.

We want to transform the way that data is used for advocacy, journalism and public policy to address this urgent challenge by creating a global network of civil society groups, investigative reporters, data journalists, civic hackers, researchers, public servants and others.

Today, Open Knowledge and the Tax Justice Network are delighted to announce the launch of a new initiative in this area: Open Data for Tax Justice. We want to initiate a global network of people and organisations working to create, use and share data to improve advocacy and journalism around tax justice. The website is http://datafortaxjustice.net/ and we are using the hashtag #od4tj.

The network will work to rally campaigners, civil society groups, investigative reporters, data journalists, civic hackers, researchers, public servants and others; it will aim to catalyse collaborations and forge lasting alliances between the tax justice movement and the open data movement. We have received a huge level of support and encouragement from preliminary discussions with our initial members, and look forward to expanding the network and its activities over the coming months.

What is on the cards? We’re working on a white paper on what a global data infrastructure for tax justice might look like. We also want to generate more practical guidance materials for data projects – as well as to build momentum with online and offline events. We will kick off with some preliminary activities at this year’s global Open Data Day on Saturday 5th March. Tax justice will be one of the main themes of the London Open Data Day, and if you’d like to have a go at doing something tax related at an event that you’re going to, you can join the discussion here.

DuraSpace News: DuraSpace at the SPARC MORE Meeting

planet code4lib - Wed, 2016-03-02 00:00

Austin, TX  DuraSpace is a proud sponsor of the upcoming SPARC MORE Meeting, March 7-8, 2016 in San Antonio, Texas. The gathering of librarians, educators, and researchers will focus on the 2014 “Convergence” meeting theme and will explore the increasingly central role libraries play in the growing shift toward Open Access, Open Education and Open Data.

DuraSpace News: German DSpace User Group Meeting to be Held in Hamburg, Sept. 27, 2016

planet code4lib - Wed, 2016-03-02 00:00

From Jan Weiland, ZBW - Deutsche Zentralbibliothek für Wirtschaftswissenschaften

Hamburg, Germany  The ZBW - German National Library of Economics gladly invites you to join the next German DSpace User Group meeting in Hamburg:

Date: Tuesday, 27th September 2016, from 11 a.m. to 5 p.m.
Venue: ZBW, Neuer Jungfernstieg 21, 20354 Hamburg, Germany, fifth floor, Room 519

DuraSpace News: Introducing the first Open Peer Review Module for DSpace Repositories

planet code4lib - Wed, 2016-03-02 00:00

From Emilio Lorenzo, ARVO Consultores

Asturias, Spain  With the support of OpenAIRE, Open Scholar has coordinated a consortium of five partners to develop the first Open Peer Review Module (OPRM) for DSpace.

DuraSpace News: VIVO Updates for February 28–Anniversary, Upcoming Events, Open VIVO

planet code4lib - Wed, 2016-03-02 00:00

From Mike Conlon, VIVO Project Director

Richard Wallis: Evolving Schema.org in Practice Pt3: Choosing Where to Extend

planet code4lib - Tue, 2016-03-01 16:24

In this third part of the series I am going to concentrate less on the science of working with the technology of Schema.org and more on what you might call the art of extension.

It builds on the previous two posts The Bits and Pieces which introduces you to the mechanics of working with the Schema.org repository in GitHub and your own local version; and Working Within the Vocabulary which takes you through the anatomy of the major controlling files for the terms and their examples, that you find in the repository.

Art may be an over-ambitious word for the process that I am going to try and describe. However, it is not about rules, required patterns, syntaxes, and file formats – the science; it is about general guidelines, emerging styles & practices, and what feels right.  So art it is.

OK. You have read the previous posts in this series. You have said to yourself, “I only wish that I could describe [insert your favourite issue here] in Schema.org.” You are now inspired to do something about it, or to get together with a community of colleagues to address the usefulness of Schema.org for your area of interest.  Then comes the inevitable question…

Where do I focus my efforts – the core vocabulary or a Hosted Extension or an External Extension?

Firstly a bit of background to help answer that question.

The core of the Schema.org vocabulary has evolved since its launch by Google, Bing, and Yahoo! (soon joined by Yandex) in June 2011. By the end of 2015 its term definitions had reached 642 types and 992 properties.  They cover many, many sectors, commercial and not, including sport, media, retail, libraries, local businesses, health, audio, video, TV, movies, reviews, ratings, products, services, offers and actions.  Its generic nature has facilitated its spread of adoption across well over 10 million sites.  For more background I recommend the December 2015 article Schema.org: Evolution of Structured Data on the Web – Big data makes common schemas even more necessary, by Guha, Brickley and Macbeth.

That generic nature, however, does introduce issues for those in specific sectors wishing to focus in more detail on the entities and relationships specific to their domain whilst still being part of, or closely related to, Schema.org.  In the spring of 2015 an Extension Mechanism, consisting of Hosted and External extensions, was introduced to address this.

Reviewed/Hosted Extensions are domain focused extensions hosted on the Schema.org site. They will have been reviewed and discussed by the broad Schema.org community as to style, compatibility with the core vocabulary, and potential adoption.  An extension is allocated its own part of the schema.org namespace – auto.schema.org & bib.schema.org being the first two examples.

External Extensions are created and hosted separately from Schema.org in their own namespace.  Although related to and building upon [extending] the Schema.org vocabulary, these extensions are not part of the vocabulary.  I am editor of an early example of such an external extension, BiblioGraph.net, which predates the launch of the extension mechanism.  Much more recently GS1 (The Global Language of Business) have published their External Extension – the GS1 Web Vocabulary at http://gs1.org/voc/.

An example of how gs1.org extends Schema.org can be seen by inspecting the class gs1:WearableProduct, which is a subclass of gs1:Product, which in turn is defined as an exact match to schema:Product.  Looking at an example property of gs1:Product, gs1:brand, we can see that it is defined as a subproperty of schema:brand.  This demonstrates how Schema.org is foundational to GS1.org.
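To make that pattern concrete, here is a minimal sketch of those relationships in Turtle. The choice of skos:exactMatch as the linking predicate, and the exact prefixes, are my assumptions for illustration; consult the published vocabulary at http://gs1.org/voc/ for the authoritative definitions.

@prefix gs1:    <http://gs1.org/voc/> .
@prefix schema: <http://schema.org/> .
@prefix rdf:    <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs:   <http://www.w3.org/2000/01/rdf-schema#> .
@prefix skos:   <http://www.w3.org/2004/02/skos/core#> .

# gs1:Product is declared to be an exact match of the Schema.org type
gs1:Product a rdfs:Class ;
    skos:exactMatch schema:Product .

# gs1:WearableProduct is a domain-specific specialisation of gs1:Product
gs1:WearableProduct a rdfs:Class ;
    rdfs:subClassOf gs1:Product .

# gs1:brand narrows the generic schema:brand property
gs1:brand a rdf:Property ;
    rdfs:subPropertyOf schema:brand .

Expressed this way, a consumer that only understands Schema.org can still interpret the data in terms of schema:Product and schema:brand, while GS1-aware consumers get the extra specificity.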

Choosing Where to Extend

This initially depends on what and how much you are wanting to extend.

If all you are thinking of is adding the odd property to an already existent type, adding another type to the domain and/or range of a property, or improving the description of a type or property, you probably do not need to create an extension.  Raise an issue, and after some thought and discussion, go for it – create the relevant code and associated Pull Request for the Schema.org GitHub repository.

More substantial extensions require a bit of thought.

When proposing an extension to the Schema.org vocabulary, the above-described structure provides the extender/developer with three options: extend the core; propose a hosted extension; or develop an external extension.  Potentially a proposal could result in a combination of all three.

For example a proposal could be for a new Type (class) to be added to the core, with few or no additional properties other than those inherited from its super type.  In addition more domain focused properties, or subtypes, for that new type could be proposed as part of a hosted extension, and yet more very domain specific ones only being part of an external extension.

Although not an exact science, there are some basic principles behind such choices.  These principles are based upon the broad context and use of Schema.org across the web; the consuming audience for the data that would be marked up; the domain-specific knowledge of those that would do the marking up and reviewing the proposal; and the domain-specific need for the proposed terms.

Guiding Questions
A decision as to whether a proposed term should be in the core, a hosted extension, or an external extension can be aided by the answers to some basic questions:

  • Public or not public? Will the data that would be marked up using the term normally be shared on the web?  Would you expect to find that information on a publicly accessible web page today? If the answer is not public, there is no point in proposing the term for the core or a hosted extension.  It would be defined in an external extension.
  • General or Specific?  Is the level of information to be marked up, or the thing being described, of interest or relevance to non-domain-specific consumers? If the answer is general, the term could be a candidate for a core term. For example, Train could be considered as a potential new subtype of Vehicle to describe that mode of transport that is relevant for general travel discovery needs.  Whereas SteamTrain and its associated specific properties about driving wheel configuration etc. would be more appropriate to a railway extension.
  • Popularity? How many sites on the web would potentially be expected to make use of these term(s)? How many webmasters would find them useful? If the answer is lots, you probably have a candidate for the core. If it is only a few hundred, especially if they would all be in a particular focus of interest, it would more likely be a candidate for a hosted extension. If it is a small number, it might be more appropriate in an external extension.
  • Detailed or Technical? Is the information, or the detailed nature of proposed properties, too technical for general consumption? If yes, the term should be proposed for a hosted or external extension. In the train example above, the fact that a steam train is being referenced could be contained in the text-based description property of a Train type. Whereas the type of steam engine configuration could be a defined value for a property in an external extension.
Evolutionary Steps

When defining and then proposing enhancements to the core of Schema, or for hosted extensions, there is a temptation to take an area of concern, analyse it in detail and then produce a fully complete proposal.   Experience has demonstrated that it is beneficial to gain feedback on the use and adoption of terms before building upon them to extend and add more detailed capability.

Based on that experience, the way to extend Schema.org should be by steps that build upon each other in stages.  For example, introducing a new subtype with few, if any, new specific properties.  Initial implementers can use textual description properties to qualify its values in this initial form.  In later releases more specific properties can be proposed, their need being justified by the take-up, visibility, and use of the subtype on sites across the web.
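As a sketch of what that first evolutionary step might look like in markup, here is some hypothetical Turtle using the Train example from above. Note that Train is not (at the time of writing) a Schema.org type, and the example URI and wording are assumptions purely for illustration.

@prefix schema: <http://schema.org/> .

# Step one: a new, generally useful subtype with no new properties.
# Domain-specific detail rides along in the generic description property.
<http://example.org/service/morning-express> a schema:Train ;
    schema:name "Morning Express" ;
    schema:description "A steam-hauled heritage service, pulled by a 4-6-2 locomotive." .

# Step two, in a later release and only if adoption justifies it: the same
# detail could migrate to specific properties defined in a hosted or
# external extension, e.g. ext:wheelConfiguration "4-6-2" .

Only once implementers have demonstrated take-up of the subtype in this minimal form would the more specific properties be worth proposing.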

Closing Summary

Several screen-full’s and a few weeks ago, this started out as a simple post in an attempt to cover off some of the questions I am often asked about how Schema.org is structured, and how it can be made more appropriate for this project or that domain.  Hopefully you find the distillation of my experience and my personal approach, across these three resulting posts on Evolving Schema.org in Practice, enlightening and helpful.  Especially if you are considering proposing a change, enhancement or extension to Schema.org.

My association with Schema.org – applying the vocabulary; making personal proposals; chairing W3C Community groups (Schema Bib Extend, Schema Architypes, The Tourism Structured Web Data Community Group); participating in others (Schema Course extension Community Group, Sport Schema Community Group, Financial Industry Business Ontology Community Group, Schema.org Community Group); being editor of the BiblioGraph.net extension vocabulary; working with various organisations such as OCLC, Google’s Schema.org team, and the Financial Industry Business Ontology (FIBO); and preparing & presenting workshops & keynotes at general data and industry specific events –  has taught me that there is much similarity between, on the surface disparate, industries and sectors when it comes to preparing structured data to be broadly shared and understood.

Often that similarity is hidden behind sector-specific views, understanding, and issues in dealing with the wide open web of structured data, where each sector is just another interested group looking to benefit from shared recognition of common schemas by the major search engine organisations. But that is all part of the joy and challenge I relish when entering a new domain and meeting new interested and motivated people.

Of course, enhancing, extending, and evolving the Schema.org vocabulary is only one part of the story. Actually applying it to aid the discovery of your organisation, your resources, and the web sites that reference them is the main goal for most.

I get the feeling there may be another blog post series I should be considering!

FOSS4Lib Recent Releases: veraPDF - 0.10

planet code4lib - Tue, 2016-03-01 14:45

Last updated March 1, 2016. Created by Peter Murray on March 1, 2016.

Package: veraPDF
Release Date: Monday, February 29, 2016

LITA: I’m a Librarian. Of tech, not books.

planet code4lib - Tue, 2016-03-01 03:38
Image from http://www.systemslibrarian.co.za/

When someone finds out I'm a librarian, they automatically think I know everything there is to know about, well, books. The thing is, I don't. I got into libraries because of the technology. My career in libraries started with the take-off of ebooks, a supposed library replacement. Factor in the Google "scare," and librar*s were supposedly done for. Librar*s were frantic to debunk the idea that they were no longer useful: insert the perfect time and opportunity to join libraries and technology.

I am a Systems Librarian, and the most common and loaded question I get from non-librarians is (in two parts), "What does that mean? And what do you do?" Usually this results in a very simple response:
I maintain the system the library sits on, the one that gives you access to the collection from your computer in the comfort of your home. This tool that lets you view the collection online, borrow books, and access databases and all sorts of resources from your pajamas: my job is to make sure it keeps running the way we need it to, so you have the access you want.
My response aims to give a physical picture of a technical thing. There is so much we do as systems librarians that if I were to get in-depth about what I do, we'd be there for a while. Between you and me, I don't care to talk *that* much, but maybe I should.

There's a lot more to being a Systems Librarian, much of which is unspoken; you don't know about it until you're in the throes of the job. There was a Twitter conversation prompted when someone asked for recommendations on things to teach, or include in on-the-job training, for someone interested in library systems. It got me thinking, because I knew little to nothing about Systems Librarianship and just happened upon it because the job description sounded really interesting and I was already a little bit qualified. It also allowed me to build a skill set that provided a gateway out of libraries if and when the time arrived. Looking back, I wonder what I would have wanted to know before going into systems, and most importantly, would it have changed my decision to do so, or rather, to stay? So what is it to be a Systems Librarian?

The unique breed: A Systems Librarian:

  • makes sure users can virtually access a comprehensive list of the library’s collection
  • makes sure library staff can continue to maintain that ever-growing collection
  • makes sure that when things in the library system break, everything possible is done to repair them
  • needs to be able to accurately assess the problem presented by the frantic library staff member who cannot log into their ILS account
  • needs to be approachable while still being the person that may often say no
  • is an imperfect person who maintains an imperfect system so that multiple departments doing multiple tasks can do their daily work
  • must combine the principles of librarianship with the abilities of computing technology
  • must be able to communicate the concerns and needs of the library to IT and communicate the concerns and needs of IT to the library

Things I would have wanted to know about Systems Librarianship: When you’re interested but naive about what it takes.

  • You need to be able to see the big and small pictures at once and how every piece fits into the puzzle
  • Systems Librarianship requires you to communicate often, and on difficult-to-explain topics. Take time to master this. You will be doing a lot of it, and you want everyone involved to understand, because all parties will most likely be affected by the decision.
  • You don’t actually get to sit behind a computer all day every day just doing your thing.
  • You are the person to bridge the gap between IT and librarians. Take the time to understand the inner workings of both groups, especially as they relate to the library.
  • You’ll be expected to communicate between IT staff and Library staff why their request, no matter the intention, will or will not work AND if it will work, but would make things worse – why.
  • You will have a new problem to tackle almost every day. This is what makes the job so great.
  • You need to understand the tasks of every department in the library. Take the time to get to know the staff of those departments as well – it will give insight to how people work.
  • You need to be able to say no to a request that should not or cannot be done; yes, even to administration.
  • No one really knows all you do, so it’s important to take the time to explain your process when the time calls for it.
  • You’ll most likely inherit a system setup that is confusing at best. It’s your job to keep it going, make it better even.
  • You’ll be expected to make the “magic” happen, so you’ll need to be able to explain why things take time and don’t appear like a rabbit out of a hat.
  • You’ll benefit greatly from being open about how the system works and how one department’s requests can dramatically, or not so dramatically, affect another part of the system.
  • Be honest when you give timelines. If you think the job will take 2 weeks, give yourself 3.
  • You will spend a lot of time working with vendors. Don't take their word for "it," whatever "it" happens to be.
  • This is important: you're not alone. Ask questions on the email lists, chat groups, Twitter, etc.
  • You will be tempted to keep working on that problem after hours. Schedule time for it if you must, but do not let it take over your life; make sure you find your work/life balance.

Being a systems librarian is hard work. It's not always an appreciated job, but it's necessary, and in the end, knowing everything I do, I'd choose it again. Being a tech librarian is awesome, and you don't have to know everything about books to be good at it. I finally accepted this after months of ridicule from my trivia team for "failing" at librarianship because I didn't know the answer to that obscure reference to a book by an author from 65 years ago.

Also, those lists are not, by any means, complete — I’m curious, what would you add?

Possibly of interest, a bit dated (2011) but a comprehensive list of posts on systems librarianship: https://librarianmandikaye.wordpress.com/systems-librarian/

HangingTogether: The end of an era — goodbye to Jim Michalko

planet code4lib - Tue, 2016-03-01 00:19

Today is the day when we say goodbye to our leader and colleague Jim Michalko. Rather than wallowing in our loss, we’d like this post to celebrate Jim’s accomplishments and acknowledge his many wonderful qualities.

Jim Michalko February 2016

Before OCLC, Jim was the president of the Research Libraries Group (RLG). He came to RLG from the administration team at the University of Pennsylvania Libraries in 1980. In those relatively early days of library automation, RLG was very much a chaotic start-up. Jim, with both an MLS and an MBA, came on as the business manager and, as part of the senior administrative team, helped get the organization on more stable footing. He was named RLG president in 1989.

In 2006, Jim once again played a key role in a time of uncertainty, helping to bring RLG into the OCLC fold. This included both integrating RLG data assets into OCLC services and carrying programmatic activities forward into OCLC Research. A key part of those programmatic activities is collaboration with the research library community, and the OCLC Research Library Partnership is a key component in driving our work agenda. Under Jim's leadership, the Partnership has grown from 110 institutions in 2006 to over 170 now, including libraries at 25 of the top 30 universities in the Times Higher Education World University Rankings.

Jim is a wise and gentle leader with a sardonic sense of humor. We’ve appreciated his ability to foster experimentation (and his patience while those experiments played out), his willingness to get obstacles out of our way so that we can get our work done, his tolerance of our quirks and other personal qualities, and his ability to maximize our strengths.

Jim’s retirement is part of a larger story that is playing out in the larger research library community as those who have overseen generations of change in technology, education, and policy are moving on. We will honor these leaders by following in their footsteps, while reminding ourselves that the path they set was marked by innovation.


About Merrilee Proffitt

