
LITA: Tips for Managing Electronic Resources

planet code4lib - Fri, 2015-04-24 13:00

Credit: Pixabay user Geralt, CC0 Public Domain

Last fall, I unexpectedly took on the electronic resources management (ERM) role at my university. Consequently, I had to teach myself–on the fly–how to manage 130+ electronic resources, along with a budget of several hundred thousand dollars. My initial six months focused on finances, licensing, and workflows rather than access, discoverability, or other key issues. So here are some life-saving tips for all you new e-librarians, because I know you didn’t learn this in library school!

Let’s start, as always, with the users.

Evaluate user needs.

Are you new at your job? Then begin by conducting a needs assessment, formal or informal. Check the programs and course offerings to make sure they still align with the e-resources for which you pay. Seek out faculty, colleagues, and students to get a sense of what resources they assign, use, or see used. Pull usage statistics from each database–and be sure to cross-reference this vendor data with web analytics, because vendor data can be self-serving to the point of being fictitious. Do your users use each resource enough to justify its cost? And do they really require the level of access you’re paying for? If not, can the resources be marketed and usage increased? And if there’s just no market, can those funds be reallocated and more relevant resources acquired?

Be budget-conscious.

Budgets are a huge consideration for any e-resources manager given that libraries are constantly absorbing budget cuts while vendors raise prices 3-5% a year, on average. Can your library afford to provide the resources it currently offers? More importantly, can the funds be used better? Can you save ten thousand dollars on one contract simply by renegotiating the number of concurrent users so as to reflect enrollment? Can you review your databases for duplication of content? Can you tap free, open access resources to plug content gaps or replace proprietary platforms? Can you talk to vendors and peruse old records to check for any unused credits lying around? And above all, how can you make the case for spending more money on electronic resources?

Negotiate terms.

Often you don’t actually need to throw more money at e-resources to get the best value. Most vendor reps are authorized to reduce off-the-shelf pricing by 20-25% without consulting their boss, and if you push hard enough–especially with smaller or longstanding service providers with a stake in their clientele–you can save potentially huge sums that can then be reallocated to purchase more databases or ebooks. And even if you don’t get a big discount, you can at least get special add-ons or other privileges. But you have to be willing to negotiate and drive a hard bargain. Don’t be mean, because vendors are people too–usually very nice people; I’m Facebook friends with several. But we have to remember that our first duty is to get the best value for our taxpayers or students, not to “be nice” to the private sector and hand them all our money without demur.

Take advantage of add-ons.

Even if you aren’t a tough negotiator, you can derive maximum benefit from your subscriptions by exploring untapped services and add-ons most vendors provide. Want to market an e-resource? Check with the vendor–chances are that they can provide free web-based training and marketing materials. Annoyed that a database doesn’t integrate with your discovery layer? Talk to the vendor’s tech team; chances are that you can work something out. And major subscriptions often come with package deals and free add-ons. For example, libraries that use OCLC’s WorldShare as their ILS may be surprised to discover that ContentDM comes bundled with a WMS subscription.

Think consortia.

Speaking of packages, remember the value of group or consortial deals! We save 15% on our EBSCO databases through our free membership in an independent college consortium. Scan your environment to see if there are any great consortial arrangements out there. If not, consider initiating one with area libraries that have similar user populations and information needs. Talk to your state association and regional network or cooperative as well as to folks at your university. That said, be sure to evaluate critically the e-resources and terms of each consortial deal–beware of paying for stuff you don’t need, let alone paying twice for databases you already have.

Learn to love documentation.

Document everything. Seriously. When I started my position, there was no systematic workflow or documentation in place, older invoices were packed loose into folders, and invoices would trickle in randomly through snail mail. I created budget spreadsheets listing databases, vendors, pricing, and period of service; digitized and classified a year’s worth of records; and converted the system to e-invoicing. I also created a master password list for all administrative logins and a contact list for the reps and tech support for each e-resource. Not only does this streamline your workflows and preempt internal audits, it also enables you to document what e-resources you have, how much money you have saved, and how much money you can spend before the new fiscal year.

Read the contracts.

Read licensing agreements and contracts before signing. PLEAZ. Words are negotiable, same as prices. Can you tweak the wording to soften your legal obligations and remove financial penalties for violating the terms of use? Can you demand a VPAT documenting the e-resource’s accessibility? Can you add a clause excluding the library from liability if a user or advocacy group sues because disabled users cannot access the e-resource? Can you give the library a quick out clause in cases of multiyear contracts? Can you get reimbursed if the e-resource goes offline for an extended period? . . . In short, can you modify the standard contract? In all cases, the answer is yes. You can.

Ensure legal compliance.

Credit: Pixabay user Geralt, CC0 Public Domain

Be sure your institution is complying with the terms of the contract. You don’t want to get sued or have your access terminated without notice because people didn’t read the contract carefully enough and gave two hundred students access to an e-resource budgeted for only two users.

Closing thought.

Be that person who interrogates assumptions, saves the library money, and better serves staff and end users. If something has been done the same way for years, chances are it can be done better.

Do you manage electronic resources? Have you in the past? Please share your tips below!

ACRL TechConnect: Best Practices for Hacking Third-Party Sites

planet code4lib - Fri, 2015-04-24 11:52

While customizing vendor web services is not the most glamorous task, it’s something almost every library does. Whether we have full access to a templating system, as with LibGuides 2, or merely the ability to insert an HTML header or footer, as on many database platforms, we are tested by platform limitations and a desire to make our organization’s fractured web presence cohesive and usable.

What does customizing a vendor site look like? Let’s look at one example before going into best practices. Many libraries subscribe to EBSCO databases, which have a corresponding administrative side “EBSCOadmin”. Electronic Resources and Web Librarians commonly have credentials for these admin sites. When we sign into EBSCOadmin, there are numerous configuration options for our database subscriptions, including a “branding” tab under the “Customize Services” section.

While EBSCO’s branding options include specifying the primary and secondary colors of their databases, there’s also a “bottom branding” section which allows us to inject custom HTML. Branding colors can be important, but this post focuses on effectively injecting markup onto vendor web pages. The steps for doing so in EBSCOadmin are numerous and not informative for any other system, but the point is that when given custom HTML access one can make many modifications, from inserting text on the page, to an entirely new stylesheet, to modifying user interface behavior with JavaScript. Below, I’ve turned footer links orange and written a message to my browser’s JavaScript console using the custom HTML options in EBSCOadmin.
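For illustration, here is a minimal sketch of the kind of thing a “bottom branding” block can hold; the #footer selector and the console message are assumptions for this example, not EBSCO’s actual markup.

<style>
  /* recolor the footer links (assumed selector; EBSCO's real footer markup differs) */
  #footer a { color: orange; }
</style>
<script>
  // leave a note for ourselves (or other staff) in the browser's JavaScript console
  console.log('Custom bottom branding loaded by the library');
</script>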

These opportunities for customization come in many flavors. We might have access only to a section of HTML in the header or footer of a page. We might be customizing the appearance of our link resolver, subscription databases, or catalog. Regardless, there are a few best practices which can aid us in making modifications that are effective.

General Best Practices

What happens when vendors don’t put headings in HTML elements: pic.twitter.com/8tECaBqRbN

— Matthew Reidsma (@mreidsma) April 21, 2015

Ditch best practices when they become obstacles

It’s too tempting; I have to start this post about best practices by noting their inherent limitations. When we’re working with a site designed by someone else, the quality of our own code is restricted by decisions they made for unknown reasons. Commonly-spouted wisdom—reduce HTTP requests! don’t use eval! ID selectors should be avoided!—may be unusable or even counter-productive.

To note but one shining example: CSS specificity. If you’ve worked long enough with CSS then you know that it’s easy to back yourself into a corner by using overly powerful selectors like IDs or—the horror—inline style attributes. These methods of applying CSS have high specificity, which means that CSS written later in a stylesheet or loaded later in the HTML document might not override them as anticipated, a seeming contradiction in the “cascade” part of CSS. The hydrogen bomb of specificity is the !important modifier which automatically overrides anything but another !important later in the page’s styles.

So it’s best practice to avoid inline style attributes, ID selectors, and especially !important. Except that when hacking on vendor sites, they’re often necessary. What if we need to override an inline style? Suddenly, !important looks necessary. So let’s not get caught up following rules written for people in greener pastures; we’re already in the swamp, and throwing some mud around may be called for.
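As a quick sketch of that scenario (the markup and class names below are hypothetical, not any particular vendor’s): suppose the vendor hard-codes a color in an inline style attribute. A rule in our injected stylesheet only wins if it uses !important.

/* Vendor markup (hypothetical): <p style="color: #999;">Search tips...</p> */

/* this loses to the inline style no matter how specific the selector is... */
.search-help p {
    color: #000;
}

/* ...but an !important declaration overrides a non-!important inline style */
.search-help p {
    color: #000 !important;
}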

There are dozens of other examples that come to mind. For instance, in serving content from a vendor site where we have no server-side control, we may be forced to violate web performance best practices such as sending assets with caching headers and utilizing compression. While minifying code is another performance best practice, for small customizations it adds little but obfuscates our work for other staff. Keeping a small script or style tweak human-readable might be more prudent. Overall, understanding why certain practices are recommended, and when it’s appropriate to sacrifice them, can aid our decision-making.

Test. Test. Test. When you’re done testing, test again

Whenever we’re creating an experience on the web it’s good to test. To test with Chrome, with Firefox, with Internet Explorer. To test on an iPhone, a Galaxy S4, a Chromebook. To test on our university’s wired network, on wireless, on 3G. Our users are vast; they contain multitudes. We try to represent their experiences as best as possible in the testing environment, knowing that we won’t emulate every possibility.

Testing is important, sure. But when hacking a third party site, the variance is more than doubled. The vendor has likely done their own testing. They’ve likely introduced their own hacks that work around issues with specific browsers, devices, or connectivity conditions. They may be using server-side device detection to send out subtly different versions of the site to different users; they may not offer the same functionality in all situations. All of these circumstances mean that testing is vitally important and unending. We will never cover enough ground to be sure our hacks are foolproof, but we had better try or they won’t work at all.

Analytics and error reporting

Speaking of testing, how will we know when something goes wrong? Surely, our users will send us a detailed error report, complete with screenshots and the full schematics of every piece of hardware and software involved. After all, they do not have lives or obligations of their own. They exist merely to make our code more error-proof.

If, however, for some odd reason someone does not report an error, we may still want to know that one occurred. It’s good to set up unobtrusive analytics that record errors or other measures of interaction. Did we revamp a form to add additional validation? Try tracking what proportion of visitors successfully submit the form, how often the validation is violated, how often users submit invalid data multiple times in a row, and how often our code encounters an error. There are some intriguing client-side error reporting services out there that can catch JavaScript errors and detail them for our perusal later. But even a little work with events in Google Analytics can log errors, successes, and everything in between. With the mere information that problems are occurring, we may be able to identify patterns, focus our testing, and ultimately improve our customizations and end-user experience.
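For example, a small listener along these lines (a sketch that assumes Google’s analytics.js and its ga() function are already loaded on the page) reports uncaught JavaScript errors as Google Analytics events:

(function() {
    // report uncaught errors as non-interaction Google Analytics events
    window.addEventListener('error', function(e) {
        if (typeof ga === 'function') {
            ga('send', 'event', 'JS Error', e.message || 'unknown error',
               (e.filename || '') + ':' + (e.lineno || ''), { nonInteraction: true });
        }
    });
}());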

Know when to cut your losses

Some aspects of a vendor site are difficult to customize. I don’t want to say impossible, since one can do an awful lot with only a single <script> tag to work with, but unfeasible. Sometimes it’s best to know when sinking more time and effort into a customization isn’t worth it.

For instance, our repository has a “hierarchy browse” feature which allows us to present filtered subsets of items to users. We often get requests to customize the hierarchies for specific departments or purposes—can we change the default sort, can we hide certain info here but not there, can we use grid instead of list-based results? We probably can, because the hierarchy browse allows us to inject arbitrary custom HTML at the top of each section. But the interface for doing so is a bit clumsy and would need to be repeated everywhere a customization is made, sometimes across dozens of places simply to cover a single department’s work. So while many of these change requests are technically possible, they’re unwise. Updates would be difficult and impossible to automate, virtually ensuring errors are introduced over time as I forget to update one section or make a manual mistake somewhere. Instead, I can focus on customizing the site-wide theme to fix other, potentially larger issues with more maintainable solutions.

A good alternative to tricky and unmaintainable customizations is to submit a feature request to the vendor. Some vendors have specific sites where we can submit ideas for new features and put our support behind others’ ideas. For instance, the Innovative Users Group hosts an annual vote where members can select their most desired enhancement requests. Remember that vendors want to make a better product after all; our feedback is valued. Even if there’s no formal system for submitting feature requests, a simple email to our sales representative or customer support can help.

CSS Best Practices

@mreidsma @phette23 Sounds like a LITA Guide. z-index: 100001 !important; how to customize vendor sites.

— Michael Schofield (@schoeyfield) April 9, 2015

While the above section spoke to general advice, CSS and JavaScript have a few specific peculiarities to keep in mind while working within a hostile host environment.

Don’t write brittle, overly-specific selectors

There are two unifying characteristics of hacking on third-party sites: 1) we’re unfamiliar with the underlying logic of why the site is constructed in a particular way and 2) everything is subject to change without notice. Both of these make targeting HTML elements, whether with CSS or JavaScript, challenging. We want our selectors to be as flexible as possible, to withstand as much change as possible without breaking. Say we have the following list of helpful tools in a sidebar:

<div id="tools">
    <ul>
        <li><span class="icon icon-hat"></span><a href="#">Email a Librarian</a></li>
        <li><span class="icon icon-turtle"></span><a href="#">Citations</a></li>
        <li><span class="icon icon-unicorn"></span><a href="#">Catalog</a></li>
    </ul>
</div>

We can modify the icons listed with a selector like #tools > ul > li > span.icon.icon-hat. But many small changes could break this style: a wrapper layer injected in between the #tools div and the unordered list, a switch from unordered to ordered list, moving from <span>s for icons to another tag such as <i>. Instead, a selector like #tools .icon.icon-hat assumes that little will stay the same; it expects there to be icons inside the #tools section, but doesn’t care about anything in between. Some assumptions have to stay; that’s the nature of customizing someone else’s site, but it’s pretty safe to bet on the icon classes remaining.

In general, sibling and child selectors make for poor choices for vendor sites. We’re suddenly relying not just on tags, classes, and IDs to stay the same, but also the particular order that elements appear in. I’d also argue that pseudo-selectors like :first-child, :last-child, and :nth-child() are dangerous for the same reason.

Avoid positioning if possible

Positioning and layout can be tricky to get right on a vendor site. Unless we’re confident in our tests and have covered all the edge cases, try to avoid properties like position and float. In my experience, many poorly structured vendor sites employ ad hoc box-sizing measurements, float-based layout, and lack a grid system. These are all a recipe for weird interconnections between disparate parts—we try to give a call-out box a bit more padding and end up sending the secondary navigation flying a thousand pixels to the right offscreen.

display: none is your friend

display: none is easily my most frequently used CSS property when I customize vendor sites. Can’t turn off a feature in the admin options? Hide it from the interface entirely. A particular feature is broken on mobile? Hide it. A feature is of niche appeal and adds more clutter than it’s worth? Hide it. The footer? Yeah, it’s a useless advertisement, let’s get rid of it. display: none is great but remember it does affect a site’s layout; the hidden element will collapse and no longer take up space, so be careful when hiding structural elements that are presented as menus or columns.
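A couple of hedged examples of the pattern (the selectors are guesses at a hypothetical vendor page, not any real product’s markup):

/* hide an advertisement footer we can't turn off in the admin options */
#footer-ad {
    display: none;
}

/* hide a niche, clutter-adding panel only on small screens */
@media (max-width: 480px) {
    .citation-export-panel {
        display: none;
    }
}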

Attribute selectors are excellent

Attribute selectors, which enable us to target an element by the value of any of its HTML attributes, are incredibly powerful. They aren’t very common, so here’s a quick refresher on what they look like. Say we have the following HTML element:

<a href="http://example.com" title="the best site, seriously" target="_blank">

This is an anchor tag with three attributes: href, title, and target. Attribute selectors allow us to target an element by whether it has an attribute or an attribute with a particular value, like so:

/* applies to <a> tags with a "target" attribute */
a[target] {
    background: red;
}

/* applies to <a> tags with an "href" that begins with "http://"
   this is a great way to style links pointed at external websites
   or one particular external website! */
a[href^="http://"] {
    cursor: help;
}

/* applies to <a> tags with the text "best" anywhere in their "title" attribute */
a[title*="best"] {
    font-variant: small-caps;
}

Why is this useful among the many ways we can select elements in CSS? Vendor sites often aren’t anticipating all the customizations we want to employ; they may not provide handy class and ID styling hooks where we need them. Or, as noted above, the structure of the document may be subject to change either over time or across different pieces of the site. Attribute selectors can help mitigate this by making style bindings more explicit. Instead of saying “change the background icon for some random span inside a list inside a div”, we can say “change the background icon for the link that points at our citation management tool”.

If that’s unclear, let me give another example from our institutional repository. While we have the ability to list custom links in the main left-hand navigation of our site, we cannot control the icons that appear with them. What’s worse, there are virtually no styling hooks available; we have an unadorned anchor tag to work with. But that turns out to be plenty for a selector of the form a[href$=hierarchy] to target all <a>s with an href ending in “hierarchy”; suddenly we can define icon styles based on the URLs we’re pointing at, which is exactly what we want to base them on anyway.
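A sketch of that icon trick, with an assumed icon path rather than the repository’s real one:

/* give the "hierarchy browse" navigation link its own icon, keyed off its URL */
a[href$="hierarchy"] {
    background: url('/static/icons/hierarchy.png') no-repeat left center;
    padding-left: 20px; /* make room for the icon */
}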

Attribute selectors are brittle in their own ways—when our URLs change, these icons will break. But they’re a handy tool to have.

JavaScript Best Practices

Avoid the global scope

JavaScript has a notorious problem with global variables. By default, all variables lacking the var keyword are made global. Furthermore, variables outside the scope of any function will also be global. Global variables are considered harmful because they too easily allow unrelated pieces of code to interact; when everything’s sharing the same namespace, the chance that common names like i for index or count are used in two conflicting contexts increases greatly.

To avoid polluting the global scope with our own code, we wrap our entire script customizations in an immediately-invoked function expression (IIFE):

(function() {
    // do stuff here
}())

Wrapping our code in this hideous-looking construction gives it its own scope, so we can define variables without fear of overwriting ones in the global scope. As a bonus, our code still has access to global variables like window and navigator. However, global variables defined by the vendor site itself are best avoided; it is possible they will change or are subject to strange conditions that we can’t determine. Again, the fewer assumptions our code makes about how the vendor’s site works, the more resilient it will be.

Avoid calling vendor-provided functions

Oftentimes the vendor site itself will put important functions in the global scope, functions like submitForm or validate whose intention seems quite obvious. We may even be able to reverse engineer their code a bit, determining what parameters we should pass to these functions. But we must not succumb to the temptation to actually reference their code within our own!

Even if we have a decent handle on the vendor’s current code, it is far too subject to change. Instead, we should seek to add or modify site functionality in a more macro-like way; instead of calling vendor functions in our code, we can automate interactions with the user interface. For instance, say the “save” button is in an inconvenient place on a form and has the following code:

<button type="submit" class="btn btn-primary" onclick="submitForm(0)">Save</button>

We can see that the button saves the form by calling the submitForm function when it’s clicked with a value of 0. Maybe we even figure out that 0 means “no errors” whereas 1 means “error.” (True story: I reverse engineered a vendor form where this appeared to be the case.) So we could create another button somewhere which calls this same submitForm function. But so many changes break our code: if the meaning of the “0” changes, if the function name changes, or if something else happens when the save button is clicked that’s not evident in the markup. Instead, we can have our new button trigger the click event on the original save button exactly as a user interacting with the site would. In this way, our new save button should emulate exactly the behavior of the old one through many types of changes.
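A minimal sketch of that approach, assuming the markup above (the new button’s text and placement are invented for illustration):

(function() {
    // find the vendor's original save button without referencing submitForm directly
    var originalSave = document.querySelector('button[onclick^="submitForm"]');
    if (!originalSave) { return; } // the markup changed; fail quietly

    // add a duplicate save button in a more convenient spot (here, the top of the form)
    var ourSave = document.createElement('button');
    ourSave.type = 'button';
    ourSave.textContent = 'Save';
    ourSave.addEventListener('click', function() {
        // emulate a real user click so every vendor handler runs, whatever it does
        originalSave.click();
    });

    var form = originalSave.form;
    if (form) { form.insertBefore(ourSave, form.firstChild); }
}());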

{{Insert Your Best Practices Here}}

Web-savvy librarians of the world, what are the practices you stick to when modifying your LibGuides, catalog, discovery layer, databases, etc.? It’s actually been a while since I did customization outside of my college’s IR, so the ideas in this post are more opinion than practice. If you have your own techniques—or disagree with the ones in this post!—we’d love to hear about it in the comments.

DuraSpace News: Open Repository Welcomes New Client: South African Medical Research Council

planet code4lib - Fri, 2015-04-24 00:00

By James Evans, Open Repository

Open Repository is delighted to announce another new client, the South African Medical Research Council (SAMRC). The SAMRC is Open Repository’s first client in sub-Saharan Africa, and the platform now operates repositories on behalf of a growing client base on six continents.

DuraSpace News: DSquare Technology News: DSpace 4 and 5 Now Available in Hindi

planet code4lib - Fri, 2015-04-24 00:00

DSquare Technologies is focused on providing turnkey solutions in the field of Enterprise Content Management.  Recently the National Institute of Immunology, New Delhi (NII) selected DSpace for hosting its institutional repository, which will contain content such as books, theses, annual reports, research reports and more.  Users will be able to access this content based on its nature, e.g.

Harvard Library Innovation Lab: Link roundup April 23, 2015

planet code4lib - Thu, 2015-04-23 15:24

This is the good stuff.

Flip-Flap | THE BEACH LAB

then I went here

How Dalziel and Pow Realized This Awesome Interactive Touch Wall – Core77

amazing

John Harvard ‘speaks’ | Harvard Gazette

Harvard is animating the famous John Harvard Statue

HTTP SEARCH Method

HTTP search. Maybe? Searching is so dang common.

DPLA: DPLAfest 2015: That’s a Wrap!

planet code4lib - Thu, 2015-04-23 14:50

DPLAfest 2015 was one for the history books! Bringing together more than 300 people from across the country (and world!), this year’s DPLAfest was two days’ worth of excellent conversations, workshops, networking, hacking, and more. Missed the action, or just looking for a one-stop summary of the event? Look no further — this post contains all media, outputs, and other materials associated with the second annual DPLAfest in Indianapolis.

Announcements

On the second anniversary of the Digital Public Library of America’s launch, DPLA announced a number of new partnerships, initiatives, and milestones that highlight its rapid growth, and prepare it to have an even larger impact in the years ahead. At DPLAfest 2015 in Indianapolis, hundreds of people from DPLA’s expanding community gathered to discuss DPLA’s present and future. Announcements included:

  • Over 10 Million Items from 1,600 Contributing Institutions
  • New Hub Partnerships
  • PBS-DPLA Partnership
  • Learning Registry Collaboration
  • Sloan Foundation-funded Work on Ebooks
  • Collaboration with HathiTrust for Open Ebooks
  • New Board Chair and New Board Member Announced
  • New IMLS-funded Hydra project
  • DPLA Becomes an Official Hydra Project Partner

To find out more about these announcements and milestones, click here.

To read more about the collaboration with HathiTrust for open ebooks, click here.

Slides and notes

To find presentation slides and notes from DPLAfest 2015 sessions, visit the online agenda (click on each session to find attached slides and links to notes, where available).

Tweets

Historic Indianapolis Tour (via Historypin)

Follow the Digital Public Library of America channel on Historypin.org to take our tour of historic sites in downtown Indianapolis, featuring some great images from the collection! Also, make sure to download the Historypin app to access the tour on the go. Inspired to make your own Historypin tour? Share it with us!

Sponsors

The Digital Public Library of America wishes to thank its generous DPLAfest Sponsors:

  • The Alfred P. Sloan Foundation
  • Anonymous Donor
  • Bibliolabs
  • Central Indiana Community Foundation (CICF)
  • Digital Divide Data
  • Digital Library Federation
  • Digital Library Systems Group at Image Access
  • OCLC

DPLA also wishes to thank its gracious hosts:

  • Indianapolis Public Library
  • Indiana State Library
  • Indiana Historical Society
  • IUPUI University Library

Host DPLAfest 2016

If your organization is interested in hosting DPLAfest 2016, please let us know! We will put out a formal call for proposals in late April or early May.

Photos

Storify

Peter Murray: Thursday Threads: Fake Social Media, Netflix is Huge, Secret TPP is Bad

planet code4lib - Thu, 2015-04-23 10:55

In this week’s Thursday Threads we look at the rise of fake social media influence, how a young media company (Netflix) is now bigger than an old media company (CBS), and a reminder of how secrecy in constructing trade agreements is a bad idea.

Feel free to send this to others you think might be interested in the topics. If you find these threads interesting and useful, you might want to add the Thursday Threads RSS Feed to your feed reader or subscribe to e-mail delivery. If you would like a more raw and immediate version of these types of stories, watch my Pinboard bookmarks (or subscribe to its feed in your feed reader). Items posted there are also sent out as tweets; you can follow me on Twitter. Comments and tips, as always, are welcome.

Buying Social Media Influence

Click farms jeopardize the existential foundation of social media: the idea that the interactions on it are between real people. Just as importantly, they undermine the assumption that advertisers can use the medium to efficiently reach real people who will shell out real money. More than $16 billion was spent worldwide on social media advertising in 2014; this money is the primary revenue for social media companies. If social media is no longer made up of people, what is it?

The Bot Bubble: How Click Farms Have Inflated Social Media Currency, by Doug Bock Clark, New Republic, 20-Apr-2015

Think that all that happens on the social networks is real? You may think differently after reading this article about the business of buying follow, likes, and mentions. How to win friends and influence people in the 21st century? Buy in bulk. (Is that too cynical?)

Netflix is Big. Really Big.

In a letter to investors released on Wednesday, Netflix announced that by the end of March, it had reached a staggering 40 million subscriptions in the U.S. That means there’s a Netflix subscription for more than a third of the households in the United States — 115,610,216, according to the U.S. Census. Which is pretty insane. In the same letter, Netflix announced it had reached more than 20 million international subscribers as well, bringing the total to about 60 million.

Netflix Now Has One Subscriber For Every Three Households In America, by Brendan Klinkenberg, Buzzfeed News, 15-Apr-2015

Netflix shares are soaring after another outstanding quarter. And as of right now, that’s pushed the market value of the disruptive streaming TV company above CBS Corp, which, by most measures, operates the highest rating broadcast TV network in the US.

Netflix is now bigger than CBS, by John McDuling, Quartz, 16-Apr-2015

These two articles about the size of Netflix came out back to back. I find both of them astounding. Sure, I believe that Netflix’s share price, and therefore its market capitalization, is pushed up in an internet bubble. But one in three households in America is a subscriber? Really? I wonder what the breakdown by age demographic is. If media stereotypes are to be believed, it skews heavily towards young cable-cutting households.

Secrecy Surrounding Trans-Pacific Partnership

When WikiLeaks recently released a chapter of the Trans-Pacific Partnership Agreement, critics and proponents of the deal resumed wrestling over its complicated contents. But a cover page of the leaked document points to a different problem: It announces that the draft text is classified by the United States government. Even if current negotiations over the trade agreement end with no deal, the draft chapter will still remain classified for four years as national security information. The initial version of an agreement projected by the government to affect millions of Americans will remain a secret until long after meaningful public debate is possible.

National security secrecy may be appropriate to protect us from our enemies; it should not be used to protect our politicians from us. For an administration that paints itself as dedicated to transparency and public input, the insistence on extensive secrecy in trade is disappointing and disingenuous. And the secrecy of trade negotiations does not just hide information from the public. It creates a funnel where powerful interests congregate, absent the checks, balances and necessary hurdles of the democratic process.

Don’t Keep the Trans-Pacific Partnership Talks Secret, op-ed by Margot E. Kaminiski, New York Times, 14-Apr-2015

Have you seen what’s in the new TPP trade deal?

Most likely, you haven’t – and don’t bother trying to Google it. The government doesn’t want you to read this massive new trade agreement. It’s top secret.

Why? Here’s the real answer people have given me: “We can’t make this deal public because if the American people saw what was in it, they would be opposed to it.”

You can’t read this, by Elizabeth Warren, 22-Apr-2015

This is bad policy. The intellectual property provisions of it — at least the leaked versions that we have seen — are particularly odious. There should not be fast track authority for a treaty that our elected representatives haven’t seen and haven’t heard from their constituencies about.


HangingTogether: Going, going, gone: The imperative for archiving the web

planet code4lib - Thu, 2015-04-23 00:55

We all know that over the past 30+ years the World Wide Web has become an indispensable tool (understatement!) for disseminating information, extending the reputations of organizations and businesses, enabling Betty the Blogger to establish an international reputation, and ruining dinner table debate by providing the answer to every conceivable question. It has caused a sea change in how humans communicate and learn. Some types of content are new, but huge quantities of material once published in print are now issued only in bytes. For example, if you’re a university archivist, you know that yesterday’s endless flood of high-use content such as graduate program brochures, course listings, departmental newsletters, and campus information dried up a decade or more ago. If you’re a public policy librarian, you know that the enormously important “grey literature” once distributed as pamphlets is now mostly available only on the web. Government information? It’s almost all e-only. In addition, the scope of the scholarly record is evolving to embrace new types of content, much of which is also web-only. Without periodic harvesting of the websites that host all this information, the content is gone, gone, gone. In general, we’ve been very slow to respond to this imperative. Failure to adequately preserve the web is at the heart of the Digital Dark Ages.

The Internet Archive’s astonishing Wayback Machine has been archiving the web since the mid-1990s, but its content is far from being complete or reliable, and searching is possible only by URL. In some countries, such as the U.K. and New Zealand, the national library or archives is charged with harvesting the country’s entire web domain, and they struggle to fulfill this charge. In the U.S., some archives and libraries have been harvesting websites for a number of years, but few have been able to do so at scale. Many others have yet to dip their toes in the water. Why do so many of us lack a sense of urgency about preserving all this content? Well, for one thing, web archiving is rife with challenges.

Within the past week Ricky, Dennis, and I hosted two Webex conversations with members of our OCLC Research Library Partnership to surface some of the issues that are top-of-mind for our colleagues. Our objective was to learn whether there are shared problems that make sense for us to work on together to identify community-based solutions. All told, more than sixty people came along for the ride, which immediately suggested that we had touched a nerve. In promoting the sessions, we posited ten broad issues and asked registrants to vote for their top three. The results of this informal poll gave us a good jumping-off point. Master synthesizer Ricky categorized the issues and counted the aggregate votes for each: capture (37), description (41), and use (61). (I confess to having been glad to see use come out on top.)

OK, take a guess … what was the #1 issue? Not surprisingly … metadata guidelines! As with any type of cataloging, no one wants to have to invent the wheel themselves. Guidelines do exist, but they don’t meet the needs of all institutions. #2: Increase access to archived websites. Many sites are archived but are not then made accessible, for a variety of good reasons. #3: Ensure capture of your institution’s own output. If you’re worried about this one, you should be. #4: Measure access to archived websites. Hard to do. Do you have an analytics tool that can ever do what you really want it to?

Other challenges received some votes: getting descriptions of websites into local catalogs and WorldCat, establishing best practices for quality assurance of crawls, collaborating on selection of sites, and increasing discovery through Google and other search engines (we were a tad mystified about why this last one didn’t get more votes). Some folks offered up their own issues, such as capture of file formats other than HTML, providing access in a less siloed way, improving the end-user experience, sustaining a program in the face of minimal resources, and developing convincing use cases.

When we were done, Ricky whipped out a list of her chief off-the-cuff takeaways, to wit:

  • We need strong use cases to convince resource allocators that this work is mission-critical.
  • Let’s collaborate on selection so we don’t duplicate each others’ work.
  • Awareness of archived websites is low across our user communities: let’s fix that.
  • In developing metadata guidelines, we should bridge the differing approaches of the library and archival communities.
  • We need meaningful use metrics.
  • We need to know how users are navigating aggregations of archived sites and what they want to do with the content.
  • Non-HTML file formats are the big capture challenge.

Our Webex conversations were lively and far ranging. Because we emphasized that we needed experienced practitioners at the table, we learned that even the experts responsible for large-scale harvesting struggle in various ways. Use issues loomed large: no one tried to claim that archived websites are easy to locate, comprehend, or use. Legal issues are sometimes complex depending on the sites being crawled. Much like ill-behaved serials, websites change title, move, split, and disappear without warning. Cataloging at the site or document level isn’t feasible if, like the British Library, you crawl literally millions of sites. Tools for analytics are too simplistic for answering the important questions about use and users.

Collecting, preserving, and providing access to indispensable informational, cultural, and scholarly content has always been our shared mission. The web is where today’s content is. Let’s scale up our response before we lose more decades of human history.

What are your own web archiving challenges? Let us know by submitting a comment below, or get in touch by whatever means you prefer so we can add your voice to the conversation. We’re listening.

About Jackie Dooley

Jackie Dooley leads OCLC Research projects to inform and improve archives and special collections practice. Activities have included in-depth surveys of special collections libraries in the U.S./Canada and the U.K./Ireland; leading the Demystifying Born Digital work agenda; a detailed analysis of the 3 million MARC records in ArchiveGrid; and studying the needs of archival repositories for specialized tools and services. Her professional research interests have centered on the development of standards for cataloging and archival description. She is a past president of the Society of American Archivists and a Fellow of the Society.


DuraSpace News: Asian Development Bank Embraces Open Access

planet code4lib - Thu, 2015-04-23 00:00

Since it was founded in 1966, the Asian Development Bank has been the leading organization fighting poverty in the Asian and Pacific region. The organisation set its goal to enhance economic collaboration by investing in regional projects. Currently, the Asian Development Bank has 67 members.

DuraSpace News: Chris Wilper Joins @mire

planet code4lib - Thu, 2015-04-23 00:00

For people acquainted with digital repositories, conferences, or mailing lists, Chris Wilper's name should ring a bell. After being part of the initial Fedora team at Cornell University, Chris was the Fedora tech lead at DuraSpace between 2008 and 2012. We are excited to add Chris’ vast experience with digital repositories and his unique perspective on the grass roots of the repository community to our team.

DuraSpace News: SIGN UP for Current SHARE (SHared Access Research Ecosystem) News

planet code4lib - Thu, 2015-04-23 00:00

Winchester, MA – Interested in following SHARE news and events?

Nicole Engard: Bookmarks for April 22, 2015

planet code4lib - Wed, 2015-04-22 20:30

Today I found the following resources and bookmarked them on Delicious.

  • dygraphs.com An open-source JavaScript charting library for handling huge data sets.
  • Leaflet Leaflet is a modern open-source JavaScript library for mobile-friendly interactive maps.
  • Chart.js Simple, clean and engaging charts for designers and developers
  • WordPress LMS Plugin by LearnDash® LearnDash is taking cutting edge elearning methodology and infusing it into WordPress. More than just a plugin, we provide practical and experience driven guidance for individuals and organizations interested in setting up online courses.
  • DuckDuckGo The search engine that doesn’t track you.

Digest powered by RSS Digest

The post Bookmarks for April 22, 2015 appeared first on What I Learned Today....

Related posts:

  1. Bulk WordPress Plugin Installer
  2. WordPress bookshelf plugin
  3. WordPress Automatic Upgrade

LITA: After Hours: Circulating Technology to Improve Kids’ Access

planet code4lib - Wed, 2015-04-22 19:14

A LITA Webinar: After Hours: Circulating Technology to Improve Kids’ Access

Wednesday May 27, 2015
1:00 pm – 2:00 pm Central Time
Register now for this webinar

The second brand new LITA Webinar on youth and technology.

For years libraries have been providing access and training to technology through their services and programs. Kids can learn to code, build a robot, and make a movie with an iPad at the library. But what can they do when they get home? How can libraries expand their reach to help more than just the youth they see every day? The Meridian Library (ID) has chosen to start circulating new types of technology. Want to learn about Arduinos? Check one out from our library! What is a Raspberry Pi? You get 4 weeks to figure it out. Robots too expensive to buy? Too many iPad apps to choose from? Test it from your library first. Join Megan Egbert to discover benefits, opportunities and best practices.

Megan Egbert

Megan Egbert is the Youth Services Manager for the Meridian Library District (ID), where she oversees programs and services for ages 0-18. Prior to her three years in this position she was a Teen Librarian. She earned her master’s in Library Science from the University of North Texas and her bachelor’s in Sociology from Boise State University. Her interests include STEAM education, digital badges, makerspaces, using apps in storytime, and fostering digital literacy. @MeganEgbert on Twitter

Then register for the webinar

Full details
Can’t make the date but still want to join in? Registered participants will have access to the recorded webinar.
Cost:

LITA Member: $45
Non-Member: $105
Group: $196
Registration Information

Register Online page arranged by session date (login required)
OR
Mail or fax form to ALA Registration
OR
Call 1-800-545-2433 and press 5
OR
email registration@ala.org

Questions or Comments?

For all other questions or comments related to the course, contact LITA at (312) 280-4269 or Mark Beatty, mbeatty@ala.org.

DPLA: A Look at the Empire State Digital Network

planet code4lib - Wed, 2015-04-22 19:11
A look into the Empire State Digital Network, a DPLA Service Hub, from ESDN Manager Kerri Willette.  

 

A few months ago Amy Rudersdorf, Assistant Director for Content at DPLA, contacted me for feedback on a visualization she created that illustrates the structure of the DPLA Service Hub in New York. After a few emails back and forth, we landed on this diagram of the Empire State Digital Network (ESDN).

This is a very accurate 2-D depiction of our hub, but I’m amazed by how simple it looks compared to the complex and layered network we have up and running in real life.

New York is home to nine library and research councils that comprise the NY3Rs Association, Inc. (NY3Rs). While it occupies its own dark blue circle in the image above, Metropolitan New York Library Council (METRO), home to our Service Hub, is one of the nine regional councils that make up the NY3Rs. In addition to myriad digital content hosted at individual organizations statewide (the small gray circles in the diagram), New York is also home to four large-scale collaborative digital projects. These projects are all supported and facilitated by the NY3Rs regional councils.

I moved to New York City in March 2014 to join the New York hub after spending most of my career in Chicago. I had very few professional relationships in New York when I arrived. My first few months at ESDN were a whirlwind of new acronyms, new people, and new places. Without the dedicated support and good humor of the digital services managers in each region, I would have floundered. Instead, ESDN recently made its first contribution from New York institutions to DPLA in January (almost 90,000 records!).

Our regional liaisons have been crucial in identifying and recruiting New York content for DPLA.  I think that’s why I keep coming back to the ESDN visualization: while it’s an effective representation of our hub, the true structure of our project is built on numerous relationships that we can’t even begin to capture in a single diagram. I’m speaking not just of our team’s relationships with our liaisons (although seriously, we would be NOWHERE without these nine amazing people), but more importantly, the relationships that those liaisons cultivated with Archivists and Digital Collections Librarians and Curators and Deans at institutions across the state long before ESDN arrived.

The incredible collections ESDN has contributed to DPLA so far (like this one and this one and this one), are now open to the world thanks to the years of partnerships and investments that underpin the rich digital culture of this state. The beauty of the DPLA hub model is that it assumes and relies on the value and depth of local relationships. DPLA could not exist without the vast networks of support and trust that are already alive and well in the cultural heritage community across the nation.

 

Featured image credit: Detail from “Women on parallel bars, Ithaca Conservatory and Affiliated Schools, Ithaca, NY.” Courtesy Ithaca College via the Empire State Digital Network.

All written content on this blog is made available under a Creative Commons Attribution 4.0 International License. All images found on this blog are available under the specific license(s) attributed to them, unless otherwise noted.

District Dispatch: Johnna Percell selected for 2015 ALA Google Policy Fellowship

planet code4lib - Wed, 2015-04-22 18:59

Google Policy Fellow Johnna Percell.

Today, the American Library Association (ALA) announced that Johnna Percell will serve as its 2015 Google Policy Fellow. As part of her summer fellowship, Percell will spend ten weeks in Washington, D.C. working on technology and Internet policy issues. As a Google Policy Fellow, Percell will explore diverse areas of information policy, including copyright law, e-book licenses and access, information access for underserved populations, telecommunications policy, digital literacy, online privacy, and the future of libraries. Google, Inc. pays the summer stipends for the fellows and the respective host organizations determine the fellows’ work agendas.

Percell will work for the American Library Association’s Office for Information Technology Policy (OITP), a unit of the association that works to ensure the library voice in information policy debates and promote full and equitable intellectual participation by the public. Percell is a graduate student at the University of Maryland, pursuing a master’s degree in Library Science from the university’s College of Information Studies. She currently works as an intern at the District of Columbia Public Library in Washington, D.C. Percell completed her undergraduate education with a major in English at Harding University in Arkansas.

“ALA is pleased to participate for the eighth consecutive year in the Google Policy Fellowship program,” said Alan S. Inouye, director of the ALA Office for Information Technology Policy. “We look forward to working with Johnna Percell in finding new information policy opportunities for libraries, especially in the realm of services for diverse populations.”

Find more information about the Google Policy Fellowship Program.

The post Johnna Percell selected for 2015 ALA Google Policy Fellowship appeared first on District Dispatch.

District Dispatch: ALA says “NO!” to Section 215 reauthorization gambit

planet code4lib - Wed, 2015-04-22 17:40

As both chambers of Congress prepare to take up and debate long-needed surveillance law reform, Senate Majority Leader Mitch McConnell’s (R-KY) bill (introduced late yesterday) to simply reauthorize the “library provision” (Section 215) of the USA PATRIOT Act until 2020 without change of any kind was met today by a storm of opposition from leading privacy and civil liberties groups with ALA in the vanguard.  In a statement released this morning, American Library Association (ALA) President Courtney Young said unequivocally of S.1035:

“Nothing is more basic to democracy and librarianship than intellectual freedom. And, nothing is more hostile to that freedom than the knowledge that the government can compel a library—without a traditional judicial search warrant—to report on the reading and Internet records of library patrons, students, researchers and entrepreneurs. That is what Section 215 did in 2001 and what it still does today.

“The time is long past for Section 215 to be meaningfully reformed to restore the civil liberties massively and unjustifiably compromised by the USA PATRIOT Act. For libraries of every kind, for our hundreds of millions of users, ALA stands inimically against S. 1035 and the reauthorization of Section 215 without significant and urgently needed change.”

In the coming days and weeks the ALA Washington Office will be working intensively to fight for real changes to Section 215 and other provisions of the PATRIOT Act, but it will need the help of all librarians and library supporters to succeed.  Sign up now for the latest on ALA and its coalition partners’ efforts, and how you can help sway your Members of Congress when the time comes.  That will be very soon, so don’t wait!

 

The post ALA says “NO!” to Section 215 reauthorization gambit appeared first on District Dispatch.

Library of Congress: The Signal: Libraries Looking Across Languages: Seeing the World Through Mass Translation

planet code4lib - Wed, 2015-04-22 13:32

The following is a guest post by Kalev Hannes Leetaru, Senior Fellow, George Washington University Center for Cyber & Homeland Security. Portions adapted from a post for the Knight Foundation.

Geotagged tweets November 2012 colored by language.

Imagine a world where language was no longer a barrier to information access, where anyone can access real-time information from anywhere in the world in any language, seamlessly translated into their native tongue and where their voice is equally accessible to speakers of all the world’s languages. Authors from Douglas Adams to Ethan Zuckerman have long articulated such visions of a post-lingual society in which mass translation eliminates barriers to information access and communication. Yet, even as technologies like the web have broken down geographic barriers and increasingly made it possible to access information from anywhere in the world, linguistic barriers mean most of those voices remain steadfastly inaccessible. For libraries, mass human and machine translation of the world’s information offers enormous possibilities for broadening access to their collections. In turn, as there is greater interest in the non-Western and non-English world’s information, this should lead to a greater focus on preserving it akin to what has been done for Western online news and television.

There have been many attempts to make information accessible across language barriers using both human and machine translation. During the 2013 Egyptian uprising, Twitter launched live machine translation of Arabic-language tweets from select political leaders and news outlets, an experiment which it expanded for the World Cup in 2014 and made permanent this past January with its official “Tweet translation” service. Facebook launched its own machine translation service in 2011, while Microsoft recently unveiled live spoken translation for Skype. Turning to human translators, Wikipedia’s Content Translation program combines machine translation with human correction in its quest to translate Wikipedia into every modern language, and TED’s Open Translation Project has brought together 20,000 volunteers to translate 70,000 speeches into 107 languages since 2009. Even the humanitarian space now routinely leverages volunteer networks to mass translate aid requests during disasters, while mobile games increasingly combine machine and human translation to create fully multilingual chat environments.

Yet, these efforts have substantial limitations. Twitter and Facebook’s on-demand model translates content only as it is requested, meaning a user must discover a given post, know it is of possible relevance, explicitly request that it be translated and wait for the translation to become available. Wikipedia and TED attempt to address this by pre-translating material en masse, but their reliance on human translators and all-volunteer workflows impose long delays before material becomes available.

Journalism has experimented only haltingly with large-scale translation. Notable successes such as Project Lingua, Yeeyan.org and Meedan.org focus on translating news coverage for citizen consumption, while journalist-directed efforts such as Andy Carvin’s crowd-sourced translations are still largely regarded as isolated novelties. Even the U.S. government’s foreign press monitoring agency draws nearly half its material from English-language outlets to minimize translation costs. At the same time, its counterterrorism division monitoring the Lashkar-e-Taiba terrorist group remarks of the group’s communications, “most of it is in Arabic or Farsi, so I can’t make much of it.”

Libraries have explored translation primarily as an outreach tool rather than as a gateway to their collections. Facilities with large percentages of patrons speaking languages other than English may hire bilingual staff, increase their collections of materials in those languages and hold special events in those languages. The Denver Public Library offers a prominent link right on its homepage to its tailored Spanish-language site that includes links to English courses, immigration and citizenship resources, job training and support services. Instead of merely translating their English site into Spanish wording, they have created a completely customized parallel information portal. However, searches of their OPAC in Spanish will still only return works with Spanish titles: a search for “Matar un ruisenor” will return only the single Spanish translation of “To Kill a Mockingbird” in their catalog.

On the one hand, this makes sense: a search for a Spanish title likely indicates an interest in a Spanish edition of the book. But if no Spanish copy is available, it would be useful to at least notify the patron of copies in other languages, in case the patron can read one of them. Other sites, like the Fort Vancouver Regional Library District, use the Google Translate widget to perform live machine translation of their site. This has the benefit that when the library catalog is searched in English, the results list can be viewed in any of Google Translate’s 91 languages. However, the catalog itself must still be searched in English or in the language in which a title was published, so this only solves part of the problem.

In fact, the lack of available content for most of the world’s languages was identified in the most recent Internet.org report (PDF) as being one of the primary barriers to greater connectivity throughout the world. Today there are nearly 7,000 languages spoken throughout the world of which 99.7% are spoken by less than 1% of the world’s population. By some measures, just 53% of the Earth’s population has access to measurable online content in their primary language and almost a billion people speak languages for which no Wikipedia content is available. Even within a single country there can be enormous linguistic variety: India has 425 primary languages and Papua New Guinea has 832 languages spoken within its borders. As ever-greater numbers of these speakers join the online world, even English speakers are beginning to experience linguistic barriers: as of November 2012, 60% of tweets were in a language other than English.

Web companies from Facebook and Twitter to Google and Microsoft are increasingly turning to machine translation to offer real-time access to information in other languages. Anyone who has used Google or Microsoft Translate is familiar with the concept of machine translation and both its enormous potential (transparently reading any document in any language) and current limitations (many translated documents being barely comprehensible). Historically, machine translation systems were built through laborious manual coding, in which a large team of linguists and computer programmers sat down and literally hand-programmed how every single word and phrase should be translated from one language to another. Such models performed well on perfectly grammatical formal text, but often struggled with the fluid informal speech characterizing everyday discourse. Most importantly, the enormous expense of manually programming translation rules for every word and phrase and all of the related grammatical structures of both the input and output language meant that translation algorithms were built for only the most widely-used languages.

Advances in computing power over the past decade, however, have led to the rise of “statistical machine translation” (SMT) systems. Instead of humans hand-programming translation rules, SMT systems examine large corpora of material that have been human-translated from one language to another and learn which words from one language correspond to those in the other language. For example, an SMT system would determine that when it sees “dog” in English it almost always sees “chien” in French, but when it sees “fan” in English, it must look at the surrounding words to determine whether to translate it into “ventilateur” (electric fan) or “supporter” (sports fan).
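To make the idea concrete, here is a minimal, hypothetical sketch of that kind of context-based word choice. The tiny “phrase table” and context cues below are invented for illustration; a real SMT system learns millions of such correspondences, with probabilities, from bilingual corpora rather than from hand-listed cues.

```python
# Toy illustration of the statistical idea described above: a "phrase
# table" learned from bilingual text maps an English word to candidate
# French words, and the surrounding words pick the best candidate.
# All entries are invented for illustration, not from a real model.

phrase_table = {
    "dog": [("chien", set())],                       # unambiguous
    "fan": [("ventilateur", {"ceiling", "electric", "cooling"}),
            ("supporter",   {"sports", "football", "team"})],
}

def translate_word(word, context):
    """Pick the candidate whose context cues best match the sentence."""
    candidates = phrase_table.get(word)
    if not candidates:
        return word  # pass unknown words through untranslated
    best, best_score = candidates[0][0], -1
    for target, cues in candidates:
        score = len(cues & context)
        if score > best_score:
            best, best_score = target, score
    return best

sentence = ["the", "football", "fan", "cheered"]
context = set(sentence)
print([translate_word(w, context) for w in sentence])
# ['the', 'football', 'supporter', 'cheered']
```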

Such translation systems require no human intervention – just a large library of bilingual texts as input. United Nations and European Union legal texts are often used as input given that they are carefully hand translated into each of the major European languages. The ability of SMT systems to rapidly create new translation models on-demand has led to an explosion in the number of languages supported by machine translation systems over the last few years, with Google Translate translating to/from 91 languages as of April 2015.

What would it look like if one simply translated the entirety of the world’s information in real-time using massive machine translation? For the past two years the GDELT Project has been monitoring global news media, identifying the people, locations, counts, themes, emotions, narratives, events and patterns driving global society. Working closely with governments, media organizations, think tanks, academics, NGOs and ordinary citizens, GDELT has been steadily building a high-resolution catalog of the world’s local media, much of which is in a language other than English. During the Ebola outbreak last year, GDELT monitored many of the earliest warning signals of the outbreak in local media but was unable to translate the majority of that material. This led, over the past half year, to a unique initiative to build a system that live-translates the world’s news media in real-time.

In Fall 2013, under a grant from Google Translate for Research, the GDELT Project began an early trial of what it might look like to mass-translate the world’s news media on a real-time basis. Each morning all news coverage monitored by the Portuguese edition of Google News was fed through Google Translate until the daily quota was exhausted. The results were extremely promising: over 70% of the activities mentioned in the translated Portuguese news coverage were not found in the English-language press anywhere in the world (a manual review process was used to discard incorrect translations to ensure the results were not skewed by translation error). Moreover, there was a 16% increase in the precision of geographic references, moving from “rural Brazil” to actual city names.

The tremendous success of this early pilot led to extensive discussions over more than a year with the commercial and academic machine-translation communities on how to scale this approach up to translate all accessible global news media in real-time across every language. One of the primary reasons that machine translation today is still largely an on-demand experience is the enormous computational power it requires. Translating a document from a language like Russian into English can require hundreds or even thousands of processors to produce a rapid result. Translating the entire planet requires something different: a more adaptive approach that can dynamically adjust the quality of translation based on the volume of incoming material, in a form of “streaming machine translation.”
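The article does not describe how GDELT Translingual implements this internally, so the following is only a rough sketch, with invented tier names and thresholds, of what trading translation quality against incoming volume might look like in code.

```python
# Rough sketch of "streaming machine translation": when the incoming
# backlog grows, fall back to faster, lower-quality settings so the
# stream never backs up. Tier names and thresholds are invented for
# illustration; GDELT's real architecture is described on its website.

import queue

articles = queue.Queue()

def pick_quality(backlog_size):
    """Choose a translation tier based on how far behind we are."""
    if backlog_size < 100:
        return "full"      # full decoding with the largest models
    if backlog_size < 1000:
        return "reduced"   # smaller beam / pruned models
    return "gist"          # fastest settings, gist-quality output

def translate(text, tier):
    # Placeholder for a real machine translation call at the chosen tier.
    return "[%s translation of %d chars]" % (tier, len(text))

def process_stream():
    while not articles.empty():
        tier = pick_quality(articles.qsize())
        yield translate(articles.get(), tier)

for i in range(5):
    articles.put("texto de ejemplo " * (i + 1))
print(list(process_stream()))
```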

Geographic focus of world’s news media by language 8-9AM EST on April 1, 2015 (Green = locations mentioned in Spanish media, Red = French media, Yellow = Arabic media, Blue = Chinese media).

The final system, called GDELT Translingual, took around two and a half months to build and live-translates all global news media that GDELT monitors in 65 languages in real-time, representing 98.4% of the non-English content it finds worldwide each day. Languages supported include Afrikaans, Albanian, Arabic (MSA and many common dialects), Armenian, Azerbaijani, Bengali, Bosnian, Bulgarian, Catalan, Chinese (Simplified), Chinese (Traditional), Croatian, Czech, Danish, Dutch, Estonian, Finnish, French, Galician, Georgian, German, Greek, Gujarati, Hebrew, Hindi, Hungarian, Icelandic, Indonesian, Italian, Japanese, Kannada, Kazakh, Korean, Latvian, Lithuanian, Macedonian, Malay, Malayalam, Marathi, Mongolian, Nepali, Norwegian (Bokmal), Norwegian (Nynorsk), Persian, Polish, Portuguese (Brazilian), Portuguese (European), Punjabi, Romanian, Russian, Serbian, Sinhalese, Slovak, Slovenian, Somali, Spanish, Swahili, Swedish, Tamil, Telugu, Thai, Turkish, Ukrainian, Urdu and Vietnamese.

Building the system didn’t require starting from scratch, as there is an incredible wealth of open tools and datasets available to support all of the pieces of the machine translation pipeline. Open source building blocks utilized include the Moses toolkit and a number of translation models contributed by researchers in the field; the Google Chrome Compact Language Detector; 22 different WordNet datasets; multilingual resources from the GEOnet Names Server, Wikipedia, and the Unicode Common Locale Data Repository; word segmentation algorithms for Chinese, Japanese, Thai and Vietnamese; and countless other tools. Much of the work lay in integrating all of the different components and in constructing the key new elements and architectures needed for the system to scale to GDELT’s needs. A more detailed technical description of the final architecture, tools, and datasets used in the creation of GDELT Translingual is available on the GDELT website.

Just as we digitize books and use speech synthesis to create spoken editions for the visually impaired, we can use machine translation to provide versions of those digitized books in other languages. Imagine a speaker of a relatively uncommon language suddenly being able to use mass translation to access the entire collections of a library and even to search across all of those materials in their native language. In the case of a legal, medical or other high-importance text, one would not want to trust the raw machine translation on its own, but at the very least such a process could be used to help a patron locate a specific paragraph of interest, making it much easier for a bilingual speaker to assist further. For more informal information needs, patrons might even be able to consume the machine-translated copy directly in many cases.

Machine translation may also help improve the ability of human volunteer translation networks to bridge common information gaps. For example, one could imagine an interface where a patron can use machine translation to access any book in their native language regardless of its publication language, and can flag key paragraphs or sections where the machine translation breaks down or where they need help clarifying a passage. These passages could be dispatched to volunteer translator networks for translation, and the resulting translations offered back to benefit others in the community, perhaps using some of the same collaborative translation models as the disaster-response community.

As Online Public Access Catalog software becomes increasingly multilingual, eventually one could imagine an interface that automatically translates a patron’s query from his/her native language into English, searches the catalog, and then returns the results back in that person’s language, prioritizing works in his/her native language, but offering relevant works in other languages as well. Imagine a scholar searching for works on an indigenous tribe in rural Brazil and seeing not just English-language works about that tribe, but also Portuguese and Spanish publications.
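As a thought experiment, such a translation-aware catalog search might be wired together roughly as follows. The translate() and search_catalog() functions here are hypothetical stand-ins, not any real OPAC or machine translation API.

```python
# Hypothetical sketch of a translation-aware catalog search. The two
# helpers are stand-ins: translate() would call a machine translation
# service and search_catalog() would call the OPAC; neither is a real
# library API.

def translate(text, source, target):
    return "[%s->%s] %s" % (source, target, text)  # placeholder MT call

CATALOG = [
    {"title": "To Kill a Mockingbird", "language": "en"},
    {"title": "Matar un ruisenor", "language": "es"},
]

def search_catalog(english_query):
    return [dict(record) for record in CATALOG]    # placeholder search

def multilingual_search(query, patron_lang):
    # Translate the patron's query into English, search the catalog,
    # then present results back in the patron's language, with works
    # already in that language listed first.
    english_query = translate(query, source=patron_lang, target="en")
    results = search_catalog(english_query)
    native = [r for r in results if r["language"] == patron_lang]
    others = [r for r in results if r["language"] != patron_lang]
    for record in others:
        record["display_title"] = translate(record["title"],
                                            source=record["language"],
                                            target=patron_lang)
    return native + others

print(multilingual_search("Matar un ruisenor", "es"))
```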

Much of this lies in the user interface and in making language a more transparent part of the library experience. Indeed, as live spoken-to-spoken translation like Skype’s becomes more common, perhaps eventually patrons will be able to interact with library staff using a Star Trek-like universal translator. As machine translation technology improves and as libraries focus more on multilingual issues, such efforts also have the potential to increase visibility of non-English works for English speakers, countering the heavily Western-centric focus of much of the available information on the non-Western world.

Finally, it is important to note that language is not the only barrier to information access. The increasing fragility and ephemerality of information, especially journalism, poses a unique risk to our understanding of local events and perspectives. While the Internet has made it possible for even the smallest news outlet to reach a global audience, it has also placed journalists at far greater risk of being silenced by those who oppose their views. In the era of digitally published journalism, so much of our global heritage is at risk of disappearing at the pen stroke of an offended government, at gunpoint by masked militiamen, by regretful combatants or even through anonymized computer attacks. A shuttered print newspaper will live on in library archives, but a single unplugged server can permanently silence years of journalism from an online-only newspaper.

In perhaps the single largest program to preserve the online journalism of the non-Western world, each night the GDELT Project sends a complete list of the URLs of all electronic news coverage it monitors to the Internet Archive under its “No More 404” program, where they join the Archive’s permanent index of more than 400 billion web pages. While this is just a first step towards preserving the world’s most vulnerable information, it is our hope that this inspires further development in archiving high-risk material from the non-Western and non-English world.

We have finally reached a technological junction where automated tools and human volunteers are able to take the first, albeit imperfect, steps towards mass translation of the world’s information at ever-greater scales and speeds. Just as the internet reduced geographic boundaries in accessing the world’s information, one can only imagine the possibilities of a world in which a single search can reach across all of the world’s information in all the world’s languages in real-time.

Machine translation has truly come of age to a point where it can robustly translate foreign news coverage into English, feed that material into automated data mining algorithms and yield substantially enhanced coverage of the non-Western world. As such tools gradually make their way into the library environment, they stand poised to profoundly reshape the role of language in the access and consumption of our world’s information. Among the many ways that big data is changing our society, its empowerment of machine translation is bridging traditional distances of geography and language, bringing us ever-closer to the notion of a truly global society with universal access to information.

In the Library, With the Lead Pipe: Adopting the Educator’s Mindset: Charting New Paths in Curriculum and Assessment Mapping

planet code4lib - Wed, 2015-04-22 13:00

Photo by Flickr user MontyAustin (CC BY-NC-ND 2.0)

In Brief:

The greatest challenge that I faced in my role as Information Literacy Librarian occurred as a result of a Higher Learning Commission (HLC) initiative at my institution, requiring all academic programs/departments to create/review/revise program-level student learning outcomes (PLSLOs), curriculum maps, and assessment maps. This initiative served as a catalyst for the information literacy program, prompting me to seek advice from faculty in the Education Department at Southwest Baptist University (SBU), who were more familiar with educational theory and curriculum/assessment mapping methods. In an effort to accurately reflect the University Libraries’ impact on student learning inside and outside of the classroom, I looked for ways to display this visually. The resulting assessment map included classes the faculty and I could readily assess, as well as an evaluation of statistics on library services and resources that also impact student learning, such as data from LibGuide and database usage, reference transactions, interlibrary loans, course reserves, annual gate count trends, the biennial student library survey, and website usability testing.

Embarking on a Career in Information Literacy

Like most academic librarians, I encountered little focus on instruction in my graduate school curriculum. My only experience with classroom instruction occurred over a semester-long internship, during which I taught fewer than a handful of information literacy sessions. Although I attended ACRL’s Immersion-Teacher Track Conference as a new librarian, I was at a loss as to how I should strategically apply the instruction and assessment best practices gleaned during that experience to the environment in which I found myself.

When I embraced the role of Information Literacy Librarian at Southwest Baptist University (SBU) Libraries in 2011, I joined a faculty of six other librarians. The year I started, the University Libraries transitioned to a liaison model, with six of the seven librarians, excluding the Library Dean, providing instruction for each of the academic colleges represented at the University. Prior to this point, one librarian provided the majority of instruction across all academic disciplines. As the Information Literacy Librarian, I was given the challenge of directing all instruction and assessment efforts on behalf of the University Libraries. Although my predecessor developed an information literacy plan, the Library Dean asked me to create a plan that spanned the curriculum.

Charting A New Course  

The greatest challenge that I faced in my role as Information Literacy Librarian occurred as a result of a Higher Learning Commission (HLC) initiative at my institution, requiring all academic programs/departments to create/review/revise program-level student learning outcomes (PLSLOs), curriculum maps, and assessment maps. I found assessment mapping particularly nebulous, since the librarians at my institution do not teach semester-long classes. In lieu of this, I looked for new ways to document and assess the University Libraries’ impact on student learning not only inside, but outside of the classroom setting. The resulting assessment map included classes faculty and I could readily assess, as well as an evaluation of statistics on library services and resources that also impact student learning, such as data from LibGuide and database usage, reference transactions, interlibrary loans, course reserves, annual gate count trends, the biennial student library survey, and website usability testing.

As with any venture into uncharted territory, taking a new approach required me to seek counsel from Communities of Practice at my institution, defined as “staff bound together by common interests and a passion for a cause, and who continually interact. Communities are sometimes formed within the one organisation, and sometimes across many organisations. They are often informal, with fluctuating membership and people can belong to more than one community at a time” (Mitchell 5). At SBU, I forged a Community of Practice with faculty in the Education Department, with whom I could meet, as needed, to discuss how the University Libraries could most effectively represent its impact on student learning.

Learning Theory: A Framework for Information Literacy, Instruction, & Assessment

Within the library literature, educational and instructional design theorists are frequently cited. Instructional theorists have significantly shaped my pedagogy over the past three and a half years. In their book, Understanding by Design, educators Grant Wiggins and Jay McTighe point out the importance of developing a cohesive plan that serves as a compass for learning initiatives. They write: “Teachers are designers. An essential act of our profession is the crafting of curriculum and learning experiences to meet specified purposes. We are also designers of assessments to diagnose student needs to guide our teaching and to enable us, our students, and others (parents and administrators) to determine whether we have achieved our goals” (13). They propose that curriculum designers embrace the following strategic sequence in order to achieve successful learning experiences: 1. “Identify desired results,” 2. “Determine acceptable evidence,” and 3. “Plan learning experiences and instruction” (Wiggins and McTighe 18).

As librarians, we are not only interested in our students’ ability to utilize traditional information literacy skill sets, but we also have a vested interest in scaffolding “critical information literacy,” which “differs from standard definitions of information literacy (ex: the ability to find, use, and analyze information) in that it takes into consideration the social, political, economic, and corporate systems that have power and influence over information production, dissemination, access, and consumption” (Gregory and Higgins 4). The time that we spend with students is limited, since many information literacy librarians do not teach semester-long classes, nor do we meet each student who sets foot on our campuses. However, as McCook and Phenix point out, awakening critical literacy skills is essential to “the survival of the human spirit” (qtd. in Gregory and Higgins 2). Therefore, librarians must look for ways to invest in cultivating students’ literacy beyond the traditional four walls of the classroom.

Librarians and other teaching faculty recognize that “Students need the ability to think out of the box, to find innovative solutions to looming problems…” (Levine 165). In his book, Generation on a Tightrope: A Portrait of Today’s College Student, Arthur Levine notes that the opportunity academics have to cultivate students’ intellect is greatest during the undergraduate years. While some of them may choose to pursue graduate-level degrees later on, at this point their primary objective will be to obtain ‘just in time education’ at the point of need (165). It is this fact that continues to inspire an urgency in our approaches to information literacy education.

One of the most challenging aspects of pedagogy is that it is messy. While educators are planners, learning and assessment are by no means things that can be wrapped up and decked out with a beautiful bow. Education requires us to give of ourselves, assess what does and does not work for our students, and then make modifications as a result. According to educator Rick Reiss, while students are adept at accessing information via the internet, “Threshold concepts and troublesome knowledge present the core challenges of higher learning” (n. pag.). Acquiring new knowledge requires us to grapple with preconceived notions and to realize that not everything is black and white. Despite the messy process in which I found myself immersed, knowledge gleaned from educational and instructional theorists began to bring order to the curriculum and assessment mapping process.

Eureka Moments in Higher Education: Seeing Through a New Lens for the First Time

Eureka moments are integral to the world of education and often consist of a revelation or intellectual discovery. This concept is best depicted in the story of a Greek by the name of Archimedes. Archimedes was tasked by the king of his time with determining whether or not some local tradesmen had crafted a crown out of pure gold or substituted some of the precious metal with a less valuable material like silver to make a surplus on the project at hand (Perkins 6). As Archimedes lowered himself into a full bath and water began flowing out of the tub, legend has it that “In a flash, Archimedes discovered his answer: His body displaced an equal volume of water. Likewise, by immersing the crown in water, Archimedes could determine its volume and compare that with the volume of an equal weight of gold” (Perkins 7). He quickly emerged from the tub naked and ran across town announcing his discovery. Although we have all experienced Eureka moments to some extent or another, not all of them are as dramatically apparent as Archimedes’s discovery.

In his book entitled Archimedes’ Bathtub: The Art and Logic of Breakthrough Thinking, David Perkins uses the phrase “cognitive snap” to illustrate a breakthrough that comes suddenly, much like Archimedes’s Eureka moment (10). Although the gestational period before my own cognitive snap lasted almost three and a half years, when I finally began to grasp and apply learning theory to the development of PLSLOs, curriculum maps, and assessment maps, I knew that it was the dawning of a new Eureka era for me.

Librarians play a fundamental role in facilitating cognitive snaps among the non-library faculty that they partner with in the classroom. Professors of education, history, computer science, and other disciplines introduce their students to subject-specific knowledge, while librarians have conveyed the value of incorporating information literacy components into the curriculum via the Association of College and Research Libraries’ (ACRL) Information Literacy Competency Standards for Higher Education. Now, through the more recently developed Framework for Information Literacy for Higher Education, librarians are establishing their own subject-specific approach to information literacy that brings “cognitive snaps” related to the research process into the same realm as disciplinary knowledge (“Information Literacy Competency Standards”; “Framework for Information Literacy”).

As at most universities, each academic college at SBU comprises multiple departments, each led by a department chair. The University Libraries is somewhat unusual within this framework, in that it is not classified as an academic college, nor does it consist of multiple departments. In 2013, the Library Dean asked me to assume the role of department chair for the University Libraries, because he wanted me to attend the Department Chair Workshops led by the Assessment Academy Team (comprised of the Associate Provost for Teaching and Learning and designated faculty across the curriculum) at SBU. These workshops took place from January 2013 through August 2014. All Department Chairs were invited to participate in four workshops geared towards helping faculty across the University review, revise, and/or create PLSLOs, curriculum maps, and assessment maps. While my review of educational theory and best practices certainly laid a framework for the evolving information literacy program at SBU, it was during this period that I began charting a new course, as I applied the concepts gleaned during these workshops to the curriculum and assessment maps that I designed for the University Libraries.

What I Learned About the Relationship Between Curriculum & Assessment Mapping

In conversations with Assessment Academy Team members currently serving in the Education Department, I slowly adopted an educator’s lens through which to view these processes. Prior to this point, my knowledge of PLSLOs and curriculum mapping came from the library and education literature that I read. Dialogues with practitioners in the Education Department at my campus slowly enabled me to address teaching and assessment from a pedagogical standpoint, employing educational best practices.

Educator Heidi Hayes-Jacobs believes that mapping is key to the education process. She writes: “Success in a mapping program is defined by two specific outcomes: measurable improvement in student performance in the targeted areas and the institutionalization of mapping as a process for ongoing curriculum and assessment review” (Getting Results with Curriculum Mapping 2). While Hayes-Jacobs’s expertise is in curriculum mapping within the K-12 school system, the principles that she advances apply to higher education, as well as information literacy. She writes about the gaps that often exist as a result of teachers residing in different buildings or teaching students at different levels of the educational spectrum, for example the elementary, middle school, or high school levels (Mapping the Big Picture 3). The mapping process establishes greater transparency and awareness of what is taught across the curriculum and establishes accountability, in spite of the fact that teachers, professors, or librarians might not interact on a daily or monthly basis. It provides a structure for assessment mapping because all of these groups must not only evaluate what they are teaching, but whether or not students are grasping PLSLOs.

Curriculum Maps Are Just a Stepping Stone: Assessment Mapping for the Faint of Heart

When I assumed the role of Information Literacy Librarian at SBU, I knew nothing about assessment. Sure, I knew how to define it and I was familiar with being on the receiving end as a student, but frankly, as a new librarian, it scared me. Perhaps that is because I saw it as a solo effort that would most likely not provide a good return on my investment. I quickly realized, however, that facilitating assessment opportunities was critical because I wanted to cultivate Eureka moments for my students. When students do not understand something, it is my job to look for strategies to address the gap in their knowledge and scaffold the learning process.

Assessment mapping is the next logical step in the mapping process. While curriculum maps give us the opportunity to display the PLSLOs integrated across the curriculum, assessment maps document the tools and assignments that we will utilize to determine whether or not our students have grasped designated learning outcomes. Curriculum and assessment maps do not rely on input from one person, but rather on collaboration among faculty. According to Dr. Debra Gilchrist, Vice President for Learning and Student Success at Pierce College, “Assessment is a thoughtful and intentional process by which faculty and administrators collectively, as a community of learners, derive meaning and take action to improve. It is driven by the intrinsic motivation to improve as teachers, and we have learned that, just like the students in our classes, we get better at this process the more we actively engage it” (72). Assessment is not about the data, but about strategically getting better at what we do (Gilchrist 76).

Utilizing the Educator’s Lens to Develop Meaningful Curriculum & Assessment Maps

Over the last three and a half years I have learned a great deal about applying the educator’s lens to information literacy. It has made a difference not only in the way I teach and plan, but in the collaboration that I facilitate among the library faculty at my institution who also visit the classroom regularly. Perhaps what scared me the most about assessment initially was my desire to achieve perfection in the classroom, a concept that is completely uncharacteristic of education. I combated this looming fear by immersing myself in pedagogy and asking faculty in the Education Department at SBU endless questions about their own experiences with assessment. The more I read and conversed on the topic, the more I realized that assessment is always evolving. No matter how many semesters a professor has taught a class, there is always room for improvement. It was then that I could boldly embrace assessment, knowing that it was messy but important to making improvements in the way my colleagues and I conveyed PLSLOs and scaffolded student learning moving forward. In their article “Guiding Questions for Assessing Information Literacy in Higher Education,” Megan Oakleaf and Neal Kaske write: “Practicing continuous assessment allows librarians to ‘get started’ with assessment rather than waiting to ‘get it perfect.’ Each repetition of the assessment cycle allows librarians to adjust learning goals and outcomes, vary instructional strategies, experiment with different assessment of methods, and improve over time” (283).

The biggest challenge for librarians interested in implementing curriculum and assessment maps at their institutions stems from the fact that we often do not have the opportunity to interact with students like the average professor, who meets with a class for nearly four consecutive months a semester and provides feedback through regular assessments and grades. The majority of librarians teach one-shot information literacy sessions. So, what is the most practical way to visually represent librarians’ influence over student learning? I would like to advocate for a new approach, which may be unpopular among some in my field and readily embraced by others. It is a customized approach to curriculum and assessment mapping, which was suggested by faculty in the Education Department at my institution.

A typical curriculum map contains PLSLOs for designated programs, along with course numbers/titles, and boxes where you can designate whether a skill set was introduced (i), reinforced (r), or mastered (m) (“Create a Curriculum Map”). For traditional academic departments, there is an opportunity to build on skill sets through a series of required courses. For academic libraries, however, it is difficult to subscribe to the standard curriculum mapping schema because librarians do not always have the opportunity to impact student learning beyond general education classes and a few major-specific courses. This leads to an uneven representation of information literacy across the curriculum. As a result, it is often more efficient to use an “x” to denote a program-level student learning outcome for which the library is responsible, rather than utilizing three progressive symbols, as in the sketch below.
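For illustration only, the sketch below shows how such an “x”-style map can be kept as simple data and queried for gaps; the course numbers and PLSLO numbers are invented placeholders, not SBU’s actual curriculum.

```python
# Illustration only: an "x"-style curriculum map kept as simple data.
# Course numbers and PLSLO numbers are invented placeholders.

curriculum_map = {
    # course: PLSLOs the library addresses in that course (the "x" marks)
    "ENG 101": {1, 2},
    "UNI 110": {1, 2, 5},
    "EDU 305": {2, 3},
}
all_plslos = {1, 2, 3, 4, 5}

covered = set().union(*curriculum_map.values())
gaps = all_plslos - covered
print("Covered PLSLOs:", sorted(covered))    # [1, 2, 3, 5]
print("Unaddressed PLSLOs:", sorted(gaps))   # [4]
```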

Curriculum and assessment mapping at my academic library is becoming increasingly valuable largely because administrators at my institution are interested in fostering a greater degree of accountability in the learning process, namely because of an upcoming HLC visit. In her article entitled “Assessing Your Program-Level Assessment Plan,” Susan Hatfield, Professor of Communication Studies at Winona State University, writes: “Assessment needs to be actively supported at the top levels of administration. Otherwise, it is going to be difficult (if not impossible) to get an assessment initiative off the ground. Faculty listen carefully to what administrators say – and don’t say. Even with some staff support, assessment is unlikely to be taken seriously until administrators get on board” (2). In his chapter entitled “Rhetoric Versus Reality: A Faculty Perspective on Information Literacy Instruction,” Arthur Sterngold embraces the view that “For [information literacy (IL)] to be effective…it must be firmly embedded in an institution’s academic curriculum and…the faculty should assume the lead responsibility for developing and delivering IL instruction” (85). He believes that librarians should “serve more as consultants to the faculty than as direct providers of IL instruction” (Sterngold 85).

To some extent, I acknowledge the value of Hatfield’s and Sterngold’s views on the importance of administration-driven and faculty-led assessment initiatives. Campus-wide discussions and initiatives centered on this subject stimulate collaboration among interdisciplinary faculty who would not otherwise meet outside of an established structure. As a librarian and member of the faculty at my institution, however, I find that their stance on assessment creates some internal tension. While it is ideal for our administrations to care about the issues that are closest to their faculty’s hearts, many times they are driven to lead assessment efforts as a result of an impending accreditation visit (Gilchrist 71; Hatfield 5). While I would love to say that information literacy matters to my administration just as much as it does to me, this is an unrealistic viewpoint. The development, assessment, and day-to-day oversight of information literacy is an uphill battle that requires me to take the lead. My library faculty and I must establish value for our information literacy program among the faculty that we partner with on a daily basis. So, how do we as librarians assess the University Libraries’ impact on student learning when information literacy sessions are unevenly represented across the curriculum? In a conversation with a colleague in the Education Department, I was encouraged to determine and assess all forms of learning that the library facilitates by nature of its multidisciplinary role. In Brenda H. Manning and Beverly D. Payne’s article “A Vygotskian-Based Theory of Teacher Cognition: Toward the Acquisition of Mental Reflection and Self-Regulation,” they write:

Because of the spiral restructuring of knowledge, based on the history of each individual as he or she remembers it, a sociohistorical/cultural orientation may be very appropriate to the unique growth and development of each teaching professional. Such a theory [is] Vygotsky’s sociohistorical explanation for the development of the mind. In other words, the life history of preservice teachers is an important predictor of how they will interpret what it is that we are providing in teacher preparation programs (362).

My colleague in the Education Department challenged me to think about the multiple points of contact that students have with the library, outside of the one-shot information literacy session and include those in our assessment.

As a result, I developed curriculum and assessment maps that not only contained a list of courses in which specific PLSLOs were advanced, but also included assessment of data from LibGuides, gate counts, interlibrary loan, course reserves, the biennial library survey, and website usability testing. All of these statistics can be tied to student-centered learning, and assessing them enables my library faculty and me to make changes in the way that we market services and resources to constituents.

The maps illustrated in Table 1 and Table 2 below are intentionally simplistic. They provide the library liaisons and faculty in their liaison areas with a visual overview of the information literacy PLSLOs taught and assessed. When the University Libraries moved to the liaison model in 2011, the librarian teaching education majors was not necessarily familiar with the PLSLOs advanced by the library liaison to the Language & Literature Department. Mapping current library involvement in the curriculum created a shared knowledge of PLSLOs among the library faculty. I also asked each librarian to create a lesson plan, which we published on the University Libraries’ website. Since we utilize the letter “x” to denote PLSLOs covered, rather than letters that display the depth of coverage (introduction, reinforcement, mastery), lesson plans provide the librarians and their faculty with a detailed outline of how each PLSLO is developed in the classroom.

Apart from their general visual appeal, these maps also enable us to recognize holes in our information literacy program. For example, several departments are not listed on the curriculum map because we do not currently provide instruction in their classes, and many of the classes that we visit are freshman- and sophomore-level. This helps us to identify areas that we need to target moving forward, such as juniors through graduate students.

Table 1 – Adapted Curriculum Map

Table 2 reveals the limited number of courses we hope to assess in the upcoming year. In discussions with library faculty, I quickly discovered that it was more important to start assessing than to assess every class we are involved in at present. We can continue to build in formal assessments over time, but for now the important thing is to begin evaluating the learning process, so that we can make modifications to more effectively impact student learning (Oakleaf & Kaske 283).

The University Libraries is a unique entity in comparison to the other academic units represented across campus, largely because information literacy is not a core curriculum requirement. As a result, some of the PLSLOs reflected on the assessment map include data collected outside of the traditional classroom that is specific to the services, resources, and educational opportunities that we facilitate. This is best demonstrated by PLSLOs two and five. For example, we know that students outside of our sessions are using the LibGuides and databases, which are integral to PLSLO two – “The student will be able to use sources in research.” For PLSLO five – “The student will be able to identify the library as a place in the learning process” – we are not predominantly interested in whether or not students are using our electronic classrooms during an information literacy session. We are interested in students’ awareness and use of the physical and virtual library as a whole, so we assess student learning by whether students can find what they need on the University Libraries’ website or whether they utilize the University Libraries’ physical space in general.

Table 2 – Adapted Assessment Map (First Half)

Table 2 – Adapted Assessment Map (Second Half)

Transparency in the Assessment Process

Curriculum and assessment maps provide librarians and educators alike with the opportunity to be transparent about the learning that is or is not happening inside and outside of the classroom. I am grateful for the information I have gleaned from the Education Department at SBU along the way because it has inspired a newfound commitment and dedication to the students that we serve.

Although curriculum and assessment mapping is not widespread in the academic library world, some information literacy practitioners have readily embraced this concept. For example, in Brian Matthews and Char Booth’s invited paper, presented at the California Academic & Research Libraries Conference (CARL), Booth discusses her use of the concept mapping software, Mindomo, to help library and departmental faculty visualize current curriculum requirements, as well as opportunities for library involvement in the education process (6). Some sample concept maps that are especially interesting include one geared towards first-year students and another customized to the Environmental Analysis program at Claremont Colleges (Booth & Matthews 8-9). The concept maps then link to rubrics that are specific to the programs highlighted. Booth takes a very visual and interactive approach to curriculum mapping.

In their invited paper, “A More Perfect Union: Campus Collaborations for Curriculum Mapping Information Literacy Outcomes,” Moser et al. discuss the mapping project they undertook at the Oxford College of Emory University. After revising their PLSLOs, the librarians met with departmental faculty to discuss where the library’s PLSLOs were currently introduced and reinforced in the subject areas. All mapping was then done in Weave (Moser et al. 333). While the software Emory University utilizes is a subscription service, Moser et al. provide a template of the curriculum mapping model they employed (337).

So, which of the mapping systems discussed is the best fit for your institution? This is something that you will want to determine based on the academic environment in which you find yourself. For example, does your institution subscribe to mapping software like Emory University or will you need to utilize free software to construct concept maps like Claremont Colleges? Another factor to keep in mind is what model will make the most sense to your librarians and the subject faculty they partner with in the classroom. As long as the maps created are clear to the audiences that they serve, the format they take is irrelevant. In Janet Hale’s book, A Guide to Curriculum Mapping: Planning, Implementing, and Sustaining the Process, she discusses several different kinds of maps for the K-12 setting. While each map outlined contains benefits, she argues that the “Final selection should be based on considering the whole mapping system’s capabilities” (Hale 228).

The curriculum and assessment mapping models I have used for the information literacy competency program at SBU reflect the basic structure laid out by the Assessment Academy Team at my institution. I have customized the maps to reflect the ways the University Libraries facilitates and desires to impact student learning inside and outside of the classroom. In an effort to foster collaboration and create more visibility for the Information Literacy Competency Program, I have created two LibGuides that are publicly available to our faculty, students, and the general public. The first one, entitled Information Literacy Competency Program, consists of PLSLOs, our curriculum and assessment maps, outlines of all sessions taught, etc. The Academic Program Review LibGuide provides an overview of the different ways that we are assessing student learning – including website usability testing feedback, annual information literacy reports and biennial student survey reports. Due to confidentiality, all reports are accessible only via the University’s intranet.

Acknowledging the Imperfections of Curriculum and Assessment Mapping

Curriculum and assessment mapping is not an exact science. I wish I could bottle it up and distribute a finished product to all of the information literacy librarians out there who grapple with the imprecision of our profession. While that would eliminate our daily struggle, it would also eliminate the Eureka moments that we all experience as we grow with and challenge the academic cultures in which we find ourselves.

So, what have I learned as a result of the mapping process? It requires collaboration on the part of library and non-library faculty. When I began curriculum and assessment mapping, I learned pretty quickly that without the involvement of each liaison librarian and the departmental faculty, mapping would be in vain. Map structures must be based on the pre-existing partnerships librarians have, but will identify gaps or areas of growth throughout the curriculum. I would love to report that our curriculum maps encompass the entire curriculum at SBU, but that would be a lie. Initially, I did a content analysis of the curriculum and reviewed syllabi for months in an effort to develop well-rounded maps. I learned all too quickly, however, that mapping requires us to work with what we already have and set goals for the future. So, while the University Libraries’ maps are by no means complete, I have challenged each liaison librarian to identify PLSLOs they can advance in the classroom now, while looking for new ways to impact student learning moving forward.

During the mapping process, I was overwhelmed by the fact that the University Libraries was unable to represent student learning in the same way the other academic departments across campus did. I liked the thought of creating maps identifying the introduction, reinforcement, and mastery of certain skill sets throughout students’ academic tenure with us. However, I quickly realized that this was impractical because it does not take into account the variables that librarians encounter, such as one-shot sessions, uneven representation in each section of a given class, transfer students, and learning scenarios that happen outside of the classroom itself. Using the “x” to define areas where our PLSLOs are currently impacting student learning was much less daunting and far more practical.

It is important to anticipate pushback in the mapping process (Moser et al. 333-334; Sterngold 86-88). When I began attending the Department Chair Workshops in 2013, I quickly discovered that not all of the other departmental faculty were amenable to my presence. One individual asked why I was attending, while another questioned my boss about my expertise in higher education. In the assessment mapping process, faculty in my library liaison area were initially reluctant to collaborate with me on assessing student work. Despite some faculty’s resistance, I was determined to persevere. As a result of the workshops, I established a Community of Practice with faculty in the Education Department and grew more confident in my role as an educator.

I know that there are gaps in the maps, but I have come to terms with the healthy tension that this knowledge creates. While I have a lot more to learn about information literacy, learning theory, curriculum and assessment mapping, etc., I no longer feel under-qualified. As an academic, I continue to glean knowledge from my fellow librarians and the Education Department, looking for opportunities to make modifications as necessary. I have reconciled with the fact that this is a continual process of recognizing gaps in my professional practice and identifying opportunities for change. After all, that is what education is all about, right?

Many thanks to Annie Pho, Ellie Collier, and Carrie Donovan for their tireless editorial advice. I would like to extend a special thank you to my Library Dean, Dr. Ed Walton for believing in my ability to lead information literacy efforts at Southwest Baptist University Libraries back in 2011 when I was fresh out of library school. Last, but certainly not least, my gratitude overflows to the educators at my present institution who helped me to wrap my head around curriculum and assessment mapping. Assessment is no longer a scary thing because I now have a plan!

Works Cited

Booth, Char, and Brian Matthews. “Understanding the Learner Experience: Threshold Concepts & Curriculum Mapping.” California Academic & Research Libraries Conference. San Diego, CARL: 7 Apr. 2012. Web. 17 Mar. 2015.

“Create a Curriculum Map: Aligning Curriculum with Student Learning Outcomes.” Office of Assessment. Santa Clara University, 2014. Web. 13 Apr. 2015.

“Framework for Information Literacy for Higher Education.” 2015. Association of College and Research Libraries. 11 Mar. 2015.

Gilchrist, Debra. “A Twenty Year Path: Learning About Assessment; Learning from Assessment.” Communications in Information Literacy 3.2 (2009): 70-79. Web. 4 Mar. 2015.

Gregory, Lua, and Shana Higgins. Introduction. Information Literacy and Social Justice: Radical Professional Praxis. Ed. Lua Gregory and Shana Higgins. Sacramento: Library Juice Press. 1-11. Web. 13 Apr. 2015.

Hale, Janet A. A Guide to Curriculum Mapping: Planning, Implementing, and Sustaining the Process. Thousand Oaks: Corwin, 2008. Print.

Hatfield, Susan. “Assessing Your Program-Level Assessment Plan.” IDEA Paper. 45 (2009): 1-9. IDEA Center. Web. 27 Feb. 2015.

Hayes-Jacobs, Heidi. Getting Results with Curriculum Mapping. Alexandria: Association for Supervision and Curriculum Development, 2004. eBook Academic Collection. Web. 26 Feb. 2015.

—. Mapping the Big Picture: Integrating Curriculum & Assessment K-12. Alexandria: Association for Supervision and Curriculum Development. 1997. Print.

“Information Literacy Competency Standards for Higher Education.” 2000. Association of College & Research Libraries. 11 Mar. 2015.

Levine, Arthur. Generation on a Tightrope: A Portrait of Today’s College Student. San Francisco: Jossey-Bass, 2012. Print.

Manning, Brenda H., and Beverly D. Payne. “A Vygotskian-Based Theory of Teacher Cognition: Toward the Acquisition of Mental Reflection and Self-Regulation.” Teaching and Teacher Education 9.4 (1993): 361-372. Web. 25 May 2012.

Mitchell, John. The Potential for Communities of Practice to Underpin the National Training Framework. Melbourne: Australian National Training Authority. 2002. John Mitchell & Associates. Web. 18 Mar. 2015.

Moser, Mary, Andrea Heisel, Nitya Jacob, and Kitty McNeill. “A More Perfect Union: Campus Collaborations for Curriculum Mapping Information Literacy Outcomes.” Association of College and Research Libraries Conference. Philadelphia, ACRL: Mar.-Apr. 2011. Web. 17 Mar. 2015.

Oakleaf, Megan, and Neal Kaske. “Guiding Questions for Assessing Information Literacy in Higher Education.” portal: Libraries and the Academy 9.2 (2009): 273-286. Web. 21 Dec. 2011.

Perkins, David. Archimedes’ Bathtub: The Art and Logic of Breakthrough Thinking. New York: W.W. Norton, 2000. Print.

Reiss, Rick. “Before and After Students ‘Get It’: Threshold Concepts.” Tomorrow’s Professor Newsletter 22.4 (2014): n. pag. Stanford Center for Teaching and Learning. Web. 7 Mar. 2015.

Sterngold, Arthur H. “Rhetoric Versus Reality: A Faculty Perspective on Information Literacy Instruction.” Defining Relevancy: Managing the New Academic Library. Ed. Janet McNeil Hurlbert. West Port: Libraries Unlimited, 2008. 85-95. Google Book Search. Web. 17 Mar. 2015.

Wiggins, Grant, and Jay McTighe. Understanding by Design. Alexandria: Association for Supervision and Curriculum Development, 2005. ebrary. Web. 3 Mar. 2015.
