Planet Code4Lib

Tara Robertson: HUMAN: subtitles enhancing access and empathy

Sun, 2015-09-27 18:28

I came across this video on a friend’s Facebook feed. I’m a chronic multitasker, but by half a minute in I stopped doing whatever else I was doing and just watched and listened. This is the part that grabbed my heart:

This is my star. I had to wear it on my chest, of course, like all the Jews. It’s big, isn’t it? Especially for a child. That was when I was 8 years old.

Also, Francine Christophe's voice was very powerful and moved me. She enunciates each word so clearly. My French isn't great, but she speaks slowly and clearly enough that I can understand her. The subtitles also confirm that I'm understanding correctly and reinforce what she's saying.

I noticed that there was something different about the subtitles. The font is clear and elegant and the words are positioned in the blank space beside her face. I can watch her face and her eyes while I read the subtitles. My girlfriend reminded me of something I had said when I was reviewing my Queer ASL lesson at home. In ASL I learned that when fingerspelling you position your hand up by your face, as your face (especially your eyebrows) is part of the language. Even when we speak English our faces communicate so much.

I’ve seen a bunch of these short videos from this film. They are everyday people telling amazing stories about the huge range of experiences people have on this planet. The people who are filmed are from all over the world and speak various languages. The design decision to shoot people with enough space to put the subtitles beside them is really powerful. For me, the way the subtitles are done enhances the feeling of empathy.

A couple of weeks ago I was at a screening event of Microsoft’s Inclusive video at OCAD in Toronto. In the audience were many students of the Inclusive Design program who were in the video. One of the students asked if the video included description of the visuals for blind and visually impaired viewers. The Microsoft team replied that it didn’t and that often audio descriptions were distracting for viewers who didn’t need them. The student asked if there could’ve been a way to weave the audio description into the interviews, perhaps by asking the people who were speaking to describe where they were and what was going on, instead of tacking on the audio description afterwards. I love this idea.

HUMAN is very successful in skillfully including captions that are beautiful, enhance the storytelling, provide access to Deaf and Hard of Hearing people, give people who know a bit of the language a way to follow along with the story as told in the storyteller’s mother tongue, and make it easy to translate the film into other languages. I’m going to include this example in the work we’re doing around universal design for learning with the BC Open Textbook project.

I can’t wait to see the whole film of HUMAN. I love the stories that they are telling and the way that they are doing it.

Eric Hellman: Weaponization of Library Resources

Sun, 2015-09-27 01:17
This post needs a trigger warning. You probably think the title indicates that I've gone off the deep end, or that this is one of my satirical posts. But read on, and I think you'll agree with me: we need to make sure that library resources are not turned into weapons. I'll admit that sounds ludicrous, but it won't after you learn about "The Great Cannon" and "QUANTUM".

But first, some background. Most of China's internet connects to the rest of the world through what's known outside China as "the Great Firewall of China". Similar to the network firewalls used for most corporate intranets, the Great Firewall is used as a tool to control and monitor internet communications in and out of China. Websites that are deemed politically sensitive are blocked from view inside China. This blocking has been used against obscure and prominent websites alike. The New York Times, Google, Facebook and Twitter have all been blocked by the firewall.

When web content is unencrypted, it can be scanned at the firewall for politically sensitive terms such as "June 4th", a reference to the Tiananmen Square protests, and blocked at the webpage level. China is certainly not the only entity that does this; many school systems in the US do the same sort of thing to filter content that's considered inappropriate for children. Part of my motivation for working on the "Library Digital Privacy Pledge" is that I don't think libraries and publishers who provide online content to them should be complicit in government censorship of any kind.

Last March, however, China's Great Firewall was associated with an offensive attack. To put it more accurately, software co-located with China's Great Firewall turned innocent users of unencrypted websites into attack weapons. The targets of the attack were GreatFire.org, a website that works to provide Chinese netizens a way to evade the surveillance of the Great Firewall, and GitHub, the website that hosts code for hundreds of thousands of programmers, including those supporting GreatFire.org.

Here's how the Great Cannon operated. In August, Bill Marczak and co-workers from Berkeley, Princeton and Citizen Lab presented their findings on the Great Cannon at the 5th USENIX Workshop on Free and Open Communications on the Internet.

The Great Cannon acted as a "man-in-the-middle"[*] to intercept the communications of users outside China with servers inside China. Javascripts that collected advertising and usage data for Baidu, the "Chinese Google", were replaced with weaponized javascripts. These javascripts, running in the browsers of internet users outside China, then mounted the denial-of-service attack on GreatFire.org and GitHub.

China was not the first to weaponize unencrypted internet traffic. Marczak et al. write:

Our findings in China add another documented case to at least two other known instances of governments tampering with unencrypted Internet traffic to control information or launch attacks—the other two being the use of QUANTUM by the US NSA and UK’s GCHQ.[reference] In addition, product literature from two companies, FinFisher and Hacking Team, indicate that they sell similar “attack from the Internet” tools to governments around the world [reference]. These latest findings emphasize the urgency of replacing legacy web protocols like HTTP with their cryptographically strong counterparts, such as HTTPS.

It's worth thinking about how libraries and the resources they offer might be exploited by a man-in-the-middle attacker. Science journals might be extremely useful in targeting espionage scripts at military facilities, for example. A saboteur might alter reference technical information used by a chemical or pharmaceutical company, with potentially disastrous consequences. It's easy to see why any publisher that wants its information to be perceived as reliable has no choice but to start encrypting its services now.

The unencrypted services of public libraries are attractive targets for other sorts of mischief, ironically because of their users' trust in them and because they have a reputation for protecting privacy. Think about how many users would enter their names, phone numbers, and last four digits of their social security numbers if a library website seemed to ask for it. When a website is unencrypted, it's possible for "man-in-the-middle" attacks to insert content into an unencrypted web page coming from a library or other trusted website. An easy way for an attacker to get into position to execute such an attack is to spoof a wifi network, for example in a cafe or other public space, such as a library. It doesn't help if only a website's login is encrypted if an attacker can easily insert content into the unencrypted parts of the website.
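
Site-wide encryption closes this hole: when every page is served over HTTPS, there is no unencrypted content for an attacker to tamper with. As a minimal sketch (assuming a Rails-based library site; other platforms have their own equivalent switches), a single production setting enables the whole package:

# config/environments/production.rb
Rails.application.configure do
  # Redirect all plain-HTTP requests to HTTPS, mark cookies as secure,
  # and send a Strict-Transport-Security (HSTS) header so browsers
  # refuse to downgrade on later visits.
  config.force_ssl = true
end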

To be clear, we don't know that libraries and the type of digital resources they offer are being targeted for weaponization, espionage or other sorts of mischief. Unfortunately, the internet offers a target-rich environment of unencrypted websites.

I believe that libraries and their suppliers need to move swiftly to take the possibility off the table and help lead the way to a more secure digital environment for us all.

[note: Technically, the Great Cannon executed a "man-on-the-side" variant of a "man-in-the-middle" attack, not unlike the NSA's "QuantumInsert" attack revealed by Edward Snowden.]

Terry Reese: Automatic Headings Correction–Validate Headings

Sat, 2015-09-26 21:24

After about a month of working with the headings validation tool, I’m ready to start adding a few enhancements to provide some automated headings corrections.  The first change to be implemented will be automatic correction of headings where the preferred heading is different from the in-use heading.  This will be implemented as an optional element.  If this option is selected, the report will continue to note variants as part of the validation report – but when exporting data for further processing, automatically corrected headings will not be included in the record sets for further action.

Additionally, I’ll continue to look at ways to improve the speed of the process.  While there are some limits to what I can do, since this tool relies on a web service (outside of providing an option for users to download the ~10GB worth of LC data locally), there are a few things I can do to continue to ensure that only new items are queried when resolving links.
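
The "only new items are queried" approach amounts to memoizing the web-service lookups. MarcEdit itself is a .NET application, so the following Ruby lines are purely illustrative of the idea; `resolve_heading` is a hypothetical stand-in for the actual call to the LC authorities service:

# Cache resolved headings so repeated validation runs only query
# headings that haven't been seen before. Illustrative sketch only.
HEADING_CACHE = {}

def resolve_heading(heading)
  # hypothetical placeholder for the real web-service lookup
  raise NotImplementedError
end

def cached_resolve(heading)
  # hit the web service only on a cache miss
  HEADING_CACHE[heading] ||= resolve_heading(heading)
end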

These changes will be made available on the next update.


FOSS4Lib Recent Releases: Avalon Media System - 4.0.1

Sat, 2015-09-26 18:50

Last updated September 26, 2015. Created by Peter Murray on September 26, 2015.

Package: Avalon Media System
Release Date: Thursday, September 24, 2015

Karen G. Schneider: The importance of important questions

Sat, 2015-09-26 12:43

Pull up a chair and set a while: I shall talk of my progress in the doctoral program; my research interests, particularly LGBT leadership; the value of patience and persistence; Pauline Kael; and my thoughts on leadership theory. I include a recipe for cupcakes. Samson, my research assistant, wanted me to add something about bonito flakes, but that’s really his topic.

My comprehensive examinations are two months behind me: two four-hour closed-book exams, as gruesome as it sounds. Studying for these exams was a combination of high-level synthesis of everything I had learned for 28 months and rote memorization of barrels of citations. My brain was not feeling pretty.

I have been re-reading the qualifying paper I submitted earlier this year, once again feeling grateful that I had the patience and persistence to complete and then discard two paper proposals until I found my research beshert, about the antecedents and consequences of sexual identity disclosure for academic library directors. That’s fancy-talk for a paper that asked, why did you come out, and what happened next? The stories participants shared with me were nothing short of wonderful.

As the first major research paper I have ever completed, it is riddled with flaws. At 60–no, now, 52–pages, it is also an unpublishable length, and I am trying to identify what parts to chuck, recycle, or squeeze into smaller dress sizes, and what would not have to be included in a published paper anyway.

But if there is one thing I’ve learned in the last 28 months, it is that it is wise to pursue questions worth pursuing.  I twice made the difficult decision to leave two other proposals on the cutting-room floor, deep-sixing many months of effort. But in the end that meant I had a topic I could live with through the long hard slog of data collection, analysis, and writing, a topic that felt so fresh and important that I would mutter to myself whilst working, “I’m in your corner, little one.”

As I look toward my dissertation proposal, I find myself again (probably, but not inevitably) drawn toward LGBT leadership–even more so when people, as occasionally happens, question this direction. A dear colleague of mine questioned the salience of one of the themes that emerged from my study, the (not unique) idea of being “the only one.” Do LGBT leaders really notice when they are the only ones in any group setting, she asked? I replied, do you notice when you’re the only woman in the room? She laughed and said she saw my point.

The legalization of same-gender marriage has also resulted in some hasty conclusions by well-meaning people, such as the straight library colleague from a liberal coastal community who asked me if “anyone was still closeted these days.” The short answer is yes. A 2013 study of over 800 LGBT employees across the United States found that 53 percent of the respondents hide who they are at work.

But to unpack my response requires recalling Pauline Kael’s comment about not knowing anyone who voted for Nixon (a much wiser observation than the mangled quote popularly attributed to her): “I live in a rather special world. I only know one person who voted for Nixon. Where they are I don’t know. They’re outside my ken. But sometimes when I’m in a theater I can feel them.” 

In my study, I’m pleased to say, most of the participants came from outside that “rather special world.”  I recruited participants through calls to LGBT-focused discussion lists which were then “snowballed” out to people who knew people who knew people, and to quote an ancient meme, “we are everywhere.” The call for participation traveled several fascinating degrees of separation. If only I could have chipped it like a bird and tracked it! As it was, I had 10 strong, eager participants who generated 900 minutes of interview data, and the fact that most were people I didn’t know made my investigation that much better.

After the data collection period for my research had closed, I was occasionally asked, “Do you know so-and-so? You should use that person!” In a couple of cases colleagues complained, “Why didn’t you ask me to participate?” But I designed my study so that participants had to elect to participate during a specific time period, and they did; I had to turn people away.

The same HRC study I cite above shrewdly asked questions of non-LGBT respondents, who revealed their own complicated responses to openly LGBT workers. “In a mark of overall progress in attitudinal shifts, 81% of non-LGBT people report that they feel LGBT people ‘should not have to hide’ who they are at work. However, less than half would feel comfortable hearing an LGBT coworker talk about their social lives, dating or related subject.” I know many of you reading this are “comfortable.” But you’re part of my special world, and I have too much experience outside that “special world” to be surprised by the HRC’s findings.

Well-meaning people have also suggested more than once that I study library leaders who have not disclosed their sexual identity. Aside from the obvious recruitment issues, I’m far more interested in the interrelationship between disclosure and leadership. There is a huge body of literature on concealable differences, but suffice it to say that the act of disclosure is, to quote a favorite article, “a distinct event in leadership that merits attention.” Leaders make decisions all the time; electing to disclose–an action that requires a million smaller decisions throughout life and across life domains–is part of that decision matrix, and inherently an important question.

My own journey into research

If I were to design a comprehensive exam for the road I have been traveling since April, 2013, it would be a single, devilish open-book question to be answered over a weekend: describe your research journey.

Every benchmark in the doctoral program was a threshold moment for my development. Maybe it’s my iconoclast spirit, but I learned that I lose interest when the chain of reasoning for a theory traces back to prosperous white guys interviewing prosperous white guys, cooking up less-than-rigorous theories, and offering prosperous-white-guy advice. “Bring more of yourself to work!” Well, see above for what happens to some LGBT people when they bring more of themselves to work. It’s true that the participants in my study did just that, but it was with an awareness that authenticity has its price as well as its benefits.

The more I poked at some leadership theories, the warier I became. Pat recipes and less-than-rigorous origin stories do not a theory make. (Resonant leadership cupcakes: stir in two cups of self-awareness; practice mindfulness, hope, and compassion; bake until dissonance disappears and renewal is evenly golden.) Too many books on leadership “theory” provide reasonable and generally useful recommendations for how to function as a leader, but are so theoretically flabby that, had they been written by women, they would be labeled self-help books.

(If you feel cheated because you were expecting a real cupcake recipe, here’s one from Cook’s Catalog, complete with obsessive fretting about what makes it a good cupcake.)

I will say that I would often study a mainstream leadership theory and then see it in action at work. I had just finished boning up on Theory X and Theory Y when someone said to me, with an eye-roll no less, “People don’t change.” Verily, the scales fell from my eyes and I revisited moments in my career where a manager’s X-ness or Y-ness had significant implications. (I have also asked myself if “Theory X” managers can change, which is an X-Y test in itself.) But there is a difference between finding a theory useful and pursuing it in research.

I learned even more when I deep-sixed my second proposal, a “close but no cigar” idea that called for examining a well-tested theory using LGBT leader participants. The idea has merit, but the more I dug into the question, the more I realized that the more urgent question was not how well LGBT leaders conform to predicted majority behavior, but instead the very whatness of the leaders themselves, about which we know so little.

It is no surprise that my interest in research methods also evolved toward exploratory models such as grounded theory and narrative inquiry that are designed to elicit meaning from lived experience. Time and again I would read a dissertation where an author was struggling to match experience with predicated theory when the real findings and “truth” were embedded in the stories people told about their lives. To know, to comprehend, to understand, to connect: these stories led me there.

Bolman and Deal’s “frames” approach also helped me diagnose how and why people are behaving as they are in organizations, even if you occasionally wonder, as I do, if there could be another frame, or if two of the frames are really one frame, or even if “framing” itself is a product of its time.

For that matter, mental models are a useful sorting hat for leadership theorists. Schein and Bolman see the world very differently, and so follows the structure of their advice about organizational excellence. Which brings me back to the question of my own research into LGBT leadership.

In an important discussion about the need for LGBT leadership research, Fassinger, Shullman, and Stevenson get props for (largely) moving the barycenter of LGBT leadership questions from the conceptual framework of being acted upon toward questions about the leaders themselves and their complex, agentic decisions and interactions with others. Their discussion of the role of situation feels like an enduring truth: “in any given situation, no two leaders and followers may be having the same experience, even if obvious organizational or group variables appear constant.”

What I won’t do is adopt their important article on directions for LGBT leadership research as a Simplicity dress pattern for my leadership research agenda. They created a model; well, you see I am cautious about models. Even my own findings are at best a product of people, time, and place, intended to be valid in the way that all enlightenment is valid, but not deterministic.

So on I go, into the last phase of the program. In this post I have talked about donning and discarding theories as if I had all the time in the world, which is not how I felt in this process at all. It was the most agonizing exercise in patience and persistence I’ve ever had, and I questioned myself along the entire path. I relearned key lessons from my MFA in writing: some topics are more important than others; there is always room for improvement; writing is a process riddled with doubt and insecurity; and there is no substitute for sitting one’s behind in a chair and writing, then rewriting, then writing and rewriting some more.

So the flip side of my self-examination is that I have renewed appreciation for the value of selecting a good question and a good method, and pressing on until done.  I have no intention of repeating my Goldilocks routine.

Will my dissertation be my best work? Two factors suggest otherwise. First, I have now read countless dissertations where somewhere midway in the text the author expresses regret, however subdued, that he or she realized too late that the dissertation had some glaring flaw that could not be addressed without dismantling the entire inquiry. Second, though I don’t know that I’ve ever heard it expressed this way, from a writer’s point of view the dissertation is a distinct genre. I have become reasonably comfortable with the “short story” equivalent of the dissertation. But three short stories do not a novel make, and rarely do one-offs lead to mastery of a genre.

But I will at least be able to appreciate the problem for what it is: a chance to learn, and to share my knowledge; another life experience in the “press on regardless” sweepstakes; and a path toward a goal: the best dissertation I will ever write.


Nicole Engard: Bookmarks for September 25, 2015

Fri, 2015-09-25 20:30

Today I found the following resources and bookmarked them on Delicious.

  • iDoneThis: Reply to an evening email reminder with what you did that day. The next day, get a digest with what everyone on the team got done.


SearchHub: Pushing the Limits of Apache Solr at Bloomberg

Fri, 2015-09-25 17:17
As we count down to the annual Lucene/Solr Revolution conference in Austin this October, we’re highlighting talks and sessions from past conferences. Today, we’re highlighting Anirudha Jadhav’s session on going beyond the conventional constraints of Solr.

The goal of the presentation is to delve into the implementation of Solr, with a focus on how to optimize Solr for big data search. Solr implementations are frequently limited to 5k-7k ingest rates in similar use cases. I conducted several experiments to increase the ingest rate as well as throughput of Solr, and achieved a 5x increase in performance, or north of 25k documents per second. Typically, optimizations are limited by the available network bandwidth. I used three key metrics to benchmark the performance of my Solr implementation: time triggers, document size triggers and document count triggers. The talk will delve into how I optimized the search engine, and how my peers can coax similar performance out of Solr. This is intended to be an in-depth description of the high-frequency search implementation, with Q&A with the audience. All implementations described here are based on the latest SolrCloud multi-datacenter setups.

Anirudha Jadhav is a big data search expert, and has architected and deployed arguably one of the world’s largest Lucene-based search deployments, tipping the scale at a little over 86 billion documents for Bloomberg LP. He has deep expertise in building financial applications, high-frequency trading and search applications, as well as solving complex search and ranking problems. In his free time, he also enjoys scuba-diving, off-road treks with his 18th century British Army motorbike, building tri-copters and underwater photography. Anirudha earned his Masters in Computer Science from the Courant Institute of Mathematical Sciences, New York University.

Never Stop Exploring – Pushing the Limits of Solr: Presented by Anirudha Jadhav, Bloomberg L.P. from Lucidworks

Join us at Lucene/Solr Revolution 2015, the biggest open source conference dedicated to Apache Lucene/Solr, on October 13-16, 2015 in Austin, Texas. Come meet and network with the thought leaders building and deploying Lucene/Solr open source search technology. Full details and registration…
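
The ingest triggers Jadhav mentions are easy to picture from the client side: buffer documents and flush the batch when a document-count or time threshold is reached. Here is a rough Ruby sketch using the rsolr gem (illustrative only, not his actual implementation; `docs` is a placeholder for a real document source):

require 'rsolr'

solr = RSolr.connect(:url => 'http://localhost:8983/solr/collection1')

MAX_DOCS = 1_000 # document-count trigger
MAX_WAIT = 5     # time trigger, in seconds

docs = []        # stand-in: your real document stream goes here
batch = []
last_flush = Time.now

flush = lambda do
  solr.add(batch) unless batch.empty? # one HTTP round trip per batch
  batch.clear
  last_flush = Time.now
end

docs.each do |doc|
  batch << doc
  flush.call if batch.size >= MAX_DOCS || Time.now - last_flush >= MAX_WAIT
end
flush.call
solr.commit # make the newly added documents visible to searchers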


DPLA: Help the Copyright Office Understand How to Address Mass Digitization

Fri, 2015-09-25 14:46

Guest post by Dave Hansen, a Clinical Assistant Professor and Faculty Research Librarian at the University of North Carolina’s School of Law, where he runs the library’s faculty research service.

Wouldn’t libraries and archives like to be able to digitize their collections and make the texts and images available to the world online? Of course they would, but copyright inhibits this for most works created in the last 100 years.

The U.S. Copyright Office recently issued a report and a request for comments on its proposal for a new licensing system intended to overcome copyright obstacles to mass digitization. While the goal is laudable, the Office’s proposal is troubling and vague in key respects.

The overarching problem is that the Office’s proposal doesn’t fully consider how libraries and archives currently go about digitization projects, and so it misidentifies how the law should be improved to allow for better digital access. It’s important that libraries and archives submit comments to help the Office better understand how to make recommendations for improvements.

Below is a summary of the Office’s proposal and five specific reasons why libraries and archives should have reservations about it. I strongly encourage you to read the proposal and Notice of Inquiry closely and form your own judgment about it.

For commenting, a model letter is available here (use this form to fill in basic information), but you should tailor it with details that are important to your institution. Comments are due to the Copyright Office by October 9, 2015. The comment submission page is here.

The Copyright Office’s Licensing Proposal

The Copyright Office’s proposal is that Congress enact a five-year pilot “extended collective licensing” (ECL) system that would allow collecting societies (e.g., the Authors Guild or the Copyright Clearance Center) to grant licenses for mass digitization for nonprofit uses.

Societies could, in theory, already grant mass digitization licenses for works owned by their members. The Office’s proposed ECL system would allow collecting societies to go beyond that, and also grant licenses for all works that are similar to those owned by their members, even if the owners of those similar works are not actually members of the collective themselves. That’s the “extended” part of the license; Congress would, by statute, extend the society’s authority to grant licenses on behalf of both members and non-members alike. Such a system would help to solve one of the most difficult copyright problems libraries and archives face: tracking down rights holders. Digitizers would instead need only to negotiate and purchase a license from the collecting societies, simplifying the rights clearance process.

Why the Copyright Office’s Proposal is Troubling

In the abstract, the Office’s proposal sounds appealing. But for digitizing libraries and archives, the details make it troubling, for these five reasons:

First, the proposal doesn’t address the types of works that libraries and archives are working hardest to preserve and make available online—unique collections that include unpublished works such as personal letters or home photographs. Instead of focusing on these works for which copyright clearance is hardest to obtain, the proposal applies to only three narrow categories: 1) published literary works, 2) published embedded pictorial or graphic works, and 3) published photographs.

Second, given the large variety of content types in the collections that libraries and archives want to digitize—particularly special collections that include everything from unpublished personal papers, to out-of-print books, to government works—there is no one collecting society that could ever offer a license for mass digitization of entire collections. If seeking a license, libraries and archives would still be forced to negotiate with a large number of parties. And because the proposed ECL pilot would include only published works, large sections of collections would remain unlicensable anyway.

Third, digitization is an expensive investment. Because the system would be a five-year pilot project, few libraries or archives would be able to pay what it will cost to digitize works (not to mention ECL license fees) if those works have to be taken offline in a few years when the ECL system expires.

Fourth, for an ECL system to truly address the costs of clearing rights, it would need to include licensing orphan works (works whose owners cannot be located) alongside all other works. While the Copyright Office acknowledges in one part of its report that licensing of orphan works doesn’t make sense because it would require payment of fees that would never go to owners, it later specifies an ECL system that would do just that. The Society of American Archivists said it best in its comments to the Copyright Office: “[R]epositories that are seeking to increase access to our cultural heritage generally have no surplus funds. . . . Allocating those funds in advance to a licensing agency that will only rarely disperse them would be wasteful, and requiring such would be irresponsible from a policy standpoint.”

Finally, one of the most unsettling things about the ECL proposal is its threat to the one legal tool that is currently working for mass digitization: fair use. To be clear, fair use doesn’t work for all mass digitization uses. But it likely does address many of the uses that libraries and archives are most concerned with, including nonprofit educational uses of orphan works, and transformative use of special collections materials.

The Office recognized concerns about fair use in its report, and in response proposed a “fair use savings clause” that would state that “nothing in the [ECL] statute is intended to affect the scope of fair use.” Even with an effective savings clause, the existence of the ECL system alone could shrink the fair use right, because fewer users might rely on it in favor of more conservative licensing. As legal scholars have observed, fair use is like a muscle: its strength depends in part on how it is used.

Rather than focus its energy on creating a licensing system that can only reach a small segment of library and archive collections, the Office should instead promote the use of legal tools that are working, such as fair use, and work to solve the problems underlying the rights-clearance issues by helping to create better copyright information registries and by studying solutions that would encourage rightsholders to make themselves easier to be found by potential users of their works.

LITA: Understanding Creative Commons Licensing

Fri, 2015-09-25 14:00

Creative Commons (CC) licenses are public copyright licenses. What does this mean? It means they allow for free distribution of work that would otherwise be under copyright, providing open access to users. Creative Commons licensing provides both gratis OA licensing and libre OA licensing (terms coined by Peter Suber). Gratis OA is free to use; libre OA is free to use and free to modify.

How does CC licensing benefit the artist? Well, it allows more flexibility in what they can allow others to do with their work. How does it benefit the user? As a user, you are protected from claims of copyright infringement, as long as you follow the CC license conditions.

CC licenses: in a nutshell with examples

BY – attribution | SA – share alike | NC – non-commercial | ND – no derivs

CC0 – The Creative Commons Zero license means the work is in the public domain and you can do whatever you want with it. No attribution is required. This is the easiest license to work with. (example of a CC0 license: Unsplash)

BY – This license means that you can do as you wish with the work but only as long as you provide attribution for the original creator. Works with this type of license can be expanded on and used for commercial use, if the user wishes, as long as attribution is given to the original creator. (example of a CC-BY license: Figshare; data sets at Figshare are CC0; PLOS)

BY-SA – This license is an attribution and share-alike license, meaning that all new works based on the original work will carry the same license. (example of a CC-BY-SA license: Arduino)

BY-NC – This license is another attribution license, but the user does not have to retain the same licensing terms as the original work. The catch: the user must be using the work non-commercially. (example of a BY-NC license: Ely Ivy from the Free Music Archive)

BY-ND – This license means the work can be shared, commercially or non-commercially, but without change to the original work and attribution/credit must be given. (example of a BY-ND license: Free Software Foundation)

BY-NC-SA – This license combines the share-alike and non-commercial licenses with an attribution requirement. Meaning, the work can be used (with attribution/credit) only for non-commercial use, and any and all new works must retain the same BY-NC-SA license. (example of a CC BY-NC-SA license: Nursing Clio (see footer) or MIT OpenCourseWare)

BY-NC-ND – This license combines the non-commercial and no-derivatives licenses with an attribution requirement. Meaning, you can only use works with this license with attribution/credit, for non-commercial use, and they cannot be changed from the original work. (example of a BY-NC-ND license: TED Talk videos)
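
Because each CC license is just a combination of these component flags, checking whether a given use complies reduces to a handful of conditions. Here is a toy Ruby sketch that encodes the rules above (illustrative only, not an official Creative Commons tool):

# A license is modeled as a set of component flags; a proposed use is
# tested against the condition each flag imposes. Illustrative only.
def use_allowed?(license, commercial:, modified:, attributed:, same_license: false)
  return true  if license == [:cc0]                     # public domain: anything goes
  return false if license.include?(:by) && !attributed  # BY: attribution required
  return false if license.include?(:nc) && commercial   # NC: non-commercial use only
  return false if license.include?(:nd) && modified     # ND: no derivatives
  return false if license.include?(:sa) && modified && !same_license # SA: keep the license
  true
end

# Remixing a BY-NC-SA work non-commercially, with credit, under the same license:
use_allowed?([:by, :nc, :sa], commercial: false, modified: true,
             attributed: true, same_license: true) # => true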

DuraSpace News: The ACRL Toolkit–An Open Access Week Preparation Assistant

Fri, 2015-09-25 00:00

Washington, DC – Let ACRL’s Scholarly Communication Toolkit help you prepare to lead events on your campus during Open Access Week, October 19-25, 2015. Open Access Week, a global event now entering its eighth year, is an opportunity for the academic and research community to continue to learn about the potential benefits of Open Access, to share what they’ve learned with colleagues, and to help inspire wider participation in making Open Access a new norm in scholarship and research.

DuraSpace News: Cineca Releases Version 5.3.0 of DSpace-CRIS

Fri, 2015-09-25 00:00

From Michele Mennielli, Cineca

As announced in August’s DuraSpace Digest, just a few days after the release of version 5.2.1, on August 25, 2015 Cineca released Version 5.3.0 of DSpace-CRIS. The new version is aligned with the latest DSpace 5 release and includes a new widget for the dynamic properties of CRIS objects that supports hierarchical classifications such as ERC Sectors and MSC (Mathematics Subject Classification).

District Dispatch: You might as well have stayed in DC

Thu, 2015-09-24 19:48

Nashville is the site of a U.S. House Judiciary Committee listening tour. Photo courtesy of Matthew Percy, Flickr.

At this stage of the copyright reform effort, the U.S. House Judiciary Committee is meeting with stakeholders for “listening sessions,” which give concerned rights holders or users of content an opportunity to make their case for a copyright fix. To reach a broader audience, the Committee is going on the road to reach individuals and groups around the country, and one would think, to hear a range of opinions from the community. So, on September 22, they went to Nashville, a music mecca, to hold a listening session regarding music copyright reform.

Music, perhaps more than any other form of creative expression, needs to be re-examined. New business models for digital streaming, fair royalty rights, and requests for transparency have all created a need for clarity on who gets paid for what in the music business. We need policy that answers this question in a way that’s fair to everyone. One thing has been agreed on by copyright stakeholders thus far—people should be compensated for their intellectual and creative work. Wonderful.

But lo and behold—the same industry and trade group lobbyists that always get a chance to meet with Congressional representatives and staff in DC turned out to be essentially the only music stakeholder groups invited to speak. What gives?

It looks like the House merely gathered the usual suspects—a list of “who do we know (already)?”—to the table. It would have been simple for the Committee to convene a wide gamut of music stakeholders to paint a full picture of the state of the music industry, given that they met in Nashville. Ultimately, however, other key stakeholders (Out of the Box, Sorted Noise, community radio, music educators, librarians, archivists, and consumers) were not heard, and only one (older) version of the state of the music industry (that the Committee already knows about) took center stage.
So, why go to Nashville?

Don’t get me wrong. It is a good thing that the Committee wants to hear from all stakeholders, and it is thoughtful to hold listening sessions in geographically diverse locations, but you have to give people you don’t already know an opportunity to speak. That’s the only way to learn about new business models and how best to cultivate the music creators of tomorrow—to truly understand how the creativity ecosystem can thrive in the future, and what legislative changes are needed to realize that future.


District Dispatch: CopyTalk on international trade treaty

Thu, 2015-09-24 19:34


What does a trade agreement have to do with libraries and copyright? Expert Krista Cox, who has traveled the world promoting better policies for the intellectual property chapter of the Trans-Pacific Partnership Agreement (TPP), will enlighten us at our next CopyTalk webinar.

There is no need to pre-register for this free webinar! Just go to: on October 1, 2015 at 2 p.m. EST/11 a.m. PST.

Note that the webinar is limited to 100 seats so watch with colleagues if possible. An archived copy will be available after the webinar.

The Trans-Pacific Partnership Agreement (TPP) is a large regional trade agreement currently being negotiated between twelve countries: Australia, Brunei, Canada, Chile, Japan, Malaysia, Mexico, New Zealand, Peru, Singapore, the United States and Vietnam. The agreement has been negotiated behind closed doors, but due to various leaks of the text it is apparent that the TPP will include a comprehensive chapter on intellectual property, including specific provisions governing copyright and enforcement. In addition to requiring other countries to change their laws, the final agreement could lock-in controversial provisions of US law and prevent reform in certain areas.

Krista Cox is Director of Public Policy Initiatives at the Association of Research Libraries (ARL). In this role, she advocates for the policy priorities of the Association and executes strategies to implement these priorities. She monitors legislative trends and participates in ARL’s outreach to the Executive Branch and the US Congress.

Prior to joining ARL, Krista worked as the staff attorney for Knowledge Ecology International (KEI) where she focused on access to knowledge issues as well as TPP. Krista received her JD from the University of Notre Dame and her BA in English from the University of California, Santa Barbara. She is licensed to practice before the Supreme Court of the United States, the Court of Appeals for the Federal Circuit, and the State Bar of California.


DPLA: Supporting National History Day Researchers

Thu, 2015-09-24 15:14

In 2015, DPLA piloted a National History Day partnership with National History Day in Missouri, thanks to the initiative of community rep Brent Schondelmeyer. For 2016, DPLA will be partnering with NHDMO and two new state programs: National History Day – California and National History Day in South Carolina. For each program, DPLA designs research guides based on state and national topics related to the contest theme, acts as an official sponsor, and offers a prize for the best project that extensively incorporates DPLA resources.

In this post, NHDMO Coordinator Maggie Mayhan describes the value of DPLA as a resource for NHD student researchers. To learn more about DPLA and National History Day partnerships, please email

Show-Me History

Each year more than 3,500 Missouri students take part in National History Day (NHD), a unique opportunity for sixth- through twelfth-grade students to explore the past in a creative, hands-on way. While producing a documentary, exhibit, paper, performance, or website, they become an expert on the topic of their choosing.

In following NHD rules, students quickly learn that the primary sources they are required to use in their projects also help them to tell their stories effectively. But where do they start their search for those sources? How can it be manageable and meaningful?

Enter the Digital Public Library of America (DPLA). Collecting and curating digital sources from libraries, museums, and archives, the DPLA portal connects students and teachers with the resources that they need. For students who cannot easily visit specialized repositories to work with primary sources, DPLA may even be the connection that enables them to participate in National History Day.

National History Day in Missouri loves how DPLA actively works to fuse history and technology, encouraging students to use modern media to access and share history. Knowing how to use new technologies to find online archives, databases, and other history sources is important for future leaders seeking to explore the past.

Seeing the potential for a meaningful collaboration in which students uncover history through the DPLA collections and put their own stamp on it through National History Day projects, the Digital Public Library of America became a major program sponsor in 2015.

Additionally, DPLA sponsors a special prize at the National History Day in Missouri state contest, awarded to the best documentary or website that extensively incorporates DPLA resources. The 2015 prize winners, Keturah Gadson and Daniela Hinojosa from Pattonville High School in St. Louis, pointed out that DPLA access was important for their award-winning website about civil rights activist Thurgood Marshall:

We found that the sources on the Digital Public Library of America fit amazingly into our research and boosted it where we were lacking… the detail we gained from looking directly at the primary sources was unmatched…DPLA sources completed our research wonderfully.

National History Day in Missouri is excited to continue this partnership in 2016, and we look forward to seeing what resources students will discover as they focus on the 2016 contest theme, Exploration, Encounter, Exchange in History.

LITA: LITA Forum early bird rates end soon

Thu, 2015-09-24 14:00
LITA and LLAMA Members

There’s still time to register for the 2015 LITA Forum at the early bird rate and save $50
Minneapolis, MN
November 12-15, 2015


LITA Forum early bird rates end September 30, 2015
Register Now!

Join us in Minneapolis, Minnesota, at the Hyatt Regency Minneapolis for the 2015 LITA Forum, a three-day education and networking event featuring 2 preconferences, 3 keynote sessions, more than 55 concurrent sessions and 15 poster presentations. This year including content and planning collaboration with LLAMA.

Why attend the LITA Forum

Check out the report from Melissa Johnson. It details her experience as an attendee, a volunteer, and a presenter. This year, she’s on the planning committee and attending. Melissa says what most people don’t know is how action-packed and seriously awesome this year’s LITA Forum is going to be. Register now to receive the LITA and LLAMA members early bird discount:

  • LITA and LLAMA member early bird rate: $340
  • LITA and LLAMA member regular rate: $390

The LITA Forum is a gathering for technology-minded information professionals, where you can meet with your colleagues involved in new and leading edge technologies in the library and information technology field. Attendees can take advantage of the informal Friday evening reception, networking dinners and other social opportunities to get to know colleagues and speakers and experience the important networking advantages of a smaller conference.

Keynote Speakers:

  • Mx A. Matienzo, Director of Technology for the Digital Public Library of America
  • Carson Block, Carson Block Consulting Inc.
  • Lisa Welchman, President of Digital Governance Solutions at ActiveStandards.

The Preconference Workshops:

  • So You Want to Make a Makerspace: Strategic Leadership to support the Integration of new and disruptive technologies into Libraries: Practical Tips, Tricks, Strategies, and Solutions for bringing making, fabrication and content creation to your library.
  • Beyond Web Page Analytics: Using Google tools to assess searcher behavior across web properties.

Comments from past attendees:

“Best conference I’ve been to in terms of practical, usable ideas that I can implement at my library.”
“I get so inspired by the presentations and conversations with colleagues who are dealing with the same sorts of issues that I am.”
“After LITA I return to my institution excited to implement solutions I find here.”
“This is always the most informative conference! It inspires me to develop new programs and plan initiatives.”

Forum Sponsors:

EBSCO, Ex Libris, Optimal Workshop, OCLC, Innovative, BiblioCommons, Springshare, A Book Apart, Rosenfeld Media and Double Robotics.

Get all the details, register and book a hotel room at the 2015 Forum Web site.

See you in Minneapolis.

LITA: September Library Tech Roundup

Thu, 2015-09-24 14:00
Image courtesy of Flickr user kalexanderson (CC BY).

Each month, the LITA bloggers share selected library tech links, resources, and ideas that resonated with us. Enjoy – and don’t hesitate to tell us what piqued your interest recently in the comments section!

Brianna M.

Cinthya I.

I’m mixing things up this month and have been reading a lot on…

John K.

Hopefully this isn’t all stuff you’ve all seen already:

Whitni Watkins

These are all over the place, as I’ve been bouncing back and forth between the multiple interests I’ve been dipping my fingers into.

LibUX: On the User Experience of Ebooks

Thu, 2015-09-24 01:57

So, when it comes to ebooks I am in the minority: I prefer them to the real thing. The aesthetic or whats-it about the musty trappings of paper and ink, or looming space-sapping towers of shelving, just doesn’t capture my fancy. But these are precisely the go-to attributes people wax poetic about — and you can’t deny there’s something to it.

In fact, beyond convenience ebooks don’t have much of an upshot. They are certainly not as convenient as they could be. All the storytelling power of the web is lost on such a stubbornly static industry where print – where it should be most advantageous – drags its feet. Write in the gloss on, but not in an ebook; embellish a narrative with animation at the New York Times (a newspaper), but not in an ebook; share, borrow, copy, paste, link-to anything but an ebook.

Note what is lacking when it comes to ebooks’ advantages: the user experience. True, some people certainly prefer an e-reader (or their phone or tablet), but a physical book has its advantages as well: relative indestructibility, and little regret if it is destroyed or lost; tangibility, both in regard to feel and in the ability to notate; the ability to share or borrow; and, of course, the fact that a book is an escape from the screens we look at nearly constantly. At the very best, the user experience comparison (excluding the convenience factor) is a push; I’d argue it tilts toward physical books.


All things being equal, where the ebook lacks can be made up by the no-cost of its distribution, but the rarely discounted price of the ebook is often more expensive for those of us in libraries or higher ed – and feels even more so, given that readers neither own nor can legally migrate their ebook-as-licensed-software to a device, medium, or format where the user experience can be improved.

This aligns with findings showing that while ebook access improves (phones, etc.), ebook reading doesn’t meaningfully pull away from the reading of print books.

The recent hullabaloo involving the ebookalypse may be a misreading that ignores data from sales of ebooks without ISBNs (loathed self-publishers), in which Amazon dominates because of the ubiquity of the Kindle and its superior bookstore. There, big-publisher books are forced to a fixed price, while an Amazon-controlled interface lets authors add and easily publish good content on the cheap. We are again reminded that investing in even a slightly better user experience than everyone else is good business:

  • the price of ebooks is competitively low – or even free;
  • ebooks, through Kindles or the Kindle App, can be painlessly downloaded in a way that, while largely encumbered by DRM, doesn’t require inconvenient additional software or – worse – having to be read on a computer;
  • and features like WhisperSync enhance the reading experience in a way that isn’t available in print.

Other vendors, particularly those available to libraries, have so far been able to provide only a passable user experience, which doesn’t do much for their desirability for either party.


District Dispatch: ALA Congratulates Dr. Kathryn Matthew

Wed, 2015-09-23 22:35

Dr. Kathryn Matthew, Director, Institute of Museum and Library Services.

U.S. Senate confirms Matthew as Director of the Institute of Museum and Library Services

Washington, DC— In a statement, American Library Association (ALA) President Sari Feldman commented on the United States Senate’s confirmation of Dr. Kathryn K. Matthew as director of the Institute of Museum and Library Services (IMLS).

“We commend President Obama on Dr. Matthew’s appointment and the U.S. Senate for her confirmation. Communities across the nation will greatly benefit from her experience in bringing museums and libraries and the sciences together as resources readily accessible to families, students and others in our society.”

The Institute, an independent United States government agency, is the primary source of federal support for the nation’s 123,000 libraries and 35,000 museums.

“I am honored to have been nominated by President Barack Obama and to have received the confidence from the Senate through their confirmation process. I look forward to being appointed to serve as the fifth Director of the Institute of Museum and Library Services,”  Dr. Matthew said. “I am eager to begin my work at IMLS to help to sustain strong libraries and museums that convene our communities around heritage and culture, advance critical thinking skills, and connect families, researchers, students, and job seekers to information.”

Dr. Matthew will serve a four-year term as the Director of the Institute. The directorship of the Institute alternates between individuals from the museum and library communities.

ALA appreciates the exemplary service of Maura Marx, who has served as IMLS Acting Director since January 19, 2015, following the departure of IMLS Director Susan H. Hildreth at the conclusion of her four-year term. Marx is currently the deputy director for library services. ALA has enjoyed a good, close and collaborative relationship with Hildreth and with Anne-Imelda Radice, who served as IMLS Director from 2006-2010, and looks forward to a similarly strong and cooperative relationship with Dr. Matthew.

Dr. Matthew’s career interests have centered on supporting and coaching museums and other nonprofits, large and small, that are focused on propelling their programs, communications, events, and fundraising offerings to a higher level of success. Her professional experience spans the breadth of the diverse museum field. Through her many different leadership positions, she brings to the agency a deep knowledge of the educational and public service roles of museums, libraries, and related nonprofits.

Trained as a scientist, Dr. Matthew’s 30-year museum career began in curatorial, collections management, and research roles at the Academy of Natural Sciences in Philadelphia and Cranbrook Institute of Science. She worked with a variety of collections including ornithology, paleontology, fine arts, and anthropology. She then moved into management, exhibits and educational programs development, and fundraising and marketing roles, working at the Santa Barbara Museum of Natural History, the Virginia Museum of Natural History, The Nature Conservancy, the Historic Charleston Foundation, and The Children’s Museum of Indianapolis. She was also a science advisor for the IMAX film “Tropical Rainforest,” produced by the Science Museum of Minnesota.

In addition she was Executive Director of the New Mexico Museum of Natural History and Science, a state-funded museum. In that role she worked with corporations, federal agencies, public schools, and Hispanic and Native American communities to offer STEM-based programs. “Proyecto Futuro” was a nationally-recognized program that began during her tenure.

Dr. Matthew has worked on three museum expansion projects involving historic buildings: Science City at Union Station in Kansas City, Missouri, and the Please Touch Museum at Memorial Hall and The Chemical Heritage Foundation, both in Philadelphia.

Over her 30-year career, she has been active as a volunteer to smaller nonprofits, a board member, and an award-winning peer reviewer for the American Alliance of Museums’ Accreditation and Museum Assessment Programs. Her board service has included two children’s museums, a wildlife rehabilitation center, and a ballet company.


District Dispatch: Six takeaways from new broadband report

Wed, 2015-09-23 21:52

ALA participated in a White House roundtable on new federal broadband recommendations. (Photo by www.GlynLowe.com via Flickr)

On Monday the inter-agency Broadband Opportunity Council (BOC) released its report and recommendations on actions the federal government can take to improve broadband networks and bring broadband to more Americans. Twenty-five agencies, departments and offices took part in the Council, which also took public comments from groups like the ALA.

The wide-ranging effort opened the door to address outdated program rules as well as think bigger and more systemically about how to more efficiently build and maximize more robust broadband networks.

Here are six things that struck me in reading and hearing from other local, state and national stakeholders during a White House roundtable in which ALA participated earlier this week:

  1. It’s a big deal. The report looks across the federal government through a single lens of what opportunities for and barriers to broadband exist that it may address. Council members (including from the Institute of Museum and Library Services) met weekly, developed and contributed action plans, and approved the substance of the report. That’s a big job—and one that points to the growing understanding that a networked world demands networked solutions. Broadband (fixed and mobile) is everyone’s business, and this report hopefully begins the process of institutionalizing attention to broadband across sectors.
  2. It’s still a report…a first step toward action. There’s no new money, but some action items will increase access to federal programs valued at $10 billion to support broadband deployment and adoption. The US Department of Agriculture (USDA), for instance, will develop and promote new funding guidance making broadband projects eligible for the Rural Development Community Facility Program and will expand broadband eligibility for the RUS Telecommunications Program. Both of these changes could benefit rural libraries.
  3. It’s a roadmap. Because the report outlines who will do what and when, it provides a path to consider next steps. Options range from taking advantage of new resources to advising on new broadband research to increasing awareness of new opportunities among community partners and residents.
  4. “Promote adoption and meaningful use” is a key principle. ALA argued that broadband deployment and adoption should be “married” to drive digital opportunity, and libraries can and should be leveraged to empower and engage communities. Among the actions here is that the General Services Administration (GSA) will modernize government donation, excess and surplus programs to make devices available to schools, libraries and educational non-profits through the Computers for Learning program, and the Small Business Administration (SBA) will develop and deploy new digital empowerment training for small businesses.
  5. IMLS is called out. It is implicated in seven action items, and is the lead on two: funding projects that will provide libraries with tools to assess and manage broadband networks, and expanding technical support for E-rate-funded public library Wi-Fi and connectivity expansions. IMLS also will work with the National Science Foundation and others to develop a national broadband research agenda. The activity includes reviewing existing research and resources and considering possible research questions related to innovation, adoption and impacts (to name a few).
  6. A community connectivity index is in the offing. It is intended to help community leaders understand where their strengths lie and where they need to improve, and to promote innovative community policies and programs. I can think of a few digital inclusion indicators for consideration—how about you?

National Telecommunications and Information Administration (NTIA) Chief Lawrence Strickling noted that the report is “greater than the sum of its parts” in that it increased awareness of broadband issues across the government and brought together diverse stakeholders for input and action. I agree and am glad the Council built on the impactful work already completed through NTIA’s Broadband Technology Opportunities Program (BTOP). As with libraries and the Policy Revolution! initiative, we must play to our strengths, but also think differently and more holistically to create meaningful change. It’s now up to all of us to decide what to do next to advance digital opportunity.


Jonathan Rochkind: bento_search 1.5, with multi-field queries

Wed, 2015-09-23 20:25

bento_search is a gem that lets you search third party search engine APIs with standardized, simple, natural ruby API. It’s focused on ‘scholarly’ sources and use cases.

Version 1.5, just released, includes support for multi-field searching:

searcher = BentoSearch::ScopusEngine.new(:api_key => ENV['SCOPUS_API_KEY'])

results = searcher.search(:query => { :title => '"Mystical Anarchism"', :author => "Critchley", :issn => "14409917" })

Multi-field searches are always AND’d together (title=X AND author=Y), because that was the only use case I had and it seems like mostly what you’d want. (On our existing Blacklight-powered Catalog, we eliminated “All” or “Any” choices for multi-field searches, because our research showed nobody ever wanted “Any”.)

As with everything in bento_search, you can use the same API across search engines, whether you are searching Scopus or Google Books or Summon or EBSCOHost, you use the same ruby code to query and get back results of the same classes.

Except, well, multi-field search is not yet supported for Summon or Primo, because I do not have access to those proprietary projects or documentation to make sure I have the implementation right and test it. I’m pretty sure the feature could be added pretty easily to both, by someone who has access (or wants to share it with me as an unpaid ‘contractor’ to add it for you).

What for multi-field querying?

You certainly could expose this feature to end-users in an application using a bento_search powered interactive search. And I have gotten some requests for supporting multi-field search in our bento_search powered ‘articles’ search in our discovery layer; it might be implemented at some point based on this feature.

(I confess I’m still confused why users want to enter text in separate ‘author’ and ‘title’ fields, instead of just entering the author’s name and title in one ‘all fields’ search box, Google-style. As far as I can tell, all bento_search engines perform pretty well with author and title words entered in the general search box. Are users finding differently? Do they just assume it won’t, and want the security, along with the more work, of entering in multiple fields? I dunno).

But I’m actually more interested in this feature for other users than directly exposed interactive search.

It opens up a bunch of possibilities for under-the-hood known-item identification in various external databases.

Let’s say you have an institutional repository with pre-prints of articles, but it’s only got author and title metadata, and maybe the name of the publication it was eventually published in, but not volume/issue/start-page, which you really want for better citation display and export, analytics, or generation of a more useful OpenURL.

So you take the metadata you do have, and search a large aggregating database to see if you can find a good match, and enhance the metadata with what that external database knows about the article.
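
With bento_search 1.5, that lookup is a few lines. A sketch, assuming Scopus carries the published version (`preprint` here is a hypothetical local repository record, not part of bento_search):

searcher = BentoSearch::ScopusEngine.new(:api_key => ENV['SCOPUS_API_KEY'])
results = searcher.search(:query => {
  :title  => preprint.title,   # `preprint` is a hypothetical local record
  :author => preprint.author
})
if (match = results.first)
  # enhance the local record with what the external database knows
  preprint.update(:volume => match.volume, :issue => match.issue,
                  :start_page => match.start_page)
end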

Similarly, citations sometimes come into my OpenURL resolver (powered by Umlaut) that lack sufficient metadata for good coverage analysis and outgoing link generation, for which we generally need year/volume/issue/start-page too. Same deal.

Or in the other direction, maybe you have an ISSN/volume/issue/start-page, but don’t have an author and title. Which happens occasionally at the OpenURL link resolver, maybe other places. Again, search a large aggregating database to enhance the metadata, no problem:

results = searcher.search(:query => { :issn => "14409917", :volume => "10", :issue => "2", :start_page => "272" })

Or maybe you have a bunch of metadata, but not a DOI — you could use a large citation aggregating database that has DOI information as a reverse-DOI lookup. (Which makes me wonder if CrossRef or another part of the DOI infrastructure might have an API I should write a BentoSearch engine for…)

Or you want to look up an abstract. Or you want to see if a particular citation exists in a particular database for value-added services that database might offer (look inside from Google Books; citation chaining from Scopus, etc).
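
The same query shape works as a simple existence check. For instance, to decide whether to offer a Google Books “look inside” link (a sketch; `citation` is a hypothetical local object):

books = BentoSearch::GoogleBooksEngine.new(:api_key => ENV['GOOGLE_API_KEY'])
results = books.search(:query => { :title => citation.title, :author => citation.author })
offer_look_inside = results.total_items > 0  # any hit means the book is in Google Books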

With multi-field search in bento_search 1.5, you can do a known-item ‘reverse’ lookup in any database supported by bento_search, for these sorts of enhancements and more.

In my next post, I’ll discuss this in terms of DOAJ, a new search engine added to bento_search in 1.5.
