Feed aggregator

LITA: Passively Asking for Input: Museum Exhibits and Information Retention

planet code4lib - Wed, 2015-10-28 14:00

One of my main research interests is in user experience design; specifically, how people see and remember information. Certain aspects of “seeing” information are passive; that is, we see something without needing to do anything. This is akin to seeing a “Return Materials Here” sign over a book drop: you see that this area fills a function you need, but other than looking for it and finding it, you don’t have to do much else. But how much of this do we actually acknowledge, much less remember?

Countless times I’ve seen patrons fly past signs that tell them exactly where to find a certain book or when our library opens. It’s information they need, but for some reason it hasn’t registered. So how can we make this more efficient?

I visited the Boston Museum of Science recently and participated in their Hall of Human Life exhibit. Now, anyone can participate in an exhibit, especially in a science museum: turn the crank to watch water flow! Push a button to light up the circulatory system! Touch a starfish! I’ll call this “active passivity”: you’re participating but you’re doing so at a bare minimum. What little information you’re receiving may or may not stick.

Who knew feet could be so interesting? (Photo courtesy of the Museum of Science, Boston)

The Hall of Human Life is different because it necessitates your input. You must give it data for the exhibit to be effective. For instance, I had to see how easily distracted I was by selecting whether I saw more red dots or blue dots while other images flashed across the screen. I had to position a virtual module on the International Space Station with only two joysticks to see how blue light affects productivity. I even had to take off my shoes and walk across a platform so I could measure the arch of my foot. All of my data is then compared with that of two hundred other museum-goers who gave their time and data, matched on my age, my sex, and myriad other factors such as how much time I spent sleeping the night before and whether or not I played video games.

But that’s not all of it. In order to do these things, you must wear a wristband with a barcode and a number on it. This stores your data and feeds it to each exhibit as well as keeps track of the data the exhibits give back to you. This way, you can see from home how many calories you burn while walking and how well you recognize faces out of a group.

Thus, in order for people to remember a bit of information, they need to experience it as much as possible. That’s all well and good for a science museum exhibit, but how would that work in a library, where almost all of our information is passively given? We need to take some things into consideration:

  • The exhibit didn’t require participation; it invited it – I could’ve ignored the exhibit and kept on walking, but it was hard: there were bright colors, big pictures, lights, and sounds. It got my attention without demanding it. Since we humans love bright lights and pretty colors, the exhibit asks us to come see what the fuss is about.
  • The exhibit was accessible – I don’t necessarily mean ADA-type accessibility here (although it fit that, too). As I said before, the exhibit hall was bright and welcoming. In addition to being aesthetically pleasing, each station had a visual aid demonstrating what the exhibit was, how to participate, and how your results matched up. It directed you to look at different axes on a graph, for instance, and if it wanted to show you something in particular, it would highlight it. This made it easy for anyone of any age to come and play and – gasp – learn.
  • The exhibit prompted you for your input – Not only did it prompt you to participate, it would ask you questions: “Does the data we’ve collected match what we thought we’d get?” “Do you think age, sex, or experience will affect the results?” “Were your predictions right?” The exhibits asked you to make decisions before, during, and after the activity, and encouraged reflection.

You’re probably saying to yourself that as library staff we do try to invite participation, to be accessible, and to ask for input. But it’s not as effective as it should – or could – be. It’s not feasible for all library systems to get touch screens and interactive devices (yet), but we can mould our information to require less active passivity and more action. Using bright colors, welcoming imagery, and memorable, punchy explanations is a start. Some libraries already have interactive kiosks but may not offer a video guide to using them. Adding more lighting and windows can make a space more lively and inspire more focus in our patrons.

There’s still a lot more to learn about visual communication and how humans process and store information, and I certainly don’t claim to have all the answers. But these are the questions I’m starting to ask and starting to research, and by the looks of things, it’s not just libraries and museums that are doing the same.

Hydra Project: Hydra Connect 2016 in Boston, MA

planet code4lib - Wed, 2015-10-28 11:21

A brief note to say that Hydra has accepted a generous offer from the Boston Public Library, Northeastern University, WGBH and the DPLA jointly to host Hydra Connect 2016.  We’re now looking into possible dates and we’ll let you know what these are as soon as they’re finalized.

FOSS4Lib Recent Releases: Piwik - 2.15.0

planet code4lib - Wed, 2015-10-28 09:25

Last updated October 28, 2015. Created by David Nind on October 28, 2015.

Package: Piwik
Release Date: Thursday, October 22, 2015

FOSS4Lib Recent Releases: Koha - 3.20.5

planet code4lib - Wed, 2015-10-28 09:20
Package: Koha
Release Date: Tuesday, October 27, 2015

Last updated October 28, 2015. Created by David Nind on October 28, 2015.

Monthly maintenance release for Koha. See the release announcement for the details.

Terry Reese: Jack-o-Lanterns 2015

planet code4lib - Wed, 2015-10-28 03:57

The first of the Jack-o-Lanterns has been completed. I present Peter Capaldi as the Doctor.

DuraSpace News: DSpace User Interface Prototype Challenge Deadline Extended

planet code4lib - Wed, 2015-10-28 00:00

From Tim Donohue, DSpace Tech Lead

Winchester, MA  Many core DSpace developers have been concentrating their efforts on upcoming DSpace 5.4 and 6.0 releases (links below). As a result the deadline for UI prototype submissions has been extended to Friday, December 4.  We still ask that you only spend a maximum of 80 hours on your prototype (full guidelines may be found here). You are welcome to submit your prototype prior to the new deadline, if it is already nearing completion.

DuraSpace News: CALL: Personal Digital Archiving 2016

planet code4lib - Wed, 2015-10-28 00:00

From Lance Stuchell, Digital Preservation Librarian, University Library, University of Michigan, on behalf of the PDA 2016 Program Committee

Ann Arbor, Michigan  We are pleased to announce that the annual Personal Digital Archiving 2016 conference will be hosted at the University of Michigan in Ann Arbor on May 12-14, 2016.

Nicole Engard: Bookmarks for October 27, 2015

planet code4lib - Tue, 2015-10-27 20:30

Today I found the following resources and bookmarked them on Delicious.

  • VersionPress: WordPress meets Git, properly. Undo anything (including database changes), clone & merge your sites, maintain efficient backups, all with unmatched simplicity.

Digest powered by RSS Digest

The post Bookmarks for October 27, 2015 appeared first on What I Learned Today....

Related posts:

  1. KohaCon10: Intro to Git
  2. We’re in danger of losing our memories
  3. WordPress is for more than blogging

HangingTogether: Persistent identifiers for local collections

planet code4lib - Tue, 2015-10-27 19:32

That was the topic discussed recently by OCLC Research Library Partners metadata managers, initiated by Jackie Shieh of George Washington University, Naun Chew of Cornell and Dawn Hale of Johns Hopkins University. Information professionals want to repurpose, present and connect the data they have created and curated under century-old standards and practices by publishing library metadata in the linked data framework. Recent linked data efforts have highlighted the importance of identifiers: a unique alphanumeric string associated with a digital object, resolvable globally over networks via specific protocols, that unambiguously finds and identifies the resource. Local identifiers cannot be shared or re-used. We need identifiers that are unchanging over time and independent of where the digital object is or will be stored, that is, “persistent”. Persistent identifiers help collections become accessible globally, as they can be used, shared and re-used.
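As a concrete illustration of what “resolvable globally over networks” means in practice, here is a minimal sketch (in Python) that resolves a DOI through the public doi.org resolver. The DOI shown is the DOI Handbook’s own identifier, used purely as an example; any registered DOI behaves the same way:

```python
# Resolving a persistent identifier: the resolver (doi.org) redirects
# to wherever the identified object currently lives.
import urllib.request

doi = "10.1000/182"  # the DOI Handbook; substitute any registered DOI
with urllib.request.urlopen(f"https://doi.org/{doi}") as resp:
    print(resp.geturl())  # the object's current landing URL
```

Because a citation carries the identifier rather than a location, the link keeps working even when the object moves to a new host.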

The practice for assigning identifiers has been inconsistent. Focus group members cited maintaining identifiers, and losing semantics when mapping one identifier system to another, as particular challenges.

Identifiers for “works” have been problematic, as there is no consensus on what represents a distinct work. Two different workflows were mentioned: 1) find an OCLC work ID and add it to the local record, and 2) use local algorithms to cluster records in the local catalog, assign a local identifier, and then match that ID with external sources such as the OCLC work ID.

The discussions were wide-ranging, but tended to focus on identifiers for personal names over other types of entities. The desire to present a comprehensive compilation of scholarly output on faculty profile pages has prompted a number of research libraries to roll out ORCIDs (Open Researcher and Contributor ID) for their faculty. ORCID is seen as a way to address the big gap that currently exists in the LC/NACO and other national authority files that do not customarily include authors of journal articles and other scholarly output. Authority files are used only within the library domain. Funding agencies have begun to require ORCIDs as part of the submission process. Few felt that current authority workflows would scale to cover all an institution’s researchers; some journal articles may have several hundred different “authors” listed from multiple countries.  Some researchers are reluctant to use any identifier they are not already using. Faculty can be sensitive about keeping their data private and the potential of “surveillance” or “Big Brotherism” by their institution. Automated ways of comparing faculty output can be seen as threatening.

Some outstanding issues with name identifiers:

  • Some researchers already have a half-dozen or more ORCIDs as well as other identifiers.
  • Skeletal entries make it difficult to determine whether they represent the same or different people.
  • ORCID relies on self-registration, so the deceased are not covered. To be comprehensive, more than one identifier system is needed.
  • There’s an emerging need for a name reconciliation service that can link multiple identifiers representing the same person (a toy sketch of the linking core follows this list).
  • For identifiers registered through VIVO, it’s unclear what happens when the person moves to a new institution, retires or dies.
  • Libraries’ data suppliers and system vendors need to support persistent identifiers.
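The linking core of such a reconciliation service is essentially an equivalence-class problem, which a union-find structure handles neatly. A toy sketch follows; the identifier pairings are invented for illustration, and a real service would add the genuinely hard part of deciding, from evidence, when two identifiers actually do represent the same person:

```python
# Toy name-reconciliation core: union-find over asserted identifier
# equivalences. All pairings below are invented for illustration.
class Reconciler:
    def __init__(self):
        self.parent = {}

    def find(self, ident):
        """Return the canonical representative for an identifier."""
        self.parent.setdefault(ident, ident)
        while self.parent[ident] != ident:
            # path halving keeps lookups near-constant time
            self.parent[ident] = self.parent[self.parent[ident]]
            ident = self.parent[ident]
        return ident

    def link(self, a, b):
        """Assert that identifiers a and b name the same person."""
        self.parent[self.find(a)] = self.find(b)

    def same_person(self, a, b):
        return self.find(a) == self.find(b)

r = Reconciler()
r.link("orcid:0000-0001-2345-6789", "isni:0000 0001 2345 6789")  # invented
r.link("isni:0000 0001 2345 6789", "viaf:123456789")             # invented
print(r.same_person("orcid:0000-0001-2345-6789", "viaf:123456789"))  # True
```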

Identifiers for organizations are even more complex than those for persons, as organizations can merge, split, acquire other organizations, have multiple hierarchies, change locations, etc. The Representing Organizations in ISNI Task Group is documenting these issues and recommending some ways to better represent organizations with International Standard Name Identifiers (ISO 27729). These identifiers are important to accurately reflect researchers’ affiliations so that institutions can compile and report their scholarly output easily. Digital Science’s newly released GRID (Global Research Identifier Database) includes ISNI identifiers and maps institutions through GeoNames. GRID is seen as a way to help facilitate linking and promoting the work of the organization.

Identifiers for data sets such as digital resources and collections in institutional repositories include system-generated IDs, locally minted identifiers, PURLs, Handles, DOIs (Digital Object Identifiers), URIs, URNs and ARKs (Archival Resource Keys). Some are using DataCite to mint and publish DOIs. Resources can have multiple copies and versions, and they change over time. Institutional repositories used as collaborative spaces can lead to multiple publications from the same data sets. Libraries want to be able to link related pieces such as preprints, supplementary data and images with the publication. Multiple DOIs pointing to the same object pose a problem. Some libraries are considering using the EZID service created by the California Digital Library to mint and publish unique, long-term identifiers and thus minimize the potential for broken citation links. Ideally, libraries would contribute to a hub for the metadata describing their researchers’ data sets regardless of where the data sets are stored.
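For a sense of what “minting” looks like in practice, here is a minimal sketch against EZID’s HTTP API: metadata is POSTed as ANVL-formatted text to a “shoulder” (an identifier namespace prefix), and EZID responds with the newly minted identifier. The shoulder and account below are EZID’s published test values and the metadata is a placeholder, not production settings; a real integration would use an institutional shoulder and credentials and record the returned identifier alongside the object it names:

```python
# Minting an identifier with EZID: POST ANVL metadata to a shoulder;
# EZID answers with the new identifier. Test values and placeholder
# metadata only -- not production settings.
import base64
import urllib.request

shoulder = "ark:/99999/fk4"  # EZID's documented ARK test shoulder
metadata = "erc.who: Jane Scholar\nerc.what: Example data set\nerc.when: 2015\n"

req = urllib.request.Request(
    "https://ezid.cdlib.org/shoulder/" + shoulder,
    data=metadata.encode("utf-8"),  # a request body makes this a POST
    headers={
        "Content-Type": "text/plain; charset=UTF-8",
        # EZID's public test account; use your own in production
        "Authorization": "Basic " + base64.b64encode(b"apitest:apitest").decode(),
    },
)
with urllib.request.urlopen(req) as resp:
    print(resp.read().decode())  # e.g. "success: ark:/99999/fk4...."
```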

 

About Karen Smith-Yoshimura

Karen Smith-Yoshimura, program officer, works on topics related to renovating descriptive and organizing practices with a focus on large research libraries and area studies requirements.


DPLA: Spotlight on Immigrant Stories

planet code4lib - Tue, 2015-10-27 19:11

Fifty years ago this October, the Immigration and Nationality Act of 1965 was signed into law, forever changing American immigration policy and the country’s demographics. The 1965 law abolished quota systems established in the 1920s that put restrictions on earlier waves of immigration, and allowed for many groups of non-European immigrants to enter the country.

Photograph of President Lyndon B. Johnson signing the Immigration and Nationality Act of 1965. Courtesy of the National Archives and Records Administration.

In celebration of the anniversary, listen to stories of generations of American immigrants, part of the Immigrant Stories collection, a fascinating archival project organized by the Immigration History Research Center (IHRC) at the University of Minnesota, and available to DPLA users via the Minnesota Digital Library.

The comprehensive digital storytelling project “came out of a desire to capture the stories of the most recent immigrants and refugees,” Immigrant Stories Project Manager Elizabeth Venditto said. The stories—some of men and women who arrived in the United States only months before, some whose families immigrated generations earlier—share unique perspectives on the lives of immigrants and their families. These stories are in their own words, of their own making, and offer a personal insight into race and culture in the United States.

There are many ways that participants can submit stories to the Immigrant Stories archive. The model, which began with a pilot project last year in Minneapolis and St. Paul, is adaptable, to meet a variety of needs for participants of all ages, skill-levels, and languages. There are free community workshops to help guide participants, along with a version for college students, and for adult ESL learners. The IHRC creates close relationships with educators, and relies on low-cost, simple software.

There are few requirements as to how these digital stories should look. The general prompt is to create a three- to five-minute story about an aspect of the individual’s migration experience. The story doesn’t have to be in English. If participants choose to speak in another language, the IHRC will work with a translator to help create subtitles. The stories incorporate a variety of multimedia elements, too, which creates a rich representation of the individual’s story: it’s not just one person’s words, it’s citizenship documents, family photos, music, or home videos, too.

The project also provides new digital skills to men and women, even if that means facilitators work one on one with participants, particularly elders, to help them get the hang of the platform. It also gives people an outlet to talk about a relevant, contemporary issue as it’s happening, providing a complex and intimate look at the impact of policies like the Immigration and Nationality Act.

An image from Htun Lin’s Immigrant Stories project.

A powerful example is the story of Htun Lin, an ESL student and Karen refugee from Burma who had been in the United States for six months before he created his story. During his time living in a refugee camp in Thailand, Htun Lin learned how to shoot and edit video. He was able to take part in the project while he was living in St. Paul, and Immigrant Stories gave him the space to share his story. There are other poignant stories like Htun Lin’s in the archive, such as that of Thaigo Heilman, a DREAMer from Brazil, or Mustafa Jumale from Somalia, who created his digital story in a class at the University of Minnesota.

While the project started in Minnesota, there are now programs in six locations in cities across the United States, representing a variety of immigrant communities and potential users. These programs take place in charter schools, museums, and other social service organizations. A recent NEH grant has allowed the IHRC to create an online portal where people can create and submit a digital story, regardless of location, which will go live in the summer of 2017.

This incredible archive gives contemporary immigrants and refugees—who are in a situation where, as Venditto describes, people in power are talking about them and making decisions about them—the opportunity to tell their own stories in their own words in a way that is accessible and lasting. Their stories are well worth a listen! You can also follow the #MyImmigrantStory hashtag on Twitter to learn more about the project.

View the Immigrant Stories collection here

 

Islandora: Looking back at iCampCT

planet code4lib - Tue, 2015-10-27 14:03

Last week, Islandora Camp went to Hartford, Connecticut, where 36 Islandorians came together (mostly from the eastern US, with a few travellers putting on more miles to take part) to share what they are doing with Islandora and learn more about the project.

We kicked off on Tuesday, October 20th with a round of presentations focussed on the Islandora project and the community behind it, including a review of how we do our volunteer releases, all the different ways to take part in the community, and a look ahead at Islandora 7.x-2.x and how pulling processes out of Drupal actually makes it easier to use all that Drupal has to offer as a CMS. We also called upon the gathered attendees to step up and show off their own Islandora sites, such as Barnard College's beautiful landing page, the innovative workflows of Williams College, York University's infinitely scrolling Solr View of cats, and an example from Hamilton College of the incredible things you can do with basic, un-altered Islandora modules and a little theming (which is, alas, behind a firewall for now [ed. to add: and here it is!]).

On Wednesday we broke into tracks for day-long workshops on either Islandora from the front end (admin track) or code-side development (dev track). The Admin Track, with myself and UConn's Jennifer Eustis as instructors, set a record for fastest completion of the curriculum, leaving an hour and a half to spare to explore other parts of Islandora (and issue a pull request!). On the dev side, Nick Ruest and Danny Lamb led the room through Islandora's coding standards and practices before exploring how to work with Solr by updating the Meme Solution Pack.

Thursday was the last official day of camp, marked by presentations on sites and tools from the Islandora Community, like how to use Ansible and Vagrant to streamline testing and deployment, how Barnard College set up their site, and how LDP and PCDM will shape Islandora's future. Slides from most presentations are available as links in the camp schedule.

But it didn't end there! On Friday, October 23rd, a smaller group got together to talk about Islandora and Hydra, and how our two projects can work together and hopefully even share the same Fedora 4 repository in the not-too-distant future. Stakeholders from both communities put in a long day of discussion about concerns such as metadata, security, import/export, and how to handle derivative creation.

It was a great week and we hope that you will join us at future Islandora Camps. Next up is Fort Myers, Florida in March 2016.

Library of Congress: The Signal: The Veterans History Project Marks 15 Years of Service

planet code4lib - Tue, 2015-10-27 06:18

Alabama Veterans Memorial Park is a 21-acre park located on a wooded hilltop, Birmingham, Alabama. Digital photograph by Carol Highsmith, May 19, 2010. LC call number LC-DIG-highsm- 07604.

“The willingness with which our young people are likely to serve in any war, no matter how justified, shall be directly proportional to how they perceive the Veterans of earlier wars were treated and appreciated by their nation.”

— George Washington

The Veterans History Project honors the lives and service of all American veterans –not only the warriors but all who have served their country, “From the motor pool to the mess hall,” as director Robert Patrick puts it. VHP collects, preserves and makes available the stories and memorabilia of American veterans so that future generations may better understand the realities of military life and of war. To date, VHP has collected items from over 98,000 veterans, about 15% of which is available online.

The items from each woman or man are considered to be one unique “collection.” Many of the collections include first-person accounts of the veteran’s experience. And that is where great power resides, in a person recounting his or her memories. Whereas a writer of history constructs a narrative out of facts, a witness to history can say, “I was there. This is what I saw, what I experienced and what I felt.”

Congressman Ron Kind. Photo courtesy of Office of Congressman Ron Kind.

Congress enacted the Veterans History Project 15 years ago today, on October 27, 2000, as part of the American Folklife Center at the Library of Congress. The authorizing legislation was sponsored by Representatives Ron Kind, Amo Houghton, and Steny Hoyer from the U.S. House of Representatives and Senators Max Cleland and Chuck Hagel from the U.S. Senate.

“After hearing veterans in my family share their stories, I wanted to find a way to preserve all veterans’ stories for researchers, historians, educators, and most importantly future generations,” said Congressman Kind.

Peter Bartis, folklife specialist in the American Folklife Center and the author of the comprehensive guide, Folklife and Fieldwork, was deeply involved in the foundation of the Project. “Members of Congress got deeply engaged with the Veterans History Project and it was a good opportunity for the Library to demonstrate what it could do for the general public, not just for scholars,” said Bartis.

Reception to disabled veterans. June 7, 1922. 1 negative : glass ; 4 x 5 in. or smaller. LC call number LC-F8- 18996.

Oral histories — written, audio and video — enrich the collections. But there are challenges in every step of the recorded oral-history process, from the interview to posting the file on the web page. “Most veterans are reticent,” said Monica Mohindra, section head of programming coordination and communications for VHP. “They are not necessarily going to come forward and volunteer to tell people how heroic and awesome they are.”

It falls on a loved one, friend or volunteer to encourage the veteran to sit and talk about themselves and their experiences, to open up, often to relive and articulate traumatic memories. Some veterans believe their experiences are just not important enough to talk about, let alone capture for posterity, so they need coaxing and assurance. Recording the oral history requires an advocate for the veteran, someone to pull it together.

Native American veterans. “War mothers” year of Xxoshgah. Zig Jackson, photographer, 1995. LC call number PH – Jackson (Z.), no. 5 (B size).

When an advocate is ready to interview a veteran, VHP has plenty of “how to” resources, which they distilled from the American Folklife Center’s decades of oral history best practices into a field kit: cover letter, biographical data form, veterans release form, interviewer’s release form, audio and video recording log, photograph log, and manuscript data sheet.

These materials alone were sufficient for the days of audio cassette and videocassette tapes (which VHP acquired plenty of and still digitizes) but moving into the digital age, the project field kit also includes information on media and formats standards. Additional resources, such as Oral History in the Digital Age, delve deeper into the digital recording equipment and in best production practices, such as three-point lighting for video and the basics of digital audio recording.

Part of the value of digital files is that they can be shared online almost instantly. But curating them can be challenging. “I’ve been reviewing the tens of thousands of optical disks we’ve received since the project started,” said VHP archivist Andrew Cassidy-Amstutz. “Taking the content off before the disk has a chance to fail — in some cases it has failed, unfortunately — but extracting as much of the content off each disk as fast as I can and then uploading it to the Library’s content transfer system.” (The Signal interviewed Cassidy-Amstutz in March 2014 about digital preservation in VHP.)
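The rescue step Cassidy-Amstutz describes boils down to copying everything off a disk while it still reads, and recording fixity information so later copies can be verified. Here is a minimal sketch of that pattern; the paths are placeholders, and the Library's actual transfer tooling is certainly more involved:

```python
# Copy all files off a mounted optical disk and write a SHA-256
# fixity manifest. Paths below are placeholders.
import hashlib
import shutil
from pathlib import Path

src = Path("/media/cdrom")         # the mounted disk
dst = Path("/staging/collection")  # local staging area

manifest = []
for f in sorted(src.rglob("*")):
    if f.is_file():
        target = dst / f.relative_to(src)
        target.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(f, target)  # copy content plus timestamps
        # fine for a sketch; stream in chunks for very large files
        digest = hashlib.sha256(target.read_bytes()).hexdigest()
        manifest.append(f"{digest}  {target.relative_to(dst)}")

(dst / "manifest-sha256.txt").write_text("\n".join(manifest) + "\n")
```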

Vietnam Memorial Soldiers by Frederick Hart. Photograph by Carol Highsmith. September 13, 2006. LC call number LC-DIG-highsm- 04696.

VHP offers advice on how to prepare for the interview and how to conduct the interview, advice that covers the production elements of recording as well as the professional elements of how to be a good interviewer and listener. The rest is up to the openness and receptivity between interviewer and interviewee.

Once the interview is submitted to the VHP, it is marked for digitization (if it is not already digital), tagged with metadata and eventually archived in the Library of Congress’s digital repository for long-term preservation. “We generally receive about 5,000 collections per year,” said Cassidy-Amstutz. “Maybe 300 to 400 per month. That includes anything from new collections to new content arriving to be added to existing collections…We go through cycles of rapid acquisition, especially at the end of semesters when educators — who assigned the project to their students — have the chance to assemble it all together and send it out to us.”

Detail, Wall of Remembrance, Korean War Veterans Memorial, Washington, D.C. Carol Highsmith, photographer [between 1995 and 2006]. 1 transparency : color ; 4 x 5 in. or smaller. LC call number LC-HS503- 2013.

Adapting to the times and technology, VHP is working toward creating an online submission process for uploading digital files directly into their system. There may be an app eventually. The project could benefit from crowdsourcing the transcription of the hundreds of thousands of digitized letters and cards, which, once the transcriptions are indexed, would add enormous value to keyword searching for researchers. VHP is also collaborating with the Oral History Association to develop a pamphlet titled Doing Veterans Oral History.

VHP continues to reach out to veterans, including the steady stream of new veterans. And there will always be new veterans. “We can keep growing the project,” said Congressman Kind. “I urge everyone to ask veterans they know to record their stories. This is the last ask of a grateful nation to our veterans. What better way to preserve this important history of what it was like to protect our nation while honoring our veterans at the same time.”

LITA: Editorial Response to “Is Technology Bringing in More Skillful Male Librarians?”

planet code4lib - Mon, 2015-10-26 14:34

Hi LITA members (and beyond):

My name is Brianna Marshall and I am the editor of the LITA blog. Last week, the post “Is Technology Bringing in More Skillful Male Librarians?” by Jorge Perez was published on the blog. The post has understandably sparked considerable discussion on Twitter. Jorge has indicated an interest in writing a follow-up post to clarify his viewpoints vs. the viewpoints expressed by the authors he cited, so I won’t speak for him beyond saying that I believe his intentions were to highlight issues around the stereotyping of male librarians. In his communications with me, he indicated that the provocative title and brevity were intended to spark a conversation with blog readers, not to be flippant about the issues. Again, I will let him provide clarification on the content of the post itself.

As I looked at the conversation on Twitter, I noticed a number of comments implying that the viewpoints, quality, and tone of this post were endorsed by LITA as an organization. There have also been comments questioning who would allow something like the post to be published. As blog editor, I want to provide greater transparency about how the blog has worked under my direction. I wholeheartedly welcome ideas to improve this process.

The LITA blog has a revolving team of regular writers who volunteer to contribute a new post once every 1-1.5 months, depending on how full the schedule is and how many regular writers we have at a given time. I provide a blog content and style guide to reference, as well as encouragement to ask for opinions and feedback from the team through our shared listserv. (I’ve added a link to the content and style guide to the LITA blog about page, if it is of interest.) While I work directly with guest writers who publish on the blog, it is not manageable for me to review or oversee all posts by regular writers. Peer feedback prior to publication is solicited at the author’s discretion; it is encouraged but not required or enforced. Ultimately, for a blog that tries to produce and publish new content multiple times per week, additional oversight has not been sustainable. A degree of trust, and the knowledge that a post eliciting negative reactions may occasionally go through, is, in my opinion, just part of the trade-off. However, the conversation around this post has sparked a renewed discussion among the LITA blog writers about our review processes and whether there are additional measures that would help us support each other in producing high-quality writing. As blog editor, my critique of the post is not its content but rather that the author’s ideas are not fully developed, leading to a rushed post that at first read seems to put forth ideas that Jorge is, I believe, instead critiquing.

It would deeply sadden me to have the efforts of a really incredible group of writers in the LITA community overshadowed by negative reactions to this blog post. I know I am often impressed by the writers’ thoughtful posts on a diverse array of topics. While as the blog editor I regret that the topic that brought about this conversation is an unclear post about a controversial issue, it’s great to be part of an engaged library tech community and I welcome any feedback to help us make improvements. In particular, I invite you to apply to be a blog writer during the next call for writers, and in the meantime to propose a guest post. We would love to feature your ideas!

Lastly, I appreciate Galen Charlton’s thoughtful response, everyone who has contributed to the LITA listserv thread, and the tweets that sparked this conversation.

Brianna, LITA Blog Editor

LITA: Top Technologies Every Librarian Needs to Know – 2, a LITA webinar

planet code4lib - Mon, 2015-10-26 13:00

Attend this informative and fast paced new LITA webinar:

Top Technologies Every Librarian Needs to Know – 2

Monday November 2, 2015
1:00 pm – 2:00 pm Central Time
Register Online, page arranged by session date (login required)

We’re all awash in technological innovation. It can be a challenge to know what new tools are likely to have staying power and what that might mean for libraries. The 2014 LITA Guide, Top Technologies Every Librarian Needs to Know, highlights a selected set of technologies that are just starting to emerge and describes how libraries might adapt them in the next few years. In this 60-minute webinar, join the authors of three chapters from the book as they talk about their technologies and what they mean for libraries. The chapters covered will be:

Impetus to Innovate: Convergence and Library Trends
Presenter: A.J. Million
This presentation does not try to predict the future, but it does provide a framework for understanding trends that relate to digital media.

The Future of Cloud-Based Library Systems
Presenters: Elliot Polak & Steven Bowers
The “cloud” has come to mean a shared hardware environment with an optional software component. In libraries, cloud computing technology can reduce the costs and human capital associated with maintaining a 24/7 Integrated Library System while facilitating an uptime that is costly to attain in-house.

Library Discovery: From Ponds to Streams
Presenter: Ken Varnum
Libraries, and libraries’ perceptions of the patrons’ needs, have led to the creation and acquisition of “web-scale” discovery services. These new services seek to amalgamate all the online content a library might provide into one bucket.

Review of The 2014 LITA Guide, Top Technologies Every Librarian Needs to Know
“Contains excellent advice about defining the library’s context, goals, needs, and abilities as a means of discerning which technologies to adopt … introduces a panoply of emergent technologies in libraries by providing a fascinating snapshot of where we are now and of where we might be in three to five years.” — Technical Services Quarterly

Presenters:

Steven Bowers is the director of the Detroit Area Library Network (DALNET), at Wayne State University. He also co-teaches a course on Integrated Library Systems for the Wayne State University School of Library and Information Science, with his colleague Elliot Polak. Bowers was featured in the 2008 edition of the Library Journal’s Movers & Shakers.

A.J. Million is a Ph.D. candidate in the School of Information Science & Learning Technologies (SISLT) at the University of Missouri, where he teaches digital media and Web development to librarians and educators. He has written journal articles that appeared in Cataloging and Classification Quarterly, the Journal of Library Administration, and OCLC Systems and Services. His dissertation examines website infrastructure in state government agencies.

Elliot Polak is the Assistant Library Director for Discovery and Innovation at Wayne State University. Prior to joining Wayne State, Elliot spent three years at Norwich University serving as the Head of Library Technology, responsible for evaluating, maintaining, and implementing systems at Kreitzberg Library.

Ken Varnum is the Web Systems Manager at the University of Michigan Library. Ken’s research and professional interests include discovery systems, content management, and user-generated content. He wrote “Drupal in Libraries” (2012) and edited “The Top Technologies Every Librarian Needs to Know” (2014).

Register for the Webinar

Full details
Can’t make the date but still want to join in? Registered participants will have access to the recorded webinar.

Cost:

  • LITA Member: $45
  • Non-Member: $105
  • Group: $196

Registration Information:

Register Online, page arranged by session date (login required)
OR
Mail or fax form to ALA Registration
OR
call 1-800-545-2433 and press 5
OR
email registration@ala.org

Questions or Comments?

For all other questions or comments related to the course, contact LITA at (312) 280-4268 or Mark Beatty, mbeatty@ala.org

Ed Summers: Postdoc

planet code4lib - Mon, 2015-10-26 04:00

This week’s readings included some dire looks at life after the PhD: Kovalik (2013) on how easy it is to slip through the cracks of academia, and Johnson (2014) on the hyper-competitive life of the postdoc. Both were quite sobering. Johnson describes the problem in the health sciences where reduced government funding has led to situations where academic research labs are increasingly dependent on cheap labor (postdocs), who do most of the actual science, while the faculty jobs are increasingly difficult to find, because there are too many postdocs being cranked out to do the research. It was a somewhat frustrating article because while it hinted at how smaller labs could help correct this problem, it really didn’t explain how that could work. Would the problem just be harder to identify if there were lots of smaller labs, rather than fewer large ones? I like to think there is more to this idea of smaller labs, that are geared more to research. Perhaps they are more like projects with longer funding cycles than labs?

I followed one of Johnson’s citations to Alberts, Kirschner, Tilghman, & Varmus (2014), which painted a very bleak picture of federal funding drying up and its impact on the research lab. The authors attribute this to many factors, but the primary one is an unending growth model that began after World War II with Vannevar Bush. It seems like a systemic problem that we can see reflected in our financial systems. But I actually liked this article because it ends with a list of recommendations that were at least understandable.

One of the things that the authors suggest is weaning research labs off of using grant funding to pay for postdoc positions and using funding that is oriented towards training:

To give federal agencies more control over the number of trainees and the quality of their training, we propose moving gradually to a system in which graduate students are supported with training grants and fellowships and not with research grants. Fellowships have the virtue of providing peer review of the student applicants, and training programs set high standards for selection of students and for the education they receive.

They also suggest that labs increase the number of full time staff (requiring support from the University):

We believe that staff scientists can and should play increasingly important roles in the biomedical workforce. Within individual laboratories, they can oversee the day-to-day work of the laboratory, taking on some of the administrative burdens that now tend to fall on the shoulders of the laboratory head; orient and train new members of the laboratory; manage large equipment and common facilities; and perform scientific projects independently or in collaboration with other members of the group. Within institutions, they can serve as leaders and technical experts in core laboratories serving multiple investigators and even multiple institutions.

As a staff person at the University of Maryland I feel good about these recommendations. But I can’t help but wonder whether there are enough training grants available, particularly in the field of information science. What are effective ways to make the case to your university that you need additional staff? I guess these may vary from institution to institution.

Johnson remarks on how learning to be a researcher as a doctoral student doesn’t always translate very well into the job that you end up doing when you get out:

The problem is that any researcher running a lab today is training far more people than there will ever be labs to run. Often these supremely well-educated trainees are simply cheap laborers, not learning skills for the careers where they are more likely to find jobs — teaching, industry, government or nonprofit jobs, or consulting.

This reminded me of this visualization I saw recently of initial career choices after receiving a PhD from Stanford University:

Click on the image for the full report. Wouldn’t it be great if all universities did this?

Maybe I missed it, but what Alberts et al. (2014) didn’t seem to address was the significant number of postdoctoral researchers who head directly into business, government or the scary, gray unknown. Hopefully that unknown area isn’t unemployment! I have to imagine that a significant number of biomedical researchers go immediately to work in corporate labs. What impact does this have on the enterprise of open science?

The implication here is that doing a PhD must necessarily involve opportunities to gain experience with industry, government and non-profits. How can this be achieved while not compromising the independence of research? It reminds me of how important my internships were as a Masters level student.

References

Alberts, B., Kirschner, M. W., Tilghman, S., & Varmus, H. (2014). Rescuing US biomedical research from its systemic flaws. Proceedings of the National Academy of Sciences, 111(16), 5773–5777. Retrieved from http://www.pnas.org/content/111/16/5773.full

Johnson, C. (2014). Glut of postdoc researchers stirs quiet crisis in science. The Boston Globe. Retrieved from https://www.bostonglobe.com/metro/2014/10/04/glut-postdoc-researchers-stirs-quiet-crisis-science/HWxyErx9RNIW17khv0MWTN/story.html

Kovalik, D. (2013). Death of an adjunct. Pittsburgh Post-Gazette, 18. Retrieved from http://www.post-gazette.com/opinion/Op-Ed/2013/09/18/Death-of-an-adjunct/stories/201309180224

DuraSpace News: Registration Available for DuraSpace Services Webinars

planet code4lib - Mon, 2015-10-26 00:00

Join us in November for a two-part webinar series devoted to DuraSpace Services, “2015 Accomplishments and A Sneak Peek at What Lies Ahead.”   This series will highlight the recent developments of our suite of services including DSpace Direct, ArchivesDirect, DuraCloud and our soon to be released service, DuraCloud Vault. 

Details and registration are available here.

Eric Lease Morgan: “Sum reflextions” on travel

planet code4lib - Sun, 2015-10-25 11:31

These are “sum reflextions” on travel; travel is a good thing, for many reasons.

I am blogging in front of the Pantheon. Amazing? Maybe. Maybe not. But the ability to travel, see these sorts of things, and experience the different languages and cultures truly is amazing. All too often we live in our own little worlds, especially in the United States. I can’t blame us too much. The United States is geographically large. It borders only two other countries. One country speaks Spanish. The other speaks English and French. While the United States is the proverbial “melting pot”, there really isn’t very much cultural diversity in the United States, not compared to Europe. Moreover, the United States does not nearly have the history of Europe. For example, I am sitting in front of a building that was built before the “New World” was even considered as existing. It doesn’t help that the United States’ modern version of imperialism tends to make “United Statesians” feel as if they are the center of the world. I guess, in some ways, it is not much different from Imperial Rome. “All roads lead to Rome.”

As you may or may not know, I have commenced upon a sort of leave of absence from my employer. In the past six weeks I have moved all of my belongings to a cabin in a remote part of Indiana, and I have moved myself to Chicago. From there I began a month-long adventure. It began in Tuscany, where I painted and deepened my knowledge of Western art history. I spent a week in Venice, where I did more painting, walked up to my knees in water because the streets flooded, and experienced Giotto’s frescos in Padua. For the past week I experienced Rome and did my best to actively participate in a users group meeting called ADLUG — the remnants of a user’s group surrounding one of the very first integrated library systems — DOBIS/LIBIS. I also painted and rode a bicycle along the Appian Way. I am now on my way to Avignon, where I will take a cooking class and continue my “artist’s education”.

Travel is not easy. It requires a lot of planning and coordination. “Where will I be when, and how will I get there? Once I’m there, what am I going to do, and how will I make sure things don’t go awry?” In this way, travel is not for the faint of heart, especially when venturing into territory where you do not know the language. It can be scary. Nor is travel inexpensive. One needs to maintain two households.

Travel is a kind of education that cannot be gotten through the reading of books, the watching of television, nor discussion with other people. It is something that must be experienced first hand. Like sculpture, it is an experience that exists only in time & space, and it must be encountered there to be fully appreciated.

What does this have to do with librarianship? On one hand, nothing. On the other hand, everything. From my perspective, librarianship is about a number of processes applied against a number of things. These processes include collection, organization, preservation, dissemination, and sometimes evaluation. The things of librarianship are data, information, knowledge, and sometimes wisdom. Even today, with the advent of our globally networked computers, the activities of librarianship remain essentially unchanged when compared to the activities of more than a hundred years ago. Libraries still curate collections, organize the collections into useful sets, provide access to the collections, and endeavor to maintain all of these services for the long haul.

Just as most people do with travel, many librarians (and people who work in libraries) do not have a true appreciation for the work of their colleagues. Sure, everybody applauds everybody else’s work, but have they actually walked in those other people’s shoes? The problem is most acute between the traditional librarians and the people who write computer programs for libraries. Both sets of people have the same goals; they both want to apply the same processes to the same things, but their techniques for accomplishing those goals are dissimilar. One wants to take a train to get where they are going, and the other wants to fly. This must change lest the profession become even less relevant.

What is the solution? In a word, travel. People need to mix and mingle with the other culture. Call it cross-training. Have the computer programmer do some traditional cataloging for a few weeks. Have the cataloger learn how to design, implement, and maintain a relational database. Have the computer programmer sit at the reference desk for a while in order to learn about service. Have the reference librarian work with the computer programmer and learn how to index content and make it searchable. Have the computer programmer work in an archive or conservatory making books and saving content in gray cardboard boxes. Have the archivist hang out with computer programmer and learn how content is backed up and restored.

How can all this happen? In my opinion, the most direct solution is advocacy from library administration. Without the blessing of library administration everybody will say, “I don’t have time for such ‘travel’.” Well, library work is never done, and time will need to be carved out and taken from the top, like retirement savings, in order for such trips abroad to come to fruition.

The waiters here at my cafe are getting restless. I have had my time here, and it is time to move on. I will come back, probably in the Spring, and I’ll stay longer. In the meantime, I will continue with my own personal education.

Ed Summers: Seminar Week 8

planet code4lib - Sat, 2015-10-24 04:00

This week’s seminar was focused on citizen science. We had three readings: Wiggins & Crowston (2011), Quinn & Bederson (2011), and Eveleigh, Jennett, Blandford, Brohan, & Cox (2014), and we were visited by the author of the first paper, Andrea Wiggins. This class was a lot of fun because prior to talking about the readings we spent an hour walking around the UMD campus looking for birds, and collecting observations with Andrea’s eBird mobile app.

Along the way we chatted about how her dissertation research used an in-depth case study of eBird (a project from the Cornell Lab of Ornithology), in which she did a great deal of participant observation. I was particularly struck by how important the knowledge she gained about birding, and the relationships she developed as part of this work, were to her dissertation and to her academic career. Although she works in the field of information science, some of her most well-known work has been with ecologists she was put in touch with as part of this birding fieldwork. Andrea stressed how important it is for her research to be put to use in the world, whether in creating applications like eBird or in effecting policy. This seems like a deep lesson for discovering and building a meaningful research topic. Another thing that occurred to me as I was writing up these notes was how meta this part of her research was: observing the humans who were observing the birds.

We did spot a few birds on our walk, which you can see in Andrea’s eBird checklist. When we came back to the class we took a brief tour through the eBird website, and looked at how the data was collected and made available. Andrea said that they had some initial difficulty drawing people to the eBird application, but this changed when they brought in some active birders to help design the application, which helped spread the word about it. Perhaps there was a participatory design story that could be told, or that has been told. Now they are swimming in data, which they make available to the public in quarterly and annual snapshots. My only quibble with the datasets they make available is that they have their own peculiar license instead of using a Creative Commons license, like CC-BY-NC-SA. The participation in the project is truly impressive; take a look in your area to see what birds have been observed. I found a handful of people in my neighborhood had documented some 104 species of birds, mostly in 2014 and 2015.

One additional topic that came up was ethical considerations when making the data available on the Web. A lot of birders use their actual names, so in sharing observation data you are also providing information about your location at particular times. There are obvious privacy implications here, which are necessarily balanced against birders’ desire to participate in the community of other active birders. Another consideration is rare birds: a reported sighting can draw an increased number of people who come to see the bird, which could impact its environment. eBird themselves provide some guidance on these concerns. I suspect some of these issues will come up again in a few weeks when Katie Shilton visits our class to talk about values in design.

The papers provided a nice variety of views into the domain. Quinn & Bederson (2011) surveyed the landscape of human computation, which seems to have its genesis in the pioneering work of Luis von Ahn at Carnegie Mellon (who invented reCAPTCHA, which he later sold to Google). The paper is quite structured in its approach to what is in and out of scope for human computation work, and provides a taxonomy or rubric for the field. It’s a nice article to help situate ideas in the field of human computation. Wiggins & Crowston (2011) similarly provided a useful look at the relatively new field of citizen science, with particular attention to how the degree of virtuality and of goal orientation can define additional participatory types. It also seems like this is one of the first papers to deliberately include purely virtual citizen science projects like GalaxyZoo.

The last paper, Eveleigh et al. (2014), was suggested by Jonathan, who led the discussion and is also working with Andrea on citizen science projects. I really enjoyed this paper because it took a deep dive into a user study of OldWeather. There is already a significant body of research on how crowdsourcing projects like Wikipedia tend to receive a large number of contributions from a small number of people. The general approach is that the more we understand about how these super-users behave, the better these systems can be built and sustained. There is a certain logic to that approach, but what hasn’t been explored so much is how the users who submit less behave, and how important they are to the health of the overall system. The long tail of small contributions is actually extremely important, and designing systems that allow for this level of engagement is under-developed.

The paper almost felt like two papers to me, since it was a mixed-methods study that first surveyed OldWeather users about their motivations (extrinsic and intrinsic) for participation, and then did a series of in-depth follow-on interviews to help identify the barriers to and constraints on participation for individuals on the long tail. In the classroom discussion I said that it felt like two papers to me, but on rereading pieces of it now I see that the two parts of the study were more connected than I initially recognized. The results of the survey were used to sample OldWeather users who had different motivations and participation patterns.

The findings were interesting, especially regarding the identified design patterns in OldWeather that helped encourage lower volume contributions:

  • Facilitate independent working and participant choice.
  • Optimize tasks to fit within busy lives.
  • Publicize scientific outcomes.
  • Sell citizen science snacks, not gourmet meals!
  • Enable personalized feedback to affirm quality.

There seems to be a lot of useful information here for building and testing new citizen-science and crowd-sourcing projects. I know I have habitually thought of the expert user when designing user interfaces and applications. Focusing on the dabbler seems like an extremely valuable lesson. Even the dropout who no longer contributes, but enjoys getting project update emails and spreads the word about the project to friends and colleagues, is important. Now that I think about it, this was one of the underlying themes in Mauricio Giraldo’s talk about NYPL’s Building Inspector earlier this year at MITH:

I think his talk is largely about what it means to design for dabbling, and how important this activity is for building substantial engagement.

PS. The more I think about it, the more I like the model Andrea presented of using participant observation as a core part of the work I do in studying appraisal in Web archives. Finding the balance between observation, participation and collaboration will be difficult, because I don’t want to maintain too much critical distance from the work.

References

Eveleigh, A., Jennett, C., Blandford, A., Brohan, P., & Cox, A. L. (2014). Designing for dabblers and deterring drop-outs in citizen science. In Proceedings of the 32nd annual ACM conference on human factors in computing systems (pp. 2985–2994). Association for Computing Machinery.

Quinn, A. J., & Bederson, B. B. (2011). Human computation: A survey and taxonomy of a growing field. In Proceedings of the SIGCHI conference on human factors in computing systems (pp. 1403–1412). Association for Computing Machinery.

Wiggins, A., & Crowston, K. (2011). From conservation to crowdsourcing: A typology of citizen science. In System sciences (HICSS), 2011 44th Hawaii international conference on (pp. 1–10). IEEE.

Jonathan Rochkind: Blacklight community survey: What would you like to see added to Blacklight?

planet code4lib - Fri, 2015-10-23 19:24

Here’s the complete list of answers to “What would you like to see added to Blacklight?” from the Blacklight community survey I distributed last month. 

Some of these are features, some of these are more organizational. I simply paste them all here, with no evaluation on my own part as to desirability or feasibility.

  • Keep up the great work!
  • Less. Simplicity instead of more indirection and magic. While the easy things have stayed easy, anything more has seemed to be getting harder and more complicated.

    Search inside indexing patterns and plugin.
    Better, updated, maintained analytics plugin.

  • Support for Elasticsearch
  • Blacklight-maps seems fantastic if you don’t need the geoblacklight features.
  • (1) I’ve had lots of requests for an “OR” option on the main facet limits–like SearchWorks has. The advanced search has this feature. We have a facet for ‘Record Type’ (e.g. publication, object, oral history, film, photograph, etc) and we have users who would like to search across e.g. film or photograph. That could be implemented with a checkbox. Unfortunately it’s a little above my Rails chops & time at this point to implement. [A Solr sketch of this follows the list.]
    (2) We do geographic name expansion and language stemming. It would be sweet to be able to let users turn those features off. Jonathan Rochkind wrote an article awhile back on how to do that–again, I unfortunately lack Rails chops & time to implement that.
  • To reduce upgrade/compatibility churn, I wonder if it might be helpful to avoid landing changes in master/release until they are fully baked. For major refactorings/ruby API changes, do all dev in master until the feature is done churning and everyone relevant is satisfied with it being complete. As opposed to right now it seems as if iterative development on new features sometimes happens in master and even in releases, before a full picture of what the final API will look like exists. Eg SearchBuilder refactorings.
  • A more active and transparent Blacklight development process. We would be happy to contribute more, but it’s difficult to know a longer-term vision of the community.
  • Integration with Elasticsearch
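On the first request above: Solr supports “OR” (multi-select) facets out of the box with tagged filter queries and excluded facet fields, so the checked values are ORed together while counts for the unchecked values stay visible. Here is a minimal sketch of the query parameters, with an illustrative record_type field and core name; the Rails plumbing to expose this in Blacklight is the part the respondent calls out as the real work:

```python
# Multi-select ("OR") faceting in Solr: tag the filter query, then
# exclude that tag when computing counts for the same facet field.
import urllib.parse

params = {
    "q": "*:*",
    # OR together whatever values the user has checked, tagged "rt"
    "fq": '{!tag=rt}record_type:("film" OR "photograph")',
    "facet": "true",
    # ignore the "rt" filter when counting record_type values, so
    # publication, object, etc. remain visible with counts
    "facet.field": "{!ex=rt}record_type",
    "wt": "json",
}
print("http://localhost:8983/solr/catalog/select?" + urllib.parse.urlencode(params))
```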

Filed under: General

Eric Hellman: This is NOT a Portrait of Mary Astell

planet code4lib - Fri, 2015-10-23 15:06
Not Mary Astell, by Sir Joshua Reynolds

Ten years ago, the University of Calgary Press published a very fine book by Christine Mason Sutherland called The Eloquence of Mary Astell, which focused on the proto-feminist's contributions as a rhetorician. The cover for the book featured a compelling image using a painted sketch from 1760-1765 by the master English portraitist Sir Joshua Reynolds, currently in Vienna's Kunsthistorisches Museum and known as Bildnisstudie einer jungen Dame (Study for the portrait of a young woman).

Cover images from books circulate widely on the internet. They are featured in online bookstores, they get picked up by search engines. Inevitably, they get re-used and separated from their context. Today (2015) "teh Internetz" firmly believe that the cover image is a portrait of Mary Astell.

For example:

If you look carefully, you'll see that the image most frequently used is the book cover with the title inexpertly removed.

But the painting doesn't depict Mary Astell. It was done 30 years after her death. In her book, Sutherland notes (page xii):

No portrait of her remains, but such evidence as we have suggests that she was not particularly attractive. Lady Mary Wortley Montagu's granddaughter records her as having been "in outward form [...] rather ill-favoured and forbidding," though Astell was long past her youth when this observation was made.
Wikipedia has successfully resisted the misattribution.

A contributing factor for the confusion about Mary Astell's image is the book's failure to attribute the cover art. Typically a cover description is included in the front matter of the book. According to the Director of the University of Calgary Press, Brian Scrivener, proper attribution would certainly be done in a book produced today. Publishers now recognize that metadata is increasingly the cement that makes books part of the digital environment. Small presses often struggle to bring their back lists up to date, and publishers both large and small have "metadata debt" from past oversights, mergers, reorganizations and lack of resources.

Managing cover art and permissions for included graphics is often an expensive headache for digital books, particularly for Open Access works. I've previously written about the importance of clear licensing statements and front matter in ebooks. It's unfortunate when public domain art is not recognized as such, as in Eloquence, but nobody's perfect.

The good news is that University of Calgary Press has embraced Open Access ebooks in a big way. The Eloquence of Mary Astell and 64 other books are already available, making Calgary one of the world's leading publishers of Open Access ebooks. Twelve more are in the works.

You can find Eloquence at the Calgary University Press website (including the print edition), Unglue.it, DOAB, and the Internet Archive. Mary Astell's 1706 pamphlet Reflections Upon Marriage can be found at the Internet Archive and at the University of Pennsylvania's Celebration of Women Writers.

And maybe in 2025, teh internetz will know all about Sir Joshua Reynolds's famous painting, Not Mary Astell. Happy Open Access Week!
