Feed aggregator

LITA: IA & UX Meet Library Technology

planet code4lib - Fri, 2014-11-07 13:00

The class I enjoy the most this semester at Indiana University is Information Architecture. It is a class where theory and practical application are blended so that we can create something tangible, but also understand the approaches – my favorite kind!

As usability.gov defines it, Information Architecture (IA) “focuses on organizing, structuring, and labeling content in an effective and sustainable way.” While the class doesn’t necessarily focus on Library Science since it is offered through the Information Science courses, this concept may sound a bit familiar to those working in a library.

In the class, we have chosen a small website we believe could benefit from restructuring. Some students chose public library websites, and others websites from the private sector. Regardless of each website’s purpose, the process of restructuring is the same. The emphasis is placed on usability and user experience (UX), which the ALA Reference and User Services Association defines as “employing user research and user-centered design methods to holistically craft the structure, context, modes of interaction, and aesthetic and emotional aspects of an experience in order to facilitate satisfaction and ease of use.”

Basically, it means structuring content so that a user can use it to a high level of satisfaction.

Peter Morville and Co. developed this honeycomb to represent the multiple facets of User Experience. Check out his explanation here.

Keeping usability and UX at the forefront, much of our semester has been focused on user demographics. We developed personas of specific users by highlighting the tasks they need to carry out and the kind of behaviors they bring to the computer. For example, one of my personas is a working mother who wants to find the best dance studio for her daughter, but doesn’t have a lot of time to spend looking up information and gets frustrated easily with technology (may or may not have been influenced by my own mother).

We also developed a project brief to keep the main benefits of restructuring in mind, and we analyzed parts of the current websites that work for users, and parts that could be improved. We did not (and could not) begin proposing our restructured website until we had a solid understanding of the users and their needs.

While learning about usability, I thought back to my graduate school application essay. I discussed focusing on digital libraries and archives in order to improve accession of materials, which is my goal throughout my career. As I'm learning, I realize that accession doesn't mean digitizing for the sake of digitizing; it means digitizing and then presenting the materials in an accessible way. Even though material may be released on the web, that doesn't guarantee that a user will find it and be able to use it.

As technology continues to evolve, keeping the goals of the library in sync with the skills and needs of the user is crucial. This is where information architecture and user experience meet library technology.

How do you integrate usability and user experience with library technology in your institution? If you are an information architect or usability researcher, what advice do you have for others wishing to integrate these tools?

Open Knowledge Foundation: Global Open Data Index 2014: Reviewing in progress

planet code4lib - Thu, 2014-11-06 19:54

October was a very exciting month for us in the Index team. We spoke to so many of you about the Index, face to face or in the virtual world, and we got so much back from you. It was amazing for us to see how the community is pulling together not only with submissions, but also giving advice in the mailing list, translating tweets and tutorials and spreading the word of the Index around. Thank you so much for your contributions.

This is the first time that we have done regional sprints, starting with the Americas in early October at AbreLATAM/ConDatos, moving on to our community hangout with Europe and MENA, and finishing with Asia, Africa and the Pacific. On Thursday last week, we hosted a Hangout with Rufus, who spoke about the Index, how it can be used and where it is headed. We were also very lucky to have Oscar Montiel from Mexico, who spoke with us about how they use the Index to demand datasets from the government and how they are now implementing the local data index in cities around Mexico so they can promote data openness at the municipal level. We were also excited to host Oludotun Babayemi from Nigeria, who explained how Nigeria's participation in the Index can help raise awareness of open data issues among government and citizens.

Now that the sprints are over, we still have a lot of work ahead of us. We are now reviewing all of the submissions. This year, we divided last year's editor role into two roles, ‘contributor’ and ‘reviewer’. This has been done so we can have a second pair of eyes to ensure information is reliable and of excellent quality. Around the world, a team of reviewers is working on the submissions from the sprints. We are still looking for reviewers for South Africa, Bangladesh, Finland, Georgia, Latvia, Philippines and Norway. You can apply to become one here.

We are finalising the Index 2014 over the next few weeks. Stay tuned for more updates. In the meantime, we are also collecting your stories about participating in the Index for 2014. If you would like to contribute to these regional blogs, please email emma.beer@okfn.org. We would love to hear from you and make sure your country is represented.

pinboard: Code4Lib shop

planet code4lib - Thu, 2014-11-06 19:08
tshirts, mugs, etc.

Library of Congress: The Signal: WITNESS: Digital Preservation (in Plain Language) as a Tool for Justice

planet code4lib - Thu, 2014-11-06 18:09

Illustration of video file and wrapper from WITNESS.

Some of you information professionals may have experienced incidents where, in the middle of a breezy conversation, you get caught off guard by a question about your work (“What do you do?”) and you struggle to come up with a straightforward, clear answer without losing the listener’s attention or narcotizing them into a stupor with your explanation.

Communicating lucid, stripped-down technical information to a general audience is a challenge…not dumbing down the information but simplifying it. Or, rather, un-complicating it and getting right to the point. At the Signal, we generally address our blog posts to institutions, librarians, archivists, students and information technologists. We preach to the choir and use peer jargon with an audience we assume knows a bit about digital preservation already. Occasionally we direct posts specifically to laypeople, yet we might still unintentionally couch some information in language that may be off-putting to them.

WITNESS, the human rights advocacy organization, has become expert in communicating complex technical information in a simple manner. WITNESS empowers people by teaching them how to use video as a tool to document human rights abuses and how to preserve digital video so they can use it to corroborate their story when the time is right. Their audience, who may or may not be technologically savvy, often comes to WITNESS in times of crisis, when they need immediate expertise and guidance.

Cell phone video interview on witness.org

What WITNESS has in common with the Library of Congress and other cultural institutions is a dedication to best practices in digital preservation. However, to the Library of Congress and its peer institutions, the term “digital preservation” pertains to cultural heritage; to victims of human rights violations, “digital preservation” pertains to evidence and justice.

For example, WITNESS advises people to not rename or modify the original video files. While that advice is in accord with the institutional practice of storing the original master file and working only with derivative copies, that same advice, as applied to documenting human rights violations, protects people from the potential accusation of tampering with — or modifying — video to manipulate the truth. The original file might also retain such machine-captured metadata as the time, date and geolocation of the recording, which can be crucial for maintaining authenticity.

The Society of American Archivists recently honored WITNESS with their 2014 Preservation Publication Award for their “Activists Guide to Archiving Video.” The SAA stated, “Unlike other resources, (the guide) is aimed at content creators rather than archivists, enabling interventions that support preservation early in the digital life-cycle. The guide also uses easy-to-understand language and low-cost recommendations that empower individuals and grassroots organizations with fewer resources to take action to safeguard their own valuable collections. To date, the guide has found enthusiastic users among non-archivists, including independent media producers and archives educators, as well as archivists who are new to managing digital video content. The Award Committee noted that the guide was a ‘valuable contribution to the field of digital preservation’ and an ‘example of what a good online resource should be.’”

Screenshot from “What is Metadata” video by WITNESS.

That is an important distinction, the part about “…non-archivists, including independent media producers and archives educators, as well as archivists who are new to managing digital video content.” It means that WITNESS’s digital preservation resources are as useful to a broad audience as they are to its intended audience of human rights advocates. Like the Academy of Motion Picture Arts and Sciences’ 2007 publication, The Digital Dilemma (profiled in the Signal), the language that WITNESS communicates in is so plain and direct, and the advice so comprehensive, that the digital video preservation instruction in the publication is broadly applicable and useful beyond its intended audience. Indeed, WITNESS’s “Activists Guide to Archiving Video” is used in training and college courses on digital preservation.

WITNESS’s latest resource, “Archiving for Activists,” is a video series aimed at improving people’s understanding of digital video so they can make informed choices for shooting and preserving the best possible copy of the event. The videos in this series are:

Photo from witness.org

Some activists in the field have said that, thanks to WITNESS’s resources, they are organizing their footage better and adopting consistent naming conventions, which makes it easier to find files later on and strengthens the effectiveness of their home-grown archives. Yvonne Ng, senior archivist at WITNESS, said, “Even in a situation where they don’t have a lot of resources, there are simple things that can be done if you have a few hard drives and a simple system that everybody you are working with can follow in terms of how to organize your files and put them into information packages – putting things in folders and not renaming your files and not transcoding your files and having something like an Excel document to keep track of where your videos are.”

WITNESS will continue to offer professional digital video archival practices to those in need of human rights assistance, in the form of tools that are easy to use and readily available, in plain language. Ng said, “We talk about digital preservation in a way that is relevant and immediate to the people who are documenting abuses. It serves their end goals, which are not necessarily just to create an archive. It’s so that they can have a collection that they can easily use and it will maintain its integrity for years.”

HangingTogether: UCLA’s Center for Primary Resources and Training: A model for increasing the impact of special collections and archives

planet code4lib - Thu, 2014-11-06 17:00

Many of us in the special collections and archives community have long admired the purpose and scope of UCLA’s Center for Primary Resources and Training (CFPRT), so I was pleased to learn that the UCLA library would be celebrating the Center’s 10th anniversary with a symposium on 24 October. As a result, I now know that we should all be celebrating its remarkable success as well. The audience that day learned via stellar presentations by ten CFPRT “graduates” that the program’s impact on them, and on their students and colleagues, has been profound.

Vicki Steele, the Center’s founding director, talked about being inspired by the ARL “hidden collections” conference at the Library of Congress in 2003 (the papers were published here). She flew right back to UCLA and put together a strategy not only for making a dent in her department’s massive backlogs (she noted they had lost both collections and donors due to a well-deserved reputation for taking years to process new acquisitions) but also for integrating special collections into the intellectual life of the university. Students have told her “you never know what you’re in training for” when describing the “life-changing experiences” fostered by working at CFPRT. And based on the presentations, it’s clear that this is not hyperbole. Oh, and it was great to learn that providing a very desirable wage to the Center’s fellows was a high priority from the beginning; one graduate noted that the stipend literally made it possible for her to focus on her studies and complete her M.A. program.

I confess that I’ve occasionally wondered how much the Center accomplishes beyond getting lots of special collections processed. In the wake of this symposium, I’m wondering no more. The achievements of the graduate students who have participated, their evangelism for the importance of primary sources research, and the effects of the CFPRT experience on their lives render this program a model for others to admire and, resources permitting, to replicate. Ensuring that special collections and archives achieve real impact is a huge emphasis these days—as it should be. The Center is a model for one meaningful approach.

A few of my takeaways:

  • Alexandra Apolloni, Ph.D. student in musicology, now uses sheet music to teach her students about the many aspects of society reflected in such sources. She teaches them to “read a primary source for context.” She noted that it was useful to think about how future researchers would use the materials in order to maintain objectivity in her approach to processing and description.
  • Yasmin Dessem, MA graduate in moving image archive studies and now an archivist at Paramount Studios, discovered the power of primary sources to change history: evidence found in a collection on the notorious Lindbergh kidnapping suggests that the person executed for the crime was innocent. Too little, too late.
  • Andrew Gomez, Ph.D. graduate in history, played a central role in designing and implementing the exceptional digital resource The Los Angeles Aqueduct Digital Platform. In the process of this work, he became a huge supporter of the digital humanities as a rigorous complement to traditional historical research: his work involved standard historical skills and outputs such as studying primary sources and creating historical narratives, as well as mastering a wide variety of digital tools. He also learned how to address audiences other than fellow scholars; in effect, he saw that scholarship can have a broad reach if designed to do so. He is currently on the academic job market and noted that he is seeing ads for tenure-track faculty positions focused on digital humanities. The game may be starting to change.
  • Rhiannon Knol, M.A. student in classics, worked on textual medieval manuscripts. I liked her elegant statement about the ability of a book’s materiality to “communicate knowledge from the dead to the living.” She also quoted Umberto Eco: “Books are not made to be believed, but to be subject to inquiry.” I can imagine reciting both statements to students.
  • Erika Perez, Ph.D. graduate in history and now on the faculty of the University of Arizona, reported that when looking for a job, her experience at CFPRT helped her get her foot in the door and tended to be a major topic during interviews.
  • Aaron Gorelik, Ph.D. graduate in English, said that CFPRT changed his life by leading to his becoming a scholar of the poet Paul Monette. He had his “wow” moment when he realized that “this was a life, not a novel.” His work on Monette has guided his dissertation, teaching, and reading ever since, and he’s in the process of getting more than 100 unpublished Monette poems into press.
  • Audra Eagle Yun, MLIS graduate and now Head of Special Collections and Archives at UC Irvine, spoke of the CFPRT as an “archival incubator.” She and her fellow students were amazed that they would be trusted “to handle the stuff of history” and learned the centrality of doing research before processing. They graduated from CFPRT with the assumption that MPLP is standard processing. Ah, the joys of a fresh education, to be unfettered by unproductive past practice! She felt like a “real archivist” when she realized that she could identify the best research resources and make processing decisions without input from her supervisor.
  • Thai Jones, curator of U.S. history at the Columbia University Rare Books and Manuscripts Library, gave a fascinating keynote in which he told the story of researching his activist grandmother, Annie Stein, who worked for integration of New York City public schools from the 1950s to the 1980s. He gathered a collection of materials entirely via FOIA requests, and the resulting Annie Stein papers are heavily used. (His own life story is fascinating too: he was born and spent his early years living underground with his family because his father was on the run as a member of the Weather Underground. Gosh. Rather different from my Republican childhood!) He opined that digitization has revolutionized discovery for historians but lamented that many of his colleagues today identify and use online resources only. Please digitize more, and faster, is his mantra. It’s ours too, but we know how difficult and expensive it is to achieve. We need to keep developing methodologies for turning it around.

Few special collections and archives can muster the resources to launch and maintain a program as impressive as UCLA’s Center for Primary Resources and Training, but many can do it on a smaller scale. Do you work at one that has gotten started and from which colleagues might learn? If not, what are the challenges that have stopped you from moving forward? Please leave a comment and tell your story.

 

 

About Jackie Dooley

Jackie Dooley leads OCLC Research projects to inform and improve archives and special collections practice. Activities have included in-depth surveys of special collections libraries in the U.S./Canada and the U.K./Ireland; leading the Demystifying Born Digital work agenda; a detailed analysis of the 3 million MARC records in ArchiveGrid; and studying the needs of archival repositories for specialized tools and services. Her professional research interests have centered on the development of standards for cataloging and archival description. She is a past president of the Society of American Archivists and a Fellow of the Society.


Jonathan Rochkind: Useful lesser known ruby Regexp methods

planet code4lib - Thu, 2014-11-06 15:50
1. Regexp.union

Have a bunch of regexes, and want to see if a string matches any of them, but don’t actually care which one it matches, just whether it matches any one or more? Don’t loop through them; combine them with Regexp.union.

union_re = Regexp.union(re1, re2, re3, as_many_as_you_want)
str =~ union_re

2. Regexp.escape

Have an arbitrary string that you want to embed in a regex, interpreted as a literal? Might it include regex special chars that you want interpreted as literals instead? Why even think about whether it might or not, just escape it, always.

val = 'Section 19.2 + [Something else]'
re = /key: #{Regexp.escape val}/

Yep, you can use #{} string interpolation in a regex literal, just like a double quoted string.


Filed under: General

Eric Hellman: If your website still uses HTTP, the X-UIDH header has turned you into a snitch

planet code4lib - Thu, 2014-11-06 14:54
Does your website still use HTTP? If so, you're a snitch.

As I talk to people about privacy, I've found a lot of misunderstanding. HTTPS applies encryption to the communication channel between you and the website you're looking at. It's an absolute necessity when someone's making a password or sending a credit card number, but the modern web environment has also made it important for any communication that expects privacy.

HTTP is like sending messages on a postcard. Anyone handling the message can read the whole message. Even worse, they can change the message if they want. HTTPS is like sending the message in a sealed envelope. The messengers can read the address, but they can't read or change the contents.

It used to be that network providers didn't read your web browsing traffic or insert content into it, but now they do so routinely. This week we learned that Verizon and AT&T were inserting an "X-UIDH" header into your mobile phone web traffic. So for example, if a teen was browsing a library catalog for books on "pregnancy" using a mobile phone, Verizon's advertising partners could, in theory, deliver advertising for maternity products.
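To make the mechanism concrete, here is a minimal sketch (my own illustration, not something from this post or from the carriers) of a tiny WSGI application a site operator could run to see whether carrier-injected X-UIDH headers are arriving with plain-HTTP requests. The app, the variable names and the port are assumptions made for the example.

from wsgiref.simple_server import make_server

def app(environ, start_response):
    # WSGI exposes incoming HTTP headers as HTTP_* keys in the environ dict,
    # so a carrier-injected "X-UIDH" header shows up as HTTP_X_UIDH.
    uidh = environ.get('HTTP_X_UIDH')
    if uidh:
        print('Carrier-injected X-UIDH header seen: ' + uidh)
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'hello\n']

if __name__ == '__main__':
    # Serve over plain HTTP so any intermediary can see and modify the request.
    make_server('', 8000, app).serve_forever()

To try it, point a phone on a cellular data connection at the server's public address and watch the log output.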

The only way to stop this header insertion is for websites to use HTTPS. So do it. Or you're a snitch.

Sorry, Blogger.com doesn't support HTTPS. So if you mysteriously get ads for snitch-related products, or if the phrase "Verizon and AT&T" is not equal to "V*erizo*n and A*T*&T" without the asterisks, blame me and blame Google.

Here's more on the X-UIDH header.

Open Knowledge Foundation: Open Knowledge Festival 2014 report: out now!

planet code4lib - Thu, 2014-11-06 14:46

Today we are delighted to publish our report on OKFestival 2014!

This is packed with stories, statistics and outcomes from the event, highlighting the amazing facilitators, sessions, speakers and participants who made it an event to inspire. Explore the pictures, podcasts, etherpads and videos which reflect the different aspects of the event, and uncover some of its impact as related by people striving for change – those with Open Minds to Open Action.

Want more data? If you are still interested in knowing more about how the OKFestival budget was spent, we have published details about the event’s income and expenses here.

If you missed OKFestival this year, don’t worry – it will be back! Keep an eye on our blog for news and join the Open Knowledge discussion list to share your ideas for the next OKFestival. Looking forward to seeing you there!

OCLC Dev Network: Planned Downtime for November 9 Release

planet code4lib - Thu, 2014-11-06 14:30

WMS Web services will be down during the install window for this weekend's release. The install window for this release is from 2:00 to 7:00 am Eastern US time on Sunday, November 9th.

 

Ted Lawless: Connecting Python's RDFLib and Stardog

planet code4lib - Thu, 2014-11-06 00:00

For a couple of years I have been working with the Python RDFLib library for converting data from various formats to RDF. This library serves this work well but it's sometimes difficult to track down a straightforward, working example of performing a particular operation or task in RDFLib. I have also become interested in learning more about the commercial triple store offerings, which promise better performance and more features than the open source solutions. A colleague has had good experiences with Stardog, a commercial semantic graph database (with a freely licensed community edition) from Clark & Parsia, so I thought I would investigate how to use RDFLib to load data in to Stardog and share my notes.

A "SPARQLStore" and "SPARQLUpdateStore" have been included with Python's RDFLib since version 4.0. These are designed to allow developers to use the RDFLib code as a client to any SPARQL endpoint. Since Stardog supports SPARQL 1.1, developers should be able to connect to Stardog from RDFLib in much the same way they would connect to other triple stores like Sesame or Fuseki.

Setup Stardog

You will need a working instance of Stardog. Stardog is available under a community license for evaluation after going through a simple registration process. If you haven't set up Stardog before, you might want to check out Geir Grønmo's triplestores repository, where he has Vagrant provisioning scripts for various triple stores. This is how I got up and running with Stardog.

Once Stardog is installed, start the Stardog server with security disabled. This will allow the RDFLib code to connect without a username and password. Obviously you will not want to run Stardog in this way in production but it is convenient for testing.

$./bin/stardog-admin server start --disable-security

Next create a database called "demo" to store our data.

$./bin/stardog-admin db create -n demo

At this point a SPARQL endpoint is ready for queries at http://localhost:5820/demo/query.
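As a quick connectivity check (my own sketch, not part of the original walkthrough), the read-only SPARQLStore mentioned above can be pointed at this endpoint and queried through the usual Graph API; the database is still empty at this point, so the query simply returns no rows. The variable names here are my own.

from rdflib import Graph
from rdflib.plugins.stores import sparqlstore

# Read-only client: fine for SELECT/ASK queries, but it cannot perform updates.
read_store = sparqlstore.SPARQLStore('http://localhost:5820/demo/query')
check_graph = Graph(read_store)
for row in check_graph.query('SELECT ?s ?p ?o WHERE { ?s ?p ?o } LIMIT 5'):
    print(row)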

RDF

For this example, we'll add three skos:Concepts to a named graph in the Stardog store.

@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix skos: <http://www.w3.org/2004/02/skos/core#> .
@prefix xml: <http://www.w3.org/XML/1998/namespace> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .

<http://example.org/n1234> a skos:Concept ;
    skos:broader <http://example.org/b5678> ;
    skos:preferredLabel "Baseball" .

<http://example.org/b5678> a skos:Concept ;
    skos:preferredLabel "Sports" .

<http://example.org/n1000> a skos:Concept ;
    skos:preferredLabel "Soccer" .

Code

The complete example code here is available as a Gist.

Setting up the 'store'

We need to initialize a SPARQLUpdateStore as well as a named graph where we will store our assertions.

from rdflib import Graph, Literal, URIRef
from rdflib.namespace import RDF, SKOS
from rdflib.plugins.stores import sparqlstore

#Define the Stardog store
endpoint = 'http://localhost:5820/demo/query'
store = sparqlstore.SPARQLUpdateStore()
store.open((endpoint, endpoint))

#Identify a named graph where we will be adding our instances.
default_graph = URIRef('http://example.org/default-graph')
ng = Graph(store, identifier=default_graph)

Loading assertions from a file

We can load our sample turtle file to an in-memory RDFLib graph.

g = Graph()
g.parse('./sample-concepts.ttl', format='turtle')

#Serialize our graph to make sure we got what we expect.
print g.serialize(format='turtle')

Since our data is now loaded as an in memory Graph we can add it to Stardog with a SPARQL INSERT DATA operation.

ng.update(u'INSERT DATA { %s }' % g.serialize(format='nt'))

Use the RDFLib API to inspect the data

Using the RDFLib API, we can list all the Concepts in Stardog that were just added.

for subj in ng.subjects(predicate=RDF.type, object=SKOS.Concept):
    print 'Concept: ', subj

And, we can find concepts that are broader than others.

for ob in ng.objects(predicate=SKOS.broader):
    print 'Broader: ', ob

Use RDFLib to issue SPARQL read queries.

RDFLib allows for binding a prefix to a namespace. This makes our queries easier to read and write.

store.bind('skos', SKOS)

A SELECT query gets all of the skos:preferredLabel values for skos:Concepts.

rq = """
SELECT ?s ?label
WHERE {
    ?s a skos:Concept ;
       skos:preferredLabel ?label .
}
"""

for s, l in ng.query(rq):
    print s.n3(), l.n3()

Use RDFLib to add assertions.

The RDFLib API can also be used to add new assertions to Stardog.

soccer = URIRef('http://example.org/n1000')
ng.add((soccer, SKOS.altLabel, Literal('Football')))

We can now read statements about soccer using the RDFLib API, which issues the proper SPARQL query to Stardog in the background.

for s, p, o in ng.triples((soccer, None, None)):
    print s.n3(), p.n3(), o.n3()

Summary

With a little setup, we can begin working with Stardog in RDFLib in much the same way we work with other backends. The sample code here is included in this Gist.

DuraSpace News: Recordings available for the Fedora 4.0 Webinar Series

planet code4lib - Thu, 2014-11-06 00:00

Winchester, MA

On November 5, 2014 the Hot Topics DuraSpace Community Webinar series, “Early Advantage: Introducing New Fedora 4.0 Repositories,” concluded with its final webinar, “Fedora 4.0 in Action at Penn State and Stanford.”

DuraSpace News: Fedora 4 Almost Out the Door: Final Community Opportunity for Feedback!

planet code4lib - Thu, 2014-11-06 00:00

From Andrew Woods, Technical Lead for Fedora 

Winchester, MA

Fedora 4 Beta-04 will be released before this coming Monday, November 10, 2014. The development sprint that also begins on November 10 will be focused on testing and documentation as we prepare for the Fedora 4.0 production release.

SearchHub: What Could Go Wrong? – Stump The Chump In A Rum Bar

planet code4lib - Wed, 2014-11-05 22:56

The first time I ever did a Stump The Chump session was back in 2010. It was scheduled as a regular session — in the morning if I recall correctly — and I (along with the panel) was sitting behind a conference table on a dais. The session was fun, but the timing, setting, and seating made it feel very stuffy and corporate.

We quickly learned our lesson, and subsequent “Stump The Chump!” sessions have become “Conference Events”. Typically held at the end of the day, in a nice big room, with tasty beverages available for all. Usually, right after the winners are announced, it’s time to head out to the big conference party.

This year some very smart people asked me a very smart question: why make attendees who are having a very good time (and enjoying tasty beverages) at “Stump The Chump!”, leave the room and travel to some other place to have a very good time (and enjoy tasty beverages) at an official conference party? Why not have one big conference party with Stump The Chump right in the middle of it?

Did I mention these were very smart people?

So this year we’ll be kicking off the official “Lucene/Solr Revolution Conference Party” by hosting Stump The Chump at the Cuba Libre Restaurant & Rum Bar.

At 4:30 PM on Thursday (November 13) there will be a fleet of shuttle buses ready and waiting at the Omni Hotel’s “Parkview Entrance” (on the southeast side of the hotel) to take every conference attendee to Cuba Libre. Make sure to bring your conference badge, it will be your golden ticket to get on the bus and into the venue — and please: Don’t Be Late! If you aren’t on a shuttle bus leaving the Omni by 5:00 PM, you might miss the Chump Stumping!

Beers, Mojitos & Soft Drinks will be ready and waiting when folks arrive, and we’ll officially be “Stumping The Chump” from 5:45 to 7:00-ish.

The party will continue even after we announce the winners, and the buses will be available to shuttle people back to the Omni. The last bus back to the hotel will leave around 9:00 PM — but as always, folks are welcome to keep on partying. There should be plenty of taxis in the area.

To keep up with all the “Chump” news fit to print, you can subscribe to this blog (or just the “Chump” tag).

The post What Could Go Wrong? – Stump The Chump In A Rum Bar appeared first on Lucidworks.

LITA: Game Night at LITA Forum

planet code4lib - Wed, 2014-11-05 22:13

Are you attending the 2014 LITA Forum in Albuquerque? Like board games? If so, come to the LITA Game Night!

Thursday, November 6, 2014
8:00 – 11:00 pm
Hotel Albuquerque, Room Alvarado C

Games that people are bringing:

  • King of Tokyo
  • Cheaty Mages
  • Cards Against Humanity
  • One Night Ultimate Werewolf
  • Star Fluxx
  • Love Letter
  • Seven Dragons
  • Pandemic
  • Coup
  • Avalon
  • Bang!: The Dice Game
  • Carcassonne
  • Uno
  • Gloom
  • Monty Python Fluxx
  • and probably more…

Hope you can come!

FOSS4Lib Recent Releases: Evergreen - 2.7.1, 2.6.4, 2.5.8

planet code4lib - Wed, 2014-11-05 21:21
Package: Evergreen
Release Date: Wednesday, November 5, 2014

Last updated November 5, 2014. Created by Peter Murray on November 5, 2014.

"In particular, they fix a bug where even if a user had logged out of the Evergreen public catalog, their login session was not removed. This would permit somebody who had access to the user’s session cookie to impersonate that user and gain access to their account and circulation information."

Evergreen ILS: SECURITY RELEASES – Evergreen 2.7.1, 2.6.4, and 2.5.8

planet code4lib - Wed, 2014-11-05 21:11

On behalf of the Evergreen contributors, the 2.7.x release maintainer (Ben Shum) and the 2.6.x and 2.5.x release maintainer (Dan Wells), we are pleased to announce the release of Evergreen 2.7.1, 2.6.4, and 2.5.8.

The new releases can be downloaded from:

http://evergreen-ils.org/egdownloads/

THESE RELEASES CONTAIN SECURITY UPDATES, so you will want to upgrade as soon as possible.

In particular, they fix a bug where even if a user had logged out of the Evergreen public catalog, their login session was not removed. This would permit somebody who had access to the user’s session cookie to impersonate that user and gain access to their account and circulation information.

After installing the Evergreen software update, it is recommended that memcached be restarted prior to restarting Evergreen services and Apache. This will clear out all user login sessions.

All three releases also contain bugfixes that are not related to the security issue. For more information on the changes in these releases, please consult the change logs:

District Dispatch: IRS provides update to libraries on tax form program

planet code4lib - Wed, 2014-11-05 21:06

Photo by AgriLifeToday via Flickr

On Tuesday, the Internal Revenue Service (IRS) announced that the agency will continue to deliver 1040 EZ forms to public libraries that are participating in the Tax Forms Outlet Program (TFOP). TFOP offers tax products to the American public primarily through participating libraries and post offices. The IRS will distribute new order forms to participating libraries in the next two to three weeks.

The IRS released the following statement on November 4, 2014:

Based on the concerns expressed by many of our TFOP partners, we are now adding the Form 1040 EZ, Income Tax Return for Single and Joint Filers with No Dependents, to the list of forms that can be ordered. We will send a supplemental order form to you in two to three weeks. We strongly recommend you keep your orders to a manageable level primarily due to the growing decline in demand for the form and our print budget. Taxpayers will be able to file Form 1040 EZ and report that they had health insurance coverage, claim an exemption from coverage or make a shared responsibility payment. However, those who purchased health coverage from the Health Insurance Marketplace must use the Form 1040 or 1040A. Your help communicating this to your patrons within your normal work parameters would be greatly appreciated.

We also heard and understood your concerns of our decision to limit the number of Publication 17 we plan to distribute. Because of the growing cost to produce and distribute Pub 17, we are mailing to each of our TFOP partners, including branches, one copy for use as a reference. We believe that the majority of local demand for a copy of or information from Publication 17 can be met with a visit to our website at www.irs.gov/formspubs or by ordering it through the Government Printing Office. We value and appreciate the important work you do providing IRS tax products to the public and apologize for any inconvenience this service change may cause.

Public library leaders will have the opportunity to discuss the management and effectiveness of the Tax Forms Outlet Program with leaders from the IRS during the 2015 American Library Association Midwinter Meeting session “Tell the IRS: Tax Forms in the Library.” The session takes place on Sunday, February 1, 2015.

The post IRS provides update to libraries on tax form program appeared first on District Dispatch.

Roy Tennant: How Some of Us Learned To Do the Web Before it Existed

planet code4lib - Wed, 2014-11-05 20:58

Perhaps you really had to be there to understand what I’m about to relate. I hope not, but it’s quite possible. Imagine a world without the Internet, as totally strange as that is. Imagine that we had no world-wide graphical user interface to the world of information. Imagine that the most we had were green screens and text-based interfaces to “bulletin boards” and “Usenet newsgroups”. Imagine that we were so utterly ignorant of the world we would very soon inhabit. Imagine that we were about to have our minds utterly blown.

But we didn’t know that. We only had what we had, and it wasn’t much. We had microcomputers of various kinds, and the clunkiest interfaces to the Internet that you can possibly imagine. Or maybe you can’t even imagine. I’m not sure I could, from this perspective. Take it from me — it totally sucked. But it was also the best that we had ever had.

And then along came HyperCard. 

HyperCard was a software program that ran on the Apple Macintosh computer. It would be easy to write it off as being too narrow a niche, as Microsoft was even more dominant in terms of its operating system than it is now. But that would be a mistake. Much of the true innovation at that point was happening on the Macintosh. This was because it had blown the doors off the user interface and Microsoft was still playing catchup. You could argue in some ways it still is. But back then there was absolutely no question who was pushing the boundaries, and it wasn’t Redmond, WA, it was Cupertino, CA. Remember that I’m taking you back before the Web. All we had were clunky text-based interfaces. HyperCard gave us this:

  • True “hypertext”. Hypertext is what we called the proto-web — that is, the idea of linking from one text document to another before Tim Berners-Lee created HTML.
  • An easy to learn programming language. This is no small thing. Having an easy-to-learn scripting language put the ability to create highly engaging interactive interfaces into the hands of just about anyone.
  • Graphical elements. Graphics, as we know, are a huge part of the Web. The Web didn’t really come into its own until graphics could show up in the UI. But we already had this in HyperCard. The difference was that on the web, anyone with a network connection could see your graphics, not just those who had your HyperCard “stack”.

As a techie, I was immediately taken with the possibilities, so as a librarian at UC Berkeley at the time I found some other willing colleagues and we built a guide to the UC Berkeley Libraries. Unfortunately I’ve been unable to locate a copy of it, even though it’s still possible to run a HyperCard stack in emulation. I’d give a lot to be able to play with it again.

Doing this exposed us to principles of “chunking up” information and linking it together in different ways that we eventually took with us to the web. We also learned to limit the amount of text in online presentations, to enhance “scannability”. We were introduced to visual metaphors like buttons. We learned to use size to indicate priority. We experimented with breadcrumb trails to give users a sense of where they were in the information space. And we strove to be consistent. All of these lessons helped us to be better designers of web sites, before the web even existed.

For more, here is another viewpoint on what HyperCard provided a web-hungry world.

Nicole Engard: Bookmarks for November 5, 2014

planet code4lib - Wed, 2014-11-05 20:30

Today I found the following resources and bookmarked them:

  • Brackets: A modern, open source text editor that understands web design.


The post Bookmarks for November 5, 2014 appeared first on What I Learned Today....


LITA: Jobs in Information Technology: November 5

planet code4lib - Wed, 2014-11-05 17:33

New vacancy listings are posted weekly on Wednesday at approximately 12 noon Central Time. They appear under New This Week and under the appropriate regional listing. Postings remain on the LITA Job Site for a minimum of four weeks.

New This Week

Assistant University Archivist for Technical Services, Princeton University Library, Princeton, NJ

Dean of University Libraries, Oakland University, Rochester, MI

Digital Production Services Programmer – IT Expert, University of Florida, George A Smathers Libraries, Gainesville, FL

IT Expert – Programmer, University of Florida, George A Smathers Libraries, Gainesville, FL

Physician Directory Specialist, Froedtert Health, Menomonee Falls, WI 

Visit the LITA Job Site for more available jobs and for information on submitting a job posting.
