Planet Code4Lib

SearchHub: Stump The Chump D.C.: Meet The Panel

Wed, 2014-10-22 22:04

If you haven’t heard: On November 13th, I’ll be back in the hot seat at Lucene/Solr Revolution 2014 answering tough Solr questions — submitted by users like you — live, on stage, sight unseen.

Today, I’m happy to announce the Panel of experts that will be challenging me with those questions, and deciding which questions were able to Stump The Chump!

In addition to taunting me with the questions, and ridiculing all my “Um”s and “Uhh”s as I struggle to answer them, the Panel members will be awarding prizes to the folks who submit the questions that do the best job of “Stumping” me. Questions can be submitted to our panel any time until the day of the session. Even if you won’t be able to attend the conference, you can still participate — and do your part to humiliate me — by submitting your tricky questions.

To keep up with all the “Chump” news fit to print, you can subscribe to this blog (or just the “Chump” tag).

The post Stump The Chump D.C.: Meet The Panel appeared first on Lucidworks.

Nicole Engard: ATO2014: Pax Data

Wed, 2014-10-22 21:39

Doug Cutting from Cloudera gave our closing keynote on day 1.

Hadoop started a revolution. It is an open source platform that really harnesses data.

In movies the people who harness the data are always the bad guys – so how do we save ourselves from becoming the bad guy? What good is coming out of good data?

Education! The better data we have the better our education system can be. Education will be much better if we can have a custom experience for each student – these kinds of observations are fed by data. If we’re going to make this happen we’re going to need to study data about these students. The more data you amass the better predictions you can make. On the flip side it’s scary to collect data about kids. inBloom was an effort to collect this data, but they ended up shutting down because of the fear. There is a lot of benefit to be had, and it would be sad if we didn’t enable this type of application.

Healthcare is another area where this becomes handy. Medical research benefits greatly from data. The better the data we collect, the better we can care for people. Once again, this is an area where people have fears about shared data.

Climate is the last example. The climate is changing, and data plays a huge role in understanding how we can affect it. Data about our energy consumption is part of this. Some people say that certain data is not useful to collect – but this isn’t a good approach. We want to collect all the data and then evaluate it. You don’t know in advance what value the data you collect will have.

How do we collect this data if we don’t have trust? How do we build that trust? There are some technology solutions like encrypting data and anonymizing data sets – these methods are imperfect though. In fact if you anonymize the data too much it muddies it and makes it less useful. This isn’t just a technical problem – instead we need to build trust.

The first way to build trust is to be transparent. If you’re collecting data you need to let people know you’re collecting it and what you’re going to use it for.

The next key element is establishing best practices around data. These are the technical elements like encryption and anonymization. This also includes language to agree/disagree to ways our data is shared.

Next we need to draw clear lines that people can’t step over – for example, we can’t show someone’s home address without their express permission. That gives us a basis for the last element.

Enforcement and oversight is needed. We need someone who is checking up on these organizations that are collecting data. Regulation can sound scary to people, but we have come to trust it in many markets already.

This is not just a local issue – it needs to be a global effort. As professionals in this industry we need to think about how to build this trust and get to the point where data can be stored and shared.

The post ATO2014: Pax Data appeared first on What I Learned Today....

Related posts:

  1. ATO2014: Modern Applications & Data
  2. ATO2014: How Raleigh Became an Open Source City
  3. ATO2014: What Academia Can Learn from Open Source

Nicole Engard: ATO2014: Saving the world: Open source and open science

Wed, 2014-10-22 20:49

Marcus Hanwell, another fellow moderator, was the last session of the day with his talk about saving the world with open source and open science!

In science there was a strong ethic of ‘trust, but verify’ – if you couldn’t reproduce a scientist’s results, the theory was dismissed. The ‘but verify’ part has somewhat fallen away in recent years. In science the primary measure of success is publishing – citations to your work are key. But when you do publish, your content is locked down in costly journals instead of being available in the public domain. So if you pay large amounts of money you can have access to the article – but not necessarily the data. Data is increasingly kept locked up so that the findings stay with the publishing researcher and they get all the credit.

Just like in the earlier talk today on what academia can learn from open source, Marcus showed us an article from the 17th century next to an article from today – the method of publishing has not changed. Plus these articles are full of obtuse academese.

All of this makes it very important to show what’s in the black box. We need to show what’s going on in these experiments at all levels. This includes sharing the steps used to run calculations – the source code used to get this info should be open source, because right now the tools used are basically notebooks with no version control. We have to stop putting scientists on pedestals and start to hold them accountable.

A great quote that Marcus shared from an Economist article was: “Scientific research has changed the world. Now it needs to change itself.” Another was “Publishing research without data is simply advertising, not science.” Scientists need to think more about licenses – they give their rights away to journals because they don’t pay enough attention to the licenses that are out there like the creative commons.

What is open? How do we change these behaviors? Open means that everyone has the same access. Certain basic rights are granted to all – the ability to share, modify and use the information. There is a fear out there that sharing our data means someone could prove that we’re wrong or stupid. We need to change this culture. We need more open data (shared in open formats and usable with open source software), more open standards and open access.

We need to push boundaries – most of what is published is publicly funded, so it should be open and available to all of us! We do need some software to share this data – that’s where we come in and where open source comes in. In the end the lesson is that we need to get scientists to show all their data and not reward academics solely for their citations, because this model is rubbish. We need to find a new way to reward scientists – a more open model.

The post ATO2014: Saving the world: Open source and open science appeared first on What I Learned Today....

Related posts:

  1. ATO2014: What Academia Can Learn from Open Source
  2. ATO2014: Easing into open source
  3. ATO2014: How Raleigh Became an Open Source City

Nicole Engard: Bookmarks for October 22, 2014

Wed, 2014-10-22 20:30

Today I found the following resources and bookmarked them:

  • vokoscreen – open source screencasting
  • Waffle – creates a full project management solution from your existing GitHub Issues.

Digest powered by RSS Digest

The post Bookmarks for October 22, 2014 appeared first on What I Learned Today....

Related posts:

  1. Open Access Day in October
  2. Governments Urging the use of Open Source
  3. Digsby Goes Open Source

Nicole Engard: ATO2014: Open Source in Healthcare

Wed, 2014-10-22 19:59

Luis Ibanez, my fellow moderator, was up next to talk to us about Open Source in Healthcare. Luis’s story was so interesting – I hope I caught all the numbers he shared – but the moral of the story is that hospitals could save insane amounts of money if they switched to an open system.

There are 7 billion people on the planet making $72 trillion a year. In the US we have 320 million people – 5% of the global population – but we account for 22% of the economic production on the planet. What do we do with that money? 24% of it is spent on healthcare ($3.8 trillion) – not just by the government; this is the spending of the entire country. That is more than Germany and France spend. Yet we’re ranked 38th in healthcare quality in the world. France, meanwhile, is ranked #1 and spends only 12% of its money on healthcare. This is an example of how spending more money on the problem is not helping.

Is there something that geekdom can do to set this straight? Luis says ‘yes!’

So, why do we go to the doctor? To get information. We want the doctor to tell us if we have a problem they can fix and know how to fix it. Information connects directly to our geekdom.

Today if you go to a hospital your data will be stored on paper and go into a “data center” (a filing cabinet). In 2010, 84% of hospitals were keeping paper records rather than using software. The healthcare industry is the only industry that has needed to be paid to switch to storing this information in software – $20 billion was spent between 2010 and 2013 to get to 60% of hospitals storing information electronically. This is one of the reasons we’re spending so much on healthcare right now.

The problem here (and this is Luis’s rant) is that the hospitals have to pay for this software in the first place. And you’re not allowed to share anything about the system: you can’t take screenshots, you can’t talk about the features, you are completely locked down. This system will run your hospital – a combination of hotel, restaurant, and medical facility that has been called the most complex institution of the century. These systems cost $100 million for a 400-bed hospital – and hospitals have to buy them with little or no knowledge of how they work, because of the security measures around seeing and sharing information about the software. The NDA you have to sign to see and use the software works against the idea of a free market.

An example that Luis gave us was Wake Forest hospital, which ended up $56 million in the red – all because they bought software for $100 million – leading them to fire people, stop making retirement payments, and make other cuts. [For me this sounds a lot like what libraries are doing – paying for an ILS instead of putting that money toward people and services.]

Another problem in the medical industry is that only 41% of hospitals (less than half) have the capability to send secure messages to patients. This is not a technology problem – it’s a cultural problem in the medical world. Other industries have solved this technology problem already.

So, why do we care about all of this? There are 5,723 hospitals in the US: 211 of them are federally run (typically military hospitals), 413 are psychiatric, 2,894 are non-profits, and the rest are private or state run. That totals nearly 1 million beds, and $830 billion a year is spent in hospitals. The software these hospitals are buying costs about $250 billion.

The federal hospitals run a system that was released into the public domain called VistA; OSEHRA was founded to protect this software. This software, though, is written in MUMPS – the same language that the $100 million software is written in! Except there is a huge difference in price.

If hospitals switched they’d spend $0 on licensing. To keep this software running and updated we’d need about 20 thousand developers – but divided across the hospitals, that’s about 4 developers per hospital. These developers don’t need to be professional programmers – they could be doctors, nurses, or pharmacists, because MUMPS is so easy to learn.

The post ATO2014: Open Source in Healthcare appeared first on What I Learned Today....

Related posts:

  1. ATO2014: Open Source – The Key Component of Modern Applications
  2. ATO2014: Easing into open source
  3. ATO2014: Open Source & the Internet of Things

LITA: LITA Forum: Online Registration Ends Oct. 27

Wed, 2014-10-22 19:59

Don’t miss your chance to register online for the 2014 LITA Forum “From Node to Network” to be held Nov. 5-8, 2014 at the Hotel Albuquerque in Albuquerque N.M. Online registration closes October 27, 2014. You can register on site, but it’s so much easier to have it all taken care of before you arrive in Albuquerque.

Book your room at the Hotel Albuquerque. The guaranteed LITA room rate date has passed, but when you call 505-843-6300, ask for the LITA room rate; there might be a few rooms left in our block.

Three keynote speakers will be featured at this year’s forum:

  • AnnMarie Thomas, Engineering Professor, University of St. Thomas
  • Lorcan Dempsey, Vice President, OCLC Research and Chief Strategist
  • Kortney Ryan Ziegler, Founder Trans*h4ck.

More than 30 colleague-inspired concurrent sessions and a dozen poster sessions will provide a wealth of practical information on a wide range of topics.

Two preconference workshops will also be offered:

  • Dean B. Krafft and Jon Corson-Rikert of Cornell University Library will present
    “Linked Data for Libraries: How libraries can make use of Linked Open Data to share information about library resources and to improve discovery, access, and understanding for library users”
  • Francis Kayiwa of Kayiwa Consulting will present
    “Learn Python by Playing with Library Data”

Networking opportunities, a major advantage of a smaller conference, are an important part of the Forum. Take advantage of the Thursday evening reception and sponsor showcase, the Thursday game night, the Friday networking dinners or Kitchen Table Conversations, plus meals and breaks throughout the Forum to get to know LITA leaders, Forum speakers, sponsors, and peers.

2014 LITA Forum sponsors include EBSCO, Springshare, @mire, Innovative and OCLC.

Visit the LITA website for more information.

Library and Information Technology Association (LITA) members are information technology professionals dedicated to educating, serving, and reaching out to the entire library and information community. LITA is a division of the American Library Association.

LITA and the LITA Forum fully support the Statement of Appropriate Conduct at ALA Conferences.

Islandora: Islandora Deployments Repo

Wed, 2014-10-22 18:58

Ever wonder what another institution's Islandora deployment looks like in detail? Look no further: York and Ryerson have shared their deployments with the community on GitHub, including details such as software versions, general settings, XACML policies, and Drupal modules. If you would like to share your deployment, please contact Nick Ruest so he can add you as a collaborator on the repo.

Nicole Engard: ATO2014: Open Source & the Internet of Things

Wed, 2014-10-22 18:49

Erica Stanley was up next to talk to us about Open Source and the Internet of Things (IoT).

The Internet of Things (Connected Devices) is the connection of things and people over a network. Why the Internet of Things? Why now? Because technology has made it a possibility. Why open source Internet of Things? To ensure that innovation continues.

Some of the applications we have for connected devices are health/fitness, home/environment, and identity. Devices that are always connected to us let us do things like monitor our health, so that we can see when something might be wrong before we feel symptoms. Examples include vision devices (Google Glass), smart watches, wearable cameras, wristbands (Fitbit), smart home devices (some of which are on my wishlist), connected cars (cars that can tell that the car in front of you has stopped rather than merely slowed down), and smart cities like Raleigh.

There are many networking technologies these devices can use to stay connected, but Bluetooth seems to be the default. There is a central device and a peripheral device – the central device wants the data that the peripheral device has. They use Bluetooth to communicate with each other, with the central device requesting info from the peripheral.

Cloud computing, another important technology, has been one of the foundations of the Internet of Things – it is how we store all the info we’re passing back and forth. As our devices gain the ability to learn, we get more devices that can act on the data they’re gathering (there is a fitness app/device that will encourage you to get up and move once in a while, for example).

Yet another technology that’s important is augmented reality showing us results of data in our day to day (Google glass showing you the directions to where you’re walking).

One challenge facing us is the fact that we have devices living in silos. So we have Google devices and Samsung devices – but they don’t talk to each other. We need to move towards a platform for connected devices. This will allow us to have a user controlled and created environment – where the devices I want to talk to each other can and the people I want to see the data can see the data. This allows us to personalize our environment but also secure our environment.

Speaking of security, there are some guidelines for developers that we can all follow to be sure to create secure devices. When building these devices we want to think about security from the very beginning. We need to understand our vulnerabilities, build security from the ground up. This starts with the OS so that we’re building an end-to-end solution. Obviously you want to be proactive in testing your apps and use updated APIs/frameworks/protocols.

Some hardware tools you can use to get started: Arduino-compatible devices (LilyPad, Adafruit Flora and Gemma), Tessel, and Metawear. Software tools include Spark Core, IoT Toolkit, Cloud Foundry, Eclipse IoT tools, and Huginn (which is kind of an open source IFTTT).

One thing to keep in mind when designing for IoT is that we no longer own the foreground – we might not have a screen, or a full-sized screen. We also have to think about integration with other devices, and about discoverability of functionality if we don’t have a screen (a gesture-based device). Finally, we have to keep in mind low energy and computing power. On the product side you want to think about the form factor – you don’t want a device that no one will want to wear. This also means creating personalizable devices.

Remember that there is no ‘one size fits all’ – your device doesn’t have to be the same as others that are out there. Try to not get in the way of your user – build for people not technology! If we don’t try to take all of the user’s attention with the wearable then we’ll get more users.

The post ATO2014: Open Source & the Internet of Things appeared first on What I Learned Today....

Related posts:

  1. ATO2014: How Raleigh Became an Open Source City
  2. ATO2014: Open Source – The Key Component of Modern Applications
  3. ATO2014: Easing into open source

Nicole Engard: ATO2014: How Raleigh Became an Open Source City

Wed, 2014-10-22 18:04

Next up was Jason Hibbets and Gail Roper who gave a talk about the open source initiative in Raleigh.

Gail started by saying ‘no one told us we had to be more open’. Instead there were signs that showed that this was a good way to go. In 2010 Forbes labeled Raleigh one of the most wired cities in the country, but what they really want is to be the most connected city in the country.

Raleigh has three initiatives: open source, open data, and open access – the city wants to get gigabit internet connections to every household. So far they have a contract with AT&T, and they are working with Google to see if Raleigh will become a Google Fiber city.

The timeline leading up to this though required a lot of education of the community about what open meant. It didn’t mean that before this they were hiding things from the community. Instead they had to teach people about open source and open access. There were common stereotypes that the government had about open source – the image of a developer in his basement being among them.

Why did they do this? Why do they want to be an open city? Because of SMAC (Social, Mobile, Analytics, Cloud). Today’s citizens expect that anywhere on any device they should be able to connect to the web. Government organizations like Raleigh’s will have 100x the data to manage. So providing a government that is collaborative and connected to the community becomes a necessity not an option.

“Empowerment of individuals is a key part of what makes open source work, since in the end, innovations tend to come from small groups, not from large, structured efforts.” -Tim O’Reilly

Next up was Jason Hibbets, who is a team lead by day and supports the open Raleigh project by night. Jason shared with us how he helped make the open Raleigh vision a reality. He is not a coder, but he is a community manager. Government to him is about more than putting taxes in and getting services out – it’s about us, the members of the community.

Jason discovered CityCamp – a government unconference that brings together local citizens to build stronger communities where they live. These camps have allowed people to come together to share their ideas openly. Along the way the organizers of this local CityCamp became members of Code for America. Using many online tools they have made it easy to communicate with their local brigade and with others around the state. There is also a meetup group if you’re in the area. If you’re not local you can join a brigade in your area or start your own!

Jason has shared his story in his book The foundation for an open city.

The post ATO2014: How Raleigh Became an Open Source City appeared first on What I Learned Today....

Related posts:

  1. ATO2014: Easing into open source
  2. ATO2014: What Academia Can Learn from Open Source
  3. ATO2014: Building a premier storytelling platform on open source

LITA: Jobs in Information Technology: October 22

Wed, 2014-10-22 17:20

New vacancy listings are posted weekly on Wednesday at approximately 12 noon Central Time. They appear under New This Week and under the appropriate regional listing. Postings remain on the LITA Job Site for a minimum of four weeks.

New This Week

Head of Technology, Saline County Library,  Benton,  AR

Science Data Librarian,  Penn State University Libraries, University Park,  PA

Visit the LITA Job Site for more available jobs and for information on submitting a  job posting.

HangingTogether: The Variation and the Damage Done

Wed, 2014-10-22 16:45

Some of you may already know about my “MARC Usage in WorldCat” project, where I simply expose the contents of a number of MARC subfields in ordered lists of strings. The point, as I state on the site itself, is to expose “which elements and subfields have actually been used, and more importantly, how? This work seeks to use evidence of usage, as depicted in the largest aggregation of library data in the world — WorldCat — to inform decisions about where we go from here.”

One aspect of this is the quality, or lack thereof, of the actual data recorded. As an aggregator, we see it all. We see the typos, the added punctuation where none should be. We see the made up elements and subfields (yes, made up). We see data that is clearly in the completely wrong place in the record (what were they thinking?). We see it all.

So this week when I received a request for a specific report, as sometimes happens, I was happy to comply. The correspondent wanted to see the contents of the 775 $e subfield, which, according to the documentation, should only contain a “language code”. Catalogers know that you can’t make these up; they must come from the Library of Congress’ MARC Code List for Languages.

Sounds simple, right? If you encode a language in the 775 $e, it must come from that list. But that doesn’t prevent catalogers from embellishing (see all the variations of “eng” below and the number of times each was found; this does not include variations like “anglais”). Why not add punctuation? Or additional information, such as “bilingual”? I’ll tell you why not: because it renders the data increasingly unusable without normalization.

And normalization comes at a cost. Easy normalization, such as removing punctuation, is straightforward. But at some point the easiest thing to do is to simply throw it away. If a string only occurs once, how important can it be?

As we move into a more fully machine-supported world for library metadata we will be facing more of these choices. Some will be harder than others. If you don’t believe me, just check out what we have to do with dates.

52861 eng
1249 eng.
400 (eng)
20 (eng.)
12 (eng).
3 eeng
2 [eng]
1 feng
1 eng~w(CaOOP) a472415
1 engw(CaOOP) a459037
1 engw(CaOOP) a371268
1 engw(CaOOP) 1-181456
1 engw(CaOOP) 01-0314275
1 engw(CaOOP) 01-0073869
1 enge
1 eng..
1 eng,
1 eng(CaOOP) a359090
1 eng(CaOOP) 1-320212
1 eng$x0707-9311
1 bilingual eng
1 (eng),
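The easy normalization described above can be sketched in a couple of lines of shell. This is just an illustration run over a handful of the sample values from the list, not the actual report tooling:

```shell
# Collapse trivial variants of "eng" by stripping brackets, parentheses,
# periods and commas, then recount. Values with embedded control numbers
# (e.g. "engw(CaOOP) a459037") would still survive and need human review.
printf '%s\n' 'eng' 'eng.' '(eng)' '(eng.)' '(eng).' 'eng,' '[eng]' |
  sed 's/[][().,]//g' |   # drop punctuation-only embellishments
  sort | uniq -c | sort -rn
```

Run over the full report, this folds the punctuation-only variants into a single count; the harder cases (control numbers, “bilingual eng”) still demand judgment or, as the post suggests, simply get thrown away.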

Photo by Suzanne Chapman, Creative Commons license CC BY-NC-SA 2.0

About Roy Tennant

Roy Tennant works on projects related to improving the technological infrastructure of libraries, museums, and archives.


Nicole Engard: ATO2014: What Academia Can Learn from Open Source

Wed, 2014-10-22 16:01

Arfon Smith from Github was up to talk to us about Academia and open source.

Arfon started with an example of a shared research proposal. You create a document and then edit the filename with each iteration, because word processing applications are not good at tracking changes and allowing collaboration. Git, though, is meant for this very thing. So he showed us a book example on GitHub where the collaborators worked together on a document.

In open source there is a ubiquitous culture of reuse. Academia doesn’t do this – but why not? The problem is the publishing requirement in academia. The first problem is that ‘novel’ results are preferred: you’re incentivized to publish new things to move ahead. The second problem is that the value of your citations is more powerful than the number of people you’ve worked with. And thirdly, and more generally, the format sucks. Even if it’s an electronic document it’s still hard to collaborate on (see the document example above). This is state-of-the-art technology … for the late 17th century (Reinventing Discovery).

So, what do open source collaborations do well? There is a difference between open source and open source collaborations, and this is an important distinction. Open source is the right to modify – it’s not the right to contribute back. Open source collaborations are highly collaborative development processes that allow anyone who shows an interest to contribute. This brings us back to the ubiquitous culture of reuse. These collaborations also expose the process by which they work together – unlike the current black box of research in academia.

How do we get 4,000 people to work together, then? Using git, and GitHub specifically, you can fork the code from an existing project and work on it without breaking other people’s work, and when you want to contribute it back you submit a pull request to the project. The beauty of this is ‘code first, permission later’, and every time this process happens the community learns.
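The fork-and-pull workflow described above can be sketched as a runnable local simulation; all repository and branch names here are hypothetical. On GitHub, “upstream” would be the project you fork (via the Fork button) and “fork” the copy under your own account:

```shell
cd "$(mktemp -d)"

# Stand-in for the upstream project on GitHub.
git init -q upstream && cd upstream
git config user.email you@example.com && git config user.name "You"
git commit -q --allow-empty -m "initial commit"

# Stand-in for cloning your fork of the project.
cd .. && git clone -q upstream fork && cd fork
git config user.email you@example.com && git config user.name "You"

# A topic branch lets you work without breaking anyone else's work.
git checkout -q -b my-improvement
echo "a small fix" > NOTES.txt
git add NOTES.txt && git commit -q -m "Describe the change"

# Publish the branch; on GitHub you would now open a pull request
# asking the project's maintainers to merge it.
git push -q origin my-improvement
```

Only the branch, commit and push commands are git itself; the fork and the pull request happen in GitHub’s web interface.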

The goal of a contribution on GitHub is to get it merged into the product. Not all open source projects are receptive to these pull requests, though – those are not the collaborative types of projects.

Fernando Perez: “open source is … reproducible by necessity.” If people didn’t collaborate, these projects wouldn’t move forward – so they need to be collaborative. The difference in academia is that you have to work alone and in a closed fashion to move ahead and get recognition.

Open can mean within your team or institution – it doesn’t have to be worldwide like in open source. But making your content electronic and available (which does not mean a Word doc or email) makes working together easier. Academia can learn from open source – more importantly, academia must learn from open source to move forward.

All the above seems kind of negative, but Arfon did show us a lot of examples where people are sharing in academia – we just need to get this to be more widespread. Where might more significant change happen? The most obvious place to look is where communities form – like around a shared challenge – or around shared data. Science and big data are where we’re going to see this more hopefully.

There are still challenges, though – so how do we make sharing the norm? The main problem is what academia rewards as ‘credit’ – articles written solely by you. Tools like Astropy are hugely successful on GitHub, but the authors had to write a paper about it to get credit. The other issue is trust – academics are reluctant to use other people’s stuff because they don’t know if the work is of value. In open source we have solved this problem already – if a package has been downloaded thousands of times it’s probably reliable. There are also tools like Code Climate that give your code a grade.

In short, the barriers are cultural, not technical!

The post ATO2014: What Academia Can Learn from Open Source appeared first on What I Learned Today....

Related posts:

  1. ATO2014: Open Source – The Key Component of Modern Applications
  2. ATO2014: Open Source Schools: More Soup, Less Nuts
  3. ATO2014: Easing into open source

Open Knowledge Foundation: New Open Access Button launches as part of Open Access Week

Wed, 2014-10-22 15:02

This post is part of our Open Access Week blog series to highlight great work in Open Access communities around the world.

Push Button. Get Research. Make Progress.

If you are reading this, I’m guessing that you too are a student, researcher, innovator, an everyday citizen with questions to answer, or just a friend to Open Knowledge. You may be doing incredible work and are writing a manuscript or presentation, or just have a burning desire to know everything about anything. In this case I know that you are also denied access to the research you need, not least because of paywalls blocking access to the knowledge you seek. This happens to me too, all the time, but we can do better. This is why we started the Open Access Button, for all the people around the world who deserve to see and use more research results than they can today.

Yesterday we released the new Open Access Button at a launch event in London. The next time you’re asked to pay to access academic research, push the Open Access Button on your phone or on the web. The Open Access Button will search the web for a version of the paper that you can access.

If you get your research, you can make progress with your work. If you don’t, your story will be used to help change the publishing system so it doesn’t happen again. The tool seeks to help users get the research they need immediately, and adds papers that remain unavailable to a wish-list. The apps work by harnessing the power of search engines, research repositories, automatic contact with authors, and other strategies to track down the papers that are available and present them to the user – even on a mobile device.

The London launch kicked off a week of events showcasing the Open Access Button in Europe, Asia and the Middle East. Notably, the new Open Access Button was previewed at the World Bank Headquarters in Washington D.C. as part of the International Open Access Week kickoff event. During the launch yesterday, we reached at least 1.3 million people on social media alone. The new apps build upon a successful beta released last November that attracted thousands of users from across the world and drew lots of media attention. They could not have been built without a dedicated volunteer team of students and young researchers, and the invaluable help of a borderless community responsible for designing, building and funding the development.

Alongside supporting users, we will start using the data and the stories collected by the Button to help make the changes required to really solve this issue. We’ll be running campaigns and supporting grassroots advocates, as well as building a dedicated data platform for advocates to use our data. There you can already see the ready-to-be-filled map and take your first action: sign our first petition, in support of Diego Gomez, a student who faces 8 years in prison and a huge monetary fine for doing something citizens do every day, sharing research online for those who cannot access it.

If you too want to contribute to these goals and advance your research, these are exciting opportunities to make a difference. So install the Open Access Button (it’s quick and easy!), give it a push, click or tap when you’re denied access to research, and let’s work together to fix this problem. The Open Access Button is available now.

Nicole Engard: ATO2014: Using Bootstrap to create a common UI across products

Wed, 2014-10-22 14:59

Robb Hamilton and Greg Sheremeta from Red Hat spoke in this session about Bootstrap.

First up was Robb to talk about the problem. The problem that they had at Red Hat was that they had a bunch of products that all had their own different UI. They decided that as you went from product to product there should be a common UI. PatternFly was the initiative to make that happen.

Bootstrap was the framework they chose for this solution. Bootstrap is a front-end framework for apps and websites. It’s comprised of HTML, CSS, JavaScript and an icon font (for resolution-independent icons). Of course Bootstrap is open source, and it’s the most popular project on GitHub. Bootstrap is mobile-first and responsive – design for the smallest screen first, then adjust as the screen gets bigger. Bootstrap has a lot of components like a grid, drop-down menus, fonts, and form elements. So the answer to ‘Why Bootstrap’ seems obvious now. But one reason Red Hat chose it was that most everyone was already using it in their products.

PatternFly is basically Bootstrap + extra goodness.

Up next was Greg to talk about using PatternFly on his project, oVirt. First, when you have to work with multiple groups/products you need good communication. The UI team was very easy to reach, answering questions in IRC immediately and providing good documentation. One major challenge Greg ran into was having to write the application in a server-side language and then get it to translate to the web languages that PatternFly was using.

Greg’s favorite quote: “All problems in computer science can be solved by another level of indirection, except of course for the problem of too many indirections” – David Wheeler. So he needed to come up with a layer of indirection to get from his language to Bootstrap. He Googled his problem and found a library that would work for him.

The post ATO2014: Using Bootstrap to create a common UI across products appeared first on What I Learned Today....

Related posts:

  1. ATO2014: Building a premier storytelling platform on open source
  2. ATO2014: The first FOSS Minor at RIT
  3. Open Source Documentation

Nicole Engard: ATO2014: Modern Applications &amp; Data

Wed, 2014-10-22 14:09

Dwight Merriman from MongoDB was up next to talk to us about modern applications and data.

We’re not building the same things that we were before – we’re building a whole new class of applications that just didn’t exist before. When creating an app these days you might use pieces from 12 other applications, if you had to do this with a closed source project this would be very difficult. Open source makes the modern applications possible – otherwise you have 12 other things to go buy to make your application work.

We’re in the midst of the biggest technology change in the data layer in 25 years. We talk about big data and this is all part of it. One of the differences is the shape of the data. It’s not all tabular data anymore. The new tools we’re creating today are very good at handling these new shapes. Saying ‘unstructured data’ is inaccurate – it’s dynamic data – hence the word ‘shape’.

Speed is another aspect of this. Everything is real-time now – you don’t want to wait overnight for your report anymore. As developers, as we build systems we need to start with a real-time mentality. While this sounds logical, it’s actually a big change from the way we were taught, which was to do things in batches. These days computers are a lot faster, so if you can do it in real time, it’s a lot better.

We also need to think about our approach to writing code these days – this has changed a lot from how we were taught years ago. It’s not just about writing the perfect spec anymore, it’s a lot more collaboration with the customer. Iteration is necessary now – look at how Facebook changes a tiny bit every day.

Dwight then shared with us some real-world examples from John Deere, Bosch and Edeva. Edeva is doing some interesting things with traffic data. They have built a technology that senses your speed when you’re driving over one particular bridge in Sweden; if you’re going over the speed limit it will create speed bumps specifically for you. That’s just one way they’re putting their data to use in a real-life scenario.

“There’s new stuff to do in all domains – in all fields – and we have the tools to do them now.”

The post ATO2014: Modern Applications & Data appeared first on What I Learned Today....

Related posts:

  1. ATO2014: Open Source – The Key Component of Modern Applications
  2. ATO2014: The first FOSS Minor at RIT
  3. ATO2014: Open Source Schools: More Soup, Less Nuts

Nicole Engard: ATO2014: Open Source – The Key Component of Modern Applications

Wed, 2014-10-22 13:44

Jeffrey Hammond from Forrester Research started this morning with a talk about Open Source – The Key Component of Modern Applications. Jeffrey wants to talk to us about why open source matters. It’s the golden age to be a developer. If you have people who work for you who are developers you need to understand what’s going on in our space right now. The industry is changing drastically.

When you started a software company years ago it would cost $5 to $10 million. Today software innovation costs about 90% less than it used to. This is because of a variety of things, including elastic infrastructure, services that we can call upon, managed APIs, open source software, and a focus on measurable feedback. Open source is one of the key parts of this and one of the driving forces of modern application development. In 2014, 4 out of 5 developers use or have used open source software to develop or deploy their software.

The traits of modern applications show why we expect to see more and more open source software everywhere. One of those traits is the API. Another is asynchronous communication – a lot of the traditional frameworks that developers are used to using are not conducive to this so we’re seeing new frameworks and these are open source. We’re seeing less and less comparison of open source versus proprietary and more open source compared to open source.

Jeff showed us Netflix’s engagement platform and how every part of their system is built on open source. Most of the popular tools out there share this same architecture built on open source.

This development is being driven by open source communities – what Jeff calls collaborative collectives. Those of us looking to hire developers need to restructure to use the power of these collectives.

When asked if they write code on their own time, 70% of developers say they do. That desire is built on a variety of motives, all of which represent intrinsic motivation – it makes them feel good. A little over 1 in 4 of those developers contribute to open source projects on their own time. So, if you’re looking to hire productive developers, Jeff says there is a direct correlation between participating in open source and being an amazing, productive programmer.

I’d add here that we need to educate the next generation in this model better so that they can get jobs when they graduate.

We are in a generational technology shift – web-based applications are very different from the systems that have come before them. The elasticity of open source licenses make them the perfect fit for these new modern architectures and comes naturally to most developers. Open source projects are driving the formation of groups of people who know how to work collaboratively successfully.

The post ATO2014: Open Source – The Key Component of Modern Applications appeared first on What I Learned Today....

Related posts:

  1. ATO2014: Easing into open source
  2. ATO2014: Building a premier storytelling platform on open source
  3. Evaluating Open Source

LITA: Women Learning to Code

Wed, 2014-10-22 13:00

I am a user of technology much more than a creator. After I completed a master’s in educational technology, I knew that to better use the skills I had learned it would benefit me to gain a better understanding of computer coding. My HTML skills were adequate but rusty, and I didn’t have any experience with other languages. To increase these skills I really did not want to take another for-credit course, but I also knew that I would have a better learning experience if I had someone of whom I could ask questions. Around this time, I was made aware of Girl Develop It. I have attended a few meetings and truly appreciate the instruction and the opportunity to learn new skills. As a way to introduce Girl Develop It to readers of the LITA blog who might be interested in adding to their skill set, I interviewed Michelle Brush and Denisse Osorio de Large, the leaders of my local Girl Develop It group.

What is Girl Develop It?

MB: Girl Develop It is a national nonprofit organization dedicated to bringing more women into technology by offering educational and network-building opportunities.

DL: Girl Develop It is a nonprofit organization that exists to provide affordable and accessible programs to women who want to learn web and software development through mentorship and hands-on instruction.

What sparked your interest in leading a Girl Develop It group?

MB: I attended Strange Loop where Jen Myers spoke and mentioned her involvement in Girl Develop It.   Then several friends reached out to me about wanting to do more for women in tech in Kansas City, so we decided to propose a chapter in Kansas City.

DL: Growing up, my mom told me my inheritance was my education, and that my education was something no one would ever be able to take away from me. My education has allowed me to have a plentiful life; I wanted to pay it forward, and this organization allowed me to do just that. I’m also the proud mom of two little girls and I want to be a good example for them.

What is your favorite thing about working in the technology industry?

MB: Software can be like magic.  You can build very useful and sometimes beautiful things from a pile of keywords and numbers.  It’s also very challenging, so you get the same joy when your code works that you do when solving a really hard math problem.

DL: I love the idea of helping to create things that don’t exist and solving problems that no one else has solved. The thought of making things better drives me.

Why do you believe more women should be working in information technology?

MB: If we can get women involved at the same percentages as we have men, we would solve our skills gap.  It also helps that women bring a different perspective to the work.

DL: The industry as a whole will benefit from the perspective of a more diverse workforce. Also, this industry has the ability to provide a safe and stable environment where females can thrive and make a good living.

Are there other ways communities can be supportive of women entering the information technology industry?

MB: We need more visibility for the women already in the industry, as that will make other women recognize they can be successful in the community as well. Partly it’s on women like me to seek out opportunities to be more visible, but it’s also on the community to remember to look outside of the usual suspects when looking for speakers, mentors, etc. It’s too easy to keep returning to the names you already know. Conferences like Strange Loop are making strides in this area.

DL: I believe it starts with young girls and encouraging and nurturing their interest in STEM. It is very important that members of the community provide opportunities for girls to find their passion in the field of their choice.

Are any of you reading the LITA blog involved with Girl Develop It? I’d love to hear your stories!

Open Knowledge Foundation: Building Community Action at Mozilla Festival 2014

Wed, 2014-10-22 11:49

Community is often thought of as a soft topic. In reality, being part of a community (or more than one!) is admirable and a wonderful effort, both very fun and sometimes tough. Building and mobilising community action requires expertise and an understanding of both tools and crowds – all relationships between the stakeholders involved need to be planned with inclusivity and sustainability in mind.

This year Mozilla Festival (London, October 24-26), an event we always find very inspiring to collaborate with, will feature a track focusing on all this and more. Called Community Building, and co-wrangled by me and Bekka Kahn (P2PU / Open Coalition), the track has the ambitious aim to tell the story about this powerful and groundbreaking system, create the space where both newcomers and experienced community members can meet, share knowledge, learn from each other, get inspired and leave the festival feeling empowered and equipped with a plan for their next action, of any size and shape, to fuel the values they believe in.

We believe that collaboration between communities is what can really fuel the future of the Open Web movement, and we put this belief into practice, from our curatorship structure (we come from different organisations and are loving the chance to work together closely for the occasion) to the planning of the track’s programme, which is a combination of great ideas sent through the festival’s Call for Proposals and invitations we made to folks we knew would have the ability to blow people’s minds with 60 minutes and a box of paper and markers at their disposal.

The track has two narrative arcs connecting all its elements: one focusing on the topics to be unpacked by each session, from gathering to organising and mobilising community power, and one aiming to embrace all the learnings from the track to empower us all, as members of communities, to take action for change.

The track will feature participatory sessions (there’s no projector in sight!), an ongoing wall-space action and a handbook writing sprint. In addition, some wonderful allies – Webmaker Mentors, Mozilla Reps and the Space Wranglers team – will help us make a question resonate all around the festival during the whole weekend: “What’s the next action, of any kind/size/location, you plan to take for the Open Web movement?”. Participants in our track, passers-by feeding our wall action, and folks talking with our allies will be encouraged to think about their answer and, if not before, join our space for our Closing Circle on Sunday afternoon, when we’ll all share with each other our plans for the next step, local or global, online or offline, that we want to take.

Furthermore, we invite folks who won’t be able to join us at the event to get in touch with us, learn more about what we’re making and collaborate with us if they wish. Events can be an exclusive affair (they require time and funds to attend) and we want to try to overcome this obstacle. Anyone is welcome to connect with us in (at least) three ways. We’ll have a dedicated hashtag to keep all online/remote Community conversations going: follow and engage with #MozFestCB on your social media platform of choice, and we’ll record a curated version of the feed on our Storify. We’ll also collect all notes and resources documenting anything that happens in and around the track on our online home. The work to create a much-awaited Community Building Handbook will be kicked off at MozFest, and anyone who thinks they could enrich it with useful learnings is invited to join the writing effort, from anywhere in the world.

If you’d like to get a head start on MozFest this year and spend some time with other open knowledge community minded folks, please join our community meetup on Friday evening in London.

In the Library, With the Lead Pipe: Using Animated GIF Images for Library Instruction

Wed, 2014-10-22 10:30


In Brief

This article discusses the changing nature of animated Graphics Interchange Format images (GIFs) as a form of visual communication on the Web, and how that can be adapted for the purposes of information literacy and library instruction. GIFs can be displayed simultaneously as a sequence of comic-book-like panels, allowing for a bird’s-eye view of all the steps of a process, viewing and reviewing steps as needed without having to rewind or replay an entire video. I discuss tools and practical considerations as well as limitations and constraints.

Introduction and Background

Animated GIFs are “a series of GIF files saved as one large file. Animated GIFs…provide short animations that typically repeat as long as the GIF is being displayed.” (High Definition) Animated GIFs were at one point one of the few options available for adding video-like elements to a web page. As web design aesthetics matured and digital video recording, editing, playback and bandwidth became more affordable and feasible, the animated GIF joined the blink tag and comic sans font as the gold, silver, and bronze medals for making a site look like it was ready to party like it’s 1999.
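Since the GIF format itself comes up repeatedly below (file sizes, looping, optimization), a concrete look at its byte layout may help. The sketch below is purely illustrative and not part of the original article: it hand-assembles a minimal one-pixel GIF89a file (a common placeholder image) and reads the dimensions back out of the logical screen descriptor, using only the Python standard library.

```python
import struct

# A minimal, hand-assembled 1x1 GIF89a file.
MINIMAL_GIF = (
    b"GIF89a"                      # header: signature + version
    b"\x01\x00\x01\x00"            # logical screen: width=1, height=1 (little-endian)
    b"\x80\x00\x00"                # packed flags (2-entry color table), background, aspect
    b"\x00\x00\x00\xff\xff\xff"    # global color table: black, white
    b"\x2c\x00\x00\x00\x00"        # image descriptor: left=0, top=0
    b"\x01\x00\x01\x00\x00"        # image width=1, height=1, no local color table
    b"\x02\x02\x44\x01\x00"        # LZW minimum code size + compressed pixel data
    b"\x3b"                        # trailer
)

def gif_dimensions(data: bytes):
    """Return (width, height) from a GIF's logical screen descriptor."""
    if data[:6] not in (b"GIF87a", b"GIF89a"):
        raise ValueError("not a GIF file")
    # Width and height are little-endian 16-bit integers at offsets 6-9.
    return struct.unpack("<HH", data[6:10])

print(gif_dimensions(MINIMAL_GIF))  # (1, 1)
```

The same ten leading bytes are present in every GIF, animated or not; an animated GIF simply repeats the image-descriptor-plus-data section once per frame before the trailer.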

Even so, services like MySpace and fresh waves of web neophytes establishing a personal online space allowed the animated GIF to soldier on. Typically used purely for decoration without any particular function, and sometimes funny at first but less so with each subsequent viewing (like bumper stickers), animated GIFs ranged from benign to prodigiously distracting, best exemplified by that rococo entity: the sparkly unicorn:1

To be fair, some sites used animated GIFs with specific purposes, such as an early version of an American Sign Language site that used animated GIFs to demonstrate signing of individual words.2 As the web continued to evolve and function began to catch up with form, the animated GIF began to fade from the scene, especially with the advent of comparably fast-loading and high-resolution streaming video formats such as Quicktime and RealVideo. Flash, in conjunction with the rise of YouTube, established a de facto standard for video on the web for a time. In turn, with the ongoing adoption of HTML5 standards and the meteoric rise of mobile devices and their particular needs with regards to video formats, the web content landscape continues to develop and change.

I had personally written off the animated GIF as a footnote in early web history, until the last few years when I noticed them cropping up again with regularity. My initial reaction was ‘great, I’m officially old enough to see the first wave of web retro nostalgia’, but I began to notice some differences: instead of being images that simply waved their arms for attention, this new generation of animated GIFs often sketched out some sort of narrative: telling a joke, or riffing on a meme, such as the following:

This example combines an existing visual meme as a ‘punchline’ with clips from scenes in two different movies (Everything is Illuminated and Lord of the Rings) that pivot on two points of commonality: Elijah Wood and potatoes. I should note that when I first created this GIF, it was in ‘stacked’ format, or one continuous GIF, to give the ‘punchline’ more impact, but I separated them here in keeping with the spirit of the article’s topic. Further thoughts and observations on the curious persistence and evolution of GIFs as a popular culture entity are discussed in this 2013 Wired article: The Animated GIF: Still Looping After All These Years.

Concepts and Rationale

At some point, an idea coalesced that a similar approach could be applied to instructional videos, specifically those supporting information literacy. Jokes and memes are, after all, stories of sorts, and information literacy instruction is too.

One initial attraction to exploring the use of animated GIFs was as an alternative to video. Given a choice between a video, even a short one, and some other media such as a series of captioned images or simple text, in most cases I will opt for the latter, especially if the subject matter demonstrates or explains how to do something. Some of this is merely personal preference, but I suspected others had the same inclination. In fact, a study by Mestre that compared the effectiveness of video vs. static images used for library tutorials indicated that participants had a disinclination to take the time to view instruction in video form. One participant comment in particular was interesting: “I think that a video tutorial really is only needed if you want to teach the complex things, but if it’s to illustrate simple information you don’t need to do it. In this case, a regular web page with added images and multimedia is all you need” (266). Furthermore, only five of twenty-one participants indicated a preference for video over static image tutorials, and of those five, two “admitted that although they preferred the screencast tutorial, they would probably choose the static tutorial if they actually needed to figure out how to do something” (270). Not only did the study show that students prefer not to watch videos, but students with a variety of learning style preferences were better able to complete or replicate demonstrated tasks when tutorials used a sequence of static images as compared to screencast videos (260).

Reflecting on why that might be, I arrived at the following considerations.

  1. Scope and scale: A group of pictures or block of text gives immediate feedback on how much information is being conveyed. The length of a video will give some indication of this, but at a greater level of abstraction.
  2. Sequence: Pictures and text have natural break points between steps of a process; the next picture, or a new paragraph or bullet point. This allows one to jump back to review an earlier step in the process, then move forward again in a way that is not disruptive to a train of thought. This is more difficult to do in video, especially if appropriate scene junctures are not built in with attendant navigation tools such as a click-able table of contents/scene list (i.e., you have to rewatch the video from the beginning to see step 3 again, or have a deft touch on the rewind/scrub bar). The Mestre study suggested that being able to quickly jump back to or review prior steps was important to participants (265).
  3. Seeing the forest and the trees: This involves the concept of closure as described by Scott McCloud in Understanding Comics: “the…phenomenon of observing the parts but perceiving the whole”(63). Judicious choice and arrangement of sequences can allow one to see both the individual steps of a process and get a sense of an overall concept in less physical and temporal space than either a video or a series of static images. The main challenge in applying this concept is determining natural breaking points in a process, analogous to structuring scenes and transitions in a video or deciding on panel layout and what happens ‘off-screen’ between panels. Does the sequence of GIFs need to be a video that is chopped into as many parts as there are steps, or are there logical groupings that can be combined in each GIF?
  4. Static and dynamic: This is where the animation factor comes into play. A series of animated GIFs allows for incorporating both the sequencing and closure components described above, while retaining some of the dynamic element of video. The static component involves several GIFs being displayed at once. This can be helpful for a multistep process where each step depends on properly executing the one before it, such as tying a bowtie. If you’re in the middle of one step, you can take in, at a glance, the previous or next step rather than waiting for the whole sequence to re-play. Depending on the complexity of the task, the simplification afforded by using several images compared to one can be subtle, but an analogy might be that it can make a task like hopping into an already spinning jump rope more like stepping onto an escalator—both tasks are daunting, but the latter markedly less so. The dynamic component involves how long and how much movement each image should include. A single or too few images, and you might as well stick with a video. Too many images and the process gets lost in a confusing array of too much information.

Using animated GIFs can also leverage existing content or tutorials. A sequence of GIFs can be generated from existing video tutorials. Conversely, the process of producing an efficient series of GIFs can also function as a storyboarding technique for making videos more concise and efficient or with appropriate annotation, selected individual frames of an animated GIF can be adapted to a series of static images for online use or physical handouts.

Animated GIFs might also be explored as an alternative instructional media where technological limitations are a consideration. For example, the area served by my library has a significant population that does not have access to broadband, and media that is downloaded and cached or saved locally might be more practical than streaming media. In terms of web technology, animated GIFs have been around a long time, but by the same token, are stable and widely supported and can be employed without special plugins or browser extensions. Once downloaded they may be viewed repeatedly without any further downloading or buffering times.

Applications, Practical Considerations, and Tools

In the section below I’ll discuss two specific examples I created of brief library tutorials using animated GIFs. The raw materials for creating the GIFs consisted of video footage recorded on an iPhone, video screen capture, and still images.

The first example of using this format is a page featuring four variants of instructions for renewing a book online. To some extent, the versions represent different approaches to implementing the concept, but probably more pointedly represent the process of trial and error in finding a workable approach. Notice: if the ‘different cloned versions of Ripley’ scene from Alien Resurrection3 disturbed you, you might want to proceed with caution (mostly kidding, mostly). I tried different sizes, arrangements and numbers of images. For the specific purpose here, three images seemed to strike a good balance between cramming too many steps into one segment and blinking visual overload.

The second example sticks with the three-image approach, demonstrating how to track a call number from the catalog to a physical shelf location. The images produced in this example were very large, as much as 6MB. It is possible to shrink the file size by reducing the overall image size or optimizing the animated GIF. The optimized version is below the original. There is a distinct loss of image quality, but the critical information still seems to be retained; the text can still be read and the video is serviceable, although it has a certain ‘this is the scary part’ quality to it.

Creation of the two examples above revealed an assortment of practical considerations for and constraints of the animated GIF format. Animated GIF file sizes aren’t inherently smaller than video, especially streaming video. One advantage the animated GIF format has, as mentioned above, is that aside from not needing special plugins or extensions, they can be set to loop after downloading with no further user intervention or downloading of data. This facilitates the use of a series of moving images that illustrate steps that happen in sequence and can be parsed back and forth as necessary. This also helps in breaking up a single large video sequence into chunks of manageable size.
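The loop-after-downloading behaviour described above lives inside the GIF file itself, as a Netscape application extension block. As a rough, stdlib-only sketch (the byte layout shown is the standard NETSCAPE2.0 extension, not anything specific to the tutorials discussed here), one can check whether a GIF declares looping:

```python
def loops_forever(data: bytes) -> bool:
    """Heuristic check: does this GIF carry the Netscape looping extension?

    Animated GIFs that repeat embed an application extension block whose
    identifier is 'NETSCAPE2.0'; a loop count of 0 inside it means 'loop
    forever'. A plain scan for the identifier is enough for a rough check.
    """
    return data.startswith((b"GIF87a", b"GIF89a")) and b"NETSCAPE2.0" in data

# The application extension as it appears in a looping GIF:
#   0x21 0xFF 0x0B 'NETSCAPE2.0' 0x03 0x01 <loop count, 2 bytes> 0x00
looping_fragment = b"GIF89a" + b"\x21\xff\x0bNETSCAPE2.0\x03\x01\x00\x00\x00"
static_fragment = b"GIF89a" + b"\x2c"  # goes straight to an image descriptor

print(loops_forever(looping_fragment))  # True
print(loops_forever(static_fragment))   # False
```

Because the loop instruction is baked into the file, no player-side scripting is needed: once the browser has the bytes, the animation repeats indefinitely without further downloads.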

Depending on the task at hand, the usefulness of the animation factor can range from clarifying steps that might be difficult to grasp in one long sequence of static images (the bowtie example) to simply adding some visual interest or sense of forward propulsion to the demonstration of a process (the climbing the stairs example).

For some topics, it’s a fine-line judgment call as to whether animated GIFs would add any clarity, or whether a few thoughtfully annotated screen shots would serve. While looking at non-library-related examples, I found some demonstrations of variations on tying your shoe, illustrated either with static images or with a single GIF demonstrating all of the steps. I found one method to be learnable from the static images, and I now regularly use it, tying my shoes one or two times a day instead of ten or twenty. A second, more complex method was harder for me to grasp; between the complexity of the task, the number of images needed to illustrate the steps (which were displayed vertically, requiring scrolling to see them all), and the fact that it’s hard to scroll through images while holding shoelaces, I gave up. I also found it difficult to keep track of the steps with the single animated GIF. I can’t help but wonder if using several animated GIFs instead of one for the entire process might have tipped the balance there.

In terms of tools, there is a variety of software that can get the task done. The examples above, including the mashup of Everything is Illuminated / Lord of the Rings, were done using Camtasia Studio versions 4 and 8 (a newer version became available to me whilst writing this article). The GIF optimization was done with Jasc Animation Shop v.2, which has been around at least fifteen years, but proved useful in reducing the file size of some of the example animated GIFs by nearly half.

Camtasia Studio is not terribly expensive, is available for Mac and Windows, and has some very useful annotation and production tools, but there are also freely-available programs that can be used to achieve similar results. A few Windows examples that I have personally used/tried:4

  • Screen capture: Jing and Hypercam.
  • Scene selection and excerpting: Free Video Slicer.
    • VLC is another option and is available on Mac and Linux as well. There is a Lifehacker article that details how to record a section of video.
  • Video to GIF conversion: Free Video to Gif Converter .
  • Captioning: Windows Movie Maker
    • The captioning in Camtasia and Movie Maker is a nice feature, but it should be noted that conversion to GIF removes any ADA compliance functionality of closed captions. An alternative is to simply caption each animated GIF with html text under each image. An inference can be drawn from the Mestre study that a bit of daylight between the visual and the textual information might actually be beneficial (268).

Some cursory web searching indicates that there are a variety, yea—even a plethora, of additional tools available; web-based and standalone programs, freeware, shareware and commercial.

Discussion and Where Next

The example information literacy GIFs discussed above both deal with very straightforward processes that are very task oriented. Initial impressions suggest that using animated GIFs for instruction would have a fairly narrow scope for usefulness, but within those parameters it could be a good alternative, or even the most effective approach. Areas for further exploration include using this approach for more abstract ideas, such as intellectual property issues, that could draw more upon the narrative power of sequential images. Conversely animated GIFs could serve to illuminate even more specific library-related processes and tasks (e.g.: how to use a photocopier or self checkout station.) Another unknown aspect is assessment and effectiveness. Since I assembled the examples used, I was naturally very familiar with the processes and it would be helpful to have data on whether this is a useful or effective method from an end user’s perspective.

The Mestre study made a fairly strong case that static images were more effective than video for instruction in basic tasks, and that the sequentiality of the images was an important component of that advantage (260, 265, 270). One aspect that warrants further investigation is whether the dynamic aspects of animated GIFs would add to the advantage of a sequence of images, whether the movement would detract from the effectiveness of purely static images, or whether they would provide a 'third way' that draws on the strengths of the other two approaches to be even more effective than either.


In closing, I’d like to note that there is a peculiar gratification in finding a new application for a technology that’s been around at least as long as the Web itself. In reflecting on how the idea took shape, I find it interesting that it wasn’t a case of looking for a new way to deliver library instruction, but rather that observing the use of a technology for unrelated purposes led to the recognition that it could be adapted to a particular library-related need. I suppose the main idea I’d really like to communicate here is, to put it simply: be open to old ideas surprising you with new possibilities.

I would like to acknowledge the peer reviewers for this article, Ellie Collier and Paul Pival, and the Publishing Editor, Erin Dorney, for their kind support, invaluable insights, and unflagging assistance in transforming ideas, notes, thoughts, and first drafts into a fully realized article. Many thanks to you all!


“Animated Gif.” High Definition: A-z Guide to Personal Technology. Boston: Houghton Mifflin, 2006. Credo Reference. Web. 13 Oct. 2014.

McCloud, Scott. Understanding Comics. New York: Paradox Press, 2000. Print.

Mestre, Lori S. “Student Preference for Tutorial Design: A Usability Study.” Reference Services Review 40.2 (2012): 258-76. ProQuest. Web. 26 Sep. 2014.



  1. source:
  2. I was able to dig up the original site; it seems to have moved on to using video clips, but a web oubliette still has examples of GIFs used to animate ASL signs.
  3. In one scene in Alien Resurrection, a cloned version of the main character discovers several ‘rough draft’ clones of herself, gruesomely malformed and existing in suffering.
  4. As a side note, I’m simply listing them, rather than providing direct links, erring on the side of caution on security matters, but I have personally downloaded and used all of the above with no issues that I’m aware of. They are all also easily findable via a web search.

DuraSpace News: Fedora 4.0 Update: Fall 2014 Outreach and Training

Wed, 2014-10-22 00:00

From Andrew Woods, Technical Lead for Fedora 

Winchester, MA: Members of the Fedora team have been engaged in outreach and training at multiple community events around the globe this fall.