
planet code4lib

Planet Code4Lib - http://planet.code4lib.org

LibUX: Does Google think Your Library is Mobile Friendly?

Thu, 2014-11-27 08:03

If your users are anything like mine, then

  • no one has your website bookmarked on their home-screen
  • your URL is kind of a pain to tap out

and consequently inquiries about business hours and location start not on your homepage but in a search bar. As of last Tuesday (November 18th), searchers on a mobile device are given a heads-up that this or that website is “mobile friendly.” Since we know how picky mobile users are (spoiler: very), we should assume that sooner rather than later users will skip search results for websites that aren’t tailored to their screen. A mobile-friendly result looks like this:

The criteria from the announcement are that the website

  • Avoids software that is not common on mobile devices, like Flash
  • Uses text that is readable without zooming
  • Sizes content to the screen so users don’t have to scroll horizontally or zoom
  • Places links far enough apart so that the correct one can be easily tapped

and we should be grateful that this is low-hanging fruit. The implication that a website is not mobile friendly will certainly deter clickthroughs, which for public libraries especially may have larger repercussions.

Your website has just 2 seconds to load at your patron’s point of need before a certain percentage will give up. This may literally affect your foot traffic. Rather than chance the library being closed, your patron might just change plans. Mobile Users Are Demanding

You can test if your site meets Googlebot’s standards. Here’s how the little guy sees the New York Public Library:
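If you would rather check a batch of pages than paste URLs into the browser tool one at a time, something like the following can do it programmatically. This is a minimal sketch, not from the original post: it assumes Google’s PageSpeed Insights REST API (the v5 endpoint and its url/strategy query parameters), and the response fields vary between API versions, so inspect the JSON before deciding which numbers to track.

# Minimal sketch (assumptions noted above): ask the PageSpeed Insights API
# for a mobile analysis of a page and dump the top-level response keys.
import json
import urllib.parse
import urllib.request

ENDPOINT = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"

def mobile_report(page_url):
    """Fetch a mobile-strategy PageSpeed report for page_url."""
    query = urllib.parse.urlencode({"url": page_url, "strategy": "mobile"})
    with urllib.request.urlopen(ENDPOINT + "?" + query) as response:
        return json.load(response)

if __name__ == "__main__":
    report = mobile_report("https://www.nypl.org/")
    print(sorted(report.keys()))

Run it against your own homepage and compare the output with what the manual mobile-friendly test reports.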

Cue opportunistic tangent about pop-ups

On an unrelated note, the NYPL is probably missing out on more donations than they get through that pop-up. People hate pop-ups, viscerally.

Users not only dislike pop-ups, they transfer their dislike to the advertisers behind the ad and to the website that exposed them to it. In a survey of 18,808 users, more than 50% reported that a pop-up ad affected their opinion of the advertiser very negatively and nearly 40% reported that it affected their opinion of the website very negatively. The Most Hated Advertising Techniques

And, in these circumstances, the advertiser is the library itself. ( O_o )

At least Googlebot thinks they’re mobile friendly.

The post Does Google think Your Library is Mobile Friendly? appeared first on LibUX.


HangingTogether: “Managing Monsters”? Academics and Assessment

Wed, 2014-11-26 21:17

Recently in the London Review of Books Marina Warner explained why she quit her post at the University of Essex. I found it a shocking essay. Warner was pushed out because she is chairing the Booker Prize committee this year, in addition to delivering guest lectures at Oxford. (If those lectures are anything like Managing Monsters (1994), they will probably change the world.) Warner’s work – as a creative writer, scholar, public intellectual – does not count in the mechanics of assessment, which includes both publishing and teaching.

Warner opens her LRB essay with the library at Essex as the emblem of the university: “New brutalism! Rarely seen any so pure.” I don’t want to make light of the beautifully-written article, which traces changes over time in the illustrious and radical reputation of the University of Essex since it was founded in the 60s. Originally Warner had enthusiastic support, which later waned when a new vice-chancellor muttered, “These REF stars – they don’t earn their keep.”

Warner’s is just the latest high-profile critique about interference in research by funders and university administrators.  The funniest I’ve read is a “modest proposal” memo mandating university-wide use of research assessment tools that have acronyms such as Stupid, Crap, Mess, Waste, Pablum, and Screwed.

I have been following researchers’ opinions about management of information about research ever since John MacColl synthesized assessment regimes in five countries. This past spring John sent me an opinion piece from the Times Higher in which the author, a REF coordinator himself, despairs about the damage done by years of assessment to women’s academic careers, to morale, to creativity, and to education and research. During my visits to the worlds of digital scholarship, I invariably hear of the failure of assessment regimes for the humanities, the digital humanities, digital scholarship, and e-research.

I figure it is high time I post another excerpt from my synthesis of user studies about managing research information. I prepared most of this post a year ago, when I was pondering the fraught politics (and ethics) of libraries’ contributions to research information management systems (RIMs). (Lorcan recently parsed RIM services.)

So here goes:

Alignment with the mission of one’s institution is not a black-and-white exercise. I believe that research libraries must think carefully about how they choose to ally themselves with their own researchers, academic administrations, and national funding agencies. If we are calibrating our library services – for new knowledge and higher education – to rankings and league tables, I certainly hope that we are reading the journals that publish those rankings, especially articles written by the same academics we want to support.

An editorial blog post for the Chronicle of Higher Education is titled, provocatively, “A Machiavellian Guide to Destroying Public Universities in 12 Easy Steps.” The fifth step is assessment regimes:

(5) Put into place various “oversight instruments,” such as quality-assessment exercises, “outcome matrices,” or auditing mechanisms, to assure “transparency” and “accountability” to “stakeholders.” You might try using research-assessment exercises such as those in Britain or Australia, or cheaper and cruder measures like Texas A&M’s, by simply publishing a cost/benefit analysis of faculty members.

This reminded me of a similar cri de coeur a few years ago in the New York Review of Books. In “The Grim Threat to British Universities,” Simon Head warned about applying a (US) business-style “bureaucratic control” – performance indicators, metrics, and measurement of outputs, etc. – to scholarship, especially science. Researchers often feel that administrators have no idea what research entails, and often for a good reason. For example, Warner’s executive dean for the humanities is a “young lawyer specialising in housing.”

A consistent theme in user studies with researchers is the sizeable gulf between what they use and desire and the kinds of support services that libraries and universities offer.[1] A typical case study in the life sciences, for example, concludes that there is a “significant gap” between researchers’ use of information and the strategies of funders and policy-makers.[2] In particular, researchers consider libraries unlikely to play a desirable role supporting research.[3]

Our own RIN and OCLC Research studies interviewing researchers reveal that libraries offering to manage research information seems “orthogonal, and at worst irrelevant,” to the needs of researchers.[4] One of the trends that stands out is oversight: researchers require autonomy, so procedures mandated in a top-down fashion are mostly perceived as intrusive and unnecessary.

Librarians and administrators need to respect networks of trust between researchers. In particular, researchers may resist advice from the Research Office or any other internal agency removed from the colleagues they work with.[5]

Researchers feel that their job is to do research. They begrudge any time spent on activities that serve administrative purposes.[6] A heavy-handed approach to participation in research information management is unpopular and can backfire.[7] In some cases, mandates and requirements – such as national assessment regimes – become disincentives for researchers to improve methodologies or share their research.[8]

On occasion researchers have pushed back against such regimes. For example, in 2011, Australian scholars successfully quashed a journal-ranking system used for assessment. The academics objected that such a flawed “blunt instrument” for evaluating individuals ranked journals by crude criteria rather than by professional respect.[9]

Warner – like many humanists I have met – calls for a remedy that research libraries could provide. “By the end of 2013, all the evidence had been gathered, and the inventory of our publications fought over, recast and finally sent off to be assessed by panels of peers… A scholar whose works are left out of the tally is marked for assisted dying.” Librarians can improve information about those “works left out,” or get the attributions right.

But assisted dying? Yikes. At our June meeting in Amsterdam on Supporting Change/Changing Support, Paul Wouters gave a thoughtful warning of the “seduction” of measurements, such as the trendy quantified self. Wouters gave citation analysis as an example of a measure that is necessarily backward-looking and disadvantages some domains. “You can’t see everything in publications.” Wouters pointed out that assessment is a bit “close to the skin” for academics, and that libraries might not want to “torment their researchers,” inadvertently making an honest mistake that could influence or harm careers.

Just because we can, we might consider whether we should, and when, and how. The politics of choosing to participate in expertise profiling and research assessment regimes potentially have consequences for research libraries that are trying to win the trust of their faculty members.

References beyond embedded links:

[1] pp. 4, 70 in Sheridan Brown and Alma Swan (i.e. Key Perspectives). 2007. Researchers’ use of academic libraries and their services. London: RIN (Research Information Network)/CURL (Consortium of Research Libraries). http://www.rin.ac.uk/our-work/using-and-accessing-information-resources/researchers-use-academic-libraries-and-their-serv

[2] pp. 5-6 in Robin Williams and Graham Pryor. 2009. Patterns of information use and exchange: case studies of researchers in the life sciences. London: RIN and the British Library. http://www.rin.ac.uk/our-work/using-and-accessing-information-resources/patterns-information-use-and-exchange-case-studie

[3] Brown and Swan 2007, p. 4.

[4] p. 6 in John MacColl and Michael Jubb. 2011. Supporting research: environments, administration and libraries. Dublin, Ohio: OCLC Research and London: Research Information Network (RIN). http://www.oclc.org/research/publications/library/2011/2011-10.pdf

[5] p. 10 in Research Information Network (RIN). 2010. Research support services in selected UK universities. London: RIN. http://www.rin.ac.uk/system/files/attachments/Research_Support_Services_in_UK_Universities_report_for_screen.pdf

[6] MacColl and Jubb 2011, pp. 3-4.

[7] pp. 12-13 in Martin Feijen. 2011. What researchers want: A literature study of researchers’ requirements with respect to storage and access to research data. Utrecht: SURFfoundation. http://www.surf.nl/nl/publicaties/Documents/What_researchers_want.pdf. p. 56 in Elizabeth Jordan, Andrew Hunter, Becky Seale, Andrew Thomas and Ruth Levitt. 2011. Information handling in collaborative research: an exploration of five case studies. London: RIN and the BL. http://www.rin.ac.uk/our-work/using-and-accessing-information-resources/collaborative-research-case-studies. MacColl and Jubb 2011, p. 6.

[8] p. 53 in Robin Williams and Graham Pryor. 2009. Patterns of information use and exchange: case studies of researchers in the life sciences. London: RIN and the British Library. http://www.rin.ac.uk/our-work/using-and-accessing-information-resources/patterns-information-use-and-exchange-case-studie

[9] Jennifer Howard. 2011 (June 1). “Journal-ranking system gets dumped after scholars complain.” Chronicle of higher education. http://chronicle.com/article/Journal-Ranking-System-Gets/127737/

 

About Jennifer Schaffner

Jennifer Schaffner is a Program Officer with the OCLC Research Library Partnership. She works with the rare books, manuscripts and archives communities. She joined RLG/OCLC Research in August of 2007.


DPLA: Order Up: 10 Thanksgiving Menu Inspirations

Wed, 2014-11-26 19:23

With Thanksgiving just a day away, the heat’s turned up for the perfect kitchen creation. Whether you’re the one cooking the turkey, or are just in charge of expertly arranging the table napkins, creating the perfect Thanksgiving meal is a big responsibility. Take some cues from these Thanksgiving dinner menus from hotels and restaurants across the country, from The New York Public Library.

  • Gramercy Park Hotel, NY, 1955
  • Metropole Hotel, Fargo, ND, 1898
  • The New Yorker at Terrace Restaurant, NY, 1930
  • Briggs House, Chicago, IL, 1899
  • Normandie Café, Detroit, MI, 1905
  • Hotel De Dijon, France, 1881
  • M.F. Lyons Dining Rooms, NY, 1906
  • L’Aiglon, NY, 1947
  • Hotel Roanoke, Roanoke, VA, 1899
  • The Waldorf Astoria, NY, 1961

Library of Congress: The Signal: Collecting and Preserving Digital Art: Interview with Richard Rinehart and Jon Ippolito

Wed, 2014-11-26 17:54

Jon Ippolito, Professor of New Media at the University of Maine

As artists have embraced a range of new media and forms over the last century, the work of collecting, conserving and exhibiting these works has become increasingly complex and challenging. In this space, Richard Rinehart and Jon Ippolito have been working to develop and understand approaches to ensure long-term access to digital works. In this installment of our Insights Interview Series I discuss Richard and Jon’s new book, “Re-collection: Art, New Media, and Social Memory.” The book offers an articulation of their variable media approach to thinking about works of art. I am excited to take this opportunity to explore the issues the book raises about digital art in particular and a perspective on digital preservation and social memory more broadly.

Trevor: The book takes a rather broad view of “new media”; everything from works made of rubber, to CDs, art installations made of branches, arrangements of lighting, commercial video games and hacked variations of video games. For those unfamiliar with your work more broadly, could you tell us a bit about your perspective on how these hang together as new media? Further, given that the focus of our audience is digital preservation, could you give us a bit of context for what value thinking about various forms of non-digital variable new media art offer us for understanding digital works?

Richard Rinehart, Director of the Samek Art Museum at Bucknell University.

Richard: Our book does focus on the more precise and readily-understood definition of new media art as artworks that rely on digital electronic computation as essential and inextricable. The way we frame it is that these works are at the center of our discussion, but we also discuss works that exist at the periphery of this definition. For instance, many digital artworks are hybrid digital/physical works (e.g., robotic works) and so the discussion cannot be entirely contained in the bitstream.

We also discuss other non-traditional art forms–performance art, installation art–that are not as new as “new media” but are also not that old in the history of museum collecting. It is important to put digital art preservation in an historical context, but also some of the preservation challenges presented by these works are shared with and provide precedents for digital art. These precedents allow us to tap into previous solutions or at least a history of discussion around them that could inform or aid in preserving digital art. And, vice versa, solutions for preserving digital art may aid in preserving these other forms (not least of which is shifting museum practices). Lastly, we bring non-digital (but still non-traditional) art forms into the discussion because some of the preservation issues are technological and media-based (in which case digital is distinct) but some issues are also artistic and theoretical, and these issues are not necessarily limited to digital works.

Jon: Yeah, we felt digital preservation needed a broader lens. The recorded culture of the 20th century–celluloid, vinyl LPs, slides–is a historical anomaly that’s a misleading precedent for preserving digital artifacts. Computer scientist Jeff Rothenberg argues that even JPEGs and PDF documents are best thought of as applications that must be “run” to be accessed and shared. We should be looking at paradigms that are more contingent than static files if we want to forecast the needs of 21st-century heritage.

Casting a wider net can also help preservationists jettison our culture’s implicit metaphor of stony durability in favor of one of fluid adaptability. Think of a human record that has endured and most of us picture a chiseled slab of granite in the British Museum–even though oral histories in the Amazon and elsewhere have endured far longer. Indeed, Dragan Espenschied has pointed out cases in which clay tablets have survived longer than stone because of their adaptability: they were baked as is into new buildings, while the original carvings on stones were chiseled off to accommodate new inscriptions. So Richard and I believe digital preservationists can learn from media that thrive by reinterpretation and reuse.

Trevor: The book presents technology, institutions and law as three sources of problems for the conservation of variable media art and potentially as three sources of possible solutions. Briefly, what do you see as the most significant challenges and opportunities in these three areas? Further, are there any other areas you considered incorporating but ended up leaving out?

Jon: From technology, the biggest threat is how the feverish marketing of our techno-utopia masks the industry’s planned obsolescence. We can combat this by assigning every file on our hard drives and gadget on our shelves a presumptive lifespan, and leaving room in our budgets to replace them once that expiration date arrives.

From institutions, the biggest threat is that their fear of losing authenticity gets in the way of harnessing less controllable forms of cultural perseverance such as proliferative preservation. Instead of concentrating on the end products of culture, they should be nurturing the communities where it is birthed and finds meaning.

From the law, the threat is DRM, the DMCA, and other mechanisms that cut access to copyrighted works–for unlike analog artifacts, bits must be accessed frequently and openly to survive. Lawyers and rights holders should be looking beyond the simplistic dichotomy of copyright lockdown versus “information wants to be free” and toward models in which information requires care, as is the case for sacred knowledge in many indigenous cultures.

Other areas? Any in which innovative strategies of social memory are dismissed because of the desire to control–either out of greed (“we can make a buck off this!”) or fear (“culture will evaporate without priests to guard it!”).

Trevor: One of the central concepts early in the book is “social memory”; in fact, the term makes its way into the title of the book. Given its centrality, could you briefly explain the concept and discuss how this framework for thinking about the past changes or upsets other theoretical perspectives on history and memory that underpin work in preservation and conservation?

Richard: Social memory is the long-term memory of societies. It’s how civilizations persist from year to year or century to century. It’s one of the core functions of museums and libraries and the purpose of preservation. It might alternately be called “cultural heritage,” patrimony, etc. But the specific concept of social memory is useful for the purpose of our book because there is a body of literature around it and because it positions this function as an active social dynamic rather than a passive state (cultural heritage, for instance, sounds pretty frozen). It was important to understand social memory as a series of actions that take place in the real world every day as that then helps us to make museum and preservation practices tangible and tractable.

The reason to bring up social memory in the first place is to gain a bit of distance on the problem of preserving digital art. Digital preservation is so urgent that most discussions (perhaps rightfully) leap right to technical issues and problem-solving. But, in order to effect the necessary large-scale and long-term changes in, say, museum practices, standards and policies we need to understand the larger context and historic assumptions behind current practices. Museums (and every cultural heritage institution) are not just stubborn; they do things a certain way for a reason. To convince them to change, we cannot just point at ad-hoc cases and technical problematics; we have to tie it to their core mission: social memory. The other reason to frame it this way is that new media really are challenging the functions of social memory; not just in museums, but across the board and here’s one level in which we can relate and share solutions.

These are some ways in which social memory allows us to approach preservation differently in the book, but here’s another, more specific one. We propose that social memory takes two forms: formal/canonical/institutional memory and informal/folkloric/personal memory (and every shade in between). We then suggest how the preservation of digital art may be aided by BOTH social memory functions.

Trevor: Many of the examples in the book focus on boundary-breaking installation art, like Flavin’s work with lighting, and conceptual art, like Nam June Paik’s work with televisions and signals, or Cory Arcangel’s interventions on Nintendo cartridges. Given that these works push the boundaries of their mediums, or focus in depth on some of the technical and physical properties of their mediums, do you feel like lessons learned from them apply directly to seemingly more standardized and conventional works in new media? For instance, mass-produced game cartridges or Flash animations and videos? To what extent are lessons learned about works largely intended to be exhibited as art in galleries and museums applicable to more everyday mass-produced and consumed works?

Richard: That’s a very interesting question and it speaks to our premise that preserving digital art is but one form of social memory and that lessons learned therein may benefit other areas. I often feel that preserving digital art is useful for other preservation efforts because it provides an extreme case. Artists (and the art world) ensure that their media creations are about as complex as you’ll likely find; not necessarily technically (although some are technically complex and there are other complexities introduced in their non-standard use of technologies) but because what artists do is to complicate the work at every level–conceptually, phenomenologically, socially, technically; they think very specifically about the relationship between media and meaning and then they manifest those ideas in the digital object.

I fully understand that preserving artworks does not mean trying to capture or preserve the meaning of those objects (an impossible task) but these considerations must come into play when preserving art even at a material level; especially in fungible digital media. So, for just one example, preserving digital artworks will tell us a lot about HCI considerations that attend preserving other types of interactive digital objects.

Jon: Working in digital preservation also means being a bit of a futurist, especially in an age when the procession from medium to medium is so rapid and inexorable. And precisely because they play with the technical possibilities of media, today’s artists are often society’s earliest adopters. My 2006 book with Joline Blais, “At the Edge of Art,” is full of examples: how Google Earth came from Art+Com, Wikileaks from Antoni Muntadas, and gestural interfaces from Ben Fry and Casey Reas. Whether your metaphor for art is antennae (Ezra Pound) or antibodies (Blais), if you pay attention to artists you’ll get a sneak peek over the horizon.

Trevor: Richard suggests that variability, not fixity, is the defining feature of digital media, and beyond this that conservators should move away from “outdated notions of fixity.” Given the importance of the concept of fixity in digital preservation circles, could you unpack this a bit for us? While digital objects do indeed execute and perform, the fact that I can run a fixity check and confirm that this copy of the digital object is identical to what it was before seems to be an incredibly powerful and useful component of ensuring long-term access to them. Given that, based on the nature of digital objects, we can actually ensure fixity in a way we never could with analog artifacts, this idea of distancing ourselves from fixity seemed strange.

Richard: You hit the nail on the head with that last sentence; and we’re hitting a little bit of a semantic wall here as well–fixity as used in computer science and certain digital preservation circles does not quite have the same meaning as when used in lay text or in the context of traditional object-based museum preservation. I was using fixity in the latter sense (as the first book on this topic, we wrote for a lay audience and across professional fields as much as possible.) Your last thought compares the uses of “fixity” as checks between analog media (electronic, reproducible; film, tape, or vinyl) compared to digital media, but in the book I was comparing fixity as applied to a different class of analog objects (physical; marble, bronze, paint) compared to digital objects.

If we step back from the professional jargon for a moment, I would characterize the traditional museological preservation approach for oil paintings and bronze sculptures to be one based on fixity. The kind of digital authentication that you are talking about is more like the scientific concept of repeatability; a concept based on consistency and reproduction–the opposite of fixity! I think the approach we outline in the book is in opposition to fixity of the marble-bust variety (as inappropriate for digital media) but very much in line with fixity as digital authentication (as one tool for guiding and balancing a certain level of change with a certain level of integrity). Jon may disagree here–in fact we built these dynamics of agreement/disagreement into our book too.
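The kind of digital authentication Richard refers to is, in the narrow technical sense, a digest comparison. A rough sketch follows; the file paths are illustrative, not from the interview.

# Rough sketch of a fixity check in the narrow sense discussed above:
# compute a cryptographic digest before and after a migration and compare.
# File paths here are illustrative only.
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

original = sha256_of("masters/clockwork_orange.mkv")
migrated = sha256_of("migrated/clockwork_orange.mkv")
print("fixity intact" if original == migrated else "bitstreams differ")

As the rest of the exchange makes clear, a matching digest only proves the bits are identical; it says nothing about whether the experience of the work survives.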

Jon: I’d like to be as open-minded as Richard. But I can’t, because I pull my hair out every time I hear another minion of cultural heritage fixated on fixity. Sure, it’s nifty that each digital file has a unique cryptographic signature we can confirm after each migration. The best thing about checksums is that they are straightforward, and many preservation tools (and even some operating systems) already incorporate such checks by default. But this seems to me a tiny sliver of a far bigger digital preservation problem, and to blow it out of proportion is to perpetuate the myth that mathematical replication is cultural preservation.

Two files with different passages of 1s and 0s automatically have different checksums but may still offer the same experience; for example, two copies of a digitized film may differ by a few frames but look identical to the human eye. The point of digitizing a Stanley Kubrick film isn’t to create a new mathematical artifact with its own unchanging properties, but to capture for future generations the experience us old timers had of watching his cinematic genius in celluloid. As a custodian of culture, my job isn’t to ensure my DVD of A Clockwork Orange is faithful to some technician’s choices when digitizing the film; it’s to ensure it’s faithful to Kubrick’s choices as a filmmaker.

Furthermore, there’s no guarantee that born-digital files with impeccable checksums will bear any relationship to the experience of an actual user. Engineer and preservationist Bruno Bachimont gives the example of an archivist who sets a Web spider loose on a website, only to have the website’s owners update it in the middle of the crawling process. (This happens more often than you might think.) Monthly checksums will give the archivist confidence that she’s archived that website, but in fact her WARC files do not correspond to any digital artifact that has ever existed in the real world. Her chimera is a perversion caused by the capturing process–like those smartphone panoramas of a dinner where the same waiter appears at both ends of the table.

As in nearly all storage-based solutions, fixity does little to help capture context.  We can run checksums on the Riverside “King Lear” till the cows come home, and it still won’t tell us that boys played women’s parts, or that Elizabethan actors spoke with rounded vowels that sound more like a contemporary American accent than the King’s English, or how each generation of performers has drawn on the previous for inspiration. Even on a manuscript level, a checksum will only validate one of many variations of a text that was in reality constantly mutating and evolving.

The context for software is a bit more cut-and-dried, and the professionals I know who use emulators like to have checksums to go with their disk images. But checksums don’t help us decide what resolution or pace they should run at, or what to do with past traces of previous interactions, or what other contemporaneous software currently taken for granted will need to be stored or emulated for a work to run in the future.

Finally, even emulation will only capture part of the behaviors necessary to reconstruct digital creations in the networked age, which can depend on custom interfaces, environmental data or networks. You can’t just go around checksumming wearable hardware or GPS receivers or Twitter networks; the software will have to mutate to accommodate future versions of those environments.

So for a curator to run regular tests on a movie’s fixity is like a zookeeper running regular tests on a tiger’s DNA. Just because the DNA tests the same doesn’t guarantee the tiger is healthy, and if you want the species to persist in the long term, you have to accept that the DNA of individuals is certainly going to change.

We need a more balanced approach. You want to fix a butterfly? Pin it to a wall. If you want to preserve a butterfly, support an ecosystem where it can live and evolve.

Trevor: The process of getting our ideas out on the page can often play a role in pushing them in new directions. Are there any things that you brought into working on the book that changed in the process of putting it together?

Richard: A book is certainly slow media; purposefully so. I think the main change I noticed was the ability to put our ideas around preservation practice into a larger context of institutional history and social memory functions. Our previous expressions in journal articles or conference presentations simply did not allow us time to do that and, as stated earlier, I feel that both are important in the full consideration of preservation.

Jon: When Richard first approached me about writing this book, I thought, well it’s gonna be pretty tedious because it seemed we would be writing mostly about our own projects. At the time I was only aware of a single emulation testbed in a museum, one software package for documenting opinions on future states of works, and no more conferences and cross-institutional initiatives on variable media preservation than I could count on one hand.

Fortunately, it took us long enough to get around to writing the book (I’ll take the blame for that) that we were able to discover and incorporate like-minded efforts cropping up across the institutional spectrum, from DOCAM and ZKM to Preserving Virtual Worlds and JSMESS. Even just learning how many art museums now incorporate something as straightforward as an artist’s questionnaire into their acquisition process! That was gratifying and led me to think we are all riding the crest of a wave that might bear the digital flotsam of today’s culture into the future.

Trevor: The book covers a lot of ground, focusing on a range of issues and offering myriad suggestions for how various stakeholders could play a role in ensuring access to variable media works into the future. In all of that, is there one message or issue in the work that you think is the most critical or central?

Richard: After expanding our ideas in a book, it’s difficult to come back to tweet format, but I’ll try…

Change will happen. Don’t resist it; use it, guide it. Let art breathe; it will tell you what it needs.

Jon: And don’t save documents in Microsoft Word.

Open Knowledge Foundation: Congratulations to the Panton Fellows 2013-2014

Wed, 2014-11-26 11:51

Samuel Moore, Rosie Graves and Peter Kraker are the 2013-2014 Open Knowledge Panton Fellows – tasked with experimenting, exploring and promoting open practices through their research over the last twelve months. They just posted their final reports, so we’d like to heartily congratulate them on an excellent job and summarise their highlights for the Open Knowledge community.

Over the last two years the Panton Fellowships have supported five early career researchers to further the aims of the Panton Principles for Open Data in Science alongside their day to day research. The provision of additional funding goes some way towards this aim, but a key benefit of the programme is boosting the visibility of the Fellows’ work within the open community and introducing them to like-minded researchers and others within the Open Knowledge network.

On stage at the Open Science Panel Vienna (Photo by FWF/APA-Fotoservice/Thomas Preiss)

Peter Kraker (full report) is a postdoctoral researcher at the Know-Centre in Graz and focused his fellowship work on two facets: open and transparent altmetrics and the promotion of open science in Austria and beyond. During his Fellowship Peter released the open source visualization Head Start, which gives scholars an overview of a research field based on relational information derived from altmetrics. Head Start continues to grow in functionality, has been incorporated into Open Knowledge Labs and is soon to be made available on a dedicated website funded by the fellowship.

Peter’s ultimate goal is to have an environment where everybody can create their own maps based on open knowledge and share them with the world. You are encouraged to contribute! In addition Peter has been highly active promoting open science, open access, altmetrics and reproducibility in Austria and beyond through events, presentations and prolific blogging, resulting in some great discussions generated on social media. He has also produced a German summary of open science activities every month and is currently involved in kick-starting a German-speaking open science group through the Austrian and German Open Knowledge local groups.

Rosie with an air quality monitor

Rosie Graves (full report) is a postdoctoral researcher at the University of Leicester and used her fellowship to develop an air quality sensing project in a primary school. This wasn’t always an easy ride: the sensor was successfully installed and an enthusiastic set of schoolchildren were on board, but a technical issue meant that data collection was cut short, so Rosie plans to resume in the New Year. Further collaborations on crowdsourcing and school involvement in atmospheric science were even more successful, including a pilot rain gauge measurement project and development of a cheap, open source air quality sensor which is sure to be of interest to other scientists around the Open Knowledge network and beyond. Rosie has enjoyed her Panton Fellowship year and was grateful for the support to pursue outreach and educational work:

“This fellowship has been a great opportunity for me to kick start a citizen science project … It also allowed me to attend conferences to discuss open data in air quality which received positive feedback from many colleagues.”

Samuel Moore (full report) is a doctoral researcher in the Centre for e-Research at King’s College London and successfully commissioned, crowdfunded and (nearly) published an open access book on open research data during his Panton Year: Issues in Open Research Data. The book is still in production but publication is due during November and we encourage everyone to take a look. This was a step towards addressing Sam’s assessment of the nascent state of open data in the humanities:

“The crucial thing now is to continue to reach out to the average researcher, highlighting the benefits that open data offers and ensuring that there is a stock of accessible resources offering practical advice to researchers on how to share their data.”

Another initiative Sam launched during the fellowship was the forthcoming Journal of Open Humanities Data with Ubiquity Press, which aims to incentivise data sharing through publication credit, in turn making data citable through usual academic paper citation practices. Ultimately the journal will help researchers share their data, recommending repositories and best practices in the field, and will also help them track the impact of their data through citations and altmetrics.

We believe it is vital to provide early career researchers with support to try new open approaches to scholarship and hope other organisations will take similar concrete steps to demonstrate the benefits and challenges of open science through positive action.

Finally, we’d like to thank the Computer and Communications Industry Association (CCIA) for their generosity in funding the 2013-14 Panton Fellowships.

This blog post is a cross-post from the Open Science blog; see the original here.

Hydra Project: Sufia 4.2.0 released

Wed, 2014-11-26 10:01

We are pleased to announce the release of Sufia 4.2.0.

This release of Sufia includes the ability to cache usage statistics in the application database, an accessibility fix, and a number of bug fixes. Thanks to Carolyn Cole, Michael Tribone, Adam Wead, Justin Coyne, and Mike Giarlo for their work on this release.

View the upgrade notes and a complete changelog on the release page: https://github.com/projecthydra/sufia/releases/tag/v4.2.0

LibUX: Who Uses Library Mobile Websites?

Wed, 2014-11-26 05:39

Almost every American owns a cell phone. More than half use a smartphone and sleep with it next to the bed. How many do you think visit their library website on their phone, and what do they do there? Heads up: this one’s totally America-centric.

Who uses library mobile websites?

Almost one in five (18%) Americans ages 16-29 have used a mobile device to visit a public library’s website or access library resources in the past 12 months, compared with 12% of those ages 30 and older. Younger Americans’ Library Habits and Expectations (2013)

If that seems anticlimactic, consider that just about every adult in the U.S. owns a cell phone, and almost every millennial in the country is using a smartphone. This is the demographic using library mobile websites, more than half of whom already have a library card.

In 2012, the Pew Internet and American Life Project found that library website users were often young, not poor, educated, and–maybe–moms or dads.

Those who are most likely to have visited library websites are parents of minors, women, those with college educations, those under age 50, and people living in households earning $75,000 or more.

This correlates with the demographics of smartphone owners for 2014.

What do they want?

This 2013 Pew report makes the point that while digital natives still really like print materials and the library as a physical space, a non-trivial number of them said that libraries should definitely move most library services online. Future-of-the-library blather is often painted in black and white, but it is naive to think physical–or even traditional–services are going away any time soon. Rather, there is already demand for complementary or analogous online services.

Literally. When asked, 45% of Americans ages 16 – 29 wanted “apps that would let them locate library materials within the library.” They also wanted a library-branded Redbox (44%), and an “app to access library services” (42%) – by app I am sure they mean a mobile-first, responsive web site. That’s what we mean here at #libux.

For more on this non-controversy, listen to our chat with Brian Pichman about web vs native.

Eons ago (2012), the non-mobile-specific breakdown of library web activities looked like this:

  • 82% searched the catalog
  • 72% looked for hours, location, directions, etc.
  • 62% put items on hold
  • 51% renewed them
  • 48% were interested in events and programs – especially old people
  • 44% did research
  • 30% sought readers’ advisory (book reviews or recommendations)
  • 30% paid fines (yikes)
  • 27% signed-up for library programs and events
  • 6% reserved a room

Still, young Americans are way more invested in libraries coordinating more closely with schools, offering literacy programs, and being more comfortable (chart). They want libraries to continue to be present in the community, do good, and have hipster decor – coffee helps.

Webbification is broadly expected, but it isn’t exactly a kudos subject. Offering comparable online services is necessary, like it is necessary that MS Word lets you save work. A library that doesn’t offer complementary or analogous online services isn’t buggy so much as it is just incomplete.

Take this away

The emphasis on the library as a physical space shouldn’t be shocking. The opportunity for the library as a hyper-locale specifically reflecting its community’s temperament isn’t one to overlook, especially for as long as libraries tally success by circulation numbers and foot traffic. The whole library-without-walls cliche that went hand-in-hand with all that Web 2.0 stuff tried to show off the library as it could be in the cloud, but “the library as physical space” isn’t the same as “the library as disconnected space.” The tangibility of the library is a feature to be exploited both for atmosphere and web services. “Getting lost in the stacks” can and should be relegated to just something people say rather than something that actually happens.

The main reason for library web traffic has been and continues to be to find content (82%) and how to get it (72%).

Bullet points
  • Mobile first: The library catalog, as well as basic information about the library, must be optimized for mobile
  • Streamline transactions: placing and removing holds, checking out, paying fines. There is a lot of opportunity here. Basic optimization of the OPAC and cart can go a long way, but you can even enable self checkout, library card registration using something like Facebook login, or payment through Apple Pay.
  • Be online: [duh] Offer every basic service available in person online
  • Improve in-house wayfinding through the web: think Google Indoor Maps
  • Exploit smartphone native services to anticipate context: location, as well as time-of-day, weather, etc., can be used to personalize service or contextually guess at the question the patron needs answered. “It’s 7 a.m. and cold outside, have a coffee on us.” – or even a simple “Yep. We’re open” on the front page (see the sketch after this list).
  • Market the good the library provides to the community to win support (or donations)
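Here is a rough sketch of that last contextual idea. It is not from the post, and the opening hours and wording are illustrative assumptions; it shows a small server-side helper that turns opening hours and the current time into a front-page message.

# Rough sketch (illustrative hours and wording) of a contextual front-page
# banner driven by opening hours and the current time.
from datetime import datetime
from typing import Optional

# weekday() index -> (opening hour, closing hour), 24-hour clock
HOURS = {0: (9, 21), 1: (9, 21), 2: (9, 21), 3: (9, 21),
         4: (9, 18), 5: (10, 17), 6: (12, 17)}

def banner(now: Optional[datetime] = None) -> str:
    """Return a short contextual message for the library homepage."""
    now = now or datetime.now()
    opens, closes = HOURS[now.weekday()]
    if opens <= now.hour < closes:
        return "Yep. We're open until %d:00 today." % closes
    if now.hour < opens:
        return "We open at %d:00 this morning." % opens
    return "We're closed for the day. See you tomorrow."

print(banner())

Location, weather, or due-date data could be layered in the same way once the simple time-of-day case works.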

The post Who Uses Library Mobile Websites? appeared first on LibUX.

FOSS4Lib Recent Releases: Sufia - 4.2.0

Tue, 2014-11-25 21:54
Package: Sufia. Release Date: Tuesday, November 25, 2014

Last updated November 25, 2014. Created by Peter Murray on November 25, 2014.

The 4.2.0 release of Sufia includes the ability to cache usage statistics in the application database, an accessibility fix, and a number of bug fixes.

Nicole Engard: Bookmarks for November 25, 2014

Tue, 2014-11-25 20:30

Today I found the following resources and bookmarked them:

  • PressForward A free and open-source software project launched in 2011, PressForward enables teams of researchers to aggregate, filter, and disseminate relevant scholarship using the popular WordPress web publishing platform. Just about anything available on the open web is fair game: traditional journal articles, conference papers, white papers, reports, scholarly blogs, and digital projects.

Digest powered by RSS Digest

The post Bookmarks for November 25, 2014 appeared first on What I Learned Today....


District Dispatch: CopyTalk: Free Copyright Webinar

Tue, 2014-11-25 19:48

Join us for our CopyTalk, our copyright webinar, on December 4 at 2pm Eastern Time. This installment of CopyTalk is entitled, “Introducing the Statement of Best Practices in Fair Use of Collections Containing Orphan Works for Libraries, Archives, and Other Memory Institutions”.

Peter Jaszi (American University, Washington College of Law) and David Hansen (UC Berkeley and UNC Chapel Hill) will introduce the “Statement of Best Practices in Fair Use of Collections Containing Orphan Works for Libraries, Archives, and Other Memory Institutions.” This Statement, the most recent community-developed best practices in fair use, is the result of intense discussion group meetings with over 150 librarians, archivists, and other memory institution professionals from around the United States to document and express their ideas about how to apply fair use to collections that contain orphan works, especially as memory institutions seek to digitize those collections and make them available online. The Statement outlines the fair use rationale for use of collections containing orphan works by memory institutions and identifies best practices for making assertions of fair use in preservation and access to those collections.

There is no need to pre-register! Just show up on December 4 at 2pm Eastern time. http://ala.adobeconnect.com/copyright/

The post CopyTalk: Free Copyright Webinar appeared first on District Dispatch.

DPLA: From the Book Patrol: A Parade of Thanksgiving Goodness

Tue, 2014-11-25 19:00

Did you know that over 2,400 items related to Thanksgiving reside at the DPLA? From Thanksgiving menus from hotels and restaurants across this great land to Thanksgiving postcards to images of the fortunate and less fortunate taking part in Thanksgiving day festivities.

Here’s just a taste of Thanksgiving at the Digital Public Library of America.

Enjoy and have a Happy Thanksgiving!

  • Thanksgiving Day, Raphael Tuck & Sons, 1907
  • Macy’s Thanksgiving Day Parade, 1932. Photograph by Alexander Alland
  • Japanese Internment Camp – Gila River Relocation Center, Rivers, Arizona. One of the floats in the Thanksgiving day Harvest Festival, 11/26/1942
  • Annual Presentation of Thanksgiving Turkey, 11/16/1967. Then-President Lyndon Baines Johnson presiding
  • A man with an axe in the midst of a flock of turkeys. Greenville, North Carolina, 1965
  • Woman carries Thanksgiving turkey at Thresher & Kelley Market, Faneuil Hall in Boston, 1952. Photograph by Leslie Jones
  • Thanksgiving Dinner Menu. Hotel Schenley, Pittsburgh, PA, 1900
  • More than 100 wounded Negro soldiers, sailors, marines and Coast Guardsmen were feted by The Equestriennes, a group of Government Girls, at an annual Thanksgiving dinner at Lucy D. Slowe Hall, Washington, D.C. Photograph by Helen Levitt, 1944
  • Volunteers of America Thanksgiving, 22 November 1956. Thanksgiving dinner line in front of Los Angeles Street Post door

District Dispatch: Have questions about WIOA?

Tue, 2014-11-25 18:24

To follow up on the October 27th webinar “$2.2 Billion Reasons to Pay Attention to WIOA,” the American Library Association (ALA) today releases a list of resources and tools that provide more information about the Workforce Innovation and Opportunity Act (WIOA). The Workforce Innovation and Opportunity Act allows public libraries to be considered additional One-Stop partners, prohibits federal supervision or control over selection of library resources and authorizes adult education and literacy activities provided by public libraries as an allowable statewide employment and training activity.

Subscribe to the District Dispatch, ALA’s policy blog, to be alerted when additional WIOA information becomes available.

The post Have questions about WIOA? appeared first on District Dispatch.

FOSS4Lib Upcoming Events: Advanced DSpace Training

Tue, 2014-11-25 16:45
Date: Tuesday, March 17, 2015 - 08:00 to Thursday, March 19, 2015 - 17:00. Supports: DSpace

Last updated November 25, 2014. Created by Peter Murray on November 25, 2014.

In-person, 3-day Advanced DSpace Course in Austin March 17-19, 2015. The total cost of the course is being underwritten with generous support from the Texas Digital Library and DuraSpace. As a result, the registration fee for the course for DuraSpace Members is only $250 and $500 for Non-Members (meals and lodging not included). Seating will be limited to 20 participants.

For more details, see http://duraspace.org/articles/2382

David Rosenthal: Dutch vs. Elsevier

Tue, 2014-11-25 16:00
The discussions between libraries and major publishers about subscriptions have only rarely been actual negotiations. In almost all cases the libraries have been unwilling to walk away and the publishers have known this. This may be starting to change; Dutch libraries have walked away from the table with Elsevier. Below the fold, the details.

VSNU, the association representing the 14 Dutch research universities, negotiates on their behalf with journal publishers. Earlier this month they announced that their current negotiations with Elsevier are at an impasse, on the issues of costs and the Dutch government's Open Access mandate:

Negotiations between the Dutch universities and publishing company Elsevier on subscription fees and Open Access have ground to a halt. In line with the policy pursued by the Ministry of Education, Culture and Science, the universities want academic publications to be freely accessible. To that end, agreements will have to be made with the publishers. The proposal presented by Elsevier last week totally fails to address this inevitable change.

In their detailed explanation for scientists (PDF), VSNU elaborates:

During several round[s] of talks, no offer was made which would have led to a real, and much-needed, transition to open access. Moreover, Elsevier has failed to deliver an offer that would have kept the rising costs of library subscriptions at an acceptable level. ... In the meantime, universities will prepare for the possible consequences of an expiration of journal subscriptions. In case this happens researchers will still be able to publish in Elsevier journals. They will also have access to back issues of these journals. New issues of Elsevier journals as of 1-1-2015 will not be accessible anymore.

I assume that this means that post-cancellation access will be provided by Elsevier directly, rather than by an archiving service. The government and the Dutch research funder have expressed support for VSNU's position.

This stand by the Dutch is commendable; the outcome will be very interesting. In a related development, if my marginal French is not misleading me, a new law in Germany allows authors of publicly funded research to make their accepted manuscripts freely available 1 year after initial publication. Both stand in direct contrast to the French "negotiation" with Elsevier:
France may not have any money left for its universities but it does have money for academic publishers.
While university presidents learn that their funding is to be reduced by EUR 400 million, the Ministry of Research has decided, under great secrecy, to pay EUR 172 million to the world leader in scientific publishing, Elsevier.

LITA: Top Technologies Webinar – Dec. 2, 2014

Tue, 2014-11-25 15:56

Don’t miss the Top Technologies Every Librarian Needs to Know Webinar with Presenters: Brigitte Bell, Steven Bowers, Terry Cottrell, Elliot Polak and Ken Varnum

Offered: December 2, 2014
1:00 pm – 2:00 pm Central Time

Register Online (page arranged by session date; login required)

We’re all awash in technological innovation. It can be a challenge to know what new tools are likely to have staying power — and what that might mean for libraries. The recently published Top Technologies Every Librarian Needs to Know highlights a selected set of technologies that are just starting to emerge and describes how libraries might adapt them in the next few years.

In this webinar, join the authors of three chapters from the book as they talk about their technologies and what they mean for libraries.

Hands-Free Augmented Reality: Impacting the Library Future
Presenters: Brigitte Bell & Terry Cottrell

Based on the recent surge of interest in head-mounted augmented reality devices such as the Oculus Rift 3D gaming headset and Google’s Glass project, it seems reasonable to expect that the implementation of hands-free augmented reality technology will become common practice in libraries within the next 3-5 years.

The Future of Cloud-Based Library Systems
Presenters: Elliot Polak & Steven Bowers

In libraries, cloud computing technology can reduce the costs and human capital associated with maintaining a 24/7 Integrated Library System while facilitating uptime that is costly to attain in-house. Cloud-based Integrated Library Systems can leverage a shared system environment, allowing libraries to share metadata records and other system resources while maintaining independent local information, reducing redundant workflows and yielding efficiencies for cataloging/metadata and acquisitions departments.

Library Discovery: From Ponds to Streams
Presenter: Ken Varnum

Rather than exploring focused ponds of specialized databases, researchers now swim in oceans of information. What is needed is neither ponds (too small in our interdisciplinary world) nor oceans (too broad and deep for most needs), but streams — dynamic, context-aware subsets of the whole, tailored to the researcher’s short- or long-term interests.

Register Online now to join us for what is sure to be an excellent and informative webinar.

Open Knowledge Foundation: Code for Africa & Open Knowledge Launch Open Government Fellowship Pilot Programme: Apply Today

Tue, 2014-11-25 14:22

Open Knowledge and Code for Africa launch pilot Open Government Fellowship Programme. Apply to become a fellow today. This blog announcement is available in French here and Portuguese here.

Open Knowledge and Code for Africa are pleased to announce the launch of our pilot Open Government Fellowship programme. The six month programme seeks to empower the next generation of leaders in the field of open government.


We are looking for candidates that fit the following profile:

  • Currently engaged in the open government and/or related communities. We are looking to support individuals already actively participating in the open government community
  • Understands the role of civil society and citizen based organisations in bringing about positive change through advocacy and campaigning
  • Understands the role and importance of monitoring government commitments on open data as well as on other open government policy related issues
  • Has facilitation skills and enjoys community-building (both online and offline).
  • Is eager to learn from and be connected with an international community of open government experts, advocates and campaigners
  • Currently living and working in Africa. Due to limited resources and our desire to develop a focused and impactful pilot programme, we are limiting applications to those currently living and working in Africa. We hope to expand the programme to the rest of the world starting in 2015.

The primary objective of the Open Government Fellowship programme is to identify, train and support the next generation of open government advocates and community builders. As you will see in the selection criteria, the most heavily weighted item is current engagement in the open government movement at the local, national and/or international level. Selected candidates will be part of a six-month fellowship pilot programme where we expect you to work with us for an average of six days a month, including attending online and offline trainings, organising events, and being an active member of the Open Knowledge and Code for Africa communities.

Fellows will be expected to produce tangible outcomes during their fellowship, but what these outcomes are will be up to the fellows to determine. In the application, we ask fellows to describe their vision for their fellowship or, to put it another way, to lay out what they would like to accomplish. We could imagine fellows working with a specific government department or agency to make a key dataset available, used and useful by the community, or organising a series of events addressing a specific topic or challenge citizens are currently facing. We do not wish to be prescriptive; there are countless possibilities for outcomes for the fellowship, but successful candidates will demonstrate a vision that has clear, tangible outcomes.

To support fellows in achieving these outcomes, all fellows will receive a stipend of $1,000 per month in addition to a project grant of $3,000 to spend over the course of their fellowship. Finally, a travel stipend is available for each fellow for national and/or international travel related to furthering the objective of their fellowship.

There are up to 3 fellowship positions open for the February to July 2015 pilot programme. Due to resourcing, we will only be accepting fellowship applications from individuals living and working in Africa. Furthermore, in order to ensure that we are able to provide fellows with strong local support during the pilot phase, we are targeting applicants from the following countries where Code for Africa and/or Open Knowledge already have existing networks: Angola, Burkina Faso, Cameroon, Ghana, Kenya, Morocco, Mozambique, Mauritius, Namibia, Nigeria, Rwanda, South Africa, Senegal, Tunisia, Tanzania, and Uganda. We are hoping to roll out the programme in other regions in autumn 2015. If you are interested in the fellowship but not currently located in one of the target countries, please get in touch.

Do you have questions? See more about the Fellowship Programme here and have a look at this Frequently Asked Questions (FAQ) page. If this doesn’t answer your question, email us at Katelyn[dot]Rogers[at]okfn.org

Not sure if you fit the profile? Drop us a line!

Convinced? Apply now to become an Open Government fellow. If you would prefer to submit your application in French or Portuguese, translations of the application form are available in French here and in Portuguese here.

The application will be open until the 15th of December 2014 and the programme will start in February 2015. We are looking forward to hearing from you!

Raffaele Messuti: Serve deepzoom images from a zip archive with openseadragon

Tue, 2014-11-25 10:00

vips is a fast image processing system. Versions newer than 7.40 can generate static DeepZoom tiles of big images, saving them directly into a zip archive.
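A rough sketch of the serving side, not from the original post: a tiny WSGI app that hands OpenSeadragon the .dzi descriptor and JPEG tiles straight out of the zip, without unpacking it. The archive name and entry layout below (pyramid.dzi plus a pyramid_files/ tree, the usual dzsave naming) are assumptions; list your archive first and adjust if yours differs.

# Rough sketch: serve DeepZoom tiles to OpenSeadragon directly from the zip
# archive written by `vips dzsave`. The archive name and entry layout
# (pyramid.dzi, pyramid_files/{level}/{col}_{row}.jpeg) are assumptions --
# check the zip's contents and adjust.
from wsgiref.simple_server import make_server
from zipfile import ZipFile

ARCHIVE = ZipFile("pyramid.zip")  # illustrative file name

def app(environ, start_response):
    name = environ["PATH_INFO"].lstrip("/")
    try:
        body = ARCHIVE.read(name)
    except KeyError:
        start_response("404 Not Found", [("Content-Type", "text/plain")])
        return [b"not found"]
    ctype = "application/xml" if name.endswith(".dzi") else "image/jpeg"
    start_response("200 OK", [("Content-Type", ctype)])
    return [body]

if __name__ == "__main__":
    make_server("127.0.0.1", 8000, app).serve_forever()

Point an OpenSeadragon viewer’s tileSources at http://127.0.0.1:8000/pyramid.dzi and the viewer will request every tile through this handler.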

PeerLibrary: Educators Rejoice! This Week’s Featured Content from the PeerLibrary Collections

Tue, 2014-11-25 04:08

PeerLibrary’s groups and collections functionality is especially suited towards educators running classes that involve reading and discussing various academic publications. This week we would like to highlight one such collection, created for a graduate level computer science class taught by Professor John Kubiatowicz at UC Berkeley. The course, Advanced Topics in Computer Systems, requires weekly readings which are handily stored on the PeerLibrary platform for students to read, discuss, and collaborate outside of the typical classroom setting. Articles within the collection come from a variety of sources, such as the publicly available “Key Range Locking Strategies” and the closed access “ARIES: A Transaction Recovery Method”. Even closed access articles, which hide the article from unauthorized users, allow users to view the comments and annotations!
