Planet Code4Lib - http://planet.code4lib.org

Open Knowledge Foundation: Competition now open – enter your app and win 5,000 euro

Fri, 2014-11-28 13:55

This is a cross-post by Ivonne Jansen-Dings, originally published on the Apps4Europe blog, see the original here.

With 10 Business Lounges happening throughout Europe this year, Apps for Europe is trying to find the best open data applications and startups that Europe has to offer. We invite all developers, startups and companies that use open data as a resource to join our competition and win a spot at the International Business Lounge @ Future Everything in February 2015.

Last year’s winner BikeCityGuide.org has shown the potential of using open data to enhance their company and expand their services. Since the international Business Lounge at Future Everything last year they were able to reach new cities and raise almost €140,000 in crowdfunding. A true success story!

Over the past years many local, regional and national app competitions in Europe have been organized to stimulate developers and companies to build new applications with open data. Apps for Europe has taken it to the next level. By adding Business Lounges to local events we introduce the world of open data development to that of investors, accelerators, incubators and more.

Thijs Gitmans, Peak Capital: “The Business Lounge in Amsterdam had a professional and personal approach. I am invited to this kind of meetings often, and the trigger to actually go or cancel last minute 99% of the time has to do with proper, timely and personal communication.”

The Apps for Europe competitions will run from 1 September to 31 December 2014, with the final at Future Everything in Manchester, UK, on 26-27 February 2015.

Read more about Apps4Europe here.

DPLA: Giving Thanks: Top 10 Colonial Facial Hair Inspirations

Fri, 2014-11-28 07:30

This week was a time for people to give thanks—this includes showing some gratitude for some awe-inspiring beards and mustaches. In a continuation of our “Movember” series, we’re throwing it back to the colonial era for some facial hair inspiration.

 This week: Styles of the early settlers. 

The first landing of the Pilgrims, 1620. Governor Berkley and Nathaniel Bacon. Sir Walter Raleigh. Landing of the Pilgrims. Embarkation of the Pilgrims. Early settlers on their way to church. Embarkation of the “Pilgrim Fathers.” Colonists reaching Connecticut. The landing of the colonists. A Pilgrim parting from his family.

And a bonus: Just because your mustache is strong, doesn’t mean your hairstyle should suffer. Try one of these colonial wig styles to complement your new look.

“The Colonists At Home,” wig styles.

Terry Reese: MarcEdit 6 Update

Fri, 2014-11-28 06:27

Happy Thanksgiving to those celebrating.  Rather than overindulging in food, my family and I spent our day relaxing and enjoying some down time together.  After everyone went to bed, I had a little free time and decided to wrap up the update I’ve been working on.  This update includes the following changes:

  • Language File changes.
  • Export/Delete Selected Records: UI changes
  • Biblinker — updated the tool to provide support for linking to FAST headings when available in the record
    • Updated the fields processed (targeted to ignore uncontrolled or local items)
  • Z39.50 Client — Fixed a bug where a single search across multiple selected databases returned blank data when the number of results exceeded the data limit.
  • RDA Helper — Fixed an error where, under certain conditions, bracketed data would be incorrectly parsed.
  • Miscellaneous UI changes to support language changes

 

The Language file changes represent a change in how internationalization of the interface works.  Master language files are now hosted on GitHub, with new files added on update.  The language files are automatically generated, so they are not as good as if they were done by an individual – though some individuals are looking at the files and providing updates.  My hope is that through this process of automated language generation, coupled with human intervention, the new system will significantly help non-English speakers.  But I guess time will tell.

The download can be found by using the automated update tool in MarcEdit, or downloading the update from: http://marcedit.reeset.net/downloads/

pinboard: Code4LibBC Day 1: Lightning Talks Part 1 | Learning LibTech

Thu, 2014-11-27 21:40
RT @TheRealArty: Code4LibBC Day 1: Lightning Talks Part 1 #c4lbc #code4lib

Lukas Koster: Analysing library data flows for efficient innovation

Thu, 2014-11-27 12:24

In my work at the Library of the University of Amsterdam I am currently taking a step forward by actually taking a step back from a number of forefront activities in discovery, linked open data and integrated research information towards a more hidden, but also more fundamental enterprise in the area of data infrastructure and information architecture. All for a good cause, for in the end a good data infrastructure is essential for delivering high quality services in discovery, linked open data and integrated research information.
In my role as library systems coordinator I have become more and more frustrated with the huge amounts of time and effort spent on moving data from one system to another and shoehorning one record format into the next, only to fulfill the necessary everyday services of the university library. Not only is it not possible to invest this time and effort productively in innovative developments, but this fragmented system and data infrastructure is also completely unsuitable for fundamental innovation. Moreover, information provided by current end user services is fragmented as well. Systems are holding data hostage. I have mentioned this problem before in a SWIB presentation. The issue was also recently touched upon in an OCLC Hanging Together blog post: “Synchronizing metadata among different databases” .

Fragmented data (SWIB12)

In order to avoid confusion in advance: when using the term “data” here, I am explicitly not referring to research data or any other specific type of data. I am using the term in a general sense, including what is known in the library world as “metadata”. In fact this is in line with the usage of the term “data” in information analysis and system design practice, where data modelling is one of the main activities. Research datasets as such are to be treated as content types like books, articles, audio and people.

It is my firm opinion that libraries have to focus on making their data infrastructure more efficient if they want to keep up with the ever changing needs of their audience and invest in sustainable service development. For a more detailed analysis of this opinion see my post “(Discover AND deliver) OR else – The future of the academic library as a data services hub”. There are a number of different options to tackle this challenge, such as starting completely from scratch, which would require huge investments in resources for a long time, or implementing some kind of additional intermediary data warehouse layer while leaving the current data source systems and workflows in place. But for all options to be feasible and realistic, a thorough analysis of a library’s current information infrastructure is required. This is exactly what the new Dataflow Inventory project is about.

The project is being carried out within the context of the short term Action Plans of the Digital Services Division of the Library of the University of Amsterdam, and specifically the “Development and improvement of information architecture and dataflows” program. The goal of the project is to describe the nature and content of all internal and external datastores and dataflows between internal and external systems in terms of object types (such as books, articles, datasets, etc.) and data formats, thereby identifying overlap, redundancy and bottlenecks that stand in the way of efficient data and service management. We will be looking at dataflows in both front and back end services for all main areas of the University Library: bibliographic, heritage and research information. Results will be a logical map of the library data landscape and recommendations for possible follow up improvements. Ideally it will be the first step in the Cleaning-Reconciling-Enriching-Publishing data chain as described by Seth van Hooland and Ruben Verborgh in their book “Linked Data for Libraries, Archives and Museums”.

The first phase of this project is to decide how to describe and record the information infrastructure in such a form that the data map can be presented to various audiences in a number of ways, and at the same time can be reused in other contexts in the long run, for instance for designing new services. For this we need a methodology and a tool.

At the university library we do not have any thorough experience with describing an information infrastructure on an enterprise level, so in this case we had to start with a clean slate. I am not at all sure that we came up with the right approach in the end. I hope this post will trigger some useful feedback from institutions with relevant experience.

Since the initial and primary goal of this project is to describe the existing infrastructure instead of a desired new situation, the first methodological area to investigate appears to be Enterprise Architecture (interesting to see that Wikipedia states “This article appears to contain a large number of buzzwords”). Because it is always better to learn from other people’s experiences than to reinvent all four wheels, we went looking for similar projects in the library, archive and museum universe. This proved to be rather problematic. There was only one project we could find that addresses a similar objective, and I happened to know one of the project team members. The Belgian “Digital library system’s architecture study” (English language report here) was carried out for the Flemish Public Library network Bibnet, by Rosemie Callewaert among others. Rosemie was so kind as to talk to me and explain the project objectives, approaches, methods and tools used. For me, two outcomes of this talk stand out: the main methodology used in the project is Archimate, which is an Enterprise Architecture methodology, and the approach is completely counter to our own approach: starting from the functional perspective as opposed to our overview of the actual implemented infrastructure. This last point meant we were still looking at a predominantly clean slate.
Archimate also turned out to be the method of choice used by the University of Amsterdam central enterprise architecture group, whom we also contacted. It became clear that in order to use Archimate efficiently, it is necessary to spend a considerable amount of time on mastering the methodology. We looked for some accessible introductory information to get started. However, the official Open Group Archimate website is not as accessible as desired in more than one way. We managed to find some documentation anyway, for instance the direct link to the Archimate specification and the free document “Archimate made practical”. After studying this material we found that Archimate is a comprehensive methodology for describing business, application and technical infrastructure components, but we also came to the conclusion that for our current short term project presentation goals we needed something that could be implemented fairly soon. We will keep Archimate in mind for the intermediate future. If anybody is interested, there is a good free open source modelling tool available, Archi. Other Enterprise Architecture methodologies like Business Process Modelling focus more on workflows than on existing data infrastructures. Turning to system design methods like UML (Unified Modelling Language) we see similar drawbacks.

An obvious alternative technique to consider is Dataflow Diagramming (DFD) (what’s in a name?), part of the Structured Design and Structured Analysis methodology, which I had used in previous jobs as systems designer and developer. Although DFDs are normally used for describing functional requirements on a conceptual level, with some tweaking they can also be used for describing actual system and data infrastructures, similar to the Archimate Application and Infrastructure layers. The advantage of the DFD technique is that it is quite simple. Four elements are used to describe the flow of information (dataflows) between external entities, processes and datastores. The content of dataflows and datastores can be specified in more detail using a data dictionary. The resulting diagrams are relatively easy to comprehend. We decided to start with using DFDs in the project. All we had left to do was find a good and not too expensive tool for it.

Basic DFD structure

There are basically two types of tools for describing business processes and infrastructures: drawing tools, focusing on creating diagrams, and repository based modelling tools, focused on reusing the described elements. The best known drawing tool must be Microsoft Visio, because it is part of their widely used Office Suite. There are a number of other commercial and free tools, among which the free Google Drive extension Draw.io. Although most drawing tools cover a wide range of methods and techniques, they don’t usually support reuse of elements with consistent characteristics in other diagrams. Also, diagrams are just drawings; they can’t be used to generate data definition scripts or basic software modules, or for reverse engineering or flexible reporting. Repository based tools can do all these things. Reuse, reporting, generating, reverse engineering and import and export features are exactly the features we need. We also wanted a tool that supports a number of other methods and techniques for employing in other areas of modelling, design and development. There are some interesting free or open source tools, like OpenModelSphere (which supports UML, ERD Data modelling and DFD), and a range of commercial tools. To cut a long story short we selected the commercial design and management tool Visual-Paradigm because it supports a large number of methodologies with an extensive feature set in a number of editions for reasonable fees. An additional advantage is the online shared teamwork repository.

After acquiring the tool we had to configure it the way we wanted to use it. We decided to try and align the available DFD model elements to the Archimate elements so it would in time be possible to move to Archimate if that would prove to be a better method for future goals. Archimate has Business Service and Business Process elements on the conceptual business level, and Application Component (a “system”), Application Function (a “module”) and Application Service (a “function”) elements on the implementation level.

Basic Archimate Structure

In our project we will mainly focus on the application layer, but with relations to the business layer. Fortunately, the DFD method supports a hierarchical process structure by means of the decomposition mechanism, so the two hierarchical structures Business Service – Business Process and Application Component – Application Function – Application Service can be modeled using DFD. There is an additional direct logical link between a Business Process and the Application Service that implements it. By adding the “stereotypes” feature from the UML toolset to the DFD method in Visual Paradigm, we can effectively distinguish between the five process types (for instance by colour and attributes) in the DFD.

Archimate DFD alignment

So in our case, a DFD process with a “system” stereotype represents a top level Business Service (“Catalogue”, “Discover”, etc.) and a “process” process within “Cataloguing” represents an activity like “Describe item”, “Remove item”, etc. On the application level a “system” DFD process (Application Component) represents an actual system, like Aleph or Primo, a “module” (Application Function) a subsystem like Aleph CAT or Primo Harvesting, and a “function” (Application Service) an actual software function like “Create item record”.
A DFD datastore is used to describe the physical permanent and temporary files or databases used for storing data. In Archimate terms this would probably correspond with a type of “Artifact” in the Technical Infrastructure layer, but that might be subject for interpretation.
Finally an actual dataflow describes the data elements that are transferred between external entities and processes, between processes, and between processes and datastores, in both directions. In DFD, the data elements are defined in the data dictionary in the form of terms in a specific syntax that also supports optionality, selection and iteration, for instance:

  • book = title + (subtitle) + {author} + publisher + date
  • author = name + birthdate + (death date)

etc.
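To make that notation concrete, here is a minimal sketch (Python, purely illustrative and not part of the project's tooling) of how the optional ( ) and repeating { } markers in a data dictionary term might translate into a record structure:

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Illustrates: book = title + (subtitle) + {author} + publisher + date
#              author = name + birthdate + (death date)
# ( ) marks an optional element, { } marks a repeatable (iterated) element.

@dataclass
class Author:
    name: str
    birthdate: str
    death_date: Optional[str] = None      # (death date) is optional

@dataclass
class Book:
    title: str
    publisher: str
    date: str
    subtitle: Optional[str] = None        # (subtitle) is optional
    authors: List[Author] = field(default_factory=list)  # {author} repeats

example = Book(
    title="An Example Title",
    publisher="Example Press",
    date="2014",
    authors=[Author(name="Doe, Jane", birthdate="1970")],
)
```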
In Archimate there is a difference in flows in the Business and Application layers. In the Business layer a flow can be specified by a Business Object, which indicates the object types that we want to describe, like “book”, “person”, “dataset”, “holding”, etc. The Business Object is realised as one or more Data Objects in the Application Layer, thereby describing actual data records representing the objects transferred between Application Services and Artifacts. In DFD there is no difference between a business flow and a dataflow. In our project we particularly want to describe business objects in dataflows and datastores to be able to identify overlap and redundancies. But besides that we are also interested in differences in data structure used for similar business objects. So we do have to distinguish between business and data objects in the DFD model. In Visual-Paradigm this can be done in a number of ways. It is possible to add elements from other methodologies to a DFD with links between dataflows or datastores and the added external elements. Data structures like this can also be described in Entity Relationship Diagrams, UML Class Diagrams or even RDF Ontologies.
We haven’t decided on this issue yet. For the time being we will employ the Visual Paradigm Glossary tool to implement business and data object specifications using Data Dictionary terms. A specific business object (“book”) will be linked to a number of different dataflows and datastores, but the actual data objects for that one business object can be different, both in content and in format, depending on the individual dataflows and datastores. For instance a “book” Business Object can be represented in one datastore as an extensive MARC record, and in another as a simple Dublin Core record.
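As a purely illustrative sketch (the field choices below are hypothetical, not taken from the project), the same “book” business object might surface as two quite different data objects in two datastores, and making that mapping explicit is exactly what the data map is for:

```python
# One "book" business object, two hypothetical data objects.

# Datastore A: a heavily simplified MARC-style record keyed by tag and subfield.
book_as_marc = {
    "245": {"a": "An Example Title", "b": "an example subtitle"},
    "100": {"a": "Doe, Jane"},
    "260": {"b": "Example Press", "c": "2014"},
}

# Datastore B: a simple Dublin Core record for the same business object.
book_as_dublin_core = {
    "dc:title": "An Example Title: an example subtitle",
    "dc:creator": "Doe, Jane",
    "dc:publisher": "Example Press",
    "dc:date": "2014",
}

# The overlap the project wants to surface: which fields carry the same
# information about the same business object in different datastores.
marc_to_dc = {
    ("245", "a"): "dc:title",
    ("100", "a"): "dc:creator",
    ("260", "b"): "dc:publisher",
    ("260", "c"): "dc:date",
}
```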

Example bibliographic dataflows

After having determined method, tool and configuration, the next step is to start gathering information about all relevant systems, datastores and dataflows and describing this in Visual Paradigm. This will be done by invoking our own internal Digital Services Division expertise, reviewing applicable documentation, and most importantly interviewing internal and external domain experts and stakeholders.
Hopefully the resulting data map will provide so much insight that it will lead to real efficiency improvements and really innovative services.

LibUX: Does Google think Your Library is Mobile Friendly?

Thu, 2014-11-27 08:03

If your users are anything like mine, then

  • no one has your website bookmarked on their home-screen
  • your url is kind of a pain to tap-out

and consequently inquiries about business hours and location start not on your homepage but in a search bar. As of last Tuesday (November 18th), searchers from a mobile device will be given the heads up that this or that website is “mobile friendly.” Since we know how picky mobile users are (spoiler: very), we need to assume that sooner rather than later users will avoid search results if a website isn’t tailored for their screen. A mobile-friendly result looks like this:

The criteria from the announcement are that the website

  • Avoids software that is not common on mobile devices, like Flash
  • Uses text that is readable without zooming
  • Sizes content to the screen so users don’t have to scroll horizontally or zoom
  • Places links far enough apart so that the correct one can be easily tapped

and we should be grateful that this is low-hanging fruit. The implication that a website is not mobile friendly will certainly ward off clickthrough, which for public libraries especially may have larger repercussions.
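As a very rough self-check (this is only a sketch and nothing like Google's actual test, which also weighs font sizes, tap targets and plugins), you could look for the viewport meta tag that responsively designed sites declare:

```python
import urllib.request

def rough_mobile_check(url):
    """Crude heuristic: responsive pages almost always declare a viewport
    meta tag. Passing this says nothing about the other criteria above."""
    with urllib.request.urlopen(url, timeout=10) as response:
        html = response.read().decode("utf-8", errors="replace").lower()
    return 'name="viewport"' in html

# Example (any URL will do):
print(rough_mobile_check("https://www.nypl.org/"))
```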

Your website has just 2 seconds to load at your patron’s point of need before a certain percentage will give up. This may literally affect your foot traffic. Rather than chance the library being closed, your patron might just change plans. Mobile Users Are Demanding

You can test if your site meets Googlebot’s standards. Here’s how the little guy sees the New York Public Library:

Cue opportunistic tangent about pop-ups

On an unrelated note, the NYPL is probably missing out on more donations than they get through that pop-up. People hate pop-ups, viscerally.

Users not only dislike pop-ups, they transfer their dislike to the advertisers behind the ad and to the website that exposed them to it. In a survey of 18,808 users, more than 50% reported that a pop-up ad affected their opinion of the advertiser very negatively and nearly 40% reported that it affected their opinion of the website very negatively. The Most Hated Advertising Techniques

And, in these circumstances, the advertiser is the library itself. ( O_o )

At least Googlebot thinks they’re mobile friendly.

The post Does Google think Your Library is Mobile Friendly? appeared first on LibUX.

FOSS4Lib Upcoming Events: hosted pbx denver

Thu, 2014-11-27 05:02
Date: Thursday, November 27, 2014 (All day)
Supports: DMP Online

Last updated November 27, 2014. Created by fredwhite on November 27, 2014.

Get a VoIP connection for business as well as residential purposes. For more information on phone systems and VoIP connections, visit here.

HangingTogether: “Managing Monsters”? Academics and Assessment

Wed, 2014-11-26 21:17

Recently in the London Review of Books Marina Warner explained why she quit her post at the University of Essex. I found it a shocking essay. Warner was pushed out because she is chairing the Booker Prize committee this year, in addition to delivering guest lectures at Oxford. (If those lectures are anything like Managing Monsters (1994), they will probably change the world.) Warner’s work – as a creative writer, scholar, public intellectual – does not count in the mechanics of assessment, which includes both publishing and teaching.

Warner opens her LRB essay with the library at Essex as the emblem of the university: “New brutalism! Rarely seen any so pure.” I don’t want to make light of the beautifully-written article, which traces changes over time in the illustrious and radical reputation of the University of Essex since it was founded in the 60s. Originally Warner had enthusiastic support, which later waned when a new vice-chancellor muttered, “These REF stars – they don’t earn their keep.”

Warner’s is just the latest high-profile critique about interference in research by funders and university administrators.  The funniest I’ve read is a “modest proposal” memo mandating university-wide use of research assessment tools that have acronyms such as Stupid, Crap, Mess, Waste, Pablum, and Screwed.

I have been following researchers’ opinions about management of information about research ever since John MacColl synthesized assessment regimes in five countries. This past spring John sent me an opinion piece from the Times Higher in which the author, a REF coordinator himself, despairs about the damage done by years of assessment to women’s academic careers, to morale, to creativity, and to education and research. During my visits to the worlds of digital scholarship, I invariably hear of the failure of assessment regimes for the humanities, the digital humanities, digital scholarship, and e-research.

I figure it is high time I post another excerpt from my synthesis of user studies about managing research information. I prepared most of this post a year ago, when I was pondering the fraught politics (and ethics) of libraries’ contributions to research information management systems (RIMs). (Lorcan recently parsed RIM services.)

So here goes:

Alignment with the mission of one’s institution is not a black-and-white exercise. I believe that research libraries must think carefully about how they choose to ally themselves with their own researchers, academic administrations, and national funding agencies. If we are calibrating our library services – for new knowledge and higher education – to rankings and league tables, I certainly hope that we are reading the journals that publish those rankings, especially articles written by the same academics we want to support.

An editorial blog post for the Chronicle of Higher Education is titled, provocatively, “A Machiavellian Guide to Destroying Public Universities in 12 Easy Steps.” The fifth step is assessment regimes:

(5) Put into place various “oversight instruments,” such as quality-assessment exercises, “outcome matrices,” or auditing mechanisms, to assure “transparency” and “accountability” to “stakeholders.” You might try using research-assessment exercises such as those in Britain or Australia, or cheaper and cruder measures like Texas A&M’s, by simply publishing a cost/benefit analysis of faculty members.

This reminded me of a similar cri de coeur a few years ago in the New York Review of Books. In “The Grim Threat to British Universities,” Simon Head warned about applying a (US) business-style “bureaucratic control” – performance indicators, metrics, and measurement of outputs, etc. – to scholarship, especially science. Researchers often feel that administrators have no idea what research entails, and often for a good reason. For example, Warner’s executive dean for the humanities is a “young lawyer specialising in housing.”

A consistent theme in user studies with researchers is the sizeable gulf between what they use and desire and the kinds of support services that libraries and universities offer.[1] A typical case study in the life sciences, for example, concludes that there is a “significant gap” between researchers’ use of information and the strategies of funders and policy-makers.[2] In particular, researchers consider libraries unlikely to play a desirable role supporting research. [3]

Our own RIN and OCLC Research studies interviewing researchers reveal that libraries offering to manage research information seems “orthogonal, and at worst irrelevant,” to the needs of researchers.[4] One of the trends that stands out is oversight: researchers require autonomy, so procedures mandated in a top-down fashion are mostly perceived as intrusive and unnecessary.

Librarians and administrators need to respect networks of trust between researchers. In particular, researchers may resist advice from the Research Office or any other internal agency removed from the colleagues they work with.[5]

Researchers feel that their job is to do research. They begrudge any time spent on activities that serve administrative purposes.[6] A heavy-handed approach to participation in research information management is unpopular and can back-fire.[7] In some cases, mandates and requirements – such as national assessment regimes – become disincentives for researchers to improve methodologies or share their research.[8]

On occasion researchers have pushed back against such regimes. For example, in 2011, Australian scholars successfully quashed a journal-ranking system used for assessment. The academics objected that such a flawed “blunt instrument” for evaluating individuals ranks journals by crude criteria rather than by professional respect. [9]

Warner – like many humanists I have met – calls for a remedy that research libraries could provide. “By the end of 2013, all the evidence had been gathered, and the inventory of our publications fought over, recast and finally sent off to be assessed by panels of peers… A scholar whose works are left out of the tally is marked for assisted dying.” Librarians can  improve information about those “works left out,” or get the attributions right.

But assisted dying? Yikes. At our June meeting in Amsterdam on Supporting Change/Changing Support, Paul Wouters gave a thoughtful warning of the “seduction” of measurements, such as the trendy quantified self. Wouters gave citation analysis as an example of a measure that is necessarily backward-looking and disadvantages some domains. “You can’t see everything in publications.” Wouters pointed out that assessment is a bit “close to the skin” for academics, and that libraries might not want to “torment their researchers,” inadvertently making an honest mistake that could influence or harm careers.

Just because we can, we might consider whether we should, and when, and how. The politics of choosing to participate in expertise profiling and research assessment regimes potentially have consequences for research libraries that are trying to win the trust of their faculty members.

References beyond embedded links:

[1] pp. 4, 70 in Sheridan Brown and Alma Swan (i.e. Key Perspectives). 2007. Researchers’ use of academic libraries and their services. London: RIN (Research Information Network)/CURL (Consortium of Research Libraries). http://www.rin.ac.uk/our-work/using-and-accessing-information-resources/researchers-use-academic-libraries-and-their-serv

[2] pp. 5-6 in Robin Williams and Graham Pryor. 2009. Patterns of information use and exchange: case studies of researchers in the life sciences. London: RIN and the British Library. http://www.rin.ac.uk/our-work/using-and-accessing-information-resources/patterns-information-use-and-exchange-case-studie

[3] Brown and Swan 2007, p. 4.

[4] p. 6 in John MacColl and Michael Jubb. 2011. Supporting research: environments, administration and libraries. Dublin, Ohio: OCLC Research and London: Research Information Network (RIN). http://www.oclc.org/research/publications/library/2011/2011-10.pdf

[5] p. 10 in Research Information Network (RIN). 2010. Research support services in selected UK universities. London: RIN. http://www.rin.ac.uk/system/files/attachments/Research_Support_Services_in_UK_Universities_report_for_screen.pdf

[6] MacColl and Jubb, 2011, p. 3-4.

[7] p. 12-13 in Martin Feijen. 2011. What researchers want: A literature study of researchers’ requirements with respect to storage and access to research data. Utrecht: SURFfoundation. http://www.surf.nl/nl/publicaties/Documents/What_researchers_want.pdf. P. 56 in Elizabeth Jordan, Andrew Hunter, Becky Seale, Andrew Thomas and Ruth Levitt. 2011. Information handling in collaborative research: an exploration of five case studies. London: RIN and the BL. http://www.rin.ac.uk/our-work/using-and-accessing-information-resources/collaborative-research-case-studies. MacColl and Jubb 2011, p.6.

[8] p. 53 in Robin Williams and Graham Pryor. 2009. Patterns of information use and exchange: case studies of researchers in the life sciences. London: RIN and the British Library. http://www.rin.ac.uk/our-work/using-and-accessing-information-resources/patterns-information-use-and-exchange-case-studie

[9] Jennifer Howard. 2011 (June 1). “Journal-ranking system gets dumped after scholars complain.” Chronicle of higher education. http://chronicle.com/article/Journal-Ranking-System-Gets/127737/

 

About Jennifer Schaffner

Jennifer Schaffner is a Program Officer with the OCLC Research Library Partnership. She works with the rare books, manuscripts and archives communities. She joined RLG/OCLC Research in August of 2007.


DPLA: Order Up: 10 Thanksgiving Menu Inspirations

Wed, 2014-11-26 19:23

With Thanksgiving just a day away, the heat’s turned up for the perfect kitchen creation. Whether you’re the one cooking the turkey, or are just in charge of expertly arranging the table napkins, creating the perfect Thanksgiving meal is a big responsibility. Take some cues from these Thanksgiving dinner menus from hotels and restaurants across the country, from The New York Public Library.

Gramercy Park Hotel, NY, 1955. Metropole Hotel, Fargo, ND, 1898. The New Yorker at Terrace Restaurant, NY, 1930. Briggs House, Chicago, IL, 1899. Normandie Café, Detroit, MI, 1905. Hotel De Dijon, France, 1881. M.F. Lyons Dining Rooms, NY, 1906. L’Aiglon, NY, 1947. Hotel Roanoke, Roanoke, VA, 1899. The Waldorf Astoria, NY, 1961.

Library of Congress: The Signal: Collecting and Preserving Digital Art: Interview with Richard Rinehart and Jon Ippolito

Wed, 2014-11-26 17:54

Jon Ippolito, Professor of New Media at the University of Maine

As artists have embraced a range of new media and forms in the last century, the work of collecting, conserving and exhibiting these works has become increasingly complex and challenging. In this space, Richard Rinehart and Jon Ippolito have been working to develop and understand approaches to ensure long-term access to digital works. In this installment of our Insights Interview Series I discuss Richard and Jon’s new book, “Re-collection: Art, New Media, and Social Memory.” The book offers an articulation of their variable media approach to thinking about works of art. I am excited to take this opportunity to explore the issues the book raises about digital art in particular, and its perspective on digital preservation and social memory more broadly.

Trevor: The book takes a rather broad view of “new media”; everything from works made of rubber, to CDs, art installations made of branches, arrangements of lighting, commercial video games and hacked variations of video games. For those unfamiliar with your work more broadly, could you tell us a bit about your perspective on how these hang together as new media? Further, given that the focus of our audience is digital preservation, could you give us a bit of context for what value thinking about various forms of non-digital variable new media art offer us for understanding digital works?

Richard Rinehart, Director of the Samek Art Museum at Bucknell University.

Richard: Our book does focus on the more precise and readily-understood definition of new media art as artworks that rely on digital electronic computation as essential and inextricable. The way we frame it is that these works are at the center of our discussion, but we also discuss works that exist at the periphery of this definition. For instance, many digital artworks are hybrid digital/physical works (e.g., robotic works) and so the discussion cannot be entirely contained in the bitstream.

We also discuss other non-traditional art forms–performance art, installation art–that are not as new as “new media” but are also not that old in the history of museum collecting. It is important to put digital art preservation in an historical context, but also some of the preservation challenges presented by these works are shared with and provide precedents for digital art. These precedents allow us to tap into previous solutions or at least a history of discussion around them that could inform or aid in preserving digital art. And, vice versa, solutions for preserving digital art may aid in preserving these other forms (not least of which is shifting museum practices). Lastly, we bring non-digital (but still non-traditional) art forms into the discussion because some of the preservation issues are technological and media-based (in which case digital is distinct) but some issues are also artistic and theoretical, and these issues are not necessarily limited to digital works.

Jon: Yeah, we felt digital preservation needed a broader lens. The recorded culture of the 20th century–celluloid, vinyl LPs, slides–is a historical anomaly that’s a misleading precedent for preserving digital artifacts. Computer scientist Jeff Rothenberg argues that even JPEGs and PDF documents are best thought of as applications that must be “run” to be accessed and shared. We should be looking at paradigms that are more contingent than static files if we want to forecast the needs of 21st-century heritage.

Casting a wider net can also help preservationists jettison our culture’s implicit metaphor of stony durability in favor of one of fluid adaptability. Think of a human record that has endured and most of us picture a chiseled slab of granite in the British Museum–even though oral histories in the Amazon and elsewhere have endured far longer. Indeed, Dragan Espenschied has pointed out cases in which clay tablets have survived longer than stone because of their adaptability: they were baked as is into new buildings, while the original carvings on stones were chiseled off to accommodate new inscriptions. So Richard and I believe digital preservationists can learn from media that thrive by reinterpretation and reuse.

Trevor: The book presents technology, institutions and law as three sources of problems for the conservation of variable media art and potentially as three sources of possible solutions. Briefly, what do you see as the most significant challenges and opportunities in these three areas? Further, are there any other areas you considered incorporating but ended up leaving out?

Jon: From technology, the biggest threat is how the feverish marketing of our techno-utopia masks the industry’s planned obsolescence. We can combat this by assigning every file on our hard drives and gadget on our shelves a presumptive lifespan, and leaving room in our budgets to replace them once that expiration date has passed.

From institutions, the biggest threat is that their fear of losing authenticity gets in the way of harnessing less controllable forms of cultural perseverance such as proliferative preservation. Instead of concentrating on the end products of culture, they should be nurturing the communities where it is birthed and finds meaning.

From the law, the threat is DRM, the DMCA, and other mechanisms that cut access to copyrighted works–for unlike analog artifacts, bits must be accessed frequently and openly to survive. Lawyers and rights holders should be looking beyond the simplistic dichotomy of copyright lockdown versus “information wants to be free” and toward models in which information requires care, as is the case for sacred knowledge in many indigenous cultures.

Other areas? Any in which innovative strategies of social memory are dismissed because of the desire to control–either out of greed (“we can make a buck off this!”) or fear (“culture will evaporate without priests to guard it!”).

Trevor: One of the central concepts early in the book is “social memory,” in fact, the term makes its way into the title of the book. Given its centrality, could you briefly explain the concept and discuss some of how this framework for thinking about the past changes or upsets other theoretical perspectives on history and memory that underpin work in preservation and conservation?

Richard: Social memory is the long-term memory of societies. It’s how civilizations persist from year to year or century to century. It’s one of the core functions of museums and libraries and the purpose of preservation. It might alternately be called “cultural heritage,” patrimony, etc. But the specific concept of social memory is useful for the purpose of our book because there is a body of literature around it and because it positions this function as an active social dynamic rather than a passive state (cultural heritage, for instance, sounds pretty frozen). It was important to understand social memory as a series of actions that take place in the real world every day as that then helps us to make museum and preservation practices tangible and tractable.

The reason to bring up social memory in the first place is to gain a bit of distance on the problem of preserving digital art. Digital preservation is so urgent that most discussions (perhaps rightfully) leap right to technical issues and problem-solving. But, in order to effect the necessary large-scale and long-term changes in, say, museum practices, standards and policies we need to understand the larger context and historic assumptions behind current practices. Museums (and every cultural heritage institution) are not just stubborn; they do things a certain way for a reason. To convince them to change, we cannot just point at ad-hoc cases and technical problematics; we have to tie it to their core mission: social memory. The other reason to frame it this way is that new media really are challenging the functions of social memory; not just in museums, but across the board and here’s one level in which we can relate and share solutions.

These are some of the ways in which social memory allows us to approach preservation differently in the book, but here’s another, more specific one. We propose that social memory takes two forms: formal/canonical/institutional memory and informal/folkloric/personal memory (and every shade in between). We then suggest how the preservation of digital art may be aided by BOTH social memory functions.

Trevor: Many of the examples in the book focus on boundary-breaking installation art, like Flavin’s work with lighting, and conceptual art, like Nam June Paik’s work with televisions and signals, or Cory Arcangel’s interventions on Nintendo cartridges. Given that these works push the boundaries of their mediums, or focus in depth on some of the technical and physical properties of their mediums do you feel like lessons learned from them apply directly to seemingly more standardized and conventional works in new media? For instance, mass produced game cartridges or Flash animations and videos? To what extent are lessons learned about works largely intended to be exhibited art in galleries and museums applicable to more everyday mass-produced and consumed works?

Richard: That’s a very interesting question and it speaks to our premise that preserving digital art is but one form of social memory and that lessons learned therein may benefit other areas. I often feel that preserving digital art is useful for other preservation efforts because it provides an extreme case. Artists (and the art world) ensure that their media creations are about as complex as you’ll likely find; not necessarily technically (although some are technically complex and there are other complexities introduced in their non-standard use of technologies) but because what artists do is to complicate the work at every level–conceptually, phenomenologically, socially, technically; they think very specifically about the relationship between media and meaning and then they manifest those ideas in the digital object.

I fully understand that preserving artworks does not mean trying to capture or preserve the meaning of those objects (an impossible task) but these considerations must come into play when preserving art even at a material level; especially in fungible digital media. So, for just one example, preserving digital artworks will tell us a lot about HCI considerations that attend preserving other types of interactive digital objects.

Jon: Working in digital preservation also means being a bit of a futurist, especially in an age when the procession from medium to medium is so rapid and inexorable. And precisely because they play with the technical possibilities of media, today’s artists are often society’s earliest adopters. My 2006 book with Joline Blais, “At the Edge of Art,” is full of examples, whether it’s how Google Earth came from Art+Com, Wikileaks from Antoni Muntadas, or gestural interfaces from Ben Fry and Casey Reas. Whether your metaphor for art is antennae (Ezra Pound) or antibodies (Blais), if you pay attention to artists you’ll get a sneak peek over the horizon.

Trevor: Richard suggests that variability, not fixity, is the defining feature of digital media, and beyond this that conservators should move away from “outdated notions of fixity.” Given the importance of the concept of fixity in digital preservation circles, could you unpack this a bit for us? While digital objects do indeed execute and perform, the fact that I can run a fixity check and confirm that this copy of a digital object is identical to what it was before seems to be an incredibly powerful and useful component of ensuring long-term access to them. Given that, based on the nature of digital objects, we can actually ensure fixity in a way we never could with analog artifacts, this idea of distancing ourselves from fixity seemed strange.

Richard: You hit the nail on the head with that last sentence; and we’re hitting a little bit of a semantic wall here as well–fixity as used in computer science and certain digital preservation circles does not quite have the same meaning as when used in lay text or in the context of traditional object-based museum preservation. I was using fixity in the latter sense (as the first book on this topic, we wrote for a lay audience and across professional fields as much as possible.) Your last thought compares the uses of “fixity” as checks between analog media (electronic, reproducible; film, tape, or vinyl) compared to digital media, but in the book I was comparing fixity as applied to a different class of analog objects (physical; marble, bronze, paint) compared to digital objects.

If we step back from the professional jargon for a moment, I would characterize the traditional museological preservation approach for oil painting and bronze sculptures to be one based on fixity. The kind of digital authentication that you are talking about is more like the scientific concept of repeatability; a concept based on consistency and reproduction–the opposite of the fixity! I think the approach we outline in the book is in opposition to fixity of the marble-bust variety (as inappropriate for digital media) but very much in-line with fixity as digital authentication (as one tool for guiding and balancing a certain level of change with a certain level of integrity.) Jon may disagree here–in fact we built in these dynamics of agreement/disagreement into our book too.

Jon: I’d like to be as open-minded as Richard. But I can’t, because I pull my hair out every time I hear another minion of cultural heritage fixated on fixity. Sure, it’s nifty that each digital file has a unique cryptographic signature we can confirm after each migration. The best thing about checksums is that they are straightforward, and many preservation tools (and even some operating systems) already incorporate such checks by default. But this seems to me a tiny sliver of a far bigger digital preservation problem, and to blow it out of proportion is to perpetuate the myth that mathematical replication is cultural preservation.
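For readers outside preservation circles, the fixity check being debated here is usually nothing more exotic than a cryptographic hash comparison; a minimal sketch in Python (the file name and stored digest are hypothetical):

```python
import hashlib

def sha256_checksum(path, chunk_size=1 << 20):
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# A fixity check compares the digest recorded at ingest with a fresh one
# computed after a migration or copy; a mismatch means the bits changed.
recorded_digest = "<digest recorded at ingest>"          # hypothetical value
current_digest = sha256_checksum("digitized_film.mkv")   # hypothetical file
if current_digest != recorded_digest:
    print("Fixity failure: the bitstream differs from what was ingested.")
else:
    print("Checksums match: the bits are identical (whatever that proves).")
```

Jon's point is that passing such a test says nothing about whether the experience the file encodes can still be delivered.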

Two files with different passages of 1s and 0s automatically have different checksums but may still offer the same experience; for example, two copies of a digitized film may differ by a few frames but look identical to the human eye. The point of digitizing a Stanley Kubrick film isn’t to create a new mathematical artifact with its own unchanging properties, but to capture for future generations the experience us old timers had of watching his cinematic genius in celluloid. As a custodian of culture, my job isn’t to ensure my DVD of A Clockwork Orange is faithful to some technician’s choices when digitizing the film; it’s to ensure it’s faithful to Kubrick’s choices as a filmmaker.

Furthermore, there’s no guarantee that born-digital files with impeccable checksums will bear any relationship to the experience of an actual user. Engineer and preservationist Bruno Bachiment gives the example of an archivist who sets a Web spider loose on a website, only to have the website’s owners update it in the middle of the crawling process. (This happens more often than you might think.) Monthly checksums will give the archivist confidence that she’s archived that website, but in fact her WARC files do not correspond to any digital artifact that has ever existed in the real world. Her chimera is a perversion caused by the capturing process–like those smartphone panoramas of a dinner where the same waiter appears at both ends of the table.

As in nearly all storage-based solutions, fixity does little to help capture context.  We can run checksums on the Riverside “King Lear” till the cows come home, and it still won’t tell us that boys played women’s parts, or that Elizabethan actors spoke with rounded vowels that sound more like a contemporary American accent than the King’s English, or how each generation of performers has drawn on the previous for inspiration. Even on a manuscript level, a checksum will only validate one of many variations of a text that was in reality constantly mutating and evolving.

The context for software is a bit more cut-and-dried, and the professionals I know who use emulators like to have checksums to go with their disk images. But checksums don’t help us decide what resolution or pace they should run at, or what to do with past traces of previous interactions, or what other contemporaneous software currently taken for granted will need to be stored or emulated for a work to run in the future.

Finally, even emulation will only capture part of the behaviors necessary to reconstruct digital creations in the networked age, which can depend on custom interfaces, environmental data or networks. You can’t just go around checksumming wearable hardware or GPS receivers or Twitter networks; the software will have to mutate to accommodate future versions of those environments.

So for a curator to run regular tests on a movie’s fixity is like a zookeeper running regular tests on a tiger’s DNA. Just because the DNA tests the same doesn’t guarantee the tiger is healthy, and if you want the species to persist in the long term, you have to accept that the DNA of individuals is certainly going to change.

We need a more balanced approach. You want to fix a butterfly? Pin it to a wall. If you want to preserve a butterfly, support an ecosystem where it can live and evolve.

Trevor: The process of getting our ideas out on the page can often play a role in pushing them in new directions. Are there any things that you brought into working on the book that changed in the process of putting it together?

Richard: A book is certainly slow media; purposefully so. I think the main change I noticed was the ability to put our ideas around preservation practice into a larger context of institutional history and social memory functions. Our previous expressions in journal articles or conference presentation simply did not allow us time to do that and, as stated earlier, I feel that both are important in the full consideration of preservation.

Jon: When Richard first approached me about writing this book, I thought, well it’s gonna be pretty tedious because it seemed we would be writing mostly about our own projects. At the time I was only aware of a single emulation testbed in a museum, one software package for documenting opinions on future states of works, and no more conferences and cross-institutional initiatives on variable media preservation than I could count on one hand.

Fortunately, it took us long enough to get around to writing the book (I’ll take the blame for that) that we were able to discover and incorporate like-minded efforts cropping up across the institutional spectrum, from DOCAM and ZKM to Preserving Virtual Worlds and JSMESS. Even just learning how many art museums now incorporate something as straightforward as an artist’s questionnaire into their acquisition process! That was gratifying and led me to think we are all riding the crest of a wave that might bear the digital flotsam of today’s culture into the future.

Trevor: The book covers a lot of ground, focusing on a range of issues and offering myriad suggestions for how various stakeholders could play a role in ensuring access to variable media works into the future. In all of that, is there one message or issue in the work that you think is the most critical or central?

Richard: After expanding our ideas in a book, it’s difficult to come back to tweet format, but I’ll try…

Change will happen. Don’t resist it; use it, guide it. Let art breathe; it will tell you what it needs.

Jon: And don’t save documents in Microsoft Word.

Open Knowledge Foundation: Congratulations to the Panton Fellows 2013-2014

Wed, 2014-11-26 11:51

Samuel Moore, Rosie Graves and Peter Kraker are the 2013-2014 Open Knowledge Panton Fellows – tasked with experimenting, exploring and promoting open practices through their research over the last twelve months. They just posted their final reports so we’d like to heartily congratulate them on an excellent job and summarise their highlights for the Open Knowledge community.

Over the last two years the Panton Fellowships have supported five early career researchers to further the aims of the Panton Principles for Open Data in Science alongside their day to day research. The provision of additional funding goes some way towards this aim, but a key benefit of the programme is boosting the visibility of the Fellow’s work within the open community and introducing them to like-minded researchers and others within the Open Knowledge network.

On stage at the Open Science Panel Vienna (Photo by FWF/APA-Fotoservice/Thomas Preiss)

Peter Kraker (full report) is a postdoctoral researcher at the Know-Centre in Graz and focused his fellowship work on two facets: open and transparent altmetrics and the promotion of open science in Austria and beyond. During his Fellowship Peter released the open source visualization Head Start, which gives scholars an overview of a research field based on relational information derived from altmetrics. Head Start continues to grow in functionality, has been incorporated into Open Knowledge Labs and is soon to be made available on a dedicated website funded by the fellowship.

Peter’s ultimate goal is to have an environment where everybody can create their own maps based on open knowledge and share them with the world. You are encouraged to contribute! In addition Peter has been highly active promoting open science, open access, altmetrics and reproducibility in Austria and beyond through events, presentations and prolific blogging, resulting in some great discussions generated on social media. He has also produced a German summary of open science activities every month and is currently involved in kick-starting a German-speaking open science group through the Austrian and German Open Knowledge local groups.

Rosie with an air quality monitor

Rosie Graves (full report) is a postdoctoral researcher at the University of Leicester and used her fellowship to develop an air quality sensing project in a primary school. This wasn’t always an easy ride: the sensor was successfully installed and an enthusiastic set of schoolchildren were on board, but a technical issue meant that data collection was cut short, so Rosie plans to resume in the New Year. Further collaborations on crowdsourcing and school involvement in atmospheric science were even more successful, including a pilot rain gauge measurement project and development of a cheap, open source air quality sensor which is sure to be of interest to other scientists around the Open Knowledge network and beyond. Rosie has enjoyed her Panton Fellowship year and was grateful for the support to pursue outreach and educational work:

“This fellowship has been a great opportunity for me to kick start a citizen science project … It also allowed me to attend conferences to discuss open data in air quality which received positive feedback from many colleagues.”

Samuel Moore (full report) is a doctoral researcher in the Centre for e-Research at King’s College London and successfully commissioned, crowdfunded and (nearly) published an open access book on open research data during his Panton Year: Issues in Open Research Data. The book is still in production but publication is due during November and we encourage everyone to take a look. This was a step towards addressing Sam’s assessment of the nascent state of open data in the humanities:

“The crucial thing now is to continue to reach out to the average researcher, highlighting the benefits that open data offers and ensuring that there is a stock of accessible resources offering practical advice to researchers on how to share their data.”

Sam also established the forthcoming Journal of Open Humanities Data with Ubiquity Press during the fellowship. The journal aims to incentivise data sharing by offering publication credit, which in turn makes data citable through the usual academic citation practices. Ultimately the journal will help researchers share their data, recommending repositories and best practices in the field, and will also help them track the impact of their data through citations and altmetrics.

We believe it is vital to provide early career researchers with support to try new open approaches to scholarship and hope other organisations will take similar concrete steps to demonstrate the benefits and challenges of open science through positive action.

Finally, we’d like to thank the Computer and Communications Industry Association (CCIA) for their generosity in funding the 2013-14 Panton Fellowships.

This blog post is a cross-post from the Open Science blog, see the original here.

Hydra Project: Sufia 4.2.0 released

Wed, 2014-11-26 10:01

We are pleased to announce the release of Sufia 4.2.0.

This release of Sufia includes the ability to cache usage statistics in the application database, an accessibility fix, and a number of bug fixes. Thanks to Carolyn Cole, Michael Tribone, Adam Wead, Justin Coyne, and Mike Giarlo for their work on this release.

View the upgrade notes and a complete changelog on the release page: https://github.com/projecthydra/sufia/releases/tag/v4.2.0

LibUX: Who Uses Library Mobile Websites?

Wed, 2014-11-26 05:39

Almost every American owns a cell phone. More than half use a smartphone and sleep with it next to the bed. How many do you think visit their library website on their phone, and what do they do there? Heads up: this one's totally America-centric.

Who uses library mobile websites?

Almost one in five (18%) Americans ages 16-29 have used a mobile device to visit a public library's website or access library resources in the past 12 months, compared with 12% of those ages 30 and older. (Younger Americans' Library Habits and Expectations, 2013)

If that seems anticlimactic, consider that just about every adult in the U.S. owns a cell phone, and almost every millennial in the country is using a smartphone. This is the demographic using library mobile websites, and more than half of them already have a library card.

In 2012, the Pew Internet and American Life Project found that library website users tended to be young, educated, relatively well-off, and often parents.

Those who are most likely to have visited library websites are parents of minors, women, those with college educations, those under age 50, and people living in households earning $75,000 or more.

This correlates with the demographics of smartphone owners for 2014.

What do they want?

This 2013 Pew report makes the point that while digital natives still really like print materials and the library as a physical space, a non-trivial number of them said that libraries should definitely move most library services online. Future-of-the-library blather is often painted in black and white, but it is naive to think physical (or even traditional) services are going away any time soon. Rather, there is already demand for complementary or analogous online services.

Literally. When asked, 45% of Americans ages 16-29 wanted "apps that would let them locate library materials within the library." They also wanted a library-branded Redbox (44%) and an "app to access library services" (42%); by app, I am sure they mean a mobile-first, responsive website. That's what we mean here at #libux.

For more on this non-controversy, listen to our chat with Brian Pichman about web vs native.

Eons ago (2012), the non-mobile-specific breakdown of library web activities looked like this:

  • 82% searched the catalog
  • 72% looked for hours, location, directions, etc.
  • 62% put items on hold
  • 51% renewed them
  • 48% were interested in events and programs – especially old people
  • 44% did research
  • 30% sought readers’ advisory (book reviews or recommendations)
  • 30% paid fines (yikes)
  • 27% signed-up for library programs and events
  • 6% reserved a room

Still, young Americans are way more invested in libraries coordinating more closely with schools, offering literacy programs, and being more comfortable (see chart). They want libraries to continue to be present in the community, do good, and have hipster decor – coffee helps.

Webbification is broadly expected, but it isn't exactly something a library earns kudos for. Offering comparable online services is necessary, like it is necessary that MS Word lets you save your work. A library that doesn't offer complementary or analogous online services isn't buggy so much as it is just incomplete.

Take this away

The emphasis on the library as a physical space shouldn't be shocking. The opportunity for the library as a hyper-locale specifically reflecting its community's temperament isn't one to overlook, especially for as long as libraries tally success by circulation numbers and foot traffic. The whole library-without-walls cliche that went hand-in-hand with all that Web 2.0 stuff tried to show off the library as it could be in the cloud, but "the library as physical space" isn't the same as "the library as disconnected space." The tangibility of the library is a feature to be exploited both for atmosphere and web services. "Getting lost in the stacks" can and should be relegated to something people say rather than something that actually happens.

The main reasons for library web traffic have been, and continue to be, finding content (82%) and figuring out how to get it (72%).

Bullet points
  • Mobile first: The library catalog, as well as basic information about the library, must be optimized for mobile
  • Streamline transactions: placing and removing holds, checking out, paying fines. There is a lot of opportunity here. Basic optimization of the OPAC and cart can go a long way, but you can even enable self checkout, library card registration using something like Facebook login, or payment through Apple Pay.
  • Be online: [duh] Offer every basic service available in person online
  • Improve in-house wayfinding through the web: think Google Indoor Maps
  • Exploit smartphone native services to anticipate context: location, as well as time-of-day, weather, etc., can be used to personalize service or contextually guess at the question the patron needs answered. "It's 7 a.m. and cold outside, have a coffee on us." – or even a simple "Yep. We're open" on the front page (see the sketch after this list).
  • Market the good the library provides to the community to win support (or donations)
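
To make the "anticipate context" bullet concrete, here is a minimal sketch (ours, not from the original post) of how a library front page might show a time-aware message like the "Yep. We're open" example above. It assumes a browser environment; the opening hours and the #status-banner element are hypothetical placeholders, and a real site would read hours from the library's own schedule data, and could layer in geolocation or weather the same way.

```ts
// Minimal sketch, not from the original post: display a contextual
// front-page message based on the time of day. The hours, the
// #status-banner element, and the wording are hypothetical placeholders.

const OPEN_HOUR = 9;   // assumed opening hour, 24-hour clock
const CLOSE_HOUR = 21; // assumed closing hour

// Returns true if the given time falls within the assumed opening hours.
function isOpenNow(now: Date = new Date()): boolean {
  const hour = now.getHours();
  return hour >= OPEN_HOUR && hour < CLOSE_HOUR;
}

// Writes the status message into a banner element, if the page has one.
function renderStatusBanner(): void {
  const banner = document.querySelector("#status-banner");
  if (!banner) return; // nothing to do on pages without the banner
  banner.textContent = isOpenNow()
    ? "Yep. We're open."
    : "We're closed right now, but the digital branch never sleeps.";
}

renderStatusBanner();
```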

The post Who Uses Library Mobile Websites? appeared first on LibUX.

FOSS4Lib Recent Releases: Sufia - 4.2.0

Tue, 2014-11-25 21:54
Package: Sufia
Release Date: Tuesday, November 25, 2014

Last updated November 25, 2014. Created by Peter Murray on November 25, 2014.

The 4.2.0 release of Sufia includes the ability to cache usage statistics in the application database, an accessibility fix, and a number of bug fixes.

Nicole Engard: Bookmarks for November 25, 2014

Tue, 2014-11-25 20:30

Today I found the following resources and bookmarked them:

  • PressForward A free and open-source software project launched in 2011, PressForward enables teams of researchers to aggregate, filter, and disseminate relevant scholarship using the popular WordPress web publishing platform. Just about anything available on the open web is fair game: traditional journal articles, conference papers, white papers, reports, scholarly blogs, and digital projects.

Digest powered by RSS Digest

The post Bookmarks for November 25, 2014 appeared first on What I Learned Today....

Related posts:

  1. Code4Lib Journal
  2. Games & Meebo
  3. The Future of Bibliographic Control: A Time of Transition

District Dispatch: CopyTalk: Free Copyright Webinar

Tue, 2014-11-25 19:48

Join us for our CopyTalk, our copyright webinar, on December 4 at 2pm Eastern Time. This installment of CopyTalk is entitled, “Introducing the Statement of Best Practices in Fair Use of Collections Containing Orphan Works for Libraries, Archives, and Other Memory Institutions”.

Peter Jaszi (American University, Washington College of Law) and David Hansen (UC Berkeley and UNC Chapel Hill) will introduce the “Statement of Best Practices in Fair Use of Collections Containing Orphan Works for Libraries, Archives, and Other Memory Institutions.” This Statement, the most recent community-developed best practices in fair use, is the result of intense discussion group meetings with over 150 librarians, archivists, and other memory institution professionals from around the United States to document and express their ideas about how to apply fair use to collections that contain orphan works, especially as memory institutions seek to digitize those collections and make them available online. The Statement outlines the fair use rationale for use of collections containing orphan works by memory institutions and identifies best practices for making assertions of fair use in preservation and access to those collections.

There is no need to pre-register! Just show up on December 4 at 2pm Eastern time. http://ala.adobeconnect.com/copyright/

The post CopyTalk: Free Copyright Webinar appeared first on District Dispatch.

DPLA: From the Book Patrol: A Parade of Thanksgiving Goodness

Tue, 2014-11-25 19:00

Did you know that over 2,400 items related to Thanksgiving reside at the DPLA? They range from Thanksgiving menus from hotels and restaurants across this great land, to Thanksgiving postcards, to images of the fortunate and less fortunate taking part in Thanksgiving Day festivities.

Here’s just a taste of Thanksgiving at the Digital Public Library of America.

Enjoy, and have a Happy Thanksgiving!

  • Thanksgiving Day, Raphael Tuck & Sons, 1907
  • Macy's Thanksgiving Day Parade, 1932. Photograph by Alexander Alland
  • Japanese Internment Camp – Gila River Relocation Center, Rivers, Arizona. One of the floats in the Thanksgiving Day Harvest Festival, 11/26/1942
  • Annual Presentation of Thanksgiving Turkey, 11/16/1967. Then-President Lyndon Baines Johnson presiding
  • A man with an axe in the midst of a flock of turkeys. Greenville, North Carolina, 1965
  • Woman carries Thanksgiving turkey at Thresher & Kelley Market, Faneuil Hall in Boston, 1952. Photograph by Leslie Jones
  • Thanksgiving Dinner Menu. Hotel Scenley, Pittsburgh, PA. 1900
  • More than 100 wounded Negro soldiers, sailors, marines and Coast Guardsmen were feted by The Equestriennes, a group of Government Girls, at an annual Thanksgiving dinner at Lucy D. Slowe Hall, Washington, D.C. Photograph by Helen Levitt, 1944
  • Volunteers of America Thanksgiving, 22 November 1956. Thanksgiving dinner line in front of Los Angeles Street Post door

District Dispatch: Have questions about WIOA?

Tue, 2014-11-25 18:24

To follow up on the October 27th webinar "$2.2 Billion Reasons to Pay Attention to WIOA," the American Library Association (ALA) today releases a list of resources and tools that provide more information about the Workforce Innovation and Opportunity Act (WIOA). WIOA allows public libraries to be considered additional One-Stop partners, prohibits federal supervision or control over the selection of library resources, and authorizes adult education and literacy activities provided by public libraries as an allowable statewide employment and training activity.

Subscribe to the District Dispatch, ALA's policy blog, to be alerted when additional WIOA information becomes available.

The post Have questions about WIOA? appeared first on District Dispatch.

FOSS4Lib Upcoming Events: Advanced DSpace Training

Tue, 2014-11-25 16:45
Date: Tuesday, March 17, 2015 - 08:00 to Thursday, March 19, 2015 - 17:00
Supports: DSpace

Last updated November 25, 2014. Created by Peter Murray on November 25, 2014.

In-person, 3-day Advanced DSpace Course in Austin, March 17-19, 2015. The total cost of the course is being underwritten with generous support from the Texas Digital Library and DuraSpace. As a result, the registration fee for the course is only $250 for DuraSpace Members and $500 for Non-Members (meals and lodging not included). Seating will be limited to 20 participants.

For more details, see http://duraspace.org/articles/2382

Pages