Almost every American owns a cell phone. More than half use a smartphone and sleep with it next to the bed. How many do you think visit their library website on their phone, and what do they do there? Heads up: this one's totally America-centric.

Who uses library mobile websites?
Almost one in five (18%) Americans ages 16-29 have used a mobile device to visit a public library's website or access library resources in the past 12 months, compared with 12% of those ages 30 and older. (Younger Americans' Library Habits and Expectations, 2013)
If that seems anticlimactic, consider that just about every adult in the U.S. owns a cell phone, and almost every millennial in the country is using a smartphone. This is the demographic using library mobile websites, and more than half of them already have a library card.
In 2012, the Pew Internet and American Life Project found that library website users were often young, educated, not poor, and (maybe) moms or dads.
Those who are most likely to have visited library websites are parents of minors, women, those with college educations, those under age 50, and people living in households earning $75,000 or more.
This correlates with the demographics of smartphone owners for 2014.

What do they want?
This 2013 Pew report makes the point that while digital natives still really like print materials and the library as a physical space, a non-trivial number of them said that libraries should definitely move most library services online. Future-of-the-library blather is often painted in black and white, but it is naive to think physical, or even traditional, services are going away any time soon. Rather, there is already demand for complementary or analogous online services.
Literally. When asked, 45% of Americans ages 16-29 wanted “apps that would let them locate library materials within the library.” They also wanted a library-branded Redbox (44%) and an “app to access library services” (42%). By “app” I am sure they mean a mobile-first, responsive website. That’s what we mean here at #libux.
For more on this non-controversy, listen to our chat with Brian Pichman about web vs native.
Eons ago (2012), the non-mobile-specific breakdown of library web activities looked like this:
- 82% searched the catalog
- 72% looked for hours, location, directions, etc.
- 62% put items on hold
- 51% renewed them
- 48% were interested in events and programs (older patrons especially)
- 44% did research
- 30% sought readers’ advisory (book reviews or recommendations)
- 30% paid fines (yikes)
- 27% signed-up for library programs and events
- 6% reserved a room
Still, young Americans are way more invested in libraries coordinating more closely with schools, offering literacy programs, and being more comfortable (chart). They want libraries to continue to be present in the community, do good, and have hipster decor; coffee helps.
Webbification is broadly expected, but it isn’t exactly something that earns kudos. Offering comparable online services is necessary, like it is necessary that MS Word lets you save your work. A library that doesn’t offer complementary or analogous online services isn’t buggy so much as it is just incomplete.

Take this away
The emphasis on the library as a physical space shouldn’t be shocking. The opportunity for the library as a hyper-locale specifically reflecting its community’s temperament isn’t one to overlook, especially for as long as libraries tally success by circulation numbers and foot traffic. The whole library-without-walls cliche that went hand-in-hand with all that Web 2.0 stuff tried to show off the library as it could be in the cloud, but “the library as physical space” isn’t the same as “the library as disconnected space.” The tangibility of the library is a feature to be exploited both for atmosphere and web services. “Getting lost in the stacks” can and should be relegated to just something people say rather than something that actually happens.
The main reason for library web traffic has been and continues to be to find content (82%) and how to get it (72%).

Bullet points
- Mobile first: The library catalog, as well as basic information about the library, must be optimized for mobile
- Streamline transactions: placing and removing holds, checking out, paying fines. There is a lot of opportunity here. Basic optimization of the OPAC and cart can go a long way, but you can even enable self-checkout, library card registration using something like Facebook login, or payment through Apple Pay.
- Be online: [duh] Offer every basic service available in person online
- Improve in-house wayfinding through the web: think Google Indoor Maps
- Exploit smartphone native services to anticipate context: location, as well as time-of-day, weather, etc., can be used to personalize service or contextually guess at the question the patron needs answered. “It’s 7 a.m. and cold outside, have a coffee on us.” Or even a simple “Yep. We’re open” on the front page.
- Market the good the library provides to the community to win support (or donations)
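The contextual-guessing idea in the bullets above can be sketched in a few lines. Everything here is an assumption for illustration: the opening hours, the temperature threshold, and the idea that a weather reading is already in hand from some API.

```python
from datetime import time

# Hypothetical opening hours; a real site would pull these from the ILS.
OPEN, CLOSE = time(7, 0), time(21, 0)

def front_page_message(now, temp_f):
    """Pick a contextual front-page greeting from time of day and weather.

    `now` is a datetime.time and `temp_f` a Fahrenheit reading from any
    weather service; both inputs are stand-ins for this sketch.
    """
    if not (OPEN <= now < CLOSE):
        return "We're closed right now - see you tomorrow."
    if now < time(10, 0) and temp_f < 40:
        return "It's early and cold outside - have a coffee on us."
    return "Yep. We're open."
```

The point is not the specific rules but that a handful of native signals (clock, geolocation, weather) can answer the patron's most likely question before it is asked.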
Last updated November 25, 2014. Created by Peter Murray on November 25, 2014.
The 4.2.0 release of Sufia includes the ability to cache usage statistics in the application database, an accessibility fix, and a number of bug fixes.
Today I found the following resources and bookmarked them:
- PressForward A free and open-source software project launched in 2011, PressForward enables teams of researchers to aggregate, filter, and disseminate relevant scholarship using the popular WordPress web publishing platform. Just about anything available on the open web is fair game: traditional journal articles, conference papers, white papers, reports, scholarly blogs, and digital projects.
Join us for our CopyTalk, our copyright webinar, on December 4 at 2pm Eastern Time. This installment of CopyTalk is entitled, “Introducing the Statement of Best Practices in Fair Use of Collections Containing Orphan Works for Libraries, Archives, and Other Memory Institutions”.
Peter Jaszi (American University, Washington College of Law) and David Hansen (UC Berkeley and UNC Chapel Hill) will introduce the “Statement of Best Practices in Fair Use of Collections Containing Orphan Works for Libraries, Archives, and Other Memory Institutions.” This Statement, the most recent community-developed best practices in fair use, is the result of intense discussion group meetings with over 150 librarians, archivists, and other memory institution professionals from around the United States to document and express their ideas about how to apply fair use to collections that contain orphan works, especially as memory institutions seek to digitize those collections and make them available online. The Statement outlines the fair use rationale for use of collections containing orphan works by memory institutions and identifies best practices for making assertions of fair use in preservation and access to those collections.
There is no need to pre-register! Just show up on December 4 at 2pm Eastern time. http://ala.adobeconnect.com/copyright/
Did you know that over 2,400 items related to Thanksgiving reside at the DPLA? They range from Thanksgiving menus from hotels and restaurants across this great land to Thanksgiving postcards to images of the fortunate and less fortunate taking part in Thanksgiving Day festivities.
Here’s just a taste of Thanksgiving at the Digital Public Library of America.
Enjoy and have a Happy Thanksgiving!

- Thanksgiving Day, Raphael Tuck & Sons, 1907
- Macy’s Thanksgiving Day Parade, 1932. Photograph by Alexander Alland
- Japanese Internment Camp, Gila River Relocation Center, Rivers, Arizona. One of the floats in the Thanksgiving Day Harvest Festival, 11/26/1942
- Annual Presentation of Thanksgiving Turkey, 11/16/1967. Then-President Lyndon Baines Johnson presiding
- A man with an axe in the midst of a flock of turkeys. Greenville, North Carolina, 1965
- Woman carries Thanksgiving turkey at Thresher & Kelley Market, Faneuil Hall in Boston, 1952. Photograph by Leslie Jones
- Thanksgiving Dinner Menu. Hotel Scenley, Pittsburgh, PA. 1900
- More than 100 wounded Negro soldiers, sailors, marines and Coast Guardsmen were feted by The Equestriennes, a group of Government Girls, at an annual Thanksgiving dinner at Lucy D. Slowe Hall, Washington, D.C. Photograph by Helen Levitt, 1944
- Volunteers of America Thanksgiving, 22 November 1956
- Thanksgiving dinner line in front of Los Angeles Street Post door
To follow up on the October 27th webinar “$2.2 Billion Reasons to Pay Attention to WIOA,” the American Library Association (ALA) today releases a list of resources and tools that provide more information about the Workforce Innovation and Opportunity Act (WIOA). The Workforce Innovation and Opportunity Act allows public libraries to be considered additional One-Stop partners, prohibits federal supervision or control over selection of library resources and authorizes adult education and literacy activities provided by public libraries as an allowable statewide employment and training activity.
- Have additional questions about WIOA? Access additional WIOA resources (pdf)
- Missed the October ALA WIOA webinar? Watch the archived WIOA webinar now (Note: You must download the WebEx media player to view the video)
- Download PowerPoint slides from the webinar (ppt)
Subscribe to the District Dispatch, ALA’s policy blog, to be alerted when additional WIOA information becomes available.
An in-person, 3-day Advanced DSpace Course will be held in Austin March 17-19, 2015. The total cost of the course is being underwritten with generous support from the Texas Digital Library and DuraSpace. As a result, the registration fee for the course is only $250 for DuraSpace Members and $500 for Non-Members (meals and lodging not included). Seating will be limited to 20 participants.
For more details, see http://duraspace.org/articles/2382
VSNU, the association representing the 14 Dutch research universities, negotiates on their behalf with journal publishers. Earlier this month they announced that their current negotiations with Elsevier are at an impasse, on the issues of costs and the Dutch government's Open Access mandate:
Negotiations between the Dutch universities and publishing company Elsevier on subscription fees and Open Access have ground to a halt. In line with the policy pursued by the Ministry of Education, Culture and Science, the universities want academic publications to be freely accessible. To that end, agreements will have to be made with the publishers. The proposal presented by Elsevier last week totally fails to address this inevitable change.

In their detailed explanation for scientists (PDF), VSNU elaborates:
During several round[s] of talks, no offer was made which would have led to a real, and much-needed, transition to open access. Moreover, Elsevier has failed to deliver an offer that would have kept the rising costs of library subscriptions at an acceptable level. ... In the meantime, universities will prepare for the possible consequences of an expiration of journal subscriptions. In case this happens researchers will still be able to publish in Elsevier journals. They will also have access to back issues of these journals. New issues of Elsevier journals as of 1-1-2015 will not be accessible anymore.

I assume that this means that post-cancellation access will be provided by Elsevier directly, rather than by an archiving service. The government and the Dutch research funder have expressed support for VSNU's position.
This stand by the Dutch is commendable; the outcome will be very interesting. In a related development, if my marginal French is not misleading me, a new law in Germany allows authors of publicly funded research to make their accepted manuscripts freely available 1 year after initial publication. Both stand in direct contrast to the French "negotiation" with Elsevier:
France may not have any money left for its universities but it does have money for academic publishers.
While university presidents learn that their funding is to be reduced by EUR 400 million, the Ministry of Research has decided, in great secrecy, to pay EUR 172 million to the world leader in scientific publishing, Elsevier.
Don’t miss the Top Technologies Every Librarian Needs to Know Webinar with Presenters: Brigitte Bell, Steven Bowers, Terry Cottrell, Elliot Polak and Ken Varnum
Offered: December 2, 2014
1:00 pm – 2:00 pm Central Time
Register online via the page arranged by session date (login required).
We’re all awash in technological innovation. It can be a challenge to know what new tools are likely to have staying power — and what that might mean for libraries. The recently published Top Technologies Every Librarian Needs to Know highlights a selected set of technologies that are just starting to emerge and describes how libraries might adapt them in the next few years.
In this webinar, join the authors of three chapters from the book as they talk about their technologies and what they mean for libraries.
Hands-Free Augmented Reality: Impacting the Library Future
Presenters: Brigitte Bell & Terry Cottrell
Based on the recent surge of interest in head-mounted augmented reality devices such as the 3D gaming console Oculus Rift and Google’s Glass project, it seems reasonable to expect that the implementation of hands-free augmented reality technology will become common practice in libraries within the next 3-5 years.
The Future of Cloud-Based Library Systems
Presenters: Elliot Polak & Steven Bowers
In libraries, cloud computing technology can reduce the costs and human capital associated with maintaining a 24/7 Integrated Library System while facilitating an up-time that is costly to attain in-house. Cloud-based Integrated Library Systems can leverage a shared system environment, allowing libraries to share metadata records and other system resources while maintaining independent local information, reducing redundant workflows and yielding efficiencies for cataloging/metadata and acquisitions departments.
Library Discovery: From Ponds to Streams
Presenter: Ken Varnum
Rather than exploring focused ponds of specialized databases, researchers now swim in oceans of information. What is needed is neither ponds (too small in our interdisciplinary world) nor oceans (too broad and deep for most needs), but streams: dynamic, context-aware subsets of the whole, tailored to the researcher’s short- or long-term interests.
Register Online now to join us for what is sure to be an excellent and informative webinar.
Open Knowledge Foundation: Code for Africa & Open Knowledge Launch Open Government Fellowship Pilot Programme: Apply Today
Open Knowledge and Code for Africa are pleased to announce the launch of our pilot Open Government Fellowship programme. The six-month programme seeks to empower the next generation of leaders in the field of open government.
We are looking for candidates that fit the following profile:
- Currently engaged in the open government and/or related communities. We are looking to support individuals already actively participating in the open government community
- Understands the role of civil society and citizen based organisations in bringing about positive change through advocacy and campaigning
- Understands the role and importance of monitoring government commitments on open data as well as on other open government policy related issues
- Has facilitation skills and enjoys community-building (both online and offline).
- Is eager to learn from and be connected with an international community of open government experts, advocates and campaigners
- Currently living and working in Africa. Due to limited resources and our desire to develop a focused and impactful pilot programme, we are limiting applications to those currently living and working in Africa. We hope to expand the programme to the rest of the world starting in 2015.
The primary objective of the Open Government Fellowship programme is to identify, train and support the next generation of open government advocates and community builders. As you will see in the selection criteria, the most heavily weighted item is current engagement in the open government movement at the local, national and/or international level. Selected candidates will be part of a six-month fellowship pilot programme where we expect you to work with us for an average of six days a month, including attending online and offline trainings, organising events, and being an active member of the Open Knowledge and Code for Africa communities.
Fellows will be expected to produce tangible outcomes during their fellowship, but what those outcomes are will be up to the fellows to determine. In the application, we ask fellows to describe their vision for their fellowship or, to put it another way, to lay out what they would like to accomplish. We could imagine fellows working with a specific government department or agency to make a key dataset available, used, and useful by the community, or organising a series of events addressing a specific topic or challenge citizens are currently facing. We do not wish to be prescriptive; there are countless possibilities for outcomes, but successful candidates will demonstrate a vision with clear, tangible outcomes.
To support fellows in achieving these outcomes, all fellows will receive a stipend of $1,000 per month in addition to a project grant of $3,000 to spend over the course of your fellowship. Finally, a travel stipend is available for each fellow for national and/or international travel related to furthering the objective of their fellowship.
There are up to 3 fellowship positions open for the February to July 2015 pilot programme. Due to resourcing, we will only be accepting fellowship applications from individuals living and working in Africa. Furthermore, in order to ensure that we are able to provide fellows with strong local support during the pilot phase, we are targeting applicants from the following countries where Code for Africa and/or Open Knowledge already have existing networks: Angola, Burkina Faso, Cameroon, Ghana, Kenya, Morocco, Mozambique, Mauritius, Namibia, Nigeria, Rwanda, South Africa, Senegal, Tunisia, Tanzania, and Uganda. We are hoping to roll out the programme in other regions in autumn 2015. If you are interested in the fellowship but not currently located in one of the target countries, please get in touch.
Do you have questions? See more about the Fellowship Programme here and have a look at this Frequently Asked Questions (FAQ) page. If this doesn’t answer your question, email us at Katelyn[dot]Rogers[at]okfn.org
Not sure if you fit the profile? Drop us a line!
Convinced? Apply now to become an Open Government Fellow. If you would prefer to submit your application in French or Portuguese, translations of the application form are available in French here and in Portuguese here.
The application will be open until the 15th of December 2014 and the programme will start in February 2015. We are looking forward to hearing from you!
PeerLibrary’s groups and collections functionality is especially suited towards educators running classes that involve reading and discussing various academic publications. This week we would like to highlight one such collection, created for a graduate level computer science class taught by Professor John Kubiatowicz at UC Berkeley. The course, Advanced Topics in Computer Systems, requires weekly readings which are handily stored on the PeerLibrary platform for students to read, discuss, and collaborate outside of the typical classroom setting. Articles within the collection come from a variety of sources, such as the publicly available “Key Range Locking Strategies” and the closed access “ARIES: A Transaction Recovery Method”. Even closed access articles, which hide the article from unauthorized users, allow users to view the comments and annotations!
Gates Foundation to require immediate free access for journal articles
By Jocelyn Kaiser 21 November 2014 1:30 pm
Breaking new ground for the open-access movement, the Bill & Melinda Gates Foundation, a major funder of global health research, plans to require that the researchers it funds publish only in immediate open-access journals.
The policy doesn’t kick in until January 2017; until then, grantees can publish in subscription-based journals as long as their paper is freely available within 12 months. But after that, the journal must be open access, meaning papers are free for anyone to read immediately upon publication. Articles must also be published with a license that allows anyone to freely reuse and distribute the material. And the underlying data must be freely available.
Is this going to work? Will researchers be able to comply with these requirements without harm to their careers? Does the Gates Foundation fund enough research that new open access venues will open up to publish this research (and if so how will their operation be funded?), or do sufficient venues already exist? Will Gates Foundation grants include funding for “gold” open access fees?
I am interested to find out. I hope this article is accurate about what they’re doing, and am glad they are doing it if so.
I note that the policy mentions “including any underlying data sets.” Do they really mean that underlying data sets used for all publications “funded, in whole or in part, by the foundation” must be published? I hope so. Requiring “underlying data sets” to be available at all is in some ways just as big as, or bigger than, requiring them to be available open access.
Last updated November 24, 2014. Created by Peter Murray on November 24, 2014.
Join BitCurator users from around the globe for a hands-on day focused on current use and future development of the BitCurator digital software environment. Hosted by the BitCurator Consortium (BCC), this event will be grounded in the practical, boots-on-the-ground experiences of digital archivists and curators. Come wrestle with current challenges—engage in disc image format debates, investigate emerging BitCurator integrations and workflows, and discuss the “now what” of handling your digital forensics outputs.
Slate recently published a series of maps illustrating the languages other than English spoken in each of the fifty US states. In nearly every state, the most commonly spoken non-English language was Spanish. But when Spanish is excluded as well as English, a much more diverse – and sometimes surprising – landscape of languages is revealed, including Tagalog in California, Vietnamese in Oklahoma, and Portuguese in Massachusetts.
Public library collections often reflect the attributes and interests of the communities in which they are embedded. So we might expect that public library collections in a given state will include relatively high quantities of materials published in the languages most commonly spoken by residents of the state. We can put this hypothesis to the test by examining data from WorldCat, the world’s largest bibliographic database.
WorldCat contains bibliographic data on more than 300 million titles held by thousands of libraries worldwide. For our purposes, we can filter WorldCat down to the materials held by US public libraries, which can then be divided into fifty “buckets” representing the materials held by public libraries in each state. By examining the contents of each bucket, we can determine the most common language other than English found within the collections of public libraries in each state:
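The bucketing described above reduces to a simple tally. A minimal sketch, using toy (state, language-code) pairs as stand-ins for the real WorldCat records; the data and function name are assumptions, not the actual OCLC workflow:

```python
from collections import Counter

# Toy holdings standing in for the per-state WorldCat "buckets":
# one (state, MARC-style language code) pair per cataloged item.
holdings = [
    ("MA", "eng"), ("MA", "fre"), ("MA", "fre"), ("MA", "spa"),
    ("OH", "eng"), ("OH", "ger"), ("OH", "ger"), ("OH", "spa"),
]

def top_language(state, exclude=("eng",)):
    """Most common language in a state's bucket, minus excluded codes."""
    counts = Counter(lang for st, lang in holdings
                     if st == state and lang not in exclude)
    return counts.most_common(1)[0][0] if counts else None
```

Passing `exclude=("eng", "spa")` reproduces the second cut of the analysis, where both English and Spanish are set aside.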
As with the Slate findings regarding spoken languages, we find that in nearly every state, the most common non-English language in public library collections is Spanish. There are exceptions: French is the most common non-English language in public library collections in Massachusetts, Maine, Rhode Island, and Vermont, while German prevails in Ohio. The results for Maine and Vermont complement Slate’s finding that French is the most commonly spoken non-English language in those states – probably a consequence of Maine and Vermont’s shared borders with French-speaking Canada. The prominence of German-language materials in Ohio public libraries correlates with the fact that Ohio’s largest ancestry group is German, accounting for more than a quarter of the state’s population.
Following Slate’s example, we can look for more diverse language patterns by identifying the most common language other than English and Spanish in each state’s public library collections:
Excluding both English- and Spanish-language materials reveals a more diverse distribution of languages across the states. But only a bit more diverse: French now predominates, representing the most common language other than English and Spanish in public library collections in 32 of the 50 states. Moreover, we find only limited correlation with Slate’s findings regarding spoken languages. In some states, the most common non-English, non-Spanish spoken language does match the most common non-English, non-Spanish language in public library collections: for example, Polish in Illinois, Chinese in New York, and German in Wisconsin. But only about a quarter of the states (12) match in this way; the majority do not. Why is this so? Perhaps materials published in certain languages have low availability in the US, are costly to acquire, or both. Maybe other priorities drive collecting activity in non-English materials, for example a need to collect materials in languages that are commonly taught in primary, secondary, and post-secondary education, such as French, Spanish, or German.
Or perhaps a ranking of languages by simple counts of materials is not the right metric. Another way to assess if a state’s public libraries tailor their collections to the languages commonly spoken by state residents is to compare collections across states. If a language is commonly spoken among residents of a particular state, we might expect that public libraries in that state will collect more materials in that language compared to other states, even if the sum total of that collecting activity is not sufficient to rank the language among the state’s most commonly collected languages (for reasons such as those mentioned above). And indeed, for a handful of states, this metric works well: for example, the most commonly spoken language in Florida after English and Spanish is French Creole, which ranks as the 38th most common language collected by public libraries in the state. But Florida ranks first among all states in the total number of French Creole-language materials held by public libraries.
But here we run into another problem: the great disparity in size, population, and ultimately, number of public libraries, across the states. While a state’s public libraries may collect heavily in a particular language relative to other languages, this may not be enough to earn a high national ranking in terms of the raw number of materials collected in that language. A large, populous state, by sheer weight of numbers, may eclipse a small state’s collecting activity in a particular language, even if the large state’s holdings in the language are proportionately less compared to the smaller state. For example, California – the largest state in the US by population – ranks first in total public library holdings of Tagalog-language materials; Tagalog is California’s most commonly spoken language after English and Spanish. But surveying the languages appearing in Map 2 (that is, those that are the most commonly spoken language other than English and Spanish in at least one state), it turns out that California also ranks first in total public library holdings for Arabic, Chinese, Dakota, French, Italian, Korean, Portuguese, Russian, and Vietnamese.
To control for this “large state problem”, we can abandon absolute totals as a benchmark, and instead compare the ranking of a particular language in the collections of a state’s public libraries to the average ranking for that language across all states (more specifically, those states that have public library holdings in that language). We would expect that states with a significant population speaking the language in question would have a state-wide ranking for that language that exceeds the national average. For example, Vietnamese is the most commonly spoken language in Texas other than English and Spanish. Vietnamese ranks fourth (by total number of materials) among all languages appearing in Texas public library collections; the average ranking for Vietnamese across all states that have collected materials in that language is thirteen. As we noted above, California has the most Vietnamese-language materials in its public library collections, but Vietnamese ranks only eighth in that state.
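The rank-versus-national-average comparison behind Map 3 can be sketched directly. The per-state counts below are invented toy numbers chosen only to reproduce the shape of the Texas/California Vietnamese example; the real analysis ranks languages across all fifty buckets:

```python
def language_rank(counts, lang):
    """1-based rank of `lang` in a {language: holdings} dict (1 = most held)."""
    ordered = sorted(counts, key=counts.get, reverse=True)
    return ordered.index(lang) + 1

# Toy per-state holdings totals standing in for the WorldCat tallies.
state_counts = {
    "TX": {"eng": 100, "spa": 50, "fre": 20, "vie": 10, "ger": 5},
    "CA": {"eng": 200, "spa": 90, "fre": 40, "ger": 30, "vie": 2},
}

def beats_national_average(state, lang):
    """True if `lang` ranks better (smaller number) in `state` than its
    average rank across all states that hold any materials in it."""
    ranks = [language_rank(c, lang)
             for c in state_counts.values() if lang in c]
    average = sum(ranks) / len(ranks)
    return language_rank(state_counts[state], lang) < average
```

A state "colored green" in Map 3 is one where `beats_national_average` comes back true for its most commonly spoken non-English, non-Spanish language.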
Map 3 shows the comparison of the state-wide ranking with the national average for the most commonly spoken language other than English and Spanish in each state:
Now it appears we have stronger evidence that public libraries tend to collect heavily in languages commonly spoken by state residents. In thirty-eight states (colored green), the state-wide ranking of the most commonly spoken language other than English and Spanish in public library collections exceeds, often substantially, the average ranking for that language across all states. For example, the most commonly spoken non-English, non-Spanish language in Alaska, Yupik, is only the 10th most common language found in the collections of Alaska’s public libraries. However, this ranking is well above the national average for Yupik (182nd). In other words, Yupik is considerably more prominent in the materials held by Alaskan public libraries than in the nation at large, in the same way that Yupik is relatively more common as a spoken language in Alaska than elsewhere.
As Map 3 shows, six states (colored orange) exhibit a ranking equal to the national average; in all of these cases the language in question is French or German, languages that tend to be highly collected everywhere (the average ranking for French is four, and for German, five). Five states (colored red) exhibit a ranking that is below the national average; in four of the five cases, the state ranking is only one notch below the national average.
The high correlation between languages commonly spoken in a state and the languages commonly found within that state’s public library collections suggests that public libraries are not homogeneous, but in many ways reflect the characteristics and interests of local communities. It also highlights the important service public libraries provide in facilitating information access to community members who may not speak or read English fluently. Finally, public libraries’ collecting activity across a wide range of non-English language materials suggests the importance of these collections in the context of the broader system-wide library resource. Some non-English language materials in public library collections, perhaps the French Creole-language materials in Florida’s public libraries, or the Yupik-language materials in Alaska’s public libraries, could be rare and potentially valuable items that are not readily available in other parts of the country.
Visit your local public library … you may find some unexpected languages on the shelf.
Acknowledgement: Thanks to OCLC Research colleague JD Shipengrover for creating the maps.
Note on data: Data used in this analysis represent public library collections as they are cataloged in WorldCat. Data is current as of July 2013. Reported results may be impacted by WorldCat’s coverage of public libraries in a particular state.
About Brian Lavoie
Brian Lavoie is a Research Scientist in OCLC Research. Brian's research interests include collective collections, the system-wide organization of library resources, and digital preservation.
by Tom Baker, Karen Coyle, Sean Petiya
Published in: Library Hi Tech, v. 32, n. 4, 2014, pp. 562-582. DOI: 10.1108/LHT-08-2014-0081
Open Access Preprint
The above article was just published in Library Hi Tech. However, because the article is a bit dense, as journal articles tend to be, here is a short description of the topic it covers, plus a chance to reply to the article.
We now have a number of multi-level views of bibliographic data. There is the traditional "unit card" view, reflected in MARC, that treats all bibliographic data as a single unit. There is the FRBR four-level model that describes a single "real" item, and three levels of abstraction: manifestation, expression, and work. This is also the view taken by RDA, although employing a different set of properties to define instances of the FRBR classes. Then there is the BIBFRAME model, which has two bibliographic levels, work and instance, with the physical item as an annotation on the instance.
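The FRBR four-level view described above can be sketched as a chain of linked records. This is a minimal illustration in plain Python, not any standard vocabulary: the class and field names are invented here for demonstration only.

```python
from dataclasses import dataclass

# Hypothetical sketch of the FRBR four-level model: Work -> Expression ->
# Manifestation -> Item. Names are illustrative, not from FRBR/RDA itself.

@dataclass
class Work:             # the abstract intellectual creation
    title: str

@dataclass
class Expression:       # a realization of a Work (e.g. a translation)
    work: Work
    language: str

@dataclass
class Manifestation:    # a published embodiment of an Expression
    expression: Expression
    publisher: str
    year: int

@dataclass
class Item:             # the single "real" physical copy
    manifestation: Manifestation
    barcode: str

moby = Work("Moby-Dick")
english = Expression(moby, "en")
penguin = Manifestation(english, "Penguin Classics", 2003)
copy1 = Item(penguin, "39001005432")

# Climbing the chain from a physical copy back to its abstract work:
assert copy1.manifestation.expression.work.title == "Moby-Dick"
```

The contrast with the "unit card" MARC view is that MARC flattens all four levels into one record, while BIBFRAME keeps only two levels (work and instance) and hangs the item off the instance as an annotation.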
In support of these views we have three RDF-based vocabularies:
FRBRer (using OWL)
RDA (using RDFS)
BIBFRAME (using RDFS)
The vocabularies use varying degrees of specification. FRBRer is the most detailed and strict, using OWL to define cardinality, domains and ranges, and disjointness between classes and between properties; there are, however, no sub-classes or sub-properties. BIBFRAME properties are all defined in terms of domains (classes), and there are some sub-class and sub-property relationships. RDA has a single set of classes that are derived from the FRBR entities, and each property has the domain of a single class. RDA also has a parallel vocabulary that defines no class relationships; thus, no properties in that vocabulary result in a class entailment.
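The "class entailment" distinction above can be shown with a tiny sketch. This is plain Python, not an RDF library, and the property and class names are illustrative stand-ins, not the actual RDA terms: if a property declares an rdfs:domain, using it entails an rdf:type triple for the subject; a domain-free property from the parallel vocabulary entails nothing.

```python
# Hypothetical sketch of RDFS domain entailment. A declared rdfs:domain C on
# property p means any triple (s, p, o) entails (s, rdf:type, C). Property
# names below are invented stand-ins, not real RDA vocabulary terms.

domains = {
    "rda:titleOfWork": "rda:Work",   # constrained property: has a domain
    "rdau:title": None,              # parallel unconstrained property: no domain
}

def entail_types(triples):
    """Return the rdf:type triples entailed by the domain declarations."""
    inferred = set()
    for s, p, o in triples:
        c = domains.get(p)
        if c is not None:
            inferred.add((s, "rdf:type", c))
    return inferred

data = [
    ("ex:thing1", "rda:titleOfWork", "Hamlet"),
    ("ex:thing2", "rdau:title", "Hamlet"),
]

# Only the constrained property produces a class entailment.
assert entail_types(data) == {("ex:thing1", "rdf:type", "rda:Work")}
```

Note that ex:thing2 is described with the same literal value, yet nothing can be concluded about what kind of thing it is; that is exactly what "no class entailment" means.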
As I talked about in the previous blog post on classes, the meaning of classes in RDF is often misunderstood, and that is just the beginning of the confusion that surrounds these new technologies. Recently, Bernard Vatant, a creator of the Linked Open Vocabularies (LOV) site, which provides a statistical analysis of the existing linked open data vocabularies and how they relate to each other, said this on the LOV Google+ group:
"...it seems that many vocabularies in LOV are either built or used (or both) as constraint and validation vocabularies in closed worlds. Which means often in radical contradiction with their declared semantics."
What Vatant is saying here is that many of the vocabularies he observes use RDF in the "wrong way." One of the common "wrong ways" is to interpret the axioms that you can define in RDFS or OWL the same way you would interpret them in, say, XSD, or in a relational database design. In fact, the action of the OWL rules (originally called "constraints," which seems to have contributed to the confusion, now called "axioms") can be entirely counter-intuitive to anyone whose view of data is not formed by something called "description logic" (DL).
A simple demonstration of this, which we use in the article, is the OWL axiom for "maximum cardinality." In a non-DL programming world, you often state that a certain element in your data is limited in the number of times it can be used, such as saying that a MARC record can have only one 100 (main author) field. The maximum cardinality of that field is therefore "1". In your non-DL environment, a data creation application will not let you create more than one 100 field; if an application receiving data encounters a record with more than one 100 field, it will signal an error.
The semantic web, in its DL mode, draws an entirely different conclusion. The semantic web has two key principles: open world and non-unique names. Open world means that whatever the state of the data on the web today, it may be incomplete; there can be unknowns. Therefore, you may say that you MUST have a title for every book, but if a look at your data reveals a book without a title, then your book still has a title, it is just an unknown title. That's pretty startling, but what about that 100 field? You've said that there can be only one, so what happens if there are two or three or more of them for a book? That's no problem, says OWL: the rule is that there is only one, but the non-unique name rule says that any "thing" can have more than one name. So when an OWL program encounters multiple author 100 fields, it concludes that these are all different names for the same one thing, as defined by the combination of the non-unique name assumption and the maximum cardinality rule: "There can only be one, so these three must really be different names for that one." It's a bit like Alice in Wonderland, but there's science behind it.
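The two readings of "maximum cardinality 1" just described can be contrasted in a few lines. This is a plain-Python sketch, not a real reasoner, and the record layout and function names are invented for illustration: the closed-world validator signals an error, while the open-world reading merges the values into names for one individual.

```python
# Hypothetical contrast of closed-world validation vs. open-world (OWL-style)
# reasoning over a "max cardinality 1" rule. Plain Python, no reasoner library.

def closed_world_validate(record, field, max_card):
    """Database-style reading: more values than allowed is an error."""
    return len(record.get(field, [])) <= max_card

def open_world_reason(record, field, max_card):
    """OWL-style reading: with maxCardinality 1 and no unique-name
    assumption, multiple values are inferred to be different names for
    the same one individual (an owl:sameAs-style conclusion)."""
    values = record.get(field, [])
    if max_card == 1 and len(values) > 1:
        # Entail that all the values denote a single individual.
        return {"sameIndividual": set(values)}
    return {"sameIndividual": set(values)}

record = {"author": ["Twain, Mark", "Clemens, Samuel"]}

# The closed world flags an error; the open world merges the names.
assert closed_world_validate(record, "author", 1) is False
assert open_world_reason(record, "author", 1) == {
    "sameIndividual": {"Twain, Mark", "Clemens, Samuel"}
}
```

The point of the sketch is that neither behavior is a bug: each follows correctly from its own logic, which is why importing closed-world expectations into OWL produces surprises.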
What you have in your database today is a closed world, where you define what is right and wrong; where you can enforce the rule that required elements absolutely HAVE TO be there; where the forbidden is not allowed to happen. The semantic web standards are designed for the open world of the web where no one has that kind of control. Think of it this way: what if you put a document onto the open web for anyone to read, but wanted to prevent anyone from linking to it? You can't. The links that others create are beyond your control. The semantic web was developed around the idea of a web (aka a giant graph) of data. You can put your data up there or not, but once it's there it is subject to the open functionality of the web. And the standards of RDFS and OWL, which are the current standards that one uses to define semantic web data, are designed specifically for that rather chaotic information ecosystem, where, as the third main principle of the semantic web states, "anyone can say anything about anything."
I have a lot of thoughts about this conflict between the open world of the semantic web and the need for closed world controls over data; in particular whether it really makes sense to use the same technology for both, since there is such a strong incompatibility in the underlying logic of the two approaches. As Vatant implies, many people creating RDF data do so with their minds firmly set in closed world rules, so that applying the axioms of OWL and RDFS to this data on the open web will not yield the expected closed world results.
This is what Baker, Petiya and I address in our paper, as we create examples from FRBRer, RDA in RDF, and BIBFRAME. Some of the results there will probably surprise you. If you doubt our conclusions, visit the site http://lod-lam.slis.kent.edu/wemi-rdf/ that gives more information about the tests, the data and the test results.
 Having no class "entailment" means that the property does not carry with it any "classness" that would indicate that the resource is an instance of that class.
 Programs that interpret the OWL axioms are called "reasoners". A number of different reasoner programs are available that you can call from your software, such as Pellet, HermiT, and others built into software packages like TopBraid.
What technology are you watching on the horizon? Have you seen brilliant ideas that need exposing? Do you really like sharing with your LITA colleagues?
The LITA Top Tech Trends Committee is trying a new process this year and issuing a Call for Panelists. Answer the short questionnaire by 12/10 to be considered. Fresh faces and diverse panelists are especially encouraged to respond. Past presentations can be viewed at http://www.ala.org/lita/ttt.
If you have additional questions check with Emily Morton-Owens, Chair of the Top Tech Trends committee: email@example.com
Help preserve our shared heritage, increase funding for conservation, and strengthen collections care by completing the Heritage Health Information (HHI) 2014 National Collections Care Survey. The HHI 2014 is a national survey on the condition of collections held by archives, libraries, historical societies, museums, scientific research collections, and archaeological repositories. It is the only comprehensive survey to collect data on the condition and preservation needs of our nation’s collections.
The deadline for the Heritage Health Information 2014: A National Collections Care Survey is December 19, 2014. In October, the Heritage Health Information sent invitations to the directors of over 14,000 collecting institutions across the country to participate in the survey. These invitations included personalized login information, which may be entered at hhi2014.com.
Questions about the survey may be directed to hhi2014survey [at] heritagepreservation [dot] org or 202-233-0824.
The post Opportunity knocks: Take the HHI 2014 National Collections Care Survey appeared first on District Dispatch.
An archive of the free webinar “Lib2Gov.org: Connecting Patrons with Legal Information” is now available. Hosted jointly by the American Library Association (ALA) and iPAC, the webinar was designed to help library reference staff build confidence in responding to legal inquiries. Watch the webinar
The session offers information on laws, legal resources and legal reference practices. Participants will learn how to handle a law reference interview, including where to draw the line between information and advice, as well as key legal vocabulary and citation formats. During the webinar, the presenters offer tips on how to assess and choose legal resources for patrons.
Catherine McGuire is the head of Reference and Outreach at the Maryland State Law Library. McGuire currently plans and presents educational programs to Judiciary staff, local attorneys, public library staff and members of the public on subjects related to legal research and reference. She serves as Vice Chair of the Conference of Maryland Court Law Library Directors and the co-chair of the Education Committee of the Legal Information Services to the Public Special Interest Section (LISP-SIS) of the American Association of Law Libraries (AALL).
The post Archive webinar available: Giving legal advice to patrons appeared first on District Dispatch.
A couple of weeks ago we kicked off Islandora Show and Tell by looking at a newly launched site: Barnard Digital Collection. This week, we're going to take a look at a long-standing Islandora site that has been one of our standard answers when someone asks "What's a great Islandora site?" - Fundación Juan March, which will, to our great fortune, be the host of the next European Islandora Camp, set for May 27 - 29, 2015.
It was a foregone conclusion that once we launched this series, we would be featuring FJM sooner rather than later, but it happens that we're visiting them just as they have launched a new collection: La saga Fernández-Shaw y el teatro lírico, containing three archives of a family of Spanish playwrights. This collection is also a great example of why we love this site: innovative browsing tools such as a timeline viewer, carefully curated collections spanning a wide variety of object types living side-by-side (the Knowledge Portal approach really makes this work), and seamless multi-language support.
FJM was also highlighted by D-LIB Magazine this month, as their Featured Digital Collection, a well-deserved honour that explores their collections and past projects in greater depth.
But are there cats? There are. Of course, when running my standard generic Islandora repo search term, it helps to acknowledge that this is a collection of Spanish cultural works and go looking for gatos, which leads to Venta de los gatos (Sale of the Cats), Orientaçao dos gatos (Orientation of Cats), and Todos los gatos son pardos (All Cats are Grey).
Curious about the code behind this repo? FJM has been kind enough to share the details of a number of their initial collections on GitHub. Since they take the approach of using .NET for the web interface instead of using Drupal, the FJM .Net Library may also prove useful to anyone exploring alternate front-ends for their own collections.
Our Show and Tell interview was completed by Luis Martínez Uribe, who will be joining us at Islandora Camp in Madrid as an instructor in the Admin Track in May 2015.
What is the primary purpose of your repository? Who is the intended audience?
We have always said that more than a technical system, the FJM digital repository tries to bring in a new working culture. Since the Islandora deployment, the repository has been instrumental in transforming the way in which data is generated and looked after across the organization. Thus the main purpose behind our repository philosophy is to take an active approach to ensure that our organizational data is managed using appropriate standards, made available via knowledge portals and preserved for future access.
The contents are highly heterogeneous, with materials from the departments of Art, Music, and Conferences, a Library of Spanish Music and Theatre, as well as various outputs from scientific centres and scholarships. Therefore the audience ranges from the general public interested in particular art exhibitions, concerts or lectures to highly specialised researchers in fields such as theatre, sociology or biology.
Why did you choose Islandora?
Back in 2010 the FJM was looking for a robust and flexible repository framework to manage an increasing volume of interrelated digital materials. With preservation in mind, the other most important aspect was the capacity to create complex models to accommodate relations between diverse types of content from multiple sources such as databases, the library catalogue, etc. Islandora provided the flexibility of Fedora plus easy customization powered by Drupal. Furthermore, discoverygarden could kick start us with their services and having Mark Leggott leading the project provided us with the confidence that our library needs and setting would be well understood.
Which modules or solution packs are most important to your repository?
In our latest collections we mostly use Drupal for prototyping. For this reason modules such as the Islandora Solr Client, the PDF Solution Pack or the Book Module are rather useful components to help us test and correct our collections once ingested and before the web layer is deployed.
What feature of your repository are you most proud of?
We like to be able to present the information through easy-to-grasp visualizations and have used timelines and maps in the past. In addition to this, we have started exploring recommendation systems that, once an object is selected, suggest other materials of interest. This has been used in production in “All our art catalogues since 1973”.
Who built/developed/designed your repository (i.e., who was on the team)?
Driven by the FJM Library, Islandora was initially setup at FJM with help from discoverygarden and the first four collections (CLAMOR, CEACS IR, Archive of Joaquín Turina, Archive of Antonia Mercé) were developed in the first year.
After that, the Library and IT Services undertook the development of a small and simple collection of essays to then move into a more complex product like the Personal Library of Cortazar that required more advanced work from web programmers and designers.
In the last year, we have developed a .NET library that allows us to interact with the Islandora components such as Fedora, Solr or RISearch. Since then we have undertaken more complex interdepartmental ventures like the collection “All our art catalogues since 1973”, where the Library, IT and the web team have worked with colleagues in other departments such as digitisation, art and design.
In addition to this we have also kept working on Library collections with help from IT like Sim Sala Bim Library of Illusionism or our latest collection “La Saga de los Fernández Shaw” which merges three different archives with information managed in Archivist Toolkit.
Do you have plans to expand your site in the future?
The knowledge portals developed using Islandora have been well received both internally and externally with many visitors. We plan to expand the collections with many more materials as well as using the repository to host the authority index and the thesaurus collections for the FJM. This will continue our work to ensure that the FJM digital materials are managed, connected and preserved.
What is your favourite object in your collection to show off?
This is a hard one, but if we have to choose a favourite object, it would probably be the art catalogue The Avant-Garde Applied (1890-1950). The catalogue is presented with different photos of the spine and back cover, along with other editions and related catalogues, in a responsive web design with a multi-device progressive-loading viewer.
Our thanks to Luis and to FJM for agreeing to this feature. To learn more about their approach to Islandora, you can query the source by attending Islandora Camp EU2.