news aggregator

xkcd 1313: Something is Wrong on the Internet! [Peter Norvig]

unalog - Sun, 2014-02-09 19:42
Norvig's general solution to the XKCD 1313 regex problem.

RStudio - Home

unalog - Sun, 2014-02-09 19:42

James Cook University, Library Tech: VALA Day 1 Wrap Up

planet code4lib - Sun, 2014-02-09 19:22
The first plenary was delivered by Christine Borgman (UCLA), talking us through the issues around research data management. Persistent URL: http://www.vala.org.au/vala2014-proceedings/vala2014-plenary-1-borgman She laid out some cautionary thoughts: librarians thinking they can handle ‘data’ the way we handle other information bundles is naive in the extreme. She was complimentary of

State Library of Denmark: villadsen

planet code4lib - Sun, 2014-02-09 16:49

tl;dr: Want to use lineman and maven together? Get the lineman-maven-plugin.

At the State and University Library we have traditionally been using Java, JSP and related technologies for our web frontend development, with a healthy dose of javascript in there as well. Our build tool has moved from ant to maven, but as our use of javascript became more advanced and we started developing more single page apps, it became clear that the advanced tools for javascript weren’t readily available in a Java world. The web development community now has a huge selection of tools, all written in javascript and running on node.

We looked at some of the build tools available – from writing our own setup using grunt to the more complete frameworks like yeoman and lineman. In the end lineman was the one that suited us best with its relatively simple approach and focus on sensible defaults.

Integrating lineman with our existing maven setup proved frustrating. We tried the exec-maven-plugin and the maven-antrun-plugin, but neither gave us a nice way of running the correct lineman tasks alongside our jetty server for local development, nor of using lineman to build the javascript parts of our projects and integrating the output into the final war file.

So in the end we developed a small maven plugin ourselves to make this integration easier. The result is the lineman-maven-plugin, available under the Apache License 2.0 on GitHub.
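To give a feel for what lifecycle binding like this looks like, here is a purely hypothetical pom.xml sketch. The coordinates, version, phase, and goal names are illustrative assumptions, not the plugin's documented configuration; consult the lineman-maven-plugin README on GitHub for the real values.

```xml
<!-- Hypothetical sketch only: groupId, artifactId, version, and goal names
     are assumptions for illustration. The idea is to run lineman's build
     during packaging so the maven-war-plugin picks up the compiled
     javascript, which is what the text above describes. -->
<plugin>
  <groupId>dk.statsbiblioteket</groupId>
  <artifactId>lineman-maven-plugin</artifactId>
  <version>1.0</version>
  <executions>
    <execution>
      <phase>prepare-package</phase>
      <goals>
        <goal>build</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```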

Morgan, Eric Lease: Linked data and archival practice: Or, There is more than one way to skin a cat.

planet code4lib - Sat, 2014-02-08 21:30

Two recent experiences have taught me that — when creating some sort of information service — linked data will reside and be mixed in with data collected through any number of Internet techniques. Linked data interfaces will coexist with REST-ful interfaces, or even things as rudimentary as FTP. To the archivist, this means linked data is not the be-all and end-all of information publishing. There is no such thing. To the application programmer, this means you will need to have experience with an ever-growing number of Internet protocols. To both it means, “There is more than one way to skin a cat.”

Semantic Web in Libraries, 2013

Hamburg, Germany

In October of 2013 I had the opportunity to attend the Semantic Web In Libraries conference. [1, 2] It was a three-day event attended by approximately three hundred people who could roughly be divided into two equally sized groups: computer scientists and cultural heritage institution employees. The bulk of the presentations fell into two categories: 1) publishing linked data, and 2) creating information services. The publishers talked about ontologies, human-computer interfaces for data creation/maintenance, and systems exposing RDF to the wider world. The people creating information services were invariably collecting, homogenizing, and adding value to data gathered from a diverse set of information services. These information services were not limited to sets of linked data. They also included services accessible via REST-ful computing techniques, OAI-PMH interfaces, and probably a few locally developed file transfers or relational database dumps as well. These people were creating lists of information services, regularly harvesting content from the services, writing cross-walks, locally storing the content, indexing it, providing services against the result, and sometimes republishing any number of “stories” based on the data. For the second group of people, linked data was certainly not the only game in town.

GLAM Hack Philly

Philadelphia, United States

In February of 2014 I had the opportunity to attend a hackathon called GLAM Hack Philly. [3] A wide variety of data sets were presented for “hacking” against. Some were TEI files describing Icelandic manuscripts. Some was linked data published by the British Museum. Some was XML describing digitized journals created by a vendor-based application. Some of it resided in proprietary database applications describing the location of houses in Philadelphia. Some of it had little or no computer-readable structure at all and described plants. Some of it was the wiki mark-up for local municipalities. After the attendees (there were about two dozen of us) learned about each of the data sets, we self-selected and hacked away at projects of our own design. The results fell into roughly three categories: geo-referencing objects, creating searchable/browsable interfaces, and data enhancement. With the exception of the hack repurposing journal content to create new art, the results were pretty typical for cultural heritage institutions. But what fascinated me was the way we hackers selected our data sets. Namely, the more complete and well-structured the data was, the more hackers gravitated towards it. Of all the data sets, the TEI files were the most complete, accurate, and computer-readable. Three or four projects were done against the TEI. (Heck, I even hacked on the TEI files. [4]) The linked data from the British Museum — very well structured but not quite as thorough as the TEI — attracted a large number of hackers who worked together for a common goal. All the other data sets had only one or two people working on them. What is the moral of the story? There are two of them. First, archivists, if you want people to process your data and do “kewl” things against it, then make sure the data is thorough, complete, and computer-readable. Second, computer programmers, you will need to know a variety of data formats. Linked data is not the only game in town.

Summary

In summary, the technologies described in this Guidebook are not the only way to accomplish the goals of archivists wishing to make their content more accessible. [5] Instead, linked data is just one of many protocols in the toolbox. It is open, standards-based, and simpler rather than more complex. On the other hand, other protocols exist with a different set of strengths and weaknesses. Computer technologists will need to have a larger rather than smaller knowledge of various Internet tools. For archivists, the core of the problem is still the collection and description of content. This — the what of archival practice — remains constant. It is the how of archival practice — the technology — that changes at a much faster pace.

Links
  1. SWIB13 – http://swib.org/swib13/
  2. SWIB13 travelogue – http://blogs.nd.edu/emorgan/2013/12/swib13/
  3. hackathon – http://glamhack.com/
  4. my hack – http://dh.crc.nd.edu/sandbox/glamhack/
  5. Guidebook – http://sites.tufts.edu/liam/

OCLC Dev Network: Time Flies at Developer House

planet code4lib - Fri, 2014-02-07 20:13

Is it really already Friday? I can’t believe Developer House has flown by so quickly! We’ve been so busy coding, testing, chatting, and surviving a snow storm that the week was almost over before we knew it! In addition to all of that, we’ve spent some time this week with OCLC staff, learning more about a few of the services and applications from OCLC Research and how OCLC is making shared library data more visible on the web. We also had some good conversations over lunch with some of our OCLC technical strategists and architects.

Grimmelmann, James: This City Has a Food Inequality Crisis. The Reason Why Will Shock You.

planet code4lib - Fri, 2014-02-07 19:53

I have a parable up on the Washington Post’s WonkBlog, “This restaurant fable explains everything wrong with San Francisco right now.” It’s the story of the city of Junipero, which has a severe problem of food inequality. The wealthy, who work in the city’s booming birdcage industry, are eating like gluttonous emperors, but their fellow citizens are on the brink of starvation. It isn’t pretty:

Junipero’s cultural politics have gotten ugly. Angry protesters are smashing restaurant windows, hurling garbage at anyone who eats in public, and picketing the private food trucks that provide box lunches for birdcage foundries. One prominent birdcage CEO turned the anger back, saying in a TV interview that making birdcages is hard work and requires a full stomach, and people who don’t deserve or appreciate good food should stop complaining about those who will put it to better use. A graffiti mural downtown has become a symbol of the city’s tensions: it depicts a tide of bone-thin children, pressed up against the bars of a locked and gilded birdcage, staring forlornly at platters of grilled-cheese sandwiches stacked within.

But things are not as they first seem in Junipero, and maybe this isn’t just a story of rich versus poor … read the whole thing to find out why.

ALA Equitable Access to Electronic Content: [Heads up] The Day We Fight Back: Feb 11th day of action on surveillance and privacy reforms

planet code4lib - Fri, 2014-02-07 18:37

On Tuesday, February 11th, library supporters are asked to mount a major action to urge Congress to pass major reforms to our surveillance laws. As part of The Day We Fight Back, thousands of websites will host banners urging people to call Congress to stop mass surveillance. You can use ALA’s legislative action center to call in to members of your congressional delegation to urge them to vote for reforms such as those in the USA FREEDOM Act (S.1599 and H.R.3361) and other reform proposals.

Tuesday ALA will send out a blast email to ALA members with instructions and a basic message to help you contact your senators and representatives. Please push other friends and colleagues to do the same. We want to flood the Congressional switchboards.

ALA is making this effort because of the library community’s long-standing commitment to privacy, starting with the protection of patron library records. Grassroots support from ALA has meant a lot to the reform attempts since passage of the USA PATRIOT Act in 2001. Now, with public knowledge about the extensive surveillance of telephone records and other revelations, there is an opportunity to get some real reforms to the surveillance system. That is why we need our library voices to express the need for ending mass surveillance, bringing due process to the FISA court, and restoring rationality to the collection and retention of data about millions of people. This Day of Action is done in collaboration with EFF, ACLU, Amnesty International, and more.

Please be ready to help protect privacy on February 11th.

The post [Heads up] The Day We Fight Back: Feb 11th day of action on surveillance and privacy reforms appeared first on District Dispatch.

Open Knowledge Foundation: What are you doing on Open Data Day?

planet code4lib - Fri, 2014-02-07 17:03

Open Data Day 2014 is February 22 – just two weeks away!

What: It’s a gathering of citizens in cities all around the world to write applications, liberate data, create visualizations and publish analyses using open public data.
Why: To show support for and encourage the adoption of open data policies by the world’s local, regional and national governments.
Where: All around the world, in person and online, in a timezone near you!

At the Open Knowledge Foundation, this is one of our favourite community initiatives of the year, and this time we have the honour of connecting more globally than ever before, supporting and working with our fabulous Open Knowledge Foundation local and working groups and also connecting with other great groups active in the global open space.

We are hosting another G+ hangout for the whole Open Data Day community.

To join: Register for the “What are you doing Open Data Day?” G+ hangout.
Wednesday, February 12, 2014 – 12:00 EST / 5.00pm GMT

It will be recorded for those unable to attend. See the last ODD video.

Some inspiring community Initiatives (among so many!)

A handpicked selection of inspiring community initiatives happening on Open Data Day 2014. Is yours missing? Tell us everything about it!

Spain

In Spain there will be six events on Open Data Day. In Madrid, Open Knowledge Foundation Spain will be organizing the first OKFN Award for Open Knowledge, Open Data and Transparency to recognize extraordinary efforts in the public and private sector on those subjects. The submission process is now open – see more info at http://premio.okfn.es. Additionally, there are events in Seville, Barcelona, Granada, Zaragoza and Vigo.

Canada

There are a number of Open Data Day events in Canada. The first ever Canada Open Data Summit will occur right before Open Data Day. Communities across the country are self-organizing events from Vancouver to Edmonton to Windsor to Sherbrooke. Heather Leson, OKF staffer, will participate in Toronto’s ODD in a roundtable discussion and OKF Ambassador, Diane Mercier, will be participating in Montreal’s Open Data Day hosted by Quebec Ouvert.

Argentina

In Argentina the Open Data Day event, organized by the Buenos Aires Open Government Office, Ministry of Modernization, will focus on going out into the street to play with local data from the Buenos Aires data portal and show neighbors some of the things that can be done with local data. For instance, selected street artists will join the team to process data and work on visualizations that will afterwards be painted as street murals around Buenos Aires. The idea is to do something of greater impact, including not only the data community but also a bigger audience by mixing street art and data. Read more here.

Germany

In Germany Open Data Day will be celebrated in 5 cities. In Berlin, it will be hosted by Wikimedia and put a focus on health and social structure data that the city releases specifically for the event. They will also use this as a launch event for Code for Germany, their new network of local hack groups called OK Labs. Read more on the German Open Data Day website.

Kenya

Kenya passed a new constitution in 2010 that created devolved administrative units (counties), which have been operational since March last year. For Open Data Day in Kenya, organized among others by the Open Institute, Angani and pawa254, the aim is to engage communities in the three major cities (Nairobi, Kisumu and Mombasa) to take advantage of the new system and organize themselves to build a demand-driven open data ecosystem in their communities. Activities around this involve talking to the local governments and institutions to open up their data and/or scraping the data from their websites. Find out more at the Open Data Day Kenya website.

Japan

In Japan a whopping 31 cities are participating in Open Data Day, which is being prepared in an impressive fashion. Last week the organizers — which include Code For Japan and Open Knowledge Foundation Japan — held an Open Data Day press conference to announce the activities being planned, an event that was even covered in national media. Additionally, the group organized a pre-event a few days ago in which some of their fellow local organizers presented plans for Open Data Day activities in different areas (see some of the presentation slides: Open Street Map, ODD Chiba). You can keep track of everything on the Open Data Day Japan website.

After so much inspiration – time to roll up our sleeves!

Call to Action #1: Join Open Data Day! Call to Action #2: Share your stories!
  • We’re collecting info about Open Data Day 2014 events all around the world, organised by the Open Knowledge Foundation community or by any other open community happy to connect with us (welcome! We’re so happy to connect with you!). Add info about your event here!

  • We’ll then gather all the information and resources about your initiatives in a wrap-up blogpost on the OKF main blog and (in this case, only if you feel comfortable about being recognised as part of the OKF community) spread the word about your work also on our OKF Community Stories Tumblr!

Let’s get making and building with open data!

Morgan, Eric Lease: Archival linked data use cases

planet code4lib - Fri, 2014-02-07 02:43

What can you do with archival linked data once it is created? Here are three use cases:

  1. Do simple publishing – At its very root, linked data is about making your data available for others to harvest and use. While the “killer linked data application” has seemingly not reared its head, this does not mean you ought not make your data available as linked data. You won’t see the benefits immediately, but sooner or later (less than 5 years from now), you will see your content creeping into the search results of Internet indexes, into the work of both computational humanists and scientists, and into the hands of esoteric hackers creating one-off applications. Internet search engines will create “knowledge graphs”, and they will include links to your content. The humanists and scientists will operate on your data similarly. Both will create visualizations illustrating trends. They will both quantifiably analyze your content looking for patterns and anomalies. Both will probably create network diagrams demonstrating the flow and interconnection of knowledge and ideas through time and space. The humanist might do all this in order to bring history to life or demonstrate how one writer influenced another. The scientist might study ways to efficiently store your data, easily move it around the Internet, or connect it with data sets created by their apparatus. The hacker (those are the good guys) will create flashy-looking applications that many will think are weird and useless, but the applications will demonstrate how the technology can be exploited. These applications will inspire others, be here one day and gone the next, and over time become more useful and sophisticated.
  2. Create a union catalog – If you make your data available as linked data, and if you find at least one other archive making their data available as linked data, then you can find a third party who will combine them into a triple store and implement a rudimentary SPARQL interface against the union. Once this is done a researcher could conceivably search the interface for a URI to see what is in both collections. The absolute imperative key to success here is the judicious inclusion of URIs in both data sets. This scenario becomes even more enticing with two additional things. First, the more collections in the triple store the better; you cannot have enough collections in the store. Second, the scenario becomes more enticing still when each archive publishes its data using ontologies similar to everybody else’s. Success does not hinge on similar ontologies, but it is significantly enhanced by them. Just as with the relational databases of today, nobody will be expected to query these stores using the native query language (SQL or SPARQL). Instead the interfaces will be much more user-friendly. The properties of classes in ontologies will become facets for searching and browsing. Free-text as well as fielded searching via drop-down menus will become available. As time goes on and things mature, the output from these interfaces will be increasingly informative, easy-to-read, and computable. This means the output will answer questions, be visually appealing, and be available in one or more formats for other computer programs to operate upon.
  3. Tell a story – You and your hosting institution(s) have something significant to offer. It is not just about you and your archive but also about libraries, museums, the local municipality, etc. As a whole you are a local geographic entity. You represent something significant with a story to tell. Combine your linked data with the linked data of others in your immediate area. The ontologies will be a total hodgepodge, at least at first. Now provide a search engine against the result. Maybe you begin with local libraries or museums. Allow people to search the interface and bring together the content of everybody involved. Do not just provide lists of links in search results, but instead create knowledge graphs. Supplement the output of search results with the linked data from Wikipedia, Flickr, etc. You don’t have to be a purist. In a federated search sort of way, supplement the output with content from other data feeds such as (licensed) bibliographic indexes or content harvested from OAI-PMH repositories. Creating these sorts of things on-the-fly will be challenging. On the other hand, you might implement something that is more iterative and less immediate, but more thorough and curated, if you were to select a topic or theme of interest and do your own searching and storytelling. The result would be something that is at once a Web page, a document designed for printing, or something importable into another computer program.
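The union catalog scenario above hinges on one mechanical point: two archives sharing the same URI for the same entity. The sketch below is a toy, dependency-free illustration of that mechanic (a real implementation would use a triple store and SPARQL, e.g. via rdflib); every URI and title in it is a made-up example.

```python
# Toy union catalog: each archive exposes its descriptions as
# (subject, predicate, object) triples, a third party merges them,
# and one lookup on a shared URI spans both collections.
# All URIs and titles below are hypothetical examples.

DC_CREATOR = "http://purl.org/dc/elements/1.1/creator"
DC_TITLE = "http://purl.org/dc/elements/1.1/title"
AUTHOR = "http://viaf.org/viaf/12345"  # the judiciously shared URI

archive_a = [
    ("http://example.org/archiveA/item1", DC_CREATOR, AUTHOR),
    ("http://example.org/archiveA/item1", DC_TITLE, "Letters, 1901-1910"),
]
archive_b = [
    ("http://example.org/archiveB/item9", DC_CREATOR, AUTHOR),
    ("http://example.org/archiveB/item9", DC_TITLE, "Diary, 1905"),
]

# The "triple store" is simply the union of both data sets.
union = archive_a + archive_b

def items_by_creator(triples, creator_uri):
    """Return {item: title} for every item whose creator is creator_uri."""
    items = {s for s, p, o in triples if p == DC_CREATOR and o == creator_uri}
    return {s: o for s, p, o in triples if s in items and p == DC_TITLE}

# Because both archives used the same creator URI, a single query
# now finds records from both collections.
matches = items_by_creator(union, AUTHOR)
for item, title in sorted(matches.items()):
    print(item, "->", title)
```

Without the shared URI the merge would still succeed, but the query would silently return only one archive's records, which is why the text calls shared URIs the imperative key to success.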

This text is a part of a draft sponsored by LiAM — the Linked Archival Metadata: A Guidebook.
