
Galen Charlton: Notes on making my WordPress blog HTTPS-only

planet code4lib - Tue, 2015-03-10 02:25

The other day I made this blog, galencharlton.com/blog/, HTTPS-only.  In other words, if Eve wants to sniff what Bob is reading on my blog, she’ll need to do more than just capture packets between my blog and Bob’s computer to do so.

This is not bulletproof: perhaps Eve is in possession of truly spectacular computing capabilities or a breakthrough in cryptography and can break the ciphers. Perhaps she works for one of the sites that host external images, fonts, or analytics for my blog and has access to their server logs containing referrer header information.  Currently these sites are Flickr (images), Gravatar (more images), Google (fonts) and WordPress (site stats – I will be changing this soon, however). Or perhaps she’s installed a keylogger on Bob’s computer, in which case anything I do to protect Bob is moot.

Or perhaps I am Eve and I’ve set up a dastardly plan to entrap people by recording when they read about MARC records, then showing up at Linked Data conferences and disclosing that activity.  Or vice versa. (Note: I will not actually do this.)

So, yes – protecting the privacy of one’s website visitors is hard; often the best we can do is be better at it than we were yesterday.

To that end, here are some notes on how I made my blog require HTTPS.

Certificates

I got my SSL certificate from Gandi.net. Why them?  Their price was OK, I already register my domains through them, and I like their corporate philosophy: they support a number of free and open source software projects; they’re not annoying about up-selling; and they have never (to my knowledge) run sexist advertising, unlike some of their larger and more well-known competitors. But there are, of course, plenty of options for getting SSL certificates, and once Let’s Encrypt is in production, it should be both cheaper and easier for me to replace the certs next year.

I have three subdomains of galencharlton.com that I wanted a certificate for, so I decided to get a multi-domain certificate.  I consulted this tutorial by rtCamp to generate the CSR.

After following the tutorial to create a modified version of openssl.conf specifying the subjectAltName values I needed, I generated a new private key and a certificate-signing request as follows:

openssl req -new -key galencharlton.com.key \
    -out galencharlton.com.csr \
    -config galencharlton.com.cnf \
    -sha256

The openssl command asked me a few questions, the most important being the value of the common name (CN) field; I used “galencharlton.com” for that, as that’s the primary domain that the certificate protects.
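For reference, the SAN-related additions to a copy of the stock openssl.cnf look roughly like this (a sketch only; the subdomain names below are illustrative placeholders, not the actual names on my certificate):

[ req ]
req_extensions = v3_req

[ v3_req ]
subjectAltName = @alt_names

[ alt_names ]
# placeholder names; substitute the actual (sub)domains the cert should cover
DNS.1 = galencharlton.com
DNS.2 = www.galencharlton.com
DNS.3 = blog.galencharlton.com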

I then entered the text of the CSR into a form and paid the cost of the certificate.  Since I am a library techie, not a bank, I purchased a domain-validated certificate.  That means that all I had to do was prove to the certificate’s issuer that I had control of the three domains that the cert should cover.  That validation could have been done via email to an address at galencharlton.com or by inserting a special TXT record into the DNS zone file for galencharlton.com. I ended up choosing to go the route of placing a file on the web server whose contents and location were specified by the issuer; once they (or rather, their software) downloaded the test files, they had some assurance that I had control of the domain.

In due course, I got the certificate.  I put it and the intermediate cert specified by Gandi in the /etc/ssl/certs directory on my server and the private key in /etc/ssl/private/.

Operating System and Apache configuration

Various vulnerabilities in the OpenSSL library or in HTTPS itself have been identified and mitigated over the years: suffice it to say that it is a BEASTly CRIME to make a POODLE suffer a HeartBleed — or something like that.

To avoid the known problems, I wanted to ensure that I had a recent enough version of OpenSSL on the web server and had configured Apache to disable insecure protocols (e.g., SSLv3) and eschew bad ciphers.

The server in question is running Debian Squeeze LTS, but since OpenSSL 1.0.x is not currently packaged for that release, I ended up adding Wheezy to the APT repositories list and upgrading the openssl and apache2 packages.
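The mechanics of that upgrade are roughly the following (a sketch; the mirror URL and any APT pinning details are assumptions, not a record of the exact setup):

# added to /etc/apt/sources.list (mirror URL may differ)
deb http://ftp.debian.org/debian wheezy main

# then pull just the newer packages from Wheezy
apt-get update
apt-get install -t wheezy openssl apache2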

For the latter, after some Googling I ended up adapting the recommended Apache SSL virtualhost configuration from this blog post by Tim Janik.  Here’s what I ended up with:

<VirtualHost _default_:443>
    ServerAdmin gmc@galencharlton.com
    DocumentRoot /var/www/galencharlton.com
    ServerName galencharlton.com
    ServerAlias www.galencharlton.com

    SSLEngine on
    SSLCertificateFile /etc/ssl/certs/galencharlton.com.crt
    SSLCertificateChainFile /etc/ssl/certs/GandiStandardSSLCA2.pem
    SSLCertificateKeyFile /etc/ssl/private/galencharlton.com.key

    Header add Strict-Transport-Security "max-age=15552000"

    # No POODLE
    SSLProtocol all -SSLv2 -SSLv3 +TLSv1.1 +TLSv1.2
    SSLHonorCipherOrder on
    SSLCipherSuite "EECDH+ECDSA+AESGCM EECDH+aRSA+AESGCM EECDH+ECDSA+SHA384 EECDH+ECDSA+SHA256 EECDH+aRSA+SHA384 EECDH+aRSA+SHA256 EECDH+AESGCM EECDH EDH+AESGCM EDH+aRSA HIGH !MEDIUM !LOW !aNULL !eNULL !RC4 !MD5 !EXP !PSK !SRP !DSS"
</VirtualHost>
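After changes like these, it is worth checking the configuration syntax and reloading Apache before testing (standard Debian commands):

apache2ctl configtest
service apache2 reload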

I also wanted to make sure that folks coming in via old HTTP links would get permanently redirected to the HTTPS site:

<VirtualHost *:80>
    ServerName galencharlton.com
    Redirect 301 / https://galencharlton.com/
</VirtualHost>

<VirtualHost *:80>
    ServerName www.galencharlton.com
    Redirect 301 / https://www.galencharlton.com/
</VirtualHost>
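A quick sanity check of the redirect is to request the old HTTP URL and look at the response headers, something along these lines:

curl -I http://galencharlton.com/
# expect an "HTTP/1.1 301 Moved Permanently" status
# and a "Location: https://galencharlton.com/" header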

Checking my work

I’m a big fan of the Qualys SSL Labs server test tool, which does a number of things to test how well a given website implements HTTPS:

  • Identifying issues with the certificate chain
  • Checking whether it supports vulnerable protocol versions such as SSLv3
  • Checking whether it supports – and requests – use of sufficiently strong ciphers
  • Checking whether it is vulnerable to common attacks

Suffice it to say that I required a couple iterations to get the Apache configuration just right.

WordPress

To be fully protected, all of the content embedded on a web page served via HTTPS must also be served via HTTPS.  In particular, image URLs must use HTTPS – the redirects in the Apache config are not enough.  Here is the sledgehammer I used to update image links in the blog posts:

create table bkp_posts as select * from wp_posts;

begin;
update wp_posts
   set post_content = replace(post_content, 'http://galen', 'https://galen')
 where post_content like '%http://galen%';
commit;

Whee!
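To confirm the sledgehammer caught everything, a follow-up count along these lines is cheap insurance (the database name and user below are placeholders for whatever the WordPress install actually uses):

# database name and credentials are placeholders
mysql -u wpuser -p wordpress \
  -e "SELECT COUNT(*) FROM wp_posts WHERE post_content LIKE '%http://galen%'"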

I also needed to tweak a couple plugins to use HTTPS rather than HTTP to embed their icons or fetch JavaScript.

Finishing touches

In the course of testing, I discovered a couple more things to tweak:

  • The web server had been using Apache’s mod_php5filter – I no longer remember why – and that was causing some issues when attempting to load the WordPress dashboard.  Switching to mod_php5 resolved that.
  • My domain ownership proof on keybase.io failed after the switch to HTTPS.  I eventually tracked that down to the fact that keybase.io doesn’t have a bunch of intermediate certificates in its certificate store that many browsers do. I resolved this by adding a cross-signed intermediate certificate to the file referenced by SSLCertificateChainFile in the Apache config above.

My blog now has an A+ score from SSL Labs. Yay!  Of course, it’s important to remember that this is not a static state of affairs – another big OpenSSL or HTTPS protocol vulnerability could turn that grade to an F.  In other words, it’s a good idea to test one’s website periodically.

FOSS4Lib Upcoming Events: NE regional Hydra meeting

planet code4lib - Mon, 2015-03-09 21:35
Date: Thursday, May 7, 2015 - 09:00 to 16:00
Supports: Hydra

Last updated March 9, 2015. Created by Peter Murray on March 9, 2015.

From the announcement:

A NE Hydra Meeting is being planned for Thursday May 7, 2015 at Brown University and we’d like your input.

LITA: 2015 Kilgour Award Goes to Ed Summers

planet code4lib - Mon, 2015-03-09 21:27

The Library & Information Technology Association (LITA), a division of the American Library Association (ALA), announces Ed Summers as the 2015 winner of the Frederick G. Kilgour Award for Research in Library and Information Technology. The award, which is jointly sponsored by OCLC, is given for research relevant to the development of information technologies, especially work which shows promise of having a positive and substantive impact on any aspect(s) of the publication, storage, retrieval and dissemination of information, or the processes by which information and data is manipulated and managed. The awardee receives $2,000, a citation, and travel expenses to attend the award ceremony at the ALA Annual Conference in San Francisco, where the award will be presented on June 28, 2015.

Ed Summers is Lead Developer at the Maryland Institute for Technology in the Humanities (MITH), University of Maryland. Ed has been working for two decades helping to build connections between libraries and archives and the larger communities of the World Wide Web. During that time Ed has worked in academia, start-ups, corporations and the government. He is interested in the role of open source software, community development, and open access to enable digital curation. Ed has a MS in Library and Information Science and a BA in English and American Literature from Rutgers University.

Prior to joining MITH Ed helped build the Repository Development Center (RDC) at the Library of Congress. In that role he led the design and implementation of the NEH funded National Digital Newspaper Program’s Web application, which provides access to 8 million newspapers from across the United States. He also helped create the Twitter archiving application that has archived close to 500 billion tweets (as of September 2014). Ed created LC’s image quality assurance service that has allowed curators to sample and review over 50 million images. He served as a member of the Semantic Web Deployment Group at the W3C where he helped standardize SKOS, which he put to use in implementing the initial version of LC’s Linked Data service.

Before joining the Library of Congress Ed was a software developer at Follett Corporation where he designed and implemented knowledge management applications to support their early e-book efforts. He was the fourth employee at CheetahMail in New York City, where he led the design of their data management applications. And prior to that Ed worked in academic libraries at Old Dominion University, the University of Illinois and Columbia University where he was mostly focused on metadata management applications.

Ed likes to use experiments to learn about the Web and digital curation. Examples of this include his work with Wikipedia on Wikistream, which helps visualize the rate of change on Wikipedia, and CongressEdits, which allows Twitter users to follow edits being made to Wikipedia from the Congress. Some of these experiments are social, such as his role in creating the code4lib community, which is an international, cross-disciplinary group of hackers, designers and thinkers in the digital library space.

Notified of the award, Ed said: “It is a great honor to have been selected to receive the Kilgour Award this year. I was extremely surprised since I have spent most of my professional career (so far) as a developer, building communities of practice around software for libraries and archives, rather than traditional digital library research. During this time I have had the good fortune to work with some incredibly inspiring and talented individuals, teams and open source collaborators. I’ve only been as good as these partnerships have allowed me to be, and I’m looking forward to more. I am especially grateful to all those individuals that worked on a free and open Internet and World Wide Web. I remain convinced that this is a great time for library and archives professionals, as the information space of the Web is in need of our care, attention and perspective.”

Members of the 2014-15 Frederick G. Kilgour Award committee are:

  • Tao Zhang, Purdue University (chair)
  • Erik Mitchell, University of California, Berkeley (past chair)
  • Danielle Cunniff Plumer, DCPlumer Associates, LLC
  • Holly Tomren, Drexel University Libraries
  • Jason Simon, Fitchburg State University
  • Kebede Wordofa, Austin Peay State University, and
  • Roy Tennant, OCLC liaison

About LITA

Established in 1966, LITA is the leading organization reaching out across types of libraries to provide education and services for a broad membership of over 3,000 systems librarians, library technologists, library administrators, library schools, vendors and many others interested in leading edge technology and applications for librarians and information providers. For more information, visit www.lita.org.

About OCLC

Founded in 1967, OCLC is a nonprofit, membership, computer library service and research organization dedicated to the public purposes of furthering access to the world’s information and reducing library costs. OCLC Research is one of the world’s leading centers devoted exclusively to the challenges facing libraries in a rapidly changing information environment. It works with the community to collaboratively identify problems and opportunities, prototype and test solutions, and share findings through publications, presentations and professional interactions. For more information, visit www.oclc.org/research.

Questions and Comments

Mary Taylor
Executive Director
Library & Information Technology Association (LITA)
(800) 545-2433 ext 4267
mtaylor@ala.org

 

Nicole Engard: Bookmarks for March 9, 2015

planet code4lib - Mon, 2015-03-09 20:30

Today I found the following resources and bookmarked them:

  • CardKit: A simple, configurable, web-based image creation tool

Digest powered by RSS Digest

The post Bookmarks for March 9, 2015 appeared first on What I Learned Today....

Related posts:

  1. Can you say Kebberfegg 3 times fast
  2. Planning a party or event?
  3. Decipher that Font

Jonathan Rochkind: Preservation in a war zone

planet code4lib - Mon, 2015-03-09 19:24

On the cover of today’s NYTimes (print Washington edition)

Race in Iraq and Syria to Record and Shield Art Falling to ISIS
By ANNE BARNARD MARCH 8, 2015

BAGHDAD — In those areas of Iraq and Syria controlled by the Islamic State, residents are furtively recording on their cellphones damage done to antiquities by the extremist group. In northern Syria, museum curators have covered precious mosaics with sealant and sandbags….

…There was also the United States invasion in 2003, when American troops stood by as looters ransacked the Baghdad museum, a scenario that, Mr. Shirshab suggested, is being repeated today….

…The Babylon preservation plan also includes new documentation of the site, including brick-by-brick scale drawings of the ruins. In the event the site is destroyed, Mr. Allen said, the drawings can be used to rebuild it….

…The American invasion alerted archaeologists to what needed protecting. After damage and looting at many sites, documentation and preservation accelerated. One result was that the Mosul Museum, attacked by the Islamic State, had been digitally cataloged…

…He oversees an informal team of Syrians he has nicknamed the Monuments Men, many of them his former students. They document damage and looting by the Islamic State, pushing for crackdowns on the black market. Recently, the United Nations banned all trade in Syrian artifacts….

…Now, Iraqi colleagues teach conservators and concerned residents simple techniques to use in areas controlled by the Islamic State, such as turning on a cellphone’s GPS function when photographing objects, to help trace damage or theft, or to add sites to the “no-strike” list for warplanes….


Filed under: General

Open Knowledge Foundation: Open Data Day report #1: Highly inspiring activities across the Asia-Pacific

planet code4lib - Mon, 2015-03-09 17:11

Following the global Open Data Day 2015 event, which took place on February 21 with hundreds of events across the globe, we are doing a blog series to highlight some of the great activities that took place. In this first post (of four in total) we start by looking at some of the great events held across Asia and the Pacific. Three more posts will bring similar accounts from the Americas, Africa and Europe in the days to come.

Philippines

In the Philippines, Open Knowledge Philippines and the local School of Data group celebrated International Open Data Day 2015 with back-to-back events on February 20-21, 2015. The extensive event featured talks by Joel Garcia of Microsoft Philippines, Paul De Paula of Drupal Pilipinas, Dr. Sherwin Ona of De La Salle University and Michael Canares of Web Foundation Open Data Labs, Jakarta – alongside community leaders such as Happy Feraren of BantayPH (who is also one of the 2014 School of Data Fellows) and Open Knowledge Ambassador Joseph De Guia. The keynote speaker was Ivory Ong, Outreach Lead of Open Data Philippines, who rightly said that “we need citizens who are ready to use the data, and we need the government and citizens to work together to make the open data initiative successful.”

Talks were followed by an open data hackathon and a data jam. The hackathon used data sets taken from the government open data portal: the General Appropriations Act (GAA) of the Department of Budget and Management (DBM). The students were tasked to develop a web or mobile app that would encourage citizen participation in the grassroots participatory budgeting program of the national government. The winning team was able to develop a web application containing a dashboard of the Philippine National Budget and a “Do-It-Yourself” budget allocation.

Nepal

Another large event took place in Kathmandu, where Open Knowledge Nepal had teamed up with an impressive coalition of partners including open communities such as Free and Open Source Software (FOSS) Nepal Community, Mozilla Nepal, Wikimedians of Nepal, CSIT Association of Nepal, Acme Open Source Community (AOSC) and Open Source Ascol Circle (OSAC). The event had several streams of activities including, among others, a Spending Data Party, a CKAN Localization session, a Data Scrapathon, a MakerFest, a Wikipedia Editathon and a community discussion. Each session had teams of facilitators, and over 60 people took part in the day.

Bangladesh

In Dhaka an event was held by Bangladesh Open Source Network (BdOSN) and Open Knowledge Bangladesh. The event featured a series of distinguished speakers including Jabed Morshed Chowdhury, Joint Secretary of BDOSN and Bangla administrator of Google Developer Group, Nurunnaby Chowdhury Hasive, Ambassador Open Knowledge Bangladesh, Abu Sayed, president of Mukto Ashor, Bayzid Bhuiyan Juwel, General Secretary of Mukto Ashor, Nusrat Jahan, Executive Officer of Janata Bank Limited and Promi Nahid, BdOSN coordinator – who all discussed various topics and issues of open data including what open data is, how it works, where Bangladesh fits in and more. Moreover those interested in working with open data were introduced to various tools of Open Knowledge.

Tajikistan

A community initiative in Tajikistan took place in partnership with the magazine ICT4D under the banner of “A day of open data in Tajikistan”. The event was held at the Centre for Information Technology and Communications in the Office of Education in Dushanbe, and brought together designers, developers, statisticians and others who had ideas for the use of open data, or desires to find interesting projects to contribute to, as well as to learn how to visualize and analyze data. With participants both experienced and brand new to the topic, the event aimed to ensure that every citizen had the opportunity to learn and to help the global community of open data develop.

Among the activities were basic introductions to open data and discussions about how the local government could contribute to the creation of open data. There were also discussions about the involvement of local non-profit organizations and companies in the use of open data for products and missions, as well as trainings and other hands-on activities for the participants.

India

Open Knowledge India, with support from the National Council of Education Bengal and the Open Knowledge micro grants, organised the India Open Data Summit on February 28. It was the first-ever Data Summit of its kind held in India and was attended by Open Data enthusiasts from all over India. Talks and workshops were held throughout the day, revolving around Open Science, Open Education, Open Data and Open GLAM in general, but also zooming in on concrete projects, for instance:

  • The Open Education Project, run by Open Knowledge India, which aims to complement the government’s efforts to bring the light of education to everyone. The project seeks to build a platform that would offer the Power of Choice to the children in matters of educational content, and on the matter of open data platforms, CKAN (http://ckan.org/) was also discussed.
  • Opening up research data of all kinds was another point that was discussed. India has recently passed legislature ensuring that all government funded research results will be in the open.
  • Open governance not only at the national level, but even at the level of local governments, was something that was discussed with seriousness. Everyone agreed that in order to reduce corruption, open governance is the way to go. Encouraging the common man to participate in the process of open governance is another key point that was stressed upon. India is the largest democracy in the world and this democracy is very complex too. Greater use of the power of the crowd in matters of governance can help the democracy a long way by uprooting corruption from the very core.

Overall, the India Open Data Summit, 2015 was a grand success in bringing like-minded individuals together and in giving them a shared platform, where they can join hands to empower themselves. The first major Open Data Summit in India ended with the promise of keeping the ball rolling. Hopefully, in the near future we will see many more such events all over India.

Australia

In Australia they had worked for a few weeks in advance to set up a regional Open Data Census instance, which was then launched on Open Data Day. The projects for the day included drafting a Contributor Guide, creating a Google Sheet to allow people to collect census entries prior to entering them online as well as adding Google Analytics to the site – plus of course submission of data sets.

The launch even drew media attention: CIO Magazine published an article where they covered International Open Data Day, the open data movement in Australia, and the importance of open data in helping the community.

You can read about the process of setting up the Open Data Census in this blog post and follow the Australian Regional Open Data Census team on @AuOpenDataIndex.

Cambodia

The Open Knowledge Cambodia local group, in partnership with Open Development Cambodia & Destination Justice, co-organized a full-day event at Development Innovations Cambodia, with presentations and talks in the morning and a translate-a-thon of the Open Data Handbook into the Khmer language. The event was attended by over 20 participants, including private sector employees, NGO staff, students and researchers.

Watch this space for more Open Data Day reports during the week!

Mark E. Phillips: Item States in our Digital Repository

planet code4lib - Mon, 2015-03-09 12:00

One of the things that I keep coming back to in our digital library system is the set of states that an object can be in and how those states affect various aspects of our system.  Hopefully this post can explain some of them and how they are currently implemented locally.

Hidden vs Non-Hidden

Our main distinction once an item is in our system is whether it is hidden or not.

Hidden means that the item is not viewable by any of our users and that it is only available in our internal Edit system, where a metadata record and basic access to the item exist. If a request for one of these items comes in through our public-facing digital library interfaces, the user will receive a “404 Not Found” response from our system.

If a record is not hidden then it is viewable and discoverable in one of our digital library interfaces.  If an end user tries to access such an item, there may be limitations based on the level of access or on any embargoes that might be present on the item.

In our metadata scheme, UNTL, we notate whether an item is hidden in the following way.  If there is a value of <meta qualifier="hidden">True</meta> then the item is considered hidden.  If there is a value of <meta qualifier="hidden">False</meta> then the item is considered not hidden.  If there is no element with a qualifier of hidden then the value defaults to False in the system and the item is considered not hidden.

This works pretty well for basic situations and with the assumption that nobody will ever make a mistake.

But… People make mistakes.

Deleted Items

The first issue we ran into when we started to scale up our systems is that from time to time we would accidentally load the same resource into the system twice.  This happens for a variety of reasons.  User error on the part of the ingest technician (me) is the major cause.  Also, there are times when the same item will be sent through the digitization/processing queue more than once because of how long some projects take to complete.  There are other situations where the same item will be digitized again because the first instance was poorly scanned, and instead of updating the existing record it is added a second time.  For all of these situations we needed a way of suppressing these records.

Right now we add an element to the metadata record, <meta qualifier="recordStatus">deleted</meta>, which designates that this item has been suppressed in the system and that it should be effectively forgotten.  On the technical side this triggers a delete from the Solr index, which holds our metadata indexes, and the item is then gone.
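For illustration, the suppression step boils down to a delete against the Solr update handler, something like the following (a hypothetical sketch; the core name and identifier are made up, not our actual values):

# core name and ARK identifier below are made up for illustration
curl 'http://localhost:8983/solr/metadata/update?commit=true' \
  -H 'Content-Type: application/json' \
  -d '{"delete": {"id": "ark:/99999/example123"}}'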

When a user requests an item that is deleted she will currently receive a “404 Not Found”, though we have an open ticket to change this behavior to return a “410 Gone” status code for these items. Another limitation of our current process of just deleting these from our Solr index is that we are not able to mark them as “deleted” in our OAI-PMH repositories, which isn’t ideal. Finally, by purging these items completely from the index we have no way of knowing how many have been suppressed/deleted and no easy way of making the items visible again.

These suppressed records are only deleted from the Solr index; all of their edit history and the records themselves are retained.  In fact, if you know that an item used to be in a non-suppressed state and you remember its ARK identifier, you can still access the full record, remove the recordStatus flag, and un-suppress the item.  Assuming, of course, you remember the identifier.

What does hidden really mean?

So right now we have hidden and non-hidden, deleted and non-deleted.  The deleted items are effectively forgotten about, but what about those hidden items? What do they mean?

Here are some of the reasons that we have hidden records vs non-hidden records.

Metadata Missing

We have a workflow for our system that allows us to ingest stub records which have minimal descriptive metadata in place for items so that they can be edited in our online editing environment by metadata editors around the library, university, and state.  These are loaded with minimal title information (usually just the institution’s unique identifier for the item), the partner and collection that the item belongs to, and any metadata that makes sense to set across a large set of records.  Once in the editing system these items will have metadata created for them over time and be made available to the end user.

Hard Embargoes

While our system has built-in functionality for embargoing an item, this functionality will always make the descriptive metadata for the item available to the public.  In our UNT Scholarly Works Repository, we work to make the contact information for the creators of the item known so that you can “request a copy” of the item if you discover it while it is still under an embargo. Here is an example item that won’t become available until later this year.

Sometimes this is not the desired way of presenting the embargoed items to the public.  For example, we work with a number of newspaper publishers around Texas who make available their PDF print masters to UNT for archiving and presentation via The Portal to Texas History.  They do so with the agreement that we will not make their items available until one, two, or three years after publication. Instead of presenting the end user with an item they aren’t able to access in the Portal, we just keep these items hidden until they are ready to be made available. I have a feeling that this method will be changed in the near future because it is becoming a large metadata management problem.

Finally there are items that we are either digitizing or capturing which we do not have the ability to provide access to because of current copyright restrictions.  We have these items in a hidden state in the system until either an agreement can be reached with the rights holder, or until the item falls into the public domain.

Right now it is impossible for us to identify how many of these items are being held as “embargoed” by the use of a hidden item flag.

Copyright Challenge, or Personally Identifiable Information

We have another small set of items (less than a dozen… I think) that are hidden because there is an active copyright challenge we are working through for the item, or because the item contained personally identifiable information.  Our first step in these situations is to mark the item as hidden until the situation can be resolved.  If the situation has been successfully resolved and access to the item restored, it is marked as un-hidden.

Others?

I’m sure there are other reasons that an item can be hidden within a system; I would be interested in hearing the reasons within your collections, especially if they are different from the ones listed above.  I’m blissfully unaware of any controlled vocabularies for the kinds of states a record might be in within digital library systems, so if there is prior work in this area I’d love to hear about it.

As always feel free to contact me via Twitter if you have questions or comments.

Open Knowledge Foundation: How Open Data Can Change Pakistan

planet code4lib - Mon, 2015-03-09 10:47

This is a cross-post from the brand new Open Knowledge Pakistan Local Group blog. To learn more about (and get in touch with) the new community in Pakistan, go here.

Pakistan is a small country with a high population density. Within 796,096 square kilometres of territory, Pakistan has a population of over 180 million people. Such a large population places immense responsibilities on the government. The majority of the population in Pakistan is uneducated and living in rural areas, with a growing influx of rural people to the urban areas. Thus we can say that the rate of urbanization in Pakistan is rising rapidly. This is a major challenge to the civic planners and the Government of Pakistan.

Urban population (% of total)

State Library of Denmark: Net archive indexing, round 2

planet code4lib - Mon, 2015-03-09 10:30

Using our experience from our initial net archive search setup, Thomas Egense and I have been tweaking options and adding patches to the fine webarchive-discovery from UKWA for some weeks. We will be re-starting indexing Real Soon Now. So what have we learned?

  • Stored text takes up a huge part of the index: nearly half of the total index size. The biggest sinner is, not surprisingly, the content field, but we need that for highlighting and potentially text extraction from search results. As we have discovered that we can avoid storing DocValued fields, at the price of increased document retrieval time, we have turned off storing for several fields.
  • DocValue everything! Or at least a lot more than we did initially. Enabling DocValues for a field and getting low-overhead faceting turned out to be a lot cheaper in disk space than we thought. As every other feature request from the researchers seems to be “We would also like to facet on field X”, our new strategy should make them at least half happy.
  • DocValues are required for some fields. Due to internal limits on facet.method=fc without DocValues, it is simply not possible to do faceting if the number of references gets high.
  • Faceting on outgoing links is highly valuable. Being able to facet on links makes it possible to generate real-time graphs for interconnected websites. Links with host- or domain granularity are easily handled and there is no doubt that those should be enabled. Based on positive experimental results with document-granularity links faceting (see section below), we will also be enabling that.
  • The addition of performance instrumentation made it a lot easier for us to prioritize features. We simply do not have time for everything we can think of and some specific features were very heavy.
  • Face recognition (just finding the location of faces in images, not guessing the persons)  was an interesting feature, but with a so-so success rate. Turning it on for all images would triple our indexing time and we have little need for sampling in this area, so we will not be doing it at all for this iteration.
  • Most prominent colour extraction was only somewhat heavy, but unfortunately the resulting colour turned out to vary a great deal depending on adjustment of extraction parameters. This might be useful if a top-X of prominent colours were extracted, but for now we have turned off this feature.
  • Language detection is valuable, but processing time is non-trivial and rises linear with the number of languages to check. We lowered the number of detected languages from 20 to 10, pruning the more obscure (relative to Danish) languages.
  • Meta-data about harvesting turned out to be important for the researchers. We will be indexing the ID of the harvest-job used for collecting the data, the institution responsible and some specific sub-job-ID.
  • Disabling of image-analysis features and optimization of part of the code-base means faster indexing. Our previous speed was 7-8 days/shard, while the new one is 3-4 days/shard. As we have also doubled our indexing hardware capacity, we expect to do a full re-build of the existing index in 2 months and to catch up to the present within 6 months.
  • Our overall indexing workflow, with dedicated builders creating independent shards of a fixed size, worked very well for us. Besides some minor tweaks, we will not be changing this.
  • We have been happy with Solr 4.8. Solr 5 is just out, but as re-indexing is very costly for us, we do not feel comfortable with a switch at this time. We will do the conservative thing and stick to the old Solr 4-series, which currently means Solr 4.10.4.
Document-level links faceting

The biggest new feature will be document links. This is basically all links present on all web pages at full detail. For a single test shard with 217M documents / 906GB, there were 7 billion references to 640M unique links, the most popular link being used 2.4M times. Doing a full faceted search on *:* was understandably heavy at around 4 minutes, while ad hoc testing of “standard” searches resulted in response times varying from 50 ms to 3500 ms. Scaling up to 25 shards/machine, it will be 175 billion references to 16 billion values. It will be interesting to see the accumulated response time.
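As a rough illustration, faceting on the links field is just an ordinary facet request against Solr; the collection and field names here are assumptions rather than the actual schema:

# collection, field and query values are illustrative placeholders
curl 'http://localhost:8983/solr/netarchive/select?q=domain:example.org&rows=0&facet=true&facet.field=links&facet.limit=25&facet.mincount=2&facet.method=fc'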

We expect this feature to be used to generate visual graphs of interconnected resources, which can be navigated in real-time. Or at least you-have-to-run-to-get-coffee-time. For the curious, here is the histogram for links in the test-shard:

References    #terms
1             425,799,733
2             85,835,129
4             52,695,663
8             33,153,759
16            18,864,935
32            10,245,205
64            5,691,412
128           3,223,077
256           1,981,279
512           1,240,879
1,024         714,595
2,048         429,129
4,096         225,416
8,192         114,271
16,384        45,521
32,768        12,966
65,536        4,005
131,072       1,764
262,144       805
524,288       789
1,048,576     123
2,097,152     77
4,194,304     1

 


Chris Beer: LDPath in 3 examples

planet code4lib - Sun, 2015-03-08 00:00

At Code4Lib 2015, I gave a quick lightning talk on LDPath, a declarative domain-specific language for flattening linked data resources to a hash (e.g. for indexing to Solr).

LDPath can traverse the Linked Data Cloud as easily as working with local resources and can cache remote resources for future access. The LDPath language is also (generally) implementation independent (java, ruby) and relatively easy to implement. The language also lends itself to integration within development environments (e.g. ldpath-angular-demo-app, with context-aware autocompletion and real-time responses). For me, working with the LDPath language and implementation was the first time that linked data moved from being a good idea to being a practical solution to some problems.

Here is a selection from the VIAF record [1]:

<> void:inDataset <../data> ;
   a genont:InformationResource, foaf:Document ;
   foaf:primaryTopic <../65687612> .

<../65687612> schema:alternateName "Bittman, Mark" ;
   schema:birthDate "1950-02-17" ;
   schema:familyName "Bittman" ;
   schema:givenName "Mark" ;
   schema:name "Bittman, Mark" ;
   schema:sameAs <http://d-nb.info/gnd/1058912836>, <http://dbpedia.org/resource/Mark_Bittman> ;
   a schema:Person ;
   rdfs:seeAlso <../182434519>, <../310263569>, <../314261350>, <../314497377>, <../314513297>, <../314718264> ;
   foaf:isPrimaryTopicOf <http://en.wikipedia.org/wiki/Mark_Bittman> .

We can use LDPath to extract the person’s name:
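A minimal LDPath expression for this, in the same style as the book-title example further down (the field name is my own choice), would be something like:

name = foaf:primaryTopic / schema:name :: xsd:string ;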

So far, this is not so different from traditional approaches. But, if we look deeper in the response, we can see other resources, including books by the author.

<../310263569> schema:creator <../65687612> ;
   schema:name "How to Cook Everything : Simple Recipes for Great Food" ;
   a schema:CreativeWork .

We can traverse the links to include the titles in our record:
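One way to do that (a sketch; the field name is mine) is to follow the rdfs:seeAlso links and keep only the resources typed as creative works:

books = foaf:primaryTopic / rdfs:seeAlso[rdf:type is schema:CreativeWork] / schema:name :: xsd:string ;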

LDPath also gives us the ability to write this query using a reverse property selector, e.g.:

books = foaf:primaryTopic / ^schema:creator[rdf:type is schema:CreativeWork] / schema:name :: xsd:string ;

The resource links out to some external resources, including a link to dbpedia. Here is a selection from the record in dbpedia:

<http://dbpedia.org/resource/Mark_Bittman>
   dbpedia-owl:abstract "Mark Bittman (born c. 1950) is an American food journalist, author, and columnist for The New York Times."@en,
      "Mark Bittman est un auteur et chroniqueur culinaire américain. Il a tenu une chronique hebdomadaire pour le The New York Times, appelée The Minimalist (« le minimaliste »), parue entre le 17 septembre 1997 et le 26 janvier 2011. Bittman continue d'écrire pour le New York Times Magazine, et participe à la section Opinion du journal. Il tient également un blog."@fr ;
   dbpedia-owl:birthDate "1950+02:00"^^<http://www.w3.org/2001/XMLSchema#gYear> ;
   dbpprop:name "Bittman, Mark"@en ;
   dbpprop:shortDescription "American journalist, food writer"@en ;
   dc:description "American journalist, food writer", "American journalist, food writer"@en ;
   dcterms:subject <http://dbpedia.org/resource/Category:1950s_births>,
      <http://dbpedia.org/resource/Category:American_food_writers>,
      <http://dbpedia.org/resource/Category:American_journalists>,
      <http://dbpedia.org/resource/Category:American_television_chefs>,
      <http://dbpedia.org/resource/Category:Clark_University_alumni>,
      <http://dbpedia.org/resource/Category:Living_people>,
      <http://dbpedia.org/resource/Category:The_New_York_Times_writers> ;

LDPath allows us to transparently traverse that link, letting us extract the subjects for the VIAF record:
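A sketch of such a program (the field name and the final mapping to strings are my assumptions; one could equally dereference the category URIs and take their labels):

subjects = foaf:primaryTopic / schema:sameAs / dcterms:subject :: xsd:string ;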

[1] If you’re playing along at home, note that, as of this writing, VIAF.org fails to correctly implement content negotiation and returns HTML if it appears anywhere in the Accept header, e.g.:

curl -H "Accept: application/rdf+xml, text/html; q=0.1" -v http://viaf.org/viaf/152427175/

will return a text/html response. This may cause trouble for your linked data clients.

Code4Lib: Code4Lib 2016 will be in Philadelphia

planet code4lib - Sat, 2015-03-07 23:40
Topic: code4lib2016

Code4Lib 2016 will be in Philadelphia, PA. The conference hosting proposal gives an idea of what it will be like. All necessary information will be available here as planning develops, and in the Code4Lib2016 category on the wiki.

District Dispatch: Call for Nominations: Robert L. Oakley Memorial Scholarship

planet code4lib - Fri, 2015-03-06 19:02

Bob Oakley

Librarians interested in intellectual property, public policy and copyright have until June 1, 2015, to apply for the Robert L. Oakley Memorial Scholarship. The annual $1,000 scholarship, which was developed by the American Library Association and the Library Copyright Alliance, supports research and advanced study for librarians in their early-to-mid-careers.

Applicants should provide a statement of intent for use of the scholarship funds. Such a statement should include the applicant’s interest and background in intellectual property, public policy, and/or copyright and their impacts on libraries and the ways libraries serve their communities.

Additionally, statements should include information about how the applicant and the library community will benefit from the applicant’s receipt of scholarship. Statements should be no longer than three pages (1000 words). The applicant’s resume or curriculum vitae should be included in their application.

Applications must be submitted via e-mail to Carrie Russell, crussell@alawash.org. Awardees may receive the Robert L. Oakley Memorial Scholarship up to two times in a lifetime. Funds may be used for equipment, expendable supplies, travel necessary to conduct research or attend conferences, release from library duties, or other reasonable and appropriate research expenses.

The award honors the life accomplishments and contributions of Robert L. Oakley. Professor and law librarian Robert Oakley was an expert on copyright law who wrote and lectured on the subject. He served on the Library Copyright Alliance representing the American Association of Law Libraries and played a leading role in advocating for U.S. libraries and the public they serve at many international forums, including those of the World Intellectual Property Organization and the United Nations Educational, Scientific and Cultural Organization.

Oakley served as the United States delegate to the International Federation of Library Associations Standing Committee on Copyright and Related Rights from 1997-2003. Mr. Oakley testified before Congress on copyright, open access, library appropriations and free access to government documents and was a member of the Library of Congress’ Section 108 Study Group. A valued colleague and mentor for numerous librarians, Oakley was a recognized leader in law librarianship and library management who also maintained a profound commitment to public policy and the rights of library users.

The post Call for Nominations: Robert L. Oakley Memorial Scholarship appeared first on District Dispatch.

LITA: Librarians, Take the Struggle Out of Statistics

planet code4lib - Fri, 2015-03-06 18:50

Check out the brand new LITA web course:
Taking the Struggle Out of Statistics 

Instructor: Jackie Bronicki, Collections and Online Resources Coordinator, University of Houston.

Offered: April 6 – May 3, 2015
A Moodle based web course with asynchronous weekly lectures, tutorials, assignments, and group discussion.

Register Online, page arranged by session date (login required)

Recently, librarians of all types have been asked to take a more evidence-based look at their practices. Statistics is a powerful tool that can be used to uncover trends in library-related areas such as collections, user studies, usability testing, and patron satisfaction studies. Knowledge of basic statistical principles will greatly help librarians achieve these new expectations.

This course will be a blend of learning basic statistical concepts and techniques along with practical application of common statistical analyses to library data. The course will include online learning modules for basic statistical concepts, examples from completed and ongoing library research projects, and also exercises accompanied by practice datasets to apply techniques learned during the course.

Got assessment in your title or duties? This brand new web course is for you!

Here’s the Course Page

Jackie Bronicki’s background is in research methodology, data collection and project management for large research projects including international dialysis research and large-scale digitization quality assessment. Her focus is on collection assessment and evaluation and she works closely with subject liaisons, web services, and access services librarians at the University of Houston to facilitate various research projects.

Date:
April 6, 2015 – May 3, 2015

Costs:

  • LITA Member: $135
  • ALA Member: $195
  • Non-member: $260

Technical Requirements

Moodle login info will be sent to registrants the week prior to the start date. The Moodle-developed course site will include weekly asynchronous lectures and is composed of self-paced modules with facilitated interaction led by the instructor. Students regularly use the forum and chat room functions to facilitate their class participation. The course web site will be open for 1 week prior to the start date for students to have access to Moodle instructions and set their browser correctly. The course site will remain open for 90 days after the end date for students to refer back to course material.

Registration Information

Register Online page arranged by session date (login required)
OR
Mail or fax form to ALA Registration
OR
Call 1-800-545-2433 and press 5
OR
email registration@ala.org

Questions or Comments?

For all other questions or comments related to the course, contact LITA at (312) 280-4269 or Mark Beatty, mbeatty@ala.org.

Harvard Library Innovation Lab: Link roundup March 6, 2015

planet code4lib - Fri, 2015-03-06 17:08

Disney, tanks, Pantone, Bingo and the paperback book.

Raul Lemesoff’s Driveable Library | Mental Floss

Tank bookmobile weapon of mass instruction

Libraries are more popular than Disneyland?

Library visits vs. major tourist attractions

humanæ

Portraits with the exact Pantone color of the skin tone set as the background

Even Composting Comes With Sticker Shock – NYTimes.com

Composting company has customers collect troublesome fruit stickers on a Bingo card to receive free compost.

A Tribute to the Printer Aldus Manutius, and the Roots of the Paperback

The roots of the paperback. Pop into the Grolier Club for a fascinating exhibit.

District Dispatch: Archived webinar on 3D printing available

planet code4lib - Fri, 2015-03-06 17:00

from British Library Sound Archive

Wondering about the legal issues involved with 3D printing and how the library can protect itself from liability when patrons use these technologies in library spaces? Check out our latest archived webinar, “3D printing: policy and intellectual property law”.

The webinar was presented by Charlie Wapner, Policy Analyst (OITP) and Professor Tom Lipinski, Director of the University of Wisconsin-Milwaukee’s I-School.

The post Archived webinar on 3D printing available appeared first on District Dispatch.

CrossRef: New CrossRef Members

planet code4lib - Fri, 2015-03-06 16:12

Updated March 2, 2015

Voting Members

Asian Scientific Publishers
Global Business Publications
Institute of Polish Language
Journal of Case Reports
Journal Sovremennye Tehnologii v Medicine
Penza Psychological Newsletter
QUASAR, LLC
Science and Education, Ltd.
The International Child Neurology Association (ICNA)
Universidad de Antioquia

Represented Members
Balkan Journal of Electrical & Computer Engineering (BAJECE)
EIA Energy in Agriculture
Faculdade de Enfermagem Nova Esperanca
Faculdade de Medicina de Sao Jose do Rio Preto - FAMERP
Gumushane University Journal of Science and Technology Institute
Innovative Medical Technologies Development Foundation
Laboratorio de Anatomia Comparada dos Vertebrados
Nucleo para o Desenvolvimento de Tecnologia e Ambientes Educacionais (NPT)
The Journal of International Social Research
The Korean Society for the Study of Moral Education
Turkish Online Journal of Distance Education
Uni-FACEF Centro Universitario de Franca
Yunus Arastirma Bulteni

Last update February 23, 2015

Voting Members
Asia Pacific Association for Gambling Studies
Associacao Portguesa de Psicologia
Czestochowa University of Technology
Faculty of Administration, University of Ljubljana
Hipatia Press
Indonesian Journal of International Law
International Society for Horticultural Science (ISHS)
Journal of Zankoy Sulaimani - Part A
Methodos.revista de ciencias sociales
Paediatrician Publishers LLC
Physician Assistant Education Association
Pushpa Publishing House
ScienceScript, LLC
Smith and Frankling Academic Publishing Corporation, Ltd, UK
Sociedade Brasileira de Psicologia Organizacional e do Trabalho
Tambov State Technical University
Universidad de Jaen
University of Sarajevo Faculty of Health Sciences

Represented Members
Bitlis Eren University Journal of Science and Technology
Erciyes Iletisim Dergisi
Florence Nightingale Journal of Nursing
IFHAN
Inonu University Journal of the Facult of Education
International Journal of Informatics Technologies
P2M Invest
Saglik Bilimleri ve Meslekleri Dergisi
Samara State University of Architecture and Civil Engineering
Ufa State Academy of Arts

CrossRef: CrossRef Indicators

planet code4lib - Fri, 2015-03-06 15:05

Updated March 2, 2015

Total no. participating publishers & societies 5877
Total no. voting members 3164
% of non-profit publishers 57%
Total no. participating libraries 1931
No. journals covered 38,086
No. DOIs registered to date 72,500,322
No. DOIs deposited in previous month 469,198
No. DOIs retrieved (matched references) in previous month 39,460,869
DOI resolutions (end-user clicks) in previous month 131,824,772

Open Knowledge Foundation: Walkthrough: My experience building Australia’s Regional Open Data Census

planet code4lib - Fri, 2015-03-06 12:47

On International Open Data Day (21 Feb 2015) Australia’s Regional Open Data Census launched. This is the story of the trials and tribulations in launching the census.

Getting Started

Like many open data initiatives come to realise, after filling up a portal with lots of open data, there is a need for quality as well as quantity. I decided to tackle improving the quality of Australia’s open data as part of my Christmas holiday project.

I decided to request a local open data census on 23 Dec (I’d finished my Christmas shopping a day early). While I was waiting for a reply, I read the documentation – it was well written and configuring a web site using Google Sheets seemed easy enough.

The Open Knowledge Local Groups team contacted me early in the new year and introduced me to Pia Waugh and the team at Open Knowledge Australia. Pia helped propose the idea of the census to the leaders of Australia’s state and territory government open data initiatives. I was invited to pitch the census to them at a meeting on 19 Feb – two days before International Open Data Day.

A plan was hatched

On 29 Jan I was informed by Open Knowledge that the census was ready to be configured. Could I be ready to launch in 25 days’ time?

Configuring the census was easy. Fill in the blanks, a list of places, some words on the homepage, look at other censuses and re-use some FAQs, add a logo and some custom CSS. However, deciding on what data to assess brought me to a screaming halt.

Deciding on data

The Global census uses data based on the G8 key datasets definition. The Local census template datasets are focused on local government responsibilities. There was no guidance for countries with three levels of government. How could I get agreement on the datasets and launch in time for Open Data Day?

I decided to make a Google Sheet with tabs for datasets required by the G8, Global Census, Local Census, Open Data Barometer, and Australia’s Foundation Spatial Data Framework. Based on these references I proposed 10 datasets to assess. An email was sent to the open data leaders asking them to collaborate on selecting the datasets.

GitHub is full of friends

When I encountered issues configuring the census, I turned to GitHub. Paul Walsh, one of the team on the OpenDataCensus repository, was my guardian there – steering my issues to the right place, fixing Google Sheet security bugs, deleting a place I created called “Try it out” that I used for testing, and encouraging me to post user stories for new features. If you’re thinking about building your own census, get on GitHub and read what the team has planned and is busy fixing.

The meeting

I presented to Australia’s state and territory open data leaders on 19 Feb, and they requested more time to add extra datasets to the census. We agreed to put a Beta label on the census and launch on Open Data Day.

Ready for lift off

The following day CIO Magazine emailed asking for “a quick comment on International Open Data Day, how you see open data movement in Australia, and the importance of open data in helping the community”. I told them and they wrote about it.

The Open Data Institute Queensland and Open Knowledge blogged and tweeted encouraging volunteers to add to the census on Open Data Day.

I set up Gmail and Twitter accounts for the census and requested the census to be added to the big list of censuses.

Open Data Day

No support requests were received from volunteers submitting entries to the census (it is pretty easy). The Open Data Day projects included:

  • drafting a Contributor Guide.
  • creating a Google Sheet to allow people to collect census entries prior to entering them online.
  • adding Google Analytics to the site.
What next?

We are looking forward to a few improvements including adding the map visualisation from the Global Open Data Index to our regional census. That’s why our Twitter account is @AuOpenDataIndex.

If you’re thinking about creating your own Open Data Census then I can highly recommend the experience, and there is a great team ready to support you.

Get in touch if you’d like to help with Australia’s Open Data Census.

Stephen Gates lives in Brisbane, Queensland, Australia. He has written Open Data strategies and driven their implementation. He is actively involved with the Open Data Institute Queensland contributing to their response to Queensland’s proposed open data law and helping coordinate the localisation of ODI Open Data Certificates. Stephen is also helping organise GovHack 2015 in Brisbane. Australia’s Regional Open Data Census is his first project working with Open Knowledge.

Open Knowledge Foundation: India Open Data Summit 2015

planet code4lib - Fri, 2015-03-06 09:54

This blog post is cross-posted from the Open Knowledge India blog and the Open Steps blog. It is written by Open Knowledge Ambassador Subhajit Ganguly, who is a physicist and an active member of various open data, open science and Open Access movements.

Open Knowledge India, with support from the National Council of Education Bengal and the Open Knowledge micro grants, organised the India Open Data Summit on February 28. It was the first-ever Data Summit of its kind held in India and was attended by Open Data enthusiasts from all over India. The event was held at Indumati Sabhagriha, Jadavpur University. Talks and workshops were held throughout the day. The event succeeded in living up to its promise of being a melting pot of ideas.

The attendee list included people from all walks of life. Students, teachers, educationists, environmentalists, scientists, government officials, people’s representatives, lawyers, people from the tinseltown — everyone was welcomed with open arms to the event. The Chief Guests included the young and talented movie director Bidula Bhattacharjee, a prominent lawyer from the Kolkata High Court Aninda Chatterjee, educationist Bijan Sarkar and an important political activist Rajib Ghoshal. Each one of them added value to the event, making it into a free flow of ideas. The major speakers from the side of Open Knowledge India included Subhajit Ganguly, Priyanka Sen and Supriya Sen. Praloy Halder, who has been working for the restoration of the Sunderbans Delta, also attended the event. Environment data is a key aspect of the conservation movement in the Sunderbans and it requires special attention.

The talks revolved around Open Science, Open Education, Open Data and Open GLAM. Thinking local and going global was the theme from which the discourse followed. Everything was discussed from an Indian perspective, as many of the challenges faced by India are unique to this part of the world. There were discussions on how the Open Education Project, run by Open Knowledge India, can complement the government’s efforts to bring the light of education to everyone. The push was to build up a platform that would offer the Power of Choice to the children in matters of educational content. More and more use of Open Data platforms like CKAN was also discussed. Open governance not only at the national level, but even at the level of local governments, was something that was discussed with seriousness. Everyone agreed that in order to reduce corruption, open governance is the way to go. Encouraging the common man to participate in the process of open governance is another key point that was stressed upon. India is the largest democracy in the world and this democracy is very complex too. Greater use of the power of the crowd in matters of governance can help the democracy a long way by uprooting corruption from the very core.

Opening up research data of all kinds was another point that was discussed. India has recently passed legislature ensuring that all government funded research results will be in the open. A workshop was held to educate researchers about the existing ways of disseminating research results. Further enquiries were made into finding newer and better ways of doing this. Every researcher, who had gathered, resolved to enrich the spirit of Open Science and Open Research. Overall, the India Open Data Summit, 2015 was a grand success in bringing likeminded individuals together and in giving them a shared platform, where they can join hands to empower themselves. The first major Open Data Summit in India ended with the promise of keeping the ball rolling. Hopefully, in near future we will see many more such events all over India.
