Feed aggregator

John Miedema: “Actually, these last two piles, JUNK and TOUGH, were the piles that gave him the most concern.”

planet code4lib - Mon, 2015-02-16 13:03

Phaedrus is the philosopher-protagonist of the well-known book Zen and the Art of Motorcycle Maintenance by Robert Pirsig. Phaedrus is Robert Pirsig, the author, and his books represent a serious metaphysical inquiry. Lila is the lesser-known sequel, in which Phaedrus refines and organizes his thought. It is these organizational elements that inspired my current software project. In the following quote, Phaedrus describes the information architecture of his project. It is elegant and complete, the kind of scheme found in better-organized folder systems, reflecting the natural development of thought.

In addition to the topic categories, five other categories had emerged. Phaedrus felt these were of great importance:

The first was UNASSIMILATED. This contained new ideas that interrupted what he was doing. They came in on the spur of the moment while he was organizing the other slips or sailing or working on the boat or doing something else that didn’t want to be disturbed. Normally your mind says to these ideas, ‘Go away, I’m busy,’ but that attitude is deadly to Quality. The UNASSIMILATED pile helped solve the problem. He just stuck the slips there on hold until he had the time and desire to get to them.

The next non-topical category was called PROGRAM. PROGRAM slips were instructions for what to do with the rest of the slips. They kept track of the forest while he was busy thinking about individual trees. With more than ten-thousand trees that kept wanting to expand to one-hundred thousand, the PROGRAM slips were absolutely necessary to keep from getting lost.

What made them so powerful was that they too were on slips, one slip for each instruction. This meant the PROGRAM slips were random access too and could be changed and resequenced as the need arose without any difficulty. He remembered reading that John Von Neumann, an inventor of the computer, had said the single thing that makes a computer so powerful is that the program is data and can be treated like any other data. That seemed a little obscure when Phaedrus had read it but now it was making sense.

The next slips were the CRIT slips. These were for days when he woke up in a foul mood and could find nothing but fault everywhere. He knew from experience that if he threw stuff away on these days he would regret it later, so instead he satisfied his anger by just describing all the stuff he wanted to destroy and the reasons for destroying it. The CRIT slips would then wait for days or sometimes months for a calmer period when he could make a more dispassionate judgment.

The next to the last group was the TOUGH category. This contained slips that seemed to say something of importance but didn’t fit into any topic he could think of. It prevented getting stuck on some slip whose place might become obvious later on.

The final category was JUNK. These were slips that seemed of high value when he wrote them down but which now seemed awful. Sometimes it included duplicates of slips he had forgotten he’d written. These duplicates were thrown away but nothing else was discarded. He’d found over and over again that the junk pile is a working category. Most slips died there but some reincarnated, and some of these reincarnated slips were the most important ones he had.

Actually, these last two piles, JUNK and TOUGH, were the piles that gave him the most concern. The whole thrust of the organizing effort was to have as few of these as possible. When they appeared he had to fight the tendency to slight them, shove them under the carpet, throw them out the window, belittle them, and forget them. These were the underdogs, the outsiders, the pariahs, the sinners of his system. But the reason he was so concerned about them was that he felt the quality and strength of his entire system of organization depended on how he treated them. If he treated the pariahs well he would have a good system. If he treated them badly he would have a weak one. They could not be allowed to destroy all efforts at organization but he couldn’t allow himself to forget them either. They just stood there, accusing, and he had to listen.

Pirsig, Robert M. (1991). Lila: An Inquiry into Morals. pp. 25-26.

Alf Eaton, Alf: Visualising political donations

planet code4lib - Sun, 2015-02-15 17:35

Earlier this week I attended a “Big Data Investigation Workshop” run by British Library Labs as part of the International Digital Curation Conference.

The workshop was an introduction to working with tools for cleaning, analysing and visualising collections of data: OpenRefine (which is great but showing its age), Tableau (which is ridiculously impressive) and Gephi (which has fast graph layout but lacks usability).

As the workshop was co-organised by the International Crime Fiction Research Group, the theme of the data was “Crime Fiction”. However, for our project, we decided to look at “Crime Fact”. In particular, we looked at a recent news story in The Independent, which stated that “three senior figures at scandal-hit [HSBC] bank donated £875,000” to the Conservative Party in recent years.

Although the news story didn’t link to any source data, it almost certainly came from the Electoral Commission’s register of donations to political parties.

Running a basic search of the Electoral Commission’s register, with no filters, produced a CSV file containing all registered donations since 2001, which we then loaded into Tableau Public (Tableau’s limited, free desktop application for data visualisation).

Total donations per party

The first visualisation was a simple bar chart of the total donations to each party, including only “political party” recipients, coloured according to the type of donation.

Total donations per individual

The next visualisation was a summary of the donations from the individuals named in the news story. We added a filter on the donor name, searched for their surname and selected those names which matched (there were several variations on each donor’s name in the database), then used Tableau’s grouping to group together the name variations. Pleasingly the totals almost exactly matched those given in the news story, for the three named donors.
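
Outside Tableau, the same filter-and-group step can be sketched in a few lines of pandas. This is only an illustration: the file name, the column names ("DonorName", "Value") and the surnames are hypothetical stand-ins, not the register's actual schema.

import pandas as pd

donations = pd.read_csv("donations.csv")  # hypothetical export of the register

# Money values may arrive as strings like "£1,000.00";
# normalise them to floats before summing.
donations["Value"] = (
    donations["Value"]
    .astype(str)
    .str.replace("£", "", regex=False)
    .str.replace(",", "", regex=False)
    .astype(float)
)

surnames = ["Donorone", "Donortwo", "Donorthree"]  # hypothetical donor surnames
matched = donations[donations["DonorName"].str.contains(
    "|".join(surnames), case=False, na=False)]

# Collapse name variants ("J. Smith", "John Smith", ...) onto the surname
# that matched, mirroring Tableau's manual grouping.
def group_key(name):
    for s in surnames:
        if s.lower() in name.lower():
            return s
    return name

print(matched.assign(Donor=matched["DonorName"].map(group_key))
             .groupby("Donor")["Value"].sum())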

Location of the donors

Getting Tableau to recognise UK postcodes is a bit tricky, as it doesn’t handle the full postcode - we had to write a function to separate out just the first part of the postcode (the outward code). Once this was done, Tableau easily mapped the location of each donor, to produce the final visualisation: a map of each donation to a political party, coloured according to the recipient party and sized according to the value of the donation.
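
The post doesn't show the actual function used, but the outward code of a UK postcode is simply everything before the space, so it amounts to something like this sketch (Python here, for illustration):

def outward_code(postcode):
    """Return the outward code of a UK postcode, e.g. 'SW1A' from 'SW1A 1AA'."""
    if not postcode or " " not in postcode.strip():
        return None  # incomplete or malformed postcode
    return postcode.strip().split()[0].upper()

print(outward_code("sw1a 1aa"))  # SW1A

In Tableau itself, a calculated field along the lines of LEFT([Postcode], FIND([Postcode], " ") - 1) would achieve the same split, though the exact formula used isn't given.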

Alf Eaton, Alf: Force-directed tag clouds

planet code4lib - Sun, 2015-02-15 15:59

I’d been making graphs of Spotify’s “Related Artists” network, but was finding that pieces of the graph often remained disconnected.

To connect these disparate parts of the network, I queried last.fm for the top tags that had been attached to each artist, and added those to the graph.

This brought the network together nicely, so I applied it to a larger data set: all the unique artists that had ever been played on a particular BBC 6 Music radio show.

Dark matter

The full graph of artists and their tags was interesting, but to get a clearer overview of the show’s musical themes, the artist nodes were hidden after the graph had been laid out (using Gephi's "ForceAtlas 2" algorithm).

This left just the tags, laid out in two dimensions, where the most similar tags are closest together and the most frequently used are largest.

As some of the labels were overlapping, I used Gephi’s "Label Adjust" layout algorithm to shift their positions enough that most of the overlapping was avoided.

Here are some examples - I think they summarise the shows' content rather well:

Stuart Maconie’s Freakier Zone · Gilles Peterson · Marc Riley

Unique identifiers

One problem was that when several artists shared the same name, irrelevant tags would be attached to an artist. To avoid this, only the artists that had been given MusicBrainz IDs in the BBC data were included, and these MBIDs were used to query last.fm for tags.
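
The post links to the full graph-building code; as a flavour of the tag-fetching step, here is a minimal sketch of querying last.fm's artist.getTopTags endpoint by MBID (the API key is a placeholder, and error handling is kept to a minimum):

import requests

API_KEY = "YOUR_LASTFM_API_KEY"  # placeholder

def top_tags(mbid, limit=5):
    """Fetch an artist's top last.fm tags, identified by MusicBrainz ID."""
    resp = requests.get(
        "https://ws.audioscrobbler.com/2.0/",
        params={
            "method": "artist.gettoptags",
            "mbid": mbid,
            "api_key": API_KEY,
            "format": "json",
        },
        timeout=10,
    )
    resp.raise_for_status()
    tags = resp.json().get("toptags", {}).get("tag", [])
    return [t["name"] for t in tags[:limit]]

# Each (artist, tag) pair then becomes an edge in the graph.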

Discussion

In a sense, the artists are the “dark matter” of the graph: they pull the tags together and organise their macroscopic structure, but remain invisible in the final, visible map.

It may be that a highly-concentrated cluster of artists (as well as one or two very loosely-connected artists) pushes some tags further apart than they deserve to be.

These word clouds were generated with Gephi, as it handles thousands of nodes easily. I'd like to be able to do the same thing in D3, as Gephi is quite awkward to use, and has cropped the node labels when exporting the above images (it seems to only take the nodes into account when cropping the output, and not their labels).

Here's the (working, but unoptimised) code for building the artists + tags graph data.

Open Library: Digital PML uses BookReader to enhance access to local collections

planet code4lib - Sat, 2015-02-14 19:53

“We’re writing to let you know that we are proud and grateful users of the Internet Archive BookReader software on our new repository of digitized materials, Digital PML.”

So great! Other institutions that would like to use the BookReader can read through this documentation to get started.

Manage Metadata (Diane Hillmann and Jon Phipps): The Jane-athon Report

planet code4lib - Sat, 2015-02-14 19:43

I’ve been back from Chicago for just over a week now, but I’m still reflecting on a very successful Jane-athon pre-conference held the Friday before Midwinter. And the good news is that our participant survey responses agree with the “successful” part, plus contain a lot of food for thought going forward. More about that later …

There was a lot of buzz in the Jane-athon room that day, primarily from the enthusiastic participants, working together at tables, definitely having the fun we promised. Afterwards, the buzz came from those who wished they’d been there (many on Twitter @Janeathon) and others that wanted us to promise to do it again. Rest assured–we’re planning on another one in San Francisco at ALA Annual, but it will probably be somewhat different because by then we’ll have a better support infrastructure and will be able to be more concrete about the question of ‘what do you do with the data once you have it?’ If you’re particularly interested in that question, keep an eye on the rballs.info site, where new resources and improvements will be announced.

Rballs? What the heck are those? Originally they were meant to be ‘RIMMF-balls’, but then we started talking about ‘resource-balls’, and other such wanderings. The ‘ball’ part was suggested by ‘tar-balls’ and ‘mudballs’ (mudball was a term of derision in the old MARBI days, but Jon and I started using it more generally when we were working on aggregated records in NSDL).

So, how did we come up with such a crazy idea as a Jane-athon anyway? The idea came from Deborah Fritz, who’d been teaching about RDA for some time, plus working with her husband Richard on the RIMMF (RDA In Many Metadata Formats) tool, which is designed to allow creation of RDA data and export to RDF. The tool was upgraded to version 3 for the Jane-athon, and Deborah added some tutorials so that Jane-athon participants could get some practice with RIMMF beforehand (she also did online sessions for team leaders and coaches).

Deborah and I had discussed many times the frustration we shared with the ‘sage on the stage’ model of training, which left attendees at such events unhappy with its limitations. They wanted something concrete–they usually said–something they could get their teeth into. Something that would help them visualize RDA outside the context of MARC. The Jane-athon idea promised to do just that.

I had done a prototype session of the Jane-athon with some librarians from the University of Hawaii (Nancy Sack did a great job organizing everything, even though a dodgy plane made me a day late to the party!) We got some very useful evaluations from that group, and those contributed to the success of the official Chicago debut.

So a crazy idea, bolstered by a lot of work and a whole lot of organizational effort, actually happened, and was even better than we’d dared to hope. There was a certain chaos on the day, which most people accepted with equanimity, and an awful lot of learning of the best kind. The event couldn’t have happened without Deborah and Richard Fritz, Gordon Dunsire, and Jon Phipps, each of whom had a part to play. Jamie Hennelly from ALA Publishing was instrumental in making the event happen, despite his reservations about herding the organizer cats.

And, as the cherry on top: after the five organizers finished their celebratory dinner later that evening, we were all out on the sidewalk looking for cabs. A long black limousine pulled up, and the driver asked us if we wanted a ride. Needless to say, we did, and soon pulled up in style in front of the Hyatt Regency on Wacker. Sadly, there was no one we knew at the front of the hotel, but many looked askance at the somewhat scruffy mob who piled out of the limo, no doubt wondering who the heck we were.

What’s up next? We think we’re on the path of a new data sharing paradigm, and we’ll run with that for the next few months, and maybe riff on that in San Francisco. Stay tuned! And do download a copy of RIMMF and play–there are rballs to look at and use for your purposes.

P.S. A report of the evaluation survey will be on RDA-L sometime next week.

William Denton: Disquiet Junto 0163

planet code4lib - Sat, 2015-02-14 15:28

I follow Marc Weidenbaum’s collaborative musical project the Disquiet Junto to see what the projects are, and sometimes listen to the work people create. I’ve never contributed before, but the current project, Disquiet Junto Project 0163: Layering Minutes After Midnight, was something I could tackle easily with Sonic Pi, so I had a go.

The instructions for this project are:

Step 1: Revisit project #0160 from January 22, 2015, in which field recordings were made of the sound one minute past midnight:

http://disquiet.com/0160/

Step 2: Locate segments that are especially quiet and meditative — and confirm that they are available for creative reuse. Many should have a Creative Commons license stating such, and if you’re not sure just check with the responsible Junto participant.

Step 3: Using segments from three different tracks from the January 22 project, create a new work of sound that layers the pre-existing material into something new, something nocturnal. Keep the length of your final piece to one minute

Step 4: Upload the finished track to the Disquiet Junto group on SoundCloud.

Step 5: Be sure to include link/mentions regarding the source tracks.

Step 6: Then listen to and comment on tracks uploaded by your fellow Disquiet Junto participants.

What I did was this. First, I downloaded all of the downloadable WAV files in the project. (Sonic Pi can only sample WAV and FLAC files, and for some reason the FLAC file I got didn’t work.)

Next I wrote a script that would choose three different WAV files at random and for each one a random starting time within the first 20 seconds of the track. (Assuming all tracks are exactly 60 seconds, this means choosing a random number between 0 and 1/3, because for Sonic Pi the start of a sample is at 0 and the end is at 1.)

use_random_seed Time.now.to_i

soundfiles = Dir.glob("*wav")
STDERR.puts soundfiles.size

tracks = []
start = []

# Pick three different files at random; each gets a random start point
# somewhere in the first third of its track.
3.times do |i|
  index = rrand_i(0, soundfiles.size - 1)  # rrand_i is inclusive at both ends
  tracks[i] = soundfiles[index]
  soundfiles.delete_at(index)
  start[i] = rrand(0, 0.3333)
end

# Each sample plays a 40-second slice (two thirds of the track), fading
# in and out over 5 seconds, with the next one starting 10 seconds later.
sample tracks[0], start: start[0], finish: start[0] + 0.6666, attack: 5, release: 5, amp: 0.7
sleep 10
sample tracks[1], start: start[1], finish: start[1] + 0.6666, attack: 5, release: 5, amp: 0.7
sleep 10
sample tracks[2], start: start[2], finish: start[2] + 0.6666, attack: 5, release: 5, amp: 0.7

[Screenshot: Emacs, split window, editing on the left and Sonic Pi output on the right, dark Solarized theme]

It plays the fragment of the first track, then 10 seconds later starts the fragment of the second track, then 10 seconds later starts the fragment of the third track. Since each is 40 seconds long, for 20 seconds all three are on top of each other, then the first ends, then the second, and for the last 10 seconds only the third track is playing. The attack and release settings mean each track takes 5 seconds to fade in and 5 seconds to fade out.

I was doing all this in Emacs (in sonic-pi-mode, if you’re interested). After some testing I ran M-x sonic-pi-start-recording, ran the script, then ran M-x sonic-pi-stop-recording and saved the file.

These are the three tracks it chose:

  1. Spin Cycle-disquiet160-oneminutepastmidnight by High Tunnels
  2. archway road midnight (disquiet160-oneminutepastmidnight) by Zedkah
  3. Can you hear the boredom? (Disquiet0160-Oneminutepastmidnight) by moduS ponY.

All have Creative Commons licenses, which I checked before going further.

The result was 72 seconds long (a few seconds were added while I ran the start/stop but I don’t see how that added up to 12) so I used Audacity to change the length to 60s without changing the pitch. I went back later to edit out the start/stop dead time but accidentally overwrote my original file, so I left it as is.

The result is “Waves Upon Waves” (embedded from SoundCloud):

District Dispatch: ALA seeks candidates for 2015 Google policy summer fellowship

planet code4lib - Fri, 2015-02-13 22:43

Google Policy Fellows at a luncheon

The American Library Association (ALA) today announces the opening of the application process for the prestigious 2015 Google Policy Fellows program. The ALA Office for Information Technology Policy began its participation eight years ago at the program’s founding.

For the summer of 2015, the selected fellow will spend 10 weeks in residence at the ALA policy office in Washington, D.C., to learn about national policy and complete a major project. Google provides the $7,500 stipend for the summer, but the work agenda is determined by the ALA and the selected fellow. Throughout the summer, Google’s Washington office will provide an educational program for all of the fellows, such as lunchtime talks and interactions with Google Washington staff.

The fellows work in diverse areas of information policy that may include digital copyright, e-book licenses and access, future of reading, international copyright policy, broadband deployment, telecommunications policy (including e-rate and network neutrality), digital divide, access to information, free expression, digital literacy, online privacy, the future of libraries generally, and many other topics.

Margaret Kavaras, a recent graduate from the George Washington University, served as the 2014 ALA Google Policy Fellow. Kavaras was later appointed as an OITP Research Associate shortly after participating in the Google Fellowship program.

Further information about the program and host organizations is available at the Google Public Policy Fellowship website. ALA encourages all interested graduate students to apply and, of course, especially those in library and information science-related academic programs. Applications are due by Thursday, March 12, 2015.

The post ALA seeks candidates for 2015 Google policy summer fellowship appeared first on District Dispatch.

District Dispatch: Creating a fair digital society

planet code4lib - Fri, 2015-02-13 22:38

Susan Crawford, Co-Director, Berkman Center for Internet and Society, Harvard University
Sir Tim Berners-Lee, Inventor, World Wide Web

How can the nation’s leaders, policymakers and community institutions work together to create an equitable digital society? That was the broad question asked this week at the launch of NetGain, a day-long symposium that gathered the world’s government, philanthropy, business and technology leaders to launch a major new partnership, explore shared principles, and get ambitious about the next generation of innovation for social change and progress.

Alan S. Inouye, director of the American Library Association’s Office for Information Technology Policy, participated in the event and wrote about his experiences on American Libraries magazine’s blog The Scoop.

[New York] Mayor de Blasio said that the analog era was not necessarily a time of great inclusion, equality, and fairness. While the digital age holds promise for a better future, there are no guarantees of better outcomes; indeed, he noted that we have now surpassed the roaring ’20s in terms of inequality in society, in economics and in opportunity.

He said that today so many opportunities are absolutely correlated to education and internet access. New York is working to increase this access, such as the effort to transform pay phones into free wi-fi hotspots and to lend free wi-fi devices through New York Public Library. This kind of innovation and experimentation is key across the country, and from what’s learned, best practices must then be broadly shared and adopted, de Blasio concluded.

There was frankness about the nature of the challenges involved. When moderator Gwen Ifill asked about being behind and where foundations fit, several foundation presidents readily acknowledged that they are still largely in the analog world and that their organizations must evolve more quickly to reflect the opportunities and demands of the digital era. Many in the foundation world are not that comfortable with the technology. Also, neither the foundations nor anyone else has all the answers, and so we need to be bolder and more accepting of some failures along the road to creating lasting change.

The immediate next step is to identify the big ideas that need to be pursued. Everyone is invited to participate in the NetGain Challenge—put on your thinking cap and submit an idea. During the session, I asked about the process ahead, observing that some concentration of topics would be needed to be effective—the scope of topics discussed thus far in the convening covered such a broad range of national policy challenges. There was a good “library moment” as session moderator John Palfrey responded to my question in jest (or was it!?!?)—that “Librarians can do anything, because they’re awesome!” Yes, indeed, a process will unfold to develop a strategy and some focus.

Read more on The Scoop

Gwen Ifill, Managing Editor, Washington Week; Chris Stone, President, Open Society Foundations; Mitchell Baker, Executive Chairwoman, Mozilla Foundation; Darren Walker, President, Ford Foundation

Darren Walker, President, Ford Foundation

The post Creating a fair digital society appeared first on District Dispatch.

Harvard Library Innovation Lab: Link roundup February 13, 2015

planet code4lib - Fri, 2015-02-13 20:04

Had to include some snow this time

Snow Script on Behance

Snow street art

60fps on the mobile web — Flipboard Engineering

Using canvas for lightning-fast web apps. Sorry, DOM.

Lars Andersen: a new level of archery – YouTube

Historical documents helped this amazing archer learn insanely impressive skills

A Photographer Who Tinkers With Time

High speed camera + subway car = amazing images.

TWINKIND | The world’s finest 3D photo figurines

3D Printed figurines. The quality looks amazing. WANT!

Mita Williams: Teach for America. Code for America. Librarianing for America

planet code4lib - Fri, 2015-02-13 18:35
On Friday the 13th, I gave the morning keynote at the Online Northwest Conference in Corvallis, OR. Thanks so much to the organizers for inviting me.



Last October, I was driving home from a hackathon when I heard something extraordinary on the radio. Now, as human beings, we tend to get over-excited by coincidence - it’s a particular cognitive bias called the frequency illusion - you buy a gold station wagon and suddenly you see gold station wagons everywhere (yes, that’s my gold station wagon behind me). But that being said, I  still contend that there was something special about what I heard and when I heard it. Because you don’t hear people talking about Open Data on the radio very often.



So here’s the brief backstory. The local technology incubator, in partnership with the local hackerspace that I’m involved with, was co-hosting a week-long hackathon to celebrate Science and Technology Week. I was returning from its kick-off event, where I had just given a presentation on the role of licensing in Open Data. This particular hackathon was a judged event, with the three top prizes being a smart watch, admission to an app commercialization seminar, and an exclusive dinner with an expert in the commercialization of apps -- which was kind of odd since the data sets provided for the event were things like pollution monitoring data from the Detroit River, but hey - that’s part of the challenge of making commercial apps out of open data.


While it has been said that we are now living in the age of Big Data, only the smallest subset of that data is explicitly licensed in such a way that we, as citizens, can access and make use of it without having to ask permission or buy a license. I’m the lead of Open Data Windsor Essex, and much of my role involves explaining what Open Data is, because it’s not widely understood. Because I’m talking to my fellow librarians, I’m going to give you a very abbreviated version of my standard Open Data explainer:

One of the most common definitions of Open Data comes from the Open Knowledge Foundation: Open data is data that can be freely used, reused and redistributed by anyone - subject only, at most, to the requirement to attribute and sharealike.

So, using this definition, a work with a Creative Commons CC-BY license (which designates it as free to use without requiring permission, so long as attribution is given) is considered Open Data. But CC-NC, which stands for Creative Commons Non-Commercial, is not considered Open Data, because the domain of use has been restricted.

We in librarianship talk a lot about open source, and open access, but even we don’t talk about open data very much. So that’s why I was so surprised when there was a conversation coming from my car radio on the importance of Open Data.  Granted, I was listening to campus Radio - but still, I think I reserve the right to be impressed by how the stars seemed to have aligned just for me.




The show I was listening to was Paul Chislett’s The Shake Up on CJAM, and he was interviewing Paula Z. Segal, the lead executive of a Brooklyn-based organization called 596 Acres. Her organization builds online tools that make use of Open Data to allow neighbours to find the vacant public land hidden in plain sight in their city, as the first step in the process of turning those lots into shared resources, such as community gardens. Perhaps not surprising to you now, but in 2011 there were 596 acres of such empty lots in Brooklyn alone.






Segal was telling the radio host and the listening audience that many communities make data - such as data that describes what land is designated for what purpose - open and accessible to their residents. However, most citizens don’t know that the data exists because it is contained in obscure portals, and even if they did find the data, they generally do not understand how to handle it, how to make sense of it, and how to make it meaningful to their experiences.

Now when I heard that - and whenever I hear similar complaints that the promise of Open Data has failed because it tends to add power to the already powerful - I keep thinking the same thing: this is a job for librarians.

It reminds me of this quote from open government advocate, David Eaves:

We didn’t build libraries for a literate citizenry. We built libraries to help citizens become literate. Today we build open data portals not because we have public policy literate citizens, we build them so that citizens may become literate in public policy.

This brings us to the theme of this morning’s talk - which is not Open Data, although I will express today’s theme through it, largely because I’m back from a year’s sabbatical immersed in the topic and it’s still difficult for me not to talk about it. No, today I would like to make a case for creating a nationwide program to put more librarians into more communities and into more community organizations. I have to warn you that I’m not going to give you any particulars about the shape or scope of what such a program could be; I’m just going to try to make a case for such an endeavor. I haven’t even thought of a good name for it. The best I can come up with is Librarianing for America. On that note, I would like to give a shout-out to Chris Bourg for - if not coining the word librarianing - at least bringing it to my attention.

And I very much hope that perchance the stars will align again and this theme will complement the work that I am very much looking forward to hearing today at Online Northwest : about digitally inclusive communities, about designing and publishing, about being embedded, about sensemaking through visualization, about enhancing access and being committed to outreach.




Before I continue I feel I should disclose that I’m not actually American.  I grew up across the river from Port Huron, Michigan and I now live across the river from Detroit, Michigan.  I literally can see Detroit from my house.

And Detroit is the setting for my next story.

A quick aside first - my research interest in open data has been largely focused on geospatial data, as well as the new software options and platforms that are making web mapping much more accessible and viable for individuals and community groups, compared to the complex geographic information systems (commonly known as GIS) that institutions such as city governments and academic libraries tend to exclusively support.




I mention this as a means to explain why I decided to crash the inaugural meeting of Maptime Detroit that happened in early November last year.

Maptime is time designated for making maps. It is the result of kind volunteers who find a space, designate a time, and extend an open invitation to anyone who is interested to drop in and learn about making maps. It started in San Francisco a couple of years ago, and now there are over 40 Maptime chapters around the world.

Now, when I went to the first Maptime Detroit event, there wasn’t actually any time given to make maps. For this inaugural meeting, instead there was a set of speakers who were already using mapping in their work.





Not very many people know that Detroit has an amazing history of citizen mapping initiatives - the map behind me is from the Detroit Geographical Expedition’s Field Notes Three, from 1970. I think you could make a case that this kind of community mapping outreach work is starting to emerge again through the many mapping-supported community initiatives happening in Detroit today.






Many of the organizations doing community mapping work were presenting at Maptime Detroit, including the Detroit Water Brigade, represented by organizer Justin Wedes.

As you might already know, the city of Detroit declared bankruptcy in 2013 with debts somewhere between $18 and $20 billion. The city is collapsing upon itself at a scale that’s very difficult to wrap one’s mind around.

The Detroit Water and Sewerage Department is currently conducting mass water shut-offs in the city, which will affect over 120,000 account holders over an 18-month period at a rate of 3,000 per week. This accounts for over 40% of the customers using the Detroit water system. As 70,000 of those accounts are residential, it is thought that 200,000-300,000 people could be directly affected.

The Detroit Water Brigade coordinates volunteer efforts in the distribution of bottled water to affected neighbours, and acts as an advocate on behalf of Detroiters for the UN-recognized human right to water.





But at Maptime Detroit, Justin Wedes didn’t begin his talk with his work in Detroit. Instead he began his presentation by speaking about his experiences with Occupy Sandy. In October of 2012, while New York’s FEMA offices were closed due to bad weather, veterans of the Occupy Wall Street community came forward and used their organizing skills to mobilize ground support for those who needed it most. At first, Occupy Sandy was using free online services such as Google Spreadsheets and Amazon’s wedding registry to collect and redistribute donations, but by the end of their work, they had started using the exact same software that the city of New York uses for dispatching resources during disasters.




Wedes described the work of the Detroit Water Brigade, and as he did so, he also told us how very different his experiences in Detroit were compared to his experiences in New York after Superstorm Sandy. After Sandy hit, he told us, those New Yorkers who could help their more badly damaged neighbours did so with great enthusiasm, and that help was well received. With the water shutoffs in Detroit, however, Justin feels there is an underlying sense of shame in accepting help, and the response from the community at large is more restrained. When he said this, the first thing that came to my mind was an article I had read years ago by Rebecca Solnit in Harper’s Magazine. In that article, which was later expanded into a book called A Paradise Built in Hell: The Extraordinary Communities That Arise in Disaster, Solnit makes the observation that humanity opens itself to great compassion and community when a disaster is brought on by weather, but that this capacity is strikingly diminished when the disaster is man-made.





There are many reasons why this water shut-off situation in Detroit came about, and I’m not going to go into them, largely because I don’t fully understand how things got to be so dire. I just want to draw attention to the tragic dynamic at hand: as the problems of Detroit grow - due to fewer people being able to pay for an increasingly costly and crumbling infrastructure - the capacity of the city government to deal with the worsening situation is, in turn, also reduced.




What I believe should be of particular interest to us, as librarians, is that there has been a collective response from philanthropic and non-profit community organizations, along with businesses and start-ups, to help Detroit through the collection and sharing of city data for the benefit of the city as a whole. Data Driven Detroit does collect and host open data, but it also hosts datasets collected from public and private sources as a means to create “clear communication channels back and forth between the public, the government, and city service providers.”





One of the more striking datasets - both explorable through a map and available for download as open data - is Detroit Property Information, from the Motor City Mapping project. In the fall of 2013, a team of 150 surveyed the entire city, taking photos and capturing condition information for every property in the city of Detroit. According to their information at this given moment, of Detroit’s 374,706 properties surveyed, 25,317 are publicly owned structures. Of those, 18,410 are unoccupied, 13,570 require boarding up, and the condition of 2,511 of these buildings is so poor that demolition is suggested.
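
As a rough illustration of how such counts could be pulled out of a survey export like this one (the file name, column names and values here are invented; the real Motor City Mapping data will differ):

import pandas as pd

props = pd.read_csv("detroit_property_survey.csv")  # hypothetical export

public = props[props["ownership"] == "public"]
print(len(props), "properties surveyed")
print(len(public), "publicly owned")
print((public["occupancy"] == "unoccupied").sum(), "unoccupied")
print((public["needs_boarding"] == "yes").sum(), "require boarding up")
print((public["condition"] == "demolition suggested").sum(), "demolition suggested")
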
Now, I can only speak for myself, but when I see these kinds of projects it makes me want to learn the computer-based wizardry that would allow me to do similar things. Because while I do enjoy the intellectual work that’s involved with computer technology, what really inspires me is this idea that through learning to program, I can gain superpowers that take masses of data and do some good with them at the scale of a city.



In short, I want to have the powers of Tiffani Ashley Bell. Tiffani heard about the plight of water-deprived Detroiters last July and, after being urged on by a friend, she sat down and came up with the core of The Detroit Water Project in about four hours. The Detroit Water Project pairs donors with someone in Detroit who has an outstanding water bill, and makes it possible for those donors to contribute directly to that bill. Since the project started in July, over 8,000 donors have paid $300,000 directly towards water bills.

Now, while I think this project is incredibly valuable and very touching, as it allows donors to directly improve the situation of one household in Detroit, the project admittedly does not change the dynamics that gave rise to the grievous situation at hand.

So what is to be done? How can we combine the power of open data, computer code, and the intention to do good to make more systematic changes? How can we support and help the residents and the City of Detroit in the good work that they already do?

This is where I think another organization comes in: Code for America.



Code for America believes it can help government be more responsive to its residents by embedding those who can read and write code into city government itself. It formed in 2009, and it works by enlisting technology and design professionals to work with city governments in the United States in year-long fellowships, in order to build open-source applications that promote openness, participation, and efficiency in government.

In other words, it's a combination of service and app building that is paid for by the city, usually with the help of corporate sponsors. Each year Code for America selects 8-10 local government partners from across the US and 24-30 fellows for the program through a competitive application process.

In 2012, the Knight Foundation and Kellogg Foundation funded three Code for America fellows for a residency in Detroit.  These Code for America fellows worked with the Detroit Department of Transportation to release a real-time transit API and build the TextMyBus bus notification system which launched in September of that year.

In addition to TextMyBus, the fellows also built an app called Localdata to standardize location-based data collected by data analysts and community groups. "Localdata offers a mobile collection tool with a map interface as well as a paper collection option that can be scanned and uploaded for data syncing." This particular project joined the Code for America Incubator and has since expanded into a civic tech startup company.




In my mind, Code for America can be thought of as a scaled-up version of a civic hackathon. If you aren’t familiar with hackathons, they are generally a weekend affair in which participants work solo or in groups to code a website or app that ostensibly solves a problem. Sometimes there are prizes, and sometimes the event is designed as a means to generate the first concept of a potential start-up. Hackathons can be a good thing - you might remember from the beginning of my talk that I sometimes help out with them, which means that I endorse them - but I do admit that they have their limits (many of which are described in the blog post behind me). For one, it’s simply not reasonable to expect that a weekend of hacking will result in a wonderful app that meets the needs of users the programmers have likely never met. But with good event design that strives to incorporate mentorship, workshops, and opportunities to meet with potential users of said apps, hackathons can be a great start towards future collaborations.




Code for America also incorporates mentorship and training into its process. Those selected for a fellowship begin at an institute in San Francisco, where fellows receive training on how local government and agencies work, how to negotiate and communicate, and how to plan and focus their future code work. That being said, Code for America has its own limitations as well. This particular article gently suggests that Code for America may - in some instances - seem to benefit the fellows involved more than the cities themselves. For one, it costs a city a lot of money - $440,000 - just to support a set of Code for America fellows for a year, and then, after they leave, the city needs to have the capacity to support the care and feeding of the open source apps that have been left behind.

Which makes me think.

If only... if only....


If only there were people who could also help cities help their communities, and who didn’t have to be flown in and disappear after a year. If only there were some group of people who could partner with cities and their residents; who already had some experience and expertise in open data licensing; who understood the importance of standardizing descriptors in datasets; who were driven to improve the user experience; and who understood that data use requires data literacy, which demands both teaching and community outreach.

Friends, this is work that we - librarians - can be doing. And our communities need us. Our cities need us.

Furthermore, I don’t know whether you’ve noticed, but every year another amazing class of passionate and talented freshly minted librarians emerges, and we are simply not building enough libraries to put them in.

So I think it’s time to work towards making our own Librarianing for America.

I don’t think it’s possible to model ourselves directly on Code for America. It’s not likely we are going to find cities willing to pay $440,000 for the privilege of hosting three librarians for a year. At least, not initially. Let’s call that our stretch goal.




We can start out small. Perhaps librarians could participate in one of the 137 Code for America Brigades that bring together civic-minded volunteers to work together via meetups.  There are a variety of other organizations that also draw on civic minded volunteers to work together towards other goals including the Open Knowledge Foundation, hack4good, CrisisMappers, and the Humanitarian OpenStreetMap Team.





Or perhaps we can follow the lead of libraries such as the Edmonton Public Library, York University Libraries, The Chattanooga Public Library, and  the University of Ottawa, who have all hosted hackathons for their communities.



This is a slide that’s admittedly out of context. I took it from a Code for America presentation, and I’m not sure how precise this statistic of 75% is for their projects, or whether it can be widely applied to all projects. But I do think it is safe to say that programming code is only as good as its data is clean and meaningful.

And I say this because I don’t believe that librarians have to know how to program in order to participate in Librarianing for America. I believe our existing skillset lends itself to the cause. Our values and our talents are greatly underappreciated by many, many people, including librarians themselves.



But it appears that the talent of librarians is starting to be recognized. The City of Boston was recently awarded a Knight Foundation grant for the specific purpose of hiring a librarian as part of a larger team to turn the City of Boston’s many open datasets into something findable, usable, and meaningful for its residents.



And perhaps we can learn from and expand on the work of ILEAD USA.

ILEAD stands for Innovative Librarians Explore, Apply and Discover. It is a continuing education program supported by grant funding from the Institute of Museum and Library Services, and librarians from ten states are involved in it.

ILEAD USA gathers librarians together with the goal of developing team projects over a nine-month period, through a combination of intermittent face-to-face meetings and online technology training sessions. At the end of the nine months, each team presents its project to the entire ILEAD USA audience, with the goal of either sustaining these projects as ongoing library programs or directly applying the knowledge gained from ILEAD USA to future collaborative projects.

Now, when I first proposed this talk, I was unaware of the work of the ILEAD program. Since then I’ve had the pleasure of speaking with its project director on the phone. I asked her if she was familiar with Code for America, and she told me no, although she did know about Teach for America.

I don’t know about you, but ILEAD sounds a little bit like Librarianing for America to me. Or at least it sounds like one possible form it could take.

Or Librarianing for America could be a placement service that matches and embeds librarians in non-profits. The non-profits would gain from the technical and material experience of the librarian, and the librarian would be able to learn more about the needs of the community and form partnerships that can only occur when we step outside of our buildings.

I don’t think it’s so far-fetched. Last year, my local hackerspace received three years of provincial funding to hire a staff coordinator to run a number of Open Data hackathons, host community roundtables, and pay small stipends to community members who help in our efforts to make open data from the non-profit community more readily available to the communities they serve.

Now it just might be the Frequency Illusion, but I prefer to think that the stars are aligning for libraries and their communities. At least they appear to be when I look up towards our shared horizon.

Thank you all for your kind attention this morning; I very much look forward to spending this day librarianing with everyone here at Online Northwest.

LITA: Let’s Talk About E-rate

planet code4lib - Fri, 2015-02-13 18:28

E-rate isn’t new news. Established almost 20 years ago (I feel old, and you’re about to, too) by the Telecommunications Act of 1996, E-rate provides discounts that help schools and libraries in the United States obtain affordable telecommunications and internet access.

What is new news is the ALA initiative Got E-rate?, and more importantly the overhaul of E-rate which prompted the initiative - and it’s good news. The best part might well be the 1.5 billion dollars added to annual available funding. What that means, in the simplest terms, is new opportunities for libraries to offer better, faster internet. It’s the chance for public libraries of every size to rethink their broadband networks and make gains toward the broadband speeds necessary for library services.

But beyond the bottom line, this incarnation of E-rate has been deeply influenced by ALA input. The Association worked with the FCC to ensure that the reform efforts would benefit libraries. So while we can all jump and cheer about more money and better internet, we can also get excited because there are more options for libraries that lack sufficient broadband capacity to design and maintain broadband networks that meet their communities’ growing needs.

The application process has been improved and simplified, and if you need to upgrade your library’s wireless network, there are funds earmarked for that purpose specifically.

Other key victories in this reform include:

  • Adopting a building square footage formula for Category 2 (i.e., internal connections) funding that will ensure libraries of all sizes get a piece of the C2 pie.
  • Suspending the amortization requirement for new fiber construction.
  • Adopting 5 years as the maximum length for contracts using the expedited application review process.
  • Equalizing the program’s treatment of lit and dark fiber.
  • Allowing applicants that use the retroactive reimbursement process (i.e., BEAR form) to receive direct reimbursement from USAC.
  • Allowing for self-construction of fiber under certain circumstances.
  • Providing incentives for consortia and bulk purchasing.

If you’re interested in learning more, I’d suggest going to the source. But it’s a great Friday when you get to celebrate a victory for libraries everywhere.

To receive alerts on ALA’s involvement in E-rate, follow the ALA Office for Information Technology Policy (OITP) on Twitter at @OITP, and use the Twitter hashtag #libraryerate.

Access Conference: Call for Proposals is Up

planet code4lib - Fri, 2015-02-13 18:18

Pssst, we posted our Call for Proposals.

You can find it under the ‘Program’ section of the menu or by clicking here.

Open Knowledge Foundation: Global Community Stories: January 2015

planet code4lib - Fri, 2015-02-13 17:30

As some might remember, last year we ran a very popular blog post series called Global Community Stories, which highlighted activities in the ever-broadening global community of Local Groups. Towards the latter half of the year lots of other projects demanded time, and the series came to an unintended halt. With the turn of the year, however, we want to change that, which is why we’re now rebooting the series and plan to make it a monthly activity.

Enough talk, let’s start our journey – here are some of the things that happened in January!

FRANCE

In France lots of activities are in motion right now, but one particularly worth noting is the participation in the first Public Domain Festival. It ran from 16 to 31 January in Paris and featured concerts, screenings, workshops and conferences in museums, libraries, hackerspaces and schools. It comprised over 28 different events which aimed to inform citizens and enable them to create together - as well as to highlight the public domain from all angles and for all ages.

GERMANY

Among other news, the Open Knowledge Chapter in Germany has been awarded a new EU Horizon 2020 research project, titled The Digital Whistleblower: Fiscal Transparency, Risk Assessment and Impact of Good Governance Policies Assessed (DIGIWHIST), to improve transparency in public spending and support whistleblowing. The central objective of DIGIWHIST is to improve trust in governments and efficiency of public spending across Europe by empowering civil society, investigative journalists and civil servants with the information and tools they need to increase transparency in public spending and thus accountability of public officials in all EU and in some neighbouring countries. Read more about the project here.

SPAIN

The Spanish chapter of Open Knowledge is once again organizing a global award for the best initiatives in open knowledge, open data and transparency. The six categories will recognize projects and initiatives that have made open data, open knowledge and transparency visible and practical for the public, industry and the economy.

The awards consist of six categories: the best initiative to encourage entrepreneurship based on open knowledge, the best business based on open knowledge, the best non-public transparency initiative, the best open science initiative, the best public initiative to support transparency through open data, and the best public open data initiative with involvement of citizens and society.

The awards ceremony will take place on February 21 at Media Lab Prado, coinciding with the worldwide celebration of Open Data Day. The ceremony will feature an address by the president of Open Knowledge, Rufus Pollock, as well as the announcement of the winner of a special initiative: the Anti-Award ‘Padlock’, given to the most opaque and closed initiative, whether public or private, chosen by registered users via the Award prize page. This second edition of the award features an English page to help institutions and initiatives internationally take part. Today is the deadline for applications, so jump on in and make a submission if you have candidates!

SWITZERLAND

Coinciding with Champions League, Milan joined Swiss groups in Basel and Sierre to kick off the new Sports Working Group with a first hackathon, sparking discussion of transparency on an international level at the yearly conference in Zürich, where the community engaged in diverse talks and launched new projects. A big theme of 2014 was renewed commitments to Swiss openness: a parliamentary motion for Procurement Data, legal provisions to opening weather data, developments in the City of Zürich and Canton of St.Gallen – and the Open Government Data Strategy confirmed by the Federal Council in April and embedded in the action plan.

While Open Budget visualisations are now deployed for the canton of Berne and six municipalities, spending data remains a challenge. Student teams participating in a new university course are helping to advance the cause for financial transparency. New open data projects were released, such as WindUndWetter.ch and SwissMetNet API, based on just-opened national weather data. But, talk about “hold your horses”: a closed-source city waste removal schedule app led to intense debate with officials over open data policy, the results making waves in the press and open data developers leading by doing.

The new year promises at least as much: the next hackathon organised by the new OpenGLAM.ch Working Group, together with Wikimedia and the National Library, is canvassing Swiss institutions to provide content, data, and expertise – and inviting global participation. For the full calendar of upcoming events, visit their blog.

DPLA: DPLA welcomes four new Service Hubs to its growing national network: Tennessee, Maryland, Maine, and the Caribbean

planet code4lib - Fri, 2015-02-13 16:00

The Digital Public Library of America (DPLA) is pleased to announce the addition of four Service Hubs that will be joining our Hub network. The Hubs represent Tennessee, Maryland, Maine and the Caribbean. The addition of these Hubs continues our efforts to help build local community and capacity, and furthers our efforts to build an on-ramp to DPLA participation for every cultural heritage institution in the United States and its territories.

These Hubs were selected from the first-ever application process for new DPLA Hubs, intended to give both prospective Hubs and DPLA a better sense of what is involved in bringing on a new Hub. Each Hub has a strong commitment to bring together the cultural heritage content in their state to be a part of DPLA, and to build community and data quality among the participants.

In Tennessee, the Service Hub responsibilities will be shared by the University of Tennessee – Knoxville, Tenn-Share and the Tennessee State Library and Archives. Tennessee plans to make available important material on the Civil Rights Movement, and Appalachia and the Great Smoky Mountains.

In Maryland, the Service Hub responsibilities will be shared by Digital Maryland, based at the Enoch Pratt Free Library, and the University System of Maryland and Affiliated Institutions (USMAI). The collections Maryland plans to make available include materials about Women’s Suffrage, the Civil War, World War I and II, agriculture, sports, transportation and historic architecture.

In Maine, the Service Hub will be run by the Maine State Library. Collections to be shared with DPLA as part of the Maine Service Hub include materials from the Maine Memory Network, a project of the Maine Historical Society, which in turn represents collections from a number of smaller institutions throughout Maine, covering the full span of Maine’s history, among other important topics. In addition, important films from the North East Historic Films collection and the Maine Music Box Sheet Music collection will be shared.

The final Service Hub, representing the Caribbean, is a partnership between the Digital Library of the Caribbean and the University of Florida. Topics and genres to be shared with DPLA from this Hub include Caribbean maps and materials about the Panama Canal, the sugar industry, and vodou.

“We are excited to welcome these four new Service Hubs to the DPLA Network,” said Emily Gore, DPLA Director for Content. “We look forward to sharing their aggregated content with the content of our other Hubs and with the public. We appreciate the commitment by these Hubs to broadly share cultural heritage content and to improve data quality.”

A second call for new DPLA Hubs will occur in June of 2015. To receive updates about the next application cycle, sign up for the DPLA Newslist.

Ariadne Magazine: Automating Harvest and Ingest of the Medical Heritage Library

planet code4lib - Fri, 2015-02-13 15:35

Christy Henshaw, Dave Thompson and João Baleia describe an automated process to harvest medical books and pamphlets from the Internet Archive into the Wellcome Library’s Digital Services environment.

Overview of the UK Medical Heritage Library Project

The aim of the UK Medical Heritage Library (UK-MHL) Project is to provide free access to a wealth of medical history and related books from UK research libraries. There are already over 50,000 books and journal issues in the Medical Heritage Library drawn from North American research libraries. The UK-MHL Project will expand this collection considerably by digitising a further 15 million pages for inclusion in the collection.

Ariadne Magazine: Editorial Introduction to Issue 73

planet code4lib - Fri, 2015-02-13 14:27

The Editor introduces Issue 73 and provides an update on the future of Ariadne.

The requirement to make a business case to maintain or establish a service or a project is a familiar process for many of us working in Libraries.  Many libraries are asked to justify their very existence on a regular basis.  Some succeed, others unfortunately do not.

Ariadne Magazine: Internet Librarian International Conference 2014

planet code4lib - Fri, 2015-02-13 14:22

Zoë Hurley and Garth Bradshaw report on the Internet Librarian Conference, held at the Olympia Conference Centre in London over 20-22 October 2014.

Zoë reports from day one of the conference and Garth reports from day two.
