Here's the typical framing of "the book discovery problem". "When I go to a bookstore, I frequently leave with all sorts of books I never meant to get. I see a book on the table, pick it up and start reading, and I end up buying it. But that sort of serendipitous discovery doesn't happen at Amazon. How do we recreate that experience?" Or "There are so many books published, how do we match readers to the books they'd like best?"
This "problem" has always seemed a bit bogus to me. First of all, when I'm on the internet, I'm constantly running across interesting-sounding books. There are usually links pointing me at Amazon, and occasionally I'll buy the book.
As a consumer, I don't find I have a problem with book discovery. I'm not compulsive about finding new books; I'm compulsive about finishing the book I've read half of. When I finish a book, it's usually two in the morning and I really want to get to sleep. I have big stacks, both real and virtual, of books on my to-read list.
Finally, the "discovery problem" is a tractable one from a tech point of view. Throw a lot of data and some machine learning at the problem and a good-enough solution should emerge. (I should note here that book "discovery" on the website I run, unglue.it, is terrible at the moment, but pretty soon it will be much better.)
So why on earth does Amazon, which throws huge amounts of money into capital investment, do such a middling job of book discovery?
Recently the obvious answer hit me in the face, as such answers are wont to do. The answer is that mediocre discovery creates a powerful business advantage for Amazon!
Consider the two most important discovery tools used on the Amazon website:
- People who bought X also bought Y.
- Top seller lists.
What happens in this sort of positive feedback system is pretty obvious to an electrical engineer like me, but Wikipedia's example of a cattle stampede makes a better picture.
The number of cattle running is proportional to the overall level of panic, which is proportional to... the number of cattle running! ("Stampede loop" by Trevithj, CC BY-SA.) Result: Stampede! Yeah, OK, these are sheep, but you get the point. ("Herdwick Stampede" by Andy Docker, CC BY.)
Imagine what would happen if Amazon shifted from sales-based recommendations to some sort of magic that matched a customer with the perfect book. Then instead of focusing effort on steering readers to Amazon, authors and publishers would concentrate on creating the perfect book. The rich would stop getting richer, and instead, reward would find the deserving.
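The feedback loop behind sales-based recommendations is easy to demonstrate. Here is a minimal sketch (a Pólya-urn-style simulation I made up for illustration, not anything Amazon actually runs): five books start with equal sales, and each new buyer picks a book with probability proportional to its past sales. A few early random sales snowball into a runaway bestseller.

```python
import random

random.seed(42)  # fixed seed so the run is reproducible

def simulate(n_books=5, n_sales=10_000):
    """Each buyer picks a book with probability proportional to its
    current sales count -- a stand-in for "people who bought X also
    bought Y" lists and bestseller charts."""
    sales = [1] * n_books  # every book starts with one sale
    for _ in range(n_sales):
        pick = random.choices(range(n_books), weights=sales)[0]
        sales[pick] += 1
    return sales

print(simulate())
```

Run it a few times with different seeds: which book wins is essentially arbitrary, but some book almost always wins big. That's the stampede.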
Ain't never gonna happen. Stampedes sell more books.
Islandora Camp is going to Denver, CO in just 10 days. When we get there, we will be very happy to welcome our Camp attendees with their official t-shirt, designed by UPEI's Donald Moses:
Don's iCampCO logo will join the ranks of our historical Camp Logos. Keep an eye out for our next Camp and yours could be next!
We are now a little over a year into the Local Groups scheme that was launched in early 2013. Since then we have received hundreds of applications from great community members wanting to start Local Groups in their countries and become Ambassadors and community leaders. From this great body of amazing talent, Local Groups in over 50 countries have been established, and frankly we’ve been overwhelmed by the interest this program has received!
Over the course of this time we have learned a lot. We have seen that open knowledge first and foremost develops locally, and that global peer support is a great driver for making change in local environments. We’re humbled and proud to be able to help facilitate the great work that is being done in all these countries.
We have also learned, however, of things in the application process and the general network structure that can be improved. After collecting feedback from the community earlier in the year, we learned that the structure of the network and the different labels (Local Group, Ambassador, Initiative and Chapter) were hard to comprehend. We also learned that the waiting time faced by applicants wanting to become Ambassadors or start Local Groups was frustrating: people applying are eager to get started, and having to wait weeks or even longer (because of the number of applications that came in) was understandably discouraging.
Presenting a more streamlined structure and way of getting involved
We have now thoroughly discussed the feedback with our great Local Groups community, and as a result we are excited to present a more streamlined structure and a much easier way of getting involved. The updated structure is written up in full on the Open Knowledge wiki, and includes the following major headlines:
1. Ambassador and Initiative levels merge into “Local Groups”
As mentioned, applying to become an Ambassador and applying to set up an Initiative were the two entry-level ways to engage; “Ambassador” implying that the applicant was, to begin with, just one person, and “Initiative” being the way for an existing group to join the network. These were then jointly labelled “Local Groups”, which was, admittedly, a lot of labels to describe pretty much the same thing: people wanting to start a Local Group and collaborate. Therefore we are removing the Initiative label altogether, and from now on everyone will simply apply through one channel to start a Local Group. If you are just one person doing that (even if more people will join later), you are granted the opportunity to take the title of Ambassador. If you are a group applying collectively to start a Local Group, then everyone in that group can choose to take the title of Local Group Lead, which is a more shared way to lead a new group (as compared to an Ambassador). Applying still happens through a webform, which has been revamped to reflect these changes.
2. Local Group applications will be processed twice per year instead of on a rolling basis
All the hundreds of applications that have come in over the last year have been peer-reviewed by a volunteer committee of existing community members (and they have been doing a stellar job!). One of the other major things we’ve learned is that the work pressure the sheer number of applications put on this hard-working group simply wasn’t sustainable in the long term. That is why, as of now, we are replacing rolling processing and review of applications with two annual sprints in October and April. This may make it appear as if the waiting time for applicants becomes even longer, but that is not the case! In fact, we are implementing a measure that ensures no waiting at all! Keep reading.
3. Introducing a new easy “get-started-right-away” entry level: “Local Organiser”
This is the new thing we are most excited to introduce! Seeing how setting up a formal Local Group takes time (regardless of how many applications come in), it was clear that we needed a way for people to get involved in the network right away, without having to wait for weeks on formalities and practicalities. This has led to the new concept of “Local Organiser”:
Anyone can pick up this title immediately and start organising Open Knowledge activities locally in their own name, under the Local Organiser title. This can include organising meetups, contributing on discussion lists, advocating the use of open knowledge, building community and gathering more people to join, or any other relevant activity aligned with the values of Open Knowledge.
Local Organisers need to register by setting up a profile page on the Open Knowledge wiki and filling in this short form. Shortly thereafter the Local Organiser will be officially welcomed into the community with an email from the Open Knowledge Local Group Team containing a link to the Local Organiser Code of Conduct, which the person automatically agrees to adhere to when they pick up the title.
Local Organisers use existing, public tools such as Meetup.com, Tumblr, Twitter etc., but can also request that Open Knowledge set up a public discussion list for their country (if needed; otherwise they can also use other existing public discussion lists). Additionally, they can use the Open Knowledge wiki as a place to put information and organise as needed. Local Organisers are encouraged to publicly document their activities on their Open Knowledge wiki profile in order to become eligible to apply to start an official Open Knowledge Local Group later down the road.
A rapidly growing global network
What about Chapters, you might wonder? Their status remains unchanged: they continue to be the expert-level entity that Local Groups can apply to become when reaching a certain level of prowess.
All in all it’s fantastic to see how Open Knowledge folks are organising locally in all corners of the world. We look forward to continuing to support you all!
If you have any questions, ideas or comments, feel free to get in touch!
How might we leverage libraries as a platform to build more knowledgeable communities?
I'm reviewing these entries because I think some of them might prove useful in a paper I'm currently writing. There are some recurring themes to the entries that I think are quite telling.
Of the 680 entries, there are some wonderful ideas that need to be shared. Here are some of the proposals that I've applauded:
- The Library Freedom Project: Bringing Privacy Education and Digital Tools to Local Communities Through Libraries : The Library Freedom Project teaches librarians about privacy rights, law, and tech tools to protect patrons from dragnet surveillance.
- OpenArchive: Uniting Libraries and Archives to Empower Engaged Citizens: We want to create a world where a people’s history is the primary history; where we leverage the resources of libraries and public interest organizations to collect, preserve, and amplify local citizen media
- Making the Invisible Visible: a digital overlay on the in-library visit that combines mobile technology with location based sensors to reveal the full riches of the public library to our communities: A context-aware mobile application that intelligently reveals previously hidden resources and opportunities based on your locations, interests and actions as you walk through the library.
- NYC Space/Time Directory: Community-Driven Urban History With The Ease Of Google Maps : The NYC Space/Time Directory does for old New York what Google Maps does for modern cities, turning historic maps and data into time machines.
- New models for creating open digital television libraries -- starting with the Marion Stokes Collection, the largest private collection of U.S television news : Starting with the historic Marion Stokes Collection, the Internet Archive will add to its free and open research library of television news--creating cost effective processes, technology, and digital access to historic videotapes and sharing this model with others.
- Empowering libraries to hear all voices: Loomio breaks down barriers to inclusive library-centered community engagement by making online participation disability-accessible.
- Librii: Digital Community Library in Africa : Librii’s mission is to stimulate cultural creation in developing countries by providing access to digital and physical resources.
- Beyond Asset Maps: Linking the Community to Vital Resources : Beyond Asset Maps: Linking the Community to Vital Resources will build a more knowledgeable community by creating a digital hub to connect vulnerable residents to critical services.
- Building Libraries Together: new tools for communities to help build digital libraries of the future: Give global communities the tools they need to save, manage and share their cultural treasures forever for free.
- Crowd-Sourcing a Digital Community Archive for the Future: We want to create a crowd-sourced community digital archive in Rochester, NY, to preserve records of the community's present for the future.
- How Libraries Can Work Together to Save the World from Link Rot: Perma.cc: Perma is a platform for libraries, authors and publishers to work together to ensure that online source materials are preserved and accessible to readers forever.
- Making Historic Software Eternal: Imagine an emulator that plays every computer program, game, and app in your browser - one click and you're in software history.
- Building Reading - Using data to add a layer of digital transparency to the exterior of the library: We want as many people as possible to see how libraries are sources of knowledge that support communities through sharing data about how the library functions -- in real time -- on the outside of the library itself.
- LOCAL: Local Online Communities And Libraries: LOCAL will empower public libraries to create, preserve, and provide ongoing access to local history collections of the meaningful online content documenting their communities.
- Safe Harbor Guide: Community toolkits, resilient neighborhoods: Libraries, neighborhoods, and federal agencies work together to create the first Safe Harbor Guide for collaborative disaster response planning.
- GITenberg: Modern Maintenance Infrastructure for Our Literary Heritage: Help libraries use and maintain Project Gutenberg public domain ebooks to serve their communities... with GitHub!
- Maptime Public Library, a.k.a Null Island Inter-Library Loan (NIILL): Maptime Public Library is an online library designed for map making beginners that connects resources for both geospatial and open source technology learning, all while teaching the basic terminology necessary to navigate this technical space.
- BookNode: Using Book Journeys and Traces of Readers to Build Community: BookNode uses journeys of books and traces of their former readers to extend the library’s reach, all while creating a playful and social urban experience that builds a strong local community.
- Reward your worst customers: the push to create repeat non-offenders: Let your worst customers keep their privileges in order to rebuild their library credit and become responsible customers.
- Make the Things that Measure the Future: Libraries and Open Hardware: Develop, deploy, and train librarians on open hardware devices that let libraries more effectively and efficiently serve their communities
- Civic Ticker: Libraries Mobilizing Citizens to Inform and be Informed: Ann Arbor District Library will pioneer a new way for libraries to inform their communities, transforming libraries from passive collectors to active producers of news and information.
- Upgrade Libraries' eBook Catalogs into a Publishing Platform for Indie Authors: Linking up all libraries' ebook catalogs so they become THE publishing platform for indie authors looking to be discovered by the most avid readers.
- An aspiration graph in every community: Libraries help build an “aspiration graph” in their communities that makes it possible to see what residents dream about being able to learn or do, and makes it easier for educators and entrepreneurs to build an economy of opportunity around residents’ expressed aspirations.
- Powers of Ten: Scaling up libraries’ capacity to build better worlds with code: I’ll develop a coding curriculum and teaching model specific to librarians’ real-world needs, so they have more ways to change the world.
For the purposes of my paper, I'm interested in the intersections of Open Data and Libraries. Here are the entries that touch on these two topics:
- The People’s Public Information Portal: Transform the California State Library into an open source center for public information and civic engagement by building a cadre of informationistas who'll foster an open data culture in state government: Begin creating a modern, open data, open source code portal housed in the State Library and, over time, connect it to California’s 1,182 public libraries to improve statewide access to government information, boost civic awareness and spur community action.
- The Librarian of the Future: Providing data analysis and visualization training to librarians in city, school, and college libraries, so that they can respond to the growing needs of their communities by making the library the seed incubator of data driven local change.
- An Open Data Roadmap for Public Libraries: Our goal is to establish a framework for libraries to better harness open data in order to improve resident interactions between government and civic organizations, improve data management practices at public and civic institutions, and enhance the library as a central place for civic discourse and information literacy.
- Nevada Data Access Portal (NDAP): Promoting Data Literacy and Nevada Data Resources to the Community: With the Nevada Data Access Portal (NDAP), community members will be able to access an external portal through UNLV Libraries to research and collect data. #dataliteracy #NVdata
- Libraries as Labs for Open Government: Convene Oaklanders from every neighborhood in Oakland’s public libraries, to experiment with technologies for open government and work with technologists to make these tools more useful to them and their communities.
- Pittsburgh’s Community Information Commons: Storytelling, Problem solving, Resource sharing: The Community Information Commons will be a shared knowledge base of community history, issues, and resources for problem solving, strategizing, and engagement using open source software (Collective Access) to curate and disseminate content.
- Leverage Library Taxonomies for Open Data and the Web: Use library skills to build new taxonomies that make open data more discoverable.
- L.A. Drone Cartography Project: We will give L.A. residents precise spatial knowledge about their communities with meaningful layers of historical and real-time data to help them better understand where they live.
- Visualizing the Smart City: Leveraging the City's open data, social media analytics and the Library's own statistics, we will create an installation that provides data visualizations that respond to user feedback and interaction.
- From open data to open knowledge: Using libraries to turn civic data into a valuable resource for citizens, researchers, and City Hall alike: We seek to turn the City’s Open Data collection into a true knowledge resource by working with libraries in our community to catalog the information and provide essential context to citizens and researchers.
- Data libraries: Let people search less time for reliable (local) data. How a metadata and open source software can help: A module in Datawrapper, where libraries can add metadata to a dataset - thus enabling search and re-use of such data.
- Turning Libraries into “Community Observatories”: We turn libraries into “community observatories” where residents work together to crowdsource data, make mapstories, and explore the historical evolution of their local communities.
And I would be remiss if I didn't tell you that I am also collaborating on this entry:
OVER UNDER AROUND THROUGH: a national library-library game to build civic engagement skills: OVER UNDER AROUND THROUGH is kinda like a dance-off challenge: libraries challenge each other – but instead of “show us your moves” the challenge is “show us how you would take on” actual community challenges such as economic disparity and racial tensions
In many ways, this Knight News Challenge is just such a dance-off.
CrossRef: The Public Knowledge Project and CrossRef Collaborate to Improve Services for Publishers using Open Journal Systems
2 October 2014, Lynnfield, MA, USA and Vancouver, BC, Canada---CrossRef and the Public Knowledge Project (PKP) are collaborating to help publishers and journals using the Open Journal Systems (OJS) platform take better advantage of CrossRef services.
The collaboration involves an institutional arrangement between CrossRef and PKP, and new software features. Features include an improved CrossRef plugin for OJS that will automate Digital Object Identifier (DOI) deposits, as well as plans to create new tools for extracting references from submissions. To facilitate CrossRef membership, PKP has also become a CrossRef Sponsoring Entity, which will allow OJS-based publishers to join CrossRef through PKP.
The latest release of OJS version 2.4.5 includes a new CrossRef plugin with improved support for CrossRef deposits, the process by which CrossRef member publishers can assign DOIs (persistent, actionable identifiers) to their content. A CrossRef deposit includes the bibliographic metadata about an article or other scholarly document, the current URL of the item, and the DOI. Publishers need only update the URL at CrossRef if the content's web address changes. The cited DOI will automatically direct readers to the current URL.
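To make the deposit concrete, here is an abridged, illustrative sketch of the XML a deposit might contain. It is heavily simplified from the CrossRef deposit schema (a real deposit needs full head and journal metadata and must validate against the current schema), and the DOI and URL shown are made up:

```xml
<!-- Abridged illustration only; a real deposit must validate against
     the current CrossRef deposit schema. -->
<doi_batch>
  <body>
    <journal>
      <journal_article>
        <titles>
          <title>An Example Article</title>
        </titles>
        <doi_data>
          <!-- The DOI is permanent; if the article moves, the publisher
               updates only the resource URL and the DOI keeps resolving. -->
          <doi>10.1234/example.2014.001</doi>
          <resource>https://journal.example.org/article/view/001</resource>
        </doi_data>
      </journal_article>
    </journal>
  </body>
</doi_batch>
```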
OJS 2.4.5 includes several general improvements that benefit CrossRef members directly and indirectly. First, OJS now allows for automatic deposits to the CrossRef service - manually uploading data via CrossRef's web interface is no longer necessary. Second, users of the plugin will be able to deposit Open Researcher and Contributor Identifiers (ORCIDs), which OJS can now accept during the author registration and article submission processes.
Additionally, this release also allows OJS publishers to more easily participate in the LOCKSS archiving service of their choice (including the forthcoming PKP PLN Service).
Finally, this new release will serve as the foundation for further integration of other CrossRef features and services, such as the deposit of FundRef funding data, and the CrossMark publication record service.
"The release of OJS 2.4.5 signals a new strategic direction for PKP in the provision of enhanced publishing services, such as the new CrossRef plugin," said Brian Owen, Managing Director of PKP. "Our collaboration with CrossRef has enabled us to move up the development of features that our publishers have been asking for. The partnership doesn't end here, either. We're looking forward to supporting publishers more directly now that we are a Sponsoring Entity and to jointly develop tools that will make it easier for publishers to comply with CrossRef's outbound reference linking requirements."
CrossRef Executive Director Ed Pentz noted, "The profile of CrossRef's member publishers has changed significantly over the years. We are growing by hundreds of members each year. Many of these publishers are small institution-based journals from around the world. And many are hosted by the open source OJS software. It has been challenging for some of these organizations to meet our membership obligations like outbound reference linking and arranging for long-term archiving. And many have not been able to participate in newer services, because they require the ability to deposit additional metadata. We want all of our publishers to have a level playing field, regardless of their size. Our cooperation with PKP will help make that happen."
Journals and publishers that use OJS and that already have established a direct relationship with CrossRef, or those that have an interest in becoming members through PKP, may take advantage of the enhanced features in the new CrossRef plugin by upgrading to OJS 2.4.5. And starting now, eligible journals can apply for a PKP-sponsored CrossRef membership for free DOI support. See PKP's CrossRef page for more information.
The Public Knowledge Project was established in 1998 at the University of British Columbia. Since that time PKP has expanded and evolved into an international and virtual operation with two institutional anchors at Stanford University (http://www.stanford.edu/) and Simon Fraser University Library (http://www.lib.sfu.ca/). OJS is open source software made freely available to journals worldwide for the purpose of making open access publishing a viable option for more journals, as open access can increase a journal's readership as well as its contribution to the public good on a global scale. More information about PKP and its software and services is available at pkp.sfu.ca.
CrossRef (www.crossref.org) serves as a digital hub for the scholarly communications community. A global not-for-profit membership organization of scholarly publishers, CrossRef's innovations shape the future of scholarly communications by fostering collaboration among multiple stakeholders. CrossRef provides a wide spectrum of services for identifying, locating, linking to, and assessing the reliability and provenance of scholarly content.
James MacGregor, PKP
Carol Anne Meyer, CrossRef
Phone +1 781-295-0072 x23
Washington, DC: The 2015 National Agenda for Digital Stewardship has been released!
You can download a copy of the Executive Summary and Full Report here: http://www.digitalpreservation.gov/ndsa/nationalagenda/index.html
Winchester, MA: The 8th DuraSpace Hot Topics Community Webinar Series, “Doing It: How Non-ARL Institutions are Managing Digital Collections,” began October 2, 2014. The first webinar in this series, curated by Liz Bishoff, “Research Results on Non-ARL Academic Libraries Managing Digital Collections,” provided an overview of the methodology and key questions and findings of the Managing Digital Collections Survey of non-ARL academic libraries. Participants also had the opportunity to share how
Weekly PeerLibrary meeting finalizing our Knight News Challenge submission.
Cynthia Ng: Access 2014: Closing Keynote Productivity and Collaboration in the Age of Digital Distraction
Evergreen 2.7.0 has been released!
Small delay in announcing, but here we go…
Cheers and many thanks to everyone who helped to make Evergreen 2.7.0 a reality, our first official release of the 2.7 series! After six months of hard work with development, bug reports, testing, and documentation efforts, the 2.7.0 files are available on the Evergreen website’s downloads page: http://evergreen-ils.org/egdownloads
So what’s new in Evergreen 2.7? You can see the full release notes here: http://evergreen-ils.org/documentation/release/RELEASE_NOTES_2_7.html. To briefly summarize, numerous individuals contributed both code and documentation. A special welcome and acknowledgement to all our first-time contributors: thanks for your contributions to Evergreen!
Some caveats now… Evergreen 2.7.0 currently requires the latest OpenSRF 2.4 series, which is still in its alpha release (beta coming soon). As folks help to test the OpenSRF release, this will no doubt make the Evergreen 2.7 series better. Also, on the localization/i18n front, some last-minute bugs were found, and we plan to release updated translation files in the next maintenance release, 2.7.1.
Evergreen 2.7.0 includes a preview of the new web-based staff client code. The instructions for setting this up are being finalized by the community and are expected with the next maintenance release, 2.7.1, later in October.
See here for some direct links to the various files so far:
Once again, a huge thanks to everyone in the community who has participated this cycle to contribute new code, test and sign-off on features, and work on new documentation and other ongoing development efforts.
This post was featured on the Mozilla Community Blog
If you’ve been wondering why you haven’t received the best in Mozilla’s community news in some weeks, it’s because we’ve been busy redesigning our newsletter in order to bring you even more great content.
Non-profit marketing is no easy feat. Even with our team of experts here at Mozilla, we don’t always clear the bar when it comes to open rates, click-through rates, and other metrics that measure marketing success. For our community newsletter, I watched our metrics steadily decrease over the six-month period since we re-launched the newsletter and started publishing on a regular basis.
It was definitely time for a makeover.
Our community newsletter is a study in pathways and retention: How do we help people who have already expressed interest in contributing get involved and stay involved? What are some easy ways for people to join our community? How can communities come together to write inspiring content for the Web?
At Mozilla, we put out three main newsletters: Firefox and You (currently on a brief hiatus), the Firefox Student Ambassadors newsletter, and our Mozilla Communities newsletter (formerly called about:Mozilla).
It was important to me to have the newsletter feel authentically like the voice of the community, to help people find their Mozillian way, and to point people in the direction of others who share their interests, opening up participation to a wider audience.
A peer assist with Andrea Wood and Kelli Klein at the Mozilla Foundation helped me articulate what we needed and stay on-target with the newsletter’s goal to “provide the best in contribution opportunities at Mozilla.” Andrea demonstrated to me how the current newsletter was structured for consumption, not action, and directed me toward new features that would engage people with the newsletter’s content and eventually help them join us.
I also took a class with Aspiration Tech on how to write emails that captivate, and read a lot about non-profit email marketing. While some of it seemed obvious, my research also gave me an overview of the field, which allowed me to redesign the newsletter according to best practices.
Here’s what I learned:
1. According to M & R, who publish the best (and most hilarious) study of non-profit email campaigns, our metrics were right on track with industry averages. Non-profit marketing emails have a mean open rate of 13% with a 2.5% deviance in either direction. This means that at between 15% and 25% open rate we were actually doing better than other non-profit emails. What worried me was that our open rate rapidly and steadily decreased, signalling a disengagement with the content.
I came up with similar findings for our click-through rates: on par with the industry, but steadily decreasing. (From almost 5% on our first newsletter to less than 1.5% on our last. Eek!)
2. While I thought that our 70,000 subscribers put us safely in the “large email list” category, I learned that we are actually a small/medium newsletter according to industry averages! In terms of how we gain subscribers, I’m hoping that an increased social media presence as well as experiments with viral marketing (e.g., “forward this to a friend!”) will bring in new voices and new people to engage with our community.
3. “The Five Second Rule” is perhaps the best rule I learned about email marketing. Have you captured the reader right away? Can you open an email and know what it’s trying to ask you in five seconds? If not, you should redesign.
4. Stories that asked people to take action were always the most clicked on stories in our last iteration. This is unsurprising, but “learn more” and “read more” don’t seem to move our readers. “Sign this petition” and “Sign up” were always well-received.
5. There is no statistically “best time” to send an email newsletter. The best time to send an email newsletter is “when it’s ready.” While every two weeks is a good goal for the newsletter, sending it slightly less frequently will not take away from its impact.
6. As M & R writes, “For everything, (churn churn churn) there is a season (churn, churn, churn)…” Our churn rate on the newsletter was pretty high (we lost and gained subscribers at a high rate). I’m hoping that our new regular features about teaching and learning as well as privacy will highlight what’s great about our community and how to take action.
And now to the redesign!
The first thing you’ll notice is that our newsletter is now called “Mozilla Communities.” We voted on the new name a few weeks ago after the Grow Mozilla call. Thanks to everyone who gave feedback.
What we need from you:
1. We need writers, coders, social media gurus, copy editors, and designers who are interested in consistently testing and improving the newsletter. The opportunity newsletter is a new contribution area on the October 15th relaunch of the Get Involved page (under the “Writing –> Journalism” drop down choice) and I’m hoping that will engage new contributors as well.
2. A newsletter can’t run without content, and we experimented with lots of ways to collect that content in the last few months. Do you have content for the newsletter? Do you want to be a featured contributor? Reach out to mozilla-communities at mozilla dot com.
3. Feedback requested! I put together an Etherpad that asks specific questions about improving the design. Please put your feedback here or leave it in the comments.
The newsletter is a place for us to showcase our work and connect with each other. We can only continue improving, incorporating best practices, and connecting more deeply and authentically through our platforms. Thank you to everyone who helped in the Mozilla Communities redesign and to all of you who support Mozilla communities every day.
Note: as with all the content on this blog, this post reflects my own thoughts about a personal project, and not the opinions or activities of my employer.
Two days ago a retweet from my friend Ian Davis scrolled past in my Twitter stream:
— Parliament WikiEdits (@parliamentedits)
The simplicity of combining Wikipedia and Twitter in this way immediately struck me as a potentially useful transparency tool. So using my experience on a previous side project I quickly put together a short program that listens to all major language Wikipedias for anonymous edits from Congressional IP address ranges (thanks Josh) … and tweets them.
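The core of such a bot is tiny: watch a feed of recent changes and match anonymous editors’ IP addresses against the ranges you care about. Here is a minimal Python sketch of the idea — not the actual program; the ranges shown and the `tweet` stub are illustrative placeholders:

```python
import ipaddress

# Illustrative ranges only; the real bot loads a maintained list of
# Congressional IP ranges rather than hard-coding them.
WATCHED_RANGES = [
    ipaddress.ip_network("143.231.0.0/16"),
    ipaddress.ip_network("156.33.0.0/16"),
]

def is_congressional(ip):
    """Return True if an IP address falls inside a watched range."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in WATCHED_RANGES)

def handle_edit(edit):
    """Process one change event from a Wikipedia recent-changes feed.

    Anonymous edits are attributed to the editor's IP address, which is
    what makes this kind of monitoring possible at all.
    """
    if edit.get("anonymous") and is_congressional(edit["user"]):
        tweet("%s Wikipedia article edited anonymously %s"
              % (edit["page"], edit["url"]))

def tweet(text):
    print(text)  # stand-in for a real Twitter API call
```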
In less than 48 hours the @congressedits Twitter account had more than 3,000 followers. My friend Nick set up @gccaedits for Canada using the same software … and @wikiAssemblee (France) and @RiksdagWikiEdit (Sweden) were quick to follow.
Watching the follower count rise, and the flood of tweets, brought home something that I believed intellectually but hadn’t felt quite so viscerally before. There is an incredible yearning in this country and around the world to use technology to provide more transparency about our democracies.
Sure, there were tweets and media stories that belittled the few edits that have been found so far. But by and large people on Twitter have been encouraging, supportive and above all interested in what their elected representatives are doing. Despite historically low approval ratings for Congress, people still care deeply about our democracies, our principles and dreams of a government of the people, by the people and for the people.
We desperately want to be part of a more informed citizenry that engages with our local communities, sees the world as our stage, and the World Wide Web as our medium.
Consider this thought experiment. Imagine if our elected representatives and their staffers logged in to Wikipedia, identified themselves much as Dominic McDevitt-Parks (a federal employee at the National Archives) does, and used their knowledge of the issues and local history to help make Wikipedia better? Perhaps in the process they enter into conversation on an article’s talk page with a constituent or political opponent, and learn something from them, or perhaps compromise? The version history becomes a history of the debate and discussion around a topic. Certainly there are issues of conflict of interest to consider, but we always edit topics we are interested and knowledgeable about, don’t we?
I think there is often fear that increased transparency can lead to increased criticism of our elected officials. It’s not surprising given the way our political party system and media operate: always looking for scandal, and the salacious story that will push public opinion a point in one direction, to someone’s advantage. This fear encourages us to clamp down, to decrease or obfuscate the transparency we have. We all kinda lose, irrespective of our political leanings, because we are ultimately less informed.
I wrote this post to make it clear that my hope for @congressedits wasn’t to expose inanity, or belittle our elected officials. The truth is, @congressedits has only announced a handful of edits, and some of them are pretty banal. But can’t a staffer or politician make a grammatical change, or update an article about a movie? Is it really news that they are human, just like the rest of us?
I created @congressedits because I hoped it could engender more, better ideas and tools like it. More thought experiments. More care for our communities and peoples. More understanding, and willingness to talk to each other. More humor. More human.
@Congressedits is why we invented the Internet
— zarkinfrood (@zarkinfrood)
I’m pretty sure zarkinfrood meant @congressedits figuratively, not literally. As if perhaps @congressedits was emblematic, in its very small way, of something a lot bigger and more important. Let’s not forget that when we see the inevitable mockery and bickering in the media. Don’t forget the big picture. We need transparency in our government more than ever, so we can have healthy debates about the issues that matter. We need to protect and enrich our Internet, and our Web … and to do that we need to positively engage in debate, not tear each other down.
Educate and inform the whole mass of the people. Enable them to see that it is their interest to preserve peace and order, and they will preserve them. And it requires no very high degree of education to convince them of this. They are the only sure reliance for the preservation of our liberty. — Thomas Jefferson
Who knew TJ was a Wikipedian…
— MIT Media Lab (@medialab) October 1, 2014
It seems like an important move for MIT to formally recognize that social media is a new medium that deserves its own research focus, and investment in infrastructure. The language on the homepage gives a nice flavor for the type of work they plan to be doing. I was particularly struck by their frank assessment of how our governance systems are failing us, and social media’s potential role in understanding and helping solve the problems we face:
In a time of growing political polarization and institutional distrust, social networks have the potential to remake the public sphere as a realm where institutions and individuals can come together to understand, debate and act on societal problems. To date, large-scale, decentralized digital networks have been better at disrupting old hierarchies than constructing new, sustainable systems to replace them. Existing tools and practices for understanding and harnessing this emerging media ecosystem are being outstripped by its rapid evolution and complexity.
Their notion of “social machines” as “networked human-machine collaboratives” reminds me a lot of my somewhat stumbling work on @congressedits and archiving Ferguson Twitter data. As Nick Diakopoulos has pointed out we really need a theoretical framework for thinking about what sorts of interactions these automated social media agents can participate in, formulating their objectives, and for measuring their effects. Full disclosure: I work with Nick at the University of Maryland, but he wrote that post mentioning me before we met here, which was kind of awesome to discover after the fact.
The Laboratory for Social Machines will experiment in areas of public communication and social organization where humans and machines collaborate on problems that can’t be solved manually or through automation alone.
What a lovely encapsulation of the situation we find ourselves in today, where the problems we face are localized and yet global. Where algorithms and automation are indispensable for analysis and data gathering, but people and collaborative processes are all the more important. The ethical dimensions to algorithms and our understanding of them is also of growing importance, as the stories we read are mediated more and more by automated agents. It is super that Twitter has decided to help build this space at MIT where people can answer these questions, and have the infrastructure to support asking them.
When I read the quote I was immediately reminded of the problem that some of us were discussing at the last Society of American Archivists meeting in DC: how do we document the protests going on in Ferguson?
Much of the primary source material was being distributed through Twitter. The Internet Archive was looking for nominations of URLs to use in its web crawl. But weren’t all the people tweeting about Ferguson including URLs for stories, audio and video that were of value? If people are talking about something can we infer its value in an archive? Or rather, is it a valuable place to start inferring from?
I ended up archiving 13 million tweets mentioning “ferguson” from the two-week period after the killing of Michael Brown. I then went through the URLs in these tweets, unshortened them, and came up with a list of 417,972 unshortened URLs. You can see the top 50 of them here, and the top 50 for August 10th (the day after Michael Brown was killed) here.
I did a lot of this work in prototyping mode, writing quick one-off scripts to do this and that. One nice unintended side effect was unshrtn, which is a microservice for unshortening URLs, which John Kunze gave me the idea for years ago. It gets a bit harder when you are unshortening millions of URLs.
But what would a tool look like that let us analyze events in social media, and helped us (archivists) collect information that needs to be preserved for future use? These tools are no doubt being created by those in positions of power, but we need them for the archive as well. We also desperately need to explore what it means to use these archives: how do we provide access to them, and share them? It feels like there could be a project here along the lines of what George Washington University is doing with their Social Feed Manager. Full disclosure again: I’ve done some contracting work with the fine folks at GW on a new interface to their library catalog.
The 5 million dollars aside, an important contribution that Twitter is making here (that’s probably worth a whole lot more) is firehose access to the Tweets that are happening now, as well as the historic data. I suspect Deb Roy’s role at MIT as a professor and as Chief Media Scientist at Twitter helped make that happen. Since MIT has such a strong history of supporting open research, it will be interesting to see how the LSM chooses to share data that supports its research.
The following is a guest post by Julio Díaz Laabes, HACU intern and Program Management Assistant at the Library of Congress.
On the heels of a successful beginning for the Boston and New York set of cohorts, the National Digital Stewardship Residency Program is becoming a model for digital stewardship residencies on a national scale. This residency program, funded by the Institute of Museum and Library Services, offers recent master’s and doctoral program graduates in specialized fields (library science, information science, museum studies, archival studies and others) the opportunity to gain professional experience in the field of digital preservation.
The inaugural year of the NDSR program was completed in May of 2014. During this year, ten residents were placed in various organizations in the Washington, DC area. Since completing the program, all ten residents are now working in positions related to the field of digital preservation! Here are some accounts of how the program has impacted each of the resident’s lives and where they are now in their careers.
Lee Nilsson is employed in a contract position as a junior analyst at the Department of State, Bureau of International Information and programs. Specifically, he is working in the analytics office on foreign audience research. On how the residency helped him, Lee said, “The residency got me to D.C and introduced me to some great people. Without NDSR I would not have made it this far.” Furthermore, Lee commented that the most interesting aspect of his job is “the opportunity to work with some very talented people on some truly global campaigns.”
Following the residency, Maureen McCormick Harlow accepted a permanent position as the new Digital Librarian at PBS (Public Broadcasting Service). She works in the Media Library and her tasks include consulting on the development of the metadata schema for an enterprise-wide digital asset management system, fulfilling archival requests for legacy materials and working with copyright holders to facilitate the next phase of a digitization project (which builds on the NDSR project of Lauren Work). Maureen stated, “NDSR helped me to foster and cultivate a network of digital preservationists and practitioners in the DC area over the nine months that I participated in it.” An interesting aspect of her job is working with the history of PBS and learning about PBS programming to see how it has changed over the years.
On an international scale, Heidi Elaine Dowding is currently in a three-year PhD Research Fellow position at the Royal Dutch Academy of Arts and Sciences Huygens ING Institute. This position is funded through the European Commission. “My research involves the long-term publication and dissemination of digital scholarly editions, so aspects of digital preservation will be key,” said Heidi. On the best part of her position, Heidi said, “I am lucky enough to be fully funded, which allows me to focus on my studies. This gives me that opportunity to research things that I am interested in every day.”
Erica Titkemeyer is currently employed at the University of North Carolina at Chapel Hill as the Project Director and AV Conservator for the Southern Folklife Collection. This position was created as part of a grant-funded initiative to research and analyze workflows for the mass reformatting and preservation of legacy audiovisual materials. “NDSR allotted me the opportunity to participate in research and projects related to the implementation of digital preservation standards. It provided me access to a number of networking events and meetings related to digital stewardship.” In her position, she hopes to help see improved access to the collections, while also having the opportunity to learn more about the rich cultural content they contain.
Given these success stories, the National Digital Stewardship Residency has proven to be an invaluable program, providing opportunity for real-world practical experience in the field of digital preservation. Also, the diversity of host institutions and location areas across major U.S. cities gives residents the opportunity to build up an extensive network of colleagues, practitioners and potential employers in diverse fields. Stay tuned for part two of this blog post which will showcase the remaining residents of the 2013-2014 Washington D.C. cohort.
Just a brief pair of threads this week. First is a look at what is happening with mobile device encryption as consumer electronics companies deal with data privacy in the post-Snowden era. There is also the predictable backlash from law enforcement organizations, and perhaps I just telegraphed how I feel on the matter. The second thread looks at how Getty Images is trying to get into distributing its content for free to get it in front of eyeballs that will end up paying for some of it.
Feel free to send this to others you think might be interested in the topics. If you find these threads interesting and useful, you might want to add the Thursday Threads RSS Feed to your feed reader or subscribe to e-mail delivery using the form to the right. If you would like a more raw and immediate version of these types of stories, watch my Pinboard bookmarks (or subscribe to its feed in your feed reader). Items posted there are also sent out as tweets; you can follow me on Twitter. Comments and tips, as always, are welcome.
Apple and Android Device Data Encryption
In an open letter posted on Apple’s website last night, CEO Tim Cook said that the company redesigned its mobile operating system to make it impossible for Apple to unlock a user’s iPhone data. Starting with iOS8, only the user who locked their phone can unlock it.
This is huge. What it means is that even if a foreign government or a US police officer with a warrant tries to legally compel Apple to snoop on someone, they won’t. Because they can’t. It’s a digital Ulysses pact.
- Apple Will No Longer Let The Cops Into Your Phone, by PJ Vogt, TL;DR blog, 18-Sep-2014
The next generation of Google’s Android operating system, due for release next month, will encrypt data by default for the first time, the company said Thursday, raising yet another barrier to police gaining access to the troves of personal data typically kept on smartphones.
- Newest Androids will join iPhones in offering default encryption, blocking police, by Craig Timberg, The Washington Post, 18-Sep-2014
Predictably, the US government and police officials are in the midst of a misleading PR offensive to try to scare Americans into believing encrypted cellphones are somehow a bad thing, rather than a huge victory for everyone’s privacy and security in a post-Snowden era. Leading the charge is FBI director James Comey, who spoke to reporters late last week about the supposed “dangers” of giving iPhone and Android users more control over their phones. But as usual, it’s sometimes difficult to find the truth inside government statements unless you parse their language extremely carefully. So let’s look at Comey’s statements, line-by-line.
- Your iPhone is now encrypted. The FBI says it’ll help kidnappers. Who do you believe? by Trevor Timm, Comment is free on theguardian.com, 30-Sep-2014
I think it is fair to say that Apple snuck this one in on us. To the best of my knowledge, the new encrypted-by-default wasn’t something talked about in the iOS8 previews. And it looks like poor Google had to play catch-up by announcing on the same day that they were planning to do the same thing with the next version of the Android operating system. (If Apple and Google conspired to make this announcement at the same time, I haven’t heard that either.)
As you can probably tell by the quote I pulled from the third article, I think this is a good thing. I believe the pendulum has swung too far in the direction of government control over communications, and Apple/Google are right to put new user protections in place. This places the process of accessing personal information firmly back in the hands of the judiciary through court orders to compel people and companies to turn over information after probable cause has been shown. There is nothing in this change that prevents Apple/Google from turning over information stored on cloud servers to law enforcement organizations. It does end the practice of law enforcement officers randomly seizing devices and reading data off them.
As an aside, there is an on-going discussion about the use of so-called “stingray” equipment that impersonates mobile phone towers to capture mobile network data. The once-predominant 2G protocol that the stingray devices rely on was woefully insecure, and the newer 3G and 4G mobile carrier protocols are much more secure. In fact, stingray devices are known to jam 3G/4G signals to force mobile devices to use the insecure 2G protocol. Mobile carriers are planning to turn off 2G protocols in the coming years, though, which will make the current generation of stingray equipment obsolete.
Getty Offers Royalty-Free Photos
The story of the photography business over the past 20 years has been marked by two shifts: The number of photographs in circulation climbs toward infinity, and the price that each one fetches falls toward zero. As a result, Getty Images, which is in the business of selling licensing rights, is increasingly willing to distribute images in exchange for nothing more than information about the public’s photo-viewing habits.
Now Getty has just introduced a mobile app, Stream, targeted at nonprofessionals to run on Apple’s new operating system. The app lets people browse through Getty’s images, with special focus on curated collections. It’s sort of like a version of Instagram (FB) featuring only professional photographers—and without an upload option.
- Getty’s New App Is Part of Its Plan to Turn a Profit From Free Photos, by Joshua Brustein, Businessweek, 19-Sep-2014
Commercial photography is another content industry — like mass-market and trade presses, journal publishers, newspapers, and many others — that is facing fundamental shifts in its business models. In this case, Getty is going the no-cost, embed-in-a-web-page route to getting their content to more eyeballs. They announced the Getty Images Embed program a year ago, and have now followed it up with this iOS app for browsing the collection of royalty-free images.
An attempt to explain sparse faceting and when to use it in not-too-technical terms. Sparse faceting in Solr is all about speeding up faceting on high-cardinality fields for small result sets. That’s a clear statement, right? Of course not. What is high, what is small and what does cardinality mean? Dmitry Kan has spent a lot of time testing sparse faceting with his high-cardinality field, without getting the promised performance increase. Besides unearthing a couple of bugs with sparse faceting, his work made it clear that there is a need for better documentation. Independent testing for the win!
What is faceting?
When we say faceting in this context, it means performing a search and getting a list of terms for a given field. The terms are ordered by their count, which is the number of times they are referenced by the documents in the search result. A classic example is a list of authors:
The search for "fairy tale" gave 15 results «hits».
Author «field»
- H.C. Andersen «term» (12 «count»)
- Brothers Grimm «term» (5 «count»)
- Lewis Carroll «term» (3 «count»)
Note how the counts sum to more than the number of documents: A document can have more than one reference to terms in the facet field. It can also have 0 references, all depending on the concrete index. In this case, there are either more terms than are shown or some of the documents have more than one author. There are other forms of faceting, but they will not be discussed here.
Under the hood
At the abstract level, faceting in Solr is quite simple:
- A list of counters is initialized. It has one counter for each unique term in the facet field in the full corpus.
- All documents in the result set are iterated. For each document, a list of its references to terms is fetched.
- The references are iterated and for each one, the counter corresponding to its term is increased by 1.
- The counters are iterated and the Top-X terms are located.
- The actual Strings for the located terms are resolved from the index.
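The five steps can be sketched as a toy Python model (Solr actually works against Lucene's ordinal and doc-values structures, but the shape is the same; all names here are illustrative):

```python
import heapq

def facet(result_docs, doc_terms, num_unique_terms, term_strings, top_x=10):
    """Toy facet counter following the five steps above.

    result_docs:  iterable of doc ids in the search result
    doc_terms:    maps doc id -> list of term ordinals it references
    term_strings: maps term ordinal -> the term's String
    """
    # Step 1: one counter per unique term in the full corpus
    counters = [0] * num_unique_terms

    # Steps 2-3: iterate the result set; for each document, fetch its term
    # references and increment the corresponding counters
    for doc in result_docs:
        for ordinal in doc_terms[doc]:
            counters[ordinal] += 1

    # Step 4: locate the Top-X terms by count
    top = heapq.nlargest(top_x, range(num_unique_terms),
                         key=lambda o: counters[o])

    # Step 5: resolve the located ordinals to their actual Strings
    return [(term_strings[o], counters[o]) for o in top if counters[o] > 0]
```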
Sparse faceting improves on standard Solr in two ways:
- Standard Solr allocates a new list of counters in step 1 for each call, while sparse re-uses old lists.
- Standard Solr iterates all the counters in step 3, while sparse only iterates the ones that were updated in step 2.
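A toy sketch of a sparse counter shows both tricks: the big array is allocated once and re-used, and a list of touched ordinals means only updated entries are iterated and cleared (the names here are illustrative, not Solr's internals):

```python
class SparseCounter:
    """Re-usable counter where only updated entries are iterated.

    Standard Solr would allocate a fresh counter array per call and scan
    all of it; a sparse counter keeps a list of the ordinals that were
    actually incremented, so extraction and clearing cost is proportional
    to the number of updated counters, not to the number of unique terms
    in the corpus.
    """
    def __init__(self, num_unique_terms):
        self.counts = [0] * num_unique_terms   # allocated once, re-used
        self.touched = []                      # ordinals with count > 0

    def increment(self, ordinal):
        if self.counts[ordinal] == 0:
            self.touched.append(ordinal)
        self.counts[ordinal] += 1

    def items(self):
        """Yield (ordinal, count) for the updated counters only."""
        for ordinal in self.touched:
            yield ordinal, self.counts[ordinal]

    def clear(self):
        """Reset for re-use without re-allocating the big array."""
        for ordinal in self.touched:
            self.counts[ordinal] = 0
        self.touched.clear()
```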
Distributed faceting in Solr adds a few steps:
- All shards are issued the same request by a coordinating Solr. They perform steps 1-4 above and return the results to the coordinator.
- The coordinator merges the shard-responses into one structure and extracts the Top-X terms from that.
- For each Top-X term, its exact count is requested from the shards that did not deliver it as part of the first step.
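A toy sketch of the merge and exact-count steps, with plain dicts standing in for shard responses and cached counters (the sparse shortcut is the cache lookup in place of a per-term mini-search; all names are illustrative):

```python
def merge_and_find_top(shard_responses, top_x):
    """Merge per-shard {term: count} maps and take the provisional Top-X."""
    merged = {}
    for response in shard_responses:
        for term, count in response.items():
            merged[term] = merged.get(term, 0) + count
    return sorted(merged, key=merged.get, reverse=True)[:top_x]

def fill_in_exact_counts(shard_responses, shard_counters, top_terms):
    """For each Top-X term a shard did not report, look its count up in
    that shard's still-cached counter structure (the sparse shortcut),
    rather than running a mini-search per term as standard Solr does."""
    exact = {term: 0 for term in top_terms}
    for response, counters in zip(shard_responses, shard_counters):
        for term in top_terms:
            # reported counts are reused; missing ones come from the cache
            exact[term] += response.get(term, counters.get(term, 0))
    return exact
```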
Standard Solr handles each exact-count separately by performing a mini-search for the term in the field. Sparse reuses the filled counters from step 2 (or repeats steps 1-2 if the counter has been flushed from the cache) and simply locates the counters corresponding to the terms. Depending on the number of terms, sparse is much faster (think 5-10x) than standard Solr for this task. See Ten times faster for details.
What is cardinality?
Down to earth, cardinality just means how many there are of something. But what thing? The possibilities for faceting are many: Documents, fields, references and terms. To make matters worse, references and terms can be counted for the full corpus as well as just the search result.
- Performance of standard Solr faceting is linear in the number of unique terms in the corpus in steps 1 & 3 and linear in the number of references in the search result in step 2.
- Performance of sparse faceting is (nearly) independent of the number of unique terms in the corpus and linear in the number of references in the search result in steps 2 & 3.
Both standard Solr and sparse treat each field individually, so they both scale linearly with the number of facet fields. The documents returned by the base search are themselves represented in a sparse structure (independent of sparse faceting) and scale with result set size. While it does take time to iterate over these documents, this is normally dwarfed by the other processing steps. Ignoring the devils in the details: standard Solr facet performance scales with the full corpus size as well as the result size, while sparse faceting scales just with the result size.
Examples please!
- For faceting on URL in the Danish Web Archive, cardinality is very high for documents (5bn), references (5bn) and terms (5bn) in the corpus. The overhead of performing a standard Solr faceting call is huge (hundreds of milliseconds), due to the high number of terms in the corpus. As the typical search results are quite a lot smaller than the full corpus, sparse faceting is very fast.
- For faceting on host in the Danish Web Archive, cardinality is very high for documents (5bn) and references (5bn) in the corpus. However, the number of terms (1.3m) is more modest. The overhead of performing a standard Solr faceting call is quite small (a few milliseconds), due to the modest number of terms; the time used in step 2, which is linear in the number of references, is often much higher than the overhead. Sparse faceting is still faster in most cases, but only by a few milliseconds. Not much if the total response time is hundreds of milliseconds.
- For faceting on content_type_norm in the Danish Web Archive, cardinality is very high for documents (5bn) and references (5bn) in the corpus. It is extremely small for the number of unique terms, which is 10. The overhead of performing a standard Solr faceting call is practically zero; the time used in step 2, which is linear in the number of references, is often much higher than the overhead. Sparse faceting is never faster than Solr for this, and as a consequence it falls back to standard counting, making it perform at the same speed.
- For faceting on author at the library index at Statsbiblioteket, cardinality is high for documents (15m), references (40m) and terms (9m) in the corpus. The overhead of performing a standard Solr faceting call is noticeable (tens of milliseconds), due to the 9m terms in the corpus. The typical search result is well below 8% of the full corpus, and sparse faceting is markedly faster than standard Solr. See Small Scale Sparse Faceting for details.
The Blue Earth County Historical Society (BECHS), founded in 1901, is located in Mankato, Minnesota. The content submitted by BECHS to the Minnesota Digital Library (MDL) and the Digital Public Library of America (DPLA) is unique to this county of Minnesota. Our collection chronicles the people, places, and events that shaped Blue Earth County from our agricultural roots to the rise of our cities. The images tell a variety of stories and showcase all walks of life across decades of time. All of the images have been donated to BECHS by people who wanted to preserve the past for future generations.
BECHS was honored when the DPLA selected one of our images to represent MDL when it came on as a partner in April 2013. The image was of Dr. G. A. Dahl posing for a photograph at a local photography studio. The interesting aspect of this image was that there was a photographer in the image as well as Dr. Dahl. It seems Dahl was fascinated with photography as he had interior photographs taken of his home and office, in a time when photographs of living spaces were rare. These images are also a part of our collection.
Our involvement in MDL helps people across Minnesota locate our images and have access to our collection. BECHS’ involvement with the DPLA has amplified that reach. People from across the country, and the world, are able to locate our images, which gives the user a glimpse into our collection and history. Based on the analytics from our MDL webpage, the DPLA is the highest referral to our webpage for visitors. Because the DPLA is a national resource, people can search this one site to find images from different locations and be directed back to the image’s home location. It is an excellent resource, especially for genealogists.
The Blue Earth County Historical Society is located in Mankato, Minnesota. BECHS was founded in 1901 in preparation for the semi-centennial of Mankato and Blue Earth County. In 1938, BECHS purchased the Hubbard family home and opened the first public history museum. BECHS operated from this location for 50 years before moving into our current location. The Hubbard House was placed on the National Register of Historic Places in 1978 and is still operated by BECHS as a living history museum. As we enter into our 113th year, BECHS has upcoming expansion plans in our current facility and will continue to collect, preserve and present the history of Blue Earth County for present and future generations.
Featured image credit: Detail from Portrait of Dr. G. A. Dahl, Mankato, Minnesota, ca.1900. Keene, George E. Courtesy of the Blue Earth County Historical Society via the Minnesota Digital Library.
All written content on this blog is made available under a Creative Commons Attribution 4.0 International License. All images found on this blog are available under the specific license(s) attributed to them, unless otherwise noted.
I’ve written about The Great Eastern a couple of times: once in April, when I had just started to listen to all the episodes for the fifth or sixth time, and then briefly in August with a quote about libraries. I finished listening to it all last month. It’s still a masterpiece, one of the finest radio comedies and one of the richest and deepest works of radio fiction ever.
Michael Collins has been writing long pieces about it on his web site and they are mandatory reading if you know the show at all:
- The Great Eastern’s 20th Anniversary: Introduction
- “‘How’s She Goin’, B’y?’ No More”: Season 1, Episode 5 of The Great Eastern, which discusses the early episode where Paul covers the death of sub-monoist Hugh Kuva and the controversial “Writing Through Region” conference.
- “The folly of Newfoundland’s grand colonial experiment”: The Great Eastern’s Oougubomba Saga (Season 4, Episodes 11, 13, and 14), about the stunning three Oougubomba episodes.
Mack Furlong, who played host Paul Moth, won a John Drainie award recently. He was impeccable on the show.
170 or so people are gathered together in Cleveland, Ohio, for Hydra Connect #2 – the second annual Hydra get-together. If you weren’t able to come (and even if you were) you’ll find increasing numbers of presentations and meeting notes hanging off the program page at https://wiki.duraspace.org/display/hydra/Hydra+Connect+2+Program