Planet Code4Lib - http://planet.code4lib.org

Harvard Library Innovation Lab: Link roundup September 5, 2014

Fri, 2014-09-05 17:36

This is the good stuff.

Photogrammar

So nice, could even be taken further, I’d imagine they’ve got a lot of ideas in the works –

Our Cyborg Future: Law and Policy Implications | Brookings Institution

Whoa, weird. Our devices and us.

Evolution of the desk

The desk becomes clear of its tools as those tools centralize in the digital space.

Mass Consensual Hallucinations with William Gibson

Technology trumps ideology.

Awesomeness: Millions Of Public Domain Images Being Put Online

Mining the archive for ignored treasure.

John Miedema: The four steps Watson uses to answer a question. An example from literature.

Fri, 2014-09-05 16:01

Check out this excellent video on the four steps Watson uses to answer a question. The Jeopardy-style question (i.e., an answer) comes from the topic of literature, so quite relevant here: “The first person mentioned by name in ‘The Man in the Iron Mask’ is this hero of a previous book by the same author.” This video is not sales material, but a good overview of the four (not so simple) steps: 1. Question Analysis, 2. Hypothesis Generation, 3. Hypothesis & Evidence Scoring, 4. Final Merging & Ranking. “Who is d’Artagnan?” I am so pleased that IBM is sharing its knowledge in this way. I gained new insight watching it.
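
As a rough illustration of how those four steps fit together, here is a toy, self-contained Python sketch; it is my own stand-in, not IBM's DeepQA code, and the tiny "corpus" and keyword scoring are invented for the example:

    # Toy walk-through of the four steps, applied to the Iron Mask clue.
    # Illustrative only; not IBM's DeepQA code.

    # A stand-in evidence corpus: candidate answer -> supporting passages.
    CORPUS = {
        "d'Artagnan": [
            "d'Artagnan is the hero of Dumas's earlier novel The Three Musketeers.",
            "d'Artagnan is the first character named in The Man in the Iron Mask.",
        ],
        "Aramis": [
            "Aramis also appears in The Man in the Iron Mask.",
        ],
    }

    # 1. Question Analysis: decide what the clue asks for (a literary hero).
    keywords = ("hero", "first")

    # 2. Hypothesis Generation: every candidate in the corpus is a hypothesis.
    hypotheses = list(CORPUS)

    # 3. Hypothesis & Evidence Scoring: count passages matching the clue's key terms.
    def score(candidate):
        return sum(any(k in passage.lower() for k in keywords)
                   for passage in CORPUS[candidate])

    # 4. Final Merging & Ranking: keep the highest-confidence candidate.
    best = max(hypotheses, key=score)
    print(f"Who is {best}?")  # -> Who is d'Artagnan?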

Library of Congress: The Signal: Studying, Teaching and Publishing on YouTube: An Interview With Alexandra Juhasz

Fri, 2014-09-05 15:07

Alexandra Juhasz, professor of Media Studies at Pitzer College

The following is a guest post from Julia Fernandez, this year’s NDIIPP Junior Fellow. Julia has a background in American studies and experience working with folklife institutions, and she worked on a range of projects leading up to CurateCamp Digital Culture in July. This is part of a series of interviews Julia conducted to better understand the kinds of born-digital primary sources that folklorists, and others interested in studying digital culture, are making use of for their scholarship.

The numbers around user-generated video are staggering. YouTube, one of the largest user-generated video platforms, has more than 100 hours of video content uploaded to it every minute. What does this content mean for us and our society? What of it should we aspire to ensure long-term access to?

As part of the NDSA Insights interview series, I’m delighted to interview Alexandra Juhasz, professor of Media Studies at Pitzer College. Dr. Juhasz has written multiple articles on digital media and produced the feature films “The Owls” and “The Watermelon Woman.” Her innovative “video-book” “Learning from YouTube” was published by MIT Press, but partly enabled through YouTube itself, and is available for free here. In this regard, her work is relevant to those working in digital preservation both in better understanding the significance of user-generated video platforms like YouTube and in understanding new hybrid forms of digital scholarly publishing.

Julia: In the intro to your online video-book “Learning From YouTube” you say “YouTube is the Problem, and YouTube is the solution.” Can you expand on that a bit for us?

Alex: I mean “problem” in two ways. The first is more neutral: YouTube is my project’s problematic, its subject or concern. But I also mean it more critically: YouTube’s problems are multiple–as are its advantages–but our culture has focused much more uncritically on how it chooses to sell itself: as a democratic space for user-made production and interaction. The “video-book” understands this as a problem because it’s not exactly true. I discuss how YouTube isn’t democratic in the least; how censorship dominates its logic (as do distraction, the popular and capital).

YouTube is also a problem in relation to the name and goals of the course that the publication was built around (my undergraduate Media Studies course also called “Learning from YouTube” held about, and also on, the site over three semesters, starting in 2007). As far as pedagogy in the digital age is concerned, the course suggests there’s a problem if we do all or most or even a great deal of our learning on corporate-owned platforms that we have been given for free, and this for many reasons that my students and I elaborate, but only one of which I will mention here as it will be most near and dear to your readers’ hearts: it needs a good archivist and a reasonable archiving system if it’s to be of any real use for learners, teachers or scholars. Oh, and also some system to evaluate content.

YouTube is the solution because I hunkered down there, with my students, and used the site to both answer the problem, and name the problems I have enumerated briefly above.

Julia: What can you tell us about how you approached the challenges of teaching a course about YouTube? What methods of analysis did you apply to its content? How did you select which materials to examine given the vast scope and diversity of YouTube’s content?

Alex: I have taught the course three times (2007, 2008, 2010). In each case the course was taught on and about YouTube. This is to say, we recorded class sessions (the first year only), so the course could be seen on YouTube; all the class assignments needed to take the form of YouTube “writing” and needed to be posted on YouTube (as videos or comments); and the first time I taught it, the students could only do their research on YouTube (thereby quickly learning the huge limits of its vast holdings). You can read more about my lessons learned teaching the course here and here.

The structure of the course mirrors many of the false promises of YouTube (and web 2.0 more generally), thereby allowing students other ways to see its “problems.” It was anarchic, user-led (students chose what we would study, although of course I graded them: there’s always a force of control underlying these “free” systems), public, and sort of silly (but not really).

As the course developed in its later incarnations, I developed several kinds of assignments (or methods of analysis as you put it), including traditional research looking at the results of published scholars, ethnographic research engaging with YouTubers, close-textual analysis (of videos and YouTube’s architecture), and what I call YouTours, where students link together a set of YouTube videos to make an argument inside of and about and with its holdings. I also have them author their own “Texteo” as their final (the building blocks, or pages, of my video-book; texteo=the dynamic linking of text and video), where they make a concise argument about some facet of YouTube in their own words and the words of videos they make or find (of course, this assignment allows them to actually author a “page” of my “book,” thereby putting into practice web 2.0’s promise of the decline of expertise and the rise of crowd-sourced knowledge production).

Students choose the videos and themes we study on YouTube. I like this structure (giving them this “control”) because they both enjoy and know things I would never look at, and they give me a much more accurate reading of mainstream YouTube than I would ever find on my own. My own use of the site tends to take me into what I call NicheTube (the second, parallel structure of YouTube, underlying the first, where a few videos are seen by many, many people and are wholly predictable in their points of view and concerns). On YouTube it’s easy to find popular videos. On NicheTube content is rarely seen, hard to find and easy to lose; everything might be there, but very few people will ever see it.

Now that YouTube Studies has developed, I also assign several of the book-length studies written about it from a variety of disciplines (I list these below). When I first taught the class in 2007, my students and I were generating the primary research and texts of YouTube Studies: producing work that was analytical and critical about the site, in its vernaculars, and on its pages.

Julia: What were some of the challenges of publishing an academic work in digital form? A large part of the work depends on linking to YouTube videos that you did not create and/or are no longer available. What implications are there for long-term access to your work?

Alex: I discuss this at greater length in the video-book, because another of its structures, mirroring those of YouTube, is self-reflexivity: an interest in its own processes, forms, structures and histories.

While MIT Press was extremely interested and supportive, they had never “published” anything like this before. The problems were many and varied, and we worked through them together. I’ve detailed answers to your question in greater detail within the video-book, but here’s one of the lists of differences I generated:

  • Delivery of the Work
  • Author’s Warranty
  • Credit
  • Previous Publication
  • Size of the Work
  • Royalties
  • Materials Created by Other Persons
  • Upkeep
  • Editing
  • Author’s Alterations
  • Promotion
  • Index

Many of these differences are legal and respond directly to the original terms in the contract they gave me that made no sense at all with a born-digital, digital-only object, and in particular about writing a book composed of many things I did not “own,” about “selling” a book for free, making a book that was already-made, or moving a book that never needed to be shipped.

One solution is that the video-book points to videos, but they remain “owned” by YouTube (I backed up some of the most important ones and put them on Critical Commons, knowing that they might go away). But, in the long run, I do not mind that many of the videos fade away, or that the book itself will probably become quickly unreadable (because the systems it is written on will become obsolete). It is another myth of the Internet that everything there is lasting, permanent, forever. In fact, by definition, much of what is housed or written there is unstable, transitory, difficult to find, or difficult to access as platforms, software and hardware change.

In “On Publishing My YouTube “Book” Online (September 24, 2009)” I mention these changes as well:

  1. Audience. When you go online your readers (can) include nonacademics.
  2. Commitment. Harder to command amid the distractions.
  3. Design. Matters more; and it has meaning.
  4. Finitude. The page(s) need never close.
  5. Interactivity. Should your readers, who may or may not be experts, author too?
  6. Linearity. Goes out the window, unless you force it.
  7. Multimodality. Much can be expressed outside the confines of the word.
  8. Network. How things link is within or outside the author’s control.
  9. Single author. Why hold out the rest of the Internet?
  10. Temporality. People read faster online. Watching video can be slow. A book is long.

Now, when I discuss the project with other academics, I suggest there are many reasons to write and publish digitally: access, speed, multi-modality, etc. (see here), but if you want your work to be seen in the future, better to publish a book!

Julia: At this point you have been studying video production since the mid-90s. I would be curious to hear a bit about how your approach and perspective have developed over time.

Alex: My research (and production) interests have stayed consistent: how might everyday people’s access to media production and distribution contribute to people’s and movements’ empowerment? How can regular citizens have a voice within media, and therefore culture more broadly, so that our interests, concerns and criticisms become part of this powerful force?

Every time I “study” the video of political people (AIDS activists, feminists, YouTubers), I make video myself. I theorize from my practice, and I call this “Media Praxis” (see more about that here). But what has changed during the years I’ve been doing this and thinking about it is that far more people really do have access to both media production and distribution than when I first began my studies (and waxed enthusiastically about how camcorders were going to foster a revolution). Oddly, this access can be said to have produced many revolutions (for instance the use of people-made media in the Arab Spring) and to have quieted just as many (we are more deeply entrenched in both capitalism’s pull and self-obsessions than at any time in human history, it seems to me!). I think a lot about that in the YouTube video-book and in projects since (like this special issue on feminist queer digital media praxis that I just edited for the online journal Ada).

Julia: You end up being rather critical of how popularity works on YouTube. You argue that “YouTube is not democratic. Its architecture supports the popular. Critical and original expression is easily lost to or censored by its busy users, who not only make YouTube’s content, but sift and rate it, all the while generating its business.” You also point to the existence of what you call “NicheTube,” the vast sea of little-seen YouTube videos that are hard to find given YouTube’s architecture of ranking and user-generated tags. Could you tell us a bit more about your take on the role of filtering and sorting in its system?

Alex: YouTube is corporate-owned, just as are Facebook, Google, and the many other systems we use to find, speak, navigate and define our worlds, words, friends, interests and lives. Filtering occurs in all these places in ways that benefit their bottom lines (I suggest in “Learning From YouTube” that a distracted logic of attractions keeps our eyeballs on the screen, which is connected to their ad-based business plan). In the process, we get more access to more and more immediate information, people, places and ideas than humans ever have, but it’s filtered through the imperatives of capitalism rather than, say, those of a University Library (which has its own systems to be sure, of great interest to think through, and imbued by power like anything else, but not the power of making a few people a lot of money).

The fact that YouTube’s “archive” is unorganized, user-tagged, chaotic and uncurated is their filtering system.

Julia: If librarians, archivists and curators wanted to learn more about approaches like yours to understanding the significance and role of online video what examples of other scholars’ work would you suggest? It would be great if you could mention a few other scholars’ work and explain what you think is particularly interesting about their approaches.

Alex: I assign these books in “Learning from YouTube”: Patrick Vonderau, “The YouTube Reader”; Burgess and Green, “YouTube”; and Michael Strangelove, “Watching YouTube.” I also really like the work of Michael Wesch and Patricia Lange, anthropologists whose work focuses on the site and its users.

Outside of YouTube itself, many of us are calling this kind of work “platform studies,” where we look critically and carefully at the underlying structures of Internet culture. Some great people working here are Caitlin Benson-Allott, danah boyd, Wendy Chun, Laine Nooney, Tara McPherson, Siva Vaidhyanathan and Michelle White.

I also think that as a piece of academic writing, Learning from YouTube (which I understand to be a plea for the longform written in tweets, or a plea for the classroom written online) is in conversation with scholarly work that is thinking about the changing nature of academic writing and publishing (and all writing and publishing, really). Here I like the work of Kathleen Fitzpatrick or Elizabeth Losh, as just two examples.

Julia: I would also be interested in what ways of thinking about the web you see this as being compatible or incompatible with other approaches to theorizing the web. How is your approach to studying video production online similar or different from other approaches in new media studies, internet research, anthropology, sociology or the digital humanities?

Alex: “Learning from YouTube” is new media studies, critical Internet studies, and DH, for sure. As you say above, my whole career has looked at video; since video moved online, I did too. I think of myself as an artist and a humanist (and an activist) and do not think of myself as using social science methods, although I do learn a great deal from research done within these disciplines.

After “Learning from YouTube” I have done two further web-based projects: a website that tries to think about and produce alternatives to corporate-made and owned Internet experiences (rather than just critique this situation), www.feministonlinespaces.com; and a collaborative criticism of the MOOC (Massive Open Online Course), what we call a DOCC (Distributed Open Collaborative Course): http://femtechnet.newschool.edu.

In all three cases I think that “theorizing the web” is about making and using the web we want and not the version that corporations have given to us for free. I do this using the structures, histories, theories, norms and practices of feminism, but any ethical system will do!

FOSS4Lib Recent Releases: Library Instruction Recorder - 1.0.0

Fri, 2014-09-05 13:31
Topics: bibliographic instruction, instruction, instruction scheduling, library, library instruction, library instruction recorder, teaching
Package: Library Instruction Recorder
Release Date: Friday, August 29, 2014

Last updated September 5, 2014. Created by Cliff Landis on September 5, 2014.

Initial release of Library Instruction Recorder

FOSS4Lib Updated Packages: Library Instruction Recorder

Fri, 2014-09-05 13:28

Last updated September 5, 2014. Created by Cliff Landis on September 5, 2014.

The Library Instruction Recorder (LIR) is a WordPress plugin designed to record library instruction classes and provide statistical reports. It is simple, easy-to-use, and intuitive.

Features

  • Accessible only from the WordPress Dashboard, allowing it to be used on either internally- or externally-facing WordPress instances.
  • Displays classes by: Upcoming, Incomplete, Previous and My Classes.
  • Customizable fields for Department, Class Location, Class Type and Audience.
  • Customizable flags (e.g., "Do any students have disabilities or special requirements?", "Is this a First Year Experience class?").
  • Ability to duplicate classes for multiple sessions.
  • Statistical reports that can be narrowed by date range or primary librarian and downloaded as .csv files.
  • Email reminder to enter the number of students who attended the class.

Package Links: Releases for Library Instruction Recorder

Technology
License: GPLv3
Development Status: Production/Stable
Operating System: Browser/Cross-Platform
Programming Language: PHP
Database: MySQL

HangingTogether: Linked Data Survey results 5 – Technical details

Fri, 2014-09-05 13:00

OCLC Research conducted an international linked data survey for implementers between 7 July and 15 August 2014. This is the fifth post in the series reporting the results.   

Twenty of the linked data projects that publish linked data are not yet accessible. Of those that are, 25 make their data accessible through Web pages and 24 through a SPARQL endpoint. Most offer multiple methods; when only one method is offered, it's a SPARQL endpoint or file dumps.
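
For readers who haven't consumed linked data this way before, here is a minimal Python sketch of querying a SPARQL endpoint, one of the two most common access methods reported above. The endpoint URL and query are illustrative assumptions, not details from the survey:

    # Minimal sketch of consuming linked data via a SPARQL endpoint.
    # The endpoint URL is a hypothetical placeholder; substitute the
    # endpoint of whichever service you are interested in.
    from SPARQLWrapper import SPARQLWrapper, JSON  # pip install sparqlwrapper

    sparql = SPARQLWrapper("http://example.org/sparql")  # hypothetical endpoint
    sparql.setQuery("""
        PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
        SELECT ?subject ?label
        WHERE { ?subject rdfs:label ?label }
        LIMIT 10
    """)
    sparql.setReturnFormat(JSON)

    results = sparql.query().convert()
    for binding in results["results"]["bindings"]:
        print(binding["subject"]["value"], binding["label"]["value"])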

The alphabetical list below shows the ways survey respondents make their linked data accessible; those that include methods used by Dewey, FAST, ISNI, VIAF, WorldCat.org and WorldCat.org Works are checked.

Of the 59 responses to the question about the serializations of linked data used, the majority use RDF/XML (47 projects/services).  Here’s the alphabetical list of the serializations used; those that include uses by Dewey, FAST, ISNI, VIAF, WorldCat.org and WorldCat.org Works are checked.

TriX was the only other serialization cited. The remaining “other” responses were for projects that hadn’t yet been implemented or where the respondent wasn’t sure.

The technologies used by respondents for consuming linked data overlap with those used for publishing linked data.  Most are mentioned only once or twice so these lists are in alphabetical order.

Technologies mentioned for consuming linked data:

  • Apache Fuseki
  • ARC2 on PHP
  • Bespoke Jena applications (or bespoke local software tools)
  • CURL API
  • eXist database
  • HBase/Hadoop
  • Javascript
  • jQuery
  • Map/Reduce
  • Orbeon Xforms (or just Xforms)
  • RDF Store
  • Reasoning
  • SKOS repository
  • Solr
  • SPARQL
  • Web browsers
  • XML
  • Xquery

Technologies mentioned for publishing linked data:

  • 4store
  • AllegroGraph
  • Apache Digester
  • Apache Fuseki
  • ARC2 on PHP
  • Django
  • Drupal7
  • EADitor (https://github.com/ewg118/eaditor)
  • Fedora Commons
  • Google Refine
  • HBase/Hadoop
  • Humfrey
  • Java
  • JAX-RS
  • Jena applications
  • Lodspeakr
  • Map/Reduce
  • MarkLogic XML Database
  • Orbeon Xforms
  • OWLIM RDF triple store API
  • OWLIM-SE Triple Store by Ontotext Software
  • Perl
  • Pubby
  • Python
  • RDF Store
  • RDFer by British Museum
  • Saxon/XSLT
  • Solr
  • SPARQL
  • Sublima topic tool
  • The European Library Linked Data Platform
  • Tomcat
  • Virtuoso Universal Server (provide SPARQL endpoint)
  • xEAC (https://github.com/ewg118/xEAC)
  • XSLT
  • Zorba

Coming next: Linked Data Survey results-Advice from the implementers (last in the series)

About Karen Smith-Yoshimura

Karen Smith-Yoshimura, program officer, works on topics related to renovating descriptive and organizing practices with a focus on large research libraries and area studies requirements.


Lukas Koster: Looking for data tricks in Libraryland

Fri, 2014-09-05 12:12

IFLA 2014 Annual World Library and Information Congress Lyon – Libraries, Citizens, Societies: Confluence for Knowledge

After attending the IFLA 2014 Library Linked Data Satellite Meeting in Paris I travelled to Lyon for the first three days (August 17-19) of the IFLA 2014 Annual World Library and Information Congress. This year’s theme “Libraries, Citizens, Societies: Confluence for Knowledge” was named after the confluence or convergence of the rivers Rhône and Saône where the city of Lyon was built.

This was the first time I attended an IFLA annual meeting and it was very much unlike all conferences I have ever attended. Most of them are small and focused. The IFLA annual meeting is very big (but not as big as ALA) and covers a lot of domains and interests. The main conference lasts a week, including all kinds of committee meetings, and has more than 4000 participants and a lot of parallel tracks and very specialized Special Interest Group sessions. Separate Satellite Meetings are organized before the actual conference in different locations. This year there were more than 20 of them. These Satellite Meetings actually resemble the smaller and more focused conferences that I am used to.

A conference like this requires a lot of preparation and organization. Many people are involved, but I especially want to mention the hundreds of volunteers who were present not only in the conference centre but also at the airport, the railway stations, on the road to the location of the cultural evening, etc. They were all very friendly and helpful.

Another feature of such a large global conference is that presentations are held in a number of official languages, not only English. A team of translators is available for simultaneous translations. I attended a couple of talks in French, without translation headset, but I managed to understand most of what was presented, mainly because the presenters provided their slides in English.

It is clear that you have to prepare for the IFLA annual meeting and select in advance a number of sessions and tracks that you want to attend. With a large multi-track conference like this it is not always possible to attend all interesting sessions. In the light of a new data infrastructure project I recently started at the Library of the University of Amsterdam, I decided to focus on tracks and sessions related to aspects of data in libraries in the broadest sense: “Cloud services for libraries – safety, security and flexibility” on Sunday afternoon, the all-day track “Universal Bibliographic Control in the Digital Age: Golden Opportunity or Paradise Lost?” on Monday, and “Research in the big data era: legal, social and technical approaches to large text and data sets” on Tuesday morning.

Cloud Services for Libraries

It is clear that the term “cloud” is a very ambiguous term and consequently a rather unclear concept. Which is good, because clouds are elusive objects anyway.

In the Cloud Services for Libraries session there were five talks in total. Kee Siang Lee of the National Library Board of Singapore (NLB) described the cloud-based NLB IT infrastructure, consisting of three parts: a private, a public and a hybrid cloud. The private (restricted-access) cloud is used for virtualization, an extensive service layer for discovery, content, personalization, and “Analytics as a service”, which is used for pushing and recommending related content from different sources and of various formats to end users. This “contextual discovery” is based on text analytics technologies across multiple sources, using a Hadoop cluster on virtual servers. The public cloud is used for the Web Archive Singapore project, which is aimed at archiving a large number of Singapore websites. The hybrid cloud is used for what is called the Enquiry Management System (EMS), where “sensitive data is processed in-house while the non-sensitive data resides in the cloud”. It seems that in Singapore “cloud” is just another word for a group of real or virtual servers.

In the talk given by Beate Rusch of the German Library Network Service Centre for Berlin and Brandenburg KOBV the term “cloud” meant: the shared management of data on servers located somewhere in Germany. KOBV is one of the German regional Library Networks involved in the CIB project targeted at developing a unified national library data infrastructure. This infrastructure may consist of a number of individual clouds. Beate Rusch described three possible outcomes: one cloud serving as a master for the others, a data roundabout linking the other clouds, and a cross cloud dataspace where there is an overlapping shared environment between the individual clouds. An interesting aspect of the CIB project is that cooperation with two large commercial library system vendors, OCLC and Ex Libris, is part of the official agreement. This is of interest for other countries that have vested interests in these two companies, like The Netherlands.

Universal Bibliographic Control in the Digital Age

The Universal Bibliographic Control (UBC) session was an all day track with twelve very diverse presentations. Ted Fons of OCLC gave a good talk explaining the importance of the transition from the description of records to the modeling of entities. My personal impression lately is that OCLC all in all has been doing a good job with linked data PR, explaining the importance and the inevitability of the semantic web for libraries to a librarian audience without using technical jargon like URI, ontology, dereferencing and the like. Richard Wallis of OCLC, who was at the IFLA 2014 Linked Data Satellite Meeting and in Lyon, is spreading the word all over the globe.

Of the rest of the talks the most interesting ones were given in the afternoon. Anila Angjeli of the National Library of France (BnF) and Andrew MacEwan of the British Library explained the importance, similarities and differences of ISNI and VIAF, both authority files with identifiers used for people (both real and virtual). Gildas Illien (also one of the organizers of the Linked Data Satellite Meeting in Paris) and Françoise Bourdon, both BnF, described the future of Universal Bibliographic Control in the web of data, which is a development closely related to the topic of the talks by Ted Fons, Anila Angjeli and Andrew MacEwan.

The ONKI project, presented by the National Library of Finland, is a very good example of how bibliographic control can be moved into the digital age. The project entails the transfer of the general national library thesaurus YSA to the new YSO ontology, from libraries to the whole public sector and from closed to open data. The new ontology is based on concepts (identified by URIs) instead of monolingual text strings, with multilingual labels and machine readable relationships. Moreover the management and development of the ontology is now a distributed process. On top of the ontology the new public online Finto service has been made available.
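
To make that shift concrete, here is a small rdflib sketch (Python) of a concept identified by a URI, carrying multilingual labels and a machine-readable relationship in SKOS; the URIs and labels are invented examples, not actual YSA/YSO data:

    # A URI-identified concept with multilingual labels and a SKOS relation.
    # The URIs and labels below are invented for illustration.
    from rdflib import Graph, Literal, URIRef   # pip install rdflib
    from rdflib.namespace import SKOS

    g = Graph()
    concept = URIRef("http://example.org/onto/p1234")  # hypothetical concept URI
    parent = URIRef("http://example.org/onto/p100")    # hypothetical broader concept

    g.add((concept, SKOS.prefLabel, Literal("libraries", lang="en")))
    g.add((concept, SKOS.prefLabel, Literal("kirjastot", lang="fi")))
    g.add((concept, SKOS.prefLabel, Literal("bibliotek", lang="sv")))
    g.add((concept, SKOS.broader, parent))

    # The same concept can now be displayed in any language it has a label for.
    for label in g.objects(concept, SKOS.prefLabel):
        print(label.language, label)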

The final talk of the day, “The local in the global: universal bibliographic control from the bottom up” by Gordon Dunsire, applied the “Think globally, act locally” aphorism to Universal Bibliographic Control in the semantic web era. Universal top-down control should give way to local bottom-up control. There are so many old and new formats for describing information that we are facing a new biblical confusion of tongues: RDA, FRBR, MARC, BIBO, BIBFRAME, DC, ISBD, etc. What is needed is a set of translators between local and global data structures. On a logical level: Schema Translator, Term Translator, Statement Maker, Statement Breaker, Record Maker, Record Breaker. These black boxes are a challenge to developers. Indeed, mapping and matching data of various types, formats and origins are vital in the new web-of-information age.
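
As a toy illustration of two of those black boxes (my own sketch, not Dunsire's design; the field names and identifier are hypothetical), a Record Breaker decomposes a local record into statements that a Record Maker can reassemble:

    # Record Breaker: one local record -> (subject, predicate, object) statements.
    def break_record(record_id, record):
        return [(record_id, field, value) for field, value in record.items()]

    # Record Maker: statements about a single subject -> a local record again.
    def make_record(statements):
        return {predicate: obj for _subject, predicate, obj in statements}

    record = {"title": "The Man in the Iron Mask", "creator": "Dumas, Alexandre"}
    statements = break_record("rec:42", record)
    print(statements)
    print(make_record(statements) == record)  # True: the round trip is lossless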

Research in the big data era

The Research in the big data era session had five presentations on essentially two different topics: data and text mining (four talks) and research data management (one talk). Peter Leonard of Yale University Library started the day with a very interesting presentation of how advanced text mining techniques can be used for digital humanities research. Using the digitized archive of Vogue magazine, he demonstrated how the long-term analysis of the statistical distribution of related terms, like “pants”, “skirts”, “frocks”, or “women”, “girls”, can help visualise social trends and identify research questions. There are a number of free tools for this, like Google Books N-Gram Search and Bookworm. To make this type of analysis possible, researchers need full access to all data and text. However, rights issues come into play here, as Christoph Bruch of the Helmholtz Association, Germany, explained. What is needed is “intelligent openness” as defined by the Royal Society: data must be accessible, assessable, intelligible and usable. Unfortunately European copyright law stands in the way of the idea of fair use. Many European researchers are forced to perform their data analysis projects outside Europe, in the USA. The plea for openness was also supported by LIBER’s Susan Reilly: data and text mining should be regarded as just another form of reading, one that doesn’t need additional licenses.
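
The core of that kind of analysis is easy to sketch: compute the relative frequency of competing terms per year across a dated corpus. In this minimal Python example the three-"issue" corpus is invented; a real study would read the digitized archive itself:

    # Relative frequency of competing terms per year over a toy dated corpus.
    from collections import Counter

    corpus = {
        1950: "frocks and skirts and more frocks for spring",
        1970: "pants and skirts and pants everywhere",
        1990: "pants pants pants and one lonely frock",
    }

    terms = ("frocks", "pants")
    for year in sorted(corpus):
        counts = Counter(corpus[year].lower().split())
        total = sum(counts[t] for t in terms) or 1  # avoid division by zero
        shares = {t: round(counts[t] / total, 2) for t in terms}
        print(year, shares)  # e.g. 1950 {'frocks': 1.0, 'pants': 0.0}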

IdeasBox

IdeasBox packed

A very impressive and sympathetic library project that deserves everybody’s support was not an official programme item, but a bunch of crates, seats, tables and cushions spread across the central conference venue square. The whole set of furniture and equipment, which comes on two industrial pallets, constitutes a self-supporting mobile library/information centre to be deployed in emergency areas, refugee camps, etc. It is called IdeasBox, provided by Libraries without Borders. It contains mobile internet, servers, power supplies, ereaders, laptops, board games, books, etc., based on the circumstances, culture and needs of the target users and regions. The first IdeasBoxes are now used in Burundi in camps for refugees from Congo. Others will soon go to Lebanon for Syrian refugees. If librarians can make a difference, it’s here. You can support Libraries without Borders and IdeasBox in all kinds of ways: http://www.ideas-box.org/en/support-us.html.

IdeasBox unpacked

Conclusion

The questions about data management in libraries that I brought with me to the conference were only partly addressed, and actual practical answers and solutions were very rare. The management and mapping of heterogeneous and redundant types of data from all types of sources across all domains that libraries cover, in a flexible, efficient and system independent way apparently is not a mainstream topic yet. For things like that you have to attend Satellite Meetings. Legal issues, privacy, copyright, text and data mining, cloud based data sharing and management on the other hand are topics that were discussed. It turns out that attending an IFLA meeting is a good way to find out what is discussed, and more importantly what is NOT discussed, among librarians, library managers and vendors.

The quality and content of the talks vary a lot. As always the value of informal contacts and meetings cannot be overrated. All in all, looking back I can say that my first IFLA has been a positive experience, not in the least because of the positive spirit and enthusiasm of all organizers, volunteers and delegates.

(Special thanks to Beate Rusch for sharing IFLA experiences)

Open Knowledge Foundation: OKFestival 2014: we made it! A write-up & Thank You note

Fri, 2014-09-05 09:12

Open Knowledge Festival 2014! We built it, made it and ran it – it was a blast, thank you!

  • 1056 participants from 60 countries
  • 215 facilitators and moderators
  • 17 Programme Team members
  • 70 volunteers

made it all happen. Who says that numbers are dry? Just by writing them down, our hearts are melting.

Group work! – Pic by Gregor Fischer

Six weeks have passed since the end of OKFestival 2014: many of you participated in our feedback survey, we all caught up on sleep, and we are now hard at work on the public post-event report, which will be shared on the festival website in the next few weeks (keep your eyes peeled!).

At the festival, we tried a lot of experiments, and experimenting is both risky and thrilling – and you were up for the challenge! So we thought it was time to take a moment to have a look at what we built together and celebrate the challenges we bravely took on and the outcomes that came out of them (and, yes, there are also learnings from things which could have gone better – is there any event with bullet-proof WiFi? can a country not known to be tropical and not used to air conditioning experience a heat wave on the 2 days out of 365 when you’ll run an event?)

Rocking selfies! – Pic by Burt Lum

Summing it up:

  • an event for the whole open movement: we were keen to be the convenor of a global gathering, welcoming participants from all around the world and a multitude of folks from open communities, organisations, small and big NGOs, governments, grassroots initiatives, as well as people new to the topic and willing to dive in. We wanted to create an environment connecting diverse audiences, thus enabling diverse groups of thinkers, makers and activists to come together and collaborate to effect change.

Ory Okolloh & Rufus Pollock fireside chat – Pic by Gregor Fischer

  • hands-on and outcome-driven approach: we wanted the event to be an opportunity to get together, make, share and learn with – and from – each other and get ready to make plans for what comes next. We didn’t want the event to be simply wonderful, we also wanted it to be useful – for you, your work and the future of the open movement. We’ve just started sharing a selection of your stories on our blog and more is yet to come this month, with the launch of our public post-OKFestival report, filled with the outcome stories you told us in the weeks after the event: who you met, what you started to plan, and the new projects coming out of the festival that you’re already working on as we speak!

Meeting, talking, connecting! – Pic by Gregor Fischer

  • narrative streams: We made a bold choice – no streams-by-topic, but streams following a narrative. The event was fuelled by the theory that change happens when you bring together knowledge – which informs change – tools – which enable change – and society – which effects change. The Knowledge, Tools and Society streams aimed to explore the work we do and want to develop further beyond the usual silos which streams-by-topic could have created. Open hardware and open science, open government and open sustainability, open culture and open source, arts and privacy and surveillance.

Your vote, your voice! – Pic by Gregor Fischer

  • crowd-sourced programme and participatory formats and tools (and powerpoints discouraged): We encouraged you to leave the comfort zone – no written presentations read in sync with slides, but instead action-packed sessions in which all participants contributed their knowledge to work done together. We shared tips and tricks about the creation and facilitation of such formats and hosted hangouts to help you propose your ideas for our open call – and hundreds of community members sent their proposals! In the most participatory of spirits, OKFestival also had its own unconference, the unFestival, run by the great DATA Uruguay team, who complemented our busy core programme with a great space where anyone could pitch and run her/his own emerging session on the spot, giving room and time to great newborn ideas and plans. And a shout out also goes to a couple of special tools: our etherpads – according to the OKFestival Pad of Pads, 85 pads were co-written and worked in – and our first code of collaboration, which we hope will accompany us in future ventures!

Green volunteering power – always on! – Pic by Gregor Fischer

  • diversity of backgrounds, experiences, cultures, domains: months before we started producing the festival, we began getting in touch with people from all around the world who were running projects we admired, and with whom we had never worked before. This guided us in building a diverse Programme Team first, and in receiving proposals and financial aid applications from many new folks and countries later on. This surely contributed to the most exciting outcome of all – a really international crowd at the event, people from 60 countries speaking dozens of different languages. Different backgrounds enriched everybody’s learning and networking and nurtured new collaborations and relationships.

Wow, that was a journey. And it’s just the beginning! As we said, OKFestival aimed to be the fuel, the kick-off, the inspiration for terrific actions and initiatives to come, and now it’s time to hear some of the most promising stories and projects started there!

You can get a taste by following the ever-growing OKFestival Stories article series on our blog, and be ready for more: in the coming weeks we’ll publish more outcomes, interviews, quotes and reports from you, the protagonists of it all.

Thank you again, and see you very soon!

Your OKFestival Team

Riley Childs: The Universal Library Search and Then Some, Part One

Fri, 2014-09-05 05:35

Let’s talk about universal searches… I recently had the pleasure of partaking in a focus group (actually more of a user group) at the Charlotte-Mecklenburg Library, where we talked about the upcoming plans to upgrade their ILS and what the users (patrons) wanted. One of the things we talked about in particular was the library’s […]

The post The Universal Library Search and Then Some, Part One appeared first on Riley Childs.

Tim Ribaric: Grad School Round Two: This time it's personal (Sabbatical Part 5)

Fri, 2014-09-05 01:32

 

Tomorrow I start grad school again. This time for serious & as an old man.


Karen Coyle: WP:NOTABILITY (and Women)

Fri, 2014-09-05 01:04
I've been spending quite a bit of time lately following the Wikipedia pages of "Articles for Deletion" or WP:AfD in Wikipedia parlance. This is a fascinating way to learn about the Wikipedia world. The articles for deletion fall mostly into a few categories:
  1. Brief mentions of something that someone once thought interesting (a favorite game character, a dearly loved soap opera star, a heartfelt local organization) but that has not been considered important by anyone else. In Wikipedian, it lacks WP:NOTABILITY.
  2. Highly polished P.R. intended to make someone or something look more important than it is, knowing that Wikipedia shows up high on search engine results, and that any site linked to from Wikipedia also gets its ranking boosted.
Some of #2 is actually created by companies that are paid to get their clients into Wikipedia along with promoting them in other places online. Another good example is that of authors of self-published books, some of whom appear to be more skilled in P.R. than they are in the literary arts.

In working through a few of the fifty or more articles proposed for deletion each day, you get to do some interesting sleuthing. You can see who has edited the article, and what else they have edited; any account that has only edited one article could be seen as a suspected bogus account created just for that purpose. Or you could assume that only one person in the English-speaking world has any interest in this topic at all.

Most of the work, though, is in seeing if you can establish notability. Notability is not a precise measure, and there are many pages of policy and discussion on the topic. The short form is that for something or someone to be notable, it has to be written about in respected, neutral, third-party publications. Thus a New York Times book review is good evidence of notability for a book, while a listing in the Amazon book department is not. The grey area is wide, however. Publishers Weekly may or may not indicate notability, since they publish only short paragraphs and cover about 7,000 books a year. That's not very discriminating.

Notability can be tricky. I recently came across an article for deletion pointing to Elsie Finnimore Buckley, a person I had never heard of before. I discovered that her dates were 1882-1959, and she was primarily a translator of works from French into English. She did, though, write what appears to have been a popular book of Greek tales for young people.

As a translator, her works were listed under "E. F. Buckley." I can well imagine that if she had used her full name it would not have been welcome on the title page of the books she translated. Some of the works she translated appear to have a certain stature, such as works by Franz Funck-Brentano. She has an LC name authority file under "Buckley, E. F." although her full name is added in parentheses: "(Elsie Finnimore)".

To understand what it was like for women writers, one can turn to Linda Peterson's book "Becoming a Woman of Letters and the fact of the Victorian market." In that, she quotes a male reviewer of Buckley's Greek tales, which she did publish under her full name. His comments are enough to chill the aspirations of any woman writer. He said that writing on such serious topics is "not women's work" and that "a woman has neither the knowledge nor the literary tact necessary for it." (Peterson, p. 58) Obviously, her work as a translator is proof otherwise, but he probably did not know of that work.

Given this attitude toward women as writers (of anything other than embroidery patterns and luncheon menus), it isn't all that surprising that it's not easy to establish WP:NOTABILITY for women writers of that era. As Dale Spender says in "Mothers of the Novel: 100 good women writers before Jane Austen":

"If the laws of literary criticism were to be made explicit they would require as their first entry that the sex of the author is the single most important factor in any test of greatness and in any preservation for posterity." (p. 137)

That may be a bit harsh, but it illustrates the problem that one faces when trying to rectify the prejudices against women, especially from centuries past, while still wishing to provide valid proof that this woman's accomplishments are worthy of an encyclopedia entry.

We know well that many women writers had to use male names in order to be able to publish at all. Others, like E.F. Buckley, hid behind initials. Had her real identity been revealed to the reading public, she might have lost her work as a translator. Of late, J.K. Rowling has used both techniques, so this is not a problem that we left behind with the Victorian era. As I said in the discussion on Wikipedia:
"It's hard to achieve notability when you have to keep your head down."

Cherry Hill Company: Cherry Hill to present at DrupalCamp LA this weekend

Thu, 2014-09-04 22:03

Cherry Hill is looking forward to DrupalCamp LA this weekend! Come join us for some of our sessions to expand your Drupal knowledge. Whether you are a seasoned Drupal ninja, or a green newbie, LA Drupal community members, including the crew at Cherry Hill, will be on hand to show you some ins and outs of the Drupal world. 

Check out our sessions below:

Saturday: Morning InstallFest: Get PHP & Drupal running in under 15 minutes with Tommy Keswick

8:30am Pacific Ballroom AB 
InstallFest volunteers will help guide and verify the installation of PHP and/or Drupal on your personal laptop.

Drupal Camp Intro for Newbies with John Romine and Ashok Modi

8:40am Pacific Ballroom C
Pre-camp cup of coffee and a quick introduction to how to get...


HangingTogether: 939,594,891 library users worldwide — Prove me wrong!

Thu, 2014-09-04 20:19

That crunching you hear is the sound of the numbers available from OCLC’s Global Library Statistics page.

Over the past several years, the OCLC Library has been compiling data for the total number of libraries, librarians, volumes, expenditures, and users for every country and territory in the world, broken down into the major library types: academic, public, school, special and national.  The goal was to provide statistics on all libraries—not just OCLC libraries—that could be accessed and used by anyone.

A while back Dr. Frank Seeliger, Director of the Library at the Technical University of Applied Sciences in Wildau, Germany, contacted me about the statistics.  He asked if I could send him the actual data behind the site so that he could total up all the libraries, librarians, books, etc.  (At the time the information was only accessible country-by-country.)  I was happy to oblige, and here’s what he came up with.

Global library statistics summary

His request created the impetus for us to make the data available under an Open Data Commons Attribution License. Two spreadsheets provide information for countries and for U.S. states and Canadian provinces.  A third gives information on the over 80 sources that contribute data.

See the data for yourself!
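
Once you have the spreadsheet, totalling the figures as Dr. Seeliger did takes only a few lines. In this hedged Python sketch the filename and column headers are assumptions to check against the actual file:

    # Sum selected columns of the downloaded country spreadsheet (saved as CSV).
    # Filename and column names are assumptions; verify against the real headers.
    import csv

    totals = {"Libraries": 0, "Librarians": 0, "Users": 0}
    with open("global-library-statistics.csv", newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            for column in totals:
                value = (row.get(column) or "").replace(",", "").strip()
                if value.isdigit():  # skips "NA" and blank cells
                    totals[column] += int(value)

    print(totals)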

The staff of the OCLC Library extracted data from respected third-party sources, both electronic and print, that in their judgment are the most current and accurate sources to which they have access. For many countries, data were either unavailable (indicated in the charts as NA) or sporadic. For much of the world, the data were not as current as we would have liked.

We want to make these statistics as accurate as we can. Once you’ve taken a look at the Global Library Statistics, take a look at the Sources and send me your suggestions or leave a comment below. While $51 billion in library expenditures is nothing to sneeze at, it is, as Dr. Seeliger put it, a Hausnummer: a ballpark figure. And it’s not even adjusted for inflation!

Thanks for your help.

About Tam Dalrymple

Tam Dalrymple is Senior Information Specialist (reference librarian) at the OCLC Library in Dublin Ohio. Prior to joining OCLC as a product manager some years back, Tam managed reference services at Ohio State and at the Columbus Metropolitan Library.


Jodi Schneider: Rating the evidence, citation by citation?

Thu, 2014-09-04 17:21

Publishers from HighWire Press are experimenting with a plugin called SocialCite. This is intended to rate the evidence, citation by citation. Like this:

SocialCite at PNAS, HighWire Press from http://www.pnas.org/content/108/14/5488.full#ref-list-1:

So far a few publishers (including PNAS) have implemented it as a pilot. The Journal of Bone and Joint Surgery is apparently leading this effort; I’d be really interested in speaking with them further.

Find out more about SocialCite from their website or the slidedeck from their debut at the HighWire Press meeting.

SocialCite makes its debut at the HighWire Press meeting from Kent Anderson

I’m *very* curious to hear what people think of this — it really surprised me.

LITA: LITA Updates

Thu, 2014-09-04 16:45

This is one of our periodic messages sent to all LITA members. This update includes information about:

  • LITA Forum Opportunities
  • New LITA Guides available
LITA Forum in Albuquerque

Two workshops, three keynotes, 30-plus concurrent sessions, poster sessions and multiple networking opportunities await you.

The two preconference workshops begin on Wednesday, November 5, 1:00-5:00pm and run through Thursday, November 6, 8am to noon.

1) Learn Python by Playing with Library Data with Francis Kayiwa. Learn the basics of how to set up your Python environment, install useful packages, and write programs.

2) Linked Data for Libraries: How libraries can make use of Linked Open Data to share information about library resources and to improve discovery, access, and understanding for library users with Dean Krafft and Jon Corson-Rikert from Cornell University Library.

The three keynote speakers are:

AnnMarie Thomas, Engineering Professor at the University of St. Thomas. AnnMarie is the director of the UST Design Laboratory. Dr. Thomas co-founded and co-directs the University of St. Thomas Center for Pre-Collegiate Engineering Education. She served as the Founding Executive Director of the Maker Education Initiative. AnnMarie has also worked on robotics design, creation, and propulsion.

Lorcan Dempsey, Vice President, OCLC Research and Chief Strategist, oversees the research division and participates in planning at OCLC. Lorcan has policy, research and service development experience, mostly in the area of networked information and digital libraries.

Kortney Ryan Ziegler, Founder of Trans*h4ck, is an award winning artist, writer, and the first person to hold the PhD of African American Studies from Northwestern University. Trans*H4CK is the only tech event of its kind that spotlights trans* created technology, trans* entrepreneurs and trans* led startups.

Networking opportunities

All Forum sessions are in a single hotel, which facilitates networking. Opportunities include a first-night reception, two nights of networking dinners (gather on site, then move off site to various restaurants), all conference meals on site (breakfasts and lunch) and lengthy breaks. Not to mention conversations in the hotel hallways and elevators. The first-night reception launches the Sponsor Showcase, where participants will have ample opportunities to meet with representatives from EBSCO, Springshare and @MIRE both that evening and the next day. Our thanks go to all the Forum sponsors, including Innovative and OCLC. Rachel Vacek, LITA President, and Thomas Dowling, LITA President-elect, plan to lead two networking dinners focused on LITA-specific Kitchen Conversations. LITA and the LITA Forum fully support the Statement of Appropriate Conduct at ALA Conferences.

Hope to see you in Albuquerque!

New LITA Guides

Two LITA Guides were published this summer: The Top Technologies Every Librarian Needs to Know, Kenneth Varnum, editor and contributor, and Using Massive Digital Libraries by Andrew Weiss with Ryan James.

The Top Technologies guide focuses on the impact a technology could have on staff, services, and patrons. For each emerging technology, an expert discusses it within the near-term future of three to five years. In the introduction, Ken Varnum says, “Each chapter includes a thorough description of a particular technology: what it is, where it came from, and why it matters. We will look at early adopters or prototypes for the technology to see how it could be used more broadly. And then, having described a trajectory, we will paint a picture of how the library of the not-so-distant future could be changed by adopting and embracing that particular technology.”

Using Massive Digital Libraries examines “what Ryan James and (Andrew Weiss) in previous studies have together defined as massive digital libraries (MDLs). … A massive digital library is a collection of organized information large enough to rival the size of the world’s largest bricks-and-mortar libraries in terms of book collections. The examples examined in this book range from hundreds of thousands of books to tens of millions. This basic definition … is a starting point for discussion. As the book progresses this definition is refined further to make it more usable and relevant. This book will introduce more characteristics of MDLs and examine how they affect the current traditional library.”

I encourage you to connect with LITA by:

  1. Exploring our web site.
  2. Subscribing to LITA-L email discussion list. E-mail to sympa@ala.org with the subject line “subscribe lita-l”.
  3. Visiting the LITA blog and LITA Division page on ALA Connect.
  4. Connecting with us on Facebook and Twitter.
  5. Reaching out to the LITA leadership at any time.

Please note: the Information Technology and Libraries (ITAL) journal is available to you and to the entire profession. ITAL features high-quality articles that undergo rigorous peer-review as well as case studies, commentary, and information about topics and trends of interest to the LITA community and beyond. Be sure to sign up for notifications when new issues are posted (March, June, September, and December).

If you have any questions or wish to discuss any of these items, please do let me know.

All the best,

Mary

Mary Taylor, Executive Director
Library and Information Technology Association (LITA)
50 E. Huron, Chicago, IL 60611
800-545-2433 x4267
312-280-4267 (direct line)
312-280-3257 (fax)
mtaylor (at) ala.org
www.lita.org

Join us in Albuquerque, November 5-8, 2014 for the LITA Forum. The theme is “Transformation: From Node to Network”

District Dispatch: Free webinar: Understanding Social Security

Thu, 2014-09-04 16:26

Photo by the Knight Foundation

Do you know how to help your patrons locate information on Supplemental Security Income or Social Security? The American Library Association (ALA) is encouraging librarians to participate in “My SSA,” a free webinar that will teach participants how to use My Social Security (MySSA), the online Social Security resource.

Presented by leaders and members of the development team of MySSA, this session will provide attendees with an overview of MySSA. In addition to receiving benefits information in print, the Social Security Administration is encouraging librarians to create an online MySSA account to view and track benefits.

Attendees will learn about viewing earnings records and receiving instant estimates of their future Social Security benefits. Those already receiving benefits can check benefit and payment information and manage their benefits.

Speakers include:

  • Maria Artista-Cuchna, Acting Associate Commissioner, External Affairs
  • Kia Anderson, Supervisory Social Insurance Specialist
  • Arnoldo Moore, Social Insurance Specialist
  • Alfredo Padilia Jr., Social Insurance Specialist
  • Diandra Taylor, Management Analyst

Date: Wednesday, September 17, 2014
Time: 2:00 PM – 3:00 PM EDT
Register for the free event

If you cannot attend this live session, a recorded archive will be available. To view past webinars also hosted collaboratively with iPAC, please visit Lib2Gov.org.

The post Free webinar: Understanding Social Security appeared first on District Dispatch.

Library of Congress: The Signal: DPOE Working Group Moves Forward on Curriculum

Thu, 2014-09-04 13:03

The working group at their recent meeting. Photo by Julio Diaz.

For many organizations that are just starting to tackle digital preservation, it can be a daunting challenge – and particularly difficult to figure out the first steps to take.  Education and training may be the best starting point, creating and expanding the expertise available to handle this kind of challenge.  The Digital Preservation Outreach and Education  program here at the Library aims to do just that, by providing the materials as well as the hands-on instruction to help build the expertise needed for current and future professionals working on digital preservation.

Recently, the Library hosted a meeting of the DPOE Working Group, a core group of experts and educators in the field of digital preservation. The Working Group participants were Robin Dale (Institute of Museum and Library Services), Sam Meister (University of Montana-Missoula), Mary Molinaro (University of Kentucky), and Jacob “Jake” Nadal (Princeton University). The meeting was chaired by George Coulbourne of the Library of Congress, and Library staffers Barrie Howard and Kris Nelson also participated.

The main goal of the meeting was to update the existing DPOE Curriculum, which serves as the basis for the program’s training workshops and is subsequently used by the trainees themselves. A survey is being conducted to gather even more information, which will also help inform the curriculum (see a related blog post). The Working Group reviewed and edited all six substantive modules, which are based on terms from the OAIS Reference Model framework:

  • Identify   (What digital content do you have?)
  • Select   (What portion of your digital content will be preserved?)
  • Store   (What issues are there for long-term storage?)
  • Protect  (What steps are needed to protect your digital content?)
  • Manage   (What provisions are needed for long-term management?)
  • Provide   (What considerations are there for long-term access?)

The group also discussed adding a seventh module on implementation.  Each of these existing modules contains a description, goals, concepts and resources designed to be used by current and/or aspiring digital preservation practitioners.

Mary Molinaro, Director, Research Data Center at the University of Kentucky Libraries, noted that “as we worked through the various modules it became apparent how flexible this curriculum is for a wide range of institutions. It can be adapted for small, one-person cultural heritage institutions and still be relevant for large archives and libraries.”

Mary also spoke to the advantages of having a focused, group effort to work through these changes: “Digital preservation has some core principles, but it’s also a discipline subject to rapid technological change.  Focusing on the curriculum together as an instructor group allowed us to emphasize those things that have not changed while at the same time enhancing the materials to reflect the current technologies and thinking.”

These curriculum modules are currently in the process of further refinement and revision, including an updated list of resources. The updated version of the curriculum will be available later this month. The Working Group also recommended some strategies for extending the curriculum to address executive audiences, and how to manage the process of updating the curriculum going forward.

Peter Murray: Thursday Threads: History of the Future, Kuali change-of-focus, 2018 Mindset List

Thu, 2014-09-04 10:22

This week’s threads are a mixture of the future, the present and the past. Starting things off is A History of the Future in 100 Objects, a revealing look at what technology and society have in store for us. Parts of this resource are freely available on the website, with the rest available as a $5 e-book. Next, in the present, is the decision by the Kuali Foundation to shift to a for-profit model and what it means for open source in the academic domain. And finally, a look at the past with the mindset list for the class of 2018 from Beloit College.

Feel free to send this to others you think might be interested in the topics. If you find these threads interesting and useful, you might want to add the Thursday Threads RSS Feed to your feed reader or subscribe to e-mail delivery using the form to the right. If you would like a more raw and immediate version of these types of stories, watch my Pinboard bookmarks (or subscribe to its feed in your feed reader). Items posted there are also sent out as tweets; you can follow me on Twitter. Comments and tips, as always, are welcome.

A History of the Future in 100 Objects

What are the 100 objects that future historians will pick to define our 21st century? A javelin thrown by an ‘enhanced’ Paralympian, far further than any normal human? Virtual reality interrogation equipment used by police forces? The world’s most expensive glass of water, mined from the moons of Mars? Or desire modification drugs that fuel a brand new religion?
A History of the Future in 100 Objects describes a hundred slices of the future of everything, spanning politics, technology, art, religion, and entertainment. Some of the objects are described by future historians; others through found materials, short stories, or dialogues. All come from a very real future.

- About A History of the Future, by Adrian Hon

I was turned on to this book-slash-website-slash-resource by a tweet from Herbert Van de Sompel:

I'm assuming @apple doesn't believe in the future – "A history of the Future in 100 objects" not in iBooks / @cni_org http://t.co/dK5OI4JuIr

— Herbert (@hvdsomp) August 21, 2014


The name is intriguing, right? I mean, A History of the Future in 100 Objects? What does it mean to have a “History of the Future”?

The answer is an intriguing book that places the reader in the year 2082 looking back at the previous 68 years. (Yes, if you are doing the math, the book starts with objects from 2014.) Whether it is high-tech gizmos or the impact of world events, the author makes a projection of what might happen by telling the brief story of an artifact. For those in the library arena, you will want to read about the reading rooms of 2030, but I really suggest starting at the beginning and working your way through the vignettes from the book that the author has published on the website. There is a link in the header of each page that points to e-book purchasing options.

Kuali Reboots Itself into a Commercial Entity

Despite the positioning that this change is about innovating into the next decade, there is much more to this change than might be apparent on the surface. The creation of a for-profit entity to “lead the development and ongoing support” and to enable “an additional path for investment to accelerate existing and create new Kuali products” fundamentally moves Kuali away from the community source model. Member institutions will no longer have voting rights for Kuali projects but will instead be able to “sit on customer councils and will give feedback about design and priority”. Given such a transformative change to the underlying model, there are some big questions to address.

- Kuali For-Profit: Change is an indicator of bigger issues, by Phil Hill, e-Literate

As Phil noted in yesterday’s post, Kuali is moving to a for-profit model, and it looks like it is motivated more by sustainability pressures than by some grand affirmative vision for the organization. There has been a long-term debate in higher education about the value of “community source,” which is a particular governance and funding model for open source projects. This debate is arguably one of the reasons why Indiana University left the Sakai Foundation (as I will get into later in this post). At the moment, Kuali is easily the most high-profile and well-funded project that still identifies itself as Community Source. The fact that this project, led by the single most vocal proponent for the Community Source model, is moving to a different model strongly suggests that Community Source has failed.
It’s worth taking some time to talk about why it has failed, because the story has implications for a wide range of open-licensed educational projects. For example, it is very relevant to my recent post on business models for Open Educational Resources (OER).

- Community Source Is Dead, by Michael Feldstein, e-Literate blog

I touched on the cosmic shift in the direction of Kuali on DLTJ last week, but these two pieces from Phil Hill and Michael Feldstein on the e-Literate blog dig much deeper into what the change means. I have certainly been a proponent of the open source method of building software and the need for sustainable open source software to develop a community around that software. But I can’t help but think there is more to this story than meets the eye: that senior university administrators may lack faith in having their own staff own the needs and issues of their institutions. Or maybe it has something to do with the high levels of fiscal commitment to elaborate “community source” governance structures. In thinking about what happened with Kuali, I can’t help but compare it to the reality of Project Hydra, where libraries participate with in-kind donations of staff time, travel expenses and good will in a self-governing organization that has only as much structure as it needs.

The 2018 Mindset List

Students heading into their first year of college this year were generally born in 1996.

Among those who have never been alive in their lifetime are Tupac Shakur, JonBenet Ramsey, Carl Sagan, and Tiny Tim.

On Parents’ Weekend, they may want to watch out in case Madonna shows up to see daughter Lourdes Maria Ciccone Leon or Sylvester Stallone comes to see daughter Sophia.

For students entering college this fall in the Class of 2018…

- 2018 List, by Tom McBride and Ron Nief, Beloit College Mindset List

So begins the annual “mindset list” — a tool originally developed to help Beloit College instructors use cultural references that are relevant to the students entering their classrooms. I didn’t see as much buzz about it this year in my social circles, so I wanted to call it out (if for no other reason than to make you feel just a little older…).


Peter Murray: Blocking /xmlrpc.php Scans in the Apache .htaccess File

Thu, 2014-09-04 02:41

Someone out there on the internet is repeatedly hitting this blog’s /xmlrpc.php service, probably looking to enumerate the user accounts on the blog as a precursor to a password scan (as described in Huge increase in WordPress xmlrpc.php POST requests at Sysadmins of the North). My access logs look like this:

176.227.196.86 - - [04/Sep/2014:02:18:19 +0000] "POST /xmlrpc.php HTTP/1.0" 200 291 "-" "Mozilla/4.0 (compatible: MSIE 7.0; Windows NT 6.0)"
195.154.136.19 - - [04/Sep/2014:02:18:19 +0000] "POST /xmlrpc.php HTTP/1.0" 200 291 "-" "Mozilla/4.0 (compatible: MSIE 7.0; Windows NT 6.0)"
176.227.196.86 - - [04/Sep/2014:02:18:19 +0000] "POST /xmlrpc.php HTTP/1.0" 200 291 "-" "Mozilla/4.0 (compatible: MSIE 7.0; Windows NT 6.0)"
176.227.196.86 - - [04/Sep/2014:02:18:21 +0000] "POST /xmlrpc.php HTTP/1.0" 200 291 "-" "Mozilla/4.0 (compatible: MSIE 7.0; Windows NT 6.0)"
176.227.196.86 - - [04/Sep/2014:02:18:22 +0000] "POST /xmlrpc.php HTTP/1.0" 200 291 "-" "Mozilla/4.0 (compatible: MSIE 7.0; Windows NT 6.0)"
176.227.196.86 - - [04/Sep/2014:02:18:24 +0000] "POST /xmlrpc.php HTTP/1.0" 200 291 "-" "Mozilla/4.0 (compatible: MSIE 7.0; Windows NT 6.0)"
195.154.136.19 - - [04/Sep/2014:02:18:24 +0000] "POST /xmlrpc.php HTTP/1.0" 200 291 "-" "Mozilla/4.0 (compatible: MSIE 7.0; Windows NT 6.0)"
176.227.196.86 - - [04/Sep/2014:02:18:26 +0000] "POST /xmlrpc.php HTTP/1.0" 200 291 "-" "Mozilla/4.0 (compatible: MSIE 7.0; Windows NT 6.0)"

By itself, this is just annoying — but the real problem is that the PHP stack is getting invoked each time to deal with the request, and at several requests per second from different hosts this was putting quite a load on the server. I decided to fix the problem with a slight variation from what is suggested in the Sysadmins of the North blog post. This addition to the .htaccess file at the root level of my WordPress instance rejects the connection attempt at the Apache level rather than the PHP level:

RewriteCond %{REQUEST_URI} =/xmlrpc.php [NC]
RewriteCond %{HTTP_USER_AGENT} .*Mozilla\/4.0\ \(compatible:\ MSIE\ 7.0;\ Windows\ NT\ 6.0.*
RewriteRule .* - [F,L]

Which means:

  1. If the requested path is /xmlrpc.php, and
  2. you are sending this particular agent string, then
  3. send back a 403 error message and don’t bother processing any more Apache rewrite rules (a quick way to test this follows below).
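
One quick way to confirm the rule is doing its job (a sketch; example.com stands in for your own domain) is to replay the scanner’s request with curl and check that Apache answers with a 403 before PHP ever gets involved:

# Replay the offending POST with the blocked user agent string;
# expect "HTTP/1.1 403 Forbidden" in the response headers
curl -i -X POST \
  -A "Mozilla/4.0 (compatible: MSIE 7.0; Windows NT 6.0)" \
  http://example.com/xmlrpc.php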

If you need to use this yourself, you might find that the HTTP_USER_AGENT string has changed. You can copy the user agent string from your Apache access logs, but remember to preface each space and each parenthesis with a backslash, as in the sketch below.
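
For instance, if your logs showed the (hypothetical) agent string Mozilla/5.0 (Windows NT 6.1; rv:30.0), the escaped conditions would look like this:

# Hypothetical agent string; spaces and parentheses escaped with backslashes
RewriteCond %{REQUEST_URI} =/xmlrpc.php [NC]
RewriteCond %{HTTP_USER_AGENT} .*Mozilla\/5.0\ \(Windows\ NT\ 6.1;\ rv:30.0\).*
RewriteRule .* - [F,L]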


Peter Murray: 2nd Workshop on Sustainable Software for Science: Practice and Experiences — Accepted Papers and Travel Support

Thu, 2014-09-04 02:08

The conference organizers for WSSSPE2 have posted the list of accepted papers and the application for travel support. I was on the program committee for this year’s conference, and I can point to some papers that I think are particularly useful to libraries and the cultural heritage community in general:

