news aggregator

Bisson, Casey: When not to use esc_js()

planet code4lib - Mon, 2013-12-30 17:46

From the WordPress Codex entry for esc_js:

If you’re not working with inline JS in HTML event handler attributes, a more suitable function to use is json_encode, which is built-in to PHP.
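A minimal sketch of the difference, assuming WordPress is loaded (for esc_js) and using a hypothetical $message value: esc_js escapes text for single-quoted strings inside inline event-handler attributes, while json_encode emits a complete JavaScript literal suitable for a script block.

<?php
// Hypothetical value to hand off to JavaScript.
$message = 'It\'s a "quoted" string';

// esc_js(): for text inside a single-quoted string in an inline
// HTML event handler attribute (requires WordPress to be loaded).
printf(
    '<a href="#" onclick="alert(\'%s\'); return false;">Say hi</a>',
    esc_js( $message )
);

// json_encode(): built into PHP; produces a full JavaScript literal
// (quotes and escaping included) for use in a <script> block.
printf(
    '<script>var message = %s;</script>',
    json_encode( $message )
);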

Farkas, Meredith: Why I Teach Freshman

planet code4lib - Mon, 2013-12-30 13:21

In September, I went to a lecture given by Alison Head of Project Information Literacy fame at a local university. During the lecture, she offered a preview of the research report that would be coming out soon on first-year students. I hadn’t realized until then that none of the other PIL research had examined this population, and I was eager to hear what she had learned. You can see my tweets from the lecture under the hashtag #sherrerlecture. I came out of the lecture feeling validated in the approaches we have taken with library instruction for the Freshman Inquiry (FRINQ) classes at Portland State. We try to offer a warmth session (a brief visit to their classroom focused on library awareness and putting a friendly face on the library) to every FRINQ class in addition to any information literacy instruction we might provide. We also take a train-the-trainer approach so that information literacy instruction doesn’t only happen during our time with the students. We focus a lot of our instruction on what I call pre-search: topic development, keyword brainstorming, determining what evidence you need, where to search, etc. And what I heard from Alison’s summary of the research was that our focus is in exactly the right place: 74% of Freshmen have trouble with brainstorming keywords, making it the most widespread problem found in the study.

Now the report is out and you can see for yourself. The results from this study are particularly compelling, and there have been terrific posts summarizing the results from Karen Schneider (who brought up the elephant in the room, namely the importance of high school information literacy instruction at a time when so many K-12 librarians are losing their jobs) and Barbara Fister (who, like me, was pleasantly surprised to see the percentage of students who found librarians a useful resource).

I can’t remember if I’ve mentioned this before, but Portland State has some unusual student demographics. A full 2/3 of our undergraduates come in as transfers — mostly from Portland-area community colleges, but also from other 4-year schools. So, only around 1/3 of our students ever go through the Freshman Inquiry program, a terrific full-academic-year cohort-based program that includes peer mentor support as well as the same instructor for the whole year. Because of these demographics, we have had many discussions at our library about how much emphasis we should place on teaching in FRINQ. Some have argued that we should focus the majority of our efforts on reaching students in the upper-division courses in their majors. As we see our budget cut and our staff shrink, I fear that Freshman instruction is going to be an area where we will be asked to cut back further. But I think that’s wrong. And I always have a list of arguments in my back pocket ready to advocate for the importance of Freshman instruction; arguments that have just been bolstered by the Project Information Literacy report:

1. They need us – Freshmen come into college having had experience with libraries that on average had 19 times fewer databases and with research that usually required reading stuff and summarizing it. In our Freshman Inquiry classes, the students are confronted with some pretty sophisticated research assignments that, for the most part, they are ill-equipped to handle. We can make all the arguments we want about how faculty shouldn’t be assigning full research papers in the first year, but the reality is that they are and we owe it to these students to support them. How could we wait two years, letting them struggle on their own that whole time?

2. We have a retention problem – Like many large, public, urban universities, Portland State has a big retention problem, especially among first-year students. We have a lot of first-generation students, academically unprepared students, and students under significant financial strain. We also have a big commuter population and not enough of a campus culture or co-curricular activities. One of the big issues students have is that they feel disconnected and lost at such a large school and don’t get the individual attention/support they need. The more we can put a friendly face on the library and encourage them to seek help from us, the more likely they are to feel supported within the institution. I’ve seen research in the past showing that students who connect with supporters on campus (like tutors, advisors, librarians, etc.) are more likely to be retained. If that’s the case, then we should absolutely be focusing on Freshmen as our contribution to the retention effort. And not just by providing a bunch of tutorials. I was happy to see in the PIL report that “Freshmen said they found campus librarians (29%) and their English composition instructors (29%) were the most helpful individuals on campus with guiding them through college-level research.” We do make a difference.

3. Freshmen DO get a lot out of library instruction – An argument I’ve commonly heard against instruction for first-year students is that they don’t realize they need help, so they don’t get anything out of our instruction. Yes, I know that engaging Freshmen is a lot more difficult than engaging upper-division students. I’ve had instruction sessions with Freshmen where I wondered if anyone listened to anything I said. The key with Freshmen is to make the instruction session as active as possible. Do as little lecturing as you can. I do a lot of group activities, pair-and-shares, and jigsaw exercises. Last spring I also tried having students do an online research worksheet (one that included instructional videos and required them to do research on their topic) before they came to class. Because they’d already had the experience of struggling with research on their topics, they were much more engaged during the instruction session than I’ve noticed in the past. A side benefit was that I got to see specifically where they were struggling with their research before the session. It may take more effort, but Freshmen can get a lot out of library instruction.

4. Freshmen do not realize they can (and often should) get help with their research - Students think that they should be self-sufficient and independent when they go to college, and that extends to research. It doesn’t occur to many students that they can get help from a reference librarian. One student said, “I just found out when the librarian visited our class that talking to her in the library was an option — I had no idea. I went to the reference desk and… she gave me different ways of thinking about going and using the databases. … I went and saw a librarian at the end of my research process, but honestly, what would really be good though is go to them in the beginning.” Insights like that are only gained when librarians are present in the classroom. Disciplinary faculty can promote the library, but for those who see the library as a scary place or asking for help as admitting failure, it takes meeting a librarian to change their thinking. There is value to “putting a friendly face on the library,” even if your visit to their class is only meant to achieve that goal.

5. If you want to create library users, get ’em in early – Students are creatures of habit. The PIL report supports the idea that the research tools and strategies students learn about early on are the ones they continue using. If a librarian tells them once to use JSTOR, they will use it for every assignment henceforth, whether it’s the ideal database or not. So if you want to get students to use the library and its databases, you need to sell them on it as soon as they get to your institution. Get them using it for their first research assignment and help them use it successfully, and you’re golden.

I have to say that the last two paragraphs of the Project Information Literacy report made me giddy:

Based on our studies, we believe that the greatest gains may occur by focusing on teaching freshmen. This is a time when students are new to higher learning and most excited about discovering more about topics that interest them. Moreover, there needs to be coordinated efforts between librarians and educators, so that information literacy is taught in a progressive and contextual manner.

If instruction efforts are not stepped up early many freshmen run the very real risk of ‘flatlining.’ By this we mean that the research styles students develop during their ever-important first year could become static as they progress as sophomores, juniors, and seniors. Neglecting this will greatly impede their ability to solve information problems once they graduate, join the workplace, and continue as lifelong learners.

I’m not going to argue that first-year instruction is more important than instruction in the major. I think they’re both important for different reasons. I don’t think it needs to be an either/or; we have to find a healthy balance between the two where no area feels short-changed.

I know most of you probably aren’t in the position of having to advocate for the existence of your library instruction program for first-year students, but what other reasons do you have for thinking first-year instruction is important? What steps have you taken in your teaching to make it more valuable for this group? How might the results from the PIL study change your instructional approach?

 

Image credit: Campus Life – Freshman Orientation by Lafayette, on Flickr.

ALA Equitable Access to Electronic Content: Confused about copyright? Tweet us your questions on Jan. 7th

planet code4lib - Mon, 2013-12-30 13:09

Can you legally photocopy pages from that textbook? Can students legally remix music for school assignments? What does fair use mean, and how can it be applied in the school library or classroom? If you are a school librarian or educator who is confused by copyright law, you’re not alone. School principals, superintendents, educators and librarians have specific questions about copyright law but often find themselves without guidance on the subject.

On January 7, 2014, from 6:00 to 7:00 p.m. EST, school leaders will have the opportunity to have their questions answered during an interactive tweetchat with copyright expert and bestselling author Carrie Russell. Participants can submit questions and take part in the free tweetchat by using the #k12copylaw hashtag.

As part of the tweetchat, Russell will offer clear guidance on the ways that principals, superintendents, teachers and librarians can legally provide materials to students. Additionally, Russell will discuss scenarios often encountered by educators in schools, such as using digital works in the classroom and students’ use of information found on the web. Russell is also the director of the American Library Association’s Program on Public Access to Information.

Tweetchat participants will learn about:

  • Fair use
  • Copyright law in the digital age
  • Copyright exploitation in schools (i.e., incidents when copyright industry groups exploit school staff under the guise of copyright law compliance)

Russell is the author of Complete Copyright for K–12 Librarians and Educators, a book that teaches educators how to fully exercise rights such as fair use while making decisions that are both lawful and best serve the learning community. To receive a 10% discount on Complete Copyright (20% for ALA members), use the coupon code CC2014 before January 15, 2014.

In addition to being the director of the American Library Association’s Program on Public Access to Information, Russell speaks frequently at state, regional, and national library conferences about the intricacies of copyright law.

The interactive social media event will be hosted jointly by AASA: The School Superintendents Association, the American Library Association, the National Association of Elementary School Principals and the National Association of Secondary School Principals.

Participate in the free tweetchat by using #k12copylaw on January 7, 2014, from 6:00 to 7:00 p.m. EST.

The post Confused about copyright? Tweet us your questions on Jan. 7th appeared first on District Dispatch.

Ruest, Nick: Islandora Web ARChive SP updates

planet code4lib - Mon, 2013-12-30 06:21
Community

Some pretty exciting stuff has been happening lately in the Islandora community. Earlier this year, Islandora began the transformation to a federally incorporated, community-driven soliciting non-profit, making it, in my opinion, a much more sustainable project. Thanks to my organization joining on as a member, I've been given the opportunity to take part in the Roadmap Committee. Since I joined, we have been hard at work creating transparent policies and processes for software contributions, licenses, and resources. Big thanks to the Hydra community for providing great examples to work from!

I signed my first contributor license agreement and initiated the process for making the Web ARChive Solution Pack a canonical Islandora project, subject to the same release management and documentation processes as other Islandora modules. After working through the process, I'm happy to see that the Web ARChive Solution Pack is now a canonical Islandora project.

Project updates

I've been slowly picking off items from my initial todo list for the project, and have solved two big issues: indexing the warcs in Solr for full-text/keyword searching, and creating an index of each warc.

Solr indexing was very problematic at first. I ended up having a lot of trouble getting an XSLT to take the warc datastream and hand it to FedoraGSearch, and in turn to Solr. Frustrated, I began experimenting with newer versions of Solr, which thankfully come with Apache Tika bundled, allowing Solr to index basically whatever you throw at it.

I didn't think our users wanted to be searching the full markup of a warc file, just the actual text. So, using the Internet Archive's Warctools and @tef's wonderful assistance, I was able to incorporate warcfilter into the derivative creation.

$ warcfilter -H text warc_file > filtered_file

You can view an example of the full-text searching of warcs in action here.

In addition to the full-text searching, I wanted to provide users with a quick overview of what is in a given capture, and was able to do so by also incorporating warcindex into the derivative creation.

$ warcindex warc_file > csv_file

#WARC filename offset warc-type warc-subject-uri warc-record-id content-type content-length
/extra/tmp/yul-113521_OBJ.warc 0 warcinfo None <urn:uuid:588604aa-4ade-4e94-b19a-291c6afa905e> application/warc-fields 514
/extra/tmp/yul-113521_OBJ.warc 797 response dns:yfile.news.yorku.ca <urn:uuid:cbeefcb0-dcd1-466e-9c07-5cd45eb84abb> text/dns 61
/extra/tmp/yul-113521_OBJ.warc 1110 response http://yfile.news.yorku.ca/robots.txt <urn:uuid:6a5d84d1-b548-41e4-a504-c9cf9acfcde7> application/http; msgtype=response 902
/extra/tmp/yul-113521_OBJ.warc 2366 request http://yfile.news.yorku.ca/robots.txt <urn:uuid:363da425-594e-4365-94fc-64c4bb24c897> application/http; msgtype=request 257
/extra/tmp/yul-113521_OBJ.warc 2952 metadata http://yfile.news.yorku.ca/robots.txt <urn:uuid:62ed261e-549d-45e8-9868-0da50c1e92c4> application/warc-fields 149
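Since Islandora solution packs are Drupal (PHP) modules, the derivative step presumably just shells out to these two command-line tools. Here is a rough, hypothetical sketch of that wiring; the function name, paths, and lack of error handling are illustrative only, not the actual module code.

<?php
/**
 * Hypothetical sketch: build full-text and index derivatives for a warc.
 * Assumes warcfilter and warcindex (from warctools) are on the PATH and
 * that $warc_path points at the OBJ datastream saved to disk.
 */
function example_warc_derivatives($warc_path) {
  $text_path = $warc_path . '.txt';
  $csv_path  = $warc_path . '.csv';

  // Keep only the text/* records for the full-text (Solr/Tika) derivative.
  exec('warcfilter -H text ' . escapeshellarg($warc_path) . ' > ' . escapeshellarg($text_path));

  // Produce the per-record index (offset, record type, URI, content type, length).
  exec('warcindex ' . escapeshellarg($warc_path) . ' > ' . escapeshellarg($csv_path));

  return array('fulltext' => $text_path, 'index' => $csv_path);
}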

The updated Web ARChive SP datastreams now look like so:

One of my major goals with this project has been integration with a local running instance of Wayback, and it looks like we are pretty close. This solution might not be the cleanest, but at least it is a start, and hopefully it will get better over time. I've updated the default MODS form for the module so that it better reflects this Library of Congress example. The key item here is the 'url' element with the 'Archived site' attribute.

<location>
  <url displayLabel="Active site">http://yfile.news.yorku.ca/</url>
  <url displayLabel="Archived site">http://digital.library.yorku.ca/wayback/20131226/http://yfile.news.yorku.ca/</url>
</location>

Wayback accounts for a date in its URL structure 'http://digital.library.yorku.ca/wayback/20131226/http://yfile.news.yorku.ca/', and we can use that to link a given capture to its dissemination point in Wayback. Using some Islandora Solr magic, I should be able to give that link to a user on a given capture page.
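The link construction itself is trivial once the capture date and the original URL are in hand (from the MODS record or a Solr query). A minimal sketch, with the helper name being purely illustrative:

<?php
// Hypothetical helper: build the Wayback dissemination URL for a capture.
// $capture_date is assumed to be in YYYYMMDD form; $original_url is the
// "Active site" URL from the MODS record.
function example_wayback_url($capture_date, $original_url) {
  $wayback_base = 'http://digital.library.yorku.ca/wayback/';
  return $wayback_base . $capture_date . '/' . $original_url;
}

// Prints http://digital.library.yorku.ca/wayback/20131226/http://yfile.news.yorku.ca/
echo example_wayback_url('20131226', 'http://yfile.news.yorku.ca/');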

We have automated this in our capture-and-preserve process: capturing warcs with Heritrix, creating MODS datastreams, and generating screenshots. This allows us to batch import our crawls quickly and efficiently.

Hopefully in the new year we'll have a much more elegant solution!

tags: islandora, community, warc, web archives

Morgan, Eric Lease: Semantic Web in Libraries 2013

planet code4lib - Mon, 2013-12-30 04:47

I attended the Semantic Web in Libraries 2013 conference in Hamburg (November 25-27), and this posting documents some of my experiences. In short, I definitely believe the linked data community in libraries is maturing, but I still wonder whether the barrier to participation is really low enough for the vision of the Semantic Web to become a reality.

Preconference on provenance

On the first day I attended a preconference about linked data and provenance led by Kai Eckert (University of Mannheim) and Magnus Pfeffer (Stuttgart Media University). One of the fundamental ideas behind the Semantic Web and linked data is the collecting of triples denoting facts. These triples are expected to be amassed and then inferenced across in order to bring new knowledge to light. But in the scholarly world it is important to cite and attribute scholarly output. Triples are atomistic pieces of information: subjects, predicates, objects. But there is no room in these simple assertions to denote where the information originated. This issue was the topic of the preconference discussion. Various options were outlined, but none of them seemed optimal. I’m not sure of the conclusion, but one “solution” may be the use of PROV, “a model, corresponding serializations and other supporting definitions to enable the inter-operable interchange of provenance information in heterogeneous environments such as the Web”.

Day #1

Both Day #1 and Day #2 were peppered with applications that harvested linked data (and other content) to create new and different views of information. AgriVIVO, presented by John Fereira (Cornell University), was a good example:

AgriVIVO is a search portal built to facilitate connections between all actors in the agricultural field, bridging across separately hosted directories and online communities… AgriVIVO is based on the VIVO open source semantic web application initially developed at Cornell University and now adopted by several cross-institutional research discovery projects.

Richard Wallis (OCLC) advocated the creation of library knowledge maps similar to the increasingly visible “knowledge graphs” created by Google and displayed at the top of search results. These “graphs” are aggregations of images, summaries, maps, and other bits of information providing the reader with answers / summaries describing what may be the topic of the search. They are the same sort of thing one sees when searches are done in Facebook as well. And in the true spirit of linked data principles, Wallis advocated the use of other people’s Semantic Web ontologies, such as the one used by Schema.org. If you want to participate and help extend the bibliographic entities of Schema.org, then consider participating in a W3C Community called the Schema Bib Extend Community Group.

BIBFRAME was described by Julia Hauser and Reinhold Heuvelmann (German National Library). Touted as a linked data replacement for MARC, its data model consists of works, instances, authorities, and annotations (everything else). According to Hauser, “The big unknown is how can RDA or FRBR be expressed using BIBFRAME.” Personally, I noticed how BIBFRAME contains no holdings information, but such an issue may be resolvable through the use of schema.org.

“Language affects hierarchies and culture comes before language” were the concluding remarks in a presentation by the National Library of Finland. Leaders in the linked data world, the presenters described how they were trying to create a Finnish ontology, and they demonstrated how language does not fit into neat and orderly hierarchies and relationships. Things always get lost in translation. For example, one culture may have a single word for a particular concept, but another culture may have multiple words because the concept has more nuances in its experience. Somewhere along the line the presenters alluded to ONKI Light, “a REST-style API for machine and Linked Data access to the underlying vocabulary data.” I believe the presenters were using this tool to support access to their newly formed ontology.

Yet another ontology was described by Carsten Klee (Berlin State Library) and Jakob Voß (GBV Common Library Network). This was a holdings ontology which seemed unnecessarily complex to me, but then I’m no real expert. See the holding-ontology repository on Github.

Day #2

I found the presentation — “Decentralization, distribution, disintegration: Towards linked data as a first class citizen in Library Land” — by Martin Malmsten (National Library of Sweden) to be the most inspiring. In the presentation he described why he thinks linked data is the way to describe the content of library catalogs. He also made insightful distinctions between file formats and the essential characteristics of data, information, knowledge (and maybe wisdom). Like many at the conference, he advocated interfaces to linked data, not MARC:

Working with RDF has enabled me to see beyond simple formats and observe the bigger picture — “Linked data or die”. Linked data is the way to do it now. I advocate the abstraction of MARC to RDF because RDF is more essential and fundamental… Mixing data is a new problem with the advent of linked data. This represents a huge shift in our thinking of Library Land. It is transformative… Keep the formats (monsters and zombies) outside your house. Formats are for exchange. True and real RDF is not a format.

Some of the work demonstrating the expressed ideas of the presentation is available on Github in a package called librisxl.

Another common theme / application demonstrated at the conference was variations of the venerable library catalog. OpenCat, presented by Agnes Simon (Bibliothèque Nationale de France), was an additional example of this trend. Combining authority data (available as RDF) provided by the National Library of France with the works of a second library (Fresnes Public Library), the OpenCat prototype provides quite an interesting interface to library holdings.

Peter Király (Europeana Foundation) described how he is collecting content over many protocols and amalgamating it into the data store of Europeana. I appreciated the efforts he has made to normalize and enrich the data — not an easy task. The presentation also made me think about provenance. While provenance is important, maybe trust of provenance can come from the aggregator. I thought, “If these aggregators believe — trust — the remote sources, then maybe I can too.” Finally, the presentation got me imagining how one URI can lead to others, and my goal would be to distill all of the interesting information I found along the way back down into a single URI, as in the following image I doodled during the presentation.

Enhancing the access and functionality of manuscripts was the topic of the presentation by Kai Eckert (Universität Mannheim). Specifically, manuscripts are digitized and an interface is placed on top allowing scholars to annotate the content beneath. I think the application supporting this functionality is called Pundit. Along the way he takes heterogeneous (linked) data and homogenizes it with a tool called DM2E.

OAI-PMH was frequently alluded to during the conference, and I have some ideas about that. In “Application of LOD to enrich the collection of digitized medieval manuscripts at the University of Valencia” Jose Manuel Barrueco Cruz (University of Valencia) described how the age of his content inhibited his use of the currently available linked data. I got the feeling there was little linked data closely associated with the subject matter of his manuscripts. Still, an important thing to note is how he started his investigations with the use of Datahub:

a data management platform from the Open Knowledge Foundation, based on the CKAN data management system… [providing] free access to many of CKAN’s core features, letting you search for data, register published datasets, create and manage groups of datasets, and get updates from datasets and groups you’re interested in. You can use the web interface or, if you are a programmer needing to connect the Datahub with another app, the CKAN API.

Simeon Warner (Cornell University) described how archives or dumps of RDF triple stores are synchronized across the Internet via HTTP GET, gzip, and a REST-ful interface on top of Google sitemaps. I was impressed because the end result did not necessarily invent something new but rather implemented an elegant solution to a real-world problem using existing technology. See the resync repository on Github.
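As a very rough illustration of the idea (this is not the actual ResourceSync client; the sitemap URL and file layout below are hypothetical), a consumer of such dumps could do something like the following:

<?php
// Toy sketch: fetch a sitemap listing gzipped RDF dumps and download them.
// URLs and layout are hypothetical; a real ResourceSync-style client also
// handles change lists, checksums, and incremental updates.
$sitemap = simplexml_load_file('http://example.org/rdf/sitemap.xml');

// Grab every <loc> element regardless of namespace.
foreach ($sitemap->xpath('//*[local-name()="loc"]') as $loc) {
  $url = (string) $loc;                 // e.g. http://example.org/rdf/dump-001.nt.gz
  $gz  = file_get_contents($url);       // plain HTTP GET
  $rdf = gzdecode($gz);                 // dumps are assumed to be gzip-compressed
  file_put_contents(basename($url, '.gz'), $rdf);
}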

In “From strings to things: A linked data API for library hackers and Web developers” Fabian Steeg and Pascal Christoph (HBZ) described an interface allowing librarians to determine the URIs of people, places, and things for library catalog records. “How can we benefit from linked data without being linked data experts? We want to put Web developers into focus using JSON for HTTP.” There are a few hacks illustrating some of their work on Github in the lobid repository.

Finally, I hung around for a single lightning talk — Carsten Klee’s (Berlin State Library) presentation of easyM2R, a PHP script converting MARC to any number of RDF serializations.

Observations, summary, and conclusions

I am currently in the process of writing a short book on the topic of linked data and archives for an organization called LiAM — “a planning grant project whose deliverables will facilitate the application of linked data approaches to archival description.” One of my goals for attending this conference was to determine my level of understanding when it comes to linked data. At the risk of sounding arrogant, I think I’m on target, but at the same time, I learned a lot at this conference.

For example, I learned that the process of publishing linked data is not “rocket surgery” and what I have done to date is more than functional, but I also learned that creating serialized RDF from MARC or EAD is probably not the best way to create RDF. I learned that publishing linked data is only one half of the problem to be solved. The other half is figuring out ways to collect, organize, and make useful the published content. Fortunately this second half of the problem was much of what the conference was about. Many people are using linked data to either create or enhance “next-generation library catalogs”. In this vein they are not really doing anything new and different; they are being evolutionary. Moreover, many of the developers are aggregating content using quite a variety of techniques, OAI-PMH being one of the more frequent.

When it comes to OAI-PMH and linked data, I see very much the same vision. Expose metadata in an agreed upon format and in an agreed upon method. Allow others to systematically harvest the metadata. Provide information services against the result. OAI-PMH was described as a protocol with a low barrier to entry. The publishing of linked data is also seen as a low-barrier technology. The challenges of both lie first in the vocabularies used to describe the things of the metadata. OAI-PMH required Dublin Core but advocated additional “ontologies”. Few people implemented them. Linked data is not much different. The problem with the language of the things is just as prevalent, if not more so. Linked data is not just the purview of Library Land and a few computer scientists. Linked data has caught the attention of a much wider group of people, albeit the subject is still a bit esoteric. I know the technology supporting linked data functions. After all, it is the technology of the Web. I just wonder whether: 1) there will ever be a critical mass of linked data available in order to fulfill its promise, and 2) we — the information community — will be able to overcome the “Tower of Babel” we are creating with all the various ontologies we are sporting. A single ontology won’t work. Just look at Dublin Core. Many ontologies won’t work either. There is too much variation and too many idiosyncrasies in real-world human language. I don’t know what the answer is. I just don’t.

Despite some of my misgivings, I think the following quote by Martin Malmsten pretty much sums up much of the conference — Linked data or die!

Bisson, Casey: Dynamic range vs. price and brand

planet code4lib - Sat, 2013-12-28 18:22

Dynamic range is what keeps skies blue while also capturing detail in the foreground. Without enough dynamic range, we’re forced to choose between a blue sky and dark foreground, or properly exposed foreground and white sky.

I’ve been using multiple exposure HDR techniques to increase the dynamic range I can capture, but multiple exposures don’t work well with moving subjects. A camera that can capture good dynamic range in one shot would be better than one that requires multiple shots to do the same.

I’ve been looking at camera options for a while now; this is just the latest angle for me. Still, compare the Canon EOS 700D/Rebel T5i’s dynamic range against other cameras you might consider (yes, this has as much to do with the JPEG processing engine as the sensor, but it’s the best indicator I can find of the sensor’s maximum dynamic range). Here are some comparators I might consider, ordered by ascending price (body-only):

$450 Sony NEX 5T

$600 Sony NEX 6

$600 Canon EOS 700D/Rebel T5i

$750 Panasonic Lumix GX7

$900 Olympus OM-D E-M5

$950 Canon EOS 70D

The charts come from the DPReview link above. DPReview doesn’t have an exact match for the NEX 5T, so I used their chart for a previous model, the 5N.

In their review of the Lumix GX7 they make it easy to compare the cameras with different dynamic range optimizations as well. Honestly, I wish I’d discovered that before doing the above, since I think these charts likely better represent the sensor’s dynamic range than the charts above. Still, here’s that set of charts in the same order:

The Olympus’ dynamic range is simply astounding in this mode, though I’m not sure it would work well for timelapse photography, as the automatic gradation might shift between shots and cause flicker in the assembled video.

Two conclusions here, based on both the optimized and “normal” charts:

  1. Generally, dynamic range increases as price increases.
  2. Canon cameras seem to have the lowest dynamic range at a given price point compared to other brands.

Grimmelmann, James: MOOC Op-Ed in the Baltimore Sun

planet code4lib - Fri, 2013-12-27 17:02

I have an op-ed in the Baltimore Sun on MOOCs. I’m bullish on opening up education to everyone, but bearish on the MOOC business model. The basic argument should be familiar from The Merchants of MOOCs, but I have refined it into pithy op-ed format. Here’s an excerpt:

The gold rush surrounding MOOCs has a dark side. Opening up courses so that money and geography are no obstacle to learning is the kind of outreach that universities should be excited about. But the venture capitalists who have flocked to MOOC companies have been more concerned with “disrupting” American higher education, drawing students away from non-profit, public-serving universities to for-profit MOOCs. Disruption for disruption’s sake is hardly a good thing. The Syrian civil war is disruptive, too.

The Stanford AI course showed how much hunger there is around the world for knowledge and how easy it can be to satisfy that hunger when we step back from tuition and business models and use the Internet to share knowledge freely. If they can focus more on students and less on getting rich quick, MOOCs still have a lot to offer. The goal of knowledge for all is well worth pursuing.

Open Knowledge Foundation: “Share, improve and reuse public sector data” – French Government unveils new CKAN-based data.gouv.fr

planet code4lib - Thu, 2013-12-26 07:38

This is a guest post from Rayna Stamboliyska and Pierre Chrzanowski of the Open Knowledge Foundation France

Etalab, the Prime Minister’s task force for Open Government Data, unveiled on December 18 the new version of the data.gouv.fr platform (1). OKF France salutes the work the Etalab team has accomplished, and welcomes the new features and the spirit of the new portal, rightly summed up in the website’s tagline, “share, improve and reuse public sector data”.

OKF France was represented at the data.gouv.fr launch event by Samuel Goëta in the presence of Jean-Marc Ayrault, Prime Minister of France, Fleur Pellerin, Minister Delegate for Small and Medium Enterprises, Innovation, and the Digital Economy and Marylise Lebranchu, Minister of the Reform of the State. Photo credit: Yves Malenfer/Matignon

Etalab has indeed chosen to offer a platform resolutely turned towards collaboration between data producers and re-users. The website now enables everyone not only to improve and enhance the data published by the government, but also to share their own data; to our knowledge, a world first for a governmental open data portal. In addition to “certified” data (i.e., released by departments and public authorities), data.gouv.fr also hosts data published by local authorities, delegated public services and NGOs. Last but not least, the platform also identifies and highlights other, pre-existing, Open Data portals such as nosdonnees.fr (2). A range of content publishing features, a wiki and the possibility of associating reuses such as visualizations should also allow for a better understanding of the available data and facilitate outreach efforts to the general public.

We at OKF France also welcome the technological choices Etalab made. The new data.gouv.fr is built around CKAN, the open source software whose development is coordinated by the Open Knowledge Foundation. All features developed by the Etalab team will be available for other CKAN-based portals (e.g., data.gov or data.gov.uk). In turn, Etalab may more easily master innovations implemented by others.

The new version of the platform clearly highlights the quality rather than quantity of datasets. This paradigm shift was expected by re-users. On one hand, datasets with local coverage have been pooled thus providing nation-wide coverage. On the other hand, the rating system values datasets with the widest geographical and temporal coverage as well as the highest granularity.

The platform will continue to evolve and we hope that other features will soon complete this new version, for example:

  • the ability to browse data by facets (data producers, geographical coverage or license, etc.);
  • a management system for “certified” (clearly labelled institutional producer) and “non-certified” (data modified, produced, added by citizens) versions of a dataset;
  • a tool for previewing data, as natively proposed by CKAN;
  • the ability to comment on the datasets;
  • a tool that would allow users to enquire about a dataset directly with the respective public administration.

Given this new version of data.gouv.fr, it is now up to the producers and re-users of public sector data to demonstrate the potential of Open Data. This potential can only be fully met with the release of fundamental public sector data as a founding principle for our society. Thus, we are still waiting for the opening of business registers, detailed expenditures, as well as non-personal data on prescriptions issued by healthcare providers.

Lastly, through the new data.gouv.fr, administrations are no longer solely responsible for the common good that is public sector data. Now this responsibility is shared with all stakeholders. It is thus up to all of us to demonstrate that this is the right choice.

(1) This new version of data.gouv.fr is the result of codesign efforts that the Open Knowledge Foundation France participated in.

(2) Nosdonnees.fr is co-managed by Regards Citoyens and OKF France.

Read Etalab’s press release online here

Rochkind, Jonathan: ebooks and privacy, the netflixes of ebooks?

planet code4lib - Wed, 2013-12-25 15:09

Libraries have long considered reading habits, as revealed by circulation or usage records, to be private and confidential information. We have believed that freedom of inquiry requires confidentiality and privacy.

In the digital age however, most people don’t actually seem too concerned with their privacy, and it’s commonplace for our actions to be tracked by the software we use and the companies behind it. This extends to digital reading habits too.

What role should libraries have in educating users, or in protecting their privacy when using library-subscribed ebook services? Regardless of libraries’ role — and some of these services clearly have the potential to eclipse libraries entirely, replaced with commercial flat-fee digital ‘lending services’ — how concerned should we be about the social effects of pervasive tracking of online reading habits?

From the New York Times, an article that takes it as a given that what titles you read is tracked, but talks about new technology to track exactly what passages you read in what order at what times, too:

As New Services Track Habits, the E-Books Are Reading You

SAN FRANCISCO — Before the Internet, books were written — and published — blindly, hopefully. Sometimes they sold, usually they did not, but no one had a clue what readers did when they opened them up. Did they skip or skim? Slow down or speed up when the end was in sight? Linger over the sex scenes?

A wave of start-ups is using technology to answer these questions — and help writers give readers more of what they want. The companies get reading data from subscribers who, for a flat monthly fee, buy access to an array of titles, which they can read on a variety of devices. The idea is to do for books what Netflix did for movies and Spotify for music.

[...]

Last week, Smashwords made a deal to put 225,000 books on Scribd, a digital library here that unveiled a reading subscription service in October. Many of Smashwords’ books are already on Oyster, a New York-based subscription start-up that also began in the fall.

The move to exploit reading data is one aspect of how consumer analytics is making its way into every corner of the culture. Amazon and Barnes & Noble already collect vast amounts of information from their e-readers but keep it proprietary. Now the start-ups — which also include Entitle, a North Carolina-based company — are hoping to profit by telling all.

“We’re going to be pretty open about sharing this data so people can use it to publish better books,” said Trip Adler, Scribd’s chief executive.

[...]

“Would we provide this data to an author? Absolutely,” said Chantal Restivo-Alessi, chief digital officer for HarperCollins Publishers. “But it is up to him how to write the book. The creative process is a mysterious process.”

The services say they will make the data anonymous so readers will not be identified. The privacy policies however are broad. “You are consenting to the collection, transfer, manipulation, storage, disclosure and other uses of your information,” Oyster tells new customers.

Before writers will broadly be able to use any data, the services must become viable by making deals with publishers to supply the books. Publishers, however, are suspicious of yet another disruption to their business. HarperCollins has signed up with Oyster and Scribd, but Penguin Random House and Simon & Schuster have thus far stayed away.

While the headline of the article is about new methods of tracking, it’s actually about several new businesses aiming to be “netflix for books” — flat-rate services that let you read all ebooks in their collection. You know, like a library, but not free.   These companies are having similar difficulties to libraries in working out deals with publishers, but if they succeed, what will it mean for libraries?

Before writers will broadly be able to use any data, the services must become viable by making deals with publishers to supply the books. Publishers, however, are suspicious of yet another disruption to their business. HarperCollins has signed up with Oyster and Scribd, but Penguin Random House and Simon & Schuster have thus far stayed away


Filed under: General

Reese, Terry: Merry Christmas! MarcEdit 5.9 Update

planet code4lib - Wed, 2013-12-25 09:09

Merry Christmas!  I hope that everyone has a wonderful holiday, full of happiness, family, and rest.  I’ve been spending the past couple of days wrapping up my last Christmas present – this one to the MarcEdit community…a shiny new update.  In what has become a bit of a holiday tradition, I try and use the Christmas update to handle a few nagging issues as well as introduce something unexpected and new.  Some years that’s harder than others.

This year, I decided to tackle an issue that is becoming more problematic with each new computer refresh cycle – that being challenges arising from the proliferation of 64-bit Windows systems.  The challenge isn’t with MarcEdit itself, the challenge is when MarcEdit works with 3rd party programs or components that I don’t control.

When I designed MarcEdit, I designed it to be processor independent.  If a user is on a 32-bit system, MarcEdit would run in 32-bit mode; if a user was on a 64-bit system, the program would run in 64-bit mode.  For the application, this works great.  The problem arises when MarcEdit has to utilize 3rd party tools and services.  This directly impacts one popular MarcEdit plugin – the OCLC Connexion plugin.  This plugin allows users to import data from their Connexion local save file, process the data in MarcEdit, and then save the data back to the Connexion save file.  And for users of 64-bit systems, this plugin has been unavailable for use.  The reason is that OCLC utilizes Microsoft Access’s database format to store the local Connexion data.  The component that reads the MS Access file format, an operating system component distributed with Windows (or installed when a user installs Office), is a 32-bit component.  Microsoft has gone on record saying that it won’t be updating this component, and has advised developers to just build programs targeting 32-bit systems if they need to access database components.  This is what Connexion does, but in MarcEdit, there are good reasons, from a speed and optimization perspective, to utilize the native data types on 32- and 64-bit systems.  So, for a long time, this restriction essentially prevented users on 64-bit systems from using this plugin.

Well, I’ve been researching a potential solution for some time, and I’ve come up with one.  As of today, I’ve introduced a 32-bit mode in MarcEdit.  Users have the option to select this mode, and MarcEdit will create a pocket, virtualized environment allowing the application to run as a 32-bit application and thus utilize the plugin.  At this point, I’m not giving users the option to always run MarcEdit in 32-bit mode…I’m not sure I ever will.  But this option will give users the ability to work with the Connexion plugin, regardless of the system type.  I’ve recorded a YouTube video demonstrating how this new function works.

 

I’m hoping that this enhancement will also lead to the ability to utilize this approach to provide improved debugging, but I will have to continue to do a little more research. 

As you can imagine, this update also includes updates to the OCLC Connexion Plugin.  If you want to use it, you will need to download it from the plugin manager or, if you already have it, delete the existing plugin and download the new one using the plugin manager.

MarcEdit update notes:

  • Bug Fix: Copy Field: When data is present in the Find Text box to define conditional Copy Field operations, the Find data is ignored. This has been corrected.
  • Bug Fix: Extract/Delete Records: When calculating the display field, MarcEdit utilizes the non-filing character indicators when extracting title data for display. However, if the non-filing data is incorrect, then an error can occur. This has been corrected.
  • Enhancement: Validate URLs: Users now have the ability to select a non-variable field to be the display field in the status reports.
  • Enhancement: OCLC Connexion Plugin: Corrected the 008 data processing so that the data is valid when moving between the Connexion data file and MarcEdit. Previously, MarcEdit didn’t process this data which caused the data to be invalid when imported into MarcEdit since OCLC stores two extra bytes in the field. The data is now cleaned and fixed when moving data between the Connexion data file and MarcEdit.
  • Enhancement: OCLC Connexion Plugin: Updated the user interface.
  • Enhancement: OCLC Connexion Plugin: Improved the Error Handling and the messages that users receive on error.
  • Enhancement: OCLC Connexion Plugin: Improved the status thread handling so error and status messages are not covered up.
  • New Feature: MarcEdit 32-bit Mode: When users migrated to a 64-bit environment, the Connexion Plugin ceased to function because it relied on a 32-bit operating system component, a component Microsoft has said will not be supported in 64-bit mode. To allow users to continue to use tools like the OCLC Connexion plugin, MarcEdit now has a 32-bit mode, a service that essentially creates a virtual 32-bit environment around the MarcEdit application, allowing the program the ability to utilize 32-bit components while running on a 64-bit system.
  • Merry Christmas!!!

If you have automated updates enabled, the program will prompt you to update the next time you use the program.  Otherwise, you can download the updates from:

Merry Christmas,

–tr

Reese, Terry: Merry Christmas! MarcEdit 5.9 Update

planet code4lib - Wed, 2013-12-25 09:09

Merry Christmas!  I hope that everyone has a wonderful holiday, full of happiness, family, and rest.  I’ve been spending the past couple of days wrapping up my last Christmas present – this one to the MarcEdit community…a shiny new update.  In what has become a bit of a holiday tradition, I try and use the Christmas update to handle a few nagging issues as well as introduce something unexpected and new.  Some years that’s harder than others.

This year, I decided to tackle an issue that becomes more problematic with each new computer refresh cycle: the proliferation of 64-bit Windows systems.  The challenge isn’t with MarcEdit itself; it arises when MarcEdit works with third-party programs or components that I don’t control.

When I designed MarcEdit, I designed it to be processor independent: on a 32-bit system it runs in 32-bit mode, and on a 64-bit system it runs in 64-bit mode.  For the application itself, this works great.  The problem arises when MarcEdit has to use third-party tools and services, and it directly affects one popular MarcEdit plugin – the OCLC Connexion plugin.  This plugin allows users to import data from their Connexion local save file, process the data in MarcEdit, and then save it back to the Connexion save file.  For users of 64-bit systems, this plugin has been unavailable.  The reason is that OCLC uses Microsoft Access’s database format to store the local Connexion data, and the component that reads the MS Access file format, an operating system component distributed with Windows (or installed with Office), is 32-bit only.  Microsoft has gone on record saying that it won’t be updating this component and has advised developers to simply build programs targeting 32-bit systems if they need it.  That is what Connexion does, but in MarcEdit there are good reasons, from a speed and optimization perspective, to use the native data types on 32- and 64-bit systems.  So, for a long time, this restriction essentially prevented users on 64-bit systems from using the plugin.
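
To make the failure concrete, here is a rough sketch, not MarcEdit’s actual code, of what happens when a 64-bit .NET process tries to open an Access-format Connexion save file through the 32-bit Jet OLE DB provider (the file path is hypothetical):

    // Minimal sketch (not MarcEdit's actual code): why a 64-bit .NET process
    // cannot open an Access-format Connexion local save file through the
    // 32-bit-only Jet OLE DB provider.
    using System;
    using System.Data.OleDb;

    class ConnexionSaveFileCheck
    {
        static void Main()
        {
            // Hypothetical path to a Connexion local save file (Access format).
            const string connString =
                @"Provider=Microsoft.Jet.OLEDB.4.0;Data Source=C:\Connexion\LocalSaveFile.bib.db";

            Console.WriteLine("Running as a 64-bit process: " + Environment.Is64BitProcess);

            try
            {
                using (var conn = new OleDbConnection(connString))
                {
                    // In a 32-bit process this opens normally; in a 64-bit process it
                    // throws because the Jet provider is only registered for 32-bit.
                    conn.Open();
                    Console.WriteLine("Opened the Connexion save file.");
                }
            }
            catch (InvalidOperationException ex)
            {
                Console.WriteLine("Could not open the save file: " + ex.Message);
            }
        }
    }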

Well, I’ve been researching a potential solution for some time, and I’ve come up with one.  As of today, I’ve introduced a 32-bit mode in MarcEdit.  Users can select this mode, and MarcEdit will create a pocket virtualized environment that allows the application to run as a 32-bit application and thus use the plugin.  At this point, I’m not giving users the option to always run MarcEdit in 32-bit mode, and I’m not sure I ever will.  But this option will let users work with the Connexion plugin regardless of their system type.  I’ve recorded a YouTube video demonstrating how this new function works.

[Embedded video: demonstration of MarcEdit’s new 32-bit mode]

I’m hoping that this approach will also open the door to improved debugging support, but I need to do a little more research.
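
For the curious, one common pattern for this kind of 32-bit bridge is for the 64-bit host to hand the Access work to a small helper executable compiled for x86 and read the results back.  The sketch below shows that general pattern only; the helper name is hypothetical, and it isn’t necessarily how MarcEdit’s 32-bit mode is implemented:

    // General-pattern sketch only (not necessarily how MarcEdit's 32-bit mode
    // works): a 64-bit host delegates 32-bit-only work to a small helper
    // executable compiled for x86 and reads the result back over stdout.
    // "ConnexionHelper32.exe" is a hypothetical helper name.
    using System;
    using System.Diagnostics;

    class ThirtyTwoBitBridge
    {
        public static string RunInHelper(string saveFilePath)
        {
            var startInfo = new ProcessStartInfo
            {
                FileName = "ConnexionHelper32.exe",   // built with Platform Target = x86
                Arguments = "\"" + saveFilePath + "\"",
                UseShellExecute = false,
                RedirectStandardOutput = true,
                CreateNoWindow = true
            };

            using (var helper = Process.Start(startInfo))
            {
                string output = helper.StandardOutput.ReadToEnd();
                helper.WaitForExit();
                if (helper.ExitCode != 0)
                    throw new InvalidOperationException("The 32-bit helper reported an error.");
                return output;
            }
        }

        static void Main(string[] args)
        {
            string path = args.Length > 0 ? args[0] : "LocalSaveFile.bib.db";
            Console.WriteLine(RunInHelper(path));
        }
    }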

As you can imagine, this update also includes updates to the OCLC Connexion Plugin.  If you want to use it, you will need to either download it through the plugin manager or, if you already have it installed, delete the existing plugin and download the new version through the plugin manager.

MarcEdit update notes:

  • Bug Fix: Copy Field: When data is present in the Find Text box to define conditional Copy Field operations, the Find data is ignored. This has been corrected.
  • Bug Fix: Extract/Delete Records: When calculating the display field, MarcEdit utilizes the non-filing character indicators when extracting title data for display. However, if the non-filing data is incorrect, then an error can occur. This has been corrected.
  • Enhancement: Validate URLs: Users now have the ability to select a non-variable field to be the display field in the status reports.
  • Enhancement: OCLC Connexion Plugin: Corrected the 008 data processing so that the data is valid when moving between the Connexion data file and MarcEdit. Previously, MarcEdit didn’t process this data, and because OCLC stores two extra bytes in the field, the data was invalid when imported into MarcEdit. The data is now cleaned and fixed when moving between the Connexion data file and MarcEdit (a minimal normalization sketch follows this list).
  • Enhancement: OCLC Connexion Plugin: Updated the user interface.
  • Enhancement: OCLC Connexion Plugin: Improved the Error Handling and the messages that users receive on error.
  • Enhancement: OCLC Connexion Plugin: Improved the status thread handling so error and status messages are not covered up.
  • New Feature: MarcEdit 32-bit Mode: When users migrated to a 64-bit environment, the Connexion Plugin ceased to function because it relied on a 32-bit operating system component, a component Microsoft has said will not be supported in 64-bit mode. To allow users to continue to use tools like the OCLC Connexion plugin, MarcEdit now has a 32-bit mode, a service that essentially creates a virtual 32-bit environment around the MarcEdit application, allowing the program to use 32-bit components while running on a 64-bit system.
  • Merry Christmas!!!
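
As a purely illustrative sketch of the 008 normalization mentioned in the list above: it assumes the two surplus bytes can simply be dropped to restore the standard 40-character field length used in MARC bibliographic records, and it is not the plugin’s actual code.

    // Illustrative sketch only, not the plugin's code: bring an 008 control
    // field back to the standard 40-character length used in MARC bibliographic
    // records, assuming the two surplus bytes can simply be dropped.
    using System;

    static class Marc008
    {
        public const int StandardLength = 40;

        public static string Normalize(string raw)
        {
            if (raw == null) throw new ArgumentNullException("raw");

            // Trim long values down to 40 characters; pad short ones with blanks.
            return raw.Length >= StandardLength
                ? raw.Substring(0, StandardLength)
                : raw.PadRight(StandardLength);
        }

        static void Main()
        {
            string stored = new string('x', 42);                  // a 42-byte value, as described above
            Console.WriteLine(Marc008.Normalize(stored).Length);  // prints 40
        }
    }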

If you have automated updates enabled, the program will prompt you to update the next time you run MarcEdit.  Otherwise, you can download the updates from:

Merry Christmas,

–tr

Engard, Nicole: Bookmarks for December 24, 2013

planet code4lib - Tue, 2013-12-24 20:30

Today I found the following resources and bookmarked them:

  • Patreon
  • CauseVox Create your own online fundraising and crowdfunding site
  • StartSomeGood crowdfunding for non-profits, social entrepreneurs and changemakers
  • Causes Causes is the place to discover, support and organize campaigns, fundraisers, and petitions around the issues that impact you and your community
  • Indiegogo An International Crowdfunding Platform to Raise Money

Digest powered by RSS Digest

The post Bookmarks for December 24, 2013 appeared first on What I Learned Today....

Related posts:

  1. Keynote: We The People: Open Source, Open Data
  2. Software Freedom Day in September
  3. DonateNow Mashup Challenge

Open Knowledge Foundation: 2013 – A great year for CKAN

planet code4lib - Tue, 2013-12-24 14:47

2013 has seen CKAN and the CKAN community go from strength to strength. Here are some of the highlights.

February, May, June, July, August

  • CKAN 2.1 released with new capabilities for managing bulk datasets, among many other improvements

September, October

  • Substantial new version of CKAN’s geospatial extension, including pycsw and MapBox integration and revised and expanded docs

November

  • Future City Glasgow launched the open.glasgow.gov.uk prototype as part of their TSB-funded Future Cities Demonstrator programme

December

Looking forward

The CKAN community is growing incredibly quickly, so we’re looking forward to seeing what people do with CKAN in 2014.

So if your city, region or state hasn’t already done so, why not make 2014 the year that you launch your own CKAN-powered open data portal?

Download CKAN or contact us if you need help getting started.
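
If you want a quick feel for what a running CKAN portal exposes, here is a small sketch that lists datasets through CKAN’s documented Action API; demo.ckan.org is used purely as an illustrative host:

    // Hedged sketch: list the datasets on a CKAN portal through CKAN's Action API.
    // The package_list action is part of CKAN's documented API; demo.ckan.org is
    // used here only as an illustrative host.
    using System;
    using System.Net;

    class CkanApiExample
    {
        static void Main()
        {
            using (var client = new WebClient())
            {
                string json = client.DownloadString(
                    "https://demo.ckan.org/api/3/action/package_list");

                // Print the first few hundred characters of the JSON response.
                Console.WriteLine(json.Substring(0, Math.Min(300, json.Length)));
            }
        }
    }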

This post was cross-posted from the CKAN blog.
