Feed aggregator

Open Knowledge Foundation: BudgetApps: The First All-Russia Contest on Open Finance Data

planet code4lib - Fri, 2015-01-16 17:26

This is a guest post by Ivan Begtin, Ambassador for Open Knowledge in Russia and co-founder of the Russian Local Group.

Dear friends, the end of 2014 and the beginning of 2015 have been marked by an event that is terrific for everyone interested in working with open data, participating in challenges for apps developers, and the Open Data Movement in general. I’m also sure, by the way, that people who are fond of history will find it particularly fascinating to be involved in this event.

On 23 December 2014, the Russian Ministry of Finance, together with the NGO Infoculture, launched BudgetApps, an apps developers’ challenge based on the open data the Ministry has published over the past several years. There are a number of datasets, including budget data, registries of audit organisations, public debt, the national reserve and many other kinds of data.

As it happens, I have joined the jury, so I won’t be able to participate myself, but let me provide some details about this initiative.

All the published data can be found on the Ministry website. Lots of budget datasets are also available at The Single Web Portal of the Russian Federation Budget System. That includes the budget structure in CSV format, the data itself, reference books and many other instructive details. Data regarding all official institutions are placed here. This resource is particularly interesting because it contains indicators, budgets, statutes and numerous other characteristics of each state organisation or municipal institution in Russia. Such data would be invaluable for anyone considering a regional data-based project.

One of the challenge requirements is that submitted projects must be based on the data published by the Ministry of Finance. However, that does not mean participants cannot use data from other sources alongside the Ministry data. It is actually expected that apps developers will combine several data sources in their projects.

To my mind, participants should not even restrict themselves to machine-readable data, because there is also human-readable data available that can be converted to open data formats.

Many potential participants know how to write parsers on their own. For those who have never done so, there are great reference resources, e.g. ScraperWiki, which can be helpful for scraping web pages. There are also various libraries for analysing Excel files or extracting spreadsheets from PDF documents (for instance, PDFtables, Abbyy Finereader software or other Abbyy services).
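
As a minimal illustration of the kind of scraper a participant might write (a Ruby sketch using the nokogiri gem; the URL and table structure here are hypothetical, not a real Ministry page), one could pull an HTML table from a page and save it as CSV:

require 'open-uri'
require 'nokogiri'
require 'csv'

# Hypothetical page listing budget figures in an HTML table
url = 'http://example.org/budget-figures.html'
doc = Nokogiri::HTML(URI.open(url))

CSV.open('budget.csv', 'w') do |csv|
  doc.css('table tr').each do |row|
    # one CSV row per table row, one value per cell
    csv << row.css('th, td').map { |cell| cell.text.strip }
  end
end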

Moreover, other web resources of the Ministry of Finance contain a lot of interesting information that can be converted to data, including news items that have recently become especially relevant for the Russian audience.

Historical budgets

One huge and powerful direction in the general process of opening data has long been missing in Russia: publishing open historical data that is kept in archives as large paper volumes of reference books containing myriad tables. Such data is virtually indispensable when we turn to history, referring to facts and creating projects devoted to a certain event.

The time has come at last. Any day now the first scanned budgets of the Russian Empire and the Soviet Union will be openly published. A bit later, but also in the near future, the rest of the existing budgets of the Russian Empire, the Soviet Union, and the Russian Soviet Federated Socialist Republic will be published as well.

These scanned copies are gradually being converted to machine-readable formats, such as Excel and CSV data reconstructed from the reference books – both as raw data and as initially processed, ordered data. We created the ordered, normalised versions to make it easier for developers to use them in visualisations and other projects. A number of such datasets have already been openly published. It is also worth mentioning that a considerable number of scanned budget reference books (from both the Russian Empire and the USSR) have already been published online by Historical Materials, a Russian-language grassroots project launched by a group of statisticians, historians and other enthusiasts.

Here are the historical machine-readable datasets published so far:

I find this part of the challenge particularly inspiring. If I were not part of the jury, I would create my own project based on historical budgets data. Actually, I may well do something like that after the challenge is over (unless somebody does it earlier).

More data?

There is a greater stock of data sources that might be used alongside the Ministry data. Here are some of them:

These are just a few examples of numerous available data sources. I know that many people also use data from Wikipedia and DBPedia.

What can be done?

First and foremost, there are great opportunities for creating projects that make public finance easier to understand. For example, these could be visual demos of how the budget (or the public debt, or some particular area of finance) is structured.

Second, lots of projects could be launched based on the data on official institutions at bus.gov.ru. For instance, it could be a comparative registry of all hospitals in Russia. Or a project comparing all state universities. Or a map of available public services. Or a visualisation of the budgets of Moscow State University (or any other Russian state university, for that matter).

As to the historical data, for starters it could be a simple visualisation comparing the current situation to the past. This might be a challenging and fascinating problem to solve.

Why is this important?

BudgetApps is a great way of promoting open data among apps developers, as well as data journalists. There are good reasons for participating. First off, the many sources of data give talented and creative developers a good opportunity to implement their ambitious ideas. Second, the winners will receive considerable cash prizes. And last, but not least, the most interesting and promising projects will be featured on the Ministry of Finance website, which is good promotion for any worthy project. Considerable amounts of data have become available; it is now time for a wider audience to become aware of what they are good for.

Karen Coyle: Real World Objects

planet code4lib - Fri, 2015-01-16 16:01
I was asked a question about the meaning and import of the RDF concept of "Real World Object" (RWO) and didn't give a very good answer off the cuff. I'll try to make up for that here.

The concept of RWO comes out of the artificial intelligence (AI) community. Imagine that you are developing robots and other machines that must operate within the same world that you and I occupy. You have to find a way to "explain," in a machine-operational way, everything in our world: stairs and ramps, chairs and tables, the effect of gravity on a cup when you miss placing it on the table, the stars, love and loyalty (concepts are also objects in this view). The AI folks have actually set a goal to create such descriptions, which they call ontologies, for everything in the world; for every RWO.

You might consider this a conceit, or a folly, but that's the task they have set for themselves.

The original Scientific American article that described the semantic web used as its example intelligent 'bots that would manage your daily calendar and make appointments for you. This was far short of the AI "ontology of everything" but the result that matters to us now is that there have been AI principles baked into the development of RDF, including the concept of the RWO.

RWO isn't as mysterious as it may seem, and I can provide a simple example from our world. The MARC record for a book has the book as its RWO, and most of its data elements "speak" about the book. At the same time, we can say things about the MARC record, such as who originally created it, and who edited it last, and when. The book and the record are different things, different RWO's in an RDF view. That's not controversial, I would assume.

Our difficulties arise because in the past we didn't have a machine-actionable way to distinguish between those two "things": the book and the record. Each MARC record got an identifier, which identified the record. We've never had identifiers for the thing the record describes (although the ISBN sometimes works this way). It has always been safe to assume that the record was about the book, and what identified the book was the information in the record. So we obviously have a real world object, but we didn't give it its own identifier - because humans could read the text of the record and understand what it meant (most of the time or some of the time).

I'm not fully convinced that everything can be reduced to RWO/not-RWO, and so I'm not buying that is the only way to talk about our world and our data. It should be relatively easy, though, without getting into grand philosophical debates, to determine the difference between our metadata and the thing it describes. That "thing it describes" can be fuzzy in terms of the real world, such as when the spirit of Edgar Cayce speaks through a medium and writes a book. I don't want to have to discuss whether the spirit of Edgar Cayce is real or not. We can just say that "whoever authors the book is as real as it gets." So if we forget RWO in the RDF sense and just look sensibly at our data, I'm sure we can come to a practical agreement that allows both the metadata and the real world object to exist.

That doesn't resolve the problem of identifiers, however, and for machine-processing purposes we do need separate identifiers for our descriptions and what we are describing.* That's the problem we need to solve, and while we may go back and forth a bit on the best solution, the problem is tractable without resorting to philosophical confabulations.

* I think that the multi-level bibliographic descriptions like FRBR and BIBFRAME make this a bit more complex, but I haven't finished thinking about that, so will return if I have a clearer idea.

Library of Congress: The Signal: Digital Audio Preservation at MIT: an NDSR Project Update

planet code4lib - Fri, 2015-01-16 14:25

The following is a guest post by Tricia Patterson, National Digital Stewardship Resident at MIT Libraries

This month marks the mid-way point of my National Digital Stewardship Residency at MIT Libraries, a temporal vantage point that allows me to reflect triumphantly on what has been achieved so far and peer fearlessly ahead at all that must be accomplished before I am finished.

As mentioned in our previous introductory group post, I was primarily tasked with completing a gap analysis of the digital preservation workflows currently in place, developing lower-level diagrammatic and narrative workflows, and calling out a digital audio use case from the Lewis Music Library materials we are using to build the workflows. My work is part of a larger preservation planning effort underway at MIT, and it has enabled me to make higher-level, organizational contributions while also familiarizing me with the nitty-gritty procedural details across the departments. This project really has relied on strong, interdepartmental collaboration, with input from: Peter Munstedt and Cate Gallivan from the Lewis Music Library; Tom Rosko, Mikki Macdonald, Liz Andrews and Kari Smith from the Institute Archives and Special Collections; Ann Marie Willer from Curation and Preservation Services; Helen Bailey from IT; and finally my host supervisor, Nancy McGovern, who heads Digital Curation and Preservation. Others have been consulted throughout the project as well.

I will shamefully admit that during graduate school, I really hadn’t given much consideration to workflow documentation. Aside from the OAIS reference model, my thinking about digital preservation was relegated to isolated, technical steps such as format migration or appropriate preservation metadata. Since beginning this project, however, I’ve realized that workflow documentation is receiving increased acknowledgement and appreciation. Without a tested, repeatable road map, it is difficult to process larger projects with efficiency and security. A detailed, documented workflow elucidates processes across departments, giving us insight into redundancies and deficiencies. It allows for transparency, clarification of roles, and accountability within the chain of custody.

Above is the high-level content management workflow that the digital audio workflow subgroup developed prior to my arrival. My work so far has been on the second (digitization) and third (managing digital content) sequences of the workflow, fleshing out optimized, lower-level documentation for the steps within each bubble (or stage). Below is an example of the lower-level workflow diagram that I designed for stage A2: Define Digitization Requirements based on the information gathered from archives and preservation staff. Not pictured is the accompanying narrative documentation for the stage. I actually just wrapped up drafting the six stages of the “Transform Physical/Analog to Digital” sequence at the end of December, and while I am drafting the documentation for the next sequence – “Managing Digital” – we are simultaneously moving through the review process for the initial set.

Other benefits have emerged as well, including a better idea of what documentation a digitization project generates and which of that documentation needs to be preserved. The work has also helped us identify steps that would benefit from automation. For example, as the physical materials are handed off on their way to a vendor to be digitized, we must maintain a chain of custody for the content, so our metadata archivist created a database to more accurately track the items in real time as they move through the workflow. We have also gained a better perspective on which tools will have the biggest impact on streamlining work through this workflow. It is becoming clear how much easier it will be to initiate digitization projects, now that we know exactly which avenues need to be traveled and what documentation is necessary. And a strong, tested infrastructure can be leveraged to support increased funding for projects and acquisitions.

Beyond the workflow development, I am contributing to other projects such as evaluating a streaming audio access platform for the Lewis Music Library and compiling a PREMIS profile for MIT Libraries that can be used for digital audio. The evaluation, an activity of our new Digital Sustainability Lab, has been especially fascinating, as our team is a combined effort between the technological and organizational wings of the libraries, working together to define requirements and measure options against them.

We began by itemizing 50-60 delivery requirements, including relevant TRAC requirements (PDF), covering display and interface, search and discovery, accessibility, ingest and export, metadata, content management, permissions, documentation and other considerations. From there, our group prioritized the requirements on a scale from zero to four: “might be nice” to “showstopper/must-have.” We also kept in mind that while we are only focusing on audio streaming currently, the system should be extensible to audiovisual materials. Next, we will be measuring the platform options against our prioritized requirements to determine which one will be best suited to meet the needs of the Libraries now. For me, this has been one of the most important parts of the position, to facilitate meaningful access to these audio treasures.
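
For a sense of how such a comparison might work mechanically, here is a toy sketch in Ruby (the requirement names, weights and scores are invented for illustration and are not MIT's actual data):

# Requirements weighted 0 ("might be nice") to 4 ("showstopper/must-have")
requirements = {
  'search and discovery'  => 4,
  'ingest and export'     => 4,
  'display and interface' => 3,
  'documentation'         => 2
}

# How well each candidate platform meets each requirement (0-4)
platforms = {
  'Platform A' => { 'search and discovery' => 2, 'ingest and export' => 4,
                    'display and interface' => 3, 'documentation' => 1 },
  'Platform B' => { 'search and discovery' => 4, 'ingest and export' => 3,
                    'display and interface' => 2, 'documentation' => 3 }
}

# Score each platform as the sum of weight x fit across all requirements
platforms.each do |name, scores|
  total = requirements.sum { |req, weight| weight * scores.fetch(req, 0) }
  puts "#{name}: #{total}"
end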

The residencies expand beyond the work at our institutions, however. All of the residents have been organizing tours, demonstrations and classes for one another. In December, I arranged for some of the NDSR-Boston crew to go on a behind-the-scenes tour of the John F. Kennedy Library and Museum, home to a renowned digitization program. This spring, another resident (Joey Heinen) and I are partnering to host a digital audio panel with speakers from some of the host institutions that will hopefully be beneficial to external audiences in the area grappling with common preservation concerns.

The residency will be over before I know it. In the upcoming months, I will wrap up the workflow documentation on the digital sequence, continue work on peripheral extant projects that are ongoing, and attend a couple of conferences to talk about our work. Building these models has been fun and intensely educational – and sharing it with the community will be truly rewarding.

DuraSpace News: SHARE Webinar Recording Available

planet code4lib - Fri, 2015-01-16 00:00

Winchester, MA: On January 14, 2015, Judy Ruttenberg, Program Director, Association of Research Libraries, presented “Roadmap to the Future of SHARE.” Judy highlighted SHARE’s project plans and long-term vision, which include taking a life-cycle approach to research services in an effort to build a robust repository ecosystem. This was the third and final webinar in the DuraSpace Hot Topics Community Webinar Series, “All About the SHared Access Research Ecosystem (SHARE),” curated by Greg Tananbaum, Product Lead, SHARE.

Harvard Library Innovation Lab: Awesome Box top 110 of all time

planet code4lib - Thu, 2015-01-15 21:11

It’s been almost two years since Somerville Public Library helped us launch the Awesome Box to public libraries and beyond.

There are now 364 Awesome libraries around the world.

Over 41,000 items have been dropped in an Awesome Box in those libraries.

See the items just Awesomed on the Awesome Box page.

Now that the year-end lists have come and gone, we’d like to present the top 110 Awesome items* from the past two years, each followed by the number of times it has been dropped in an Awesome Box.

 

  1. Diary of a Wimpy Kid 131
  2. The fault in our stars 110
  3. Divergent 71
  4. Wonder 64
  5. The Hunger Games 59
  6. Gone girl 55
  7. The invention of wings 50
  8. Naruto 50
  9. Unbroken 49
  10. The book thief 45
  11. Orphan train 45
  12. Eleanor & Park 44
  13. Bone 42
  14. The heroes of Olympus 41
  15. Smile 40
  16. The goldfinch 38
  17. Allegiant 38
  18. Star wars 35
  19. All the light we cannot see 33
  20. The maze runner 33
  21. The giver 32
  22. Ready player one 32
  23. Insurgent 32
  24. Big Nate 32
  25. Where’d you go, Bernadette 30
  26. Life after life 30
  27. Fangirl 30
  28. Maximum Ride 29
  29. Dork diaries 29
  30. The boys in the boat 28
  31. Doctor Who 28
  32. Me before you 28
  33. The signature of all things 28
  34. Babymouse 28
  35. The light between oceans 27
  36. Mr. Penumbra’s 24-hour bookstore 27
  37. The storied life of A.J. Fikry 27
  38. Sisters 27
  39. And the mountains echoed 27
  40. Squish 27
  41. Cinder 27
  42. The night circus 27
  43. The Lego movie 27
  44. The help 26
  45. Wild 26
  46. The walking dead 26
  47. Harry Potter and the sorcerer’s stone 26
  48. Junie B. Jones loves handsome Warren 26
  49. Drama 25
  50. Percy Jackson & the Olympians 24
  51. Ender’s game 24
  52. The ocean at the end of the lane 24
  53. Animal Ark Labrador on the Lawn 24
  54. Harry Potter and the Order of the Phoenix 24
  55. The Rosie project 24
  56. Sycamore row 24
  57. Frozen 24
  58. Harry Potter and the Half-Blood Prince 23
  59. I am Malala 23
  60. Amulet 23
  61. Geronimo Stilton 23
  62. Big little lies 23
  63. Every day 22
  64. Harry Potter and the chamber of secrets 22
  65. The husband’s secret 22
  66. Legend 22
  67. The lightning thief 22
  68. Maze runner trilogy 22
  69. Heroes of Olympus 22
  70. Leaving time 22
  71. The invention of Hugo Cabret 21
  72. Out of my mind 21
  73. The sea of monsters 21
  74. Escape from Mr. Lemoncello’s library 21
  75. The perks of being a wallflower 21
  76. Hyperbole and a half 21
  77. Fruits basket 21
  78. Delicious 21
  79. Wonderstruck 20
  80. Black butler 20
  81. Pete the cat 20
  82. Downton Abbey 20
  83. The one and only Ivan 20
  84. Harry Potter and the prisoner of Azkaban 20
  85. The Selection 20
  86. The monuments men 20
  87. Mr. Mercedes 20
  88. Mean streak 20
  89. Room 19
  90. Batman 19
  91. The golem and the jinni 19
  92. The unlikely pilgrimage of Harold Fry 19
  93. Harry Potter and the goblet of fire 19
  94. Matched 19
  95. Game of thrones 19
  96. Paper towns 19
  97. Written in my own heart’s blood 19
  98. The silkworm 19
  99. The immortal life of Henrietta Lacks 18
  100. Graceling 18
  101. One summer 18
  102. The great Gatsby 18
  103. The Cuckoo’s Calling 18
  104. The lowland 18
  105. Steelheart 18
  106. The strange case of Origami Yoda 18
  107. Philomena 18
  108. We were liars 18
  109. Edge of eternity 18
  110. The blood of Olympus 18

*Some series items are clumped together. I kind of like it that way.

Jonathan Rochkind: Ruby threads, gotcha with local vars and shared state

planet code4lib - Thu, 2015-01-15 18:25

I end up doing a fair amount of work with multi-threading in ruby. (There is some multi-threaded concurrency in Umlaut, bento_search, and traject).  Contrary to some belief, multi-threaded concurrency can be useful even in MRI ruby (which can’t do true parallelism due to the GIL), for tasks that spend a lot of time waiting on I/O, which is the purpose in Umlaut and bento_search (in both cases waiting on external HTTP apis). Traject uses multi-threaded concurrency for true parallelism in jruby (or soon rbx) for high performance.

There’s a gotcha with ruby threads that I haven’t seen covered much. What do you think this code will output from the ‘puts’?

value = 'original'

t = Thread.new do
  sleep 1
  puts value
end

value = 'changed'

t.join

It outputs “changed”. The local var `value` is shared between both threads; changes made in the primary thread affect the value of `value` in the created thread too. This issue is not unique to threads, but is a result of how closures work in ruby — the local variables used in a closure don’t capture a fixed value at the time of closure creation, they are pointers to the original local variables. (I’m not entirely sure if this is traditional for closures, whether some other languages do it differently, or what the correct CS terminology is for talking about this stuff.) It confuses people in other contexts too, but can especially lead to problems with threads.
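
To convince yourself this is about closures rather than threads, here is a minimal sketch with no threads involved at all:

value = 'original'

closure = lambda { puts value }

value = 'changed'

closure.call # prints "changed" -- the block sees the variable, not a snapshot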

Consider a loop which in each iteration prepares some work to be done, then dispatches to a thread to actually do the work.  We’ll do a very simple fake version of that, watch:

threads = []
i = 0

10.times do
  # pretend to prepare a 'work order', which ends up in local
  # var i
  i += 1

  # now do some stuff with 'i' in the thread
  threads << Thread.new do
    sleep 1 # pretend this is a time consuming computation

    # now we do something else with our work order...
    puts i
  end
end

threads.each {|t| t.join}

Do you think you’ll get “1”, “2”, … “10” printed out? You won’t. You’ll get ten 10’s. (With newlines in random places because of the interleaving of ‘puts’, but that’s not what we’re talking about here.) You thought you dispatched 10 threads, each with a different value for ‘i’, but the threads are actually all sharing the same ‘i’; when it changes, it changes for all of them.

Oops.

Ruby stdlib Thread.new has a mechanism to deal with this, although like much in ruby stdlib (and much about multi-threaded concurrency in ruby), it’s under-documented. But you can pass args to Thread.new, which will be passed to the block too, and allow you to avoid this local var linkage:

require 'thread'

value = 'original'

t = Thread.new(value) do |t_value|
  sleep 1
  puts t_value
end

value = 'changed'

t.join

Now that prints out “original”. That’s the point of passing one or more args to Thread.new.
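
Applying that fix to the earlier loop example, passing `i` to Thread.new gives each thread its own copy, and you get 1 through 10 (in some interleaved order):

threads = []
i = 0

10.times do
  i += 1

  # pass 'i' as a block arg, so each thread captures its value at this moment
  threads << Thread.new(i) do |t_i|
    sleep 1
    puts t_i
  end
end

threads.each {|t| t.join}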

You might think you could get away with this instead:

require 'thread'

value = 'original'

t = Thread.new do
  # nope, not a safe way to capture the value, there's
  # still a race condition
  t_value = value

  sleep 1
  puts t_value
end

value = 'changed'

t.join

While that will seem to work for this particular example, there’s still a race condition there: the value could change before the first line of the thread block is executed. Part of dealing with concurrency is giving up any expectations of what gets executed when, until you wait on a `join`.

So, yeah, the arguments to Thread.new. Which other libraries involving threading sometimes propagate. With a concurrent-ruby ThreadPoolExecutor:

work = 'original'

pool = Concurrent::FixedThreadPool.new(5)

pool.post(work) do |t_work|
  sleep 1
  puts t_work # is safe
end

work = 'new'

pool.shutdown
pool.wait_for_termination

And it can even be a problem with Futures from concurrent-ruby. Futures seem so simple and idiot-proof, right? Oops.

value = 100

future = Concurrent::Future.execute do
  sleep 1
  # DANGER will robinson!
  value + 1
end

value = 200

puts future.value # you get 201, not 101!

I’m honestly not even sure how you get around this problem with Concurrent::Future; unlike Concurrent::ThreadPoolExecutor, it does not seem to copy stdlib Thread.new in its method of being able to pass block arguments. There might be something I’m missing (or a way to use Futures that avoids this problem?), or maybe the authors of concurrent-ruby haven’t considered it yet either. I’ve asked the question of them. (PS: The concurrent-ruby package is super awesome; it’s still building to 1.0 but usable now. I am hoping that its existence will do great things for practical use of multi-threaded concurrency in the ruby community.)
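
For what it’s worth, one workaround that should be safe is to copy the value into a fresh local that nothing reassigns before creating the Future. This is just a sketch of that idea, not an officially blessed API:

value = 100
captured = value # fresh local; nothing reassigns it after this point

future = Concurrent::Future.execute do
  sleep 1
  captured + 1 # safe: 'captured' never changes out from under us
end

value = 200

puts future.value # => 101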

This is, for me, one of the biggest, most dangerous, most confusing gotchas with ruby concurrency. It can easily lead to hard-to-notice, hard-to-reproduce, and hard-to-debug race condition bugs.


Filed under: General

Library of Congress: The Signal: The DPC’s 2014 Digital Preservation Awards

planet code4lib - Thu, 2015-01-15 16:21

From the Digital Preservation Coalition.

In November, our colleagues at the Digital Preservation Coalition presented their Digital Preservation 2014 awards. These awards, which are given every two years, were established in 2004 to help raise awareness about digital preservation.

The Library of Congress welcomes any public recognition of excellence in digital preservation. We, too, have given our own awards, most recently the Innovation Awards at our Digital Preservation 2012 (PDF), 2013 and 2014 (PDF) conferences. The Library’s awards tend to be limited to work done in the United States of America, as the DPC awards tend to be limited to work done in Western Europe; it’s built into the nature of the organizations.

We are happy to draw attention to the work at the DPC and I hope our communication with our international colleagues continues to expand so we can learn more about each other’s digital preservation work. Of course, not all great work is publicly recognized or rewarded. But when we have the opportunity to learn of others’ projects and results, it advances our collective digital preservation progress a bit.

This year, the DPC awards were divided into four categories:

  1. Open Preservation Foundation Award for Research and Innovation
  2. The NCDD Award for Teaching and Communications
  3. The DPC Award for the Most Distinguished Student Work in Digital Preservation
  4. The DPC Award for Safeguarding the Digital Legacy.

The Research and Innovation award emphasized real-world results and impact. The award went to bwFLA, which offers emulation as a service. The other nominees were:

  • Jpylyzer, a JPEG2000 validator and extractor.
  • The SPRUCE project, which supported grass-roots preservation activity by connecting digital data managers with technical experts.

The Teaching and Communication award applied to outreach, training and advocacy. The award went to Adrian Brown for his book, “Practical Digital Preservation: a how-to guide for organizations of any size.” The other nominees were:

The Most Distinguished Student Work was awarded to “Game Preservation in the UK,” by Alasdair Bachell of the University of Glasgow. The other nominees were:

Safeguarding the Digital Legacy celebrates practical efforts to save important digital collections. The award went to the Carcanet Press Email Archive, which documents the preservation of 215,000 emails and 65,500 attachments, spanning a twelve-year period, along with full metadata. The other nominees were:

The complete video of the award ceremonies is online.

DPLA: Apply to our 3rd Class of Community Reps

planet code4lib - Thu, 2015-01-15 15:40

We’re thrilled to announce today our third-ever call for applications for the DPLA Community Reps program! The application for this third class of Reps will close on Friday, February 13.

What is the DPLA Community Reps program? In brief, we’re looking for enthusiastic volunteers who are willing to help us bring DPLA to their local communities through outreach activities or support DPLA by working on special projects. Reps give a small commitment of time to community engagement, collaboration with fellow Reps and DPLA staff, and check-ins with DPLA staff. We have a terrific first two classes of reps from diverse places and professions.

With the third class, we are hoping to strengthen and expand our group geographically and professionally.

Geographic priorities
  • States without reps: Washington, DC, Delaware, Kansas, Maryland, Nevada, Oregon
  • States with only one rep: Alabama, Alaska, Arkansas, Connecticut, Hawaii, Iowa, Kentucky, Louisiana, Minnesota, Montana, Nebraska, New Mexico, North Dakota, Oklahoma, Puerto Rico, Tennessee, Vermont, West Virginia, Wyoming
Professional backgrounds
  • We are looking for people with interests that intersect with DPLA’s mission, wherever they work. We will give some priority to K-12 teachers, programmers/developers, genealogists, museum professionals, and historical society staff.

Although applicants who help us with these initiatives will be given special consideration, the single most important factor in selection is the applicant’s ability to clearly identify communities they can serve and plan relevant outreach activities or DPLA-related projects for them. If you don’t fall into the groups outlined above, please consider applying anyway. We are looking for enthusiastic, motivated people from the US and the world with great ideas above all else!

To answer general inquiries about what type of work reps normally engage in and to provide information about the program in general, open information and Q&A sessions will be held with key DPLA staff members and current community reps.  

Reps Info Session #1: Tue, January 27, 6pm – 7pm Eastern
Reps Info Session #2: Thu, February 5, 1pm – 2pm Eastern

If you would like to join one of these webinars, please register for the date and time that works best for you.

For more information about the DPLA Community Reps program, please contact info@dp.la.

Hydra Project: ActiveFedora 8.0.0 released

planet code4lib - Thu, 2015-01-15 13:47

We are pleased to announce that ActiveFedora 8.0.0 final has been released.

The release notes for this gem can be found at:  https://github.com/projecthydra/active_fedora/releases/tag/v8.0.0

ActiveFedora 8 is the last major version of this software that will be compatible with Fedora Commons Repository version 3.  ActiveFedora 9 is targeted at Fedora 4.

D-Lib: Enabling Living Systematic Reviews and Clinical Guidelines through Semantic Technologies

planet code4lib - Thu, 2015-01-15 13:43
Article by Laura Slaughter, The Interventional Centre, Oslo University Hospital (OUS), Norway; Christopher Friis Berntsen and Linn Brandt, Internal Medicine Department, Innlandet Hospital Trust and MAGICorg, Norway and Chris Mavergames, Informatics and Knowledge Management Department, The Cochrane Collaboration, Germany

D-Lib: A Methodology for Citing Linked Open Data Subsets

planet code4lib - Thu, 2015-01-15 13:43
Article by Gianmaria Silvello, University of Padua, Italy

D-Lib: Challenges in Matching Dataset Citation Strings to Datasets in Social Science

planet code4lib - Thu, 2015-01-15 13:43
Article by Brigitte Mathiak and Katarina Boland, GESIS -- Leibniz Institute for the Social Sciences, Germany

D-Lib: Data Citation Practices in the CRAWDAD Wireless Network Data Archive

planet code4lib - Thu, 2015-01-15 13:43
Article by Tristan Henderson, University of St Andrews, UK and David Kotz, Dartmouth College, USA

D-Lib: Data as "First-class Citizens"

planet code4lib - Thu, 2015-01-15 13:43
Guest Editorial by Lukasz Bolikowski, ICM, University of Warsaw, Poland; Nikos Houssos, National Documentation Centre / National Hellenic Research Foundation, Greece; Paolo Manghi, Istituto di Scienza e Tecnologie dell'Informazione, Consiglio Nazionale delle Ricerche, Italy and Jochen Schirrwagen, Bielefeld University Library, Germany

D-Lib: A-posteriori Provenance-enabled Linking of Publications and Datasets via Crowdsourcing

planet code4lib - Thu, 2015-01-15 13:43
Article by Laura Dragan, Markus Luczak-Roesch, Elena Simperl, Heather Packer and Luc Moreau, University of Southampton, UK; Bettina Berendt, KU Leuven, Belgium

D-Lib: Data without Peer: Examples of Data Peer Review in the Earth Sciences

planet code4lib - Thu, 2015-01-15 13:43
Article by Sarah Callaghan, British Atmospheric Data Centre, UK

D-Lib: Semantic Enrichment and Search: A Case Study on Environmental Science Literature

planet code4lib - Thu, 2015-01-15 13:43
Article by Kalina Bontcheva, University of Sheffield, UK; Johanna Kieniewicz and Stephen Andrews, British Library, UK; Michael Wallis, HR Wallingford, UK

D-Lib: A Framework Supporting the Shift from Traditional Digital Publications to Enhanced Publications

planet code4lib - Thu, 2015-01-15 13:43
Article by Alessia Bardi and Paolo Manghi, Istituto di Scienza e Tecnologie dell'Informazione, Consiglio Nazionale delle Ricerche, Italy

D-Lib: Science 2.0 Repositories: Time for a Change in Scholarly Communication

planet code4lib - Thu, 2015-01-15 13:43
Article by Massimiliano Assante, Leonardo Candela, Donatella Castelli, Paolo Manghi and Pasquale Pagano, Istituto di Scienza e Tecnologie dell'Informazione, Consiglio Nazionale delle Ricerche, Italy
