
Feed aggregator

Peter Murray: Registration Now Open for a Fall Forum on the Future of Library Discovery

planet code4lib - Wed, 2015-08-26 14:23

Helping patrons find the information they need is an important part of the library profession, and in the past decade the profession has seen the rise of dedicated “discovery systems” to address that need. The National Information Standards Organization (NISO) is active at the intersection of libraries, content suppliers, and service providers in smoothing out the wrinkles between these parties:

Next in this effort is a two-day meeting where these three groups will hear about the latest activities and plan next steps to advance the standards landscape. Registration for this meeting has just opened, and the announcement is included below. I’ll be in Baltimore in early October to participate and offer the closing keynote, and I hope you will be able to attend in person or participate in the live stream.

NISO will host a two-day meeting on The Future of Library Discovery in Baltimore, Maryland, on October 5 & 6, 2015. In February 2015, NISO published a white paper commissioned from library consultant Marshall Breeding by NISO’s Discovery to Delivery Topic Committee. The in-person meeting builds on the white paper, with a series of presenters and panels offering an overview of the current resource discovery environment. Attendees will then take part in several conversations examining how these technologies, methodologies, and products might adapt to the evolving scholarly communications landscape and take advantage of new technologies, metadata models, and linking environments to better meet libraries’ need to provide access to resources.

For the full agenda, please visit:

Confirmed speakers include:

  • Opening Keynote: Marshall Breeding, Independent Library Consultant
  • Scott Bernier, Senior Vice President, Marketing, EBSCO
  • Michael Levine-Clark, Professor / Associate Dean for Scholarly Communication and Collections Services, University of Denver Libraries
  • Gregg Gordon, President & CEO, Social Sciences Research Network (SSRN)
  • Neil Grindley, Head of Resource Discovery, Jisc
  • Steve Guttman, Senior Product Manager, ProQuest
  • Karen Resch McKeown, Director, Product Discovery, Usage and Analytics, Gale | Cengage Learning
  • Jason S. Price, Ph.D., Director of Licensing Operations, SCELC Library Consortium
  • Mike Showalter, Executive Director, End-User Services, OCLC
  • Christine Stohn, Product Manager, ExLibris Group
  • Julie Zhu, Manager, Discovery Service Relations, Marketing, Sales & Design, IEEE
  • Closing Keynote: Peter Murray, Library Technologist and blogger at the Disruptive Library Technology Jester

This event is generously sponsored by: EBSCO, Sage Publications, ExLibris Group, and Elsevier. Thank you!

Early Bird rates until September! The cost to attend the two-day seminar in person for NISO Members (Voting or LSA) is only $250.00; Nonmember: $300.00; and for Students: $150.00. To register, click here.

Please visit the event page for the most up-to-date information on the agenda, speakers and registration information.

For any questions regarding your in-person or virtual attendance at this NISO event, contact Juliana Wood, Educational Programs Manager, via email or phone 301.654.2512.

We hope to see you in Baltimore in the Fall!


DuraSpace News: DuraSpace Selects Gunter Media Group, Inc. as a Registered Service Provider for VIVO

planet code4lib - Wed, 2015-08-26 00:00

Winchester, MA  Gunter Media Group, Inc., an executive management consulting firm that helps libraries, publishers and companies leverage key operational, technical, business and human assets, has become a DuraSpace Registered Service Provider (RSP) for the VIVO Project. Gunter Media Group, Inc.  will provide VIVO related services such as strategic evaluation, project management, installation, search engine optimization and integration for institutions looking to join the VIVO network.

LITA: iPads in the Library

planet code4lib - Tue, 2015-08-25 17:00

Getting Started/Setting Things Up

Several years ago we added twenty iPad 2s to use in our children’s and teen programming. They have a variety of apps on them ranging from early literacy and math apps to GarageBand and iMovie to Minecraft and Clash of Clans*. Ten of the iPads are geared towards younger kids and ten are slanted towards teen interests.

Not surprisingly, the iPads were very popular when we first acquired them. We treated app selection as an extension of our collection development policy. Both the Children’s and Adult Services departments have a staff iPad they can use to try out apps before adding them to the programming iPads.

We bought a cart from Spectrum Industries (a WI-based company; we also have several laptop carts from them) so that we had a place to house and charge the devices. The cart has space for forty iPads/tablets total. We use an Apple MacBook and the Configurator app to handle updating the iPads and adding content to them. We created a Volume Purchase Program account in order to buy multiple copies of apps and then get reimbursed for taxes after the fact. The VPP does not allow for tax exempt status but the process of receiving refunds is pretty seamless.

The only ‘bothersome’ part of updating the iPads is switching the cable from the power plug to the USB ports and then making sure that all the iPads have their power cables plugged firmly into them to make a solid connection. Once I’d done it a few times it became less awkward. The MacBook needs to be plugged into the wall or it won’t have enough power for the iPads. It also works best running on an ethernet connection versus WiFi for downloading content.

It takes a little effort to set up the Configurator** but once you have it done, all you need to do is plug the USB into the MacBook, launch the Configurator, and the iPads get updated in about ten to fifteen minutes even if there’s an iOS update.

Maintaining the Service/Adjusting to Our Changing Environment

Everything was great. Patrons loved the iPads. They were easy to maintain. They were getting used.

Then the school district got a grant and gave every student, K-12, their own iPad.

They rolled them out starting with the high school students and eventually down through the Kindergartners. The iPads are the students’ responsibility. They use them for homework and note-taking. Starting in third grade they get to take them home over the summer.

Suddenly our iPads weren’t so interesting any more. Not only that, but our computer usage plummeted. Now that our students had their own Internet-capable device they didn’t need our computers any more. They do need our WiFi and not surprisingly those numbers went up.

There are restrictions for the students. For example, younger students can’t put games on their iPads. And while older students have fewer restrictions, they don’t tend to put paid apps on their iPads. That means we have things on our iPads that the students couldn’t or didn’t have.

I started meeting with the person at the school district in charge of the program a couple times a year. We talk about technology we’re implementing at our respective workplaces and figure out what we can do to supplement and help each other. I’ll unpack this in a future post and talk about creating local technology partnerships.

Recently I formed a technology committee consisting of staff from every department in the library. One of the things we’ll be addressing is the iPads. We want to make sure that they’re being used. Also, before long they will be out of date, and we’ll have to decide whether we’re replacing them and whether we’d just recycle the old devices or repurpose them (as OPACs, potentially?).

We don’t circulate iPads but I’d certainly be open to that idea. How many of you have iPads/tablets in your library? What hurdles have you faced?

* This is a list of what apps are on the iPads as of August 2015. Paid apps are marked with a $:

  • Children’s iPads (10): ABC Alphabet Phonics, Air Hockey Gold, Bub – Wider, Bunny Fun $, Cliffed: Norm’s World XL, Dizzypad HD, Don’t Let the Pigeon Run This App! $, Easy-Bake Treats, eliasMATCH $, Escape – Norm’s World XL, Fairway Solitaire HD, Fashion Math, Go Away, Big Green Monster! $, Hickory Dickory Dock, Jetpack Joyride, Make It Pop $, Mango Languages, Minecraft – Pocket Edition $, Moo, Baa, La La La! $, My Little Pony: Twilight Sparkle, Teacher for a Day $, NFL Kicker 13, Offroad Legends Sahara, OverDrive, PewPew, PITFALL!, PopOut! The Tale of Peter Rabbit! $, Punch Quest, Skee-Ball HD Free, Sound Shaker $, Spot the Dot $, The Cat in the Hat – Dr. Seuss $, Waterslide Express
  • Teen iPads (10): Air Hockey Gold, Bad Flapping Dragon, Bub – Wider, Can You Escape, Clash of Clans, Cliffed: Norm’s World XL, Codea $, Cut the Rope Free, Despicable Me: Minion Rush, Dizzypad HD, Easy-Bake Treats, Escape – Norm’s World XL, Fairway Solitaire HD, Fashion Math, Fruit Ninja Free, GarageBand $, iMovie $, Jetpack Joyride, Mango Languages, Minecraft – Pocket Edition $, NFL Kicker 13, Ninja Saga, Offroad Legends Sahara, OverDrive, PewPew, PITFALL!, Punch Quest, Restaurant Town, Skee-Ball HD Free, Stupid Zombies Free, Temple Run, Waterslide Express, Zombies vs. Ninja

** It’s complicated but worth spelling out so I’m working on a follow-up post to explain the process of creating a VPP account and getting the Configurator set up the way you want it.

Open Knowledge Foundation: Global Open Data Index 2015 is open for submissions

planet code4lib - Tue, 2015-08-25 12:43

The Global Open Data Index measures and benchmarks the openness of government data around the world, and then presents this information in a way that is easy to understand and easy to use. Each year the open data community and Open Knowledge produce a ranking of countries, peer reviewed by our network of local open data experts. The Index was launched in 2012 as a tool to track the state of open data around the world: more and more governments were beginning to set up open data portals and make commitments to release open government data, and we wanted to know whether those commitments were really translating into the release of actual data.

The Index focuses on 15 key datasets that are essential for transparency and accountability (such as election results and government spending data), and those vital for providing critical services to citizens (such as maps and water quality). Today, we are pleased to announce that we are collecting submissions for the 2015 Index!

The Global Open Data Index tracks whether this data is actually released in a way that is accessible to citizens, media and civil society, and is unique in that it crowdsources its survey results from the global open data community. Crowdsourcing this data provides a tool for communities around the world to learn more about the open data available in their respective countries, and ensures that the results reflect the experience of civil society in finding open information, rather than accepting government claims of openness. Furthermore, the Global Open Data Index is not only a benchmarking tool, it also plays a foundational role in sustaining the open government data community around the world. If, for example, the government of a country does publish a dataset, but this is not clear to the public and it cannot be found through a simple search, then the data can easily be overlooked. Governments and open data practitioners can review the Index results to locate the data, see how accessible the data appears to citizens, and, in the case that improvements are necessary, advocate for making the data truly open.


Methodology and Dataset Updates

After four years of leading this global civil society assessment of the state of open data around the world, we have learned a few things and have updated both the datasets we are evaluating and the methodology of the Index itself to reflect these learnings! One of the major changes has been to run a massive consultation of the open data community to determine the datasets that we should be tracking. As a result of this consultation, we have added five datasets to the 2015 Index. This year, in addition to the ten datasets we evaluated last year, we will also be evaluating the release of water quality data, procurement data, health performance data, weather data and land ownership data. If you are interested in learning more about the consultation and its results, you can read more on our blog!

How can I contribute?

2015 Index contributions open today! We have done our best to make contributing to the Index as easy as possible. Check out the contribution tutorial in English and Spanish, ask questions in the discussion forum, reach out on Twitter (#GODI15) or speak to one of our 10 regional community leads! There are countless ways to get help so please do not hesitate to ask! We would love for you to be involved. Follow #GODI15 on Twitter for more updates.

Important Dates

The Index team is hitting the road! We will be talking to people about the Index at the African Open Data Conference in Tanzania next week and will also be running Index sessions at both AbreLATAM and ConDatos in two weeks! Mor and Katelyn will be on the ground so please feel free to reach out!

Contributions will be open from August 25th, 2015 through September 20th, 2015. After the 20th of September we will begin the arduous peer review process! If you are interested in getting involved in the review, please do not hesitate to contact us. Finally, we will be launching the final version of the 2015 Global Open Data Index Ranking at the OGP Summit in Mexico in late October! This will be your opportunity to talk to us about the results and what that means in terms of the national action plans and commitments that governments are making! We are looking forward to a lively discussion!

Hydra Project: Only 15 tickets left for Hydra Connect 2015

planet code4lib - Tue, 2015-08-25 08:53

Four weeks to go!  Yes, Hydra Connect 2015 is just four weeks away.  The Connect 2015 wiki page has full details of the program and other aspects of the event. As I write this there are only 15 tickets left so, if you haven’t booked already, you really ought to do so very soon!  All our discounted hotel rooms are sold out, but apparently the discount travel sites can still find you a good deal.

Journal of Web Librarianship: “Snow Fall”-ing Special Collections and Archives

planet code4lib - Tue, 2015-08-25 06:37
Jason Paul Michel

District Dispatch: Court cases shaping the fair use landscape

planet code4lib - Mon, 2015-08-24 20:28

U.S. Supreme Court Building. From Wikimedia Commons.

Join us on CopyTalk in September to hear about the leading legal cases affecting Fair Use and our ability to access, archive and foster our common culture. Our presenter on this topic will be Corynne McSherry, Legal Director at the Electronic Frontier Foundation.

CopyTalk will take place on Thursday, September 3rd at 11am Pacific/2pm Eastern time.  After a brief introduction, Corynne will present for 50 minutes, and we will end with a Q&A session (questions will be collected during the presentations).

Please join us at

We are limited to 100 concurrent viewers, so we ask you to watch with others at your institution if at all possible.  The presentations are recorded and will be available online  soon after the presentation. Audio is provided online via the webinar software only, so you will need speakers for your computer; there is no call-in number for audio.

The post Court cases shaping the fair use landscape appeared first on District Dispatch.

Jonathan Rochkind: blacklight_cql plugin

planet code4lib - Mon, 2015-08-24 18:13

I’ve updated the blacklight_cql plugin for running without deprecation warnings on Blacklight 5.14.

I wrote this plugin way back in BL 2.x days, but I think many don’t know about it, and I don’t think anyone but me is using it, so I thought I’d take the opportunity, having updated it, to advertise it.

blacklight_cql gives your BL app the ability to take CQL queries as input. CQL is a query language for writing boolean expressions; I don’t personally consider it suitable for end-users to enter manually, and don’t expose it that way in my BL app.

But I do use it as an API for other internal software to make complex boolean queries against my BL app, like “format = ‘Journal’ AND (ISSN = X OR ISSN = Y OR ISBN = Z)”. Paired with the BL Atom response, it’s a pretty powerful query API against a BL app.
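For illustration, here is a minimal sketch of how another application might issue such a query and fetch the Atom response. The host and the parameter names (search_field, q) are assumptions for the sake of the example, not the plugin’s documented interface:

    import requests

    # Hypothetical client of a Blacklight app running blacklight_cql.
    # The catalog URL and parameter names below are illustrative assumptions.
    base = "https://catalog.example.edu/catalog.atom"
    cql = 'format = "Journal" AND (issn = "1234-5678" OR isbn = "9780306406157")'
    resp = requests.get(base, params={"search_field": "cql", "q": cql}, timeout=30)
    resp.raise_for_status()
    print(resp.text[:500])  # Atom XML describing the matching records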

Both direct Solr fields and search_fields you’ve configured in Blacklight are available in CQL; they can even be mixed and matched in a single query.

The blacklight_cql plug-in also provides an SRU/ZeeRex EXPLAIN handler, for a machine-readable description of what search fields are supported via CQL.  Here’s “EXPLAIN” on my server:

The plug-in does NOT provide a full SRU/SRW implementation — but as it does provide some of the hardest parts of an SRW implementation, it would probably not be too hard to write a bit more glue code to get a full implementation.  I considered doing that to make my BL app a target of various federated search products that speak SRW, but never wound up having a business case for it here.  (Also, it may or may not actually work out, as SRW tends to vary enough that even if it’s a legal-to-spec SRW implementation, that’s no guarantee it will work with a given client).

Even though the blacklight_cql plugin has been around for a while, it’s perhaps still somewhat immature software (or maybe it’s that it’s “legacy” software now?). It’s worked out quite well for me, but I’m not sure anyone else has used it, so it may have edge case bugs I’m not running into, or bugs that are triggered by use cases other than mine. It’s also, I’m afraid, not very well covered by automated tests. But I think what it does is pretty cool, and if you have a use for what it does, starting with blacklight_cql should be a lot easier than starting from scratch.

Feel free to let me know if you have questions or run into problems.

Filed under: General

Islandora: Meet Your Developer: Jared Whiklo

planet code4lib - Mon, 2015-08-24 14:26

The Islandora community has seen a lot of growth since the Islandora Foundation got its start in 2013. The growth of our user and institutional community has been easy to see, but there has been another layer of growth in a vital part of the community that isn't always as visible: Islandora developers. Modules, bug fixes, and other commits to the Islandora codebase are coming from a much wider variety of sources than in the early days of Islandora.

Today, we are going to learn more about one of those community developers. Jared Whiklo is an Applications Developer at the University of Manitoba. He has also been an integral part of the Islandora 7.x-2.x development team and will be co-leading Islandora's first Community Sprint at the end of the month. Jared has authored some handy Islandora tools of his own, including Islandora Custom Solr, which replaces SPARQL queries with Solr queries where possible for speed improvements. You can learn more about how he runs Islandora from the University of Manitoba's entry in the Islandora Deployments Repo.

Please tell us a little about yourself. What do you do when you’re not at work?

I am a self-taught programmer from days past (like Turbo Pascal on 14 disks, past). I am married with two young kids. I like to build, fix things, camp (in a tent), bike, skate and run the occasional marathon.


How long have you been working with Islandora? How did you get started?

Over the past 3 years in my current position I have slowly gotten deeper and more involved in Islandora. Our institution had invested early in the Islandora project; we liked the flexibility as we were moving away from about 3 different legacy products.


Sum up your area of expertise in three words:

Master of none


What are you working on right now?

We are migrating content from various different systems into our Islandora instance as well as bringing other groups on campus on-board to store their data.


What contribution to Islandora are you most proud of?

I am proud of each little contribution. Every little bit helps to move the community forward.


What new feature or improvement would you most like to see?

Islandora 7.x-2.x!!


What’s the one tool/software/resource you cannot live without?

Git. When you swing between work for different interests it makes it vital. 


If you could leave the community with one message from reading this interview, what would it be?

Don't get discouraged.

Casey Bisson: Compact camera recommendations

planet code4lib - Mon, 2015-08-24 03:55

A friend asked the internet:

Can anyone recommend a mirrorless camera? I have some travel coming up and I’m hesitant to lug my DSLR around.

Of course I had an opinion:

I go back and forth on this question myself. My current travel camera is a Sony RX100 mark 3 (the mark 4 was recently released). Some of my photos with that camera are on Flickr. If I decide to get a replacement for my bigger cameras, I'll probably go with a full frame Sony A7 of some sort. The Fuji X system APS-C, and Olympus and Panasonic Micro 4/3 cameras look great, but they don't offer enough improvement over the RX100 to excite me much.

One of the biggest issues for me is sensor size. The smallest camera with the largest sensor is usually the winner for me. Other compact cameras I like include the Panasonic LUMIX LX100 and Canon PowerShot G1 X Mark II. Both have bigger sensors for shallower depth of field. If the Panasonic supported remote shutter release I would definitely have picked that instead of the Sony (I have a predecessor to the LX100, the LX3, that I loved). If you don't care to do timelapse like I do, then remote shutter release might not be a requirement for you.

Back to my RX100: it's my go-to digital. I shoot raw, sometimes with auto-bracketing, to maximize dynamic range. Even without bracketing, the raw files have great dynamic range – much more than my Canon bodies. The only reason I've used my Canon bodies recently is when I needed a hot shoe for strobist work (which I'd like to do more of).

To give context to my rambling: I offered my camera history up to mid-2014 previously. After that, I got deep into film, including instant and celluloid. My darling wife agreed to let me buy a Hasselblad in March if I promised not to say a word about buying another camera for a full year. That lasted about a month, but at least (most) film cameras are cheap. I'm easy to find on Flickr and Instagram.

Terry Reese: MarcEdit Validate Headings: Part 2

planet code4lib - Mon, 2015-08-24 02:16

Last week, I posted an update that included the early implementation of the Validate Headings tool.  After a week of testing, feedback and refinement, I think that the tool now functions in a way that will be helpful to users.  So, let me describe how the tool works and what you can expect when the tool is run.


The Validate Headings tool was added as a new report to the MarcEditor to enable users to take a set of records and get back a report detailing how many records had corresponding Library of Congress authority headings.  The tool was designed to validate data in the 1xx, 6xx, and 7xx fields.  The tool has been set to only query headings and subjects that utilize the LC authorities.  At some point, I’ll look to expand to other vocabularies.

How does it work

Presently, this tool must be run from within the MarcEditor – though at some point in the future, I’ll extract it out of the MarcEditor and provide a stand-alone function and an integration with the command-line tool. Right now, to use the function, you open the MarcEditor and select the Reports/Validate Headings menu.

Selecting this option will open the following window:

Options – you’ll notice 3 options available to you. The tool allows users to decide which values they would like to have validated. They can select names (1xx, 600, 610, 611, 7xx) or subjects (6xx). Please note, when you select names, the tool does look up the 600, 610, and 611 as part of the process because the validation of these subjects occurs within the name authority file. The last option deals with the local cache. As MarcEdit pulls data from the Library of Congress, it caches the data that it receives so that it can use it on subsequent headings validation checks. The cache will be used until it expires after 30 days; however, a user can at any time check this option and MarcEdit will delete the existing cache and rebuild it during the current data run.

A couple of things you’ll also note on this screen: there is an Extract button, and it’s not enabled. Once the Validate report has been run, this button will become enabled if any records are identified as having headings that could not be validated against the service.

Running the Tool:

Couple notes about running the tool. When you run the tool, what you are asking MarcEdit to do is process your data file and query the Library of Congress for information related to the authorized terms in your records. As part of this process, MarcEdit sends a lot of data back and forth to the Library of Congress utilizing the service. The tool attempts to use a light touch, only pulling down headings for a specific request – but do realize that a lot of data requests are generated through this function. You can estimate approximately how many requests might be made on a specific file by using the following formula: (number of records x 2) + (number of records), assuming that most records will have 1 name and 2 subjects to authorize per record. So a file with 2500 records would generate ~7500 requests to the Library of Congress. Now, this is just a guess; in my tests, I’ve had some sets generate as many as 12,000 requests for 2500 records and as few as 4,000 requests for 2500 records – but 7500 tended to be within 500 requests in most test files.
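For a sense of what each of those lookups involves, here is a minimal sketch of a single heading check against the public known-label resolver at id.loc.gov. This is only an illustration of the general pattern; the exact endpoints, response headers, and caching behavior MarcEdit uses may differ:

    import requests
    from urllib.parse import quote

    def lookup_lc_heading(label, scheme="names"):
        # Resolve one heading against id.loc.gov's known-label service.
        # Returns (authority URI, preferred label) if found, otherwise None.
        url = "https://id.loc.gov/authorities/%s/label/%s" % (scheme, quote(label))
        resp = requests.get(url, allow_redirects=False, timeout=30)
        if resp.status_code in (302, 303):
            # The resolver redirects to the matched authority record and
            # reports the URI and preferred form in response headers.
            return resp.headers.get("X-URI"), resp.headers.get("X-PrefLabel")
        return None  # not found (404) or some other response

    # e.g. lookup_lc_heading("Opera--Social aspects--United States", scheme="subjects")
    #      lookup_lc_heading("Breeding, Marshall", scheme="names")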

So why do we care? Well, this report has the potential to generate a lot of requests to the Library of Congress’s identifier service – and while I’ve been told that there shouldn’t be any issues with this, I think that question won’t really be answered until people start using it. At the same time, this function won’t come as a surprise to the folks at the Library of Congress, as we’ve spoken a number of times during development. At this point, we are all kind of waiting to see how popular this function might be, and whether MarcEdit usage will create any noticeable up-tick in the service’s usage.

Validation Results:

When you run the validation tool, the program will go through each record, making the necessary validation requests of the LC ID service. When the process has completed, the user will receive a report with the following information:

Validation Results:
Process completed in: 121.546001431667 minutes.
Average Response Time from LC: 0.847667984420415
Total Records: 2500
Records with Invalid Headings: 1464
**************************************************************
1xx Headings Found: 1403
6xx Headings Found: 4106
7xx Headings Found: 1434
**************************************************************
1xx Headings Not Found: 521
6xx Headings Not Found: 1538
7xx Headings Not Found: 624
**************************************************************
1xx Variants Found: 6
6xx Variants Found: 1
7xx Variants Found: 3
**************************************************************
Total Unique Headings Queried: 8604
Found in Local Cache: 1001
**************************************************************

This represents the header of the report.  I wanted users to be able to quickly, at a glance, see what the Validator determined during the course of the process.  From here, I can see a couple of things:

  1. The tool queried a total of 2500 records
  2. Of those 2500 records, 1464 had at least one heading that was not found
  3. Within those 2500 records, 8604 unique headers were queried
  4. Within those 2500 records, there were 1001 duplicate headings across records (these were not duplicate headings within the same record, but for example, multiple records with the same author, subject, etc.)
  5. We can see how many Headings were found by the LC ID service within the 1xx, 6xx, and 7xx blocks
  6. Likewise, we can see how many headings were not found by the LC ID service within the 1xx, 6xx, and 7xx blocks.
  7. We can see the number of Variants as well.  Variants are defined as names that resolved but whose preferred form, as returned by the Library of Congress, didn’t match what was in the record.  Variants will be extracted as part of the records that need further evaluation.

After this summary of information, the Validation report returns information related to the record # (record number count starts at zero) and the headings that were not found.  For example:

Record #0
Heading not found for: Performing arts--Management--Congresses
Heading not found for: Crawford, Robert W
Record #5
Heading not found for: Social service--Teamwork--Great Britain
Record #7
Heading not found for: Morris, A. J
Record #9
Heading not found for: Sambul, Nathan J
Record #13
Heading not found for: Opera--Social aspects--United States
Heading not found for: Opera--Production and direction--United States

The current report format includes specific information about the heading that was not found.  If the value is a variant, it shows up in the report as:

Record #612
Term in Record: bible.--criticism, interpretation, etc., jewish
LC Preferred Term: Bible. Old Testament--Criticism, interpretation, etc., Jewish
URL:
Heading not found for: Bible.--Criticism, interpretation, etc

Here you see – the report returns the record number, the normalized form of the term as queried, the current LC Preferred term, and the URL to the term that’s been found.

The report can be copied and placed into a different program for viewing or can be printed (see buttons).

To extract the records that need work, minimize or close this window and go back to the Validate Headings Window.  You will now see two new options:

First, you’ll see that the Extract button has been enabled.  Click this button, and all the records that have been identified as having headings in need of work will be exported to the MarcEditor.  You can now save this file and work on the records. 

Second, you’ll see the new link – save delimited.  Click on this link, and the program will save a tab delimited copy of the validation report.  The report will have the following format:

Record ID [tab] 1xx [tab] 6xx [tab] 7xx [new line]

Within each column, multiple headings will be delimited by a colon, so if two 1xx headings appear in a record, the current process creates a single column with the headings separated by a colon, like: heading 1:heading 2.
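Here is a minimal sketch of reading that delimited report, assuming the layout described above (the filename is just a placeholder):

    import csv

    # Tab-delimited columns: Record ID, 1xx, 6xx, 7xx; multiple headings
    # within a column are separated by ":".
    with open("validation_report.txt", newline="", encoding="utf-8") as fh:
        for record_id, h1xx, h6xx, h7xx in csv.reader(fh, delimiter="\t"):
            names = [h for h in h1xx.split(":") if h]
            subjects = [h for h in h6xx.split(":") if h]
            added_entries = [h for h in h7xx.split(":") if h]
            print(record_id, names, subjects, added_entries)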

Future Work:

This function required making a number of improvements to the linked data components – and because of that, the linking tool should work better and faster now. Additionally, because of the variant work I’ve done, I’ll soon be adding code that will give the user the option to update headings for variants as this report or the linking tool is running – and I think that is pretty cool. If you have other ideas or find that this is missing a key piece of functionality – let me know.


DuraSpace News: Welcome Jared Whiklo, University of Manitoba, to the Fedora Committers

planet code4lib - Mon, 2015-08-24 00:00

From Andrew Woods, on behalf of the Fedora Committers and Leadership Team

Winchester, MA  The Fedora Committers and Leadership Teams are pleased to welcome Jared Whiklo, Web Application Developer at the University of Manitoba, to the Fedora Committers team.

DuraSpace News: Second meeting of the DSpace UI Working Group, 8/25

planet code4lib - Mon, 2015-08-24 00:00

From Tim Donohue, DSpace Tech Lead, DuraSpace

Winchester, MA  A reminder that the second meeting of the DSpace UI Working Group is TOMORROW (Tues, Aug 25) at 15:00 UTC (11:00am EDT).  Connection information is below.

Anyone is welcome to attend and join this new working group. A working group charter, with deliverables, is available at

Meeting Agenda:

