We are very pleased to share the roster of workshop instructors for the upcoming Islandora Camp in Vancouver, BC. Camp will, as usual, split into two groups for hands-on Islandora time on the second day: one group exploring the front-end in the Admin track, and the other looking at code in the Developer track. Here are your instructors:

Developer Track
Mark Jordan is the Head of Library Systems at Simon Fraser University. He has been developing in Drupal since 2007 and is currently leading the effort to migrate SFU Library's digital collections to Islandora. He is a member of the Islandora 7.x-1.3 and 7.x-1.4 release teams and is component manager for several Islandora modules that deal with digital preservation (and developer of several other Islandora-related tools available at his GitHub page). He is also author of Putting Content Online: A Practical Guide for Libraries (Chandos, 2006). Mark taught in the Developer track at iCampCA in 2014.
Mitch MacKenzie is a Solution Architect at discoverygarden where he manages the execution of Islandora projects for institutions across North America and Europe. Mitch has been developing Islandora tools for three years and has been building with Drupal since 2006. His development contributions include the initial work on the Islandora Compound Solution Pack, Islandora Sync, Simple Workflow, and porting the XML Forms family of modules to Islandora 7. This is Mitch's first Islandora Camp as an instructor.

Admin Track
Melissa Anez has been working with Islandora since 2012 and has been the Community and Project Manager of the Islandora Foundation since it was founded in 2013. She spends her time arranging Islandora events, doing what she can to keep the Islandora community ticking along, and writing about herself in the third person in blog posts. Melissa taught in the Admin Track at several previous camps.
Erin Tripp is a librarian, journalist, and business development manager for an open source software services company. Personally, Erin believes in investing in people and ideas – making the open source software space a natural fit. Since 2011, she has been involved in the Islandora project, serving as project manager on close to 30 different Islandora projects, ranging from consulting and installation to custom development and data migration. This is Erin's first Islandora Camp as an instructor.
The rest of Camp will be filled with sessions, and we want you to get up and lead the room. A Call for Proposals is open until December 15th. You can check out the slides linked on schedules from previous camps to see what kinds of presentations the community has given in the past.
The 2014 LITA Forum took place in Albuquerque, NM the first week of November. I had the opportunity to present on Islandora and what we accomplished with the Detroit Public Library Digital Collections site that we built.

Presentation
At the beginning of my presentation I took some time to answer the question, "Why use a DAMS?" instead of building an image gallery on your website or using a web service. (We will have a blog post up next week expanding on this topic.)
About half the audience had heard of Islandora, but only a few were using it in their libraries. Most of them did not know the details about what Islandora is and what…
When I first started my job as Digital Curation Coordinator in June, I didn’t quite know what I would be doing. Then I started figuring it out. As I’ve gotten settled, I’ve realized that I want to be more proactive in identifying tools and platforms that the researchers I’m working with are using so that I can connect with their experience more easily.
However, the truth is that I find it hard to know which tools I should focus on. What usually happens when I learn about a new tool is a cursory read through the documentation: I familiarize myself well enough to share in a few sentences what it does, but most of the time I don't become deeply familiar with it. There are just so many tools out there. It's daunting.
Knowing my tendencies, I decided it would be a good challenge for me to dig deeper into three areas where I am more ignorant than I’d like to be.
I don’t know a lot about data analysis but I think it will be critical in terms of how well I can understand researchers. Of the three, I’m most familiar with SPSS already and I’ll probably devote the most time to learning R (perhaps through this data science MOOC, which fellow LITA blog writer Bryan pointed out). With SAS, I’m mostly interested in learning how it differs from the others rather than delving too deep.
Why these two? It’s pretty arbitrary, I guess: I learned about them in a recent ecology data management workshop I was presenting at. As is often the case, I learned a lot from the other presenters! A big part of my job is figuring out how to help researchers manage their data – and a big barrier to that is the painstaking work of creating metadata.
Digital forensics tool BitCurator
I was lucky enough to be able to attend a two-day workshop at my institution, so I have played around with this in the past. BitCurator is an impressive suite of tools that I’m convinced I need to find a use case to explore further. This is a perfect example of a tool I know decently already – but I really want to know better, especially since I already have people bringing me obsolete media and asking what I can do about it.
What tools do you want to learn? And for anyone who helps researchers with data management in some capacity, what additional tools do you recommend I look into?
Of interest to many Hydranauts:
The Tenth International Conference on Open Repositories, OR2015, will be held on June 8-11, 2015 in Indianapolis (Indiana, USA). The organizers are pleased to invite you to contribute to the program. This year’s conference theme is:
LOOKING BACK, MOVING FORWARD: OPEN REPOSITORIES AT THE CROSSROADS
OR2015 is the tenth OR conference, and this year's overarching theme reflects that milestone: Looking Back/Moving Forward: Open Repositories at the Crossroads. It is an opportunity to reflect on and to celebrate the transformative changes in repositories, scholarly communication and research data over the last decade. More critically, however, it will also help to ensure that open repositories continue to play a key role in supporting, shaping and sharing those changes and an open agenda for research and scholarship.
The full call for proposals can be found at http://www.or2015.net/call-for-proposals/
From Kristi Park, Marketing Manager, Texas Digital Library
Austin, Texas: The Texas Digital Library (TDL) is pleased to announce the development of cost-effective, easy-to-use digital preservation storage for its member institutions through DuraCloud™@TDL. With DuraCloud™@TDL, members can accurately plan preservation costs, enjoy predictability of service, and rely on known, durable technologies for ensuring the integrity of their digital collections.
Happy Movember, DPLA friends! The month of November brings about a great many things—Thanksgiving, brisk breezes, falling leaves—including ditching the razor for a good cause. Movember encourages participants to grow out mustaches and beards to raise awareness for men’s health issues.
In celebration, we’re providing some historic grooming inspiration. Check back once a week for a selection of some of the best beards and mustaches from the DPLA collection, and up your “Movember” game!
This week, we're featuring the one thing the North and South could unite around: excellent facial hair.

- Major General John M. Schofield, Officer of the Federal Army
- Captain William Harris Northrup
- John F. Mackie, Medal of Honor Recipient
- Lewis C. Shepard, Medal of Honor Recipient
- Portrait of Captain George E. Dolphin, of Minnesota
- Portrait of Andrew Anderson, of Minnesota
- Portrait of Captain Asgrim K. Skaro [?], of Minnesota
- Portrait of Jacob Dieter, of Minnesota
- Portrait of Simon Gabert
- Portrait of Jeremiah C. Donahower, of Minnesota
New vacancy listings are posted weekly on Wednesday at approximately 12 noon Central Time. They appear under New This Week and under the appropriate regional listing. Postings remain on the LITA Job Site for a minimum of four weeks.

New This Week
Visit the LITA Job Site for more available jobs and for information on submitting a job posting.
Hello Islandora Community!
Documentation is an important aspect of any software and an essential tool for the Islandora community.
By participating in this survey, you will help the IDIG to better understand and serve the needs of our users.
To contribute, please fill out the Islandora Community Survey, which should take approximately 15 minutes to complete.
If you have any questions, please contact us at email@example.com.
The Free and Open Source Software Outreach Program for Women announced the names of interns who will be participating in the program’s December 2014 – March 2015 internship round.
The Evergreen project is pleased to announce that Julia Lima of Villa Carlos Paz, Cordoba, Argentina will be working with the Evergreen community during this internship period to create a User Interface Style Guide for the new web client.
Julia is a student studying design at the Universidad Provincial de Cordoba in Argentina. She will work on the style guide during her summer break. As part of her contribution to the project this fall, Julia made three specific recommendations to improve the existing web client User Interface.
The Evergreen OPW mentors selected Julia’s proposal after reviewing proposals from nine potential candidates, several of whom worked with the community during the application period and submitted very good proposals. Bill Erickson, Grace Dunbar, and Dan Wells will serve as project mentors for the UI Style Guide.
Expect to hear more from Julia as she begins working on her project next month.
We also hope to continue hearing from the many candidates with whom we connected during the application period. They brought a lot of enthusiasm and fresh ideas to the project, and we encourage everyone to keep working with us as time allows.
I also want to extend thanks to all the mentors who worked with potential candidates during the application period and reviewed the applications; to others in the community who helped the candidates with installation, answered their questions, and provided feedback to their ideas; and to the Evergreen Oversight Board for supporting the project by funding the internship.
There's a conversation shaping up on the Code4Lib email list with the title "Why Learn Unix?", and this is a wonderful question to ask. A lot of technical library jobs ask for Unix experience, and as a result a lot of library schools are injecting bits and pieces of it into their courses, but without a proper understanding of the why of Unix, the how might just go in one ear and out the other. When I was learning about Unix in library school, it was in the context of an introductory course on library IT. I needed no convincing; I fell in love almost immediately and cemented my future as a command line junkie. Others in the course were not so easily impressed, and never received a satisfactory answer to the question of "Why Learn Unix?" other than a terse "Because It's Required". Without a solid understanding of a technology's use, it's nearly impossible to maintain the motivation to learn it. This is especially true of something as archaic and intimidating as the Unix command line interface, which looks like something out of an early '90s hacker movie. Those who don't know Unix get along just fine, so what's the big deal?
The big deal is that Unix is the 800 lb. gorilla of the IT world. While desktops and laptops are usually a pretty even split between Windows and Mac, the server world is almost entirely Unix (either Linux or BSD, both Unix-like systems). If you work in a reasonably technical position, you have probably had to log in to one of these Unix servers to do something. If you are in library school and looking to get a tech-oriented library job after graduating, this WILL happen to you, maybe even before you graduate (a good 50% of my student worker jobs were the result of knowing Unix). As libraries move away from vendor software and externally hosted systems towards Open Source software, Unix use is only going to increase, because pretty much all Open Source software is designed to run on Linux (which is itself Open Source software). The road to an Open Source future for libraries is paved with LIS graduates who know their way around a command line.
So let's assume that I've convinced you to learn Unix. What now? The first step on the journey is deciding how much Unix you want to learn. Unix is deep enough that one can spend a great deal of time getting lost in its complexities (not to say that this wouldn't be time well spent). Any foray into the world of Unix should start with learning how to log in to the system (which can vary a lot depending on whether you are using Windows or Mac, and which Unix system you are trying to log in to). Once you have that under control, learn the basic commands for navigating around the system, copying and deleting files, and checking the built-in manual (the University of Illinois has a great cheat sheet).
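To make that concrete, here is a short hypothetical session showing the kinds of basic commands I mean. The server name and file names are made up for illustration; the commands themselves are standard on any Unix system:

```shell
# Log in to a remote Unix server over SSH (hostname is hypothetical)
ssh username@library-server.example.edu

# Find out where you are and what is here
pwd            # print the current working directory
ls -l          # list files with details (size, date, permissions)
cd /tmp        # change to another directory

# Copy, move, and delete files
echo "hello" > report.txt         # create a small test file
cp report.txt report-backup.txt   # copy it
mv report.txt archive.txt         # rename (move) it
rm report-backup.txt              # delete the copy

# Check the built-in manual for any command
man ls
```

Each command does one small thing; the manual (`man`) is always there when you forget a flag.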
How to learn Unix as opposed to why is a completely separate conversation with just as many strong opinions, but I will say that learning Unix requires more courage than intelligence. The reason most people actively avoid using Unix is because it is so different from the point-and-click world they are used to, but once you get the basics under your belt you may find that you prefer it. There are a lot of things that are much easier to do via command line (once you know how), and if you get really good at it you can even chain commands together into a script that can automatically perform complex actions that might take hours (or days, or weeks, or years) to do by hand. This scriptability is where Unix systems really shine, but by no means do you have to dive in this deep to find value in learning Unix. If you take the time to learn the basics, there will come a time when that knowledge pays off. Who knows, it might even change the direction of your career path.
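As a small illustration of that scriptability, here is a hypothetical one-liner of the sort you might build up from those basics: it counts how many times each HTTP status code appears in a web server log, sorted from most to least frequent. The file name and Apache-style log format (status code in the ninth field) are assumptions for the example:

```shell
# Count occurrences of each status code in an access log:
#   awk pulls out field 9 (the status code),
#   sort groups identical codes together,
#   uniq -c counts each group,
#   sort -rn puts the most frequent first.
awk '{print $9}' access.log | sort | uniq -c | sort -rn
```

Chaining small single-purpose tools with pipes like this is exactly the kind of task that would be tedious to do by hand in a point-and-click interface.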
Do you have any questions or opinions about the need for librarians to learn Unix? Are you struggling with learning Unix and want to air your grievances? Are you a wizard who wants to point out the inaccurate parts of my post? Let me know in the comments!
Check out our brand new screencast video of PeerLibrary 0.3!
We are proud to announce an updated screencast which demos the increased functionality and updated user interface of the PeerLibrary website. This screencast debuted at the Mozilla Festival in October as part of our science fair presentation. The video showcases an article by Paul Dourish and Scott D. Mainwaring entitled “Ubicomp’s Colonial Impulse” as well as the easy commenting and discussion features which PeerLibrary emphasizes. One of the MozFest conference attendees actually recognized the article which drew him towards our booth and into a conversation with our team. Check out the new screencast and let us know what you think!
From Hardy Pottinger on behalf of the DSpace Committers
The DSpace 5.0 Testathon is going on right now, and will continue through November 21, 2014.
• Details on how to participate: see 
• Details about new features, bug fixes in 5.0 and release schedule: see 
Yesterday President Barack Obama re-affirmed his commitment to network neutrality principles and to the strongest rules to protect the open internet. The American Library Association (ALA) welcomes his statement and outline of principles that echo those of public comments filed by the ALA and a coalition of library and higher education organizations this year.
“The ALA heartily agrees with the essential elements of network neutrality affirmed by President Obama: no blocking, no throttling, increased transparency, and no paid prioritization,” said ALA Incoming President Sari Feldman. “As the President noted, these elements are ‘built into the fabric of the internet since its creation.’ In fact, the initial protocols for the internet were developed by institutions of higher education, and universities were the first to deploy private high-speed data networks that formed the test-bed for what later became the public internet.
“Since then, our nation’s libraries and institutions of higher education have become leaders in creating, fostering, using, extending and maximizing the potential of the internet for research, education and the public good. An open “neutral” internet is absolutely crucial to fulfill our missions and serve our communities.
“Further, we are heartened that both the President and recent statements from FCC Chairman Tom Wheeler reflect an understanding that network neutrality must apply to both fixed and mobile broadband. We look forward to continuing to work with the FCC to secure strong, legally enforceable rules that ensure the internet remains an open platform for information exchange, intellectual discourse, creativity, innovation and learning for all,” Feldman concluded.
The post ALA welcomes President Obama’s strong affirmation of net neutrality appeared first on District Dispatch.
The way we access and use information in the digital age is fundamentally mediated by copyright policy. For several decades, this policy has been largely shaped by commercial interests. However, in the last three years, several court decisions have been more protective of public access to information and accommodating to the needs of the education, research, and library sectors. Is this a real trend and will it continue?
On November 18, 2014, the American Library Association (ALA) will host “Too Good to Be True: Are the Courts Revolutionizing Fair Use for Education, Research and Libraries?,” a symposium that will explore copyright policy in a digital and networked environment. During the discussion, a diverse panel of copyright policy experts from the library and publishing fields will attempt to make sense of key court cases such as UCLA v. AIME, Authors Guild v. HathiTrust, and the high profile U.S. Supreme Court case Kirtsaeng v. Wiley. These experts will discuss the prospects these decisions may create for public policy development over the next few years informed by the 2014 midterm elections and the upcoming 2016 general election. RSVP for the event.
This event is offered under the rubric of the Policy Revolution! Initiative of ALA's Office for Information Technology Policy (OITP). Central to this initiative is strengthening the library community's engagement and visibility in national public policy. Look for additional outreach activities in 2015.

Panelists include:
Mary Rasenberger is the newly appointed Executive Director for the Authors Guild. Mary has worked in the area of intellectual property, technology, and copyright law for 25 years. Prior to joining the Authors Guild, Mary was a partner at Cowan DeBaets Abrahams and Sheppard where she counseled publishing, media, entertainment, internet, and other technology companies, as well as authors and artists in all areas of copyright and related rights, including licensing, litigation, infringement analysis, policy, enforcement and digital rights. From 2002 to 2008, Mary worked for the U.S. Copyright Office and Library of Congress as senior policy advisor and program director for the National Digital Preservation Program. Mary also worked at other major New York law firms and for BMG Music.
Jonathan Band has represented a wide range of clients, including technology companies and library associations, on domestic and international copyright policy matters for more than 25 years. He has filed amicus briefs on behalf of the Library Copyright Alliance (LCA) in numerous important cases, such as Kirtsaeng v. Wiley, Authors Guild v. HathiTrust, Authors Guild v. Google, and the recently decided Georgia State e-reserves case. He also has represented the Library Copyright Alliance in connection with the Marrakesh Treaty for the print-disabled and the U.S. House Judiciary Committee’s ongoing review of copyright.
Brandon Butler is the practitioner-in-residence at the Glushko-Samuelson Intellectual Property Clinic at American University’s Washington College of Law (WCL). At the clinic, Professor Butler supervises student attorneys who represent clients in a variety of IP matters. Before joining the WCL faculty, Brandon was the director of Public Policy Initiatives at the Association of Research Libraries (ARL). While there, he worked on a host of issues ranging from fair use to network neutrality to the PATRIOT Act. He is a co-facilitator, with Professors Peter Jaszi and Patricia Aufderheide, of the “ARL Code of Best Practices in Fair Use for Academic and Research Libraries,” released in January 2012.
RSVP now if you would like to attend the no-cost event.
The post ALA to host copyright policy discussion in Washington, D.C. appeared first on District Dispatch.
The Code4Lib 2015 Program Committee is happy to announce that voting is now open for prepared talks.
To vote, visit http://vote.code4lib.org/election/33, review the proposals, and assign points to those presentations you would like to see on the program this year.
You will need to log in with your code4lib.org account in order to vote. If you have any issues with your account, please contact Ryan Wick at firstname.lastname@example.org.
Voting will end on Tuesday, November 25, 2014 at 11:59:59 PM PT (GMT-8).
The top 10 proposals are guaranteed a slot at the conference. The Program Committee will curate the remainder of the program in an effort to ensure diversity in program content and presenters. Community votes will still weigh heavily in these decisions.
The final list of presentations will be announced in early- to mid-December.
For more information about Code4Lib 2015, visit
France may not have any money left for its universities but it does have money for academic publishers.
While university presidents learn that their funding is to be reduced by EUR 400 million, the Ministry of Research has decided, in great secrecy, to pay EUR 172 million to the world leader in scientific publishing, Elsevier.
In an exclusive piece published by the French news outlet Rue89 (Le Monde press group), Open Knowledge France members and open science evangelists Pierre-Carl Langlais and Rayna Stamboliyska released the agreement between the French Ministry and Elsevier. The post originally appeared here, in French.

The Work of Volunteers
The scientific publishing market is an unusual sector: those who create the value are never remunerated. Instead, they often pay to see their work published. Authors do not receive any direct financial gain from their articles, and peer review is conducted on a voluntary basis.
This enormous amount of work is indirectly funded by public money. Writing articles and participating in peer review are part of the expected activities of researchers, expected activities that lead to further research funding from the taxpayer.
Scientific publishing is centred around several privately held publishing houses that own the journals where scientific research is published. Every journal has an editorial review board that receives potential contributions, which are then sent to volunteer scientists for peer review. On the basis of the comments and feedback from the peer review process, a decision is made whether an article is to be published or rejected and returned to the author(s).
When an article is accepted, the authors usually sign their copyright over to the publisher, which then sells access to the work; alternatively, they can choose to make their work available to everyone, which often involves paying a fee. In some cases journals receive income only for the service of publishing an article, which is henceforth free to the reader; but some journals operate a mixed 'hybrid' model, in which authors pay to publish some articles while libraries still pay to purchase the rest of the journal. This is called 'double dipping', and while publishers claim they take it into account in their journal pricing, the secrecy around publisher contracts and the lack of data make it impossible to tell where the money is flowing.
Huge Profit Margins
This matters because access to these journals is rarely cheap, and publishers sell access primarily to academic libraries and research laboratories. In other words, the financial resources for the publication of scientific papers come from credits granted to research laboratories, and access to the journals these papers are published in is purchased by those same institutions. In both cases, the purchases are subsidized by the public.
The main actors in scientific publishing generate considerable income. In fact, the sector is dominated by an oligopoly with “the big four” sharing most of the global pie:
- The Dutch Elsevier
- The German Springer
- The American Wiley
- The English Informa
They draw huge profits: annual net profit margins of 30% to 40% in the case of Elsevier and Springer.
In other words, these four major publishers resell to universities content that the institutions themselves have produced.
In this completely closed market, competition does not exist and tacit collusion is the rule: subscription prices have continued to soar for thirty years, while the cost of publishing, in the era of electronic publishing, has never been lower. For example, an annual subscription to Elsevier's journal 'Brain Research' costs a whopping EUR 15,000.
The Ministry Shoulders This Policy
The agreement between France and Elsevier amounted to ca. EUR 172 million for 476 universities and hospitals.
The first payment (approximately EUR 34 million of public money) was paid in full in September 2014. In return, 476 public institutions will have access to a body of about 2,000 academic journals.
This published research was mainly financed by public funds. In the end, therefore, we will have paid Elsevier twice: once to publish, and a second time to read.
This is not a blip. The agreement between Elsevier and the government is established policy. In March 2014, Geneviève Fioraso, Minister of Higher Education and Research, elaborated upon the main foci of her political agenda to the Academy of Sciences, two of which involve privileged interactions with Elsevier. This is the first time that negotiating the right to read for hundreds of public research institutions and universities has been managed at the national level.
One could argue in favour of the Ministry's benevolence vis-à-vis public institutions, to the extent that it supports this vital commitment to research. Such an argument would, however, fail to highlight multiple issues. Among these, we would pinpoint the total opacity in the choice of supplier (why Elsevier in particular?) and the lack of a competitive pitch between several actors (for such an amount, open public tendering is required). The major problem preventing competition is the monopolistic hold of publishers over knowledge: no one else has the right to sell that particular article on cancer research that a researcher in Paris requires for their work, so there is little choice but to continue paying the individual publishers under the current system. Their hold only expires with copyright, which lasts 70 years from the death of the last author and is therefore entirely incompatible with the timeline of scientific discovery.
Prisoners of a game with pre-set rules, the negotiators (the Couperin consortium and the Bibliographic Agency for Higher Education, abbreviated ABES) have not had much breathing space. As mentioned above, a competitive pitch did not happen. Article 4 of the agreement is explicit: "Market for service provision without publication and without prior competition, negotiated with a particular tenderer for reasons connected with the protection of exclusive distribution rights."
A strange setup thus materialises, allowing Elsevier to keep its former customers in its back pocket: research organisations that already have a contract with the publisher can only join the national license provided they accept a cost increase (from 2.5% to 3.5%), while institutions without a previous contract are unaffected.
How Many Agreements of the Sort?
To inflate the bill even more, Elsevier sells bundles of journals (its 'flagship journals'): "No title considered as a 'flagship journal' (as listed in Annex 5) can be withdrawn from the collection the subscribers can access" (art. 6.2). These 'flagship journals' cannot all claim outstanding impact factors. Moreover, they are not equally relevant across disciplines and scientific institutions.
The final price has been reduced from the initial February estimate: "only" EUR 172 million instead of EUR 188 million. Yet this discount is no generous gift from Elsevier: numerous institutions have withdrawn from the national license, from 642 partners in February down to only 476 in the final deal.
Needless to say, the situation is outrageous. Yet it is just one agreement with one among several vendors. A recent report by the French Academy of Sciences [http://www.academie-sciences.fr/presse/communique/rads_241014.pdf] alluded to a total of EUR 105 million annually dedicated to acquiring access to scientific publications. This figure, however, falls far below the reality. Indeed, the French agreement with Elsevier grants access to publications only to some of the research institutions and universities in France, and yet in this case the publisher alone already takes EUR 33-35 million per year. The actual costs plausibly reach a total of EUR 200-300 million.
An alternative exists.
Elsewhere in Europe…
An important international movement has emerged to promote and defend free and open access to scientific publications. The overall goal is to make this content accessible to, and reusable by, anyone.
As a matter of fact, researchers have no interest whatsoever in maintaining the current system. Copyright in scholarly publication does not reward authors; it is a fiction whose main goal is to perpetuate the publisher's rights. Not only does this enclosure limit access to scientific publications, it also prevents researchers from reusing their own work, as they often concede their copyright when signing publication agreements.
The main barrier to opening up access to publications appears to stem from the government. No action is taken for research to be released from the grip of oligopolistic publishers. Assessment of publicly funded research focuses on journals referred to as “qualifying” (that is, journals mainly published by big editors). Some university departments even consider that open access publications are, by default, “not scientific”.
Several European Countries lead the way:
- Germany has passed a law limiting the publishers' exclusive rights to one year. Once the embargo has expired, researchers are free to republish their work and allow open access to it. More details here.
- Negotiations have been halted in Elsevier's home base, the Netherlands. Even though Elsevier pays most of its taxes there, the Dutch government fully supports the demands of researchers and librarians, who aim to open up the whole corpus of Dutch scientific publications by 2020. More details here.
The most chilling potential effect of the Elsevier deal is removing, for five years, any possible collective incentive to an ambitious French open access policy. French citizens will continue to pay twice for research they cannot read. And the government will sustain a closed and archaic editorial system whose defining feature is to single-handedly limit the right to read.