On behalf of 2.10 release maintainer Galen Charlton, 2.9 release maintainer Jason Stephenson, and 2.8 release maintainer Bill Erickson, we are pleased to announce the following releases:
All three releases are bugfix releases. With these releases, support for Debian Squeeze is dropped, as that release of Debian is no longer supported or available. Also, 2.8.8 is the last scheduled release in the 2.8.x series, although future security releases may be made if warranted.
Please visit the downloads page to retrieve the server software and staff clients.
The LITA Forum is a highly regarded annual event for those involved in new and leading-edge technologies in the library and information technology field. Please send your proposal submissions here by May 13, 2016, and join your colleagues in Fort Worth, Texas.
The 2016 LITA Forum Committee seeks proposals for the 19th Annual Forum of the Library and Information Technology Association, to be held in Fort Worth, Texas, November 17-20, 2016, at the Omni Fort Worth Hotel.
The Forum Committee welcomes proposals for full-day pre-conferences, concurrent sessions, or poster sessions related to all types of libraries: public, school, academic, government, special, and corporate. Collaborative and interactive concurrent sessions, such as panel discussions or short talks followed by open moderated discussions, are especially welcomed. We deliberately seek and strongly encourage submissions from underrepresented groups, such as women, people of color, the LGBT community and people with disabilities.
The new submission deadline is Friday, May 13, 2016.
Proposals may relate to, but are not restricted to, any of the following topics:
- Discovery, navigation, and search
- Practical applications of linked data
- Library spaces (virtual or physical)
- User experience
- Emerging technologies
- Cybersecurity and privacy
- Open content, software, and technologies
- Systems integration
- Hacking the library
- Scalability and sustainability of library services and tools
- Consortial resource and system sharing
- “Big Data” — work in discovery, preservation, or documentation
- Library I.T. competencies
Proposals may cover projects, plans, ideas, or recent discoveries. We accept proposals on any aspect of library and information technology. The committee particularly invites submissions from first-time presenters, library school students, and individuals from diverse backgrounds.
Vendors wishing to submit a proposal should partner with a library representative who is testing/using the product.
Presenters will submit final presentation slides and/or electronic content (video, audio, etc.) to be made available on the web site following the event. Presenters are expected to register and participate in the Forum as attendees; a discounted registration rate will be offered.
If you have any questions, contact Tammy Allgood Wolf, Forum Planning Committee Chair, at email@example.com.
Last updated April 28, 2016. Created by Peter Murray on April 28, 2016.
From the announcement:
On the afternoon of Friday, June 24, from 1-4 p.m., Stephen Perkins and I will be delivering another half-day Islandora for Managers: Open Source Digital Repository Training session at the Library and Information Technology Association (LITA) American Library Association (ALA) conference in Orlando, Florida. There is a registration fee for this workshop; tickets can be purchased during the ALA conference registration process.
From the announcement:
On the afternoon of Monday, June 13, from 1:30-6 p.m., Melissa Anez and I will be delivering a half-day Islandora for Managers workshop at the International Conference on Open Repositories at Trinity College in Dublin, Ireland. All are welcome, and there is no charge to participate. We’ll be notifying folks when registration is open for the session.
FOR IMMEDIATE RELEASE
Duluth, GA–April 28, 2016
Equinox is proud to announce that NC Cardinal, one of the largest Evergreen consortia, now has two new members. Iredell County Public Library and Henderson County Public Library were recently migrated to Evergreen. These two additions bring NC Cardinal’s total library count to 143.
Henderson County has six branches and migrated over 185,000 bib records along with almost 72,000 patrons. Iredell County has three branches and migrated 173,000 bib records and almost 41,000 patrons. Equinox performed the migration and data extract and will continue to provide support and training. Both libraries are now using Acquisitions within NC Cardinal.
Trina Rushing, Director at Henderson County, had this to say about the move: “It was a pleasure working with Equinox throughout our migration process. The staff were exceptionally helpful, knowledgeable, and willing to work with us to configure our data and settings in a manner that would most benefit our community. The administration is thrilled with the cost savings that the Evergreen ILS provides and our patrons are delighted with the resource sharing opportunities. It’s a win-win for everyone!”
Peggy Carter, Assistant Director at Iredell County, added: “We’re excited to be a part of the NC Cardinal community and are looking forward to beginning resource sharing. We have enjoyed working with the NC Cardinal migration team and the Equinox staff members who helped with our migration. We have been “live” on NC Cardinal for a little more than a week now and I think we are proceeding nicely. We love being part of the Nest.”
Erica Rohlfs, Project Manager for Implementation at Equinox, remarked: “Henderson and Iredell were amazing groups to work with during their migrations! Many staff members played an active role in the two projects. Henderson and Iredell are two sophisticated groups of librarians who thoroughly tested their data and quickly acclimated to Cardinal.”
About Equinox Software, Inc.
Equinox was founded by the original developers and designers of the Evergreen ILS. We are wholly devoted to the support and development of open source software in libraries, focusing on Evergreen, Koha, and the FulfILLment ILL system. We wrote over 80% of the Evergreen code base and continue to contribute more new features, bug fixes, and documentation than any other organization. Our team is fanatical about providing exceptional technical support. Over 98% of our support ticket responses are graded as “Excellent” by our customers. At Equinox, we are proud to be librarians. In fact, half of us have our ML(I)S. We understand you because we *are* you. We are Equinox, and we’d like to be awesome for you.
For more information on Equinox, please visit http://www.esilibrary.com.
Evergreen is an award-winning ILS developed with the intent of providing an open source product able to meet the diverse needs of consortia and high-transaction public libraries. However, it has proven to be equally successful in smaller installations, including special and academic libraries. Today, almost 1,200 libraries across the US and Canada are using Evergreen, including NC Cardinal, SC LENDS, and B.C. Sitka.
For more information about Evergreen, including a list of all known Evergreen installations, see http://evergreen-ils.org.
Sequoia is a cloud-based library solutions platform for Evergreen, Koha, FulfILLment, and more, providing the highest possible uptime, performance, and capabilities of any library automation platform available. Sequoia was designed by Equinox engineers to ensure that our customers are always running the most stable, up-to-date version of the software they choose.
For more information on Sequoia, please visit http://esilibrary.com/what-we-do/sequoia/
First of all, THANK YOU to all of the attendees, presenters, sponsors, and host institutions that helped make the third annual DPLAfest a great success! With so many great sessions, conversations, workshops (and sightseeing!) taking place at once, we wanted to be sure to share a one-stop recap of the highlights. Whether you missed the fest, participated from afar, or are just hoping to revisit some of the great ideas shared during the conference, look no further! This post is your guide to the news, notes, media, and other materials associated with DPLAfest 2016.

Announcements & Milestones
- A growing network: DPLA now has over 13 million items from 1,900 contributing institutions
- Debut of RightsStatements.org, a collaborative approach to rights statements that can be used to communicate the copyright status of cultural objects
- 100 Primary Source Sets now published for educators and students
- Open eBooks launched this spring to a great reception with over 1.4 million access codes distributed to date
- DPLA looks forward to partnering with the Library of Congress
- And…we’re on Instagram!
To find presentation slides and notes from DPLAfest 2016 sessions, visit the online agenda (click on each session to find attached slides and links to notes, where available).

Recorded Sessions
The DPLAfest Opening Plenary session is now available on the DPLAfest 2016 videos page. We are currently processing recordings of additional sessions, which will be available in the coming months. Stay tuned for more video content.
If you weren’t able to make it to the fest (or if you just want to re-live it), check out the fantastic online conversation on Twitter using hashtag #DPLAfest or read our selection of posts on Storify.
Special thanks to the many DPLAfest attendees who helped capture each session on social media!

Instagram
We were excited to see great content contributed by fest participants on our newest social media platform – check out photos from our attendees.
The Digital Public Library of America wishes to thank its generous DPLAfest Sponsors:
- Digital Transitions, Division of Cultural Heritage
- CLIR Digital Library Federation
DPLA also wishes to thank its gracious hosts:
- Library of Congress
- US National Archives and Records Administration
- Smithsonian Institution
DPLAfest host organizations are essential contributors to one of the most prominent gatherings in the country involving librarians, archivists, and museum professionals, developers and technologists, publishers and authors, teachers and students, and many others who work together to further the mission of providing maximal access to our shared cultural heritage.
- For colleges and universities, DPLAfest is the perfect opportunity to directly engage your students, educators, archivists, librarians and other information professionals in the work of a diverse national community of information and technology leaders.
- For public libraries, hosting DPLAfest brings the excitement and enthusiasm of our community right to your hometown, enriching your patrons’ understanding of library services through free and open workshops, conversations, and more.
- For museums, archives, and other cultural heritage institutions, DPLAfest is a great way to promote your collections and spotlight innovative work taking place at your organization.
It’s also a chance to promote your institution nationally and internationally, given the widespread media coverage of DPLAfest and the energy around the event. Look for our formal call for proposals very soon!
This is a guest post by Nicole Contaxis.

On April 12, 2016, Alice Allen, editor of the Astrophysics Source Code Library (ASCL), came to the National Library of Medicine to speak with National Digital Stewardship Residency participants, mentors, and visitors about the importance of software as a research object and about why the ASCL is a necessary and effective resource for the astronomy and astrophysics academic communities.
Astrophysicists and astronomers frequently write their own code to do their research, and this code helps them interpret and manipulate large data sets. These codes, as an integral part of the research process, are important to share for two reasons: (1) sharing increases the efficiency of work by allowing code to be reused, and (2) it helps ensure the transparency of scientific research.
Yet difficulties persist when it comes to encouraging researchers to share source code, regardless of the benefits. Allen talked about how researchers are reluctant to share code that may be “messy,” and about how creating this source code library requires community engagement and change management. She spoke about studying the impact of non-traditional scholarly outputs, like code, and about the issues of scholarly publishing. Allen showed how the ASCL has made it easier for journal authors to cite code, which had previously been a far more difficult procedure. The ASCL assigns Digital Object Identifiers (persistent and unique identifiers) to source code in its library, which means that future academics can cite that code even if it is not featured in a journal article or a more traditional academic resource.

The discussion turned to the difficulties of grant-based funding. The ASCL is basically unfunded, and all labor, including Allen’s, is voluntary. While talking about other code libraries that have lost funding and closed, Allen explained how grant funding, which runs on two- to five-year cycles, does not provide enough time to fully engage a community with a resource, regardless of how well that resource is designed, implemented, and managed. Funding, as a universal source of concern, was a common point of interest, even for attendees without experience working with software or code.
The session included a tour of the Visible Human Project, an NLM project that collects extensive data on a male and a female cadaver, allowing artists and researchers to visualize that data in new and exciting ways.
Today, the American Library Association (ALA) announced that Nick Gross will serve as its 2016 Google Policy Fellow. As part of his summer fellowship, Gross will spend ten weeks in Washington, D.C., working on technology and Internet policy issues. As a Google Policy Fellow, Gross will explore diverse areas of information policy, such as copyright law, e-book licenses and access, information access for underserved populations, telecommunications policy, digital literacy, online privacy, the future of libraries, and others. Google, Inc. pays the summer stipends for the fellows, and the respective host organizations determine the fellows’ work agendas.
Gross will work for the American Library Association’s Office for Information Technology Policy (OITP), a unit of the association that works to ensure the library voice in information policy debates and promote full and equitable intellectual participation by the public. Gross is a Ph.D. candidate at the University of North Carolina, Chapel Hill, specializing in media law and policy. He completed a J.D. at the University of Miami School of Law and is a graduate of the University of California, Davis with an undergraduate degree in international relations. Gross was a staff attorney for the U.S. Court of Appeals for the Eleventh Circuit and is a member of the California Bar.
“ALA is pleased to participate once again in the Google Policy Fellowship program as it has from its inception,” said Alan S. Inouye, director of the ALA Office for Information Technology Policy. “We look forward to working with Nick Gross on information policy topics that leverage his strong background and advance library interests as we prepare for the next presidential Administration.”
Find more information about the Google Policy Fellowship Program.
Austin, TX: If you need a flexible service that allows you to easily access and manage actively used digital content that also requires long-term preservation, then DuraCloud is your solution. Learn more about DuraCloud and DuraCloud Vault, and the differences between these two types of hosted digital preservation services, in this three-minute Quickbyte broadcast from DuraSpace: https://youtu.be/lSvfxrnF7z0
Austin, TX: The most recent Fedora camp, in Pasadena, California, was hosted by the Caltech Library at the California Institute of Technology's Keck Institute for Space Studies.
Collecting web usage data through services like Google Analytics is a top priority for any library. But what about user privacy?
Most libraries (and websites, for that matter) lean on Google Analytics to measure website usage and learn how people access their online content. It’s a great tool. You can learn where people are coming from (the geolocation of their IP addresses, anyway) and what devices, browsers, and operating systems they are using. You can learn how big their screens are. You can identify your top pages and much, much more.
Google Analytics is really indispensable for any organization with an online presence.
But then there’s the privacy issue.

Is Google Analytics a Privacy Concern?
The question is often asked: what personal information is Google Analytics actually collecting? And how does this data collection jibe with our organization’s privacy policies?
It turns out that, as a user of Google Analytics, you’ve already agreed to publish a privacy document on your site outlining the why and what of your analytics program. So if you haven’t done so, you probably should, if only for the sake of transparency.

Personally Identifiable Data
The fact is, if someone really wanted to learn about a particular person, it’s not entirely outside the realm of possibility that they could glean a limited set of personal attributes from the generally anonymized data Google Analytics collects. IP addresses can be loosely linked to people, and if you wanted to, you could set up filters in Google Analytics that look at a single IP address.
Of course, on the Google side, any user who is logged into a Gmail, YouTube, or other Google account is already being tracked and identified by Google. This is a broadly underappreciated fact. And it’s a critical one when it comes to how we approach the question of dealing with the privacy issue.
In both the case of what your organization collects with Google Analytics and the case of what all those web trackers, including Google’s, collect, the onus falls entirely on the user.

The Internet is Public
Over the years, the Internet has become a public space, and users of the Web should understand it as such. Everything you do is recorded and seen. Companies like Google, Facebook, Microsoft, Yahoo!, and many, many others are all in the data mining business, and carriers and Internet Service Providers are in this game as well. They deploy technologies in websites that identify you and then sell your interests, shopping habits, web searches, and other activities to companies interested in selling to you. They’ve made billions selling your data.
Ever done a search on Google and then seen ads all over the Web trying to sell you that thing you searched last week? That’s the tracking at work.

Only You Can Prevent Data Fires
The good news is that, with a little effort, individuals can stop most (but not all) of the data collection. Browsers like Chrome and Firefox support plugins like Ghostery, Avast, and many others that will block trackers.
Google Analytics can be stopped cold by these plugins, but that won’t solve all the problems. Users also need to set up their browsers to delete the cookies websites save. And moving off accounts provided “for free” by data mining companies, such as Facebook, Gmail, and Google.com, can also help.
But you’ll never be completely anonymous. Super cookies are a thing and are very difficult to stop without breaking websites, and some trackers are required in order to load content. So sometimes you need to pay with your data to play.

Policies for Privacy-Conscious Libraries
All of this means that libraries wishing to be transparent and honest about their data collection need to also contextualize that information within the broader data mining debate.
First and foremost, we need to educate our users about what it means to go online. We need to let them know it’s their responsibility alone to control their own data, and we need to provide instructions for doing so.
Unfortunately, this isn’t an opt-in model. That’s too bad; it would be great if the world worked that way. But don’t expect the moneyed interests involved in data mining to allow the US Congress to pass anything that cuts into their bottom line. This ain’t Germany, after all.
We actually do our users a service by going with the opt-out model. This underlines the larger privacy problems on the Wild Wild Web, which our sites are a part of.
New vacancy listings are posted weekly on Wednesday at approximately 12 noon Central Time. They appear under New This Week and under the appropriate regional listing. Postings remain on the LITA Job Site for a minimum of four weeks.
New This Week
Visit the LITA Job Site for more available jobs and for information on submitting a job posting.
DuraSpace News: VIVO Updates for April 24–Mozilla Open Science Hackathon, VIVO User Group Meeting and More
From Mike Conlon, VIVO Project Director
From the organizers of the VIVO 2016 Conference, to be held in Denver, CO, August 17-19:
The VIVO 2016 Planning Committee is excited to announce two invited speakers! We’re looking forward to their talks, and we’re thrilled that, in addition to their invited sessions, both Dr. Ruben Verborgh and Dr. Pedro Szekely will be hosting half-day workshops on August 17th.
From the Federal Depository Library Program (FDLP):
A live training webinar, “School Librarian’s Workshop: Federal Government Resources for K-12 / Taller para maestros de español: Recursos de gobierno federal para niveles K-12,” will be presented on Tuesday, May 31, 2016.
Click here to register!
- Start time: 2:00 p.m. (Eastern)
- Duration: 60 minutes
- Speaker: Jane Canfield, Coordinator of Federal Documents, Pontifical Catholic University of Puerto Rico
- Learning outcomes: Are you a school librarian? Do you work with school librarians or children? The School Librarian’s Workshop will provide useful information for grades K-12, including Ben’s Guide to the U.S. Government and Kids.gov. The webinar will explore specific agency sites which provide information, in English and Spanish, appropriate for elementary and secondary school students. Teachers and school librarians will discover information on Federal laws and regulations and learn about resources for best practices in the classroom.
- Expected level of knowledge for participants: No prerequisite knowledge required.
Closed captioning will be available for this webinar.
The webinar is free; however, registration is required. Upon registering, you will be sent a confirmation email that includes the instructions for joining the webinar.
Registration confirmations will be sent from sqldba[at]icohere.com. To ensure delivery of registration confirmations, registrants should configure junk mail or spam filter(s) to permit messages from that email address. If you do not receive the confirmation, please notify GPO.
GPO’s eLearning platform presents webinars using WebEx. In order to attend or present at a GPO-hosted webinar, a WebEx plug-in must be installed in your internet browser(s). Download instructions.
Visit FDLP Academy for access to FDLP educational and training resources. All are encouraged to share and re-post information about this free training opportunity.
The post School librarian’s workshop: federal government resources for K-12 appeared first on District Dispatch.
Many web sites have explicit terms of service. For example, here are the terms of service that "govern your use of certain New York Times digital products". They start with this clause:
1.1 If you choose to use NYTimes.com (the “Site”), NYT’s mobile sites and applications, any of the features of this site, including but not limited to RSS, API, software and other downloads (collectively, the "Services"), you will be agreeing to abide by all of the terms and conditions of these Terms of Service between you and The New York Times Company ("NYT", “us” or “we”).

So, just by using the services of nytimes.com, the New York Times claims that I have agreed to a whole lot of legal terms and conditions. I didn't have to click a check-box agreeing to them, or do anything explicit. The terms and conditions are not on the front page itself; they're just linked from it. The link is hard to find, in faint type at the very bottom of the page, wedged blandly between "Privacy" and the eye-glazing "Terms of Sale."
Among the terms that I'm deemed to have agreed to are:
2.3 You may download or copy the Content and other downloadable items displayed on the Services for personal use only, ... Copying or storing of any Content for other than personal use is expressly prohibited ...

So, if the Terms of Service apply, Web archives are clearly violating the terms of service. Interestingly, there is an exception:
5.2 ... THE SERVICES AND ALL DOWNLOADABLE SOFTWARE ARE DISTRIBUTED ON AN "AS IS" BASIS WITHOUT WARRANTIES OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, WITHOUT LIMITATION, WARRANTIES OF TITLE OR IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. YOU HEREBY ACKNOWLEDGE THAT USE OF THE SERVICES IS AT YOUR SOLE RISK.

The New York Times claims not to be liable. Even if you thought "arguing with a man who buys ink by the barrel" was a good idea:
11.1 These Terms of Service have been made in and shall be construed and enforced in accordance with New York law. Any action to enforce these Terms of Service shall be brought in the federal or state courts located in New York City.

Good luck with that.
So the interesting question is whether, in the absence of any explicit action on my (or an archive's crawler's) part, the terms of service bind me (or the archive). Now, IANAL, and even actual lawyers appear to believe the answer isn't obvious. But writing on the Technology & Marketing Law Blog a year ago, Venkat Balasubramani suggested that unless there is an explicit action indicating assent, the terms are unlikely to apply:
In place of the flawed browsewrap/clickwrap typology, we can use a simple non-overlapping typology for web interfaces: Category A is a click-through presentation where a user clicks while knowing that the click signals assent to the applicable terms; and Category B is everything else, which is not a contract.

Let us assume for the moment that Balasubramani is correct and that, if there was no click-through, the terms are not binding. In the good old days of Web archiving, this would mean there was no problem, because the crawler would not have clicked the "I agree" box. But in today's Web, browser-based crawlers are clicking on things. Lots of things. In fact, they're clicking on everything they can find. Which might well be an "I agree" box. Lawyers will be able to argue whether the crawler clicked on it "knowing that the click signals assent to the applicable terms".
Making this assumption, Jefferson and I argued as follows. Suppose my browser, or the archive's, were configured to include in the HTTP request to nytimes.com a Link header with rel="license" pointing to the Terms of Service that apply to the services available from the requesting browser. The New York Times would have been notified of these terms far more directly than I had been of their terms by the faint-type link at the bottom of the page that few have ever consciously clicked on. Thus, by exactly the same argument that the New York Times uses to bind me to their terms, they would have been bound to my terms.
What's sauce for the goose is sauce for the gander. If an explicit action is required, archive crawlers that don't click on an "I agree" box are not bound by the terms. If no explicit action is required, only some form of notification, browsers and browser-based crawlers can bind websites to their terms by providing a suitable notification.
What Terms of Service would be appropriate for using my browser? Based on the New York Times' terms, perhaps they should include:
1.2 We may change, add or remove portions of these Terms of Service at any time, which shall become effective immediately upon posting. It is your responsibility to review these Terms of Service prior to each use of the Browser and by continuing to use this Browser, you agree to any changes.

and:
1.4 We may change, suspend or discontinue any aspect of the Services at any time, including the availability of any Services feature, database, or content. We may also impose limits on certain features and services or restrict your access to parts or all of the Services without notice or liability.

and:
4.1 You may not access or use, or attempt to access or use, the Services to take any action that could harm us or a third party. You may not access parts of the Services to which you are not authorized. You may not attempt to circumvent any restriction or condition imposed on your use or access, or do anything that could disable or damage the functioning or appearance of the Services.

I.e., you and your advertising networks better not send us any malware. And, of course, we need the perennial favorite:
5.2 ... THE SERVICES AND ALL INFORMATION THEY CONTAIN ARE DISTRIBUTED ON AN "AS IS" BASIS WITHOUT WARRANTIES OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, WITHOUT LIMITATION, WARRANTIES OF TITLE OR IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. YOU HEREBY ACKNOWLEDGE THAT USE OF THE SERVICES IS AT YOUR SOLE RISK.

A reverse EULA. Wouldn't you like to be able to do this?
So far, this may sound like a parody or a paranoid fantasy. But many online media companies have begun to target client-side browser information to police content delivery. Sites like Forbes, Wired, and maybe even (gasp) The New York Times are now disallowing access to their sites for those with ad-blocking browser add-ons:
We noticed you still have ad blocker enabled. By turning it off or whitelisting Forbes.com, you can continue to our site and receive the Forbes ad-light experience.

It turns out that the "Forbes ad-light experience" includes free bonus malware!
Privacy activist Alexander Hanff argues that using an ad-blocker detector script is basically doing the same sort of thing as a cookie in terms of spying on client-side information within one's web browser, and a letter he received from the EU Commission apparently confirms his assertion.

Thus running a script that collects information from an EU citizen's browser (which is what the vast majority of sites do) apparently requires explicit permission. If Hanff's efforts succeed, anticipate European Web publishers going non-linear.
As the web has grown into a processing environment, it presumes a reciprocal interactivity, the parameters of which are still shifting and unbalanced. In the end, the terms of this overall interplay of information exchange and license seem, as they so often do, inequitable. The future is here; it's just not evenly licensed. On one side, media and other corporate content sites target user browsers, inject (accidentally or via third parties) potentially malicious scripts, monitor for plug-in screeners, install browsing trackers, analyze cookies, and add all sorts of profiling and monitoring scripts, all generally without any explicit agreement on our part. On the other side, we, simple users, are often presumed to agree to prolix legalese and verbose, obscure license agreements, all simply so we can read about people doing yoga with their dogs.