Planet Code4Lib - http://planet.code4lib.org

District Dispatch: New director named for the U.S. National Library of Medicine

Thu, 2016-05-12 16:06

Yesterday, the National Institutes of Health (NIH) Director Dr. Francis Collins announced the appointment of Dr. Patricia Flatley Brennan as the next director of the National Library of Medicine (NLM), the world’s largest medical library and a component of NIH. Dr. Brennan comes to NLM from the University of Wisconsin-Madison, where she is the Lillian L. Moehlman Bascom Professor, School of Nursing and College of Engineering. She will be the first woman and first nurse to lead NLM. Dr. Brennan is expected to assume her post in August.

“Dr. Brennan brings her incredible experience of having cared for patients as a practicing nurse, improved the lives of home-bound patients by developing innovative information systems and services designed to increase their independence, and pursued cutting-edge research in data visualization and virtual reality,” said Dr. Collins.

NLM, based on the campus of NIH in Bethesda, Maryland, was founded in 1836 and has earned a reputation for innovation and public service. ALA has had the pleasure of working with a number of NLM staffers, and we look forward to collaborating with Dr. Brennan and her team.

The previous director, Dr. Donald Lindberg, led NLM from 1984 until his retirement in 2015. Among his many achievements was the founding of the National High-Performance Computing and Communications (HPCC) Office in 1992 and his service as its first director for three years. Establishing the HPCC Office was an important early milestone in the development, growth, and institutionalization of advanced information technology within and across federal agencies. I mention HPCC because it was my employer (though it has since evolved and been renamed the National Coordination Office for Networking and Information Technology Research and Development) prior to my coming to ALA.

The post New director named for the U.S. National Library of Medicine appeared first on District Dispatch.

LITA: LITA events @ ALA Annual 2016

Thu, 2016-05-12 15:00
Going to ALA Annual? Check out all the great LITA events.

Go to the LITA at ALA Annual conference web page.

ATTEND THE LITA PRESIDENT’S PROGRAM FEATURING DR. SAFIYA NOBLE

Sunday June 26, 2016 from 3:00 pm to 4:00 pm

Safiya Noble

Dr. Noble is an Assistant Professor in the Department of Information Studies in the Graduate School of Education and Information Studies at UCLA. She conducts research in socio-cultural informatics, including feminist, historical and political-economic perspectives on computing platforms and software in the public interest. Her research is at the intersection of culture and technology in the design and use of applications on the Internet.

Register for ALA Annual and Discover Ticketed Events

SIGN UP FOR YOUR CHOICE OF 3 PRE-CONFERENCES

All on Friday, June 24 from 1:00 pm – 4:00pm

Digital Privacy and Security: Keeping You and Your Library Safe and Secure in a Post-Snowden World
Presenters: Jessamyn West, Library Technologist at Open Library and Blake Carver, LYRASIS

Islandora for Managers: Open Source Digital Repository Training
Presenters: Erin Tripp, Business Development Manager at discoverygarden inc. and Stephen Perkins, Managing Member of Infoset Digital Publishing

Technology Tools and Transforming Librarianship
Presenters: Lola Bradley, Reference Librarian, Upstate University; Breanne Kirsch, Coordinator of Emerging Technologies, Upstate University; Jonathan Kirsch, Librarian, Spartanburg County Public Library; Rod Franco, Librarian, Richland Library; Thomas Lide, Learning Engagement Librarian, Richland Library

OTHER FEATURED LITA EVENTS INCLUDE

Top Technology Trends
Sunday June 26, 2016 from 1:00 pm to 2:30 pm

This regular program features our ongoing roundtable discussion about trends and advances in library technology by a panel of LITA technology experts. The panelists will describe changes and advances in technology that they see having an impact on the library world, and suggest what libraries might do to take advantage of these trends. Panelists will be announced soon. For more information on Top Tech Trends, go to http://ala.org/lita/ttt

Imagineering – Science Fiction/Fantasy and Information Technology: Where We Are and Where We Could Have Been
Saturday June 25, 2016, 1:00 pm – 2:30 pm

Science Fiction and Fantasy literature have a unique ability to speculate about things that have never been, but they can also be predictive about things that never were. Through the lens of alternate history and counterfactual literature, one can look at how the world might have changed if different technologies had been pursued. For example, what if, instead of developing microprocessors, computing had depended on vacuum tubes, or on something fantastic like the harmonies in the resonance of crystals? Join LITA, the Imagineering Interest Group, and a panel of distinguished Science Fiction and Fantasy writers as they discuss what the craft can tell us about not only who we are today, but who, given a small set of differences, we could have been. The availability of authors can change; currently slated authors are:

  • Charlie Jane Anders — All the Birds in the Sky
  • Katherine Addison — The Goblin Emperor
  • Catherynne Valente — Radiance
  • Brian Staveley — The Providence of Fire

Open House
Friday June 24, 2016, 3:00 pm – 4:00 pm

LITA Open House is a great opportunity for current and prospective members to talk with Library and Information Technology Association (LITA) leaders and learn how to make connections and become more involved in LITA activities.

Happy Hour
Sunday June 26, 2016, 5:30 pm – 8:00 pm

This year marks a special LITA Happy Hour as we kick off the celebration of LITA’s 50th anniversary. Make sure you join the LITA Membership Development Committee and LITA members from around the country for networking, good cheer, and great fun! Expect lively conversation and excellent drinks; cash bar. Help us cheer for 50 years of library technology.

Find all the LITA programs and meetings using the conference scheduler.

MORE INFORMATION AND REGISTRATION

Go to the LITA at ALA Annual conference web page.

David Rosenthal: The Future of Storage

Thu, 2016-05-12 15:00
My preparation for a workshop on the future of storage included giving a talk at Seagate and talking to the all-flash advocates. Below the fold I attempt to organize into a coherent whole the results of these discussions and content from a lot of earlier posts.

I'd like to suggest answers to five questions related to the economics of long-term storage:
  • How far into the future should we be looking?
  • What do the economics of storing data for that long look like?
  • How long should the media last?
  • How reliable do the media need to be?
  • What should the architecture of a future storage system look like?
How far into the future?

Source: Disks for Data Centers

Discussions of storage tend to focus on the sexy, expensive, high-performance market. Those systems are migrating to flash. The data in those systems is always just a cache. In the long term, that data lives further down the hierarchy. What I'm talking about is the next layer down the hierarchy, the capacity systems where all the cat videos, snapshots and old blog posts live. And the scientific data.

Iain Emsley's talk at PASIG2016 on planning the storage requirements of the 1PB/day Square Kilometer Array mentioned that the data was expected to be used for 50 years. How hard a problem is planning with this long a horizon? Lets go back 50 years and see.
Disk

IBM 2314s (source)

In 1966, as I was writing my first program, disk technology was about 10 years old; the IBM 350 RAMAC was introduced in 1956. The state of the art was the IBM 2314. Each removable disk pack stored 29MB on 11 platters with a 310KB/s data transfer rate, roughly equivalent to 60MB/rack. The SKA would have needed to add nearly 17M racks, or about 10 square kilometers of them, each day.
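As a quick back-of-the-envelope check of that rack count (my arithmetic, using the 60MB/rack figure above):

$$\frac{1\,\mathrm{PB/day}}{60\,\mathrm{MB/rack}} \approx \frac{10^{15}\,\mathrm{B}}{6\times10^{7}\,\mathrm{B}} \approx 1.7\times10^{7}\ \mathrm{racks/day}$$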

R. M. Fano's 1967 paper The Computer Utility and the Community reports that for MIT's IBM 7094-based CTSS:
the cost of storing in the disk file the equivalent of one page of single-spaced typing is approximately 11 cents per month.

It would have been hard to believe a projection that in 2016 it would be more than 7 orders of magnitude cheaper.

IBM 2401s. By Erik Pitti, CC BY 2.0.

The state of the art in tape storage was the IBM 2401, the first nine-track tape drive, storing 45MB per tape with a 320KB/s maximum transfer rate, roughly equivalent to 45MB/rack of accessible data.

Your 1966 alter-ego's data management plan would be correct in predicting that 50 years later the dominant media would be "disk" and "tape", and that disk's lower latency would carry a higher cost per byte. But it's hard to believe that any more detailed predictions about the technology would be correct. The extraordinary 30-year history of 30-40% annual cost per byte decrease, the Kryder rate, had yet to start.

Although disk is a 60-year-old technology, a 50-year time horizon for a workshop on the Future of Storage may seem too long to be useful. But a 10-year time horizon is definitely too short to be useful. Storage is not just a technology, but also a multi-billion dollar manufacturing industry dominated by a few huge businesses, with long, hard-to-predict lead times.

Seagate 2008 roadmap

To illustrate the lead times, here is a Seagate roadmap slide from 2008 predicting that perpendicular magnetic recording (PMR) would be replaced in 2009 by heat-assisted magnetic recording (HAMR), which would in turn be replaced in 2013 by bit-patterned media (BPM).

In 2016, the trade press is reporting that:
Seagate plans to begin shipping HAMR HDDs next year.

ASTC 2016 roadmap

Here is a recent roadmap from ASTC showing HAMR starting in 2017 and BPM in 2021. So in 8 years HAMR has gone from next year to next year, and BPM has gone from 5 years out to 5 years out. The reason for this real-time schedule slip is that as technologies get closer and closer to the physical limits, the difficulty and above all the cost of getting from lab demonstration to shipping in volume increase exponentially.

A recent TrendFocus report suggests that the industry is preparing to slip the new technologies even further:
The report suggests we could see 14TB PMR drives in 2017 and 18TB SMR drives as early as 2018, with 20TB SMR drives arriving by 2020.

I believe this is mostly achieved by using helium-filled drives to add platters, and thus cost, not by increasing density above current levels.
Tape

Historically, tape was the medium of choice for long-term storage. Its basic recording technology is around 8 years behind hard disk, so it has a much more credible technology road-map than disk. But its importance is fading rapidly. There are several reasons:
  • Tape is a very small market in unit terms: just under 20 million LTO cartridges were sent to customers last year. As a comparison, WD and Seagate combined shipped more than 350 million disk drives in 2015; the tape cartridge market is less than 6 per cent of the disk drive market in unit terms.
  • In effect there is now a single media supplier, raising fears of price gouging and supply vulnerability. The disk market has consolidated too, but there are still two very viable suppliers.
  • The advent of data-mining and web-based access to archives make the long access latency of tape less tolerable.
  • To maximize the value of the limited number of slots in the robots it is necessary to migrate data to new, higher-capacity cartridges as soon as they appear. This has two effects. First, it makes the long data life of tape media less important. Second, it consumes a substantial fraction of the available bandwidth, up to a quarter in some cases.
Flash

Source: The Register

Flash as a data storage technology is almost 30 years old. Eli Harari filed the key enabling patent in 1988, describing multi-level cell, wear-leveling and the Flash Translation Layer. Flash has yet to make a significant impact on the capacity storage market. Probably, at some point in the future, it will displace hard disk as the medium for this level of the hierarchy. There are two contrasting views as to how long this will take.

Exabytes shipped

First, the conventional wisdom as expressed by the operators of cloud services and the disk industry, and supported by these graphs showing how few exabytes of flash are shipped in comparison to disk. Although flash is displacing disk from markets such as PCs, laptops and servers, Eric Brewer's fascinating keynote at this year's FAST conference started from the assertion that the only feasible medium for bulk data storage in the cloud was spinning disk.

NAND vs. HDD capex/TB

The argument is that flash, despite its many advantages, is and will remain too expensive for the capacity layer. The graph of the ratio of capital expenditure per TB of flash and hard disk shows that each exabyte of flash contains about 50 times as much capital as an exabyte of disk. Because:

factories to build 3D NAND are vastly more expensive than plants that produce planar NAND or HDDs -- a single plant can cost $10 billion

no-one is going to invest the roughly $80B needed to displace hard disks because the investment would not earn a viable return.

WD unit shipments

Second, the view from the flash advocates. They argue that the fabs will be built, because they are no longer subject to conventional economics. The governments of China, Japan, and other countries are stimulating their economies by encouraging investment, and they regard dominating the market for essential chips as a strategic goal, something that justifies investment. They are thinking long-term, not looking at the next quarter's results. The flash companies can borrow at very low interest rates, so even if they do need to show a return, they only need to show a very low return.

Seagate unit shipments

If the fabs are built, the increase in supply will increase the Kryder rate of flash. This will increase the trend of storage moving from disk to flash. In turn, this will increase the rate at which disk vendors' unit shipments decrease. In turn, this will decrease their economies of scale, and cause disk's Kryder rate to go negative. The point at which flash becomes competitive with disk moves closer in time. Disk enters a death spiral.

The result would be that the Kryder rate for the capacity market, which has been very low, would get back closer to the historic rate sooner, and thus that storing bulk data for the long term would be significantly cheaper. But this isn't the only effect. When Data Domain's disk-based backup displaced tape, greatly reducing the access latency for backup data, the way backup data was used changed. Instead of backups being used mostly to cover media failures, they became used mostly to cover operator errors.

Similarly, if flash were to displace disk, the access latency for stored data would be significantly reduced, and the way the data is used would change. Because it is more accessible, people would find more ways to extract value from it. The changes induced by reduced latency would probably significantly increase the perceived value of the stored data, which would itself accelerate the turn-over from disk to flash.

I hope everyone is familiar with the concept of "stranded assets", for example the idea that if we're not to fry the planet oil companies cannot develop many of the reserves they carry on their books. Both views of the future of disk vs. flash involve a reduction in the unit volume of drives. The disk vendors cannot raise prices significantly; doing so would accelerate the reduction in unit volume. Thus their income will decrease, and thus their ability to finance the investments needed to get HAMR and then BPM into the market. The longer they delay these investments, the more difficult it becomes to afford them. Thus it is likely that HAMR and BPM will be "stranded technologies", advances we know how to make but never actually deploy.
Alternate Media

Media trends to 2014

Robert Fontana of IBM has an excellent overview of the roadmaps for tape, disk, optical and NAND flash (PDF) through the early 2020s. Clearly no other technology will significantly impact the storage market before then.

SanDisk shipped the first flash SSDs to GRiD Systems in 1991. Even if flash impacts the capacity market in 2018, it will have been 27 years after the first shipment. The storage technology that follows flash is probably some form of Storage Class Memory (SCM) such as XPoint. Small volumes of some forms of SCM have been shipping for a couple of years. Like flash, SCMs leverage much of the semiconductor manufacturing technology. Optimistically, one might expect SCM to impact the capacity market sometime in the late 2030s.

I'm not aware of any other storage technologies that could compete for the capacity market in the next three decades. SCMs have occupied the niche for a technology that exploits semiconductor manufacturing. A technology that didn't would find it hard to build the manufacturing infrastructure to ship the thousands of exabytes a year the capacity market will need by then.
Economics of Long-Term Storage

Cost vs. Kryder rate

Here is a graph from a model of the economics of long-term storage I built back in 2012 using data from Backblaze and the San Diego Supercomputer Center. It plots the net present value of all the expenditures incurred in storing a fixed-size dataset for 100 years against the Kryder rate. As you can see, at the 30-40%/yr rates that prevailed until 2010, the cost is low and doesn't depend much on the precise Kryder rate. Below 20%, the cost rises rapidly and depends strongly on the precise Kryder rate.
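The shape of that curve can be sketched with a toy calculation. This is a minimal stand-in, not the model from the post, and the parameters are assumptions: media holding the dataset are replaced every five years, replacement cost per byte falls at the Kryder rate, and future spending is discounted at 3%/yr.

```python
# Toy net-present-value calculation for a century of storage spending.
# Assumptions (mine, not the post's): 5-year media replacement cycle,
# media cost falling at the Kryder rate, 3%/yr discount rate.

def npv_of_storage(kryder_rate, years=100, replacement_interval=5,
                   discount_rate=0.03, initial_cost=1.0):
    """Net present value of keeping a fixed-size dataset for `years` years."""
    total = 0.0
    for year in range(0, years, replacement_interval):
        media_cost = initial_cost * (1.0 - kryder_rate) ** year   # cheaper media over time
        total += media_cost / (1.0 + discount_rate) ** year       # discount to today's dollars
    return total

if __name__ == "__main__":
    for rate in (0.40, 0.30, 0.20, 0.10, 0.05, 0.00):
        print(f"Kryder rate {rate:4.0%}: NPV = {npv_of_storage(rate):6.2f}")
```

Even this crude version shows the qualitative behavior described above: at 30-40%/yr the total is barely more than the initial purchase, while below 20%/yr it climbs steeply and becomes very sensitive to the exact rate.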

2014 cost/byte projection

As it turned out, we were already well below 20%. Here is a 2014 graph from Preeti Gupta, a Ph.D. student at UC Santa Cruz, plotting $/GB against time. The red lines are projections at the industry roadmap's 20% and my less optimistic 10%. It shows three things:
  • The slowing started in 2010, before the floods hit Thailand.
  • Disk storage costs in 2014, two and a half years after the floods, were more than 7 times higher than they would have been had Kryder's Law continued at its usual pace from 2010, as shown by the green line.
  • If the industry projections pan out, as shown by the red lines, by 2020 disk costs will be between 130 and 300 times higher than they would have been had Kryder's Law continued.
The funds required to deliver on a commitment to store a chunk of data for the long term depend strongly on the Kryder rate, especially in the first decade or two. Industry projections of the rate have a history of optimism, and are vulnerable to natural disasters, industry consolidation, and so on. We aren't going to know the cost, and the probability is that it is going to be a lot more expensive than we expect.
Long-Lived Media?

Every few months there is another press release announcing that some new, quasi-immortal medium such as 5D quartz or stone DVDs has solved the problem of long-term storage. But the problem stays resolutely unsolved. Why is this? Very long-lived media are inherently more expensive, and are a niche market, so they lack economies of scale. Seagate could easily make disks with archival life, but they did a study of the market for them, and discovered that no-one would pay the relatively small additional cost. The drives currently marketed for "archival" use have a shorter warranty and a shorter MTBF than the enterprise drives, so they're not expected to have long service lives.

The fundamental problem is that long-lived media only make sense at very low Kryder rates. Even if the rate is only 10%/yr, after 10 years you could store the same data in 1/3 the space. Since space in the data center racks or even at Iron Mountain isn't free, this is a powerful incentive to move old media out. If you believe that Kryder rates will get back to 30%/yr, after a decade you could store 30 times as much data in the same space.
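The arithmetic behind those figures, treating the Kryder rate as the annual fractional decrease in cost per byte (and hence in the media footprint of a fixed dataset):

$$(1-0.10)^{10} \approx 0.35 \approx \tfrac{1}{3}, \qquad (1-0.30)^{-10} \approx 35$$

So at 10%/yr the same data fits in roughly a third of the space after a decade, and at 30%/yr roughly 30-35 times as much data fits in the same space.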

The reason why disks are engineered to have a 5-year service life is that, at 30-40% Kryder rates, they were going to be replaced within 5 years simply for economic reasons. But, if Kryder rates are going to be much lower going forward, the incentives to replace drives early will be much less, so a somewhat longer service life would make economic sense for the customer. From the disk vendor's point of view, a longer service life means they would sell fewer drives. Not a reason to make them.

Additional reasons for skepticism include:
  • The research we have been doing in the economics of long-term preservation demonstrates the enormous barrier to adoption that accounting techniques pose for media that have high purchase but low running costs, such as these long-lived media.
  • The big problem in digital preservation is not keeping bits safe for the long term, it is paying for keeping bits safe for the long term. So an expensive solution to a sub-problem can actually make the overall problem worse, not better.
  • These long-lived media are always off-line media. In most cases, the only way to justify keeping bits for the long haul is to provide access to them (see Blue Ribbon Task Force). The access latency scholars (and general Web users) will tolerate rules out off-line media for at least one copy. As Rob Pike said "if it isn't on-line no-one cares any more".
  • So at best these media can be off-line backups. But the long access latency for off-line backups has led the backup industry to switch to on-line backup with de-duplication and compression. So even in the backup space long-lived media will be a niche product.
  • Off-line media need a reader. Good luck finding a reader for a niche medium a few decades after it faded from the market - one of the points Jeff Rothenberg got right two decades ago.
Ultra-Reliable Media?

The reason that the idea of long-lived media is so attractive is that it suggests that you can be lazy and design a system that ignores the possibility of failures. But current media are many orders of magnitude too unreliable for the task ahead, so you can't:
    • Media failures are only one of many, many threats to stored data, but they are the only one long-lived media address.
    • Long media life does not imply that the media are more reliable, only that their reliability decreases with time more slowly.
    Even if you could ignore failures, it wouldn't make economic sense. As Brian Wilson, CTO of Backblaze points out, in their long-term storage environment:
    Double the reliability is only worth 1/10th of 1 percent cost increase. ...

    Replacing one drive takes about 15 minutes of work. If we have 30,000 drives and 2 percent fail, it takes 150 hours to replace those. In other words, one employee for one month of 8 hour days. Getting the failure rate down to 1 percent means you save 2 weeks of employee salary - maybe $5,000 total? The 30,000 drives costs you $4m.

    The $5k/$4m means the Hitachis are worth 1/10th of 1 per cent higher cost to us. ACTUALLY we pay even more than that for them, but not more than a few dollars per drive (maybe 2 or 3 percent more).
Moral of the story: design for failure and buy the cheapest components you can. :-)

Eric Brewer made the same point in his 2016 FAST keynote. Because for availability and resilience against disasters they need geographic diversity, they have replicas from which to recover. So spending more to increase media reliability makes no sense; they're already reliable enough. This is because the systems that surround the drives have been engineered to deliver adequate reliability despite the current unreliability of the drives, thus engineering away the value of more reliable drives.
Future Storage System Architecture?

What do we want from a future bulk storage system?
    • An object storage fabric.
    • With low power usage and rapid response to queries.
    • That maintains high availability and durability by detecting and responding to media failures without human intervention.
    • And whose reliability is externally auditable.
    At the 2009 SOSP David Anderson and co-authors from C-MU presented FAWN, the Fast Array of Wimpy Nodes. It inspired me to suggest, in my 2010 JCDL keynote, that the cost savings FAWN realized without performance penalty by distributing computation across a very large number of very low-power nodes might also apply to storage.

    The following year Ian Adams and Ethan Miller of UC Santa Cruz's Storage Systems Research Center and I looked at this possibility more closely in a Technical Report entitled Using Storage Class Memory for Archives with DAWN, a Durable Array of Wimpy Nodes. We showed that it was indeed plausible that, even at then current flash prices, the total cost of ownership over the long term of a storage system built from very low-power system-on-chip technology and flash memory would be competitive with disk while providing high performance and enabling self-healing.

    Two subsequent developments suggest we were on the right track. First, Seagate's announcement of its Kinetic architecture and Western Digital's subsequent announcement of drives that ran Linux, both exploited the processing power available from the computers in the drives that perform command processing, internal maintenance operations, and signal processing to delegate computation from servers to the storage media, and to get IP communication all the way to the media, as DAWN suggested. IP to the drive is a great way to future-proof the drive interface.

FlashBlade hardware

Second, although flash remains more expensive than hard disk, since 2011 the gap has narrowed from a factor of about 12 to about 6. Pure Storage recently announced FlashBlade, an object storage fabric composed of large numbers of blades, each equipped with:
    • Compute: 8-core Xeon system-on-a-chip, and Elastic Fabric Connector for external, off-blade, 40GbitE networking,
    • Storage: NAND storage with 8TB or 52TB of raw capacity, and on-board NV-RAM with a super-capacitor-backed write buffer, plus a pair of ARM CPU cores and an FPGA,
    • On-blade networking: PCIe card to link compute and storage cards via a proprietary protocol.
    FlashBlade clearly isn't DAWN. Each blade is much bigger, much more powerful and much more expensive than a DAWN node. No-one could call a node with an 8-core Xeon, 2 ARMs, and 52TB of flash "wimpy", and it'll clearly be too expensive for long-term bulk storage. But it is a big step in the direction of the DAWN architecture.

    DAWN exploits two separate sets of synergies:
    • Like FlashBlade, DAWN moves the computation to where the data is, rather than moving the data to where the computation is, reducing both latency and power consumption. The further data moves on wires from the storage medium, the more power and time it takes. This is why Berkeley's Aspire project's architecture is based on optical interconnect technology, which when it becomes mainstream will be both faster and lower-power than wires. In the meantime, we have to use wires.
    • Unlike FlashBlade, DAWN divides the object storage fabric into a much larger number of much smaller nodes, implemented using the very low-power ARM chips used in cellphones. Because the power a CPU needs tends to grow faster than linearly with performance, the additional parallelism provides comparable performance at lower power.
    So FlashBlade currently exploits only one of the two sets of synergies. But once Pure Storage has deployed this architecture in its current relatively high-cost and high-power technology, re-implementing it in lower-cost, lower-power technology should be easy and non-disruptive. They have done the harder of the two parts.

Storage systems are extremely reliable, but at scale nowhere near reliable enough to mean data loss can be ignored. Internal auditing, in which the system detects and reports its own losses, for example by hashing the stored data and comparing the result with a stored hash, is important but is not enough. The system's internal audit function will itself have bugs, which are likely to be related to the bugs in the underlying functionality causing data loss. Having the system report "I think everything is fine" is not as reassuring as one would like.

Auditing a system by extracting its entire contents for integrity checking does not scale, and is likely itself to cause errors. Asking a storage system for the hash of an object is not adequate: the system could have remembered the object's hash instead of computing it afresh. Although we don't yet have a perfect solution to the external audit problem, it is clear that part of the solution is the ability to supply a random nonce that is prepended to the object's data before hashing. The result is different every time, so the system cannot simply remember it.
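A minimal sketch of that nonce-prepended hashing idea; the names and structure are illustrative, not any particular system's API:

```python
import hashlib
import os

def audit_challenge() -> bytes:
    """Auditor side: generate a fresh random nonce for this audit round."""
    return os.urandom(32)

def prove_possession(nonce: bytes, object_data: bytes) -> str:
    """Storage system side: hash the nonce prepended to the object's data.
    Because the nonce is new each time, the system cannot answer from a
    remembered hash; it must actually read the stored bytes."""
    return hashlib.sha256(nonce + object_data).hexdigest()

def verify(nonce: bytes, reported: str, reference_copy: bytes) -> bool:
    """Auditor side: recompute the expected answer from a trusted copy."""
    return reported == hashlib.sha256(nonce + reference_copy).hexdigest()

if __name__ == "__main__":
    data = b"some archived object"
    nonce = audit_challenge()
    answer = prove_possession(nonce, data)   # computed by the storage system
    print(verify(nonce, answer, data))       # True only if the stored bytes are intact
```

In practice the auditor would hold a reference copy or a keyed commitment rather than the full object, but the essential property is the same: a fresh nonce forces the system to re-read and re-hash the data it claims to hold.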
Acknowledgements

I'm grateful to Seagate for (twice) allowing me to pontificate about their industry, to Brian Berg for his encyclopedic knowledge of the history of flash, and Tom Coughlin for illuminating discussions and the graph of exabytes shipped. This isn't to say that they agree with any of the above.

    Equinox Software: Successful Integration Testing Between meeScan and Evergreen

    Thu, 2016-05-12 12:09

    FOR IMMEDIATE RELEASE

    Duluth, Georgia–May 12, 2016

    Equinox and Bintec conduct successful integration testing between meeScan and Evergreen

    Equinox is pleased to announce successful integration testing between the meeScan self checkout system provided by Bintec Library Services and the Evergreen open source ILS. Additional information regarding how to configure Evergreen to work with meeScan will be made available to the Evergreen community.

    Galen Charlton, Added Services Manager at Equinox, said, “One of the strengths of Evergreen is its ability to integrate with other library software. By performing interoperability testing with firms such as Bintec, Equinox helps to identify and resolve technical roadblocks before they become an issue for libraries.”

Peter Trenciansky, Director for Bintec Library Services, commented, “We are very excited to offer meeScan to Evergreen libraries around the world. Our service provides a modern and fresh way to check out items while eliminating traditional challenges associated with self-service kiosks. The collaboration between Equinox and Bintec is another milestone towards our goal of contributing to the development of a new generation of welcoming and engaging libraries.”

    About Bintec

    Bintec Library Services Inc. is a technology company dedicated to the development of solutions that provide added value to libraries and enrich the user experience. The knowledgeable team behind Bintec delivers software and hardware solutions encompassing electromagnetic (EM) security, radio-frequency identification (RFID) technologies, ILS systems integration, large cloud-based architecture and mobile app development. The company is based in Toronto, Canada and services customers across North America and other parts of the world. To find out more visit binteclibraryservices.com

    About meeScan

meeScan is a cloud-based self checkout system that lets patrons use their smartphones to check out books anywhere in their library. The system uses the built-in camera of the patron’s smartphone or tablet to scan the item barcode. With support for both EM and RFID, it is a full-featured alternative to conventional self-check kiosks at a fraction of the cost. meeScan is extremely user friendly; it is simple to set up and requires virtually zero maintenance by the library. Find out more at meescan.com

    About Equinox Software, Inc.

    Equinox was founded by the original developers and designers of the Evergreen ILS. We are wholly devoted to the support and development of open source software in libraries, focusing on Evergreen, Koha, and the FulfILLment ILL system. We wrote over 80% of the Evergreen code base and continue to contribute more new features, bug fixes, and documentation than any other organization. Our team is fanatical about providing exceptional technical support. Over 98% of our support ticket responses are graded as “Excellent” by our customers. At Equinox, we are proud to be librarians. In fact, half of us have our ML(I)S. We understand you because we *are* you. We are Equinox, and we’d like to be awesome for you. For more information on Equinox, please visit http://www.esilibrary.com.

    About Evergreen

Evergreen is an award-winning ILS developed with the intent of providing an open source product able to meet the diverse needs of consortia and high transaction public libraries. However, it has proven to be equally successful in smaller installations including special and academic libraries. Today, almost 1400 libraries across the US and Canada are using Evergreen, including NC Cardinal, SC Lends, and B.C. Sitka. For more information about Evergreen, including a list of all known Evergreen installations, see http://evergreen-ils.org.

    Cynthia Ng: BCLA Pre-Conference: Why Accessible Library Service Matters in Public Libraries

    Wed, 2016-05-11 22:01
Disability Awareness Training for Library Staff Summary. Margarete Wiedemann, North Vancouver City Public Library. Last Canadian census: 1 in 7 Canadians live with a disability. Public libraries are generally accessible to a degree. Survey findings: what is helpful: online catalogue, home delivery, plain language; barriers: physical environment, time on computer, standing in line, crowded seating, … Continue reading BCLA Pre-Conference: Why Accessible Library Service Matters in Public Libraries

    Library of Congress: The Signal: Your Personal Archiving Project: Where Do You Start?

    Wed, 2016-05-11 18:35

    “Simplify, simplify.” — Henry David Thoreau, Walden.

    Before and After: the Herbert A. Philbrick Papers. Photo by Laura Kells, the Library of Congress.

    Most of us comb through a lifelong collection of personal papers and photos either when we have plenty of free time (typically in retirement) or when we have to deal with the belongings of a deceased loved one. All too often the job seems so daunting and overwhelming that our natural response is to get discouraged and say, “I don’t know where to begin” or “It’s too much; I’ll do it some other time” or worse, “I’ll just get rid of it all.”

At the Library of Congress, archivists process every type of collection imaginable. They often acquire — along with scholarly and historical works — personal papers and mementos, things that had special meaning to the owner, not only letters and photos but also locks of hair, newspaper clippings and beverage-stained documents. One recent collection contained a piece of bark. Some collections arrive neatly organized and others arrive heaped into makeshift containers. How do professional archivists create order from clutter? Where do they start? And what can we learn from their work and apply to our own personal archiving projects?

    For this story, I spoke with Laura Kells and Meg McAleer, two senior archivists from the Library of Congress’s Manuscript Division. Both exude the good-natured patience and relaxed humor that comes from years of dealing with a constant inflow of often-disorganized paper and digital files. [Watch their presentation, titled “The Truth about Original Order, or What to Do When Your Collection Arrives in Trash Cans.”]

    Photo by Laura Kells, the Library of Congress.

    I found it striking that, throughout our interview, they rarely dictated how something must be done. Instead they offered well-seasoned advice about archiving but they left the decisions up to the individual. In the end, their main message was this: if you want to get through the project and not make yourself crazy and despondent over it, start simply, separate items broadly at first and, in the end, accept your final sorting decisions as “good enough.”

    Start Simply

    First, approach your collection as a single unit of stuff. Don’t dwell on individual photos or letters yet. Think about the entire collection as a mass of related things. Kells said, “You’ll scare yourself if you think, ‘I have two hundred things.’ The project will seem bigger.” It is one collection.

    Clumps

    Consider devoting a rainy weekend to pulling out your collection. At this point you will be surveying its broad landscape. Begin by sorting items from your collection into what McAleer and Kells expertly call “clumps.” This is your first pass, so just group things into general categories such as letters and photos. You decide on your categories. Be consistent but accept that there might be overlap between categories. If you want to categorize clumps by year, fine. Or phases of a person’s life. Or holidays. Or type of materials (letters, photos).

    “What you try to do is identify the clumps that already exist,” McAleer said. “And hopefully clumping naturally occurs. For instance, you could have gotten all of your grandmother’s papers after her death. That’s a clump. Trips? That’s a clump. Christmas stuff, that’s a clump. Photographs, that’s a clump.”

    WARNING: Don’t get sidetracked. Resist the temptation to savor any one thing right now. “If you begin engaging with individual items at this point, then you’re sunk,” McAleer said. “You can paralyze yourself by over scrutinizing.” Whatever it is, no matter how wonderful it is, put it in its rightful clump and come back to it later.

    Photo by Laura Kells, the Library of Congress.

    Be Realistic About Work Space and Time

    There are two important things you should address early on: space and time. Your collection will take up space in your house as you sift through it, so plan your work space realistically. Set aside a temporary work space if you can – a room or a corner of a room — or plan to unpack and re-pack your collection for each sorting session. “In most people’s homes they don’t have a great deal of space to have things sitting out for a long time,” McAleer said. “At some point you will really need that dining room table for dinner.”

    Don’t eat or drink in the work area. Kells said, “Just step away. When you’ve got big piles and you reach your drink and you knock it over, you’ll be real sorry if you spill your coffee all over your documents or your photographs.” McAleer said, “It happens in an instant. None of us anticipate it. It can be tragic.”

    As for time, McAleer said, “Do not start out with a commitment that every single item within this collection is going to be organized perfectly.” Kells said, “That could make you feel a sense of defeat. Just start out by saying, ‘I want to improve the organization.’ ”

    Nothing is Perfect

    After sorting the collection into clumps, you could put everything into envelopes or other containers and be happy about your progress. “You can feel good because you’ve done something,” Kells said. “As long as there is some order. It’s probably chaotic within those clumps but just by identifying and labeling and boxing those clumps, you have some intellectual control over it that you didn’t have before.”

    You could leave the project at that or you could continue on, from a rough sort to a refined sort. “If you have the energy, you just work in layers and keep improving it,” Kells said. “Then you can gauge how much time you have and how much space you have to do this. Anything new is gravy.”

    Letters sorted by correspondent. Photo by Laura Kells, the Library of Congress.

    For example, you could sort letters by date or by topic or sort photos by location or by who is in each photo. “It is a matter of constant refinement, where you’re going to be getting more and more information about the content over time,” McAleer said. “It’s like building a house. You start out building the structure of a house and then you add furniture into each room.”

    It’s a good time to throw things away too. Decide if you really want to save paid bills, cancelled checks or grocery lists. McAleer said, “In the long run, just save the things that you’re going to value over time. It is up to you how far down you drill in terms of arranging the material. At some point you have to say to yourself, ‘This is so much better than it was. I know what I have. This may be as good as it gets. I have put some organization on it and that is going to make it more accessible.’ ”

    Scanning

    Scanning is a terrific way to preserve and share digital versions of papers and photographs. The Library of Congress explains the basics of scanning in a blog post and an instructional video. You can also add descriptions into your digital photos, in much the same way as you would write on the back of a paper photo.

Scan newspaper clippings too. Newspaper ages poorly: when folded, it can rip at the creases, and it can crumble when handled. Print a scanned copy if you want a hard copy. Computer paper ages better than newspaper does.

    Another reason to scan photos is to rescue them. Photos may fade due to their chemical composition or because they may have been in direct sunlight for a long time. (Institutions rotate their collections regularly to avoid the damage from light and environmental exposure.) “Resist the idea of framing things,” McAleer said. “They really should not be exposed to light for too long. You can make a copy and frame that but keep the original out of the light.”

    Photo by Laura Kells, the Library of Congress.

If you have hundreds of photos, think about whether you really want to scan them all. That may add pressure on you. Again, be realistic with your time. Consider being selective and only scanning the special photos or documents that you value the highest. Most institutions don’t have the resources to scan everything so they digitize their collections selectively; maybe you should too.

    Disks and Digital Storage Media

    If the collection includes computer disks, scan the disks for viruses before you open the contents. Don’t put everything else on your computer at risk. Before opening a file, make a duplicate of it and open the duplicate to avoid any accidental modifications. That way you’ll still have the original if you mess something up.

    If the disks contain files in an old format that you can’t access, but you believe those files might contain something of interest or value, archive those files with your other digital stuff. You can either find a professional service to open them or someday you might find a resource that will enable you to open them.

    Digital Preservation

Save your digital files properly. Organize the scanned files on your computer and back them up on a separate drive. If you acquire disorganized computer files, organize the clutter as best you can within a file system. To help you find specific files again, you can rename those files without affecting their contents.
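For readers comfortable with a little scripting, here is a minimal sketch of that back-up step: it copies a folder of scans to a second drive and verifies each copy with a checksum. The folder paths are placeholders; point them at your own scans folder and backup drive.

```python
import hashlib
import shutil
from pathlib import Path

def sha256(path: Path) -> str:
    """Checksum a file so we can confirm the copy matches the original."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def backup(source: Path, destination: Path) -> None:
    """Copy every file under `source` to `destination`, preserving folder
    structure, and warn about any copy that does not match its original."""
    for src in source.rglob("*"):
        if src.is_file():
            dst = destination / src.relative_to(source)
            dst.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dst)            # copy2 keeps timestamps
            if sha256(src) != sha256(dst):
                print(f"WARNING: {src} did not copy correctly")

if __name__ == "__main__":
    # Placeholder paths for illustration only.
    backup(Path("~/Pictures/scans").expanduser(), Path("/Volumes/BackupDrive/scans"))
```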

    Archiving a Life Story

    Organizing personal collections can be a way to tell a story about your life or the life of a loved one. “I don’t think people should be afraid to curate these collections,” McAleer said. “Zooming in and narrowing in on one particular story or one particular item can actually have a little bit more impact.”

    Kells said, “Old letters give you a sense of the people, even if there’s not much to the letters and cards. It shows you what they valued. What they did, what they ate, what holidays they celebrated.” McAleer said, “Letters provide a voice and by grouping them together you release a kind of narrative.”

    What was in her wallet or purse? What did she keep near to her? “There are probably certain things in a drawer somewhere that tell a story,” Kells said. “You could create a time capsule about a loved one.

    “Not everyone values this stuff but if you archive it, it will be there for somebody in a later generation. There may be one person who really cares about their family history and will be glad to have it.”

    [For more information, visit the Library of Congress’s pages on “Collections Care” and “Personal Digital Archiving.”]

    DPLA: A Librarian in situ: Adventures at DPLAfest in Washington, DC

    Wed, 2016-05-11 15:00


    This guest post was written by Jasmine Burns, Image Technologies and Visual Literacy Librarian, Indiana University and DPLA + DLF ‘Cross-Pollinator.’ (Twitter: @jazz_with_jazz)

    Thanks to the generous support of the DPLA + DLF Cross-Pollinator Grant, I spent two fully-packed days wandering through some of the most beautiful (both architecturally and intellectually) institutions in Washington DC. DPLAfest was perfectly self-described: a festival of workshops, conversations, and collaborations between hundreds of librarians, authors, coders, publishers, educators, and more. This community that converged on Capitol Hill left me feeling inspired and exhausted, as I returned home with a laundry list of new ideas and long-term goals.

    My initial interest in attending DPLAfest was to gain a closer glimpse into the large and growing community of the Digital Public Library of America. I graduated from an MLIS program last May and immediately started my first professional position in an academic library as the Image Technologies and Visual Literacy Librarian. As an emerging professional, I am still navigating the transient landscape of useful and applicable tools, pedagogies, and resources that are relevant to the needs of my campus community. The programming at DPLAfest seemed to combine many of the topics and areas that I have been utilizing as a visual resources professional. The opportunity to dig much deeper into these resources with the mission of creating collaborations and connections with the DLF community was an ideal framework for my experience in Washington.

    Copyright + digital libraries. Packed room! #DPLAfest

    — Jasmine Burns (@Jazz_with_Jazz) April 14, 2016

    The first day of the fest kicked off at the Library of Congress with breakfast and coffee (!!), the debut of RightsStatements.org (VERY exciting in library-land), the release of the 100 Primary Source Sets (which I promptly emailed to my K-12 teacher friends), and the first ever selfie to be added to DPLA! For the remainder of the day I attended a workshop on geovisualization, sat in on a fantastic conversation about Authorship in the Digital Age, learned all about GIFs and how to make them (by far my favorite!), attended a totally packed, standing-room only session on copyright, and finally got to hear about the fantastic public domain drop at NYPL Labs.

    US National Archives exhibition, “Records of Rights”

    Somewhere in between the action, I even had the chance to pop over to the Madison building to catch up with some of my old co-workers at the Prints and Photographs Division and eat lunch in the Great Hall! After running to my hotel to catch my breath, I meandered down to the National Archives, where I had drinks and hors d’oeuvre with the Declaration of Independence and got completely lost in the exhibits (both literally and figuratively). I was so busy geeking out about how the exhibits actually looked like archives (solander boxes and everything) that I forgot to do much socializing at all!

    The next morning, I headed back to the National Archives to start round two (and coincidentally ran into my cousin on the street, I guess DC is more of a small town than I thought!). Day two started with a much appreciated breakfast buffet, and a session showcasing some fabulous digital projects. Next, I learned everything I ever wanted to know about IIIF, listened in on presentations about API Development, and rounded out the whole shebang with a train ride back to my family in Virginia, all while participating in the #DPLAfest tweetstorm.

    I have never been to a conference with a sign language interpreter! #DPLAfest! #inclusive #loveit

    — Jasmine Burns (@Jazz_with_Jazz) April 14, 2016

    This was the first conference I have attended where I wasn’t presenting, organizing, or attending committee meetings. I felt like I could sit back, absorb the content, and tweet away to my heart’s desire. I had never had the time to live-tweet a conference, and this was my first time archiving my thoughts in 140 character chunks. I felt that the most important benefits of the conference were moments when I was able to recognize the human element behind the digital resources that I use all the time by putting a face behind a platform (specifically NYPL Labs, IIIF, DPLA Developers, etc). It is not often the case that I leave a conference wishing that it had been longer or that I could have spoken to more people, but DPLAfest exceeded many of my expectations from the start, and I am grateful to DLF for this trip.

     

    Special thanks to the Digital Library Federation for making the DPLAfest Cross-Pollinator grant possible.

    LibUX: Alexa-ish Top 100 Library Websites

    Tue, 2016-05-10 23:48

    Alexa — the web-traffic data folks, not the all-seeing skynet precursor — ranks sites by traffic, which makes for an easily accessible sample that data-nerds can use to gauge the average speed of the top e-commerce pages or the state of accessibility among the most popular destinations on the web.

    You might find it huh-worthy that Alexa actually has a top-site list for libraries, were you to follow the breadcrumb Top Sites > Category > Reference > Libraries. The caveat is that Alexa’s list as-is includes sites that don’t really fit (e.g., Goodreads, Blackboard), so I did a little cleanup.

    I stripped —

    • university homepages that happened to mention libraries
    • library vendors and third-party apps
    • repositories — large and small — that didn’t actually represent a library
    • presidential, country, and state libraries
    • some art museums

    — because either these were algorithmic goofs or didn’t totally represent the mean.

    That said, I put together the following list of 100 high-traffic library websites in the order they appear. Let me know if you find it useful.

    Want to help?

    Amy Drayer pointed out that there are better-ranked sites missing from this list, which can be tricky to suss out if Alexa decides they aren’t libraries. If you find an error or want to help, we started a party on github.

Top 100 Library Websites by Traffic

1. New York Public Library (nypl.org)
2. University of Texas Libraries (lib.utexas.edu)
3. Penn Libraries (www.library.upenn.edu)
4. University of Toronto Libraries (library.utoronto.ca)
5. University of Wisconsin-Madison Libraries (library.wisc.edu)
6. Cornell University Library (library.cornell.edu)
7. University of Minnesota Libraries (lib.umn.edu)
8. University of Illinois at Urbana-Champaign Library (library.illinois.edu)
9. University of Washington Libraries (lib.washington.edu)
10. Virginia Tech University Libraries (lib.vt.edu)
11. Vanderbilt Jean and Alexander Heard Library (library.vanderbilt.edu)
12. Yale University Library (library.yale.edu)
13. California Digital Library (cdlib.org)
14. Berkeley Library (lib.berkeley.edu)
15. Purdue University Libraries (lib.purdue.edu)
16. OhioLINK (ohiolink.edu)
17. NYU Libraries (library.nyu.edu)
18. Penn State University Libraries (libraries.psu.edu)
19. University of British Columbia Library (library.ubc.ca)
20. Duke University Libraries (library.duke.edu)
21. University of Iowa Libraries (lib.uiowa.edu)
22. University of Chicago Library (lib.uchicago.edu)
23. Boston Public Library (bpl.org)
24. Michigan State University Libraries (lib.msu.edu)
25. BYU Harold B. Lee Library (lib.byu.edu)
26. Rutgers University Libraries (libraries.rutgers.edu)
27. University of Massachusetts Amherst Libraries (library.umass.edu)
28. University of Maryland Libraries (lib.umd.edu)
29. Georgetown University Library (library.georgetown.edu)
30. University of Utah J. Willard Marriott Library (lib.utah.edu)
31. Texas A&M University Libraries (library.tamu.edu)
32. University of Alberta Libraries (library.ualberta.ca)
33. University of Arizona Libraries (library.arizona.edu)
34. University of Kansas Libraries (lib.ku.edu)
35. UCLA Library (library.ucla.edu)
36. Northwestern University Library (www.library.northwestern.edu/)
37. Brown University Library (library.brown.edu/)
38. Florida State University Libraries (lib.fsu.edu)
39. Colorado State University Libraries (lib.colostate.edu)
40. UC Santa Barbara Library (library.ucsb.edu)
41. Western University Libraries (lib.uwo.ca)
42. Columbia University Libraries (library.columbia.edu)
43. University of Cambridge Library (www.lib.cam.ac.uk/)
44. Ohio University Libraries (www.library.ohiou.edu)
45. University of Florida George A. Smathers Libraries (cms.uflib.ufl.edu/)
46. University of Rochester Libraries (lib.rochester.edu)
47. The Huntington (huntington.org)
48. University of Pittsburgh Library System (library.pitt.edu)
49. Ohio State University Libraries (library.osu.edu)
50. University of Guelph Library (lib.uoguelph.ca)
51. UNC Chapel Hill Libraries (library.unc.edu)
52. University of Notre Dame Hesburgh Libraries (library.nd.edu)
53. Southern Illinois University Libraries (lib.siu.edu)
54. Miami University Libraries (lib.miamioh.edu)
55. Stony Brook University Libraries (library.stonybrook.edu)
56. University of Cincinnati Libraries (libraries.uc.edu)
57. Kent State University Libraries (library.kent.edu)
58. Princeton University Library (library.princeton.edu)
59. University of Hawaii at Manoa Library (library.manoa.hawaii.edu)
60. Biblioteka Narodowa (bn.org.pl/)
61. University of Tennessee Knoxville Libraries (lib.utk.edu)
62. University of Alabama Libraries (lib.ua.edu)
63. Bibliotheque de Universite Laval (bibl.ulaval.ca)
64. Queen’s University Library (library.queensu.ca)
65. UCSF Library (library.ucsf.edu)
66. Oklahoma State University Library (library.okstate.edu)
67. UC Santa Cruz Library (library.ucsc.edu)
68. Boston University Libraries (bu.edu/library/)
69. UC Riverside Library (library.ucr.edu)
70. Iowa State University Library (lib.iastate.edu)
71. MIT Libraries (libraries.mit.edu)
72. UCI Libraries (lib.uci.edu)
73. UC San Diego Library (libraries.ucsd.edu)
74. MacOdrum Library (library.carleton.ca)
75. University of Virginia Library (library.virginia.edu)
76. Temple University Libraries (library.temple.edu)
77. University of Pittsburgh Health Sciences Library System (hsls.pitt.edu)
78. LSU Libraries (lib.lsu.edu)
79. Cleveland Public Library (cpl.org)
80. University of Oregon Libraries (library.uoregon.edu)
81. Washington State University Libraries (wsulibs.wsu.edu)
82. University of Manchester Library (www.library.manchester.ac.uk/)
83. Carnegie Mellon University Libraries (library.cmu.edu)
84. Georgia Tech Library (library.gatech.edu)
85. Welch Medical Library (welch.jhmi.edu)
86. www.nuk.uni-lj.si/
87. LSE Library (lse.ac.uk/library/)
88. Claude Moore Health Sciences Library (hsl.virginia.edu)
89. Warwick the Library (www2.warwick.ac.uk/services/library/)
90. Harvard Business School Baker Library (library.hbs.edu)
91. University of Kentucky Libraries (libraries.uky.edu)
92. Auburn University Libraries (lib.auburn.edu)
93. McMaster University Library (library.mcmaster.ca)
94. Lane Medical Library (lane.stanford.edu)
95. UIC University Library (library.uic.edu)
96. Oregon State University Libraries (osulibrary.oregonstate.edu)
97. University of Waterloo Library (lib.uwaterloo.ca)
98. University of Houston Libraries (info.lib.uh.edu)
99. University of Nebraska-Lincoln Libraries (libraries.unl.edu)
100. George Washington University Health Sciences Library (himmelfarb.gwu.edu)

    The post Alexa-ish Top 100 Library Websites appeared first on LibUX.

    M. Ryan Hess: W3C’s CSS Framework Review

    Tue, 2016-05-10 22:24

    I’m a longtime Bootstrap fan, but recently I cheated on my old framework. Now I’m all excited by the W3C’s new framework.

    Like Bootstrap, the W3C’s framework comes with lots of nifty utilities and plug and play classes and UI features. Even if you have a good CMS, you’ll find many of their code libraries quite handy.

    And if you’re CMS-deficient, this framework will save you time and headaches!

    Why a Framework?

    Frameworks are great for saving time. You don’t have to reinvent the wheel for standard UI chunks like navigation, image positioning, responsive design, etc.

    All you need to do is reference the framework in your code and you can start calling the classes to make your site pop.

    And this is really great since not all well-meaning web teams have an eye for good design. Most quality frameworks look really nice, and they get updated periodically to keep up with design trends.

    And coming from this well-known standards body, you can also be assured that the W3C’s framework complies with all the nitty-gritty standards all websites should aspire to.

    Things to Love

    Some of the things I fell in love with include:

    • CSS-driven navigation menus. There’s really no good reason to rely on JavaScript for a responsive, interactive navigation menu. The W3C agrees.
    • Icon support. This framework allows you to choose from three popular icon sets to bring icons right into your interface.
    • Image support: Lots of great image styling including circular cropping, shadowing, etc.
    • Cards. Gotta love cards in your websites and this framework has some very nice looking card designs for you to use.
    • Built-in colors. Nuff sed.
    • Animations. There are plenty of other nice touches like buttons that lift off the screen, elements that drop into place and much more.

    I give it a big thumbs up!

    Check it out at W3C.org.

     

     


    LITA: Dr. June Abbas Wins 2016 LITA/OCLC Kilgour Research Award

    Tue, 2016-05-10 19:34

    Dr. June Abbas, Professor of Library and Information Studies at the University of Oklahoma, has been selected as the recipient of the 2016 Frederick G. Kilgour Award for Research in Library and Information Technology sponsored by OCLC and the Library and Information Technology Association (LITA).

    The Kilgour Award is given for research relevant to the development of information technologies, especially work which shows promise of having a positive and substantive impact on any aspect(s) of the publication, storage, retrieval and dissemination of information, or the processes by which information and data is manipulated and managed. The winner receives $2,000, a citation, and travel expenses to attend the LITA Awards Ceremony & President’s Program at the ALA Annual Conference in Orlando (FL).

    Dr. Abbas has published more than 100 articles and has an h-index of 13 since 2008, demonstrating a significant impact on the field, as seen in the more than 600 citations those publications have received. She has also authored and edited two books, contributed 10 book chapters, and developed several research/technical reports and specifications. In addition, she has obtained over $1,600,000 in grant awards across 23 funded grant projects. Two recent examples are "The Digital Latin Library: Implementation Grant," a $1,000,000 award funded by the Andrew W. Mellon Foundation in 2015, and "Partnering to Build a 21st Century Community of Oklahoma Academic Librarians," a $414,545 award funded by the Institute of Museum and Library Services from 2009 to 2013.

    Her research areas are information-seeking and information system use and design, organization of information, and the changing nature of information and systems. As such, her research fits well with the purpose of the Kilgour Award, which aims at bringing attention to research relevant to the development of information technologies. The nomination letter states: “Dr. Abbas’ work has contributed substantially to our understanding of the provision of information resources in the context of libraries and our entire digital society through the study of processes by which information and data are manipulated and managed. The core purpose of Dr. Abbas’s research program is to provide individuals and communities with effortless access to accurate, relevant information through the development of information technologies that facilitate the storage, retrieval and dissemination of data and information.”

    Bohyun Kim, Chair of the Kilgour Award Committee noted that “Dr. Abbas’ outstanding record of interrelated and cutting edge papers, books, and conference publications, and the garnering of over $1,600,000 in funding to advance her work, are all a brilliant testament to how this talented and productive puzzle master’s in-depth explorations have helped to solve, and will continue to break new ground in, problems in the provision of information resources in this era where information systems are highly dynamic and new forms of information ecologies seem to develop in the blink of an eye. The Committee was impressed with her work and believes that it will have continued impact on the field of library and information technologies.”

    When notified she had won the Award, Dr. Abbas said, “I am deeply honored to accept the Frederick G. Kilgour Award. He was a librarian, innovator, visionary but pragmatic thinker, leader and guide whose contributions to libraries, cataloging, and interlibrary loan are unparalleled. Under his direction OCLC has not only changed the way libraries and information organizations create and share bibliographic records worldwide but how the world views the potentials of networked information. His spirit truly embodied one of the core values of librarianship to which I hold dear, providing equitable, user-centered access to information. Developing new ways in which the emerging technologies of computers and networks could be used in meaningful ways to provide access, enable collaboration, and sharing of electronic records are but a few of the many legacies he has gifted the profession and the world. I am grateful and humbled to be named the 2016 LITA/OCLC Frederick G. Kilgour Award winner.”

    The members of the 2016 Frederick G. Kilgour Award Committee are: Bohyun Kim (Chair); Ellen Bahr; Jason Simon; Margaret Heller; Tabatha Farney; Tao Zhang; and Roy Tennant (OCLC Liaison).

    About OCLC

    OCLC is a nonprofit global library cooperative providing shared technology services, original research and community programs so that libraries can better fuel learning, research and innovation. Through OCLC, member libraries cooperatively produce and maintain WorldCat, the most comprehensive global network of data about library collections and services. Libraries gain efficiencies through OCLC’s WorldShare, a complete set of library management applications and services built on an open, cloud-based platform. It is through collaboration and sharing of the world’s collected knowledge that libraries can help people find answers they need to solve problems. Together as OCLC, member libraries, staff and partners make breakthroughs possible.

    About LITA

    Established in 1966, the Library and Information Technology Association (LITA) is the leading organization reaching out across types of libraries to provide education and services for a broad membership of nearly 2,700 systems librarians, library technologists, library administrators, library schools, vendors, and many others interested in leading edge technology and applications for librarians and information providers. LITA is a division of the American Library Association. Follow us on our Blog, Facebook, or Twitter.

    Dan Cohen: Ken Burns and Mrs. Jennings

    Tue, 2016-05-10 19:06

    As the Chairman of the National Endowment for the Humanities, William Adams, noted at the beginning of last night’s Jefferson Lecture, Ken Burns was an extraordinarily apt choice to deliver this honorary talk in the celebratory 50th year of the Endowment. Tens of millions of Americans have viewed his landmark documentaries on the Civil War, jazz, baseball, and other topics pivotal to U.S. history and culture.

    Burns began his talk with a passionate defense of the humanities. The humanities and history, by looking at bygone narratives and especially by listening to the voices of others from the past—and showing their faces in Burns’s films, as Chairman Adams helpfully highlighted—prod us to understand the views of others, and thus, we hope, expand our capacity for tolerance. We have indeed lost the art of seeing through others’ eyes—perspective-taking—to disastrous results online and off. It was good to hear Burns’s fiery rhetoric on this subject.

    His sense that the past is still so very present, especially the deep scar of slavery and racism, was equally powerful. As Burns reminded us, the very lecture he was giving was named after a Founder and American president who owned a hundred people and who failed to liberate even one during his lifetime.

    While there were many grand and potent themes to Burns’s lecture, and many beautiful and haunting phrases, in my mind the animating and central element in his talk was a personal story, and a person. And it is worth thinking more about that smaller history to understand Burns’s larger sense of history. (Before reading further, I encourage you to read the full lecture, which is now up on the NEH website.)

    * * *

    When Burns was just a small boy, only 9 years old, his mother became terminally ill with cancer, and the family needed help as their lives unraveled. His father hired Mrs. Jennings, an African-American woman who was literally from the other side of the tracks in Newark, Delaware. Burns clearly bonded strongly with Mrs. Jennings; he loved her as a “surrogate mother” and someone who loved him and stood strong for him in a time of great stress and uncertainty.

    Then came a moment that haunts Burns to this day, a moment he admits to thinking about every week for over 50 years. His father took a job at the University of Michigan, in part so that his deteriorating wife could get medical care at the university hospital. The family would have to move. They packed up, and on the way out of town, took a final stop at Mrs. Jennings’ house. As Burns recounts the moment:

    She greeted us warmly, as she always did, but she was also clearly quite upset and worried to see us go, concerned about our family’s dire predicament. Just as we were about to head off for the more than twelve-hour drive to our new home, Mrs. Jennings leaned into the back of the car to give me a hug and kiss goodbye. Something came over me. I suddenly recoiled, pressed myself into the farthest corner of the back seat, and wouldn’t let her.

    Burns sees this moment, which he had never recounted publicly before last night and which immediately hushed the audience, as a horrific emergence of racism in his young self. Internalizing the “n-word” that was used all around him in the early 1960s, he couldn’t bring himself, at this crucial moment, to simply lean forward and hug and kiss Mrs. Jennings.

    In this way, and in this story, Ken Burns’s Jefferson lecture was, perhaps more than anything, a plea for forgiveness. In the largely white audience, you could sense, at that tense, core moment of his talk, the self-recognition of those in the darkness, who knew that they, too, had had moments like Ken’s—a deep-seated inability to treat a black friend or colleague or neighbor with the humanity they deserved and desired.

    * * *

    Upon further reflection, I think there is something in the story of Ken Burns and Mrs. Jennings that Burns may not have fully articulated, but that, even through his painful self-criticism, he may understand.

    That moment of “recoil” is, I believe, more emotionally complex. Undoubtedly it includes the terrible mark of racism that Burns identified. But he was also a 9-year-old boy whose mother was dying, who was being driven away from his childhood home, the address of which he still remembers by heart as a 62-year-old.

    Young children respond to intensely stressful moments in ways that adults cannot understand. Surely Ken’s recoil also included feelings of not wanting to leave, not wanting to acknowledge that he was being driven away from all that he knew, with another, certain, grim loss on the horizon. Perhaps most of all, Ken didn’t want to be separated from someone he deeply loved as a human being: Mrs. Jennings. Kids don’t have the same coping mechanisms or situational behavior that adults have. Sometimes when they don’t want to affirm the horror of their present, they retreat into themselves. I hope that Ken Burns can let that possibility in, and begin to forgive himself, as much as he wishes that Mrs. Jennings and his father, who lashed out at him for his recoil, could return and do the forgiving.

    If he can begin to forgive himself and recognize the complex feelings of that moment, then the story of Ken Burns and Mrs. Jennings can serve as both an example of the cruel, ongoing impact of racism in the United States, and also as a source of how change happens, albeit all too slowly. Surely Ken Burns’s unconscious reflection on this moment with Mrs. Jennings has been writing itself, subliminally, into his documentaries, and through them, into our own views of American history.

    Burns mentioned toward the end of the lecture how African-American pioneers and geniuses such as Louis Armstrong and Jackie Robinson changed the racial views of many white Americans. But just as important, and perhaps more so, are the more complicated, daily interactions such as that between boyhood Ken Burns and Mrs. Jennings, experiences in which cold, dehumanizing stereotyping battles warm, humanizing sentiment. It takes constant work from us all for the latter to win.

    [With thanks to my always insightful wife for our conversation about the lecture.]

    DPLA: DPLA and the International Image Interoperability Framework

    Tue, 2016-05-10 18:15

    DPLA, along with representatives of a number of institutions including Stanford University, the Yale Center for British Art, the Bibliothèque nationale de France, and more, is presenting at Access to the World’s Images, a series of events related to the International Image Interoperability Framework (IIIF) in New York City, hosted by the Museum of Modern Art and the New York Academy of Medicine. The events will showcase how institutions are leveraging IIIF to reduce total cost and time to deploy image delivery solutions, while simultaneously improving end user experience with a new host of rich and dynamic features, and promote collaboration within the IIIF community through facilitated conversations and working group meetings.

    The IIIF community provides the following overview for its mission and goals:

    Access to image-based resources is fundamental to research, scholarship and the transmission of cultural knowledge. Digital images are a container for much of the information content in the Web-based delivery of images, books, newspapers, manuscripts, maps, scrolls, single sheet collections, and archival materials. Yet much of the Internet’s image-based resources are locked up in silos, with access restricted to bespoke, locally built applications. … IIIF has the following goals: to give scholars an unprecedented level of uniform and rich access to image-based resources hosted around the world; to define a set of common application programming interfaces that support interoperability between image repositories; and to develop, cultivate and document shared technologies, such as image servers and web clients, that provide a world-class user experience in viewing, comparing, manipulating and annotating images.

    Just as our Hubs are much more than data providers to DPLA, we are much more than an aggregation. Our core services, which are driven by metadata aggregation, are proving successful, as indicated by our traffic: 57% of it comes through the portal and 43% through our API. However, part of our larger role is to stand behind efforts that make cultural heritage materials easier to use and share by anyone who wants to use them. Accordingly, we believe it aligns with our larger mission to support the development and implementation of standards and software that encourage interoperability and reuse of the materials we aggregate. From our perspective, DPLA's support of the International Image Interoperability Framework aligns naturally with these efforts, and we have been encouraging its adoption across our network.

    DPLA has a number of motivations for promoting the adoption of IIIF within our network of partners. As noted, we see a high level of value in the use of open standards, both within our community and within allied communities in which we participate. IIIF also allows us to begin to address some larger needs at DPLA, particularly improving the user experience of accessing, delivering, reusing, and annotating image resources from our Hubs and partners. Our experience has shown that this work will have high value internally at DPLA as well, allowing us to more easily reuse image content in exhibitions and other curatorial contexts. In particular, user testing of the DPLA portal has made us aware of user experience issues, many of which relate to the “last mile” of delivery for the resources we have aggregated. While some of these issues are not specific to images, they affect images for us consistently. Across the board, we have found that access to images is often unclear for many users, especially once they land on a DPLA item page. This is true not only for portal users but for API users as well: without a reliable API, identifying images that offer zoomable views or specific sizes is essentially impossible right now without crawling the remote site. The consistency that IIIF would bring to the DPLA community would open up greater possibilities for reuse.
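
    To make the "zoomable views or specific sizes" point concrete, here is a minimal sketch of what a IIIF Image API endpoint enables: a client reads the image's info.json and can then request derivatives at whatever size it needs through predictable URLs. The service URL below is a made-up placeholder, not a real DPLA or Hub endpoint.

        import requests

        # Hypothetical IIIF Image API service URL for one image; a real one would
        # come from a Hub's metadata or a IIIF manifest.
        service = "https://iiif.example.org/images/some-identifier"

        # info.json describes the image (full width/height, available sizes, tiling).
        info = requests.get(service + "/info.json").json()
        print(info["width"], info["height"])

        # Any client can then build a URL for a derivative at a specific size,
        # following the IIIF Image API pattern {region}/{size}/{rotation}/{quality}.{format}.
        # Here: the full image, scaled to fit within 400x400 pixels.
        thumbnail_url = service + "/full/!400,400/0/default.jpg"
        response = requests.get(thumbnail_url)
        with open("thumbnail.jpg", "wb") as f:
            f.write(response.content)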

    An image from photographer and moving image pioneer Eadweard Muybridge’s Animal Locomotion series, contributed by Boston Public Library via Digital Commonwealth, http://dp.la/item/7fba90b480b0bcd8ff414238b4c86773.

    Currently, five DPLA Hubs have production IIIF implementations. Three of them (Digital Commonwealth, Harvard University Library, and the Internet Archive) have production services running. Two additional Hubs have implementations of the Image API; for example, California Digital Library's new version of Calisphere supports IIIF for a subset of images from specific institutions in the University of California system, including UC Riverside and UC Merced. Finally, two more Hubs, the David Rumsey Map Collection and ARTstor, have been working to push their implementations to production.

    Nonetheless, there are some issues that serve as at least minor barriers to an exhaustive rollout of IIIF at DPLA, regardless of the value and possibilities that implementation would provide. First, DPLA needs to establish how best to represent IIIF-accessible resources within the DPLA Metadata Application Profile. We have been communicating with both the IIIF community and staff at Europeana and the National Library of Wales about potential modeling decisions, and significant progress was made at the IIIF meetings in Ghent, Belgium in December 2015. Second, DPLA doesn't always know that IIIF resources exist for a given item we've harvested, often because the institution hasn't specified this in the metadata about the item. We are interested in hearing from Hubs and institutions willing to work with us to determine a reliable and consistent way to do this. We are also concerned about the potential user experience mismatch between IIIF-accessible resources and those which are not, and about how best to provide guidance on understanding usage statistics for IIIF image access. We hope to address these questions in conversation with the IIIF and DPLA communities in the coming year. Finally, we realize that IIIF might be a high bar to cross for some institutions, so we have been considering a number of options, including speaking with vendors and possibly providing an IIIF service, to make it easier to expose image resources effectively.

    We are enthusiastic about the possibilities, and hope to be able to prototype IIIF implementations with content from DPLA partners in the coming months. We are interested in hearing your thoughts on this, particularly if you're part of the DPLA network and have implemented or are considering implementing IIIF, so please contact us!

    District Dispatch: Ray Patterson would smile

    Tue, 2016-05-10 18:02

    Members of the Copyright Review Management System (CRMS) team.

    The Copyright Review Management System (CRMS), managed by Melissa Levine, Head Copyright Officer, and staff at the University of Michigan, is the 2016 L. Ray Patterson Award winner. The University of Michigan library staff created the CRMS to identify works in the HathiTrust digital library collection that are in the public domain. Thus far, 323,334 titles have been identified, which is really quite astonishing. All of these works are now available as full text in the HathiTrust collection. They are free for anyone to use in any way that they want because they are not protected by copyright. What makes this effort especially nice is that scores of librarians from across the country have been trained to use the CRMS and contribute to the effort.

    One might think identifying works in the public domain should be an easy thing to do – just identify works published before 1923 – but it is a lot more complicated than that. The term of copyright protection has changed several times in the last 50 years, shifting the protection regime from a set number of years (initially 14 years, with one opportunity for an additional 14-year renewal) to one based on the life of the author plus 50 years, and then 70 years in 2002. In addition, legal requirements necessary to formally obtain copyright protection, such as registration and notice, were eliminated because they were thought to be too burdensome for authors and other rights holders. When such formalities were required, however, failure to renew and/or failure to place a copyright notice on the physical work led to many works moving into the public domain. It is a complete mess, compounded by the fact that authoritative records about death dates and copyright transfer records do not exist. The bottom line is that works published after 1923 may be in the public domain, but a thorough investigation is necessary. Even then, it is often impossible to really know for sure if a title is in the public domain because of a lack of evidence. And in rarer, screwy situations, works published before 1923 may still be protected.

    ALA’s Office for Information Technology Policy (OITP) gives the Patterson award to an individual or group that demonstrates dedication to a balanced U.S. copyright system through advocacy for a robust fair use doctrine and public domain. The award is named after L. Ray Patterson, a key legal figure who explained and justified the importance of users’ rights to information. Patterson helped articulate that copyright law was shifting from its original purpose and favoring the interests of copyright holders over those of the general public. Peter Hirtle, Affiliate Fellow at Harvard University’s Berkman Center for Internet & Society, was one of several people who nominated the CRMS for the award. He remarked, “Among his many accomplishments, Patterson recognized the critical importance of the public domain. I would be hard-pressed to think of a group that has done more to assist librarians in identifying, understanding, and expanding the public domain than CRMS.”

    Congratulations to all! Additional information on the L. Ray Patterson Copyright Award is on the ALA Web site.

    The post Ray Patterson would smile appeared first on District Dispatch.

    HangingTogether: Mapping the Role that Technology Plays in Your Life: The Visitors and Residents App

    Tue, 2016-05-10 17:55

    Do you ever wonder about the role that technology plays in your life and what services and apps you use? OCLC began collaborating on the Digital Visitors and Residents (V&R) project with funding from Jisc in 2011 to investigate how US and UK individuals engage with technology and how this engagement may or may not change as the individuals transition through their educational stages (White and Connaway 2011-2014). Since that time we have broadened the research to include interviews with individuals in Spain and Italy, enabling a comparative analysis of geographical and cultural differences. The OCLC team also has conducted an online survey of approximately 150 high school, undergraduate, and graduate students and college and university faculty. We hope to have these data analyzed so that we can share our findings.

    We also began conducting mapping sessions with students, librarians, and faculty using the Visitors and Residents framework and differentiating between engagement in professional/academic and personal contexts and situations. Participation in the mapping exercise is a way for individuals to become aware of how they work, play, and interact with others in a digital environment. If the maps are shared with others, it can help individuals better understand why communication seems to work well with some, but not with others.

    These mapping sessions were conducted using paper and pencil or pen. Examples of these maps are included in the EDUCAUSE Review paper, “I always stick with the first thing that comes up on Google…” Where People Go for Information, What They Use, and Why (Connaway, Lanclos, and Hood 2013). In order to collect and analyze these handwritten maps, we had to ask their creators to photograph them and email the photos to us.

    After much discussion with my colleague, William Harvey, PhD, OCLC Consulting Engineer, he developed an app that can be used on most smartphones, tablets, laptops, and desktop computers. William co-led the usability testing of the app with Mike Prasse, OCLC Lead User Experience Researcher. High school, undergraduate, and graduate students, faculty, and librarians used the app on different devices and provided feedback on what was fun, what worked, and what functionalities they thought we should add. Based on their feedback, the app was enhanced and now is available at oc.lc/VRmap, with comic book instructions. A video also was created by Carey Champoux, OCLC Video Content Manager, and Andy Havens, OCLC Manager, Branding and Creative Services, to explain how to use the V&R mapping app.

     

    Once individuals complete their maps, they may share them with others and submit them to OCLC Research. If maps are submitted, we will add those created using the app to the other maps collected and analyze them in the aggregate, anonymizing any identifying information. Those who map their patterns of communication and engagement with technology and submit them to OCLC Research will help us make informed recommendations to library staff for developing services and technologies that better fit library users' and potential users' personal and academic lifestyles and position the library in the life of its users.

    As part of the research, I have been conducting V&R mapping sessions with students, faculty, and librarians. After the individuals complete their maps, we display the maps of those who are interested in sharing and discussing them with the group. I conduct the sessions in much the same way one would conduct a semi-structured interview.* The individuals talk about what they included in their maps and I probe and ask more questions based upon their discussion.

    Some of the students have been very surprised at the amount of time they spend online. One doctoral student at a US university was very surprised that she was able to draw every icon for every app or social media site from memory. She commented, “I spend way too much time online and using social media than I ever thought I did.”

    Others have discussed work-arounds for the library web page and catalog, which I have been able to share with the library staff so that they can make changes to the system or interface. Several doctoral students said that they could not figure out how to email or text themselves bibliographic citations that they found in the university online catalog so they took photos of the display on the library computer screens and texted or emailed the information to themselves. They said they wanted to use their smartphones since they are more convenient than having to take out their laptops or tablets. This also has implications for evaluating how the library web page and catalog display on smartphones, which is the preferred device for many individuals.

    Conducting the mapping sessions as semi-structured interviews in a group also has made me aware that not everyone understands the definitions that we have used for Visitors and Residents. We define the visitor mode as one in which “people treat the web as a series of tools. They decide what they want to achieve, choose an appropriate online tool, and then log off. They leave no social trace of themselves online. In resident mode, people live a portion of their lives online and approach the web as a place where they can express themselves and spend time with people. When acting as residents, people visit social networking platforms, and aspects of their digital identity maintain a presence even when they're not online through their social media profiles” (Connaway, Lanclos, and Hood 2013). However, some individuals who participate in the mapping exercise relate the terms visitor and resident to the amount of time one spends engaging with a device, app, etc. This equates to a visitor not using the device or app much and to a resident using the device or app most or all of the time, which are not the intended definitions of the terms. Based on this misconception, I have been thinking about and talking to colleagues about changing the terminology. However, none of the terms suggested seem to be descriptive enough. I welcome any ideas, discussion, and thoughts on new terminology for visitors and residents that would more accurately describe the online presence or lack of online visibility.

    This brings up something else that I have been pondering about the visitors and residents framework. That is the fact that we are missing the opportunity to capture individuals’ engagement with the physical environment and resources. I had thought of this early in the individual semi-structured interview data collection stage when the importance of the face-to-face and human contact emerged from the data and were included in our code book and analysis. However, these are not captured in the mapping exercises, which became more evident as I began structuring the group mapping sessions as semi-structured interviews. I am struggling with how to depict the physical environment and resources in the V&R map. Again, I welcome any ideas, discussion, and thoughts on this.

    *Connaway and Radford (Forthcoming) define semi-structured interviews as an interview “in which control is shared and questions are open-ended.”

    References

    Connaway, Lynn Silipigni, Donna Lanclos, and Erin M. Hood. 2013. “’I always stick with the first thing that comes up on Google…’ Where  People Go for Information, What They Use, and Why.” EDUCAUSE Review Online (December 6), http://www.educause.edu/ero/article/i-always-stick-first-thing-comes-google-where-people-go-information-what-they-use-and-why.

    Connaway, Lynn Silipigni, and Marie L. Radford. Forthcoming. Basic Research Methods in Library and Information Science. 6th ed. Santa Barbara, CA: Libraries Unlimited.

    About Lynn Connaway

    Senior Research Scientist at OCLC Research. I study how people get & use information & engage with technology.


    Tim Ribaric: Another hot take on discovery systems

    Tue, 2016-05-10 17:15

    Happened again.


    DPLA: Announcing the second class of the Ebooks Curation Corps

    Tue, 2016-05-10 17:09

    We are very pleased to welcome and introduce six new members of the DPLA Curation Corps, a group of librarians and other information professionals from around the country who will continue to select and curate the books available in Open eBooks and provide guidance for DPLA’s ongoing work to help maximize access to our collections of ebooks and library ebooks overall.

    Selected from a strong group of applicants, the newest Curation Corps members will join returning members to select new content for the Open eBooks app that represents diverse, compelling, and appropriately targeted books for students from elementary to high school. Members represent a wide range of experiences, serving diverse communities, special education classrooms, and military families. For more on our new and returning members, visit the Curation Corps page.

    The Curation Corps will select top content, highlight kid favorites, and help categorize titles to make them more discoverable inside the app to ensure that there is something for every child to read, enjoy, and learn from. Members also provide outreach about the program within their communities via blogs, social media, professional affiliations, and regional presentations.

    To learn more about Open eBooks, visit the Open eBooks page. If you’re interested in learning more about DPLA’s efforts to expand access to eBooks, visit the DPLA and eBooks page. Questions? Email us.

    2016 Curation Corps Members:

    Visit the Curation Corps page for full bios.

    • Elisha Brookover *
    • Edith Campbell
    • Patricia Dollisch *
    • Deb Fidali *
    • Rob Fleisher *
    • Daniela Guardiola
    • Dorothy M. Hughes
    • Emily Kean
    • Savannah Kitchens
    • Lucretia Miller
    • Maura O’Toole
    • Vandy Pacetti-Donelson
    • Kelly McGorray Roberts *
    • Jessica Zillhart *

    *Denotes new member

     

    Jonathan Rochkind: Commercial gmail plugin to turn gmail into a help desk

    Tue, 2016-05-10 15:46

    This looks like an interesting product; I didn’t even know this level of gmail plugin was supported by gmail.

    http://www.keeping.com/

    Help desk ticketing, with assignment, priorities, notes, and built-in response-time metrics, all within your gmail inbox (support emails are in a separate tab from your regular email).

    The cost is $49/month for the ‘unlimited’ plan; a cheaper $29/month plan is capped at 5 users.

    I think this product could be a good fit for libraries dealing with patron reference/help questions; many libraries don’t have very user-friendly interfaces for this at present. I think the price is pretty reasonable at $1000/year, probably cheaper than most alternatives and within the budgets of many libraries.


    Filed under: General

    FOSS4Lib Recent Releases: Format Identification for Digital Objects - 1.3.3

    Tue, 2016-05-10 14:33

    Last updated May 10, 2016. Created by Peter Murray on May 10, 2016.

    Package: Format Identification for Digital Objects
    Release Date: Tuesday, May 10, 2016

    William Denton: Conforguring dotfiles

    Tue, 2016-05-10 14:24

    I’ve added my dotfiles to Conforguration: they are there as raw files in the dotfiles directory, and conforguration.org has code blocks that will put them in place on localhost or remote machines.

    I did some general cleanup of the file as well. There’s a lot of duplication, which I think some metaprogramming might fix, but for now it works and does what I need. My .bashrc is now finally the same everywhere (custom settings go in .bash.$HOSTNAME.rc), which is a plus.
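
    The usual way to get that split is to have the shared .bashrc source a per-host file at the end. A minimal sketch of that pattern is below; it assumes the naming convention mentioned above and is not necessarily the exact code in Conforguration's dotfiles.

        # At the end of the shared ~/.bashrc: pull in host-specific settings if present.
        host_rc="$HOME/.bash.$HOSTNAME.rc"
        if [ -f "$host_rc" ]; then
            # Machine-specific paths, aliases, and prompt tweaks live here,
            # so the shared .bashrc can stay identical on every machine.
            . "$host_rc"
        fi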
