Inc.’s John Brandon recently wrote about “The Slow, Sad, and Ultimately Predictable Decline of 3D Printing.” Uh, not so fast.
3D Printing is just getting started. For libraries whose adopted mission is to introduce people to emerging technologies, this is a fantastic opportunity to do so. But it has to be done right.

Another dead end?
Brandon cites a few reasons for his pessimism:
- 3D printed objects are low quality and the printers are finicky
- 3D printing growth is falling behind initial estimates
- people in manufacturing are not impressed
- and the costs are too high
I won’t get into all that’s wrong with this analysis; most of it strikes me as incorrect or, at the very least, a temporary problem typical of a new technology. Instead, I’d like to discuss this in the library maker context. And in fact, you can apply these ideas to any tech project.

How to make failure a win—no matter what
Libraries are quick to jump on tech. Remember those QR Codes that would revolutionize mobile access? Did your library consider a Second Life branch? How about those Chromebooks!
Inevitably, these experiments are going to fail. But that’s okay.
As this blog often suggests, failure is a win when doing so teaches you something. Experimenting is the first step in the process of discovery. And that’s really what all these kinds of projects need to be.
In the case of a 3D Printing project at your library, it’s important to keep this notion front and center. A 3D Printing pilot with the goal of introducing the public to the technology can be successful if people simply try it out. That seems easy enough. But to be really successful, even this kind of basic 3D Printing project needs to have a fair amount of up-front planning attached to it.
Chicago Public Library created a successful Maker Lab. Their program was pretty simple: Hold regular classes showing people how to use the 3D printers and then allow those that completed the introductory course to use the printers in open studio lab times. When I tried this out at CPL, it was quite difficult to get a spot in the class due to popularity. The grant-funded project was so successful, based on the number of attendees, that it was extended and continues to this day.
As a grant-funded endeavor, CPL likely wrote out the specifics before any money was handed over. But even an internally-funded project should do this. Keep the goals simple and clear so expectations on the front line match those up the chain of command. Figure out what your measurements of success are before you even purchase the first printer. Be realistic. Always document everything. And return to that documentation throughout the project’s timeline.

Taking it to the next level
San Diego Public Library is an example of a maker project that went to the next level. Uyen Tran saw an opportunity to merge startup seminars with the library’s maker tools. She brought aspiring entrepreneurs in for a Startup Weekend event where budding innovators learned how the library could be a resource for them as they launched their companies. 3D printers were part of this successful program.
It’s important to note that Uyen already had the maker lab in place before she launched this project. And it would be risky for a library to skip the establishment of a rudimentary 3D printer program before trying for this more ambitious program.
But it could be done if that library were well organized, with solid project managers and deep roots in the target community. That’s a tall order to fill, though.

What’s the worst thing that could go wrong?
The worst thing that could go wrong is doubling down on failure: repeating one failed project after another without changing the flawed approach behind it.
I’d also note that libraries are often out ahead of the public on these technologies, so dead ends are inevitable. To address this, add one more tactic to your tech projects: listening.
The public has lots of concerns about a variety of things. If you ask them, they’ll tell you all about them. Many of their concerns aren’t directly related to libraries, but we can often help. We have permission to do so. People trust us. It’s a great position to be in.
But we have to ask them to tell us what’s on their mind. We have to listen. And then we need to think creatively.
Listening and thinking outside the box were how San Diego took their 3D printers to the next level.

The Long Future of 3D Printing
The Wright Brothers’ first flight covered only 120 feet. Two years later, they were flying 24 miles at a time. These initial attempts looked nothing like the jet age, and yet the technology of flight was born from these humble experiments.
Already, 3D printing is being adopted in multiple industries. Artists are using it to prototype their designs. Astronauts are using it to print parts aboard the International Space Station. Bio-engineers are now looking at printing stem-cell structures to replace organs and bones. We’re decades away from the jet age of 3D printing, but this tech is here to stay.
John Brandon’s read is incorrect simply because he’s looking at the current state and not seeing the long-term promise. When he asks a Ford engineer for his take on 3D printing in the assembly process, he gets a smirk. Ford is not exactly a hotbed of innovation. What kind of reaction would he have gotten from an engineer at Tesla? At Apple? Fundamentally, he’s approaching 3D printers from the wrong perspective, and that’s why the technology looks doomed to him.
Libraries should not make this mistake. The world is changing ever more quickly and the public needs us to help them navigate the new frontier. We need to do this methodically, with careful planning and a good dose of optimism.
Starting in 2012, the British Library replaced its interlibrary loan service with a licensed document delivery agreement with the International Association of Scientific, Technical & Medical Publishers (STM) and the Publishers Association. Perhaps to improve turnaround time and provide better service, perhaps to save money by outsourcing, or perhaps out of fear of infringement, the British Library agreed to switch to the International Non-Commercial Document Supply (INCD) service. Its previous interlibrary loan service was extremely popular and apparently lawful, because UK copyright law has an interlibrary loan copyright exception similar to the one in US copyright law: libraries may send journal articles to other libraries to meet a user’s request. But did it cover international ILL?
The abandoned interlibrary loan service provided resources to 59 countries whose libraries did not hold the materials requested by their faculty, researchers, and students. As one of the largest research collections in the world, the British Library was naturally heavily relied upon for interlibrary loan. After the move to the INCD service, however, the once-popular service deteriorated in spectacular fashion, as detailed by Teresa Hackett of Electronic Information for Libraries (EIFL). In her blog post entitled “Licensed to Fail,” Hackett describes the swift demise of the INCD service and, through a freedom of information request, has the data to bolster her argument. You must read it, though you likely will not be surprised.
Back in 2012, when announcing the INCD partnership, Michael Mabe, CEO of STM, said that “the British Library framework license (INCD) will give publishers, including our members, contractual control over the international cross-border delivery of copies from their material via an established and respected document supply service. It will also allow the British Library to improve the service, and delivery times, available to its authorized users.” Alas, the British Library cancelled the service this month. It did not fit the bill, dramatically reducing access to research materials (while delivering on publisher contractual control).
One wonders. Maybe this explains the popularity of Sci-Hub.
For the past three weeks, I’ve been doing a lot of work on MarcEdit. These initial changes affect just the Windows and Linux versions of MarcEdit. I’ll be taking some time tomorrow and Wednesday to update the Mac version. The current changes are as follows:
* Enhancement: Language files have been updated
* Enhancement: Command-line tool: -task option added to support tasks being run via the command-line.
* Enhancement: Command-line tool: -clean and -validate options updated to support structure validation.
* Enhancement: Alma integration: Updated version numbers and cleaned up some windowing in the initial release.
* Enhancement: Small update to the validation rules file.
* Enhancement: Update to the linked data rules file around music headings processing.
* Enhancement: Linked Data Platform: collections information has been moved into the configuration file. This will allow local indexes to be added so long as they support a json return.
* Enhancement: Merge Records — 001 matching now looks at the 035 and included OCLC numbers by default.
* Enhancement: MarcEngine: Updated the engine to accommodate invalid data in the LDR (leader).
* Enhancement: MARC SQL Explorer — added an option to allow MySQL databases to be created as UTF-8.
* Enhancement: Handful of odd UI changes.
You can get the update from the downloads page (http://marcedit.reeset.net/downloads) or via the automated update tools.
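The Merge Records change above (001 matching consulting the 035) hinges on normalizing the several forms an OCLC number can take in an 035 $a, such as (OCoLC)12345, (OCoLC)ocm12345, or ocn12345. This is not MarcEdit’s actual code, just a rough sketch of the kind of normalization such matching implies:

```python
import re

# Common 035 $a forms for OCLC numbers: "(OCoLC)12345", "(OCoLC)ocm12345",
# "ocn12345", "on12345", sometimes with leading zeros.
_OCLC_RE = re.compile(r'^(?:\(OCoLC\))?\s*(?:ocm|ocn|on)?0*(\d+)$', re.IGNORECASE)

def normalize_oclc(value):
    """Reduce an 035 $a value to bare digits for matching; None if not an OCLC number."""
    match = _OCLC_RE.match(value.strip())
    return match.group(1) if match else None
```

With normalization like this, all the variant forms of the same OCLC number collapse to one key, so records can be matched across the 001 and 035.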
MarcEdit’s command-line tool has always had the ability to run validation tasks against the MarcEdit rules file. However, the program hasn’t included access to the cleaning functions of the validator. As of the last update, this has changed. If the -validate command is invoked without a rules file defined, the program will validate the structure of the data. If the -clean option is passed, the program will remove invalid structural data from the file.
Here’s an example of the command:
>> cmarcedit.exe -s "C:\Users\rees\Desktop\CLA_UCB 2016\Data File\sample data\bad_sample_records.mrc" -validate
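Assuming the -clean option follows the same -s (source) and -d (destination) pattern as the tool’s other options (the paths here are illustrative, not from the release notes), a companion example might look like:

```shell
cmarcedit.exe -s "C:\Users\rees\Desktop\bad_sample_records.mrc" -d "C:\Users\rees\Desktop\cleaned_records.mrc" -clean
```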
MarcEdit’s task list functionality has made doing repetitive tasks in MarcEdit a fairly simple process. But one limitation has always been that the tasks must be run from within MarcEdit. Well, that limitation has been lifted. As of the last update, a new option has been added to the command-line tool: -task. When run with a path to a task file, MarcEdit will perform the task from the command-line.
Here’s an example of a command:
cmarcedit.exe -s "C:\Users\rees\Desktop\withcallnumbers.mrk" -d "C:\Users\rees\Desktop\remote_task.mrk" -task "C:\Users\rees\AppData\Roaming\marcedit\macros\tasksfile-2016_06_17_190213223.txt"
This functionality is only available in the Windows and Linux versions of the application.
Journal of Web Librarianship: Pakistani University Library Web Sites: Features, Contents, and Maintenance Issues
Muhammad Abbas Ganaee
Do you have an inventive VIVO application or exemplary linked open data set? Show off your creativity at the VIVO conference! Submit your work to the VIVO App or Linked Open Data Contests and give it the recognition that it deserves. Winners will be announced and recognized at the conference. Submissions are due by August 1, 2016. Instructions can be found here.
Book your room now for VIVO2016
FOSS4Lib Upcoming Events: FOLIO Open Source Project to build a Library Services Platform – Questions and Answers
Last updated July 11, 2016. Created by Peter Murray on July 11, 2016.
Open Library Community Forum: Wednesday, July 13, 2016, at 11 AM EDT/3 PM GMT
Please come join the Open Library Community Forum!
Speakers will answer questions about FOLIO, the open source project to build a library services platform (LSP), and attendees can learn more about the project as well as how they can participate. Members of the audience are welcome to submit questions during the Forum. Questions not answered during the Forum will be answered in a soon-to-come blog posting from the FOLIO Project Leaders.
A community collaboration to develop an open source Library Services Platform (LSP) designed for innovation.

Package Type: Integrated Library System
License: Apache 2.0
Development Status: In Development
Academic libraries have long provided workshops that focus on research skills and tools to the community. Topics often include citation software or specific database search strategies. Increasingly, however, libraries are offering workshops on topics that some may consider untraditional or outside the natural home of the library. These topics include using R and other analysis packages, data visualization software, and GIS technology training, to name a few. Librarians are becoming trained as Data and Software Carpentry instructors in order to pull from their established lesson plans and become part of a larger instructional community. Librarians are also partnering with non-profit groups like Mozilla’s Science Lab to facilitate research and learning communities.
Traditional workshops have generally been conceived and executed by librarians in the library. Collaborating with outside groups like Software Carpentry (SWC) and Mozilla is a relatively new endeavor. As an example, certified trainers from SWC can come to campus and teach a topic from their course portfolio (e.g. using SQL, Python, R, Git). These workshops may or may not have a cost associated with them and are generally open to the campus community. From what I know, the library is typically the lead organizer of these events. This shouldn’t be terribly surprising. Librarians are often very aware of the research hurdles that faculty encounter, or what research skills aren’t being taught in the classroom to students (more on this later).
Librarians are helpers. If you have some biology background, it can be useful to think of librarians as chaperone proteins: proteins that help other proteins fold into their functional shape. Librarians act in the same way, guiding and helping people to be more prepared to do effective research. We may not be altering their DNA, but we are helping them bend in new ways and take on different perspectives. When we see a skills gap, we think about how we can help. But workshops don’t just *spring* into being. They take a huge amount of planning and coordination. Librarians, on top of everything else we do, pitch the idea to administration and other stakeholders on campus; coordinate the space, timing, refreshments, registration, and travel for the instructors (if they aren’t available in-house); and advocate for the funding to make the event free to the community. A recent listserv discussion about hosting SWC workshops settled on a recommended minimum lead time of six weeks. The workshops have been hugely successful at the institutions responding on the list, and there are even plans for future Library Carpentry events.
A colleague once said that everything librarians do in instruction is something the disciplinary faculty should be doing in the classroom anyway. That is, the research skills workshops, the use of a reference manager, searching databases, and data management best practices are all appropriately – and possibly more appropriately – taught in the classroom by the professor for the subject. While he is completely correct, that is most certainly not happening. We know this because faculty send their students to the library for help. They do this because they lack curricular time to cover any of these topics in depth, and they lack professional development time to keep abreast of changes in certain research methods and technologies. And because these are all things that librarians should have expertise in. The beauty of our profession is that information is the coin of the realm for us, regardless of its form or subject. With minimal effort, we should be able to navigate information sources with precision and accuracy. This is one of the reasons why, time and again, the library is considered the intellectual center, the hub, or the heart of the university. Have an information need? We got you. Whether those information sources are code in GitHub, data in spreadsheets, or article surrogates in databases, we should be able to chaperone our users through that process.
All of this is to the good, as far as I am concerned. Yet I have a persistent niggle at the back of my mind that libraries are too often taking a passive posture. [Sidebar: I fully admit that this post is written from a place of feeling, of suspicions and anecdotes, and not from empirical data. Therefore, I am both uncomfortable writing it, yet unable to turn away from it.] My concern is that as libraries take on these workshops because there is a need on campus for discipline-agnostic learning experiences, we (as a community) do so without really establishing what the expectations and compensation of an academic library are, or should be. This is a natural extension of the “what types of positions should libraries provide/support?” question that seems to persist. How much of this response is based on the work of individuals volunteering to meet needs, stretching the work to fit into a job description or existing workloads, and ultimately putting user needs ahead of organizational health? I am not advocating that we ignore these needs; rather, I am advocating that we integrate the support for these initiatives within the organization, that we systematize it, and that we own our expertise in it.
This brings me back to the idea of workshops and how we claim ownership of them. Are libraries providing these workshops only because no one else on campus is meeting the need? Or are we asserting our expertise in the domain of information/data shepherding and producing these workshops because the library is the best home for them, not a home by default? And if we are making this assertion, then have we positioned our people to be supported in the continual professional development that this demands? Have we set up mechanisms within the library and within the university for this work to be appropriately rewarded? The end result may be the same – say, providing workshops on R – but the motivation and framing of the service is important.
Information is our domain. We navigate its currents and ride its waves. It is ever changing and evolving, as we must be. And while we must be agile and nimble, we must also be institutionally supported and rewarded. I wonder if libraries can table the self-reflection and self-doubt regarding the appropriateness of our services (see everything ever written regarding libraries and data, digital humanities, digital scholarship, altmetrics, etc.) and instead advocate for the resourcing and recognition that our expertise warrants.
Library of Congress: The Signal: FADGI MXF Video Specification Moves Up an Industry-organization Approval Ladder
The following is a guest post by Carl Fleischhauer, who organized the FADGI Audio-Visual Working Group in 2007. Fleischhauer recently retired from the Library of Congress.
The Federal Agencies Digitization Guidelines Initiative Audio-Visual Working Group is pleased to announce a milestone in the development of the AS-07 MXF video-preservation format specification. AS-07 has taken shape under the auspices of a not-for-profit trade group: the Advanced Media Workflow Association. AS-07 is now an official AMWA Proposed Specification, and the current version (Creative Commons CC BY-SA license and all) has been posted at the AMWA website. Although this writer retired from the Library in April, he helped shepherd the specification through this phase.
AS-07 is one of three new AMWA specifications announced in June. Another one is the organization’s new process rule book. The new AMWA process is patterned on the Requests for Comment approach used by the Internet Engineering Task Force. In the new AMWA scheme, there are three levels of maturity:
- Work in Progress
- Proposed Specification
- Specification
Two earlier versions of AS-07 were exposed for community comment at the AMWA website, beginning in September 2014, and this met the requirements for a Work in Progress. For more information about the history of AS-07, refer to the FADGI website.
AS-07 is a standards-based specification. For the most part it is a cookbook recipe for a particular subtype of the MXF standard. MXF stands for Material eXchange Format, and that format’s complex and lengthy set of rules and options is spelled out in more than thirty standards from the Society of Motion Picture and Television Engineers. AS-07 also enumerates a number of permitted encodings and other components, each of which is based on other standards from SMPTE, the International Organization for Standardization and International Electrotechnical Commission, the European Broadcast Union, and special White Paper documents from the British Broadcasting Corporation. It is no wonder that a cookbook recipe is called for!
Why the emphasis on standards? The short answer is that standards underpin interoperability, in the digital world just as surely as they have for, say, the dimensions of railroad tracks, so my boxcar will roll down your rail line. It is worth saying that, in our preservation context, interoperability has both current and future dimensions. Today, cooperating archives may exchange preservation master files and these must be readable by both parties. More important, however, is temporal interoperability: today’s content must be readable by the archive of tomorrow. AS-07’s extensive use of standards-based design supports both types of interoperability.
At a high level, the objectives for video archival master files (aka preservation masters) are like those for the digital preservation reformatting for other categories of content. Archives want their masters to reproduce picture and sound at very high levels of quality. In addition, the preservation masters should be complete and authentic copies of the originals, i.e., in the case of video, they should retain components like multiple timecodes, closed captions and multiple soundtracks. And–back to temporal interoperability–the files must support access by future users.
What are some of the features of AS-07? The specification emphasizes encodings that ensure the highest possible quality of picture and sound, including requirements for declaring the correct aspect ratio and handling the intricacies of interlaced picture, a characteristic of pre-digital video. Beyond those elements, AS-07 also specifies options for the following:
- Captions and Subtitles
  - retain and provide carriage for captions and subtitles
  - translate binary-format captions and subtitles to XML Timed Text
- Audio Track Layout and Labeling
  - provide options for audio track layout and labeling
- Content Integrity
  - provide support for within-file content integrity data
- Timecode
  - provide coherent master timecode
  - retain legacy timecode
  - label multiple timecodes
- Embedding Text-Based and Binary Data
  - provide carriage of supplementary metadata (text-based data)
  - provide carriage of captions and subtitles in the form of Timed Text (text-based data)
  - provide carriage of a manifest (text-based data)
  - provide carriage of still images, documents, EBU STL, etc. (binary data)
- Language Tagging
  - provide a means to tag Timed Text languages
  - retain language tagging associated with legacy binary caption or subtitle data
  - provide a means to tag soundtrack languages
- provide support for segmented content
AS-07 has not been exclusively developed in writing (“on paper,” in oldspeak). The format is based on pioneering work done by Jim Lindner in the early 2000s, when he developed a system called SAMMA (System for the Automated Migration of Media Archives). SAMMA produces MXF files for which the picture data is encoded as lossless JPEG 2000 frame images. It also operates in a robotic mode, to support high-volume reformatting.
Jim’s design for SAMMA was motivated by the forecasts for high-volume reformatting at the Library’s audio-visual center in Culpeper, Virginia (today’s Packard Campus for Audio-Visual Conservation), which was then in its planning phase. The Packard Campus began operation in 2007 and, since then, more than 160,000 videotapes have been reformatted using the SAMMA system. AS-07 is very much a refinement and elaboration of the SAMMA format. In order to get a better look at those refinements, in 2015, the AS-07 team commissioned the production of custom-made sample files.
What next? The interesting — and I think proper — feature of the new AMWA process concerns the movement from Proposed Specification to Specification. The rulebook lists several bullets as requirements but the gist is this: you gotta have implementation and adoption. AS-07 at this time is, metaphorically, a recipe ready to test in the kitchen. Now it is time to cook and taste the pudding. After there are instances of implementation and adoption, these will be reported to the AMWA board with a request to advance AS-07 to the level of [approved] Specification. (Of course, if the process reveals problems, the specification will be modified.)
The first steps toward implementation are under way. On FADGI’s behalf, the Library has contracted with Audiovisual Preservation Solutions and EVS to assemble additional test files, and to have them reviewed by an outside expert. At the same time, James Snyder, the Senior Systems Administrator at the Packard Campus, is working with vendors to do some actual workups. (James oversees the campus’s use of SAMMA and has been an active AS-07 team member.) We trust that these implementation efforts will bear fruit during the remaining months of 2016.
D-Lib Magazine has just published my analysis of the 2015 International Linked Data Survey for Implementers.* I published the results of the 2014 linked data survey in a series of blog posts here between 28 August 2014 and 8 September 2014 (1 — Who’s doing it; 2 — Examples in production; 3 — Why and what institutions are consuming; 4 — Why and what institutions are publishing; 5 — Technical details; 6 — Advice from the implementers). Discussions with OCLC Research Library Partners metadata managers prompted these surveys, as they thought there were more linked data projects that had been implemented than they were aware of.
I had two objectives for repeating the 2014 survey in 2015:
- Increase survey participation, especially by national libraries.
- Identify changes in the linked data environment, as described in my 1 June 2015 posting, What’s changed in linked data implementations?
We met the first objective. Few national libraries were represented among the 48 institutions responding to the 2014 survey (those that had implemented or were implementing linked data projects or services), and several commentators noted their absence. To address this gap, we conducted the 2015 survey earlier, between 1 June and 31 July (rather than 7 July to 15 August, as in 2014). We were also more proactive in recruiting responses. We did indeed have increased participation, receiving responses from 71 institutions that had implemented or were implementing linked data projects or services, including 14 national libraries (compared to just 4 in 2014). The number of projects described also increased, from 76 in 2014 to 112 in 2015.
The idea that we could compare responses to the same set of questions to identify changes or trends proved to be unrealistic for three reasons:
- Although I asked each responding institution in 2014 to also respond to the 2015 survey, only 29 did so. This is too small a pool to provide any over-arching “changes in the linked data environment.”
- One year is insufficient to note significant changes.
- Although repeat respondents had access to their responses in 2014, a number of their 2015 responses differed in areas that were not likely to change within a year (such as licenses, platforms, serializations, vocabularies used). It was unclear whether they really represented a change or just a different answer.
It is easier to note what did not change between the two surveys. For example:
- Most linked data projects or services both consume and publish linked data. Those that only publish linked data (and do not consume it) are relatively few in both survey results.
- The chief motivations for publishing linked data are the same: expose data to a larger audience on the Web and demonstrate what could be done with datasets as linked data (80% or more of all respondents in each survey).
- Similarly, the chief motivations for consuming linked data are the same: provide local users with a richer experience and enhance local data by consuming linked data from other sources (74% or more of all respondents in each survey).
- Most respondents in each survey were libraries or networks of libraries. We had few responses from outside the library domain. In hindsight this is not surprising, as our social networks are with those who work in, for, or with libraries.
The 2015 survey results may be considered a partial snapshot of the (mostly) library linked data environment. Museums and digital humanities linked data projects are not well represented. I have been asked whether I plan to repeat the survey. I haven’t decided – what do you think?
If you’re interested in looking at the responses from institutions you consider your peers, or would like to analyze the results for yourself, all responses to both the 2015 and 2014 surveys (minus the contact information which we promised to keep confidential) are available at: http://www.oclc.org/content/dam/research/activities/linkeddata/oclc-research-linked-data-implementers-survey-2014.xlsx
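If you’d rather script the analysis, even a few lines get you started. A minimal sketch using only the standard library (export a sheet to CSV first; the column name below is hypothetical, so check the spreadsheet’s actual headers):

```python
import csv
from collections import Counter

def count_by_column(csv_path, column):
    """Tally survey responses by the value in one column (e.g. institution type)."""
    with open(csv_path, newline='', encoding='utf-8') as f:
        # Skip rows where the column is blank, then count each distinct value.
        return Counter(row[column] for row in csv.DictReader(f) if row.get(column))

# e.g. count_by_column('responses.csv', 'Institution type')
```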
* Full citation: Smith-Yoshimura, Karen. 2016. Analysis of International Linked Data Survey for Implementers. D-Lib Magazine 22 (7/8). doi:10.1045/july2016-smith-yoshimura

About Karen Smith-Yoshimura
Karen Smith-Yoshimura, senior program officer, works on topics related to creating and managing metadata with a focus on large research libraries and multilingual requirements.
DuraSpace News: VIVO Updates for July 10–VIVO16 Conference News, VIVO 1.9 and Vitro 1.9 Release Candidates, OpenVIVO Update
From Mike Conlon, VIVO Project Director
Apps Contest and Linked Data Contest. Do you have an application that uses VIVO data? Do you have a set of linked data that you can share? The VIVO Conference is holding its annual Application Contest and Linked Data Contest. You will receive an email this week with instructions for applying. Applications will be due August 1. It's easy to apply. Winners will be recognized in the conference program and in OpenVIVO!
Before we can posit any solutions to the problems that I have noted in these posts, we need to at least know what questions we are trying to answer. To me, the main question is:
What should happen between the search box and the bibliographic display?
Or as Pauline Cochrane asked: "Why should a user ever enter a search term that does not provide a link to the syndetic apparatus and a suggestion about how to proceed?" I really like the "suggestion about how to proceed" that she included there. Although I can think of some exceptions, I do consider this an important question.
If you took a course in reference work at library school (and perhaps such a thing is no longer taught - I don't know), then you learned a technique called "the reference interview." The Wikipedia article on this is not bad, and defines the concept as an interaction at the reference desk "in which the librarian responds to the user's initial explanation of his or her information need by first attempting to clarify that need and then by directing the user to appropriate information resources." The assumption of the reference interview is that the user arrives at the library with either an ill-formed query, or one that is not easily translated to the library's sources. Bill Katz's textbook "Introduction to Reference Work" makes the point bluntly:
"Be skeptical of the information the patron presents"
If we're so skeptical that the user could approach the library with the correct search in mind, why then do we think that giving the user a bare search box in which to put that poorly thought out or badly formulated search is a solution? This is another mind-boggler to me.
So back to our question: what SHOULD happen between the search box and the bibliographic display? This is not an easy question, and part of the difficulty is that there will not be one single right answer. Another difficulty is that we won't know a right answer until we try it, give it some time, open it up for tweaking, and carefully observe. That's the kind of thing Google does when it makes changes to its interface, but we have neither Google's money nor its network (we depend on vendor systems, which define what we can and cannot do with our catalog).
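As a thought experiment on that question, here is a toy sketch (mine, not a proposal from the literature) of the kind of intermediation Cochrane's quote implies: before dumping results on the user, match the query against known headings and surface broader and related terms as "suggestions about how to proceed." The headings and links below are invented for illustration, not real LCSH.

```python
# Toy syndetic lookup: map a keyword query onto subject headings and
# suggest neighboring terms. The heading data here is illustrative only.

HEADINGS = {
    "Existentialism": {"broader": ["Philosophy, Modern"], "related": ["Phenomenology"]},
    "Philosophy, Modern": {"broader": ["Philosophy"], "related": []},
    "Doorbells": {"broader": ["Signals and signaling"], "related": []},
}

def suggest(query: str) -> list[str]:
    """Return headings matching the query, plus broader/related terms to explore."""
    suggestions = []
    for heading, links in HEADINGS.items():
        if query.lower() in heading.lower():
            suggestions.append(heading)
            suggestions.extend(links["broader"])
            suggestions.extend(links["related"])
    return suggestions

print(suggest("existential"))
```

Even something this crude changes the interaction: instead of silently running a keyword match, the catalog answers with a map of where the query sits and where the user might go next.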
Since I don't have answers (I don't even have all of the questions) I'll pose some questions, but I really want input from any of you who have ideas on this, since your ideas are likely to be better informed than mine. What do we want to know about this problem and its possible solutions?
(Some of) Karen's Questions

Why have we stopped evolving subject access?
Is it that keyword access is simply easier for users to understand? Did the technology deceive us into thinking that a "syndetic apparatus" is unnecessary? Why have the cataloging rules and bibliographic description been given so much more of our profession's time and development resources than subject access has?
Is it too late to introduce knowledge organization to today's users?
The user of today is very different from the user of pre-computer times. Some of our users have never used a catalog with an obvious knowledge organization structure that they must/can navigate. Would they find such a structure intrusive? Or would they suddenly discover what they had been missing all along?
Can we successfully use the subject access that we already have in library records?
Some of the comments in the articles organized by Cochrane in my previous post were about problems in the Library of Congress Subject Headings (LCSH), in particular that the relationships between headings were incomplete and perhaps poorly designed. Since LCSH is what we have as headings, could we make them better? Another criticism was the sparsity of "see" references, once dictated by the difficulty of updating LCSH. Can this be ameliorated? Crowdsourced? Localized?
We still do not have machine-readable versions of the Library of Congress Classification (LCC), and the machine-readable Dewey Decimal Classification (DDC) has been taken off-line (and may be subject to licensing). Could we make use of LCC/DDC for knowledge navigation if they were available as machine-readable files?
Given that both LCSH and LCC/DDC have elements of post-composition and are primarily instructions for subject catalogers, could they be modified for end-user searching, or do we need to develop a different instrument altogether?
How can we measure success?
Without Google's user laboratory apparatus, the answer to this may be: we can't. At least, we cannot expect to have a definitive measure. How terrible would it be to continue to do as we do today and provide what we can, and presume that it is better than nothing? Would we really see, for example, a rise in use of library catalogs that would confirm that we have done "the right thing?"
Notes
* Modern Subject Access in the Online Age: Lesson 3
Author(s): Pauline A. Cochrane, Marcia J. Bates, Margaret Beckman, Hans H. Wellisch, Sanford Berman, Toni Petersen, and Stephen E. Wiberley, Jr.
Source: American Libraries, Vol. 15, No. 4 (Apr., 1984), pp. 250-252, 254-255
Stable URL: http://www.jstor.org/stable/25626708
 Katz, Bill. Introduction to Reference Work: Reference Services and Reference Processes. New York: McGraw-Hill, 1992. p. 82 http://www.worldcat.org/oclc/928951754. Cited in: Brown, Stephanie Willen. The Reference Interview: Theories and Practice. Library Philosophy and Practice 2008. ISSN 1522-0222
 One answer, although it doesn't explain everything, is economic: the cataloging rules are published by the professional association and are a revenue stream for it. That provides an incentive to create new editions of the rules. There is no economic gain in making updates to LCSH. As for the classifications, the big problem there is that they are permanently glued onto the physical volumes, making retroactive changes prohibitive. Even changes to descriptive cataloging must be moderated so as to minimize disruption to existing catalogs, as we saw during the development of RDA, but with some adjustments the new and the old have been made to coexist in our catalogs.
 Note that there are a few places online, in particular Wikipedia, where there is a mild semblance of organized knowledge and with which users are generally familiar. It's not the same as the structure that we have in subject headings and classification, but users are prompted to select pre-formed headings, with a keyword search being secondary.
 Simon Spero did a now famous (infamous?) analysis of LCSH's structure that started with Biology and ended with Doorbells.
One of my favourite exercises from library school is perhaps one that you had to do as well. We were instructed to find a particular term from the Library of Congress Subject Headings (the “Red Books”) and develop that term into a topic map that would illustrate the relationships between the chosen term and its designated broader terms, narrower terms and related terms. Try as I might, I cannot remember the term that I used in my assignment so many years ago, so here is such a mapping for existentialism.
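That exercise can be sketched in a few lines of code. Here is a rough rendering of a topic map for “Existentialism” with broader (BT), narrower (NT), and related (RT) terms; the specific terms are illustrative, not authoritative LCSH.

```python
# A toy topic map for the library-school exercise described above.
# The BT/NT/RT terms are illustrative, not pulled from the Red Books.

topic_map = {
    "term": "Existentialism",
    "broader": ["Philosophy, Modern"],
    "narrower": ["Existential ethics", "Existential phenomenology"],
    "related": ["Phenomenology", "Existentialism in literature"],
}

def render(tm: dict) -> str:
    """Render the map in the BT/NT/RT notation used in the Red Books."""
    lines = [tm["term"]]
    for rel, label in [("broader", "BT"), ("narrower", "NT"), ("related", "RT")]:
        for term in tm[rel]:
            lines.append(f"  {label}: {term}")
    return "\n".join(lines)

print(render(topic_map))
```

The point of the exercise, then and now, is that a heading only means something in relation to its neighbors, which is exactly the context discussed below.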
Recently we’ve been paying much attention to the language of these subject headings as we come to recognize those particular headings that are reductive and problematic. For example, undocumented students are denied their basic humanity when they are described as illegal aliens. And as most of you already know, the act of reforming this particular heading was seriously hindered by Republicans in the House of Representatives.
As troubling as this interference is, this is not what I want to write about LCSH for you today. For this post, I want to bring greater attention to something else about subject headings. I want to share something that Karen Coyle has pointed out repeatedly but that I have only recently finally grokked.
When we moved to online library catalogues, we stripped all the relationship context from our subject headings — all those related terms, broader terms, all those relationships that placed a concept in relationship with other concepts. As such, all of our subject headings may as well be ‘tags’ for how they are used in our systems. Furthermore, the newer standards that are being developed to replace MARC (FRBR, Bibframe, RDF) either don’t capture this information or, if they do, the systems being developed around these standards fail to use these subject relationships or to preserve subject ordering [ed. text corrected].
From the slides of “How not to waste catalogers’ time: Making the most of subject headings“, a code4lib presentation from John Mark Ockerbloom:
Here’s another way we can view and explore works on a particular subject. This is a catalog I’ve built of public domain and other freely readable texts available on the Internet. It organizes works based on an awareness of subjects and how subjects are cataloged. The works we see at the top of the list on the right, for instance, tend to be works where “United States – History – Revolution, 1775-1783” was the first subject assigned. Books where that subject was further down their subject list tend to appear further down in this list. I worry about whether I’ll still be able to do this when catalogs migrate to RDF. [You just heard in the last talk] that in RDF, unlike in MARC, you have to go out of your way to preserve property ordering. So here’s my plea to you who are developing RDF catalogs: PLEASE GO OUT OF YOUR WAY AND PRESERVE SUBJECT ORDERING!
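Ockerbloom's plea is satisfiable. As a rough sketch (mine, not from his talk), one way an RDF-based record can keep subject order is JSON-LD's `@list` keyword, which serializes to an rdf:List whose member order is part of the data, unlike a bag of repeated subject triples. The record URI and property name below are invented for illustration.

```python
import json

# Sketch: an ordered run of subject headings carried as a JSON-LD @list,
# which maps to an rdf:List and so survives RDF serialization in order.
# The URIs are made up for this example.

record = {
    "@id": "http://example.org/book/123",
    "http://example.org/vocab/subjectsInOrder": {
        "@list": [
            "United States -- History -- Revolution, 1775-1783",
            "Soldiers -- United States -- Correspondence",
        ]
    },
}

# Round-tripping through JSON keeps the catalogers' ordering intact.
parsed = json.loads(json.dumps(record))
subjects = parsed["http://example.org/vocab/subjectsInOrder"]["@list"]
print(subjects[0])
```

The cost Ockerbloom alludes to is real, though: plain repeated triples are easier to query, so preserving order this way is a deliberate modeling choice, not a default.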
I highly recommend reading Karen Coyle’s series of posts on Catalog and Context in which she patiently presents the reader the history and context of why Library of Congress Subject Headings were developed, how they were used and then explains what has been lost and why.
It begins like this:
Imagine that you do a search in your GPS system and are given the exact point of the address, but nothing more.
Without some context showing where on the planet the point exists, having the exact location, while accurate, is not useful.
In essence, this is what we provide to users of our catalogs. They do a search and we reply with bibliographic items that meet the letter of that search, but with no context about where those items fit into any knowledge map.
And what was lost? While our online catalogs make known-item searching very simple, our catalogues are terrible! dismal! horrible! for discovery and exploration.
Perhaps this is one of the reasons why there is so much interest in outsider-libraries that are built for discovery, like The Prelinger Library.
This remarkable library – which is run by only two people – turns a collection of ephemera, found material, and library discards into a collection built for visual inspiration and support of the independent scholar, through careful selection and a unique arrangement developed by Megan Prelinger:
Inspired by Aby Warburg’s “law of the good neighbor” the Prelinger Library’s organization does not follow conventional classification systems such as the Dewey Decimal System. Instead it was custom-designed by Megan Shaw Prelinger in a way that would allow visitors to browse and encounter titles by accident or, better yet, by good fortune. Furthermore, somewhat evoking the shifts in magnitudes at play in Charles and Ray Eames’s Powers of Ten (1977) the shelves’ contents are arranged according to a geospatial model departing from the local material specifically originating from or dealing with San Francisco and ending with the cosmic where books on both outer space and science fiction are combined with the more ethereal realms of math, religion, and philosophy.
Of particular note: The Prelinger Library does not have a library catalogue and they don’t support query based research. They think query based research is reductive (Situated Systems, Issue 3: The Prelinger Library).
One thing I wonder: why do we suggest that catalogers work the reference desk but don't suggest that reference folks work in cataloging?
— Erin Leach (@erinaleach) June 26, 2016
Frankly, I’m embarrassed by how little I know about the intellectual work behind the systems that I use and teach as a liaison librarian. I do understand that libraries, like many other organizations such as museums, theatres and restaurants, have a “front of house” and “back of house” with separate practices and cultures, and that there are very good reasons for specialization. That being said, I believe that the force of digitization has collapsed the space between the public and technical services of the library. In fact, I would go so far as to say that the separation is largely a product of past organizational practice and it doesn’t make much sense anymore.
Inspired by Karen Coyle, Christina Harlow, and the very good people of mashcat, I’m working on improving my own understanding of the systems and if you are interested, you can follow my readings in this pursuit on my reading journal, Reading is Becoming. It contains quotes like this:
GV: You mentioned “media archeology” and I was wondering if you’re referring to any of Shannon Mattern’s work…
RP: Well, she’s one of the smartest people in the world. What Shannon Mattern does that’s super-interesting is she teaches both urban space and she teaches libraries and archives. And it occurred to me after looking at her syllabi — and I know she’s thought about this a lot, but one model for thinking about archives and libraries — you know, Megan was the creator of the specialized taxonomy for this space, but in a broader sense, collections are cities. You know, there’s neighborhoods of enclosure and openness. There’s areas of interchange. There’s a kind of morphology of growth which nobody’s really examined yet. But I think it’s a really productive metaphor for thinking about what the specialty archives have been and what they might be. [Mattern’s] work is leading in that position. She teaches a library in her class.
I understand the importance of taking a critical stance towards the classification systems of our libraries and recognizing when these systems use language that is offensive or unkind to the populations we serve. But critique is not enough. These are our systems, and the responsibility to amend them, to improve them, to re-imagine them, and to re-build them as necessary belongs to our profession.
We know where we need to go. We already have a map.
Fellow LITA Members:
By now you are aware of the violence that occurred overnight in Dallas, Texas. Five police officers were killed and nine officers and civilians were wounded when a gunman opened fire during a peaceful protest about the recent deaths of black men at the hands of police in other cities. Our thoughts and prayers go out to those who have had loved ones killed or injured, as well as to the residents of Dallas who will be facing a great deal of uncertainty in the coming days.
As you may know, LITA will hold its annual LITA Forum in Dallas’ sister city, Fort Worth, this November. LITA staff and leadership will monitor events in the Metroplex over the coming weeks and will stay in communication with our contacts at the Forum venue. We will pass along any LITA-related news or opportunities to support the community as they become available.
In the meantime, I would ask that you reach out to friends, family members, and colleagues who may be distressed by last night’s events. As we learned from the recent shooting at the Pulse nightclub in Orlando, you don’t have to be physically near to this kind of violence to be deeply affected by it. Often the most powerful thing we can do is to reach out with compassion to those who are hurting.
Be well. – Aimee
Aimee Fifarek, LITA President
If you’d told me upon joining the staff of ALA’s Office for Information Technology Policy (OITP) two-plus years ago that I’d be invited to spend a workday mulling over the proper way to credit creators of 3D printed objects, I would have told you to take your time machine back to Tomorrowland for repairs…It must be on the blink, because it transported you to a universe separate from our own…And if you’d informed me I’d spend that day with a gaggle of intellectual property lawyers and digital designers, I would have told you to scrap your time machine altogether. Luckily, I’ve had the privilege of immersing myself in the 3D space as a member of the OITP team, so when I found myself in this exact situation last week, I was confident I hadn’t lost my cosmic bearings.
While I wasn’t bewildered, I certainly was honored. The day consisted of a series of legal, design and technology discussions on the NASA Ames Campus in Mountain View, California. The discussions were sponsored by Creative Commons (CC) – the non-profit that offers standard open licenses for the use and remixing of copyrighted content. They brought together representatives from some very recognizable players in the 3D printing realm: MakerBot, Shapeways, Aleph Objects and the National Institutes of Health 3D Print Exchange, to name a handful.
In addition to feeling honored, I was just a bit tired. I arrived at the discussions straight from the ALA Annual Conference in Orlando. After tramping about the Orange County Convention Center and its expansive environs for six days, swinging out to The Golden State had me craving a jet-fuel-grade cup of coffee. But I digress.
The principal question at hand over the course of the day: How to create a standard method of author attribution for CC-licensed 3D designs once they’ve been built by a printer. Attributing the CC-licensed design in digital form is relatively straightforward. As Creative Commons staffer Jane Park mentions in a recent blog post, major digital design-sharing platforms allow for design files to be marked with Creative Commons licenses that include source metadata. But once a printer converts a CC-licensed design into physical form, the design’s Creative Commons and source information are lost.
Although it’s technically an open question whether or not clear attribution must be present on physical representations of CC-licensed designs, every Creative Commons license but one – CC0 – includes an attribution requirement. So Creative Commons, and all those supportive of the pro-information-access values on which it was founded, have a vested interest in finding a standard attribution mechanism for 3D printed objects. A standard attribution mechanism of this kind would also help 3D designers track when and how their designs are being used after the print button is pushed.
I wish I could say we found one, but we didn’t get that far. Last week’s discussions were only the beginning of what will surely be a robust and deliberative discourse. Several possible solutions were propounded – e.g., the use of RFID tags or barcodes (à la Thingiverse’s “print tag things”) – but none were explored from all angles. Nonetheless, one thing that gained significant traction was the idea that all of the attribution information that is gained about 3D printed objects moving forward should be indexed in a registry of some kind.
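To make the registry idea a little more concrete, here is a purely hypothetical sketch of what one entry in such a registry might hold, linking a physical tag on the object back to the CC-licensed source design. Every field name here is an assumption, not a real schema from the discussions.

```python
from dataclasses import dataclass, asdict

# Hypothetical registry entry tying a printed object's physical tag
# (RFID or barcode) back to its CC-licensed source design.
# Field names are illustrative assumptions, not an agreed standard.

@dataclass
class PrintAttribution:
    design_title: str
    designer: str
    license: str       # e.g. "CC BY 4.0"
    source_url: str    # where the CC-licensed design file lives
    print_tag_id: str  # identifier on the physical object

entry = PrintAttribution(
    design_title="Example Bracket",
    designer="A. Designer",
    license="CC BY 4.0",
    source_url="http://example.org/designs/bracket",
    print_tag_id="TAG-0001",
)

print(asdict(entry))
```

A registry of records like this would let anyone scanning a tag on a printed object recover the attribution the license requires, and let designers see where their work has traveled.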
So, mile one of the marathon is in the books. ALA appreciates the opportunity to participate in the attribution discussion for 3D printed objects from the starting line. We – and I personally – would like to thank Creative Commons and Michael Weinberg of Shapeways for organizing and hosting the event. You can read Michael’s thorough overview on the challenge of attribution in 3D printing here. Do you have ideas on how to solve the challenge? Share them in the comments section.
The post California talks address challenge of attribution in 3D printing appeared first on District Dispatch.
Back in December I read an article about Seattle Public Library’s WiFi HotSpot lending program. At the time of the article they had 325 devices available for checkout and a waiting list of more than 1,000 patrons. The program was started via a grant with Google but at the end of the year SPL needed to find a permanent solution for paying for the program, which they did. Their goal is 775 units in all. You can see in the graphic above SPL has 566 HotSpots (as of July 7, 2016) with 1,211 holds.
It’s an ambitious program but given the size of the population they serve it may be too small. The New York Public Library has 10,000 devices in its HotSpot program, but again that’s probably not enough. In fact, according to their website all the devices are out (they’re doing a program where patrons get the HotSpot for up to a full year). Of course, comparing Seattle to New York City isn’t exactly apples to apples, but the goal is the same: providing home internet access to people who currently don’t have it.
Both SPL and NYPL started their programs with grants. They also have large taxing bodies to support such large programs. I opted to dedicate a portion of my budget to start a pilot program with five devices offering unlimited data on both the 5 GHz and 2.4 GHz Wi-Fi bands. With practically no marketing—we put up signs in the library and included it in our newsletter—the HotSpots were all checked out on the first day they were available. Patrons check them out for one week at a time and can renew the HotSpot up to three times as long as there are no holds.
We have 37 holds on those five devices.
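Some quick back-of-the-envelope math on those numbers shows why: with five devices on one-week loans and no renewals (the best case for the queue), the last person in line waits roughly two months.

```python
# Rough estimate from the numbers above: 5 devices, 1-week loans, 37 holds.
# Assumes no renewals, so each device serves one hold per week.

devices = 5
holds = 37
loan_weeks = 1

# Ceiling division: weeks until the last hold is filled.
weeks_to_clear = -(-holds // devices) * loan_weeks
print(weeks_to_clear)  # → 8
```

And that is the optimistic case; every renewal pushes the queue out further.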
Clearly we need to expand the program. Almost immediately after launching our five HotSpots I got an email from TechSoup—a non-profit that provides technology to other non-profits, including libraries—detailing an offer for HotSpots through a company called Mobile Beacon. I requested the maximum number of devices, ten, through TechSoup. We are cataloging and processing the HotSpots so we can get them into our patrons’ hands as quickly as possible.
Just like Seattle and New York, we want to provide mobile internet access to patrons. Our program is smaller in size and ambition but no less important to the people we serve. Our school district provides iPads to all students K-12. That works great when the students are in school or at the library, but many of them do not have internet access at home. Now they can check out a HotSpot and have that access at home.
We have patrons who take HotSpots up north camping. Coverage was about what you’d expect as you get more remote, so we tell patrons to check the coverage map before they check out a HotSpot. Other patrons have used the HotSpot on long road trips (we assume the driver did not also use the HotSpot).
Will 15 HotSpots be enough for our patrons? Time will tell; we can always add more. I’d rather have fewer devices that I can turn over regularly than a lot of devices sitting unused.
Have you started a HotSpot program at your library?