Afterword

There is no question that FRBR represents a great leap forward in the theory of bibliographic description. It addresses the “work question” that so troubled some of the great minds of library cataloging in the twentieth century. It provides a view of the “bibliographic family” through its recognition of the importance of the relationships that exist between created cultural objects. It has already resulted in vocabularies that make it possible to discuss the complex nature of the resources that libraries and archives gather and manage.
As a conceptual model, FRBR has informed a new era of library cataloging rules. It has been integrated into the cataloging workflow to a certain extent. FRBR has also inspired some non-library efforts, and those have given us interesting insight into the potential of the conceptual model to support a variety of different needs.
The FRBR model, with its emphasis on bibliographic relationships, has the potential to restore context that was once managed through alphabetical collocation to the catalog. In fact, the use of a Semantic Web technology with a model of entities and relations could be a substantial improvement in this area, because the context that brings bibliographic units together can be made explicit: “translation of,” “film adaptation of,” “commentary on.” This, of course, could be achieved with or without FRBR, but because the conceptual model articulates the relationships, and the relationships are included in the recent cataloging rules, it makes sense to begin with FRBR and evolve from there.
However, the gap between the goals developed at the Stockholm meeting in 1991 and the result of the FRBR Study Group’s analysis is striking. FRBR defined only a small set of functional requirements, at a very broad level: find, identify, select, and obtain. The study would have been more convincing as a functional analysis if those four tasks had been further analyzed and had been the focus of the primary content of the study report. Instead, from my reading of the FRBR Final Report, it appears that the entity-relation analysis of bibliographic data took precedence over user tasks in the work of the FRBR Study Group.
The report’s emphasis on the entity-relation model, and the inclusion of three simple diagrams in the report, is most likely the reason for the widespread belief that the FRBR Final Report defines a technology standard for bibliographic data. Although technology solutions can be and have been developed around the FRBR conceptual model, no technology solution is presented in the FRBR Final Report. Even more importantly, there is nothing in the FRBR Final Report to suggest that there is one, and only one, technology possible based on the FRBR concepts. This is borne out by the examples we have of FRBR-based data models, each of which interprets the FRBR concepts to serve its particular set of needs. The strength of FRBR as a conceptual model is that it can support a variety of interpretations. FRBR can be a useful model for future developments, but it is a starting point, not a finalized product.
There is, of course, a need for technology standards that can be used to convey information about bibliographic resources. I say “standards” in the plural because libraries and their users have such a wide range of functions and needs that no one solution could possibly serve them all. Well-designed standards create a minimum level of compliance that allows interoperability while permitting necessary variation. A good example of this is the light bulb: with a defined standard base for the light bulb, we have been able to move from incandescent to fluorescent and now to LED bulbs, all the while keeping our same lighting fixtures. We must do the same for bibliographic data, so that we can address the need for variation between books and non-books, and between the requirements of the library catalog and the use of bibliographic data in a commercial model or a publication workflow.
Standardization on a single over-arching bibliographic model is not a reasonable solution. Instead, we should ask: what are the minimum necessary points of compliance that will make interoperability possible between these various uses and users? Interoperability needs to take place around the information and meaning carried in the bibliographic description, not in the structure that carries the data. What must be allowed to vary in our case is the technology that carries that message, because it is the rapid rate of technology change that we must be able to adjust to in the least disruptive way possible. The value of a strong conceptual model is that it is not dependent on any single technology.
It is now nearly twenty years since the Final Report of the FRBR Study Group was published. The FRBR concept has been expanded to include related standards for subjects and for persons, corporate bodies, and families. There is an ongoing Working Group for Functional Requirements for Bibliographic Records that is part of the Cataloguing Section of the International Federation of Library Associations. It is taken for granted by many that future library systems will carry data organized around the FRBR groups of entities. I hope that the analysis that I have provided here encourages critical thinking about some of our assumptions, and fosters the kind of dialog that is needed for us to move fruitfully from broad concepts to an integrative approach for bibliographic data.
From FRBR, Before and After, by Karen Coyle. Published by ALA Editions, 2015
©Karen Coyle, 2015
FRBR, Before and After by Karen Coyle is licensed under a Creative Commons Attribution 4.0 International License.
“Telling DSpace Stories” is a community-led initiative aimed at introducing project leaders and their ideas to one another while providing details about DSpace implementations for the community and beyond. The following interview includes personal observations that may not represent the opinions and views of the University of Konstanz or the DSpace Project.
I’ve been meaning to write this post up for a while. It is still very much a work in progress, so please forgive the winding, rambling nature of this post. I’m trying to pull together and process ideas and experiences that I can eventually use in my own improvement, or maybe as an essay or article proposal on ‘reskilling catalogers’ and how it is part of a larger re-imagining of library metadata work beyond just teaching catalogers to code or calling cataloging ‘metadata’. If you have feedback on this, please let me know: email@example.com or @cm_harlow on Twitter. Thanks!

My Background and Goals as a Supervisor
First, a bit about me: my work background, briefly, and my current job, as well as my idealism about metadata work.
My current position is both my first ‘librarian’ position (although, FYI, I think the term ‘entry-level librarian’ has serious flaws, and it is a really sore spot with me personally) and my first time as a supervisor in a library. I supervise the Cataloging Unit (5 f/t staff members), sometimes referred to by the catalogers themselves (but nobody else at present) as the ‘Cataloging & Metadata Unit’, in a medium-sized academic library. Before this, I was temporarily a ‘professional’ but non-librarian metadata munger, and before that, a support staff member or paraprofessional in a large academic library in a variety of posts. Some of those posts involved supervising students, though not officially - I’d be there to assign and guide work, check hours, schedule, and do all the on-the-ground stuff, but I wasn’t the person who would sign the timesheets or do the hiring. Often, and more frequently in recent years, I was also an unofficial liaison, tutor, whatever you want to call it, for some of the librarians looking to expand their technical practices and/or skills. A lot of this kind of work came to me because I love exploring new technology and ideas, and I absolutely love informal workshops/skillshares. Outside of libraries, I’ve got some supervisory experience, as well as a year as a public NYC middle school math teacher, under my belt.
There was a lot involved in my decision to take my current position, but one reason was that I was actually pretty excited to take on being a Cataloging & Metadata Unit supervisor (as well as pretty nervous, of course). I wanted to see both how I would adapt to this position and how I could adapt the position to me. I continue to hope I have a lot to offer the catalogers I work with, because I spent years as a library paraprofessional before deciding to get my MLIS and move ‘up the ladder’ - and I’m highly suspicious of that ladder.
Additionally, I hope this can be a way for me to lead library data work into a new imagining and model, through example and experience. Many people talk about how Cataloging == Metadata, and we see more and more traditional MARC cataloging positions being called ‘Metadata’ positions. They might even involve some non-MARC metadata work, but that work usually remains divorced from MARC work by differing platforms, standards, data models, or other divisions. There are plenty of people declaring (rightfully, in my opinion!) metadata and cataloging to be the same work, yet these statements usually come from one side of a fence that unfortunately still exists. Actually integrating decades of data silos, distinct sets of standards and communities, toolsets/editors, functional units, workflows, and procedures, among so many other divisions both real and perceived, is something I want to make happen, though I freely admit how daunting it can be. Trying my hand at being a supervisor was one way for me to help us, as a library technology and data community, work towards this integration.
A lot of what I’ve focused on in the first months of this job is assessing what already exists - catalogers’ areas of expertise and interests, workflows, toolsets, communication lines, expectations - and then trying to lay down foundations for where I hope we as a unit can go. There was a lot of change going on around my arrival in this position, especially for the catalogers. My library migrated ILSes (in an over-rushed fashion, but hindsight is 20/20) a few months before my arrival. Cataloging procedures had been haphazardly moved to RDA according to particular areas of MARC expertise and interest (for example, music and video cataloging moved to RDA policies because the catalogers focused on that area are invested in learning RDA). The digital collections metadata work had been partially given to the catalogers via a very locked-down MODS metadata editor, before being taken back over by digital library developers, digitization staff, and archivists (and is now managed by me). And there is an imminent but yet-to-be-well-defined (for a number of reasons, including many retirements) technical services re-organization going on, affecting both department structure and space. As regards non-MARC metadata - though not metadata work the catalogers were involved in before my arrival - there is a migration of multiple digital library platforms to one, an IR platform migration in the works, and migration/remediation of all the previous digital projects’ metadata from varying versions of DC to MODS.
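To make that last migration concrete: mapping DC into MODS is at heart a crosswalk. Below is a minimal, purely illustrative sketch in standard-library Python - the element choices are hypothetical examples, not our actual application profile:

```python
import xml.etree.ElementTree as ET

MODS_NS = "http://www.loc.gov/mods/v3"

def dc_to_mods(dc):
    """Build a minimal MODS record from a dict of simple DC elements,
    where each value is a list of strings (DC elements are repeatable)."""
    ET.register_namespace("mods", MODS_NS)
    mods = ET.Element(f"{{{MODS_NS}}}mods")
    # dc:title -> mods:titleInfo/mods:title
    for title in dc.get("title", []):
        ti = ET.SubElement(mods, f"{{{MODS_NS}}}titleInfo")
        ET.SubElement(ti, f"{{{MODS_NS}}}title").text = title
    # dc:creator -> mods:name/mods:namePart
    for creator in dc.get("creator", []):
        name = ET.SubElement(mods, f"{{{MODS_NS}}}name")
        ET.SubElement(name, f"{{{MODS_NS}}}namePart").text = creator
    # dc:subject -> mods:subject/mods:topic
    for subject in dc.get("subject", []):
        subj = ET.SubElement(mods, f"{{{MODS_NS}}}subject")
        ET.SubElement(subj, f"{{{MODS_NS}}}topic").text = subject
    # dc:date -> mods:originInfo/mods:dateIssued
    for date in dc.get("date", []):
        oi = ET.SubElement(mods, f"{{{MODS_NS}}}originInfo")
        ET.SubElement(oi, f"{{{MODS_NS}}}dateIssued").text = date
    return mods
```

A real profile is far richer than this, of course - which is exactly why the mapping decisions need cataloger eyes rather than a one-time script nobody reviews.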
So, a lot of change to walk into as the new cataloging & metadata unit supervisor, as well as the only cataloging and/or metadata librarian. Even more changes for the catalogers to endure, now with a new and relatively green supervisor.
I was well prepared to take on a new sort of library data leadership role that works across departments - to re-imagine, as I understand it, where, how, and why cataloging and metadata expertise can be applied, and to make sure that all of our library data practices are not just interoperable, but open to metadata enhancement and remediation work by the catalogers. This has meant the creation of new workflows, data pipelines, tools, and, most importantly, comfort areas for the catalogers. Working with them at the forefront of my change efforts has forced me to develop new skills rather quickly, including trying to situate not just myself, but a team of talented people with varying experiences and goals, in a rapidly changing field. Change doesn’t scare me, but it’s not just about me now.

Stop Dumping on Technical Services & Stop Holding onto the Past, Technical Services
Beyond all of these local changes, it is pretty well documented that library technical services departments, particularly in academic libraries, are changing. Some might say shrinking, and I understand that, but I want to see it as positive change - we can take our metadata skills and expertise and generalize them beyond MARC and the ILSes that so many catalogers associate directly with their work. That generalized skillset - and I hesitate at the word generalized; perhaps more easily transferable, integrated, or interoperable is better - can then be applied to many different and new library workflows, in particular all the areas growing around data work writ large in libraries.
In a presentation a while ago, I made a case for optimism in library technical services, if we can be imaginative and ready to adapt, and if libraries at a higher level are prepared for what can best be described as more modular and integrated data workflows - no more data/workflow/functional/platform silos. I try not just to say that ‘cataloging is metadata work’, but to involve metadata work across data platforms and pipelines, and to show the value of making this work responsive and iterative - almost agile, though I feel uncomfortable borrowing that term from a context I’m less familiar with (agile development). I especially want to divorce cataloging expertise from knowing how to work with a particular ILS or the OCLC Connexion editor.
In the Ithaka S+R US Library Survey 2013, the question “Will your library add or reduce staff resources in any of the following areas over the next 5 years?” showed a steep decline in staff resources for technical services - close to 30%, far more of a decline than for any other academic library area mentioned in that question. However, we see a lot of projected growth in areas that can use the data expertise currently under-tapped in cataloging and metadata work: Digital preservation and archiving; Archives, rare books, and special collections; Assessment and data analytics; Specialized faculty research support (including data management); and Electronic resources management. All of these use the skills of cataloging and metadata workers in different ways, but we also need to recognize that there are different and varied skills represented in cataloging and metadata work as it exists now. One way to conceptualize this is the divide in skills between original MARC cataloging, where the focus is very much on the details of a single object and on following numerous standards, and what may previously have been called ‘database maintenance’ - what I now see more generally as batch library data munging, where it is necessary to understand the data models involved and to target enhancements to a set of records while avoiding errors in data outliers.

Cataloging versus Metadata & Where Semantics Hit Institutional Culture
A note on ‘cataloging’ versus ‘metadata’ as terms to describe the work: yes, I agree that it’s all metadata, and that continuing to support the divide between MARC and non-MARC work is a problem. However, I also recognize that departmental and institutional organizations and cultures are not going to change overnight, and that these terms are very much tied into them. There is disruption, and then there is alienation, and as a supervisor, I’ve been very aware of the tense balance between the two. I don’t want to isolate the catalogers; I really cannot afford to alienate the administration that helps decide the catalogers’ professional futures (whether job lines remain upon vacancy; whether their work continues to be recognized and supported; whether they get reassigned to other units with easier-to-explain areas of operation and outreach; etc.). But I know things need to change. This explains in part why I am wary of new terms (metadata is not a new term, but its use for describing MARC work has grown exponentially only recently): they can turn people away from changes, as folks might see the new labels as part of a gimmick rather than real, substantive change. I will generally describe all of this work as metadata in most contexts, because I do feel we are beginning to integrate our data work in a way that the catalogers now buy into what is really meant by metadata. Yet in certain contexts, I do continue to use cataloging to mean MARC cataloging and metadata to mean non-MARC work, because it is an admittedly easy shorthand, as well as tied into other (perhaps political, perhaps not) considerations.
Back to the post at hand: what I’ve started to build, and seen some forward movement on (as well as some hesitation), is a more integrated cataloging & metadata unit. The catalogers did do some metadata work before I arrived, by which I mean non-MARC metadata creation. However, this was severely limited to working with descriptive metadata in a vacuum - namely, in a metadata editor made explicitly for a particular project. From what I can tell, the metadata model and application profile were created outside the realm of the catalogers; they were just brought in to fill in the form, one object at a time. This is not unusual, but it hardly touches on what metadata work can be. Worse, the metadata work the catalogers did ended up not being meaningfully used in any platform or discovery layer, resulting in some disenchantment with non-MARC metadata work as a whole (seeing it as not as important as ‘traditional MARC cataloging’, or as unappreciated work). I can absolutely understand how this limited-view editor and these metadata work decisions made things more efficient; I somewhat understand the constant changes in project management that left a lot of metadata work unused; but I am now trying to unravel just what this means for the catalogers’ understanding of high-level data processes outside of MARC, and how the work they do in MARC records applies to the descriptive metadata work done elsewhere. I also need to rebuild their trust that their work will be appreciated and used in contexts beyond the MARC catalog. The jury is still out on how this is going.

Cataloging/Metadata Reskilling Workflows So Far
So yeah, yeah, lots of thoughts and hot air on what I am trying to do, what I hope happens. What have I tried? And how is it going? How are the catalogers reacting? Here are a few examples.

Metadata Remediation Sprint
When I first arrived, we had a ‘metadata remediation sprint’. This was a chance for us all to get to know each other in a far less formal work environment - as well as a chance for the catalogers to get to know some of my areas of real interest in data work, in particular, non-MARC metadata remediation using OpenRefine, a set of Python scripts, and GitHub for metadata versioning. This event built on the excitement of the recently announced Digital Library of Tennessee, a DPLA Service Hub with aggregation and metadata work happening at UTK (I’m the primary metadata contact for this work). The catalogers knew something about what this meant, and not only did they want to learn more, but they wanted to get involved. I tried my best to build a data remediation and transformation pipeline for our own UTK collections that could involve them in this work, but some groundwork for batch metadata remediation had to be laid first, and this sprint helped with that.
The day involved an 8:30 AM meeting (with coffee and pie for breakfast) where I explained the metadata sets, OAI-PMH feeds of XML records, the remediation foci - moving DC to MODS, reconciling certain fields against chosen vocabularies, cleaning up data outliers - and working with this metadata in OpenRefine. There was some talk about the differences between working with data record by record and working with a bunch of records in batch, as we had at that point about 80,000 DC records needing to be pulled, reviewed, remediated, and transformed, collection by collection. Then each cataloger was given a particular dataset (chosen according to topical interest) and given the day to play around with migrating this metadata. It was framed as a group focus on a particular project - hence, a kind of ‘sprint’.
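The groundwork step mentioned above - flattening an OAI-PMH feed of oai_dc records into rows that OpenRefine can open - can be sketched in a few lines of standard-library Python. This is an illustrative sketch only; the field list and the ‘|’ multi-value separator are assumptions, not our production choices:

```python
import csv
import io
import xml.etree.ElementTree as ET

# Namespaces used in OAI-PMH oai_dc responses (these URIs come from the specs).
NS = {
    "oai": "http://www.openarchives.org/OAI/2.0/",
    "dc": "http://purl.org/dc/elements/1.1/",
}

def dc_records_to_rows(oai_xml, fields=("title", "creator", "subject", "date")):
    """Flatten each oai_dc record into one row; repeated DC elements are
    joined with '|' so they can be split back into cells in OpenRefine."""
    root = ET.fromstring(oai_xml)
    rows = []
    for record in root.iter("{http://www.openarchives.org/OAI/2.0/}record"):
        header = record.find("oai:header", NS)
        row = {"identifier": header.findtext("oai:identifier", "", NS)}
        for field in fields:
            values = [e.text.strip() for e in record.findall(f".//dc:{field}", NS) if e.text]
            row[field] = "|".join(values)
        rows.append(row)
    return rows

def rows_to_csv(rows, fields=("identifier", "title", "creator", "subject", "date")):
    """Serialize the flattened rows as CSV for an OpenRefine project."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=list(fields))
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()
```

From there, the CSV opens as an OpenRefine project, with the ‘|’ values split into multi-valued cells for faceting and reconciliation.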
The sprint was also a way for me to gauge each cataloger’s interest in possibly doing more of this batch metadata work, who really wanted to dive into learning new tools, and the ability each had for working with metadata sets. This is not at all to say that any cataloger couldn’t learn and excel at batch metadata work, new tools, or metadata work generally; but matching different aspects of metadata work to folks’ work personalities was key, in my admittedly limited opinion. In assigning new projects and reskilling, I didn’t want to throw anyone into new areas of work that they wouldn’t be a good fit for or have some overlapping expertise with - there was already enough change going on. Cataloging & metadata work is not always consistent or uniform, so there are and will remain different types of projects to be integrated into workflows and given to the person best able to really take ownership (in a positive way) of each project and excel with it.
The catalogers had so much untapped expertise already that the sprint went very well. Some catalogers warmed to OpenRefine right away, with its ability to facet, see errors, and repair/normalize across records. Other catalogers preferred to stick with Excel and focus in on the details of each record. All the datasets - each a collection pulled from the OAI-PMH feed and prepared as CSV and as OpenRefine projects by me beforehand - were pulled from GitHub repositories, giving the catalogers a view of version control and one possible use of Git (without me saying, ‘Hey, I’m going to teach you version control and coding stuff’ - the focus was on their area of work, metadata). Better yet, I was able to get their work into either migration paths for our new digital collections platform or the first group of records for the DPLA in Tennessee work, meaning the catalogers saw immediately that their work was being used and greatly appreciated (if only by me at first, though others have taken note of this work as well).
The catalogers have done amazingly well with all of this, and I know how lucky I am to work with a team that is this open to change.

Moving Some to Batch Data Work
This movement towards batch metadata work and remediation hasn’t stayed with the original focus on non-MARC metadata from that sprint day. In particular, two of the catalogers have really taken on a lot of the batch metadata normalization and enhancement with our MARC data as well, informed perhaps by seeing batch data work outside of the context of MARC/non-MARC or specific platforms during that day or in other such new projects given to them. Though, to be fair, I need to admit two things (at least):
- One of the catalogers is already the ‘database maintenance’ person, or what I’d call a data administrator, though her position title (as opposed to her HR title) was, upon my arrival, still blank. This is tied up with a sense in administration that database maintenance work is not ‘cataloging’ in the traditional understanding - highlighting the record-by-record creation versus data munging divide that still seems to exist in too many places. I think this work will lead metadata work in the future, especially as content specialists more often become the metadata creators in digital collections, and catalogers are brought in increasingly for data review, remediation, enhancement, and education/outreach. Don’t think this will happen with MARC records? I think it already is, when we consider the poor state of most vendor MARC records we often accept. We need to find better ways to review/enhance these records while balancing that effort against the possibility they’ll be overwritten. Leading to my second admission…
- The MARC/non-MARC work is still very much tied to platforms, especially the Alma ILS, which our department has really bought into at a high level. One of the catalogers who did very well with OpenRefine is now working with the vendor records for electronic resources using MARCEdit outside of the ILS. She has done very well at reviewing these records in MARCEdit in batch, applying some normalization routines, and only then importing them into our Alma ILS. While these records do eventually end up in the ILS, it is my hope that working with the data itself outside of Alma gives her more context for non-MARC data work outside of other platforms and editors. I don’t know if this is the case, however.
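The pattern she follows - transform the whole file, review the changes, and only then load - is worth making concrete. Here is a toy sketch in Python, with records modeled as plain dicts of tag-to-values; the cleanup rules (stripping a GMD, dropping a vendor field) are hypothetical examples of the kinds of routines a tool like MARCEdit applies, not our actual profile:

```python
# A record is modeled as a dict of MARC tag -> list of field values.
# Real batch work happens in MARCEdit or a MARC library; the point here
# is the shape of the workflow: normalize the batch, report what changed,
# have a cataloger review, and only then import into the ILS.

def strip_gmd(title):
    """Drop an AACR2-style general material designation and trailing ISBD
    punctuation (hypothetical cleanup rule for illustration)."""
    return title.replace("[electronic resource]", "").rstrip(" /:")

def normalize_record(record):
    """Apply per-record cleanup rules, returning a new record."""
    fixed = dict(record)
    if "245" in fixed:
        fixed["245"] = [strip_gmd(v).strip() for v in fixed["245"]]
    # Example vendor cleanup: drop a field we never want from this load.
    fixed.pop("938", None)
    return fixed

def normalize_batch(records):
    """Normalize every record and report which ones changed, so the
    changes can be reviewed before the file touches the ILS."""
    changed, output = [], []
    for i, rec in enumerate(records):
        fixed = normalize_record(rec)
        if fixed != rec:
            changed.append(i)
        output.append(fixed)
    return output, changed
```

The same review-then-load shape applies whether the batch is vendor MARC, OAI-harvested DC, or anything else - which is exactly the platform-independent context I hope the catalogers take away.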
For the catalogers who are more record-focused, we’ve lined up some cleanup projects requiring more manual review - this includes reviewing local records where RDA conversion scripts/rules cannot be applied automatically because they need a closer look, and sets of metadata where fields are used too inconsistently for metadata mappings to be applied in batch. This work is not pressing/urgent, so it can be picked up when a break from traditional MARC cataloging is needed, or when the platforms for traditional MARC cataloging are down (which seems to occur more and more often).

Centralized, Public, Group-created Documentation
In all of this, one of the key things I’ve needed to do is get centralized, responsive (as in changing according to new needs and use cases), and open/transparent documentation somewhere. There was some documentation stored in various states in a shared drive when I arrived, but a lot of it had not been updated since the previous supervisor’s time. There were multiple versions of procedures floating about in the shared drive as well as in print-outs, leading to further confusion. Additionally, it was difficult, sometimes impossible, for other UTK staff - who sometimes need to perform minor cataloging work or understand how cataloging happens - to access these documents in that shared drive.
Upon my arrival, the digital initiatives department was already planning a move to Confluence wikis for their own documentation; I immediately signed up for a Cataloging wiki space as well. In getting this wiki set up, a lot of the issue was (and remains) buy-in - not just for reading the wiki, but for using and updating the wiki documentation. Documentation can be a pain to write, and there can be fear about ‘writing the wrong thing’ for everyone to see, particularly in a unit that has had many different workflows and communication whirlpools.
I’ve tried my best to get wiki documentation buy-in by example and by creating an open atmosphere, though I worry about how successful I’ve been. I link to everything in the wiki - procedures, legacy documentation in the process of being updated, data dictionaries, mappings, meeting notes, and unit goals. Catalogers are asked to fill in lacunae that I can’t fill myself, whether due to lack of UTK-specific knowledge/experience or lack of time. I try to acknowledge their work on documentation wherever possible - meetings, group emails, etc. Other staff members outside of the Cataloging Unit are often pointed to the wiki documentation for questions and evolving workflows. I hope this gives the catalogers a sense of being appreciated for doing this work.
Documentation and wiki buy-in remains a struggle - not, I believe, because the catalogers don’t see the value of this work, but because documentation takes time and can be hard to create. To avoid pushing too hard on getting documentation filled out immediately, and thus risking burnout, I haven’t insisted on rewriting all possible policies and procedures at once, despite the many standing documentation gaps. Instead, we focus on documenting areas that come up in projects, that the catalogers are particularly interested in (like music cataloging or special collections procedures), or that we are currently working through. I’m heartened to say that they are increasingly sharing their expertise in the wiki.

To be continued…
I have outstanding ideas and actions to discuss, including our policy on cataloger statistics (and how they are used), the recent experience of revising job descriptions, and the difficulty between both being a metadata change agent and the advocate for the catalogers when cataloging work is often overlooked or underestimated by either administration or other departments (particularly as more metadata enhancement instead of or in tandem with metadata creation is done). But this will need to be part of a follow-up post.
I’m new to all this, and I’m trying my best to be both a good colleague and a good supervisor while also trying to move the discussion of what metadata work is in our library technology communities. I have a lot of faults and weaknesses, and as such, if you’re reading this and have ideas, recommendations, criticisms, or anything else, please get in touch - firstname.lastname@example.org or @cm_harlow on Twitter (and thanks for doing so). Whatever happens in the future, whether I stay a supervisor or not in the years to come (I do sorely miss having my primary focus on metadata ‘research and development’, so to speak), this has been a really engaging experience so far.
Today I found the following resources and bookmarked them on Delicious.
- Vector: Vector is a new, fully open source communication and collaboration tool we’ve developed that’s open, secure and interoperable. Based on the concept of rooms and participants, it combines a great user interface with all core functions we need (chat, file transfer, VoIP and video), in one tool.
- ResourceSpace: Open source digital asset management software is the simple, fast, & free way to organize your digital assets
Digest powered by RSS Digest
Check out the latest LITA web course:
Personal Digital Archiving for Librarians
Instructor: Melody Condron, Resource Management Coordinator at the University of Houston Libraries.
Offered: October 6 – November 11, 2015
A Moodle based web course with asynchronous weekly content lessons, tutorials, assignments, and group discussion.
Most of us are leading very digital lives. Bank statements, interaction with friends, and photos of your dog are all digital. Even as librarians who value preservation, few of us organize our digital personal lives, let alone back them up or make plans for them. Participants in this 4-week online class will learn how to organize and manage their digital selves. Further, as librarians, participants can use what they learn to advocate for better personal data management in others. ‘Train-the-trainer’ resources will be available so that librarians can share these tools and practices with students and patrons in their own libraries after taking this course.
At the end of this course, participants will:
- Know best practices for handling all of their digital “stuff” with minimum effort
- Know how to save posts and data from social media sites
- Understand the basics of file organization, naming, and backup
- Have a plan for managing & organizing the backlog of existing personal digital material in their lives (including photographs, documents, and correspondence)
- Be prepared to handle new documents, photos, and other digital material for ongoing access
- Have the resources to teach others how to better manage their digital lives
Melody Condron is the Resource Management Coordinator at the University of Houston Libraries. She is responsible for file loading and quality control for the library database (basically she organizes and fixes records for a living). At home, she is the family archivist and recently completed a 20,000+ family photo digitization project. She is also the Chair of the LITA Membership Development Committee (2015-2016).
October 6 – November 11, 2015
- LITA Member: $135
- ALA Member: $195
- Non-member: $260
Moodle login info will be sent to registrants the week prior to the start date. The Moodle-developed course site will include weekly new content lessons and is composed of self-paced modules with facilitated interaction led by the instructor. Students regularly use the forum and chat room functions to facilitate their class participation. The course web site will be open for 1 week prior to the start date for students to have access to Moodle instructions and set their browser correctly. The course site will remain open for 90 days after the end date for students to refer back to course material.
Register Online, page arranged by session date (login required)
Mail or fax form to ALA Registration
call 1-800-545-2433 and press 5
Questions or Comments?
For all other questions or comments related to the course, contact LITA at (312) 280-4268 or Mark Beatty, email@example.com
According to the post, Getty Images’ managed search layer is designed to:
- Hide technical complexity
- Allow control over scoring components and result ordering
- Allow balancing of these scoring components against each other
- Provide feedback
- Allow visualization of the result of their changes
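Controls like these typically map onto explicit, tunable query parameters. A minimal sketch using Solr’s eDisMax parser, where field weights become visible scoring components that can be balanced against one another — the field names and weights here are assumptions for illustration, not Getty’s actual schema:

```javascript
// Hypothetical managed-search query builder: each field weight is an
// explicit scoring component that an admin could tune, per the list above.
const weights = { title: 4, caption: 2, keywords: 1 };

// eDisMax "qf" expresses field boosts as field^weight pairs.
const qf = Object.entries(weights)
  .map(([field, w]) => `${field}^${w}`)
  .join(" ");

const params = new URLSearchParams({
  defType: "edismax",          // Solr's extended DisMax query parser
  q: "sunset beach",           // user's search terms
  qf,                          // weighted fields, e.g. "title^4 caption^2 keywords^1"
  bf: "recip(ms(NOW,created_at),3.16e-11,1,1)", // standard recency boost function
  rows: "10",
});
// params.toString() would be appended to a /select request against the index.
```

Because each component is a named parameter rather than baked-in logic, changing a weight and re-running the query gives the immediate feedback and visualization the list describes.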
The post How Getty Images Executes Managed Search with Apache Solr appeared first on Lucidworks.
Last night I dined at the bar of a run-of-the-mill chain restaurant. On the road for business, this is my usual modus operandi, with the variant of dining in the hotel bar instead. You get the picture.
So my bartender in this instance turns out to be flat out awesome. She’s there when I want her and not when I don’t. A simple signal while I’m on a long phone call with my wife answers any question. She’s attentive but not hovering. She knows which questions to ask and when, and also when to stay away. She even recognizes me from previous visits, often a year apart. She gives, in other words, astonishing service. Believe me, I know it when I see it.
At this little chain restaurant in a town that most people have never heard of, I was getting the kind of service that I’ve received at some of the most expensive restaurants in Sonoma, Napa, Chicago, New York, Paris, San Francisco — you name it. And often (sadly) better.
The point is this: great service is not always tied to the money being paid for that service. I agree that if you are paying top dollar at an expensive restaurant you expect excellent service. But the converse is not true: paying far less does not mean you will necessarily receive poor service. This is because service has more to do with the individual providing it than with anything else.
Sure, good training can be key. But some servers learn on the job and intuitively understand what great service means. And libraries are no different. Individuals can be given the tools they need to provide excellent customer service regardless of the monetary resources at hand.
Great service, I assert, can be boiled down to a few principles that can be employed in any organization that attempts to provide it:
- Attentiveness. A moment of breakthrough understanding about service for me came when I was at a restaurant and I happened to notice a waitperson standing aside, surveying the tables. He/she (it doesn’t matter which) was looking for anything that needed doing. Was anyone light on water? Was a table finishing their meal? Would someone need to be alerted to bring the bill? This level of attentiveness to the entire enterprise is, sadly, rare, whether it be a restaurant or a library. What would happen, do you think, if you set a library staffer to simply observe users of the library and try to discern what they needed before they even express it?
- Distance. What may appear at first glance to be the opposite of attentiveness is distance, but it isn’t. True attentiveness also means perceiving when to stay away. Frankly, I find it quite annoying to be interrupted in the middle of a conversation with my dinner partner simply for him/her to ask if everything is OK. One of the secrets of great service is to know when to step back and let the magic happen. Ditto with libraries, although we are less cursed with this particular mistake due to lack of staff.
- Listening. To know what someone wants, you need to actively listen and even, as any reference librarian knows, ask any necessary clarifying questions.
- Anticipation. Outstanding service anticipates needs. Libraries try to do this in various ways, but I also believe that we can do a better job of this.
- Permission. I cut my teeth in libraries by running circulation operations. As an academic library circulation supervisor, I understood how important it was to give my workers permission to make exceptions to certain rules. For other rules, they were to escalate the issue up to me so I could decide if a rule could be bent. Either way, always give your staff clear guidance on when bending a rule can improve public service, and the permission to apply the fix.
These are just some of the strategies that occur to me in developing astonishing public service. Feel free to share your thoughts in a comment below. Libraries are nothing if not public service organizations, so getting this really right is essential to our success.
What do you do when a patron or a parent finds a book in your library offensive and wants to take it off your shelves? How do you remain sensitive to the needs of all patrons while avoiding banning a title? How can you bring attention to the issue of book banning in an effective way? In this 1-hour webinar presented by ALA’s Office for Intellectual Freedom and SAGE, three experienced voices will share personal experiences and tips for protecting and promoting the freedom to read.
Tuesday, September 29 | 9am PDT | 10am MDT | 11am CDT | 12pm EDT
Part I: How to use open communication to prevent book challenges
Kate Lechtenberg, teacher librarian at Iowa’s Ankeny Community School District, finds that conversations between librarians, teachers, students, and parents are key to creating a culture that understands and supports intellectual freedom. “The freedom to read is nothing without the freedom to discuss the ideas we find in books.”
Part II: How to handle a book challenge after it happens
Kristin Pekoll, assistant director of ALA’s Office for Intellectual Freedom, will share her unique experiences facing several book challenges (and a potential book burning!) when she served as a young adult librarian. How did she address the needs of upset parents and community members while maintaining unrestricted access to information and keeping important books on her shelves?
Part III: How to bring attention to the issue of banned books
Why would a supporter of free speech and open learning purposely ban a book? Scott DiMarco, director of the North Hall Library at Mansfield University, reveals how he once banned a book to shed light on library censorship and what else he is doing to support the freedom to read on his Pennsylvania campus.
The post Webinar: Protect the freedom to read in your library appeared first on District Dispatch.
Mark your calendars! OITP’s Copyright Education Subcommittee sponsors CopyTalk on the first Thursday of every month at 2:00 pm (Eastern). Upcoming webinars include the College Art Association’s best practices for fair use, fan fiction copyright issues, and state government documents aka “Is the state tax code protected by copyright?” Our October 1st webinar will be on the Trans-Pacific Partnership (TPP) and what it could mean for libraries with Krista Cox, Director of Public Policy Initiatives from the Association of Research Libraries (ARL).
If you want to suggest topics for CopyTalk webinars, let us know via email (firstname.lastname@example.org) and use the subject heading “CopyTalk.”
Oh yes! The webinars are free, and we want to keep it that way. We have a 100-seat limit, but additional seats are outrageously expensive! If possible, consider watching the webinar with colleagues or joining the webinar before start time. And remember, there is an archive.
The post CopyTalk webinar on copyright court rulings now available appeared first on District Dispatch.
Aida Marissa Smith
Bradford Lee Eden
Journal of Web Librarianship: Toward a Usable Academic Library Web Site: A Case Study of Tried and Tested Usability Practices
So it seems right to add the coda — Oyster is going out of business.
One of the challenges that Oyster faced was having to constantly placate publishers’ concerns. The vast majority of publishers are very apprehensive about going the same route music or movies went.
In a recent interview with the Bookseller, Arnaud Nourry, the CEO of Hachette, said, “We now have an ecosystem that works. This is why I have resisted the subscription system, which is a flawed idea even though it proliferates in the music business. Offering subscriptions at a monthly fee that is lower than the price of one book is absurd. For the consumer, it makes no sense. People who read two or three books a month represent an infinitesimal minority.”
Penguin Random House’s CEO Tom Weldon echoed Nourry’s sentiments at the Futurebook conference a little while ago in the UK. “We have two problems with subscription. We are not convinced it is what readers want. ‘Eat everything you can’ isn’t a reader’s mindset. In music or film you might want 10,000 songs or films, but I don’t think you want 10,000 books.”
The closure of Oyster comes two months after Entitle, another e-book subscription service, closed. With Entitle and now Oyster gone, there is one remaining standalone e-book service, Scribd, as well as Amazon’s Kindle Unlimited service.
What could have done Oyster in? Oh, I don’t know, perhaps another company with a subscription e-book service and significantly more resources and consumers. Like, say, Amazon? It was pretty clear back when Amazon debuted “Kindle Unlimited” in July 2014 that the service could spell trouble for Oyster. The price was comparable ($9.99 a month) as was the collection of titles (600,000 on Kindle Unlimited as compared to about 500,000 at the time on Oyster). Not to mention that Amazon Prime customers already had complimentary access to one book a month from the company’s Kindle Owners’ Lending Library (selection that summer: more than 500,000). In theory, Oyster’s online e-book store was partly created to strengthen its bid against Amazon, but even here the startup was fighting a losing battle, with many titles priced significantly higher there than on Jeff Bezos’ platform.
Where Oyster failed to take Amazon on, however, it’s conceivable that Google plus a solid portion of Oyster’s staff could succeed. The Oyster team has the experience, while Google has the user base and largely bottomless pockets. By itself, Oyster wasn’t able to bring “every book in the world” into its system. But with Google, who knows? The Google Books project, a sort of complement to the Google Play Store, is already well on its way to becoming a digital Alexandria. Reincarnated under the auspices of that effort, Van Lancker’s dream may happen yet.
National Library Card Sign-up Month got a shout out from two voices on Capitol Hill: Ohio Reps. Marcy Kaptur (D-9th) and James B. Renacci (R-16th). In an op-ed published in The Hill’s Congress Blog, Reps. Kaptur and Renacci advised their fellow members of Congress of the strong link between libraries and student performance, urging them to “leverage the power of our nation’s 16,536 public libraries and hundreds of thousands of librarians working in schools and public libraries to drive academic success.”
Their article is highlighted below, but make sure to check out the full article on The Hill’s Congress Blog (according to The Hill staff, last month Congress Blog had 587,000 visitors).
“September is National Library Card Sign-up Month, and we are urging families across Ohio and the nation to celebrate with a trip to the library. In our congressional districts, Cuyahoga County Public Library – in collaboration with Parma City School District and with the support of Mayor Timothy DeGeeter – is issuing library cards to the approximately 11,000 K-12 students in the district.
Library cards help our students succeed. We have seen first-hand the impact libraries and librarians have on the lives of families in our districts:
–More than 87% of K-2 students who participate in Cuyahoga County Public Library’s free, one-on-one reading tutoring program for at-risk kids report reading improvement after the program year.
–Cuyahoga County Public Library’s Homework Center program serves nearly 2,000 students in grades K-8 annually, and 93% of participants’ parents/guardians report seeing improved grades as a result.”
The authors conclude their column by noting that the increasingly technology-rich programs and services that libraries offer “…serve as a bridge to educational and economic opportunity for students of all incomes and backgrounds. This is why we believe communities throughout the country should create or strengthen partnerships with their libraries so that every child enrolled in school can receive a library card.
“During National Library Card Sign-up Month, we invite our colleagues to visit their local libraries, support important community connections between our schools and libraries, and encourage families to get an essential education and learning resource: a library card.”
This is the slightly tweaked transcript from a short talk I gave September 10, 2015, at the WordPress Miami monthly. [slides]
The last year has been good for slick designs. We have seen the popularization of big visuals, full-width / full-bleed images, and background videos, consigning many antiquated notions we have about the fold to distant memory.
These align with the goal of removing complexity from the screen, reflected in our resistance to skeuomorphism, which on the heels of material design has come full circle.
Parallax is here in a big way, paired often with pages that scroll infinitely.
And, of course, there’s the design element at the crux of Pinterest, every social network, and the like – maybe a more ubiquitous trend than anything: the card. Cards contain content that can stand alone – a title, blurb, author and publication information, sharing options, images – and that does not necessarily relate to the cards above or below.
Our vocabulary for talking about design hearkens back to the things we find in our homes: cards, canvases, bars, blocks, drawers. For instance, the side-drawer navigation is seen as a solution to invasive menus by shuffling some content off-screen. Often, these menus are toggled by a switch.
They come in various flavors. Each – let’s go ahead and admit it – awfully swanky.
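Underneath the swank, the side-drawer is a small piece of toggled state. A minimal, framework-free sketch of that logic (the names here are hypothetical; in a real page a click handler would call `toggle()` and then sync `aria-expanded` on the switch and an `open` class on the nav):

```javascript
// DOM-free sketch of a side-drawer's open/closed state.
// A plain closure stands in for the DOM so the logic is self-contained.
function makeDrawer() {
  let open = false;
  return {
    toggle() { open = !open; return open; }, // flip and report the new state
    isOpen() { return open; },
  };
}

const drawer = makeDrawer();
drawer.toggle(); // first press of the switch: drawer slides open
```

The design choice worth noticing is that the toggle knows nothing about animation or layout; the slide-in effect is purely a presentational layer on top of this one bit of state.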
However, whatever it is that inspires one to adopt the latest-and-greatest – whether it is cool, pervasive, or flat-out demanded by stakeholders and clients – the questions with which we tend to occupy ourselves, “does it look good” and “can it be done,” aren’t the ones we should be concerned with.
Does it work?
This differs from whether the design functions — yes, the carousel goes around and around and around. What matters, instead, is whether folks look at the carousel, engage with it, share its content. In so many words, does the carousel turn clicks into cash? Probably not.
We know because we can gather both qualitative and quantitative data and use it to measure the overall user experience. The value of the user experience is holistic. Is it easy to use, does it have utility, is there demonstrable need, is it easy to navigate, is it accessible, credible, secure, desirable, ethical?
And we care because a good user experience is good business.
By extension, this means that the most important factor determining success is the user experience: the best distributors / aggregators / market-makers win by providing the best experience, which earns them the most consumers / users, which attracts the most suppliers, which enhances the user experience in a virtuous cycle.
– Ben Thompson
When the user experience is good:
- 14.4% more customers are willing to purchase the product,
- 15.8% fewer customers are willing to consider doing business with a competitor,
- 16.6% more customers are likely to recommend their products or services.
There is a demonstrable need for research-y folks to join dev operations and infuse the decision-making process not just with data but user-centric data using a myriad of research methods.
And so, in the process of user research, we find that while carousels are popular with stakeholders and clients, and are must-have features for any Themeforest theme’s success, rather than adding a sense of pizzazz to a website they pull the overall measure of the user experience down.
The convenience of the design convention, the power [and ease] of jQuery, and the wow-factor and professionalism associated with a slick animation lead to intuitive leaps of faith that carousels as design elements actually work. They largely do not, because people’s capacity for cruft is diminishing. We are pretty adept at ignoring content we didn’t seek out. It is how we have adapted to too much bullshit.
Web designers recoiled similarly and thus embraced “content first.” Do away with the drop shadows, the clutter, flatten the design, and embrace the content – even radically so. Attractive copy. No sidebars.
For many, even having a menu bar was a little too much, and for good reason. When there are more than a few menu items, the cognitive load is pretty high.
Out of sight, out of mind
In fact, large menus – we figure – have such a high interaction cost that – perhaps – they detract from the whole shebang. Facebook fire-started the hamburger-menu bandwagon, Google jumped on board, NBC, Time Magazine …, it’s hard to miss. But, hey, our hearts are in the right place.
What seems so obvious now is that menus that are out of sight are out of mind. So, engagement invariably drops.
Better designed clutter
The persistence of the hamburger menu’s popularity is because sweeping one’s content problems under the rug is mighty attractive. Easy, lazy, and looks pretty good. Think of the attraction and the strange logic: with less on the page we can put more on the page. Eschew the clutter for better designed clutter. This is the impetus for the big visuals that started this spiel.
But big visuals have big implications. The web is getting really, really heavy. Images are the culprit, and the device with which we increasingly access the web is smaller and less powerful than its under-the-desk ancestor. Almost 90% of web traffic is mobile in many places around the world, and in terms of the device complexity web designers must accommodate – shit’s getting weird.
The speed with which a website loads is more important than ever. Millisecond delays negatively impact conversion – whether the goal of the site is to sell a product or collect email addresses for a list. Bloated sites cost their owners money.
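One common mitigation for image weight, sketched here with hypothetical file names, is to let the browser pick an appropriately sized image via the standard `srcset` and `sizes` attributes rather than forcing every device to download the full-bleed desktop asset:

```html
<!-- A minimal sketch with hypothetical file names: the browser chooses the
     smallest candidate that satisfies the viewport width and pixel density,
     so a phone never downloads the 2400px desktop hero image. -->
<img src="hero-800.jpg"
     srcset="hero-800.jpg 800w, hero-1600.jpg 1600w, hero-2400.jpg 2400w"
     sizes="100vw"
     alt="Full-width hero image">
```

The point is that the big visual survives, but the bytes scale with the device.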
So the tragic irony is that content-first, big-visual designs, if implemented poorly, kind of suck. Performance is a real problem that tanks a user experience that might otherwise have gone favorably.
Function before Feng Shui
The issue at hand is not the aesthetic of the web design. For many of us that aesthetic is precisely its allure. Our web is increasingly capable. We flex and bend its boundaries to celebrate its power to tell stories, to affect.
The issue is that we tend to prioritize the look and feel of the web – its art – over its purpose.
But the quality of our websites is determined not by their aesthetic but by their success at achieving the goals for which they are intended. The page meant to enlist volunteers to some noble purpose is poorly made if its design distracts from signup. OKCupid’s lagging carousel-of-single-people can prevent the smartphone-dependent population from finding love.
The irony of my subheading, I realize, is that feng shui is meant to harmonize a person with his or her stuff: design to purpose that, in our lingo, improves the overall user experience through not just desirability but usability and utility.
The artfulness of a web design matters, but when poorly implemented it matters negatively.
Figuring out whether certain design decisions work toward the purpose of the application or site is the key challenge to its success. It is the question of design efficacy with which we must concern ourselves first, resisting the compulsion to gawp at the groovy layout and remembering that design is not art – design is function.
Four national library organizations today argued in support of the Federal Communications Commission’s (FCC) strong, enforceable rules to protect and preserve the open internet with an amici filing with the U.S. Court of Appeals for the District of Columbia Circuit.
With other network neutrality allies also filing legal briefs, the American Library Association (ALA), Association of College & Research Libraries (ACRL), Association of Research Libraries (ARL) and the Chief Officers of State Library Agencies (COSLA) focused their filing on four key points to support the FCC and rebut petitioners in the case of United States Telecom Association, et al., v. Federal Communications Commission and United States of America:
- Libraries need strong open internet rules to fulfill our missions and serve our patrons;
- Libraries would be seriously disadvantaged without rules banning paid prioritization;
- The FCC’s General Conduct Rule is an important tool to ensure the internet remains open against future harms that cannot yet be defined; and
- The participation of library and higher education groups in the FCC rulemaking process demonstrates sufficient notice of the proposed open internet rules.
Oral arguments are scheduled for December 4, 2015.
ALA looks forward to continued collaboration with national library organizations in our policy advocacy, consistent with the strategy and theme of the Policy Revolution! initiative. For this brief, we appreciate the leadership of Krista Cox of ARL in preparing the submission and coordinating with other network neutrality advocates. Stay posted for developments in network neutrality and other policy issues via the District Dispatch.
The post ALA, ACRL, ARL, COSLA file network neutrality amicus appeared first on District Dispatch.