Mark your calendars! OITP’s Copyright Education Subcommittee sponsors CopyTalk on the first Thursday of every month at 2:00 pm (Eastern). Upcoming webinars include the College Art Association’s best practices for fair use, fan fiction copyright issues, and state government documents aka “Is the state tax code protected by copyright?” Our October 1st webinar will be on the Trans-Pacific Partnership (TPP) and what it could mean for libraries with Krista Cox, Director of Public Policy Initiatives from the Association of Research Libraries (ARL).
If you want to suggest topics for CopyTalk webinars, let us know via email (firstname.lastname@example.org) and use the subject heading “CopyTalk.”
Oh yes! The webinars are free, and we want to keep it that way. We have a 100-seat limit, but any additional seats are outrageously expensive! If possible, consider watching the webinar with colleagues or joining the webinar before start time. And remember, there is an archive.
The post CopyTalk webinar on copyright court rulings now available appeared first on District Dispatch.
Aida Marissa Smith
Bradford Lee Eden
Journal of Web Librarianship: Toward a Usable Academic Library Web Site: A Case Study of Tried and Tested Usability Practices
So it seems right to add the coda — Oyster is going out of business.
One of the challenges that Oyster faced was having to constantly placate publishers’ concerns. The vast majority of publishers are apprehensive about going the same route music and movies went.
In a recent interview with The Bookseller, Arnaud Nourry, the CEO of Hachette, said: “We now have an ecosystem that works. This is why I have resisted the subscription system, which is a flawed idea even though it proliferates in the music business. Offering subscriptions at a monthly fee that is lower than the price of one book is absurd. For the consumer, it makes no sense. People who read two or three books a month represent an infinitesimal minority.”
Penguin Random House’s CEO Tom Weldon echoed Nourry’s sentiments at the Futurebook conference in the UK a while ago. “We have two problems with subscription. We are not convinced it is what readers want. ‘Eat everything you can’ isn’t a reader’s mindset. In music or film you might want 10,000 songs or films, but I don’t think you want 10,000 books.”
The closure of Oyster comes two months after Entitle, another e-book subscription service, closed. With Entitle and now Oyster gone, there is one remaining standalone e-book subscription service, Scribd, as well as Amazon’s Kindle Unlimited.
What could have done Oyster in? Oh, I don’t know, perhaps another company with a subscription e-book service and significantly more resources and consumers. Like, say, Amazon? It was pretty clear back when Amazon debuted “Kindle Unlimited” in July 2014 that the service could spell trouble for Oyster. The price was comparable ($9.99 a month) as was the collection of titles (600,000 on Kindle Unlimited as compared to about 500,000 at the time on Oyster). Not to mention that Amazon Prime customers already had complimentary access to one book a month from the company’s Kindle Owner’s Lending Library (selection that summer: more than 500,000). In theory, Oyster’s online e-book store was partly created to strengthen its bid against Amazon, but even here the startup was fighting a losing battle, with many titles priced significantly higher there than on Jeff Bezos’ platform.
Where Oyster failed to take Amazon on, however, it’s conceivable that Google plus a solid portion of Oyster’s staff could succeed. The Oyster team has the experience, while Google has the user base and largely bottomless pockets. By itself, Oyster wasn’t able to bring “every book in the world” into its system. But with Google, who knows? The Google Books project, a sort of complement to the Google Play Store, is already well on its way to becoming a digital Alexandria. Reincarnated under the auspices of that effort, Van Lancker’s dream may happen yet.
National Library Card Sign-up Month got a bipartisan shout-out on Capitol Hill from Ohio Reps. Marcy Kaptur (D-9th) and James B. Renacci (R-16th). In an op-ed published in The Hill’s Congress Blog, Reps. Kaptur and Renacci advised their fellow members of Congress of the strong link between libraries and student performance, urging them to “leverage the power of our nation’s 16,536 public libraries and hundreds of thousands of librarians working in schools and public libraries to drive academic success.”
Their article is highlighted below, but make sure to check out the full article on The Hill’s Congress Blog (according to The Hill staff, the Congress Blog had 587,000 visitors last month).
“September is National Library Card Sign-up Month, and we are urging families across Ohio and the nation to celebrate with a trip to the library. In our congressional districts, Cuyahoga County Public Library – in collaboration with Parma City School District and with the support of Mayor Timothy DeGeeter – is issuing library cards to the approximately 11,000 K-12 students in the district.
Library cards help our students succeed. We have seen first-hand the impact libraries and librarians have on the lives of families in our districts:
–More than 87% of K-2 students who participate in Cuyahoga County Public Library’s free, one-on-one reading tutoring program for at-risk kids report reading improvement after the program year.
–Cuyahoga County Public Library’s Homework Center program serves nearly 2,000 students in grades K-8 annually, and 93% of participants’ parents/guardians report seeing improved grades as a result.”
The authors conclude their column by noting that the increasingly technology-rich programs and services that libraries offer “…serve as a bridge to educational and economic opportunity for students of all incomes and backgrounds. This is why we believe communities throughout the country should create or strengthen partnerships with their libraries so that every child enrolled in school can receive a library card.
“During National Library Card Sign-up Month, we invite our colleagues to visit their local libraries, support important community connections between our schools and libraries, and encourage families to get an essential education and learning resource: a library card.”
This is the slightly tweaked transcript from a short talk I gave September 10, 2015, at the WordPress Miami monthly. [slides]
The last year has been good for slick designs. We have seen the popularization of big visuals, full-width / full-bleed images, and background videos, which consign so many antiquated notions we have about the fold to distant memory.
These align with the goal of removing complexity from the screen, reflected in our resistance to skeuomorphism, which, on the heels of material design, has come full circle.
Parallax is here in a big way, paired often with pages that scroll infinitely.
And, of course, there’s the design element at the crux of Pinterest, every social network, and the like – maybe a more ubiquitous trend than anything: the card. Cards contain content that can stand alone – a title, blurb, author and publication information, sharing options, images – and do not necessarily have to relate to the cards above or below them.
Our vocabulary for talking about design hearkens back to the things we find in our homes: cards, canvases, bars, blocks, drawers. For instance, the side-drawer navigation is seen as a solution to invasive menus by shuffling some content off-screen. Often, these menus are toggled by a switch.
They come in various flavors. Each – let’s go ahead and admit it – awfully swanky.
However, whatever it is that inspires one to adopt the latest-and-greatest, whether it is cool, pervasive, or flat-out demanded by stakeholders and clients, the questions with which we tend to occupy ourselves — “does it look good” and “can it be done” — aren’t the ones we should be concerned with.

Does it work?
This differs from whether the design functions — yes, the carousel goes around and around and around. What matters, instead, is whether folks look at the carousel, engage with it, share its content. In so many words, does the carousel turn clicks into cash? Probably not.
We know because we can gather both qualitative and quantitative data and use it to measure the overall user experience. The value of the user experience is holistic. Is it easy to use, does it have utility, is there demonstrable need, is it easy to navigate, is it accessible, credible, secure, desirable, ethical?
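As a purely illustrative sketch of the quantitative side (all numbers and slide names here are invented, not from any real analytics export), one could tally per-slide impressions and clicks and compute click-through rates:

```python
# Hypothetical carousel analytics: impressions and clicks per slide.
# The figures are invented for illustration only.
slides = {
    "slide-1": {"impressions": 10000, "clicks": 120},
    "slide-2": {"impressions": 10000, "clicks": 18},
    "slide-3": {"impressions": 10000, "clicks": 7},
}

def click_through_rate(stats):
    """Clicks divided by impressions, as a percentage."""
    return 100.0 * stats["clicks"] / stats["impressions"]

for name, stats in slides.items():
    print(f"{name}: {click_through_rate(stats):.2f}% CTR")
```

The steep drop-off after the first slide in this made-up data mirrors the pattern usability studies tend to report: most visitors never engage past slide one.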
And we care because a good user experience is good business.
By extension, this means that the most important factor determining success is the user experience: the best distributors / aggregators / market-makers win by providing the best experience, which earns them the most consumers / users, which attracts the most suppliers, which enhances the user experience in a virtuous cycle.
– Ben Thompson
- 14.4% more customers are willing to purchase the product,
- 15.8% fewer customers are willing to consider doing business with a competitor,
- 16.6% more customers are likely to recommend their products or services.
There is a demonstrable need for research-minded folks to join dev operations and infuse the decision-making process not just with data but with user-centric data, gathered through a myriad of research methods.
And so, in the process of user research, we find that while carousels are popular with stakeholders and clients, and are must-have features for any Themeforest theme’s success, rather than adding a sense of pizzazz to a website they drag the overall measure of the user experience down.
The convenience of the design convention, the power [and ease] of jQuery, and the wow-factor and high-level professionalism associated with a slick animation lead to an intuitive leap of faith that carousels actually work as design elements. Largely, they don’t, because people’s capacity for cruft is diminishing. We are pretty adept at ignoring content we didn’t seek out. It is how we have adapted to too much bullshit.
Web designers recoiled similarly and thus embraced “content first.” Do away with the drop shadows, the clutter, flatten the design, and embrace the content – even radically so. Attractive copy. No sidebars.
For many, even having a menu bar was a little too much, and for good reason. When there are more than a few menu items, the cognitive load is pretty high.

Out of sight, out of mind
In fact, large menus – we figure – have such a high interaction cost that – perhaps – they detract from the whole shebang. Facebook fire-started the hamburger-menu bandwagon, Google jumped on board, then NBC, Time Magazine … it’s hard to miss. But, hey, our hearts are in the right place.
What seems so obvious now is that menus that are out of sight are out of mind. So, engagement invariably drops.

Better designed clutter
The hamburger menu’s popularity persists because sweeping one’s content problems under the rug is mighty attractive. Easy, lazy, and it looks pretty good. Think of the attraction and the strange logic: with less on the page we can put more on the page. Eschew the clutter for better designed clutter. This is the impetus for the big visuals that started this spiel.
But big visuals have big implications. The web is getting really, really heavy. Images are the culprit, and the devices with which we increasingly access the web are smaller and less powerful than their under-the-desk ancestors. Almost 90% of web traffic is mobile in many places around the world, and in terms of the range of devices web designers must accommodate – shit’s getting weird.
The speed with which a website loads is more important than ever. Millisecond delays negatively impact conversion – whether the goal of the site is to sell a product or get emails on a list. Bloated sites cost their owners money.
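As a back-of-envelope sketch of that cost (the 1%-per-100 ms figure below is an assumption chosen for illustration, not a measured constant, and all traffic numbers are invented), one might estimate conversions lost to latency like this:

```python
def projected_conversions(baseline_rate, visitors, added_delay_ms,
                          drop_per_100ms=0.01):
    """Estimate conversions after a latency-induced drop.

    drop_per_100ms is an assumed relative conversion loss per
    100 ms of added page delay (illustrative, not empirical).
    """
    penalty = (added_delay_ms / 100.0) * drop_per_100ms
    effective_rate = baseline_rate * max(0.0, 1.0 - penalty)
    return visitors * effective_rate

# 100,000 monthly visitors, 2% baseline conversion, 500 ms of image bloat:
base = projected_conversions(0.02, 100_000, 0)
slow = projected_conversions(0.02, 100_000, 500)
print(f"conversions lost to delay: {base - slow:.0f}")  # → 100
```

Even under these toy assumptions, half a second of bloat costs real signups or sales, which is the sense in which bloated sites cost their owners money.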
So the tragic irony of content-first, big-visual designs is that, implemented poorly, they kind of suck. Performance is a real problem that tanks a user experience that might otherwise have gone favorably.

Function before Feng Shui
The issue at hand is not the aesthetic of the web design. For many of us that aesthetic is precisely its allure. Our web is increasingly capable. We flex and bend its boundaries to celebrate its power to tell stories, to affect.
The issue is that we tend to prioritize the look and feel of the web – its art – over its purpose.
But the quality of our websites is determined not by their aesthetic but by their success at achieving the goals for which they are intended. The page meant to enlist volunteers to some noble purpose is poorly made if its design distracts from signup. OKCupid’s lagging carousel-of-single-people can prevent the smartphone-dependent population from finding love.
The irony of my subheading, I realize, is that feng shui is meant to harmonize a person with his or her stuff: design-to-purpose that, in our lingo, improves the overall user experience through not just desirability but usability and utility.
The artfulness of a web design matters, but when poorly implemented it matters negatively.
Figuring out whether certain design decisions work toward the purpose of the application or site is the key challenge to its success. It is the question of design efficacy with which we must concern ourselves first, resisting the compulsion to gawp at the groovy layout and remembering that design is not art – design is function.
Four national library organizations today argued in support of the Federal Communications Commission’s (FCC) strong, enforceable rules to protect and preserve the open internet with an amici filing with the U.S. Court of Appeals for the District of Columbia Circuit.
With other network neutrality allies also filing legal briefs, the American Library Association (ALA), Association of College & Research Libraries (ACRL), Association of Research Libraries (ARL) and the Chief Officers of State Library Agencies (COSLA) focused their filing on four key points to support the FCC and rebut petitioners in the case of United States Telecom Association, et al., v. Federal Communications Commission and United States of America:
- Libraries need strong open internet rules to fulfill our missions and serve our patrons;
- Libraries would be seriously disadvantaged without rules banning paid prioritization;
- The FCC’s General Conduct Rule is an important tool to ensure the internet remains open against future harms that cannot yet be defined; and
- The participation of library and higher education groups in the FCC rulemaking process demonstrates sufficient notice of the proposed open internet rules.
Oral arguments are scheduled for December 4, 2015.
ALA looks forward to continued collaboration with national library organizations in our policy advocacy, consistent with the strategy and theme of the Policy Revolution! initiative. For this brief, we appreciate the leadership of Krista Cox of ARL in preparing the submission and coordinating with other network neutrality advocates. Stay posted for developments in network neutrality and other policy issues via the District Dispatch.
The post ALA, ACRL, ARL, COSLA file network neutrality amicus appeared first on District Dispatch.
I love the video that Microsoft recently put out about Inclusive Design. It uses several design stories to illustrate how inclusive design needs to start with an individual and be user centered. I learned so much from many of the amazing people featured in this video. Also, it’s delightful to watch a video that presents such strong ideas and has such high production values.
This quote from interaction designer Mike Vanis at the 5 minute mark really stuck with me:
If you start with technology, then it just becomes a feature list. But if you start with the person then this really amazing thing happens. They dictate the technology and you come to surprises. You arrive at a point where the technology and the person feel so close, so intimate, that you don’t actually see the technology at all anymore.
One of the stories in Inclusive is about Skype Translator (starts at 13:42). There are two threads to this story. First, the video shows a school in Seattle and a school in Beijing that are using Skype Translator to bridge their linguistic differences and video chat with each other. Skype Translator is impressive: it uses speech-to-text, machine translation, and then text-to-speech to translate what someone is saying in one language into another. As part of this exchange, the text of what is being said is shown on the screen. The second thread is that this technology is useful for including Deaf and Hard of Hearing students in mainstreamed hearing classrooms.
Will Lewis, Principal Technical PM, Microsoft Research, says that for Deaf or Hard of Hearing students in a hearing classroom they “often require an interpreter, whether that’s a sign language interpreter, or closed captioning. The problem is that it doesn’t scale.” The underlying assumption is that there is a problem with people who are Deaf or Hard of Hearing and that there is a problem in making them fit in a hearing classroom.
This story doesn’t fit with the fundamental concept that it’s important to start with the individual, and it should have been left out of this video. This segment focuses on how amazing Skype Translator is as a technology (which it is) and then tacks on two Deaf or Hard of Hearing students as an afterthought. Also, presenting cochlear implants as an amazing, value-neutral technology is an example of audism, “the notion that one is superior based on one’s ability to hear or to behave in the manner of one who hears, or that life without hearing is futile and miserable, or an attitude based on pathological thinking which results in a negative stigma toward anyone who does not hear.”
Talking this through with a friend who is Hard of Hearing and a PhD candidate revealed some underlying privacy concerns. If the machine translation occurs on Microsoft servers, conversations are being saved at least temporarily, to translate them, and likely saved permanently in order to improve the technology. So if Deaf and Hard of Hearing people are reliant on this technology, they will be under more surveillance than hearing people. This is really problematic. If the design process had started with an individual who valued privacy and let that value dictate the technology, the amazing thing that Mike Vanis talked about might have happened. Instead, the story of Skype Translator is just a software feature list.
In another step forward as a mature and sustainable open source project, the Islandora community has adopted an official process for defining and nominating Committers. As with things like our Licensed Software Acceptance Procedure and our Contributor Licence Agreements, we opted to follow a model tried and tested by one of our fellow travellers in the world of open source repositories, looking at the guidelines used by Fedora Committers (who were guided in turn by how it’s done at Apache). We particularly liked their method for selecting new Committers, with its emphasis on community engagement at several levels – and how it would leave the selection of new Committers to those in the best position to judge: existing Committers.
With Fedora as an example, Nick Ruest wrote up a proposed set of guidelines for Committers in the Islandora community. To sum up some key points, Committers have:
- Write access to the codebase
- Nomination privileges of new committers
- Release management privileges
- Binding votes on procedural, code modification, and release issues
- Access to the private committers mailing list
Balanced by the responsibility to:
- Monitor and respond to project mailing lists
- Attend project and technical meetings
- Monitor and vet bug-tracker issues
- Review and commit code contributions
- Ensure code contributions are properly licensed
- Guide and mentor new committers
There are 17 initial Committers, consisting of those community members who already had push access to the Islandora GitHub:
- Daniel Aitken, discoverygarden inc.
- Morgan Dawe, discoverygarden inc.
- Jordan Dukart, discoverygarden inc.
- Nelson Hart, discoverygarden inc.
- Mark Jordan, Simon Fraser University
- Danny Lamb, discoverygarden inc.
- Rosemary LeFaive, University of Prince Edward Island
- Mitch MacKenzie, discoverygarden inc.
- Donald Moses, University of Prince Edward Island
- William Panting, discoverygarden inc.
- Matthew Perry, discoverygarden inc.
- Diego Pino, REUNA
- Paul Pound, University of Prince Edward Island
- Nick Ruest, York University
- Alan Stanley, Agile Humanities
- Adam Vessey, discoverygarden inc.
- Jared Whiklo, University of Manitoba
discoverygarden's historical status as the primary contributor of Islandora code is reflected in the composition of the list, but the project's growth as a software owned and created by a wider community is also apparent - and we expect that as the list grows, so too will the diversity of institutions represented.
Want to become a Committer? Here's what they will be looking for:
- Ability to be a mentor. How do we evaluate? By the interactions they have through mail. By how clear they are and how willing they are to point at appropriate background materials (or even create them).
- Community. How do we evaluate? By the interactions they have through mail. Do they help to answer questions raised on the mailing list; do they show a helpful attitude and respect for others' ideas.
- Commitment. How do we evaluate? By time, by sticking through tough issues, by helping on not-so-fun tasks as well.
- Personal skill/ability. How do we evaluate? By a solid general understanding of the project, by the quality of discussion in mail, and by patches (where applicable) that are easy to apply with only a cursory review.
I attended the 79th Annual Meeting of the Society of American Archivists (SAA) last month in Cleveland, Ohio and was invited to participate on the Research Libraries Roundtable panel on Data Management and Curation in 21st Century Archives. Dan Noonan, e-Records/Digital Resources Archivist, moderated the discussion. Wendy Hagenmaier, Digital Collections Archivist, Georgia Tech Library and Sammie Morris, Director, Archives and Special Collections & University Archivist, Purdue University Libraries joined me on the panel. Between the three of us there was a nice variety of perspectives given our different experiences and interests.
It was a great panel, so I decided to discuss it in two parts. In this part, Managing and Curating Data with Reuse in Mind, I summarize key points from my presentation. In Part 2, I will highlight key points from Wendy’s and Sammie’s presentations that made an impression on me.
Managing and Curating Data with Reuse in Mind
I was excited to be invited to participate in a panel discussion on Data Management and Curation in 21st Century Archives at SAA, given my perspective is not that of an archivist. I’ve been studying data reuse in academic communities and more recently I’ve been examining libraries’ role in e-research and data on their campuses.
Given my experiences and interests, my goal in participating on the panel was to convince archivists to bring their expertise to the table with an eye toward satisfying, perhaps even delighting, data reusers. I believe centering conversations about data management and curation in 21st century archives on the needs of data reusers serves to inform the preservation of data’s meaning as well as other archival practices, particularly the partnerships archivists form, the questions they ask, and the activities they pursue.
Preservation of data’s meaning
When we think about preserving the meaning of research data, the goal is that someone not involved in the study can come along and make sense of the data. It’s no surprise that contextual information about how data are collected is critical.
For instance, a zoologist uses field notes to sort out whether a wolf might have been a dog or coyote hybrid. A social scientist references the instructions and layout of a survey to understand differences between survey responses. An archaeologist thinks artifacts are meaningless in absence of information about where they came from and how they were acquired and excavated.
While the need for data collection information is obvious, what is often surprising to some is the level of contextual detail reusers want about it and the additional kinds of context they seek, including information about the data producer, data repository, data analysis, digitization and curation, preservation, and prior reuse.
Questions asked: It’s not just about context
Ask data reusers what contextual information they need as well as why they need it and where they go to get it. What we have heard has enlightened us about disciplinary attitudes and practices. We have learned what constitutes data quality and how it contributes to their decision making and satisfaction. Our understanding of data quality has become more nuanced and given us something tangible to work toward given its importance in data management and curation.
A zoologist deciding whether to combine data from different studies needs to know if the definitions for a concept hold across the two data sources. We call this need to evaluate whether and how data from different studies can be integrated “ease of operation.” A social scientist determining whether data are relevant given research objectives looks at how variables are defined and measured. An archaeologist relies on information about data producers to judge whether their data are credible.
When asking researchers to talk about how they reused others’ data, we’ve learned that it’s not just about capturing context so researchers can understand data. Our findings show researchers judge others’ data in various ways to decide if the data are worthy of reuse. We need to know more about these judgements. If we are going to curate and preserve data to be reusable, we need to have a better sense of what reusable means.
Partnerships formed: It’s bigger than the archive
Looking at data management and curation from a reuser’s perspective also might influence the partnerships archivists form. It’s bigger than the archive. Archivists cannot go it alone. Our work shows how actions in one part of data’s lifecycle influence other parts.
How data producers collect, record, and document their data impacts repository staff and data reusers. For instance, we found archaeologists collecting data in the field had systems to identify tooth wear, but there were no guidelines for documenting tooth wear. Consequently they recorded it in different ways impacting repository staff’s data processing time and reusers’ understanding. We’ve also found instances where repository staff’s actions motivated data producers to share and impacted the satisfaction of data reusers and where data reusers influenced repository policy and data producers’ future actions.
While we’ve revealed interdependencies in an attempt to improve data sharing, management, and reuse experiences, we’ve only looked at three roles. Of those roles, we’ve only considered one that sits between data producers and data reusers – repository staff (i.e. the data curator). We know there are more – archivists, librarians, technologists, compliance officers, administrative staff, etc.
When asked what facilitates research data services, two-thirds of librarians mentioned communication, coordination, and collaboration with people from other units on their campus as a means to define, develop, and deliver services, pool expertise, and outline roles and responsibilities. Our research suggests that the key will be managing these stakeholders’ interdependencies through data’s lifecycle by identifying pain points and supportive actions that will move things forward.
Activities pursued: It’s always about designated communities
Lastly, incorporating data reusers’ perspectives and practices into conversations about data management and curation might influence the activities archivists pursue. It’s always about the designated community of users. We witnessed this in our interviews with staff at three data repositories – the Inter-university Consortium for Political and Social Research (ICPSR), the University of Michigan Museum of Zoology, and Open Context. Findings showed staff dealt with six types of change in data repositories, one of which was responding to their user communities.
At each repository, staff were found to adjust their processes and procedures to accommodate the developing needs of their users. The museum developed new specimen preparation, preservation, and loan procedures when DNA testing became available. ICPSR staff were deciding when and how they could meet demand for new data formats such as video. Rather than design the Open Context website to be “Flickry” and collaborative, staff decided on a more straightforward publication platform because archaeologists wanted something more professional that they could cite on their CVs.
In our roles, whether archivists, librarians, technologists, researchers, etc., we need to think about how we can talk, listen, observe, learn from, teach, and delight data reusers. We should strive to ensure our actions encompass the audience we are trying to reach.
Are any of you actively engaged with your scholarly communities to understand data management, curation, and reuse from their perspective? If so, please comment or respond to this post and tell us about your experiences – How have you done it? What have you learned? What have they learned? What challenges remain?

About Ixchel Faniel
Ixchel M. Faniel is a Research Scientist at OCLC. She is currently working on projects examining data reuse within academic communities to identify how contextual information about the data that supports reuse can best be created and preserved. She also examines librarians’ early experiences designing and delivering research data services with the objective of informing practical, effective approaches for the larger academic community.
Information Technology and Libraries: Editorial Board Thoughts: Information Technology and Libraries: Anxiety and Exhilaration
That the practice of libraries and librarianship is changing is an understatement. Throughout their history, libraries have adapted and evolved to better meet the needs of the communities served. Framed against the historical development of the library commons and technological support, this piece introduces the concept of an innovation commons as a natural evolution for libraries, from information through learning commons, to the organic development and incorporation of library makerspaces.
Information Technology and Libraries: Self-Archiving with Ease in an Institutional Repository: Microinteractions and the User Experience
Details matter, especially when they can influence whether or not users engage with a new digital initiative that relies heavily on their support. During the recent development of MacEwan University’s institutional repository, the librarians leading the project wanted to ensure the site would offer users an easy and effective way to deposit their works, in turn helping to ensure the repository’s long-term viability. The following paper discusses their approach to user-testing, applying Dan Saffer’s framework of microinteractions to how faculty members experienced the repository’s self-archiving functionality. It outlines the steps taken to test and refine the self-archiving process, shedding light on how others may apply the concept of microinteractions to better understand a website’s utility and the overall user experience that it delivers.