"The 7 biggest problems facing science, according to 270 scientists" by Julia Belluz, Brad Plumer, and Brian Resnick at Vox is an excellent overview of some of the most serious problems, with pointers to efforts to fix them. Their 7 are:
- Academia has a huge money problem:
In the United States, academic researchers in the sciences generally cannot rely on university funding alone to pay for their salaries, assistants, and lab costs. Instead, they have to seek outside grants. "In many cases the expectations were and often still are that faculty should cover at least 75 percent of the salary on grants," writes John Chatham, ... Grants also usually expire after three or so years, which pushes scientists away from long-term projects. Yet as John Pooley ... points out, the biggest discoveries usually take decades to uncover and are unlikely to occur under short-term funding schemes.
- Too many studies are poorly designed:
An estimated $200 billion — or the equivalent of 85 percent of global spending on research — is routinely wasted on poorly designed and redundant studies, according to meta-researchers who have analyzed inefficiencies in research. We know that as much as 30 percent of the most influential original medical research papers later turn out to be wrong or exaggerated.
- Replicating results is crucial — and rare:
A 2015 study looked at 83 highly cited studies that claimed to feature effective psychiatric treatments. Only 16 had ever been successfully replicated. Another 16 were contradicted by follow-up attempts, and 11 were found to have substantially smaller effects the second time around. Meanwhile, nearly half of the studies (40) had never been subject to replication at all.
- Peer review is broken:
Numerous studies and systematic reviews have shown that peer review doesn’t reliably prevent poor-quality science from being published.
- Too much science is locked behind paywalls:
"Large, publicly owned publishing companies make huge profits off of scientists by publishing our science and then selling it back to the university libraries at a massive profit (which primarily benefits stockholders)," Corina Logan, an animal behavior researcher at the University of Cambridge, noted. "It is not in the best interest of the society, the scientists, the public, or the research." (In 2014, Elsevier reported a profit margin of nearly 40 percent and revenues close to $3 billion.)
- Science is poorly communicated:
Science journalism is often full of exaggerated, conflicting, or outright misleading claims. If you ever want to see a perfect example of this, check out "Kill or Cure," a site where Paul Battley meticulously documents all the times the Daily Mail reported that various items — from antacids to yogurt — either cause cancer, prevent cancer, or sometimes do both.
Indeed, one review in BMJ found that one-third of university press releases contained either exaggerated claims of causation (when the study itself only suggested correlation), unwarranted implications about animal studies for people, or unfounded health advice.
- Life as a young academic is incredibly stressful:
A 2015 study at the University of California Berkeley found that 47 percent of PhD students surveyed could be considered depressed
Dr. Larson and his colleagues calculated R0s for various science fields in academia. There, R0 is the average number of Ph.D.s that a tenure-track professor will graduate over the course of his or her career, with an R0 of one meaning each professor is replaced by one new Ph.D. The highest R0 is in environmental engineering, at 19.0. It is lower — 6.3 — in biological and medical sciences combined, but that still means that for every new Ph.D. who gets a tenure-track academic job, 5.3 will be shut out. In other words, Dr. Larson said, 84 percent of new Ph.D.s in biomedicine “should be pursuing other opportunities” — jobs in industry or elsewhere, for example, that are not meant to lead to a professorship.
Again, amen. A friend of mine spotted this problem years ago and has made a business of advising grad students and post-docs on how to transition to "real work".
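The 84 percent figure follows from simple arithmetic: if each professor is ultimately replaced by just one of the R0 Ph.D.s they graduate, the other R0 − 1 are shut out. A minimal sketch of that calculation (the function name is mine):

```python
def fraction_shut_out(r0):
    """Fraction of new Ph.D.s who will not inherit a tenure-track slot,
    assuming each professor is replaced by exactly one of their r0 graduates."""
    return (r0 - 1) / r0

# Field-level R0 estimates quoted above:
for field, r0 in [("environmental engineering", 19.0),
                  ("biological/medical sciences", 6.3)]:
    print(f"{field}: R0 = {r0}, {fraction_shut_out(r0):.0%} shut out")
```

For biomedicine this reproduces the quoted figure: 5.3 / 6.3 ≈ 84 percent; for environmental engineering the fraction is even starker, about 95 percent.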
“I’ve lost the religion that I long held that if library programmers are finally given access to everything the world would be better. I think we’re moving into a somewhat darker world where everything is in the cloud, things are locked down, and they’re probably pretty good.” – Tim Spalding
The post If library programmers were given access to everything appeared first on LibUX.
House Appropriations Bills include support for LSTA and Title IV, but defund IAL
The twelve FY 2017 appropriations bills continued to progress slowly in the House and Senate as Congress leaves today for a lengthy summer recess. This past week, the House Appropriations Committee marked up (approved) its contentious FY 2017 Labor, Health and Human Services, Education, and Related Agencies Appropriations bill (commonly called “LHHS”), approving the bill 31-19 with the vote splitting along party lines. The Senate approved its version of LHHS last month by a decidedly bipartisan vote of 29-1.
Most important to ALA members, the LHHS funding bill in both chambers includes funding for several programs of significant importance to the library community: LSTA, IAL, and Title IV of the Every Student Succeeds Act (ESSA). While the House Committee bill provides increased support for two of our priorities, it also eliminates funding for another.
First, the good news. Both the House and Senate funding bills include increased funding for LSTA and its Grants to States program, rejecting the President’s disappointing proposal to cut funding for both. LSTA Grants to States would receive $155.9 million in the House bill: a slight increase over FY 2016 ($155.7 million), though $240,000 less than what the Senate approved in May ($156.1 million). The President had requested only $154.8 million. Overall LSTA funding would be boosted in the House bill to $183.0 million, compared with $183.3 million in the Senate bill, $182.4 million in the President’s request, and $182.9 million ultimately approved by Congress for FY 2016. ALA continues to oppose cuts to LSTA programs.
The House also made slight upward adjustments to three other LSTA programs from FY 2016 levels. Native American Library Services grants would receive $4.1 million ($3.8 million in the Senate and $4.0 million in FY 2016); the National Leadership: Libraries program would receive $13.1 million ($13.4 million in the Senate and $13.0 million in FY 2016); and the Laura Bush 21st Century Librarian program would receive level funding at $10 million.
New this year is a block grant created with bipartisan support under Title IV of ESSA. The “Student Support and Academic Enrichment Grants” (SSAEG) will provide supplemental funding to help states and school districts underwrite a “well-rounded” educational experience for students, including STEM and arts programs. Best of all, libraries are expressly authorized to receive SSAEG funds. Although SSAEG was originally authorized in the ESSA at $1.65 billion, the President and Congress appear willing to fund the program at much lower levels. The President requested only $500 million, while the Senate approved even less at $300 million. The House approved a higher level of $1 billion, though that is still below the authorized level for FY 2017.
Next, the decidedly bad news is that House appropriators have proposed to eliminate all funding for school libraries through the Innovative Approaches to Literacy (IAL) program. According to the House Committee’s Report, “The Committee has chosen to focus resources on core formula-based education programs instead of narrowly-focused competitive grants such as this one.” IAL received $27 million in FY 2016, which was also the funding level requested by the President and supported in the Senate bill. One half of IAL funding is reserved for school libraries, with the remainder open to any national non-profit by application.
Likely limiting its chances of passage, however, the House LHHS bill included a number of divisive policy riders addressing issues including highly controversial issues like family planning, NLRB joint employer standards, and “Obamacare”. The bill also includes education-related policy riders addressing the “gainful employment” rule aimed at for-profit colleges, forthcoming teacher preparation rules, and the federal definition of a credit hour. All of the amendments introduced at the full Committee mark up to strike these riders or to restore cuts in education funding failed along party lines.
Congress will return from its recess in September, leaving them only a few weeks to adopt funding measures to keep the government open beyond the October 1 start of the Fiscal Year. That’s unlikely, so Congress probably will be forced to enact a “Continuing Resolution,” or CR, to fund the Government. Under CR rules the previous year’s level of funding is maintained for most programs. Vigorous discussions on the Hill already have begun as to what the length of the CR can and should be. If a CR that extends into the new calendar year is adopted, the new President will be forced to negotiate government-wide spending levels with Congress soon after being sworn in, possibly even before key Cabinet and other budget-related positions are filled.
The post Congress leaves town with funding bills unfinished appeared first on District Dispatch.
Since last year Open Knowledge has been developing OpenTrials, an open, online database linking the publicly available data and documents on all clinical trials conducted – something that has been talked about for many years but never created. The project is funded by The Laura and John Arnold Foundation and directed by Dr. Ben Goldacre, an internationally known leader on clinical trial transparency. Having an open and freely re-usable database of the world’s clinical trial data will increase discoverability, facilitate research, identify inconsistent data, enable audits on the availability and completeness of this information, support advocacy for better data, and drive standards around open data in evidence-based medicine.
The project is currently in its first phase (which runs until March 2017), where the focus is on building and populating the first prototype of the OpenTrials database, as well as raising awareness of the project in the community and getting user involvement and feedback. The progress that has been made so far was presented last month at the Evidence Live conference in Oxford, which brought together leaders across the world of Evidence Based Medicine, including researchers, doctors, and the pharmaceutical industry. This was an excellent opportunity to demonstrate the project and speak to both researchers who want to use the platform as well as people with a general enthusiasm for its impact on medicine.
Around 40 people attended our talk which explained why OpenTrials is an important infrastructure project for medicine, covered some of the technical aspects of the platform, details of what data we’ve imported so far, and lastly a quick demo.
If you’re feeling impatient, here are the slides from the talk, or scroll down for a summary.
- 331,999 deduplicated trials, collected from nine clinical trial registries:
- ANZCTR 11,645
- ClinicalTrials.gov 205,422
- EU CTR 35,159
- GSK 4,131
- ISRCTN 14,256
- Pfizer 1,567
- Takeda 1,142
- UMIN 20,557
- WHO ICTRP 298,688
- ~22,000 research summaries from the Health Research Authority
- ~510,000 publications from PubMed (~24,000 linked with trials)
- Basic search (by keyword)
- Searching for trials with publications
- Uploading missing data/documents for a particular trial
- Showing trials with discrepancies (e.g. target sample size)
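The discrepancy feature above can be pictured as a comparison of a field's reported value across the registry records grouped under one deduplicated trial. This sketch is purely illustrative; the record layout and field names are my own invention, not the actual OpenTrials schema:

```python
def field_discrepancies(records, field="target_sample_size"):
    """Group registry records for one deduplicated trial by the value they
    report for `field`; more than one distinct value means a discrepancy."""
    by_value = {}
    for rec in records:
        value = rec.get(field)
        if value is not None:
            by_value.setdefault(value, []).append(rec["source"])
    return by_value if len(by_value) > 1 else {}

# One trial registered in two registries with conflicting target sample sizes:
records = [
    {"source": "ClinicalTrials.gov", "target_sample_size": 200},
    {"source": "EU CTR", "target_sample_size": 180},
]
print(field_discrepancies(records))
# {200: ['ClinicalTrials.gov'], 180: ['EU CTR']}
```

When all records agree, the function returns an empty dict, so only genuinely inconsistent trials are surfaced.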
- Systematic Reviews from:
- Clinical Study Report Synopses (e.g. YODA)
- Cochrane Schizophrenia data
- FDA Drug Approval Packages – OpenTrialsFDA won $80,000 to develop a prototype to unlock ‘hidden’ data in non-searchable PDFs
Want to get early access to the data and be a user tester? Sign up and we’ll be in touch soon.
I was pleased to read last week that the National Digital Newspaper Program, which has sponsored the digitization of over 1 million historically significant newspaper pages, has announced that it has expanded its scope to include content published up to 1963, as long as public domain status can be established. I’m excited about this initiative, which will surface content of historic interest that’s within many readers’ living memory. I’ve advocated opening access to serials up to 1963 for a long time, and have worked on various efforts to surface information about serial copyright renewals (like this one), to make it easier to find public domain serial content that can be made freely readable online. (In the US, renewal became automatic for copyrights secured after 1963, making it difficult to republish most newspapers after that date. Up till then, though, there’s a lot that can be put online.)
Copyright in contributions
Clearing copyright for newspapers after 1922 can be challenging, however. Relatively few newspapers renewed copyrights for entire issues; as I noted 10 years ago, none outside of New York City did so before the end of World War II. But newspapers often aggregate lots of content from lots of sources, and determining the copyright status of those various pieces of content is necessary as well, as far as I can tell. While section 201(c) of copyright law normally gives copyright holders of a collective work, such as a magazine or newspaper, the right to republish contributions as part of that work, people digitizing a newspaper that didn’t renew its own copyright aren’t usually copyright holders for that newspaper. (I’m not a lawyer, though; if any legal experts want to argue that digitizing libraries get similar republication rights as the newspaper copyright holders, feel free to comment.)
As I mentioned in my last post, we at Penn are currently going through the Catalog of Copyright Entries to survey which periodicals have contributions with copyright renewals, and when those renewals started. (My previous post discussed this in the context of journals, but the survey covers newspapers as well.) Most of the contributions in the section we’re surveying are text, and we’ve now comprehensively surveyed up to 1932. In the process, we’ve found a number of newspapers that had copyright-renewed text contributions, even when they did not have copyright-renewed issues. The renewed contributions are most commonly serialized fiction (which was more commonly run in newspapers decades ago than it is now). Occasionally we’ll see a special nonfiction feature by a well-known author renewed. I have not yet seen any contribution renewals for straight news stories, though, and most newspapers published in the 1920s and early 1930s have not made any appearance in our renewal survey to date. I’ll post an update if I see this pattern changing; but right now, if digitizers are uncertain about the status of a particular story or feature article in a newspaper, searching for its title and author in the Catalog of Copyright Entries should suffice to clear it.
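The clearance workflow described above, searching a contribution's title and author against the renewal records, could be automated once searchable transcriptions of the Catalog of Copyright Entries exist. A hypothetical sketch, with an entry structure invented purely for illustration:

```python
def clearance_hits(renewal_entries, title, author):
    """Return renewal entries matching a contribution's title and author.
    An empty result suggests, but does not prove, that the piece was not
    renewed.  The entry structure is invented for illustration only."""
    t, a = title.lower(), author.lower()
    return [e for e in renewal_entries
            if t in e["title"].lower() and a in e["author"].lower()]

# Illustrative transcribed entries:
entries = [{"title": "Blondie", "author": "Chic Young", "date": "1930-09-08"}]
print(clearance_hits(entries, "Blondie", "Chic Young"))    # one hit: renewed
print(clearance_hits(entries, "Local news item", "Anon"))  # []: no renewal found
```

A real implementation would need fuzzier matching, since renewal entries often abbreviate or vary titles, but the basic lookup is this simple.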
Photographs and advertisements
Newspapers contain more than text, though. They also include photos, as well as other graphical elements, which often appear in advertisements. It turns out, however, that the renewal rate for images is very low, and the renewal rate for “commercial prints”, which include advertisements, is even lower. There isn’t yet a searchable text file or database for these types of copyright renewals (though I’m hoping one can come online before long, with help from Distributed Proofreaders), and in any case, images typically don’t have unambiguous titles one can use for searching. However, most news photographs were published just after they were taken, and therefore they have a known copyright year and specific years in which a renewal, if any, should have been filed. It’s possible to go through the complete artwork and commercial prints of any given year, get an overview of all the renewed photos and ads that exist, and look for matches. (It’s a little cumbersome, but doable, with page images of the Catalog of Copyright Entries; it will be easier once there are searchable, classified transcriptions of these pages.)
Fair use arguments may also be relevant. Even in the rare case where an advertisement was copyright-renewed, or includes copyright-renewed elements (like a copyrighted character), an ad in the context of an old newspaper largely serves an informative purpose, and presenting it there online doesn’t typically take away from the market for that advertisement. As far as I can tell, what market exists for ads mostly involves relicensing them for new purposes such as nostalgia merchandise. For that matter, most licensed reuses of photographs I’m aware of involve the use of high-resolution original prints and negatives, not the lower-quality copies that appear on newsprint (and that could be made even lower-grade for purposes of free display in a noncommercial research collection, if necessary). I don’t know if NDNP is planning to accommodate fair use arguments along with public domain documentation, but they’re worth considering.
Syndicated and reprinted content: A thornier problem
Many newspapers contain not only original content, but also content that originated elsewhere. This type of content comes in many forms: wire-service stories and photos, ads, and syndicated cartoons and columns. I don’t yet see much cause for concern about wire news stories; typically they originate in a specific newspaper, and would normally need to be renewed with reference to that newspaper. And at least as far as 1932, I haven’t yet seen any straight news stories renewed. Likewise, I suspect wire photos and national ads can be cleared much like single-newspaper photos and ads can be.
But I think syndicated content may be more of a sticky issue. Syndicated comics and features grew increasingly popular in newspapers in the 20th century, and there’s still a market for some content that goes back a long way. For instance, the first contribution renewal for the Elizabethan Star, dated September 8, 1930, is the very first Blondie comic strip. That strip soon became wildly popular, published by thousands of newspapers across the country. It still enjoys a robust market, with its official website noting it runs in over 2000 newspapers today. Moreover, its syndicator, King Features, also published weekly periodicals of its own, with issues as far back as 1933 renewed. (As far as I can tell, it published these for copyright purposes, as very few libraries have them, but according to WorldCat an issue “binds together one copy of each comic, puzzle, or column distributed by the syndicate in a given week”. Renew that, and you renew everything in it.) King Features remains one of the largest syndicators in the world. Most major newspapers, then, include at least some copyrighted (and possibly still marketable) material at least as far back as the early 1930s.
Selective presentation of serial content
The most problematic content of these old newspapers from a copyright point of view, though, is probably the least interesting content from a researcher’s point of view. Most people who want to look at a particular locale’s newspaper want to see the local content: the news its journalists reported, the editorials it ran, the ads local businesses and readers bought. The material that came from elsewhere, and ran identically in hundreds of other newspapers, is of less research interest. Why not omit that, then, while still showing all the local content?
This should be feasible given current law and technology. We know from the Google and HathiTrust cases that fair use allows completely copyrighted volumes to be digitized and used for certain purposes like search, as long as users aren’t generally shown the full text. And while projects like HathiTrust and Chronicling America now typically show all the pages they scan, commonly used digitized-newspaper software can either highlight or blank out not only specific pages but even the specific sections of a page in which a particular article or image appears.
This gives us a path forward for providing access to newspapers up to 1963 (or whatever date the paper started being renewed in its entirety). Specifically, a library digitization project can digitize and index all the pages, but then only expose the portions of the issues it’s comfortable showing given its copyright knowledge. It can summarize the parts it’s omitting, so that other libraries (or other trusted collaborators) can research the parts it wasn’t able to clear on its own. Sections could then be opened up as researchers across the Internet found evidence to clear up their status. Taken as a whole, it’s a big job, but projects like the Copyright Review Management System show how distributed copyright clearance can be feasibly done at scale.
Moreover, if we can establish a workable clearance and selective display process for US newspapers, it will probably also work for most other serials published in the US. Most of them, whether magazines, scholarly journals, conference proceedings, newsletters, or trade publications, are no more complicated in their sources and structures than newspapers are, and they’re often much simpler. So I look forward to seeing how this expansion in scope up to 1963 works out for the National Digital Newspaper Program. And I hope we can use their example and experience to open access to a wider variety of serials as well.
Open Knowledge Foundation: Open Access: Why do scholarly communication platforms matter and what is the true cost of gold OA?
During the past two and a half years, Open Knowledge has been a partner in PASTEUR4OA, a project focused on aligning open access policies for European Union research. As part of the work, a series of advocacy resources was produced that stakeholders can use to promote the development and reinforcement of such open access policies. The final two briefing papers, written by Open Knowledge, were published this week and deal with two pressing issues around open access today: the financial opacity of open access publishing and its potentially harmful effects on the research community, and the expansion of open and free scholarly communication platforms in the academic world – explaining the new dependencies that may arise from those platforms and why this matters for the open access movement.
Revealing the true cost of gold OA
“Reducing the costs of readership while increasing access to research outputs” has been a rallying cry for open access publishing, or Gold OA. Yet the Gold OA market is largely opaque, which makes it hard for us to evaluate how the costs of readership actually develop. Data on both the costs of subscriptions (for hybrid OA journals) and of APCs are hard to gather, and where they can be obtained, they offer only partial and very different insights into the market. This is a problem for efficient open access publishing. Funders, institutions, and individual researchers are therefore increasingly concerned that a transition to Gold OA could leave the research community open to exploitative financial practices and prevent effective market coordination.
Which factors contribute to the current opacity in the market? Which approaches are taken to foster financial transparency of Gold OA? And what are recommendations to funders, institutions, researchers and publishers to increase transparency?
The paper Revealing the true costs of Gold OA – Towards a public data infrastructure of scholarly publishing costs, written by researchers of Open Knowledge International, King’s College London and the University of London, presents the current state of financial opacity in scholarly journal publishing. It describes what information is needed in order to obtain a bigger, more systemic picture of financial flows, and to understand how much money is going into the system, where this money comes from, and how these financial flows might be adjusted to support alternative kinds of publishing models.
Why do scholarly communication platforms matter for open access? Over the past two decades, open access advocates have made significant gains in securing public access to the formal outputs of scholarly communication (e.g. peer reviewed journal articles). The same period has seen the rise of platforms from commercial publishers and technology companies that enable users to interact and share their work, as well as providing analytics and services around scholarly communication.
How should researchers and policymakers respond to the rise of these platforms? Do commercial platforms necessarily work in the interests of the scholarly community? How and to what extent do these proprietary platforms pose a threat to open scholarly communication? What might public alternatives look like?
The paper Infrastructures for Open Scholarly Communication provides a brief overview of the rise of scholarly platforms – describing some of their main characteristics as well as debates and controversies surrounding them. It argues that in order to prevent new forms of enclosure, public policymakers should be concerned with the provision of public infrastructures for scholarly communication as well as public access to the outputs of research. It concludes with a review of some of the core elements of such infrastructures, as well as recommendations for further work in this area.
Catherine E. Kerrigan
Recent postings from ACRL indicate that the library world is paying more attention than ever to demonstrating the impact we have on student learning, faculty productivity, serving our communities, and the overall missions of our institutions. Megan Oakleaf has written extensively on this issue, and her work revolves around the way we can try to make connections between assessment efforts and student learning, among other things.
Blame shrinking budgets, clueless campus administrators, or just a lack of sharing the great work we do, but we are all faced with the reality of validating our role on our respective campuses in one way or another. I don’t want to get into the merits of such an argument, but rather to offer a possible solution to this issue (one of many options, to be sure).
Setting aside annual reports, which are at best long-winded and most likely end up in a forgotten file folder, chances are we have only a few brief opportunities to communicate that which is very difficult to encapsulate, much less quantify. So how can you pack that proverbial punch? Enter the increasingly popular infographic. At OSU, we’ve embarked on an ambitious project to do just that, and we are in the throes of deciding how to best harness the power of such a tool for our purposes.
There are really two broad issues to take into consideration if you would like to use this type of tool: what to include and how to design for maximum impact.
First, you’ll need to think about the information you want to collect, both quantitative and qualitative. A good Google Form, Excel spreadsheet, or Springshare’s LibAnalytics will do the trick. But beware: things may not be as simple as they appear. Numbers are easy: put down a 3 or a 10 and off you go. What’s harder to capture is the story behind that figure. Make sure that all of your quantitative data have a qualitative equivalent, which is where defining your categories comes into play. For example, if you want to capture how many successful consultations librarians averaged in a given year, make sure they understand exactly what you mean by that term. Some may interpret it as all the reference questions they answer, others may only report appointment-based interactions, while others still might think it relates only to a particular user group.
In addition, whatever non-numerical information you capture should be able to answer the question “So what?” If you can’t determine its importance, chances are neither will someone outside the library no matter how much you try to explain it. Ideally, whatever categories you select either match your library or institutional strategic goals (or both) so that you can directly correlate them to the areas which are important on a broader level and aggregate individual efforts into a composite snapshot for the semester or the year. This section will allow you to tell that ever important story and show how the numbers are actually meaningful. The recent article by Anne Kenney speaks more directly to liaison work, but her insights can easily be extrapolated to more general terms. In other words, focus on the impact of the activity rather than measuring its existence.
That leads me to the next point: whatever data you decide to capture, start by actually capturing it! You can have the most perfect form in the world, but if no one is filling it out, it’s pointless. Consistency is also key, and for this you may need the help of a department head or library administration to nudge participation in the right direction. But even some data, however incomplete, is better than none at all, and you can always build on your efforts; you have to start somewhere and establish that initial benchmark.
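One lightweight way to enforce the "every number needs a story" rule from earlier is to make the qualitative field mandatory whenever a count is reported. A hypothetical sketch (the field names are mine, not a LibAnalytics or Google Forms schema):

```python
import csv
import io

def validate_row(row):
    """Reject entries that report a count without the qualitative 'so what'."""
    if int(row["consultations"]) > 0 and not row["impact_note"].strip():
        raise ValueError("consultation count reported without an impact note")
    return row

# Illustrative log, as it might be exported from a shared form:
log = io.StringIO(
    "date,consultations,strategic_goal,impact_note\n"
    "2016-07-01,3,student success,Helped two seniors refine thesis searches\n"
)
rows = [validate_row(r) for r in csv.DictReader(log)]
print(len(rows), "valid entries")
```

Tagging each entry with a `strategic_goal` also makes it trivial to aggregate individual efforts into the per-goal snapshot described above.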
Formatting and creating the infographic is just as important as what’s in it. Luckily, there are several free tools out there which help to make this work a little easier. Before you start designing, ask yourself:
- What actions/learning are you trying to enable? Do you want to simply inform or perhaps persuade?
- What questions are you trying to answer?
- What do you want to show? What story are you trying to tell?
- Who is your audience? What are their priorities and level of knowledge about your information?
- What key information do you want to relay? Where do you want the reader to focus and on what?
Knowing the answers to these questions will help you decide layout and formatting choices. Keep things simple and choose complementary colors. Make sure the infographic is easy to print out and can be viewed online just as easily; try to avoid making it so long that the reader has to scroll endlessly to see everything. And most importantly, keep trying!
Yesterday, S. 2893, legislation introduced by Senator Schumer, passed! It authorizes the National Library Service for the Blind and Physically Handicapped (NLS) to extend its service by providing refreshable Braille display devices to NLS users. Previously, NLS could only supply Braille books in print, which are expensive to produce and costly to ship. The NLS did have the capability of sending Braille files to users, but many could not afford the refreshable Braille display devices. Braille readers, popular with many people with print disabilities, allow users to read Braille from a device connected to a computer keyboard. With so much content now displayed on a computer screen, Braille readers are indispensable. Isn’t technology cool?
Kudos to Senator Schumer for acting on a recommendation from the Government Accountability Office (GAO). In its recent report, “Library Services For Those With Disabilities: Additional Steps Needed to Ease Services and Modernize Technology,” the GAO advised: “To give NLS the opportunity to provide braille in a modernized format and potentially achieve cost savings, Congress should consider amending the law to allow the agency to use federal funds to provide its users playback equipment for electronic braille files (i.e., refreshable braille devices).”
The VIAF API is undergoing enhancements in an upcoming July install scheduled for 7/19/2016.
“Yeas 74, Nays 18”: with those few magic words yesterday, Dr. Carla Hayden was confirmed overwhelmingly by the United States Senate to serve as the nation’s 14th Librarian of Congress. ALA strongly endorsed Dr. Hayden’s nomination, worked hard for her confirmation as an organization, and is proud to have enabled tens of thousands of Americans (librarians and many others alike) to communicate their pride in and support of Dr. Hayden to their Senators.
Today’s magic words are the ones that our parents first acquainted us with – “thank you.” Too often, in the heat of legislative debate and public advocacy, they’re forgotten, but not by librarians and the people who support what (and who) we stand for. Today, keep calling, emailing, and Tweeting the Senators who voted “Yea” to confirm Dr. Hayden (complete list by state below), and no matter where you live, also thank:
- Senate Majority Leader Mitch McConnell for initiating and enabling yesterday’s historic vote;
- Senate Majority Whip John Cornyn for influentially supporting Dr. Hayden with his vote;
- The Rules Committee’s indefatigable staff and leadership, Chairman Roy Blunt and Ranking Member Chuck Schumer; and, by no means least
- Dr. Hayden’s biggest boosters in the Senate, her home state of Maryland’s Senators Barbara Mikulski and Ben Cardin.
Dr. Hayden’s nomination, Rules Committee vetting, hearing, and ultimate consideration on the floor of the Senate were, appropriately, not partisan. They were done right, done fairly, and done well, and the nation will benefit for a decade from that model process.
Saying “thank you” is appropriate, easy, and it’s the right thing to do. Please, pass it on proudly and loudly – #HaydenISLoC
The post Thank your Senators for the new Librarian of Congress appeared first on District Dispatch.
Great to hear Koha’s Nicole Engard and Brendan Gallagher interviewed on FLOSS Weekly episode 236 talking about the integrated library system. Six (!) years ago Evergreen was on FLOSS Weekly episode 132, with Mike Rylander and the rich radio-friendly baritone voice of Ontario’s own Dan Scott explaining about the other free and open ILS written in Perl.
Austin, TX – The peak of summer is also the midpoint in the annual DuraSpace Membership Campaign. Many thanks to those in our community who have become 2016 DuraSpace Members. We are pleased to report that we are within reach of our Membership Campaign goal of $1,250,000. Financial contributions come from our members, registered service providers, and our corporate sponsors.
This afternoon, the Senate voted to confirm Dr. Carla Hayden as the 14th Librarian of Congress! Dr. Hayden will be the first professional librarian to hold the position in over 40 years, as well as the first woman and first African American Librarian of Congress.
You can join our celebration on social media (#HaydenISLoC) and by taking a moment to thank the 74 Senators who voted to confirm Dr. Hayden!
The post Hayden confirmed as the 14th Librarian of Congress appeared first on District Dispatch.
CRRA Update Spring 2016
(December, January, February)
Please see the PDF for the more visually rich version.
Open Knowledge Foundation: Why Open Source Software Matters for Government and Civic Tech – and How to Support It
Today we’re publishing a new research paper looking at whether free/open source software matters for government and civic tech. Matters in the sense that it should have a deep and strategic role in government IT and policy rather than just being a “nice to have” or something “we use when we can”.
As the paper shows, the answer is a strong yes: open source software does matter for government and civic tech — and, conversely, government matters for open source. The paper covers:
- Why open software is especially important for government and civic tech
- Why open software needs special support and treatment by government (and funders)
- What specific actions can be taken to provide this support for open software by government (and funders)
We also discuss how software is different from other things that governments traditionally buy or fund. This difference is why government cannot buy software the way it buys office furniture or procures the building of bridges — and why buying open matters so much.
The paper is authored by our President and Founder, Dr Rufus Pollock.
Read the Full Version of the Paper Online »
Download PDF Version of the paper »
Discussion and Comments »
Why Open Software
We begin with four facts about software and government which form a basis for the conclusions and recommendations that follow.
1. The economics of software: software has high fixed costs and low (zero) marginal costs, and it is also incremental in that new code builds on old. This cost structure creates a fundamental dilemma between funding the fixed costs, e.g. by making software proprietary and raising prices, and promoting optimal access by setting the price at the marginal-cost level of zero. In resolving this dilemma, proprietary software models favour funding the fixed costs but at the cost of inefficiently high prices and hampered future development, whilst open source models favour efficient pricing and access but face the challenge of funding the fixed costs needed to create high-quality software in the first place. The incremental nature of software sharpens this dilemma and contributes to technological and vendor lock-in.
2. Switching costs are significant: it is (increasingly) costly to switch off a given piece of software once you start using it. This is because you make “asset (software) specific investments”: learning how to use the software, integrating it with your systems, extending and customizing it, and so on. These all mean there are often substantial costs associated with switching to an alternative later.
3. The future matters and is difficult to know: software is used for a long time — whether in its original or upgraded form — so knowing the future is especially important when purchasing software. Predictions about the future of software are especially hard because of its complex nature and adaptability, and behavioural biases mean the level of uncertainty and likely future change are underestimated. Together these mean lock-in is under-estimated.
4. Governments are bad at negotiating, especially in this environment, and hence the lock-in problem is especially acute for government. Governments are generally poor decision-makers and bargainers because of the incentives faced by government as a whole and by individuals within it. They are especially weak when making trade-offs between the near term and the more distant future, and weaker still when the future is complex, uncertain, and hard to specify contractually up front. Software procurement has all of these characteristics, making it particularly prone to error compared with other areas of government procurement.
Note: numbers in brackets, e.g. (1), refer to one of the four observations in the previous section.
A. Lock-in to Proprietary Software is a Problem
- Incremental nature of software (1) + switching costs (2) ⇒ lock-in happens for a software technology and, if it is proprietary, to a vendor.
- Zero marginal cost of software (1) + uncertainty about future user needs and technologies (3) + governments are poor bargainers (4) ⇒ lock-in has high costs and is under-estimated – especially so for government.
- Together ⇒ lock-in to proprietary software is a problem.
B. Open Source is a Solution
- Lock-in is a problem ⇒ strategies that reduce lock-in are valuable.
- Economics of software (1) ⇒ open source is a strategy for government (and others) to reduce future lock-in. Why? Because it requires the software provider to make an up-front commitment to making the essential technology available to both users and other technologists at zero cost, now and in the future.
- Together these two points ⇒ open source is a solution, and a specific commitment to open source in government / civic tech is important and valuable.
C. Open Source Needs Support
And Government / Civic Tech is an area where it can be provided effectively
- Software has high fixed costs, and a challenge for open source is to secure sufficient investment to cover them (1 – economics).
- Governments are large spenders on IT and are bureaucratic: they can make rules to pre-commit up front (e.g. in procurement) and can feasibly coordinate – at local, national, or even international levels – on buying and investment decisions related to software.
- Hence government is especially well situated to support open source, and has the tools to provide systematic support.
- Conclusion: government should provide systematic support.
We established in the previous section that there is a strong basis for promoting open software. This section provides specific strategic and tactical suggestions for how to do that. There are five proposals, summarized here; each is covered in more detail in the main section below. We especially emphasize the potential of the last three options, as they do not require up-front participation by government and can be bootstrapped with philanthropic funding.
1. Recognize and reward open source in IT procurement.
Give open source explicit recognition and beneficial treatment in procurement. Specifically, introduce into government tenders either an explicit requirement for an open source solution or a significant points value for open source in the scoring of solutions (more than 30% of the points on offer).
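As a rough illustration of how such a scoring rule might work, here is a minimal sketch in Python. The criteria names, weights, and bid scores below are hypothetical (not from the paper); the only element taken from the proposal is that open source carries at least 30% of the available points.

```python
# Hypothetical tender-scoring sketch: open source carries 30% of the
# available points; the remaining 70% is split across other criteria.
# Criteria, weights, and the two bids are illustrative only.
WEIGHTS = {"open_source": 30, "functionality": 40, "cost": 30}  # points available

def score_bid(fractions):
    """Total points for a bid; `fractions` maps each criterion to the
    fraction (0.0-1.0) of that criterion's points the bid earns."""
    return sum(WEIGHTS[c] * fractions[c] for c in WEIGHTS)

# A strong proprietary bid that earns no open source points can be
# outscored by a somewhat weaker open source bid.
proprietary = score_bid({"open_source": 0.0, "functionality": 0.9, "cost": 0.8})
open_bid = score_bid({"open_source": 1.0, "functionality": 0.75, "cost": 0.85})
print(proprietary, open_bid)  # roughly 60 vs 85.5 points
```

With 30% of the points reserved for openness, an open solution starts with a sizeable head start — which is exactly the kind of up-front pre-commitment the proposal describes.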
2. Make government IT procurement more agile and lightweight.
Current methodologies follow a “spec and deliver” model, in which government attempts to define a full spec up front and then seeks solutions that deliver against it. This model greatly diminishes the value of open source — which allows for rapid iteration in the open and more rapid switching of provider — and implicitly builds in lock-in to the selected provider, whose solution is a black box to the buyer. In addition, whilst theoretically shifting risk to the supplier of the software, given the difficulty of specifying software up front it really just inflates upfront costs (since the supplier has to price in risk) and sets the scene for complex and cumbersome later negotiations over under-specified elements.
3. Develop a marketing and business development support organization for open source in key markets (e.g. US and Europe).
The organization would be small, at least initially, and focused on three closely related activity areas (in rough order of importance):
- General marketing of open source to government at both local and national level: getting in front of CIOs, explaining open source, demystifying and de-risking it, making the case, etc. This is not specific to any particular product or solution.
- Supporting open source businesses, especially early-stage ones, in initial business development activities, including connecting startups to potential customers (“opening the rolodex”) and guiding them through the bureaucracy of government procurement, including discovering and responding to RFPs.
- Promoting commercialization of open source by providing advice, training, and support for open source startups and developers in commercializing and marketing their technology. They are often strong on technology and weak on marketing and selling their solutions, and this support would help address those deficiencies.
4. Open Offsets: establish target levels of open source financing combined with an “offsets”-style scheme to discharge these obligations.
An “Open Offsets” program would combine three components:
- Establish target commitments for funding open source for participants in the program, who could include government, philanthropists, and the private sector. Targets would be a specific measurable figure, such as 20% of all IT spending or $5m.
- Participants discharge their funding commitment either through direct spending, such as procurement or sponsorship, or via the purchase of open source “offsets”. Offsets enable organizations to discharge their open source funding obligation in a manner analogous to the way carbon offsets allow groups to deliver on their climate change commitments.
- Administrators of the open offset fund distribute the funds to relevant open source projects and communities in a transparent manner, likely using some combination of expert advice, community voting, and value generated (the latter based on an estimate of the usage and value created by given pieces of open software).
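The bookkeeping behind the first two components above is simple. Here is a minimal Python sketch of how a participant's remaining offset obligation might be computed; the 20% target echoes the illustrative figure in the text, but all dollar amounts and the function itself are hypothetical, as the paper does not fix a concrete scheme.

```python
# Hypothetical "Open Offsets" bookkeeping sketch. The target share and
# dollar amounts are illustrative; the paper does not specify a scheme.
def offset_obligation(it_spend, target_share, direct_open_spend):
    """Remaining obligation (to be discharged via offset purchases) after
    crediting direct open source spending against the target."""
    target = target_share * it_spend
    return max(0.0, target - direct_open_spend)

# A participant with $10m of IT spend, a 20% target, and $1.2m of direct
# open source spending still owes $0.8m in offsets.
remaining = offset_obligation(10_000_000, 0.20, 1_200_000)
print(f"offsets still owed: ${remaining:,.0f}")
```

Direct spending is credited first, mirroring how carbon offsets top up, rather than replace, an organization's own emissions reductions.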
5. “Choose Open”: a grass-roots oriented campaign to promote open software in government and government run activities such as education.
“Choose Open” would be modelled on recent initiatives in online political organizing, such as “Move On” in the 2004 US Presidential election, as well as online initiatives like Avaaz. It would combine central provision of message, materials, and policy with localized community participation to drive change.
Read the Full Version of the Paper Online »
Download PDF Version of the paper »
Discussion and Comments »
In an earlier post I speculated about the plateau in ebook adoption. According to recent statistics from publishers we are now actually seeing a decline in ebook sales after a period of growth (and then the leveling off that I discussed before). Here’s my guess about what’s going on—an educated guess, supported by what I’m hearing from my sources and network.
First, re-read my original post. I believe it captured a significant part of the story. A reminder: when we hear about ebook sales, we hear (mostly) about sales from large publishers, and I have no doubt that ebooks are a troubled part of their sales portfolios. But there are many more ebooks than those reported by the publishers that release their stats, and many other ways to acquire them. Thus there’s a good chance that considerable “dark reading” (as I called it) accounts for the disconnect between the surveys that say e-reading is growing and the sales figures (again, from the publishers that reveal these stats) that are declining.
The big story I now perceive is a bifurcation of the market between what used to be called high and low culture. For genre fiction (think sexy vampires) and other genres where there is a lot of self-publishing, readers seem to be moving to cheap (often 99 cent) ebooks from Amazon’s large and growing self-publishing program. Amazon doesn’t release its ebook sales stats, but we know that they already have 65% of the ebook market and through their self-publishing program may reach a disturbing 90% in a few years. Meanwhile, middle- and high-brow books for the most part remain at traditional publishers, where advances still grease the wheels of commerce (and writing).
Other changes I didn’t discuss in my last post are also happening that impact ebook adoption. Audiobook sales rose by an astonishing 40% over the last year, a notable story that likely impacts ebook growth — for the vast majority of smartphone owners, audiobooks and ebooks are substitutes (see also the growth in podcasts). In addition, ebooks have gotten more expensive in the past few years, while print (especially paperback) prices have become more competitive; for many consumers, a simple Econ 101 assessment of pricing accounts for the ebook stall.
I also failed to account in my earlier post for the growing buy-local movement that has impacted many areas of consumption—see vinyl LPs and farm-to-table restaurants—and is, in part, responsible for the turnaround in bookstores—once dying, now revived—an encouraging trend pointed out to me by Oren Teicher, the head of the American Booksellers Association. These bookstores were clobbered by Amazon and large chains late last decade but have recovered as the buy-local movement has strengthened and (more behind the scenes, but just as important) they adopted technology and especially rapid shipping mechanisms that have made them more competitive.
Personally, I continue to read in both print and digitally, from my great local public library and from bookstores, and so I’ll end with an anecdotal observation: there’s still a lot of friction in getting an ebook versus a print book, even though one would think it would be the other way around. Libraries still have poor licensing terms from publishers that treat digital books like physical books that can only be loaned to one person at a time despite the affordances of ebooks; ebooks are often not that much cheaper, if at all, than physical books; and device-dependency and software hassles cause other headaches. And as I noted in my earlier post, there’s still not a killer e-reading device. The Kindle remains (to me and I suspect many others) a clunky device with a poor screen, fonts, etc. In my earlier analysis, I probably also underestimated the inertial positive feeling of physical books for most readers—which I myself feel as a form of consumption that reinforces the benefits of the physical over the digital.
It seems like all of these factors—pricing, friction, audiobooks, localism, and traditional physical advantages—are combining to restrict the ebook market for “respectable” ebooks and to shift them to Amazon for “less respectable” genres. It remains to be seen if this will hold, and I continue to believe that it would be healthy for us to prepare for, and create, a better future with ebooks.
Austin, TX – DuraSpace is pleased to announce the launch of a new DuraCloud web site: http://duracloud.org. The site makes it easy to request a customized DuraCloud quote or to create a free trial account. Simple navigation points users to more information about the service and its four subscription plans. Please let us know what you think!
We are excited to announce that the second face-to-face Mashcat event in North America will be held on January 24th, 2017, in downtown Atlanta, Georgia, USA. We invite you to save the date. We will be sending out a call for session proposals and opening up registration in the late summer and early fall.
Not sure what Mashcat is? “Mashcat” was originally a 2012 event in the UK aimed at bringing together people working on the IT systems side of libraries with those working in cataloguing and metadata. Four years later, Mashcat is a loose group of metadata specialists, cataloguers, developers, and anyone else with an interest in how metadata in and around libraries can be created, manipulated, used, and re-used by computers and software. The aim is to work together and bridge the communications gap that has sometimes gotten in the way of building the best tools we possibly can to manage library data. Among our accomplishments in 2016 were holding the first North American face-to-face event in Boston in January and running webinars. If you’re unable to attend a face-to-face meeting, we will be holding at least one more webinar in 2016.
Thanks for considering, and we hope to see you in January.