Feed aggregator

DPLA: Perspectives from DPLAfest 2016 Part II: Growing our Network and Building Community

planet code4lib - Fri, 2016-05-06 15:00

With DPLAfest 2016 larger than ever, we reached out to a few attendees ahead of the event to help us capture the (many) diverse experiences of fest participants. These ‘special correspondents’ have graciously volunteered to share their personal perspectives on the fest. In this second guest post by our special correspondents, Kristen Yarmey and Patrick Murray-John, both of whom are part of the DPLA Community Reps program, join Kerry Dunne, Sara Stephenson, and Emily Pfotenhauer in reflecting on their fest experiences from the perspectives of their fields and interests: the growth of DPLA and its network, and the sharing and creative reuse of cultural heritage materials.

An Open, Distributed Network: Thoughts from DPLAFest 2016

By Kristen Yarmey, Associate Professor and Digital Services Librarian, The University of Scranton, DPLA Community Rep and member of the PA Digital Partnership Founders’ Group and Metadata Team

DPLAfest welcomed some of DPLA’s founders as well as newcomers.

“I was there when DPLA was born,” intoned David Ferriero, Archivist of the United States, at DPLAFest’s opening session. He was hardly the only one feeling nostalgic; the entire conference was peppered with DPLA remembrances. Maura Marx of IMLS displayed her laptop and its well-worn DPLA sticker as an attestation that she, too, was there when it all began. At the closing session, Dan Cohen recalled attending DPLA’s first plenary in the same room, five years earlier.

My own DPLA origin story is comparatively unimpressive. I’m not a founder. I wasn’t on any of the early committees or work groups. I wasn’t at any important meetings or planning sessions. I didn’t even make it to DPLAFest until the third go-round!

But I was at the Northeast Document Conservation Center’s Digital Directions conference in Boston back in June 2012. One of the very last sessions was a presentation by Emily Gore (then a librarian at Florida State University), titled “The Power of Where Your Collections Can Go: Towards a Digital Public Library of America.” Her presentation wasn’t the first I’d heard of DPLA (I’d seen some announcements), but it was my first real, in-depth exposure to the values and vision behind DPLA.

Emily described fragmented, institutional collections taking on new lives and coming together into cohesive, virtual collections. She outlined plans for a strategic network of interoperable code and open data. She showed us a diagram that sketched out a distributed “hub and spoke” model for contributors, and she emphasized the importance of collective action, telling us to “get on the list and participate.” I’m ordinarily a pretty pragmatic person, not prone to swoon over new ideas. But it’s no exaggeration to say that this presentation and this idea spoke to my digital librarian soul. It just made so much sense. Right then and there began my love affair with the big dream of DPLA.

The unspoken truth about big dreams, though, is that they are complicated, complex, and difficult to fulfill. There are no shortcuts when it comes to building good, strong, networked infrastructure. There’s no magic wand for establishing and sustaining partnerships between diverse and multitudinous stakeholders. Long-term funding sources are elusive, if not illusory. So progress is almost inevitably slow and almost necessarily frustrating.

At DPLAFest, many sessions addressed the daunting challenges and problems that the organization and its participants face. As a group, our attention was focused on the tasks ahead. At the same time, though, there were calls for celebration: Jon Voss of HistoryPin, for example, reminded us that DPLAFest is and should be an actual Fest.

Representatives from hubs Minnesota Digital Library, Digital Commonwealth, and The Portal to Texas History wore headwear representing their home states while sharing models for contributing to DPLA.

I’d like to celebrate, then, an observation that struck me repeatedly at DPLAFest, which is how fruitfully and verdantly the DPLA network is growing. I loved the emerging spirit of friendly competition between hubs—a sense of egging each other on—but also the willingness to openly share their ideas, workflows, code, documents, and resources. Speakers confessed again and again to stealing ideas from other hubs or partners. Conference participants took care to raise concerns or issues on behalf of partners who could not be present; a recurring question was “How might this impact [insert name of beloved local historical society or small museum]?” I kept thinking that this was precisely what DPLA was meant to do, that it’s not simply a network of content but also a network of effort, action, and expertise. And so on the second day I teared up when a DPLA board member thanked us and told us it was “so energizing” to see the DPLA network working the way it was intended.

It was particularly exciting for me to attend DPLAFest with the PA Digital team. My pride in Pennsylvania and the hard work that so many of my colleagues have done to establish our service hub is such that I got irrationally defensive when a single, slightly outdated PowerPoint slide displayed a DPLA hub map without Pennsylvania highlighted.  Two years of collaboration on DPLA have strengthened ties among our team and across our state, bringing new meaning and energy to our geographical proximity.

All this said, collaboration does not imply or require conformity. What struck me about DPLA’s distributed model back in 2012, and what strikes me still, is that it’s an environment of “both/and” rather than “either/or.” We can have both local control and unified access. We can benefit from data standards without erasing “nonstandard” data. We can work together but also apart. We can be many voices, not one. And so when I heard points of disagreement and debate during DPLAFest, I rejoiced that DPLA fosters what Sarah Vowell calls “the luxury of dissent.”

DPLAFest presents a moment, an annual milestone where we can pause to acknowledge how far we have come. It is true that DPLA is not perfect, but it is good—very good—and getting better every day. (How could it not, with such brilliant people working on it?) In these past few years, we have faced knotty problems. We have stared down dirty metadata, fought funding cuts, and drafted institutional policies. We partnered up to develop new open source software platforms and worked with publishers to find new ways to encourage kids to read. We’ve connected teachers and students with primary sources that bring history and its complications into vivid relief. We have advocated for changes to outdated and ineffective copyright laws and established a new set of standardized rights statements. We have increased access to African American history and proved conclusively that Lady Gaga was not the first to wear a food dress. We contributed 13 million remarkable items of cultural heritage. 13 million!

No matter how hard it is for us to stop and celebrate when there’s so much to do, these are accomplishments that deserve a quiet moment of sincere, heartfelt recognition. So I take this moment to say thank you, all of you, for bringing the big dream of DPLA ever closer to reality. Cheers, and well done.

Ownership, Sharing, and Involvement: Some reflections on DPLAFest 2016

By Patrick Murray-John, Web Developer and Assistant Research Professor at the Center for History and New Media at George Mason University and DPLA Community Rep

First, any reflection needs to acknowledge and say thanks to all the people who put the Fest together. From arranging wonderful spaces, to organizing proposals and sessions, to addressing all the last-minute quirks of a major conference, they did a fantastic job. Many thanks!

From the sessions I visited, two major takeaway themes emerged for me. On Thursday, the theme was ‘ownership and sharing’. On Friday, ‘involvement’.

Newly launched rights statements are already implemented for the current alpha version of Omeka S. Learn more at RightsStatements.org

The major reveal of the opening session, and the one that best hits the ‘ownership and sharing’ theme, was the announcement of RightsStatements.org. The site, a joint project by Europeana and DPLA, provides a much-needed list of eleven statements describing the copyright status of cultural heritage artifacts. Seen here as John Flatness has implemented it for the current alpha version of Omeka S, the statements provide a standardized vocabulary for describing how we can use the materials surfaced by our institutions. A common vocabulary for a work’s sharing potential will give creators, artists, and innovators better footing for realizing their ideas and for promoting the progress of science and useful arts.
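
Each statement is identified by a persistent, dereferenceable URI, which is what makes the vocabulary usable as shared metadata across systems. As a quick sketch from the shell (assuming the vocabulary layout advertised at RightsStatements.org, where the “In Copyright” statement lives at /vocab/InC/1.0/), you can watch a statement URI resolve:

# Inspect the "In Copyright" statement URI; -I asks for headers only,
# so you can see the redirect to the statement's landing page.
curl -I "http://rightsstatements.org/vocab/InC/1.0/"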

Speaking of the progress of science and useful arts, the session Do We Need to Worry?: Update on Copyright Matters Affecting Digital Libraries highlighted important decisions in court cases that affect the openness of our cultural heritage. Being transformative seems to be a key point. That encourages me greatly, since it promotes seeing documents as also data to be worked with, which is central to how I envision digital humanities. Court decisions will inform important aspects of ownership and the potential for sharing, and the guidance and knowledge from the session point to a good future.

Complications to the public sense of ownership and sharing appeared in Pond to Lake to Ocean: Partnerships for Moving Cultural Heritage Materials into the DPLA. There, I saw many public archival projects balancing the desire to bring personal records into our archives against donors’ desires to retain some control over how their materials are used and accessioned. As it becomes easier and easier to digitize and document history from the public, that balance seems to be a growing technological and ethical consideration.

My takeaway from Friday’s sessions was an impressive sense of involvement from many individuals and communities. I spent my time at the Technology shorts and Building Tools with the DPLA API and Developer Showcase sessions (as well as the “Omeka S” session), but other sessions, like Testing a Linked Data Fragments Server using DPLA Data & PCDM, IIIF, and Interoperability, seemed to reflect the same principle. I saw many different ways that people want to interact with documents and data from DPLA. DPLA is still a young venture, and it is heartening to see how much work is being done on the two sides of pushing more up to DPLA and Europeana, and pulling more down to use the heritage in creative and interesting ways.
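
For anyone who wants to try that kind of interaction themselves, the DPLA search API is the natural starting point. A minimal sketch, assuming the v2 items endpoint that DPLA documents and a placeholder API key (keys are issued by DPLA on request):

# Query the DPLA API (v2) for items matching a keyword search.
# YOUR_API_KEY is a placeholder, not a working key.
curl "https://api.dp.la/v2/items?q=cultural+heritage&page_size=2&api_key=YOUR_API_KEY"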

DuraSpace News: AVAILABLE: Prague PASIG Presentations

planet code4lib - Fri, 2016-05-06 00:00

From Art Pasquinelli, Oracle

Redwood Shores, CA – More than 150 university, scientific and cultural institution, and industry representatives attended the recent PASIG (Preservation and Archiving Interest Group) meeting held March 9-11 at the Czech National Library of Technology in Prague.

Presentations from Prague PASIG are available here.

LibUX: Learn React with us

planet code4lib - Thu, 2016-05-05 21:52

Hi there. We formed a study group ( #sg-react ) in our slack channel to learn React – an increasingly popular JavaScript library spearheaded by Facebook for building user interfaces. It’s good to know, but because it approaches the design and development of the web a little differently from other tools you might be used to, we thought it would be fun to wade into the mire with a little backup.

Starting June 6th we will spend the summer stepping through the free-as-in-kittens React.js Fundamentals course put together by Tyler McGinnis. Twelve lectures over twelve weeks. At the end of each section, we’ll meet up in Google Hangouts, complain, and level-up together.

When all is said and done, I am going to put together a little certification page on LibUX for everyone who made it to the end as a thank-you. All the recordings of our meetups — both video and audio — as well as any other takeaways will be there too.

What do you need to do?
  1. First – register below.
  2. Then join our Slack channel. If you aren’t already slacking – well, you should be. Plus, the LibUX slack is booming. If you’re reading this somehow, then imagine an IRC on steroids filled to the brim with smart people who just love to talk about library user experience.
  3. Finally – register for the free React.js Fundamentals course.*


* I am totally indebted to John David Garza for pointing out my broken links.

The post Learn React with us appeared first on LibUX.

District Dispatch: DOL extols benefits of workforce system collaboration with public libraries

planet code4lib - Thu, 2016-05-05 21:21

Image Source: Youth Action Project

In a Training and Employment Notice (TEN) issued this week by the U.S. Department of Labor’s (DOL) Employment and Training Administration (ETA), Assistant Secretary Portia Wu encouraged leaders of our nation’s workforce investment system to continue building upon the role that public libraries play in addressing the needs of American workers, job seekers, and employers.

The notice, directed to state workforce agencies, state workforce liaisons, state and local workforce boards, and American Job Center directors, advises that “collaboration with public libraries can increase the quality and quantity of access points for individuals to receive needed career information and assistance.”

The vital role that libraries play in assisting job seekers and unemployed workers is recognized in the Workforce Innovation and Opportunity Act (WIOA), signed in 2014. Recognizing that libraries both help visitors find workforce and labor market information and assist them in searching for jobs, WIOA specifically designates public libraries among the available partners for American Job Centers.

Citing examples such as the 2015 grant initiative involving the New Jersey Department of Labor and Workforce Development and 26 municipal and county libraries in the state, as well as a 2012 partnership between the Maryland Department of Labor and Maryland Public Libraries, the advisory also highlights existing partnerships and activities between libraries and DOL efforts on the national level. For example, public libraries are included in America’s Service Locator, a national online search tool that can be used to locate both the nearest library and an American Job Center or social service provider within a community. DOL’s ETA has also provided training for librarians and other staff on national electronic tools, including the workforce information portal CareerOneStop and the occupation database O*NET. District Dispatch readers may also recall webinars hosted by ALA, IMLS, the U.S. Department of Education, and the U.S. Department of Labor with the goal of introducing WIOA to libraries and stakeholders across the country.

Beyond these existing partnerships, the notice encourages several areas of further collaboration to extend career and employment services in libraries, such as:

  • Leveraging digital literacy activities occurring in public libraries;
  • Collaborating to educate library staff about in-person and virtual employment and training resources available through the public workforce system;
  • Including libraries as a stop on the route of mobile American Job Centers;
  • Using space available at libraries to provide career assistance and employment services to library patrons (e.g. familiarizing patrons with career resources available electronically or in-person at American Job Centers) or to host career events (e.g. job fairs);
  • Sharing workforce and labor market information, including data on high-growth industries and occupations, between the public workforce system and libraries;
  • Signing Memoranda of Understanding or other formal agreements; and
  • Co-locating American Job Centers and libraries.

Interested in how your library can grow its employment and training services? Be sure to reach out to the workforce development center in your local area to learn more about opportunities to collaborate on serving career and employment needs in your community.

The post DOL extols benefits of workforce system collaboration with public libraries appeared first on District Dispatch.

District Dispatch: Successful National Library Legislative Day concludes

planet code4lib - Thu, 2016-05-05 17:24

2016 group photo during NLLD. Photo credit: Adam Mason

This week, the ALA Washington Office kicked off the 42nd National Library Legislative Day (NLLD) in Washington, DC, with a little help from over 400 library supporters. Attendees heard from keynote speaker and former Member of Congress Rush Holt, as well as a number of other expert speakers. Panels included an issue briefing with Washington Office staff, training on press and media and on holding productive meetings with legislators, and tips on how to keep libraries in the spotlight during an election year. A recording of the first half of the day is available on the ALA Youtube channel.

ALA President Sari Feldman presents the 2016 WHCLIST award to winner Dan Aldrich

This year, National Library Legislative Day advocates asked their congressional representatives to fund the Library Services and Technology Act (LSTA) and support programs that provide school libraries with needed funds for materials. Advocates also asked legislators to support the nomination of Dr. Carla Hayden for Librarian of Congress, ECPA reform, and the Marrakesh Treaty.

During the event, the ALA Washington Office awarded Dan Aldrich, a library advocate from Georgia, the 2016 White House Conference on Library and Information Services (WHCLIST) Award. Given to a non-librarian participant attending National Library Legislative Day, the award covers hotel fees in addition to a $300 stipend to reduce the cost of attending the event.

This year, the NLLD advocacy efforts were bolstered by a collaboration with the Harry Potter Alliance (HPA). Over 700 HPA members agreed to take action as part of Virtual Library Legislative Day, along with over 700 of the ALA’s own membership who pledged to take similar action and amplify the message being sent in Washington this week.

NLLD participants during one of their Hill meetings. Photo Credit: Adam Mason

The ALA also took this opportunity to launch a new video series, also a collaboration with the HPA. Called Spark, the series aims to help demystify the advocacy process for young adults and new advocates.

Many thanks to all of the library supporters who participated in National Library Legislative Day, both in person and virtually. It is your hard work and support that made this event a success.

The post Successful National Library Legislative Day concludes appeared first on District Dispatch.

LITA: Congratulate 2016 LITA/Ex Libris Student Writing Award Winner Tanya Johnson

planet code4lib - Thu, 2016-05-05 16:45

Tanya Johnson has been selected as the winner of the 2016 Student Writing Award sponsored by Ex Libris Group and the Library and Information Technology Association (LITA) for her paper titled “Let’s Get Virtual: An Examination of Best Practices to Provide Public Access to Digital Versions of Three-Dimensional Objects.” Johnson is a MLIS candidate at the Rutgers School of Communication and Information.

“Tanya Johnson’s paper on best practices for providing public access to digital versions of three-dimensional objects stood out to the selection committee due to her clear writing and practical, informative content. We are delighted to grant Tanya the 2016 LITA/ExLibris Award,” said Brianna Marshall, the Chair of this year’s selection committee.

The LITA/Ex Libris Student Writing Award recognizes outstanding writing on a topic in the area of libraries and information technology by a student or students enrolled in an ALA-accredited library and information studies graduate program. The winning manuscript will be published in Information Technology and Libraries (ITAL), LITA’s open access, peer reviewed journal, and the winner will receive $1,000 and a certificate of merit.

The award will be presented at the LITA Awards Ceremony & President’s Program at the ALA Annual Conference in Orlando, Florida, on Sunday, June 26, 2016.

The members of the 2016 LITA/Ex Libris Student Writing Award Committee are: Brianna Marshall (Chair); Rebecca Rose (Vice-Chair); Sandra Barclay (Past-Chairperson); Julia Bauder; Elizabeth McKinstry; Phillip Joseph Suda; and Olga Karanikos (Ex Libris Liaison).

About Ex Libris

Ex Libris, a ProQuest company, is a leading global provider of cloud-based solutions for higher education. Offering SaaS products for the management and discovery of the full spectrum of library and scholarly materials, as well as mobile campus solutions driving student engagement and success, Ex Libris serves thousands of customers in 90 countries. For more information about Ex Libris, see our website, and join us on Facebook, YouTube, LinkedIn, and Twitter.

About LITA

Established in 1966, the Library and Information Technology Association (LITA) is the leading organization reaching out across types of libraries to provide education and services for a broad membership of nearly 2,700 systems librarians, library technologists, library administrators, library schools, vendors, and many others interested in leading edge technology and applications for librarians and information providers. LITA is a division of the American Library Association. Follow us on our Blog, Facebook, or Twitter.

David Rosenthal: Signal or Noise?

planet code4lib - Thu, 2016-05-05 16:23
I've been blogging critically about the state of scientific publishing since my very first post 9 years ago. In particular, I've been pointing out that the several billion dollars a year that go to the publishers' bottom lines, plus the several billion dollars a year in unpaid work by the reviewers, are extremely poor value for money. The claim is that the peer-review process guarantees the quality of published science. But the reality is that it doesn't; it cannot even detect most fraud or major errors.

The fundamental problem is that all participants have bad incentives. Follow me below the fold for some recent examples that illustrate their corrupting effects.

Publishers tend to choose reviewers who are prominent and in the mainstream of their subject area. This hands them a powerful mechanism for warding off threats to the subject's conventional wisdom. Ian Leslie's The Sugar Conspiracy is a long and detailed examination of how prominent nutritionists used this and other mechanisms to suppress for four decades the evidence that sugar, not fat, was the cause of obesity. The result was illustrious careers for the senior scientists, wrecked lives for the dissidents, and most importantly a massive, world-wide toll of disease, disability and death. I'm not quoting any of Leslie's article because you have to read the whole of it to understand the disaster that occurred.

At Science Translational Medicine Derek Lowe's From the Far Corner of the Basement has more on this story, with a link to the paper in BMJ that re-evaluated the data from the original, never fully published study:
It’s impossible to know for sure, but it seems likely that Franz and Keys may have ended up regarding this as a failed study, a great deal of time and effort more or less wasted. After all, the results it produced were so screwy: inverse correlation with low cholesterol and mortality? No benefit with vegetable oils? No, there must have been something wrong.

Dahlia Lithwick's Pseudoscience in the Witness Box, based on a Washington Post story, describes another long-running disaster based on bogus science. The bad incentives in this case were that the FBI's forensic scientists were motivated to convict rather than exonerate defendants:
This study was launched after the Post reported that flawed forensic hair matches might have led to possibly hundreds of wrongful convictions for rape, murder, and other violent crimes, dating back at least to the 1970s. In 90 percent of the cases reviewed so far, forensic examiners evidently made statements beyond the bounds of proper science. There were no scientifically accepted standards for forensic testing, yet FBI experts routinely and almost unvaryingly testified, according to the Post, “to the near-certainty of ‘matches’ of crime-scene hairs to defendants, backing their claims by citing incomplete or misleading statistics drawn from their case work.”

The death toll is much smaller:
"the cases include those of 32 defendants sentenced to death.” Of these defendants, 14 have already been executed or died in prison. Via Dave Farber's IP list and Pascal-Emmanuel Gobry at The Week I find William A. Wilson's Scientific Regress.Wilson starts from the now well-known fact that many published results are neither replicated nor possible to replicate, because the incentives to publish in a form that can be replicated, and to replicate published results, are lacking:
suppose that three groups of researchers are studying a phenomenon, and when all the data are analyzed, one group announces that it has discovered a connection, but the other two find nothing of note. Assuming that all the tests involved have a high statistical power, the lone positive finding is almost certainly the spurious one. However, when it comes time to report these findings, what happens? The teams that found a negative result may not even bother to write up their non-discovery. After all, a report that a fanciful connection probably isn’t true is not the stuff of which scientific prizes, grant money, and tenure decisions are made.
And even if they did write it up, it probably wouldn’t be accepted for publication. Journals are in competition with one another for attention and “impact factor,” and are always more eager to report a new, exciting finding than a killjoy failure to find an association. In fact, both of these effects can be quantified. Since the majority of all investigated hypotheses are false, if positive and negative evidence were written up and accepted for publication in equal proportions, then the majority of articles in scientific journals should report no findings. When tallies are actually made, though, the precise opposite turns out to be true: Nearly every published scientific article reports the presence of an association. There must be massive bias at work.

He points out the ramifications of this problem:
If peer review is good at anything, it appears to be keeping unpopular ideas from being published. Consider the finding of another (yes, another) of these replicability studies, this time from a group of cancer researchers. In addition to reaching the now unsurprising conclusion that only a dismal 11 percent of the preclinical cancer research they examined could be validated after the fact, the authors identified another horrifying pattern: The “bad” papers that failed to replicate were, on average, cited far more often than the papers that did! As the authors put it, “some non-reproducible preclinical papers had spawned an entire field, with hundreds of secondary publications that expanded on elements of the original observation, but did not actually seek to confirm or falsify its fundamental basis.”

And, as illustrated by The Sugar Conspiracy, this is a self-perpetuating process:
What they do not mention is that once an entire field has been created—with careers, funding, appointments, and prestige all premised upon an experimental result which was utterly false due either to fraud or to plain bad luck—pointing this fact out is not likely to be very popular. Peer review switches from merely useless to actively harmful. It may be ineffective at keeping papers with analytic or methodological flaws from being published, but it can be deadly effective at suppressing criticism of a dominant research paradigm. Even if a critic is able to get his work published, pointing out that the house you’ve built together is situated over a chasm will not endear him to his colleagues or, more importantly, to his mentors and patrons.

Science is supposed to provide a self-correcting mechanism to handle this problem, and The Sugar Conspiracy actually shows that in the end it works, but
even if self-correction does occur and theories move strictly along a lifecycle from less to more accurate, what if the unremitting flood of new, mostly false, results pours in faster? Too fast for the sclerotic, compromised truth-discerning mechanisms of science to operate? The result could be a growing body of true theories completely overwhelmed by an ever-larger thicket of baseless theories, such that the proportion of true scientific beliefs shrinks even while the absolute number of them continues to rise.

The four-decade reign of the fat hypothesis shows this problem.

In The Prevalence of Inappropriate Image Duplication in Biomedical Research Publications, Elisabeth M Bik, Arturo Casadevall and Ferric C Fang report on a study of the images in biomedical publications. From their abstract:
This study attempted to determine the percentage of published papers containing inappropriate image duplication, a specific type of inaccurate data. The images from a total of 20,621 papers in 40 scientific journals from 1995-2014 were visually screened. Overall, 3.8% of published papers contained problematic figures, with at least half exhibiting features suggestive of deliberate manipulation. The prevalence of papers with problematic images rose markedly during the past decade. Additional papers written by authors of papers with problematic images had an increased likelihood of containing problematic images as well. As this analysis focused only on one type of data, it is likely that the actual prevalence of inaccurate data in the published literature is higher. The marked variation in the frequency of problematic images among journals suggest that journal practices, such as pre-publication image screening, influence the quality of the scientific literature.

At least this is one instance in which some journals are adding value. But let's look at the set of journal value-adds Marcia McNutt, the editor-in-chief of Science, cites in her editorial attacking Sci-Hub (quotes in italics):
  • [Journals] help ensure accuracy, consistency, and clarity in scientific communication. If only. Many years ago, the peer-reviewed research on peer-review showed conclusively that only the most selective journals (such as McNutt's Science) add any detectable value to their articles. And that is before adjusting for the value their higher retraction rate subtracts.
  • editors are paid professionals who carefully curate the journal content to bring readers an important and exciting array of discoveries. This is in fact a negative. The drive to publish and hype eye-catching, "sexy" results ahead of the competition is the reason why top journals have a higher rate of retraction. This drive to compete in the bogus "impact factor" metric, which can be easily gamed, leads to many abuses. But more fundamentally, any ranking of journals as opposed to the papers they publish is harmful.
  • They make sure that papers are complete and conform to standards of quality, transparency, openness, and integrity. Clearly, if the result is a higher rate of retraction the claim that they conform to these standards is bogus.
  • There are layers of effort by copyeditors and proofreaders to check for adherence to standards in scientific usage of terms to prevent confusion. This is a task that can easily be automated; we don't need to pay layers of humans to do it.
  • Illustrators create original illustrations, diagrams, and charts to help convey complex messages. Great, the world is paying the publishers many billions of dollars a year for pretty pictures?
  • Scientific communicators spread the word to top media outlets so that authors get excellent coverage and readers do not miss important discoveries. And the communicators aren't telling the top media outlets that the "important discoveries" are likely to get retracted in a few years.
  • Our news reporters are constantly searching the globe for issues and events of interest to the research and nonscience communities. So these journals are just insanely expensive versions of the New York Times?
  • Our agile Internet technology department continually evolves the website, so that authors can submit their manuscripts and readers can access the journals more conveniently. Even if we accept the ease of submission argument, the ease of access argument is demolished by, among others, Justin Peters and John Dupuis. It's obviously bogus; the whole reason people use Sci-Hub is that it provides more convenient access! Also, let's not forget that the "Internet technology department" is spending most of their efforts in the way the other Web media do, monetizing their readers, and contributing to the Web obesity crisis. Eric Hellman's study 16 of the top 20 Research Journals Let Ad Networks Spy on Their Readers gave Science a D because: "10 Trackers. Multiple advertising networks." To be fair, Eric also points out that Sci-Hub uses trackers and Library Genesis sells Google ads too.
McNutt's incentives are clearly not aligned with the interests of researchers. Note the reference above to Science Translational Medicine. In Stretching the "peer reviewed" brand until it snaps, I wrote:
a trend publishers themselves started many years ago of stretching the "peer reviewed" brand by proliferating journals. If your role is to act as a gatekeeper for the literature database, you better be good at being a gatekeeper. Opening the gate so wide that anything can get published somewhere is not being a good gatekeeper.

The wonderful thing about Elsevier's triggering of the Streisand Effect is that it has compelled even Science to advertise Sci-Hub, and to expose the flimsy justification for the exorbitant profits of the major publishers.

Terry Reese: MarcEdit and Windows XP

planet code4lib - Thu, 2016-05-05 15:07

I’ve been supporting XP now for close to 15 years in MarcEdit, and I’m finding that the number of areas in the code where I have to work around XP limitations is continually growing. The tipping point for me occurred about a month ago, when I had to write a new URI parser, because the version found in the current version of .NET and the one found on XP are worlds different, and what XP provides wasn’t robust enough and had a number of problematic bugs.

So, you can probably guess where I’m going with this. I’m starting to think about plans for essentially dropping XP support and freezing a version of MarcEdit (that wouldn’t be updated) for those libraries still using XP. Ideally, I’d like to not provide a frozen version at all, because this version will become out of date very quickly – but I’m also unsure of how many users still run XP and how long XP will continue to kick around within the library environment. I haven’t picked a date yet, but I definitely want to have this conversation. Does XP support continue to be important to this community, and more importantly, if we look out say 1-1.5 years, will that still be true?

One last thing: I plan on doing a little bit of log analysis to understand more about the current MarcEdit XP user community. If this community is largely international, I may just suck it up and continue finding a way to make it work. I want to be sensitive to the fact that I work in an academic bubble, and I know that many libraries have to struggle simply to be open for their patrons. For anyone in that position, XP probably works well enough. But I think that it’s time to start asking this question and evaluating what the tipping points might be within the MarcEdit community around XP and its continued use.

At some point, XP support will need to end. It’s just so long in the tooth that continuing to support it will eventually limit some of the work I might do with MarcEdit. The question at this point is when that might happen… 1 year from now? 2 years? I just don’t know.

Thanks,

–tr

District Dispatch: The strange case of Congress and the confounding (re)classifications

planet code4lib - Thu, 2016-05-05 13:16

You wouldn’t think that a decision by the Library of Congress about what subject headings libraries generally should use in, for example, an online catalog would create a political flap. Then again, in Washington – like the world on the other side of Alice’s looking glass – the usual rules of, well, almost anything tend not to apply. Here’s the strange tale . . .

John Tenniel – Illustration from The Nursery Alice (1890)

In late March of this year, after an extensive process consistent with long-standing library principles and practice, the Library of Congress routinely proposed updating almost a hundred outmoded subject headings. Two announced changes would replace the subject heading classification “Aliens” with “Noncitizens,” and “Illegal aliens” with two headings: “Noncitizens” and/or “Unauthorized immigration.” Similar, but not identical, changes previously had been requested by Dartmouth College and also were endorsed by a formal ALA resolution adopted at the 2016 Midwinter Meeting in Boston.

In mid-April, however, third-term Tennessee Representative Diane Black (R-TN6) introduced a bill that would bar the Library from making those specific changes. No reason was given, but the bill’s title provides a clue. H.R. 4926, the “Stopping Partisan Policy at the Library of Congress Act,” had 20 cosponsors on introduction and they now number 33. All are Members of the House majority. None sit on the Committee on House Administration, to which the bill was referred. The bill also has the backing of the Federation for American Immigration Reform (F.A.I.R.), which described the Library’s reclassification proposal as “blatant capitulation to political correctness” and “pandering to pro-amnesty groups.”

Four days after H.R. 4926’s introduction, the Legislative Branch Subcommittee of the House Appropriations Committee adopted language on April 17 that would, in effect, countermand the Library’s professional judgments and interdict the proposed reclassifications noted above. (The Report adopted by the Subcommittee states: “To the extent practicable, the Committee instructs the Library to maintain certain subject headings that reflect terminology used in title 8, United States Code.”) The full House Appropriations Committee will meet in mid-May and has the power to undo the Subcommittee’s action.

On April 28, the Presidents of ALA and ALCTS (ALA’s division of members expert in cataloging and classification) wrote to the Committee’s leaders and members asking that they do so. They emphasized that the Library’s reclassification proposals were solidly grounded in long-standing principles and practices of professional cataloging, recent history, and were manifestly non-political. Accordingly, the presidents called upon Committee members to remove the Subcommittee’s language above from any Legislative Branch appropriations bill that they consider when the House returns from recess next week. The Committee could meet as early as May 17 to consider the bill and this issue.

The post The strange case of Congress and the confounding (re)classifications appeared first on District Dispatch.

William Denton: Conforguration

planet code4lib - Thu, 2016-05-05 04:09

Conforguration is a basic working example of configuration management in Org. I use source code blocks and tangling to make shell scripts that get synced to a remote machine and then download, install and configure R from source.
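
If you haven’t seen tangling before: a source block with a :tangle header is written out to a named file when you run org-babel-tangle (C-c C-v t). A minimal, hypothetical sketch of the pattern (install-r.sh is an invented file name, not necessarily what conforguration.org uses):

#+BEGIN_SRC shell :tangle install-r.sh
  # Tangling writes this block out to install-r.sh, which can then
  # be synced to a remote machine and run there.
  VERSION=3.3.0
  curl -O http://cran.utstat.utoronto.ca/src/base/R-3/R-$VERSION.tar.gz
#+END_SRC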

conforguration.org (that’s a file, not a site) has all the code. It really did work for me, and it might work for you. Is this a reasonable way of doing configuration management? I don’t know, but it’s worth trying. I’ll add things as I come across them.

I don’t know anything about formal configuration management, and I’ve never done literate programming and tangling in Org before. Anyone who’s interested in having a go at conforguring something else is most welcome to do so!

LITA: Jobs in Information Technology: May 4, 2016

planet code4lib - Wed, 2016-05-04 18:54

New vacancy listings are posted weekly on Wednesday at approximately 12 noon Central Time. They appear under New This Week and under the appropriate regional listing. Postings remain on the LITA Job Site for a minimum of four weeks.

New This Week

University of Nevada Las Vegas, Web Developer [16148], Las Vegas, NV

University of Nevada Las Vegas, Digital Library Developer [16160], Las Vegas, NV

City of Long Beach, Manager of Automated Services Bureau – Department of Library Services, Long Beach, CA

Brown University, Library Systems Analyst (REQ123753), Providence, RI

Macquarie Capital, Document Management, New York, NY

Visit the LITA Job Site for more available jobs and for information on submitting a job posting.

Nicole Engard: Taking a Break

planet code4lib - Wed, 2016-05-04 12:19

I’m sure those of you who are still reading have noticed that I haven’t been updating this site much in the past few years. I was sharing my links with you all but now Delicious has started adding ads to that. I’m going to rethink how I can use this site effectively going forward. For now you can read my regular content on Opensource.com at https://opensource.com/users/nengard.

The post Taking a Break appeared first on What I Learned Today....

Related posts:

  1. KohaCon10: eBooks: Why they break ISBNs
  2. Taking the Catalog out of the mix
  3. E-book reading on the rise

FOSS4Lib Upcoming Events: Archivematica Webinar: Automation tools

planet code4lib - Wed, 2016-05-04 02:17
Date: Thursday, May 26, 2016 - 09:00 to 10:00
Supports: Archivematica

Last updated May 3, 2016. Created by Peter Murray on May 3, 2016.

From the announcement:

William Denton: Installing R from source (updated)

planet code4lib - Wed, 2016-05-04 01:22

Last year I wrote up how I install R from source. I’ve refined it a bit so it’s easier to copy and paste, and here it is, suitable for use with the fresh release of R 3.3.0.

cd /usr/local/src/R
VERSION=3.3.0
curl -O http://cran.utstat.utoronto.ca/src/base/R-3/R-$VERSION.tar.gz
tar xzvf R-$VERSION.tar.gz
cd R-$VERSION
./configure
make && make check
cd ..
rm R Rscript
ln -s R-$VERSION/bin/R R
ln -s R-$VERSION/bin/Rscript Rscript
PACKAGE_LIST="dplyr readr ggplot2 devtools lubridate shiny knitr ggvis seriation igraph arules arulesViz tm wordcloud cluster fpc topicmodels"
for PKG in $PACKAGE_LIST; do
  ./Rscript --vanilla -e "install.packages('$PKG', repos=c('https://cran.hafro.is/'))"
done
./Rscript --vanilla -e "devtools::install_github('rstudio/shinyapps')"

When 3.3.1 comes out, just change VERSION, rerun, and there you go. There’s nothing to catch errors, but I’m pretty sure everything will always work, and if there’s some horrible accident and it doesn’t, the previous version of R is still there and it’s just a matter of changing symlinks.

The aim of the symlinks is to always be able to refer to /usr/local/src/R/R and /usr/local/src/R/Rscript in a stable way, so this addition to my $PATH in .bashrc always works:

PATH=/usr/local/src/R:$PATH

If you have that set, and you can write to /usr/local/src/, then you can paste in those shell commands and it should all just work (assuming you’ve already installed the packages needed for building from source generally; note that topicmodels also requires the GNU Scientific Library).
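
On a Debian or Ubuntu box (an assumption; package names differ across distributions and releases), the GNU Scientific Library headers come from something like:

# Assumption: Debian/Ubuntu; older releases call this package libgsl0-dev.
sudo apt-get install libgsl-dev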

I was talking to someone the other day who uses Ansible and explained how he uses it for keeping all his machines in sync and set up the way he likes. It looks very powerful, but right now it’s not for me. I’ll keep the block above in an Org file and copy and paste as needed, and I’ll do something similar with other packages. I could even run them remotely from Org.

Open Library Data Additions: Amazon Crawl: part ip

planet code4lib - Wed, 2016-05-04 01:04

Part ip of Amazon crawl.

This item belongs to: data/ol_data.

This item has files of the following types: Data, Archive BitTorrent, Data, Metadata, Text

Open Library Data Additions: Amazon Crawl: part ir

planet code4lib - Wed, 2016-05-04 01:00

Part ir of Amazon crawl.

This item belongs to: data/ol_data.

This item has files of the following types: Data, Archive BitTorrent, Data, Metadata, Text

Open Library Data Additions: Amazon Crawl: part is

planet code4lib - Wed, 2016-05-04 00:57

Part is of Amazon crawl.

This item belongs to: data/ol_data.

This item has files of the following types: Data, Data, Metadata, Text

Open Library Data Additions: Amazon Crawl: part gv

planet code4lib - Wed, 2016-05-04 00:54

Part gv of Amazon crawl.

This item belongs to: data/ol_data.

This item has files of the following types: Data, Archive BitTorrent, Data, Metadata, Text

Open Library Data Additions: Amazon Crawl: part ih

planet code4lib - Wed, 2016-05-04 00:49

Part ih of Amazon crawl.

This item belongs to: data/ol_data.

This item has files of the following types: Data, Archive BitTorrent, Data, Metadata, Text

Open Library Data Additions: Amazon Crawl: part hr

planet code4lib - Wed, 2016-05-04 00:49

Part hr of Amazon crawl.

This item belongs to: data/ol_data.

This item has files of the following types: Data, Archive BitTorrent, Data, Metadata, Text
