District Dispatch: Policy Revolution! and COSLA in Wyoming: Bountiful in bibliophiles but barren of bears
I just returned from the Annual Meeting of the Chief Officers of State Library Agencies (COSLA), held in Teton Village, Wyo., just down the road from Grand Teton National Park and Jackson. From the moment I left the airport, I knew I was not in D.C. any longer, as there were constant reminders about avoiding animals. There were road signs informing drivers about “moose on the loose;” strong suggestions to hike in groups and carry bear spray; and warnings about elk hunting, so “please wear bright colors.” In D.C., we only worry about donkeys and elephants engaging in political shenanigans.
Work on our Policy Revolution! Initiative attracted me to the COSLA meeting, to leverage the presence of the state librarians, and also librarians from the mountain states. Our session focused on four aspects of work related to developing a national public policy agenda:
- From a library leader’s perspective, what are the most important national goals that would advance libraries in the next 5-10 years?
- From the U.S. President’s perspective, how could libraries and librarians best contribute to the most important national goals, and what national initiatives are needed to realize these contributions?
- From the many good ideas that we can generate, how can we prioritize among them?
- What does a national public policy agenda look like? What are its characteristics?
The wide open spaces and rugged individualistic culture of Wyoming, symbolized by Steamboat, reminded me of the vastness of the United States, and the great resources and resourcefulness of our people. In this time of library revolution, we need to move beyond our conventional views of the world to figure out how libraries may best serve the nation for decades to come. With the next presidential election just around the corner, and with it the certainty of a new occupant in the White House, it is timely and urgent to develop and coalesce around a common library vision.
One thought on the way home was stimulated by the Wyoming session. What should be the priority for national action? Three possibilities occur to me:
- Increase direct funding (i.e., show me the money)
- Effect public policy changes, such as those involving copyright, privacy, licensing regimes, and accommodations for people with disabilities, that may or may not directly implicate funding but can only be achieved, or at least are best addressed, at the national level
- Promote a new vision and positioning for libraries in national conversation (i.e., bully pulpit)
Should a national public policy agenda systematically favor one of these directions?
Many thanks to COSLA for hosting us, with particular thanks to Ann Joslin and Tim Cherubini (and his staff). I also appreciated the opportunity to sit in a number of sessions that included generous doses of our long-time friends E-rate, ebooks and digital services. We had a special treat as Wyoming’s senior U.S. Senator, Michael Enzi (R-WY), addressed the group, regaling the audience with his love of reading and libraries.
I had the opportunity for a quick tour around the area. I was impressed with the large, modern Teton County Library (in Jackson), which has good wireless access—yay! After seeing the Grand Tetons and tooling about Jenny Lake, it is going to be hard to settle back down to the political chaos that is Washington, D.C.
The post Policy Revolution! and COSLA in Wyoming: Bountiful in bibliophiles but barren of bears appeared first on District Dispatch.
Everyone is getting tired of the sage-on-the-stage style of preconferences, so when Deborah Fritz suggested a hackathon (thank you Deborah!) to the RDA Dev Team, we all climbed aboard and started thinking about what that kind of event might look like, particularly in the ALA Midwinter context. We all agreed: there had to be a significant hands-on aspect to really engage those folks who were eager to learn more about how the RDA data model could work in a linked data environment, and, of course, in their own home environment.
We’re calling it a Jane-athon, which should give you a clue about the model for the event: a hackathon, of course! The Jane Austen corpus is perfect for demonstrating the value of FRBR, and there’s no lack of interesting material to look at: media materials, series, and spin-offs of every description, in addition to the well-known novels. So the Jane-athon will be partially about creating data, and partially about how that data fits into a larger environment. And did you know there is a Jane Austen bobblehead?
We think there will be a significant number of people who might be interested in attending, and we figured that getting the word out early would help prospective participants make their travel arrangements with attendance in mind. Sponsored by ALA Publishing, the Jane-athon will be on the Friday before the Midwinter conference (the traditional pre-conference day), and though we don’t yet have registration set up, we’ll make sure everyone knows when that’s available. If you think, as we do, that this event will be the hit of Midwinter, be sure to watch for that announcement, and register early! If the event is successful, you’ll be seeing others at subsequent ALA conferences.
So, what’s the plan and what will participants get out of it?
The first thing to know is that there will be tables and laptops to enable small groups to work together for the ‘making data’ portion of the event. We’ll be asking folks who have laptops they can bring to Chicago to plan on bringing theirs. We’ll be using the latest version of a new bibliographic metadata editor called RIMMF (“RDA In Many Metadata Formats”), which is not yet publicly available but will be soon; watch for it on the TMQ website. We encourage interested folks to download the current beta version and play with it: it’s a cool tool and a good one to learn.
In the morning, we’ll form small cataloging groups and use RIMMF to do some FRBRish cataloging, starting from MARC21 and ending up with RDA records exported as RDF Linked Data. In the afternoon we’ll all take a look at what we’ve produced, share our successes and discoveries, and discuss the challenges we faced. In true hackathon tradition we’ll share our conclusions and recommendations with the rest of the library community on a special Jane-athon website set up to support this and subsequent Jane-athons.
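The morning workflow above, MARC21 in and RDF linked data out, can be pictured with a toy example. This is not RIMMF output, and the URIs and abbreviated property names are invented for the sketch (the real RDA vocabularies live at rdaregistry.info); it just shows the shape of a FRBRized cluster serialized as triples:

```python
# Toy sketch of FRBRized data as subject-predicate-object triples.
# URIs and property names here are loose, invented abbreviations of
# RDA elements, not the published vocabulary terms.
def frbr_triples(work, expression, manifestation, title):
    """Return a minimal Work/Expression/Manifestation cluster."""
    return [
        (work, "rdf:type", "rdac:Work"),
        (work, "rdaw:titleOfWork", title),
        (expression, "rdf:type", "rdac:Expression"),
        (expression, "rdae:workExpressed", work),
        (manifestation, "rdf:type", "rdac:Manifestation"),
        (manifestation, "rdam:expressionManifested", expression),
    ]

# Print the cluster in a Turtle-like, line-per-triple form.
for s, p, o in frbr_triples("ex:austen-pride-work", "ex:austen-pride-eng",
                            "ex:austen-pride-1813", "Pride and prejudice"):
    print(f"{s} {p} {o} .")
```

The point of the exercise is visible even in a sketch this small: the title lives on the Work, the language-specific content on the Expression, and the 1813 publication on the Manifestation, so spin-offs and new editions attach at the right level instead of duplicating everything.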
Who should attend?
We believe that there will be a variety of people who could contribute important skills and ideas to this event. Catalogers, of course, but also every flavor of metadata people, vendors, and IT folks in libraries would be warmly welcomed. But wouldn’t tech services managers find it useful? Oh yes, they’d be welcomed enthusiastically, and I’m sure their participation in the discussion portion of the event in the afternoon will bring out issues of interest to all.
Keep in mind, this is not cataloging training, nor Toolkit training, by any stretch of the imagination. Neither will it be RIMMF training or have a focus on the RDA Registry, although all those tools are relevant to the discussion. For RIMMF, particularly, we will be looking at ways to ensure that there will be a cadre of folks who’ve had enough experience with it to make the hands-on portion of the day run smoothly. For that reason, we encourage as many as possible to play with it beforehand!
Our belief is that the small group work and the discussion will be best with a variety of experience informing the effort. We know that we can’t provide the answers to all the questions that will come up, but the issues that we know about (and that come up during the small group work) will be aired and discussed.
The post Podcast: Solr Usability with Steve Rowe & Tim Potter appeared first on Lucidworks.
I mentioned the location of our latest Islandora Camp was beautiful, right? Well, don't take my word for it. One of our campers shared these lovely photos from around town:
(also, check out Ashok Modi's blog about his experiences at camp)
Brendan Howley opened the Internet Librarian conference this year. He designs stories that incite people to “do something”. He’s here to talk to us about the world of media and desired outcomes – specifically the desired outcomes for our libraries. Brendan collected stories from local library constituents to find out what libraries needed to do to get to the next step. He found (among other things) that libraries should be hubs for culture and should connect community media.
Three things internet librarians need to know:
- why stories work and what really matters
- why networks form (power of the weak not the strong)
- why culture eats strategy for lunch (Peter Drucker)
“The internet means that libraries are busting out of their bricks and mortars”
Brendan shared with us how stories are not about dumping data; they’re about sharing data and teachable moments.
Data is a type of story, and where data and stories meet is where change is found. If you want to speak to your community, you need to keep in mind that we’re in a “post-everything” society – there is only one appetite left in terms of storytelling: “meaning”. People need to find the story relevant and to find meaning in it. The most remarkable thing about librarians is that we give “meaning” away every day.
People want to know what we stand for and why – values are the key piece to stories. People want to understand why libraries still exist. People under the age of 35 want to know how to find the truth out there – the reliable sources – they don’t care about digital literacy. It’s those who are scared of being left behind – those over 35 (in general) who care about digital literacy.
The recipe for a successful story is: share the why of the how of what you do.
The sharing of stories creates networks. Networks lead to the opportunity to create value – and when that happens you’ve proved your worth as a civic institution. Networks are the means by which those values spread. They are key to the future of libraries.
A Pattern Language by Christopher Alexander is a must-read for anyone designing systems or networks.
You need to understand that it’s the weak ties that matter. Strong ties are really quite rare – this sounds a lot like the long tail to me.
Libraries are in the business of giving away context – that means that where stories live, breathe, gather and cause people to do things is in the context. We’re in a position where we can give this context away. Libraries need to understand that we’re cultural entrepreneurs. Influencers fuel culture – and that’s the job description for librarians.
It has been a while since our last foray into the Long Tail of Islandora. Some of those modules have moved all the way from the tail to the head and become part of our regular release. We have been quietly gathering them in our Resources section, but it's more than time for another high-level review of the awesome modules that are out there in the community, just waiting to make your repo better.
Islandora XQuery
The ability to batch edit has long been the impossible dream in Islandora. Well, with this little module from discoverygarden, Inc., the dream has arrived. With a basic knowledge of XQuery, you can attack the metadata in your Fedora repository en masse.
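The module itself consumes XQuery scripts, but the flavor of a batch edit is easy to show. The following is not the module's interface, just a hypothetical Python sketch (element and value names invented) of the kind of en-masse metadata change such a script performs:

```python
import xml.etree.ElementTree as ET

def batch_fix(records, old="ebook", new="electronic resource"):
    """Apply one metadata change across a whole batch of XML records:
    the sort of en-masse edit an XQuery script runs over a repository."""
    fixed = []
    for xml_text in records:
        root = ET.fromstring(xml_text)
        # Visit every <genre> element and rewrite the outdated value.
        for genre in root.iter("genre"):
            if genre.text == old:
                genre.text = new
        fixed.append(ET.tostring(root, encoding="unicode"))
    return fixed

batch = ["<mods><genre>ebook</genre></mods>",
         "<mods><genre>map</genre></mods>"]
print(batch_fix(batch))
```

Even in this toy form you can see why caution is warranted: one small predicate decides what gets rewritten across every record in the set.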
Putting Islandora XQuery into production should be approached with caution for the same reason that batch editing has been so long elusive: if you mass-edit your data, you can break things. That said, the module does come with a helpful install script, so getting it working in your Islandora installation may be the easiest part!
Islandora Entity Bridge
Much like Islandora Sync, Ashok Modi's Islandora Entity Bridge endeavours to build relationships between Fedora objects and Drupal so you can apply a wider variety of Drupal modules to the contents of your repository without recreating your objects as nodes.
Ashok presented on this module at the recent Islandora Camp in Denver, so you can learn more from his slides here.
Islandora Plupload
This simple but very effective module has been around a while. It makes use of the Plupload library to allow you to exceed PHP file limits when uploading large files.
Islandora Feeds
Mark Jordan has created this tool so you can use the Feeds contrib module to create Islandora objects. This module is still in development, so you can help it to move forward by telling Mark your use cases.
Islandora Meme Solution Pack
The latest in Islandora demo/teaching modules, this one was developed at Islandora Camp Colorado by dev instructors Daniel Lamb and Nick Ruest to help demonstrate the joys of querying Solr. This module is not meant to be used in your repo, but rather to act as a learning tool, especially when used in combination with our Islandora VM.
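Since the solution pack exists to teach Solr querying, here is the shape of the request it demonstrates. This is a minimal sketch using only the standard Solr select parameters; the host, core name, and field name are assumptions for illustration:

```python
from urllib.parse import urlencode

def solr_select_url(core_url, query, rows=10):
    """Build a standard Solr /select URL asking for JSON results.
    q, wt, and rows are Solr's usual query parameters."""
    params = {"q": query, "wt": "json", "rows": rows}
    return f"{core_url}/select?{urlencode(params)}"

# Hypothetical example: host, core, and the label field name are
# invented for the sketch, not taken from a real Islandora install.
print(solr_select_url("http://localhost:8080/solr/collection1",
                      'fgs_label_s:"meme"'))
```

Fetching that URL (with urllib or requests) returns a JSON document whose `response.docs` array holds the matching objects, which is exactly the kind of result the demo module renders.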
I’ve never been a big tablet user. This may come as a surprise to some, given that I assist patrons with their tablets every day at the public library. Don’t get me wrong, I love my Nexus 7 tablet. It’s perfect for reading ebooks, using Twitter, and watching Netflix; but the moment I want to respond to an email, edit a photo, or work my way through a Treehouse lesson, I feel helpless. Several library patrons have asked me if our public computers will be replaced by iPads and tablets. It’s hard to say where technology will take us in the coming years, but I strongly believe that a library without computers would leave us severely handicapped.
One of our regular library patrons, let’s call her Jane, is a diehard iPad fan. She is constantly on the hunt for the next great app and enjoys sharing her finds with me and my colleagues. Jane frequently teases me about preferring computers, and whenever I’m leading a computer class she’ll ask “Can I do it on my iPad?” She’s not the only person I know who thinks that computers are antiquated and on their way to obsolescence, but I have plenty of hope for computers regardless of the iPad revolution.
In observing how patrons use technology, and reflecting on how I use technology in my personal and professional life, I find that tablets are excellent tools for absorbing and consuming information. However, they are not designed for creation. Nine times out of ten, if you want to make something, you’re better off using a computer. In a recent Wired article about digital literacy, Ari Geshner poses the question “Are you an iPad or are you a laptop? An iPad is designed for consumption.” He explains that literacy “means moving beyond a passive relationship with technology.”
So Jane is an iPad and I am a laptop. We’ve managed to coexist and I think that’s the best approach. Tablets and computers may both fall under the digital literacy umbrella, but they are entirely different tools. I sincerely hope that public libraries will continue to consider computers and tablets separately, encouraging a thirst for knowledge as well as a desire to create.
It has been interesting watching Research Information Management or RIM emerge as a new service category in the last couple of years. RIM is supported by a particular system category, the Research Information Management System (RIMs), sometimes referred to by an earlier name, the CRIS (Current Research Information System).
For reasons discussed below, this area has been more prominent outside the US, but interest is also now growing in the US. See, for example, the mention of RIMs in the Library FY15 Strategic Goals at Dartmouth College.
Research information management
The name is unfortunately confusing - a reserved sense living alongside more general senses. What is the reserved sense? Broadly, RIM is used to refer to the integrated management of information about the research life-cycle, and about the entities which are party to it (e.g. researchers, research outputs, organizations, grants, facilities, ..). The aim is to synchronize data across parts of the university, reducing the burden to all involved of collecting and managing data about the research process. An outcome is to provide greater visibility onto institutional research activity. Motivations include better internal reporting and analytics, support for compliance and assessment, and improved reputation management through more organized disclosure of research expertise and outputs.
A major driver has been the need to streamline the provision of data to various national university research assessment exercises (for example, in the UK, Denmark and Australia). Without integrated support, responding to these is costly, with activities fragmented across the Office of Research, individual schools or departments, and other support units, including, sometimes, the library. (See this report on national assessment regimes and the roles of libraries.)
Some of the functional areas covered by a RIM system may be:
- Award management and identification of award opportunities. Matching of interests to potential funding sources. Supporting management of and communication around grant and contracts activity.
- Publications management. Collecting data about researcher publications. Often this will be done by searching in external sources (Scopus and Web of Science, for example) to help populate profiles, and to provide alerts to keep them up to date.
- Coordination and publishing of expertise profiles. Centralized upkeep of expertise profiles. Pulling of data from various systems. This may be for internal reporting or assessment purposes, to support individual researchers in providing personal data in a variety of required forms (e.g. for different granting agencies), and for publishing to the web through an institutional research portal or other venue.
- Research analytics/reporting. Providing management information about research activity and interests, across departments, groups and individuals.
- Compliance with internal/external mandates.
- Support of open access. Synchronization with institutional repository. Managing deposit requirements. Integration with sources of information about Open Access policies.
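Several of the functional areas above, notably publications management and profile upkeep, come down to one recurring job: merging records about the same output from several systems without creating duplicates. A minimal sketch of that job (source and field names are invented for illustration), keyed on DOI and letting earlier sources win on conflicting fields:

```python
def merge_publications(*sources):
    """Merge lists of publication dicts from several systems into one
    set of records keyed on (case-normalized) DOI; fields from earlier
    sources take precedence over later ones."""
    merged = {}
    for source in sources:
        for record in source:
            entry = merged.setdefault(record["doi"].lower(), {})
            for field, value in record.items():
                entry.setdefault(field, value)  # keep first value seen
    return list(merged.values())

# Hypothetical data: an internal grants system and an external index
# describe the same article under differently-cased DOIs.
internal = [{"doi": "10.1000/XYZ", "title": "Local title", "grant": "G-42"}]
external = [{"doi": "10.1000/xyz", "title": "Publisher title",
             "journal": "Journal of Examples"}]
print(merge_publications(internal, external))
```

Real RIM systems layer identifier matching, deduplication heuristics, and human review on top of this, but the core synchronization task has this shape.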
To meet these goals, a RIM system will integrate data from a variety of internal and external systems. Typically, a university will currently manage information about these processes across a variety of administrative and academic departments. Required data also has to be pulled from external systems, notably data about funding opportunities and publications.
Products
Several products have emerged specifically to support RIM in recent years. This is an important reason for suggesting that it is emerging as a recognized service category.
- Pure (Elsevier). "Pure aggregates your organization's research information from numerous internal and external sources, and ensures the data that drives your strategic decisions is trusted, comprehensive and accessible in real time. A highly versatile system, Pure enables your organization to build reports, carry out performance assessments, manage researcher profiles, enable expertise identification and more, all while reducing administrative burden for researchers, faculty and staff." [Pure]
- Converis (Thomson Reuters). "Converis is the only fully configurable research information management system that can manage the complete research lifecycle, from the earliest due diligence in the grant process through the final publication and application of research results. With Converis, understand the full scope of your organization's contributions by building scholarly profiles based on our publishing and citations data--then layer in your institutional data to more specifically track success within your organization." [Converis]
- Symplectic Elements. "A driving force of our approach is to minimise the administrative burden placed on academic staff during their research. We work with our clients to provide industry leading software services and integrations that automate the capture, reduce the manual input, improve the quality and expedite the transfer of rich data at their institution."[Symplectic]
Pure and Converis are parts of broader sets of research management and analytics services from, respectively, Elsevier (Elsevier research intelligence) and Thomson Reuters (Research management and evaluation). Each is a recent acquisition, providing an institutional approach alongside the aggregate, network level approach of each company's broader research analytics and management services.
Symplectic is a member of the very interesting Digital Science portfolio. Digital Science is a company set up by Macmillan Publishers to incubate start-ups focused on scientific workflow and research productivity. These include, for example, Figshare and Altmetric.
Other products are also relevant here. As RIM is an emerging area, it is natural to expect some overlap with other functions. For example, there is definitely overlap with back-office research administration systems (Ideate from Consilience or solutions from infoEd Global, for example), and also with more publicly oriented profiling and expertise systems on the front-office side.
With respect to the latter, Pure and Symplectic both note that they can interface to VIVO. Furthermore, Symplectic can provide "VIVO services that cover installation, support, hosting and integration for institutions looking to join the VIVO network". It also provides implementation support for the Profiles Research Networking Software.
As I discuss further below, one interesting question for libraries is the relationship between the RIMs or CRIS and the institutional repository. Extensions have been written for both DSpace and EPrints to provide some RIMs-like support. For example, DSpace-CRIS extends the DSpace model to cater for the CERIF entities. This is based on work done for the Scholars Hub at Hong Kong University.
It is also interesting to note that none of the three open source educational community organizations - Kuali, the DuraSpace Foundation, or the Apereo Foundation - has a directly comparable offering, although there are some adjacent activities. In particular, Kuali Coeus for Research Administration is "a comprehensive system to manage the complexities of research administration needs from the faculty researcher through grants administration to federal funding agencies", based on work at MIT. DuraSpace is now the organizational home for VIVO.
Finally, there are some national approaches to providing RIMs or CRIS functionality, associated with a national view of research outputs. This is the case in South Africa, Norway and The Netherlands, for example.Standards
Another signal that this is an emerging service category is the existence of active standards activities. Two are especially relevant here: CERIF (Common European Research Information Format) from EuroCRIS, which provides a format for exchange of data between RIM systems, and the CASRAI dictionary. CASRAI is the Consortia Advancing Standards in Research Administration Information.
Libraries
So, what about research information management (in this reserved sense) and libraries? One of the interesting things to happen in recent years is that a variety of other campus players are developing service agendas around digital information management that may overlap with library interests. This has happened with IT, learning and teaching support, and with the University press, for example. This coincides with another trend, the growing interest in tracking, managing and disclosing the research and learning outputs of the institution: research data, learning materials, expertise profiles, research reports and papers, and so on. The convergence of these two trends means that the library now has shared interests with the Office of Research, as well as with other campus partners. As both the local institutional and public science policy interest in university outputs grows, this will become a more important area, and the library will increasingly be a partner. Research Information Management is a part of a slowly emerging view of how institutional digital materials will be managed more holistically, with a clear connection to researcher identity.
As noted above, this interest has been more pronounced outside the US to date, but will I think become a more general interest in coming years. It will also become of more general interest to libraries. Here are some contact points.
- The institutional repository boundary. It is acknowledged that Institutional Repositories (IRs) have been a mixed success. One reason for this is that they are to one side of researcher workflows, and not necessarily aligned with researcher incentives. Although also an additional administrative overhead, Research Information Management is better aligned with organizational and external incentives. See for example this presentation (from Royal Holloway, U of London) which notes that faculty are more interested in the CRIS than they had been in the IR, 'because it does more for them'. It also notes that the library no longer talks about the 'repository' but about updating profiles and loading full-text. There is a clear intersection between RIMs and the institutional repository and the boundary may be managed in different ways. Hong Kong University, for example, has evolved its institutional repository to include RIMs or CRIS features. Look at the publications or presentations of David Palmer, who has led this development, for more detail. There is a strong focus here on improved reputation management on the web through effective disclosure of researcher profiles and outputs. Movement in the other direction has also occurred, where a RIMs or CRIS is used to support IR-like services. Quite often, however, the RIMs and IR are working as part of an integrated workflow, as described here.
- Management and disclosure of research outputs and expertise. There is a growing interest in researcher and research profiles, and the RIMs may support the creation and management of a 'research portal' on campus. An important part of this is assisting researchers to more easily manage their profiles, including prompting them with new publications from searches of external sources. See the research portal at Queen's University Belfast for an example of a site supported by Pure. Related to this is general awareness about promotion, effective publishing, bibliometrics, and management of online research identity. Some libraries are supporting the assignment of ORCIDs. The presentations of Wouter Gerritsma, of Wageningen University in The Netherlands, provide useful pointers and experiences.
- Compliance with mandates/reporting. The role of RIMs in supporting research assessment regimes in various countries was mentioned earlier: without such workflow support, participation was expensive and inefficient. Similar issues are arising as compliance with institutional or national mandates needs to be managed. Earlier this year, the California Digital Library announced that it had contracted with Symplectic "to implement a publication harvesting system in support of the UC Open Access Policy". US Universities are now considering the impact of the OSTP memo "Increasing Access to the Results of Federally Funded Scientific Research," [PDF] which directs funding agencies with an annual R&D budget over $100 million to develop a public access plan for disseminating the results of their research. ICPSR summarises the memo and its implications here. It is not yet clear how this will be implemented, but it is an example of the growing science and research policy interest in the organized disclosure of information about, and access to, the outputs of publicly funded research. This drives a University-wide interest in research information management. In this context, SHARE may provide some focus for greater RIM awareness.
- Management of institutional digital materials. I suggest above that RIM is one strand of the growing campus interest in managing institutional materials - research data, video, expertise profiles, and so on. Clearly, the relationship between research information management, whatever becomes of the institutional repository, and the management of research data is close. This is especially the case in the US, given the inclusion of research data within the scope of the OSTP memo. The library provides a natural institutional partner and potential home for some of this activity, and also expertise in what Arlitsch and colleagues call 'new knowledge work', thinking about the identifiers and markup that the web expects.
Whether or not Research Information Management becomes a new service category in the US in quite the way I have discussed it here, it is clear that the issues raised will provide important opportunities for libraries to become further involved in supporting the research life of the university.
DuraSpace invites you to attend our tenth Hot Topics: The DuraSpace Community Webinar Series, "All About the SHared Access Research Ecosystem (SHARE)."
Curated by Greg Tananbaum, Product Lead, SHARE
DuraSpace News: New Open Source Preservation Solution—Run Archivematica 1.3.0 Locally or in DuraCloud
Archivematica 1.3.0 Features Full DuraCloud Integration
Cynthia Ng: Mozilla Festival Day 2: Notes from Having Fun and Sharing Gratitude in Distributed Online Communities
This post is part of our Open Access Week blog series to highlight great work in Open Access communities around the world. It is written by Celya Gruson-Daniel from Open Knowledge France and reports from “Open Access Xsprint”, a creative workshop held on October 20 in the biohackerspace La Paillasse in Paris – as announced here.
More and more information about Open Access is available online. However, it is difficult to process all this content when one is a busy PhD student or researcher. Moreover, people who are already informed and convinced are often the main audience. The question thus becomes: how do we spread the word about Open Access to a large audience (researchers and students, but also people who are not directly concerned)? With the HackYourPhD community, we have been developing initiatives to invent new creative formats and to raise curiosity and interest about Open Access. Open Access Week was a perfect occasion to propose workshops to experiment with these kinds of formats.
An Open Access XSprint at La Paillasse
During Open Access Week, HackYourPhD and Sharelex designed a creative workshop called the Open Access XSprint (the X standing for media). The evening was held on October 20 in the biohackerspace La Paillasse in Paris, with the financial support of a Generation Open grant (Right to Research Coalition).
The main objective was to produce appealing guidelines about the legal aspects and issues of Open Access through innovative formats such as live sketching or comics. HackYourPhD has been working with Sharelex on this topic for several months. Sharelex aims to provide access to the law for everyone through collaborative workshops and a forum. Initial content had already been produced in French and was used during the Open Access XSprint.
One evening to invent creative formats about Open Access
The session brought together illustrators, graphic designers, students, and researchers. After a short introduction to get to know each other, the group discussed the meaning of Open Access and its definition, and the first live sketches and illustrations emerged.
Then two groups were formed. One group worked on the different meanings of Open Access, with a focus on the Creative Commons licences.
The other group discussed the development and evolution of the different Open Access models (Green Open Access, 100% Gold Open Access, hybrid journals, Diamond, Platinum). The importance of evaluation was also raised; it appears to be one of the obstacles to the Open Access transition.
After an open buffet, each group presented its work. A future project was proposed: personalizing a scientific article and inventing its different “lives”, an ingenious way to present the different Open Access models.
You can also explore our Storify, “Open Access XSprint”.

Next Step: Improvisation Theatre and Open Access
To conclude Open Access Week, another event will be organized on October 24 at a science center (Espace Pierre Gilles de Gennes) with HackYourPhD and Sharelex, with the financial support of Couperin/FOSTER.
This event aims to explore new formats for communicating about Open Access. An improvisation theatre company will take part: presentations by different speakers about Open Access will be interspersed with short improvisations. The main topic of the evening will be stereotypes and false ideas about Open Access. Bringing an entertaining and original view is one way to discuss Open Access with a broad public, and perhaps a starting point to make people curious and keep them exploring this topic, which is crucial for researchers and all citizens.
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
This post is part of our Open Access Week blog series to highlight great work in Open Access communities around the world. It is written by Miguel Said from Open Knowledge Brazil and is a translated version of the original, which can be found on the Brazilian Open Science Working Group's blog.
Nature Publishing Group recently announced that in October its journal Nature Communications will become open access only: all articles published after that date will be available for reading and re-use, free of charge (by default they will be published under a Creative Commons Attribution license, allowing virtually every type of use). Nature Communications was a hybrid journal, publishing articles either under the conventional, proprietary model or as open access if the author paid a fee; now it will be exclusively open access. The publishing group that owns Science also recently announced an open-access-only journal, Science Advances, though with a default CC BY-NC license, which prevents commercial uses.
So we made it: the greatest bastions of traditional scientific publishing are clearly signaling support for open access. Can we pop the champagne already?
This announcement obviously has positive aspects: for example, lives can be saved in poor countries, where doctors may gain access to the most up-to-date scientific information, information that was previously behind a paywall, unaffordable for most of the Global South. Papers published under open access also tend to achieve more visibility, and that can benefit research in countries like Brazil, where I live.
The overall picture, however, is more complex than it seems at first sight. In both cases, Nature and Science adopt a specific model of open access: the so-called "gold model", in which publication is usually subject to a fee paid by authors of approved manuscripts (the article processing charge, or APC). In this model, access to articles is open to readers and users, but access to the publication space is closed, in a sense, being available only to authors who can afford the fee. In the case of Nature Communications, the APC is $5,000, certainly among the highest of any journal (in 2010, the largest recorded APC was US$3,900, according to the abstract of this article… which I cannot read, as it is behind a paywall).
This amounts to two months of the net salary of a professor in state universities in Brazil (those in private universities would have to work even longer, as their pay is generally lower). Who is up for spending 15%+ of their annual income to publish a single article? Nature reported that it will waive the fee for researchers from a list of countries (which does not include Brazil, China, India, Pakistan and Libya, among others), and for researchers from elsewhere on a "case by case" basis – but they did not provide any further objective information about this policy. (I suspect it is better not to count on the generosity of a publisher that charges us $32 to read a single article, or $18 for a single piece of correspondence [!] from its journals.)
On the other hand, the global trend seems to be that the institutions with which researchers are affiliated (the universities where they work, or the scientific foundations that fund their research) bear part of these charges, partly because of the value these institutions attach to publishing in high-impact journals. In Brazil, for example, FAPESP (one of the largest research foundations in Latin America) provides a specific line of funding to cover these fees, and also considers them eligible expenses for project grants and scholarships. As it happens, however, the funds available for this kind of support are limited, and in general they are not awarded automatically; in the example of FAPESP, researchers compete heavily for funding, and one of the main evaluation criteria is, as in so many situations in academic bureaucracy today, the researcher's past publication record:

Analysis criteria [...] a) Applicant's Academic Record a.1) Quality and regularity of scientific and/or technological production. Important elements for this analysis are: list of publications in journals with selective editorial policy; books or book chapters [...]
For this reason, the payment of APCs by institutions is likely to feed the so-called "cumulative advantage" feedback loop, in which researchers who are already publishing in major journals get more money and more chances to publish, while the underfunded stay underfunded.
The advancement of open access via the gold model also involves another risk: the proliferation of predatory publishers. These are the publishers that make open access publishing (with payment by authors or institutions) a business where profit is maximized by drastically lowering quality standards in peer review, or even by virtually eliminating review: if you pay, you are published. The risk is that, on the one hand, predatory publishing can thrive because it satisfies the productivist demands imposed on researchers (whose careers are continually judged in the light of the publish-or-perish motto); and, on the other, that under the gold model the act of publishing becomes a commodity (to be sold to researchers), marketable at high profit rates even without the intellectual-property-based monopoly that was key to the economic power of traditional scientific publishing houses. A logic that treats scientific articles strictly as commodities results in the pollution and degradation of humankind's body of scientific knowledge, since predatory publishers are fundamentally interested in maximizing profit: the quality of articles is irrelevant, or at best a secondary factor.
Naturally, I do not mean to imply that Nature has become a predatory publisher; but one should not ignore that there is a risk of a slow corruption of the review process (in order to make publishing more profitable), particularly among those publishing houses that are "serious" but do not have as much market power as Nature. And, as we mentioned, on top of that is the risk of proliferation of bogus journals, in which peer review is a mere facade. In the latter case, unfortunately this is not a hypothetical risk: the shady "business model" of predatory publishing has already been put in place in hundreds of journals.
Are there no alternatives to the commodified, market-oriented logic currently at play in scientific publishing? Will this logic (and its serious disadvantages) always be dominant, regardless of whether a journal is "proprietary" or open access? Not necessarily: even within the gold model there are promising initiatives that do not adhere strictly to this logic. That is the case of the Public Library of Science (PLOS), an open access publisher that charges for publication but operates as a nonprofit organization; because of that, it has no reason to weaken quality criteria in the selection of articles in order to extract more profit from APCs. Perhaps this helps explain why PLOS has a broader and more transparent fee-waiver policy for poor researchers (or poor countries) than the one offered by Nature. Finally, it is worth noting that the gold model is not the only open access model: the main alternative is the "green model", based on institutional repositories. The green model involves a number of challenges regarding coordination and funding, but it also tends not to follow a strictly market-oriented logic and to be more responsive to the interests of the academic community. It is hardly a substitute for the gold model (not least because it is not designed to cover the costs of peer review), but it is important that we join efforts to strengthen it and avoid a situation where the gold model becomes the only way for scientists and scholars to release their work under open access.
(My comments here are directly related to my PhD thesis on commons and commodification, where these issues are explored in a bit more detail – especially in the Introduction and in Chapter 4, pp. 17-20 and 272-88; unfortunately, it's only available in Portuguese as of now. This post was born out of discussions in the Brazilian Open Science Working Group's mailing list; thanks to Ewout ter Haar for his help with the text.)
Citations are the ultimate "linked data" of academia, linking new work with related works. The problem is that the link is human-readable only and has to be interpreted by a person to understand what it means. PLOS Labs has been working to make those citations machine-expressive, even though documents don't natively provide the information needed for a full computational analysis.
Given what is available in a normal machine-readable document with citations, they are able to pull out an impressive amount of information:
- What section the citation is found in. There is some difference in meaning depending on whether a citation is found in the "Background" section of an article or in the "Methodology" section. This gives only a hint of the meaning of the citation, but it's more than no information at all.
- How often a resource is cited in the article. This could give some weight to its importance to the topic of the article.
- What resources are cited together. Whenever a sentence ends with a citation group like "[1,2,3]", you at least know that those three resources equally support what is being affirmed. That creates a bond between those resources.
- ... and more
This is just a beginning, and their demo site, appropriately named "alpha," uses their rich citations on a segment of the PLOS papers. They also have an API that developers can experiment with.
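The signals listed above lend themselves to simple computation once citations carry structured metadata. Here is a minimal Python sketch of that idea; the JSON shape and field names are invented for illustration and are not the actual PLOS rich-citations schema:

```python
from collections import Counter
from itertools import combinations

# Hypothetical rich-citation record for one article.
# The real PLOS API returns a different, richer structure.
paper = {
    "citations": [
        {"ref": "doi:10/aaa", "section": "Background"},
        {"ref": "doi:10/bbb", "section": "Methodology"},
        {"ref": "doi:10/aaa", "section": "Methodology"},
    ],
    # References cited together in a single sentence, e.g. "[1,2]".
    "citation_groups": [["doi:10/aaa", "doi:10/bbb"]],
}

# 1. How often each resource is cited in the article.
frequency = Counter(c["ref"] for c in paper["citations"])

# 2. Which sections each resource appears in.
sections = {}
for c in paper["citations"]:
    sections.setdefault(c["ref"], set()).add(c["section"])

# 3. Pairwise "bonds" between resources cited together.
bonds = Counter()
for group in paper["citation_groups"]:
    for a, b in combinations(sorted(group), 2):
        bonds[(a, b)] += 1

print(frequency["doi:10/aaa"])  # cited twice
```

Even this toy tally shows how section context, citation counts, and co-citation groups become queryable data rather than footnotes to be read by a human.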
I was fortunate to be able to spend a day recently at their Citation Hackathon where groups hacked on ongoing aspects of this work. Lots of ideas floated around, including adding abstracts to the citations so a reader could learn more about a resource before retrieving it. Abstracts also would add search terms for those resources not held in the PLOS database. I participated in a discussion about coordinating Wikidata citations and bibliographies with the PLOS data.
Being able to datamine the relationships inherent in the act of citation is a way to help make visible and actionable what has long been the rule in academic research, which is to clearly indicate upon whose shoulders you are standing. This research is very exciting, and although the PLOS resources will primarily be journal articles, there are also books in their collection of citations. The idea of connecting those to libraries, and eventually connecting books to each other through citations and bibliographies, opens up some interesting research possibilities.
This post is part of our Open Access Week blog series to highlight great work in Open Access communities around the world.
Open Access Week was celebrated for the first time in Nepal during its opening two days, October 20 and 21. The event, led by the newly founded Open Access Nepal and supported by EIFL and R2RC, featured a series of workshops, presentations, and peer-to-peer discussions and trainings by the country's leaders in Open Access, Open Knowledge, and Open Data, including a three-hour workshop on Open Science and Collaborative Research by Open Knowledge Nepal on the second day.
Open Access Nepal is a student-led initiative made up mostly of MBBS students. Hence, most of the audience at the Open Access Week celebrations consisted of medical students, but engineering students, management students, librarians, professionals, and academics were also well represented. Participants discussed open access developments in Nepal and their roles in promoting and advancing open access.
EIFL and the Right to Research Coalition provided financial support for Open Access Week in Nepal. EIFL Open Access Program Manager Iryna Kuchma attended the conference as a speaker and workshop facilitator.
Open Knowledge Nepal hosted an interactive session on Open Science and Collaborative Research on the second day. The session was led by Kshitiz Khanal, Team Leader of Open Access / Open Science for Open Knowledge Nepal, with support from Iryna Kuchma and Nikesh Balami, Team Leader of Open Government Data. About 8-10 of the country's Open Access experts were in the hall to assist participants. The session began half an hour before lunch, when participants were asked to brainstorm, until lunch was over, about what they think Open Science and Collaborative Research are, and about the Open Access challenges they have faced or might face in their research. Participants were seated at round tables in groups of 7-8, making a total of five groups.
After lunch, one member from each group took a turn at the front to present a summary of the group's brainstorming on colored chart paper. Participants came up with nearly exact definitions and reflected the troubles researchers in the country have been facing regarding Open Access. As one might expect of industrious students, some groups impressed the session hosts and experts with interesting graphical illustrations.
Iryna followed the presentations with her own, introducing the concepts, principles, and examples of Open Science. Kshitiz then presented on Collaborative Research.
The session on Collaborative Research featured industry-academia collaborations facilitated by government. Collaborative Research needs more attention in Nepal: World Bank data show that the country's total R&D investment is equivalent to only 0.3% of GDP. The Lambert Toolkit, created by the Intellectual Property Office of the UK, was also discussed; it provides sample agreements for industry-university collaborations and multi-party consortia, and a few decision guides for such collaborations. The session also introduced version control and discussed simple web-based tools for Collaborative Research such as Google Docs, Etherpad, Dropbox, Evernote, and Skype.
On the same day, Open Nepal hosted a workshop on open data, and the organizers hosted a session on the Open Access Button. The previous day's sessions introduced the audience to Open Access, Open Access repositories, and the growing Open Access initiatives around the world.
This event dedicated to Open Access was well received by Nepal's open communities, which have mostly concerned themselves with Open Data, Open Knowledge, and Open Source software. A new audience became aware of the philosophy of openness. This author believes the event was a success.
From Information Today Inc:
This October, Information Today, Inc.’s most popular authors will be at Internet Librarian 2014. For attendees, it’s the place to meet the industry’s top authors and purchase signed copies of their books at a special 40% discount.
The following authors will be signing at the Information Today, Inc., booth on Monday, October 27, from 5:00 to 6:00 p.m. during the Grand Opening Reception.