Feed aggregator

Max Planck Digital Library: Personal Accounts activated for OvidSP

planet code4lib - Fri, 2015-02-20 19:09

Users from the Max Planck Society now have access to the "Personal Account" feature in OvidSP. This lets you create a private workspace to store search strategies, AutoAlerts, and more, by logging in at any time during an active OvidSP session.

In order to try out the Personal Account feature, please:

  • log into OvidSP (IP authenticated access)
  • select "My Workspace" in the top menu to be directed to a login screen

First-time users will need to register via the "Create a new Personal Account" link above the login box. Further information can be found in the Ovid help and the video tutorials offered by Wolters Kluwer.

Please note that OvidSP Personal Accounts will replace the MPG/Ovid user login in the near future. All users with an active MPG/Ovid account will receive an email providing more details soon.

Patrick Hochstenbach: Some Cat Toons

planet code4lib - Fri, 2015-02-20 18:52
Filed under: Doodles Tagged: brushpen, cat, doodle, fudensuke, moleskine

District Dispatch: If you missed it live: E-rate webinar available “Just in Time”

planet code4lib - Fri, 2015-02-20 17:44

Heading into the home stretch of the 2015 E-rate application cycle, more than 100 librarians put their paperwork (or keyboards) aside to participate in yesterday’s E-rate webinar hosted by the Public Library Association (PLA) and the American Library Association’s (ALA) Office for Information Technology Policy (OITP). The webinar provided a detailed look at the filing process for the current (2015) funding year, including a review of changes to the eligible services list (ESL) and tips for filing a successful form 470 (to initiate the application and request services) and form 471 (to give specifics on the services you’re requesting).

In addition to these specifics, the webinar also provides important links to information from the Schools and Libraries Division (SLD), such as pertinent News Briefs, the online training site for the forms (a must for beginners and seasoned applicants alike), the Online Learning Library, and more. The slides alone are reason enough to view the archive, as they gather the most useful links in one annotated location.

Please note one correction from yesterday’s presentation. Slide #5 references the Institute of Museum and Library Services (IMLS) locale codes for determining which libraries are eligible for $5.00 per square foot for Category 2 services. The correct locale codes are: 11, 12, and 21.

Need to see it for yourself? The archive of the webinar is available below:

There are 35 days left in the 2015 application window (but please remember that the final day for filing the form 470 is February 26). After you hit the submit button and pour yourself a cup of tea (or try the E-rate adult beverage), we encourage you to begin planning for 2016. As noted in the webinar, some of the more significant program changes related to Category 1, specifically those ensuring libraries have access to high-capacity broadband to their doors, take effect in 2016. To take full advantage of these new opportunities, libraries must plan ahead.

ALA is already in the planning phase for 2016. And working with our library partners like PLA, we are focusing on more outreach activities to help ensure libraries are equipped with information and support so they can help themselves to a generous serving of available E-rate funding. Refer to the “Got E-rate?” page, follow us @oitp and hashtag #libraryerate, and check back here for updates.

The post If you missed it live: E-rate webinar available “Just in Time” appeared first on District Dispatch.

David Rosenthal: Report from FAST15

planet code4lib - Fri, 2015-02-20 16:00
I spent most of last week at Usenix's File and Storage Technologies conference. Below the fold, notes on the most interesting talks from my perspective.

Keynote

A Brief History of the BSD Fast Filesystem. My friend Kirk McKusick was awarded the 2009 IEEE Reynold B. Johnson Information Storage Systems Award at the 2009 FAST conference for his custody of this important technology, but he only had a few minutes to respond. This time he had an hour to review over 30 years of high-quality engineering. Two aspects of the architecture were clearly important.

The first dates from the beginning in 1982. It is the strict split between the mechanism of the on-disk bitmaps (code unchanged since the first release) and the policy for laying out blocks on the drive. This split means that, if you had an FFS disk from 1982 or its image, the current code would mount it with no problems. The blocks would be laid out very differently from a current disk (and would be much smaller), but the way this different layout was encoded on the disk would be the same. The mechanism guarantees consistency: there is no way for a bad policy to break the file system; a bad policy can only slow it down. As an example, over lunch after listening to Ao Ma et al's 2013 FAST paper, ffsck: The Fast File System Checker, Kirk implemented their layout policy for FFS. Ma et al's implementation had added 1357 lines of code to ext3.
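The mechanism/policy split can be sketched in a few lines. This is purely illustrative pseudocode-made-runnable, not FFS source: the class and function names are invented, but the shape matches the idea that the bitmap mechanism enforces consistency while the layout policy only influences performance.

```python
# Illustrative sketch (names are made up, not from FFS) of separating
# the on-disk bitmap *mechanism* from the block-layout *policy*.

class BlockBitmap:
    """Mechanism: the free-block bitmap whose on-disk format never changes."""
    def __init__(self, nblocks):
        self.free = [True] * nblocks

    def allocate(self, block):
        # The mechanism guarantees consistency: a bad policy may choose a
        # slow block, but it can never hand out an already-allocated one.
        if not self.free[block]:
            raise ValueError("block already allocated")
        self.free[block] = False
        return block

def nearest_free_policy(bitmap, hint):
    """Policy: prefer blocks near `hint` (e.g. near the file's inode).
    Swapping in a different policy never changes the on-disk format."""
    candidates = sorted(range(len(bitmap.free)), key=lambda b: abs(b - hint))
    for b in candidates:
        if bitmap.free[b]:
            return bitmap.allocate(b)
    raise RuntimeError("file system full")

bitmap = BlockBitmap(16)
print(nearest_free_policy(bitmap, 5))   # allocates block 5
print(nearest_free_policy(bitmap, 5))   # 5 is taken; allocates a neighbor
```

Replacing `nearest_free_policy` with, say, an ffsck-style layout policy leaves `BlockBitmap` untouched, which is why an old disk image still mounts: only the policy that chose the blocks differed, not the encoding.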

The second dates from 1987 and, as Kirk tells it, resulted from a conversation with me. It is the clean and simple implementation of stacking vnodes, which allows very easy and modular implementation of additional file system functionality, such as user/group ID remapping or extended attributes. Most of Kirk's talk was a year-by-year recounting of incremental progress of this kind.
Papers

Analysis of the ECMWF Storage Landscape by Matthias Grawinkel et al is based on a collection of logs from two tape-based data archives fronted by disk caches (ECFS is 15PB with a disk:tape ratio of 1:43; MARS is 55PB with a 1:38 ratio). They have published the data:
  • ECFS access trace: Timestamps, user id, path, size of GET, PUT, DELETE, RENAME requests. 2012/01/02-2014/05/21.
  • ECFS / HPSS database snapshot: Metadata snapshot of ECFS on tape. Owner, size, creation/read/modification date, paths of files. Snapshot of 2014/09/05.
  • MARS feedback logs: MARS client requests (ARCHIVE, RETRIEVE, DELETE). Timestamps, user, query parameters, execution time, archived or retrieved bytes and fields. 2010/01/01-2014/02/27.
  • MARS / HPSS database snapshot: Metadata snapshot of MARS files on tape. Owner, size, creation/read/modification date, paths of files. Snapshot of 2014/09/06.
  • HPSS WHPSS logs / robot mount logs: Timestamps, tape ids, information on the full usage lifecycle from access request until cartridges are put back in the library. 2012/01/01-2013/12/31.
This is extraordinarily valuable data for archival system design, and their analyses are very interesting. I plan to blog in detail about this soon.

Efficient Intra-Operating System Protection Against Harmful DMAs by Moshe Malka et al provides a fascinating insight into the cost to the operating system of managing IOMMUs such as those used by Amazon and NVIDIA and identifies major cost savings.

ANViL: Advanced Virtualization for Modern Non-Volatile Memory Devices by Zev Weiss et al looks at managing the storage layer of a file system the same way the operating system manages RAM, by virtualizing it with a page map. This doesn't work well for hard disks, because the latency of the random I/Os needed to do garbage collection is so long and variable. But for flash and its successors it can potentially simplify the file system considerably.
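The page-map idea can be illustrated with a toy model (this is my own sketch, not ANViL's design or code): writes always go to fresh physical pages, the map redirects logical addresses, and garbage collection is just dropping physical pages that no logical address references anymore.

```python
# Toy illustration (not ANViL's actual code) of virtualizing storage
# with a page map: logical addresses stay stable while physical pages
# move, so reclaiming stale data never disturbs the logical namespace.

class PageMappedStore:
    def __init__(self):
        self.physical = {}      # physical page number -> data
        self.page_map = {}      # logical address -> physical page number
        self.next_page = 0      # append-only write frontier

    def write(self, logical, data):
        # Never update in place: write a fresh physical page, then
        # repoint the logical address at it.
        self.physical[self.next_page] = data
        self.page_map[logical] = self.next_page
        self.next_page += 1

    def read(self, logical):
        return self.physical[self.page_map[logical]]

    def garbage_collect(self):
        # Reclaim physical pages that no logical address points to.
        live = set(self.page_map.values())
        self.physical = {p: d for p, d in self.physical.items() if p in live}

store = PageMappedStore()
store.write("a", b"v1")
store.write("a", b"v2")      # supersedes the first physical page
store.garbage_collect()
print(store.read("a"))       # b'v2'; the stale page has been reclaimed
```

On a hard disk the `garbage_collect` step implies slow, variable random I/O; on flash it is cheap, which is why the paper argues the approach suits flash and its successors.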

Reducing File System Tail Latencies with Chopper by Jun He et al. Krste Asanovic's keynote at the last FAST stressed the importance of suppressing tail latencies in large systems. This paper described ways to exercise the file system to collect data on tail latencies, and to analyse the data to understand where the latencies were coming from so that their root causes could be fixed. They found four such root causes in the ext4 block allocator.

Skylight—A Window on Shingled Disk Operation by Abutalib Aghayev and Peter Desnoyers won the Best Paper award. One response of the drive makers to the fact that Shingled Magnetic Recording (SMR) turns hard disks from randomly writable to append-only media is Drive-Managed SMR, in which a Shingled Translation Layer (STL) hides this fact using internal buffers to make the drive interface support random writes. Placing this after the tail latency paper was a nice touch - one result of the buffering is infrequent long delays as the drive buffers are flushed! The paper is a very clear presentation of the SMR technology, the problems it poses, the techniques for implementing STLs, and their data collection techniques. These included filming the head movements with a high-speed camera through a window they installed in the drive top cover.

RAIDShield: Characterizing, Monitoring, and Proactively Protecting Against Disk Failures by Ao Ma et al shows that in EMC's environment they can effectively predict SATA disk failures by observing the reallocated sector count and, by proactively replacing drives whose counts exceed a threshold, greatly reduce RAID failures. This is of considerable importance in improving the reliability of disk-based archives.
Work-In-Progress talks and posters

Building Native Erasure Coding Support in HDFS by Zhe Zhang et al - this WIP described work to rebuild the framework underlying HDFS so that flexible choices can be made between replication and erasure coding, between contiguous and striped data layouts, and between erasure codes.

Changing the Redundancy Paradigm: Challenges of Building an Entangled Storage by Verónica Estrada Galiñanes and Pascal Felber - this WIP updated work published earlier in Helical Entanglement Codes: An Efficient Approach for Designing Robust Distributed Storage Systems. This is an alternative to erasure codes for efficiently increasing the robustness of stored data. Instead of adding parity blocks, they entangle incoming blocks with previously stored blocks:

To upload a piece of data to the system, a client must first download some existing blocks ... and combine them with the new data using a simple exclusive-or (XOR) operation. The combined blocks are then uploaded to different servers, whereas the original data is not stored at all. The newly uploaded blocks will be subsequently used in combination with future blocks, hence creating intricate dependencies that provide strong durability properties. The original piece of data can be reconstructed in several ways by combining different pairs of blocks stored in the system. These blocks can themselves be repaired by recursively following dependency chains.

It is an interesting idea that, at data center scale, is claimed to provide very impressive fault-tolerance for archival data.
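A stripped-down toy version of the entanglement step makes the quoted description concrete. This is not the authors' helical code: it entangles each new block with only one predecessor (starting from an arbitrary bootstrap block), whereas the real scheme weaves blocks into multiple chains for much stronger durability.

```python
# Toy sketch of entangled storage: store only XOR-combined blocks,
# never the original data; recover an original by XORing a combined
# block with its partner. Single-chain version for illustration only.

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

stored = [b"\xaa" * 4]               # bootstrap block already in the system

def upload(data: bytes) -> int:
    # Entangle the new data with an existing block; store only the mix.
    partner = stored[-1]
    stored.append(xor(data, partner))
    return len(stored) - 1           # index of the combined block

def reconstruct(index: int) -> bytes:
    # XOR the combined block with its partner to recover the original.
    return xor(stored[index], stored[index - 1])

i = upload(b"\x01\x02\x03\x04")
j = upload(b"\xff\x00\xff\x00")
print(reconstruct(i))                # b'\x01\x02\x03\x04'
print(reconstruct(j))                # b'\xff\x00\xff\x00'
```

Because XOR is its own inverse, any stored block can also be repaired from its neighbors, which is the "recursively following dependency chains" property the abstract describes.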

OCLC Dev Network: Interlibrary Loan Policy Directory Maintenance February 21

planet code4lib - Fri, 2015-02-20 15:00

The Interlibrary Loan Policy Directory will be updated on February 21st.

Hydra Project: University of Alberta becomes a Hydra partner

planet code4lib - Fri, 2015-02-20 14:46

We are delighted to announce that the University of Alberta has become the latest formal Hydra Partner.  The University of Alberta has well over a decade of experience in large-scale digitization and repository projects, and has a strong team of librarians, developers, data curators and other experts migrating their existing systems to what they are calling “Hydra North.”

In their Letter of Intent, the University of Alberta says that they are committed to using their local needs as pathways to contribute to the Hydra community. Their primary areas of focus in this will be research data management, digital archives, and highly scalable object storage.

Welcome, University of Alberta!

DuraSpace News: OpenBU Adopts @mire's Managed Hosting

planet code4lib - Fri, 2015-02-20 00:00

By Ignace Deroost, @mire  

DuraSpace News: Play to Grow Your DSpace Development Skills

planet code4lib - Fri, 2015-02-20 00:00

From Ignace Deroost, @mire  When looking at the Github language statistics for the DSpace project, one could easily assume that a solid background in Java is all it takes to tackle most DSpace development challenges.

District Dispatch: Education and school library legislation is heating up

planet code4lib - Thu, 2015-02-19 22:01

It’s record cold in D.C., but we’re busy meeting with Senate staffers trying to promote school libraries. Both U.S. Senate Committee on Health, Education, Labor, and Pensions (HELP) Chairman Sen. Lamar Alexander (R-TN) and U.S. House Education and the Workforce Committee Chairman John Kline (R-MN) have committed to passing a reauthorization bill for the Elementary and Secondary Education Act (ESEA). In late January, Sen. Alexander released his discussion draft and received a lot of pushback from the education community, including school libraries, because libraries were not well integrated into the legislation; there was no acknowledgement of the importance of effective school library programs. He declared that the Committee would pass the bill out of committee the last week of February.

Tell Sen. Lamar Alexander to include school library programs in the ESEA reauthorization. (Photo by DOE PHOTO/Ken Shipp)

Sen. Alexander then met with HELP Committee Ranking Member Sen. Patty Murray, and they decided to make Sen. Alexander’s bill more bipartisan, which will take some time, so the committee is pushing the markup to March. The House, meanwhile, passed its bill out of committee with no school library amendments adopted.

Library advocates are calling their Senators about the SKILLS Act to see how much can be included for effective school library programs. This legislation has been hard to pass (Congress has been trying since 2006 and hasn’t completed it yet), so we need to stay tuned. To learn more about ESEA legislative activities this Congress, read up on the SKILLS Act.

Take action for school library funding now!

The post Education and school library legislation is heating up appeared first on District Dispatch.

District Dispatch: ALA joins lengthy list of groups calling for balanced deficit reduction

planet code4lib - Thu, 2015-02-19 18:09

In 2013, the Bipartisan Budget Act negotiated by Representative Paul Ryan (R-WI) and Senator Patty Murray (D-WA) provided partial, temporary relief from sequestration. With the return of full sequestration in 2016, the American Library Association (ALA) is collaborating with NDD United, an alliance of organizations working together to protect nondefense discretionary funding, to renew efforts to bring an end to sequestration.

Today, ALA joined NDD United and more than 2,100 organizations from across all sectors of the economy and society to urge Congress and President Obama to work together to end sequestration. The letter (pdf) emphasizes (1) the importance of nondefense discretionary (NDD) programs, (2) the harmful effects of budget cuts to date, and (3) the equal importance of both defense and nondefense programs in America’s security at home and abroad, and thus the need for equal sequestration relief.

Sequestration cuts had a significant impact on federal library programs. For example, school libraries, already suffering from budget cuts, saw a 12.5 percent cut to Innovative Approaches to Literacy, leaving less grant money available for low-income school libraries. LSTA funding was reduced by nearly $10 million, which reduced libraries’ ability to provide services for education, employment and entrepreneurship, community engagement, and individual empowerment.

NDD published “Faces of Austerity” in 2013.

Cuts to date have had significant impacts on the lives of Americans, as demonstrated in NDD United’s 2013 report “Faces of Austerity: How Budget Cuts Make Us Sicker, Poorer, and Less Secure (pdf).” Deficit reduction measures enacted since 2010 have come overwhelmingly from spending cuts, with the ratio of spending cuts to revenue increases far beyond that recommended by bipartisan groups of experts. There is bipartisan agreement that sequestration is bad policy and ultimately hurts our nation; however, Congress and the President have so far not been able to agree on alternative deficit reduction measures to replace the damaging cuts. As work begins on the 2016 budget, it is critical that Congress and the President find a replacement for sequestration to allow the government to keep making appropriate investments in Americans.

The post ALA joins lengthy list of groups calling for balanced deficit reduction appeared first on District Dispatch.

HangingTogether: The Five Stages of Code4Lib

planet code4lib - Thu, 2015-02-19 16:22

View of the Willamette River and Mount Hood from Downtown Portland OR

I had the good fortune to attend the Code4Lib 2015 conference in Portland OR last week.  It was a great event as usual, but it’s an event that I don’t always get to attend in person.  Does anyone else go through these five stages during the conference?  Right, me neither.

  1. I’m very familiar with all current technologies, but let’s see what others are up to.
  2. Oh, wait, it turns out that I don’t know anything and don’t belong here.
  3. Then again, I understood that last presentation and could totally do what they did.
  4. So now I need to throw out all my code and rewrite my apps using that framework I just heard about for the first time.
  5. I’m heading over to the Multnomah Whisk{e}y Library, anybody else interested?

About Bruce Washburn


DPLA: Family Bible records as genealogical resources

planet code4lib - Thu, 2015-02-19 15:45

Family tree from the Bullard family Bible records. Courtesy of the State Archives of North Carolina via the North Carolina Digital Heritage Center.

Interested in using DPLA to do family research, but aren’t sure where to start? Consider the family Bible. There are two large family Bible collections in DPLA—over 2,100 (transcribed) from the North Carolina Department of Cultural Resources, and another 90 from the South Carolina Digital Library. They’re filled with rich information about family connections and provide insight into how people of the American South lived and died during the—mainly—18th and 19th centuries.

Prior to October 1913 in North Carolina, and January 1915 in South Carolina, vital records (birth and death, specifically) were not documented at the state level. Some cities and counties kept official records before then, and in other cases births and deaths were documented—when at all—by churches or families. Private birth, death, and marriage events were most often recorded in family Bibles, which have become rich resources for genealogists in search of early vital records.

Family Bibles are Bibles passed down from one generation of relatives to the next. In some cases, such as the 1856 version held by the Hardison family, the Bible had pages dedicated to recording important events. In others, the inside covers or page margins were used to document births, deaths, and marriages. The earliest recorded date in a family Bible in DPLA is the birth of John Bullard in 1485.

Not only do family Bibles record the dates and names of those born, died, or married, but these valuable resources may identify where an event took place as well. Oftentimes, based on the way in which the event was recorded, the reader can sense the joy or heartache the recorder felt when they inscribed it in the Bible (for example, see the Jordan family Bible, page 8). You’ll even find poetry, schoolwork, correspondence, news clippings, and scribbles in family Bibles that provide insight into a family’s private life that might otherwise be lost (for examples, see the Abraham Darden, Gladney, and Henry Billings family Bibles).

Slave list, Horton family Bible records. Courtesy of the State Archives of North Carolina via the North Carolina Digital Heritage Center.

Family Bibles—especially those from the southern US—may be of particular interest to African American genealogists, whose ancestry trails often go cold prior to the Civil War. Before the 1860s, there is little documentary evidence that ancestors even existed beyond first names and estimated ages in bills of sale, wills, or property lists produced during slavery. Family Bibles are some of the only documents that contain the names of slaves, and in rare cases their ages, birthdates, and parentage.

A search on the subject term “Bible Records AND African Americans,” in the collection from the North Carolina Department of Cultural Resources, returns a set of 142 North Carolina family Bibles that contain at least one documented slave name. In a few cases, the list can extend to ten or more (for example, Simmons Family Bible, page 4). This information enables African American genealogists to begin to trace their ancestry to a place and time in history.

Because African Americans are listed among the slaveholding family’s names, it can sometimes be difficult to discern which are family members and which are their slaves, so some care is required when working with these records. Generally, slaves are listed without last names (for example, see page 7 of the Horton Family Bible).

Whether you are a family researcher or are simply interested in American history, the family Bibles from North and South Carolina will be of great interest. They tell deeply personal stories and expose a rich history hidden in the private collections of American citizens that remind us that all history is truly local.

 

Featured image credit: Detail from page 2 of the Debnam Family Bible Records. Courtesy of the State Archives of North Carolina via the North Carolina Digital Heritage Center.

All written content on this blog is made available under a Creative Commons Attribution 4.0 International License. All images found on this blog are available under the specific license(s) attributed to them, unless otherwise noted.

District Dispatch: Tweet questions about fair use and media resources

planet code4lib - Thu, 2015-02-19 15:11

Next week is Fair Use Week so let’s celebrate with a copyright tweetchat on Twitter. On February 25th from 3:00 to 4:00 p.m. (Eastern), legal expert Brandon Butler will be our primary “chatter” on fair use.

There are few specific copyright exceptions that libraries and educational institutions can rely on that deal specifically with media, so reliance on fair use is often the only option for limiting copyright when necessary. The wide array of media formats, both analog and digital, the widespread availability of media content, and the importance of media in teaching and research, in addition to advances in computer technologies and digital networks, were unheard of in the 1960s-70s when Congress drafted the current copyright law.

But Congress recognized that a flexible exception like fair use would be an important user exception especially in times of dramatic change. Fair use can address the unexpected copyright situation that will occur in the future. Particularly with media, it’s a whole new world.

The tweetchat will address concerns like the following:

  • Can I make a digital copy of this video?
  • When is a public performance public?
  • When can I break digital rights technology on DVDs?
  • Is the auditorium a classroom?
  • How can libraries preserve born-digital works acquired via a license agreement?
  • And my favorite: What about YouTube? What can we do with YouTube?

Ask Brandon Butler your media question. Participate in the Twitter tweetchat by using #videofairuse on February 25, 2015, from 3:00 to 4:00 p.m. EST.

Brandon Butler has plenty of experience with fair use. He is a Practitioner-in-Residence at American University’s Washington College of Law, where he supervises student attorneys in the Glushko-Samuelson Intellectual Property Law Clinic and teaches about copyright and fair use. Brandon is the co-facilitator, with Peter Jaszi and Patricia Aufderheide, of the Code of Best Practices in Fair Use for Academic and Research Libraries, a handy guide to thinking clearly about fair use published by the Association of Research Libraries and endorsed by all the major library associations, including the American Library Association (ALA).

Special thanks to Laura Jenemann for planning this event. Laura is Media Librarian and Liaison Librarian, Film Studies and Dance, at George Mason University, VA. She is also the current Chair of ALA’s Video Round Table.

The post Tweet questions about fair use and media resources appeared first on District Dispatch.

LITA: Tools for Creating & Sharing Slide Decks

planet code4lib - Thu, 2015-02-19 13:00

Lately I’ve taken to peppering my Twitter network with random questions. Sometimes my questions go unanswered but other times I get lively and helpful responses. Such was the case when I asked how my colleagues share their slide decks.

Figuring out how to share my slide decks has been one of those things that consistently falls to the bottom of my to-do list. It’s important to me to do so because it means I can share my ideas beyond the very brief moment in time that I’m presenting them, allowing people to reuse and adapt my content. Now that I’m hooked on the GTD system using Trello, though, I said to myself, “hey girl, why don’t you move this from the someday/maybe list and actually make it actionable.” So I did.

Here’s my dilemma. When I was a library school student I began using SlideShare. There are a lot of great things about it – it’s free, it’s popular, and there are a lot of integrations. However… I’m just not feeling the look of it anymore. I don’t think it has been updated in years, resulting in a cluttered, outdated design. I’ll be the first to admit that I’m snobby when it comes to this sort of thing. I also hate that I can’t reorder slide decks once they’re uploaded. I would like to make sure my decks are listed in some semblance of chronological order but in order to do so I have to upload them in backwards order. It’s just crazy annoying how little control you have over the final arrangement and look of the slides.

So now that you’ve got the backstory, this is where the Twitter wisdom comes in. As it turns out, I learned about more than slide sharing platforms – I also found out about some nifty ways to create slide decks that made me feel like I’ve been living under a rock for the past few years. Here are some thoughts on HaikuDeck, HTMLDecks, and SpeakerDeck.

HaikuDeck

screenshot: plenty of styling options + formats

This is really sleek and fun. You can create an account for free (beta version) and pull something together quickly. Based on the slide types HaikuDeck provides you with, you’re shepherded down a delightfully minimalistic path – you can of course create densely overloaded slides but it’s a little harder than normal. Because this is something I’m constantly working on, I am appreciative.

I haven’t yet created and presented using a slide deck from HaikuDeck but I’m going to make that a goal for this spring. However, you can see a quick little test slide deck here. I made it in about two minutes and it has absolutely no meaningful content, it’s just meant to give you an easy visual of one of their templates. (Incentive: make it through all three slides and you’ll find a picture of a giant cat.)

One thing to keep in mind is that you’ll want to do all of your editing within HaikuDeck. If you export to Powerpoint, nothing will be editable because each slide exports as an image. This could be problematic if you needed to do last minute edits and didn’t have an internet connection. Also, beware: at least one user has shared that it ate her slides.

HTMLDecks

screenshot: handy syntax chart + space to build, side-by-side

This is a simple way to build a basic slide deck using HTML. I don’t think it could get any simpler and I’m actually struggling with what to write that would be helpful for you to know about it. To expand what you can do, learn more about Markdown.

From what I can tell, there is no export feature – you do need to pull up your slide deck in a browser and present from there. Again, this makes me a little nervous given the unreliable nature of some internet connections.

I see the appeal of HTMLDecks, though I’m not sure it’s for me. (Anyone want to change my mind by pointing to your awesome slide deck? Show me in the comments!)

SpeakerDeck

screenshot: clean + simple interface for uploading your slides

I was so dejected when I looked at my sad SlideShare account. SpeakerDeck renewed my faith. This is the one for me!

What’s not to love? SpeakerDeck has the clean look I’ve craved and it automatically orders your slides based on the date you gave your presentation, most recent slides listed toward the top. Check out my profile here to see all of this in action.

One drawback is that by making the jump to SpeakerDeck I lost the number of views that I had accumulated over the years. On the same note, SpeakerDeck doesn’t integrate with my ImpactStory profile in the same way that SlideShare does. I haven’t published much so my main stats come from my slide decks. Not sure what I’m going to do about that yet, beyond lobbying the lovely folks at ImpactStory to add SpeakerDeck integration.

One thing I would like to see a slide sharing platform implement is shared ownership of slides. I asked SpeakerDeck about whether they offered this functionality; they don’t at this time. You see, I give a lot of presentations on behalf of a group I lead, Research Data Services (RDS). Late last year I created a SlideShare account for RDS. I would love nothing more than to be able to link my RDS slide decks to my personal account so that they show up in both accounts.

Lastly, I would be remiss as a data management evangelizer if I didn’t note that placing the sole copies of your slides (or any files) on a web service is an incredibly bad idea. It’s akin to teenagers now keeping their photos on Facebook or Instagram and deleting the originals, a tale so sad it could keep me up at night. A better idea is to keep two copies of your final slide deck: one saved as an editable file and the other saved as a PDF. Then upload a copy of the PDF to your slide sharing platform. (Sidenote: I haven’t always been as diligent about keeping track of these files. They’ve lived in various versions of google drive, hard drives, and been saved as email attachments… basically all the bad things that I am employed to caution against. Lesson? We are all vulnerable to the slow creep of many versions in many places but it’s never too late to stop the digital hoarding.)

How do you share your slide decks? Do you have any other platforms, tools, or tips to share with me? Do tell.

Open Knowledge Foundation: Announcing the Open Data Day Coalition micro-grantees

planet code4lib - Thu, 2015-02-19 12:27

Two weeks back, a coalition of Open Data Day supporters announced a micro-grant scheme in an open call for groups with good ideas for Open Data Day activities. The response was tremendous: over 75 groups from all corners of the world found the time to send in an application for one of the 300 USD micro-grants.

We were absolutely overwhelmed by the number of applications and sadly could only fund a small fraction of them, despite the vast majority being more than worthy of financial support. After difficult deliberations, the following groups were selected:

We, the coalition behind the micro-grants, congratulate them all and look forward to help them alongside all other groups organizing Open Data Day activities.

For all the groups who were unfortunately not awarded funds this time around, we were still tremendously excited to read about their plans. We were severely limited in the funds we had available and are disappointed that we couldn’t support more groups! We hope that those groups will still be able to carry on and organize their planned events. The vast majority of Open Data Day events are organized without a budget, and in the spirit of the global volunteer community we hope that they will be able to do so as well! We look forward to supporting all Open Data Day organizers in other ways and will be pushing Open Data Day heavily on social media, blog posts, and more.

If you have plans to organise an event, don’t forget to add it to the wiki and to the official Open Data Day world map of events. It’s not too late to organise one, so roll up your sleeves and jump in! More than 200 events are already in progress; let’s reach 300!

See Spanish translation of this post.

HangingTogether: Re-inventing the scholarly record: taking inspiration from Renaissance Florence

planet code4lib - Wed, 2015-02-18 23:15

Ponte Vecchio, Florence Italy


On February 11th, we presented the Evolving Scholarly Record (ESR) Framework at the EMEA Regional Council annual meeting in Florence. The topic was spot on, as the plenary talks preceding the ESR break-out session had paved the way for a more in-depth discussion of how libraries can re-invent their future stewardship roles in the digital domain.

Keynote speaker David Weinberger had argued compellingly the day before that the Web was a much better place for information than the fixed physical containers of books and journals, and that its shape allowed for unlimited expansion, so that “on the web, nothing is filtered out, only filtered forward.” He continued: “Researchers like to put their findings on the web because it allows for discussion and a multiplicity of views, including disagreement.” In the follow-up session, Jim Neal made the same observation but phrased it somewhat differently, saying “researchers dump their work everywhere,” denouncing the “repository chaos” and asking who is responsible for ensuring scholarly integrity on the web. He sent a strong message about the need to decide what of continuing value should be preserved and the imperative to devise new types of cooperative strategies to steer the scholarly ecosystem in the right direction.

As the first speaker at the ESR break-out session, I presented the Framework, highlighting: 1) the scattering of research outputs on the web and the expanding boundaries of the scholarly record, 2) the increased use of common web platforms by scholars for sharing their work, at the risk of compromising scholarly integrity practices and of losing the ability to capture and preserve the scholarly record, and 3) the fast-changing configuration of stakeholder roles and the need for innovative practices to ensure that the recording of the ESR is organized in consistent and reliable ways. Ulf-Göran Nilsson (Jönköping University) wondered if the Framework might benefit from an underlying economic framework, arguing that the journal subscription model determined the traditional “Fix” and “Collect” roles of publishers and libraries respectively. He suggested that the economic models for OA publishing are similarly likely to affect the dynamics of the ESR stakeholder roles. Cendrella Habre (Lebanese American University) asked what libraries should do to start addressing the ESR problem space.

Brian Schottlaender (UC San Diego), our second speaker, gave an enlightening reaction. He spoke about “rising to the stewardship challenge” and described how the curation of research data is becoming an increasingly important part of the stewardship tasks of the scholarly record. His “full-spectrum stewardship” diagram gave a process view of the SR, with 1) the scholarly raw material as “inputs,” 2) the scholarly enquiry and discourse as “operators,” and 3) scholarly publishing as “outputs.” While libraries have traditionally focused on the outputs, they are now hiring archivists to capture the raw data as well. John MacColl (St Andrews), our third speaker/reactor, lifted the session to higher policy levels – stressing the need for community conversations and for taking ownership of and control over stewardship. He thought the ESR Framework could be instrumental in identifying problems and inefficiencies – and solving these would in turn help counter chaos and “surrendering to the web.” With his metaphor of librarians as “hydraulic engineers of information flow,” he came full circle back to the theme of the Florentine meeting: “The art of invention.”

Discussion

The talks surely inspired the audience to ask questions and add their perspectives to the discussion; however, there was too little time left. I would therefore like to invite those who attended, and those reading this blog post, to leave their comments and continue the conversation right here!

About Titia van der Werf

Titia van der Werf is a Senior Program Officer in OCLC Research based in OCLC's Leiden office. Titia coordinates and extends OCLC Research work throughout Europe and has special responsibilities for interactions with OCLC Research Library Partners in Europe. She represents OCLC in European and international library and cultural heritage venues.


Cynthia Ng: Using Regex in MarcEdit to Fix Repeated Subfields in MARC records

planet code4lib - Wed, 2015-02-18 23:07
Can you tell I’ve been doing a lot of MARC work? UPDATE: Apparently this is possible as a one step regex process. Go see Terry’s comment below! Ah well, live and learn. The Problem Today’s problem is repeated subfields. I don’t even have a particular use case for this except that I received a set … Continue reading Using Regex in MarcEdit to Fix Repeated Subfields in MARC records
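The fix the post describes — collapsing repeated subfields — can be sketched as a single regex replace. Here is an illustrative Python version working on MarcEdit-style mnemonic field lines; this is a hedged sketch of the technique, not MarcEdit’s actual implementation (whose own Find/Replace regex can, per the post’s update, do the equivalent in one step):

```python
import re


def dedupe_subfields(field: str) -> str:
    """Collapse immediately repeated identical subfields in one field line.

    Example: '=650  \\0$aDogs.$aDogs.' -> '=650  \\0$aDogs.'

    The pattern captures one subfield (delimiter, code, value) and a
    backreference matches any verbatim repeats, which the replacement drops.
    """
    # (\$.[^$]*)  -> one subfield: '$', its code, then the value up to the next '$'
    # \1+         -> one or more exact repeats of that captured subfield
    return re.sub(r"(\$.[^$]*)\1+", r"\1", field)
```

Only consecutive, byte-identical repeats are collapsed; distinct subfields (say `$aTitle :$bsubtitle`) pass through untouched.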

LITA: Jobs in Information Technology: February 18

planet code4lib - Wed, 2015-02-18 20:53

New vacancy listings are posted weekly on Wednesday at approximately 12 noon Central Time. They appear under New This Week and under the appropriate regional listing. Postings remain on the LITA Job Site for a minimum of four weeks.

New This Week

Library Technology Professional 2, Los Alamos National Laboratory, Los Alamos, NM

Systems & Information Technology Librarian (Assistant Professor), NYC College of Technology, New York City, NY

Visit the LITA Job Site for more available jobs and for information on submitting a job posting.


District Dispatch: What we’ve been up to

planet code4lib - Wed, 2015-02-18 19:01

Photo by Joel Penner

If you have been following us here on District Dispatch, you probably have a pretty good idea what sort of policy and legislation we have had our attention turned to the past few months. From net neutrality to 3D printing, we do our best to keep you up to date on the happenings relevant to libraries here in the District.

That said, maybe you are new to District Dispatch! Or maybe you just can’t get enough of those Washington updates! Whatever the case, I present to you the latest 6 month report from the Washington Office. Find out what has been going on behind the scenes at the Office of Government Relations and the Office for Information Technology Policy by clicking the link below (PDF).

6 Month Report 2015

The post What we’ve been up to appeared first on District Dispatch.

Islandora: Islandora Camp EU2 - What to Expect

planet code4lib - Wed, 2015-02-18 18:26

Islandora Camp in Madrid is still several months away and we are still accepting proposals for presentations from the community, but we wanted to give a little preview of the kind of content you can expect, for those who might still be on the fence about attending.

Day One

On the first day of camp we address Islandora software and the community from a broad perspective. There are some standard topics that we cover at every camp, because they are always relevant and there are always updates:

  • An update on the project and what is happening in the community
  • A look at innovative Islandora sites around the world
  • A look at the current software stack and modules in our latest release (which, by the time of iCampEU2, will be Islandora 7.x-1.5)

On the not-so-standard front, we will have Fedora 4 Integration Project Director Nick Ruest as part of our Camp instructor team, and he will be giving an update and (hopefully) an early demo of our Fedora 4/Islandora 7.x integration.

Day Two

The second day of Islandora Camp is all about hands-on experience. If you are a librarian, archivist, or other front-end Islandora user (or interested in becoming one), the Admin track will go over how to use Islandora via its front-end menus and GUIs. We will cover basic site setup, collections, single and batch ingests, security configuration, and setting up Solr for discoverability. For the more technically inclined, we have a Dev track that delves into Islandora from the code side, culminating in the development of a custom Islandora module so you can learn how it's done.

Day Three

The last day of the event is turned over to more specific presentations and sessions from the community. Right now we are looking at sessions on linked data, FRBRoo ontologies in Islandora, theming, and multilingual Islandoras, but our Call for Proposals is open until March 1st, so this lineup could change. If you have something you'd like to share with the Islandora community, please send us your proposal!

If you have any questions about Islandora Camp in Madrid, please contact me.
