Feed aggregator

Code4Lib: Code4Lib 2016 Conference Proposals

planet code4lib - Sun, 2015-02-22 18:53

Proposals accumulated by the closing date of 2015-02-20

Los Angeles Proposal - Los Angeles, CA

Philadelphia Proposal - Philadelphia, PA

Code4Lib Wiki 2016 Proposals Page - http://wiki.code4lib.org/2016_Hosting_Proposals

Host Voting is NOW OPEN, from 2015-02-23 00:00:00 UTC to 2015-03-07 08:00:00 UTC. You can also watch the results.

Topic: conferences

Ian Davis: Another Blog Refresh

planet code4lib - Sun, 2015-02-22 10:04

It’s time for another blog refresh, this time back to a static site after a few years being hosted by Wordpress.com. Once again I’m convinced by Aaron’s argument that baking is better than frying. It’s not about performance, it’s about simplicity and control.

While I liked the convenience of Wordpress.com, it never really felt like a place I could tailor to my own requirements. I thought having a nice web UI and mobile apps to edit posts would encourage me to post more. It actually made no difference whatsoever. Whatever holds me back from blogging isn’t related to the editing UI.

For this move I looked at various static site generators such as Jekyll, Hyde and Hugo but I settled on a minimal one: gostatic. My reasoning (which I admit may not be entirely justified) is that feature-led software gets updated at a much higher rate than I post to my blog. When I come to post, invariably something important has changed in the core software or in a dependency, and I’ll need to upgrade or fix that before being able to publish. I find this particularly true of larger systems in dynamic languages like Ruby or PHP.

This time around I have a single binary (gostatic) to generate the site with no dependencies. It’s deliberately feature-poor so I don’t rely on things that may be changed or deprecated some time in the future and I have a script that does the rebuild and can sync to whatever laptop I’ll be using in the coming years. It’s documented for a future me.

A few technical notes:

  • Posts are written in markdown and are compatible with all the static site generators I mentioned above
  • This move is partly motivated by moving to a new web server. I’m going to be using nginx and serving either static files or fronting Go services (see the sketch after this list). This is the first time I’ve had a server that isn’t running Apache+PHP.
  • Hopefully the atom feed works ok – it looks ok, but there’s almost certainly some weird software out there that will break on it.
  • There will be broken links, but already I have fixed hundreds of bad internal links by being able to grep over all the posts locally.
  • Formatting will be weird in places since the posts were exported via Wordpress’s XML export. I’ll get to tidying up the individual posts as an ongoing job.
  • There are no comments. I have the comments as part of the Wordpress export, and I’m planning to take a look at how to incorporate them into the blog archives. However I’m not planning on adding commenting to the blog. Thank you to all of you who have commented on my posts in the past, I have enjoyed reading them. But… it’s time to admit that commenting is a broken form of communication.
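
Here is a minimal sketch of the kind of nginx setup mentioned in the notes above: serving the baked site as static files and fronting a Go service. The server name, paths, and port are placeholders, not the actual configuration:

```nginx
# Hypothetical example: server name, paths, and port are placeholders.
server {
    listen 80;
    server_name example.org;

    # Serve the baked static site straight from disk.
    root /var/www/blog;
    index index.html;

    # Front a Go service listening on a local port.
    location /api/ {
        proxy_pass http://127.0.0.1:8080;
    }
}
```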

For contrary views on baked vs. fried and blog commenting, see my post on moving from Movable Type to Wordpress back in 2004, or my post on moving from a dynamic system to Movable Type, or even my post on moving to a hosted blog on Posterous ;)

For early blog archeology, see my post on early versions of Internet Alchemy.

Code4Lib: Code4Lib North 2015: St. Catharines, ON

planet code4lib - Sat, 2015-02-21 21:00
Topic: meetings

The sixth Code4Lib North meeting will be on June 4-5, 2015 at the St. Catharines Public Library, 54 Church St., St. Catharines, Ontario. St. Catharines is on the Niagara Peninsula on the south side of Lake Ontario, close to the American cities of Buffalo and Rochester in New York. See the wiki page for details and to sign up for a talk.

Code4Lib: Code4Lib 2015 videos

planet code4lib - Sat, 2015-02-21 20:51
Topic: code4lib 2015

All of Code4Lib 2015 was recorded and the videos are available on the YouTube Code4Lib channel.

John Miedema: Seven root categories for organizing non-fiction writing and optimizing Lila’s analytics

planet code4lib - Sat, 2015-02-21 14:03

Lila is technology that collaborates with an author engaged in a writing project. It assumes a model of the writing process, one that is considered natural for writing non-fiction, at least, and compatible with existing writing software. In this model, an author writes notes and organizes them into categories. Seven root categories are assumed to be fundamental to a writing project: folders that contain the written material. The categories are presented here not so much as Lila system requirements but as a best practice: structures that optimize the writing process and Lila’s analytics. If you do not use these categories to organize your non-fiction writing project, you might consider doing so, whether or not you intend to use Lila.

Each step in the writing process below maps to a structural category/folder; each entry gives the folder, its description, and a comparison with Pirsig’s categories.

  1. Project. The author begins a project: a root Project folder is created, a repository for everything else. This single root folder contains all other folders and slips. It may also hold high-level instructions regarding project plans, to-do lists, etc., but these are not content for Lila’s analysis. Like Pirsig’s PROGRAM slips, the Project folder may contain “instructions for what to do with the rest of the slips,” but this information will not operate as a “program”; all programming functions will be handled by Lila code.
  2. Project > Inbox. The author takes notes on ideas using various software programs on different devices. Many notes will require further thought before filing into the project, so they are sent to an inbox: a temporary queue, a point for later conscious attention and classification. The Inbox may be an email inbox or an Evernote notebook dedicated to an inbox function, and there can be multiple inboxes. Notes in the inbox may be tentatively assigned categories and/or tags, but these will be reviewed. The Inbox corresponds to Pirsig’s UNASSIMILATED category, “new ideas that interrupted what he was doing.”
  3. Project > Work. Notes are filed into a main folder, a workspace for all the active content. Notes in the Work folder are organized by categories and subject-classified by tags; these notes are the target of Lila’s analytics (see an upcoming post on subject classification for more information). The Work folder contains all the topic categories Pirsig developed as he was working.
  4. Project > Park. Some ideas are considered worth noting, but are either not sufficiently relevant or too disruptive to file into the main work. These notes should not be trashed, but parked for later evaluation. Parked notes are excluded from Lila’s analytics, but can be brought back into play later. Park corresponds to Pirsig’s CRIT and TOUGH categories, which I see as the positive and negative versions of the same thing, i.e., disruptive ideas: don’t let them take over, but don’t ignore them either. Let them hang out in the Park for a while.
  5. Project > TLDR. A primary function of Lila is to assist with the large volume of content that an author does not have time to read; on the web, the acronym TLDR is used, “Too Long; Didn’t Read.” TLDR is not a flippant term: content management systems typically have special handling for large files. Lila will generate notes (slips) from this unread content and present it in context for embedded reading. Pirsig provided no special classification for unread content; likely it just went in a pile, perhaps left unread.
  6. Project > Archive. Some notes, and chains of notes, seem important at one time but are later considered irrelevant or out of scope, typically as the project matures and editing is undertaken. These notes are not trashed but archived for possible later reuse. Archived notes are excluded from Lila’s analytics, though perhaps a switch will allow them to be included; the archive could also tie into version control for successive drafts. Pirsig filed these notes in JUNK, “slips that seemed of high value when he wrote them down but which now seemed awful.”
  7. Project > Trash. Other notes are just plain trash: duplicates, dead lines of thought. To avoid noise in the archive it’s best to trash them. Trashed notes are excluded from Lila’s analytics and may be purged on occasion. Pirsig filed these notes in JUNK as well, but maintained them indefinitely.
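
Taken together, the seven categories give a project hierarchy like this:

```
Project/          (root: everything else lives here)
  Inbox/          (temporary queue for new notes)
  Work/           (active content; the target of Lila's analytics)
  Park/           (disruptive or tangential ideas, held for later)
  TLDR/           (long unread content for Lila to process)
  Archive/        (out-of-scope notes kept for possible reuse)
  Trash/          (duplicates and dead ends, purged on occasion)
```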

 

Patrick Hochstenbach: Figure drawing on Mondays

planet code4lib - Sat, 2015-02-21 12:15
Filed under: Figure Drawings Tagged: art model, brushpen, copic, Nude, Nudes

Max Planck Digital Library: Personal Accounts activated for OvidSP

planet code4lib - Fri, 2015-02-20 19:09

Users from the Max Planck Society now have access to the "Personal Account" feature in OvidSP. This enables you to create a private workspace to store search strategies, AutoAlerts, and more, by logging in at any time during an active OvidSP session.

In order to try out the Personal Account feature, please:

  • log into OvidSP (IP authenticated access)
  • select "My Workspace" in the top menu to be directed to a login screen

First-time users will need to register via the "Create a new Personal Account" link above the login box. Further information can be found in the Ovid help and in the video tutorials offered by Wolters Kluwer.

Please note that OvidSP Personal Accounts will replace the MPG/Ovid user login in the near future. All users with an active MPG/Ovid account will receive an email providing more details soon.

Patrick Hochstenbach: Some Cat Toons

planet code4lib - Fri, 2015-02-20 18:52
Filed under: Doodles Tagged: brushpen, cat, doodle, fudensuke, moleskine

District Dispatch: If you missed it live: E-rate webinar available “Just in Time”

planet code4lib - Fri, 2015-02-20 17:44

Heading into the home stretch of the 2015 E-rate application cycle, more than 100 librarians put their paperwork (or keyboards) aside to participate in yesterday’s E-rate webinar hosted by the Public Library Association (PLA) and the American Library Association’s (ALA) Office for Information Technology Policy (OITP). The webinar provided a detailed look at the filing process for the current (2015) funding year, including a review of changes to the eligible services list (ESL) and tips for filing a successful form 470 (to initiate the application and request services) and form 471 (to give specifics on the services you’re requesting).

In addition to these specifics, the webinar also provides important links to information from the Schools and Libraries Division (SLD), such as pertinent News Briefs, the online training site for the forms (a must for beginners and seasoned applicants alike), the Online Learning Library, and more. The slides alone are reason enough to view the webinar, as they collect the most useful links in one annotated location.

Please note one correction from yesterday’s presentation. Slide #5 references the Institute of Museum and Library Services (IMLS) locale codes for determining which libraries are eligible for $5.00 per square foot for Category 2 services. The correct locale codes are: 11, 12, and 21.

Need to see it for yourself? The archive of the webinar is available below.

There are 35 days left in the 2015 application window (but remember that the final day for filing the form 470 is February 26). After you hit the submit button and pour yourself a cup of tea (or try the E-rate adult beverage), we encourage you to begin planning for 2016. As noted in the webinar, some of the more significant program changes related to Category 1, specifically those ensuring libraries have access to high-capacity broadband to their doors, take effect in 2016. To fully take advantage of these new opportunities, libraries must plan ahead.

ALA is already in the planning phase for 2016. And working with our library partners like PLA, we are focusing on more outreach activities to help ensure libraries are equipped with information and support so they can help themselves to a generous serving of available E-rate funding. Refer to the “Got E-rate?” page, follow us @oitp and hashtag #libraryerate, and check back here for updates.

The post If you missed it live: E-rate webinar available “Just in Time” appeared first on District Dispatch.

David Rosenthal: Report from FAST15

planet code4lib - Fri, 2015-02-20 16:00
I spent most of last week at Usenix's File and Storage Technologies conference. Below the fold, notes on the most interesting talks from my perspective.

Keynote

A Brief History of the BSD Fast Filesystem. My friend Kirk McKusick was awarded the 2009 IEEE Reynold B. Johnson Information Storage Systems Award at the 2009 FAST conference for his custody of this important technology, but he only had a few minutes to respond. This time he had an hour to review over 30 years of high-quality engineering. Two aspects of the architecture were clearly important.

The first dates from the beginning in 1982. It is the strict split between the mechanism of the on-disk bitmaps (code unchanged since the first release) and the policy for laying out blocks on the drive. It is this that means that, if you had an FFS disk from 1982 or its image, the current code would mount it with no problems. The blocks would be laid out very differently from a current disk (and would be much smaller), but the way this different layout was encoded on the disk would be the same. The mechanism guarantees consistency; there's no way for a bad policy to break the file system, it can just slow it down. As an example, over lunch after listening to Ao Ma et al's 2013 FAST paper ffsck: The Fast File System Checker, Kirk implemented their layout policy for FFS; Ma et al's implementation had added 1357 lines of code to ext3.

The second dates from 1987 and, as Kirk tells it, resulted from a conversation with me. It is the clean and simple implementation of stacking vnodes, which allows very easy and modular implementation of additional file system functionality, such as user/group ID remapping or extended attributes. Most of Kirk's talk was a year-by-year recounting of incremental progress of this kind.
Papers

Analysis of the ECMWF Storage Landscape by Matthias Grawinkel et al is based on a collection of logs from two tape-based data archives fronted by disk cache (ECFS is 15PB with a disk:tape ratio of 1:43, MARS is 55PB with a 1:38 ratio). They have published the data:
  • ECFS access trace: Timestamps, user id, path, size of GET, PUT, DELETE, RENAME requests. 2012/01/02-2014/05/21.
  • ECFS / HPSS database snapshot: Metadata snapshot of ECFS on tape. Owner, size, creation/read/modification date, paths of files. Snapshot of 2014/09/05.
  • MARS feedback logs: MARS client requests (ARCHIVE, RETRIEVE, DELETE). Timestamps, user, query parameters, execution time, archived or retrieved bytes and fields. 2010/01/01-2014/02/27.
  • MARS / HPSS database snapshot: Metadata snapshot of MARS files on tape. Owner, size, creation/read/modification date, paths of files. Snapshot of 2014/09/06.
  • HPSS WHPSS logs / robot mount logs: Timestamps, tape IDs, information on the full usage lifecycle from access request until cartridges are put back in the library. 2012/01/01-2013/12/31
This is extraordinarily valuable data for archival system design, and their analyses are very interesting. I plan to blog in detail about this soon.

Efficient Intra-Operating System Protection Against Harmful DMAs by Moshe Malka et al provides a fascinating insight into the cost to the operating system of managing IOMMUs such as those used by Amazon and NVIDIA and identifies major cost savings.

ANViL: Advanced Virtualization for Modern Non-Volatile Memory Devices by Zev Weiss et al looks at managing the storage layer of a file system the same way the operating system manages RAM, by virtualizing it with a page map. This doesn't work well for hard disks, because the latency of the random I/Os needed to do garbage collection is so long and variable. But for flash and its successors it can potentially simplify the file system considerably.
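
To make the page-map idea concrete, here is a conceptual sketch (my own illustration, not ANViL's actual design or API): writes go to fresh physical blocks and the map is repointed, which is what lets the device relocate data during garbage collection without the file system noticing.

```python
# Conceptual sketch of virtualizing block storage with a page map;
# an illustration of the idea, not ANViL's actual interface.

class VirtualBlockDevice:
    """Maps stable logical block addresses to movable physical blocks."""

    def __init__(self):
        self.page_map = {}   # logical address -> physical address
        self.blocks = {}     # physical address -> block data
        self.next_free = 0   # append-only allocation, as on flash

    def write(self, logical, data):
        # Out-of-place write: allocate a fresh physical block and
        # repoint the map; the old physical block becomes garbage.
        self.blocks[self.next_free] = data
        self.page_map[logical] = self.next_free
        self.next_free += 1

    def read(self, logical):
        return self.blocks[self.page_map[logical]]

dev = VirtualBlockDevice()
dev.write(7, b"v1")
dev.write(7, b"v2")          # logical block 7 remapped; b"v1" is now garbage
assert dev.read(7) == b"v2"
```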

Reducing File System Tail Latencies with Chopper by Jun He et al. Krste Asanovic's keynote at the last FAST stressed the importance of suppressing tail latencies in large systems. This paper described ways to exercise the file system to collect data on tail latencies, and to analyse the data to understand where the latencies were coming from so as to fix their root cause. They found four problems in the ext4 block allocator that were root causes.

Skylight—A Window on Shingled Disk Operation by Abutalib Aghayev and Peter Desnoyers won the Best Paper award. One response of the drive makers to the fact that Shingled Magnetic Recording (SMR) turns hard disks from randomly writable to append-only media is Drive-Managed SMR, in which a Shingled Translation Layer (STL) hides this fact using internal buffers to make the drive interface support random writes. Placing this after the tail latency paper was a nice touch - one result of the buffering is infrequent long delays as the drive buffers are flushed! The paper is a very clear presentation of the SMR technology, the problems it poses, the techniques for implementing STLs, and their data collection techniques. These included filming the head movements with a high-speed camera through a window they installed in the drive top cover.

RAIDShield: Characterizing, Monitoring, and Proactively Protecting Against Disk Failures by Ao Ma et al shows that in EMC's environment they can effectively predict SATA disk failures by observing the reallocated sector count and, by proactively replacing drives whose counts exceed a threshold, greatly reduce RAID failures. This is of considerable importance in improving the reliability of disk-based archives.
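
As a sketch of that policy, proactive replacement reduces to a simple filter over observed SMART counts. The threshold and drive data below are hypothetical illustrations, not EMC's actual values:

```python
# Hypothetical sketch of threshold-based proactive drive replacement;
# the threshold and the input data are illustrative only.

REALLOCATED_SECTOR_THRESHOLD = 100  # illustrative value, not EMC's

def drives_to_replace(counts):
    """Flag drives whose reallocated sector count exceeds the threshold."""
    return [drive for drive, count in counts.items()
            if count > REALLOCATED_SECTOR_THRESHOLD]

# Reallocated sector counts, e.g. as read from SMART attribute 5.
observed = {"disk01": 3, "disk02": 412, "disk03": 87}
print(drives_to_replace(observed))  # ['disk02']
```
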
Work-In-Progress talks and posters

Building Native Erasure Coding Support in HDFS by Zhe Zhang et al - this WIP described work to rebuild the framework underlying HDFS so that flexible choices can be made between replication and erasure coding, between contiguous and striped data layout, and between erasure codes.

Changing the Redundancy Paradigm: Challenges of Building an Entangled Storage by Verónica Estrada Galiñanes and Pascal Felber - this WIP updated work published earlier in Helical Entanglement Codes: An Efficient Approach for Designing Robust Distributed Storage Systems. This is an alternative to erasure codes for efficiently increasing the robustness of stored data. Instead of adding parity blocks, they entangle incoming blocks with previously stored blocks:
To upload a piece of data to the system, a client must first download some existing blocks ... and combine them with the new data using a simple exclusive-or (XOR) operation. The combined blocks are then uploaded to different servers, whereas the original data is not stored at all. The newly uploaded blocks will be subsequently used in combination with future blocks, hence creating intricate dependencies that provide strong durability properties. The original piece of data can be reconstructed in several ways by combining different pairs of blocks stored in the system. These blocks can themselves be repaired by recursively following the dependency chain.

It is an interesting idea that, at data center scale, is claimed to provide very impressive fault-tolerance for archival data.
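
A toy illustration of the XOR mechanics in that quote (a simplification with two parent blocks, not the paper's actual helical entanglement layout):

```python
# Toy sketch of XOR entanglement: the new block is never stored,
# only its combinations with previously stored blocks.

def xor(a, b):
    """Bitwise XOR of two equal-length blocks."""
    return bytes(x ^ y for x, y in zip(a, b))

p = bytes([0x11] * 4)   # an existing block already in the system
q = bytes([0x2A] * 4)   # another existing block

d = b"DATA"             # new data block to upload
stored_1 = xor(d, p)    # uploaded to one server
stored_2 = xor(d, q)    # uploaded to a different server
# d itself is not stored at all.

# The original data can be reconstructed in several ways:
assert xor(stored_1, p) == d
assert xor(stored_2, q) == d
```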

OCLC Dev Network: Interlibrary Loan Policy Directory Maintenance February 21

planet code4lib - Fri, 2015-02-20 15:00

The Interlibrary Loan Policy Directory will be updated on February 21st.

Hydra Project: University of Alberta becomes a Hydra partner

planet code4lib - Fri, 2015-02-20 14:46

We are delighted to announce that the University of Alberta has become the latest formal Hydra Partner.  The University of Alberta has well over a decade of experience in large-scale digitization and repository projects, and has a strong team of librarians, developers, data curators and other experts migrating their existing systems to what they are calling “Hydra North.”

In their Letter of Intent, the University of Alberta says that they are committed to using their local needs as pathways to contribute to the Hydra community. Their primary areas of focus in this will be research data management, digital archives, and highly scalable object storage.

Welcome, University of Alberta!

DuraSpace News: OpenBU Adopts @mire's Managed Hosting

planet code4lib - Fri, 2015-02-20 00:00

By Ignace Deroost, @mire  

DuraSpace News: Play to Grow Your DSpace Development Skills

planet code4lib - Fri, 2015-02-20 00:00

From Ignace Deroost, @mire  When looking at the Github language statistics for the DSpace project, one could easily assume that a solid background in Java is all it takes to tackle most DSpace development challenges.

District Dispatch: Education and school library legislation is heating up

planet code4lib - Thu, 2015-02-19 22:01

It’s record cold in D.C., but we’re busy meeting with Senate staffers to promote school libraries. Both U.S. Senate Committee on Health, Education, Labor, and Pensions (HELP) Chairman Sen. Lamar Alexander (R-TN) and U.S. House Education and the Workforce Committee Chairman John Kline (R-MN) have committed to passing a reauthorization bill for the Elementary and Secondary Education Act (ESEA). In late January, Sen. Alexander released his discussion draft and received a lot of pushback from the education community, including school libraries, because libraries were not well integrated into the legislation; there was no acknowledgement of the importance of effective school library programs. He declared that the Committee would pass the bill out of committee in the last week of February.

Tell Sen. Lamar Alexander to include school library programs in ESEA reauthorization. (Photo by DOE/Ken Shipp)

Sen. Alexander then met with HELP Committee Ranking Member Sen. Patty Murray, and they decided to make Sen. Alexander’s bill more bipartisan, which will take some time; the committee is now pushing the markup to March. The House, however, passed its bill out of committee, with no amendments for school libraries adopted.

Library advocates are calling their Senators about the SKILLS Act to see how much can be included for effective school library programs. This legislation has been hard to pass; Congress has been trying since 2006 and hasn’t completed it yet, so stay tuned. To learn more about ESEA legislative activities this Congress, read up on the SKILLS Act.

Take action for school library funding now!

The post Education and school library legislation is heating up appeared first on District Dispatch.

District Dispatch: ALA joins lengthy list of groups calling for balanced deficit reduction

planet code4lib - Thu, 2015-02-19 18:09

In 2013, the Bipartisan Budget Act negotiated by Representative Paul Ryan (R-WI) and Senator Patty Murray (D-WA) provided partial, temporary relief from sequestration. With the return of full sequestration in 2016, the American Library Association (ALA) is collaborating with NDD United, an alliance of organizations working together to protect nondefense discretionary funding, to renew efforts to bring an end to sequestration.

Today, ALA joined NDD United and more than 2,100 organizations from across all sectors of the economy and society to urge Congress and President Obama to work together to end sequestration. The letter (pdf) emphasizes (1) the importance of nondefense discretionary (NDD) programs, (2) the harmful effects of budget cuts to date, and (3) the equal importance of both defense and nondefense programs in America’s security at home and abroad, and thus the need for equal sequestration relief.

Sequestration cuts had a significant impact on federal library programs. For example, school libraries, already suffering from budget cuts, saw a 12.5 percent cut to Innovative Approaches to Literacy, making less grant money available for low-income school libraries. LSTA funding was reduced by nearly $10 million, which reduced libraries’ ability to provide services for education, employment and entrepreneurship, community engagement, and individual empowerment.

NDD United published “Faces of Austerity” in 2013.

Cuts to date have had significant impacts on the lives of Americans, as demonstrated in NDD United’s 2013 report “Faces of Austerity: How Budget Cuts Make Us Sicker, Poorer, and Less Secure (pdf).” Deficit reduction measures enacted since 2010 have come overwhelmingly from spending cuts, with a ratio of spending cuts to revenue increases far beyond that recommended by bipartisan groups of experts. And there is bipartisan agreement that sequestration is bad policy and ultimately hurts our nation. However, so far Congress and the President have not been able to agree on other deficit reduction measures to replace the damaging cuts. As work begins on the 2016 budget, it is critical that Congress and the President find a replacement for sequestration to allow the government to keep making appropriate investments in Americans.

The post ALA joins lengthy list of groups calling for balanced deficit reduction appeared first on District Dispatch.

HangingTogether: The Five Stages of Code4Lib

planet code4lib - Thu, 2015-02-19 16:22

View of the Willamette River and Mount Hood from Downtown Portland OR

I had the good fortune to attend the Code4Lib 2015 conference in Portland OR last week.  It was a great event as usual, but it’s an event that I don’t always get to attend in person.  Does anyone else go through these five stages during the conference?  Right, me neither.

  1. I’m very familiar with all current technologies, but let’s see what others are up to.
  2. Oh, wait, it turns out that I don’t know anything and don’t belong here.
  3. Then again, I understood that last presentation and could totally do what they did.
  4. So now I need to throw out all my code and rewrite my apps using that framework I just heard about for the first time.
  5. I’m heading over to the Multnomah Whisk{e}y Library, anybody else interested?

About Bruce Washburn


DPLA: Family Bible records as genealogical resources

planet code4lib - Thu, 2015-02-19 15:45

Family tree from the Bullard family Bible records. Courtesy of the State Archives of North Carolina via the North Carolina Digital Heritage Center.

Interested in using DPLA to do family research, but aren’t sure where to start? Consider the family Bible. There are two large family Bible collections in DPLA—over 2,100 (transcribed) from the North Carolina Department of Cultural Resources, and another 90 from the South Carolina Digital Library. They’re filled with rich information about family connections and provide insight into how people of the American South lived and died, mainly during the 18th and 19th centuries.

Prior to October 1913 in North Carolina, and January 1915 in South Carolina, vital records (birth and death, specifically) were not documented at the state level. Some cities and counties kept official records before then, and in other cases births and deaths were documented—when at all—by churches or families. Private birth, death, and marriage events were most often recorded in family Bibles, which have become rich resources for genealogists in search of early vital records.

Family Bibles are Bibles passed down from one generation of relatives to the next. In some cases, such as the 1856 version held by the Hardison family, the Bible had pages dedicated to recording important events. In others, the inside covers or page margins were used to document births, deaths, and marriages. The earliest recorded date in a family Bible in DPLA is the birth of John Bullard in 1485.

Not only do family Bibles record the dates and names of those born, died, or married, but these valuable resources may identify where an event took place as well. Oftentimes, based on the way in which the event was recorded, the reader can sense the joy or heartache the recorder felt when they inscribed it in the Bible (for example, see the Jordan family Bible, page 8). You’ll even find poetry, schoolwork, correspondence, news clippings, and scribbles in family Bibles that provide insight into a family’s private life that might otherwise be lost (for examples, see the Abraham Darden, Gladney, and Henry Billings family Bibles).

Slave list, Horton family Bible records. Courtesy of the State Archives of North Carolina via the North Carolina Digital Heritage Center.

Family Bibles—especially those from the southern US—may be of particular interest to African American genealogists whose ancestry trails often go cold prior to the Civil War. Before the 1860s, there is little documentary evidence that ancestors even existed beyond first names and estimated ages in bills of sale, wills, or property lists produced during slavery. Family Bibles are some of the only documents that contain the names of slaves, and in rare cases their ages, birthdates, and parentage.

A search on the subject term “Bible Records AND African Americans,” in the collection from the North Carolina Department of Cultural Resources, returns a set of 142 North Carolina family Bibles that contain at least one documented slave name. In a few cases, the list can extend to ten or more (for example, Simmons Family Bible, page 4). This information enables African American genealogists to begin to trace their ancestry to a place and time in history.

Because African Americans are listed among the slaveholding family’s names, it can sometimes be difficult to discern which are family members and which are their slaves, so some care is required when working with these records. Generally, slaves are listed without last names (for example, see page 7 of the Horton Family Bible).

Whether you are a family researcher or are simply interested in American history, the family Bibles from North and South Carolina will be of great interest. They tell deeply personal stories and expose a rich history, hidden in the private collections of American citizens, that reminds us that all history is truly local.

 

Featured image credit: Detail from page 2 of the Debnam Family Bible Records. Courtesy of the State Archives of North Carolina via the North Carolina Digital Heritage Center.

All written content on this blog is made available under a Creative Commons Attribution 4.0 International License. All images found on this blog are available under the specific license(s) attributed to them, unless otherwise noted.

District Dispatch: Tweet questions about fair use and media resources

planet code4lib - Thu, 2015-02-19 15:11

Next week is Fair Use Week so let’s celebrate with a copyright tweetchat on Twitter. On February 25th from 3:00 to 4:00 p.m. (Eastern), legal expert Brandon Butler will be our primary “chatter” on fair use.

There are few specific copyright exceptions that libraries and educational institutions can rely on that deal specifically with media, so reliance on fair use is often the only option for limiting copyright when necessary. The wide array of media formats, both analog and digital, the widespread availability of media content, the importance of media in teaching and research, and advances in computer technologies and digital networks were all unheard of in the 1960s and 1970s, when Congress drafted the current copyright law.

But Congress recognized that a flexible exception like fair use would be an important user exception, especially in times of dramatic change. Fair use can address the unexpected copyright situations that will occur in the future. Particularly with media, it’s a whole new world.

The tweetchat will address concerns like the following:

  • Can I make a digital copy of this video?
  • When is a public performance public?
  • When can I break digital rights technology on DVDs?
  • Is the auditorium a classroom?
  • How can libraries preserve born-digital works acquired via a license agreement?
  • And my favorite: What about YouTube? What can we do with YouTube?

Ask Brandon Butler your media question. Participate in the Twitter tweetchat by using #videofairuse on February 25, 2015, from 3:00 to 4:00 p.m. EST.

Brandon Butler has plenty of experience with fair use. He is a Practitioner-in-Residence at American University’s Washington College of Law, where he supervises student attorneys in the Glushko-Samuelson Intellectual Property Law Clinic and teaches about copyright and fair use. Brandon is the co-facilitator, with Peter Jaszi and Patricia Aufderheide, of the Code of Best Practices in Fair Use for Academic and Research Libraries, a handy guide to thinking clearly about fair use published by the Association of Research Libraries and endorsed by all the major library associations, including the American Library Association (ALA).

Special thanks to Laura Jenemann for planning this event. Laura is Media Librarian and Liaison Librarian, Film Studies and Dance, at George Mason University, VA. She is also the current Chair of ALA’s Video Round Table.

The post Tweet questions about fair use and media resources appeared first on District Dispatch.

LITA: Tools for Creating & Sharing Slide Decks

planet code4lib - Thu, 2015-02-19 13:00

Lately I’ve taken to peppering my Twitter network with random questions. Sometimes my questions go unanswered but other times I get lively and helpful responses. Such was the case when I asked how my colleagues share their slide decks.

Figuring out how to share my slide decks has been one of those things that consistently falls to the bottom of my to-do list. It’s important to me to do so because it means I can share my ideas beyond the very brief moment in time that I’m presenting them, allowing people to reuse and adapt my content. Now that I’m hooked on the GTD system using Trello, though, I said to myself, “hey girl, why don’t you move this from the someday/maybe list and actually make it actionable.” So I did.

Here’s my dilemma. When I was a library school student I began using SlideShare. There are a lot of great things about it – it’s free, it’s popular, and there are a lot of integrations. However… I’m just not feeling the look of it anymore. I don’t think the design has been updated in years, leaving it cluttered and outdated. I’ll be the first to admit that I’m snobby when it comes to this sort of thing. I also hate that I can’t reorder slide decks once they’re uploaded. I would like my decks to be listed in some semblance of chronological order, but to get that I have to upload them in reverse order. It’s just crazy annoying how little control you have over the final arrangement and look of the slides.

So now that you’ve got the backstory, this is where the Twitter wisdom comes in. As it turns out, I learned about more than slide sharing platforms – I also found out about some nifty ways to create slide decks that made me feel like I’ve been living under a rock for the past few years. Here are some thoughts on HaikuDeck, HTMLDecks, and SpeakerDeck.

HaikuDeck

screenshot: plenty of styling options + formats

This is really sleek and fun. You can create an account for free (beta version) and pull something together quickly. Based on the slide types HaikuDeck provides you with, you’re shepherded down a delightfully minimalistic path – you can of course create densely overloaded slides but it’s a little harder than normal. Because this is something I’m constantly working on, I am appreciative.

I haven’t yet created and presented using a slide deck from HaikuDeck but I’m going to make that a goal for this spring. However, you can see a quick little test slide deck here. I made it in about two minutes and it has absolutely no meaningful content, it’s just meant to give you an easy visual of one of their templates. (Incentive: make it through all three slides and you’ll find a picture of a giant cat.)

One thing to keep in mind is that you’ll want to do all of your editing within HaikuDeck. If you export to Powerpoint, nothing will be editable because each slide exports as an image. This could be problematic if you needed to do last minute edits and didn’t have an internet connection. Also, beware: at least one user has shared that it ate her slides.

HTMLDecks

screenshot: handy syntax chart + space to build, side-by-side

This is a simple way to build a basic slide deck using HTML. I don’t think it could get any simpler and I’m actually struggling with what to write that would be helpful for you to know about it. To expand what you can do, learn more about Markdown.
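
As a sketch of what that can look like, assuming the common convention of `---` as a slide separator (check HTMLDecks’ own syntax chart for the exact rules), a whole deck can be a few lines of Markdown:

```markdown
# Tools for Sharing Slide Decks

---

## Why share at all?

- Ideas outlive the talk itself
- Others can reuse and adapt your content

---

## Questions?
```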

From what I can tell, there is no export feature – you do need to pull up your slide deck in a browser and present from there. Again, this makes me a little nervous given the unreliable nature of some internet connections.

I see the appeal of HTMLDecks, though I’m not sure it’s for me. (Anyone want to change my mind by pointing to your awesome slide deck? Show me in the comments!)

SpeakerDeck

screenshot: clean + simple interface for uploading your slides

I was so dejected when I looked at my sad SlideShare account. SpeakerDeck renewed my faith. This is the one for me!

What’s not to love? SpeakerDeck has the clean look I’ve craved and it automatically orders your slides based on the date you gave your presentation, most recent slides listed toward the top. Check out my profile here to see all of this in action.

One drawback is that by making the jump to SpeakerDeck I lost the number of views that I had accumulated over the years. On the same note, SpeakerDeck doesn’t integrate with my ImpactStory profile in the same way that SlideShare does. I haven’t published much so my main stats come from my slide decks. Not sure what I’m going to do about that yet, beyond lobby the lovely folks at ImpactStory to add SpeakerDeck integration.

One thing I would like to see a slide sharing platform implement is shared ownership of slides. I asked SpeakerDeck about whether they offered this functionality; they don’t at this time. You see, I give a lot of presentations on behalf of a group I lead, Research Data Services (RDS). Late last year I created a SlideShare account for RDS. I would love nothing more than to be able to link my RDS slide decks to my personal account so that they show up in both accounts.

Lastly, I would be remiss as a data management evangelizer if I didn’t note that placing the sole copies of your slides (or any files) on a web service is an incredibly bad idea. It’s akin to teenagers now keeping their photos on Facebook or Instagram and deleting the originals, a tale so sad it could keep me up at night. A better idea is to keep two copies of your final slide deck: one saved as an editable file and the other saved as a PDF. Then upload a copy of the PDF to your slide sharing platform. (Sidenote: I haven’t always been as diligent about keeping track of these files. They’ve lived in various versions of google drive, hard drives, and been saved as email attachments… basically all the bad things that I am employed to caution against. Lesson? We are all vulnerable to the slow creep of many versions in many places but it’s never too late to stop the digital hoarding.)

How do you share your slide decks? Do you have any other platforms, tools, or tips to share with me? Do tell.
