We are pleased to announce that booking for Hydra Connect 2016 is now open. Booking details, along with information on the conference hotel and the preferential rate we have arranged there, can be found at the Hydra Connect 2016 wiki page.
Get your Early Bird Access 2016 tickets now!
Early Bird sales will end Monday, July 11th. Don't miss out on this amazing deal. Full conference tickets include admission to the hackfest, two and a half days of our amazing single-stream conference, and a half-day workshop on the last day.
Note: HST rates in New Brunswick go up to 15% on July 1st, so DON'T WAIT. A 2% savings means more lobster in your carry-on for the trip home.
It is all you can eat for one amazingly low price. And we mean that literally! Prices include continental breakfast and unlimited coffee at the Hackfest, Hackfest Social hors d'oeuvres and light fare, the Opening Reception at the Beaverbrook Art Gallery, and full breakfast and lunch each day during the regular conference.
Still unsure? Check out our amazing line-up of speakers and keynotes.
Speakers should take advantage of the special speaker’s rate which also closes on July 11th.
We look forward to seeing you under beautiful Fall colours in Fredericton this October.
Learn about how to use Python to consume linked data from a specific graph URL.
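A minimal sketch of the idea, using only the Python standard library to walk a small JSON-LD document; the sample graph, identifiers, and vocabulary here are placeholders (a real workflow would fetch the document from the graph URL with urllib or requests, or use a library such as rdflib):

```python
import json

# A tiny JSON-LD document standing in for one fetched from a graph URL.
JSONLD = """
{
  "@context": {"name": "http://schema.org/name",
               "memberOf": {"@id": "http://schema.org/memberOf", "@type": "@id"}},
  "@id": "http://example.org/people/1",
  "name": "Ada Lovelace",
  "memberOf": "http://example.org/orgs/rs"
}
"""

def triples(doc):
    """Yield (subject, predicate, object) from a flat JSON-LD node,
    expanding each term to its full IRI via the @context."""
    ctx = doc.get("@context", {})
    subject = doc.get("@id")
    for key, value in doc.items():
        if key.startswith("@"):
            continue  # skip JSON-LD keywords
        term = ctx.get(key, key)
        predicate = term["@id"] if isinstance(term, dict) else term
        yield (subject, predicate, value)

doc = json.loads(JSONLD)
for s, p, o in triples(doc):
    print(s, p, o)
```

This handles only the simplest flat case; nested nodes, lists, and language maps need a full JSON-LD processor.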
Last updated June 9, 2016. Created by Peter Murray on June 9, 2016.
Islandora Camp will be visiting Kansas City, MO this Fall. October 12 - 14, you can join us at the University of Missouri-Kansas City for three days of Islandora sessions, workshops, and community presentations.
Camp consists of three days of Islandora content:
Lucidworks is pleased to announce the release of Fusion 2.4 (download, release notes, press release). This new release features several key enhancements allowing for the rapid building and deployment of data-driven experiences.

Index Pipeline Simulator
The Index Pipeline Simulator provides a powerful interface for configuring index pipelines and previewing pipeline output with a sample data set before they are applied to the entire data source. This allows for easy debugging of index pipeline output in a sandbox environment.

Time Series Partitioning
To allow for easy management and querying of time-series data, Fusion collections can now be configured by time window. Using a configurable set of Solr collections, each time series collection stores data for that given time window. Time-based queries are automatically directed to the appropriate partition.

SAML Support
Fusion now supports version 2.0 of the Security Assertion Markup Language (SAML), allowing businesses to use existing authentication identities for more finely-tuned and flexible access control.

Spark Integration Updates
Our new Spark Jobs API allows for the management and configuration of Spark jobs from Fusion, as well as retrieving cluster information.

Connector Enhancements
The Box.com connector now indexes metadata and supports authentication using OAuth 2, via the JWT Auth App Service.
The Jive connector now supports indexing of Jive groups and places.
All of the above come alongside faster pipeline stage processing and improved diagnostics for investigating deployment issues. Fusion 2.4 ships with Apache Solr 5.5.1 and Apache Spark 1.6.1, and is fully supported for production deployments.
Lucidworks Fusion 2.4 is available today. For more information and to download the product, please visit https://lucidworks.com/products/fusion/.
The Senate Appropriations Committee today delivered good news for libraries by increasing funding for LSTA Grants to States and National Leadership Grants to Libraries, while also providing level funding for Innovative Approaches to Literacy (IAL). The Labor, Health and Human Services, Education and Related Agencies Appropriations Subcommittee approved the bill just two days ago with no amendments or controversial policy riders.
The Grants to States program, which the President’s budget proposed cutting by $950,000, will instead be increased in the Senate bill by $314,000, raising its total funding to $156.1 million for FY2017. That reflects an increase of over $1.25 million from the President’s request. National Leadership Grants will also receive a $314,000 increase, bringing its total to $13.4 million. Overall, the Institute of Museum and Library Services (IMLS) will receive a $1 million increase to $231 million for FY2017.
Innovative Approaches to Literacy, just authorized in last year’s Every Student Succeeds Act (ESSA), will receive level funding in the Senate bill of $27 million for FY2017. One half of IAL funding is reserved for school library grants with the remaining reserved for non-profits.
ALA acknowledges the leadership of Senator Jack Reed (D-RI), and the deep commitment to library funding of many other key Senators, including Appropriations Committee Chairman Thad Cochran (R-MS), Subcommittee Chairman Roy Blunt (R-MO), Subcommittee Ranking Member Patty Murray (D-WA) and Senator Susan Collins (R-ME). ALA members from Maine, Mississippi, Missouri, Rhode Island, Washington are urged to send messages of thanks to these Senate offices.
The House Appropriations Committee has not yet announced a timetable for moving its Labor, Health and Human Services, Education and Related Agencies FY2017 funding bill. Despite the “no drama” Senate Subcommittee’s markup earlier this week, the overall Appropriations outlook remains very much in doubt. Few Washington insiders are expecting all 12 appropriations bills to pass the House and Senate. Rather, many are expecting one or more “Continuing Resolutions” to keep the government open beyond the October 1 start of the Fiscal Year. A messy “omnibus” spending package providing funding for numerous agencies also is expected to be considered later this year. A government shutdown, however, is not anticipated.
Libraries are working with government agencies and nonprofits to connect people to the digital world. From the U.S. Department of Housing & Urban Development’s ConnectHome effort to the Federal Communications Commission’s Lifeline Program to citywide digital inclusion initiatives, libraries are playing leadership roles in connecting low-income Americans online. Policy and library leaders will discuss public policy options and share exemplars of how libraries and allies are expanding digital opportunities at the 2016 American Library Association (ALA) Annual Conference.
During the conference session “Addressing Digital Disconnect for Low-Income Americans,” leaders will explore efforts to connect disadvantaged Americans to the digital world. The session takes place on Saturday, June 25, 2016, 4:30-5:30 p.m. in the Orange County Convention Center in Room W103A.
Session speakers include Veronica Creech, chief programs officer of EveryoneOn; Felton Thomas, director of the Cleveland Public Library and president-elect of the Public Library Association (PLA); and Lauren Wilson, legal advisor to the Chief of the Consumer and Governmental Affairs Bureau at the Federal Communications Commission (FCC). Larra Clark, deputy director of the ALA Office for Information Technology Policy, will moderate the program.
The post What’s working to connect all Americans to the digital world? appeared first on District Dispatch.
The Senate Rules Committee voted unanimously this afternoon to recommend that the full Senate approve the nomination of Dr. Carla Hayden to serve as the nation's next Librarian of Congress: the first woman, the first African American, and just the fourteenth person in history to hold the office. As the Committee's vote was announced, ALA launched a large-scale grassroots and social media campaign to encourage all Senators to support her confirmation, and to urge Senate Majority Leader Mitch McConnell (R-KY) to schedule a Senate vote on her nomination immediately.
In a statement released immediately after the Committee’s vote, ALA president Sari Feldman said: “Once confirmed, she will be the perfect Librarian to pilot the Library of Congress fully into the 21st century, transforming it again into the social and cultural engine of progress and democracy for all Americans that it was meant to be.” Feldman then called upon Dr. Hayden’s supporters “in every corner of the nation” to use “ALA’s Legislative Action Center to contact every Senator — whether by email, tweet or phone — with this simple message: Please confirm Dr. Carla Hayden now!” Given the Rules Committee’s strong endorsement, and the absence of any public opposition to her nomination, that vote easily could come before the Senate takes its extended summer recess in mid-July, and quite possibly before the fast-approaching Independence Day recess beginning July 1st. That means there’s no time to lose to show your support for librarianship and Dr. Hayden.
Fortunately, contacting your two U.S. Senators by emailing, tweeting or phoning them couldn’t be easier. Just access ALA’s Legislative Action Center, choose your preferred method of communicating, and follow the few easy prompts. (You’ll also find more background on Dr. Hayden and the history of the Librarian’s position at the Action Center, and here, if you like.)
Once confirmed, Dr. Hayden — a past-President of ALA — will be the first professional librarian to be named Librarian of Congress in over 60 years. Don’t hide your pride! Please, take action now — and encourage your friends and colleagues to do the same — to make that a historic reality very soon.
Islandora CLAW Community Sprint 08 is coming up at the end of the month, and we want you to join in. New(ish) sprinter Ben Rosner gave us an inside look at what it's like to start working on CLAW as a developer and we have plenty of tasks that an interested newcomer can tackle for Sprint 08. Not a developer? No problem. We've got an extensive documentation ticket to build up written docs from our existing CLAW lessons videos - a great opportunity to learn more while you're creating something that will help other Islandora users.
The sign-up sheet is here. We'll have a sprint kick-off meeting on June 20 to sort out who is going to do what, and find everyone a job that fits their skills and interests.
This May, Managing Director Adam Ziegler was a guest on the Lawyerist podcast, discussing recent goings-on at the Library Innovation Lab.
Sam Glover and Adam discuss the future of law, its challenges and how the Innovation Lab endeavors to address these. Perma.cc is chiefly discussed, along with H2O and the Free the Law project.
About five and a half years ago, I was sitting in a big room in conventionland (San Diego, but who’s counting) with my class of Emerging Leaders, as we brainstormed about the qualities of an excellent leader.
Someone was writing those qualities up on a flip chart and, gosh, would I have liked to work for flip chart lady. She was so perceptive and thoughtful and strategic and empathetic and not bad at anything and just great. Way cooler than me. Everyone would like to work for flip chart lady.
And then one of my brainstorming colleagues said, you know, there’s one quality we haven’t put up there, because it’s not actually a core competency for leaders, and that’s intelligence. And the room nodded in agreement, because she was right. You probably can’t be an effective leader if you’re genuinely dumb, but all other things being equal, being smarter doesn’t actually make you a better leader. And we’ve all met really smart people who were disastrous leaders; intelligence alone simply does not confer the needed skills. Fundamentally, if “leader” were a D&D class, its prime requisite would not be INT.
The whole room nodded along with her while I thought, well crap, that’s the only thing I’ve always been good at.
So I was in a funk for a while, mulling that over. And eventually decided, well, people I respect put me in this room; I’m not going to tell them they’re wrong. I’m going to find a way to make it work. I’m going to look for the situations where the skills I have can make a difference, where my weaknesses don’t count against me too much. There’s not a shortage of situations in the world that need more leadership; I’ll just have to look for the ones where the leader that’s needed can be me. They won’t be the same situations where the people to my left and right will shine, and that’s okay. And if I’m not flip chart lady, if I’m missing half her strengths and I’m littered with weaknesses she doesn’t have (because she doesn’t have any)…well, as it turns out, no one is flip chart lady. We all have weaknesses. We are all somehow, if we’re leading interesting lives at all, inadequate to the tasks we set ourselves, and perhaps leadership consists largely in rising to those tasks nonetheless.
So here I am, five and a half years later, awed and humbled to be the LITA Vice-President elect. With a spreadsheet open where I’m sketching out at the Board’s request a two-year plan for the whole association, because if intelligence is the one thing you’ve always been good at, and the thing that’s needed is assimilating years’ worth of data about people and budgets and goals and strengths and weaknesses and opportunities, and transmuting that into something coherent and actionable…
Well hey. Maybe that’ll do.
Thanks for giving me the chance, everybody. I couldn’t possibly be more excited to serve such a thoughtful, creative, smart, motivated, fun, kind bunch of people. To figure out how LITA can honor your efforts and magnify your work as, together, we take a national association with near fifty years of history into its next fifty years. I can’t be flip chart lady for you (no one can), but I am spreadsheet lady, and I’m here for you. Let’s rock.
- The Web’s Creator Looks to Reinvent It
- The New App Store: Subscription Pricing, Faster Approvals, and Search Ads
- E.W. Scripps Buys Podcast Company Stitcher
- FBI wants access to Internet browser history without warrant in terrorism and spy cases
- The Voice UI has gone Mainstream
- Google Says Page speed ranking factor to use mobile page speed for mobile sites in upcoming months
- NPM Introduces Hooks
- Versioning, Licensing, and Sketch 4.0
- Instagram’s new algorithm is live
The post Creator of the Internet to Reinvent It and Fails to See the Irony appeared first on LibUX.
… I really do think it’s time to reopen the question of formalizing Code4Lib IF ONLY FOR THE PURPOSES OF BEING THE FIDUCIARY AGENT for the annual conference.
I agree — we need to discuss this. The annual main conference has grown from a hundred or so in 2006 to 440 in 2016. Given the notorious rush of folks racing to register to attend each fall, it is not unreasonable to think that a conference in the right location that offered 750 seats — or even 1,000 — would still sell out. There are also over a dozen regional Code4Lib groups that have held events over the years.
With more attendees comes greater responsibilities — and greater financial commitments. Furthermore, over the years the bar has (appropriately) been raised on what is counted as the minimum responsibilities of the conference organizers. It is no longer enough to arrange to keep the bandwidth high, the latency low, and the beer flowing. A conference host that does not consider accessibility and representation is not living up to what Code4Lib qua group of thoughtful GLAM tech people should be; a host that does not take attendee safety and the code of conduct seriously is being dangerously irresponsible.
Running a conference or meetup that's larger than what can fit in your employer's conference room takes money — and the costs scale faster than linearly. For recent Code4Lib conferences, the budgets have been in the low-to-middle six figures.
That's a lot of money — and a lot of antacids consumed until the hotel and/or convention center minimums are met. The Code4Lib community has been incredibly lucky that a number of people have voluntarily chosen to take this stress on — and that a number of institutions have chosen to act as fiscal hosts and incur the risk of large payouts if a conference were to collapse.
To disclose: I am a member of the committee that worked on the erstwhile bid to host the 2017 conference in Chattanooga. I think we made the right decision to suspend our work; circumstances are such that many attendees would be faced with the prospect of traveling to a state whose legislature is actively trying to make it more dangerous to be there.
However, the question of building or finding a long-term fiscal host for the annual Code4Lib conference must be considered separately from the fate of the 2017 Chattanooga bid. Indeed, it should have been discussed before conference hosts found themselves transferring five-figure sums to the next year’s host.
Of course, one option is to scale back and cease attempting to organize a big international conference unless some big-enough institution happens to have the itch to backstop one. There is a lot of life in the regional meetings, and, of course, many, many people who will never get funding to attend a national conference but who could attend a regional one.
But I find stepping back like that unsatisfying. Collectively, the Code4Lib community has built an annual tradition of excellent conferences. Furthermore, those conferences have gotten better (and bigger) over the years without losing one of the essences of Code4Lib: that any person who cares to share something neat about GLAM technology can have the respectful attention of their peers. In fact, the Code4Lib community has gotten better — by doing a lot of hard work — about truly meaning "any person."
Is Code4Lib a "do-ocracy"? Loaded question, that. But this go-around, there seem to be a number of people who are interested in doing something to keep the conference going in the long run. I feel we should not let vague concerns about "too much formality" or (gasp! horrors!) "too much library organization" stop the folks who are interested from making a serious go of it.
We may find out that forming a new non-profit is too much uncompensated effort. We may find out that we can’t find a suitable umbrella organization to join. Or we may find out that we can keep the conference going on a sounder fiscal basis by doing the leg-work — and thereby free up some people’s time to hack on cool stuff without having to pop a bunch of Maalox every winter.
But there's one argument against "formalizing" in particular that I object to. Quoting Eric Lease Morgan:
In the spirit of open source software and open access publishing, I suggest we earnestly try to practice DIY — do it yourself — before other types of formalization be put into place.
In the spirit of open source? OK, clearly that means that we should immediately form a non-profit foundation that can sustain nearly USD 16 million in annual expenses. Too ambitious? Let’s settle for just about a million in annual expenses.
I’m not, of course, seriously suggesting that Code4Lib aim to form a foundation that’s remotely in the same league as the Apache Software Foundation or the Mozilla Foundation. Nor do I think Code4Lib needs to become another LITA — we’ve already got one of those (though I am proud, and privileged, to count myself a member of both). For that matter, I do think it is possible for a project or group effort to prematurely spend too much time adopting the trappings of formal organizational structure and thus forget to actually do something.
But the sort of “DIY” (and have fun unpacking that!) mode that Morgan is suggesting is not the only viable method of “open source” organization. Sometimes open source projects get bigger. When that happens, the organizational structure always changes; it’s better if that change is done openly.
The Code4Lib community doesn’t have to grow larger; it doesn’t have to keep running a big annual conference. But if we do choose to do that — let’s do it right.
This is a guest post by Julia Kim, archivist in the American Folklife Center at the Library of Congress.
The American Folklife Center just celebrated 40 years since it was founded by Congressional mandate. But its origins far predate 1976; its earlier incarnation was the Archive of Folk Song, which was founded in 1928 and was part of the Library's Music Division.
Its collections included many early analog audio recordings, like the Alan Lomax Collection and the Federal Cylinder Project’s Native and Indigenous American recordings. [See also the CulturalSurvival.org story.]
While the Library is well known for its work with different tools, guidelines and recommendations, less is known about its systems and workflows. I’ve been asked about my work in these areas and though I’ve only been on staff a relatively short while, I’d like to share a little about digital preservation at AFC.
As part of the Nation's Library, AFC has a mandate to collect in the areas of "traditional expressive culture." Of its digital collections, AFC maintains ongoing preservation of 200 TB of content, but we project a 50% increase of approximately 100 TB of newly digitized or born-digital content this year. In our last fiscal year, the department's acquisitions were 96% digital, spanning over 45 collections. StoryCorps' 2015 accessions alone amounted to approximately 50,000 files (8 TB).
It has been a tremendous challenge to understand AFC’s past strategies with an already mature — but largely dark — repository, as well as how to refine them with incoming content. We have not yet had to systemically migrate large quantities of born-digital files but preserving the previously accessioned collections is a major challenge. More often than not, AFC processors apply the terms migration and remediation to older servers and databases rather than to files. This is an inevitable result of the growing maturity of our digital collections as well as others within the field of Digital Preservation.
The increasing amount of digital content also means that instead of relegating workflows to a single technical person (me), digital content is now handled by most of the archival processors in the division. AFC staff now regularly use a command line interface and understand how to navigate our digital repository. This is no small feat.
Similarly, staff training in core concepts is also ongoing. A common misconception is that ingest is a singular action when, in its fullest definition, it’s an abstraction that encompasses many actions, actors and systems. Ingest is one of the core functions in the OAIS framework. The Digital Preservation Coalition defines ingest as “the process of turning a Submission Information Package into an Archival Information Package, i.e. putting data into a digital archive.” Ingest, especially in this latter definition, can be contingent on relationships and agreements with external vendors, as well as arrangements with developers, project managers, processing staff and curators.
Transferring content is a major function of ingest and it is crucial to ensure that the many preservation actions down the line are done on authentic files. While transferring content involves taking an object into a digital repository, and it may seem to be a singular, discrete process, the transfer can involve many processes taking place over multiple systems by many different actors.
The flexibility inherent throughout the OAIS model requires systematic and clear definitions and documentation to be of any real use. This underscores the need for file verification and creating hash values at the earliest opportunity, as there is no technical ability to guarantee authenticity without receiving a checksum at production.
Ingest can then include validating the SIP, implementing quality assurance measures, extracting the metadata, inputting descriptive administrative metadata, creating and validating hash values and scanning for viruses. In our case, after establishing some intellectual control, AFC copies to linear tape before doing any significant processing and then re-copies again after any necessary renaming, reorganizing and processing.
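The "hash at the earliest opportunity, then re-verify after every copy" pattern described above can be sketched roughly as follows. The manifest shape and paths are illustrative assumptions, not AFC's actual tooling; MD5 is used only because the text mentions md5 values:

```python
import hashlib
import shutil
import tempfile
from pathlib import Path

def md5sum(path, chunk=1 << 20):
    """Stream a file through MD5 so large AV files never load fully into memory."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

def make_manifest(root):
    """Record a hash for every file at the moment of receipt (fixity baseline)."""
    return {p.relative_to(root).as_posix(): md5sum(p)
            for p in root.rglob("*") if p.is_file()}

def verify_copy(src_manifest, dst):
    """Re-hash the copied tree and compare it against the receipt manifest."""
    return make_manifest(dst) == src_manifest

# Demo with a throwaway directory standing in for a submitted SIP.
work = Path(tempfile.mkdtemp())
sip = work / "sip"; sip.mkdir()
(sip / "audio.wav").write_bytes(b"not really audio")
manifest = make_manifest(sip)       # fixity established at receipt
copy = work / "aip"
shutil.copytree(sip, copy)          # transfer to preservation storage
print(verify_copy(manifest, copy))  # True only if every copied file matches
```

The same manifest can be re-checked after each subsequent move, rename, or tape copy, which is what makes later preservation actions verifiably act on authentic files.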
Our digital preservation ecosystem relies on many commonly used open-source tools (BWF MetaEdit, MediaInfo, ExifTool, BagIt, JHOVE, Tesseract), but one key tool is our modular home-grown repository, the Content Transfer Services (CTS; see more about the development of CTS in this 2011 slide deck), which supports all of the Library of Congress, including the Copyright division and the Congressional Research Service.
CTS is primarily an inventory and transfer system but it continues to grow in capacity and it performs many ingest procedures, including validating bags upon transfer and copy, file-type validations (JHOVE2) and — with non-Mac filesystems — virus scanning. CTS allows users to track and relate copies of content across both long-term digital linear tape as well as disk-based servers used for processing and access. It is used to inventory and control access copies on other servers and spinning disks, as well as copies on ingest-specific servers and processing servers. CTS also supports workflow specifications for online access, such as optical character recognition, assessing and verifying digitization specifications and specifying sample rates for verifying quality.
Each grouping in CTS can be tracked through a chronology of PREMIS events, its metadata and its multiple copies and types. Any PREMIS event, such as a “copy,” will automatically validate the md5 hash value associated with each file, but CTS does not automatically or cyclically re-inventory and check hash values across all collections. Curators and archivists can use CTS for single files or large nested directories of files: CTS is totally agnostic. Its only requirement is that the job must have a unique name.
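A loose sketch of what a "copy" event with automatic md5 validation might look like, in the spirit of the CTS behavior described above. The event fields only gesture at PREMIS semantics; none of the names or structures here are CTS's actual schema or API:

```python
import hashlib
from datetime import datetime, timezone

def md5_bytes(data):
    return hashlib.md5(data).hexdigest()

class EventLog:
    """Minimal PREMIS-flavored event trail for one uniquely named grouping."""
    def __init__(self, grouping):
        self.grouping = grouping  # the one hard requirement: a unique job name
        self.events = []

    def record(self, event_type, outcome, detail=""):
        self.events.append({
            "type": event_type,
            "outcome": outcome,
            "detail": detail,
            "datetime": datetime.now(timezone.utc).isoformat(),
        })

def copy_with_validation(log, files, store):
    """Copy each (name, data, expected_md5) into `store`, validating fixity
    as part of the copy event rather than as a separate step."""
    for name, data, expected in files:
        if md5_bytes(data) != expected:
            log.record("copy", "failure", f"{name}: checksum mismatch")
            continue
        store[name] = data
        log.record("copy", "success", name)

# Demo: one good file, one whose recorded hash no longer matches its bytes.
log = EventLog("afc-demo-grouping")
good = b"field recording"
files = [("take1.wav", good, md5_bytes(good)),
         ("take2.wav", b"bit-rotted", "0" * 32)]
store = {}
copy_with_validation(log, files, store)
print([(e["outcome"], e["detail"]) for e in log.events])
```

Note that, as the text says of CTS, validation here happens per event; nothing in this sketch cyclically re-checks hashes across all holdings.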
CTS handles content at the file-system level. Historically, AFC files are arranged by collection in hierarchical, highly descriptive directories. These structures can indicate quality, file/content types, collection groupings and accessioning groupings. It's not unusual, for example, for an ingested SIP directory to include as many as five directory levels, with divisions based on content types. This requires specific space projections for the creation and allocation of directory structures.
Similarly, AFC relies on descriptive file-naming practices with prepended indications of a collection identifier — as well as other identifiers — to create, in most cases, unique IDs. CTS does not, however, require unique file names, just a unique name for the grouping of files. CTS, then, accepts highly hierarchical sets of files and directories but is unable to work readily at the file level. It works within curated groupings of files, with a reasonable limitation of no more than 5,000 files and 1 TB for each grouping.
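The grouping limits and prepended-identifier naming just described are simple enough to check before submitting a job; the naming pattern below is a made-up illustration of the practice, while the numeric limits come from the text:

```python
MAX_FILES = 5000       # per-grouping file-count limit stated above
MAX_BYTES = 10**12     # 1 TB per-grouping size limit stated above

def grouping_ok(file_sizes):
    """Return True if a curated grouping stays within both practical limits."""
    return len(file_sizes) <= MAX_FILES and sum(file_sizes) <= MAX_BYTES

def afc_name(collection_id, item_id, ext):
    """Build a descriptive file name with a prepended collection identifier.
    The exact pattern is hypothetical; only the prepending convention is real."""
    return f"{collection_id}_{item_id}.{ext}"

print(afc_name("afc2016001", "sr001", "wav"))
print(grouping_ok([10**9] * 800))   # 800 files of 1 GB each: within limits
```

A pre-flight check like this lets curators split an oversized accession into several uniquely named groupings before any transfer begins.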
AFC plans to regularly ingest off-site to tape storage at the National Audio-Visual Conservation Center in Culpeper, Virginia (see the PowerPoint overviews by James Snyder and Scott Rife). While most of our collections are audio and audiovisual, we don't currently send any digital content to NAVCC servers except when we request physical media to be digitized for patrons to access. We're in the midst of a year-long project to explore automating ingest to NAVCC in a way that integrates with our CTS repository systems on Capitol Hill.
This AFC-led project should support other divisions looking for similar integration and will also help pave the way to support on-site digitization and then ingest to NAVCC. The project has been fruitful in engaging conversations on different ingest requirements for NAVCC and its reliance, for example, on Merged AudioVisual Information System (MAVIS) xml records, previously used to track AFC’s movement of analog physical media to cold storage at NAVCC. AFC also relies heavily on department-created Oracle APEX databases and Access databases.
One pivotal aspect of ingest is data transfer. We receive content from providers in a variety of ways: hard drives and thumb drives sent through the mail, network transfer over cloud services like Signiant Exchange and Dropbox, and API harvesting of the StoryCorps.Me collection. Each method carries some form of risk, from network failures and outages to hard drive failures. And, of course, human error.
AFC also produces and sponsors a great deal of content, including in-house lectures and concerts and its Occupational Folklife Collections, which involve many non-archival processing staff members and individuals.
Another aspect that determines our workflows involves the division between born-digital collections and accessions versus our digitized collections meant for online access. As part of my introduction to the Library, I jumped into AFC's push to digitize and provide online access to 25 ethnographic field projects collected from 1977 to 1997 (20 TB, multi-format). AFC has just completed and published a digitized collection, the Chicago Ethnic Arts project.
These workflows can be quite distinct, but in both, the concept of "processing" is interpreted broadly. In the online-access digitization workflows, which have taken up the majority of our staff's processing time over the past six months, we must assess and perform quality-control measures on different digital specifications, as well as create and inventory derivatives at mass scale across multiple servers. These collections, which we will continue to process over the years, test the limits of existing systems.
The department quickly maxed out the server space set aside for reorganizing content, creating derivatives and running optical character recognition software. Our highly descriptive directory structures were incompatible with our online access tools and required extensive reorganization. We also found a steep learning curve for many staff working with a command-line interface, and many mistakes were not caught until much later. Later in the project, we determined that our initial vendor specifications were unsupported by some of the tools we relied on for online display. The list goes on, but the processing of these collections served as an intensive, continuous orientation to historical institutional practices.
There are many reasons for the roadblocks we encountered, and some were inevitable. At the time that some of the older AFC practices were established, CTS and other Library systems could not support our current needs. However, like many new workflows, the field-project digitization workflows are ongoing. Each of these issues required extensive meetings with stakeholders across different departments, and those conversations will continue over the coming months. These experiences have been essential in refining stakeholder roles and responsibilities, as well as expectations around the remaining 24 unprocessed ethnographic field projects. Not least of all, there is a newer shared understanding of the time, training and space needed to move, copy and transform large quantities of digitized files. Like much of digital preservation itself, this is an iterative process.
As the year winds down, priorities can soon shift to revisiting our department’s digital preservation guidelines for amendment, inventorying unclearly documented content on tape and normalizing and sorting through the primarily descriptive metadata of our digital holdings.
Additionally, AFC is focusing on re-organizing and understanding complex, inaccessible collections that are on tape. In doing so, we'll be pushing our department to focus on the areas of last year's self-audit where we are most lacking, specifically metadata. Another summer venture for us is to test and create workflows for identifying fugitive media left mixed in with paper in hybrid collections.
This summer, I'll work with interns to develop a workflow to label, catalog, migrate and copy to tape, using the Alliance of American Quilts Collection as our initial pilot. AFC has also accumulated a digital backlog of collections that have not been processed or ingested in any meaningful way during our focus on digitization workflows. These need to be attended to in the next several months.
While this is just a sampling of our current priorities, workflows, systems and tools, it should paint a picture of some of the work being done in AFC’s processing room. AFC was an early adopter of digital preservation at the Library of Congress and as its scope has expanded over the past few decades, its computer systems and workflows have matured to keep up with its needs. The American Folklife Center continues to pioneer and improve digital preservation and access to the traditional expressive culture in the United States.