Technology is ubiquitous in our society today. It’s in our classrooms, businesses and homes, on our wrists and in our pockets. As the ambit of everyday objects and activities that employ – or even require the use of – technology continues to expand, what are the consequences for children and teens? Answering this question requires careful consideration of several policy issues in the context of youth, including, but not limited to: information access, digital literacy, diversity and economic development.
Realizing the importance of this complex task, OITP recently launched a Program on Youth and Technology. At the ALA 2016 Midwinter Meeting in Boston, OITP staff will host a meeting to accelerate the program’s work. The meeting will explore such topics as computational thinking, play-based learning and educational disruption through moderated discussions and hands-on activities (including the deliberative construction of a peanut butter and jelly sandwich).
Specific questions the meeting will address (devised by OITP Youth and Technology Fellow Chris Harris) include:
- Are there factors, such as the quality of software-based interactions or the amount of parental engagement, that libraries could influence to tip the ongoing debate over children’s screen time?
- Are there developmental milestones in the psychological, logical, and brain development of children that need to be considered in relation to teaching computational thinking and the fundamentals of computer science/coding?
- Are there aspects of STEM/STEAM that are not being addressed comprehensively on which libraries could take leadership?
Do you have thoughts on these or other questions? Post them in the comment section. Are you attending Midwinter and interested in attending the meeting? Contact OITP Associate Director Marijke Visser at: email@example.com. Also, be sure to visit the program’s work-in-progress web page for more on OITP’s youth and tech work to date. It’s a living portal, so check back periodically for updates.
The following is a guest post by Mary Kidd, National Digital Stewardship Resident at New York Public Radio’s (NYPR) archive. She participates in the NDSR-NYC cohort.
My outlook on preservation issues surrounding radio archives has been deeply influenced by my work as a National Digital Stewardship Resident (NDSR) at New York Public Radio’s (NYPR) archive. Here, I have been tasked with writing a Digital Preservation Roadmap (PDF). This report will include a few things:
- A detailed overview of the current state of digital production throughout the various stations.
- A network-wide analysis of NYPR’s digital holdings to quantify the number and size of their digital assets, and track the rate of growth for both.
- Transcriptions of dozens of interviews across various departments to paint a holistic narrative of what systems NYPR staff are using, what file formats they are outputting, and how these files make their way to long-term storage.
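A network-wide analysis like the one in the second bullet boils down to walking the file systems and tallying counts and sizes. As a rough illustration only (this is a hypothetical sketch, not NYPR's actual tooling), such a scan might look like:

```python
import os
from collections import Counter

def survey(root):
    """Tally the number of files and total bytes under root, by extension."""
    counts, sizes = Counter(), Counter()
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            ext = os.path.splitext(name)[1].lower() or "(no extension)"
            try:
                size = os.path.getsize(os.path.join(dirpath, name))
            except OSError:
                continue  # skip files that vanish or are unreadable mid-scan
            counts[ext] += 1
            sizes[ext] += size
    return counts, sizes

# Rerunning the survey periodically and diffing the totals is one simple
# way to track the rate of growth mentioned above.
```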
One of the first things revealed by the staff interviews I have conducted so far is that NYPR’s definition of radio includes both the traditional “terrestrial” notion of radio (a tower beaming waves at transistors) and “new” digital formats. By digital formats, I am referring to WAV and MP3 files, live and on-demand streaming audio and video, on-the-go podcasts, and social media posts. For example, some producers are using social media platforms like Twitter to extend “on air” conversations past strictly allotted time slots. By employing these various digital formats, NYPR is giving its listeners more avenues for engagement and reaching a larger, more diverse audience. The archive, in turn, will have to develop new ways to address the fact that digital assets are often interconnected with other assets, rather than standalone audio objects.
Another idea I gleaned from my interviews is that reports or stories bolstered with material from the radio archive make for great news. This idea is embodied by a recently broadcast WNYC newscast, The Tale of the Tape: Hillary Clinton’s Gay Marriage Evolution by reporter Andrea Bernstein. Her report features snippets drawn from her extensive collection of recordings documenting Hillary Clinton’s political career. Bernstein and NYPR’s Head Archivist Andy Lanset worked closely together to mine the archive’s PBCore-backed database, CAVAFY, as well as the physical archives, for items, including the cassette tape of the report’s title, to put together a comprehensive timeline of Clinton’s public remarks on marriage rights. This story, mentioned to me by different producers in three separate interviews, demonstrates how archives are not just safekeepers of the past, but informants of the present. Ultimately, my NDSR report will suggest ways that the archive can continue to inspire its producers and staff and work alongside the stations’ output, rather than behind it.
Newsflash: Radio Preservation Studies is Here
The evolving identity of radio is influencing the development of a new and emerging field sometimes referred to as “radio preservation studies”. As media scholar Carolyn Birdsall observes in her recent article in FLOW, “Can We Invent a Field Called ‘Radio Preservation Studies’?”, when “…the history of sound recording in radio has been acknowledged, it is either not connected to the archive or only discussed in relation to specific program formats.” An important step in developing radio preservation studies, especially in terms of the archive, is to distinguish radio archives from traditional sound archives. The conflation is understandable; on the surface, a radio archive looks a whole lot like a sound archive: a room full of discs, tapes and playback equipment.
However, radio has its own distinct sound and modes of production. For instance, a DJ playing a selection of jazz songs on air sounds completely different from an individual playing the same tracks on a record player in their living room. As Birdsall explains further:
“Recordings held in broadcast archives today are closely linked to the production context and the needs of program makers, yet they are disconnected from the original domestic context in which broadcast sound was received.”
In this sense, radio archives require their own distinct set of archival best practices, especially practices that contextualize broadcast recordings. These practices have yet to be developed and documented, but they will likely differ from traditional sound preservation recommendations.
Radio Preservation Task Force
In light of the developing field of radio preservation studies, scholars, archivists and larger organizations have begun to come together to form a distinct community focused on securing the legacy of radio. The most significant of these efforts is the Radio Preservation Task Force (RPTF) initiative, which is part of the National Recording Preservation Board (NRPB) of the Library of Congress. Over the past few months the RPTF has accomplished several important tasks, including aggregating participating archives, developing metadata on extant materials, promoting scholarly and research efforts within the field, and allocating resources for endangered collections. The first RPTF conference, to be held in February 2016, will bring together scholars, archivists, librarians and other stakeholders in radio and media archives large and small. Most importantly, this conference will raise the visibility of the emerging field of radio preservation studies.
Striking a balance between the archival priorities and the expectations of producers is quite possibly one of the greatest challenges for an archive embedded in a media station. In July 2014 for The Signal, Hannah Sommers, at the time a Library Program Manager at NPR, wrote an enlightening guest post on media taxonomy. She described how a production environment can specifically benefit by putting robust metadata taxonomies in place:
“We also understand taxonomy as a preservation tactic. Our industry is evolving more quickly than the systems used to report the news itself, and is shedding library departments even faster. Each digital story that “knows what it’s about” from the tags it carries is a story more likely to be remembered because its tags connected it to an interested audience in the first place. It is a story that is inoculated against digital invisibility. It has a better chance of being accounted for, rediscovered and reused. It is a story that has a better chance of being part of a group of stories. It is a story that has a much better chance of persisting.”
My hope for the upcoming RPTF conference is to gain perspective on how other radio archives prioritize their efforts while embedded in a live production environment. NYPR’s archive, which has traditionally worked to preserve their vast analog legacy of mostly reel-to-reel tapes and discs, is seeking a more central role in how the stations handle, store, repurpose and provide access to their digital assets. On the surface, the production environment is driven by the complete opposite of what drives the archive: producers focus on the present, and archivists focus on the past. Through collaboration, archives and producers can work together with a dynamic sort of harmony that feeds off of what makes each so different. This will work in large part by means of rich and discoverable metadata.
Radio preservation studies presents a new angle on sound archives, and demonstrates the potential of radio-delivered information to give us better insight into how cultures have evolved and communicated. As one of the first NDSR residents to work at a radio archive, I look forward to contributing to this important and growing field.
In the final report which will be the culmination of my NDSR residency, I hope to propose a robust practical solution for how NYPR can leverage their archives, whereby producers and archivists can work together as dynamic collaborators to create and deliver rich, in-depth and sophisticated content to current reporting, storytelling and other shows. This will likely include suggesting ways for the archive to automate descriptive metadata tagging, and employing technologies allowing for deeper discovery of transcripts, such as speech-to-text recognition software.
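At its very simplest, automated descriptive tagging could start from something as basic as keyword frequency over a transcript. The following is a toy sketch under that assumption (the stopword list and thresholds are placeholders of my own, not a recommendation of any particular tool):

```python
import re
from collections import Counter

# A deliberately tiny stopword list; a real system would use a fuller one.
STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "it",
             "that", "this", "on", "for", "with", "was", "were", "be", "as"}

def suggest_tags(transcript, n=5):
    """Suggest up to n descriptive tags: the most frequent non-stopwords."""
    words = re.findall(r"[a-z']+", transcript.lower())
    counted = Counter(w for w in words if w not in STOPWORDS and len(w) > 2)
    return [word for word, _count in counted.most_common(n)]
```

Feeding a speech-to-text transcript through even a crude extractor like this would give catalogers a starting set of candidate tags to confirm or reject, rather than a blank field.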
Registration is now open for any of three webinars and two web courses. Check out the great lineup.
Webinars are one-time sessions lasting 60 to 90 minutes.
How Your Public Library Can Inspire the Next Tech Billionaire: an Intro to Youth Coding Programs, with Kelly Smith, Crystle Martin, and Justin Hoenke
Offered: Thursday March 3, 2016, Noon Central Time
Kids, tweens, teens and their parents are increasingly interested in computer programming education, and they are looking to public and school libraries as a host for the informal learning process that is most effective for learning to code. This webinar will share lessons learned through youth coding programs at libraries all over the U.S. We will discuss tools and technologies, strategies for promoting and running the program, and recommendations for additional resources.
The Why and How of HTTPS for Libraries, with Jacob Hoffman-Andrews
Offered: Monday March 14, 2016, 1:00 pm Central Time
As more of our library browsing occurs over the Internet, the only way to continue to preserve patron privacy is to make sure that the library catalog and database traffic that travels between a web browser and a server remains encrypted. This webinar will discuss how encrypted websites work, and demonstrate exciting tools from the Electronic Frontier Foundation that make it easy to encrypt library websites by default.
Yes You Can Video, with Anne Burke and Andreas Orphanides
Offered: Tuesday April 12, 2016, 1:00 pm Central Time
Have you ever wanted to create an engaging and educational instructional video, but felt like you didn’t have the time, ability, or technology? Are you perplexed by all the moving parts that go into creating an effective tutorial? This webinar will help to demystify the process, breaking it down into easy-to-follow steps, and provide a variety of technical approaches suited to a range of skill sets. The presenters will cover choosing and scoping your topic, scripting and storyboarding, producing the video, and getting it online. They will also address common pitfalls at each stage.
Web Courses use a multiple week asynchronous format.
Which Test for Which Data: Statistics at the Reference Desk, with Rachel Williams
Starting Monday February 29, 2016, running for 4 weeks
This web course is designed to help librarians faced with statistical questions at the reference desk. Whether assisting a student reading through papers or guiding them when they brightly ask “Can I run a t-test on this?”, librarians will feel more confident facing statistical questions.
Universal Design for Libraries and Librarians, with Jessica Olin and Holly Mabry
Starting Monday April 11, 2016, running for 6 weeks
Universal Design is the idea of designing products, places, and experiences to make them accessible to as broad a spectrum of people as possible, without requiring special modifications or adaptations. This course will present an overview of universal design as a historical movement, as a philosophy, and as an applicable set of tools.
Questions or Comments?
For questions or comments related to the courses, contact LITA at (312) 280-4269 or Mark Beatty, firstname.lastname@example.org.
Now that you’ve all got your new diaries and calendars for 2016 (!) we are delighted to give you some dates to put in them.
Hydra Connect 2016 will take place in Boston, MA from Monday 3rd – Thursday 6th October 2016 jointly hosted by the Boston Public Library, Northeastern University, WGBH and the DPLA. More details in due course, but please reserve the dates. As we say, “as a Hydra Partner or user, if you can only make it to one Hydra meeting this academic year, this is the one to attend!”
Over the winter break, I had the pleasure of listening to the audio book version of The Life-Changing Magic of Tidying Up: The Japanese Art of Decluttering and Organizing by Marie Kondo. In this book, the author explains in detail her method of tidying up (which she calls KonMari). I highly recommend you read the book in its entirety to gain a fuller understanding of what the KonMari method entails, but in short:
- Gather everything you own that falls into a specific category
- Touch each item individually. Hold it, feel it, connect with it
- Ask yourself, “Does this item spark joy within me?”
- If it doesn’t spark joy, ask, “Is it useful or necessary?”
- Lastly, if the item doesn’t spark joy, and it isn’t useful, discard it. Also, as you discard it, thank it for fulfilling its purpose, whatever it may have been.
- Do this category by category until your life is only filled with those things that spark joy.
As I listened to this book, I started to make some connections between the techniques being described and how they could apply to my life as a web services librarian. In this post, I’ll point out a few of the random connections it sparked for me, and perhaps others will be encouraged to do something similar, or even apply KonMari in other areas of librarianship — I’d love to hear what others have to say!
The first thing that stuck out to me about this method is how similar it felt to performing a content audit. Content auditing is an important step in developing an overall content strategy — I’d recommend taking a look at Margot Bloomstein’s article, “The Case for Content Strategy — Motown Style” for a pretty practical overview of content strategy and content auditing. Any information architect, or information worker in general, would be remiss to skip the step of documenting all existing content prior to structuring or restructuring any sort of website or *ahem* LibGuides system. I think that LibGuides (or any of the LibApps, really) would be a great candidate for experimenting with content auditing and discarding things. Applying the question “Does it spark joy?” becomes really interesting here, because you should consider it not only from your own perspective, but also from that of the user. This quickly becomes a question of user experience. The oft-discussed epidemic of “LibGuides Gone Wild” could be at least somewhat tamed if you were to apply this question to your guides. Obviously, you may not always be in a position to act on discarding guides without buy-in, but maybe this gives you yet another vocabulary for describing the benefits of focusing on users.
One type of item that Kondo discusses is seminar notes, which, based on her description, aligns pretty much 100% with the notes we all take when we are at conferences. When I first started attending library conferences at the beginning of my career (about 5 years ago), I would shun taking notes on a computer, insisting that handwriting my notes would result in more effective notes because I would have to be more particular about what nuggets of knowledge I would jot down. In reality, all I would end up with was a sore hand, and I would actually miss out on quite a bit of what the speaker was saying. As I progressed, I would eventually resort to using an iPad along with OneNote, so that I could easily tap out whatever notes I wanted, as well as take pictures of relevant slides and include them along with my notes. This, I believed, was the perfect solution. But, what exactly was it the perfect solution for? It was the perfect solution to make sure I could provide an adequate write-up / conference recap to my co-workers to prove that I actually did learn something and that it was worth the investment. That’s pretty much it. Of course, in my own mind I would think “Oh, these are great! I can go back to these notes later and re-ingest the information and it will be available next time I need it!”. But, I can count on zero hands how many times I actually did that. One of the things that Kondo says about these sorts of events is that the benefit and purpose of them is in the moment — not the notes. You should fully invest yourself in the here and now during the event, because the experience of the event is the purpose. Also, the best way to honor the event is not to have copious notes — but to apply what you’ve learned immediately. This portion of the book pretty much spoke to me directly, because I’m 100% guilty of worrying too much about proving the greatness of professional development opportunities rather than experiencing the greatness.
While the last example can pretty much apply to any librarian who attends conferences, this example of where I can apply KonMari is particular to those who have to code at some level. I think I may be more guilty of this than the average person, but the amount of stuff I have commented out (instead of deleting altogether) is atrocious. When I’m developing, I have a (bad) habit of commenting out chunks of code that are no longer needed after being replaced by new code. Why do I do this? For the number one reason on Kondo’s list of excuses that people have when discarding things: “I might need it someday!”. In the words of Kondo herself, “someday never comes”. There are bits of code that have probably been commented out instead of deleted for a good 3 years at this point — I think it’s time to go ahead and delete them. Of course, there are good uses for comments, but for the sake of your own sanity (and the sanity of the person who will come after you, see your code and think, “wut?”) use them for their intended purpose, which is to help you (and others) understand your code. Don’t just use them as a safety net, like I have been. I’m even guilty of having older versions of EZproxy stanzas commented out in the config file. Why on Earth would those ever be useful? What makes me even worse is that we have pretty extensive version control, so I could very easily revert to or compare with earlier versions. You can even thank your totally unnecessary comments as you delete them, because they did ultimately serve a purpose — they taught you that you really can simply trust yourself (and your version control).
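If you are nervous about trusting version control, here is a minimal sketch (file and stanza names are made up) showing that a line you delete outright, rather than comment out, stays retrievable from history:

```shell
# Demonstrate that deleting beats commenting out when git has your back.
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email "demo@example.com"   # placeholder identity
git config user.name "Demo"

printf 'old EZproxy stanza\n' > config.txt
git add config.txt && git commit -qm 'add stanza'

printf 'new EZproxy stanza\n' > config.txt  # replace it outright, no comments
git commit -qam 'replace stanza'

# "Someday" arrives: pull the discarded version straight from history.
git show HEAD~1:config.txt
```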
Well, that’s it for now — three ways of applying KonMari to Web Services Librarianship. I would love to hear of other ways librarians apply these principles to what they do!
This is what we know: On November 24, 2015, the Wu-Tang Clan sold its latest album, Once Upon a Time in Shaolin, through an online auction house. As one of the most innovative rap groups, the Wu-Tang Clan had built concepts into their recordings before, but this album would be their highest-concept work yet: it would exist as only one copy—as an LP, that physical, authentic format for music—encased in an artisanally crafted box. This album would have only one owner, and thus, perhaps, only one listener. By legal agreement, the owner would not be allowed to distribute it commercially until 88 years from now.
Once—note the singularity at the beginning of the album’s title—was purchased for $2 million by Martin Shkreli, a young man who was an unsuccessful hedge fund manager and then an unscrupulous drug company executive. This career arc was more than enough to make him filthy rich by age 30.
Then, in one of 2015’s greatest moments of schadenfreude, especially for those who care about the widespread availability of quality healthcare and hip hop, Shkreli was arrested by the FBI for fraud. Alas, the FBI left Once Upon a Time in Shaolin in Shkreli’s New York apartment.
Presumably, the album continues to sit there, in the shadows, unplayed. It may very well gather dust for some time.
This has made many people unhappy, and some have hatched schemes to retrieve Once, ideally using the martial arts the Shaolin monks are known for. But our obsession with possessing the album has prevented us from contemplating the nature of the album—its existence—which is what the Buddhists of Shaolin would, after all, prefer us to do.
RZA, the leader of the Wu-Tang Clan, had tried to forewarn us. As he told Forbes, “We’re about to put out a piece of art like nobody else has done in the history of music…This is like someone having the scepter of an Egyptian king.”
Many have sought ways that the public might listen to Once, but few have taken RZA at his word. What if Once Upon a Time in Shaolin is meant primarily as art, as a precious artifact that only one person, like a king, can hold? And if we consider this question, do we really need to listen to the album to hear what it’s saying?
* * *
In 1995, the Chinese artist Ai Weiwei took an ancient, priceless Han Dynasty vase and dropped it onto a brick floor. It instantly shattered. He took a series of high-speed photographs of the vase drop, which he assembled into a triptych; in the middle photograph the vase seems like it’s in a levitating, suspended state. It exists, but it is milliseconds from not existing. It is forever there, whole, and yet we know it is forever in pieces.
He shouldn’t have destroyed that singular vase, you may be thinking. You must think more deeply, and enter the Shaolin temple of your mind.
* * *
In the old mill town of North Adams, Massachusetts, a cluster of nineteenth-century factory buildings has been converted into the largest museum of contemporary art in the United States: Mass MoCA. One entire building, from top to bottom, is dedicated to the work of Sol LeWitt.
Sol LeWitt is an unusual artist in that he rarely painted, drew, or sculpted the art you see by him. Instead, he wrote out instructions for artwork, and then left it to “constructors”—often art students, museum curators, or others—to do the actual work of fabrication. LeWitt liked to be a recipe writer, not a chef.
“Wall Drawing 1180: Within a circle draw 10,000 straight black lines and 10,000 black not straight lines. All lines are randomly spaced and equally distributed.”
Somehow, incredibly, this ends up looking like a massive picture from the Hubble Telescope: an infinite field of stars emerges after weeks of drawing thousands of squiggly and straight lines with a pencil.
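LeWitt’s instruction reads almost like pseudocode. Purely as a playful illustration (the sampling choices below are my own guesses, not LeWitt’s directions), a “constructor” program might look like:

```python
import random

def point_in_circle(r=1.0):
    # Rejection-sample a uniformly distributed point inside a circle.
    while True:
        x, y = random.uniform(-r, r), random.uniform(-r, r)
        if x * x + y * y <= r * r:
            return (x, y)

def wall_drawing_1180(n=10_000):
    """Return n straight and n 'not straight' lines inside a unit circle."""
    straight = [(point_in_circle(), point_in_circle()) for _ in range(n)]
    not_straight = []
    for _ in range(n):
        a, b = point_in_circle(), point_in_circle()
        # One reading of 'not straight': bend the line by nudging its
        # midpoint off the straight path between the endpoints.
        mid = ((a[0] + b[0]) / 2 + random.uniform(-0.1, 0.1),
               (a[1] + b[1]) / 2 + random.uniform(-0.1, 0.1))
        not_straight.append((a, mid, b))
    return straight, not_straight
```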
Sixty-five art students and artists, none of them Sol LeWitt, made the Sol LeWitt exhibit, and it is one of the most beautiful things you’ll ever see. The patterns, the colors, the way that LeWitt’s often deceptively simple recipes result in a sumptuous banquet for the eyes, is remarkable.
But the exhibit will only last for 25 years—eight of which have already ticked by—after which the museum will paint over all of the art. Touring the exhibit, you can’t help but think about this endtime: All of this beauty, and yet on some Monday morning in the not-really-that-distant future some guy with a 5-gallon bucket of white paint from Home Depot and a wide roller brush on the end of a long wood handle will cover those walls forever. Will he sigh before making the first stroke?
Until that Monday morning in 2033, the Sol LeWitt exhibit exists. You have 17 years remaining, but time moves more quickly than we like, doesn’t it? I have told you to see it, but will you make the trip to North Adams? Right now, for those who have not seen it, it’s Ai Weiwei’s Han vase in mid-drop. It’s just that the gravity is lighter, the fall slower. But the third photograph, the smashed pieces, is coming.
Do you fear the loss of that magical field of stars and scores of other wall-sized artworks? Or have you closed your eyes, meditated, and concluded: Even if I never get to North Adams, LeWitt’s recipes will still exist, and they are the true art.
* * *
In 2008, as Mass MoCA was constructing the Sol Lewitt exhibit, they also hosted an exhibit of the art of Spencer Finch. Finch was fascinated by Emily Dickinson, and wished to recreate the moments in which she looked out of her window, thinking and writing poetry. Could these ephemeral views be recaptured, made physical for us so many years later?
“Sunlight in an Empty Room (Passing Cloud for Emily Dickinson, Amherst, MA, August 28, 2004),” tried to do so. Finch used lighting and light filters to make a cloud of just the right wavelengths that Dickinson would have seen outside of her bedroom on a particular day.
You cannot capture a moment, you mutter softly, waving your hand, nor Emily Dickinson’s thoughts.
* * *
Open your favorite streaming music app, and search for the blockbuster 2013 song “Get Lucky.”
Make a playlist that includes the original Daft Punk version, which should come up as the first hit, but also add to the list three other covers of the song by artists you have never heard of, which you will find by scrolling down the search results page.
These versions exist because of something called a “compulsory license,” which means that by paying a defined fee to an agency, you are allowed to record a cover song without asking for, or receiving, permission from the artist who wrote it. The song becomes a recipe and you become the constructor.
Now visit a friend. Play the “Get Lucky” playlist on shuffle mode. When all four songs have been played, ask your friend to identify the original version. The guitar and bass and singing will sound surprisingly similar in each version. Your friend will probably ask, increasingly frantically: “Which is the one true song?”
Do not answer. Thank your friend, bow, and leave.
* * *
“Get Lucky” was co-written by Nile Rodgers, the mastermind behind some of the greatest pop music of the last 40 years, starting with Chic, the disco band that gave us infectious dance hits like “Good Times.” Shortly after “Good Times” was released as a single, the enterprising music producer Sylvia Robinson brought a funk band into a recording studio and had them copy Chic’s bassist Bernard Edwards’ memorable bass line from that song. She also sampled its string section. Adding some rappers no one had ever heard of before, she created “Rapper’s Delight,” which seemed laughable to those who really knew the inventive, emerging hip hop scene, but which rather effectively set rap music on a course for mainstream (and white) popularity.
Rodgers initially hated “Rapper’s Delight,” believing it was a wholesale copy of “Good Times,” and he and Edwards sued for copyright violation. Later, after he won and was listed as a co-writer of the song, he declared himself proud of “Rapper’s Delight.” He realized it was a brilliant theft that changed pop music forever, and yet didn’t diminish Chic’s original work.
“Rapper’s Delight” was far from the only hip hop song to borrow; in fact, the reuse of older recordings was standard within the new genre, and part of its enormous creativity. The technique reached its apogee in arguably the three seminal rap albums of the late 1980s: Public Enemy’s It Takes a Nation of Millions to Hold Us Back, De La Soul’s 3 Feet High and Rising, and Beastie Boys’ Paul’s Boutique. Each of these albums had over a hundred samples, mixing and matching from different genres to make sounds that were totally new.
They were large, you nod, they contained multitudes.
* * *
In 1992, the science fiction author William Gibson, who had coined the word “cyberspace,” released a new work entitled Agrippa (A Book of the Dead). The text was issued, most famously, in a deluxe edition on a 3.5” floppy disk encased in an artisanally crafted box. The disk would encrypt itself upon a single reading, so you only had one shot to read the text as it scrolled across your screen.
This Agrippa cost $2000, and only a very small number were made. Gibson publicly revelled in the work’s combination of the ephemeral and the valuable. He loved that the book, after viewing, would become like a television tuned to a dead channel.
Almost immediately, however, the text of Agrippa was surreptitiously released on an underground electronic bulletin board called MindVox. Anyone can now read it online, and view the deluxe packaging as well.
What is the nature of art, you consider, without its packaging? What is its value?
* * *
The British artist Damien Hirst is probably best known for putting a dead shark in a large tank of formaldehyde and giving it the existential title “The Physical Impossibility of Death in the Mind of Someone Living.” In 2007, he asked the jewelers who fabricate items for the British monarchy—scepters for the king—to make a human skull out of diamonds and platinum, based on a real skull he bought. The skull’s teeth were added to the final product. Hirst called this artwork “For the Love of God.” Many critics called it “tacky.”
But “For the Love of God” was as much an exercise in the finance of the contemporary art scene, where prices for works regularly head into eight or even nine figures at auction, as it was in aesthetics. The fabrication of the skull apparently cost £14 million, and Hirst tried to sell the bejeweled skull to bidders for £50 million. Although there were rumors of a sale, ultimately there were no takers. A mysterious consortium then evidently bought the skull, but for less than £50 million, perhaps much less, and oddly, Hirst seemed to be one of the investors. Some analysts believe that Hirst actually lost money on the deal.
Once Upon a Time in Shaolin was also rumored to be for sale for a much higher number, perhaps as much as $5 million, but Shkreli ultimately bought it for $2 million, which is far less than the Wu-Tang Clan would make from a regular album release.
* * *
What is Once Upon a Time in Shaolin really worth? Is its scarcity its worth, and its worth its true art and value?
Once Upon a Time in Shaolin may not be as scarce as we imagine. It surely exists beyond the sole copy in Martin Shkreli’s apartment. It exists in the sense that members of Wu-Tang created it and still have its music in their heads and could likely recreate it if they wanted. Perhaps RZA is humming some of the songs in his shower right now. It exists as a recipe.
But it may also exist in actuality, albeit in pieces, like the wisps of a cloud. The master recordings may have been destroyed, but the way that digital recording works means that elements of Once existed more than once on magnetic media and probably, somewhere, continue to exist regardless of what the Wu-Tang Clan has done with the completed master. Parts of the album can probably be dug up, like the scepter of an Egyptian king, or the disappearing poetry on a phosphor screen.
If samples were used, they exist on other recordings; if a drum machine was used, those beats exist, identically, on many other machines. Any computers involved surely have files that have not been truly erased, and that could be dug up by digital archaeologists. There may be assembly to be done, and perhaps the final product would be different from the “original.” Or would it?
And perhaps too many traces of the full Once Upon a Time in Shaolin exist for it not to leak, just as Agrippa did.
Of course, then it will just be another stream of bits among the countless streams in our ephemeral era, severed from its unique packaging. It will take its place on millions of playlists, its songs sitting alongside tens of millions of other songs.
We will have gained something from Once’s liberation, but then we will have lost something as well.
* * *
The abstract artist Ellsworth Kelly, who recently died, was once asked about the nature of art. “I think what we all want from art is a sense of fixity, a sense of opposing the chaos of daily living,” he said, with more than a bit of Shaolin wisdom. “This is an illusion, of course.”
More than any previous year, 2015 was a year of growth and change for Equinox.
- We forged new partnerships that will let us serve our customers more efficiently and effectively.
- We helped some of the largest and busiest library consortia in the US grow and adapt, migrating more than 60 libraries to Sequoia-powered Evergreen instances.
- We enhanced our core services, doubling our Sequoia Platform capacity and streamlining customization, command, and control capabilities.
- We created numerous enhancements for Evergreen, including Message Center, Copy Alerts, and the cataloging and admin/reports modules of the completely rewritten web staff client.
All in all, 2015 set a high bar for the future.
It was also a time of growth for me personally. I find I’m happiest when I’m learning something new, and the past year did not disappoint in that regard. It’s been just a bit more than a year since I took on my new role as president. It’s been a great ride so far, and I have learned a lot. Having the opportunity to set the direction for the place you work is a rare thing, and can be overwhelming. The most important ingredient for success is the people you work with and learn from to make your shared vision come true. It has been exceptionally rewarding to stand shoulder to shoulder with the amazing team at Equinox, solving problems and laying the groundwork for bigger things to come. Looking back at what we’ve accomplished in the last year, I’m very proud of the work we’ve done for our customers. Even more, though, I’m proud of the company we are continuing to build together.
What will 2016 hold for Equinox? More of the same if I can help it.
Last updated January 4, 2016. Created by Peter Murray on January 4, 2016.
From the announcement:
The 11th International Digital Curation Conference (IDCC), with the theme “Visible data, invisible infrastructure,” will take place in Amsterdam February 22-25, 2016. The conference provides an opportunity for individuals, organizations and institutions across all disciplines and domains involved in curating data to get together with like-minded data practitioners to discuss policy and practice.
A belated announcement (holiday period in the way!) that Sufia 6.5.0 has been released.
6.5.0 is a small release adding a configuration option to facet on collections and fixing a bug in 6.4.0.
See the release notes for upgrade instructions and details. Thanks to the 4 contributors to this release, which comprised 11 commits touching 18 files: Adam Wead, Anna Headley, Lynette Rayle, and Justin Coyne.
Over the past few years, holiday updates have become a MarcEdit tradition. This year, I’ve spent the past month working on two significant sets of changes. On the Windows side, I’ve been enhancing the Linked Data tools, profiling more fields and more services. This update represents a first step in the process, as I’ll be working with the PCC to profile additional services and add new elements as we work through a pilot test around embedding linked data into MARC records and its potential implications. For a full change list, please see: http://blog.reeset.net/archives/1822
The Mac version has seen a lot of changes – and because of that, I’ve moved the version number from 1.3.35 to 1.4.5. In addition to all the infrastructure changes made within the Windows/Linux program (the tools share a lot of code), I’ve also done significant work exposing preferences and re-enabling the ILS Integration. I didn’t get to test the ILS integration well – so there may be a few updates to correct problems once people start working with them – but getting to this point took a lot of work and I’m glad to see it through. For a full list of updates on the Mac Version, please see: http://blog.reeset.net/archives/1824
Before Christmas, I’d mentioned that I was working on three projects – with the idea that all would be ready by the time these updates were complete. I was wrong – so it looks like I’ll have one more Christmas/New Years gift left to give – and I’ll be trying to wrap that work up this week.
Downloads – you can pick up the new downloads at: http://marcedit.reeset.net/downloads or if you have the automatic update notification enabled, the tool should provide you with an option to update from within the program.
This represents a lot of work, and a lot of changes. I’ve tested to the best of my ability – but I’m expecting that I may have missed something. If you find something, let me know. I’m saving time over the next couple weeks to fix problems that might come up and turn around builds faster than normal.
Here’s looking forward to a wonderful 2016.
This article from Wired magazine is a must-read if you are interested in more impactful metrics for your library’s web site. At MPOE, we are building up our in-house web product expertise, but regardless of how much we invest in staffing, the amount of requested web support will likely always exceed the resources we have for that support. Leveraging meaningful impact metrics can help us understand the value we get from the investment we make in our web presence, and more importantly help us define what types of impact we want to achieve through that investment. This is no easy feat, but it is good to see that others in the information ecosystem are looking at the same challenges.
Winchester, MA: The 11th International Digital Curation Conference (IDCC), with the theme “Visible data, invisible infrastructure,” will take place in Amsterdam February 22-25, 2016 (http://www.dcc.ac.uk/events/idcc16). The conference provides an opportunity for individuals, organizations and institutions across all disciplines and domains involved in curating data to get together with like-minded data practitioners to discuss policy and practice.
Last updated January 3, 2016. Created by Peter Murray on January 3, 2016.
Hydra Connect Regional Meeting at UC Santa Barbara event announcement.
I've been working primarily on ebooks for the past 5 years, mostly because I'm excited at the new possibilities they enable. I'm impressed - and excited - when ebooks do things that can't be done for print books, partly because ebooks often can't capture the most innovative uses of ink on paper.
Looking back on 2015, there was one ebook more than any other that demonstrated the possibilities of the ebook as an art form while also being fun, captivating, and awe-inspiring: Paul Ford's What Is Code?
Unfortunately, today's ebook technology standards can't fully accommodate this work: the compact disc of ebooks can store only four and a half movements of Beethoven's Ninth. That makes me sad.
You might ask, how does What Is Code? qualify as an ebook if it doesn't quite work on a Kindle or your favorite ebook app? What Is Code? was conceived and developed as an HTML5 web application for Business Week magazine, not with the idea of making an ebook. Nonetheless, What Is Code? uses the forms and structures of traditional books. It has a title page. It has chapters, sections, footnotes and a table of contents that support a linear narrative. It has marginal notes, figures and asides.
Despite its bookishness, it's hard to imagine What Is Code? in print. Although the share buttons and video embeds are mostly adornments for the text, the activity modules are core to the book's exposition. The book is about code, and by bringing code to life, it immerses the reader in the book's subject matter. There's a greeter robot that waves and seems to know the reader, showing off the ebook's "intelligence". The "how do you type an 'A'" activity in section 2.1 is a script worth a thousand words, and the "mousemove" activity in section 6.2 is a revelation even to an experienced programmer. If all that weren't enough, there's a random, active background that manages to soothe more than it distracts.
Even with its digital doodads, What Is Code? can be completely self-contained and portable. To demonstrate this, I've packaged it up and archived it at the Internet Archive; you can download it with this link (21MB). Once you've downloaded it, unzip it and load the "index.html" file into a modern browser. Everything will work fine, even if you turn off your internet connection. What Is Code? will continue to work after Business Week disappears from the internet (or behind the most censorious firewall).
I was curious how much of What Is Code? could be captured in a standard EPUB ebook file. I first tried making an EPUB version 2 file with Calibre. The result was not as lame as I thought it would be, but stripped of interactivity, it seemed like a photocopy of a sticker book: the story's there, but the fun, not so much. Same with the Kindle version.
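Under the hood, an EPUB is just a zip archive with some required bookkeeping, which is why a conversion like the one above is even possible. As a rough sketch (the titles, identifiers, and file names below are invented for illustration, not taken from the actual What Is Code? archive or from Calibre's output), a minimal EPUB 2 wrapper around a single XHTML file looks like this:

```python
# Sketch: a minimal EPUB 2 wrapper around one XHTML page.
# Metadata and file names here are illustrative placeholders.
import zipfile

CONTAINER = """<?xml version="1.0"?>
<container version="1.0" xmlns="urn:oasis:names:tc:opendocument:xmlns:container">
  <rootfiles>
    <rootfile full-path="OEBPS/content.opf" media-type="application/oebps-package+xml"/>
  </rootfiles>
</container>"""

OPF = """<?xml version="1.0"?>
<package version="2.0" xmlns="http://www.idpf.org/2007/opf" unique-identifier="id">
  <metadata xmlns:dc="http://purl.org/dc/elements/1.1/">
    <dc:title>What Is Code?</dc:title>
    <dc:language>en</dc:language>
    <dc:identifier id="id">example-id</dc:identifier>
  </metadata>
  <manifest>
    <item id="main" href="index.xhtml" media-type="application/xhtml+xml"/>
  </manifest>
  <spine><itemref idref="main"/></spine>
</package>"""

XHTML = """<?xml version="1.0"?>
<html xmlns="http://www.w3.org/1999/xhtml"><head><title>What Is Code?</title></head>
<body><p>Static text only: the interactive scripts are stripped.</p></body></html>"""

def build_epub(path):
    with zipfile.ZipFile(path, "w") as z:
        # The mimetype entry must come first and be stored uncompressed.
        z.writestr("mimetype", "application/epub+zip", zipfile.ZIP_STORED)
        z.writestr("META-INF/container.xml", CONTAINER, zipfile.ZIP_DEFLATED)
        z.writestr("OEBPS/content.opf", OPF, zipfile.ZIP_DEFLATED)
        z.writestr("OEBPS/index.xhtml", XHTML, zipfile.ZIP_DEFLATED)

build_epub("minimal.epub")
```

The structure highlights the limitation: the packaging format happily carries HTML, CSS, and images, but reading systems vary widely in whether they will execute the JavaScript that makes a work like this one come alive.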
My experimentation with the code behind What Is Code? points to another exciting aspect of making books into ebooks. Code and digital text can use open licenses that permit others to use, re-use and learn from What Is Code?. The entire project archive is hosted on GitHub and to date has been enhanced 671 times by 29 different contributors. There have been 187 forks (like mine) of the project. I think this collaborative creation process will be second nature to the ebook of the future.
There have been a number of proposals for portable HTML5 web archive formats as ebook technology moves forward. Among these are "5DOC" and W3C's "Portable Web Platform". As far as I can tell, these proposals aren't getting much traction or developer support. To succeed, a format has to be very lightweight and useful, or be supported by at least two of Amazon, Apple, and Google. I hope someone succeeds at this.
Best of 2015, don't you agree?
 In this case, the Apache License and the Creative Commons By-NC-ND License.
I’ve been working with the PCC Linked Data in MARC Task Group over the past couple of months, and as part of this process, I’ve been expanding the values that can be recognized by the Linking tool in MarcEdit. As those that have used it might remember, MarcEdit’s linking tool showed up about a year and a half ago, and leverages id.loc.gov, MESH, and VIAF (primarily). As part of this process with the PCC, a number of new vocabularies and fields have been added to the tool’s capacity. This has also meant creating profiles for linking data in both bibliographic and authority records.
The big changes come in the range of indexes now supported by the tool (if defined within the record). At this point, the following vocabularies are profiled for use:
- LCSH Children
- RDA Vocabularies
The data profiled has also expanded beyond just 1xx, 6xx, and 7xx data to include 3xx data and data unique to the authority data.
This has required changing the interface slightly:
But I believe that I have the bugs worked out. This function will be changing often over the next month or so as the PCC utilizes this and other tools while piloting a variety of methods for embedding linked data into MARC records and considering the implications. As such, I’ll be adding to the list of profiled data over the coming month. If you use a specific vocabulary and don’t see it in the list, let me know; as long as the resource provides a set of APIs that can support a high volume of queries, I can likely profile it. (A bulk data download doesn’t work for a client application; at this point, the profiled resources would require users to download almost 12 GB of data almost monthly if I went that route.)
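To give a feel for why an API matters here (this is an illustrative sketch, not MarcEdit's actual code): id.loc.gov exposes an OpenSearch-style "suggest" endpoint whose JSON response is a four-element array of the query, matching labels, descriptions, and URIs. A client can turn one small request into label/URI candidate pairs, with no bulk download required. The sample identifiers below are made up for illustration:

```python
# Sketch of consuming an OpenSearch-suggestions response, as returned
# by suggest-style services such as id.loc.gov's. Not MarcEdit's code.
import json

def candidates_from_suggest(response_text):
    """Pair each suggested label with its URI.

    The response shape is assumed to be:
        [query, [labels...], [descriptions...], [uris...]]
    """
    _query, labels, _descriptions, uris = json.loads(response_text)
    return list(zip(labels, uris))

# A canned response shaped like the live service's output;
# the sh* identifiers here are invented placeholders.
sample = json.dumps([
    "Cooking",
    ["Cooking", "Cooking (Chicken)"],
    ["1 result", "1 result"],
    ["http://id.loc.gov/authorities/subjects/sh85031423",
     "http://id.loc.gov/authorities/subjects/sh85031425"],
])
pairs = candidates_from_suggest(sample)
```

In practice a linking tool would issue one such query per heading and then decide which candidate URI, if any, to embed in the record.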
Questions…let me know.
Sometime this past month, I was asked if there was a way to automate the batch processing of Sanborn Cutters. OCLC’s Connexion provides a handy set of methods for doing this if you are an OCLC member: it’s wrapped up in a nifty library and provides access to the current Sanborn Table 4 Cutters (which I believe are under the control of OCLC). For most users, this is probably what they want to use, and at some point I’d be interested in seeing if OCLC might be willing to let me link to this particular library to provide a set of batch tools around Sanborn Cutter creation for users of the Four Figure table. However, the Three Figure Tables were published long before 1921 (the copy I’m using dates to 1904), so I decided to provide a tool for batch creation of Sanborn Table 3 Cutters.
I guess before we go further, I’m not particularly familiar with this set of cutters. I’ve only cataloged in an academic library, so I’m primarily familiar with LC’s cuttering methodology — so I’ll be interested in hearing if this actually works.
Ok, with that out of the way — this tool is available from within the MarcEditor. The assumptions here are that:
- You have a call number stem (assuming Dewey). The tool defaults to cuttering on (in order of preference) the 1xx, 245, 243, and 240 fields. However, if you want to cutter on a different value (say a 600), you can identify the cuttering field and provide data to be queried to select the correct cuttering field (in case there are multiple values).
- That you know what you are doing (because I don’t :))
To run the tool, load the file you want to process in the MarcEditor, then select Tools/Cuttering Tools/Sanborn Table 3 Cutters.
When you select the value — you see the following window:
As you can see, I’ve tried to keep the function identical between the various platforms. In the first textbox, you need to enter the field to evaluate. Again, the tool assumes that you have the start of a call number in your record. So, for LC records (when generating LC tools) that would be 090$a, 099$a, 050$a; for Dewey, 082, 092. You then select how the cutter will be generated.
A couple of words about how this works. Cutters are generated from a set of tables. The process looks for either an exact match in the cutter table or the closest match within it. In my testing, it looks like the process I’m employing works reliably, but text data can be weird, so you’ll have to let me know if you see problems.
When you process the data — the program will insert a $b into the defined call number field based on the information it determined best represents the information in the cutter tables.
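The closest-match lookup described above can be sketched with a sorted table and a binary search. To be clear, the table entries and the exact output format below are invented for illustration; the real Sanborn tables and MarcEdit's implementation will differ:

```python
# Sketch of closest-match lookup against a cutter table.
# The (prefix, number) entries are invented; real Sanborn tables differ.
import bisect

TABLE = [("SM", "647"), ("SMITH", "648"), ("SN", "671"), ("SO", "677")]
KEYS = [k for k, _ in TABLE]  # must stay sorted for bisect to work

def sanborn_cutter(name):
    """Return a cutter built from the table entry at or just before `name`."""
    key = name.upper()
    i = bisect.bisect_right(KEYS, key)
    if i == 0:
        return None  # name sorts before the entire table
    _prefix, number = TABLE[i - 1]
    return key[0] + number

sanborn_cutter("Smith")   # exact match in the table -> "S648"
sanborn_cutter("Smythe")  # no exact match; falls back to the closest
                          # preceding entry ("SMITH") -> "S648"
```

The interesting property is the fallback: a name that is absent from the table still lands on the nearest preceding entry, which is what keeps batch processing from stalling on unusual headings, though, as noted above, odd text data can still produce surprises.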
One of the parts of the MarcEdit Mac port that has been lagging is the ability to manage a number of the preferences that MarcEdit uses when running the program. Originally, I exposed the preferences for the MARCEngine and the Updates. As part of the next update, I’ve included options to handle preference settings for the MarcEditor, the Locations, the miscellaneous settings, and the ILS Integration. The set still isn’t as robust as the Windows/Linux version, partly because some of the options are not applicable, but more likely because they weren’t among the most commonly requested. I’ll be working on adding the remainder through January.
The Elkins family was one of the 19th century's big names in Pennsylvania. William Lukens Elkins was one of the first "oil barons" whose company was the first to produce gasoline, just in time for the industrial and transportation revolution that would use untold gallons of the stuff. His business partner was Peter Widener, and the two families were intertwined through generations, their Philadelphia mansions built across the street from each other.
Eleanor Elkins, daughter of WL Elkins, married George Widener, son of Peter Widener, essentially marrying the two families. Eleanor and George had a son, Harry. Unfortunately, they were rich enough to book passage on the maiden voyage of the Titanic in 1912. George and Harry perished; Eleanor Elkins Widener survived.
Harry Widener had been an avid book collector, sharing this interest with his best friend, William McIntire Elkins. Harry had graduated from Harvard, as had WM Elkins, and his will instructed his mother to donate his collection to Harvard, "to be known as the Harry Elkins Widener Collection." His mother took it one step further and funded the creation of a new library that would house not only her son's collection but also the entire Harvard library collection. Yes, I am talking about the now-famous Widener Library.
His friend, William McIntire Elkins, lived until 1947, and during his lifetime amassed a huge rare book collection. He was particularly interested in Dickensiana, which included not only first serially published editions of Dickens' works, but also Dickens' desk, candlesticks, ink well, etc. His will left his entire collection to the Free Library of Philadelphia. Well, it turned out that it was not only his book collection, but the entire room, which was moved into the library, looking much as it did during Elkins' life.
Elkins Park is named, of course, for the Elkins family whose massive estate held an impressive array of mansions and grounds. Much of the estate has been divided up and sold off, but portions remain.
So that's the story of Elkins Park and how it fits into libraries and the rarified world of rare books. But I have another small bit to add to the story. In 1947, at the time of his death, John and Eleanor King, my grandparents, were working for (as he was known in my family) "old man Elkins" -- that is William McIntire Elkins. My grandfather was gardener and chauffeur for Elkins, and my grandmother was (as she called it) the executive of the household. When the library was transferred to the Free Library of Philadelphia, photographs were taken out the windows so that the "view" could be reproduced. Those photographs show the grounds that my grandfather cared for, and the room itself was undoubtedly very familiar to my grandmother (although she would never admit to having done any dusting with her own hand). A small amount bequeathed to them in Elkins' will allowed them to own their own property for the first time, just a few acres, but enough to live on, with sheep and chickens and a single steer. I have early memories of that farm, and a few family photos. Little did any of us know at the time that I would reconnect all of this because of books.
I’ve been working hard over the last month and a half trying to complete the process of porting functionality into the OSX version of MarcEdit. I’ve completed the vast majority of this work, in addition to bringing in a number of other changes. These changes will be made available as part of the 1/3/2016 update and will be as follows:
- Bug Fix: RDA Helper — 260/264 changes were sometimes not happening when running across specific 260$c formatting.
- Bug Fix: MARCValidator: Validator was crashing when records would go beyond 150,000 bytes. This has been corrected.
- Bug Fix: Build Links Tool — MESH headings were utilizing older syntax and occasionally missing values.
- Bug Fix: Validate Headings tool: When checking Automatically Correct variants, the embed URIs option was automatically selected. This has been corrected.
- Bug Fix: Edit XML Functions: In the modify option, the save button was incorrectly enabled. This has been corrected.
- Enhancement: Build Links: The build links tool used to use the OpenSearch API for id.loc.gov resolution. This was changed to work like the Validate Headings tool and provide more consistent linking.
- Enhancement: Most of MarcEdit’s preferences have been exposed.
- Enhancement: Build Links Tool – I’ve added profiles for a wide range of vocabularies being tested by the PCC Linked Data task force. These are available.
- Enhancement: Build Links Tool — Profiled services are found under a link.
- Enhancement: Build Links Tool — Task management options have been added for the new validate options.
- Enhancement: MarcEditor: Generate Cutters: LC cutter generation has been updated.
- Enhancement: MarcEditor: Generate Sanborn Cutters: Added function to generate Sanborn Table 3 Cutters.
- Enhancement: ILS Framework — MarcEdit’s ILS framework options were added.
- Enhancement: Koha Integration: Koha Integration options were added to the tool.
This doesn’t complete the function migration, but it’s close. These changes will be part of the 1/3/2016 update. I’ll be working to add a few YouTube videos documenting the new functions. Let me know if you have questions.