Feed aggregator

LITA: KonMari in Web Librarianship

planet code4lib - Tue, 2016-01-05 04:02

Over the winter break, I had the pleasure of listening to the audio book version of The Life-Changing Magic of Tidying Up: The Japanese Art of Decluttering and Organizing by Marie Kondo. In this book, the author explains in detail her method of tidying up (which she calls KonMari). I highly recommend you read the book in its entirety to gain a fuller understanding of what the KonMari method entails, but in short:

  • Gather everything you own that falls into a specific category.
  • Touch each item individually. Hold it, feel it, connect with it.
  • Ask yourself, “Does this item spark joy within me?”
  • If it doesn’t spark joy, ask, “Is it useful or necessary?”
  • Lastly, if the item doesn’t spark joy and isn’t useful, discard it. As you discard it, thank it for fulfilling its purpose, whatever it may have been.
  • Repeat this category by category until your life is filled only with things that spark joy.

As I listened to this book, I started to make some connections between the techniques being described and how they could apply to my life as a web services librarian. In this post, I’ll point out a few of the random connections it sparked for me, and perhaps others will be encouraged to do something similar, or even apply KonMari in other areas of librarianship — I’d love to hear what others have to say!

Content Auditing

The first thing that stuck out to me about this method is how similar it felt to performing a content audit. Content auditing is an important step in developing an overall content strategy — I’d recommend taking a look at Margot Bloomstein’s article, “The Case for Content Strategy — Motown Style,” for a practical overview of content strategy and content auditing. Any information architect, or information worker in general, would be remiss to skip the step of documenting all existing content before structuring or restructuring any sort of website or *ahem* LibGuides system. I think LibGuides (or any of the LibApps, really) would be a great candidate for experimenting with content auditing and discarding things. Asking “Does it spark joy?” becomes a really interesting exercise, because you should consider it not only from your own perspective but also from the user’s. This quickly becomes a question of user experience. The oft-discussed epidemic of “LibGuides Gone Wild” could be at least somewhat tamed if you applied this question to your guides. Obviously, you may not always be in a position to act on discarding guides without buy-in, but maybe this gives you yet another vocabulary for describing the benefits of focusing on users.
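The “gather everything” step of an audit doesn’t have to be manual, either. Here is a minimal sketch, assuming the site publishes a standard sitemap (LibGuides and most CMSes do; the URLs and dates below are invented), that pulls every page and its last-modified date into a single inventory ready for auditing:

```python
import xml.etree.ElementTree as ET

# A tiny stand-in for a real sitemap.xml -- in practice you'd fetch yours.
SITEMAP = """<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://guides.example.edu/history</loc><lastmod>2013-02-01</lastmod></url>
  <url><loc>https://guides.example.edu/chemistry</loc><lastmod>2015-11-20</lastmod></url>
</urlset>"""

def inventory(sitemap_xml):
    """Return (url, lastmod) pairs -- the raw material of a content audit."""
    ns = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}
    root = ET.fromstring(sitemap_xml)
    rows = []
    for url in root.findall("sm:url", ns):
        loc = url.findtext("sm:loc", default="", namespaces=ns)
        lastmod = url.findtext("sm:lastmod", default="", namespaces=ns)
        rows.append((loc, lastmod))
    return rows

for loc, lastmod in inventory(SITEMAP):
    print(loc, lastmod)
```

From there, sorting by last-modified date is a quick way to surface guides nobody has touched in years — prime candidates for the “does it spark joy?” question.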

Conference Notes

One type of item that Kondo discusses is seminar notes, which, based on her description, aligns pretty much 100% with the notes we all take when we are at conferences. When I first started attending library conferences at the beginning of my career (about 5 years ago), I would shun taking notes on a computer, insisting that handwriting my notes would result in more effective notes because I would have to be more particular about what nuggets of knowledge I would jot down. In reality, all I would end up with was a sore hand, and I would actually miss out on quite a bit of what the speaker was saying. As I progressed, I would eventually resort to using an iPad along with OneNote, so that I could easily tap out whatever notes I wanted, as well as take pictures of relevant slides and include them along with my notes. This, I believed, was the perfect solution. But, what exactly was it the perfect solution for? It was the perfect solution to make sure I could provide an adequate write-up / conference recap to my co-workers to prove that I actually did learn something and that it was worth the investment. That’s pretty much it. Of course, in my own mind I would think “Oh, these are great! I can go back to these notes later and re-ingest the information and it will be available next time I need it!”. But, I can count on zero hands how many times I actually did that. One of the things that Kondo says about these sorts of events is that the benefit and purpose of them is in the moment — not the notes. You should fully invest yourself in the here and now during the event, because the experience of the event is the purpose. Also, the best way to honor the event is not to have copious notes — but to apply what you’ve learned immediately. This portion of the book pretty much spoke to me directly, because I’m 100% guilty of worrying too much about proving the greatness of professional development opportunities rather than experiencing the greatness.

Code Comments

While the last example can apply to pretty much any librarian who attends conferences, this application of KonMari is particular to those who have to code at some level. I may be more guilty of this than the average person, but the amount of stuff I have commented out (instead of deleting altogether) is atrocious. When I’m developing, I have a (bad) habit of commenting out chunks of code that are no longer needed after being replaced by new code. Why do I do this? For the number one reason on Kondo’s list of excuses people make when discarding things: “I might need it someday!” In the words of Kondo herself, “someday never comes.” There are bits of code that have been commented out instead of deleted for a good 3 years at this point — I think it’s time to go ahead and delete them. Of course, there are good uses for comments, but for the sake of your own sanity (and the sanity of the person who will come after you, see your code, and think, “wut?”), use them for their intended purpose, which is to help you (and others) understand your code. Don’t just use them as a safety net, like I have been. I’m even guilty of having older versions of EZproxy stanzas commented out in the config file. Why on Earth would those ever be useful? What makes it even worse is that we have pretty extensive version control, so I could easily revert to or compare with earlier versions. You can even thank your totally unnecessary comments as you delete them, because they did ultimately serve a purpose — they taught you that you really can trust yourself (and your version control).
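To make the “trust your version control” point concrete, here is a small sketch (the file name and stanza contents are invented) that deletes a stanza outright and then recovers its history with git’s pickaxe search. It assumes the git command-line tool is installed:

```python
import os
import subprocess
import tempfile

def run(*args, cwd):
    """Run a command in a directory and return its stdout."""
    return subprocess.run(args, cwd=cwd, check=True,
                          capture_output=True, text=True).stdout

repo = tempfile.mkdtemp()
run("git", "init", "-q", ".", cwd=repo)
run("git", "config", "user.email", "you@example.com", cwd=repo)
run("git", "config", "user.name", "You", cwd=repo)

# Commit a stanza, then delete it outright -- no commented-out safety net.
cfg = os.path.join(repo, "ezproxy.cfg")
with open(cfg, "w") as f:
    f.write("Title Old Database\nURL http://old.example.com\n")
run("git", "add", "ezproxy.cfg", cwd=repo)
run("git", "commit", "-qm", "add stanza", cwd=repo)
with open(cfg, "w") as f:
    f.write("")
run("git", "commit", "-qam", "remove stanza", cwd=repo)

# "Someday" arrives: the pickaxe search lists every commit that added or
# removed the string, so nothing is ever really lost.
print(run("git", "log", "-S", "Old Database", "--oneline", cwd=repo))
```

Both the commit that added the stanza and the one that removed it show up, and `git show` on either would recover the full text.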

Well, that’s it for now — three ways of applying KonMari to Web Services Librarianship. I would love to hear of other ways librarians apply these principles to what they do!

Dan Cohen: For What It’s Worth: A Review of the Wu-Tang Clan’s “Once Upon a Time in Shaolin”

planet code4lib - Tue, 2016-01-05 01:10

This is what we know: On November 24, 2015, the Wu-Tang Clan sold its latest album, Once Upon a Time in Shaolin, through an online auction house. As one of the most innovative rap groups, the Wu-Tang Clan had used concepts for their recordings before, but the latest album would be their highest concept: it would exist as only one copy—as an LP, that physical, authentic format for music—encased in an artisanally crafted box. This album would have only one owner, and thus, perhaps, only one listener. By legal agreement, the owner would not be allowed to distribute it commercially until 88 years from now.

Once—note the singularity at the beginning of the album’s title—was purchased for $2 million by Martin Shkreli, a young man who was an unsuccessful hedge fund manager and then an unscrupulous drug company executive. This career arc was more than enough to make him filthy rich by age 30.

Then, in one of 2015’s greatest moments of schadenfreude, especially for those who care about the widespread availability of quality healthcare and hip hop, Shkreli was arrested by the FBI for fraud. Alas, the FBI left Once Upon a Time in Shaolin in Shkreli’s New York apartment.

Presumably, the album continues to sit there, in the shadows, unplayed. It may very well gather dust for some time.

This has made many people unhappy, and some have hatched schemes to retrieve Once, ideally using the martial arts the Shaolin monks are known for. But our obsession with possessing the album has prevented us from contemplating the nature of the album—its existence—which is what the Buddhists of Shaolin would, after all, prefer us to do.

RZA, the leader of the Wu-Tang Clan, had tried to forewarn us. As he told Forbes, “We’re about to put out a piece of art like nobody else has done in the history of music…This is like someone having the scepter of an Egyptian king.”

Many have sought ways that the public might listen to Once, but few have taken RZA at his word. What if Once Upon a Time in Shaolin is meant primarily as art, as a precious artifact that only one person, like a king, can hold? And if we consider this question, do we really need to listen to the album to hear what it’s saying?

*          *          *

In 1995, the Chinese artist Ai Weiwei took an ancient, priceless Han Dynasty vase and dropped it onto a brick floor. It instantly shattered. He took a series of high-speed photographs of the vase drop, which he assembled into a triptych; in the middle photograph the vase seems like it’s in a levitating, suspended state. It exists, but it is milliseconds from not existing. It is forever there, whole, and yet we know it is forever in pieces.

He shouldn’t have destroyed that singular vase, you may be thinking. You must think more deeply, and enter the Shaolin temple of your mind.

*          *          *

In the old mill town of North Adams, Massachusetts, a cluster of nineteenth-century factory buildings has been converted into the largest museum of contemporary art in the United States: Mass MoCA. One entire building, from top to bottom, is dedicated to the work of Sol LeWitt.

Sol LeWitt is an unusual artist in that he rarely painted, drew, or sculpted the art you see by him. Instead, he wrote out instructions for artwork, and then left it to “constructors”—often art students, museum curators, or others—to do the actual work of fabrication. LeWitt liked to be a recipe writer, not a chef.

“Wall Drawing 1180: Within a circle draw 10,000 straight black lines and 10,000 black not straight lines. All lines are randomly spaced and equally distributed.”

Somehow, incredibly, this ends up looking like a massive picture from the Hubble Telescope: an infinite field of stars emerges after weeks of drawing thousands of squiggly and straight lines with a pencil.

Sixty-five art students and artists, none of them Sol LeWitt, made the Sol LeWitt exhibit, and it is one of the most beautiful things you’ll ever see. The patterns, the colors, the way that LeWitt’s often deceptively simple recipes result in a sumptuous banquet for the eyes, is remarkable.

But the exhibit will only last for 25 years—eight of which have already ticked by—after which the museum will paint over all of the art. Touring the exhibit, you can’t help but think about this endtime: All of this beauty, and yet on some Monday morning in the not-really-that-distant future some guy with a 5-gallon bucket of white paint from Home Depot and a wide roller brush on the end of a long wood handle will cover those walls forever. Will he sigh before making the first stroke?

Until that Monday morning in 2033, the Sol LeWitt exhibit exists. You have 17 years remaining, but time moves more quickly than we like, doesn’t it? I have told you to see it, but will you make the trip to North Adams? Right now, for those who have not seen it, it’s Ai Weiwei’s Han vase in mid-drop. It’s just that the gravity is lighter, the fall slower. But the third photograph, the smashed pieces, is coming.

Do you fear the loss of that magical field of stars and scores of other wall-sized artworks? Or have you closed your eyes, meditated, and concluded: Even if I never get to North Adams, LeWitt’s recipes will still exist, and they are the true art.

*          *          *

In 2008, as Mass MoCA was constructing the Sol Lewitt exhibit, they also hosted an exhibit of the art of Spencer Finch. Finch was fascinated by Emily Dickinson, and wished to recreate the moments in which she looked out of her window, thinking and writing poetry. Could these ephemeral views be recaptured, made physical for us so many years later?

“Sunlight in an Empty Room (Passing Cloud for Emily Dickinson, Amherst, MA, August 28, 2004),” tried to do so. Finch used lighting and light filters to make a cloud of just the right wavelengths that Dickinson would have seen outside of her bedroom on a particular day.

You cannot capture a moment, you mutter softly, waving your hand, nor Emily Dickinson’s thoughts.

*          *          *

Open your favorite streaming music app, and search for the blockbuster 2013 song “Get Lucky.”

Make a playlist that includes the original Daft Punk version, which should come up as the first hit, but also add to the list three other covers of the song by artists you have never heard of, which you will find by scrolling down the search results page.

These versions exist because of something called a “compulsory license,” which means that by paying a defined fee to an agency, you are allowed to record a cover song without asking for, or receiving, permission from the artist who wrote it. The song becomes a recipe and you become the constructor.

Now visit a friend. Play the “Get Lucky” playlist on shuffle mode. When all four songs have been played, ask your friend to identify the original version. The guitar and bass and singing will sound surprisingly similar in each version. Your friend will probably ask, increasingly frantically: “Which is the one true song?”

Do not answer. Thank your friend, bow, and leave.

*          *          *

“Get Lucky” was co-written by Nile Rodgers, the mastermind behind some of the greatest pop music of the last 40 years, starting with Chic, the disco band that gave us infectious dance hits like “Good Times.” Shortly after “Good Times” was released as a single, the enterprising music producer Sylvia Robinson brought a funk band into a recording studio and had them copy Chic’s bassist Bernard Edwards’ memorable bass line from that song. She also sampled its string section. Adding some rappers no one had ever heard of before, she created “Rapper’s Delight,” which seemed laughable to those who really knew the inventive, emerging hip hop scene, but which rather effectively set rap music on a course for mainstream (and white) popularity.

Rodgers initially hated “Rapper’s Delight,” believing it was a wholesale copy of “Good Times,” and he and Edwards sued for copyright violation. Later, after he won and was listed as a co-writer of the song, he declared himself proud of “Rapper’s Delight.” He realized it was a brilliant theft that changed pop music forever, and yet didn’t diminish Chic’s original work.

“Rapper’s Delight” was far from the only hip hop song to borrow; in fact, the reuse of older recordings was standard within the new genre, and part of its enormous creativity. The technique reached its apogee in arguably the three seminal rap albums of the late 1980s: Public Enemy’s It Takes a Nation of Millions to Hold Us Back, De La Soul’s 3 Feet High and Rising, and Beastie Boys’ Paul’s Boutique. Each of these albums had over a hundred samples, mixing and matching from different genres to make sounds that were totally new.

They were large, you nod, they contained multitudes.

*          *          *

In 1992, the science fiction author William Gibson, who had coined the word “cyberspace,” released a new work entitled Agrippa (A Book of the Dead). The text was issued, most famously, in a deluxe edition on a 3.5” floppy disk encased in an artisanally crafted box. The disk would encrypt itself upon a single reading, so you only had one shot to read the text as it scrolled across your screen.

This Agrippa cost $2000, and only a very small number were made. Gibson publicly revelled in the work’s combination of the ephemeral and the valuable. He loved that the book, after viewing, would become like a television tuned to a dead channel.

Almost immediately, however, the text of Agrippa was surreptitiously released on an underground electronic bulletin board called MindVox. Anyone can now read it online, and view the deluxe packaging as well.

What is the nature of art, you consider, without its packaging? What is its value?

*          *          *

The British artist Damien Hirst is probably best known for putting a dead shark in a large tank of formaldehyde and giving it the existential title “The Physical Impossibility of Death in the Mind of Someone Living.” In 2007, he asked the jewelers who fabricate items for the British monarchy—scepters for the king—to make a human skull out of diamonds and platinum, based on a real skull he bought. The skull’s teeth were added to the final product. Hirst called this artwork “For the Love of God.” Many critics called it “tacky.”

But “For the Love of God” was as much an exercise in the finance that goes along with the contemporary art scene—where prices for works regularly head into eight or even nine figures at auction—as it was art. The fabrication of the skull apparently cost £14 million, and Hirst tried to sell the bejeweled skull to bidders for £50 million. Although there were rumors of a sale, ultimately there were no takers. A mysterious consortium then evidently bought the skull, but for less than £50 million, perhaps much less, and oddly, Hirst seemed to be one of the investors. Some analysts believe that Hirst actually lost money on the deal.

Once Upon a Time in Shaolin was also rumored to be for sale for a much higher number, perhaps as much as $5 million, but Shkreli ultimately bought it for $2 million, which is far less than the Wu-Tang Clan would make from a regular album release.

*          *          *

What is Once Upon a Time in Shaolin really worth? Is its scarcity its worth, and its worth its true art and value?

Once Upon a Time in Shaolin may not be as scarce as we imagine. It surely exists beyond the sole copy in Martin Shkreli’s apartment. It exists in the sense that members of Wu-Tang created it and still have its music in their heads and could likely recreate it if they wanted. Perhaps RZA is humming some of the songs in his shower right now. It exists as a recipe.

But it may also exist in actuality, albeit in pieces, like the wisps of a cloud. The master recordings may have been destroyed, but the way digital recording works means that elements of Once existed more than once on magnetic media and probably, somewhere, continue to exist regardless of what the Wu-Tang Clan has done with the completed master. Parts of the album can probably be dug up, like the scepter of an Egyptian king, or the disappearing poetry on a phosphorus screen.

If samples were used, they exist on other recordings; if a drum machine was used, those beats exist, identically, on many other machines. Any computers involved surely have files that have not been truly erased, and that could be dug up by digital archaeologists. There may be assembly to be done, and perhaps the final product would be different from the “original.” Or would it?

And perhaps too many traces of the full Once Upon a Time in Shaolin exist for it not to leak, just as Agrippa did.

Of course, then it will just be another stream of bits among the countless streams in our ephemeral era, severed from its unique packaging. It will take its place on millions of playlists, its songs sitting alongside tens of millions of other songs.

We will have gained something from Once’s liberation, but then we will have lost something as well.

*          *          *

The abstract artist Ellsworth Kelly, who recently died, was once asked about the nature of art. “I think what we all want from art is a sense of fixity, a sense of opposing the chaos of daily living,” he said, with more than a bit of Shaolin wisdom. “This is an illusion, of course.”

Equinox Software: Looking back, looking forward

planet code4lib - Mon, 2016-01-04 16:46

More than any previous year, 2015 was a year of growth and change for Equinox.

  • We forged new partnerships that will let us serve our customers more efficiently and effectively.
  • We helped some of the largest and busiest library consortia in the US grow and adapt, migrating more than 60 libraries to Sequoia-powered Evergreen instances.
  • We enhanced our core services, doubling our Sequoia Platform capacity and streamlining customization, command, and control capabilities.
  • We created numerous enhancements for Evergreen, including Message Center, Copy Alerts, and the cataloging and admin/reports modules of the completely rewritten web staff client.

All in all, 2015 set a high bar for the future.

It was also a time of growth for me personally. I find I’m happiest when I’m learning something new, and the past year did not disappoint in that regard. It’s been just a bit more than a year since I took on my new role as president. It’s been a great ride so far, and I have learned a lot. Having the opportunity to set the direction for the place you work is a rare thing, and can be overwhelming. The most important ingredient for success is the people you work with and learn from to make your shared vision come true. It has been exceptionally rewarding to stand shoulder to shoulder with the amazing team at Equinox, solving problems and laying the groundwork for bigger things to come. Looking back at what we’ve accomplished in the last year, I’m very proud of the work we’ve done for our customers. Even more, though, I’m proud of the company we are continuing to build together.

What will 2016 hold for Equinox? More of the same if I can help it.

FOSS4Lib Upcoming Events: Fedora 4 Workshop

planet code4lib - Mon, 2016-01-04 16:20
Date: Monday, February 22, 2016 - 08:00 to 17:00
Supports: Fedora Repository

Last updated January 4, 2016. Created by Peter Murray on January 4, 2016.

From the announcement:

The 11th International Digital Curation Conference (IDCC) with the theme “Visible data, invisible infrastructure” will take place in Amsterdam February 22-25, 2016 [1]. The conference provides an opportunity for individuals, organizations and institutions across all disciplines and domains involved in curating data to get together with like-minded data practitioners to discuss policy and practice.

Hydra Project: Sufia 6.5.0 released

planet code4lib - Mon, 2016-01-04 10:29

A belated announcement (the holiday period got in the way!) that Sufia 6.5.0 has been released.

6.5.0 is a small release adding a configuration option to facet on collections and fixing a bug in 6.4.0.

See the release notes [1] for upgrade instructions and details. Thanks to the 4 contributors to this release, which comprised 11 commits touching 18 files: Adam Wead, Anna Headley, Lynette Rayle, and Justin Coyne.


[1] https://github.com/projecthydra/sufia/releases/tag/v6.5.0

Terry Reese: Happy Holidays: MarcEdit Update

planet code4lib - Mon, 2016-01-04 04:55

Over the past few years, holiday updates have become a part of a MarcEdit tradition. This year, I’ve spent the past month working on two significant sets of changes. On the Windows side, I’ve been working on enhancing the Linked Data tools, profiling more fields and more services. This update represents a first step in the process, as I’ll be working with the PCC to profile additional services and add new elements as we work through a pilot test around embedding linked data into MARC records and its potential implications. For a full change list, please see: http://blog.reeset.net/archives/1822

The Mac version has seen a lot of changes – and because of that, I’ve moved the version number from 1.3.35 to 1.4.5.  In addition to all the infrastructure changes made within the Windows/Linux program (the tools share a lot of code), I’ve also done significant work exposing preferences and re-enabling the ILS Integration.  I didn’t get to test the ILS integration well – so there may be a few updates to correct problems once people start working with them – but getting to this point took a lot of work and I’m glad to see it through.  For a full list of updates on the Mac Version, please see: http://blog.reeset.net/archives/1824

Before Christmas, I’d mentioned that I was working on three projects – with the idea that all would be ready by the time these updates were complete.  I was wrong – so it looks like I’ll have one more Christmas/New Years gift left to give – and I’ll be trying to wrap that work up this week.

Downloads – you can pick up the new downloads at: http://marcedit.reeset.net/downloads or if you have the automatic update notification enabled, the tool should provide you with an option to update from within the program.

This represents a lot of work, and a lot of changes.  I’ve tested to the best of my ability – but I’m expecting that I may have missed something.  If you find something, let me know.  I’m saving time over the next couple weeks to fix problems that might come up and turn around builds faster than normal.

Here’s looking forward to a wonderful 2016.

–tr

Jeremy Frumkin: Meaningful Web Metrics

planet code4lib - Mon, 2016-01-04 03:10

This article from Wired magazine is a must-read if you are interested in more impactful metrics for your library’s web site. At MPOE, we are scaling up our in-house web product expertise, but regardless of how much we invest in staffing, the amount of requested web support will likely always exceed the resourcing we have for it. Leveraging meaningful impact metrics can help us understand the value we get from the investment we make in our web presence, and more importantly, help us define what types of impact we want to achieve through that investment. This is no easy feat, but it is good to see that others in the information ecosystem are looking at the same challenges.

DuraSpace News: REGISTER: Fedora 4 Workshop at International Digital Curation Conference

planet code4lib - Mon, 2016-01-04 00:00

Winchester, MA  The 11th International Digital Curation Conference (IDCC) with the theme “Visible data, invisible infrastructure” will take place in Amsterdam February 22-25, 2016 (http://www.dcc.ac.uk/events/idcc16). The conference provides an opportunity for individuals, organizations and institutions across all disciplines and domains involved in curating data to get together with like-minded data practitioners to discuss policy and practice.

FOSS4Lib Upcoming Events: Hydra Connect Regional Meeting at UC Santa Barbara

planet code4lib - Sun, 2016-01-03 20:02
Date: Friday, February 26, 2016 - 08:00 to 17:00
Supports: Hydra

Last updated January 3, 2016. Created by Peter Murray on January 3, 2016.

Hydra Connect Regional Meeting at UC Santa Barbara event announcement.

Eric Hellman: The Best eBook of 2015: "What is Code?"

planet code4lib - Sun, 2016-01-03 14:25
When the Compact Disc was being standardized, its capacity was set to accommodate the length of Beethoven's Ninth Symphony, reportedly at the insistence of Sony executive Norio Ohga. In retrospect it seems obvious that a media technology should adapt to media art wherever possible, not vice versa. This is less possible when new media technologies enable new forms of creation, but that's what makes them so exciting.

I've been working primarily on ebooks for the past 5 years, mostly because I'm excited at the new possibilities they enable. I'm impressed - and excited - when ebooks do things that can't be done for print books, partly because ebooks often can't capture the most innovative uses of ink on paper.

Looking back on 2015, there was one ebook more than any other that demonstrated the possibilities of the ebook as an art form, while at the same time being fun, captivating, and awe-inspiring: Paul Ford's What Is Code?

Unfortunately, today's ebook technology standards can't fully accommodate this work. The compact disc of ebooks can store only three and a half movements of Beethoven's Ninth. That makes me sad.

You might ask, how does What Is Code? qualify as an ebook if it doesn't quite work on a Kindle or your favorite ebook app? What Is Code? was conceived and developed as an HTML5 web application for Business Week magazine, not with the idea of making an ebook. Nonetheless, What Is Code? uses the forms and structures of traditional books. It has a title page. It has chapters, sections, footnotes and a table of contents which support a linear narrative. It has marginal notes, figures and asides.

Despite its bookishness, it's hard to imagine What Is Code? in print. Although the share buttons and video embeds are mostly adornments for the text, activity modules are core to the book's exposition. The book is about code, and by bringing code to life, the reader becomes immersed in the book's subject matter. There's a greeter robot that waves and seems to know the reader, showing the ebook's “intelligence”. The “how do you type an ‘A’” activity in section 2.1 is a script worth a thousand words, and the “mousemove” activity in section 6.2 is a revelation even to an experienced programmer. If all that weren't enough, there's a random, active background that manages to soothe more than it distracts.

Even with its digital doodads, What Is Code? can be completely self contained and portable. To demonstrate this, I've packaged it up and archived it at Internet Archive; you can download with this link (21MB).  Once you've downloaded it, unzip it and load the "index.html" file into a modern browser. Everything will work fine, even if you turn off your internet connection. What Is Code? will continue to work after Business Week disappears from the internet (or behind the most censorious firewall). [1]

I was curious how much of What Is Code? could be captured in a standard EPUB ebook file. I first tried making an EPUB version 2 file with Calibre. The result was not as lame as I thought it would be, but stripped of interactivity, it seemed like a photocopy of a sticker book - the story's there, but the fun, not so much. Same with the Kindle version.

I hoped that more of the scripts would work in an EPUB 3 file. This is more or less the same as the zipped HTML file I made, but I was unable to get it to display properly in iBooks despite 2 days of trying. Perhaps someone more experienced with javascript in EPUB 3 could manage it. The display in Calibre was a bit better. Readium, the flagship software for EPUB 3, just sat there spinning a cursor. It seems that the scripts handling the vertical swipe convention of the web conflict with the more bookish pagination attempted by iBooks.

The stand-alone HTML zip archive that I made addresses most of the use cases behind EPUB. The text is reflowable and user-adjustable. Elements adjust nicely to the size of the screen from laptop to smartphone. The javascript table of contents works the same as in an ebook reader. Accessibility could be improved, but that's mostly a matter of following accessibility standards that aren't specific to ebooks.
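For reference, the EPUB wrapper being compared against is itself just a constrained zip of HTML. This is a minimal sketch of building an EPUB 3 container around one page (the file names, identifier, and metadata are illustrative; a fully valid file needs more, such as a dcterms:modified timestamp):

```python
import zipfile

def make_epub(path, xhtml_body):
    """Write a minimal EPUB 3 container holding one XHTML page."""
    container = """<?xml version="1.0" encoding="UTF-8"?>
<container version="1.0" xmlns="urn:oasis:names:tc:opendocument:xmlns:container">
  <rootfiles>
    <rootfile full-path="package.opf" media-type="application/oebps-package+xml"/>
  </rootfiles>
</container>"""
    opf = """<?xml version="1.0" encoding="UTF-8"?>
<package xmlns="http://www.idpf.org/2007/opf" version="3.0" unique-identifier="uid">
  <metadata xmlns:dc="http://purl.org/dc/elements/1.1/">
    <dc:identifier id="uid">urn:example:what-is-code</dc:identifier>
    <dc:title>What Is Code?</dc:title>
    <dc:language>en</dc:language>
  </metadata>
  <manifest><item id="main" href="main.xhtml" media-type="application/xhtml+xml"/></manifest>
  <spine><itemref idref="main"/></spine>
</package>"""
    with zipfile.ZipFile(path, "w") as z:
        # The container spec requires "mimetype" first, stored uncompressed.
        z.writestr("mimetype", "application/epub+zip",
                   compress_type=zipfile.ZIP_STORED)
        z.writestr("META-INF/container.xml", container)
        z.writestr("package.opf", opf)
        z.writestr("main.xhtml", xhtml_body)

make_epub("demo.epub",
          "<html xmlns='http://www.w3.org/1999/xhtml'><body><p>Hi</p></body></html>")
```

Rename the extension from .epub to .zip and any archive tool will open it, which is part of why a plain zipped-HTML archive covers so many of the same use cases.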

My experimentation with the code behind What Is Code? is another exciting aspect of making books into ebooks. Code and digital text can use open licenses [2] that permit others to use, re-use, and learn from What Is Code?. The entire project archive is hosted on GitHub and to date has been enhanced 671 times by 29 different contributors. There have been 187 forks (like mine) of the project. I think this collaborative creation process will be second nature to the ebook of the future.

There have been a number of proposals for portable HTML5 web archive formats for ebook technology moving forward. Among these are “5DOC” and the W3C's “Portable Web Platform.” As far as I can tell, these proposals aren't getting much traction or developer support. To succeed, a format has to be very lightweight and useful, or be supported by at least two of Amazon, Apple, and Google. I hope someone succeeds at this.

Whatever happens I hope there's room for Easter Eggs in the future of the ebook. There's a "secret" keyboard combination that triggers a laugh-out-loud Easter Egg on What is Code? And if you know how to look at What Is Code?'s javascript console, you'll see a message that's an appropriate ending for this post:


Best of 2015, don't you agree?

[1] To get everything in What Is Code? to work without an internet connection, I needed to bundle a small number of remotely loaded resources into the archive and fix a few small javascript bugs specific to loading from a file. (If you must know, pushing to the document.history of a file isn't allowed.) The YouTube embed is blank, of course, and a horrible, gratuitous Flash widget needed to be excised. You can see the details on GitHub.

[2] In this case, the Apache License and the Creative Commons By-NC-ND License.

Patrick Hochstenbach: Portraits for Sktchy

planet code4lib - Sun, 2016-01-03 14:17
Filed under: Figure Drawings, Sketchbook Tagged: brush, illustration, Photoshop, sktchy

Terry Reese: MarcEdit: Build Links Data tool enhancements

planet code4lib - Sat, 2016-01-02 20:54

I’ve been working with the PCC Linked Data in MARC Task Group over the past couple of months, and as part of this process, I’ve been working on expanding the values that can be recognized by the Linking tool in MarcEdit.  As those that have used it might remember, MarcEdit’s linking tool showed up about a year and a half ago, and leverages id.loc.gov, MESH, and VIAF (primarily).  As part of this process with the PCC, a number of new vocabularies and fields have been added to the tool’s capacity.  This has also meant creating profiles for linking data in both bibliographic and authority data. 

The big changes come in the range of indexes now supported by the tool (if defined within the record).  At this point, the following vocabularies are profiled for use:

  1. NAF
  2. LCSH
  3. LCSH Children
  4. MESH
  5. ULAN
  6. AAT
  7. LCGFT
  8. AGROVOC
  9. LCMPT
  10. LCDGT
  11. TGM
  12. RDA Vocabularies

The data profiled has also expanded beyond just 1xx, 6xx, and 7xx data to include 3xx data and data unique to the authority data.

This has required changing the interface slightly:

But I believe that I have the bugs worked out.  This function will be changing often over the next month or so as the PCC utilizes this and other tools while piloting a variety of methods for embedding linked data into MARC records and considering the implications.  As such, I’ll be adding to the list of profiled data over the coming month – however, if you use a specific vocabulary and don’t see it in the list, let me know.  I can profile it as long as the resource provides a set of APIs that can support a high volume of queries (it cannot be a data dump – that doesn’t work for client applications; at this point, the profiled resources would require users to download almost 12 GB of data almost monthly if I went that route).
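To give a sense of what "a set of APIs" means in practice, here is a rough Python sketch of the kind of label-to-URI resolution a linking tool can do against id.loc.gov's known-label service. This is illustrative only – the vocabulary map is a made-up subset, and this is not MarcEdit's actual code:

```python
from urllib.parse import quote
from urllib.request import Request, urlopen

# Illustrative subset only; the real tool profiles many more vocabularies.
VOCABULARIES = {
    "LCSH": "authorities/subjects",
    "NAF": "authorities/names",
    "LCGFT": "authorities/genreForms",
}

def known_label_url(vocabulary, label):
    """Build a URL for id.loc.gov's known-label resolution service."""
    return "https://id.loc.gov/%s/label/%s" % (
        VOCABULARIES[vocabulary], quote(label))

def resolve_label(vocabulary, label):
    """Resolve a heading to its authority URI; returns None on a miss.

    A hit answers with an X-URI response header, so a HEAD request is
    enough -- no bulk data download required.
    """
    req = Request(known_label_url(vocabulary, label), method="HEAD")
    try:
        with urlopen(req) as resp:
            return resp.headers.get("X-URI")
    except OSError:
        return None
```

Because each lookup is one lightweight request, this style of service scales to the high-volume querying a client application needs.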

Questions…let me know.

Terry Reese: MarcEdit: Generating Sanborn Cutters.

planet code4lib - Sat, 2016-01-02 20:42

Sometime this past month, I was asked if there was a way to automate the batch processing of Sanborn Cutters.  OCLC’s Connexion provides a handy set of methods for doing this if you are an OCLC member.  It’s wrapped up in a nifty library and provides access to the current Sanborn Table 4 Cutters (which I believe are under the control of OCLC). For most users, this is probably what they want to use — and at some point I’d be interested in seeing if OCLC might be willing to let me link to this particular library to provide a set of batch tools around Sanborn Cutter creation for users of the Four Figure table.  However, the Three Figure Tables were published long before 1921 (the copy I’m using dates to 1904) — so I decided to provide a tool for batch creation of Sanborn Table 3 Cutters.

I guess before we go further, I’m not particularly familiar with this set of cutters.  I’ve only cataloged in an academic library, so I’m primarily familiar with LC’s cuttering methodology — so I’ll be interested in hearing if this actually works.

Ok, with that out of the way — this tool is available from within the MarcEditor.  The assumptions here are that:

  1. You have a call number stem (assuming Dewey).  The tool defaults to assuming that you are looking to cutter on (in order of preference) the 1xx, 245, 243, or 240 fields.  However, if you want to cutter on a different value (say a 600), you can identify the cuttering field and provide data to be queried to select the correct cuttering field (in case there are multiple values).
  2. That you know what you are doing (because I don’t :))

To run the tool, have the file you want to process in the MarcEditor.  Open the MarcEditor and select Tools/Cuttering Tools/Sanborn Table 3 Cutters

MarcEdit Mac:

MarcEdit Windows/Linux:

When you select the value — you see the following window:

MarcEdit Mac:

MarcEdit Windows/Linux

As you can see, I’ve tried to keep the function identical across the various platforms.  In the first textbox, you enter the field to evaluate.  Again, the tool assumes that you have the start of a call number in your record.  So, for LC records (when generating LC cutters) that would be 090$a, 099$a, or 050$a; for Dewey, 082 or 092.  You then select how the cutter will be generated.

A couple of words about how this works.  Cuttering is generated off of a set of tables.  The process works by looking for either an exact match in the cutter table or the closest match within the cutter table.  In my testing, it looks like the process that I’m employing will work reliably — but text data can be weird — so you’ll have to let me know if you see problems.

When you process the data — the program will insert a $b into the defined call number field based on the information it determined best represents the information in the cutter tables.
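For the curious, the closest-match lookup described above can be sketched in a few lines of Python. This is a hypothetical illustration (the sample table fragment is invented, not the real Sanborn table), but it shows the bisection approach:

```python
import bisect

# An invented table fragment; the real Sanborn three-figure table is far
# larger. Pairs are (entry, cutter number), sorted by entry.
SAMPLE_TABLE = [("sa", "11"), ("sch", "23"), ("se", "31"), ("sm", "55")]

def cutter_for(name, table):
    """Return the cutter number for `name` from a sorted cutter table.

    On an exact match, use that entry; otherwise fall back to the
    closest *preceding* entry, the usual convention when working from
    a printed cutter table.
    """
    keys = [entry for entry, _ in table]
    i = bisect.bisect_right(keys, name.lower())
    return table[max(i - 1, 0)][1]
```

With this sample table, `cutter_for("Schmidt", SAMPLE_TABLE)` lands on the `sch` entry and returns `"23"`.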

–tr

Terry Reese: MarcEdit Mac: New Preferences

planet code4lib - Sat, 2016-01-02 20:20

One of the parts of the MarcEdit Mac port that has been lagging is the ability to manage a number of the preferences that MarcEdit utilizes when running the program.  Originally, I exposed the preferences for the MARCEngine and the Updates.  As part of the next update, I’ve included options to handle preference settings for the MarcEditor, the Locations, the miscellaneous settings, and the ILS Integration.  The set still isn’t as robust as the Windows/Linux version — but part of that is because some of the options are not applicable; more likely, though, they just weren’t among the options most commonly asked for.  I’ll be working on adding the remainder through January.

MarcEditor Preferences:

Locations:

Other Settings:

ILS Integration:

Karen Coyle: Elkins Park

planet code4lib - Sat, 2016-01-02 17:11
Many of you will have heard the name "Elkins Park" for the first time this week as the jurisdiction of the Bill Cosby indictment. It is undoubtedly the most famous thing that Elkins Park has been known for. However, it has a connection to books and libraries that those of us involved with books and libraries should celebrate as a counter to this newly acquired notoriety.

The Elkins family was one of the 19th century's big names in Pennsylvania. William Lukens Elkins was one of the first "oil barons" whose company was the first to produce gasoline, just in time for the industrial and transportation revolution that would use untold gallons of the stuff. His business partner was Peter Widener, and the two families were intertwined through generations, their Philadelphia mansions built across the street from each other.

Eleanor Elkins, daughter of WL Elkins, married George Widener, son of Peter Widener, essentially marrying the two families. Eleanor and George had a son, Harry. Unfortunately, they were rich enough to book passage on the maiden voyage of the Titanic in 1912. George and Harry perished; Eleanor Elkins Widener survived.

Harry Widener had been an avid book collector, sharing this interest with his best friend, William McIntire Elkins. Harry had graduated from Harvard, as had his friend WM Elkins, and his will instructed his mother to donate his collection to Harvard, "to be known as the Harry Elkins Widener Collection." His mother took it one step further and funded the creation of a new library that would house not only her son's collection but also the entire Harvard library collection. Yes, I am talking about the now famous Widener Library.

His friend, William McIntire Elkins, lived until 1947, and during his lifetime amassed a huge rare book collection. He was particularly interested in Dickensiana, which included not only first serially published editions of Dickens' works, but also Dickens' desk, candlesticks, ink well, etc. His will left his entire collection to the Free Library of Philadelphia. Well, it turned out that it was not only his book collection, but the entire room, which was moved into the library, looking much as it did during Elkins' life.

Elkins Park is named, of course, for the Elkins family whose massive estate held an impressive array of mansions and grounds. Much of the estate has been divided up and sold off, but portions remain.

So that's the story of Elkins Park and how it fits into libraries and the rarified world of rare books. But I have another small bit to add to the story. In 1947, at the time of his death, John and Eleanor King, my grandparents, were working for (as he was known in my family) "old man Elkins" -- that is William McIntire Elkins. My grandfather was gardener and chauffeur for Elkins, and my grandmother was (as she called it) the executive of the household. When the library was transferred to the Free Library of Philadelphia, photographs were taken out the windows so that the "view" could be reproduced. Those photographs show the grounds that my grandfather cared for, and the room itself was undoubtedly very familiar to my grandmother (although she would never admit to having done any dusting with her own hand). A small amount bequeathed to them in Elkins' will allowed them to own their own property for the first time, just a few acres, but enough to live on, with sheep and chickens and a single steer. I have early memories of that farm, and a few family photos. Little did any of us know at the time that I would reconnect all of this because of books.

Terry Reese: MarcEdit Mac Update ChangeLog

planet code4lib - Sat, 2016-01-02 05:04

I’ve been working hard over the last month and a half trying to complete the process of porting functionality into the OSX version of MarcEdit.  I’ve completed the vast majority of this work, in addition to bringing in a number of other changes.  These changes will be made available as part of the 1/3/2016 update.  The changes in this update will be as follows:

  • Bug Fix: RDA Helper — 260/264 changes were sometimes not happening when running across specific 260$c formatting.
  • Bug Fix: MARCValidator: Validator was crashing when records would go beyond 150,000 bytes.  This has been corrected.
  • Bug Fix: Build Links Tool — MESH headings were utilizing older syntax and occasionally missing values.
  • Bug Fix: Validation Headings tool: When checking Automatically Correct variants, the embed URIs tool is automatically selected.  This has been corrected.
  • Bug Fix: Edit XML Functions: In the modify option, the save button was incorrectly turned on.  This has been corrected.
  • Enhancement: Build Links: The build links tool used to use the OpenSearch API for id.loc.gov resolution.  This was changed to work like the validate headings tool and provide more consistent linking.
  • Enhancement: Most of MarcEdit’s preferences have been exposed.
  • Enhancement: Build Links Tool – I’ve added profiles for a wide range of vocabularies being tested by the PCC Linked Data task force.  These are available.
  • Enhancement: Build Links Tool — Profiled services are found under a link.
  • Enhancement: Build Links Tool — Task management options have been added for the new validate options.
  • Enhancement: MarcEditor: Generate Cutters: LC cutter generation has been updated.
  • Enhancement: MarcEditor: Generate Sanborn Cutters: Added function to generate Sanborn Table 3 Cutters.
  • Enhancement: ILS Framework — MarcEdit’s ILS framework options were added.
  • Enhancement: Koha Integration: Koha Integration options were added to the tool.

This doesn’t complete the function migration, but it’s close.  These changes will be part of the 1/3/2016 update.  I’ll be working to add a few YouTube videos to document new functions.  Let me know if you have questions.

Terry Reese: MarcEdit 6.2 Windows/Linux ChangeLog

planet code4lib - Sat, 2016-01-02 05:04

Over the past month, I’ve been working hard to make a few MarcEdit Changes.  These changes will be released on 1/3/2016.  This update will include a version number change to 6.2.  This update will have the following changes:

  • Bug Fix: RDA Helper — 260/264 changes were sometimes not happening when running across specific 260$c formatting.
  • Bug Fix: MARCValidator: Validator was crashing when records would go beyond 150,000 bytes.  This has been corrected.
  • Bug Fix: Build Links Tool — MESH headings were utilizing older syntax and occasionally missing values.
  • Bug Fix: Tutorials Link pointed to dead endpoint.  Corrected.
  • Bug Fix: 006/007 Menu Selection: The incorrect form was being selected when choosing the Serial and Cartographic materials options.  This has been corrected.
  • Bug Fix: Validation Headings tool: When checking Automatically Correct variants, the embed URIs tool is automatically selected.  This has been corrected.
  • Bug Fix: MarcEditor Find: When selecting edit query, the find box goes to the Replace dialog.  This has been corrected.
  • Bug Fix: Harvest OAI Records: If the harvester.txt file isn’t present, an unrecoverable error occurs.  This has been corrected.
  • Bug Fix: MarcEditor Task List: When you have a lot of tasks, the list of available tasks may not refresh on first run.  I believe I’ve corrected this.
  • Enhancement: Build Links: The build links tool used to use the OpenSearch API for id.loc.gov resolution.  This was changed to work like the validate headings tool and provide more consistent linking.
  • Enhancement: Preferences: Under File preferences, you can set the default drive for the information in the MARC Tools source and output textboxes.
  • Enhancement: Build Links Tool – I’ve added profiles for a wide range of vocabularies being tested by the PCC Linked Data task force.  These are available.
  • Enhancement: Build Links Tool — Profiled services are found under a link.
  • Enhancement: Build Links Tool — Task management options have been added for the new validate options.
  • Enhancement: MarcEditor: Generate Cutters: LC cutter generation has been updated.
  • Enhancement: MarcEditor: Generate Sanborn Cutters: Added function to generate Sanborn Table 3 Cutters.

This update will be posted 1/3/2016. I’ll be working to add a few YouTube videos to document new functions. 

M. Ryan Hess: Return to Windows

planet code4lib - Sat, 2016-01-02 00:01

There’s a Windows machine back in my house. That’s right, after 14 years of Mac OS, I’ve shifted my OS back to Windows…on my primary computer!

Windows? WTF?

So, Mac OSX is still a superior operating system. The gap between Windows and OSX has shrunk considerably with the launch of Windows 10, but a shrinking gap is hardly a good reason to leave behind the most simple, well-designed and usable OS out there.

But Apple is steadily closing the noose on what computer users can do with their machines and this has really rubbed me the wrong way.

Besides, I had a dream. A dream to build a dream machine, that is. I wanted to build my own ‘Adobe Machine’ for home use and also be able to swap out hardware over time. In Apple’s ultra-controlled ecosystem, building such a device would be very, very costly and would also fail to really expand over time. And for very practical reasons, relying on a finicky Hackintosh was out of the picture.

So, fed up with the self-imposed limitations of Mac, I went back to Windows…and this is my experience.

First Impressions

So, the design of Windows 10 is actually quite pleasant. The modern ‘Metro’ UI is clean and attractive (I only wish it were applied uniformly across the OS–more on that later).

The Start (menu) is actually a great way to tuck all of your most important apps out of sight. And I love that it’s flexible, allowing you to organize apps and folders however you want. There are even ways to label and group apps however you wish. The librarian in me sings with these kind of organizational features.

I’ve found that I actually use the Start Menu as a replacement for not only my Desktop but also the Task Bar, which I only keep around so I have the clock visible.

Maybe it’s the OSXer in me, but there are parts of Windows 10 that feel like redundant re-thinks of more familiar features. For example, the Action Center has quick access icons for things like VPN and creating Notes, all of which, one would expect would be handled by the Start Menu. There’s also the little arrow-thingy on the task bar where certain background apps live. Why?

An Unfinished OS?

As I began customizing and exploring Windows 10, I began to realize that Microsoft must have pushed Windows 10 out the door before the paint was dry. There are odd discontinuities where the pleasantly designed Metro aesthetic ends and you’re suddenly thrown into some god-awful old-school Windows environment. This happens often in the Settings panel, for example, once you get a couple levels down.

Uh, guys, the Metro thing really works. Did you not have time to reskin the old Windows 7/XP UI sections? Please do this soon. It’s like you drove up in a super sweet ride, with designer shades on your face and then you get out of the car and you’re not wearing pants! Actually, you’re wearing tighty-whities.

Also, what’s up with the VPN workflow? As it currently works, it takes no less than four clicks to connect to my VPN. This should be one or two clicks, really. Please fix.

There’s a very nice dark theme, but, alas, it only applies to certain top-level sections of the OS. The File Explorer (a heavily used part of the UI), actually does not inherit the dark theme. There are hacks out there, but seriously, this should be as universal as setting your color scheme.

Can’t wait for Windows 10 to get all grow’d up.

Privacy

I’m going to write an entire blog on this, but Privacy is the biggest issue with this OS. Readers of my blog will know my personal feelings on this issue run strong. So I spent considerable time fighting Microsoft’s defaults, configuring privacy settings, messing with the registry (really?) and even doing a few hacks to lock this computer down.

Microsoft is really doing a number on its users. Windows 10 users are handing over unconscionable amounts of personal information to Microsoft’s servers, their advertising partners and, if this info ever gets hacked (won’t happen, right?), to whoever wants to do a number on Windows 10 users.

Anyway, needless to say, I had to forgo using Cortana, which is sad because I’m very interested in these kinds of proto-AI tools. But as long as they’re phoning home, I just unplug them. Did the same to all the “Modern Apps” like Maps, News, etc.

Bottom Line

Breaking up with OSX was actually not as painful as I had expected. And I’m really enjoying Windows 10, save for a few frustration points as outlined above. Overall, it’s well worth the trade offs.

And my Dream Machine, which I christened Sith Lord (because it’s a big, dark beast), is running Adobe CC, rendering at light speed and could probably do the Kessel Run in less than 12 Parsecs.


John Mark Ockerbloom: Public Domain Day 2016: Freezes and thaws

planet code4lib - Fri, 2016-01-01 16:37

For most of the past 55 years, the public domain in the United States has gone through a series of partial or complete freezes.  We’ve gotten used to them by now.  A thaw is coming soon, though, if there are no further changes in US copyright terms.  But right now, our government is trying to export freezes abroad, and is on the brink of succeeding.   And our own thaw is not yet a sure thing.

The freezes began in 1962, when Congress extended the length of copyright renewal terms in anticipation of an overhaul of copyright law.  Copyrights from 1906 that had been expiring over the course of that year stopped expiring.  The first extension was for a little over 3 years, but Congress kept passing new extensions before the old extensions ran out, until the 1976 Copyright Act established new, longer terms for copyright.  The 1906 copyrights that were frozen in 1962 would not enter the public domain until the start of 1982.

The freeze of the public domain in the 1960s and 1970s wasn’t complete.  Unrenewed copyrights continued to expire after 28 years, and works published without a copyright notice entered the public domain right away.  In 1982, all the traditional routes to the public domain were open again: age, non-renewal, publication without notice, and so on.  But that would only last about 7 years.   In 1989, the non-notice route was frozen out: from then on, anything published, or even written down, was automatically copyrighted, whether the author intended that or not.  In 1992, the non-renewal route was frozen out: copyrights would automatically run a full term whether or not the author or their heirs applied for a renewal.  In 1996, many non-US works were removed from the public domain, and returned to copyright, as if they had always been published with notice and renewals.  And finally in 1998, copyright expiration due to sheer age was also frozen out.  Due to a copyright extension passed that year, no more old published works would enter the public domain for another 20 years.  The freeze of the public domain became virtually complete at that point, with the trailing edge of copyrights stuck at the end of 1922.  It’s still there today.

But a thaw is in sight.  Just 3 years from now, in 2019, works published in 1923 that are still under copyright are scheduled to finally enter the public domain in the US.  Assuming we manage to stop any further copyright extensions, we’ll see another year’s worth of copyrights enter the public domain every January 1 from then on– just as happens in many other countries around the world.  Today, in most of Europe, and other countries that follow life+70 years terms, works by authors who died in 1945 (including everyone who died in World War II) finally enter the public domain.  In Canada, and other countries that follow the life+50 years terms of the Berne Convention, works by authors who died in 1965 enter the public domain.  The Public Domain Review shows some of the more famous people in these groups, and there are many more documented at Wikipedia.
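The date arithmetic behind these terms is simple enough to write down. Here is a hedged Python sketch (treating terms as running through the end of the calendar year, and ignoring the many special cases in real copyright law):

```python
def pd_year_life_plus(death_year, term=70):
    """First year a work is in the public domain under a "life + term"
    regime: protection runs through the end of the calendar year that
    falls `term` years after the author's death, so the work opens on
    January 1 of the following year."""
    return death_year + term + 1

def pd_year_us_published(pub_year):
    """First public-domain year for an old US published work that kept
    its copyright for the full 95-year term."""
    return pub_year + 95 + 1
```

This reproduces the dates above: authors who died in 1945 under life+70, and those who died in 1965 under life+50, both open up in 2016, and 1923 US publications open in 2019.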

But this may be the last year for a long while that people in Canada, and some other countries, see new works enter the public domain.  This past year, trade representatives from Canada, the US, and various other countries approved the Trans-Pacific Partnership, an agreement that includes a requirement pushed by the US to extend copyrights to life+70 years.  Those extensions would take place as soon as the TPP is ratified by a sufficient number of governments. In Canada, New Zealand, Japan, Malaysia, Brunei, and Vietnam, that would mean a 20-year freeze in the public domain, potentially coming into effect just before the US’s 20-year near-total freeze is scheduled to end.

Supporters of the public domain should not take either the pending freezes or the pending thaws for granted.  When the TPP was agreed on this past October, the leaders of the US and Canadian governments  were strong TPP supporters.  But the government of Canada has changed since then, and it looks like the US government might not put TPP to a vote until after the 2016 elections.  Canada’s new government, and some of the leading US candidates, seem to be more on the fence about TPP than their predecessors.  Organized public action could well shift their stance, in either direction.

While we’re awaiting a thaw in the US, we can still map out and digitize more of the public domain we have.  HathiTrust has been doing a wonderful job opening access to hundreds of thousands of post-1922 public domain books via its copyright review activities.   But other categories of unrenewed copyrights are not yet as well lit up.  For instance, Duke’s summary of the 1959 copyrights that could have been expiring today mentions 3 scholarly journals– Nature, Science, and JAMA, whose 1959 articles are behind paywalls at their publishers’ sites.  But it turns out that none of those journals renewed copyrights for their 1959 issues — the first issue to be renewed of any of them was the January 9, 1960 issue of JAMA — so we can digitize and open access to much of that content without waiting for the publishers to do so.

In the next three years, I’d love to see digital projects in the US make the post-1922 public domain as visible and comprehensive online as the pre-1923 public domain is now.  And then, if we ensure the thaw comes on schedule in the US, and we stave off freezes elsewhere, I hope we can quickly make another full year’s worth of public domain available every New Year’s Day.  Maybe once we get used to that happening in the US, we’ll be less likely to allow the public domain to freeze up again.
Happy Public Domain Day!  May we all soon have ample reason to celebrate it every year, all around the world.

 


Eric Hellman: A New Year's Resolution for Publishers and Libraries: Switch to HTTPS

planet code4lib - Fri, 2016-01-01 02:36
The endorsement list for the Library Digital Privacy Pledge of 2015-2016 is up and ready to add the name of your organization. We added the "-2016" part, because various things took longer than we thought.

Everything takes longer than you think it will. Web development, business, committee meetings, that blog post. Over the past few months, I've talked to all sorts of people about switching to HTTPS. Librarians, publishers, technologists. Library directors, CEOs, executive editors, engineering managers. Everyone wants to do it, but there are difficulties and complications, many of them small and some of them sticky. It's clear that we all have to work together to make this transition happen.

The list will soon get a lot longer, because a lot of people wanted to meet about it at the ALA Midwinter meeting just 1 week away OMG it's so soon! Getting it done is the perfect New Year's resolution for everyone in the world of libraries.

Here's what you can do:

If you're a Publisher...

... you probably know you need to make the switch, if for no other reason than the extra search engine ranking. By the end of the year, don't be surprised if non-secure websites look unprofessional, which is not what a publisher wants to project.

If you're a Librarian...

... you probably recognize the importance of user privacy, but you're at the mercy of your information and automation suppliers. If those publishers and suppliers haven't signed the pledge, go and ask them why not. And where you control a service, make it secure!

If you're a Library Technology Vendor...

... here's your opportunity to be a hero. You can now integrate security and privacy into your web solution without the customer paying for certificates. So what are you waiting for?

If you're a Library user...

... ask your library if their services are secure and private. Ask publishers if their services are immune to eavesdropping and corruption. If those services are delivered without encryption, the answer is NO!
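If you want to check a service yourself, a quick, rough test can be scripted. The Python sketch below is illustrative only (a real audit would also look at certificates, mixed content, and HSTS); it checks whether a host answers over HTTPS and whether plain HTTP redirects there:

```python
from urllib.parse import urlparse
from urllib.request import urlopen

def host_of(url):
    """Extract the hostname, accepting bare hosts like 'example.org'."""
    return urlparse(url if "//" in url else "//" + url).netloc

def serves_https(url, timeout=10):
    """Return (https_ok, http_redirects): does the site answer over
    HTTPS, and does its plain-HTTP address redirect there?"""
    host = host_of(url)
    https_ok = http_redirects = False
    try:
        urlopen("https://" + host, timeout=timeout)
        https_ok = True
    except OSError:
        pass
    try:
        final_url = urlopen("http://" + host, timeout=timeout).geturl()
        http_redirects = final_url.startswith("https://")
    except OSError:
        pass
    return https_ok, http_redirects
```

A service that fails both checks is delivering content in the clear, and the answer to "is it private?" is NO.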

Everything takes longer than you think it will. Until it happens faster than you can imagine. Kids grow up so fast!
