news aggregator

Corrado, Ed: Contoocook Lake in Jaffrey, NH, Track #298

planet code4lib - Mon, 2014-02-03 22:44

This weekend I wasn’t expecting to go to any races because I had some work-related things I wanted to do. However, as it turned out, I got most of them done by the end of the day on Saturday, and my Sunday was freed up. I decided to see some ice racing in New Hampshire. As far as I can tell, there are three groups that run automobile ice races in New Hampshire; they all run their regular shows on Sunday, starting between 12:00 and 12:30 pm. I checked with each group and they all were running. After some hemming and hawing, I decided to go watch the Jaffrey Ice Racing Association race on an oval on Contoocook Lake in Jaffrey, NH. The main reason I chose this club was that it was an hour closer to where I live than the others.

There were 3 divisions of racing, and the schedule called for each division to have two 6-lap heat races and a 10-lap feature race. While you could watch for free from the road, I paid $3 to walk onto the ice. Another option would have been to pay $10 to drive onto the ice. The weather was a bit warm for ice racing (40 degrees) and rain was on its way, so I appreciated that they started on time. They probably could have taken a little less time between heat races, though, with rain in the forecast. In the end, they managed to get 5 of the 6 heat races completed before the rain and warm conditions made the event organizers call off the rest of the racing at about 1:30 pm. Besides being the first ice race I’ve seen in New Hampshire, this was my first oval ice race. I really did enjoy the oval ice races and hope to get to another one this year, but with ice racing being so weather-dependent, that may or may not happen.

PS: I didn’t write up a report for last weekend when I went to Georgia, but I saw two new tracks that weekend.

New Tracks since last report:

Albany Motor Speedway, dirt oval, Albany, GA (dirt late models): 01/25/2014
Watermelon Capital Speedway, paved oval, Cordele, GA (paved super late models): 01/26/2014
Contoocook Lake, ice oval, Jaffrey, NH: 02/01/2014

2014 Stats:
Total Tracks: 6
New Tracks: 6
States where I saw new tracks: 5 (Georgia, New Hampshire, North Carolina, Ohio, Virginia)
Countries: 1 (USA)
Total Races: 6
Total lifetime TrackChaser countable tracks: 298

James Cook University, Library Tech: Brewing Your Own Linked Data #vala15 #bca

planet code4lib - Mon, 2014-02-03 21:54
Well, that session was exactly what I wanted! Open linked data has been on my watch list since 2010, after seeing OpenCalais demoed at that year's VALA. It was great to see under the hood and spackle over the gaps in my limited understanding of what it all meant. I liked that Tim and Paul were aware of 'Linked Open Data (LOD) Paralysis' and kept dragging us back to practical uses. So I now…

Engard, Nicole: Bookmarks for February 3, 2014

planet code4lib - Mon, 2014-02-03 20:30

Today I found the following resources and bookmarked them online.

ALA Equitable Access to Electronic Content: FCC makes “close call” in issuing e-reader waiver

planet code4lib - Mon, 2014-02-03 19:44

To the disappointment of the American Library Association and a range of advocates for people with disabilities, the Federal Communications Commission (FCC) has granted the E-reader Coalition—Amazon, Sony and Kobo—a one-year waiver from the Commission’s rules for Access to Advanced Communications and Services (ACS) for People with Disabilities. The Coalition argued that their e-readers were made for the sole purpose of reading text and therefore should not have to comply with the accessibility implementation requirements of the Twenty-First Century Communications and Video Accessibility Act of 2010. The FCC Order notes that it made a “close call” in granting the waiver because even the Coalition concedes that its basic devices include browsers and are capable of ACS (e.g., email, instant messaging, voice over Internet protocol service).

Advocates, including the ALA, did prevail in our call to the FCC not to grant the permanent waiver that the Coalition requested. It should also be noted that the FCC decision is focused on ACS and does not address e-reader accessibility concerns regarding speech-to-text or other measures that would provide access to text-based digital works.

In filings, the FCC wrote:

We grant a waiver from the Commission’s ACS rules for the class of “basic e-readers,” as defined herein, until January 28, 2015. We limit the term of the waiver to one year from the expiration of the temporary waiver, rather than grant the Coalition’s request for an indefinite waiver. We believe that, given the swift pace at which e-reader and tablet technologies are evolving and the expanding role of ACS in electronic devices, granting a waiver beyond this period is outweighed by the public interest and congressional intent to ensure that Americans with disabilities have access to advanced communications technologies.

The waiver applies to a class of e-reader devices that:

  • has no LCD screen;
  • has no camera;
  • is not offered or shipped to consumers with built-in ACS client applications, and the device manufacturer does not develop ACS applications for its respective device, but the device may be offered or shipped to consumers with a browser and social media applications; and
  • is marketed to consumers as a reading device and promotional material about the device does not tout the capability to access ACS.

In September 2013, ALA’s Office for Information Technology Policy (OITP) opposed the petition in comments submitted to the FCC, arguing that the waiver was discriminatory and prevented libraries from ensuring equitable access to all. In addition, libraries have been sued for violating regulations of the Americans with Disabilities Act because they offer basic e-readers (like the Kindle) that are not accessible. In a meeting with FCC officials, OITP also pointed out that, due to the rapid advancement of digital technologies, a permanent waiver to the accessibility rules was not prudent, and likely not necessary. In November, OITP and the Association of Research Libraries (ARL) submitted additional comments with data demonstrating that owners of basic e-readers were, in fact, using the products for ACS.

While disappointed that the “close call” did not go in favor of people with disabilities, OITP is pleased that e-reader manufacturers must file for a waiver next year and re-argue their case, or make their e-readers’ ACS features accessible to people with print and other disabilities. Our work with advocates to increase the accessibility of digital text for people with print and other disabilities will continue beyond the FCC.

ALA Equitable Access to Electronic Content: The ALA Office for Information Technology Policy is seeking nominations for two copyright awards

planet code4lib - Mon, 2014-02-03 18:40

The first is the L. Ray Patterson Award: In Support of Users’ Rights. The Patterson Copyright Award recognizes the contributions of an individual or group that pursues and supports the Constitutional purpose of U.S. copyright law, fair use, and the public domain. Professor Patterson was a copyright scholar and historian. He argued that the statutory copyright monopoly had grown well out of proportion, to the extent that the purpose of the copyright law – to advance learning – was hindered. Patterson was co-author (with Stanley W. Lindberg) of The Nature of Copyright: A Law of Users’ Rights and was particularly interested in libraries and their role in advancing users’ rights. He served as expert counsel to Representative Robert Kastenmeier throughout the drafting of the Copyright Act of 1976. Previous winners of the Patterson Award include Kenneth D. Crews, Peter Jaszi, and Fred von Lohmann. The Patterson Award is a crystal vase trophy.

The second award is the Robert Oakley Memorial Scholarship, sponsored in collaboration with the Library Copyright Alliance (LCA). This award is granted to an early-to-mid-career librarian who is pursuing copyright scholarship and public policy. Professor Oakley was a member of the LCA representing the American Association of Law Libraries, and a long-time member of the International Federation of Library Associations and Institutions (IFLA), advocating for libraries at the World Intellectual Property Organization and UNESCO. Oakley was a recognized leader in law librarianship and library management who maintained a profound commitment to public policy and the rights of library users, and was a mentor to many librarians interested in copyright policy. The $1,000 scholarship may be used for travel necessary to conduct research, conference attendance, release from library duties, or other reasonable and appropriate research expenses.

The deadline for nominations has been extended to March 31, 2014.  For more information on nomination details, see the links above.  If you have additional questions, contact Carrie Russell, OITP Director of the Program on Public Access to Information at crussell@alawash.org.

Scott, Dan: Mapping library holdings to the Product / Offer model in schema.org

planet code4lib - Mon, 2014-02-03 18:35

Back in August, I mentioned that I taught Evergreen, Koha, and VuFind how to express library holdings in schema.org via the http://schema.org/Offer class. What I failed to mention was how others can do the same with their own library systems (well, okay, I linked to the W3C Schema.org Bibliographic Extension Community Group proposal for representing holdings via Offer but didn't focus on how one would go about doing that). This might have led to Diane Hillmann recently finding the wrong, abandoned holdings proposal (thankfully Richard Wallis helped clear things up!). So, better late than never, here is a quick summary:

  1. Each copy that the library holds is marked up as an individual Offer.
  2. The itemOffered property attaches an Offer to a corresponding Product that contains the main description of the goods. In most library systems, this is going to be the title of the item, list of creators, abstract, subject classifications, etc; that which we generally refer to as the bibliographic record. While this will probably have its own type already (Book or Movie or MusicAlbum or the like), you can also include Product as a secondary type (either via a whitespace-delimited list or via the schema.org additionalType property).
  3. Mapping more familiar library terminology to the pertinent properties from Offer goes something like this:
    • Library = seller - the range of Organization includes Library as a child type, so you can link to a highly structured description of the library including hours of operation, contact information, location... and that's exactly what we now do in Evergreen
    • Call number / shelf number = sku - because a stock-keeping unit number is "a merchant-specific identifier for a product or service", and what is a call number if not a means by which you identify stock on the shelf?
    • Barcode = serialNumber - the unique "alphanumeric identifier of a particular product", am I right?
    • Shelving location = availableAtOrFrom - "[t]he place(s) from which the offer can be obtained"; with a range of Place this really should be linked to sub-units of the Library type you pointed to for the seller property, but schema.org does accept reality and the inevitability that some plain text values are going to be supplied where a typed range was indicated.
    • Item status = availability - which is itself an enumeration of the children of ItemAvailability; "available" maps nicely to InStock, while "reference use" is a decent analogue for InStoreOnly. More suggested mappings are in the Schema BibEx document.
    • Borrowing terms = businessFunction - another enumeration, for which the most likely value for a library is http://purl.org/goodrelations/v1#LeaseOut. After all, what is a library loan other than a lease with a limited period during which the price is $0.00?
    • Price = price - while theoretically unnecessary, explicitly specifying a price of $0.00 may satisfy search engines that always expect to see a price attached to an offer (I'm looking at you, Google Structured Data Testing Tool!)
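
Putting those mappings together, here is a minimal RDFa sketch of a single copy. It is an illustration only: the title, call number, barcode, shelving location, and URLs are all invented, and real systems (such as the Evergreen implementation mentioned above) emit richer markup.

  <!-- the bibliographic record, typed as both Book and Product -->
  <div vocab="http://schema.org/" resource="#record" typeof="Book Product">
    <span property="name">A Sample Title</span>
  </div>
  <!-- one Offer per copy; itemOffered points back at the record -->
  <div vocab="http://schema.org/" typeof="Offer">
    <link property="itemOffered" href="#record">
    <a property="seller" typeof="Library" href="https://library.example.org/">Example Library</a>
    <span property="sku">QA76.9 .S34 2014</span>                     <!-- call number -->
    <span property="serialNumber">31234000123456</span>              <!-- barcode -->
    <span property="availableAtOrFrom">Main floor stacks</span>      <!-- shelving location -->
    <link property="availability" href="http://schema.org/InStock">  <!-- item status -->
    <link property="businessFunction" href="http://purl.org/goodrelations/v1#LeaseOut">
    <span property="price" content="0.00">$0.00</span>               <!-- explicit zero price -->
  </div>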

The language for some of the terminology may seem a little overly commercial right now, but the next iteration of the schema.org standard will adopt language that more broadly supports non-commercial activities... and this broadening of a number of schema.org definitions is also an outcome of the Schema BibEx community efforts. I'm pretty happy with the results of the group over the last six months! Hopefully this sheds some long-overdue light on some of the results of our efforts, and helps other systems adopt our group's recommended practices for exposing metadata via schema.org.

Schneider, Karen G: FYI on PBA at ALA

planet code4lib - Mon, 2014-02-03 17:28

Liberty Bell – from Wikipedia

I stayed in Philadelphia past ALA Midwinter, until today, Sunday, February 2, for my third-semester doctoral program intensive, so I’m just getting around to my post-ALA blogging. What I’m writing about tonight, on my flight back to SFO, is a bit wonky. But stick with me, because if you’re an ALA member, it matters.

ALA has a unit called the Planning and Budget Assembly. Despite serving three previous terms on Council, I really never gave PBA much thought until last summer when I agreed to run for a position as one of its ALA Council representatives, who are elected by our Council peers. My interest was really piqued when an ALA member I respect greatly took me aside to warn me not to waste my time on PBA. I almost took that advice, but in the end, I’m glad I pressed on anyway.

I was swept into office with a grand 93 votes–hey, you laugh, but I was the top vote-getter. (I recited that from memory while composing this over  sluggish in-flight wifi, and now I am worrying there will be a scandal in which it turns out I actually received 91 votes and will have to go on an Apology Tour.)

The charge to PBA is “To assist the ALA Executive Board and the Budget Analysis and Review Committee (BARC), there shall be a Planning and Budget Assembly which shall consist of one representative of each division, ALA committee, round table, and five councilors-at-large and five councilors from chapters.” There are other fiscal units–besides BARC, a very important fiscal unit is the Finance and Audit Committee of ALA’s Executive Board–but PBA does, after all, exist, at least on paper.

First, I’d like to point out how huge PBA is. At least by headcount, it’s about 80 delegates, not including ALA staff. Additionally, the PBA assembly, taken together, is comprised of some of the best, most seasoned minds in ALA. I am in complete awe of the potential force of this assembly, and in theory, I could learn quite a bit from the questions they ask or the observations they make. Based on both their ALA work and the work they do in their libraries, they are extremely well-positioned to provide commentary and planning advice on ALA’s next steps in light of the fiscal challenges ALA has faced in the past five-plus years of recession and changing information patterns: staffing cutbacks, frozen salaries, creeping workload, sinking revenues.

But PBA’s rather exceptional group of people is not actually empowered to do anything other than be herded into a room twice a year and then read condensed highlights from various reports (reports, no less, that a number of us have already had read to us at Saturday’s Council/Executive Board/Membership Information Session).

It’s diagnostic of PBA’s dilemma that there is no onboarding for PBA, its charge is vague, and there’s no direction for what PBA is to do once it has attended these twice-yearly meetings. For ALA Midwinter, a PBA meeting that everyone knew would attract strong participation, we had a one-hour session in which we were squeezed into a room meant for a group half our size, asked to do introductions (which of course took a while), then read to from reports that had already been read from at Saturday’s Council/Executive Board/Membership Information Session. We had exactly 5 minutes at the end for “discussion.”

Structurally, there’s no way this assembly of close to 100 people can use this format to “assist” other ALA units.  Symbolically, the message is clear: PBA is to be seen and not heard.

Other shenanigans have bordered on silly. PBA members have no easy method for communicating as a group. We are emailed in a couple of reply-all batches. When I asked ALA several weeks back to create a Sympa discussion list for PBA, I encountered pushback.

I get that every new mailing list creates overhead, but PBA is the only governance unit denied such a list, and far more human labor has been spent stonewalling the creation of this discussion list than would have been expended just making it happen. Why, you’d think they were concerned about some activist PBA member stirring the pot and encouraging PBA members to, you know, talk amongst themselves about the future of PBA! I was assured at Midwinter that ALA will in fact create a discussion list, and I’ll let you know if that does or doesn’t happen.

In any event, it’s time to make PBA useful or kill it off. As I wrote on Council list, “I don’t want to speak for everyone on Council, but it seems safe to say that there was general agreement that the Planning and Budget Assembly has untapped potential, and that its present composition and charge and how that charge is interpreted and acted upon are not useful to ALA or to the members of the assembly. In particular, to paraphrase something Mary Ghikas said months back, PBA’s potential role in planning has not been leveraged.” To be more blunt, you’re welcome to dismiss me, but don’t also waste my time while you’re doing it.

But the good news about PBA is, to paraphrase Yogi Berra, there is a fork in the road, and we plan to take it.

Council has informal sessions it calls Forums. These sessions, which are open meetings, are opportunities to discuss matters before Council in a relaxed, conversational manner, outside the framework of parliamentary procedure. Council Forum II, held Monday night, concluded with an extremely resonant, thoughtful, and engaged conversation about PBA.  Maybe it’s a question of my own personal motivation—I have spent a year asking questions about the ALA budget, starting in January 2013 when I expressed concern about projected revenues from RDA—but I felt really attuned to the conversation that flowed among  former treasurers, Executive Board members, BARC-ers, and new and seasoned Councilors.

I originally thought I would head to Council Forum II with a proposal for a presidential task force on fiscal communication. But I forced myself to spend a few hours reviewing earlier ALA presidential task forces, and I learned something worth heeding: if you want to keep membership at bay on an issue, form a presidential task force. Let them have their meetings, their special sessions, their lovely dinners. Let them spend years crafting their long, carefully-considered reports. The recommendations rarely get implemented. It was disturbing to confirm another Councilor’s observation that one task force we had served on for two years had simply disappeared into an ALA Vortex.

Thinking a presidential task force is going to “fix” an ALA issue is like thinking a dues increase is going to have a significant impact on ALA’s fiscal situation. You do realize that the long-debated dues increase voted in last year will only marginally affect the ebb and flow of ALA’s revenue/expenditure stream? Ah, maybe you thought dues made a huge difference. Dues matter, but only to a point, and are eclipsed by other revenue streams. For example, almost half of ALA’s revenue comes from publishing.

Instead, I dialed back to a proposal for a simple resolution specific to PBA to be brought to Council, and that, among many other things, is what will be moving forward. LITA Councilor Aaron Dobbs and I are co-workerbees on this project, and we’ve already begun developing timelines and deliverables. There will be widening circles of engagement and crowdsourcing, from us to PBA to Council and beyond. A rough preliminary goal is to have a resolution ready for BARC and other units to discuss at ALA’s spring meetings in April. In addition to round-robining versions of this resolution, we’re hoping to hold a virtual Council Forum session before then to get additional input.

There are ancillary ideas that may emerge in parallel with this work. For example, I keep floating the idea of holding the Council/Executive Board/Membership Information Session online, at least two weeks prior to ALA. We have the technology to do this, and “flipping” this session would give people a chance to hear, process, think, ask a few questions, and come prepared to have real conversations about ALA.  In the Council Forum discussion, wise librarians of all ages also shared ideas and insights about what they would like to see from fiscal documents, and we were also reminded of the excellent ALA Financial Learning series of short videos.

 Not everyone thinks my focus on PBA and ALA’s fiscal condition is a good idea; I heard as much from one colleague at Midwinter.  But I can tell you that based on the phone calls and email and meetups I have had over the past year with people I truly respect—many of whom have currently or previously held distinguished positions among the ALA membership—engaging with the problem of how members engage with ALA in the budget and planning processes is an honorable investment of effort.

 

 

Manage Metadata (Phipps and Hillmann): Talking Points Report

planet code4lib - Mon, 2014-02-03 17:14

Prior to Midwinter I posted the list of presentations I was doing over the course of Midwinter. It seemed only fair to report on some of those sessions, and to share my slides. I thought about posting them separately, since my posts tend to balloon fairly significantly once I get writing (those of you who know me are free to point out that I talk like that, too)–but given that I’ve decided to post more often, y’all will have to live with length.

Saturday, January 25, 2014, 3:00-4:00 p.m., “A Consideration of Holdings in the World Beyond MARC” [Slides]

There were two speakers at this session, at which I spoke second. The first speaker was Rebecca Guenther, who spoke on BibFrame generally as well as the BibFrame approach to holdings. BibFrame currently has a fairly simple approach, for now limited to the simpler holdings needs for non-serials. This is the easy [easier?] part of course, and it will be interesting to see how serial holdings will be integrated with the model.

My presentation briefly surveyed other important holdings work in progress, from a project at the Deutsche Nationalbibliothek (DNB) and the ONIX for Serials Coverage Statement, to the current proposals for schema.org, to a brief report on a project my group is considering that would do for MARC Holdings (sample) what we’ve already set up for MARC Bibliographic data at marc21rdf.info.

What struck me when I was setting up the presentation was the amazing variety of work going on in this area. I really didn’t expect that, I confess. But by immersing myself in holdings as I hadn’t done for many moons, I found I was looking at an awful lot of very recent work. And it wasn’t just the diversity of approaches that surprised me, but the varied results as well. The efforts ran the gamut from very complex and comprehensive approaches (ONIX and MFHD) to much simpler approaches. The functions anticipated for each colored the diverse outcomes to a great extent. The ONIX XML schema was easily the most complex–with some ideas based on the MFHD work.

The schema.org effort is, like BibFrame, still in the process of jelling. When looking for the evidence of schema.org holdings, I found myself on a path that had already been abandoned (though it then showed no signs of abandonment). Richard Wallis pointed me at the right place, and the slides have been corrected to fix that problem.

Sunday, January 26, 2014, 8:30-10:00 a.m., “The Other Side of Linked Data: Managing Metadata Aggregation” [Slides]

This session also included two presentations; mine was first this time. My focus was that most people think Linked Open Data (LOD) is about libraries exposing their data to the world, but that’s only half of LOD. The other half is taking advantage of the data others (libraries and non-libraries) are exposing openly. The two fundamental things about the LOD world are both ideas that tend to explode minds. First is the realization that we’re not talking about highly OCLC-curated MARC records, pre-aggregated for easy ingest into traditional library systems. Instead, we are talking about management of statements (which may indeed be records as originally ingested, but to be useful in this multiple-choice world must be shredded on the way in and re-aggregated on the way out). There are many new skills we’ll have to learn (and an awful lot of assumptions that we’ll need to examine closely and maybe toss out the window). This is daunting, but hardly rocket surgery, and the sooner we get going, the better off we’ll be.

The second presentation was from a group working at the Digital Public Library of America (DPLA), which is confronting many of these issues. Their announcement stated:

“This talk will introduce and outline the challenges of aggregating disparate metadata flavors from the perspective of both DPLA staff and representative hubs. We will review next steps and emerging frontiers as well, including improvements to normalization at the hub level and wider adoption of controlled vocabularies and formats for geospatial metadata and usage rights statements.”

And this was exactly what they did. They provided a very juicy look at the real world that faces anyone attempting to deal with the current metadata chaos. This is definitely work to follow, because where they are now will change over time and with experience, providing the rest of us with some really useful insights. Their slides are available from the IG site.

Sunday, January 26, 2014, 10:30-11:30 a.m., “Mapmakers” [Slides]

The Mapmakers presentation was designed to highlight some research I’ve been involved in, along with my colleagues Jon Phipps and Gordon Dunsire. This topic has not received a vast following, but it should as experience with new schemas and value vocabularies expands. As usual, there was another presentation just before ours, which gave an exciting view of innovative work in expanding our notion of authority, in particular gathering and managing data from a broad variety of sources. Their work is encountering challenges very similar to the DPLA’s, though in some ways theirs are even harder, since they often have to develop the sources and bring them into the LOD world.

That presentation focused on work done in the ProMusicaDB project, with founder Christy Crowl and metadata librarian Kimmy Szeto sharing the podium. There was a feast of slides and stories, all of them illustrating the new ways that we’ll all be operating in the very near future. Their slides (including a demo) should be available on the ALCTS Cataloging and Classification Research IG site very shortly (though not yet as of this writing).

Scott, Dan: What would you understand if you read the entire world wide web?

planet code4lib - Mon, 2014-02-03 15:40

On Tuesday, February 4th, I'll be participating in Laurentian University's Research Week lightning talks. Unlike most five-minute lightning talk events in which I've participated, the time limit for each talk tomorrow will be one minute. Imagine 60 different researchers getting up to summarize their research in one minute each, and you have what is likely to be a brain-melting hour. Should be fun!

Here's a rough draft of what I'm planning to say (which, when read at an even cadence with decent intonation, comes out to exactly one minute):

What would you understand if you read the _entire_ world wide web?

As humans, we would understand a lot, because we can rely on the context, structure, and significance of elements of web pages to derive meaning.

The algorithms behind search engines adopt a similar approach, but struggle with ambiguity; when a web page mentions "Dan Scott", is it:

  • "Dan Scott" the character from the One Tree Hill TV show
  • "Dan Scott" the artist from Magic the Gathering card game
  • "Dan Scott" the Ontario academic professor from the University of Waterloo
  • "Dan Scott" the Ontario academic librarian from Laurentian University

schema.org is a vocabulary for embedding explicit meaning and intent within web pages that offers a way to disambiguate those entities.

My research is a collaborative effort--under the auspices of the World Wide Web Consortium--to define bibliographic extensions for schema.org where necessary, and best practices based on concrete implementations in three different library systems.
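
For a concrete sense of what that disambiguation looks like, here is a hypothetical markup sketch (the property names are real schema.org terms, but the page fragment and the identifier URL are invented for illustration):

  <div vocab="http://schema.org/" typeof="Person">
    <span property="name">Dan Scott</span>,
    <span property="jobTitle">librarian</span> at
    <a property="affiliation" typeof="CollegeOrUniversity"
       href="https://laurentian.ca/">Laurentian University</a>.
    <!-- sameAs anchors this mention to an unambiguous identifier -->
    <link property="sameAs" href="https://example.org/identities/dan-scott-librarian">
  </div>

With attributes like these on each mention, a search engine no longer has to guess which "Dan Scott" a page means.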

OCLC Dev Network: Welcome to Developer House

planet code4lib - Mon, 2014-02-03 15:00

Developer House kicks off this afternoon and we are so excited to welcome 12 terrific library developers bringing a variety of experience and skills: Sara Amato, Joe Atzberger, Terry Brady, Brian Cassidy, Michael Doran, Bobbi Fox, Bohyun Kim, Lauren Magnuson, Terry Reese, Andrea Schurr, Jason Stirnaman, and Mark Sullivan. George Campbell, Karen Coombs, and Steve Meyer here at the Developer Network will be elbow-deep in code right along with them. This week will really be driven by the group effort, from ideas to plans to code, and by our experiences together.

Open Knowledge Foundation: The Open Knowledge Foundation Newsletter, February 2014

planet code4lib - Mon, 2014-02-03 14:01

Sign up here for monthly updates to your inbox.

Greetings!

One month into 2014, there’s plenty going on around the Open Knowledge Foundation, including lots of activity for Copyright Week mid-January as well as preparation for Open Data Day towards the end of February.

As ever, the global Open Knowledge Foundation network has been busy, including Bangladesh supporting the regional Math Olympiad, Nepal celebrating Education Freedom Day, and Scotland collaborating with other organisations to create Datafest Scotland 2014 – see for yourself what the various communities have been up to at the Community Stories Tumblr, and do add your own stories!

So here’s your monthly digest: grab a cuppa, put your feet up and settle back for a coffee-break celebration of all things Open.

Open Knowledge Foundation Germany rejects cease and desist order in the cause of Open

Say you use the Freedom Of Information (FOI) process to access some information. You decide to use FragDenStaat.de, an FOI portal, as it will publish the results, which makes sense as anyone else could access this if they also submitted an FOI. You wouldn’t expect to be prohibited from publishing the requested information freely, and certainly not because of copyright, a tool created to defend the creative works of artists and authors for their own livelihood… Right? Wrong!

A cease and desist order has been issued to Open Knowledge Foundation Germany, the operator of FragDenStaat.de, for publishing a document received under the German federal FOI law. The German Federal Ministry of the Interior claims copyright as the reason for this order, and FragDenStaat.de is refusing to comply, standing “against this blatant misuse of copyright” and “looking forward to a court decision that will strengthen freedom of speech, freedom of the press and freedom of information rights in Germany” (in the words of Stefan Wehrmeyer in his blog post).

Want to help?

  • Help support the court case by donating at BetterPlace.org or through your bank (using these details – Open Knowledge Foundation Deutschland e.V. / IBAN: DE89830944950003009670 / BIC: GENO DE F1 ETK)
  • Tell the EU to fix copyright (see article mentioned below for more details on this public consultation)

For full information refer to the campaign site.

Open Copyright Week 2014

New Licenses approved as Open

Big news preceding Copyright Week was that the Creative Commons 4.0 BY and BY-SA licenses were approved as conformant with the Open Definition. The Open Definition, one of the first projects of the Open Knowledge Foundation, is the reference point for understanding what Open is and how you can determine whether something is Open or not. Being able to release data and information openly is one of the most important steps in making Open the norm – thanks, Creative Commons!

Want to have your say in what licenses are needed and should be reviewed and approved? Join the Open Definition Discuss email list.

Fix EU Copyright!

This was the cry during Copyright Week, encouraging input to the public consultation on the review of the EU copyright rules. Copyright for Creativity and volunteer coders put together an online version of the paper document (attention EU: bringing us right into the 21st Century, only 14 years in) at http://youcan.fixcopyright.eu/ to create a multilingual form for easy submission. You still have a few days to contribute (the deadline is 5th Feb), so if you haven’t done so already, get your opinion heard.

For more background on copyright and Open Access, have a read of this article by the Open Access Working Group, and the work on Public Domain Calculators by the Public Domain and OpenGLAM Working Groups along with OKF-France.

Who is the Open Spending Data Community?

This question was answered through an in-depth mapping project, investigating how citizens, journalists, and Civil Society Organisations (CSOs) around the world use government finance data to further their civic missions. The results – well, we won’t give spoilers here, you’ll just have to read for yourself and see if you feature!

To set the scene, check out this video series, “Athens to Berlin”, in which various members of CSOs reflect on their work in this area and look ahead to future opportunities.

Coming Up:

Open Data Day is coming… In preparation for February 22nd’s big event, the cry to participate went up, spearheaded by the Open Knowledge Foundation’s Events Manager, Beatrice Martini. The Hangout, held on the 21st January, was hosted by Beatrice, Heather Leson and the founder of Open Data Day, David Eaves. This gave the history of the event, tips and advice on planning events (following up on this article from December), and a Q&A session. Sorry you missed it? Join the mailing list to know more and get planning. Already planning? Add your event to the website.

Watch this space for news about the one-and-only OKFestival, coming very soon!

Copyright infringement cartoon by Hartboy

Open Knowledge Foundation: The Open Knowledge Foundation Newsletter, February 2014

planet code4lib - Mon, 2014-02-03 14:01

Sign up here for monthly updates to your inbox.

Greetings!

One month into 2014, there’s plenty going on around the Open Knowledge Foundation, including lots of activity for Copyright Week mid-January as well as preparation for Open Data Day towards the end of February.

As ever, the global Open Knowledge Foundation network has been busy, including Bangladesh supporting the regional Math Olympiad, Nepal celbrating Education Freedom Day, and Scotland collaborating with other organisations to create Datafest Scotland 2014 – see for yourself what the various communities have been up to at the Community Stories Tumblr, and do add your own stories!

So here’s your monthly digest: grab a cuppa, put your feet up and settle back for a coffee-break celebration of all things Open.

Open Knowledge Foundation Germany rejects cease and desist order in the cause of Open

Say you use the Freedom of Information (FOI) process to access some information. You decide to use FragDenStaat.de, an FOI portal, as it publishes the results, which makes sense, since anyone else could obtain the same information by submitting their own FOI request. You wouldn’t expect to be prohibited from publishing the requested information freely, and certainly not because of copyright, a tool created to defend the creative works of artists and authors for their own livelihood… Right? Wrong!

A cease and desist order has been issued to Open Knowledge Foundation Germany, the operator of FragDenStaat.de, for publishing a document received under the German federal FOI law. The German Federal Ministry of the Interior claims copyright as the reason for this order, and FragDenStaat.de is refusing to comply, standing “against this blatant misuse of copyright” and “looking forward to a court decision that will strengthen freedom of speech, freedom of the press and freedom of information rights in Germany” (in the words of Stefan Wehrmeyer in his blog post).

Want to help?

  • Help support the court case by donating at BetterPlace.org or through your bank (using these details – Open Knowledge Foundation Deutschland e.V /IBAN: DE89830944950003009670 /BIC: GENO DE F1 ETK)
  • Tell the EU to fix copyright (see article mentioned below for more details on this public consultation)

For full information refer to the campaign site.

Open Copyright Week 2014

New Licenses approved as Open

Big news preceding Copyright Week was that the Creative Commons 4.0 BY and BY-SA licenses were approved as conformant with the Open Definition. The Open Definition, one of the first projects of the Open Knowledge Foundation, is the reference point for understanding what Open is and how you can determine whether something is Open or not. Being able to release data and information openly is one of the most important steps in making Open the norm – thanks, Creative Commons!

Want to have your say in what licenses are needed and should be reviewed and approved? Join the Open Definition Discuss email list.

Fix EU Copyright!

This was the cry during Copyright Week, encouraging input to the public consultation on the review of the EU copyright rules. Copyright for Creativity and volunteer coders put together an online version of the paper document (attention EU: bringing us right into the 21st Century, only 14 years in) at http://youcan.fixcopyright.eu/ to create a multilingual form for easy submission. You still have a few days to contribute (the deadline is 5th Feb), so if you haven’t done so already, get your opinion heard.

For more background on copyright and Open Access, have a read of this article by the Open Access Working Group, and the work on Public Domain Calculators by the Public Domain and OpenGLAM Working Groups along with OKF-France.

Who is the Open Spending Data Community?

This question was answered through an in-depth mapping project, investigating how citizens, journalists, and Civil Society Organisations (CSOs) around the world use government finance data to further their civic missions. The results – well, we won’t give spoilers here, you’ll just have to read for yourself and see if you feature!

To set the scene, check out this video series, “Athens to Berlin”, in which various members of CSOs reflect on their work in this area and look ahead to future opportunities.

Coming Up:

Open Data Day is coming… In preparation for February 22nd’s big event, the cry to participate went up, spearheaded by the Open Knowledge Foundation’s Events Manager, Beatrice Martini. The Hangout, held on the 21st January, was hosted by Beatrice, Heather Leson and the founder of Open Data Day, David Eaves. It covered the history of the event, offered tips and advice on planning events (following up on this article from December), and closed with a Q&A session. Sorry you missed it? Join the mailing list to learn more and get planning. Already planning? Add your event to the website.

Watch this space for news about the one-and-only OKFestival, coming very soon!

Copyright infringement cartoon by Hartboy

Reese, Terry: MarcEdit 5.9 Update

planet code4lib - Mon, 2014-02-03 06:48

I had a few free cycles this past weekend, and decided to close a few enhancement requests.  Nothing earth shattering.  Here’s the list:

Version: 5.9.75

  • Enhancement: Swap Field Function — Modified the tool to handle swapping multiple subfields to a single subfield, e.g., 245ahb to 999a.
  • Enhancement: Merge Function — Updated the weighting so that the program does a better job when main titles match but other data does not. This is done by adding a secondary match criterion that uses the presence and match of subsequent data.
  • Enhancement: Merge Function — Updated the tool so that records with multiple titles (due to Romanization or transliteration) can be matched (before, they were often ignored if multiple titles were present).
  • Enhancement: Delete Function — Updated the function so that the “Remove if does not match” option can take a regular expression. This has always been supported in the code, but was disabled in the UI for additional testing.
  • Enhancement: MARCSplit Function — Padded the numeric values used when generating filenames to allow for better sorting.
  • Enhancement: MARCSplit Function — Before processing, the program will evaluate the destination directory and, if split files already exist, will give you the option to preserve them by incrementing the count in the file naming convention. For example, if the last split file is named msplit00000012.mrc, the program will start the next split operation at msplit00000013.mrc. (A sketch of this naming scheme appears after this list.)
  • Enhancement: Add the following streaming functions to the COM:
    • Marc2XML_Stream
    • Add_Field_Stream
    • Delete_Field_Stream
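
As promised above, here is a quick illustration of that zero-padded, auto-incrementing split-file naming scheme. This is hypothetical Python, not MarcEdit’s actual code; the helper name and the eight-digit width are assumptions based on the example filenames above.

import os
import re

# Hypothetical helper (not MarcEdit's code): scan the destination directory
# for existing msplitNNNNNNNN.mrc files and continue the zero-padded count.
def next_split_name(directory):
    pattern = re.compile(r'^msplit(\d{8})\.mrc$')
    counts = [int(m.group(1))
              for f in os.listdir(directory)
              for m in [pattern.match(f)] if m]
    # start at 0 in an empty directory, otherwise one past the highest count
    n = max(counts) + 1 if counts else 0
    return 'msplit%08d.mrc' % n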

If you have automatic updates configured, you should be prompted to update the next time you run MarcEdit. Otherwise, you can download the update from:

 

–tr

Summers, Ed: Paris Review Interviews and Wikipedia

planet code4lib - Mon, 2014-02-03 03:29

I was recently reading an amusing piece by David Dobbs about William Faulkner being a tough interview. Dobbs has been working through the Paris Review archive of interviews which are available on the Web. The list of authors is really astonishing, and the interviews are great examples of longform writing on the Web.

The 1965 interview with William S. Burroughs really blew me away. So much so that I got to wondering how many Wikipedia articles reference these interviews.

A few years ago, I experimented with a site called Linkypedia for visualizing how a particular website is referenced on Wikipedia. It’s actually pretty easy to write a script to see what Wikipedia articles point at a Website, and I’ve done it enough times that it was convenient to wrap it up in a little Python module.

from wplinks import extlinks

for wikipedia_url, website_url in extlinks('http://www.theparisreview.org/interviews'):
    print wikipedia_url, website_url

But I wanted to get a picture not only of what Wikipedia articles pointed at the Paris Review, but also Paris Review interviews which were not referenced in Wikipedia. So I wrote a little crawler that collected all the Paris Review interviews, and then figured out which ones were pointed at by English Wikipedia.
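
Here is a minimal sketch of that two-step approach. The code is hypothetical: it assumes the wplinks module above plus requests and BeautifulSoup, and the link-harvesting heuristic (any href containing ‘/interviews/’) stands in for whatever the real crawler did.

import requests
from bs4 import BeautifulSoup
from wplinks import extlinks

index = 'http://www.theparisreview.org/interviews'

# harvest candidate interview URLs from the index page
soup = BeautifulSoup(requests.get(index).content)
interviews = set(a['href'] for a in soup.find_all('a', href=True)
                 if '/interviews/' in a['href'])

# external links under this prefix that English Wikipedia already points at
linked = set(target for _, target in extlinks(index))

# interviews not yet referenced from any Wikipedia article
unlinked = interviews - linked
print len(unlinked), 'interviews still unlinked'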

This was also an excuse to learn about JSON-LD, which became a W3C Recommendation a few weeks ago. I wanted to use JSON-LD to serialize the results of my crawling as an RDF graph so I could visualize the connections between authors, their interviews, and each other (via influence links that can be found on dbpedia) using D3's Force Layout. Here's a little portion of the larger graph, which you can find by clicking on it.
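
To give a feel for the data, here is a hypothetical sketch of a single node in that JSON-LD output. The @context, the property choices, and the URLs are illustrative stand-ins, not the vocabulary or interview slugs the script actually used.

import json

# Illustrative node only: the context, properties, and URLs are stand-ins.
node = {
    "@context": {
        "name": "http://xmlns.com/foaf/0.1/name",
        "influenced": {"@id": "http://dbpedia.org/ontology/influenced",
                       "@type": "@id"},
        "interview": {"@id": "http://purl.org/dc/terms/references",
                      "@type": "@id"},
    },
    "@id": "http://en.wikipedia.org/wiki/William_S._Burroughs",
    "name": "William S. Burroughs",
    "interview": "http://www.theparisreview.org/interviews/XXXX",  # hypothetical slug
    "influenced": "http://en.wikipedia.org/wiki/Jack_Kerouac",
}

print json.dumps(node, indent=2)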

The resulting graph is a bit of a hairball. If you want to have a go at visualizing the data, the JSON-LD can be found here. The blue nodes are Wikipedia articles; the white and red nodes are Paris Review interviews. The red ones are interviews that are not yet linked to from Wikipedia. 322 of the 362 interviews are already linked to Wikipedia. Here is the list of 40 that still need to be linked, in the unlikely event that you are a Wikipedian looking for something to do:

I ran into my friend Dan over coffee who sketched out a better way to visualize the relationships between the writers, the interviews and the time periods. Might be a good excuse to get a bit more familiar with D3 …
