planet code4lib

Planet Code4Lib - http://planet.code4lib.org

Rochkind, Jonathan: Agility vs ‘agile’

Tue, 2014-03-11 12:33

Yes, more of this please. From Dave Thomas, one of the originators of the ‘agile manifesto’, for whom I have newfound respect after reading this essay.

Agile Is Dead (Long Live Agility)

However, since the Snowbird meeting, I haven’t participated in any Agile events, I haven’t affiliated with the Agile Alliance, and I haven’t done any “agile” consultancy. I didn’t attend the 10th anniversary celebrations.

Why? Because I didn’t think that any of these things were in the spirit of the manifesto we produced…

Let’s look again at the four values:

Individuals and Interactions over Processes and Tools
Working Software over Comprehensive Documentation
Customer Collaboration over Contract Negotiation, and
Responding to Change over Following a Plan

The phrases on the left represent an ideal—given the choice between left and right, those who develop software with agility will favor the left.

Now look at the consultants and vendors who say they’ll get you started with “Agile.” Ask yourself where they are positioned on the left-right axis. My guess is that you’ll find them process and tool heavy, with many suggested work products (consultant-speak for documents to keep managers happy) and considerably more planning than the contents of a whiteboard and some sticky notes…

Back to the Basics

Here is how to do something in an agile fashion:

What to do:

  • Find out where you are
  • Take a small step towards your goal
  • Adjust your understanding based on what you learned
  • Repeat

How to do it:

When faced with two or more alternatives that deliver roughly the same value, take the path that makes future change easier.

And that’s it. Those four lines and one practice encompass everything there is to know about effective software development. Of course, this involves a fair amount of thinking, and the basic loop is nested fractally inside itself many times as you focus on everything from variable naming to long-term delivery, but anyone who comes up with something bigger or more complex is just trying to sell you something.
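
Those four steps are easy to caricature in code. Here is a toy, invented illustration (the function name and the numeric “goal” are mine, not Thomas’s) of the find-out/small-step/adjust cycle:

```ruby
# Toy illustration of the loop: find out where you are, take a small
# step towards the goal, adjust based on what you learned, repeat.
def agile_steps(current, goal, step_size: 1)
  positions = []
  until current == goal
    gap  = goal - current                   # find out where you are
    step = gap.clamp(-step_size, step_size) # take a small step towards the goal
    current += step                         # act on it
    positions << current                    # record what you learned
  end
  positions
end
```

Each pass through the loop re-measures the gap before acting; the same shape nests at every scale, from naming a variable to planning a release.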

http://pragdave.me/blog/2014/03/04/time-to-kill-agile/

I don’t think being tricked by people trying to sell you something is actually the only, or even the main, reason people get distracted from real agility by lots of ‘agile’ rigamarole that is anything but.

I think there are intrinsic distracting motivations and interests in many organizations too: The need for people in certain positions to feel in control; the need for blame to be assigned when something goes wrong; just plain laziness and desire for shortcuts and magic bullets; prioritizing all of these things (whether you realize it or not) over actual product quality.

Producing good software is hard, for both technical and social/organizational reasons. But my ~18 years of software engineering (and life!) experience leads me to believe that there are no ‘tool’ shortcuts or magic bullets; you do it just the way Thomas says: in small iterative steps, always re-evaluating the next step, and always in continual contact with ‘stakeholders’ (who need to put time and psychic energy in too). Anything else is distraction at best, and more likely something worse: misdirection.

And there’s a whole lot of distraction and misdirection labelled ‘agile’.


Filed under: General

ALA Equitable Access to Electronic Content: Teen issues and tech policies intersect this Thursday

Mon, 2014-03-10 21:40

In what ways are Washington issues affecting teen library users? How can librarians support technology policies that support teenagers? Ask these questions and more this Thursday when technology policy leaders from the American Library Association’s Office for Information Technology Policy (OITP) discuss digital learning via the Young Adult Library Services Association’s @yalsa Twitter account.

As part of Teen Tech Week, OITP will join several businesses, nonprofits, library organizations and Internet companies in highlighting the digital tools, resources and services that libraries offer to teens and their families. OITP will cover a variety of topics all day Thursday, including current technology policies, internet filtering, copyright fair use, internet access and net neutrality.

Ask questions and follow the Twitter discussion using the #TTW14 hashtag.

The post Teen issues and tech policies intersect this Thursday appeared first on District Dispatch.

code4lib: Call for proposals: Code4Lib Journal, issue 25

Mon, 2014-03-10 18:23

The Code4Lib Journal (C4LJ) exists to foster community and share information among those interested in the intersection of libraries, technology, and the future.

We are now accepting proposals for publication in our 25th issue. Don't miss out on this opportunity to share your ideas and experiences. To be included in the 25th issue, which is scheduled for publication in mid-July 2014, please submit articles, abstracts, or proposals via web form or by email to journal@code4lib.org by Friday, April 11, 2014. When submitting, please include the title or subject of the proposal in the subject line of the email message.

C4LJ encourages creativity and flexibility, and the editors welcome submissions across a broad variety of topics that support the mission of the journal. Possible topics include, but are not limited to:

read more

ALA Equitable Access to Electronic Content: OITP expands policy staff

Mon, 2014-03-10 17:42

I am pleased to announce that Charles P. (Charlie) Wapner begins work today as an information policy analyst. Charlie will work on a broad range of topics that includes copyright, licensing, telecommunications and E-rate, and provide support for our new Policy Revolution! initiative sponsored by the Bill & Melinda Gates Foundation.

Charlie comes to the American Library Association from the Office of Representative Ron Barber (D-AZ) where he was a legislative fellow. Earlier, Charlie also served as a legislative correspondent for Representative Mark Critz (D-PA). Charlie also interned in the offices of Senator Kirsten Gillibrand (D-NY) and Pennsylvania Governor Edward Rendell. After completing his B.A. in Diplomatic History at the University of Pennsylvania, Charlie received his M.S. in public policy and management from Carnegie Mellon University.

We look forward to Charlie’s help in advancing our efforts on many different fronts.

The post OITP expands policy staff appeared first on District Dispatch.

Open Knowledge Foundation: “Open-washing” – The difference between opening your data and simply making them available

Mon, 2014-03-10 15:55

(This is the English version of the Danish blog post originally posted on the Open Knowledge Foundation Danish site and translated from Danish by Christian Villum, “Openwashing” – Forskellen mellem åbne data og tilgængelige data)

Last week, the Danish IT magazine Computerworld, in an article entitled “Check-list for digital innovation: These are the things you must know”, emphasised how more and more companies are discovering that giving your users access to your data is a good business strategy. Among other things, they wrote:

(Translation from Danish) According to Accenture it is becoming clear to many progressive businesses that their data should be treated like any other supply chain: it should flow easily and unhindered through the whole organisation and perhaps even out into the whole ecosystem – for instance through fully open APIs.

They then use Google Maps as an example, which, firstly, isn’t entirely accurate, as the geodata blogger Neogeografen also points out: Google Maps isn’t offering raw data, merely an image of the data. You are not allowed to download and manipulate the data – or run it off your own server.

But secondly, I don’t think it’s appropriate to highlight Google and its Maps project as a golden example of a business that lets its data flow unhindered to the public. It’s true that they are offering some data, but only in a very limited way – and definitely not as open data – and thereby not as progressive a move as the article suggests.

Certainly, it’s hard to accuse Google of not being progressive in general. The article states that Google Maps’ data are used by over 800,000 apps and businesses across the globe. So yes, Google has opened its silo a little, but only in a very controlled and limited way, which leaves those 800,000 businesses dependent on a continual flow of data from Google, without control over the very commodity they’re basing their business on. This particular way of releasing data brings me to the problem we’re facing: knowing the difference between making data available and making them open.

Open data is characterized by being not only available, but both legally open (released under an open license that allows full and free reuse, conditioned at most on giving credit to its source and sharing under the same license) and technically open (available in bulk, in machine-readable formats) – unlike the case of Google Maps. Their data may be available, but they’re not open. This – among other reasons – is why the global community around the 100% open alternative OpenStreetMap is growing rapidly, and why an increasing number of businesses choose to base their services on this open initiative instead.
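
That two-part test can be sketched mechanically. The following is a simplified, hypothetical checker; the license and format lists are illustrative placeholders, not the authoritative Open Definition license list:

```ruby
# Sketch of the openness test: data must be legally open (a license that
# allows full reuse, at most requiring attribution/share-alike) AND
# technically open (bulk download in a machine-readable format).
OPEN_LICENSES    = ["CC0", "PDDL", "CC-BY", "CC-BY-SA", "ODbL"].freeze
MACHINE_READABLE = ["csv", "json", "xml", "rdf"].freeze

def open_data?(license:, format:, bulk_download:)
  legally_open     = OPEN_LICENSES.include?(license)
  technically_open = bulk_download && MACHINE_READABLE.include?(format.downcase)
  legally_open && technically_open
end
```

By this test, Google Maps tiles fail on both counts (restrictive terms, no bulk raw data), while a bulk ODbL dump of CSV records passes.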

But why is it important that data are open and not just available? Open data strengthens society and builds a shared resource, where all users, citizens and businesses are enriched and empowered, not just the data collectors and publishers. “But why would businesses spend money on collecting data and then give them away?” you ask. Opening your data and making a profit are not mutually exclusive. A quick Google search reveals many businesses that both offer open data and drive a business on them – and I believe these are the ones that should be highlighted as particularly progressive in articles such as the one in Computerworld.

One example is the British company OpenCorporates, which offers its growing repository of corporate register data as open data, and thereby cleverly positions itself as a go-to resource in that field. This approach strengthens its opportunity to offer consultancy services, data analysis and other custom services to both businesses and the public sector. Other businesses are welcome to use the data, even for competing services, but only under the same data license – thereby producing derivative resources useful to OpenCorporates. Therein lies the real innovation and sustainability: effectively removing the silos and creating value for society, not just the businesses involved. Open data creates growth and innovation in our society – while Google’s way of offering data probably mostly creates growth for… Google.

We are seeing a rising trend of what can be termed “open-washing” (inspired by “greenwashing”): data publishers claiming their data are open when they are not – merely available under limiting terms. If we – at this critical time in the formative period of the data-driven society – aren’t critically aware of the difference, we’ll end up putting our vital data streams into siloed infrastructure built and owned by international corporations. We’ll also end up giving our praise and support to the wrong kind of unsustainable technological development.

To learn more about open data, visit the Open Definition and this introduction to the topic by the Open Knowledge Foundation. To voice your opinion, join the Open Knowledge Foundation mailing list.

Rochkind, Jonathan: Another gem packaging of chosen.js for rails asset pipeline

Mon, 2014-03-10 14:59

chosen-rails already existed as a gem to package chosen.js assets for the Rails asset pipeline.

But I was having trouble getting it to work right, not sure why, but it appeared to be related to the compass dependency.

The compass dependency is actually in the original chosen.js source too — chosen.js is originally written in SASS. And chosen-rails is trying to use the original chosen.js source.

I made a fork which instead uses the post-compiled pure JS and CSS from the chosen.js release, rather than its source. (Well, it has to customize the CSS a bit to change referenced url()s to Rails asset pipeline asset-url() calls.)
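
That url() customization amounts to a small string rewrite. A rough sketch of the idea (the actual gem’s implementation may differ, and the regex here is simplified):

```ruby
# Rewrite plain CSS url(...) references to Sass asset-url(...) calls, so
# the Rails asset pipeline can resolve fingerprinted asset paths.
def rewrite_asset_urls(css)
  css.gsub(/url\(\s*['"]?([^'")]+?)['"]?\s*\)/) { %Q{asset-url("#{$1}")} }
end
```

Run over the released chosen.css, this would turn a rule like `url('chosen-sprite.png')` into `asset-url("chosen-sprite.png")`.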

I’ve called it chosen_assets. (rubygems; github).  Seems to be working well for me.


Filed under: General

Rochkind, Jonathan: vendor optical disc format promoted as ‘archival’?

Mon, 2014-03-10 12:34

Anyone in the digital archivist community want to weigh in on this, or provide citations to reviews or evaluations?

I’m not sure exactly who the market actually is for these “Archival Discs.” If it were actually those professionally concerned with long-term reliable storage, I would think the press release would include some information on what leads them to believe the media will be especially reliable long-term, compared to other optical media – which it doesn’t seem to.

Which makes me wonder how much of the ‘archival’ is purely marketing. I guess the main novelty here is just the larger capacity?

Press Release: “Archival Disc” standard formulated for professional-use next-generation optical discs

Tokyo, Japan – March 10, 2014 – Sony Corporation (“Sony”) and Panasonic Corporation (“Panasonic”) today announced that they have formulated “Archival Disc”, a new standard for professional-use, next-generation optical discs, with the objective of expanding the market for long-term digital data storage*.

Optical discs have excellent properties to protect themselves against the environment, such as dust-resistance and water-resistance, and can also withstand changes in temperature and humidity when stored. They also allow inter-generational compatibility between different formats, ensuring that data can continue to be read even as formats evolve. This makes them robust media for long-term storage of content. Recognizing that optical discs will need to accommodate much larger volumes of storage going forward, particularly given the anticipated future growth in the archive market, Sony and Panasonic have been engaged in the joint development of a standard for professional-use next-generation optical discs.


Filed under: General

Summers, Ed: Dissecting GettyImage Embeds

Mon, 2014-03-10 09:23

Yes, GettyImages have decided to encourage people to embed their images. Despite opinions to the contrary I think this is A Good Thing. So what happens when you embed a Getty image into your HTML? To get something like this in your page:

you need to include a little snippet of HTML in your pages:

<iframe src="//embed.gettyimages.com/embed/81901686?et=4td6Xm2f0k6pMgQVX7pNFA&sig=fhRom4eoepnZbyWjZ0_2N3SdVG1dxQTC2GUAK4XrPjg=" width="462" height="440" frameborder="0" scrolling="no"></iframe>

which in turn embeds this HTML into your page:

<!DOCTYPE html> <html> <head> <base target="_parent" /> <title>20 - 30 year old female worker pulls box off of warehouse shelf [Getty Images]</title> <meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1" /> <!--[if lt IE 10]> <script src="//html5shiv.googlecode.com/svn/trunk/html5.js"></script> <![endif]--> </head> <body> <link rel="stylesheet" type="text/css" href="//embed.gettyimages.com/css/style.css" /> <section id="embed-body" data-asset-id="81901686" data-collection-id="41"> <a href="http://gty.im/81901686" target="_blank"><img src="http://d2v0gs5b86mjil.cloudfront.net/xc/81901686.jpg?v=1&c=IWSAsset&k=2&d=F5B5107058D53DF50D8BA2399504758256BF753C679B89B417A38C0E9F1FBB9F&Expires=1394499600&Key-Pair-Id=APKAJZZHJ4LGWQENK3OQ&Signature=UC1YXxhGwSAY0BduwMZqnFQ7fcAQTdCksDvYu4WVmNWlTou7NktH7rZ8uk7BLbupJ4sp0ijiDaA93Yi2XijnC-TtcUO1Kylcew4nZpM~Al9jD0OSfx5yNe7jcIalweGpLGOdMLTXn0wRs6XfEh3~1fc~csMrAesHJkUayhBqNxo6Xja-35XQLx98d5fg6UXazOsCRT-UzebWA4dFURz~BSxXgq0RtU~LhKVKRZvkUTvl2RrsqBcN4bW3i~dbNMwHKn~7s9dMy5CxH-7k4ELyJaBClWEO2Jgr5WV9cXy~WGBQnNd-5Lb7CMcZclzn88-LbmDnFcO~BVLgtSU5x-KTpw__" /></a> <footer> <ul class="meta"> <li class="gi-logo icon icon-logo"></li> <li>Bob O'Connor / Stone</li> </ul> <ul class="reblog"> <li> <a href="//twitter.com/share" title="Share on Twitter" class="twitter-share-button" data-lang="en" data-count="none" data-url="http://gty.im/81901686"></a> </li> <li> <a class="icon-tumblr" target="_self" title="Share on Tumblr" href="//www.tumblr.com/share/video?embed=%3Ciframe%20src%3D%22%2f%2fembed.gettyimages.com%2fembed%2f81901686%3fet%3d4td6Xm2f0k6pMgQVX7pNFA%26sig%3dfhRom4eoepnZbyWjZ0_2N3SdVG1dxQTC2GUAK4XrPjg%3d%22%20width%3D%22462%22%20height%3D%22440%22%20frameborder%3D%220%22%20%3E%3C%2Fiframe%3E"></a> </li> <li> <a href="javascript:void(0);" title="Re-embed this image"><i class="icon-code"></i></a> </li> </ul> </footer> </section> <aside class='modal embed-modal' style='display: none;'> <div class='contents'> <a class="icon modal-close 
icon-close" href="#close" title="Close"></a> <span id="re-embed-body"> <h3>Embed this image</h3> <p>Copy this code to your website or blog. <a href="http://www.gettyimages.com/helpcenter" target="_blank" id="learn-more">Learn more</a></p> <p class="commercial-use"> Note: Embedded images may not be used for commercial purposes.</p> <p id="embed-link"> <textarea></textarea></p> <p class="terms"> By embedding this image, you agree to Getty Images <a href="http://www.gettyimages.com/corporate/terms.aspx" target="_blank">terms of use</a>.</p> </span> </div> </aside> <script src="//ajax.googleapis.com/ajax/libs/jquery/1.10.2/jquery.min.js"></script> <script type="text/javascript" src="/script/embed.js"></script> <script src="//platform.tumblr.com/v1/share.js"></script> <script src="//platform.twitter.com/widgets.js"></script> </body> </html>

You can see Amazon’s CloudFront is being used as a CDN for the images, and that Getty are using CloudFront’s Signed URLs to expire the images… it looks like after 24 hours? This isn’t a problem because Getty are serving the page up, but anyone who tries to snag the image URL for reuse (Google Images?) will end up getting a 400 error.
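
The expiry can be read straight off the URL: CloudFront’s Expires query parameter is a plain Unix timestamp. A small sketch using only the Ruby standard library (the host and other parameter values below are made up; the timestamp is the one from the embed above):

```ruby
require "cgi"

# Pull the Expires parameter (a Unix timestamp) out of a CloudFront
# signed URL and return the expiry as a UTC Time, or nil if absent.
def signed_url_expiry(url)
  query = url.split("?", 2)[1]
  return nil unless query
  expires = CGI.parse(query)["Expires"].first
  expires && Time.at(expires.to_i).utc
end
```

The 1394499600 in the embed above decodes to early on 2014-03-11 UTC, consistent with roughly a day’s window from when the page was fetched.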

I thought it was interesting that the embedded iframe gives you not only the image, author and collection, but also links to re-share the image on Twitter and Tumblr. I guess this is Viral Marketing 101, but it’s smart I think, since it encourages reuse, and the recycling of content on the Web. Conspicuously absent from the reshare buttons is Facebook — maybe there’s a story there? Also, as we’ll see in a second, the description of the image is missing from the embedded view:

20 – 30 year old female worker pulls box off of warehouse shelf

Of course the other big thing the iframe does is give Getty an idea of where their content is being used. Anyone who uses this one-line embed iframe will trigger an HTTP request to an embed.gettyimages.com URL (hosted on Amazon EC2, incidentally). These requests and their referral information can be stashed away and analyzed, so that Getty can get a picture of who is using their content, and how. Embedded images and the Twitter and Tumblr reshares automatically link to Getty’s specific short URLs, such as:

http://gty.im/81901686

The number used in the short URL is also used in the expanded URL:

http://www.gettyimages.com/detail/photo/year-old-female-worker-pulls-box-off-of-high-res-stock-photography/81901686

But the title text is just there for SEO; it can be changed to anything:

http://www.gettyimages.com/detail/photo/wikileaks-storage-annex/81901686
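
Since only the numeric id matters, pulling it out of either URL form is a one-liner. A sketch (the function name is mine):

```ruby
# Extract the numeric asset id from either the gty.im short URL or the
# full gettyimages.com detail URL, where the title slug is ignored.
def getty_asset_id(url)
  url[%r{gty\.im/(\d+)}, 1] || url[%r{/(\d+)(?:[?#]|\z)}, 1]
end
```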

Ordinarily I’d be down on the use of a short URL, but in this case its role is more that of a permalink. Of course these short URLs have the same problem as Handles and PURLs, in that people won’t ordinarily bookmark them. But, Que Sera Sera. As the Verge pointed out, these embedded iframes could end up depriving Web content of lead images if GettyImages decides to pull the plug on the embeds and they suddenly 404. But Getty’s credibility would suffer quite a bit from a decision like that. I think it’s important that they are encouraging the Web to rely on these URLs, and that they are putting their reputation on the line.

Of course lots of inbound links to those pages should do wonders for their PageRank. Plus, following that link allows you to purchase the image, explore other images by the photographer and related images in the GettyImages collection, as well as see some additional metadata about the photo: item number, rights, license type, original file dimensions, size, dots-per-inch. Some of this metadata is even expressed using RDFa (Facebook’s OpenGraph metadata)… which makes the lack of a Facebook share button even more interesting. In addition there is also some minimal use of schema.org HTML microdata for search engines to nibble on. If you are curious, Google’s Structured Data Testing Tool provides a view on this metadata.
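
For a peek at that markup without a full HTML parser, the og: properties can be scraped with a stdlib-only sketch (it assumes `property` precedes `content` in each meta tag, which holds for typical OpenGraph markup but not arbitrary HTML):

```ruby
# Scrape OpenGraph <meta property="og:..." content="..."> pairs from an
# HTML string into a Hash of property => content.
def og_properties(html)
  html.scan(/<meta\s+property="(og:[^"]+)"\s+content="([^"]*)"/).to_h
end
```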

It seems like there’s an opportunity to express more information in RDFa or microdata, specifically the details about the original, as well as licensing/rights metadata. Oddly the RDFa doesn’t even mark up the author of the image, I suppose because Facebook’s OpenGraph doesn’t give a way of expressing it. They could start by marking up the author of the image, but what if Getty established photographer pages, so instead of Bob O’Connor linking to:

http://www.gettyimages.com/search/2/image?artist=Bob+O%27Connor&family=Creative

What if it linked to a vanity URL like:

http://www.gettyimages.com/people/bob-oconnor

This would be a perfect place to share links to author’s other social media accounts, a bio, their photographer friends, etc. I’m thinking of the sort of work that National Geographic are doing with their YourShot application, for example this Profile page for Bahareh Mohamadian.

The licensing restrictions and iframes around these images would have ordinarily turned me off. But given Getty’s market position in this space it’s completely understandle, and seems like a useful compromise for now. These landing pages are a perfect place to make more structured metadata available that could be used by integrating applications. Getty should invest in this real estate, not only for the Web, but also for data resuse across their enterprise. The landing pages are an example of just how influential Facebook and Google have been in promoting the use of metadata on the Web. Without them, I think it is safe to assume we wouldn’t have seen any structured metadata on these pages at all.

Summers, Ed: Dissecting GettyImage Embeds

Mon, 2014-03-10 09:23

Yes, GettyImages have decided to encourage people to embed their images. Despite opinions to the contrary I think this is A Good Thing. So what happens when you embed a Getty image into your HTML? To get something like this in your page:

you need to include a little snippet of HTML in your pages:

<iframe src="//embed.gettyimages.com/embed/81901686?et=4td6Xm2f0k6pMgQVX7pNFA&sig=fhRom4eoepnZbyWjZ0_2N3SdVG1dxQTC2GUAK4XrPjg=" width="462" height="440" frameborder="0" scrolling="no"></iframe>

which in turn embeds this HTML into your page:

<!DOCTYPE html> <html> <head> <base target="_parent" /> <title>20 - 30 year old female worker pulls box off of warehouse shelf [Getty Images]</title> <meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1" /> <!--[if lt IE 10]> <script src="//html5shiv.googlecode.com/svn/trunk/html5.js"></script> <![endif]--> </head> <body> <link rel="stylesheet" type="text/css" href="//embed.gettyimages.com/css/style.css" /> <section id="embed-body" data-asset-id="81901686" data-collection-id="41"> <a href="http://gty.im/81901686" target="_blank"><img src="http://d2v0gs5b86mjil.cloudfront.net/xc/81901686.jpg?v=1&c=IWSAsset&k=2&d=F5B5107058D53DF50D8BA2399504758256BF753C679B89B417A38C0E9F1FBB9F&Expires=1394499600&Key-Pair-Id=APKAJZZHJ4LGWQENK3OQ&Signature=UC1YXxhGwSAY0BduwMZqnFQ7fcAQTdCksDvYu4WVmNWlTou7NktH7rZ8uk7BLbupJ4sp0ijiDaA93Yi2XijnC-TtcUO1Kylcew4nZpM~Al9jD0OSfx5yNe7jcIalweGpLGOdMLTXn0wRs6XfEh3~1fc~csMrAesHJkUayhBqNxo6Xja-35XQLx98d5fg6UXazOsCRT-UzebWA4dFURz~BSxXgq0RtU~LhKVKRZvkUTvl2RrsqBcN4bW3i~dbNMwHKn~7s9dMy5CxH-7k4ELyJaBClWEO2Jgr5WV9cXy~WGBQnNd-5Lb7CMcZclzn88-LbmDnFcO~BVLgtSU5x-KTpw__" /></a> <footer> <ul class="meta"> <li class="gi-logo icon icon-logo"></li> <li>Bob O'Connor / Stone</li> </ul> <ul class="reblog"> <li> <a href="//twitter.com/share" title="Share on Twitter" class="twitter-share-button" data-lang="en" data-count="none" data-url="http://gty.im/81901686"></a> </li> <li> <a class="icon-tumblr" target="_self" title="Share on Tumblr" href="//www.tumblr.com/share/video?embed=%3Ciframe%20src%3D%22%2f%2fembed.gettyimages.com%2fembed%2f81901686%3fet%3d4td6Xm2f0k6pMgQVX7pNFA%26sig%3dfhRom4eoepnZbyWjZ0_2N3SdVG1dxQTC2GUAK4XrPjg%3d%22%20width%3D%22462%22%20height%3D%22440%22%20frameborder%3D%220%22%20%3E%3C%2Fiframe%3E"></a> </li> <li> <a href="javascript:void(0);" title="Re-embed this image"><i class="icon-code"></i></a> </li> </ul> </footer> </section> <aside class='modal embed-modal' style='display: none;'> <div class='contents'> <a class="icon modal-close 
icon-close" href="#close" title="Close"></a> <span id="re-embed-body"> <h3>Embed this image</h3> <p>Copy this code to your website or blog. <a href="http://www.gettyimages.com/helpcenter" target="_blank" id="learn-more">Learn more</a></p> <p class="commercial-use"> Note: Embedded images may not be used for commercial purposes.</p> <p id="embed-link"> <textarea></textarea></p> <p class="terms"> By embedding this image, you agree to Getty Images <a href="http://www.gettyimages.com/corporate/terms.aspx" target="_blank">terms of use</a>.</p> </span> </div> </aside> <script src="//ajax.googleapis.com/ajax/libs/jquery/1.10.2/jquery.min.js"></script> <script type="text/javascript" src="/script/embed.js"></script> <script src="//platform.tumblr.com/v1/share.js"></script> <script src="//platform.twitter.com/widgets.js"></script> </body> </html>

You can see Amazon’s CloudFront is being used as a CDN for the images, and that Getty are using CloudFront’s Signed URLs to expire the images…it looks like after 24 hours? This isn’t a problem because Getty are serving the page up, but anyone who’s tried to snag the image URL for reuse (Google Images?) will end up getting a 400 error.
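As a rough illustration of how those signed URLs expire, here is a sketch that pulls the Expires timestamp (a Unix time) out of the query string. The URL is truncated from the embed above, and the helper name is my own:

```python
from urllib.parse import urlparse, parse_qs

def signed_url_expiry(url):
    """Return the Expires timestamp embedded in a CloudFront signed URL."""
    qs = parse_qs(urlparse(url).query)
    return int(qs["Expires"][0])

# Truncated version of the image URL from the embed (Signature omitted)
url = ("http://d2v0gs5b86mjil.cloudfront.net/xc/81901686.jpg"
       "?v=1&c=IWSAsset&k=2&Expires=1394499600&Key-Pair-Id=APKAJZZHJ4LGWQENK3OQ")

print(signed_url_expiry(url))  # 1394499600, i.e. 2014-03-11 01:00:00 UTC
```

Once the current time passes that value, the edge servers stop honoring the URL, which is why a stale snagged URL errors out.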

I thought it was interesting that the embedded iframe gives you not only the image, author and collection, but also links to re-share the image on Twitter and Tumblr. I guess this is Viral Marketing 101, but it’s smart I think, since it encourages reuse, and the recycling of content on the Web. Conspicuously absent from the reshare buttons is Facebook — maybe there’s a story there? Also, as we’ll see in a second, the description of the image is missing from the embedded view:

20 – 30 year old female worker pulls box off of warehouse shelf

Of course the other big thing the iframe does is give Getty an idea of where their content is being used. Anyone who uses this one-line embed iframe will trigger an HTTP request to an embed.gettyimages.com URL (hosted on Amazon EC2, incidentally). These requests, and their referral information, can be stashed away and analyzed, so that Getty can get a picture of who is using their content, and how. Embedded images and the Twitter and Tumblr reshares are automatically linked to Getty’s specific short URLs, such as:

http://gty.im/81901686

The number used in the short URL is also used in the expanded URL:

http://www.gettyimages.com/detail/photo/year-old-female-worker-pulls-box-off-of-high-res-stock-photography/81901686

But the title text is just there for SEO, it can be changed to anything:

http://www.gettyimages.com/detail/photo/wikileaks-storage-annex/81901686
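The relationship between the two URL forms can be sketched with a couple of hypothetical helpers (the function names are mine; only the URL patterns come from the examples above):

```python
def short_url(asset_id):
    """The gty.im permalink form, keyed only on the asset id."""
    return "http://gty.im/%d" % asset_id

def detail_url(asset_id, slug):
    """The expanded landing-page form; the slug is SEO-only, so any
    text here resolves to the same asset."""
    return "http://www.gettyimages.com/detail/photo/%s/%d" % (slug, asset_id)

print(short_url(81901686))
print(detail_url(81901686, "wikileaks-storage-annex"))
```

The asset id is the only load-bearing part of either URL, which is what makes the short form behave like a permalink.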

Ordinarily I’d be down on the use of a short URL, but in this case its role is more of a permalink. Of course these short URLs have the same problem as Handles and PURLs in that people won’t ordinarily bookmark them. But, Que Sera Sera. As the Verge pointed out, these embedded iframes could end up depriving Web content of lead images, if GettyImages decides to pull the plug on the embeds and they suddenly 404. But their credibility would suffer quite a bit from a decision like that. I think it’s important that they are encouraging the Web to rely on these URLs, and that they are putting their reputation on the line.

Of course lots of inbound links to those pages should do wonders for their PageRank. Plus, following that link allows you to purchase the image, explore other images by the photographer, related images in the GettyImages collection, as well as see some additional metadata about the photo: item number, rights, license type, original file dimensions, size, dots-per-inch. Some of this metadata is even expressed using RDFa (Facebook’s OpenGraph metadata), which makes the lack of a Facebook share button even more interesting. In addition there is some minimal use of schema.org HTML microdata for the search engines to nibble on. If you are curious, Google’s Structured Data Testing Tool provides a view on this metadata.
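As a sketch of what integrating applications could do with that markup, here is a minimal OpenGraph extractor using only the standard library. The sample HTML is invented for illustration, not copied from a Getty page:

```python
from html.parser import HTMLParser

class OpenGraphParser(HTMLParser):
    """Collect <meta property="og:..." content="..."> pairs from a page."""
    def __init__(self):
        super().__init__()
        self.props = {}

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            a = dict(attrs)
            if a.get("property", "").startswith("og:"):
                self.props[a["property"]] = a.get("content")

sample = '''<html><head>
<meta property="og:title" content="20 - 30 year old female worker pulls box off of warehouse shelf" />
<meta property="og:type" content="article" />
</head></html>'''

p = OpenGraphParser()
p.feed(sample)
print(p.props["og:type"])  # article
```

The same approach extends to schema.org microdata, though the itemprop/itemscope attributes take a bit more bookkeeping to resolve.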

It seems like there’s an opportunity to express more information in RDFa or microdata, specifically the details about the original, as well as licensing/rights metadata. Oddly, the RDFa doesn’t even mark up the author of the image, I suppose because Facebook’s OpenGraph doesn’t provide a way of expressing it. Marking up the author would be a start, but what if Getty established photographer pages, so that instead of Bob O’Connor linking to:

http://www.gettyimages.com/search/2/image?artist=Bob+O%27Connor&family=Creative

What if it linked to a vanity URL like:

http://www.gettyimages.com/people/bob-oconnor

This would be a perfect place to share links to the author’s other social media accounts, a bio, their photographer friends, etc. I’m thinking of the sort of work that National Geographic are doing with their YourShot application, for example this profile page for Bahareh Mohamadian.
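Deriving a vanity slug like bob-oconnor from a display name is straightforward; this helper is purely hypothetical and not anything Getty actually does:

```python
import re
import unicodedata

def photographer_slug(name):
    """Turn a display name into a URL-safe slug: normalize accents,
    drop apostrophes, collapse everything else to hyphens."""
    name = unicodedata.normalize("NFKD", name).encode("ascii", "ignore").decode()
    name = name.replace("'", "").replace("\u2019", "")
    return re.sub(r"[^a-z0-9]+", "-", name.lower()).strip("-")

print(photographer_slug("Bob O'Connor"))  # bob-oconnor
```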

The licensing restrictions and iframes around these images would have ordinarily turned me off. But given Getty’s market position in this space it’s completely understandable, and seems like a useful compromise for now. These landing pages are a perfect place to make more structured metadata available that could be used by integrating applications. Getty should invest in this real estate, not only for the Web, but also for data reuse across their enterprise. The landing pages are an example of just how influential Facebook and Google have been in promoting the use of metadata on the Web. Without them, I think it is safe to assume we wouldn’t have seen any structured metadata on these pages at all.

Murray, Peter: Memento: An RFC and a Chrome Plugin

Mon, 2014-03-10 00:19

A belated congratulations to the Memento team on the publication of their RFC and Google Chrome plugin for the Memento WWW time travel protocol. A fan of the Internet Archive Wayback Machine? Ever look at the history of a Wikipedia page? Curious to know about changes to a particular web page? The first is now easier to access…the second is a work in progress…and the third may come to a website near you. See what I mean through this demonstration video.

If you want to see more of the details, check out the guided introduction. If you are a hardcore techie, take a look at the text of RFC 7089. If you’d like to try it out yourself, load up Chrome and install the Memento extension. Because the Chrome Web Store won’t let you see the details of an extension unless you are actually using Chrome, I’ve reproduced the description here:

Travel to the past of the web by right-clicking pages and links.

Memento for Chrome allows you to seamlessly navigate between the present web and the web of the past. It turns your browser into a web time travel machine that is activated by means of a Memento sub-menu that is available on right-click.

First, select a date for time travel by clicking the black Memento extension icon. Now right-click on a web page, and click the “Get near …” option from the Memento sub-menu to see what the page looked like around the selected date. Do the same for any link in a page to see what the linked page looked like. If you hit one of those nasty “Page not Found” errors, right-click and select the “Get near current time” option to see what the page looked like before it vanished from the web. When on a past version of a page – the Memento extension icon is now red – right-click the page and select the “Get current time” option to see what it looks like now.

Memento for Chrome obtains prior versions of pages from web archives around the world, including the massive web-wide Internet Archive, national archives such as the British Library and UK National Archives web archives, and on-demand web archives such as archive.is. It also allows time travel in all language versions of Wikipedia. There’s two things Memento for Chrome can not do for you: obtain a prior version of a page when none have been archived and time travel into the future. Our sincere apologies for that.

Technically, the Memento for Chrome extension is a client-side implementation of the Memento protocol that extends HTTP with content negotiation in the date time dimension. Many web archives have implemented server-side support for the Memento protocol, and, in essence, every content management system that supports time-based versioning can implement it. Technical details are in the Memento Internet Draft at http://www.mementoweb.org/guide/rfc/ID/. General information about the protocol, including a quick introduction, is available at http://mementoweb.org.

For queries about the Memento for Chrome extension and the Memento protocol, get in touch at memento-dev@googlegroups.com.
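To make the datetime content negotiation the description mentions a bit more concrete, here is a hedged sketch of one client-side piece: parsing the Link header a Memento TimeGate returns (relation types per RFC 7089) to find a memento and its archival datetime. The header value below is a simplified, invented example, not a real archive response:

```python
import re

def parse_link_header(value):
    """Parse '<uri>; rel="..."; datetime="..."' entries into dicts."""
    links = []
    # split on commas that introduce a new <uri>, so commas inside
    # quoted datetime values are left alone
    for part in re.split(r',\s*(?=<)', value):
        m = re.match(r'<([^>]+)>(.*)', part)
        if not m:
            continue
        link = {"uri": m.group(1)}
        for key, val in re.findall(r'(\w+)="([^"]*)"', m.group(2)):
            link[key] = val
        links.append(link)
    return links

header = ('<http://example.org/page>; rel="original", '
          '<http://archive.example.org/20140310/page>; rel="memento"; '
          'datetime="Mon, 10 Mar 2014 00:19:00 GMT"')

mementos = [l for l in parse_link_header(header) if l.get("rel") == "memento"]
print(mementos[0]["datetime"])  # Mon, 10 Mar 2014 00:19:00 GMT
```

A real client would first send a request with an Accept-Datetime header to a TimeGate and then follow the negotiated redirect, but the Link-header bookkeeping above is the heart of it.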

The Memento team is also developing a plugin for Mediawiki that speaks the Memento protocol. The effort to get it into the English Wikipedia has stalled at the moment, but I expect the developers will give it another go at some point. Congratulations to Herbert Van de Sompel, Michael Nelson, Rob Sanderson and the rest of the team at Los Alamos National Lab and Old Dominion University.

Tennant, Roy: A License to Skill

Mon, 2014-03-10 00:11

In many of the talks that I have given over the years I have taken pains to point out a key fact about library budgets: for most libraries, the majority of the budget goes to staff. Usually I use this as a way to put investment in computer hardware in perspective. That is, should your most expensive resource (staff, duh) be forced to waste time dealing with inferior equipment? No, I would assert. It’s just stupid. [correction made to correct an overstatement]

But that, of course, is merely the tip of the iceberg. It’s also one of the easiest problems to fix, since all it requires is better equipment. A much more difficult way to squeeze the most out of your most expensive investment is to build additional skills. And yet that is exactly what nearly all libraries should be doing.

Why? Because hardly any job in a library is the same as it was even just a few years ago. The kinds of tasks we are doing may be quite different than they were when we were hired. Doing these new things effectively often requires building new skills.

Therefore every library manager needs to have a plan for constant staff retooling. What makes this difficult is that people can have a variety of ways in which they learn best. Some learn best in a formal class. Others need only a few good books and some time to experiment. One of your first steps, then, is to help your staff determine how they learn best and find avenues for learning based on those preferences.

Sure, I’m talking about a lot of work. But isn’t your single largest investment worth it? Of course it is. Now get out there and start using your license to skill.

Murray, Peter: Mystery in the Library

Sun, 2014-03-09 22:13

A colleague e-mailed me the other day expressing appreciation for the DLTJ blog in part, and also describing a mystery that she is running in her library:

Adrian (MN) Police Chief Shawn Langseth gathering evidence in the library “crime”.

Because I am staring out the window, at yet another snow-storm-in-the-works, having just learned that school is called off AGAIN (waiting for the library urchins to pour in), I am trying to get caught up on life outside of a small prairie town.

To combat some serious winter blues (and who doesn’t have them this year?), we have decided to have a just-for-fun “crime spree” at our library. Thus far, the local Chief of Police has no leads (he has graciously agreed to participate and has been kept in the dark as to the identities of the perpetrators). We decided that having a crime spree might be a more interesting way to get people to talk about the library.

If you find yourself looking for something to take your mind off the weather, feel free to take a look at our crime spree: http://adrianbranchlibrary.blogspot.com/

Take a look at the posts created by Meredith Vaselaar, Librarian at the Adrian Branch Library. She even has the police chief involved with the story. The articles are posted on Blogspot and in the local newspaper. This sounds like a great way to bring the community into the local branch. Congratulations, Meredith! I’ll be watching from afar to see how this turns out.
