planet code4lib

Planet Code4Lib - http://planet.code4lib.org

Murray, Peter: Mystery in the Library

Sun, 2014-03-09 22:13

A colleague e-mailed me the other day, in part expressing appreciation for the DLTJ blog and in part describing a mystery that she is running in her library:

Adrian (MN) Police Chief Shawn Langseth gathering evidence in the library “crime”.

Because I am staring out the window at yet another snow-storm-in-the-works, having just learned that school is called off AGAIN (waiting for the library urchins to pour in), I am trying to get caught up on life outside of a small prairie town.

To combat some serious winter blues (and who doesn’t have them this year?), we have decided to have a just-for-fun “crime spree” at our library. Thus far, the local Chief of Police has no leads (he has graciously agreed to participate and has been kept in the dark as to the identities of the perpetrators). We decided that having a crime spree might be a more interesting way to get people to talk about the library.

If you find yourself looking for something to take your mind off the weather, feel free to take a look at our crime spree: http://adrianbranchlibrary.blogspot.com/

Take a look at the posts created by Meredith Vaselaar, Librarian at the Adrian Branch Library. She even has the police chief involved in the story. The articles are posted on Blogspot and in the local newspaper. This sounds like a great way to bring the community into the local branch. Congratulations, Meredith! I’ll be watching from afar to see how this turns out.


OCLC Dev Network: Welcome to the New Developer Network Website

Sun, 2014-03-09 14:30

Come on in and take a stroll around our new site! Check out the new Web services documentation, try the API Explorer, and request a WSKey just for fun (Steve says "not really" to that last idea). We invite you to spend some time, poke around, and let us know what you think of our new digs.

Chen, Sean: Gotcha: sRGB, Emacs 24, themes

Sun, 2014-03-09 04:13

I’ve been working with the Solarized color theme in Emacs for a while. The Homebrew recipe for Emacs has an option to pull in a patch that corrects the Cocoa port’s handling of sRGB colors. But for the longest time I couldn’t get the colors to line up exactly with the reference values.

I finally figured out that the theme expects a variable to be set:

(setq solarized-broken-srgb nil)

From the customize information:

Emacs bug #8402 results in incorrect color handling on Macs. If this is t (the default on Macs), Solarized works around it with alternative colors. However, these colors are not totally portable, so you may be able to edit the “Gen RGB” column in solarized-definitions.el to improve them further.

The gotcha is that in a lightly managed Emacs, custom.el generally loads after init.el. So if you set the variable through customize and expected it to take effect, it will not, because themes are normally loaded from init.el, either through a separate library or, as in my setup, directly.

So, for me to load Solarized with correct sRGB support:

(setq solarized-broken-srgb nil)
(load-theme 'solarized-dark t)

Ng, Cynthia: Meetup Notes & TakeAways: Library People Who Want to Code

Sat, 2014-03-08 04:41
So tonight we had a get-together for library people (many SLAIS students in attendance) who want to learn how to code in a more informal manner, without having to take a full course. Trying to Start: It is difficult to get this kind of learning in many formal settings, including library schools. So […]

Morgan, Eric Lease: Semantic Web application

Fri, 2014-03-07 21:47

This posting outlines the implementation of a Semantic Web application.

Many people seem to think the ideas behind the Semantic Web (and linked data) are interesting, but many are also waiting to see some of the benefits before committing resources to the effort. This is what I call the “chicken & egg” problem of linked data.

While I have not created the application outlined below, I think it is more than feasible. It is a sort of inference engine fed with a URI and an integer, both supplied by a person. Its ultimate goal is to find relationships between URIs that were not immediately or readily apparent.* It is a sort of “find more like this one” application. Here’s the algorithm (a rough code sketch follows the list):

  1. Allow the reader to select an actionable URI of personal interest, ideally a URI from the set of URIs you curate
  2. Submit the URI to an HTTP server or SPARQL endpoint and request RDF as output
  3. Save the output to a local store
  4. For each subject and object URI found in the output, go to Step #2
  5. Go to Step #2 n times for each newly harvested URI in the store, where n is a reader-defined integer greater than 1; in other words, harvest more and more URIs, predicates, and literals based on the previously harvested URIs
  6. Create a set of human readable services/reports against the content of the store, and think of these services/reports akin to a type of finding aid, reference material, or museum exhibit of the future. Example services/reports might include:
    • hierarchical lists of all classes and properties – This would be a sort of semantic map. Each item on the map would be clickable, allowing the reader to read more and drill down.
    • text mining reports – collect into a single “bag of words” all the literals saved in the store and create: word clouds, alphabetical lists, concordances, bibliographies, directories, gazetteers, tabulations of parts of speech, named entities, sentiment analyses, topic models, etc.
    • maps – use place names and geographic coordinates to implement a geographic information service
    • audio-visual mash-ups – bring together all the media information and create things like slideshows, movies, analyses of colors, shapes, patterns, etc.
    • search interfaces – implement a search interface against the result, SPARQL or otherwise
    • facts – remember, SPARQL queries can return more than just lists. They can return mathematical results such as sums, ratios, standard deviations, etc. They can also return Boolean values helpful in answering yes/no questions. You could have a set of canned fact queries, such as: How many ontologies are represented in the store? Is the number of ontologies greater than three? Are there more than 100 names represented in this set? How many languages are used in the set?
  7. Allow the reader to identify a new URI of personal interest, specifically one garnered from the reports generated in Step #6.
  8. Go to Step #2, but this time have the inference engine be more selective by having it try to crawl back to your namespace and set of locally curated URIs.
  9. Return to the reader the URIs identified in Step #8; as a consequence, these URIs ought to share some of the same characteristics as the very first URI, and you have implemented a “find more like this one” tool. You, as curator of the collection of URIs, might have thought the relationships between the first URI and the set of final URIs were obvious, but those relationships would not necessarily be obvious to the reader, and therefore new knowledge would have been created or brought to light.
  10. If there are no new URIs from Step #7, then go to Step #6 using the newly harvested content.
  11. Done. If a system such as the one above were created, the reader would quite likely have acquired some new knowledge, and this would be especially true the greater the size of n in Step #5.
  12. Repeat. Optionally, have a computer program repeat the process with every URI in your curated collection, and have the program save the results for your inspection. You may find relationships you did not perceive previously.
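
To make the harvesting loop of Steps #2 through #5 concrete, here is a minimal sketch in Python using the rdflib library. It is an illustration of the idea, not an implementation the author has written: the seed URI, the depth, and the harvest function name are hypothetical, and a real crawler would also need politeness delays, robots.txt handling, and better error reporting.

# A minimal sketch of Steps #2-#5, plus one canned "fact" query from Step #6.
# The seed URI and depth below are hypothetical, chosen only for illustration.
from rdflib import Graph, URIRef

def harvest(seed_uri, n):
    """Fetch RDF for a URI, then for the URIs it mentions, n rounds deep."""
    store = Graph()                  # the local store (Step #3)
    seen = set()
    frontier = {seed_uri}
    for _ in range(n):               # repeat the harvest n times (Step #5)
        for uri in frontier - seen:
            seen.add(uri)
            try:
                store.parse(uri)     # request RDF over HTTP (Step #2)
            except Exception:
                pass                 # skip URIs that do not resolve to RDF
        # collect every subject and object URI found so far (Step #4)
        frontier = {str(term) for s, p, o in store for term in (s, o)
                    if isinstance(term, URIRef) and str(term) not in seen}
    return store

store = harvest("http://example.org/curated/item/1", n=2)  # hypothetical seed
result = store.query(
    "SELECT (COUNT(DISTINCT ?s) AS ?subjects) WHERE { ?s ?p ?o }")
for row in result:
    print("distinct subjects harvested:", row[0])

The more selective crawl in Step #8 could reuse the same loop with an added filter that only follows URIs beginning with your own namespace.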

I believe many people perceive the ideas behind the Semantic Web to be akin to investigations in artificial intelligence. To some degree this is true, and investigations into artificial intelligence seem to come and go in waves. “Expert systems” and “neural networks” were incarnations of artificial intelligence more than twenty years ago. Maybe the Semantic Web is just another in a long line of such forays.

On the other hand, Semantic Web applications do not need to be so sublime. They can be as simple as discovery systems, browsable interfaces, or even word clouds. The ideas behind the Semantic Web and linked data are implementable. It is just a shame that nothing has yet caught the attention of wider audiences.

* Remember, URIs are identifiers intended to represent real-world objects and/or descriptions of real-world objects. URIs are perfect for cultural heritage institutions because cultural heritage institutions maintain both.

ALA Equitable Access to Electronic Content: Up Next? E-rate and it’s worth the wait

Fri, 2014-03-07 20:51

We’re not sure how best to characterize waiting with your finger poised over the refresh key in anticipation of an FCC Public Notice. Nonetheless, we at ALA were not the only ones who impatiently awaited the latest installment of the E-rate modernization proceeding, which began last June with the President’s ConnectED initiative announcement (if not before, with the 2010 National Broadband Plan).

Since the summer release of the Notice of Proposed Rulemaking (NPRM), the Commission has logged over 1,500 comments and ex parte filings. Some of the issues raised in the NPRM warrant further public input to help the Commission determine the best path forward. To that end, the Commission is seeking detailed input on three specific issues:

  • “How best to focus E-rate funds on high-capacity broadband, especially high-speed Wi-Fi and internal connections;
  • Whether and how the Commission should begin to phase down or phase out support for traditional voice services in order to focus more funding on broadband; and
  • Whether there are demonstration projects or experiments that the Commission should authorize as part of the E-rate program that would help the Commission test new, innovative ways to maximize cost-effective purchasing in the E-rate program.” (Paragraph 4)

Within these issues are a number of critical questions important to libraries, and decisions made through the public record will certainly influence library broadband capacity and the ability of libraries to deliver key community services. The opportunity to shape the future direction of the E-rate program is immense, and therefore somewhat daunting as we begin to fine-tune our own proposals. In our initial comments and throughout the process we have sought input and feedback from a wide range of librarians, and we expect to do so again with the guidance of the ALA E-rate Task Force, other ALA member leaders, and expert consultants.

Once a Notice is released, the most common thing to do is count the pages (phew, “only” 20 pages of questions) and search for your key interest – in our case, “librar.” It is noteworthy that a number of comments by libraries are cited, as well as those from ALA. Moreover, some of our ideas are discussed explicitly (paragraph 59). This reflects the Commission’s dedication to capturing the important role libraries play in their communities that broadband enables; the great need libraries have to boost broadband capacity; and the potential differences between the needs of libraries and those of our school counterparts.

We are gratified to see that the Commission remains open, and even aggressive, in soliciting new ideas about how to make sure the program is efficient and effective – that funds are targeted to the most critical services that build library and school broadband capacity. As stated in the Notice, the targeted issues are not the only issues that could be included in a final order. They are simply the ones on which the record to date was unclear or commenters were evenly split, or where the Commission needs more detail to fully understand how to address some of the stickiest challenges raised in the NPRM.

Many of the questions posed ask commenters to make difficult choices – such as determining the most equitable means of ensuring applicants receive funding for internal wiring and Wi-Fi networks, and deciding how to handle voice services in a program geared toward broadband capacity. Though not mentioned in the Notice, we understand that increasing the overall size of the fund is still somewhere on the table – while we feast on this Public Notice, we expect the overall funding question to be the next course.

So what’s the next step?

Comments are due April 7th, with reply comments due April 21st. For the next few days we will be nose down reading and parsing the Notice – a break of sorts from the multiple meetings we have had with FCC commissioners and staff, as ALA and as part of inside-the-beltway coalitions, in person and by phone. These meetings combined advocacy – for the role libraries play in education, employment, and entrepreneurship, and in empowering people through access to e-government, health information, digital literacy training, and similar services – with providing the Commission library data on the current state of broadband capacity and network configuration, and projections of future trends in library services that call for a scalable approach to building library capacity.

We appreciate the careful review the Commission has given the public record. In reality, the process, though sometimes murky and arguably long, is working: concepts are being analyzed, issues debated, and solutions weighed. There will be changes that have to be worked through when implemented, and we expect some discomfort. However, we support the direction of the Commission as we think about library needs five, ten, and twenty years from now. We view the Notice as another important step in the future of the E-rate program, and thus the future of broadband capacity for all libraries.

Alan Inouye, OITP director, did the hard part of this post by getting the first thoughts down, and he contributed thoughtful suggestions.

The post Up Next? E-rate and it’s worth the wait appeared first on District Dispatch.

OCLC Dev Network: Get a Sneak Peek at the New Developer Network site

Fri, 2014-03-07 16:45

The new Developer Network website is only 3 days away! While we're busy putting the final touches on it over the weekend, we wanted to give you a small taste of what's coming on Monday. We're pleased with the new design and structure, of course--but the 3 things below are the ones most likely to make it easier to work with OCLC's Web services.

LITA: LITA Webinar: Setting a Course for Social Media Success

Fri, 2014-03-07 15:35

LITA is offering a webinar All Aboard – The Party’s Starting! Setting a Course for Social Media Success, presented by Mary Anne Hansen, Doralyn Rossmann, Angela Tate, and Scott Young of Montana State University Library, from 2:00 p.m. – 4:00 pm CDT on April 2, 2014.

Social media is more than a way to inform users; it is a powerful way to build community online. Presenters will go beyond the basics by demonstrating how to create a social media guide for developing communities on Facebook, Twitter, Tumblr, and Pinterest. We will explore data tracking and assessment tools such as ThinkUp, HootSuite, Google Analytics, focus group data, and survey methods. We will also discuss strategies for integrating social media efforts into your organization’s strategic plan and educating peer organizations about best practices.

Participants will take home a template for creating a comprehensive plan for social media usage and assessment, with an emphasis on creating a meaningful voice and a compelling personality.

For registration and additional information, visit the course page.

ALA Equitable Access to Electronic Content: Thinking bigger about ebooks

Thu, 2014-03-06 22:02

In an American Libraries article published today, Alan S. Inouye, director of the American Library Association’s Office for Information Technology Policy, reported on his participation in the Connecticut State Library’s Ebook Symposium, a one-day event where library and publishing experts explored the current state of ebook affairs and the future of ebook lending for libraries, publishers, and readers.

As a presenter at the statewide ebook conference, Inouye discussed the many challenges libraries face in meeting patron demand for ebooks, including concerns related to fair pricing, equitable access to ebook titles, digital preservation, privacy, digital rights management, and accommodations for readers with limited vision.

Inouye writes:

My presentation provided a national view of library ebook challenges through the lens, naturally, of ALA’s work during the past several years. While we saw good progress in 2013 (having come from the depths of despair in 2012), the present state of library ebook lending is nonetheless not good. I talked about high prices as the paramount problem, though there are many other ones, including lack of availability to libraries of the full range of ebook titles and lack of full access by library consortia. Additionally, we also have concerns relating to archiving and preservation, privacy, accommodations for people with disabilities, among others.

It is essential that we think bigger. The publishing model itself is evolving from a simple linear progression of author to reader to a complex set of relationships in which nearly any entity could relate to another directly. For example, authors can work with libraries directly, or publishers can take on distribution or retailing operations. The library community needs to be creative and innovative in contemplating the models that will work best for us and our users.

Also on hand to comprise the publisher panel were Skip Dye, vice president of library and academic sales at Random House, and Adam Silverman, director of digital business development at HarperCollins. This session produced the most heat of the symposium, as a couple of Connecticut librarians pointedly criticized high prices for library ebooks. Through subsequent informal discussion, I got the sense that this dissatisfaction resonated with the other attendees.

Read the full article

The post Thinking bigger about ebooks appeared first on District Dispatch.

OCLC Dev Network: Resolved - Web Services Temporarily Unavailable

Thu, 2014-03-06 21:21

All critical services have been restored. Please contact us at devnet@oclc.org if you continue to experience problems.

We apologize again for the inconvenience.
