I am teaching a 90-minute (!) online workshop on September 13th on Jobs to be Done and New Feature Planning, where — yep! — I will be talking about the Kano model. Those of you not familiar with the jobs-to-be-done framework might still have heard:
People don’t want a quarter-inch drill, they want a quarter-inch hole. Theodore Levitt, paraphrasing
— the observation being that people buy services not for the services themselves but to get jobs done. A lot has actually been written that takes this further, noting that while demographics and characteristics play some minor role, the task or job at hand is largely independent of them, rendering feature planning around demographics (you might know them as personas) sort of useless. We’ll try to reconcile that, too.
Core to improving the library user experience is identifying need and introducing new and useful services, features, and content, but the risk of failure sometimes trumps our willingness to try anything out of the ordinary. What a shame, right? In this workshop, Michael Schofield — a developer, librarian, and chief #libuxer — will introduce you to methods and models for identifying the tasks patrons want to perform (or, their “jobs to be done”), and for determining whether providing a new service or feature might actually have a negative impact on the overall library user experience.
- September 13, 2016
- 2 – 3:30 p.m. (Eastern)
The workshop is free to library staff in the state of Florida, but others might still give it a whirl and let us know in the comments.
Hey there. Michael here. I am running an online service design workshop courtesy of my amaaaazing friends at NEFLIN called Service Design for Libraries: From Map to Blueprint.
In this practical workshop, Michael Schofield — a developer, librarian, and chief #libuxer — introduces service design, its place in the user experience zeitgeist, and its role in deconstructing library services to hammer out the kinks. A brief conceptual overview is made up for by a hands-on workshop that has attendees creating a customer journey map before morphing it into a practical service blueprint.
- August 23, 2016
- 2 – 3:30 p.m. (Eastern)
- Free to library staff in the state of Florida (although, I think it’s free otherwise, too. Give it a whirl and let us know in the comments).
As we count down to the annual Lucene/Solr Revolution conference in Boston this October, we’re highlighting talks and sessions from past conferences. Today, we’re highlighting Devansh Dhutia’s session on how Gannett manages schema changes to large Solr collections.
Deploying schema changes to Solr collections with large volumes of data can be problematic when the reindex activity can take almost a whole day. Keeping in mind that Gannett’s 16-million-document index grows by approximately 800,000 documents per month, the status quo isn’t satisfactory. A side effect of the current architecture is that during a Solr outage, not only are all reindex activities paused, but upstream authoring engines suffer from latency issues.
This talk demonstrates how Gannett is switching to a queue-based solution with creative use of collections & aliases to dramatically improve the deployment, reindex, and authoring experiences. The solution also incorporates keeping a pair of Solr clouds in geographically dispersed data centers in an eventually synchronized state.
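The alias trick at the heart of this pattern can be sketched with Solr's Collections API. The host and collection names below are hypothetical, and this sketch only builds the request URL rather than calling a live cluster:

```python
# Sketch of the alias-swap reindex pattern: reindex into a fresh
# collection in the background, then atomically repoint the alias that
# clients query. CREATEALIAS is a real Collections API action; the
# host and collection names here are made up for illustration.
from urllib.parse import urlencode

SOLR = "http://solr.example.com:8983/solr"  # hypothetical host

def create_alias_url(alias: str, collection: str) -> str:
    """Build a Collections API call that points `alias` at `collection`.

    Swapping the alias after a background reindex makes the new data
    live without any client-side configuration change.
    """
    params = urlencode({
        "action": "CREATEALIAS",
        "name": alias,
        "collections": collection,
        "wt": "json",
    })
    return f"{SOLR}/admin/collections?{params}"

# After reindexing into a dated collection, flip the alias clients use:
url = create_alias_url("articles", "articles_20161011")
```

Because the alias swap is atomic on the Solr side, queries never see a half-reindexed collection, which is what makes the day-long reindex tolerable.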
Devansh joined the Gannett family in 2006 and has been an active contributor to Gannett’s search strategy, starting with Lucene and, for the last 2 years, Solr. Devansh was one of the primary developers involved in switching Gannett from the traditional master-slave Solr setup to a geo-replicated Solr Cloud environment. When Devansh isn’t working with Solr, he enjoys spending time with his wife & three-year-old daughter and trying new recipes.
Queue Based Solr Indexing with Collection Management: Presented by Devansh Dhutia, Gannett Co. from Lucidworks
Join us at Lucene/Solr Revolution 2016, the biggest open source conference dedicated to Apache Lucene/Solr on October 11-14, 2016 in Boston, Massachusetts. Come meet and network with the thought leaders building and deploying Lucene/Solr open source search technology. Full details and registration…
The post Queue Based Indexing & Collection Management at Gannett appeared first on Lucidworks.com.
Start out the fall with these all new sessions, including a web course and two webinars:
Social Media For My Institution; from “mine” to “ours”
Instructor: Plamen Miltenoff
Starting Wednesday September 21, 2016, running for 4 weeks
Register Online, page arranged by session date (login required)
A course for librarians who want to explore the institutional application of social media. It is based on an established academic course at St. Cloud State University, “Social Media in Global Context” (more information at http://web.stcloudstate.edu/pmiltenoff/lib290/ ). A theoretical introduction will help participants detect and differentiate the private use of social media from a structured approach to social media for an educational institution. Legal and ethical issues will be discussed, including future trends and management issues. The course will include hands-on exercises on the creation and dissemination of textual and multimedia content and on patron engagement, as well as brainstorming on strategies suitable for the institution regarding resources (human and technological), workload sharing, storytelling, and branding.
This is a blended format web course:
The course will be delivered as 4 separate live webinar lectures, one per week on:
Wednesdays, September 21, 28, October 5 and 12
2:00 – 3:00 pm Central
You do not have to attend the live lectures in order to participate. The webinars will be recorded and distributed through the web course platform, Moodle, for asynchronous participation. The web course space will also contain the exercises and discussions for the course.
How to Talk About User Experience
Presenter: Michael Schofield
Wednesday September 7, 2016
Noon – 1:30 pm Central Time
Register Online, page arranged by session date (login required)
The explosion of new library user experience roles, named and unnamed, the community growing around it, the talks, conferences, and corresponding literature signal a major shift. But the status of library user experience design as a professional field is undermined by the absence of a single, consistent definition of the area. While we can workshop card sorts and pick apart library redesigns, even user experience librarians can barely agree about what it is they do – let alone why it’s important. How we talk about the user experience matters. So, in this 90-minute talk, we’ll fix that.
Online Productivity Tools: Smart Shortcuts and Clever Tricks
Presenter: Jaclyn McKewan
Tuesday September 20, 2016
11:00 am – 12:30 pm Central Time
Register Online, page arranged by session date (login required)
Become a lean, mean productivity machine! In this 90 minute webinar we’ll discuss free online tools that can improve your organization and productivity, both at work and home. We’ll look at to-do lists, calendars, and other programs. We’ll also explore ways these tools can be connected, as well as the use of widgets on your desktop and mobile device to keep information at your fingertips.
And don’t miss the other upcoming LITA fall continuing education offerings:
Beyond Usage Statistics: How to use Google Analytics to Improve your Repository, with Hui Zhang
Offered: Tuesday October 11, 2016, 11:00 am – 12:30 pm
Project Management for Success, with Gina Minks
Offered: October 2016, runs for 4 weeks
Contextual Inquiry: Using Ethnographic Research to Impact your Library UX, with Rachel Vacek and Deirdre Costello
Offered: October 2016, running for 6 weeks.
Check the Online Learning web page for more details as they become available.
Questions or Comments?
For all other questions or comments related to the course, contact LITA at (312) 280-4268 or Mark Beatty, email@example.com
The Hydra Connect 2016 Program Committee thought that you might appreciate an update on how planning is going, so…
The list of workshops for Monday has been available on the wiki for some time now. We shall shortly be asking delegates to indicate which sessions they hope to attend so that we can allocate appropriately sized rooms and so that convenors can send out any pre-workshop materials to them.
The conference proper will start on Tuesday with a plenary session, a mix of key presentations and lightning talks as at previous Connects. On Tuesday afternoon we shall have the very popular poster session, for which we ask for a poster from every attending institution – please start planning! As last year, we shall arrange for printing at a FedEx branch near the conference venue for those who prefer not to travel with a poster tube! Details soon.
We received far more suggestions for Connect sessions than we have had in the past – in particular there were a lot of suggestions for panels and breakouts. We’re pleased to report that by extending the “traditional” Wednesday morning parallel tracks into the afternoon we have managed to accommodate everyone’s requests. We’ve timetabled presentations in 30-minute slots (a 20-minute presentation, 5 minutes or so for questions and a bit of time for possible movement between rooms). Panel and breakout sessions have been timetabled in one hour slots (50-55 minutes plus movement time). If you are involved in presenting or facilitating any of these sessions you should hear from us with confirmation at the end of next week when we have finished tweaking the timetable. We have included a number of slots for lightning talks and we’ll start soliciting these at the end of the month. We anticipate having the Tuesday and Wednesday programs on the wiki in ten days’ time or so and you’ll find there is so much to choose from that, inevitably, you will have to make some hard choices about which sessions to attend. We are hoping (though this is yet to be confirmed) that we may be able to make, and subsequently post, audio recordings of all the sessions so that you can listen to those that you couldn’t attend once you return home.
Thursday morning has been given over to unconference sessions and we hope to make “Sessionizer” available to delegates in about three weeks’ time so that you can start requesting slots. Thursday afternoon is available for Interest Groups and Working Groups to have face-time. We shall make any spare room capacity on Thursday available for booking to allow ad-hoc gatherings, Birds of a Feather sessions, and the like.
Booking is beginning to fill up, and if you haven’t yet registered, now would be a good time to do so! Full details of registration and the conference hotel are on the wiki. Please note that the specially negotiated hotel rate is only valid until September 6th, and you must register by that same date to receive a Hydra t-shirt!
If you can only make it to one Hydra meeting in 2016/17, this is the one to attend!
In May, the OpenTrialsFDA team (a collaboration between Erick Turner, Dr. Ben Goldacre and the OpenTrials team at Open Knowledge) was selected as a finalist for the Open Science Prize. This global science competition is focused on making both the outputs from science and the research process broadly accessible to the public. Six finalists will present their final prototypes at an Open Science Prize Showcase in early December 2016, with the ultimate winner to be announced in late February or early March 2017.
As the name suggests, OpenTrialsFDA is closely related to OpenTrials, a project funded by The Laura and John Arnold Foundation that is developing an open, online database of information about the world’s clinical research trials. OpenTrialsFDA will work on increasing access, discoverability and opportunities for re-use of a large volume of high quality information currently hidden in user-unfriendly Food and Drug Administration (FDA) drug approval packages (DAPs).
The FDA publishes these DAPs as part of the general information on drugs via its data portal, Drugs@FDA. These documents contain detailed information about the methods and results of clinical trials, and they are unbiased compared with reports of clinical trials in academic journals. This is because FDA reviewers require adherence to the outcomes and analytic methods prespecified in the original trial protocols; in contrast to most journal editors, they are unforgiving of practices such as post hoc switching of outcomes and changes to the planned statistical analyses. These review packages also often report on clinical trials that have never been published.
However, despite their high value, these FDA documents are notoriously difficult to access, aggregate, and search. The website itself is not easy to navigate, and much of the information is stored in PDFs or non-searchable image files for older drugs. As a consequence, they are rarely used by clinicians and researchers. OpenTrialsFDA will work on improving this situation, so that valuable information that is currently hidden away can be discovered, presented, and used to properly inform evidence-based treatment decisions.
The team has started to scrape the FDA website, extracting the relevant information from the PDFs through a process of OCR (optical character recognition). A new OpenTrialsFDA interface will be developed to explore and discover the FDA data, with application programming interfaces (APIs) allowing third party platforms to access, search, and present the information, thus maximising discoverability, impact, and interoperability. In addition, the information will be integrated into the OpenTrials database, so that for any trial for which a match exists, users can see the corresponding FDA data.
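A minimal sketch of such a pipeline, with loudly stated assumptions: the function names are made up, and pytesseract (a common Python OCR binding) is imported lazily so the pure-text cleanup step stands on its own; the real OpenTrialsFDA scraper may work quite differently.

```python
# Hypothetical sketch of the extraction step: OCR each scanned page of
# a drug approval package, then normalize the text for indexing.
# pytesseract and Pillow are assumed external dependencies.
import re

def clean_ocr_text(raw: str) -> str:
    """Normalize whitespace and turn form-feed page breaks (typical in
    OCR output) into newlines so the text can be indexed and matched."""
    text = raw.replace("\f", "\n")
    return re.sub(r"[ \t]+", " ", text).strip()

def extract_pages(image_paths):
    """OCR each scanned DAP page image into searchable text."""
    import pytesseract              # external dependency, assumed installed
    from PIL import Image
    for path in image_paths:
        yield clean_ocr_text(pytesseract.image_to_string(Image.open(path)))
```

The interesting engineering is less the OCR call itself than the cleanup and the later matching of extracted records against trials already in the OpenTrials database.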
Future progress will be shared both through this blog and the OpenTrials blog: you can also sign up for the OpenTrials newsletter to receive regular updates and news. More information about the Open Science Prize and the other finalists is available from www.openscienceprize.org/res/p/finalists.
New vacancy listings are posted weekly on Wednesday at approximately 12 noon Central Time. They appear under New This Week and under the appropriate regional listing. Postings remain on the LITA Job Site for a minimum of four weeks.
New This Week
Visit the LITA Job Site for more available jobs and for information on submitting a job posting.
Are we the only ones who feel like this summer is flying by? While the thought of saying goodbye to warm summer days pains us a little inside, we are excited that Lucene/Solr Revolution 2016 is just two short months away! The conference will be held October 11-14 in Boston, MA. If you haven’t secured your spot yet, don’t wait. Here’s a list of 10 things (in no particular order) that you won’t want to miss at this year’s conference:
1. Mix and Mingle with the Brains Behind Solr
Hear talks from and mingle with those who shape the Apache Solr project – the committers! Lucene/Solr Revolution is the unofficial annual gathering of Solr committers from around the world. It’s not often that this many committers gather at one event, so don’t miss the chance to meet the people who know Solr best.
2. 50+ Breakout Sessions
Over 50 breakout sessions from Solr users across all industries. Learn how companies like Salesforce, IBM, Bloomberg, Sony, The Home Depot, Microsoft, Allstate Insurance Company, and more use Solr to solve business problems. Check out the agenda here.
3. Very Happy Hours
Happy Hours to network with attendees, speakers, sponsors, and committers. There is a ton of content to digest at Lucene/Solr Revolution and we guarantee you will learn a lot. However, we aren’t skimping on the fun!
4. Superstar Keynotes
Keynotes from Cathy Polinsky, SVP of Search at Salesforce, and Sridhar Sudarsan, CTO of Product Management and Partnerships at IBM Watson. Hear about the role Solr plays in the strategy and execution of two of the world’s leading enterprises.
5. World-Class Training
Pre-conference training to polish your skills and prime your brain for all of the information coming your way during the conference. Two-day hands-on Solr training is offered on October 11th & 12th. Check out the course listings here.
6. Ask the Experts – in Person!
Meet with experts on specific questions you have about Solr during Office Hours.
7. Stump the Chump!
Catch the popular “Stump the Chump” session with Solr Committer, Chris “Hoss” Hostetter. Be prepared for a laugh as you watch Hoss answer questions from attendees and community members trying to stump him. Check out last year’s video here. Want to Stump the Chump? Submit questions to firstname.lastname@example.org before October 12.
8. Get a New Profile Pic
A professional headshot – come on, you know you need one. Get yours for free at the conference. Your LinkedIn followers will thank you.
9. Party in the Sky
Conference party at Skywalk Boston – check out 360 degree views of Boston from the top of the Prudential Center while enjoying food, drinks, games, and music. It’s sure to be a good time!
10. Sponsor Showcase
Visit our sponsor showcase to learn about products and services for your search and big data needs or about industry job opportunities. Participate in contests to win prizes.
Register today to join the fun and spend the week with us learning about all things Solr.
The post 10 Things You Don’t Want to Miss at Lucene/Solr Revolution 2016 appeared first on Lucidworks.com.
After reading Mor’s post on the recent website update, I thought I’d elaborate a little on the team page, and how we ended up using Slack to update it. The following is from a post on my personal blog.
I recently undertook the task of redesigning a couple of key pages for the Open Knowledge International website. A primary objective here was to present ourselves as people, as much as an organisation. After all, it’s the (incredible) people that make Open Knowledge International what it is. One of the pages to get some design time was the team page. Here I wanted to show that behind this very static page, were real people, doing real stuff. I started to explore the idea of status updates for each staff member. This would, if all goes to plan, keep the content fresh, while hopefully making us a little more relatable.
My work here wasn’t done. In this scenario, my colleagues become “users”, and if this idea had any chance of working it needed a close-to-effortless user experience. Expecting anyone other than a few patient techies to interact with the website’s content management system (CMS) on a regular basis just isn’t realistic. As it happens, my colleagues and I were already creating the content I was looking for. We chat daily using an internal instant messaging app (we use Slack). As well as discussing work-related issues, people often share water-cooler items such as an interesting article they have read, or a song that captures how they are feeling. Sharing stuff like this can be as easy as copying and pasting a link: Slack will grab the title of the page and present it nicely for us. So what if, at that moment of sharing, you could choose to share more widely, via the website? After some discussions, we introduced a system that facilitates just this: if you add one of a few specific hashtags to your Slack message, it gets pushed to the website and becomes your most recent status. The implementation still needs a little polishing, but I’m happy to say that after a few weeks of use, it seems to be working well, in terms of uptake at least. Whether anyone visiting the site really cares remains to be proven.
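The opt-in mechanism can be sketched roughly like this. The hashtag names, message shape, and payload are my assumptions, not Open Knowledge's actual implementation, which also has to listen to Slack's real-time API and push into the site's CMS:

```python
# Hypothetical sketch: a message becomes a website status only if the
# author opts in with one of a few publish hashtags; everything else is
# left alone as ordinary chat.
PUBLISH_TAGS = {"#status", "#nowplaying"}   # made-up tag names

def status_from_message(message: dict):
    """Return a website status payload if the Slack message opts in
    via a publish hashtag, otherwise None."""
    text = message.get("text", "")
    tags = {word for word in text.split() if word.startswith("#")}
    if not tags & PUBLISH_TAGS:
        return None                 # ordinary chat: do not publish
    # Strip the trigger hashtags before displaying the status:
    clean = " ".join(w for w in text.split() if w not in PUBLISH_TAGS)
    return {"user": message["user"], "status": clean}

update = status_from_message({"user": "james", "text": "Reading this #status"})
```

The design point is that the trigger lives inside an action people already perform (sending a Slack message), so publishing costs one extra word rather than a login to a CMS.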
I really like this solution. I like that it achieves its objective of not requiring additional effort, of course. Moreover, I like that it doesn’t introduce any barriers. It doesn’t assume that anyone wanting to contribute has a certain amount of knowledge (outside of what is already proven) or is happy to learn a new tool. It doesn’t ask anyone to change their way of working. It makes me wonder, how far could you take this model? It’s a big leap, but could we expand on this to the point where the interface being interacted with is that of whatever application the content creator sees fit? Just as the (slightly modified) action of sending a message in Slack became enough to make this small content change, could/should the action of saving a Word document to your local drive be enough to publish a blog post? (That particular example is not difficult to imagine, if you assume it’s happening within a walled-off Microsoft infrastructure, but that of course would be contrary to what I’m pondering here.)
Originally posted on smth.uk.
I spend a significant amount of time working with Google folks, especially Dan Brickley, and others on the supporting software, vocabulary contents, and application of Schema.org. So it is with great pleasure, and a certain amount of relief, that I share the announcement of the release of version 3.1.
That announcement lists several improvements, enhancements and additions to the vocabulary that appeared in versions 3.0 & 3.1. These include:
- Health Terms – A significant reorganisation of the extensive collection of medical/health terms, introduced back in 2012, into the ‘health-lifesci’ extension, which now contains 99 Types, 179 Properties and 149 Enumeration values.
- Finance Terms – Following an initiative and work by the Financial Industry Business Ontology (FIBO) project (which I have the pleasure to be part of), in support of the W3C Financial Industry Business Ontology Community Group, several terms have been added to improve the capability for describing things such as banks, bank accounts, financial products such as loans, and monetary amounts.
- Spatial, Temporal, and Datasets – CreativeWork now includes spatialCoverage and temporalCoverage properties, which I know my cultural heritage colleagues and clients will find very useful. Like many enhancements in the Schema.org community, this work came out of a parallel interest: work in which Dataset has also received some attention.
- Hotels and Accommodation – Substantial new vocabulary for describing hotels and accommodation has been added, and documented.
- Pending Extension – Version 3.0 introduced a special extension called “pending”, which provides a place for newly proposed Schema.org terms to be documented, tested and revised. The anticipation is that this area will be updated with proposals relatively frequently, in between formal Schema.org releases.
- How We Work – A HowWeWork document has been added to the site. This comprehensive document details the many aspects of the operation of the community, the site, the vocabulary, etc. – a useful way in for everyone from casual users through to those who want to immerse themselves in the vocabulary, its use, and its development.
For fuller details on what is in 3.1 and other releases, check out the Releases document.
Hidden Gems
Often working in the depths of the vocabulary, and of the site that supports it, I get up close to improvements that are not obvious on the surface but that some readers (those who immerse themselves) may find interesting, so I would like to share a few:
- Snappy Performance – The Schema.org site, a Python app hosted on the Google App Engine, is, shall we say, a very popular site. Over the last 3-4 releases I have been working on taking full advantage of multi-threaded, multi-instance, memcache, and shared datastore capabilities. Add in page caching improvements plus an implementation of Etags, and we can see improved site performance which can best be described as snappiness. The only downsides are that, to see a new version update, you sometimes have to hard-reload your browser page, and that I have learnt far more about these technologies than I ever thought I would need!
- Data Downloads – We are often asked for a copy of the latest version of the vocabulary so that people can examine it, develop from it, build tools on it, or whatever takes their fancy. This has been partially possible in the past, but now we have introduced (on a developers page we hope to expand with other useful stuff in the future – suggestions welcome) a download area for vocabulary definition files. From here you can download, in your favourite format (Triples, Quads, JSON-LD, Turtle), files containing the core vocabulary, individual extensions, or the whole vocabulary. (Tip: the page displays the link to the file that will always return the latest version.)
- Data Model Documentation – Version 3.1 introduced updated contents to the Data Model documentation page, especially in the area of conformance. I know from working with colleagues and clients, that it is sometimes difficult to get your head around Schema.org’s use of Multi-Typed Entities (MTEs) and the ability to use a Text, or a URL, or Role for any property value. It is good to now have somewhere to point people when they question such things.
- Markdown – This is a great addition for those enhancing, developing and proposing updates to the vocabulary. The rdfs:comment section of term definitions is now passed through a Markdown processor. This means that any formatting or links to be embedded in a term description no longer have to be escaped with horrible coding such as &amp; and &gt;. So, for example, a link can be input as [The Link](http://example.com/mypage) and italic text can be input as *italic*. The processor also supports WikiLinks-style links, which enable direct linking to a page within the site: [[CreativeWork]] will take the user directly to the CreativeWork page via a correctly formatted link. This makes correctly formatting type descriptions a much nicer experience, and it makes my debugging of the definition files nicer too.
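A sketch of that WikiLinks expansion: the regex and the /TermName URL scheme are my assumptions about how such links could be rewritten before the Markdown processor runs; the real implementation lives in the Schema.org site code.

```python
# Hypothetical pre-processing step: rewrite [[Term]] shorthand into a
# standard Markdown link before handing the rdfs:comment text to the
# Markdown processor. The /<TermName> URL scheme is an assumption.
import re

def expand_wikilinks(comment: str) -> str:
    """Rewrite [[Term]] to a Markdown link pointing at the term's page."""
    return re.sub(r"\[\[(\w+)\]\]", r"[\1](/\1)", comment)

linked = expand_wikilinks("See [[CreativeWork]] for details.")
```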
I could go on, but won’t. If you are new to Schema.org, or very familiar with it, I suggest you take a look.
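As a small illustration of the download formats mentioned above: the Triples files are plain N-Triples, simple enough to split with nothing but the stdlib. In practice you would use a proper RDF library such as rdflib; the sample line below is a made-up excerpt in the shape of the downloadable files.

```python
# Illustrative only: a tiny reader for N-Triples lines (one triple per
# line, terminated by " ."), skipping blanks and comments.
def parse_ntriple(line: str):
    """Split one N-Triples line into (subject, predicate, object).
    Returns None for blank lines and comments."""
    line = line.strip()
    if not line or line.startswith("#"):
        return None
    subject, predicate, obj = line.rstrip(" .").split(" ", 2)
    return subject, predicate, obj

# A made-up line in the shape of the vocabulary definition files:
sample = ('<http://schema.org/CreativeWork> '
          '<http://www.w3.org/2000/01/rdf-schema#label> "CreativeWork" .')
triple = parse_ntriple(sample)
```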
Except where otherwise stated, all content on eRambler by Jez Cope is licensed under a Creative Commons Attribution-ShareAlike 4.0 International license.
On Monday, NYU Libraries finally went live with a redesign several months in the making. Their user experience team, led by Nadaleen Templeman-Kluit, has been chronicling some of the process and it’s been sort of a joy to watch.
I mentioned when I wrote about the University of Indiana Libraries’ redesign that it was during NYU Libraries’ beta that I first noticed this thing I’m calling the “descending hero search pattern.”
This is also the second time I’ve seen this descending hero search pattern. When you engage with the search icon in the menu, the entire search unit either folds-up or descends. In this way the more advanced search options that libraries require can be present on multiple pages, consistently so. …This seems intuitive but the animation is key to staying oriented. … I haven’t thought about this too much to weigh-in with any authority, but I think I’m a fan.
I think I am going to write this up in a post but, tl;dr: regardless of what’s actually present in the container, if it’s a uniquely complex search box, the main benefit of carrying it from page to page is the consistency and reliability of that tool, rather than shoe-horning a keyword search into a sidebar.
For fun, they have archived the original site. Someone get them a round of beers, stat!
Emergency Alert: Dust Storm Warning in this area until 12:00PM MST. Avoid travel. Check local media - NWS. WTF? Where to even begin with this stupidity? Well, here goes:
- "this area" - what area? In the Bay Area we have earthquakes, wildfires, flash floods, but we don't yet have dust storms. Why does the idiot who composed the message think they know where everyone who will read it is?
- It's 11:44AM Pacific, or 18:44 UTC. That's 12:44PM Mountain. Except we're both on daylight saving time. So did the message mean 12:00PM MDT, in which case the message was already 44 minutes too late? Or did the message mean 12:00PM MST, or 19:00 UTC, in which case it had 16 minutes to run? Why send a warning 44 minutes late, or use the wrong time zone?
- A dust storm can be dangerous, so giving people 16 minutes (but not -44 minutes) warning could save some lives. Equally, distracting everyone in "this area" who is driving, operating machinery, performing surgery, etc. could cost some lives. Did anyone balance the upsides and downsides of issuing this warning, even assuming it only reached people in "this area"?
- I've written before about the importance and difficulty of modelling correlated failures. Now that essentially every driver is carrying (but hopefully not talking on) a cellphone, the emergency alert system is a way to cause correlated distraction of every driver across the entire nation. Correlated distraction caused by rubbernecking at accidents is a well-known cause of additional accidents. But at least that is localized in space. Who thought that building a system to cause correlated distraction of every driver in the nation was a good idea?
- Who has authority to trigger the distraction? Who did trigger the distraction? Can we get that person fired?
- This is actually the third time the siren has gone off while I'm driving. The previous two were Amber alerts. Don't get me wrong. I think getting drivers to look out for cars that have abducted children is a good idea, and I'm glad to see the overhead signs on freeways used for that purpose. But it isn't a good enough idea to justify the ear-splitting siren and consequent distraction. So I had already followed instructions to disable Amber alerts. I've now also disabled Emergency alerts.
A new version of our UX framework Lucidworks View is ready for download!
View is an extensible search interface designed to work with Fusion, allowing for the deployment of an enterprise-ready search front end with minimal effort. View has been designed to use the power of Fusion query pipelines and signals, and provides essential search capabilities including faceted navigation, typeahead suggestions, and landing page redirects.
- Windows Support: We’ve added a Windows packaged build; you can now run View on Windows
- You can now specify which port View runs on
- Improved performance by minifying builds by default and turning off page change animations
- Introduced developer mode, which allows you to develop with unminified build objects; just run npm run start-dev
Lucidworks View 1.3 is available for immediate download at http://lucidworks.com/products/view
From Graham Triggs, VIVO Technical Lead, on behalf of the VIVO team.
Austin, TX – The VIVO team is proud to announce that VIVO 1.9 was released on August 8, 2016.
- Full release notes are included below and are also available on the wiki: https://wiki.duraspace.org/display/VIVODOC19x/Release+Notes
guest post by Nick Gross, OITP’s 2016 Google Policy Fellow
Last Friday, the Washington Office of the American Library Association (ALA) hosted a luncheon for the 2016 Google Policy Fellows to discuss ALA’s public policy work. The Google Policy Fellowship gives undergraduate, graduate, and law students (like me) the opportunity to spend the summer working at public interest groups engaged in Internet and technology policy issues. While most organizations are in D.C., others are in Boston, Ottawa, San Francisco, and additional cities around the world.
Other fellows at the lunch included Matt McCoy from Tech Freedom; Lindsay Bembenek of the American Enterprise Institute; Apratim Vidyarthi, who is at the Center for Democracy and Technology; David Morar from the Internet Education Foundation; Cristina Contreras Zamora of the National Hispanic Media Coalition; and Raymond Russell, who is at the Mercatus Center. OITP intern Brian Clark also joined the lunch. They all shared their educational backgrounds, interests, and experiences at their host organizations.
ALA’s Washington staffers Emily Sheketoff, Alan Inouye, Carrie Russell, and Larra Clark shared their policy focus and experience at ALA. In doing so, they explained the importance of information and technology policies to libraries (from copyright to telecommunications and privacy), the underlying principles of libraries, and how ALA’s Washington office strives to advance tech policies that reflect those core principles. In particular, the discussion covered ALA’s past and current efforts to reform policy, including the privacy of library records, Universal Service Fund modernization, Net Neutrality, unlicensed spectrum, internet filtering at libraries, and copyright.
Fellows then had the opportunity to pose questions to ALA staffers, which prompted interesting discussions about the privacy of Internet browsing history and encryption efforts at local libraries, broadband investment and competition, E-Rate, cyber-security standards, the Right to be Forgotten, and the growing need for copyright law to reflect our digital age by better protecting users. Given libraries’ prominent position in communities and their mission to serve communities, they can play a vital role in protecting patrons’ privacy, spurring broadband deployment, connecting citizens to information, and – as both content users and creators – helping craft a balanced copyright law.
The luncheon highlighted the importance of public interest organizations, like the ALA, in the technology policy arena. More importantly, it showed attendees how prominent and active the ALA has been and is in shaping forward-looking information and technology policies that benefit individuals all over the U.S. The lunch’s insightful discourse further confirmed that Google Policy Fellows have the knowledge and preparation to debate tech policies in-depth and to make a significant impact on future tech policies.
Although I don’t know how deliberate it was, I think it’s smart to keep the hours above the fold on as many devices as possible. I haven’t written about it before, but I gut-check my impression of a library’s homepage by how quickly I can find its hours. Their presence tends to be a basic expectation: including them won’t really garner you any bonus points. But for someone looking for the library’s hours, the harder they are to find, the more frustration is felt.
This is also the second time I’ve seen this descending hero search pattern. When you engage with the search icon in the menu, the entire search unit either folds-up or descends. In this way the more advanced search options that libraries require can be present on multiple pages, consistently so.
I first noticed this when NYU Libraries — who also went live with their redesign today (!) — were beta testing their new site.
This seems intuitive but the animation is key to staying oriented. The immediacy of IU Libraries’ toggle — although it might just be my browser — left me sort of flashbanged.
I haven’t thought about this too much to weigh-in with any authority, but I think I’m a fan.
It seems like something I would do.
I’ve been putting together some information to help a few non-catalogers understand what MarcEdit is and how much it gets used around the world. Here are a couple of fun stats for 2015, based only on usage information from users that make use of the automated update tool.
2015 Usage Information:
- ~3 million unique program executions (times the program checked for an update [was started])
- 192 unique countries/political regions
- ~18,000 unique/active users
- around 105 print/online citations in 2015 related to MarcEdit (a combination of Google Scholar and WOS data, though this wasn’t rigorously scrubbed for duplicates)
Where is MarcEdit used?
Each of these points (save for the couple in the US where Google got a little confused with Georgia, Jersey, etc.) represents an individual country with users working with MarcEdit. In 2015, the US represented roughly 55% of unique usage, which means that 45% of the MarcEdit community is outside of the United States.