Yesterday library executives, mayors, county executives and school superintendents met with White House officials in Washington, D.C., to discuss their participation in the ConnectED Library Challenge, an initiative designed to get a library card into every student’s hand. The national initiative is gaining momentum, with approximately 60 cities and counties currently participating, 50 of which were represented at the convening. The White House and the Institute of Museum and Library Services (IMLS) worked with the American Library Association and the Urban Libraries Council to develop the program; each participating jurisdiction must have buy-in from its local government, school superintendent and public library system to support providing a public library card to students. The program leverages the ability of libraries to transform the learning experience for students of all ages, not only sparking a love of reading, but also offering ready access to computers for basic online research skills and digital literacy, as well as a place to create and innovate.
ALA President Sari Feldman, executive director of the Cuyahoga County Public Library, commented:
I know from my experience that when you link the school library, the school and the public library, that collaboration gives every student access to a rich collection of resources that improves their education.
An important factor of this initiative is that each jurisdiction is able to decide how best to provide this library access. Two examples of this are:
- Charlotte-Mecklenburg, NC, has created ONE Access, a program that uses students’ school identification numbers, instead of a separate library card, to access public library materials. Of the 147,000 students in the county, 100,000 have accessed library services through this new program.
- Washington, D.C., offers the D.C. One Card, which provides access to District government facilities and programs, including public schools, recreation centers, libraries and the Metro. In an effort to increase usage, many areas, including D.C., have chosen to eliminate fines for participating students.
Some of the communities participating in this program are: Baltimore, Boston, Charlotte, Chicago, Cleveland, Clinton Macomb, Columbus, Cuyahoga, D.C., Denver, Hartford, Hennepin County, Howard County, Indianapolis, Madison, Milwaukee, New Haven, Oakland, Pierce County, Pima, Pocatello, Pueblo City, Ramsey County, Columbia, Rochester Hills, Rochester, Salt Lake City, San Francisco, Seattle, Skokie, and St. Louis.
If you were not able to participate in the live event, the morning session is available on YouTube.
Digital Public Library of America (DPLA) is very pleased to announce the release of its second group of Primary Source Sets about topics in US history, literature, and culture, along with new features for navigating our growing project. These sets were developed and reviewed by our Education Advisory Committee for use by students and teachers in grades 6-12 and higher education. Each set includes an overview, ten to fifteen primary sources, links to additional resources, and a teaching guide. This project was generously funded by the Whiting Foundation.
DPLA will continue adding new sets and new features to the project through Spring 2016. To learn more about DPLA’s education work, read about education projects, sign up for the education news list, or contact firstname.lastname@example.org.
About the Whiting Foundation
The Whiting Foundation (www.whitingfoundation.org) has supported scholars and writers for more than forty years. This grant is part of the Foundation’s efforts to infuse the humanities into American public culture.
About the Digital Public Library of America
Digital Public Library of America (http://dp.la) is a national digital library that provides access to millions of primary and secondary sources from libraries, archives, and museums across the United States.
I’m Michelle Callaghan, a second-year graduate student at Villanova University. This is our column, “‘Cat in the Stacks.” I’m the ‘cat. Falvey Memorial Library is the stacks. I’ll be posting about living that scholarly life, from research to study habits to embracing your inner-geek, and how the library community might aid you in all of it.
It’s nearing the end of January and you’re probably kicking yourself for not keeping up with those New Year’s resolutions – and if you did keep up, my apologies and congratulations, but I’d say most of us have either fallen off the wagon or never had resolutions to begin with.
Resolutions are just too big!
Small goals for small periods of time that build into aspirations, on the other hand, are not only achievable but totally healthy. It’s the second week of the new semester, so the real work has probably kicked into gear. Do you know what your goals for this week are?
My goals for this week are to wrap up a short essay and ramp up thesis research.
My goal for the month is to get a chapter of my thesis written.
My goal for the year is to graduate from this Master’s program.
Do you see how this kind of scaffolded structure might be more useful to you than one overarching resolution? For one thing, goals for the week keep you working. Goals for the month give you a sense of application for that weekly grind. And your yearly goal, given it’s a goal you feel strongly about, can motivate you when your weekly goal–or even your daily goal or your hourly goal–feels all but impossible.
Context is important for motivation. What are you working toward? What do you really want, and what are you doing to achieve it?
As the wisest man among men* once said, “Do it. Just do it. Don’t let your dreams be dreams. Yesterday you said tomorrow. So just do it. Make your dreams come true. Just do it! Some people dream of success, while you’re gonna wake up and work hard at it. Nothing is impossible. You should get to the point where anyone else would quit and you’re not going to stop there, no! What are you waiting for? Do it! Just do it! Yes you can! Just do it. If you’re tired of starting over, stop giving up.”
*Shia LaBeouf – I do hope the sarcasm is palpable.
Article by Michelle Callaghan, graduate assistant on the Communication and Service Promotion team. She is currently pursuing her MA in English at Villanova University.
I know a lot of librarians who’ve suffered with depression or anxiety, take psychotropics, or who go to therapy. It makes me wonder if people with mental illness are drawn to librarianship in greater numbers than other professions. I was very happy — and a little trepidatious — when I saw that two fantastic librarians were organizing an LIS Mental Health Week. I think it’s great that we’re working to stamp out the stigma surrounding mental illness. Many of the most creative, brilliant, and productive people in our society have suffered from mental illness, yet admitting to being a fellow sufferer can feel like career suicide. But it shouldn’t be, not just because it’s illegal to discriminate against people with a mental illness, but because many of the most talented and productive librarians I know suffer from mental illness. Myself included. I wonder if we’re so productive because of the tendency among the depressed and anxious to feel like we’re never doing enough. We’re never good enough. My boss recently reminded me that “this isn’t the tenure track” and that I really didn’t have to worry about getting continuous appointment. But that’s not why I work so hard. I do it because of what I might think of myself if I don’t.
I’ve never felt comfortable talking about my depression (or admitting to it) when I’m experiencing it. It’s only after I’ve clawed my way out of that hole that I can admit it ever happened. And the funny thing is that once you’re out of it, it’s hard to really make sense of the experience. A big part of depression for me involves shame. Deep, viscerally painful shame at my weakness; my inability to man up and either fix myself or seek help. Intellectually I understand that this is brain chemistry and illness and not a failure on my part, but it doesn’t change how I feel. And that’s the problem with depression. It doesn’t matter what you think and what you know, because this other irrational part of your brain is now in control of you. You make self-destructive choices that you’d never make under any other circumstances and it’s all driven by the fact that you feel unworthy of even feeling better.
I’ve had four bouts of major depression in my life and suffer from social anxiety (the latter of which I’m sure many who know me would find surprising). I’ve found ways to cope with the social anxiety and even to overcome it in some situations, but depression is much more problematic. I have a hard time remembering when I first started feeling depressed, but I think it was in 9th grade. I had lots of friends and fit in well and loved the arts school I was attending, but I felt alone, apart from everyone. I felt unloveable. That was when I started cutting myself, something I did on-and-off until age 20, and the scars criss-crossing my left arm are a constant visual reminder of how I tried to turn my pain inward and make it physical because that was somehow easier to process. I don’t think I really understood what depression was at the time, and at some point, I came out of it naturally and had wonderful experiences with people I’m lucky to still call my friends.
Major depression again settled on me like a shroud towards the end of my freshman year of college, but now I had the self-awareness to know that something was wrong. By the middle of my sophomore year, I was sleeping 12 hours a day, barely eating, and, in my waking moments, fantasized endlessly about different ways I would kill myself. That was when I called my dad and begged him to institutionalize me. He got me therapy and medication, which didn’t help, and I spent another six months spiraling out of control on the inside while playing the part of a successful college student on the outside. My grades never dropped. Maybe it was for the best that I stayed in school and toughed it out, but it was the most painful experience in my life and all I wanted was to take a vacation from myself. I came out of my depression the summer before my junior year and feeling ordinary again was the most extraordinary feeling in the world.
I didn’t have another issue with depression for a decade until I had my son. I have a Master’s degree in Social Work and have actually diagnosed and provided therapy to people with depression, but I still couldn’t do anything for myself. I just made myself and my husband (and probably my baby in some way) completely miserable for six months. Again, I didn’t seek help until things were so bad that I didn’t think I could be in my life anymore. This time, though, a low dose of Zoloft actually did the trick within days, and I felt like the biggest idiot in the world for spending six months suffering so horribly when a little pill could fix it for me. But that’s depression. It pulls you away from your loved ones and from your rational brain.
My last episode of depression was actually related to something that happened at Portland State. I won’t go into details, but the experience made it very clear to me that 1) allowing my work to define me was a huge problem and 2) I really needed to leave my job. Six months later, I heard about the opening at PCC and my depression lifted as soon as I knew I had a way out. Here, I feel so supported by my colleagues and part of a team that’s doing great work for students. I also struck a healthier work-life balance when I got off the tenure track. 1.5 years in and I couldn’t be happier. This was the only depressive episode I had that was caused by a specific situational event (I suppose you could argue postpartum depression was too, but that’s really chemical/hormonal). I was ok one day and then I wasn’t. My world was full of possibilities and then it wasn’t. There was no slow ramping up to major depression; the trauma happened and BAM. I truly thought at the time that my career was over… even as I was winning the ACRL Instruction Section Innovation Award.
I don’t think I’ll ever let myself get to a place where a work-specific trauma (save an actual disaster) could cause depression again, but that doesn’t mean I’ll never be depressed again. I’ve done what I can to depression-proof my life, but none of us can control everything. I just hope I’ll have the strength to seek help earlier if it does happen to me again.
When my depression is at its worst, I immerse myself in work even more, so it can sometimes be difficult for colleagues to know that something is wrong. I’m super-productive, which isn’t very different from what I normally am at work. I guess the bright side of this is that I don’t inconvenience colleagues, but, while I’m getting everything done (and sometimes more), I feel even more isolated from everyone around me. It almost feels like I’m acting, playing the role of a good employee. I think I’m pretty good at it, because no one outside ever seemed to notice that I was falling apart; not when I had postpartum depression at Norwich, nor when I was at PSU. I was completely empty on the inside, but somehow still managed to look like me on the outside. What a trick.
When I’m depressed, work is almost a gift; something I absolutely have to do that keeps my mind from spinning out of control. It’s the unscheduled moments that are the worst. The evenings. The weekends. Those moments when you don’t have anything you absolutely have to do, so instead you sit and essentially bask in your pain. I have terrible memories of sitting in the dark in college staring at nothing for hours as my mind spun and spun. Being depressed is so much more difficult when you have children, because you have to try and keep things together, even at home. I think that was the hardest part with my last experience of depression; that my son knew something was up. Shame upon shame.
If people learn anything from this, I hope it’s that not seeking help when you’re starting to be depressed is the worst thing you can do. It’s easy to ignore depression as it slowly creeps in or to think it’s something you can just power your way out of. Often it’s not until things are really bad that you realize you need help, and by then, you’re often at a place that is both dangerous and hard to come back from. I have a hard time asking for help even when I’m not depressed and this is a character flaw of which I’ve been trying to cure myself. There is no shame in seeking help; in fact, it takes great strength to do so. Don’t let shame or denial or negative self-talk keep you from things that could help you get out of depression.
To those who aren’t depressed, you should know that people manifest depression in many different ways. Not everyone fits the stereotype of the weepy zombie who can’t get out of bed. Depression can look like anger, insecurity, numbness, overcompensation, extreme sadness, etc. We all cope (or try to cope) with it in our own ways. Also, don’t assume that a colleague who is depressed is a liability, though, yes, some people may not perform at 100% when they’re depressed, like anyone with a physical illness. You may be surprised that some of the most productive librarians you know and maybe admire are suffering from depression, and that relentless productivity is just a symptom. Your best bet: 1) don’t make assumptions and 2) if you see a colleague’s behavior changing in a way that seems concerning, talk to them. I would have been so grateful if someone (other than my husband who is a saint) had reached out to me.
To everyone suffering from depression right now: you are not alone. Get help.
Some other great blog posts from this week so far:
- When burnout obscures major depression: a #LISMentalHealth week post
- #LISMentalHealth and the state of me
- (My) Chronic and Mental illness story – #LISMentalHealth
- Working on it
- All that you leave behind
And follow (and, if so inclined, participate in) the Twitter discussion: #LISMentalHealth.
Gratitude to Cecily Walker and Kelly McElroy for calling us together for LIS Mental Health Week 2016.
Pondering my bona fides. I will say this: the black dog is my constant companion. I cannot imagine life without that weight.
I am afraid to say more too openly.
I will deflect, then, but in a way that I hope is useful to others.
Consider this: I am certain, as much as I am certain of anything, that my profession has killed at least three men of my acquaintance.
A mentor. A friend. A colleague who I did not know as well as I would have liked, but who I respected.
All of whom were loved. All of whom had the respect of their colleagues — and the customers they served.
All of whom cared, deeply. Too much? I cannot say.
I have been working in library automation long enough to have become a member of that strange group of folks who have their own lore of long nights, of impossible demands and dilemmas, of being at once part of and separate from the overall profession of librarianship. Long enough to have seen friends and colleagues pass away, and to know that my list of the departed will only lengthen.
But these men? All I know is that they left us, or were taken, too soon — and that I can all too easily imagine circumstances where they could have stayed longer. (But please, please don’t take this as an expression of blame.)
I am haunted by the others whom I don’t know, and never will.
I cannot reconcile myself to this. If this blog post were a letter, it would be spotted by my tears.
But I can make a plea.
The relationship between librarians and their vendors is difficult and fraught. It is all too easy to demonize vendors — but sometimes, enmity is warranted; more often, adversariality at least is; and accountability: always. Thus do the strictures of the systems we live in constrain us and alienate us from one another.
At times, circumstances may not permit warmth or even much kindness. But please remember this, if not for me, for the memory of my absent friends: humans occupy both ends of the library/vendor relationship. Humans.
This update includes a new tool, changes to the merge tool, and a behavior change in the MARCEngine. You can see the change log at:
- Windows/Linux: http://marcedit.reeset.net/software/update.txt
- Mac OSX: http://marcedit.reeset.net/software/mac.txt
You can get the update through MarcEdit’s automated update mechanism or from: http://marcedit.reeset.net/downloads/
SAVE THE DATE…
Replacement Parts. The Ethics of Procuring and Replacing Organs in Humans. Friday, January 29 at 3:00 p.m. in Room 205. Scholarship@Villanova lecture featuring Arthur L. Caplan, PhD; The Rev. James J. McCartney, OSA; and Daniel P. Reid ‘14 CLAS. Dr. Caplan, an internationally recognized bioethicist, along with co-editors Father McCartney and Reid, will discuss their collection of essays from medicine, philosophy, economics and religion that address the ethical challenges raised by organ transplantation. Questions? Contact: Sally Scholz
Happening @ ‘Nova
Be sure to check out these noteworthy events that are taking place on Villanova’s campus this week!
Dr. Martin Luther King, Jr. Commemoration: 1/19
Join the Center for Peace and Justice Education as they welcome MK Asante as the 2016 Dr. Martin Luther King, Jr. keynote speaker on Tuesday, Jan. 19 at 5 p.m. in the Villanova Room. MK Asante is an associate professor of English (Morgan State), author, filmmaker, and rapper. He is most well-known for his best-selling memoir, Buck. Questions? Contact: Sharon Discher
Dispatch from the Climate Summit: 1/19
Hear first-hand about the agreement coming out of December’s Paris Climate Summit. Anthony Giancatarino, Policy Director for the Center for Inclusion in NYC, participated in the Summit and will discuss his experience. This is the first event in a series titled “Care for our Common Home: Multi-faith Views on Climate Justice.” The event will take place on Tuesday, Jan. 19, 12:45-2 p.m., in the St. Rita Community Center. A light lunch will be provided; RSVP to email@example.com. Questions? Contact: Julia Sheetz
Spring Career Fairs: 2/2 & 2/3
The Career Center is hosting the 2016 Spring Career Fairs on 2/2 and 2/3. Tuesday, Feb. 2: 10 a.m.–1 p.m. Communication, Marketing & Media; Tuesday, Feb. 2: 3–6 p.m. Finance, Accounting & Consulting; Wednesday, Feb. 3: 10 a.m.–1 p.m. Engineering, Science, & Technology. All fairs are held in The Villanova Room. Questions? Contact: Sheila Doherty
What Could Be Better Than Two New Printers?
Three new printers have replaced the two public printers on Falvey’s first floor. Although the new printers are smaller than the previous ones, their speed is about the same. Most importantly, three machines provide a much greater capacity.
If a printer needs paper, has an error message, has a paper jam, or has any other problem, please notify the Service Desk Supervisor.
Falvey staff received specialized training from the supplier on how to service these new machines. Having only trained personnel service the printers will ensure that repairs are accurate and quick and that the printers will avoid chronic problems in the future.
Library staff welcome this improvement to our services and remain committed to your success!
PICTURES FOR DAYS
Do you like images? How about high-quality, copyright-free images? Do you want them right now!? Check out what the New York Public Library has to offer. Spoiler alert: they have 180,000 high-resolution images in the public domain, easily accessible from their website, featuring items from their New York City collection, historical maps, illustrations, texts – “just go forth and reuse!” they say. You can check out Walt Whitman’s manuscript, medieval and Renaissance illuminated manuscripts, and 19th- and 20th-century stereoscopic views.
DID YOU KNOW you could be the one who names the next neologism?
From across the pond, Cambridge Dictionaries Online includes the following from 2015:
digital amnesia (“the inability to remember basic things, such as telephone numbers, dates, etc. as a result of over-reliance on mobile phones, the Internet etc. for storing information”),
fitspo (“informal short for ‘fitspiration’; the inspiration to get fit and strong”) and
simulator sickness (“a nauseous feeling caused by moving your head too fast while playing a virtual reality simulation game”).
Banished words?—Perhaps calling them “overused” would be more accurate. Lake Superior State University in Michigan publishes a list of words and phrases that should be retired, including manspreading, physicality and “break the internet.”
Whether you’re a logophile, a neophile or just a curious person, you’ll be entertained by these lists of latecomers to our lexicon.
“Who exactly are the ‘intellectuals’?” Human beings have possessed an intelligence beyond that of animals for millions of years. So what separates the intellectuals from the rest of humanity? According to the author of Birth of the Intellectuals, Christophe Charle, the term came into use with the Dreyfus Affair, a political scandal in France that divided the country for more than 50 years, and “signified a cultural and political vanguard who dared to challenge the status quo.”
QUOTE OF THE DAY
Poet and author Edgar Allan Poe was born on this day in 1809 in Boston, Massachusetts. Perhaps you are familiar with “The Raven,” “The Tell-Tale Heart,” or “The Fall of the House of Usher.” He is known for his dark, mysterious, and sometimes macabre stories. Did you know there is an Edgar Allan Poe museum in Richmond, Virginia?
“Once upon a midnight dreary, while I pondered, weak and weary,
Over many a quaint and curious volume of forgotten lore—
While I nodded, nearly napping, suddenly there came a tapping,
As of some one gently rapping, rapping at my chamber door—
“‘Tis some visitor,” I muttered, “tapping at my chamber door—
Only this and nothing more.”
from “The Raven”
Have an excellent day! Feel free to comment your thoughts and ideas for future editions of The 8:30 below.
There’s lots of practical material out there written by practitioners in the field, but those resources – blogs, websites, conference presentations – are often written for fellow experts. Ain’t nothing like experts talking to other experts to make a novice instantly feel lost and/or dumb.
Wow, yeah, that’s a problem. So we asked Meg to help us bridge that gap with to-the-point questions and answers from the perspective of someone interested but new to the fun.
Each episode is oriented around a single question with just a little room for back and forth (and my — Michael’s — tangents), but kept tight to be as useful as possible.

Things we talked about:
- University of Michigan Libraries’ X/O Participatory Design (PDF) by Suzanne Chapman and Ellin Wilson
- Customer Journey Maps
- Measure the Future – Jason Griffey, Jeff Branson, and Gretchen Caserotti
- UX, Consideration, and a CMMI-Based Model by Coral Sheldon-Hess
If you like, you can download the MP3.
The post “What research techniques and tools are actually used in library UX work?” appeared first on LibUX.
Excited about #dig in MRI 2.3.0? Want to use it in your gem code, but don’t want your gem to require MRI 2.3.0 yet?
I got you covered:
It adds a pure-Ruby #dig implementation if Hash/Array/Struct don’t have #dig yet. If they already define #dig, it does nothing to them, so you can use dig_rb in gem code meant to run on any Ruby. On MRI 2.3.0 you’ll get the native implementation; on other Rubies, dig_rb’s pure-Ruby implementation.
Note: JRuby 9k doesn’t support #dig yet either, and dig_rb will work fine there too.
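For anyone who hasn’t met #dig yet, here is a quick sketch of the behavior dig_rb backfills (the sample hash is my own illustration; on a Ruby without native #dig you would require the gem first):

```ruby
# Hash#dig and Array#dig walk nested collections, returning nil
# as soon as any intermediate key or index is missing,
# instead of raising NoMethodError on nil.
data = { user: { addresses: [{ city: "Portland" }] } }

data.dig(:user, :addresses, 0, :city)  # => "Portland"
data.dig(:user, :phones, 0)            # => nil, not an exception
```

Compare that with the chained-index equivalent, `data[:user][:phones][0]`, which blows up with NoMethodError because `data[:user][:phones]` is nil.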
Filed under: General
Islandora Camp is going to Fort Myers, FL from May 4 - 6. We'll be holding our traditional three day camp, with two days of sessions sandwiching a day of hands-on training from experienced Islandora instructors. We are very pleased to announce that those instructors will be:

Developers:
Nick Ruest is the Digital Assets Librarian at York University, a cornerstone of the Islandora community, and one of its most experienced instructors, with five camps and the Islandora Conference under his belt. Nick has also been Release Manager for four Islandora releases, is the author of two solutions packs and several tools, and is Project Director of the Islandora CLAW project.
Diego Pino is an experienced Islandora developer and an official Committer. Although this is his inaugural Islandora Camp as an instructor, he has been helping folks learn how to get the most out of Islandora on our community listserv since he joined up. Diego started with Islandora in the context of handling biodiversity data for REUNA Chile and has transitioned over to develop and support the many Islandora sites of the Metropolitan New York Library Council.

Administrators:
Melissa Anez has been working with Islandora since 2012 and has been the Community and Project Manager of the Islandora Foundation since it was founded in 2013. She has been a frequent instructor in the Admin Track and developed much of the curriculum, refining it with each new Camp.
Melissa VandeBurgt is the Head of Archives, Special Collections, and Digital Initiatives at Florida Gulf Coast University. She cut her teeth as an instructor during the Islandora Conference, co-leading a workshop on Building Collections.
Sound like a team you'd like to hear from? Registration is open, with an Early Bird discount until February 15th. You could win a free registration if you design a t-shirt for the camp. You can also submit a proposal to do a session of your own on Day One or Day Three.
The fundamental reasons for the failure are lack of decentralization at both the organizational and technical levels. You have to read Mike's post to understand the organizational issues, which would probably have doomed Bitcoin irrespective of the technical issues. They prevented Bitcoin responding to the need to increase the block size. But the block size is a minor technical issue compared to the fact that:
the block chain is controlled by Chinese miners, just two of whom control more than 50% of the hash power. At a recent conference over 95% of hashing power was controlled by a handful of guys sitting on a single stage.

As Mike says:
Even if a new team was built to replace Bitcoin Core, the problem of mining power being concentrated behind the Great Firewall would remain. Bitcoin has no future whilst it’s controlled by fewer than 10 people. And there’s no solution in sight for this problem: nobody even has any suggestions. For a community that has always worried about the block chain being taken over by an oppressive government, it is a rich irony.

Mike's post is a must-read. But reading it doesn't explain why "nobody even has any suggestions". For that you need to read Economies of Scale in Peer-to-Peer Networks.
A summary of this post made it to Dave Farber's IP list, drawing a response from Tony Lauck, which you should read. Tony argues that Bitcoin is not a peer-to-peer system. In strict terms I would agree with him, but my argument does not depend on its being a strict P2P system. I would also point out two flaws in what Tony says here:
Decentralized control of the network depends on the rational behavior of the owners of the hashing power, but this is not concentrated for protocol reasons, rather it is an historical artifact of the evolution of the network due to ASIC supply chain issues and geography (low cost electricity and cold climates for inexpensive cooling). The highly concentrated operators of mining pools serve as representatives of the hash power, who can switch to other pools in less than a minute if they believe the operators are misbehaving.

First, I have never argued that the failure of decentralization was for "protocol reasons". It is for economic reasons, namely the inevitable economies of scale. Tony in effect agrees with me when he assigns "ASIC supply chain issues and geography (low cost electricity and cold climates for inexpensive cooling)" as the cause. Economies of Scale in Peer-to-Peer Networks addresses both low costs:
If there is even one participant whose rewards outpace their costs, Brian Arthur's analysis shows they will end up dominating the network.

and supply chain issues:
Early availability of new technology acts to reduce the costs of the larger participants, amplifying their economies of scale. This effect must be very significant in Bitcoin mining, as Butterfly Labs noticed.

Second, while in principle the "owners of the hashing power" can switch pools, it is a fact that the mining power has long been controlled by a small number of pools, each much larger than needed to provide stable income to miners. Mike Hearn's argument that "Bitcoin has no future whilst it’s controlled by fewer than 10 people" is sound at both the technical and organizational levels.
A new Pew report identifies a decline of in-home broadband connections among lower- to middle-income, rural, and minority households from just two years ago. Even millennials, who are part borg, aren’t as tethered as we might assume.
This isn’t to say millennials aren’t plugged in: “80% of American adults have either a smartphone or a home broadband connection,” an increasing number of which are mobile-only, particularly where you see broadband adoption declining.
The increase in the “smartphone-only” phenomenon largely corresponds to the decrease in home broadband adoption over this period. The rise in “smartphone-only” adults is especially pronounced among low-income households (those whose annual incomes are $20,000 or less) and rural adults. African Americans, who saw a marked decline in home broadband adoption, also exhibited a sharp increase in “smartphone-only” adoption (from 10% to 19%), as did parents with school-age children (from 10% in 2013 to 17% in 2015).

John B. Horrigan and Maeve Duggan, Pew
I suspect we have seen in-home broadband adoption peak. The implication is that while more people have access to the web, the speed and quality of their connections are diminished. Unfortunately, such plans are often roped in by data caps, meaning that just as metrics suffer from services that neglect mobile performance, increased page weight has a literal cost to users.
This post is part of a nascent library UX data collection we hope you can use as a reference to make smart decisions. If you’re interested in more of the same, follow @libuxdata on Twitter, or continue the conversation on our Facebook group. You might also think about signing-up for the Web for Libraries newsletter.
This won’t be the first time I ever admit this, nor will it be the last, but boy am I out of touch.
I’m more than familiar with the term “selfie”, which is when you take a photo of yourself. Heck, my profile pictures on Facebook, Twitter, and even here on LITA Blog are selfies. As much as I try to put myself above the selfie fray, I find myself smack in the middle of it. (I vehemently refuse to get a selfie stick, though. Just…no.)
But I’d never heard of this “shelfie” phenomenon. Well, I have, but apparently there’s more than one definition. I had to go to Urban Dictionary, that proving ground for my “get off my yard”-ness, to learn it’s a picture of your bookshelf, apparently coined by author Rick Riordan. But I was under the impression that a shelfie is where you take a picture of yourself with a book over your face. Like so:

Promo poster for bookstore Mint Vinetu
But apparently that’s called “book face”, so I’m still wrong.
Also, I just found out there’s an app called Shelfie, which lets you take a picture of your bookshelf and matches your books with free or low-cost digital versions (an e-ternative, if you will).
All along, you see, I thought a shelfie was when you took a picture of yourself with your favorite book in front of your bookshelf (because selfie + shelf = shelfie?), but it’s just of your bookshelf, not you. Apparently I’m vainer than I thought.
Here’s my version of a shelfie:

I never could get the hang of Thursdays.
Regardless, it’s a cool idea to share our books with our friends, to find out what each other is reading, or just to show off how cool our bookshelves look (and believe me, I’m jealous of a few of you). There are other ways to be social about your books – Goodreads and Library Thing come to mind – but this is a unique way to do it if you don’t use either one.
What does your shelfie look like?
A failed attempt at speeding up grouping in Solr, with an idea for the next attempt.

Grouping at a Statsbiblioteket project
We have 100M+ articles from 10M+ pages belonging to 700K editions of 170 newspapers in a single Solr shard. It can be accessed at Mediestream. If you speak Danish, try searching for “strudsehest”. Searches are at the article level, with the results sorted by score and grouped by edition, with a maximum of 10 articles / edition. Something like this:

q=strudsehest&group=true&group.field=edition&group.limit=10
This works well for most searches. But for the heavier ones, response times creep into seconds, sometimes exceeding the 10 second timeout we use. Not good. So what happens in a grouped search that is sorted by document score?
1. The hits are calculated
2. A priority queue is used to find the top-X groups with the highest scores
   1. For each hit, calculate its score
   2. If the score is > the lowest score in the queue, resolve the group value and update the priority queue
3. For each of the top-X groups, a priority queue is created and filled with document IDs
   1. For each hit, calculate its score and resolve its group value (a BytesRef)
   2. If the group value matches one of the top-X groups, update that group’s queue
      - Updating the queue might involve resolving multiple field values for the document, depending on in-group sorting
4. Iterate the top-X groups and resolve the full documents
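The two-pass flow above can be sketched in miniature. This is an illustrative Python model, not the actual Lucene/Solr code: hits are assumed to be pre-resolved (doc_id, score, group) tuples, so the cost of resolving group values and scores is deliberately not modelled here.

```python
import heapq
from collections import defaultdict

def grouped_top_docs(hits, top_x, group_limit):
    """Two-pass grouping over hits given as (doc_id, score, group) tuples."""
    # Pass 1: find the top-X groups by their best-scoring hit.
    best = {}
    for doc_id, score, group in hits:
        if group not in best or score > best[group]:
            best[group] = score
    top_groups = set(heapq.nlargest(top_x, best, key=best.get))

    # Pass 2: for each top group, keep the group_limit best documents.
    queues = defaultdict(list)
    for doc_id, score, group in hits:
        if group in top_groups:
            heapq.heappush(queues[group], (score, doc_id))
            if len(queues[group]) > group_limit:
                heapq.heappop(queues[group])  # drop the lowest-scoring doc
    return {g: sorted(q, reverse=True) for g, q in queues.items()}
```

Note that this toy version looks at the group of every hit in both passes, which is exactly the cost that the optimizations discussed in this post try to avoid.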
Observation 1: Hits are iterated twice. This is hard to avoid if we need more than 1 entry in each group. An alternative would be to keep track of all groups until all the hits have been iterated, but this would be extremely memory-costly with high-cardinality fields.
Observation 2: In step 3.1, score calculation and group resolving are performed for all hits. It is possible to use the same logic as step 2.1, where the group is only resolved if the score is competitive.

Attempt 1: Delayed group resolving
The idea in observation 2 has been implemented as a kludge-hacky-proof-of-concept. Code is available at the group_4_10 branch at GitHub for those who like hurt.
When the hits are iterated the second time, all scores are resolved but only the group values for the documents with competitive scores are resolved. So how well does it work?
Observation: Optimized (aka lazy group value resolving) grouping is a bit slower than vanilla Solr grouping for some result sets, probably the ones where most of the group values have to be resolved. For other result sets there is a clear win.
It should be possible to optimize a bit more and bring the overhead of the worst-case optimized groupings down to near-zero. However, since there are so few best-case result sets and since the win is just about a third of the response time, I do not see this optimization attempt as being worth the effort.

Idea: A new level of lazy
Going back to the algorithm for grouping we can see that “resolving the value” occurs multiple times. But what does it mean?
With DocValued terms, this is really a two-step process: The DocValue ordinal is requested for a given docID (blazingly fast) and the ordinal is used to retrieve the term (fast) in the form of a BytesRef. You already know where this is going, don’t you?
Millions of “fast” lookups accumulate to slow, and we don’t really need the terms as such. At least not before we have to deliver the final result to the user. What we need is a unique identifier for each group value, and the ordinal is exactly that.
But wait. Ordinals are not comparable across segments! We need to map the segment ordinals to a global structure. Luckily this is exactly what happens when doing faceting with facet.method=fc, so we can just scrounge the code from there.
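As a toy illustration of the problem and the fix (real Lucene builds this structure with its OrdinalMap; the function below is a hypothetical stand-in): each segment numbers its own sorted terms from 0, so the same term can have different ordinals in different segments, and a merged global dictionary reconciles them.

```python
def build_global_ordinals(segment_terms):
    """segment_terms: one sorted, deduplicated term list per segment.
    Returns the merged global term list plus, for each segment, a map
    from segment ordinal to global ordinal."""
    global_terms = sorted(set(t for seg in segment_terms for t in seg))
    global_ord = {t: i for i, t in enumerate(global_terms)}
    # seg_to_global[s][ord_in_segment_s] -> global ordinal
    seg_to_global = [[global_ord[t] for t in seg] for seg in segment_terms]
    return global_terms, seg_to_global
```

With such a map in place, comparing group values across segments is an integer lookup instead of a term (BytesRef) comparison.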
With this in mind, the algorithm becomes
1. The hits are calculated
2. A priority queue is used to find the top-X groups with the highest scores
   1. For each hit, calculate its score
   2. If the score is > the lowest score in the queue, resolve the group value ordinal and update the priority queue
3. For each of the top-X groups, a priority queue is created and filled with document IDs
   1. For each hit, resolve its group value segment-ordinal and convert that to a global ordinal
   2. If the group value ordinal matches one of the top-X groups, update that group’s queue
      - Updating the queue might involve resolving the document score or resolving multiple field value ordinals for the document, depending on in-group sorting
4. Iterate the top-X groups and resolve the Terms from the group value ordinals as well as the full documents
Note how the logic is reversed for step 3.1, prioritizing value ordinal resolving over score calculation. Experience from the facet code suggests that ordinal lookup is faster than score calculation.
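A minimal Python sketch of that reversed step, assuming each hit can cheaply yield its group's global ordinal and that scoring is the expensive part (the names and shapes here are hypothetical, not Solr's API):

```python
import heapq
from collections import defaultdict

def fill_group_queues(hits, top_group_ords, group_limit, score_fn):
    """Ordinal-first second pass: hits is a list of (doc_id, group_ord) pairs.
    The cheap integer ordinal is checked before the costly score_fn call."""
    queues = defaultdict(list)
    for doc_id, group_ord in hits:
        if group_ord not in top_group_ords:   # cheap int lookup, no scoring
            continue
        score = score_fn(doc_id)              # costly work only for candidates
        heapq.heappush(queues[group_ord], (score, doc_id))
        if len(queues[group_ord]) > group_limit:
            heapq.heappop(queues[group_ord])  # drop the lowest-scoring doc
    return queues
```

Documents belonging to non-competitive groups are filtered out before any score is computed, which is the whole point of the reversal.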
This idea has not been implemented yet. Hopefully it will be done Real Soon Now, but no promises.
Ariadne hits its 20th birthday, and its 75th issue.
Back in 1994 the UK Electronic Libraries Programme (eLib) was set up by the JISC, paid for by the UK's funding councils. One of the many projects funded by eLib was an experimental magazine that could help document the changes under way and give the researchers working on eLib projects a means to communicate with one another and their user communities. That magazine was called Ariadne. Originally produced in both print and web versions, it outlived the project that gave birth to it. We are now at the point where we can celebrate 20 years of the web version of Ariadne. Read more about Editorial: Happy 20th Birthday Ariadne! (Issue 75, published Sun, 01/17/2016: http://www.lboro.ac.uk/issue75/editorial)
This may sound radical, but the fact is that FRBR does define some subtypes. They don't appear in the three high-level diagrams, so it isn't surprising that many people aren't aware of them. They are present, however, in the attributes. Here is the list of attributes for the FRBR work:
title of the work
form of work
date of the work
other distinguishing characteristic
context for the work
medium of performance (musical work)
numeric designation (musical work)
key (musical work)
coordinates (cartographic work)
equinox (cartographic work)

I've placed in italics those that are subtypes of work. There are two: musical work, and cartographic work. I would also suggest that "intended termination" could be considered a subtype of "continuing resource", but this is subtle and possibly debatable.
Other subtypes in FRBR are:
Expression: serial, musical notation, recorded sound, cartographic object, remote sensing image, graphic or projected image
Manifestation: printed book, hand-printed book, serial, sound recording, image, microform, visual projection, electronic resource, remote access electronic resource

These are the subtypes that are present in FRBR today, but because sub-typing probably was not fully explored, there are likely to be others.
Object-oriented design was a response to the need to extend a data model without breaking what is there. Adding a subtype should not interfere with the top-level type nor with other subtypes. It's a tricky act of design, but when executed well it allows you to satisfy the special needs that arise in the community while maintaining compatibility of the data.
Since we seem to respond well to pictures, let me provide this idea in pictures, keeping in mind that these are simple examples just to get the idea across.
The above picture models what is in FRBR today, although using the inheritance capability of OO rather than the E-R model where inheritance is not possible. Both musical work and cartographic work have all of the attributes of work, plus their own special attributes.
If it becomes necessary to add other attributes that are specific to a single type, then another sub-type is added. This new subtype does not interfere with any code that is making use of the elements of the super-type "work". It also does not alter what the music and maps librarians must be concerned with, since they are in their own "boxes." As an example, the audio-visual community did an analysis of BIBFRAME and concluded, among other things, that the placement of duration, sound content and color content in the BIBFRAME Instance entity would not serve their needs; instead, they need those elements at the work level.*
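The subtype idea can be made concrete with a small sketch. This is a hypothetical Python model with abbreviated attribute names, not an actual FRBR or BIBFRAME schema:

```python
from dataclasses import dataclass

@dataclass
class Work:
    title: str
    form: str = ""
    date: str = ""

@dataclass
class MusicalWork(Work):
    medium_of_performance: str = ""
    key: str = ""

@dataclass
class CartographicWork(Work):
    coordinates: str = ""
    equinox: str = ""

# Meeting the A-V community's work-level needs means adding one new
# subtype; Work, MusicalWork and CartographicWork are untouched.
@dataclass
class AudioVisualWork(Work):
    duration: str = ""
    sound_content: str = ""
    color_content: str = ""
```

Any code written against Work keeps working on every subtype, while each community's special attributes stay in its own "box".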
This just shows work, and I don't know how/if it could or should be applied to the entire WEMI thread. It's possible that an analysis of this nature would lead to a different view of the bibliographic entities. However, using types and sub-types, or classes and sub-classes (which would be the common solution in RDF) would be far superior to the E-R model of FRBR. If you've read my writings on FRBR you may know that I consider FRBR to be locked into an out-of-date technology, one that was already on the wane by 1990. Object-oriented modeling, which has long replaced E-R modeling, is now being eclipsed by RDF, but there would be no harm in making the step to OO, at least in our thinking, so that we can break out of what I think is a model so rigid that it is doomed to fail.
*This is an over-simplification of what the A-V community suggested, modified for my purposes here. However, what they do suggest would be served by a more flexible inheritance model than the model currently used in BIBFRAME.
Marieke Guy, Philip Hunter, John Kirriemuir, Jon Knight and Richard Waller look back at how Ariadne began 20 years ago as part of the UK Electronic Libraries Programme (eLib), how some of the other eLib projects influenced the web we have today and what changes have come, and may yet come, to affect how digital libraries work.
Ariadne is 20 years old this week and some members of the current editorial board thought it might be useful to look back at how it came to be, how digital library offerings have changed over the years, and maybe also peer into the near future. To do this, we’ve enlisted the help of several of the past editors of Ariadne who have marshalled their memories and crystal balls. Read more about FIGIT, eLib, Ariadne and the Future.
Marieke Guy, Philip Hunter, John Kirriemuir, Jon Knight, Richard Waller (Issue 75, published Sun, 01/17/2016: http://www.lboro.ac.uk/issue75/editorsreview)
There have been a number of workshops and presentations floating around that talk about ways of using MarcEdit and OpenRefine together when doing record editing. OpenRefine, for folks that might not be familiar, used to be known as Google Refine, and is a handy tool for working with messy data. While there is a lot of overlap between the types of edits available in MarcEdit and OpenRefine, the strength of OpenRefine is that it gives you access to your data via a tabular interface, making it easy to find variations in metadata, relationships, and patterns.
For most folks working with MarcEdit and OpenRefine together, the biggest challenge is moving the data back and forth. MARC binary data isn’t supported by OpenRefine, and MarcEdit’s mnemonic format isn’t well suited for import using OpenRefine’s import options either. And once the data has been put into OpenRefine, getting it back out and turned into MARC can be difficult for first-time users as well.
Because I’m a firm believer that users should use the tool that they are most comfortable with, I’ve been talking to a few OpenRefine users to think about how I could make the process of moving data between the two systems easier. To that end, I’ll be adding to MarcEdit a toolset that will facilitate the export and import of MARC (and MarcEdit’s mnemonic) data formats into formats that OpenRefine can parse and easily generate. I’ve implemented this functionality in two places: one as a standalone application found on the Main MarcEdit Window, and one as part of the MarcEditor, which will automatically convert or import data directly into the MarcEditor Window.
Exporting Data from MarcEdit
As noted above, there will be two methods of exporting data from MarcEdit into one of two formats for import into OpenRefine. Presently, MarcEdit supports generating either json or tab delimited format. These are two formats that OpenRefine can import to create a new project.
If I have a MARC file and I want to export it for use in OpenRefine, I would use the following steps:
- Open MarcEdit
- Select Tools/OpenRefine/Export from the menu
- Enter my Source File (either a marc or mnemonic file)
- My Save File – MarcEdit supports export in json or tsv (tab delimited)
- Select Process
This will generate a file that can be used for importing into OpenRefine. A couple of notes about that process: when importing via tab-delimited format, you will want to unselect the options that do number interpretation. I’d also uncheck the option to turn blanks into nulls, and make sure the option is selected that retains blank rows. These are useful on export and reimport into MarcEdit. When using json as the file format, you will want to make sure after import to order your columns as TAG, Indicators, Content. I’ve found OpenRefine will mix this order, even though the json data is structured in this order.
Once you’ve made the changes to your data – Select the export option in OpenRefine and select the export tab delimited option. This is the file format MarcEdit can turn back into either MARC or the mnemonic file format. Please note – I’d recommend always going back to the mnemonic file format until you are comfortable with the process to ensure that the import process worked like you expected.
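For readers who want to sanity-check the export outside of either tool, here is a small Python sketch assuming the three-column layout described above (TAG, Indicators, Content); the exact shape of MarcEdit's export may differ:

```python
import csv
import io

def load_marc_rows(tsv_text):
    """Parse a MarcEdit-style tab-delimited export into
    (tag, indicators, content) rows."""
    reader = csv.reader(io.StringIO(tsv_text), delimiter="\t")
    header = next(reader)  # expected: TAG, Indicators, Content
    return header, [tuple(row) for row in reader]

# Hypothetical one-field sample in the assumed layout:
sample = "TAG\tIndicators\tContent\n245\t10\tThe example title\n"
header, rows = load_marc_rows(sample)
```

Keeping everything as strings (no number interpretation) mirrors the import advice above: tags like 008 must not lose their leading zeros.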
And that’s it. I’ve recorded a video on YouTube walking through these steps – you can find it here:
This of course just shows how to move data between the two systems. If you want to learn more about how to work with the data once it’s in OpenRefine, I’d recommend one of the many excellent workshops that I’ve been seeing put on at conferences and via webinars by a wide range of talented metadata librarians.
The VIVO Committers Group. The VIVO project now has a committers group!
Emma Tonkin discusses how the words we use, and where we use them, change over time, and how this can cause issues for digital preservation.
'Now let's take this parsnip in.'
'Parsnip, coffee. Perrin, Wellbourne. What does it matter what we call things?'
– David Nobbs, The Fall And Rise of Reginald Perrin
Introduction

Read more about Lost Words, Lost Worlds.
Emma Tonkin (Issue 75, published Sat, 01/16/2016: http://www.lboro.ac.uk/issue75/tonkin)