Nicole Engard: Bookmarks for September 6, 2015

planet code4lib - Sun, 2015-09-06 20:30

Today I found the following resources and bookmarked them on Delicious.

  • Gimlet: Your library’s questions and answers, put to their best use. Know when your desk will be busy. Everyone on your staff can find answers to difficult questions.

Digest powered by RSS Digest

The post Bookmarks for September 6, 2015 appeared first on What I Learned Today....

Related posts:

  1. What makes a librarian?
  2. RSS
  3. Tech Savvy Staff and Patrons

Open Knowledge Foundation: Event Guide, 2015 Open Data Index

planet code4lib - Sun, 2015-09-06 18:46

Getting together at a public event can be a fun way to contribute to the 2015 Global Open Data Index. It can also be a great way to engage and organize people locally around open data. Here are some guidelines and tips for hosting an event in support of the 2015 Index and getting the most out of it.

Hosting an event around the Global Open Data Index is an excellent opportunity to spread the word about open data in your community and country, not to mention a chance to make a contribution to this year’s Index. Ideally, your event would focus broadly on open data themes, possibly even identifying the status of all 15 key datasets and completing the survey. Set a reasonable goal for yourself based on the audience you think you can attract. You may choose not to make a submission at your event at all and just discuss the state of open data in your country; that’s fine too.

It may make sense to host an event focused around one or more of the datasets. For instance, if you can organize people around government spending issues, host a party focused on the budget, spending, and procurement tender datasets. If you can organize people around environmental issues, focus on the pollutant emissions and water quality datasets. Choose whichever path you wish, but it’s good to establish a focused agenda and a clear set of goals and outcomes for any event you plan.

We believe the datasets included in the survey represent a solid baseline of open data for any nation and any citizenry; you should be prepared to make this case to the participants at your events. You don’t have to be an expert yourself, or even have topical experts on hand to discuss or contribute to the survey. Any group of interested and motivated citizens can contribute to a successful event. Meet people where they are, and help them understand why this work is important in your community and country. It will set a good tone for your event by helping participants realize they are part of a global effort and that the outcomes of their work will be a valuable national asset.

Ahmed Maawy, who hosted an event in Kenya around the 2014 Index, sums up the value of the Index with these key points that you can use to set the stage for your event:

  • It defines a benchmark to assess how healthy and helpful our open datasets are.
  • It allows us to make comparisons between different countries.
  • It allows us to assess what countries are doing right and what countries are doing wrong, and to learn from each other.
  • It provides a standard framework for identifying what we need to do, how to implement or make use of open data in our countries, and what we are strong or weak at.

What to do at an Open Data Index event

It’s great to start your event with an open discussion so you can gauge the experience in the room and how much time you should spend educating and discussing introductory materials. You might not even get around to making a contribution, and that’s okay. Introducing the Index in any way will put your group on the right path.

If you’re hosting an event with mostly newcomers, it’s always a good idea to look to the Open Definition and the Open Data Handbook for inspiration and basic information.

  • If your group is more experienced, everything you need to contribute to the survey can be found in this year’s Index contribution tutorial.
  • If you’re actively contributing at an event, we recommend splitting into teams, assigning one or more datasets to each group, and having them use the Tutorial as a guide. There can only be one submission per dataset, so make sure no two teams are working on the same one.
  • Pair more experienced people with less experienced people so teams can better rely on themselves to answer questions and solve problems.

More practical tips can be found at the 2015 Open Data Index Event Guide.

Photo credits: Ahmed Maawy

Ranti Junus: Phone’s cracked screen, replaced.

planet code4lib - Sat, 2015-09-05 23:56

I usually am quite careful when it comes to my phone: using a phone case, applying a screen protector, things like that. But I suppose accidents happen regardless. So, during the first week of August, I accidentally dropped a big screwdriver on the phone (don’t ask why) and heard a “crack” sound. Uugghh… my heart dropped when I saw the crack. Really bad.

The phone with the cracked screen. Looks scary.

Hoping the screen protector was strong enough to protect the touchscreen (after all, I used a tempered glass screen protector), I turned it on and, bummer, the touchscreen was completely borked. Fortunately, the hard drive was not affected, so the software worked fine. However, I could not interact with the apps, not even to shut down the phone. So, the only thing I could do was let the phone run until it ran out of battery and shut down on its own.

The software works just fine, but since the touch display is damaged, I cannot interact with it at all.

I checked the company’s website and their user forum, and found out one could send the phone back to the company in China and be charged $150 (apparently this kind of physical damage isn’t covered by the warranty) or spend about $50 on the screen/touch display and replace it oneself. Being the tinkerer I am, always wanting to see the guts of any electronic device, I decided to risk it and do the screen replacement myself. The downside: opening up the phone means voiding the warranty. But at this point, the warranty means little to me if I have to spend big bucks anyway to have the phone fixed. Besides, I am going to learn something new here. Worst-case scenario: I fail. But then I can always sell the phone for parts on eBay. So, nothing really to lose here. And I still have my Moto X as a backup phone.

YouTube provides various instructions on DIY phone screen replacement. I found two videos that really helped me to understand the ins and outs of replacing the screen.

The first video below nicely showed how to remove the damaged screen and put the replacement back. He showed which areas we need to pay attention to so we won’t damage the components.

The second video was created by a professional technician, so his method is very structured. The tools he used helped me figure out the tools I would need.

I basically watched those two videos probably a dozen times or so to make sure I didn’t miss anything (and, yes, I donated to their PayPal accounts as my thanks).

It took me a while to finally finish the screen replacement work. I removed the cracked screen first, and then had to wait about 3 weeks to receive the replacement screen. I just used the online store they recommended to get the parts I needed.

Below is a set of thumbnails with captions explaining my work. Each thumbnail is clickable to its original image.


  1. Phone with its cracked screen, ready to be worked on for screen replacement.

  2. The back of the phone. The SIM card is removed and the back cover is ready to be opened.

  3. The phone with the back cover removed. The battery occupies most of the section. There’s a white dot sticker on the top right corner covering one of the screws; removing that screw will void the warranty.

  4. The top part of the phone, which covers the hard disk, camera lens, and SIM card reader, is removed. There’s a white, square sticker on the top left corner. It will turn pink if the phone is exposed to moisture (dropped into a puddle of water, etc.).

  5. The bottom part of the phone is removed. It houses the USB port, the touch capacity, and the antenna.

  6. The battery is removed. It took me quite a while to work on this because the glue was so strong and I was worried I might bend the battery too much and damage it.

  7. All the components that needed to be removed have been removed: the hard disk, the main cable, and the touch capacity/USB port/antenna part. Looking good.

  8. The video instruction from ModzLink suggested using heat to loosen up the glue. Good thing I have a blow dryer with a nozzle that let me focus the hot air on certain sections of the screen. The guitar pick was used to tease out the glass once the surface was hot enough.

  9. It took me about 20 minutes to finally get the screen hot enough and the glue loosened up. By the way, I vacuumed the screen first to remove glass debris so the blow dryer wouldn’t blow it all over the place.

  10. I used the magnifying glass from my soldering station to make sure all glue and loose debris were gone.

  11. The replacement screen, on the left, finally arrived. Even though they said it’s an original screen, I’m not really sure, considering the original one has extra copper lines on the sides.

  12. The casing is clean, so all I needed to do was insert the replacement screen into it.

  13. Carefully putting the adhesive strips on the sides of the casing.

  14. New screen in place. I had to redo it because I forgot to put the speaker grill on top the first time.

  15. Added new adhesive strips so the battery would stick to them. Put the rest of the components back.

  16. Added a new tempered glass screen protector, put the SIM card back in, and turned on the phone.


Success. I got my favorite phone back.

It was scary the first time I worked on the phone, mostly because I didn’t want to break things. But I eventually felt comfortable dealing with the components and, should a similar thing happen again (knock on wood it won’t), I at least know what to do now.


Jonathan Rochkind: Memories of my discovery of the internet

planet code4lib - Sat, 2015-09-05 14:12

As I approach 40 years old, I find myself getting nostalgic and otherwise engaged in memories of my youth.

I began high school in 1989. I was already a computer nerd, beginning from when my parents sent me to a Logo class for kids sometime in middle school; I think we had an Apple IIGS at home then, with a 14.4 kbps modem. (Thanks Mom and Dad!).  Somewhere around the beginning of high school, maybe the year before, I discovered some local dial-up multi-user BBSs.

Probably from information on a BBS, somewhere around 1994, a friend and I discovered Michnet, a network of dial-up access points throughout the state of Michigan, funded, I believe, by the state department of education. Dialing up Michnet, without any authentication, gave you access to a gopher menu. It didn’t give you unfettered access to the internet, but just to what was on the menu — which included several options that would require Michigan higher ed logins to proceed, which I didn’t have. But also links to other gophers which would take you to yet other places without authentication. Including a public access unix system (which did not have outgoing network connectivity, but was a place you could learn unix and unix programming on your own), and ISCABBS. Over the next few years I spent quite a bit of time on ISCABBS, a bulletin board system with asynchronous message boards and a synchronous person-to-person chat system, which at that time routinely had several hundred simultaneous users online.

So I had discovered The Internet. I recall trying to explain it to my parents, and that it was going to be big; they didn’t entirely understand what I was explaining.

When visiting colleges to decide on one in my senior year, planning on majoring in CS, I recall asking at every college what the internet access was like there, if they had internet in dorm rooms, etc. Depending on who I was talking to, they may or may not have known what I was talking about. I do distinctly recall the chair of the CS department at the University of Chicago telling me “Internet in dorm rooms? Bah! The internet is nothing but a waste of time and a distraction of students from their studies, they’re talking about adding internet in dorm rooms but I don’t think they should! Stay away from it.” Ha. I did not enroll at the U of Chicago, although I don’t think that conversation was a major influence.

Entering college in 1993, in my freshman year in the CS computer lab, I recall looking over someone’s shoulder and seeing them looking at a museum web page in Mosaic — the workstations in the lab were unix X-windows systems of some kind, I forget what variety of unix. I had never heard of the web before. I was amazed; I interrupted them and asked “What is that?!?” They said “it’s the World Wide Web, duh.” I said “Wait, it’s got text AND graphics?!?” I knew this was going to be big. (I can’t recall the name of the fellow student a year or two ahead who first showed me the WWW, but I can recall her face. I do recall Karl Fogel, who was a couple years ahead of me and also in CS, kindly showing me things about the internet on other occasions. Karl has some memories of the CS computer lab culture at our college at the time here; I caught the tail end of that.)

Around 1995, the college IT department hired me as a student worker to create the first-ever experimental/prototype web site for the college. The IT director had also just realized that the web was going to be big, and while the rest of the university hadn’t caught on yet, he figured they should do some initial efforts in that direction. I don’t think CSS or JS existed yet then, or at any rate I didn’t use them for that website. I did learn SQL on that job. I don’t recall much about the website I developed, but I do recall one of the main features was an interactive campus map (probably using image maps). A year or two or three later, when they realized how important it was, the college Communications unit (i.e., advertising for the college) took over the website, and I think an easily accessible campus map disappeared not to return for many years.

So I’ve been developing for the web for 20 years!

Ironically (or not), some of my deepest nostalgia these days is for the pre-internet pre-cell-phone society; even most of my university career pre-dated cell phones. If you wanted to get in touch with someone, you called their dorm room, or maybe left a message on their answering machine. The internet, and then cell phones, eventually combining into smart phones, have changed our social existence truly immensely, and I often wonder these days if it’s been mostly for the better or not.


Ed Summers: Seminar Week 1

planet code4lib - Sat, 2015-09-05 01:59

These are some notes for the readings from my first Seminar class. It’s really just a test to see if my BibTeX/Jekyll/Pandoc integration is working. More about that in a future post hopefully…

(Shera, 1933) was written in the depths of the Great Depression … and it shows. There is a great deal of concern about fiscal waste in libraries and a strong push for centralization, in line with FDR’s New Deal. The paper sees increasing cultural homogenization and a blurring of the rural and the urban that hasn’t seemed to come to pass. His thoughts about the television apparatus at the elbow seem almost memex-like in their vision of the future. I must admit given all of what he gets wrong, I really like his idea of looking at the current state of our social situation and relations for the seeds of what tomorrow might look like. But at the same time I have trouble understanding how else you could meaningfully try to predict future trends. There is a tension between his desire for centralization of control, while allowing for decentralization, that seems quintessentially American.

(Taylor, 1962) muses about the nature of questions, how they progress in an almost Freudian way from the unconscious to a fully sublimated formal question of an information system. One thing that is particularly interesting is his formulation about how questions themselves are only fully understood in the context of an accepted answer. It’s almost as if the causal chain of question/answer is inverted, with the question being determined by the answer, and time running backwards. I know this is a flight of fancy on my part, but it seemed like a quirky and fun interpretation. The paper is deeply ironic because it opens up new vistas of future information science research by asking a lot of questions about questions. The method is admittedly rhetorical, and the paper is largely a philosophical meditation on how people with questions fit into information systems, rather than a methodological qualitative or quantitative study of some kind. It makes me wonder about the information system his questions are aimed at. Is scientific inquiry an information system? Also, perhaps this is heretical, but is there really such a thing as an information need? Don’t we have needs/desires for particular outcomes which information can help us realize: information as tool for achieving something, not as an object that is needed? I guess this could be considered a pragmatist critique of a particular strand of information science. I guess this would be a good place to invoke Maslow’s Hierarchy of Needs.

(Borko, 1968) attempts to define what information science is in the wake of the American Documentation Institute changing its name to the American Society for Information Science. He explicitly calls out the definition of Robert Taylor, who was instrumental in helping create the Internet at DARPA.

He summarizes information science as the interdisciplinary study of information behavior. It’s kind of strange to think of information behaving independent of humans isn’t it? Are we really studying the behavior of people as reflected in their information artifacts, or is the behavior of information really something that happens independent of people? This question makes me think of Object Oriented Ontology a bit. A key part of his definition is the feedback loop where the traditional library and archive professions apply the theories of information science, which in turn are informed by practice. This relationship between theory and practice is a significant dimension to his definition. It seems like perhaps today many of the disciplines he identified have been subsumed into computer science departments? But it seems information science has a way of tying different disciplines together that were previously siloed?

(Bush, 1945) is a classic in the field of computing, cited mostly for its prescience in anticipating the hyperlink and the World Wide Web. He is quite gifted at connecting scientific innovation with tools that are graspable by humans. One disquieting thing is the degree to which women, or as he calls them, “girls,” are made part of the machinery of computation. To what extent are people unwittingly made part of this machinery of war that Bush assembled in the form of the Manhattan Project? Who does this machinery serve? Does it inevitably serve those in power? If we fast forward to today, what machinery are we made part of by the transnational corporations that run our elections and deliver us our information? Can this information system resist the forms of tyranny that it was created by? Ok, enough crazy talk for now :-)

Borko, H. (1968). Information science: What is it? American Documentation, 3–5.

Bush, V. (1945). As we may think. The Atlantic.

Shera, J. H. (1933). Recent social trends and future library policy. Library Quarterly, 3, 339–353.

Taylor, R. S. (1962). The process of asking questions. American Documentation, 391–396.

Erin White: Back-to-school mobile snapshot

planet code4lib - Fri, 2015-09-04 19:40

This week I took a look at mobile phone usage on the VCU Libraries website for the first couple weeks of class and compared that to similar time periods from the past couple years.


Here’s some data from the first week of class through today.

Note that mobile is 9.2% of web traffic. Rounding the numbers, 58% of those devices are iPhones/iPods and 13% are iPads. So we’re looking at about 71% of mobile traffic (about 6.5% of all web traffic) coming from Apple devices. Dang. After that, it’s a bit of a long tail of other device types.

To give context, about 7.2% of our overall traffic came from the Firefox browser. So we have more mobile users than Firefox users.
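The arithmetic above can be double-checked with a quick sketch (a sanity check only, using just the rounded percentages quoted in this post):

```python
# Figures quoted above, as fractions of traffic.
mobile_share = 0.092        # mobile as a share of all web traffic
iphone_ipod_share = 0.58    # share of mobile devices that are iPhones/iPods
ipad_share = 0.13           # share of mobile devices that are iPads
firefox_share = 0.072       # Firefox as a share of all web traffic

apple_mobile = iphone_ipod_share + ipad_share   # Apple's share of mobile traffic
apple_overall = apple_mobile * mobile_share     # Apple mobile as a share of ALL traffic

print(round(apple_mobile, 2))        # 0.71 -> "about 71% of mobile traffic"
print(round(apple_overall, 3))       # 0.065 -> "about 6.5% of all web traffic"
print(mobile_share > firefox_share)  # True -> more mobile users than Firefox users
```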


Mobile jumped to 9% of all traffic this year. This is partially due to our retiring our mobile-only website in favor of a responsive web design. As with the other years, at least 2/3 of the mobile traffic comes from iOS devices.


Mobile was 4.7% of all traffic; iOS was 74% of mobile traffic; tablets, amazingly, were 32% of all mobile traffic.

I have one explanation for the relatively low traffic from iPhone: at the time, we had a separate mobile website that was catching a lot of traffic for handheld devices. Most phone users were being automatically redirected there.

Observations

Browser support

Nobody’s surprised that people are using their phones to access our sites. When we launched the new VCU Libraries website last January, the web team built it with a responsive web design that could accommodate browsers of many shapes and sizes. At the same time, we decided which desktop browsers to leave behind, like Internet Explorer 8 and below, which we stopped fully supporting when we launched the site. Looking at stats like this helps us figure out which devices to prioritize and test most with our design.

Types of devices

Though it’s impossible to test on every device, we have focused most of our mobile development on iOS devices, which seems to be a direction we should keep pursuing, as it catches a majority of our mobile users. It would also be useful for us to look at larger-screen Android devices, though (any takers?). With virtual testing platforms like BrowserStack at our disposal we can test on many types of devices. But we should also look at ways to test with real devices and real people.


Thinking broadly about strategy, making special mobile websites/m-dots doesn’t make sense anymore. People want the full functionality of the web, not an oversimplified version with only so-called “on-the-go” information. Five years ago when we debuted our mobile site, this might’ve been the case. Now people are doing everything with their phones, including writing short papers, according to our personas research from a couple years ago. So we should keep pushing to make everything usable no matter the screen.

District Dispatch: Library groups keep up fight for net neutrality

planet code4lib - Fri, 2015-09-04 14:55


Co-authored by Larra Clark and Kevin Maher

Library groups are again stepping to the front lines in the battle to preserve an open internet. The American Library Association (ALA), Association of College and Research Libraries (ACRL), Association of Research Libraries (ARL) and the Chief Officers of State Library Agencies (COSLA) have requested the right to file an amici curiae brief supporting the respondent in the case of United States Telecom Association (USTA) v. Federal Communications Commission (FCC) and United States of America. The brief would be filed in the US Court of Appeals for the District of Columbia Circuit—which also has decided two previous network neutrality legal challenges. ALA also is opposing efforts by Congressional Appropriators to defund FCC rules.

Legal brief to buttress FCC rules, highlight library values

The amici request builds on library and higher education advocacy throughout the last year supporting the development of strong, enforceable open internet rules by the FCC. As library groups, we decided to pursue our own separate legal brief to best support and buttress the FCC’s strong protections, complement the filings of other network neutrality advocates, and maintain visibility for the specific concerns of the library community. Each of the amici parties will have quite limited space to make its arguments (likely 4,000-4,500 words), so particular library concerns (rather than broad shared concerns related to free expression, for instance) are unlikely to be addressed by other filers and demand a separate voice. The FCC also adopted in its Order a standard that library and higher education groups specifically brought forward—a standard for future conduct that reflects the dynamic nature of the internet and internet innovation to extend protections against questionable practices on a case-by-case basis.

Based on conversations with FCC general counsel and lawyers with aligned advocates, we plan to focus our brief on supporting the future conduct standard (formally referenced starting on paragraph 133 in the Order as “no unreasonable interference or unreasonable disadvantage standard for internet conduct”) and why it is important to our community, re-emphasize the negative impact of paid prioritization for our community and our users if the bright-line rules adopted by the FCC are not sustained, and ultimately make our arguments through the lens of the library mission and promoting our research and learning activities.

As the library group motion states, we argue that FCC rules are “necessary to protect the mission and values of libraries and their patrons, particularly with respect to the rules prohibiting paid prioritization.” Also, the FCC’s general conduct standard is “an important tool in ensuring the open character of the Internet is preserved, allowing the Internet to continue to operate as a democratic platform for research, learning and the sharing of information.”

USTA and amici opposed to FCC rules filed their briefs July 30, and the FCC filing is due September 16. Briefs supporting the FCC must be filed by September 21.

Congress threatens to defund FCC rules

ALA also is working to oppose Republican moves to insert defunding language in appropriations bills that could effectively block the FCC from implementing its net neutrality order. Under language included in both the House and Senate versions of the Financial Services and General Government Appropriations Bill, the FCC would be prohibited from spending any funds towards implementing or enforcing its net neutrality rules during FY2016 until specified legal cases and appeals (see above!) are resolved. ALA staff and counsel have been meeting with Congressional leaders to oppose these measures.

The Obama Administration criticized the defunding move in a letter from Office of Management and Budget (OMB) Director Shaun Donovan stating, “The inclusion of these provisions threatens to undermine an orderly appropriations process.” While not explicitly threatening a Presidential veto, the letter raises concerns with appropriators’ attempts at “delaying or preventing implementation of the FCC’s net neutrality order, which creates a level playing field for innovation and provides important consumer protections on broadband service…”

Neither the House nor the Senate version of the funding measure has received floor consideration. The appropriations process faces a bumpy road in the coming weeks as House and Senate leaders seek to iron out differing funding approaches and thorny policy issues before the October 1 start of the new fiscal year. Congress will likely need to pass a short-term continuing resolution to keep the government open while discussions continue. House and Senate Republican leaders have indicated they will work to avoid a government shutdown. Stay tuned!

The post Library groups keep up fight for net neutrality appeared first on District Dispatch.

DPLA: DPLA Archival Description Working Group

planet code4lib - Fri, 2015-09-04 14:55

The Library, Archives, and Museum communities have many shared goals: to preserve the richness of our culture and history, to increase and share knowledge, to create a lasting record of human progress.

However, each of these communities approaches these goals in different ways. For example, description standards vary widely among these groups. The library typically adopts a 1:1 model where each item has its own descriptive record. Archives and special collections, on the other hand, usually describe materials in the aggregate as a collection. A single record, usually called a “finding aid,” is created for the entire collection. Only the very rare or special item typically warrants a description all its own. So the archival data model typically has one metadata record for many objects (or a 1:n ratio).

At DPLA, our metadata application profile and access platform have been centered on an item-centric library model for description: one metadata record for each individual digital object. While this method works well for most of the items in DPLA, it doesn’t translate to the way many archives are creating records for their digital objects. Instead, these institutions are applying an aggregate description to their objects.

Since DPLA works with organizations that use both the item-level and aggregation-based description practices, we need a way to support both. The Archival Description Working Group will help us get there.

The group will explore solutions to support varying approaches to digital object description and access and will produce a whitepaper outlining research and recommendations. While the whitepaper recommendations will be of particular use to DPLA or other large-scale aggregators, any data models or tools advanced by the group will be shared with the community for further development or adoption.

The group will include representatives from DPLA Hubs and Contributing Institutions, as well as national-level experts in digital object description and discovery. Several members of the working group have been invited to participate, but DPLA is looking for a few additional members to volunteer. As a member of the working group, active participation in conference calls is required, as well as a willingness to assist with research and writing.

If you are interested in being part of the Archival Description Working Group, please fill out the volunteer application form by 9/13/15. Three applicants will be chosen to be a part of the working group, and others will be asked to be the first reviewers of the whitepaper and any deliverables. An announcement of the full group membership will be made by the end of the month.

LITA: 3D Printing Partnerships: Tales Of Collaboration, Prototyping, And Just Plain Panic

planet code4lib - Fri, 2015-09-04 14:00


*Photo from Flickr, used under a CC Attribution license.

Many institutions have seen the rise of makerspaces within their libraries, but it’s still difficult to get a sense of how embedded they truly are within the academic fabric of their campuses and how they contribute to student learning. Libraries have undergone significant changes in the last five years, shifting from repositories to learning spaces, from places to experiences. It is within these new directions that the makerspace movement has risen to the forefront and begun to pave the way for truly transformative thinking and doing. Educause defines a makerspace as “a physical location where people gather to share resources and knowledge, work on projects, network, and build” (ELI 2013). These types of spaces are being embraced by the arts as well as the sciences and are quickly being adopted by the academic community because “much of the value of a makerspace lies in its informal character and its appeal to the spirit of invention” as students take control of their own learning (ELI 2013).

Nowhere is this spirit more alive than in entrepreneurship where creativity and innovation are the norm. The Oklahoma State University Library recently established a formal partnership with the School of Entrepreneurship to embed 3D printing into two pilot sections of its EEE 3023 course with the idea that if successful, all sections of this course would include a making component that could involve more advanced equipment down the road. Students in this class work in teams to develop an original product from idea, to design, to marketing. The library provides training on coordination of the design process, use of the equipment, and technical assistance for each team. In addition, this partnership includes outreach activities such as featuring the printers at entrepreneurship career fairs, startup weekends and poster pitch sessions. We have not yet started working with the classes, so much of this will likely change as we learn from our mistakes and apply what worked well to future iterations of this project.

This is all well and good, but how did we arrive at this stage of the process? The library first approached the School of Entrepreneurship with an idea for collaboration, but as we discovered, simply saying we wanted to partner would not be enough. We didn’t have a clear idea in mind, and the discussions ended without a concrete action plan. Fast forward to the summer, when the library was approached and asked about something that had been mentioned in that earlier meeting: a makerspace. Were we interested in splitting the cost and piloting a project with a course? The answer was a resounding yes.

We quickly met several times to discuss exactly what we meant by “makerspace”, and we decided that 3D printing would be a good place to start. We drafted an outline of the equipment needed: three MakerBot Replicator 5th generation printers and one larger Z18, along with the accompanying accessories and warranties. This list was based on the collective experiences of the group, along with a few quick website searches to establish what other institutions were doing.

Next, we turned our attention to the curriculum. While creating learning outcomes for making is certainly part of the equation, we had a very short time frame to get this done, so we opted for two sets of workshops for students, with homework in between, culminating in a certification that enables them to work on their product. The first workshop will walk them through using Blender to create an original design at a basic level; the second is designed to have them try out the printers themselves. In between workshops, they will watch videos and have access to a book to help them learn as they go. For the certification at the end, each team will come in and print something (small) on their own, after which they will be cleared to work on their own products. Drop-in assistance as well as consultation assistance will also be available, and we are determining the best way to queue requests as they come in, knowing that some jobs might print overnight while others may come in at the very last minute.

Although, as mentioned, we have just started on this project, we’ve already learned several valuable lessons worth sharing. They may sound obvious, but they are still important to highlight:

  1. Be flexible! Nothing spells disaster like a rigid plan that cannot be changed at the last minute. We wanted a website for the project, but we didn’t have time to create one. We had to wait until we received the printers to train ourselves on how they worked so that we could turn around and train the students. We are adapting as we go!
  2. Start small. Even two sections are proving to be a challenge with 40+ students all descending on a small space with limited printers. We hope they won’t come to blows, but we may have to play referee as much as consultant. There are well over 30 sections of this course that will present a much bigger challenge should we decide to incorporate this model into all of them.
  3. Have a plan in place, even if you end up changing it. We are now realizing that there are three main components to this collaboration, all of which need a point person and support structure: tech support, curriculum, and outreach. Four separate departments in the library (Research and Learning Services, Access Services, Communications, and IT) are working together to make this a successful experience for all involved, not to mention our external partners.

Oh yes, and there’s the nagging thought at the end of each day-please, please, let this work. Fingers crossed!

Hydra Project: ActiveFedora 9.4.0 released

planet code4lib - Fri, 2015-09-04 09:38

We are pleased to announce the release of ActiveFedora 9.4.0.

This release adds hash URIs for sub resources, stops using InboundRelationConnection for speed, and refactors some existing code.

Release notes can be found here:

SearchHub: Using Thoth as a Real-Time Solr Monitor and Search Analysis Engine

planet code4lib - Fri, 2015-09-04 08:00
As we count down to the annual Lucene/Solr Revolution conference in Austin this October, we’re highlighting talks and sessions from past conferences. Today, we’re highlighting Damiano Braga and Praneet Mhatre’s session on how Trulia uses Thoth and Solr for real-time monitoring and analysis. Managing a large and diversified Solr search infrastructure can be challenging, and there is still a lack of good tools that can help monitor the entire system and help the scaling process. This session will cover Thoth: an open source real-time Solr monitor and search analysis engine that we wrote and currently use at Trulia. We will talk about how Thoth was designed, why we chose Solr to analyze Solr, and the challenges that we encountered while building and scaling the system. Then we will talk about some useful Thoth features, like integration with Apache ActiveMQ and Nagios for real-time paging; generation of reports on query volume, latency, and time period comparisons; and the Thoth dashboard. Following that, we will summarize our application of machine learning algorithms, and its results, to the process of query analysis and pattern recognition. Then we will talk about the future directions of Thoth, opportunities to expand the project with new plug-ins, and integration with SolrCloud.

Damiano is part of the search team at Trulia, where he also helps manage the search infrastructure and create internal tools to help the scaling process. Prior to Trulia, he studied and worked for the University of Ferrara (Italy), where he completed his Master’s degree in Computer Science Engineering. Praneet works as a Data Mining Engineer on Trulia’s Algorithms team. He works on property data handling algorithms, stats and trends generation, comparable homes, and other data-driven projects at Trulia. Before Trulia, he got his Bachelor’s degree in Computer Engineering from VJTI, India and his Master’s in Computer Science from the University of California, Irvine.
Thoth – Real-time Solr Monitor and Search Analysis Engine: Presented by Damiano Braga & Praneet Mhatre, Trulia from Lucidworks Join us at Lucene/Solr Revolution 2015, the biggest open source conference dedicated to Apache Lucene/Solr on October 13-16, 2015 in Austin, Texas. Come meet and network with the thought leaders building and deploying Lucene/Solr open source search technology. Full details and registration…

The post Using Thoth as a Real-Time Solr Monitor and Search Analysis Engine appeared first on Lucidworks.

SearchHub: Lucene Revolution Presents, Inside Austin(‘s) City Limits: Stump The Chump!

planet code4lib - Fri, 2015-09-04 00:35

It’s that time of year again folks…

Six weeks from today, Stump The Chump will be coming to Austin Texas at Lucene/Solr Revolution 2015.

If you are not familiar with “Stump the Chump” it’s a Q&A style session where “The Chump” (that’s me) is put on the spot with tough, challenging, unusual questions about Solr & Lucene — live, on stage, in front of hundreds of rowdy convention goers, with judges (who have all had a chance to review and think about the questions in advance) taking the opportunity to mock The Chump (still me) and award prizes to people whose questions do the best job of “Stumping The Chump”.

If that sounds kind of insane, it’s because it kind of is.

You can see for yourself by checking out the videos from past events like Lucene/Solr Revolution Dublin 2013 and Lucene/Solr Revolution 2013 in San Diego, CA. (Unfortunately no video of Stump The Chump is available from Lucene/Solr Revolution 2014: D.C. due to audio problems.)

Information on how to submit questions is available on the conference website.

I’ll be posting more details as we get closer to the conference, but until then you can subscribe to this blog (or just the “Chump” tag) to stay informed.

The post Lucene Revolution Presents, Inside Austin(‘s) City Limits: Stump The Chump! appeared first on Lucidworks.

Jonathan Rochkind: bento_search 1.4 released

planet code4lib - Thu, 2015-09-03 19:36

bento_search is a ruby gem that provides standardized ruby API and other support for querying external search engines with HTTP API’s, retrieving results, and displaying them in Rails. It’s focused on search engines that return scholarly articles or citations.

I just released version 1.4.

The main new feature is a round-trippable JSON serialization of any BentoSearch::Results or Items. This serialization captures internal state, suitable for a round-trip, such that if you’ve changed configuration related to an engine between dump and load, you get the new configuration after load.  Its main use case is a consumer that is also ruby software using bento_search. It is not really suitable for use as an API for external clients, since it doesn’t capture full semantics, but just internal state sufficient to restore to a ruby object with full semantics. (bento_search does already provide a tool that supports an Atom serialization intended for external client API use.)

It’s interesting that once you start getting into serialization, you realize there’s no one true serialization; it depends on the use cases. I needed a serialization that really was just of internal state, for a round trip back to ruby.
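bento_search itself is Ruby, but the round-trip idea (serialize only internal state, then re-resolve engine configuration at load time) can be sketched in Python with entirely hypothetical class and field names:

```python
import json

class SearchItem:
    """Toy stand-in for a result item; bento_search's real classes differ."""
    def __init__(self, engine_id, title, authors):
        self.engine_id = engine_id   # which engine configuration produced this item
        self.title = title
        self.authors = authors

    def dump(self):
        # Serialize only internal state, not engine configuration, so a
        # later load picks up whatever configuration is current.
        return json.dumps({"engine_id": self.engine_id,
                           "title": self.title,
                           "authors": self.authors})

    @classmethod
    def load(cls, blob, engine_registry):
        state = json.loads(blob)
        item = cls(state["engine_id"], state["title"], state["authors"])
        # Configuration is re-resolved at load time, not stored in the dump.
        item.display_config = engine_registry[state["engine_id"]]
        return item

registry = {"scopus": {"link_target": "_blank"}}
blob = SearchItem("scopus", "On Serialization", ["Rochkind, J."]).dump()
restored = SearchItem.load(blob, registry)
print(restored.title)  # -> On Serialization
```

If the registry entry for "scopus" changes between dump and load, the restored item reflects the new configuration, which mirrors the behavior described above.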

bento_search 1.4 also includes some improvements to make the specialty JournalTocsForJournal adapter a bit more robust. I am working on an implementation of JournalTocs fetching that needed the JSON round-trippable serialization too, for an Umlaut plug-in. Stay tuned.

Filed under: General

Harvard Library Innovation Lab: Link roundup September 3, 2015

planet code4lib - Thu, 2015-09-03 19:10

Goodbye summer

You can now buy Star Wars’ adorable BB-8 droid and let it patrol your home | The Verge

If only overdue fines could be put toward a BB-8 to cruise around every library.

World Airports Voronoi

I want a World Airports Library map.

Stephen Colbert on Making The Late Show His Own | GQ

Amazing, deep interview with Stephen Colbert

See What Happens When Competing Brands Swap Colors | Mental Floss

See competing brands swap logo colors

The Website MLB Couldn’t Buy

Major League Baseball’s worked hard to buy team domains. They don’t own this one, though. It’s owned by two humans.

Zotero: Studying the Altmetrics of Zotero Data

planet code4lib - Thu, 2015-09-03 18:26

In April of last year, we announced a partnership with the University of Montreal and Indiana University, funded by a grant from the Alfred P. Sloan Foundation, to examine the readership of reference sources across a range of platforms and to expand the Zotero API to enable bibliometric research on Zotero data.

The first part of this grant involved aggregating anonymized data from Zotero libraries. The initial dataset was limited to items with DOIs, and it included library counts and the months that items were added. For items in public libraries, the data also included titles, creators, and years, as well as links to the public libraries containing the items. We have been analyzing this anonymized, aggregated data with our research partners in Montreal, and now are beginning the process of making that data freely and publicly available, beginning with Impactstory and Altmetric, who have offered to conduct preliminary analysis (we’ll discuss Impactstory’s experience in a future post).

In our correspondence with Altmetric over the years, they have repeatedly shown interest in Zotero data, and we reached out to them to see if they would partner with us in examining the data. The Altmetric team that analyzed the data consists of about twenty people with backgrounds in English literature and computer science, including former researchers and librarians. Altmetric is interested in any communication that involves the use or spread of research outputs, so in addition to analyzing the initial dataset, they’re eager to add the upcoming API to their workflow.

The Altmetric team parsed the aggregated data and checked it against the set of documents known to have been mentioned or saved elsewhere, such as on blogs and social media. Their analysis revealed that approximately 60% of the items in their database that had been mentioned in at least one other place, such as on social media or news sites, had at least one save in Zotero. The Altmetric team was pleased to find such high coverage, which points to the diversity of Zotero usage, though further research will be needed to determine the distribution of items across disciplines.

The next step forward for the Altmetric team involves applying the data to other projects and tools such as the Altmetric bookmarklet. The data will be useful in understanding the impact of scholarly communication, because conjectures about reference manager data can be confirmed or denied, and this information can be studied in order to gain a greater comprehension of what such data represents and the best ways to interpret it.

Based on this initial collaboration, Zotero developers are verifying and refining the aggregation process in preparation for the release of a public API and dataset of anonymized, aggregated data, which will allow bibliometric data to be highlighted across the Zotero ecosystem and enable other researchers to study the readership of Zotero data.

Thom Hickey: Matching names to VIAF

planet code4lib - Thu, 2015-09-03 18:11

The Virtual International Authority File (VIAF) currently has about 28 million entities created by a merge of three dozen authority files from around the world.  Here at OCLC we are finding it very useful in controlling names in records.  In the linked data world, ‘controlling’ increasingly means assigning URIs (or at least identifiers that can easily be converted to URIs) to the entities.  Because of ambiguities in VIAF and the bibliographic records we are matching it to, the process is a bit more complicated than you might imagine. In fact, our first naive attempts at matching were barely usable.  Since we know others are attempting to match VIAF to their files, we thought a description of how we go about it would be welcome (of course, if your file consists of bibliographic records and they are already in WorldCat, then we’ve already done the matching).  While a number of people have been involved in refining this process, most of the analysis and code were done by Jenny Toves here in OCLC Research over the last few years.

First some numbers: The 28 million entities in VIAF were derived from 53 million source records and 111 million bibliographic records. Although we do matching to other entities in VIAF, this post is about matching against VIAF's 24 million corporate and personal entities.  The file we are matching it to (WorldCat) consists of about 400 million bibliographic records (at least nominally in MARC-21), each of which have been assigned a work identifier before the matching described below. Of the 430 million names in author/contributor (1XX/7XX) fields in WorldCat we are able to match 356 million (or 83%).  If those headings were weighted by how many holdings are associated with them, the percentage controlled would be even higher, as names in the more popular records are more likely to have been subjected to authority control somewhere in the world.

It is important to understand the issues raised when pulling together the source files that VIAF is based on.  While we claim that better than 99% of the 54 million links that VIAF makes between source records are correct, that does not mean that the resulting clusters are 99% perfect.  In fact, many of the more common entities represented in VIAF will have not only a ‘main’ VIAF cluster, but one or more smaller clusters derived from authority records that we were unable to bring into the main cluster because of missing, duplicated or ambiguous information.  Another thing to keep in mind is that any relatively common name that has one or more famous people associated with it can be expected to have some misattributed titles (this is true for even the most carefully curated authority files of any size).

WorldCat has many headings with subfield 0's ($0s) that associate an identifier with the heading. This is very common in records loaded into WorldCat by some national libraries, such as French and German, so one of the first things we do in our matching is look for identifiers in $0's which can be mapped to VIAF.  When those mappings are unambiguous we use that VIAF identifier and are done.

The rest of this post is a description of what we do with the names that do not already have a usable identifier associated with them.  The main difficulties arise when there either are multiple VIAF clusters that look like good matches or we lack enough information to make a good match (e.g. no title or date match).  Since a poor link is often worse than no link at all, we do not make a link unless we are reasonably confident of it.

First we extract information about each name of interest in each of the bibliographic records:

  • Normalized name key:
    • Extract subfields a,q and j
    • Expand $a with $q when appropriate
    • Perform enhanced NACO normalization on the name
  • $b, $c's, $d, $0's, LCCNs, DDC class numbers, titles, language of cataloging, work identifier

The normalized name key does not include the dates ($d) because they are often not included in the headings in bibliographic records. The $b and $c are so variable, especially across languages, that they are also ignored at this point.  The goal is to have a key that will bring together variant forms of the name without pulling in too many different entities. After preliminary matching we do matching with more precision, and $b, $c and $d are used for that.
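A rough sketch of such a key-building routine, assuming a much simplified NACO-style normalization (the production routine is more elaborate):

```python
import re
import unicodedata

def normalized_name_key(subfield_a, subfield_q=None, subfield_j=None):
    """Build a loose match key from a personal-name heading.

    Loosely modeled on the description above (subfields a/q/j, NACO-style
    normalization); the real OCLC routine handles many more cases.
    """
    name = subfield_a
    # Expand $a with $q when the fuller form adds information,
    # e.g. "Smith, J." with $q "(John)".
    if subfield_q and subfield_q.strip("() ").lower() not in name.lower():
        name = f"{name} {subfield_q.strip('() ')}"
    if subfield_j:
        name = f"{name} {subfield_j}"
    # Approximate NACO normalization: strip diacritics and punctuation,
    # collapse whitespace, lowercase.
    name = unicodedata.normalize("NFKD", name)
    name = "".join(c for c in name if not unicodedata.combining(c))
    name = re.sub(r"[^\w\s]", " ", name)
    return re.sub(r"\s+", " ", name).strip().lower()

print(normalized_name_key("Dvořák, Antonín"))  # -> dvorak antonin
```

Keys this loose deliberately conflate dates and titles of address; the later scoring stages are what keep distinct entities apart.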

Similar normalized name keys are generated from the names in VIAF clusters.

When evaluating matches we have a routine that scores the match based on criteria about the names:

  • Start out with '0'
    • A negative value implies the names do not match
    • A 0 implies the names are compatible (nothing to indicate they can't represent the same entity), but nothing beyond that
    • Increasing positive values imply increasing confidence in the match
  • -1 if dates conflict*
  • +1 if a begin or end date matches
  • +1 if both begin and end dates match
  • +1 if begin and end dates are birth and death dates (as opposed to circa or flourished)
  • +1 if there is at least one title match
  • +1 if there is at least one LCCN match
  • -3 if $b's do not match
  • +1 if $c's match
  • +1 if DDCs match
  • +1 if the match is against a preferred form

Here are the stages we go through.  At each stage proceed to the next if the criteria are not met:

  • If only one VIAF cluster has the normalized name from the bibliographic record, use that VIAF identifier
  • Collapse bibliographic information based on the associated work identifiers so that they can share name dates, $b and $c, LCCN, DDC
    • Try to detect fathers/sons in same bibliographic record so that we don’t link them to the same VIAF cluster
  • If a single best VIAF cluster (better than all others) exists – use it
    • Uses dates, $b, $c, titles, preferred form of name to determine best match as described above
  • Try the previous rule again adding LCC and DDC class numbers in addition to the other match points (as matches were made in the previous step, data was collected to make this easier)
    • If there is a single best candidate, use it
    • If more than one best candidate – sort candidate clusters based on the number of source records in the clusters. If there is one cluster that has 5 or more sources and the next largest cluster has 2 or less sources, use the larger cluster
  • Consider clusters where the names are compatible, but not exact name matches
    • Candidate clusters include those where dates and/or enumeration do not exist either in the bibliographic record or the cluster
    • Select the cluster based on the number of sources as described above
  • If only one cluster has an LC authority record in it, use that one
  • No link is made
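The staged cascade above can be condensed into a single function that falls through the stages in order. Keys and helper names here are assumptions for illustration; `score_fn` stands in for the compatibility scorer described earlier, and several intermediate stages are folded together:

```python
def link_heading(bib, candidates, score_fn):
    """Condensed sketch of the linking cascade described above.

    `candidates` are the VIAF clusters sharing the heading's normalized
    name key; dict keys are illustrative.
    """
    if not candidates:
        return None
    # Stage 1: a unique cluster for this normalized name wins outright.
    if len(candidates) == 1:
        return candidates[0]["viaf_id"]
    # Stage 2: a single best-scoring cluster (strictly better than the rest).
    scored = sorted(candidates, key=lambda c: score_fn(bib, c), reverse=True)
    if score_fn(bib, scored[0]) > score_fn(bib, scored[1]):
        return scored[0]["viaf_id"]
    # Tie-break on cluster size: one cluster with 5+ source records beats
    # a runner-up with 2 or fewer.
    by_size = sorted(candidates, key=lambda c: c["n_sources"], reverse=True)
    if by_size[0]["n_sources"] >= 5 and by_size[1]["n_sources"] <= 2:
        return by_size[0]["viaf_id"]
    # Last resort: a lone cluster containing an LC authority record.
    with_lc = [c for c in candidates if c.get("has_lc_authority")]
    if len(with_lc) == 1:
        return with_lc[0]["viaf_id"]
    return None  # no link is made
```

The important property is that every stage prefers "no link" over an ambiguous link, matching the principle that a poor link is worse than none.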

Fuzzy Title Matching

Since this process is mainly about matching names, and titles are used only to resolve ambiguity, the process described here depends on a separate title matching process.  As part of OCLC’s FRBR matching (which happens after the name matching described here) we pull bibliographic records into work clusters, and each bibliographic record in WorldCat has a work identifier associated with it based on these clusters.  Once we can associate a work identifier with a VIAF identifier, that work identifier can be used to pull in otherwise ambiguous missed matches on a name.  Here is a simple example:

Record 1:

    Author: Smith, John

    Title: Title with work ID #1

Record 2:

    Author: Smith, John

    Title: Another title with work ID #1

Record 3:

    Author: Smith, John

    Title: Title with work ID #2

In this case, if we were able to associate the John Smith in record #1 to a VIAF identifier, we could also assign the same VIAF identifier to the John Smith in record #2 (even though we do not have a direct match on title), but not to the author of record #3. This lets us use all the variant titles we have associated with a work to help sort out the author/contributor names.

Of course this is not perfect.  There could be two different John Smiths associated with a work (e.g. father and son), so occasionally titles (even those that appear to be properly grouped in a work) can lead us astray.
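The work-identifier trick above amounts to grouping headings by their (name, work) pair. A toy sketch, with illustrative record and identifier values:

```python
from collections import defaultdict

def propagate_by_work(records, known_links):
    """Group headings by (name, work identifier) and spread known links.

    `records` are (record_id, name, work_id) tuples; `known_links` maps
    (name, work_id) -> VIAF identifier for headings already resolved by
    direct matching. Purely illustrative of the idea described above.
    """
    resolved = defaultdict(list)
    for record_id, name, work_id in records:
        # Any record sharing a resolved (name, work) pair inherits its link,
        # even when its title differs; unmatched pairs land under None.
        resolved[known_links.get((name, work_id))].append(record_id)
    return dict(resolved)

records = [(1, "Smith, John", "w1"),   # link established by direct title match
           (2, "Smith, John", "w1"),   # different title, same work: inherits link
           (3, "Smith, John", "w2")]   # different work: stays unlinked
print(propagate_by_work(records, {("Smith, John", "w1"): "viaf:123"}))
# -> {'viaf:123': [1, 2], None: [3]}
```

Record 2 picks up the link through the shared work identifier despite having no direct title match, while record 3 correctly stays unresolved.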

That's a sketch of how the name matching process operates.  Currently WorldCat is updated with this information once per month and it is visible in the various linked data views of WorldCat.

--Th & JT

*If you want to understand more about how dates are processed, our code{4}lib article about Parsing and Matching Dates in VIAF describes that in detail.

Library of Congress: The Signal: Seeking Comment on Migration Checklist

planet code4lib - Thu, 2015-09-03 15:39

The NDSA Infrastructure Working Group’s goals are to identify and share emerging practices around the development and maintenance of tools and systems for the curation, preservation, storage, hosting, migration, and similar activities supporting the long term preservation of digital content. One of the ways the IWG strives to achieve their goals is to collaboratively develop and publish technical guidance documents about core digital preservation activities. The NDSA Levels of Digital Preservation and the Fixity document are examples of this.

Birds. Ducks in pen. (Photo by Theodor Horydczak, 1920) (Source: Horydczak Collection, Library of Congress Prints and Photographs Division)

The latest addition to this guidance is a migration checklist. The IWG would like to share a draft of the checklist with the larger community in order to gather comments and feedback that will ultimately make this a better and more useful document. We expect to formally publish a version of this checklist later in the Fall, so please review the draft below and let us know by October 15, 2015 in the comments below or in email via ndsa at loc dot gov if you have anything to add that will improve the checklist.

Thanks, in advance, from your IWG co-chairs Sibyl Schaefer from University of California, San Diego, Nick Krabbenhoeft from Educopia and Abbey Potter from Library of Congress. Another thank you to the former IWG co-chairs Trevor Owens from IMLS and Karen Cariani from WGBH, who led the work to initially develop this checklist.

Good Migrations: A Checklist for Moving from One Digital Preservation Stack to Another

The goal of this document is to provide a checklist for things you will want to do or think through before and after moving digital materials and metadata forward to new digital preservation systems/infrastructures. This could entail switching from one system to another in your digital preservation and storage architecture (various layers of hardware, software, databases, etc.). This is a relatively expansive notion of system. In some cases, organizations have adopted turn-key solutions whereby the requirements for ensuring long term access to digital objects are taken care of by a single system or application. However, in many cases, organizations make use of a range of built and bought applications and interfaces to storage media that collectively serve the function of a preservation system. This document is intended to be useful both for migrations between comprehensive systems and for situations where one is swapping out individual components in a larger preservation system architecture.

Issues around normalization of data or of moving content or metadata from one format to another are out of scope for this document. This document is strictly focused on checking through issues related to moving fixed digital materials and metadata forward to new systems/infrastructures.

Before you Move:

  1. Review the state of data in the current system, clean up any data inconsistencies or issues that are likely to create problems on migration and identify and document key information (database naming conventions, nuances and idiosyncrasies in system/data structures, use metrics, etc.).
  2. Make sure you have fixity information for your objects and make sure you have a plan for how to bring that fixity information over into your new system. Note, that different systems may use different algorithms/instruments for documenting fixity information so check to make sure you are comparing the same kinds of outputs.
  3. Make sure you know where all your metadata/records for your objects are stored and that, if you are moving that information, you have plans in place to ensure its integrity.
  4. Check/validate additional copies of your content stored in other systems; you may need to rely on some of those copies for repair if you run into migration issues.
  5. Identify any dependent systems using API calls into your system or other interfaces which will need to be updated and make plans to update, retire, or otherwise notify users of changes.
  6. Document feature parity and differences between the new and old system and make plans to change/revise and refine workflows and processes.
  7. Develop new documentation and/or training for users to transition from the old to the new system.
  8. Notify users of the date and time the system will be down and not accepting new records or objects. If the process will take some time, provide users with a plan for expectations on what level of service will be provided at what point and take the necessary steps to protect the data you are moving forward during that downtime.
  9. Have a place/plan on where to put items that need ingestion while doing the migration.  You may not be able to tell people to just stop and wait.
  10. Decide on what to do with your old storage media/systems. You might want to keep them for a period just in case, reuse them for some other purpose or destroy them. In any event it should be a deliberate, documented decision.
  11. Create documentation recording what you did and how you approached the migration (any issues, failures, or issues that arose) to provide provenance information about the migration of the materials.
  12. Test the migration workflow to make sure it works – both single records and bulk batches of varying sizes – to see if there are any issues.

After you Migrate

  1. Check your fixity information to ensure that your new system has all your objects intact.
  2. If any objects did not come across correctly, as identified by comparing fixity values, then repair or replace the objects via copies in other systems. Ideally, log this kind of information as events for your records.
  3. Check to make sure all your metadata has come across, spot check to make sure it hasn’t been mangled.
  4. Notify your users of the change and again provide them with new or revised user documentation.
  5. Record what is done with the old storage media/systems after migration.
  6. Assemble all documentation generated and keep with other system information for future migrations.
  7. Establish timeline and process for reevaluating when future migrations should be planned for (if relevant).

Relevant resources and tools:

This post was updated 9/3/2015 to fix formatting and add email information.

District Dispatch: Last chance to support libraries at SXSW

planet code4lib - Thu, 2015-09-03 14:34

From Flickr

A couple of weeks ago, the ALA Washington Office urged support for library programs at South by Southwest (SXSW). The library community’s footprint at this annual set of conferences and activities has expanded in recent years, and we must keep this trend going! Now is your last chance to do your part, as public voting on panel proposals will end at 11:59 pm (CDT) this Friday, September 4th [Update: Now Monday, September 7th]. SXSW received more than 4,000 submissions this year—an all-time record—so we need your help more than ever to make library community submissions stand out. You can read about, comment on, and vote for, the full slate of proposed panels involving the Washington Office here.

Also, the SXSW library “team” that connects through the lib*interactive Facebook group and #liblove has compiled a list of library programs that have been proposed for all four SXSW gatherings. Please show your support for all of them. Thanks!

The post Last chance to support libraries at SXSW appeared first on District Dispatch.

LITA: Get Involved in the National Digital Platform for Libraries

planet code4lib - Thu, 2015-09-03 13:00

Editor’s note: This is a guest post by Emily Reynolds and Trevor Owens.

Recently IMLS has increased its focus on funding digital library projects through the lens of our National Digital Platform strategic priority area. The National Digital Platform is the combination of software applications, social and technical infrastructure, and staff expertise that provides library content and services to all users in the U.S. In other words, it’s the work many LITA members are already doing!

Participants at IMLS Focus: The National Digital Platform

As libraries increasingly use digital infrastructure to provide access to digital content and resources, there are more and more opportunities for collaboration around the tools and services that they use to meet their users’ needs. It is possible for each library in the country to leverage and benefit from the work of other libraries in shared digital services, systems, and infrastructure. We’re looking at ways to maximize the impact of our funds by encouraging collaboration, interoperability, and staff training. We are excited to have this chance to engage with and invite participation from the librarians involved in LITA in helping to develop and sustain this national digital platform for libraries.

National Digital Platform convening report

Earlier this year, IMLS held a meeting at the DC Public Library to convene stakeholders from across the country to identify opportunities and gaps in existing digital library infrastructure nationwide. Recordings of those sessions are now available online, as is a summary report published by OCLC Research. Key themes include:


Engaging, Mobilizing and Connecting Communities

  • Engaging users in national digital platform projects through crowdsourcing and other approaches
  • Establishing radical and systematic collaborations across sectors of the library, archives, and museum communities, as well as with other allied institutions
  • Championing diversity and inclusion by ensuring that the national digital platform serves and represents a wide range of communities

Establishing and Refining Tools and Infrastructure

  • Leveraging linked open data to connect content across institutions and amplify impact
  • Focusing on documentation and system interoperability across digital library software projects
  • Researching and developing tools and services that leverage computational methods to increase accessibility and scale practice across individual projects

Cultivating the Digital Library Workforce

  • Shifting to continuous professional learning as part of library professional practice
  • Focusing on hands-on training to develop computational literacy in formal library education programs
  • Educating librarians and archivists to meet the emerging digital needs of libraries and archives, including cross-training in technical and other skills

We’re looking to support these areas of work with the IMLS grant programs available to library applicants.

IMLS Funding Opportunities

IMLS has three major competitive grant programs for libraries, and we encourage the submission of proposals related to the National Digital Platform priority to all three. Those programs are:

  • National Leadership Grants for Libraries (NLG): The NLG program is specifically focused on supporting our two strategic priorities, the National Digital Platform and Learning in Libraries. The most competitive proposals will advance some area of library practice on a national scale, with new tools, research findings, alliances, or similar outcomes. The NLG program makes awards up to $2,000,000, with funds available for both project and planning grants.
  • Laura Bush 21st Century Librarian Program (LB21): The LB21 program supports professional development, graduate education and continuing education for librarians and archivists. The LB21 program makes awards up to $500,000, and like NLG supports planning as well as project grants.
  • Sparks! Ignition Grants for Libraries: Sparks! grants support the development, testing, and evaluation of promising new tools, products, services, and practices. They often balance broad potential impact with an element of risk or innovation. The Sparks! program makes awards up to $25,000.

These programs can fund a wide range of activities. NLG and LB21 grants support projects, research, planning, and national forums (where grantees can hold meetings to gather stakeholders around a particular topic). The LB21 program also has a specific category for supporting early career LIS faculty research.

Application Process and Deadlines

Over the past year, IMLS piloted an exciting new model for our grant application process, which this year will be in place for both the NLG and LB21 programs. Rather than requiring a full application from every applicant, only a two-page preliminary proposal is due at the deadline. After a first round of peer review, a small subset of applicants will be invited to submit full proposals, and will have the benefit of the peer reviewers’ comments to assist in constructing the proposal. The full proposals will be reviewed by a second panel of peer reviewers before funding decisions are made. The Sparks! program goes through a single round of peer review, and requires the submission of a full proposal from all applicants.

The LB21 and NLG programs will both have a preliminary proposal application deadline on October 1, 2015, as well as an additional application deadline in February 2016.

Are you considering applying for an IMLS grant for your digital library project? Do you want to discuss which program might be the best fit for your proposal? We’re always happy to chat, and love hearing your project ideas, so please email us at (Emily) and (Trevor).

SearchHub: How Bloomberg Executes Search Analytics with Apache Solr

planet code4lib - Thu, 2015-09-03 08:00
As we count down to the annual Lucene/Solr Revolution conference in Austin this October, we're highlighting talks and sessions from past conferences. Today, we're highlighting Steven Bower's session on how Bloomberg uses Solr for search analytics.

Search at Bloomberg is not just about text; it's about numbers, lots of numbers. In order for our clients to research, measure, and drive decisions from those numbers, we must provide flexible, accurate, and timely analytics tools. We decided to build these tools using Solr, as Solr provides the indexing performance, filtering, and faceting capabilities needed to achieve the flexibility and timeliness the tools require. To perform the analytics required, we developed an Analytics Component for Solr. This talk covers the Analytics Component built at Bloomberg, some of the use cases that drove it, and the features and functionality it provides.

Steven Bower has worked for 15 years in the web/enterprise search industry, first as part of the R&D and Services teams at FAST Search and Transfer, Inc., and then as a principal engineer at Attivio, Inc. He has participated in and led the delivery of hundreds of search applications and now leads the search infrastructure team at Bloomberg LP, providing a search-as-a-service platform for 80+ applications.

Search Analytics Component: Presented by Steven Bower, Bloomberg L.P. from Lucidworks

Join us at Lucene/Solr Revolution 2015, the biggest open source conference dedicated to Apache Lucene/Solr, on October 13-16, 2015 in Austin, Texas. Come meet and network with the thought leaders building and deploying Lucene/Solr open source search technology. Full details and registration…
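To give a flavor of the kind of numeric aggregation the talk describes, here is a minimal sketch of a request one might send to Solr's standard JSON Facet API to compute per-bucket statistics over a numeric field. This is an illustration of off-the-shelf Solr faceting, not Bloomberg's custom Analytics Component; the collection and field names (`trades`, `price`, `sector`) are hypothetical.

```python
import json

def build_stats_facet(field, group_by):
    """Build a Solr JSON Facet API body that buckets documents by one
    field and computes numeric statistics of another field per bucket."""
    return {
        "query": "*:*",
        "limit": 0,  # return only aggregates, no documents
        "facet": {
            "by_group": {
                "type": "terms",      # one bucket per distinct value
                "field": group_by,
                "facet": {
                    "avg_val": "avg({})".format(field),
                    "min_val": "min({})".format(field),
                    "max_val": "max({})".format(field),
                    "total":   "sum({})".format(field),
                },
            }
        },
    }

body = build_stats_facet("price", "sector")
print(json.dumps(body, indent=2))
# This body would be POSTed to a query endpoint such as
# http://localhost:8983/solr/trades/query (hypothetical URL).
```

Nesting aggregation functions inside a `terms` facet like this is what lets Solr serve filtered, faceted numeric analytics in a single round trip, which is the capability the talk builds on.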

The post How Bloomberg Executes Search Analytics with Apache Solr appeared first on Lucidworks.
