Do you want to learn to code? Of course you do; why wouldn’t you? Programming is fun, like solving a puzzle. It helps you think in a computational and pragmatic way about certain problems, allowing you to automate those problems away with a few lines of code. Choosing to learn programming is the first step on your path, and the second is choosing a language. These days there are many great languages to choose from, each with its own strengths and weaknesses. The right language for you depends heavily on what you want to do (as well as what language your coworkers are using).
If you don’t have any coder colleagues and can’t decide on a language, I would suggest taking a look at Python. It’s mature, battle-tested, and useful for just about anything. I work across many different domains (often in the same day) and Python is a powerful tool that helps me take care of business whether I’m processing XML, analyzing data or batch renaming and moving files between systems. Python was created to be easy to read and aims to have one obvious “right” way to do any given task. These language design decisions not only make Python an easy language to learn, but an easy language to remember as well.
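As a small sketch of the kind of everyday chore Python handles well, here is a short batch-renaming function; the folder layout and naming scheme are hypothetical examples, not part of any real system:

```python
# Sketch: rename every .txt file in a folder to a numbered scheme
# like report_001.txt, report_002.txt, ... (names are illustrative).
from pathlib import Path

def batch_rename(folder, prefix="report"):
    """Rename all .txt files in `folder` to prefix_001.txt, prefix_002.txt, ..."""
    renamed = []
    # sorted() materializes the file list first, so renames don't disturb the loop
    for i, path in enumerate(sorted(Path(folder).glob("*.txt")), start=1):
        new_path = path.with_name(f"{prefix}_{i:03d}.txt")
        path.rename(new_path)
        renamed.append(new_path.name)
    return renamed
```

A one-off script like this is exactly the sort of thing that takes a few minutes in Python and saves an afternoon of manual clicking.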
One of the potential problems with Python is that it might not already be on your computer. Even if it is on your computer, it’s most likely an older version (the difference between Python v2 and v3 is kind of a big deal). This isn’t necessarily a problem with Python though; you would probably have to install a new interpreter (the program that reads and executes your code) no matter what language you choose. The good news is that there is a very simple (and free!) tool for getting the latest version of Python on your computer regardless of whether you are using Windows, Mac or Linux. It’s called Anaconda.
Anaconda is a Python distribution, which means that it is Python, just packaged in a special way. This special packaging turns out to make all the difference. Installing an interpreter is usually not a trivial task; it often requires an administrator password to install (which you probably won’t have on any system other than your personal computer) and it could cause conflicts if an earlier version already exists on the system. Luckily Anaconda bypasses most of this pain with a unique installer that puts a shiny new Python in your user account (this means you can install it on any system you can log in to, though others on the system wouldn’t be able to use it), completely separate from any pre-existing version of Python. Learning to take advantage of this installer was a game-changer for me since I can now write and run Python code on any system where I have a user account. Anaconda allows Python to be my programming Swiss Army knife: versatile, handy, and always available.
Another important thing to understand about Anaconda’s packaging is that it comes with a lot of goodies. Python is famous for having an incredible number of high-quality tools built into the language, but Anaconda extends this even further. It comes with Spyder, a graphical text editor that makes writing Python code easier, as well as many packages that extend the language’s capabilities. Python’s convenience and raw number-crunching power have made it a popular language in the scientific programming community, and a large number of powerful data processing and analysis libraries have been developed by these scientists as a result. You don’t have to be a scientist to take advantage of these libraries, though; the simplicity of Python makes these libraries accessible to anyone with the courage to dive in and try them out. Anaconda includes the best of these scientific libraries: IPython, NumPy, SciPy, pandas, matplotlib, NLTK, scikit-learn, and many others (I use IPython and pandas pretty frequently, and I’m in the process of learning matplotlib and NLTK). Some of these libraries are a bit tricky to install and configure with the standard Python interpreter, but Anaconda is set up and ready to use them from the start. All you have to do is use them.
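For a small taste of what working with one of these libraries feels like, here is a minimal pandas sketch; the table contents are invented purely for illustration:

```python
# A minimal pandas example: build a small table, sort it, and total a column.
# The library names and download counts are invented for illustration.
import pandas as pd

data = pd.DataFrame({
    "library": ["NumPy", "SciPy", "pandas", "matplotlib"],
    "downloads": [120, 80, 150, 95],  # hypothetical counts
})

# Sort rows by the downloads column, highest first.
top = data.sort_values("downloads", ascending=False)

# Sum the downloads column.
total = data["downloads"].sum()
```

Running a snippet like this in IPython (also included with Anaconda) lets you poke at `data` and `top` interactively, which is a pleasant way to learn.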
While we’re on the subject of tricky installations, there are many more packages that Anaconda doesn’t come with that can be a pain to install as well. Luckily Anaconda comes with its own package manager, conda, which is handy not only for grabbing new packages and installing them effortlessly, but also for upgrading the packages you have to the latest version. Conda even works on the Python interpreter itself, so when a new version of Python comes out you don’t have to reinstall anything. Just to test it out, I upgraded to the latest version of Python, 3.4.2, while writing this article. I typed in `conda update python` and had the newest version running in less than 30 seconds.
In summary, Anaconda makes Python even more simple, convenient and powerful. If you are looking for an easy way to take Python for a test drive, look no further than Anaconda to get Python on your system as fast as possible. Even seasoned Python pros can appreciate the reduced complexity Anaconda offers for installing and maintaining some of Python’s more advanced packages, or putting a Python on systems where you need it but lack security privileges. As an avid Python user who could install Python and all its packages from scratch, I choose to use Anaconda because it streamlines the process to an incredible degree. If you would like to try it out, just download Anaconda and follow the guide.
On March 25, 2015, the American Library Association’s Washington Office and the University of Maryland’s iPAC will host the free webinar “Baltimore’s Virtual Supermarket: Grocery Delivery to Your Library or Community Site.” During the webinar, library leaders will discuss Baltimore’s Virtual Supermarket Program, an innovative partnership between the Enoch Pratt Free Library, the Baltimore City Health Department and ShopRite. Through the Virtual Supermarket Program, customers can place grocery orders online at select libraries, senior apartment buildings, or public housing communities and have them delivered to that site at no added cost. In this webinar, you will learn about the past, present, and future of the Virtual Supermarket Program, as well as the necessary elements to replicate the program in your own community.

Webinar speakers:
- Laura Flamm is the Baltimarket and Food Access Coordinator at the Baltimore City Health Department. In this role, Laura coordinates a suite of community-based food access programs that include the Virtual Supermarket Program, the Neighborhood Food Advocates Initiative, and the Healthy Stores Program. Laura holds a Master’s of Science in Public Health from the Johns Hopkins Bloomberg School of Public Health in Health, Behavior, and Society and a certificate in Community-Based Public Health. She believes that eating healthy should not be a mystery or a privilege.
- Eunice Anderson is Chief of Neighborhood Library Services for the Enoch Pratt Free Library. A Baltimore native, she has worked 36 years at the Pratt Library, coming up through the ranks from support staff to library professional. In the various positions she’s held, providing quality, enriching library services (assisting customers, supporting and leading staff, and doing community outreach) has kept her battery charged.
Webinar title: Baltimore’s Virtual Supermarket: Grocery Delivery to Your Library or Community Site
Date: March 25, 2015
Time: 2:00-3:00 p.m. EST
The post Free webinar: Bringing fresh groceries to your library appeared first on District Dispatch.
This week, I joined my colleague Kevin Maher, assistant director of the American Library Association’s (ALA) Office of Government Relations, in meeting with staff from Reach Out and Read, Save the Children and Reading Is Fundamental to lobby congressional Appropriators staff for level funding for Innovative Approaches to Literacy (IAL), a grant program with at least half of funding going to school libraries.
In the U.S. Senate and U.S. House, Republicans and Democrats alike talked about how tight the budget will be and how little money is available… but also how much they all want to have an appropriation. In the Senate, however, they are not optimistic that they can get a Labor, Health and Human Services, and Education bill onto the U.S. Senate floor for a vote (that hopefully passes).
Many congressional staff members advised us to make sure Members of Congress know about the IAL funding program and how it benefits school libraries. For the first time, we need to submit electronic appropriations forms (like folks used to have to do for earmarks in the past) for all programs, and it will be a stronger submission with a “hometown” local connection.
We are asking every school that has received an IAL grant to support the ALA’s advocacy efforts. Email Kevin Maher kmaher[at]alawash[dot]org with a good story as soon as possible. These forms are due March 12, 2015, so we do not have much time.
The post School librarians: Send us your successful IAL story appeared first on District Dispatch.
- Most important: Impact vs. Cost
- Impact is how many (what portion) of your patrons will be affected, and how profound the benefit may be to their research, teaching, and learning.
- Cost may include hardware or software costs, but for most projects we do the primary cost is staff time.
- You are looking for the projects with the greatest impact at the lowest cost.
- If you want to try to quantify, it may be useful to simply estimate three qualities:
- Portion of userbase impacted (1-10 for 10% to 100% of userbase impacted)
- Profundity of impact (estimate on a simple scale, say 1 to 3 with 3 being the highest)
- “Cost” in terms of time. Estimate with only rough granularity knowing estimates are not accurate. 2 weeks, 2 months, 6 months, 1 year. Maybe assign those on a scale from 1-4.
- You could then simply compute (portion * profundity) / cost, and look for the largest values. Or you could plot on a graph with (benefit = portion * profundity) on the x-axis, and cost on the y-axis. You are looking for projects near the lower right of the graph — high benefit, low cost.
- Demographics impacted. Will the impact be evenly distributed, or will it be greater for certain demographics? Discipline/school/department? Researcher vs grad student vs undergrad?
- Are there particular demographics which should be prioritized, because they are currently under-served or because focusing on them aligns with strategic priorities?
- Types of services or materials addressed. Print items vs digital items? Books vs journal articles? Other categories? Again, are there service areas that have been neglected and need to be brought to par? Or service areas that are strategic priorities, and others that will be intentionally neglected?
- Strategic plans. Are there existing library or university strategic plans? Will some projects address specific identified strategic focuses? These can also be used to determine prioritized demographics or service areas from above.
- Ideally all of this is informed by strategic vision, where the library organization wants to be in X years, and what steps will get you there. And ideally that vision is already captured in a strategic plan. Few libraries may have this luxury of a clear strategic vision, however.
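The scoring heuristic sketched in the list above can be written in a few lines of Python; the project names and scores below are invented purely for illustration:

```python
# Sketch of the impact-vs-cost scoring heuristic described above.
# Projects and their scores are invented examples, not real data.
projects = {
    # name: (portion 1-10, profundity 1-3, cost 1-4)
    "new discovery layer": (8, 3, 4),
    "fix broken link resolver": (6, 2, 1),
    "digitize rare pamphlets": (2, 3, 3),
}

def score(portion, profundity, cost):
    """Benefit per unit cost, where benefit = portion * profundity."""
    return (portion * profundity) / cost

# Highest score first: high benefit, low cost rises to the top.
ranked = sorted(projects, key=lambda name: score(*projects[name]), reverse=True)
```

Even with rough one-digit estimates, sorting by this ratio tends to surface the "lower right of the graph" projects quickly.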
Filed under: General
Hot on the heels of last week’s announcement of KriKri and Heidrun, we here at DPLA HQ are excited to release the newest revision of the DPLA Metadata Application Profile, version 4.0 (DPLA MAP v4.0).
What is an “application profile”? It’s a defined set of metadata properties that combines selected elements from multiple schemas, often along with locally defined ones. An application profile, therefore, allows us to take the parts of other metadata schemes best suited to our needs to build a profile that works for us. We’ve taken full advantage of this model to combine properties from DCMI, EDM, Open Annotation, and more to create the DPLA MAP v4.0. Because the majority of the elements come from standard schemas (indicated by a namespace prefix, such as “dc:date” for Dublin Core’s date element), we remain aligned with the Europeana Data Model (EDM), while having enough flexibility for our local needs.
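As a toy illustration of the idea (not DPLA’s actual serialization), a record in an application profile might mix namespace-prefixed properties like this:

```python
# Toy sketch of a record whose properties are drawn from multiple schemas,
# keyed by namespace-prefixed names. Illustrative only; this is not
# DPLA's actual data format.
record = {
    "dc:date": "1920-05-01",            # from Dublin Core
    "dc:title": "Harbor at Dawn",       # from Dublin Core
    "edm:rights": "http://rightsstatements.org/vocab/NoC-US/1.0/",  # from EDM
    "dpla:originalRecord": {"id": "abc123"},  # locally defined property
}

# Group the property names by their namespace prefix.
by_namespace = {}
for key in record:
    prefix = key.split(":", 1)[0]
    by_namespace.setdefault(prefix, []).append(key)
```

The prefix tells you which schema a property came from, which is exactly how an application profile stays aligned with its source standards while adding local terms.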
Our new version of the DPLA MAP has lots of properties tailor-made for storing Universal Resource Identifiers (or URIs) from Linked Open Data (LOD) sources. These are other data sets and vocabularies that publish URIs tied to specific terms and concepts. We can use those URIs to point to the external LOD source and enrich our own data with theirs. In particular, we now have the ability to gather LOD about people or organizations (in the new class we’ve created for “Agents”), places (in the revision of our existing “Place” class) and concepts, topics, or subject headings (in the new “Concept” class).
At the moment DPLA’s plans for LOD include associating URIs that are already present in the records we get from our partners, as well as looking up and populating URIs for place names when we can. In the future, we plan to incorporate more linked data vocabularies such as the Library of Congress Subject Headings and Authorities. After that we can begin to consider other kinds of LOD possibilities like topic analysis or disambiguation of terms, transliteration, enrichment of existing records with more metadata from other sources (a la Wikipedia, for example), and other exciting possibilities.
Every journey begins with a first step, and our journey began with the upgrades announced in recent weeks (as described in our recent Code4Lib presentation, blog posts, and software releases). Along with these upgrades, MAP v4.0 has become our official internal metadata application profile. As of today, documentation for the new version of DPLA MAP v4.0 is available here as well as a new Introduction to the DPLA Metadata Model.
Today I found the following resources and bookmarked them:
- Sphinx: a tool that makes it easy to create intelligent and beautiful documentation, written by Georg Brandl and licensed under the BSD license.
Digest powered by RSS Digest
All Web services that require user-level authentication will be unavailable during the installation window, between 2:00 and 8:00 a.m. Eastern (USA) on Sunday, March 8th.
Last week we published The BC Open Textbook Accessibility Toolkit. I’m really excited and proud of the work that we did and am moved by how generous people have been with us.
Since last fall I’ve been working with Amanda Coolidge (BCcampus) and Sue Doner (Camosun College) to figure out how to make the open textbooks produced in BC accessible from the start. This toolkit was published using Pressbooks, a publishing plugin for WordPress. It is licensed with the same Creative Commons license as the rest of the open textbooks (CC-BY). This whole project has been a fantastic learning experience and it’s been a complete joy to experience so much generosity from other colleagues.
We worked with students with print disabilities to user test some existing open textbooks for accessibility. I rarely get to work face-to-face with students. It was such a pleasure to work with this group of well-prepared, generous and hardworking students.
Initially we were stumped about how to get faculty, who would be writing open textbooks, to care about print-disabled students who may be using their books. Serendipitously, I came across this awesome excerpt from Sarah Horton and Whitney Quesenbery’s book A Web For Everyone. User personas seemed like the way to explain some of the different types of user groups. A blind student is likely using different software, and possibly different hardware, than a student with a learning disability. Personas seemed like a useful tool to create empathy and explain why faculty should write alt-text descriptions for their images.
Instead of rethinking these personas from the beginning, Amanda suggested contacting the authors to see if their work was licensed under a Creative Commons license that would allow us to reuse and remix it. They emailed me back in 5 minutes and gave their permission for us to reuse and repurpose their work. They also gave us permission to use the illustrations that Tom Biby did for their book. These illustrations are up on Flickr and clearly licensed CC-BY.
While I’ve worked on open source software projects this is the first time I worked on an open content project. It is deeply satisfying for me when people share their work and encourage others to build upon it. Not only did this save us time but their generosity and enthusiasm gave us a boost. We were complete novices: none of us had done any user testing before. Sarah and Whitney’s quick responses were really encouraging.
This is the first version and we intend to improve it. We already know that we’d like to add some screenshots of ZoomText, and we need to provide better information on how to make formulas and equations accessible. It’s difficult for me to put out work that’s not 100% perfect and complete, but other people’s generosity has helped me to relax.
I let our alternate format partners across Canada know about this toolkit. Within 24 hours of publishing, our partner organization in Ontario offered to translate it into French. They had also started working on a similar project and loved our approach. So instead of writing their own toolkit, they will use or adapt ours. As it’s licensed under a CC-BY license, they didn’t even need to ask us to use it or translate it.
Thank you to Mary Burgess at BCcampus who identified accessibility as a priority for the BC open textbook project.
Thank you to Bob Minnery at AERO for the offer of a French translation.
Thank you to Sarah Horton and Whitney Quesenbery for your generosity and enthusiasm. I really feel like we got to stand on the shoulders of giants.
Thank you to the students who we worked with. This was an awesome collaboration.
Thank you to Amanda Coolidge and Sue Doner for being such amazing collaborators. I love how we get stuff done together.
We’re debuting a new series this month: a roundup inspired by our friends at Hack Library School! Each month, the LITA bloggers will share selected library tech links, resources, and ideas that resonated with us. Enjoy – and don’t hesitate to tell us what piqued your interest recently in the comments section!

Brianna M.
Get excited: This month I discovered some excellent writing related to research data management.
- If you’ve ever wondered… –> What Drives Academic Data Sharing?
- Excellent, spot on advice from Celia Emmelhainz –> Things You Can Do As a Library Student to Prepare for a Career as a Data Librarian
- UW-Madison unveiled our new electronic lab notebook this past fall and we’re continuing to educate the community about it. –> Manage Your Data with LabArchives
- Stacy always teaches me stuff. This time it’s about the tool Docker. –> A Gentle Introduction to Docker for Reproducible Research
- More and more federal agencies are releasing requirements following February 2013’s OSTP memo. These institutions are doing a great job aggregating that information. –> Oregon State University | Carnegie Mellon | Columbia University
The lion’s share of my work revolves around our digital library system, and lately I’ve been waxing philosophical about what role these systems play in our culture. I don’t have a concrete answer yet, but I’m getting there.
- Lawrence Lessig is pretty much the coolest person ever. He’s a co-founder of Creative Commons, whose licenses we use pretty much every week on our blog, and he puts his money where his mouth is when it comes to the books he’s written. I’m currently reading The Future of Ideas and Free Culture, both of which are freely available under a CC license.
- Europe is doing a stellar job of raising public awareness about the importance of the public domain. Organizations like Communia and Europeana are putting a lot of effort into initiatives like the Public Domain Manifesto. Let’s hope it spreads across the pond.
- There’s a horse in that car!
I’m just unburying myself from a major public computer revamp (new PCs, new printers, new reservation/printing system, mobile printing, etc.) so here are a few things I’ve found interesting:
- Eric Hellman from Unglue.it writes about Creative Commons licensing and how “Free” can help a book do its job
- An interview with David Weinberger of the Harvard Library Innovation Lab –> Is There a Library-Sized Hole in the Internet?
- David Lee King suggests your strategic plan needs a technology plan –> Which Comes First – Strategic Plan or Technology Plan?
- If you’ve got some time, this is an excellent 53-page white paper from Marshall Breeding (PDF link on the page) –> NISO White Paper Explores the Future of Library Resource Discovery
This month my life is starting to revolve around online learning. Here’s what I’ve been reading:
- So much video… –> BYU-Idaho Supports Online Learning with Automated Video Transcoding
- Virtual reality… –> Distance Learning Taps in to Virtual Reality Technology
- Students might not like this, but school doesn’t have to stop for snow!… –> For Some Schools, Learning Doesn’t Stop on Snow Days
- And because all of this is really hard to do on your own… –> Why You Now Need a Team to Create and Deliver Learning
I’ve been immersed in metadata and cataloguing, so here’s a grab bag of what’s intrigued me lately:
- Although my university doesn’t collect video games, I know well that cataloguing rules often lag behind technological developments – A History of Video Game Cataloging in U.S. Libraries
- Wish I’d thought of this project… “What Am I Fighting For?”: Creating a Controlled Vocabulary for Video Game Plot Metadata
- To get my brain off gaming. I’m new to Viewshare, but it seems pretty neat … Visual Representation of Academic Communities through Viewshare
- LibraryThing is looking snazzier than most library catalogues, yet again: New “More Like This” for LibraryThing for Libraries
Hey, LITA Blog readers. Are you managing multiple projects? Have you run out of Post-it® notes? Are the to-do lists not cutting it anymore? Me too. The struggle is real. Here is a set of totally unrelated links to distract all of us from the very pressing tasks at hand. I mean, inspire us to finish the work.
- A pair of retired scholars have meticulously reconstructed a prestigious book collection online (and it’s a thing of beauty) –> Reassembling William Morris’ Library
- CRASSH is hosting a dreamboat of a conference exploring the total system of knowledge and how new technology is bringing us closer to making it a reality –> The Total Archive: Dreams of Universal Knowledge from the Encyclopedia to Big Data
- Michael Schofield lets us know what we’ve always suspected was true. –> “Social” the Right Way is a Timesuck
The agenda for DPLAfest 2015 is now available! Featuring dozens of sessions over two days, DPLAfest 2015 will bring together hundreds from across the cultural heritage sector to discuss everything from technology and metadata, to (e)books, law, genealogy, and education. The events will take place on April 17-18, 2015 in Indianapolis, Indiana.
The second iteration of the fest–set to coincide with DPLA’s 2nd birthday–will appeal to teachers and students, librarians, archivists, museum professionals, developers and technologists, publishers and authors, genealogists, and members of the public alike who are interested in an engaging mix of interactive workshops, hands-on activities, hackathons, discussions with community leaders and practitioners, and more.
For DLF member organizations that are interested in attending DPLAfest 2015 but are in need of travel support, please note that today (March 5) is the final day to apply for a DPLA + DLF Cross-Pollinator Travel Grant.
See you in Indy!
First, Tom's just wrong about Facebook's optical storage when he writes:
Finally let’s look at why a company like Facebook is interested in optical archives. The figure below shows the touch rate vs. response time for an optical storage system with a goal of <60 seconds response time, which can be met at a range of block sizes with 12 optical drives per 1 PB rack in an optical disc robotic library.

The reason Facebook gets very low cost by using optical technology is, as I wrote here, that they carefully schedule the activities of the storage system to place a hard cap on the maximum power draw, and to provide maximum write bandwidth. They don't have a goal of <60s random read latency. Their goals are minimum cost and maximum write bandwidth. The design of their system assumes that reads almost never happen, because they disrupt the write bandwidth. As I understand it, reads have to wait while a set of 12 disks is completely written. Then all 12 disks of the relevant group are loaded, read, and the data staged back to the hard-disk layers above the optical storage. Then a fresh set of 12 disks is loaded and writing resumes.
Facebook's optical read latency is vastly longer than 60s. The system Tom is analysing is a hypothetical system that wouldn't work nearly as well as Facebook's given their design goals. And the economics of such a system would be much worse than Facebook's.
Second, it is true that Facebook gains massive advantages from their multi-tiered long-term storage architecture, which has a hot layer, a warm layer, a hard-disk cold layer and a really cold optical layer. But you have to look at why they get these advantages before arguing that archives in general can benefit from tiering. Coughlin writes:
Archiving can have tiers. ... In tiering content stays on storage technologies that trade off the needs (and opportunities) for higher performance with the lower costs for higher latency and lower data rate storage. The highest value and most frequently accessed content is kept on higher performance and more expensive storage and the least valuable or less frequently accessed content is kept on lower performance and less expensive storage.

Facebook stores vast amounts of data, but a very limited set of different types of data, and their users (who are not archival users) read those limited types of data in highly predictable ways. Facebook can therefore move specific types of data rapidly to lower-performing tiers without imposing significant user-visible access latency.
More normal archives, and especially those with real archival users, do not have such highly predictable access patterns and will therefore gain much less benefit from tiering. More typical access patterns to archival data can be found in the paper at the recent FAST conference describing the two-tier (disk plus tape) archive at the European Centre for Medium-Range Weather Forecasts. Note that these patterns come from before the enthusiasm for "big data" drove a need to data-mine archived information, which will reduce the benefit from tiering even more significantly.
Fundamentally, tiering, like most storage architectures, suffers from the idea that in order to do anything with data you need to move it from the storage medium to some compute engine. The result is an obsession with I/O bandwidth rather than with what the application really wants, which is query processing rate. By moving computation to the data on the storage medium, rather than moving data to the computation, architectures like DAWN and Seagate's and WD's Ethernet-connected hard disks show how to avoid the need to tier, and thus the need to be right in your predictions about how users will access the data.
From Chris Awre, on behalf of the Hydra Europe Planning Team
London, UK: The Hydra Project is pleased to announce two Hydra Europe events for 2015, taking place this coming April at LSE Library, London.
I was invited to speak on a panel with three other speakers: Christopher Kevlahan, Branch Head, Joe Fortes – Vancouver Public Library, Miriam Moses, Acquisitions Manager, Burnaby Public Library, and Greg Mackie, Assistant Professor, UBC Department of English.
I think that libraries do a great job of promoting Freedom to Read Week with events and book displays, but could be doing a better job of advocating for intellectual freedom in the digital realm.

Public library examples
I spoke about how Fraser Valley Regional Library filters all their internet, how Vancouver Public Library changed their internet use policy to single out “sexually explicit images”, and how most public library internet policies don’t appear to have been updated since the 90s.
BiblioCommons is a product with a beautiful, well-designed interface that many public libraries use to sit over their public-facing catalogues. It is a huge improvement over the traditional OPAC interface, and I like that there’s a small social component, with user tagging and comments, as well. However, BiblioCommons allows patrons to flag content for: Coarse Language, Violence, Sexual Content, Frightening or Intense Scenes, or Other. This functionality that allows users to flag titles for sexual content or coarse language is not in line with our core value of intellectual freedom.
Devon Greyson, a local health librarian-researcher and PhD candidate said on BCLA’s Intellectual Freedom Committee’s email list:
Perhaps the issue is a difference in the understanding of what is “viewpoint neutral.” From an IF standpoint, suggesting categories of concern is non-neutral. Deciding that sex, violence, scary and rude are the primary reasons one should/would be setting a notice to warn other users is non-neutral. Why not racism, sexism, homophobia & classism as the categories with sex, violence & swearing considered “other”?

Academic library example
I also talked about the Feminist Porn Archive, a SSHRC funded research project at York University. Before the panel I chatted with Lisa Sloniowski who was really generous sharing some of the hypothetical issues that she imagines the project might encounter. She wondered if campus IT, the university’s legal department or university administration might be more conservative than the library. What would happen if they digitized porn and hosted it on university servers? Would they need to have a login screen in front of their project website?
This session was recorded and I’d love to hear your thoughts. How can libraries support or defend intellectual freedom online?
Today I found the following resources and bookmarked them:
- TEXTUS: an open source platform for working with collections of texts. It enables students, researchers and teachers to share and collaborate around texts using a simple and intuitive interface.
- Back to School w/ the Class of Web 2.0
- Collaborative Teaching for More Effective Learning
- NFAIS 2009: Conference Prep
Want to comment on the Internal Revenue Service’s (IRS) tax form delivery service? Discuss your experiences obtaining tax forms for your library during “Talk with the IRS about Tax Forms,” a no-cost webinar that will be hosted by the American Library Association (ALA).
The session will be held from 2:30–3:30 p.m. Eastern on Tuesday, March 10, 2015. To register, send an email to Emily Sheketoff, executive director of the ALA Washington Office, at esheketoff[at]alawash[dot]org. Register now as space is limited.
Leaders from the IRS’ Tax Forms Outlet Program (TFOP) will lead the webinar. The TFOP offers tax forms and products to the American public primarily through participating libraries and post offices. Carol Quiller, the newly appointed TFOP relationship manager, will join Dietra Grant, director of the agency’s Stakeholder, Partnership, Education and Communication office, in answering questions during the webinar from the library community about tax forms and instructional pamphlet distribution.
The webinar will not be archived.
Date: Tuesday, March 10, 2015
Time: 2:30–3:30 p.m. Eastern
Register now: Email: esheketoff[at]alawash[dot]org
The post Free webinar: IRS officials to discuss library tax form program appeared first on District Dispatch.
The beta release of Evergreen 2.8 is available to download and test!
New features and enhancements of note in Evergreen 2.8 include:
- Acquisitions improvements to help prevent the creation of duplicate orders and duplicate purchase order names.
- The select list and PO view interfaces now display, beside each line item ID, the number of catalog copies already owned.
- A new Apache access handler that allows resources on an Evergreen web server, or proxied via an Evergreen web server, to be authenticated using a user’s Evergreen credentials.
- Copy locations can now be marked as deleted. This allows information about disused copy locations to be retained for reporting purposes without cluttering up location selection drop-downs.
- Support for matching authority records during MARC import. Matches can be made against MARC tag/subfield entries and against a record’s normalized heading and thesaurus.
- Patron message center: a new mechanism via which messages can be sent to patrons for them to read while logged into the public catalog.
- A new option to stop billing activity on zero-balance billed transactions, which will help reduce the incidence of patron accounts with negative balances.
- New options to void lost item and long overdue billings if a loan is marked as claims returned.
- The staff interface for placing holds now offers the ability to place additional holds on the same title.
- The active date of a copy record is now displayed more clearly.
- A number of enhancements have been made to the public catalog to better support discoverability by web search engines.
- There is now a direct link to “My Lists” from the “My Account” area in the upper-right part of the public catalog.
- There is a new option for TPAC to show more details by default.
For more information about what’s in the release, check out the draft release notes.
Note that the release was built yesterday before 2.7.4 existed, so the DB upgrade script applies to a 2.7.3 database. To apply to a 2.7.4 test database, remove updates 0908, 0913, and 0914 from the upgrade file, retaining the final commit. The final 2.8.0 DB upgrade script will be built from 2.7.4 instead.
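Pruning those updates by hand is error-prone, so a small script can help. This is only a minimal sketch: the `-- begin upgrade` / `-- end upgrade` comment markers and the helper below are hypothetical illustrations, not Evergreen’s actual upgrade-file format, so inspect the real script’s layout before editing it this way.

```python
# Hypothetical sketch: drop the stanzas for updates 0908, 0913 and 0914
# from a combined DB upgrade script. The begin/end comment markers are
# assumptions for illustration only; the real file may be laid out differently.
SKIP = {"0908", "0913", "0914"}

def filter_upgrade(sql: str) -> str:
    out, skipping = [], False
    for line in sql.splitlines():
        stripped = line.strip()
        # assumed marker format: "-- begin upgrade 0908" / "-- end upgrade 0908"
        if stripped.startswith("-- begin upgrade ") and stripped.split()[-1] in SKIP:
            skipping = True
            continue
        if skipping:
            if stripped.startswith("-- end upgrade ") and stripped.split()[-1] in SKIP:
                skipping = False
            continue
        out.append(line)
    return "\n".join(out)
```

Note that the script deliberately keeps everything outside the skipped stanzas untouched, including the final commit.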
Or how to use Google tools to assess user behavior across web properties.
Tuesday March 31, 2015
11:00 am – 12:30 pm Central Time
Register now for this webinar
This brand new LITA Webinar shows how Marquette University Libraries have installed custom tracking code and meta tags on most of their web interfaces including:
- Digital Commons
- EBSCO EDS
- WebPac, and the
- General Library Website
The data retrieved from these interfaces is gathered into Google’s
- Universal Analytics
- Tag Manager, and
- Webmaster Tools
When used in combination these tools create an in-depth view of user behavior across all these web properties.
For example, Google Tag Manager can grab search terms, which can then be tied to a specific collection within Universal Analytics and related to a particular demographic. The current versions of these tools make system setup an easy process with little or no programming experience required. Making sense of the volume of data retrieved, however, is more difficult.
- How does Google data compare to vendor stats?
- How can the data be normalized using Tag Manager?
- Can this data help your organization make better decisions?
- Ed Sanchez, Head, Library Information Technology, Marquette University Libraries
- Rob Nunez, Emerging Technologies Librarian, Marquette University Libraries and
- Keven Riggle, Systems Librarian & Webmaster, Marquette University Libraries
Join the presenters in this webinar as they explain their new processes and explore these questions. Check out their program outline: http://libguides.marquette.edu/ga-training/outline
Can’t make the date but still want to join in? Registered participants will have access to the recorded webinar.
- LITA Member: $39
- Non-Member: $99
- Group: $190
Register Online page arranged by session date (login required)
Mail or fax form to ALA Registration
Call 1-800-545-2433 and press 5
Questions or Comments?
For all other questions or comments related to the course, contact LITA at (312) 280-4269 or Mark Beatty, firstname.lastname@example.org.
Last updated March 4, 2015. Created by Peter Murray on March 4, 2015.
- digilib is a web based client/server technology for images. The image content is processed on-the-fly by a Java Servlet on the server side so that only the visible portion of the image is sent to the web browser on the client side.
- digilib enables very detailed work on an image as required by scholars with elaborate viewing features like an option to show images on the screen in their original size.
- digilib facilitates cooperation of scholars over the internet and novel uses of source material by image annotations and stable references that can be embedded in URLs.
- digilib facilitates federation of image servers through a standards-compliant IIIF image API.
- digilib is Open Source Software under the Lesser General Public License, jointly developed by the Max Planck Institute for the History of Science, the Bibliotheca Hertziana, the University of Bern and others.
- digilib - 2.2.2 3-Sep-2013
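As a concrete illustration of the IIIF image API mentioned above, the spec defines request URLs of the form {scheme}://{server}/{prefix}/{identifier}/{region}/{size}/{rotation}/{quality}.{format}. The base URL and identifier in this minimal sketch are hypothetical placeholders, and note that some tokens differ between API versions (e.g. quality is "native" in 1.1 but "default" in 2.0):

```python
# Build an IIIF Image API request URL from its path components.
# The base URL and identifier used below are hypothetical placeholders;
# substitute your own server's IIIF endpoint.
def iiif_url(base, identifier, region="full", size="full", rotation="0",
             quality="default", fmt="jpg"):
    return f"{base}/{identifier}/{region}/{size}/{rotation}/{quality}.{fmt}"

# Request a 600x400 region at x=100, y=100, scaled to 300px wide:
url = iiif_url("https://example.org/digilib/Scaler/IIIF", "page-0001",
               region="100,100,600,400", size="300,")
```

Because the region, size and rotation live in the URL itself, such links make stable, shareable references to a precise detail of an image.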
An open-source, web-based 'multi-up' viewer that supports zoom-pan-rotate functionality, the ability to display and compare simple images, and images with annotations.
Package Type: Image Display and Manipulation
License: Apache 2.0
Releases for Mirador
- Mirador - 1.0rc1 4-Mar-2014
Last updated March 5, 2015. Created by Peter Murray on March 4, 2015.