Notes on converting this GitHub user page based site to Pelican, a Python-based static site generator.
Today I found the following resources and bookmarked them:
- PyKota Open Source print management
- RDA Print Survey
- E-book reading on the rise
- ATO2014: Building a premier storytelling platform on open source
To what extent is it important to get familiar with our environment?
If we think about how the world around us has changed over the years, it is not unreasonable that, while walking to work, we might come across new little shops, restaurants, or gas stations we had never noticed before. Likewise, how many times have we wandered about for hours just to find a green space for a run, only to discover that the one we found was even more polluted than the surrounding urban areas?
Citizens are not always properly informed about the evolution of the places they live in. That is why it is crucial for people to stay constantly up to date with accurate information about the neighborhood they have chosen, or are about to choose.
(Image source: London Evening Standard)
London is a clear example of how transparency in providing data is fundamental to succeeding as a Smart City. The GLA’s London Datastore, for instance, is a public platform of datasets with up-to-date figures on the main services the city offers, as well as on residents’ lifestyles and environmental risks. These data are then made more easily accessible to the community through the London Dashboard.
The importance of dispensing free information is also demonstrated by the integration of maps, which are an efficient means of geolocation. Consulting a map that makes it easy to find all the services you need close at hand can be decisive when searching for a place to live.
(Image source: Smart London Plan)
The Global Open Data Index, published by Open Knowledge in 2013, is another useful tool for data retrieval: it showcases a rank of different countries in the world with scores based on openness and availability of data attributes such as transport timetables and national statistics.
As has been noted, making open data available and easily findable online not only represented a success for US cities but benefited app makers and civic hackers too. Lauren Reid, a spokesperson at Code for America, told Government Technology: “The more data we have, the better picture we have of the open data landscape.”
That is, on the whole, what Place I Live puts the biggest effort into: fostering a new awareness of the environment by providing free information, in order to support citizens willing to choose the best place they can live.
The result is quickly explained. The website’s homepage lets visitors type in an address of interest and displays an overview of the neighborhood’s ratings across various parameters, along with a Life Quality Index calculated for every point on the map.
Searching for the nearest medical institutions, schools, or ATMs thus becomes immediate and clear, as does surveying general information about the community. Moreover, the data’s reliability and accessibility are constantly reviewed by a strong team of professionals with expertise in data analysis, mapping, IT architecture, and global markets.
For the moment the company’s work is focused on London, Berlin, Chicago, San Francisco and New York, with the longer-term goal of covering more than 200 cities.
The US Open Data Census recently gave San Francisco its highest score, proof of the city’s work in putting technological expertise at everyone’s disposal and in meeting users’ needs through meticulous selection of datasets. The city is building on this success with a new investment, in partnership with the University of Chicago: a data analytics dashboard on sustainability performance statistics named the Sustainable Systems Framework, expected to be released in beta by the end of the first quarter of 2015.
(Image source: Code for America)
Another remarkable collaboration in the spread of Open Data comes from the Bartlett Centre for Advanced Spatial Analysis (CASA) of University College London (UCL); Oliver O’Brien, researcher at the UCL Department of Geography and software developer at CASA, is indeed one of the contributors to this cause. Among his products, an interesting accomplishment is London’s CityDashboard, a real-time control panel for spatial data reports. The web page also allows users to visualize the data on a simplified map and to look at dashboards for other UK cities.
In addition, his Bike Share Map is a live global view of bicycle sharing systems in over a hundred cities around the world; bike sharing has recently drawn greater public attention as a novel form of transportation, above all in Europe and China.
O’Brien’s collaboration with James Cheshire, Lecturer at UCL CASA, also gave life to a groundbreaking project called DataShine, which aims to promote the use of large, open datasets within the social science community through new means of data visualisation, starting with a mapping platform for 2011 Census data, followed by maps of individual census tables and the new Travel to Work Flows table.
(Image source: Suprageography)
The holidays are upon us, LITA Blog readers. As we all wind down end-of-year tasks and prepare for our own celebrations, this final installment of Tech Yourself Before You Wreck Yourself for 2014 is my way of saying thanks. Thanksgiving is maybe my favorite holiday (I love the way it is casual, hangout-focused, and food-intensive) but I also love the tone of gratitude that colors it. So, let me express how grateful I am for all of you, reading this blog and supporting our efforts. Thank you for being there.
For the uninitiated, Tech Yourself Before You Wreck Yourself (TYBYWY) is a monthly selection of free webinars, classes, and other education opportunities for the aspiring technologist and the total newbie alike.
The Monthly MOOC
If, like so many of us, you’re intrigued by use of gamification in content design and delivery, Coursera’s perennially popular MOOC on the subject is open starting January 26th. Make your New Year’s resolution to educate yourself on this powerful outreach method. It’s particularly interesting from a training/instructional design perspective.
OpenCon has posted its 2014 Webcast Round-Up, and the resources there are excellent if you are trying to learn more about Open Access.
I know that I’ve mentioned them in past posts, but Library Journal’s webcast series has been stepping up its game recently. These programs are on my docket, and you should consider attending too:
Two Cool Gigs:
Interested in pursuing a career in media archives and social justice? Consider this paid internship in Democracy Now!’s Archives. Application deadline 11/15.
Another option, NPR’s Library Archives has a paid internship. Get on it and apply by 11/21!
Tech On, TYBYWYers-
Happy Thanksgiving! TYBYWY will return 12/12. As always, let me know if you have any questions or suggestions. Leave a message here or catch me on Twitter, @linds_bot.
I'm David Rosenthal and I'm a customer. This will be a very short talk making one simple point, which is the title of the talk:
Storage Will Be
Much Less Free
Than It Used To Be

My five minutes of fame happened last Monday when Chris Mellor at The Register published this piece, with a somewhat misleading title. It is based on work I had been blogging about since at least 2011, ever since a conversation at the Library of Congress with Dave Anderson of Seagate. For the last 16 years I've been working at Stanford Library's LOCKSS Program on the problem of keeping data safe for the long term. There are technical problems, but the more important problems are economic. How do you fund long-term preservation?
Working with students at UC Santa Cruz's Storage Systems Research Center I built an economic model of long-term storage. Here is an early version computing the net present value of the expenditures through time to keep an example dataset for 100 years, the endowment for short, as the rate at which storage gets cheaper, the Kryder rate for short, varies. The different lines reflect media service lives of 1 to 5 years.
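The shape of that model can be sketched in a few lines. This is a simplified illustration, not the actual UCSC model; the discount rate, media service life, and unit cost here are arbitrary assumptions chosen only to show the flat-versus-steep behavior of the curve:

```python
def endowment(kryder_rate, years=100, media_life=4,
              initial_cost=1.0, discount_rate=0.03):
    """Net present value of re-buying media every `media_life`
    years for `years` years, while the unit cost of storage
    falls by `kryder_rate` per year (simplified sketch)."""
    total = 0.0
    for year in range(0, years, media_life):
        cost_then = initial_cost * (1 - kryder_rate) ** year
        total += cost_then / (1 + discount_rate) ** year
    return total

# At historic Kryder rates the endowment barely moves as the
# rate varies; below roughly 20%/year it climbs steeply.
for rate in (0.40, 0.30, 0.20, 0.10):
    print(f"Kryder rate {rate:.0%}: endowment {endowment(rate):.2f}")
```

Running this shows the effect described below: the difference between a 40% and a 30% Kryder rate is small, while the difference between 20% and 10% is several times larger.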
At the historic 30-40%/year we are in the flat part of the graph, where the endowment is low and it doesn't vary much with the Kryder rate. This meant that long-term storage was effectively free; if you could afford to store the data for a few years, you could afford to store it "for ever" because the cost of storing it for the rest of time would have become negligible.
But suppose the Kryder rate drops below about 20%/year. We are in the steep part of the graph where the endowment needed is much higher and depends strongly on the precise Kryder rate. Which, of course, we are not going to know, so the cost of long-term storage becomes much harder to predict.
We don't have to suppose. This graph, from Preeti Gupta at UCSC, shows that in 2010, before the floods in Thailand, the Kryder rate had dropped. Right now, disk is about 7 times as expensive as would have been predicted in 2010. The red lines show the range of industry projections going forward, 10-20%/year. In 2020 disk is projected to be between 100 and 300 times as expensive as would have been projected in 2010. As my first graph showed, this is a big deal for anyone who needs to keep data for the long term.
No-one should be surprised that in the real world exponential curves can't go on for ever. Here is Randall Munroe's explanation. In the real world exponential growth is always the first part of an S-curve.
Why has the Kryder rate slowed? This 2009 graph from Seagate shows that what looks like a smooth Kryder graph is actually the superimposition of a series of S-curves, one for each technology. One big reason for the slowing is technical, each successive technology transition gets harder - the long delay in getting HAMR into production is the current example. But this has economic implications. Each technology transition is more expensive, so the technology needs to remain in the market longer to earn a return on the investment. And the cost of the transition drives industry consolidation, so we now have only a little over 2 disk manufacturers. This has transformed disks from a very competitive, low-margin business into a stable 2-vendor one with reasonably good margins. Increasing margins slows the Kryder rate.
This isn't about technology "hitting a wall" and the increase in bit density stopping. It is about the interplay of technological and business factors slowing the rate of decrease in $/GB. For people who look only at the current cost of storage, this is irritating. For those of us who are concerned with the long-term cost of storage, it is a very big deal.
The following is a guest post by the entire cohort of the NDSR Boston class of 2014-15.
The first ever Boston cohort of the National Digital Stewardship Residency kicked off in September, and the five residents have been busy drinking from the digital preservation firehose at our respective institutions. You can look forward to individual blog posts from each resident as this 9-month residency goes on, but we decided to start with a group post to outline each of our projects as they’ve developed so far. (To keep up with us on a more regular basis, keep an eye on our digital preservation test kitchen blog.)
Sam DeWitt – Tufts University
I will be at Tufts’ Tisch Library during my residency, looking at ways the university might better understand the research data it produces. The National Science Foundation has required data management plans from grant-seekers for several years now, and some scholarly journals have followed suit by mandating that researchers submit their data sets along with accepted work. These mandates play a significant role in the wider movement toward sharing research data.
Data sharing, as a concept, is particularly trendy right now (try adding ‘big data’ to the term ‘data sharing’ in a Google search) but the practice is open to debate. Its advantages and disadvantages are articulated quite nicely here. As someone who works in the realm of information science, I generally believe research is meant to be shared and that concerns can be mitigated by policy. But that is easier said than done, as Christine Borgman so succinctly argues in “The Conundrum of Sharing Research Data”: “The challenges are to understand which data might be shared with whom, under what conditions, why, and to what effects. Answers to these questions will inform data policy and practice.”
I hope that in these few months I can gain a broader understanding of the data Tufts produces while I continue to examine the policies, practices and procedures that aid in their curation and dissemination.
Rebecca Fraimow – WGBH
My project is designed a little differently from the ones that my NDSR peers are undertaking; instead of tackling a workflow from the top down, I’m starting with the individual building blocks and working up. Over the course of my residency, my job is to embed myself into the different aspects of daily operations within the WGBH Media, Library and Archives department. Everything that I find myself banging my head into as I go along, I document and make part of the process for redesigning the overall workflow.
Since WGBH MLA is currently in the process of shifting over to a Fedora-based Hydra repository — a major shift from the previous combination of Filemaker databases and proprietary Artesia DAM — it’s the perfect time for the archives to take a serious look at reworking some legacy practices, as well as designing new processes and procedures for securing the longevity of a growing ingest stream that is still shifting from primarily object-based to almost entirely file-based.
At the end of the residency, I’ll be creating a webinar in order to share some best practices (or, at least, working practices) with the rest of the public broadcasting world. Many broadcasting organizations are struggling through archival workflow problems without having the benefit of WGBH’s strong archiving department. It’s exciting to know that the work I’m doing is going to have a wider outward-facing impact — after all, sharing knowledge is kind of what public broadcasting is all about.
Joey Heinen – Harvard University
As has been famously outlined by the Library of Congress, digital formats are just as susceptible to obsolescence as analog formats, due to any number of factors. At Harvard Library, my host for the NDSR, we are grappling with format migration frameworks at a broad level while looking to implement a plan for three specific, now-obsolete formats: Kodak PhotoCD, RealAudio and SMIL playlists. So far my work has involved an examination of the biggest challenges for each format.
For example, Kodak PhotoCD incorporates a form of chroma subsampling (PhotoYCC) based on the Rec. 709 standard for digital video, rather than the RGB or CIE profiles more typical for still images. PhotoYCC captures color information beyond what is perceptible to the human eye and well beyond the confines of color spaces such as RGB; this is an example of the format attributes that must drive the migration process, so that fundamental content and information from the original is not lost.
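To make the migration concern concrete, here is a minimal decoding sketch for a single PhotoYCC pixel. The constants are the ones commonly cited from Kodak's PhotoCD documentation, reproduced here from memory, so treat them as approximate and illustrative rather than authoritative:

```python
def photoycc_to_rgb(y8, c1_8, c2_8):
    """Decode one 8-bit PhotoYCC pixel to unclipped RGB.
    Results can land outside 0-255: PhotoYCC encodes a wider
    gamut than plain RGB, which is exactly what a careless
    migration would clip away."""
    lum = 1.3584 * y8                 # luma scaling
    c1 = 2.2179 * (c1_8 - 156)        # blue-difference chroma
    c2 = 1.8215 * (c2_8 - 137)        # red-difference chroma
    r = lum + c2
    g = lum - 0.194 * c1 - 0.509 * c2
    b = lum + c1
    return r, g, b

# A neutral pixel (chroma at the encoded zero points) decodes
# to equal R, G and B; a fully bright one exceeds 255.
print(photoycc_to_rgb(128, 156, 137))
print(photoycc_to_rgb(255, 156, 137))
```

The second call illustrates the preservation problem: legal PhotoYCC values decode to RGB components above 255, so a naive conversion to an 8-bit RGB profile silently discards information from the original.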
Other challenges that impact a project such as this are managing the human components (stakeholder roles, and arriving at shared conclusions about the format’s most noteworthy characteristics) as well as ensuring that existing tools for converting, validating and characterizing files are correctly managing and reporting on the format (I explored some of these issues here). A bibliography (PDF) that I compiled is guiding this process; its contents have allowed me to approach the systems at Harvard in order to find the right partners and technological avenues for developing a framework. Look for more updates on the NDSR-Boston website (as well as my more substantive project update on “The Signal” in April 2015).
Jen LaBarbera – Northeastern University
My residency is at Northeastern University’s Archives and Special Collections, though as with a lot of digital preservation projects and/or programs, my work spans a number of other departments — library technology services, IT, Digital Scholarship Group and metadata management.
My project at Northeastern relies heavily on the new iteration of Northeastern’s Fedora-based digital repository (DRS), which is currently in its soft-launch phase and is set to roll out in a more public way in early 2015. My projects at Northeastern are best summed up by the following three goals: 1) create a workflow for ingesting recently born-digital content to the new DRS, 2) create a workflow for ingesting legacy born-digital (obsolete format) content to the new DRS, and 3) help Northeastern Libraries develop a digital preservation plan.
I’m starting with the first goal, ingesting recently born-digital content. As a test case to help us create a more general workflow, we’re working on ingesting the content of the Our Marathon archive. Our Marathon is a digital archive created as a digital humanities project following the bombing at the 2013 Boston Marathon. The goal is to transfer all materials (in a wide variety of formats) from their current states/platforms (Omeka, external hard drives, Google Drive, local server) to the new DRS. I’ve spent the first part of this residency drinking in all the information I can about the DRS, digital humanities projects (in general and at Northeastern), and wrapping my brain around these projects; now, the real fun begins!
Tricia Patterson – MIT Libraries
My residency is within MIT’s Lewis Music Library, a subject-specific library at MIT that is much-loved by students, faculty, and alumni. They are currently looking at digitizing and facilitating access to some of their analog audio special collections of MIT music performances, which has also catalyzed a need to think about their digital preservation. The “Music at MIT” digital audio project was developed in order to inventory, digitize, preserve, and facilitate access to audio content in their collections. And since audio content is prevalent throughout MIT collections, the “Making Music Last” initiative was designed to extend the work of the “Music at MIT” digital audio project and develop an optimal, detailed digital preservation workflow – which is where I came in!
Through a gap analysis of the existing workflow, a broad review of workflow methodologies from other fields, and collaboration with stakeholders across the board, our team is working on creating high- and low-level life cycle workflows, calling out a digital audio use case, and evaluating suitable options for an access platform. This comprehensive workflow will contribute to overall institutional knowledge instead of limiting important information to one stakeholder, and it will clarify roles between individuals throughout the process, improving engagement and communication. Mapping out the work process also enhances our understanding of the requirements for tools, such as Archivematica or BitCurator, that should be adopted and incorporated with a high degree of confidence. As the process moves from design to implementation and testing, the detailed workflow also ensures reliability and repeatable quality in our processes. It’s been a highly collaborative and educational process so far – stay tuned for how it pans out!
Another year, another lineup of Islandora Camps to bring the community together. We have a great roster of camps for 2015, hopefully providing all of you out there with at least one that's close and convenient so you can partake in Islandora's secret sauce.
Dates are not quite set yet for the latter events, but here's the general schedule so you can plan ahead:

- Islandora Camp BC - Vancouver, BC - February 16 - 18
- Islandora Camp EU2 - Madrid, Spain - May 27 - 29
- Islandora Conference (way more info on this in days to come) - August
- Islandora Camp CT - Hartford, CT - Late October or Early November

See you at Islandora Camp!
Open Knowledge Foundation: Global Witness and Open Knowledge – Working together to investigate and campaign against corruption related to the extractives industries
Sam Leon, one of Open Knowledge’s data experts, talks about his experiences working as an School of Data Embedded Fellow at Global Witness.
Global Witness are a Nobel Peace Prize-nominated not-for-profit organisation devoted to investigating and campaigning against corruption related to the extractives industries. Earlier this year they received the TED Prize and were awarded $1 million to help fight corporate secrecy, on the back of which they launched their End Anonymous Companies campaign.
In February 2014 I began a six-month ‘Embedded Fellowship’ at Global Witness, one of the world’s leading anti-corruption NGOs. Global Witness are no strangers to data. They’ve been publishing pioneering investigative research for over two decades now, piecing together the complex webs of financial transactions, shell companies and middlemen that so often lie at the heart of corruption in the extractives industries.
Like many campaigning organisations, Global Witness are seeking new and compelling ways to visualise their research, as well as to use more effectively the large amounts of public data that have become available in the last few years.

“Sam Leon has unleashed a wave of innovation at Global Witness”
-Gavin Hayman, Executive Director of Global Witness
As part of my work, I’ve delivered data trainings at all levels of the organisation – from senior management to the front line staff. I’ve also been working with a variety of staff to use data collected by Global Witness to create compelling infographics. It’s amazing how powerful these can be to draw attention to stories and thus support Global Witness’s advocacy work.
The first interactive we published on the sharp rise of deaths of environmental defenders demonstrated this. The way we were able to pack some of the core insights of a much more detailed report into a series of images that people could dig into proved a hit on social media and let the story travel further.
See here for the full infographic on Global Witness’s website.
But powerful visualisation isn’t just about shareability. It’s also about making a point that would otherwise be hard to grasp without visual aids. Global Witness regularly publish mind-boggling statistics on the scale of corruption in the oil and gas sector.

“The interactive infographics we worked on with Open Knowledge made a big difference to the report’s online impact. The product allowed us to bring out the key themes of the report in a simple, compelling way. This allowed more people to absorb and share the key messages without having to read the full report, but also drew more people into reading it.”
-Oliver Courtney, Senior Campaigner at Global Witness
Take for instance, the $1.1 billion that the Nigerian people were deprived of due to the corruption around the sale of Africa’s largest oil block, OPL 245.
$1.1 billion doesn’t mean much to me; it’s too big a number. What we sought to do visually was represent the loss to Nigerian citizens in terms of things we can understand, like basic health care provision and education.
See here for the full infographic on Shell, ENI and Nigeria’s Missing Millions.
The aim was to bring together and visualise the vast number of corruption case studies involving shell companies that Global Witness and its partners have unearthed in recent years.
It was a challenging project that required input from designers, campaigners, developers, journalists and researchers, but we’re proud of what we produced.
Open data principles were followed throughout, as Global Witness were committed to creating a resource that its partners could draw on in their advocacy efforts. The underlying data was made available in bulk under a Creative Commons Attribution-ShareAlike license, and open source libraries like Leaflet.js were used. Other parties were also invited to submit case studies to the database.

“It’s transformed the way we work, it’s made us think differently about how we communicate information: how we make it more accessible, visual and exciting. It’s really changed the way we do things.”
-Brendan O’Donnell, Campaign Leader at Global Witness
For more information on the School of Data Embedded Fellowship Scheme, and to see further details on the work we produced with Global Witness, including interactive infographics, please see the full report here.
SPARQL queries are a great way to explore Linked Data sets - be it our STW with its links to other vocabularies, the papers of our repository EconStor, or persons and institutions in economics as authority data. ZBW has therefore long offered public endpoints. Yet it is often not so easy to figure out the right queries: the classes and properties used in the data sets are unknown, and the overall structure requires some exploration. We have therefore started collecting the queries in use at ZBW in our new SPARQL Lab, where they can serve others as examples for working with our datasets.
A major challenge was to publish queries in a way that allows not only their execution, but also their modification by users. The first approach to this was pre-filled HTML forms (e.g. http://zbw.eu/beta/sparql/stw.html). Yet that couples the query code to the code of the HTML page, and to a hard-coded endpoint address. It does not scale to multiple queries on a diversity of endpoints, and it is difficult to test and to keep in sync with changes in the data sets. Besides, offering a simple text area without any editing support makes it quite hard for users to adapt a query to their needs. The YASQE and YASR components of Laurens Rietveld's YASGUI suite address these issues, offering:
- SPARQL syntax highlighting and error checking
- Extremely customizable: All functions and handlers from the CodeMirror library are accessible
- Persistent values (optional): your query is stored for easier reuse between browser sessions
- Prefix autocompletion (using prefix.cc)
- Property and class autocompletion (using the Linked Open Vocabularies API)
- Can handle any valid SPARQL resultset format
- Integration of preflabel.org for fetching URI labels
With a few lines of custom glue code, and with the friendly support of Laurens Rietveld, author of the YASGUI suite, it is now possible to load any query stored on GitHub into an instance on our beta site and execute it. Check it out - the URI
loads, views and executes the query stored at https://github.com/jneubert/sparql-queries/blob/master/class_overview.rq on the endpoint http://data.nobelprize.org/sparql (which is CORS enabled - a requirement for queryRef to work).
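A class-counting query of the sort class_overview.rq embodies might look roughly like this (a sketch of the idea, not the actual contents of the published file):

```sparql
# Count the instances of each class on the endpoint
SELECT ?class (COUNT(?instance) AS ?count)
WHERE {
  ?instance a ?class .
}
GROUP BY ?class
ORDER BY DESC(?count)
LIMIT 50
```

Run against an unfamiliar endpoint such as data.nobelprize.org, a query like this gives a quick overview of which classes the dataset actually uses - a natural starting point for further exploration.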
Links like this, with descriptions of each query's purpose, grouped according to tasks and datasets, and ordered in a sensible way, may provide a much more accessible repository and starting point for explorations than just a directory listing of query files. For ongoing or finished research projects, such a repository - together with versioned data sets deployed on SPARQL endpoints - may offer an easy-to-follow and traceable way to verify presented results. GitHub provides an infrastructure for publicly sharing the version history, and makes contributions easy: changes and improvements to the published queries can be proposed and integrated via pull requests, and an issue queue can handle bugs and suggestions. Links to queries authored by contributors, which may be saved in different repositories and project contexts, can be added straightaway. We would be very happy to include such contributions - please let us know.
In early 2012, I started on the report that became Reordering Ranganathan: Shifting User Behaviors, Shifting Priorities with Lynn Silipigni Connaway. Back then we called it the User Behavior Report. Not a catchy title, but it broadly reflected what we both studied. Our intention was to learn about each other's research and bring our experiences, perspectives, and research together under one umbrella.
You may be wondering why we had to learn about each other's research given that we worked for the same organization. I actually started at OCLC Research just six months prior, in 2011. Lynn and I had very disparate experiences, perspectives, and paths to OCLC. I earned a Ph.D. in Business Administration – Information Systems; Lynn earned her Ph.D. in Library and Information Science with a minor in Public Policy. Before beginning a research career, I worked in tech companies and Lynn worked in school, academic, and public libraries.
As colleagues, we wanted to explore how our research interests overlapped and begin to think about collaborative user behavior projects. We wanted to develop a common set of ideas we could collectively contribute to through our research. We also wanted to describe the ideas in ways that would be relevant to our intended audiences: librarians, library researchers, and information scientists.
In studying user behavior, we both are interested in how people discover, access, and use/reuse content. In an early outline for our report we wrote “We want to know how people are getting their information, why they are making these choices, and what information or sources are meeting their needs.”
At one of our meetings, Lynn suggested using Ranganathan’s five laws as a framework for our report. I was intrigued. Given my background, I never had heard of them. But as we began reviewing the laws and literature about them, it was interesting for me to think about them in the context of my research interests.
Over the course of several meetings we discussed our understanding of each law and thought about how our research areas applied. In doing so we began to stretch, adapt, and change each law’s wording to help us more clearly articulate to each other why we thought our research fit.
Take the first law, “books are for use.” Like many researchers, our interests extend beyond books to other physical and digital materials in the library and more generally on the Web. Moreover, we are interested in “how people are getting their information.” Our interpretation of the law reflects these overlapping interests – develop the physical and technical infrastructure needed to deliver physical and digital materials. Our interpretations of the other laws developed in similar ways.
Discussions with a colleague, Andy Havens, prompted us to reorder the laws as well. When we thought about it, we agreed that scarcity of time not content is the challenge for people these days. Inundated with information, we want not only quick, but also convenient ways to find, get, and use what we need. And with that the reordering began.
We organized the report so that each chapter could stand on its own. In each chapter, we examine the law in today’s environment given scholars’ interpretations and research in our areas of interest. We also discuss some ideas about how to apply our interpretations of the law given findings from the research.
Although the project began as a means to help us think about the purpose and scope of our research and how our interests overlapped, we also were interested to see what libraries were doing in practice when it came to our interpretation of Ranganathan’s five laws. Could we find examples of what we described?
We found a number of exciting, interesting ways the laws are currently unfolding in practice. We could only include a small fraction, but our hope is that reading the report or listening to the webinar will not only spark new initiatives, but also encourage you to share your current ones.

About Ixchel Faniel
Ixchel Faniel is Associate Research Scientist for OCLC Research. She is currently working on projects examining data reuse within academic communities to identify how contextual information about the data that supports reuse can best be created and preserved. She also examines librarians' early experiences designing and delivering research data services with the objective of informing practical, effective approaches for the larger academic community.
We are very pleased to share the roster of workshop instructors for the upcoming Islandora Camp in Vancouver, BC. Camp will, as usual, split up into two groups for hands-on Islandora time on the second day: one group exploring the front-end in the Admin track, and the other looking at code in the Developer track. Here are your instructors:
Developer Track
Mark Jordan is the Head of Library Systems at Simon Fraser University. He has been developing in Drupal since 2007 and is currently leading the effort to migrate SFU Library's digital collections to Islandora. He is a member of the Islandora 7.x-1.3 and 7.x-1.4 release teams and is component manager for several Islandora modules that deal with digital preservation (and developer of several other Islandora-related tools available at his GitHub page). He is also author of Putting Content Online: A Practical Guide for Libraries (Chandos, 2006). Mark taught in the Developer track at iCampCA in 2014.
Mitch MacKenzie is a Solution Architect at discoverygarden where he manages the execution of Islandora projects for institutions across North America and Europe. Mitch has been developing Islandora tools for three years and has been building with Drupal since 2006. His development contributions include the initial work on the Islandora Compound Solution Pack, Islandora Sync, Simple Workflow, and porting the XML Forms family of modules to Islandora 7. This is Mitch's first Islandora Camp as an instructor.
Admin Track
Melissa Anez has been working with Islandora since 2012 and has been the Community and Project Manager of the Islandora Foundation since it was founded in 2013. She spends her time arranging Islandora events, doing what she can to keep the Islandora community ticking along, and writing about herself in the third person in blog posts. Melissa taught in the Admin Track at several previous camps.
Erin Tripp is a librarian, journalist, and business development manager for an open source software services company. Personally, Erin believes in investing in people and ideas – making the open source software space a natural fit. Since 2011, Erin has been involved in the Islandora project, serving as project manager on close to 30 different Islandora projects. The projects ranged from consulting and installation to custom development and data migrations. This is Erin's first Islandora Camp as an instructor.
The rest of Camp will be filled with sessions, and we want you to get up and lead the room. A Call for Proposals is open until December 15th. You can check out the slides linked on schedules from previous camps to see what kinds of presentations the community has given in the past.
The 2014 LITA Forum took place in Albuquerque, NM the first week of November. I had the opportunity to present on Islandora and what we accomplished with the Detroit Public Library Digital Collections site that we built.
Presentation
At the beginning of my presentation I took some time to answer the question, "Why use a DAMS?" instead of building an image gallery on your website or using a web service. (We will have a blog post up next week expanding on this topic.)
About half the audience had heard of Islandora, but only a few were using it in their libraries. Most of them did not know the details about what Islandora is and what...
When I first started my job as Digital Curation Coordinator in June, I didn’t quite know what I would be doing. Then I started figuring it out. As I’ve gotten settled, I’ve realized that I want to be more proactive in identifying tools and platforms that the researchers I’m working with are using so that I can connect with their experience more easily.
However, the truth is that I find it hard to know what tools I should focus on. What usually happens when I learn about a new tool is a cursory read through the documentation… I familiarize myself well enough to share in a few sentences what it does, but most of the time I don’t become incredibly familiar. There are just soooo many tools out there. It’s daunting.
Knowing my tendencies, I decided it would be a good challenge for me to dig deeper into three areas where I am more ignorant than I’d like to be.
I don’t know a lot about data analysis but I think it will be critical in terms of how well I can understand researchers. Of the three, I’m most familiar with SPSS already and I’ll probably devote the most time to learning R (perhaps through this data science MOOC, which fellow LITA blog writer Bryan pointed out). With SAS, I’m mostly interested in learning how it differs from the others rather than delving too deep.
Why these two? It’s pretty arbitrary, I guess: I learned about them in a recent ecology data management workshop I was presenting at. As is often the case, I learned a lot from the other presenters! A big part of my job is figuring out how to help researchers manage their data – and a big barrier to that is the painstaking work of creating metadata.
Digital forensics tool BitCurator
I was lucky enough to be able to attend a two-day workshop at my institution, so I have played around with this in the past. BitCurator is an impressive suite of tools, and I’m convinced I need to find a use case so I can explore it further. This is a perfect example of a tool I already know decently – but I really want to know better, especially since I already have people bringing me obsolete media and asking what I can do about it.
What tools do you want to learn? And for anyone who helps researchers with data management in some capacity, what additional tools do you recommend I look into?
Of interest to many Hydranauts:
The Tenth International Conference on Open Repositories, OR2015, will be held on June 8-11, 2015 in Indianapolis (Indiana, USA). The organizers are pleased to invite you to contribute to the program. This year’s conference theme is:
LOOKING BACK, MOVING FORWARD: OPEN REPOSITORIES AT THE CROSSROADS
OR2015 is the tenth OR conference, and this year’s overarching theme reflects that milestone: Looking Back/Moving Forward: Open Repositories at the Crossroads. It is an opportunity to reflect on and to celebrate the transformative changes in repositories, scholarly communication and research data over the last decade. More critically however, it will also help to ensure that open repositories continue to play a key role in supporting, shaping and sharing those changes and an open agenda for research and scholarship.
The full call for proposals can be found at http://www.or2015.net/call-for-proposals/
From Kristi Park, Marketing Manager, Texas Digital Library
Austin, Texas: The Texas Digital Library (TDL) is pleased to announce the development of cost-effective, easy-to-use digital preservation storage for its member institutions through DuraCloud™@TDL. With DuraCloud™@TDL, members can accurately plan preservation costs, enjoy predictability of service, and rely on known, durable technologies for ensuring the integrity of their digital collections.
Happy Movember, DPLA friends! The month of November brings about a great many things—Thanksgiving, brisk breezes, falling leaves—including ditching the razor for a good cause. Movember encourages participants to grow out mustaches and beards to raise awareness for men’s health issues.
In celebration, we’re providing some historic grooming inspiration. Check back once a week for a selection of some of the best beards and mustaches from the DPLA collection, and up your “Movember” game!
This week, we’re featuring the one thing the North and South could unite around: excellent facial hair.
Major General John M. Schofield, Officer of the Federal Army
Captain William Harris Northrup
John F. Mackie, Medal of Honor Recipient
Lewis C. Shepard, Medal of Honor Recipient
Portrait of Captain George E. Dolphin, of Minnesota
Portrait of Andrew Anderson, of Minnesota
Portrait of Captain Asgrim K. Skaro [?], of Minnesota
Portrait of Jacob Dieter, of Minnesota
Portrait of Simon Gabert
Portrait of Jeremiah C. Donahower, of Minnesota
New vacancy listings are posted weekly on Wednesday at approximately 12 noon Central Time. They appear under New This Week and under the appropriate regional listing. Postings remain on the LITA Job Site for a minimum of four weeks.
New This Week
Visit the LITA Job Site for more available jobs and for information on submitting a job posting.