Here are some more random thoughts about the seminar readings this week.

Buckland, M. K. (1991). Information as thing. JASIS, 42(5):351–360.
This is another classic in the field where Buckland uses the notion of information-as-thing as a fulcrum for exploring what information is. I noticed on re-reading this paper that he seems to feel that information science theorists have dismissed the study of information-as-thing. So in many ways this article is a defense of the study of information as an object. He uses this focus on the materiality of documents to explore and delimit other aspects of information systems, such as how events can be viewed as information and the situational aspects of information. Early on in the paper is one of his most interesting findings, a matrix for characterizing information along two axes:

|         | Intangible        | Tangible               |
|---------|-------------------|------------------------|
| Entity  | Knowledge         | Data                   |
| Process | Becoming informed | Information processing |
His analysis keeps returning to the centrality of information-as-thing, in an attempt to avoid this logical dead end:
If anything is, or might be, informative, then everything is, or might well be, information. In which case calling something “information” does little or nothing to define it. If everything is information, then being information is nothing special. (Buckland, 1991, p. 356)
It seems to me that the tension here is one of economy: you can’t put everything in the archive, things must be appraised, some things are left out of the information system. Buckland does note that not everything needs to be relocated into the information system to become information:
Some informative objects, such as people and historic buildings, simply do not lend themselves to being collected, stored, and retrieved. But physical relocation into a collection is not always necessary for continued access. Reference to objects in their existing locations creates, in effect, a “virtual collection.” One might also create some description or representation of them: a film, a photograph, some measurements, a directory, or a written description. What one then collects is a document describing or representing the person, building, or other object. (Buckland, 1991, p. 354)
But even in this case a reference to or a representation of the thing must be created, and it must be made part of the information system. This takes some effort by someone, or an action by something. Even in the world of big data and the Internet of Things that we live in now, the information system is not as big as the universe. We make decisions to create and deploy devices, such as smart thermostats, to monitor our environments. We build systems to aggregate and analyze the data to inform more decisions. Can these systems be thought of as operating outside of human experience? I guess there are people like Stephen Wolfram who think that the universe itself (which includes us) is an information system, or really a computational system. I wonder what Wolfram and Buckland would have to say to each other…
Some of the paper seems to be defending information-as-thing a bit too strenuously, to the point that it seems like the only viable way of looking at information. So I liked that Buckland closes with this:
It is not asserted that sorting areas of information science with respect to their relationship to information-as-thing would produce clearly distinct populations. Nor is any hierarchy of scholarly respectability intended.
Information certainly can be considered as material, and Buckland demonstrates it’s a useful lever for learning more about what information is. But considering it only as material, absent information-as-process and other situational aspects, leads to some pretty deep philosophical problems. Somewhat relatedly, Dorothea Salo and I recently wrote a paper that examines Linked Data using the work of Buckland and Suzanne Briet (Summers & Salo, 2013).

Buckland, M. K. (1997). What is a “document”? JASIS, 48(9):804–809.
Again, as in (Buckland, 1991), Buckland is attempting to defend ground that he feels many find untenable: defining the scope and limits of the word “document”. Reading between the lines a bit, he sees the explosion of printed information as giving rise to attempts to control it; and since printed information exceeds our ability to organize it, it seems only natural to limit the scope of documentation so that the whole enterprise doesn’t seem like folly.
A document is evidence in support of a fact. (Briet, quoted in Buckland, 1997, p. 806)
Buckland quotes Briet to focus the discussion on the value of evidence: a star is not a document, but a photograph of a star is. This reminds me a lot of his discussion of situational information, where the circumstances have a great deal to say about whether something is information or a document.
traces of human activity, and other objects not intended as communication (Buckland, 1997, p. 807)
This reminds me of Geiger’s work on trace ethnography, e.g. looking at the behavior of Wikipedia bots (Geiger & Ribes, 2011). The quote from Barthes makes me think of pragmatic philosophy:
… the object effectively serves some purpose, but it also serves to communicate information. (Buckland, 1997)
And what of the purpose? Can something communicate information while effectively not serving a purpose?
Information systems can also be used to find new evidence, so documents are not limited to things having evidential value now. Electronic documents push the boundaries even more, because information is less and less like a distinct thing, since everything in a computer is ultimately represented as binary ones and zeros passing through logic gates.
In both this article and (Buckland, 1991) Buckland seems to resist the notion that information could be anything:
If anything is, or might be, informative, then everything is, or might well be, information. In which case calling something “information” does little or nothing to define it. If everything is information, then being information is nothing special. (Buckland, 1991, p. 356)
if the term “document” were used in a specialized meaning as the technical term to denote the objects to which the techniques of documentation could be applied, how far could the scope of documentation be extended. What could (or could not) be a document? The question was, however, rarely formulated in these terms. (Buckland, 1997, p. 805)
Why is it a problem for anything to potentially be information? Is it only a problem because he wants to be able to identify information as an object? If he accepts that information always exists as part of a process, and that these processes are extended in time, does that help relieve this tension about what can be a document?

Saracevic, T. (1999). Information science. Journal of the American Society for Information Science, 50(12):1051–1063.
Definitions of information science are best understood by considering the problems that practitioners are focused on. Saracevic sees information science as having three general characteristics, which are not unique to information science:
- interdisciplinary in nature
- connected to technology
- with a social/human dimension that shapes society
He also identifies three powerful ideas:
- information retrieval (formal logic for processing information)
- relevance: a model for examining information retrieval systems
- interaction: models for feedback between people and information systems
Saracevic’s emphasis on problems seems like it could provide a useful avenue and citation trail for me to explore with respect to the Broken World Thinking outlined by Jackson (2014). Is it possible to see Saracevic’s problems as Jackson’s sites for repair? Saracevic divides information science research into two camps: systems-centered research and human-centered research. Is his wanting to merge the two poles of information science an attempt to repair something he sees as broken?

References
Buckland, M. K. (1991). Information as thing. JASIS, 42(5), 351–360.
Buckland, M. K. (1997). What is a “document”? JASIS, 48(9), 804–809.
Geiger, R. S., & Ribes, D. (2011). Trace ethnography: Following coordination through documentary practices. In 2011 44th Hawaii International Conference on System Sciences (HICSS) (pp. 1–10). IEEE.
Jackson, S. J. (2014). Rethinking repair. In T. Gillespie, P. J. Boczkowski, & K. A. Foot (Eds.), Media technologies: Essays on communication, materiality and society. MIT Press. Retrieved from http://sjackson.infosci.cornell.edu/RethinkingRepairPROOFS(reduced)Aug2013.pdf
Summers, E., & Salo, D. (2013). Linking things on the web: A pragmatic examination of linked data for libraries, archives and museums. arXiv preprint arXiv:1302.4591. Retrieved from http://arxiv.org/abs/1302.4591
Here is a post that details some thoughts and experiences that led to my short slides and other speaking notes for the Navigating Linked Data panel given at the Access YYZ conference. This was originally envisioned as a sort of interactive workshop, since I’ve learned a lot of this by tinkering, so forgive me if it flows a bit awkwardly in places. It was wrapped into a panel to give a better range of projects and approaches, which I’m excited about. All of the following notes were built off of experimentation for particular use cases from a data munger’s viewpoint, not a semantic web developer’s viewpoint, so any corrections, updates, or additions for future reference or edification are very much welcome.
If you have questions, please let me know - @cm_harlow or email@example.com.

Exploding Library Data Authorities: The Theoretical Context
So this post/presentation-notes/etc. is going to work off the assumption that you know about RDF, that you’re down with the idea that library data should move towards an RDF model, and that you understand the benefits of exposing this data on the web.
One area that I’m particularly interested in experimenting with is the concept of Library Authorities in a Linked Open Data world. Not just the concept of Library Authorities, however, but how we can leverage years, nay, decades of structured (to varying quality levels) data that describes concepts we use regularly in library data creation. How do we best do this without recreating the more complicated parts of Library Authorities in RDF? I would imagine we want to create RDF data and drop many of the structures around Library Authorities that make them unsustainable as they exist today. Why, then, call it “authority” here? Here is a quote I particularly agree with from Kevin Ford, when he was still working at the Library of Congress, and Ted Fons of OCLC:
[Authority the term] expresses, subtly or not so subtly, the opportunities libraries and other cultural organizations have in re-asserting credibility on the Web along with providing new means for connecting and contextualizing users to content. The word “Authority” (along with managing “authoritative” information on People, Places, Organizations, etc.) is more valuable and accurate in a larger Web of interconnected data. Nevertheless, because a BIBFRAME Authority is not conceptually identical to the notion of a traditional library authority, the name - Authority - may be confusing and distracting to traditional librarians and their developers. – “On BIBFRAME”, section 3.6, http://bibframe.org/documentation/bibframe-authority/
Okay, please don’t run away now because there is a mention of BIBFRAME. This post is not about BIBFRAME, and I just really like the approach to discussing the term ‘Authority’ here.
I continue to use the term Library Authority (though always with air quotes) because, to be frank, I’m uncertain what else to call the process I’m about to unravel, and because Library Authorities carry with them, in my mind, a whole question of infrastructure as well as data creation/use. I’d like to see Library Authorities in LOD evolve into how we interact with datastores that don’t directly describe some sort of binary or physical object. They would explain concepts we want to unite across platforms, systems, and projects. They would curate data not just about physical/digital objects but about concepts, as expressed in the above quote.
In line with RDF best practices for ontology development, LOD Authorities should try to reuse concepts where possible and, where not possible, build relationships between the existing vocabularies and ontologies and what we’re trying to describe locally. I want those relationships to be explicit and documented so we can then use Library Authorities data more readily to enhance our records. A necessary first step for Library Authorities in LOD (or in any other form, honestly) is just getting all our possible controlled access points - WHEREVER THEY MIGHT APPEAR IN LIBRARY DATA - connected to metadata describing concepts, i.e. LOD Authorities, through URI capture of some sort. Hence my really strong interest, and a lot of tinkering, in the area of library data reconciliation.
I should also mention that a lot of this thinking was born of the fact that many commonly used Library Authorities are difficult, if not impossible, for many institutions to update. Adding Library of Congress Name Authority File records in particular requires an institution to have the resources to support its employees through a huge training bureaucracy, whether we are talking about PCC (Program for Cooperative Cataloging) institutional-level membership or NACO funnel cataloging. Training often takes months; then there is a review period that can last years. This has made Library Authorities work, which is very much a public service, neither equitable nor sustainable, as many institutions find that navigating that bureaucracy is prohibitive.
My critique of the system does not mean that I in any way devalue the individual effort that goes into Library Authority work. Many people donate time and energy to accurately describe concepts that don’t relate directly to their work or institution. They realize that good metadata is a public service, and I appreciate that, because it is. I’m critiquing the larger system here, not the individuals working within it.
Regardless, back on the agenda: the following describes one way I am attempting to begin expanding the concept of Library Authorities for a particular use case at the University of Tennessee, Knoxville (UTK) Libraries. I’m hoping this will grow into a new way for us to handle Library Authorities, exploding them so that they eventually become a store of RDF statements for the metadata we use to describe concepts, or to negotiate our concepts with external data sources’ descriptions of them.

Exploding Library Data Authorities: The Use Case
At UTK, there is a special unit in the Special Collections department: the Great Smoky Mountains Regional Collection. This unit deals with representing all kinds of materials and collections that focus on the Great Smoky Mountains of southern Appalachia, in the southeastern United States. They also try to represent concepts that are important to this region and culture but weren’t adequately represented in, or open to their edits in, the usual Library Authorities and vocabularies. This became the Database of the Smokies (DOTS) terms: a very simple list of terms, sort of in taxonomy form, primarily used for sorting and indexing the citations of works on the region and culture that go into the Database of the Smokies.
DOTS is currently just a Drupal plugin working off a database of citations/works with appropriate DOTS index terms applied. Some of these terms were haphazardly applied to non-MARC descriptive metadata records (largely when the DOTS project managers were creating the metadata themselves), yet the digital objects with DOTS terms assigned do not show up in the DOTS database of works/citations/objects. The terms were not applied to MARC records describing resources that go into the larger Great Smoky Mountains Regional Collection either. The DOTS terms had neither identifiers nor a machine-parseable hierarchical structure. Finally, the DOTS terms often only loosely mirrored the preferred access point text-string construction, making for inconsistent subject-term facets in the digital collections platform (as LC authorities and DOTS are both used).
What we wanted to do with DOTS was a number of things, including addressing the above:
- Get the hierarchy formalized and machine-readable
- Assign terms unique identifiers instead of using text-string matching as a kind of identifier.
- Build out reconciliation services for using the updated DOTS in MARC and non-MARC metadata, replete with capturing the valueURI or $0 fields as well.
- Pull in subject terms used for DOTS resources outside of DOTS (in the digital collections platform)
- Allow the content specialists to be able to add information about these terms - such as local/alternative names, relationships to other terms, other such information describing the concept
- Allow external datasets, in particular Geonames and LoC, to enhance the DOTS terms through explicitly declaring relationships between DOTS terms and those datasets.
- Look to eventually building this taxonomy into a LOD vocabulary, then enhancing that vocabulary into a full-fledged LOD ontology (the difference being that an ontology makes fuller use of formal statements, so that accurate reasoning and inference based off DOTS can be done; a vocabulary may have some formal statements, but the reasoning cannot entirely be trusted to return accurate results).
- Find ways to then pull updates to the term descriptions, where there are relationships to LoC vocabulary terms, for seeding updated Authority records.
- Use this work and experimentation to support further exploding of the concept of Library Authorities.
We are focusing on relating DOTS to LoC (primarily LCSH and LCNAF) and Geonames at the moment because they are the vocabularies with the most to offer DOTS: LCSH and LCNAF offer a broader context in which to place the DOTS vocabulary, while Geonames offers hierarchical geographic information and coordinates, consistently encoded and part of the record. Inconsistency in use, in coordinate encoding, and in where relationships are declared in the record are the reasons we did not just rely on the LoC authorities to link out to Geonames, even though some matching of the two vocabularies has been done recently.

Building a Vocabulary: The Setup
The DOTS taxonomy currently lives as a list of text terms. We first wanted to link those terms to LC Authorities and Geonames. This was done by pulling the terms into LODRefine and using the existing reconciliation services for those vocabularies. For the LCNAF terms, a different approach was needed, as there is currently no LODRefine reconciliation service for it; this was recently solved by using a Linked Data Fragments server and HDT files to create an endpoint through which reconciliation scripts could work. We pulled in the URI and the label from the external vocabularies for each matching term.
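To give a flavor of how such a reconciliation script can work, here is a minimal sketch that queries a hypothetical Triple Pattern Fragments endpoint backed by an LCNAF HDT file. The endpoint URL and the choice of skos:prefLabel are assumptions for illustration, not the exact setup described above:

```python
# A rough sketch, not the actual UTK script: reconcile one label against a
# hypothetical Triple Pattern Fragments endpoint backed by an HDT dump.
import requests

def reconcile_label(label, endpoint="http://localhost:3000/lcnaf"):
    # Ask the fragments server for triples whose object literal is the label.
    params = {
        "predicate": "http://www.w3.org/2004/02/skos/core#prefLabel",
        "object": '"%s"@en' % label,
    }
    resp = requests.get(endpoint, params=params,
                        headers={"Accept": "application/n-triples"})
    resp.raise_for_status()
    # Each data line is one N-Triples statement; its subject is a candidate
    # match. (A real script would also filter out the fragment's hypermedia
    # control triples.)
    return sorted({line.split(" ", 1)[0].strip("<>")
                   for line in resp.text.splitlines() if line.endswith(" .")})

print(reconcile_label("Tellico Plains (Tenn.)"))
```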
We’ve then taken these terms and the matching URIs and created simple SKOS RDF N-Triples records with basic information included. In short:
- DOTS was declared as a skos:ConceptScheme and given some simple SKOS properties for name, contact.
- all terms were declared as skos:Concepts and skos:inScheme of DOTS.
- all terms were given a URI to be made into a URL by the platform below.
- the external URIs were applied with skos:closeMatch, then reviewed by content specialists for ones that could become skos:exactMatch.
- for all labels that end up with an skos:exactMatch to external vocabularies, the external vocabularies’ labels were brought in as skos:altLabel.
A snippet of one example of the SKOS RDF N-triples created:
<http://dots.lib.utk.edu/p54274> <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://www.w3.org/2004/02/skos/core#Concept> .
<http://dots.lib.utk.edu/p54274> <http://www.w3.org/2004/02/skos/core#prefLabel> "Tellico"@en .
<http://dots.lib.utk.edu/p54274> <http://www.w3.org/2004/02/skos/core#inScheme> <http://dots.lib.utk.edu/DOTS> .
<http://dots.lib.utk.edu/p54274> <http://www.w3.org/2004/02/skos/core#altLabel> "Talequo"@en .
<http://dots.lib.utk.edu/p54274> <http://www.w3.org/2004/02/skos/core#related> <http://id.loc.gov/authorities/names/no94017139> .
<http://dots.lib.utk.edu/p54274> <http://www.w3.org/2004/02/skos/core#related> <http://id.loc.gov/authorities/names/n86034608> .
<http://dots.lib.utk.edu/p54274> <http://www.w3.org/2004/02/skos/core#related> <http://sws.geonames.org/4662016/> .
...
This SKOS RDF N-Triples document was then passed through Skosify to improve and ‘validate’ the SKOS. Next, it was loaded into a Jena Fuseki SPARQL server and triple store, before being accessed and used in Skosmos, “a web-based tool providing services for accessing controlled vocabularies, which are used by indexers describing documents and searchers looking for suitable keywords. Vocabularies are accessed via SPARQL endpoints containing SKOS vocabularies.” (https://github.com/NatLibFi/Skosmos/wiki). Skosmos, developed by the National Library of Finland, is open source and was built originally for Finto, the Linked Open Data vocabulary service used by government agencies in Finland. It aims to support interoperability of SKOS vocabularies, as well as allow editing.
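The Skosify step itself is tiny. Here is a minimal sketch assuming Skosify’s Python API (pip install skosify) and hypothetical file names, rather than our exact invocation:

```python
import skosify  # pip install skosify

# Validate and tidy the SKOS: Skosify checks labels, infers missing
# inverse relationships, and returns an rdflib Graph we can serialize.
voc = skosify.skosify("dots.nt", label="DOTS")
voc.serialize(destination="dots-skosified.ttl", format="turtle")
```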
We’ve got a local instance of Skosmos with the basic DOTS SKOS vocabulary in it, used primarily as a proof of concept for the content specialists. Our DOTS Skosmos test instance supports browsing and using the vocabulary, but not editing currently. I’m hoping we can use a simple form to connect to the SPARQL server (as many existing RDF vocabulary and ontology editors are too complicated for this use), but this has been a lower priority than working on a general MODS editor first. Skosmos’s ability to visualize relationships supports the content specialists in really understanding how SKOS structure can help better define their work and discovery.
With the SKOS document, both MARC and non-MARC data with subject terms can now be reconciled and the URIs captured, either through OpenRefine reconciliation services or through reconciliation scripts. This has already helped clean up so much metadata related to this collection. We hope to start using the SPARQL endpoint directly for this reconciliation work.
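As a sketch of what that direct SPARQL reconciliation could look like, the following looks up a DOTS URI for a subject string by matching preferred and alternative labels. The endpoint URL and dataset name are assumptions based on Fuseki’s defaults, not our production setup:

```python
import requests

QUERY = """
PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
SELECT DISTINCT ?concept WHERE {
  ?concept skos:prefLabel|skos:altLabel ?label .
  FILTER (lcase(str(?label)) = lcase("%s"))
}
"""

def lookup(term, endpoint="http://localhost:3030/dots/sparql"):
    # Naive string substitution is fine for a sketch; real code should
    # escape the term properly or use a parameterized query helper.
    resp = requests.get(
        endpoint,
        params={"query": QUERY % term.replace('"', '\\"')},
        headers={"Accept": "application/sparql-results+json"})
    resp.raise_for_status()
    return [b["concept"]["value"]
            for b in resp.json()["results"]["bindings"]]

print(lookup("Tellico"))  # e.g. ['http://dots.lib.utk.edu/p54274']
```

DOTS SKOS Feedback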
This work has inspired the DOTS librarians to want to expand the kinds of ‘Library Authority’ information captured, including other schema/systems for additional classes and properties to support other types of information - everything from hiking trail lengths to the cemetery where a person is buried. The above N-Triples snippet puts forward a particularly strong use case: extending the Library Authorities record, so to speak, to better cover Cherokee concepts. ‘Tellico’ was a Cherokee town that has been partially replaced by the current U.S. town of Tellico Plains, as well as the site of many Tellico archaeological digs. The LCNAF has authority records for the latter two concepts, but not the first - not the Cherokee town. With automated reconciliation, Tellico was often linked to, or overwritten by, Tellico Plains (or other Tellico towns currently existing in the U.S.). We are building out DOTS and, we hope, other negotiation layers for Library Authorities that are being migrated to RDF and then extended, in a way that will not erase concepts like Tellico that don’t exist in the authority file. This is also an important motivation for extending many researcher identity management systems to work between the metadata that wants to link to a particular person and the authority file that may or may not have a record for that person. In moving, in the case of the LoC vocabularies, from relying on unique text-string matching to identifiers, we have moved from a sort-of-but-not-entirely closed world assumption to an open world assumption. So identifiers are not just pulled in as preparation for RDF.
Additionally, having this local vocabulary better connected in our local data systems to the external authorities has started many discussions about how we can create a new way of updating or expanding external authorities. UTK is not a PCC institution, but we do have one cataloger able to create NACO records for the Tennessee NACO funnel. This work still needs to be reviewed by a number of parties and must follow the RDA standard for creating MARC Authorities, and it is limited by the amount of work we need to do beyond NACO work. So there is not much time at present for this cataloger to spend on updating and/or creating records that relate in some way to DOTS terms. This should not mean that 1. the terms of regional interest to UTK continue to be inadequately described in Library Authorities, and 2. we continue to keep the content specialists from updating this metadata (though in a negotiated or moderated way, i.e. some kind of ingest form that can handle the data encoding and formation on the back end).
Another question this project has brought up is where to keep the RDF metadata that we are currently using to negotiate with and extend external Library Authorities. Keeping it as a separate store from, say, the descriptive metadata attached to objects has mostly been accepted as a default, but this doesn’t mean that storing concepts in, say, a Fedora instance as objects without binaries, then using Fedora to build the relationships, is not worth investigating. Additionally, do we want to pull in the full external datasets as well? It is definitely possible, with many LOD Library Authorities available as data dumps. At this moment, I would like to see this vocabulary and others continue to expand in Fuseki and Skosmos, with an eye to making this work somewhat like VIVO has done for negotiating multiple data sources in describing researchers.

DOTS SKOS Next Steps
Going forward, we would like to:
- Expand the editing capabilities of DOTS SKOS so that the content specialists can more readily and directly do this work.
- Enhance the hierarchical relationships that we can now support with SKOS. This will mostly involve a lot of manual review that can be done once we’ve got an actual editor in place.
- Review options beyond SKOS for properties that can support extending the descriptions.
- Discuss pulling in full external datasets for better relationship building and querying locally, as described above.
- See if this really is part of an evolution towards storing Library Authorities and further concept descriptions not directly related to a physical/digital object in our local data ecosystem, and how.

Resources + References
Here are some links to tools, projects, or resources that we found helpful in working on this project. They are in alphabetical order:
The next version of Mirlyn (mirlyn.lib.umich.edu) is going to take some time to create, but let's take a peek under the hood and see how the next generation of search will work.
Less and less of the digital content that forms our cultural heritage consists of static documents; more and more of it is dynamic. Static digital documents have traditionally been preserved by migration. Dynamic content is generally not amenable to migration and must be preserved by emulation.
Successful emulation requires that the entire software stack be preserved. Not just the bits the content creator generated and over which the creator presumably has rights allowing preservation, but also the operating system, libraries, databases and services upon which the execution of the bits depends. The creator presumably has no preservation rights over this software, which is nevertheless necessary for the realization of their work. A creator wishing to ensure that future audiences can access their work has no legal way to do so. In fact, creators cannot even legally sell their work in any durably accessible form. They do not own an instance of the infrastructure upon which it depends; they merely have a (probably non-transferable) license to use an instance of it.
Thus a key to future scholars' ability to access the cultural heritage of the present is that in the present all these software components be collected, preserved, and made accessible. One way to do this would be for some international organization to establish and operate a global archive of software. In an initiative called PERSIST, UNESCO is considering setting up such a Global Repository of software. The technical problems of doing so are manageable, but the legal and economic difficulties are formidable.
The intellectual property frameworks, primarily copyright and the contract law underlying the End User License Agreements (EULAs), under which software is published differ from country to country. At least in the US, where much software originates, these frameworks make collecting, preserving and providing access to collections of software impossible except with the specific permission of every copyright holder. The situation in other countries is similar. International trade negotiations such as the TPP are being used by copyright interests to make these restrictions even more onerous.
For the hypothetical operator of the global software archive to identify the current holder of the copyright on every software component that should be archived, and to negotiate permission with each of them for every country involved, would be enormously expensive. Research has shown that the resources devoted to current digital preservation efforts, such as those for e-journals, e-books and the Web, suffice to collect and preserve less than half of the material in their scope. Absent major additional funding, diverting resources from these existing efforts to fund the global software archive would be robbing Peter to pay Paul.
Worse, the fact that the global software archive would need to obtain permission before ingesting each publisher's software means that there would be significant delays before the collection would be formed, let alone be effective in supporting scholars' access.
An alternative approach worth considering would separate the issues of permission to collect from the issues of permission to provide access. Software is subject to copyright. In the paper world, many countries had copyright deposit legislation allowing their national library to acquire, preserve and provide access (generally restricted to readers physically at the library) to copyright material. Many countries, including most of the major software producing countries, have passed legislation extending their national library's rights to the digital domain.
The result is that most of the relevant national libraries already have the right to acquire and preserve digital works, although not the right to provide unrestricted access to them. Many national libraries have collected digital works in physical form. For example, the German National Library's CD-ROM collection includes half a million items. Many national libraries are crawling the Web to ingest Web pages relevant to their collections.
It does not appear that national libraries are consistently exercising their right to acquire and preserve the software components needed to support future emulations, such as operating systems, libraries and databases. A simple change of policy by major national libraries could be effective immediately in ensuring that these components were archived. Each national library's collection could be accessed by emulations on-site. No time-consuming negotiations with publishers would be needed.
An initial step would be for national libraries to assess the set of software components that would be needed to provide the basis for emulating the digital artefacts already in their collections, which of them were already to hand, and what could be done to acquire the missing pieces. The German National Library is working on a project of this kind with the bwFLA team at the University of Freiburg, which will be presented at iPRES2015.
The technical infrastructure needed to make these diverse national software collections accessible as a single homogeneous global software archive is already in place. Existing emulation frameworks access their software components via the Web, and the Memento protocol aggregates disparate collections into a single resource.
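As a small illustration of that last point, a Memento TimeMap for any URL can be fetched over plain HTTP. This sketch uses the public Time Travel aggregator, with an example target URL:

```python
import requests

# Fetch an aggregated TimeMap (RFC 7089 link format) for one URL from the
# public Time Travel service, which federates many web archives.
target = "http://example.com/"
resp = requests.get(
    "http://timetravel.mementoweb.org/timemap/link/" + target, timeout=30)
resp.raise_for_status()
print(resp.text)  # one list of mementos, whichever archive holds them
```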
Of course, absent publisher agreements it would not be legal for national libraries to make their software collections accessible in this way. But negotiations about the terms of access could proceed in parallel with the growth of the collections. Global agreement would not be needed; national libraries could strike individual, country-specific agreements which would be enforced by their access control systems.
Incremental partial agreements would be valuable. For example, agreements allowing scholars at one national library to access preserved software components at another would reduce duplication of effort and storage without posing additional risk to publisher business models.
By breaking the link that makes building collections dependent on permission to provide access, by basing collections on the existing copyright deposit legislation, and by making success depend on the accumulation of partial, local agreements instead of a few comprehensive global agreements, this approach could cut the Gordian knot that has so far prevented the necessary infrastructure for emulation being established.
A while ago, I gave a webinar on the topic of the new technology frontier for libraries. This webinar was given for the South Central Regional Library Council Webinar Series. I don’t get asked to pick technologies that I think are exciting for libraries and library patrons too often. So I went wild! These are the six technology trends that I picked.
- Maker Programs
- Programmable Biology (or Synthetic Biology)
- Bitcoin (Virtual currency)
- Gamification (or Digital engagement)
- Drones
- Robots
OK, actually the maker programs, drones, and gamification are not too wild, I admit. But programmable biology, robots, and bitcoin were really fun to talk about.
I did not necessarily pick the technologies that I thought would be widely adopted by libraries, as you can guess pretty well from bitcoin. Instead, I tried to pick the technologies that are tackling interesting problems whose solutions are likely to have a great impact on our future and our library patrons’ lives. It is important to note not only what a new technology is and how it works, but also how it can influence our lives, and therefore library patrons and libraries ultimately.

Back to the Future Part III: Libraries and the New Technology Frontier
This is odd, because Google, the owner of Blogger and Blogspot, has made noise about moving its services to HTTPS, marking HTTP pages as non-secure, and is even giving extra search engine weight to webpages that use HTTPS.
I'd like to nudge Google, now that it's remade its logo and everything, to get their act together on providing secure service for Blogger. So I set the "description" of my blog to "Move Blogspot to HTTPS NOW." If you have a blog on Blogspot, you can do the same. Go to your control panel and click settings. "description" is the second setting at the top. Depending on the design of your page, it will look like this:
So Google, if you want to avoid a devastating loss of traffic when I move Go-To-Hellman to another platform on January 1, 2017, you better get cracking. Consider yourself warned.
Library of Congress: The Signal: The National Digital Platform for Libraries: An Interview with Trevor Owens and Emily Reynolds from IMLS
I had the chance to ask Trevor Owens and Emily Reynolds at the Institute of Museum and Library Services (IMLS) about the national digital platform priority and current IMLS grant opportunities. I was interested to hear how these opportunities could support ongoing activities and research in the digital preservation and stewardship communities.
Erin: Could you give us a quick overview of the Institute of Museum and Library Services national digital platform? In what way is it similar or different from how IMLS has previously funded research and development for digital tools and services?
Trevor: The national digital platform has to do with the digital capability and capacity of libraries across the U.S. It is the combination of software applications, social and technical infrastructure, and staff expertise that provide library content and services to all users in the US. The idea for the platform has been developed in dialog with a range of stakeholders through annual convenings. For more information on those, you can see the notes (PDF) and videos from our 2014 and 2015 IMLS Focus convenings.
As libraries increasingly use digital infrastructure to provide access to digital content and resources, there are more opportunities for collaboration around the tools and services used to meet their users’ needs. It is possible for every library in the country to leverage and benefit from the work of other libraries in shared digital services, systems, and infrastructure. We need to bridge gaps between disparate pieces of the existing digital infrastructure for increased efficiencies, cost savings, access, and services.
IMLS is focusing on the national digital platform as an area of priority in the National Leadership Grants to Libraries and the Laura Bush 21st Century Librarian grant programs. Both of these programs have October 1st deadlines for two-page preliminary proposals and will have another deadline for proposals in February. It is also relevant to the Sparks! Ignition Grants for Libraries program.
Erin: One of the priorities identified in the 2015 NDSA National Agenda for Digital Stewardship (PDF) centers around enhancing staffing and training, and the report on the recent national digital platform convening (PDF) stresses issues in supporting professional development and training. There’s obvious overlap here; how do you see the current education and training opportunities in the preservation community contributing to the platform? How would you like to see them expanded?
Emily: We know that there are many excellent efforts that support digital skill development for librarians and archivists. Since so much of this groundwork has been done, with projects like POWRR, DigCCurr, and the Digital Preservation Management Workshops, we’d love to see collaborative approaches that build on existing curricula and can serve as stepping stones or models for future efforts. That is to say, we don’t need to keep reinventing the wheel! Increasing collaboration also broadens the opportunities for updating training as time passes and desirable skills change.
The impact that the education and training component has on the national digital platform as a whole is tremendous. Even for projects without a specific focus on professional development or training, we’re emphasizing things like documentation and outreach to professional staff. After all, what good is all of this infrastructure if the vast majority of librarians can’t use it? We need to make sure that the tools and systems being used nationally are available and usable to professionals at all types of organizations, even those with fewer resources, and training is a big part of making that happen.
Erin: Another priority identified in the Agenda is supporting content selection at scale. For example, there are huge challenges in collecting and preserving large amounts of digital content that libraries and archives may be interested in for their users, patrons, or researchers. One of those challenges is knowing what’s been created, what’s being collected, and what’s available for access. Do you see the national digital platform supporting any activities or research around digital content selection?
Trevor: Yes, content selection at scale fits squarely in a broader need for using computational methods to scale up library practices in many different areas. One of the panels at the national digital platform convening this year focused directly on scaling up practice in libraries and archives. Broadly, this included discussions of crowdsourcing, linked data, machine learning, natural language processing and data mining. All of these have considerable potential to move further away from doing things one at a time and duplicating effort.
As an example that directly addresses the issue of content selection at scale, in the first set of grants awarded through the national digital platform, one focuses directly on this issue for web archives. In Combining Social Media Storytelling with Web Archives (LG-71-15-0077) (PDF), Old Dominion University and the Internet Archive are working to develop tools and techniques for integrating “storytelling” social media and web archiving. The partners will use information retrieval techniques to (semi-)automatically generate stories summarizing a collection and mine existing public stories as a basis for librarians, archivists, and curators to create collections about breaking events.
Erin: Supporting interoperability seems to be a strong and necessary component of the platform. Could you discuss broadly and specifically what role interoperable tools or services could fill for the platform? For example, IMLS recently funded the Hydra-in-a-Box project, an open source digital repository, so it would be interesting to hear how you see the digital preservation community’s existing and developing tools and services working together to benefit the platform.
Trevor: First off, I’d stress that the platform already exists, it’s just not well connected and there are lots of gaps where it needs work. The Platform is the aggregate of the tools and services that libraries, archives and museums build, use and maintain. It also includes the skills and expertise required to put those tools and services into use for users across the country. Through the platform, we are asking the national community to look at what exists and think about how they can fill in gaps in that ecosystem. From that perspective, interoperability is a huge component here. What we need are tools and services that easily fit together so that libraries can benefit from the work of others.
The Hydra-in-a-box project is a great example of how folks in the library and archives community are thinking. The full name of that project, Fostering a New National Library Network through a Community-Based, Connected Repository System (LG-70-15-0006) (PDF), gets into more of the logic going on behind it. What I think reviewers found compelling about this project is how it brought together a series of related problems and initiatives, and is working to bridge different, but related, library communities.
On one hand, the Digital Public Library of America is integrating with a lot of different legacy systems, from which it’s challenging to share collection data. The Fedora Hydra open source software community has been growing significantly across academic libraries. There is a barrier for entrants to start using Hydra. Large academic libraries that often have several developers working on their projects are the ones who are able to use and benefit from Hydra at this point. By working together, these partners can create and promulgate a solution that makes it easier for more organizations to use Hydra. When more organizations can use Hydra, more organizations can then become content hubs for the DPLA. The partnership with DuraSpace brings their experience in sustaining digital projects, and the possibility of establishing hosted solutions for a system that could provide Hydra to smaller institutions.
Erin: IMLS hosted Focus Convenings on the national digital platform in April 2014 and April 2015. Engaging communities and end users at the local level seemed to be a recurring theme at both meetings, but also how to encourage involvement and share resources at the national level. What are some of the opportunities the digital preservation community could address related to engagement activities to support this theme?
Emily: I think this is a question we’re still actively trying to figure out, and we are interested in seeing ideas from libraries and librarians on how we can help in these areas. We know that there are communities whose records and voices aren’t equally represented in a range of national efforts, and we know that in many cases there are unique issues around cultural sensitivity. Addressing those issues requires direct and sustained contact with, and understanding of, the groups involved. For example, one of the reasons Mukurtu CMS has been so successful with Native communities is because of how embedded in the project those communities’ concerns are. Those relationships have allowed Mukurtu to create a national network of collections while still encouraging individual repositories to maintain local perspectives and relationships.
Engaging communities to participate in national digital platform activities is another way to address concerns about local involvement. We’ve seen great success with the Crowd Consortium, for example, and the tools and relationships that are being developed around crowdsourcing. Various institutions have also done a great deal of work in this area through use of HistoryPin and similar tools. Crowdsourcing and other opportunities for community engagement in digital collections have the unique capacity to solicit and incorporate the viewpoints and input of a huge range of participants.
Erin: Do you have any thoughts on what would make a proposal compelling? Either a theme or project-related topic that fits with the national digital platform priority?
Trevor: The criteria for evaluating proposals for any of our programs are spelled out in the relevant IMLS Notice of Funding Opportunity. The good news is that there aren’t any secrets to this. The proposals likely to be the most compelling are going to be the ones that best respond to the criteria for any individual program. Across all of the programs, applicants need to make the case that there is a significant need for the work they are going to engage in. Things like the report from the national digital platform convening are a great way to establish the case for the need for the work an applicant wants to do.
I’m also happy to offer thoughts on some points in proposals that aren’t quite as competitive. For the National Leadership Grants, I can’t stress enough the words National and Leadership. This is a very competitive program and the things that rise to the top are generally going to be the things that have a clear, straightforward path to making a national impact. So spend a lot of time thinking about what that national impact would be and how you would measure the change a project could make.
Emily: The Laura Bush 21st Century Librarian Program focuses on building human capital capacity in libraries and archives, through continuing education, as well as through formal LIS master’s and doctoral programs. Naturally, when we talk about “21st century skills” in this program, a lot of capabilities related to technology and the national digital platform surface. Projects in this program are most successful when they show awareness of work that has come before, and explain how they are building upon that previous work. Similarly, and as with all of our programs, reviewers are looking to see how the results of the project will be shared with the field.
For example, the National Digital Stewardship Residency (NDSR) has been very successful with Laura Bush peer reviewers. The original Library of Congress NDSR built on the Library’s existing DPOE curriculum. Subsequently, the New York and Boston NDSR programs adapted the Library of Congress’s model based on resident feedback and other findings. Now we’re seeing a new distributed version of the model being piloted by WGBH. This is a great example of a project that is replicable and iterative. Each organization modified it based on their specific situation, contributing to an overall vision of the program and increasing the impact of IMLS funding.
The Sparks! grants are a little different than the grants of other programs because the funding cap for this program is much lower, at $25,000, and has no cost share requirement. Sparks! is intended to fund projects that are innovative and potentially somewhat risky. It’s a great opportunity for prototyping new tools, exploring new collaborations, and testing new services. As a special funding opportunity within the IMLS National Leadership Grants for Libraries program, Sparks! guidelines also call for potential for broad impact and innovative approaches. Funded projects are required to submit a final report in the form of a white paper that is published on the IMLS website, in order to ensure that these new approaches are shared with the community.
Erin: I’m sure many of our readers have applied for IMLS grants in previous cycles. Could you talk a bit about the current proposal process? Is there any other info you’d like to share with our readers about it?
Emily: The traditional application process, and the one currently used in the Sparks! program, is that applicants submit a full proposal at the time of the application deadline. This includes a narrative, a complete budget and budget justification, staff resumes, and a great deal of other documentation. With Sparks!, these applications are sent directly to peer reviewers in the field, and funding decisions are made based on their scores.
We’ve made some significant changes to the National Leadership Grants and Laura Bush 21st Century Librarian program. For FY16, both programs will require the submission of only a two-page preliminary proposal, along with a couple of standard forms. The preliminary proposals will be sent to peer reviewers, and IMLS will hold a panel meeting with the reviewers to select the most promising proposals. That subset of applicants is then invited to submit full proposals, with a deadline six to eight weeks later. The full proposals go through another round of panel review before funding decisions are made. We’re also adding a second annual application deadline for each program, currently slated for February 2016.
This process was piloted with the National Leadership Grants this past year, and we’ve seen a number of substantial benefits for applicants. Of course, the workload of creating a two-page preliminary proposal is much less than for the full proposal. Applicants who are invited to submit a full proposal also gain the peer reviewers’ comments to help them strengthen their applications. And for unsuccessful applicants, the second deadline makes it possible to revise and resubmit their proposal. We’ve found that the resulting full proposals are much more competitive, and reviewers are still able to provide substantial feedback for unsuccessful applicants.
Erin: Now for the quintessential interview question: where do you see the platform in five years?
Trevor: I think we can make a lot of progress in five years. I can see a series of interconnected national networks and projects where different libraries, archives, museums and related non-profits are taking the lead on aspects directly connected to the core of their missions, but benefiting from the work of all the other institutions, too. The idea that there is one big library with branches all over the world is something that I think can increasingly become a reality. In sharing that digital infrastructure, we can build on the emerging value proposition of libraries identified in the Aspen Institute’s report on public libraries (PDF). By pooling those efforts, and establishing and building on radical collaborations, we can turn the corner on the digital era. We can stop playing catch up and have a seat at the table. We can make sure that our increasingly digital future is shaped by values at the core of libraries and archives around access, equity, openness, privacy, preservation and the integrity of information.
We are now making a concerted effort to collect Contributor License Agreements (CLAs) from all project contributors. The CLAs are based on Apache's agreements; they give the Islandora Foundation non-exclusive, royalty free copyright and patent licenses for contributions. They do not transfer intellectual property ownership to the project from the contributor, nor do they otherwise limit what the creator can do with their contributions. This license is for your protection as a contributor as well as the protection of the Foundation and its users; it does not change your rights to use your own contributions for any other purpose.
The CLAs are here:
Current CLAs on file are here.
We are seeking corporate CLAs (cCLAs) from all institutions that employ Islandora contributors. We are also seeking individual CLAs (iCLAs) from all individual contributors, in addition to the cCLA. (In most cases the cCLA is probably sufficient, but getting iCLAs as well helps the project avoid worrying about whether certain contributions were "work for hire", and also helps provide continuity in case a developer continues to contribute after changing employment.)
All Foundation members and individual contributors will soon be receiving a direct email request to sign the CLAs, along with instructions on how to submit them. At a certain point later this year, we will no longer accept code contributions that are not covered by a CLA and will look to excise any legacy code that isn't covered by an agreement.
The post Search-Time Parallelism at Etsy: An Experiment With Apache Lucene appeared first on Lucidworks.
So, this update is a bit of a biggie. If you are a Mac user, the program officially moves out of Preview and into release. This version brings the following changes:
* 1.1.25 ChangeLog
- Bug Fix: MarcEditor — changes may not be retained after save if you make manual edits following a global update.
- Enhancement: Delimited Text Translator completed.
- Enhancement: Export Tab Delimited completed.
- Enhancement: Validate Headings Tool completed.
- Enhancement: Build New Field Tool completed.
- Enhancement: Build New Field Tool added to the Task Manager.
- Update: Linked Data Tool — Added Embed OCLC Work option
- Update: Linked Data Tool — Enhanced pattern matching
- Update: RDA Helper — Updated for parity with the Windows Version of MarcEdit
- Update: MarcValidator — Enhancements to support better checking when looking at the mnemonic format.
If you are on the Windows/Linux version – you’ll see the following changes:
* 6.1.60 ChangeLog
- Update: Validate Headings — Updated patterns to improve the process for handling heading validation.
- Enhancement: Build New Field — Added a new global editing tool that provides a pattern-based approach to building new field data.
- Update: Added the Build New Field function to the Task Management tool.
- UI Updates: Specific to support Windows 10.
The Windows update is a significant one. A lot of work went into the Validate Headings function, which impacts the Linked Data tools and the underlying linked data engine. Additionally, the Build New Field tool provides a new global editing function that should simplify complex edits. If I can find the time, I’ll try to make a YouTube video demoing the process.
You can get the updates from the MarcEdit downloads page: http://marcedit.reeset.net/downloads or, if you have MarcEdit configured to check for automated updates, the tool will notify you of the update and provide a method for you to download it.
If you have questions – let me know.
“Telling DSpace Stories” is a community-led initiative aimed at introducing project leaders and their ideas to one another while providing details about DSpace implementations for the community and beyond. The following interview includes personal observations that may not represent the opinions and views of the University of Texas or the DSpace Project.
In June I borrowed a BKON A-1 from the OLITA technology lending library. It’s a little black plastic box with a low energy Bluetooth transmitter inside, and you can configure it to broadcast a URL that can be detected by smartphones. I was curious to see what it was like, though I have no use case for it. If you borrow something from the library you’re supposed to write it up, so here’s my brief review.
- I took it out of its box and put two batteries in.
- I installed a beacon detector on my phone and scanned for it.
- I saw it.
- I followed the instructions on the BKON Quick Start Guide.
- I set up an account.
- I couldn’t log in. I tried two browsers but for whatever unknown reason it just wouldn’t work.
- I took out the two batteries and put it back in its box.
I’ll give it back to Dan Scott, who said he’s going to ship it back to the manufacturer so they can install the new firmware. I wish better luck to the next borrower.
Cast your FOMO aside: a livestream of the conference will be on the website from Wednesday to Friday. An archived copy will be available on YouTube after the conference as well!
This has been a long time coming – taking countless hours and the generosity of a great number of people to test and provide feedback (not to mention the folks that crowdsourced the purchase of a Mac) – but MarcEdit’s Mac version is coming out of Preview and will be made available for download on Labor Day. I’ll be putting together a second post officially announcing the new versions (all versions of MarcEdit are getting an update over Labor Day), so if this interests you, keep an eye out.
So exactly what is different from the Preview versions? Well, at this point, I’ve completed all the functions identified for the first set of development tasks – and then some. New to this version will be the Validate Headings tool just added to the Windows version of MarcEdit, the new Build New Field utility (and its inclusion in the Task Automation tool), updates to the Editor for performance, updates to the Linking tool due to the validator, inclusion of the Delimited Text Translator and the Export Tab Delimited Text Translator – and a whole lot more.
At this point, the build is made and the tests have been run – so keep an eye out tomorrow. I’ll definitely be making it available before the Ohio State/Virginia Tech football game (because everything is going to stop here once that comes on).
To everyone who has helped along the way, providing feedback and prodding – thanks for the help. I’m hoping that the final result will be worth the wait and a nice addition to the MarcEdit family. And of course, this doesn’t end development on the Mac – I have 3 additional sprints planned as I work towards functional parity with the Windows version of MarcEdit.
I wrote a short piece for the newsletter of the York University Faculty Association: York University Libraries: A Catalogue of Cuts. We’ve had year after year of budget cuts at York and York University Libraries, but we in the library don’t talk about them in public much. We should.
Today I found the following resources and bookmarked them on Delicious.
- Gimlet: Your library’s questions and answers put to their best use. Know when your desk will be busy. Everyone on your staff can find answers to difficult questions.
Getting together at a public event can be a fun way to contribute to the 2015 Global Open Data Index. It can also be a great way to engage and organize people locally around open data. Here are some guidelines and tips for hosting an event in support of the 2015 Index and getting the most out of it.
Hosting an event around the Global Open Data Index is an excellent opportunity to spread the word about open data in your community and country, not to mention a chance to make a contribution to this year’s Index. Ideally, your event would focus broadly on open data themes, possibly even identifying the status of all 15 key datasets and completing the survey. Set a reasonable goal for yourself based on the audience you think you can attract. You may choose not to make a submission at your event at all and just discuss the state of open data in your country; that’s fine too.
It may make sense to host an event focused around one or more of the datasets. For instance, if you can organize people around government spending issues, host a party focused on the budget, spending, and procurement tender datasets. If you can organize people around environmental issues, focus on the pollutant emissions and water quality datasets. Choose whichever path you wish, but it’s good to establish a focused agenda, a clear set of goals and outcomes for any event you plan.
We believe the datasets included in the survey represent a solid baseline of open data for any nation and any citizenry; you should be prepared to make this case to the participants at your events. You don’t have to be an expert yourself, or even have topical experts on hand, to discuss or contribute to the survey. Any group of interested and motivated citizens can contribute to a successful event. Meet people where they are, and help them understand why this work is important in your community and country. It will set a good tone for your event by helping participants realize they are part of a global effort and that the outcomes of their work will be a valuable national asset.
Ahmed Maawy, who hosted an event in Kenya around the 2014 Index, sums up the value of the Index with these key points that you can use to set the stage for your event:
- It defines a benchmark to assess how healthy and helpful our open datasets are.
- It allows us to make comparisons between different countries.
- It allows us to assess what countries are doing right and what countries are doing wrong, and to learn from each other.
- It provides a standard framework for identifying what we need to do, how to implement or make use of open data in our countries, and what we are strong or weak at.
It’s great to start your event with an open discussion so you can gauge the experience in the room and how much time you should spend on introductory material. You might not even get around to making a contribution, and that’s OK. Introducing the Index in any way will put your group on the right path.
- If your group is more experienced, everything you need to contribute to the survey can be found in this year’s Index contribution tutorial.
- If you’re actively contributing at an event, we recommend splitting into teams, assigning one or more datasets to each group, and having them use the tutorial as a guide. There can only be one submission per dataset, so be sure not to have teams working on the same one.
- Pair more experienced people with less experienced people so teams can better rely on themselves to answer questions and solve problems.
More practical tips can be found at the 2015 Open Data Index Event Guide.
Photo credits: Ahmed Maawy
I usually am quite careful when it comes to my phone: I use a phone case, apply a screen protector, things like that. But I suppose accidents happen regardless. During the first week of August, I accidentally dropped a big screwdriver on the phone (don’t ask why) and heard a “crack” sound. Uugghh… my heart dropped when I saw the crack. Really bad. (Photo: the phone with the cracked screen. Looks scary.)
Hoping the screen protector was strong enough to protect the touchscreen (after all, I used a tempered glass screen protector), I turned it on and, bummer, the touchscreen was completely borked. Fortunately, the internal storage was not affected, so the software worked fine. However, I could not interact with the apps, even to shut the phone down. So the only thing I could do was let the phone run until the battery ran out and it shut down on its own. (Photo: the software works just fine, but since the touch display is damaged, I cannot interact with it at all.)
I checked the company’s website and their user forum, and found out one could send the phone back to the company in China and get charged $150 (apparently this kind of physical damage isn’t covered by the warranty), or spend about $50 for the screen/touch display and replace it oneself. Being the tinkerer I am, always wanting to see the guts of any electronic device, I decided to risk it and do the screen replacement myself. The downside: opening up the phone means voiding the warranty. But at this point the warranty means little to me if I have to spend big bucks anyway to get the phone fixed. Besides, I am going to learn something new here. Worst case scenario: I fail, but then I can always sell the phone for parts on eBay. So, nothing really to lose here. And I still have my Moto X as a backup phone.
YouTube provides various instructions on DIY phone screen replacement. I found two videos that really helped me to understand the ins and outs of replacing the screen.
The first video below nicely shows how to remove the damaged screen and put the replacement in. He points out which areas need attention so you won’t damage the components.
The second video was created by a professional technician, so his method is very structured. The tools he used helped me to figure out the tools I need.
I basically watched those two videos a dozen times or so to make sure I didn’t miss anything (and, yes, I donated to their PayPal accounts as my thanks).
It took me a while to finally finish the screen replacement work. I removed the cracked screen first, and then had to wait about 3 weeks to receive the replacement screen. I just used the online store they recommended to get the parts I needed.
Below is a set of thumbnails with captions explaining my work. Each thumbnail is clickable through to its original image.
1. Phone with its cracked screen, ready to be worked on for screen replacement.
2. The back of the phone. The SIM card is removed and the back cover is ready to be opened.
3. The phone with the back cover removed. The battery occupies most of the section. There’s a white dot sticker on the top right corner covering one of the screws; removing that screw will void the warranty.
4. The top part of the phone, which covers the storage, camera lens, and SIM card reader, is removed. There’s a white, square sticker on the top left corner; it will turn pink if the phone is exposed to moisture (dropped into a puddle of water, etc.).
5. The bottom part of the phone is removed. It houses the USB port, the capacitive touch sensor, and the antenna.
6. The battery is removed. This took me quite a while because the glue was so strong and I was worried I might bend the battery too much and damage it.
7. All the components that needed to come out are out: the storage, the main cable, and the touch sensor/USB port/antenna part. Looking good.
8. The video instruction from ModzLink suggested using heat to loosen the glue. Good thing I have a blow dryer with a nozzle that lets me focus the hot air on one section of the screen at a time. The guitar pick was used to tease out the glass once the surface was hot enough.
9. It took me about 20 minutes to get the screen hot enough and the glue loosened. By the way, I vacuumed the screen first to remove glass debris so the blow dryer wouldn’t scatter it everywhere.
10. I used the magnifying glass from my soldering station to make sure all glue and loose debris were gone.
11. The replacement screen, on the left, finally arrived. Even though they said it’s an original screen, I’m not really sure, considering the original one has extra copper lines on the sides.
12. The casing is clean, so all I need to do is insert the replacement screen into it.
13. Carefully putting the adhesive strips on the sides of the casing.
14. New screen in place. I had to redo it because I forgot to put the speaker grill on top the first time.
15. Added new adhesive strips so the battery will stick. Put the rest of the components back.
16. Added a new tempered glass screen protector, put the SIM card back in, and turned on the phone.
Finally: success! I got my favorite phone back.
It was scary working on the phone that first time, mostly because I didn’t want to break things. But I eventually felt comfortable dealing with the components, and should a similar thing happen again (knock on wood it won’t), I at least know what to do now.
As I approach 40 years old, I find myself getting nostalgic and otherwise engaged in memories of my youth.
I began high school in 1989. I was already a computer nerd, beginning from when my parents sent me to a Logo class for kids sometime in middle school; I think we had an Apple IIGS at home then, with a 14.4 kbps modem. (Thanks Mom and Dad!). Somewhere around the beginning of high school, maybe the year before, I discovered some local dial-up multi-user BBSs.
Probably from information on a BBS, somewhere around 1994, a friend and I discovered Michnet, a network of dial-up access points throughout the state of Michigan, funded, I believe, by the state department of education. Dialing up Michnet, without any authentication, gave you access to a gopher menu. It didn’t give you unfettered access to the internet, just to what was on the menu — which included several options that required Michigan higher ed logins to proceed, which I didn’t have, but also links to other gophers which would take you to yet other places without authentication, including a public access unix system (which did not have outgoing network connectivity, but was a place you could learn unix and unix programming on your own) and ISCABBS. Over the next few years I spent quite a bit of time on ISCABBS, a bulletin board system with asynchronous message boards and a synchronous person-to-person chat system, which at that time routinely had several hundred simultaneous users online.
So I had discovered The Internet. I recall trying to explain it to my parents, and that it was going to be big; they didn’t entirely understand what I was explaining.
When visiting colleges to decide on one in my senior year, planning on majoring in CS, I recall asking at every college what the internet access was like there, if they had internet in dorm rooms, etc. Depending on who I was talking to, they may or may not have known what I was talking about. I do distinctly recall the chair of the CS department at the University of Chicago telling me “Internet in dorm rooms? Bah! The internet is nothing but a waste of time and a distraction of students from their studies, they’re talking about adding internet in dorm rooms but I don’t think they should! Stay away from it.” Ha. I did not enroll at the U of Chicago, although I don’t think that conversation was a major influence.
Entering college in 1993, in my freshman year in the CS computer lab, I recall looking over someone’s shoulder and seeing them looking at a museum web page in Mosaic — the workstations in the lab were unix X-windows systems of some kind, I forget what variety of unix. I had never heard of the web before. I was amazed; I interrupted them and asked “What is that?!?” They said “it’s the World Wide Web, duh.” I said “Wait, it’s got text AND graphics?!?” I knew this was going to be big. (I can’t recall the name of the fellow student a year or two ahead who first showed me the WWW, but I can recall her face. I do recall Karl Fogel, who was a couple years ahead of me and also in CS, kindly showing me things about the internet on other occasions. Karl has some memories of the CS computer lab culture at our college at the time here; I caught the tail end of that.)
Around 1995, the college IT department hired me as a student worker to create the first-ever experimental/prototype web site for the college. The IT director had also just realized that the web was going to be big, and while the rest of the university hadn’t caught on yet, he figured they should make some initial efforts in that direction. I don’t think CSS or JS existed yet then, or at any rate I didn’t use them for that website. I did learn SQL on that job. I don’t recall much about the website I developed, but I do recall one of the main features was an interactive campus map (probably using image maps). A year or two or three later, when they realized how important it was, the college Communications unit (i.e., advertising for the college) took over the website, and I think an easily accessible campus map disappeared, not to return for many years.
So I’ve been developing for the web for 20 years!
Ironically (or not), some of my deepest nostalgia these days is for the pre-internet, pre-cell-phone society; even most of my university career pre-dated cell phones: if you wanted to get in touch with someone, you called their dorm room, maybe left a message on their answering machine. The internet, and then cell phones, eventually combining into smartphones, have changed our social existence truly immensely, and I often wonder these days if it’s been mostly for the better or not.
These are some notes for the readings from my first Seminar class. It’s really just a test to see if my BibTeX/Jekyll/Pandoc integration is working. More about that in a future post hopefully…
(Shera, 1933) was written in the depths of the Great Depression … and it shows. There is a great deal of concern about fiscal waste in libraries and a strong push for centralization, in line with FDR’s New Deal. The paper sees increasing cultural homogenization and a blurring of the rural and the urban that hasn’t seemed to come to pass. His thoughts about the television apparatus at the elbow seem almost memex-like in their vision of the future. I must admit, given all of what he gets wrong, I really like his idea of looking at the current state of our social situation and relations for the seeds of what tomorrow might look like. But at the same time I have trouble understanding how else you could meaningfully try to predict future trends. There is a tension between his desire for centralized control and his allowance for decentralization that seems quintessentially American.
(Taylor, 1962) muses about the nature of questions, how they progress in an almost Freudian way from the unconscious to a fully sublimated formal question of an information system. One thing that is particularly interesting is his formulation about how questions themselves are only fully understood in the context of an accepted answer. It’s almost as if the causal chain of question/answer is inverted, with the question being determined by the answer, and time running backwards. I know this is a flight of fancy on my part, but it seemed like a quirky and fun interpretation. The paper is deeply ironic because it opens up new vistas of future information science research by asking a lot of questions about questions. The method is admittedly rhetorical, and the paper is largely a philosophical meditation on how people with questions fit into information systems, rather than a methodological qualitative or quantitative study of some kind. It makes me wonder about the information system his questions are aimed at. Is scientific inquiry an information system? Also, perhaps this is heretical, but is there really such a thing as an information need? Don’t we have needs/desires for particular outcomes which information can help us realize: information as tool for achieving something, not as an object that is needed? I guess this could be considered a pragmatist critique of a particular strand of information science. I guess this would be a good place to invoke Maslow’s Hierarchy of Needs.
(Borko, 1968) attempts to define what information science is in the wake of the American Documentation Institute changing its name to the American Society for Information Science. He explicitly calls out the definition of Robert Taylor, who was instrumental in helping create the Internet at DARPA.
He summarizes information science as the interdisciplinary study of information behavior. It’s kind of strange to think of information behaving independently of humans, isn’t it? Are we really studying the behavior of people as reflected in their information artifacts, or is the behavior of information really something that happens independent of people? This question makes me think of Object Oriented Ontology a bit. A key part of his definition is the feedback loop where the traditional library and archive professions apply the theories of information science, which in turn are informed by practice. This relationship between theory and practice is a significant dimension of his definition. It seems like perhaps today many of the disciplines he identified have been subsumed into computer science departments? But it seems information science has a way of tying together disciplines that were previously siloed?
(Bush, 1945) is a classic in the field of computing, cited mostly for its prescience in anticipating the hyperlink and the World Wide Web. He is quite gifted at connecting scientific innovation with tools that are graspable by humans. One disquieting thing is the degree to which women, or as he calls them, “girls,” are made part of the machinery of computation. To what extent were people unwittingly made part of the machinery of war that Bush assembled in the form of the Manhattan Project? Who does this machinery serve? Does it inevitably serve those in power? If we fast-forward to today, what machinery are we made part of by the transnational corporations that run our elections and deliver us our information? Can this information system resist the forms of tyranny that created it? Ok, enough crazy talk for now :-)
Borko, H. (1968). Information science: What is it? American Documentation, 19(1), 3–5.
Bush, V. (1945). As we may think. The Atlantic. Retrieved from http://www.theatlantic.com/magazine/archive/1945/07/as-we-may-think/303881/
Shera, J. H. (1933). Recent social trends and future library policy. Library Quarterly, 3(4), 339–353.
Taylor, R. S. (1962). The process of asking questions. American Documentation, 13(4), 391–396.