planet code4lib

Planet Code4Lib - http://planet.code4lib.org

pinboard: Prizewinners

Wed, 2015-07-08 01:05
2015 #LODLAM prize info up - enjoy @dannyarcher power of LOD mashup #musetech #code4lib #digitalhumanities

Code4Lib Journal: Connecting Historical and Digital Frontiers: Enhancing Access to the Latah County Oral History Collection Utilizing OHMS (Oral History Metadata Synchronizer) and Isotope

Wed, 2015-07-08 00:31
The University of Idaho Library received a donation of oral histories in 1987 that were conducted and collected by a local county historical society in the 1970s. The audio cassettes and transcriptions were digitized in 2013 and 2014, producing one of the largest digital collections of oral histories in the Pacific Northwest - over 300 interviews and over 569 hours. To provide enhanced access to the collection, the Digital Initiatives Department employed an open-source plug-in called the Oral History Metadata Synchronizer (OHMS) - an XML- and PHP-driven system created at the Louie B. Nunn Center for Oral History at the University of Kentucky Libraries - to deliver the audio MP3 files together with their indexes and transcripts. OHMS synchronizes the transcribed text with timestamps in the audio and provides a viewer that connects search results in a transcript to the corresponding moment in the audio file. This article discusses how we created the infrastructure by importing existing metadata, customized the interface and visual presentation by creating additional levels of access with complex XML files, enhanced descriptions using the Getty Art and Architecture Thesaurus for keywords and subjects, and tagged locations discussed in the interviews, which were later connected to Google Maps via latitude and longitude coordinates. We also discuss the implementation of, and philosophy behind, our use of the layout library Isotope as the primary point of access to the collection. The Latah County Oral History Collection is one of the first successful digital collections created with the OHMS system outside the University of Kentucky.
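
To make the synchronization idea concrete, here is a minimal Python sketch that reads a simplified, hypothetical OHMS-style XML index (the element names here are illustrative, not the actual OHMS schema) and turns each timestamped segment into a human-readable cue point:

```python
import xml.etree.ElementTree as ET

# Hypothetical, simplified index structure; the real OHMS XML schema
# is considerably richer than this illustration.
xml = """<record>
  <index>
    <point><time>120</time><title>Moving to Latah County</title></point>
    <point><time>405</time><title>Farm life in the 1930s</title></point>
  </index>
</record>"""

root = ET.fromstring(xml)
# Collect (seconds, segment title) pairs from every index point.
index = [(int(p.findtext("time")), p.findtext("title"))
         for p in root.iter("point")]

# Each timestamp (in seconds) can seed a link that seeks the audio
# player to the matching moment in the interview.
for seconds, title in index:
    print(f"{seconds // 60:02d}:{seconds % 60:02d}  {title}")
```

In a viewer like the one OHMS provides, each cue point becomes a link that seeks the MP3 player to that offset, which is what connects a transcript search hit to the corresponding moment in the audio.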

DuraSpace News: REGISTER: Fedora 4 Workshop and User Group Meeting in Paris

Wed, 2015-07-08 00:00

Winchester, MA – A one-day Fedora 4 Workshop and User Group Meeting will be held on Sept. 25, 2015 in Paris, France. The event coincides with the Sixth RDA Plenary Meeting and will take place in the same venue–the Conservatoire national des arts et métiers. The agenda includes a Fedora 4 workshop, Fedora user group presentations from regional Fedora users, and will conclude with a wrap-up discussion. There is no charge for this event.

HangingTogether: Archivists should be key players in managing born-digital library materials

Tue, 2015-07-07 22:55

As an archivist, I’m acutely aware of the broad applicability of archivists’ skills and expertise to the challenges facing research libraries—challenges in areas beyond those traditionally seen as within the archivist’s purview. This has become increasingly true as we become more deeply enmeshed in the complexities of the digital age. In recent years I’ve observed many situations in which teams of expert librarians and technologists have experienced setbacks in managing born-digital materials due to issues that an archivist could have recognized and addressed at the outset. How so, you might ask?

For the long answer, read The Archival Advantage: Integrating Archival Expertise into Management of Born-digital Library Materials, hot off the press today from OCLC Research. For a quick overview, read on.

In this essay I argue for involving archivists in managing digital materials that may be neither acquired by nor in the custody of the archives; examples include research data, websites, and email. These three types of material have analog equivalents (lab notebooks, university newsletters, and correspondence), while others, such as blogs, Twitter, wikis, and software, are digital formats with no analog equivalent. All of these types of material are within the scope of what research libraries should consider acquiring and preserving, particularly as our concept of the scholarly record evolves.

Archivists specialize in issues crucial for managing unique materials—i.e., those for which the original exists in a single location, multiple copies aren’t published and distributed, and a particular owner (whether an individual or an organization) owns and controls the content. Consider the extent to which the digital formats named above match these characteristics.

The essay focuses on ten areas of archival expertise, describing some of the complexities of each and posing sample questions that can arise in the digital context. Other research library staff may have skills that intersect, but these ten are part of an archivist’s daily routine.

  • Ownership
  • Donor relations
  • Intellectual property
  • Appraisal
  • Context of creation and use
  • Authenticity
  • Restrictions on access and use
  • Transfer of ownership
  • Permanence
  • Collection-level metadata

For instance, let’s look at some of the issues associated with donor relations.

The decision to donate one’s papers, or those of a loved one, can be an emotional experience. Archivists therefore carefully establish and build upon relationships of trust, negotiate terms of the donation, raise any pertinent legal issues, and discuss possible restrictions on sensitive material. Some donors ask pointed questions about how the institution will manage the materials. Relationships may continue for years and must be nurtured to ensure the institution’s reputation as a desirable home for others’ materials. Institutional archivists often must educate administrators and staff about the importance and benefits of transferring material to the repository. Large organizations usually have records retention schedules that stipulate which types of office records have permanent value and should be transferred to the archives when no longer actively in use. In working with donors of all types, archivists discuss the scope of materials that have sufficient value to be placed in the archives. Typically, only a small percentage of materials created are declared permanent and designated for transfer to the archives.

Any or all of these issues may pertain to a digital donation. Some sample questions: Should the deed of gift cover any special issues because of the digital format? Does it matter if the donor or anyone else has copies of all or significant portions of the digital material? Do we have to consult the donor before we recover deleted files? Do the digital records contain any personal information we should redact? What happens if we choose not to retain some material after acquiring it? With whom do we discuss these issues if the creator is deceased?

The descriptions and sample questions for each of the ten areas reveal the tip of the iceberg of an archivist’s expertise. Research libraries should take full advantage of archivists’ array of skills so that unpublished digital resources can be managed efficiently, effectively, and responsibly.

About Jackie Dooley

Jackie Dooley leads OCLC Research projects to inform and improve archives and special collections practice. Activities have included in-depth surveys of special collections libraries in the U.S./Canada and the U.K./Ireland; leading the Demystifying Born Digital work agenda; a detailed analysis of the 3 million MARC records in ArchiveGrid; and studying the needs of archival repositories for specialized tools and services. Her professional research interests have centered on the development of standards for cataloging and archival description. She is a past president of the Society of American Archivists and a Fellow of the Society.


Library of Congress: The Signal: How Academics Manage their Personal Digital and Paper Information in their Digital Work Space

Tue, 2015-07-07 17:48

This is a guest post from Assistant Professor Kyong Eun Oh and Doctoral Student Vanessa Reyes, Simmons College School of Library and Information Science.

I asked them to share their research with readers of The Signal because some of the digital preservation challenges that Simmons College faces — and Oh and Reyes researched — hold true for many other colleges and universities: spreading awareness to professors and students about curating their own digital files. How can their institution help? Should their institution even help?

Vanessa Reyes, Simmons College. Photo by Vanessa Reyes.

As a part of our ongoing research project, Managing Paper-Based Personal Information in Our Digital World, we investigated the differences between managing personal digital information and personal paper-based information among academics. Advances in technology have affected personal archiving behaviors; what is relatively unknown is:

  • Which practices are being carried out by academics in managing different formats of personal information?
  • Is there any preference for one format over the other?
  • What is the proportion of paper-based and digital personal information?
  • What are the strengths and weaknesses in managing two different formats?

Kyong Eun Oh, Simmons College. Photo by Anne Pepitone.

We recruited nine full-time faculty members from various disciplines, visited each of their offices, and asked them to give a grand tour while describing how they keep and manage their personal collections. Then we interviewed each of them about how they managed their personal paper-based and digital information in their offices.

A preliminary analysis of the results showed that regardless of discipline, professors predominantly kept personal digital information over personal paper-based information in their offices. Professors also maintained digital information for longer than paper-based information.

For example, Participant 8 stated, “So the electronic information, I basically have not stopped collecting them since I got a computer. I have a hard drive back in my apartment that has papers that I wrote for my freshman year. Paper-based information doesn’t tend to stick around that long.” The reason digital information items were kept longer seemed to be primarily because of the vast space available to store their digital information. For instance, Participant 9 said, “I’ve got, like, a gigabyte on my hard drive on my laptop. There’s no real reason to get rid of it.” These responses indicated the importance of having enough space to keep personal information.

The strengths of managing digital information, compared to paper-based information, included:

  • Easier management
  • Vast storage space
  • Accessibility
  • Findability
  • Ability to have multiple copies
  • Perceived longer lifespan.

For example, Participant 7 responded, “Digital information takes a lot less space and I think because the way I’m set up, it’s easier to organize and I always have it on backup.” In a similar vein, Participant 6 said, “Digital. I think it’s faster, it’s easier, it’s more ecological – how do you say, environmentally friendly, you know, you always have it there, you go back to your email and you do search.” These responses reveal the significance of easy management and findability of information in personal information management (PIM).

It also brings up a common issue in the digital preservation field, which is the importance of having personal digital information in a safe and accessible place, especially when there is no preservation plan. Based on the initial results from this study, we can think about the following questions to widen the conversation. (1) How many academics have a plan for maintaining their personal information? (2) If they have backups of their personal digital information, do they have a regular backup or maintenance schedule? To have digital information always available in a safe location, it is necessary to have an adequate plan or system that supports this. In the case of our study, no participants mentioned following a preservation plan for their personal digital documents or having a reliable system that supports preserving their personal information items.

We also found the weaknesses of managing digital information when compared to paper-based information. These included:

  • Difficulty in taking notes
  • Loss of physical interaction
  • Decreased readability
  • Less portability
  • Need to have a machine/device to access information
  • Loss of personal sentiment.

For example, Participant 8 mentioned that, “Sometimes, in meetings, it’s easier to just draw a quick few notes than trying to put things on the text in a way that I will be sure to remember that I put it there later, and it won’t just kind of blend in.” While describing what he/she likes about managing paper-based information, Participant 2 said, “Transportability, readability, and I have it. I don’t have to access it.” Participant 5 also mentioned that, “I think there’s something tangible and quantifiable and weighable about paper that evades the electronic forms of communication.” These statements showed that while managing digital information is easier, people still like to have a tangible relationship with paper, which is a unique characteristic of interacting with non-digital information.

The weaknesses mentioned, such as “less portability” and the “need to have a machine/device to access information,” highlight the importance of machine-independent storage that can be accessed without a particular machine, such as virtual spaces. For example, if the storage is virtual, personal digital information can be accessed easily regardless of physical location or device.

Investigating how professors manage their personal information has been fascinating. The issue of personal digital archiving on campus, especially among academics, is emerging as an important topic. As we found in our study, most academics in the digital age predominantly keep their personal data in digital formats, although they still manage paper-based information.

Our study suggests that it would be ideal for academics to have a content management system that:

  1. provides ample space
  2. is easily accessible from various locations
  3. preserves information securely
  4. has enhanced searching and sorting functions.

If academics can set privacy levels for different personal information items, deploying this system as a campus-wide digital repository may also promote effective information sharing.

Another recommendation is to partner information professionals with academics to create a system based on academics’ needs. This will contribute to securely preserving academics’ scholarly work and supporting the productivity of academics.

What we have introduced here is a portion of our preliminary analysis of the data. We aim to dig deeper into this area to enrich our understanding of academics’ PIM practices, and we expect the results of our research project to deepen our understanding of how academics manage their personal information. We also hope that this study will contribute to the development and design of PIM systems, tools, and applications that support academics’ management of their personal information.

District Dispatch: Amendment to help save school libraries moving now!

Tue, 2015-07-07 17:29
Ask Your U.S. Senators to Support Vital Reed-Cochran Amendment to ESEA!

Today, the U.S. Senate will begin debate on modifying and reauthorizing the Elementary and Secondary Education Act (ESEA), the federal government’s principal education statute. First up will be a bipartisan amendment crucial to school library funding by Sens. Jack Reed (D-RI) and Thad Cochran (R-MS).

Passing the “Reed-Cochran Amendment” (summarized here (pdf)) will help save and expand school libraries in every state in the nation by explicitly authorizing school districts to use ESEA funds to develop and foster effective school library programs . . . programs with certified school librarians at their core.

The Amendment already has the backing of Sen. Lamar Alexander (R-TN), Chair of the Senate’s Health, Education, Labor and Pensions (HELP) Committee that unanimously approved the basic bill now before the Senate (S. 1177, Every Child Achieves Act of 2015) . . . but the Reed-Cochran Amendment needs at least 48 more votes to pass.

Across the United States, studies have demonstrated that students in schools with effective school library programs learn more, get better grades, and score higher on standardized tests than their peers in schools without such resources.

Now is the time to speak up for our children!

Author James Patterson supports school libraries!

The Reed-Cochran Amendment will provide vital assistance to local education authorities and schools in “developing effective school library programs to provide students an opportunity to develop digital literacy skills and to help ensure that all students graduate from high school prepared for postsecondary education or the workforce without the need for remediation.”

TODAY is the day! Please, take a moment to call your Senators’ offices NOW!

Just tell the staff member who answers that you urge the Senator to “vote YES on the Reed-Cochran Amendment to S. 1177!” It’s easy and it’s crucial. Please act now!

The post Amendment to help save school libraries moving now! appeared first on District Dispatch.

HangingTogether: Six Takeaways: Stewardship of the Evolving Scholarly Record

Tue, 2015-07-07 15:28

Over the past year, OCLC Research has been exploring the evolving scholarly record and its implications for libraries – see our June 2014 report The Evolving Scholarly Record, and the follow-on workshop series aimed at identifying new challenges in gathering, organizing, and curating the scholarly record.

The evolution of the scholarly record calls for a corresponding evolution in the stewardship strategies that ensure scholarly materials persist for many generations to come. This is the focus of our new OCLC Research report, Stewardship of the Evolving Scholarly Record: From the Invisible Hand to Conscious Coordination. The report describes some new directions in constructing reliable, trusted stewardship arrangements around the highly distributed and diverse outputs of contemporary scholarship.

In Stewardship of the Evolving Scholarly Record, we suggest that today’s scholarly record requires a shift to a new stewardship paradigm driven by conscious coordination. We hope you take the time to read the report, but for those who would like a preview, here are six takeaways:

1. The scholarly record is growing in volume, diversity, complexity, and the distribution of custodial responsibility. The result is a much deeper, more complete record of scholarly inquiry than what was captured in the past – and one that differs significantly from the traditional print-based scholarly record that library collections, services, and infrastructure have been built around. Today, the scholarly record is imperfectly approximated in the aggregate library resource, because libraries are not – and cannot – collect the full range of scholarly outputs that now comprise the evolving scholarly record.

2. Traditional, print-centric stewardship models relied on the “invisible hand” to secure the scholarly record. Stewardship of the print-based scholarly record was largely a byproduct of an uncoordinated, highly distributed, and duplicative process of managing local collections for local use. In the manner of Adam Smith’s invisible hand, the aggregation of many internally-directed, relatively autonomous efforts to maintain and preserve local collections led, unintentionally, to a socially beneficial outcome – the gathering and curation of the overall scholarly record. But the “invisible hand” approach to stewardship is inadequate for today’s scholarly record.

3. Future stewardship models require conscious coordination. Conscious coordination will replace the “invisible hand” as the guiding principle behind stewardship of the evolving scholarly record. The keys to conscious coordination are context, commitments, specialization, and reciprocity: local stewardship decisions will be informed by a broader, system-wide context, including how the local collection fits into a system-wide stewardship effort; declarations of stewardship commitments will be made around portions of the local collection on behalf not only of local users, but also a broader external stakeholder community; networks of distributed stewardship responsibilities will coalesce into a well-defined division of labor within cooperative arrangements, with libraries placing greater emphasis on specialization in collection-building; and reliable, relatively frictionless access to all scholarly materials distributed across the network will be obtained through robust, trusted resource-sharing arrangements.

4. Consciously-coordinated stewardship strategies need to right-scale consolidation, cooperation, and community mix. Implementing consciously-coordinated stewardship for the scholarly record leads to a number of important decision points around right-scaling. Right-scaling consolidation means optimizing the degree of concentration or centralization of the collections, services, and infrastructure involved in a particular stewardship effort. Right-scaling cooperation involves finding the appropriate number of participants in a coordinated stewardship activity, e.g., a small group of institutions, a consortium, a region, etc. Finally, right-scaling community mix is about leveraging benefits from diversifying stewardship partners beyond peer institutions or legacy associations.

5. Reducing transaction costs facilitates conscious coordination. Conscious coordination involves increased interaction with, and reliance on, external partners. Effective interaction comes with costs: for example, the costs of finding appropriate partners, negotiating agreements, setting up and maintaining governance mechanisms, monitoring and enforcing performance. In economics, the costs of interaction are called transaction costs. An important element of building robust stewardship strategies for the evolving scholarly record is identifying and minimizing transaction costs within cooperative stewardship arrangements.

6. Incentives to participate in consciously-coordinated stewardship should be linked to broader institutional priorities. Consciously-coordinated stewardship often requires academic libraries to collect locally, and share globally. This can conflict with the academic library’s traditional role as a service provider in support of its local university community, to the extent that individual libraries function less as autonomous local service hubs, and more like nodes in complex networks of specialization, mutual dependence, and collective responsibility. Preserving a clear institutional identity in such circumstances, as well as strengthening local stewardship incentives, requires close alignment of increasingly global stewardship commitments with broader institutional priorities.

Our report is only the beginning of a conversation. In it, we limit ourselves to describing the broad contours of what consciously-coordinated stewardship might look like for the evolving scholarly record. Much work remains to be done to translate these ideas into robust stewardship arrangements that will preserve the scholarly record in all of its diversity and complexity. Connecting users to the scholarly record is a fundamental part of an academic library’s mission. That mission has not changed, but the means by which it is achieved must evolve in concert with the scholarly record itself.

Consciously-coordinated stewardship of the scholarly record is part of a broader trend in the library community to move various activities, services, and infrastructure “above the institution” and into networks of cooperation and coordination. OCLC Research explores this trend in its Understanding the System-wide Library (USL) research theme. Please visit the USL home page for more information.

About Brian Lavoie

Brian Lavoie is a Research Scientist in OCLC Research. Brian's research interests include collective collections, the system-wide organization of library resources, and digital preservation.


David Rosenthal: IIPC Preservation Working Group

Tue, 2015-07-07 15:00
The Internet Archive has by far the largest archive of Web content but its preservation leaves much to be desired. The collection is mirrored between San Francisco and Richmond in the Bay Area, both uncomfortably close to the same major fault systems. There are partial copies in the Netherlands and Egypt, but they are not synchronized with the primary systems.

Now, Andrea Goethals and her co-authors from the IIPC Preservation Working Group have a paper entitled Facing the Challenge of Web Archives Preservation Collaboratively that reports on a survey of Web archives' preservation activities in the following areas: Policy, Access, Preservation Strategy, Ingest, File Formats and Integrity. They conclude:
This survey also shows that long term preservation planning and strategies are still lacking to ensure the long term preservation of web archives. Several reasons may explain this situation: on one hand, web archiving is a relatively recent field for libraries and other heritage institutions, compared for example with digitization; on the other hand, web archives preservation presents specific challenges that are hard to meet.

I discussed the problem of creating and maintaining a remote backup of the Internet Archive's collection in The Opposite of LOCKSS. The Internet Archive isn't alone in having less than ideal preservation of its collection. It's clear the major challenges are the storage and bandwidth requirements for Web archiving, and their rapid growth. Given the limited resources available, and the inadequate reliability of current storage technology, prioritizing collecting more content over preserving the content already collected is appropriate.

LITA: Online Surveys in Libraries: Tips and Strategies

Tue, 2015-07-07 13:00

Editor’s Note: This is part two of a two-part guest post on survey use in libraries by Celia Emmelhainz.

Learning the Craft of Surveys
  • Learn the craft. Survey-building is a craft, so study up on survey design. Luckily for you, there’s a free Coursera course on Questionnaire Design that started on June 1, 2015. I can attest that the lectures are useful.
  • Don’t be afraid to start small and develop more nuanced surveys over time. You’ll learn what sorts of questions and approaches actually work for you.
  • Consider representative, quota, or cluster sampling rather than trying to get responses from everyone. Don’t know what that is? Take Solid Science: Research Methods for free on Coursera, starting August 31, 2015. It’s well worth it for library research.
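
For readers new to cluster sampling, here is a minimal Python sketch under assumed data (the patron names, departments, and the `cluster_sample` helper are all hypothetical): instead of inviting every patron, you randomly pick a few whole clusters, such as academic departments, and survey everyone in them:

```python
import random

def cluster_sample(population, key, n_clusters, seed=None):
    """Pick n_clusters whole clusters at random and return all of
    their members (one-stage cluster sampling)."""
    rng = random.Random(seed)
    # Group the population into clusters by the given key.
    clusters = {}
    for person in population:
        clusters.setdefault(key(person), []).append(person)
    # Randomly choose clusters, then keep every member of each one.
    chosen = rng.sample(sorted(clusters), min(n_clusters, len(clusters)))
    return [p for c in chosen for p in clusters[c]]

patrons = [
    {"name": "Ana", "dept": "History"},
    {"name": "Ben", "dept": "History"},
    {"name": "Cai", "dept": "Biology"},
    {"name": "Dee", "dept": "Music"},
]
# Survey everyone in 2 randomly chosen departments.
sampled = cluster_sample(patrons, key=lambda p: p["dept"], n_clusters=2, seed=1)
```

Quota sampling differs in that you fill fixed counts per subgroup rather than taking whole clusters, but the goal is the same: a defensible sample without surveying everyone.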

Getting Responses
  • Why do this? Nobody wants to take surveys, unless they’re underemployed; is there a compelling reason they should take yours? Is the benefit really worth the collective time?
  • Keep it short. 
  • Seriously, the best way to get responses is to keep it to 4-5 focused questions.
  • A 20-30% response rate is good, especially if you don’t offer prizes. The more focused you can make the invitation look, the better your results may be.
Keep it Useful
  • Mix it Up. Don’t just ask the same questions over and over in a yearly survey. Unless your survey is designed by a social scientist, aligned with the library’s strategic plan, and backed by tools to analyze longitudinal data, you’re not going to make good use of the data.
  • Don’t duplicate. Don’t collect data you could collect elsewhere (usage stats, gate counts). If you see that some questions don’t change much year to year, consider rotating questions in and out so that the survey stays both short and informative.
  • Always run a pilot: check your question wording with a few trial respondents before sending the whole thing out. Feel free to change or eliminate questions that aren’t returning useful answers.
  • Surveys get stronger if multiple institutions or social scientists do them together.
Good Survey Design
  • Important stuff first. Put demographic questions on the last page; getting to the topic is usually more important.
  • Get partial data. Even when you put the important things first, unfinished surveys are normal. Choose a survey program that captures partially-completed pages. SurveyMonkey doesn’t return page results until respondents click ‘next’, while SurveyGizmo seems to capture even partial pages.
  • Consider your pages. The fewer pages you have, the more likely people will complete the survey. But if one page has too many questions, they may also stop. It’s a balancing act!
  • Stay phone-savvy. Check how easy the survey is for smartphone users. I learned the hard way that a long survey may scare mobile users away.
Survey Ethics
  • Get IRB Review. If you plan to publish or present results as ‘scientific research,’ submit the survey to your campus IRB. An anonymous survey may be judged ‘exempt’ from further review, but at least you’ve had the IRB take a look.
  • And/or, respect ethical principles. Often customer surveys, usability studies, educational surveys, or personal surveys of friends online don’t require an ethics review. But it’s good to live by the Belmont Principles anyway: design surveys that respect individuals, are just, do no harm, and benefit others.
  • Even if there is no ethics review required, maximize the benefit and minimize the harm.
  • Don’t collect identifying data. Google Apps and Qualtrics let you extract usernames or demographic data from campus accounts—but that’s likely a violation of privacy. Don’t collect data that could be leaked, and safeguard the data you do collect.
Survey Analysis and Results
  • Have a goal. As my colleague Amanda Rinehart has recommended: a library survey is strongest if you can map each question to a specific hypothesis. Don’t just throw questions into the dark; instead, make sure you can act on the answers to each question you ask.
  • Think before questioning. If analyzing by gender, race, or age isn’t useful, don’t ask those questions. Keep questions closely tied to your hypothesis or survey goal, as you can always survey a different subset of users later.
  • Show the value. Value the time others put into your surveys; make sure you do something for users with the results, and make the link clear!

Any other suggestions? Add them to the comments below!

Celia Emmelhainz is the social sciences data librarian at the Colby College, and leads a collaborative blog for data librarians at databrarians.org. She has worked on library ethnography and survey projects, and currently studies qualitative data archiving, data literacy, and global information research. Find her at @celiemme on twitter, or in the Facebook databrarians group.

Terry Reese: MarcEdit OSX Public Build #2

Tue, 2015-07-07 04:53

Interesting thing about software development — everything can work so great within your own environments, but then be so uneven once they move outside of them.  The variable that changed — real data…and that’s why you make things available for folks to play with.

First, thanks to those who downloaded the preview and gave it a whirl.  I got responses that ranged from “looks great,” to “when will [my favorite missing function] be ported?”, to “I tried clicking this button and things crashed.”  The crashing was something I didn’t expect — but it was a good lesson in making sure that all user data is validated.  I took for granted that all data passed between the API components would be OK — and it wasn’t, and when it wasn’t, problems ensued that could not be fixed without resetting the config settings manually (which made me realize I need to ensure this can be done automatically, as in the Windows/Linux version).

So, I had a late night ahead of me for some unrelated reasons, and I took a crack at hardening the validation and making the portions of the program that accept user data more fault tolerant.  And I’m back to the point where I can’t break it…so I’ll let you all take another crack at it.

If you downloaded the preview yesterday — the first time you open the program, you’ll be notified that a new version is available.  You can click on the download button and follow the link.  Otherwise, you can download the program from the downloads page.

Download Page URL: http://marcedit.reeset.net/downloads
Direct Link: http://marcedit.reeset.net/software/MarcEdit.dmg

Change log is below

–tr

****************************
1.0.7 ChangeLog
****************************
* Bug Fix: Open/Save Dialog Validation — These functions were not validating user data and this was causing problems. These functions now validate data, and if they cannot recover from an error, will simply return a blank value.
* Bug Fix: Run Tasks — Some of the task elements were not running. This has been corrected.
* Bug Fix: Window flashing when running tasks — this still exists a little bit (small flicker), but prior, windows were opening and staying open on each task element.
* Bug Fix: Change File prompt not being run on close – this occurred when an update was made that returned zero results. The value that managed data changes was cleared, and the window was allowed to close without prompt. This has been corrected.
* Bug Fix: The about page wasn’t listing the names that supported this development. This was a regression due to some changes made to how this particular UI component renders. This has been corrected.
* Enhancement: MARC Tools — when selecting a file to process, the program autofills the save file with the appropriate extension.
* Enhancement: MARC Tools — the Edit File button is now enabled after breaking a file
* Enhancement: Document Types — I’ve enabled document type support within the program. The application does not yet self register file extensions to the application, but if you associate the .mrc or .mrk files with the application, it will now handle opening these files correctly.

collidoscope: W3IDs for ISIL

Tue, 2015-07-07 00:00
W3IDs for ISIL

The International Standard Identifier for Libraries and Related Organisations (ISIL / ISO 15511) identifies an organization, i.e. a library, an archive, a museum or a related organization, or one of its subordinate units. The registration of ISILs takes place at the National ISIL Allocation Agencies (see list).

Today, identification is best expressed through HTTP URIs because of their uniqueness. Unfortunately, the distributed maintenance of ISILs across the different national agencies makes it very hard or impossible to reference cultural heritage organizations by ISIL through URIs.

However, some agencies provide ISIL URIs or even linked data services (see the Code List for Cultural Heritage Organizations or the Linked Data Service Adressdaten). But wouldn’t it be nice to access cultural heritage organizations within a single domain?

Permanent Identifiers for the Web (w3id.org) provides a secure, permanent URL re-direction service for Web applications. And this gives us the opportunity to provide a single access point to organization information identified by ISIL.

One can now reference an organization identified by an ISIL through

https://w3id.org/isil/<ISIL>

Such a URI may not be dereferenceable (no data for lookup). E.g., https://w3id.org/isil/CH-000015-0 will not return any organization data but may be used as an identifier for the “Schweizerisches Literaturarchiv, Bern”.
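As a rough sketch (not an official client — the helper name here is made up for illustration), building such a URI from an ISIL is just concatenation plus percent-encoding of any characters that are not URI-safe:

```python
from urllib.parse import quote

def isil_uri(isil):
    # Hypothetical helper: percent-encode the ISIL and append it to the
    # w3id.org/isil namespace. Hyphens are unreserved, so they pass through.
    return "https://w3id.org/isil/" + quote(isil, safe="-")

print(isil_uri("CH-000015-0"))
```

ISILs containing spaces or other reserved characters would be encoded (e.g. a space becomes %20), keeping the resulting URI valid.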

Current ISILs that are supported for dereferencing are:

District Dispatch: So, what does a Google Policy Fellow actually… do?

Mon, 2015-07-06 21:00

Guest post by Johnna Percell, 2015 Google Policy Fellow

I have spent much of my time over the past few weeks answering that question, from well-meaning family members curious about my unfamiliar career trajectory to strangers caught off guard by my answer to the classic DC introductory question: “And what do you do?” Typically, telling them that I’m working at the American Library Association’s Office for Information Technology Policy (OITP) not only adds to their confusion but also provides an excellent opportunity to talk about the important work of libraries.

Albert Einstein Memorial at the National Academies.

What has been most surprising to me is how often my conversations with librarians tend to mirror the confusion of those unfamiliar with the field. My fellow librarians are already aware of the integral role libraries play in supporting the community, defending equal access to information, and bridging the digital divide. However, a surprising number of them don’t seem aware of the important policy work OITP does to ensure that libraries can best fulfill this obligation to the public.

As I wrap up my first month here at the Washington Office I thought I’d take some time to introduce myself, let you know what I’ve been doing with my time, and give you a little glimpse into the vital role policy plays in libraryland.

A little about me – I am a recent graduate of the University of Maryland’s College of Information Studies where I received my MLS with a specialization in Information and Diverse Populations. During my time at Maryland I had the pleasure of serving as president of iDiversity, the first LIS student group that promotes awareness of diversity, inclusivity, and accessibility within the information professions. Prior to beginning my Master of Library Science, I worked as an Education Coordinator for the Community Corrections Improvement Association (CCIA) in Iowa, a nonprofit organization serving the education and housing needs of individuals with community corrections involvement. My work with CCIA confirmed my interest in working to empower underserved populations and introduced me to the role libraries play as a tool to facilitate greater equality in our society.

Here at OITP this summer, I hit the ground running my first day in the office with a visit to the Federal Communications Commission (FCC) to discuss the upcoming Lifeline Modernization proceeding with Commissioner Rosenworcel and Commissioner Clyburn’s staff. The following week I tagged along to the FCC Open Commission Meeting to hear the commissioners’ plans to reform and modernize Lifeline. Now that we have our hands on the Notice of Proposed Rulemaking, I’m working my way through the proposals to see where ALA should weigh in and what role libraries can play in supporting this potentially important step toward closing the digital divide.

In addition to getting a crash-course in FCC proceedings, I’ve been able to attend a number of panel discussions and presentations to increase my understanding of the policies we’re dealing with as well as getting a glimpse into how other organizations are working on these issues. A few highlights include:

  • Kids, Learning, and Technology: Libraries as 21st Century Creative Spaces: One of the first events I attended was a congressional briefing co-hosted by ALA and U.S. Representative Marcia Fudge (D-OH). It was great to see so many people (including some familiar faces from UMD!) show up to discuss the important role of libraries in advancing digital literacy in teens and exciting for me to attend my first congressional briefing. Check out Charlie Wapner’s write-up of the event on the District Dispatch last month.

    (Photo: A capitol chair at the congressional briefing.)

  • The Future of Wifi: Public or Private?: This panel was co-hosted by the Microsoft Innovation & Policy Center and New America’s Open Technology Institute (OTI). Panelists discussed the upcoming FCC sale of spectrum bandwidth which could impact the quality of current wifi service. If you’ve ever wondered how wifi works or who owns the spectrum bands, I recommend delving into the riveting world of spectrum sales and the 2.4 GHz band. This was all new territory for me, and quite a bit more technical than I’m used to, but the implications for a large organization providing public wifi could be significant.
  • Making Mobile Broadband Affordable: Another discussion hosted by New America’s OTI. This one covered a broader view of the two major FCC initiatives that hold the potential to increase the affordability of mobile broadband. In addition to the upcoming spectrum sale, panelists discussed the Lifeline modernization proceedings. FCC Commissioner Clyburn was on hand to give the opening remarks and champion the program she has worked hard to make more effective.
  • Symposium on the Supply Chain for Middle Skill Jobs: The National Academy of Sciences hosted this two-day gathering to discuss a variety of innovative pathways to increase employment and income stability of middle skill employees. A video recording will be available in a few weeks on their site. If you want to get inspired about the important things happening to improve this portion of the job sector, I recommend streaming a few of these conversations.

The rest of my time has been spent researching the important role rural libraries play in their communities to support the Policy Revolution! initiative. I am reading up on the relevant research and reaching out to some rural librarians to get their insight into the needs and opportunities facing the communities they serve. If you’re a rural librarian you may be hearing from me soon!

As you can see, OITP’s work reaches into many corners of librarianship – and I’m just barely scratching the surface in my time here. There is so much more going on in the office and all of this work has direct implications for the work of librarians everywhere. Though it is easy to overlook while serving the complicated and urgent information needs of the patron right in front of you, policy decisions in Washington can seriously facilitate – or hinder – the work you do every day. Having a group of dedicated information professionals advocating for libraries and library patrons here at the nexus of policy-making is indispensable to information professionals everywhere.

So what does a Google Policy Fellow do? Her very best to help advance our field.

The post So, what does a Google Policy Fellow actually… do? appeared first on District Dispatch.

LITA: LITA at ALA Annual, give us your opinions

Mon, 2015-07-06 19:56

Did you attend the 2015 ALA Annual conference in San Francisco?

Thank you! There were loads of dynamic, useful and fun LITA programming at the conference. Now we want your opinions. Please complete our

LITA at ALA Annual conference survey!

LITA programs included:

  • 3 preconferences
  • Sunday afternoon with LITA, including the Top Technology Trends panel
  • Rachel Vacek’s President’s Program with Lou Rosenfeld
  • A total of 20 programs
  • LITA Interest Groups discussions and meetings

You can review the LITA Highlights page for information on LITA programs and activities at Annual Conference, with the link to the full conference scheduler, and check out the LITA Interest Groups special managed discussions list too.

We’re trying very hard to make sure LITA programming meets your needs. To help us we have an

Evaluation Survey for all LITA Programs at 2015 ALA Annual conference.

Now that you’ve attended Annual, we hope you’ll take a few minutes to complete the survey. The results can have a direct effect on future programming from LITA.

Questions or Comments?

For questions or comments contact Mark Beatty, LITA Programs and Marketing Specialist at mbeatty@ala.org or (312) 280-4268.

Islandora: Islandora Conference - The Logo

Mon, 2015-07-06 19:46

We have a little news for the upcoming Islandora Conference, taking place August 3rd to 7th on the campus of UPEI.

A few weeks ago, we got word of an amazing quote from a colleague at York University, who said, "When you fail to mention the #Islandora conference to a potential attendee, a lobster dies of sadness."

There was no way to respond except with Crayola markers and soulful lobster eyes:

Last week our conference team got together to discuss what should go on the t-shirt, and that poor sad lobster just would not go away. He did, however, change his mood to suit t-shirts that will be worn by folks who are going to the conference. We proudly present the logo for the First Annual Islandora Conference:

We hope you'll join us in Charlottetown next month and wear him proudly. Registration is still open!

HangingTogether: University reputation and ranking — getting the researchers on board

Mon, 2015-07-06 18:48

In the first of this series of blog posts about the OCLC Research Library Partnership June meeting in San Francisco, Jim compared the US to other parts of the world in terms of their engagement with research reputation and ranking.  He highlighted one of the things that was common to all of the geographic areas represented at the meeting:  the need for a balance between compliance and service goals.  The library does not want to be seen by researchers as a cop enforcing mandates and gathering assessment data, but rather as a source of support for, if not collaboration in, their research.  As Google Scholar’s Anurag Acharya put it, “Conflicting imperatives abound.”

To ensure that researchers are motivated to participate in activities that contribute to university reputation and ranking, the services we design to meet reputation and ranking goals need to deliver benefits to researchers.  Some of the ways libraries can offer benefits that resonate with researchers are:

  • Reduce the number of times they have to input data.  Register them with ORCID and ISNI.  Assign DOIs for their outputs.  And libraries should speak with a unified voice in attempting to get key research workflow tools to output information as input to other systems.
  • Leverage the library’s mastery of data by populating the profile system, creating personal bibliographies, helping them to find collaborators.  Make sure to accommodate multiple contributors and their roles.  Automate processing, but allow researcher to edit.  Look at how popular Google Scholar is and emulate some of its features locally.
  • Offer guidance on increasing the impact of their work and deploying their outputs where they will receive better exposure.

Measurement of impact favors the sciences by focusing on citation in high-impact STEM journals.  The library can be an ally in providing a more complete record of scholarship by:

  • Including monographs, performances, and other forms of output for arts and humanities disciplines.
  • Considering how awards, tech transfer, and altmetrics fit into the picture.
  • Finding ways to highlight interdisciplinary and global studies.
  • Considering including staff and student research.
  • Working with other libraries to get vendors to incorporate other sources (humanities and social science indexes, WorldCat, etc.) into their systems.

The Library is often seen as a neutral party and therefore could be instrumental in promoting reputation, ranking, and related services on campus. Here are some ways to grease the skids:

  • Take advantage of the library’s space and the library’s power to convene
  • Talk about what you can do, not how you can do it (i.e., don’t use words like: hydra/fedora, infrastructure, IR)
  • By streamlining processes, turn the IR into an institutional bibliography into which/from which all data about research outputs flows: consolidate infrastructure, eliminate redundant work, embed the OA Policy / data management requirements within known processes, include restricted content, promote good metadata practices (full names, contributor roles, etc.), integrate data-tracking activities…
  • Be sensitive to faculty perceptions of assessment.  You may need to overcome researcher distrust of productivity measures and their anxiety about how the data will be combined and used – and who will have access to it.
  • Make your goal to tell the story of your university’s contributions to society.  Connect researchers to that story and to university ranking, both of which are based on researcher reputation.

By doing these things the library will be seen less as an “instrument of compliance” and more as a Partner in achieving the institution’s research goals.

See the presentations by Peter Schiffer, Ginny Steel, David Seaman, Catherine Mitchell, and Amy Brand, whence all these good ideas and more came.  And stay tuned for our next installment of outcomes from the meeting.

About Ricky Erway

Ricky Erway, Senior Program Officer at OCLC Research, works with staff from the OCLC Research Library Partnership on projects ranging from managing born digital archives to research data curation.

Mail | Web | Twitter | LinkedIn | More Posts (39)

Hydra Project: Sufia 6.1 released

Mon, 2015-07-06 14:22

We are pleased to announce that Sufia 6.1 has been released.  A set of release notes can be found here: https://github.com/projecthydra/sufia/releases

If you are currently using Sufia 6.0, we recommend upgrading to Sufia 6.1 as soon as possible.  Beyond the additional features in the release, a number of bugs were fixed.

Thanks to the 16 contributors for this release, which comprised 139 commits touching 187 files: Adam Wead, Michael Tribone, Gregorio Luis Ramirez, Justin Coyne, Nathan Rogers, Michael J. Giarlo, Carolyn Cole, Trey Terrell, Colin Brittle, Anna Headley, Hector Correa, E. Lynette Rayle, Chris Beer, Jeremy Friesen, Colin Gross, and Tricia Jenkins.

Hydra Project: Open Repositories 2017

Mon, 2015-07-06 14:21

Of potential interest to Hydranauts

Call for Expressions of Interest in hosting the annual Open Repositories Conference, 2017

The Open Repositories Steering Committee seeks Expressions of Interest from candidate host organizations for the 2017 Open Repositories Annual Conference. Proposals from all geographic areas will be given consideration.

Important dates

The Open Repositories Steering Committee is accepting Expressions of Interest (EoI) to host the OR2017 conference until August 31st, 2015.  Shortlisted sites will be notified by the end of September 2015.

Background

Candidate institutions must have the ability to host at least a four-day conference of approximately 300-500 attendees (OR2015 held in Indianapolis, USA drew more than 400 people). This includes appropriate access to conference facilities, lodging, and transportation, as well as the ability to manage a range of supporting services (food services, internet services, and conference social events; conference web site; management of registration and online payments; etc.). The candidate institutions and their local arrangements committee must have the means to support the costs of producing the conference through attendee registration and independent fundraising. Fuller guidance is provided in the Open Repositories Conference Handbook on the Open Repositories wiki.

Expressions of Interest Guidelines

Organisations interested in proposing to host the OR2017 conference should follow the steps listed below:

  1. Expressions of Interest (EoIs) must be received by August 31st, 2015. Please direct these EoIs and any enquiries to OR Steering Committee Chair William Nixon <william.nixon@glasgow.ac.uk>.
  2. As noted above, the Open Repositories wiki has a set of pages at Open Repositories Conference Handbook (https://wiki.duraspace.org/display/or11/Open+Repositories+Conference+Handbook) which offer guidelines for organising an Open Repositories conference. Candidate institutions should pay particular attention to the pages listed at “Preparing a bid” before submitting an EoI.
  3. The EoI must include:

* the name of the institution (or institutions in the case of a joint bid)

* an email address as a first point of contact

* the proposed location for the conference venue with a brief paragraph describing the local amenities that would be available to delegates, including its proximity to a reasonably well-served airport

  4. The OR Steering Committee will review proposals and may seek advice from additional reviewers. Following the review, one or more institutions will be invited to submit a detailed proposal.
  5. Invitations to submit a detailed proposal will be issued by the end of September 2015; institutions whose interest will not be taken up will also be notified at that time. The invitations sent out will provide a timeline for submitting a formal proposal and details of additional information available to the shortlisted sites to help in the preparation of their bids.  The OR Steering Committee will be happy to answer specific queries whilst proposals are being prepared.

About Open Repositories

Since 2006 Open Repositories has hosted an annual conference that brings together users and developers of open digital repository platforms. For further information about Open Repositories and links to past conference sites, please visit the OR home page: http://sites.tdl.org/openrepositories/.

Subscribe to announcements about Open Repositories conferences by joining the OR Google Group http://groups.google.com/group/open-repositories.

Mark E. Phillips: Punctuation in DPLA subject strings

Mon, 2015-07-06 14:00

For the past few weeks I’ve been curious about the punctuation characters that are being used in the subject strings in the DPLA dataset I’ve been using for some blog posts over the past few months.

This post is an attempt to find out the range of punctuation characters used in these subject strings and is carried over from last week’s post related to subject string metrics.

What got me started was that, in the analysis used for last week’s post, I noticed a number of instances of em dashes “—” (528 instances) and en dashes “–” (822 instances) being used in place of double hyphens “--” in subject strings from The Portal to Texas History. No doubt these were most likely copied from some other source.  Here is a great subject string that contains all three characters listed above.

Real Property — Texas –- Zavala County — Maps

Turns out this isn’t just something that happened in the Portal data; here is an example from the Mountain West Digital Library.

Highway planning--Environmental aspects–Arizona—Periodicals

To get the analysis started, the first thing I needed to do was establish what I consider punctuation characters, because that definition can change depending on who you are talking to and what language you are using.  For this analysis I’m using the punctuation listed in the Python string module.

>>> import string
>>> print string.punctuation
!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~

So this gives us 32 characters that I’m considering to be punctuation characters for the analysis in this post.

The first thing I wanted to do was to get an idea of which of the 32 characters were present in the subject strings, and how many instances there were.  In the dataset I’m using there are 1,871,877 unique subject strings.  Of those subject strings 1,496,769 or 80% have one or more punctuation characters present.  

Here is the breakdown of the number of subjects that have a specific character present.  One thing to note: during processing, repeated instances of a character within a subject were reduced to a single instance; it doesn’t affect the analysis, just something to note.

Character   Subjects with Character
!                        72
"                     1,066
#                       432
$                        57
%                        16
&                    33,825
'                    22,671
(                   238,252
)                   238,068
*                       451
+                        81
,                   607,849
-                   954,992
.                   327,404
/                     3,217
:                    10,774
;                     5,166
<                     1,028
=                     1,027
>                     1,027
?                     7,005
@                        53
[                     9,872
]                     9,893
\                        32
^                         1
_                        80
`                        99
{                         9
|                        72
}                         9
~                         4
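The tallies above came from this kind of once-per-subject pass. Here is a rough, simplified sketch (not the actual analysis script) of the counting approach:

```python
import string
from collections import Counter

def punctuation_tally(subjects):
    # For each punctuation character, count how many subject strings
    # contain it at least once (repeats within a string count only once,
    # matching the note above).
    punct = set(string.punctuation)
    tally = Counter()
    for subject in subjects:
        for ch in set(subject) & punct:
            tally[ch] += 1
    return tally

tally = punctuation_tally(["Real Property -- Texas -- Maps", "Maps.", "Texas"])
```

Here "Real Property -- Texas -- Maps" contributes one count to "-" despite containing four hyphens, which is the deduplication behavior described above.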

One thing that I found interesting is that the characters () and [] have different numbers of instances, suggesting there are unbalanced brackets and parentheses in subjects somewhere.

Another interesting note is that there are 72 instances of subjects that use the pipe character “|”.  The pipe is often used by programmers and developers as a delimiter because it “is rarely used in the data values.”  This analysis confirms that it is indeed rarely used, but it should be kept in mind that it is sometimes used.

Next up was to look at how punctuation was distributed across the various Hubs.

In the table below I’ve pulled out the total number of unique subjects per Hub in the DPLA dataset.  I show the number of subjects without punctuation and the number with some sort of punctuation, and finally the percentage of subjects with punctuation.

Hub Name                                         Unique Subjects   Without Punct.   With Punct.   % with Punct.
ARTstor                                                    9,560            6,093         3,467           36.3%
Biodiversity_Heritage_Library                             22,004           14,936         7,068           32.1%
David_Rumsey                                                 123              106            17           13.8%
Harvard_Library                                            9,257              553         8,704           94.0%
HathiTrust                                               685,733           56,950       628,783           91.7%
Internet_Archive                                          56,910           17,909        39,001           68.5%
J._Paul_Getty_Trust                                        2,777              375         2,402           86.5%
National_Archives_and_Records_Administration               7,086            2,150         4,936           69.7%
Smithsonian_Institution                                  348,302          152,850       195,452           56.1%
The_New_York_Public_Library                               69,210            9,202        60,008           86.7%
United_States_Government_Printing_Office_(GPO)           174,067           14,525       159,542           91.7%
University_of_Illinois_at_Urbana-Champaign                 6,183            2,132         4,051           65.5%
University_of_Southern_California._Libraries              65,958           37,237        28,721           43.5%
University_of_Virginia_Library                             3,736            1,099         2,637           70.6%
Digital_Commonwealth                                      41,704            8,381        33,323           79.9%
Digital_Library_of_Georgia                               132,160            9,876       122,284           92.5%
Kentucky_Digital_Library                                   1,972              579         1,393           70.6%
Minnesota_Digital_Library                                 24,472           16,555         7,917           32.4%
Missouri_Hub                                               6,893            2,410         4,483           65.0%
Mountain_West_Digital_Library                            227,755           84,452       143,303           62.9%
North_Carolina_Digital_Heritage_Center                    99,258            9,253        90,005           90.7%
South_Carolina_Digital_Library                            23,842            4,002        19,840           83.2%
The_Portal_to_Texas_History                              104,566           40,310        64,256           61.5%

To make it a little easier to see, I made a graph of this same data and divided it into two groups: on the left are the Content Hubs and on the right are the Service Hubs.

Percent of Subjects with Punctuation

I don’t see a huge difference between the two groups and the percentage of punctuation in subjects, at least by just looking at things.

Next I wanted to see, out of the 32 characters considered in this post, how many are present in a given hub’s subjects.  That data is in the table and graph below.

Hub Name                                         Characters Present
ARTstor                                                          19
Biodiversity_Heritage_Library                                    20
David_Rumsey                                                      7
Digital_Commonwealth                                             21
Digital_Library_of_Georgia                                       22
Harvard_Library                                                  12
HathiTrust                                                       28
Internet_Archive                                                 26
J._Paul_Getty_Trust                                              11
Kentucky_Digital_Library                                         11
Minnesota_Digital_Library                                        16
Missouri_Hub                                                     14
Mountain_West_Digital_Library                                    30
National_Archives_and_Records_Administration                     10
North_Carolina_Digital_Heritage_Center                           23
Smithsonian_Institution                                          26
South_Carolina_Digital_Library                                   16
The_New_York_Public_Library                                      18
The_Portal_to_Texas_History                                      22
United_States_Government_Printing_Office_(GPO)                   17
University_of_Illinois_at_Urbana-Champaign                       12
University_of_Southern_California._Libraries                     25
University_of_Virginia_Library                                   13

Here is this data in a graph grouped in Content and Service Hubs.

Unique Punctuation Characters Present

Mountain West Digital Library had the most characters covered, with 30 of the 32 possible punctuation characters. On the low end was the David Rumsey collection, with only 7 characters represented in the subject data.

The final thing is to see the character usage for all characters divided by hub, so the following graphic presents that data.  I tried to do a little coloring of the table to make it a bit easier to read; I don’t know how well I accomplished that.

Punctuation Character Usage (click to view larger image)

So it looks like the characters ' ( ) , - . are present in all of the hubs.  The characters % / ? : are present in almost all of the hubs (each missing from just one hub).

The least used character is the ^, which is in use by only one hub in one record.  The characters ~ and @ are used in only two hubs each.

I’ve found this quick look at punctuation usage in subjects pretty interesting so far.  This work unearthed some anomalies in the Portal dataset that we now have on the board to fix; they aren’t huge issues, but they are things that would probably stick around for quite some time in a set of records without specific identification.

For me the next step is to see if there is a way to identify punctuation characters that are used incorrectly, and to flag those fields and records in some way to report back to metadata creators.
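A first pass at that kind of flagging (a sketch with a hand-picked character list, not a finished rule set) might just look for dash characters that sit outside the expected double-hyphen subdivision convention:

```python
def flag_bad_dashes(subject):
    # Em dash (U+2014) and en dash (U+2013) in LCSH-style headings are
    # usually copy-paste stand-ins for the "--" subdivision separator.
    suspects = {"\u2014": "em dash", "\u2013": "en dash"}
    return [name for ch, name in suspects.items() if ch in subject]

flags = flag_bad_dashes(
    "Highway planning--Environmental aspects\u2013Arizona\u2014Periodicals")
```

Running this over the Mountain West example above would flag both dash types, while a clean subject like "Real Property -- Texas" would come back with no flags.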

Let me know what you think via Twitter if you have questions or comments.

 

Terry Reese: MarcEdit OSX Public Preview 1

Mon, 2015-07-06 03:40

It’s with a little trepidation that I’m formally making the first Public Preview of the MarcEdit OSX version available for download and use.  In fact, as of today, this version is now *the* OSX download available on the downloads page.  I will no longer be building the old code-base for use on OSX.

When I first started this project around Mid-April, I began knowing that this process would take some time.  I’ve been working on MarcEdit continuously for a little over 16 years.  It’s gone through one significant rewrite (when the program moved from Assembly to C#) and has had way too many revisions to count.  In agreeing to take on the porting work — I’d hoped that I could port a significant portion of the program over the course of about 8 months and that by the end of August, I could produce a version of MarcEdit that would cover the 80% or so of the commonly used application toolset.  To do this, it meant porting the MARC Tools portion of the application and the MarcEditor.

Well, I’m ahead of schedule.  Since about 2014, I’ve been reworking a good deal of the application to support a smoother porting process sometime in the future — though, honestly, I wasn’t sure that I’d ever actually do the porting work.  Pleasantly, this early work has made a good deal of the porting easier, allowing me to move faster than I’d anticipated.  As of this posting, a significant portion of that 80% has been converted, and I think that for many people, most of what they probably use daily has been implemented.  And while I’m ahead of schedule and have been happy with how the porting process has gone, make no mistake — it’s been a lot of work, and a lot of code.  Even though this work has primarily centered around rewriting just the UI portions of MarcEdit, you are still talking, as of today, close to 200,000 lines of code.  This doesn’t include the significant amount of work I’ve done around the general assemblies, which has provided improvements to all MarcEdit users.  Because of that, I need to start getting feedback from users.  While the general assemblies go through an automated testing process, I haven’t yet come up with an automated testing process for the OSX build.  This means that I’m testing things manually and simply cannot go through the same level of testing that I do each time I build the Windows version.  Most folks may not realize it, but it takes about a day to build the Windows version, as the program goes through various unit tests processing close to 25 million records.  I simply don’t have an equivalent of that process yet, so I’m hoping that everyone interested in this work will give it a spin, use it for real work, and let me know if/when things fall down.

In creating the Preview, I’ve tried to make the process for users as easy as possible.  Users interested in running the program simply need to be running at least OSX 10.8 and download the dmg found here: http://marcedit.reeset.net/downloads.  Once downloaded, run the dmg and a new disk image called MarcEdit OSX will mount.  Open it, and you’ll see the following installer:

MarcEdit OSX installer

Drag the MarcEdit icon into the Applications folder and the application will either install, or overwrite an existing version.  That’s it.  No other downloads are necessary.  On first run, the program will generate a marcedit folder under /users/[yourid]/marcedit.  I realize that this isn’t completely normal — but I need the data accessible outside of the normal app sandbox to easily support updates.  I’d also considered the User Documents folder, but the configuration data probably shouldn’t live there either.  So, this is where I ended up putting it.

So what’s been completed?  Essentially, all of the MARC Tools functions and a significant amount of the MarcEditor.  There are some conspicuous absences at this point, though: the Call Number and Fast Heading generation, the Delimited Text Translator and Exporter, the Select and Delete Selected Records functions, everything Z39.50-related, as well as the Linked Data tools and the integration work with OCLC and Koha.  None of these are currently available, but all will be worked on.  At this point, what users can do is let me know which absent components are impacting you the most, and I’ll see how they fit into the current development roadmap.

Anyway — that’s it.  I’m excited to let you all give this a try, and a little nervous as well.  This has been a significant undertaking which has definitely pushed me a bit, requiring me to learn Objective-C in a short period of time, as well as quickly assimilate a significant portion of Apple’s SDK documentation relating to UI design.  I’m sure I’ve missed things, but it’s time to let other folks start working with it.

If you have been interested in this work — download the installer, kick the tires, and give feedback.  Just remember to be gentle.  

–TR

Download URL: http://marcedit.reeset.net/downloads


Terry Reese: MarcEdit 6.1 Update

Mon, 2015-07-06 03:33

This was something I’d hoped to get into the last update, but I didn’t have the time to test it then, so I’ve finished it now.  At the first MarcEdit User Group meeting at ALA, there was a question about supporting 880 fields when exporting data in tab-delimited format.  Currently, the tool exports all of the 880 fields, not a specific 880 field.  This update changes that: after updating, when you select the 880 field in the Export Tab Delimited tool, the program will ask you for the linking field.  The program will then match on the 880$6[linkingfield] and pull the selected subfield.  I’m not sure how often this comes up — but it certainly made a lot of sense when the problem was described to me.
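For readers unfamiliar with how the $6 linkage works: an 880 field’s subfield $6 carries a value like “245-01/$1” — the tag of the paired field, an occurrence number, and an optional script identifier.  The matching behavior described above can be sketched roughly like this (a simplified illustration using plain tuples rather than MarcEdit’s internals; the function and data names are my own):

```python
def filter_880s(fields, linking_tag, subfield_code):
    """Return values of `subfield_code` from 880 fields whose $6 links
    to `linking_tag` -- the "880$6[linkingfield]" match described above.

    `fields` is a list of (tag, {subfield_code: value}) tuples, a
    simplified stand-in for parsed MARC fields.
    """
    out = []
    for tag, subfields in fields:
        if tag != "880":
            continue
        link = subfields.get("6", "")
        # In $6, the paired field's tag precedes the hyphen (e.g. "245-01/$1")
        if link.split("-")[0] == linking_tag and subfield_code in subfields:
            out.append(subfields[subfield_code])
    return out

record = [
    ("245", {"6": "880-01", "a": "Romanized title /"}),
    ("880", {"6": "245-01/$1", "a": "Vernacular title /"}),
    ("880", {"6": "700-01/$1", "a": "Vernacular author."}),
]
print(filter_880s(record, "245", "a"))  # -> ['Vernacular title /']
```

With a linking field of 245, only the 880 paired with the 245 is exported; the 880 linked to the 700 is skipped — rather than all 880s being dumped as before.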

You can pick up the download at: http://marcedit.reeset.net/downloads

–tr
