Feed aggregator

Library of Congress: The Signal: Digital Library Federation to Host National Digital Stewardship Alliance

planet code4lib - Tue, 2015-10-20 21:11

The National Digital Stewardship Alliance announced that it has selected the Digital Library Federation (DLF), a program of the Council on Library and Information Resources (CLIR), to serve as NDSA’s institutional home starting in January 2016. The selection and announcement follow a nationwide search and evaluation of cultural heritage, membership, and technical service organizations, conducted in consultation with NDSA working groups, their members, and external advisors.

Launched in 2010 by the Library of Congress as part of the National Digital Information Infrastructure and Preservation Program, with over 50 founding members, the NDSA works to establish, maintain, and advance the capacity to preserve our nation’s digital resources for the benefit of present and future generations. For an inaugural four-year term, the Library of Congress provided secretariat and membership management support to the NDSA, contributing working group leadership, expertise, and administrative support. Today, the NDSA has 165 members, including universities, government and nonprofit organizations, commercial businesses, and professional associations.

CLIR and DLF have, respectively, a 60- and 20-year track record of dedication to preservation and digital stewardship, with access to diverse communities of researchers, administrators, developers, funders, and practitioners in higher education, government, science, commerce, and the cultural heritage sector.

“We are delighted at this opportunity to support the important work of the NDSA and collaborate more closely with its leadership and vibrant community,” said DLF Director Bethany Nowviskie. “DLF shares in NDSA’s core values of stewardship, collaboration, inclusiveness, and open exchange. We’re grateful for the strong foundation laid for the organization by the Library of Congress, and look forward to helping NDSA enter a new period of imagination, engagement, and growth.”

CLIR President Chuck Henry added, “The partnership between NDSA and DLF should prove of significant mutual benefit and national import: both organizations provide exemplary leadership by promoting the highest standards of preservation of and access to our digital cultural heritage. Together they will guide us wisely and astutely further into the 21st century.”

The mission and structure of the NDSA will remain largely unchanged and it will be a distinct organization within CLIR and DLF, with all organizations benefiting from the pursuit of common goals while leveraging shared resources. “The Library of Congress fully supports the selection of DLF as the next NDSA host and looks forward to working with NDSA in the future,” said Acting Librarian of Congress David Mao. “The talent and commitment from NDSA members coupled with DLF’s deep experience in supporting collaborative work and piloting innovative digital programs will ensure that NDSA continues its excellent leadership in the digital stewardship community.”

“The Library of Congress showed great vision and public spirit in launching the NDSA. And with the Library’s support and guidance, NDSA has grown to embrace a broad community of information stewards,” said Micah Altman, chair of the NDSA Coordinating Committee. “With the support and leadership of CLIR and DLF we aspire to broaden and catalyze the information stewardship community to safeguard permanent access to the world’s scientific evidence base, cultural heritage, and public record.”

CLIR is an independent, nonprofit organization that forges strategies to enhance research, teaching, and learning environments in collaboration with libraries, cultural institutions, and communities of higher learning. It aims to promote forward-looking collaborative solutions that transcend disciplinary, institutional, professional, and geographic boundaries in support of the public good. CLIR’s 186 sponsoring institutions include colleges, universities, public libraries, and businesses.

The Digital Library Federation, founded in 1995, is a robust and diverse community of practice, advancing research, learning, and the public good through digital library technologies. DLF connects its parent organization, CLIR, to an active practitioner network, consisting of 139 member institutions, including colleges, universities, public libraries, museums, labs, agencies, and consortia. Among DLF’s NDSA-related initiatives are the eResearch Network, focused on data stewardship across disciplines, and the CLIR/DLF Postdoctoral Fellows program, with postdocs in data curation for medieval, early modern, visual studies, scientific, and social science data, and in software curation.

Jonathan Rochkind: Blacklight Strengths, Weaknesses, Health, and Future

planet code4lib - Tue, 2015-10-20 20:49
My Own Personal Opinion Analysis of Blacklight Strengths, Weaknesses, Health, and Future

My reflections on the Blacklight Community Survey results, and my own experiences with BL.

What people like about BL is its flexibility; what people don’t like is its complexity and backwards/forwards compatibility issues.

When developing any software, especially a shared library/gem, it is difficult to create a package that is, on the one hand, very flexible, extensible, and customizable, and on the other maintains a simple and consistent codebase, backwards compatibility with easy upgrades, and simple installation with a shallow learning curve for common use cases.

In my software engineering career, I see these tensions as one of the fundamental challenges in developing shared software. It’s not unique to Blacklight, but I think Blacklight is having some difficulties in weathering that challenge.

I think the diversity of Blacklight versions in use is a negative indicator for community health. People on old unsupported versions of BL (or Rails) can run into bugs which nobody can fix for them; and even if they put in work on debugging and fixing them themselves, it’s less likely to lead to a patch that can be of use to the larger BL community, since they’re working on an old version. It reduces the potential size of our collaborative development community. And it puts those running old versions of BL (or Rails) in a difficult spot eventually after much deferred upgrading, when they find themselves on unmaintained software with a very challenging upgrade path across many versions.

Also, when a new BL release drops and isn’t actually put into production by anyone (not even core committers?) for many months, that increases the chances that severe bugs are present but not yet found even in months-old releases (we have seen this happen), which can become a vicious circle that makes people even more reluctant to upgrade.

And we have some idea why BL applications aren’t being upgraded: even though only a minority of respondents reported going through a major BL upgrade, difficulty of upgrading is one of the most prominent themes in the reported biggest challenges with Blacklight. I know I personally have found that maintaining a BL app responsibly (which to me means keeping up with Rails and BL releases without too much lag) has had a much higher “total cost of ownership” than I expected or desire; you can perhaps guess that part of my motivation in releasing this survey was to see whether I was alone. I see I am not.

I think these pain points are likely to get worse: many existing BL deployments were originally written for BL 5.x and haven’t yet had to deal with them, but will; and many people currently following a “release, forget, and never upgrade” practice may come to realize it is untenable. (“Software is a growing organism”, per Ranganathan’s fifth law. Wait, Ranganathan wasn’t talking about software?)

To be fair, Blacklight core developers have gotten much better at backwards compatibility, in BL 4.x and especially 5.x, in the sense that backwards-incompatible changes within a major BL version have been kept, with much success, minimal to non-existent (in keeping with semver’s requirements for release labelling). This is a pretty major accomplishment.
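To spell out what that semver discipline buys an app maintainer: a Blacklight app can pin the gem with a pessimistic version constraint and take point releases with reasonable confidence. A minimal Gemfile sketch (the version number here is made up for illustration):

```ruby
# Illustrative Gemfile excerpt for a hypothetical Blacklight app.
source "https://rubygems.org"

# Under semver, the pessimistic constraint "~> 5.10" accepts any release
# at or above 5.10 but below 6.0, so `bundle update blacklight` should
# only pull in backwards-compatible releases within the 5.x line.
gem "blacklight", "~> 5.10"
```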

But the backwards compatibility is not accomplished by minimizing code or architectural churn. Rather, the changes are still pretty fast and furious, but the old behavior is left in and marked deprecated. Ironically, this has the effect of making the BL codebase even more complicated and hard to understand, with multiple duplicative or incompatible architectural elements co-existing and sometimes never fully disappearing. (More tensions between different software-quality values; inherent challenges for any large software project.) In BL 5.x, the focus on maintaining backwards compat was fierce, but we sometimes got deprecated behavior in one 5.x release, with suggested new behavior, and that suggested new behavior was sometimes itself deprecated in a later 5.x release in favor of yet newer behavior. Backwards compatibility is strictly enforced, but the developer’s burden of keeping up with churn may not be as lightened as one would expect.
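Mechanically, that deprecation approach looks something like the following sketch. The class and method names are invented for illustration, with a bare `warn` standing in for whatever deprecation tooling the project actually uses:

```ruby
# Hypothetical sketch of the "leave the old behavior in, mark it
# deprecated" pattern; class and method names are invented.
class SearchService
  # The new, preferred entry point.
  def fetch_results(user_params)
    # ... build and execute the Solr request ...
  end

  # The old entry point survives for backwards compatibility: it warns,
  # then delegates to the new method. It can't be removed until the next
  # major version, so the class carries both APIs (and their tests and
  # docs) for the life of the major-version line.
  def get_results(user_params)
    warn "[DEPRECATION] get_results is deprecated; use fetch_results"
    fetch_results(user_params)
  end
end
```

Each such shim keeps old apps running, but it is also exactly the duplicative, co-existing architecture the paragraph above describes.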

Don’t get me wrong, I think some of the 5.x changes are great designs. I like the new SearchBuilder architecture. But it was dropped in bits and pieces over multiple 5.x releases, without much documentation for much of it, making it hard to keep up with for a non-core developer not participating in writing it. And the current implementation still has, to my mind, some inconsistencies or non-optimal choices (like a trailing `!` being used inconsistently to signal whether a method mutates the receiver or returns a modified dup), which, now that they are in a release, need to be maintained for backwards compatibility. (Or, if changed in a major version bump, they still cause backwards-compat challenges for existing app maintainers; labeling a release a major version doesn’t reduce those challenges, only reducing the velocity of such changes does.)
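For anyone less steeped in Ruby convention, the discipline at issue looks like this minimal sketch (the class is hypothetical, not actual Blacklight code): the bang variant mutates the receiver, while the non-bang variant returns a modified copy.

```ruby
# Hypothetical illustration of the Ruby bang-method convention: merge!
# mutates the receiver; merge returns a modified duplicate.
class QueryParams
  attr_reader :params

  def initialize(params = {})
    @params = params
  end

  # Non-bang: the receiver is left untouched; a modified copy is returned.
  def merge(other)
    dup.merge!(other)
  end

  # Bang: modify the receiver in place and return self.
  def merge!(other)
    @params = @params.merge(other)
    self
  end
end

q = QueryParams.new(rows: 10)
copy = q.merge(q: "maps")  # q.params is still {rows: 10}
q.merge!(q: "maps")        # q.params now includes :q as well
```

Applied consistently, a reader can tell at the call site whether a builder step is safe to chain without side effects; applied inconsistently, every call site has to be checked against the source.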

In my own personal opinion, Blacklight’s biggest weakness and biggest challenge for continued and increased success is figuring out ways to maintain the flexibility, while significantly reducing code complexity, architectural complexity, code churn, and backwards incompatibility/deprecation velocity.

What can be done (in my own personal opinion)?

These challenges are not unique to Blacklight; in my observation, opinion, and experience, they are tensions present in nearly any shared non-trivial codebase. But Blacklight can, perhaps, choose to take a different tack in approaching them: focus on different priorities in code evolution, and adopt practices that strike a better balance.

The first step is consensus on the nature of the problem, which we may not have; this is just my own opinion, and I’m hoping this survey can help people think about BL’s strengths and weaknesses and build consensus.

In my own brainstorming about possible approaches, I come up with a few tentative, brainstorm-quality possibilities:

  • Require documentation for major architectural components. We’ve built a culture in BL (and in much of the open source world) that a feature isn’t done and ready to merge until it’s covered by tests; I think we should build a similar culture around documentation, which BL and much of the open source world lack: a feature isn’t done and ready to merge until it’s documented. But this adds its own challenge: in a codebase with high churn, you now have to keep the docs updated too, lest they become out of date and inaccurate (something BL also hasn’t always kept up with)…
  • Accept less refactoring of internal architecture to make the code cleaner and more elegant. Sometimes you’ve just got to stick with what you’ve got for longer, even when a change would improve the code’s architecture, as many of the 5.x changes have. There’s an irony here. Often the motivation for an internal architectural refactoring is to better support something one wants to do: you can do it in the current codebase, but only in a hacky, not-really-supported way that’s likely to break in future BL versions, so you want to introduce the architecture that will let you do it in a ‘safer’, forwards-compatible way. The irony is that the constant refactorings to introduce these better architectures produce a net reduction in forwards compatibility, since they are always breaking some existing code.
  • Be cautious of the desire to expel functionality to external plugins. External BL plugins generally receive less attention: they are likely to lag behind current BL, and it has been difficult to figure out which version of an external plugin is actually compatible with which version of BL. If you’re always on the bleeding edge you don’t notice, but if you’re on an older version of BL and trying to upgrade, figuring out plugin compatibility can be a major nightmare (see the Gemfile sketch after this list). Expelling code to plugins makes core BL easier to maintain, but at the cost of making the plugins harder to maintain, less likely to receive maintenance, and harder for BL installers to use. If the plugin code is an edge case not used by many people, that may make sense. But I continue to worry about the expulsion of MARC support to a plugin: MARC is not used by as many BL implementers as it used to be, but “library catalog/discovery” is still the BL use for a third of survey respondents, and MARC is still used by nearly half.
  • Do major refactorings in segregated branches, only merging into master (and including in releases) when they are “fully baked”. What does fully baked mean? I guess it means understanding the use cases that will need to be supported; having a ‘story’ about how to use the architecture for those use cases; and having a few people actually look over the code, try it out, and give feedback. In BL 5.x, there were a couple of major architectural refactorings, but they were released in dribs and drabs over multiple BL releases, sometimes reversing themselves, sometimes after realizing there were important use cases which couldn’t be supported. This adds TCO/maintenance burden for BL implementers, and adds backwards-compat-maintaining burden for BL core developers when they realize something already released should have been done differently. If I understand right, the primary motivation for some of the major 5.x-to-6.0 architectural refactorings was to support ElasticSearch as an alternate back-end. But while these refactorings have already been released, there has actually been no demonstration of ElasticSearch as a back-end; it’s not done yet. Without such demonstrations trying and testing the architecture, how confident can we be that these refactorings will actually be sufficient or the right direction for the goal? Yet more redesigns may be needed before we get there.
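To make the plugin-compatibility point above concrete, here is the kind of Gemfile pinning an implementer ends up doing. The plugin gem names below are real Blacklight plugins, but the version pairings are purely illustrative, the sort discovered by trial and error rather than read off a published compatibility matrix:

```ruby
# Hypothetical Gemfile excerpt; the version pairings are illustrative only,
# not a tested compatibility matrix.
gem "blacklight", "~> 5.14"

# Which plugin releases actually work against this BL line? The gemspecs
# don't always say, so pins like these get found by trial and error and
# then frozen in place until the next (painful) upgrade.
gem "blacklight-marc", "~> 5.9"
gem "blacklight_range_limit", "~> 5.0"
```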

When I brought up this last point with a core BL developer, he said it was unrealistic, given the limited developer resources available to BL.

It’s true that there are very few developers making non-trivial commits to BL, and that BL does function in an environment of limited developer resources, which is a challenge. In fact, though, studies have shown that most successful open source projects have the vast majority of commits contributed by only 1-3 developers. (Darnit, I can’t find the cite now.)

I wonder if, beyond developer resources as a ‘quantity’, the nature of the developers and their external constraints matters. Are many of the core BL developers working for vendors, where most hours need to be billable to clients on particular projects which need to be completed as quickly as practical? Or working for universities that take a similar ‘entrepreneurial’ approach, where most developer hours are spent on ‘sprints’ for particular features on particular projects?

Is anyone given time to steward the overall direction and architectural soundness of Blacklight?  If nobody really has such time, it’s actually a significant accomplishment that BL’s architecture has continued to evolve and improve regardless. But it’s not a surprise that it’s done so in a fairly chaotic and high-churn way, where people need to get just enough to accomplish the project in front of them into BL, and into a release asap.

I suspect that BL may, at this point in its development, need a bit more formality and transparency in who makes major decisions. (E.g., who decided that supporting ElasticSearch was a top priority, and how?) (And I say this as someone who, five years ago at the beginning of BL, didn’t think we needed any more formality there than a bunch of involved developers reaching consensus on a weekly phone call, which I don’t think happens anymore. But I’ve learned from experience, and BL is at a different point in its life cycle now.)

In software projects where I do have some say (I haven’t made major commits to BL in at least 2-3 years), and that are expected to have long lives, I’ve come to push for a sort of “slow programming” approach (compare ‘slow food’, etc.): consider changes carefully, even at the cost of reducing the velocity of improvements; release nothing into master before its time; prioritize backwards compatibility over time (not just over major releases, but actual calendar time). Treat your code like a bonsai tree, not a last-minute term paper. But sometimes you can get away with this and sometimes you can’t; sometimes your stakeholders will let you and sometimes they won’t; and sometimes it isn’t really the right decision.

Software design is hard!


