Feed aggregator

Library of Congress: The Signal: Welcoming the Newest Member of the Viewshare Team to the Library

planet code4lib - Wed, 2014-09-17 15:49

The following is a guest post by Patrick Rourke, an Information Technology Specialist and the newest member of the Library’s Viewshare team.

I made my first forays into computing on days when it was too cold, wet or snowy to walk in the woods behind our house, in a room filled with novels, atlases and other books.  Usually those first programming projects had something to do with books, or writing, or language – trying to generate sentences from word lists, or altering the glyphs the computer used for text to represent different alphabets.

After a traumatic high school exposure to the COBOL programming language (Edsger Dijkstra once wrote that “its teaching should be regarded as a criminal offense” (pdf)), in college I became fascinated with the study of classical Greek and Roman history and literature. I was particularly drawn to the surviving fragments of lost books from antiquity – works that were not preserved, but of which traces remain in small pieces of papyrus, in palimpsests, and through quotations in other works. I spent a lot of my free time in the computer room, using GML, BASIC and ftp on the university’s time sharing system.

My first job after graduation was on the staff of a classics journal, researching potential contributors, proofreading, checking references. At that time, online academic journals and electronic texts were being distributed via email and the now almost-forgotten medium of Gopher. It was an exciting time, as people experimented with ways to leverage these new tools to work with books, then images, then the whole panoply of cultural content.

This editorial experience led to a job in the technical publications department of a research company, and my interest in computing to a role as the company webmaster, and then as an IT specialist, working with applications, servers and networking. In my spare time, I stayed engaged with the humanities, doing testing, web design and social media engagement for the Suda On Line project, who publish a collaborative translation and annotation of the 10th century Byzantine lexicon in which many of those fragments of lost books are found.

My work on corporate intranets and my engagement with SOL motivated me to work harder on extending my programming skills, so before long I was developing web applications to visualize project management data and pursuing a master’s degree in computer science.  In the ten years I’ve been working as a developer, I’ve learned a lot about software development in multiple languages, frameworks and platforms, worked with some great teams and been inspired by great mentors.

I join the National Digital Information Infrastructure and Preservation Program as an Information Technology Specialist, uniting my interests in culture and computing. My primary project is Viewshare, a platform the Library makes available to cultural institutions for generating customized visualizations – including timelines, maps, and charts – of digital collections data. We will be rolling out a new version of Viewshare in the near future, and then I will be working with the NDIIPP team and the Viewshare user community on enhancing the platform by developing new features and new ways to view and share digital collections data. I’m looking forward to learning from and working with my new colleagues at the Library of Congress and everyone in the digital preservation community.

Islandora: Using Intellij IDEA Ultimate as a Dev Environment for Islandora.

planet code4lib - Wed, 2014-09-17 14:48

In the past I have used Netbeans as my preferred environment for developing Islandora code, and I periodically tried Eclipse and others to see if they had any new must-have features. At Drupalcon Portland in 2013 I noticed many of the presenters were using PHPStorm, and developers spoke highly of it, so I thought I should give it a try.

Most of the code for Islandora is PHP, but some of the open source projects we rely on are written in Java or something else, so instead of trying out PHPStorm I downloaded a trial of IntelliJ IDEA Ultimate Edition, which has the functionality of PHPStorm (via a plugin) plus support for many other languages and frameworks.

My first impressions of IDEA Ultimate Edition were good. It was quick to load (compared to Netbeans), and the user interface was snappy, with no lag for code completion and the like. I also really liked the Darcula theme, which was easy on the eyes. My first impression of IDEA was enough to make me think it was worthwhile to spend a bit more time using it. The more I used it, the more I liked it! I have been using IDEA as my main IDE for a year now.

IDEA has many plugins and supports many frameworks for various languages, so initial configuration can take some time, but once you have things configured it works well and runs smoothly. Islandora has strict coding standards, and IDEA is able to help with this; we can point it at the same CodeSniffer configuration that the Drupal Coder module uses. IDEA then highlights anything that does not conform to the configured coding standards, and it will also fix a lot of the formatting errors if you choose to reformat the code. The PHP plugin also has support for Mess Detector, Composer, etc.

I also like the PHP debugger in IDEA. You can have several different configurations set up for various projects. While the debugger is a useful tool, I have run into some situations where it opens a second copy of a file in the editor, which can cause issues if you don't notice.

You can also open an SSH session within IDEA, which is great for running things like git commands. The editor does have built-in support for git, svn, etc., but I prefer to use the command line for this, and in IntelliJ I can do so while still in the IDE.

IDEA has good support for editing XML files and running/debugging transforms within the IDE.

Overall, IntelliJ IDEA Ultimate is definitely worth trying! It is a commercial product, so be prepared to buy a license after your trial. They do have a free Community Edition, but be sure to check whether it supports PHP. Most of the functionality I discussed here is also available in PHPStorm, which is cheaper but doesn't support languages other than PHP, HTML, etc. If you are part of an open source project you can apply for an open source license (Islandora has one); if you qualify, you may get a free license.

District Dispatch: It must be “FCC month” at the ALA

planet code4lib - Wed, 2014-09-17 14:26

Well, yes, almost any month could be “FCC month” with the number of proceedings that affect libraries and our communities, but September has been particularly busy. Monday we entered the next round of E-rate activity with comments in response to the Federal Communications Commission’s Further Notice of Proposed Rulemaking (emphasis added), and closed out a record-setting public comment period in relation to promoting and protecting the Open Internet with two public filings.

I’ll leave it to Marijke to give the low-down on E-rate, but here’s a quick update on the network neutrality front:

ALA and the Association of College & Research Libraries (ACRL) filed “reply” comments with a host of library and higher education allies to further detail our initial filing in July. We also joined with the Center for Democracy & Technology (CDT) to re-affirm that the FCC has legal authority to advance the Open Internet through Title II reclassification or a strong public interest standard under Section 706. This work is particularly important as most network neutrality advocates agree the “commercially reasonable” standard originally proposed by the FCC does not adequately preserve the culture and tradition of the internet as an open platform for free speech, learning, research and innovation.

For better or worse, these filings are just the most recent milestones in our efforts to support libraries’ missions to ensure equitable access to online information. Today the FCC is beginning to hold round tables related to network neutrality (which you can catch online at www.fcc.gov/live). ALA and higher education network neutrality counsel John Windhausen has been invited to participate in a roundtable on October 7 to discuss the “Internet-reasonable” standard we have proposed as a stronger alternative to the FCC’s “commercially reasonable” standard.

The Senate will take up the issue in a hearing today that includes CDT President and CEO Nuala O’Connor. And a library voice will again be included in a network neutrality forum—this time with Sacramento Public Library Director Rivkah Sass speaking at a forum convened by Congresswoman Doris Matsui on September 24. Vermont State Librarian Martha Reid testified at a Senate field hearing in July, and Multnomah County Library Director Vailey Oehlke discussed network neutrality with Senator Ron Wyden as part of an event in May.

This month ALA also filed comments in support of filings from the Schools, Health and Libraries Broadband (SHLB) Coalition, the State E-rate Coordinators Alliance (SECA) and NTCA—the Rural Broadband Association, calling for eligible telecommunications carriers (ETCs) in the Connect America Fund to connect anchor institutions at higher speeds than those delivered to residents. Going further, ALA proposes that ETCs receiving CAF funding must serve each public library in their service territories at connection speeds of at least 50 Mbps download and 25 Mbps upload. Access and affordability are the top two barriers to increasing library broadband capacity, so both the Connect America Fund and the E-rate program are important components of increasing our ability to meet our public missions. AND we presented at the Telecommunications Policy Research Conference! Whew.

Buckle your seat belts and stay tuned, because “FCC Month” is only half over!

The post It must be “FCC month” at the ALA appeared first on District Dispatch.

DPLA: More than 148,000 items from the U.S. Government Printing Office now discoverable in DPLA

planet code4lib - Wed, 2014-09-17 13:50

We were pleased to share yesterday that nearly 60,000 items from the Medical Heritage Library have made their way into DPLA, and we’re now doubly pleased to share that more than 148,000 items from the Government Printing Office’s (GPO) Catalog of U.S. Government Publications (CGP) are now also available via DPLA.

To view the Government Printing Office in DPLA, click here.

Notable examples of the types of records now available from the GPO include the Federal Budget, laws such as the Patient Protection and Affordable Care Act, Federal regulations, and Congressional hearings, reports, and documents. GPO continuously adds records to the CGP which will also be available through DPLA, increasing the discoverability of and access to Federal Government information for the American public.

“GPO’s partnership with DPLA will further GPO’s mission of Keeping America Informed by increasing public access to a wealth of information products available from the Federal Government,” said Public Printer Davita Vance-Cooks. “We look forward to continuing this strong partnership as the collection of Government information accessible through DPLA continues to grow”.

GPO is the Federal Government’s official, digital, secure resource for producing, procuring, cataloging, indexing, authenticating, disseminating, and preserving the official information products of the U.S. Government. The GPO is responsible for the production and distribution of information products and services for all three branches of the Federal Government, including U.S. passports for the Department of State as well as the official publications of Congress, the White House, and other Federal agencies in digital and print formats. GPO provides for permanent public access to Federal Government information at no charge through our Federal Digital System (www.fdsys.gov), partnerships with approximately 1,200 libraries nationwide participating in the Federal Depository Library Program, and our secure online bookstore. For more information, please visit www.gpo.gov.

To read the full GPO press release announcing its partnership with DPLA, click here.

All written content on this blog is made available under a Creative Commons Attribution 4.0 International License. All images found on this blog are available under the specific license(s) attributed to them, unless otherwise noted.

Andromeda Yelton: jQuery workshop teaching techniques, part 2: techniques geared at affective goals

planet code4lib - Wed, 2014-09-17 13:30

I’m writing up what I learned from teaching a jQuery workshop this past month. I’ve already posted on my theoretical basis and pacing. Today, stuff I did to create a positive classroom climate and encourage people to leave the workshop motivated to learn more. (This is actually an area of relative weakness for me, teaching-wise, so I really welcome anyone’s suggestions on how to cultivate related skills!)

Post-it notes

I distributed a bunch of them and had students put them on their laptops when they needed help. This lets them summon TAs without breaking their own work process. I also had them write something that was working and something that wasn’t on post-its at the end of Day 1, so I could make a few course corrections for Day 2 (and make it clear to the students that I care about their feedback and their experience). I shamelessly stole both tactics from Software Carpentry.

Inclusion and emotion

The event was conducted under the DLF Code of Conduct, which I linked to at the start of the course material. I also provided Ada Initiative material as background. I talked specifically, at the outset, about how learning to code can be emotionally tough; it pushes the limits of our frustration tolerance and often (i.e. if we’re not young, white men) our identity – “am I the kind of person who programs? do people who program look like me?” And I said how all that stuff is okay. Were I to do it over again, I’d make sure to specifically name impostor syndrome and stereotype threat, but I’ve gotten mostly good feedback about the emotional and social climate of the course (whose students represented various types of diversity more than I often see in a programming course, if less than I’d like to see), and it felt like most people were generally involved.

Oh, and I subtly referenced various types of diversity in the book titles I used in programming examples, basically as a dog-whistle that I’ve heard of this stuff and it matters to me. (Julia Serano’s Whipping Girl, which I was reading at the time and which interrogated lots of stuff in my head in awesome ways, showed up in a bunch of examples, and a student struck up a conversation with me during a break about how awesome it is. Yay!)

As someone who’s privileged along just about every axis you can be, I’m clueless about a lot of this stuff, but I’m constantly trying to suck less at it, and it was important to me to make that both implicit and explicit in the course.

Tomorrow, how ruthless and granular backward design is super great.

LITA: Browser Developer Tools

planet code4lib - Wed, 2014-09-17 13:00

Despite what the name may imply, browser developer tools are not only useful for developers. Anyone who works with the web (and if you are reading this blog, that probably means you) can find value in browser developer tools because they use the browser, the tool we all use to access the riches of the web, to deconstruct the information that makes up the core of our online experience. A user who has a solid grasp on how to use their browser’s developer tools can see lots of incredibly useful things, such as:

  • Dynamic views of a page’s HTML elements & data
  • CSS rules being applied to any given element
  • The effects of new user-supplied CSS rules
  • Margin & padding boundaries around elements
  • External files being loaded by a page (CSS & JS)
  • JavaScript errors, right down to the line number
  • The speed with which JavaScript files are loaded
  • An interactive JavaScript console (great for learning!)

The first step in understanding your browser’s developer tools is knowing that they exist. If you can only get to this step, you are far ahead of most people. Every browser has its own set of embedded developer tools, whether you are using Internet Explorer, Safari, Firefox, Chrome, or Opera. There’s no special developer version of the browser to install or any add-ons or extensions to download, and it doesn’t matter if you are on Windows, Mac or Linux. If a computer has a browser, it already has developer tools baked in.

The next step on the journey is learning how to use them. All browser developer tools are pretty similar, so skills gained in one browser translate well to others. Unfortunately the minor differences are substantial enough to make a universal tutorial impossible. If you have a favorite browser, learn how to activate the various developer tools, what each one can do, how to use them effectively, and how to call them with their own specific keyboard shortcut (learning to activate a specific tool with a keyboard shortcut is the key to making them a part of your workflow). Once you have a solid understanding of the developer tools in your favorite browser, branch out and learn the developer tools for other browsers as well. After you have learned one, learning others is easy. By learning different sets of developer tools you will find that some are better at certain tasks than others. For instance, (in my opinion) Firefox is best-in-class when dealing with CSS issues, but Chrome takes first place in JavaScript utilities.
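
If you want a first taste right now, the interactive console is the friendliest entry point. The snippet below sticks to standard DOM APIs (nothing here is specific to one browser or one site), so you can paste each line into the console of whatever page you happen to have open:

```javascript
// A few harmless things to try in any page's JavaScript console:
document.title;                          // the title the browser is using for this page
document.querySelectorAll('a').length;   // how many links are on the page
document.styleSheets.length;             // how many stylesheets were loaded
// outline every image so the page's layout boundaries become visible
Array.prototype.forEach.call(document.querySelectorAll('img'), function (img) {
  img.style.outline = '3px solid red';
});
```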

Google search results using Firefox’s 3D view mode, which shows a web page’s nested elements as stacks of colored blocks. This is incredibly helpful for debugging CSS issues.

Another great reason to learn developer tools for different browsers has to do with the way browsers work. When most people think of web programming, they think of the server side versions of files because this is where the work is done. While it’s true that server side development is important, browsers are the real stars of the show. When a user requests a web page, the server sends back a tidy package of HTML, CSS and JavaScript that the browser must turn into a visual representation of that information. Think of it like a Lego kit; every kid buys the same Lego kit from the store which has all the parts and instructions in a handy portable package, but it’s up to the individual to actually make something out of it and often the final product varies slightly from person to person.  Browsers are the same way, they all put the HTML, CSS and JavaScript together in a slightly different way to render a slightly different web page (this causes endless headaches for developers struggling to make a consistent user experience across browsers). Browser developer tools give us an insight into both the code that the browser receives and the way that the individual browser is putting the web page together. If a page looks a bit different in Internet Explorer than it does in Chrome, we can use each browser’s respective developer tools to peek into the rendering process and see what’s going on in an effort to minimize these differences.

Now that you know browser developer tools exist and why they are so helpful, the only thing left to do is learn them. Teaching you to actually use browser developer tools is out of the scope of this post since it depends on what browser you use and what your needs are, but if you start playing around with them I promise you will find something useful almost immediately. If you are a web developer and you aren’t already using them, prepare for your life to get a lot easier. If you aren’t a developer but work with web pages extensively, prepare for your understanding of how a web page works to grow considerably (and as a result, for your life to get a lot easier). I’m always surprised at how few people are aware that these tools even exist (and what happens when someone stumbles upon them without knowing what they are), but someone with a solid grasp of browser developer tools can expose a problem with a single keyboard shortcut, even on someone else’s workstation. A person who can leverage these tools to figure out problems no one else can often acquires the mystical aura of an internet wizard with secret magic powers to their relatively mortal coworkers. Become that person with browser developer tools.

Ed Summers: Google’s Subconscious

planet code4lib - Wed, 2014-09-17 11:50

Can a poem provide insight into the inner workings of a complex algorithm? If Google Search had a subconscious, what would it look like? If Google mumbled in its sleep, what would it say?

A few days ago, I ran across these two quotes within hours of each other:

So if algorithms like autocomplete can defame people or businesses, our next logical question might be to ask how to hold those algorithms accountable for their actions.

Algorithmic Defamation: The Case of the Shameless Autocomplete by Nick Diakopoulos

and

A beautiful poem should re-write itself one-half word at a time, in pre-determined intervals.

Seven Controlled Vocabularies by Tan Lin.

Then I got to thinking about what a poem auto-generated from Google’s autosuggest might look like. Ok, the idea is of dubious value, but it turned out to be pretty easy to do in just HTML and JavaScript (low computational overhead), and I quickly pushed it up to GitHub.

Here’s the heuristic:

  1. Pick a title for your poem, which also serves as a seed.
  2. Look up the seed in Google’s lightly documented suggestion API.
  3. Get the longest suggestion (text length).
  4. Output the suggestion as a line in the poem.
  5. Stop if more than n lines have been written.
  6. Pick a random substring in the suggestion as the seed for the next line.
  7. GOTO 2

The initial results were kind of light on verbs, so I found a list of verbs and randomly added them to the suggested text, occasionally. The poem is generated in your browser using JavaScript so hack on it and send me a pull request.
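
Here is a minimal sketch of that heuristic, not the code in the GitHub repo. It assumes the lightly documented suggestion endpoint at suggestqueries.google.com answers with JSON shaped like [query, [suggestions...]]; depending on where you run it, you may need JSONP or a small proxy to get around cross-origin restrictions.

```javascript
// Sketch of the autosuggest-poem heuristic described above.
// Assumptions (flagged in the text): the endpoint, its response shape, and CORS handling.
async function autosuggestPoem(seed, maxLines) {
  var lines = [];
  for (var i = 0; i < (maxLines || 10); i++) {
    var url = 'https://suggestqueries.google.com/complete/search?client=firefox&q=' +
              encodeURIComponent(seed);
    var response = await fetch(url);              // step 2: look up the seed
    var suggestions = (await response.json())[1]; // assumed response shape
    if (!suggestions || suggestions.length === 0) break;
    // step 3: keep the longest suggestion (by text length)
    var line = suggestions.reduce(function (a, b) {
      return b.length > a.length ? b : a;
    });
    lines.push(line);                             // step 4: one line of the poem
    // step 6: a random substring of the line seeds the next lookup
    var start = Math.floor(Math.random() * line.length);
    var len = 1 + Math.floor(Math.random() * (line.length - start));
    seed = line.slice(start, start + len);
  }
  return lines.join('\n');
}

// e.g. autosuggestPoem('the fault in our stars', 8).then(console.log);
```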

Assuming that Google’s suggestions are personalized for you (if you are logged into Google) and your location (your IP address), the poem is dependent on you. So I suppose it’s more of a collective subconscious in a way.

If you find an amusing phrase, please hover over the stanza and tweet it — I’d love to see it!

Library Tech Talk (U of Michigan): Image Class Update

planet code4lib - Wed, 2014-09-17 00:00
The last visual refresh to the DLPS Image Class environment updated the layout and styles, but the interface mostly worked the same way. Starting this year, we've been making more drastic changes. These updates were based on what our analytics showed about browser use (larger, wider screens and, of course, mobile use) and conversations with collection managers.

District Dispatch: I left the E-rate Order on my desk last night

planet code4lib - Tue, 2014-09-16 19:47

After schlepping the 176 pages of the E-rate modernization Order around since July 23 (when the Commission released the Order it voted on July 11), I find my bag remarkably empty today. While I didn’t continually refer to it over the last month and a half, it has been a constant companion as we prepared our comments to the Commission on the Further Notice of Proposed Rulemaking (FNPRM) that accompanied the July Order. I can unabashedly leave it behind since we filed our comments (pdf) last night.

E-rate may be the “other” proceeding with comments due yesterday, but for ALA they represent a milestone of sorts. True to form, the Commission asks many detailed questions in the FNPRM, but two issues stand out for us. First, the Commission opened the door to talk about the long-term funding needs of the program. Second, it’s now time for the Commission to take up the concern that has followed ALA since this proceeding began a year ago, and really since ALA started tracking the broadband capacity of libraries. We renew the call to immediately address the broadband gap facing the majority of libraries. With 98% of libraries below the 1 gigabit capacity goal asserted in the National Broadband Plan and adopted by the Commission, we have a long way to go before we can comfortably say we have made a dent in the gap.

In looking to the next order (hopefully sometime this fall) we have heard from our members that while having access to more funding for Wi-Fi (the heart of the July Order) is important, if the library only has a 3 or even 10 Mbps connection to the door, the patron trying to upload a résumé, or stream an online certification course, or download a homework assignment is still going to have a marginal experience.

Our comments therefore focus on these two primary issues—adequate funding to sustain the program and closing the broadband gap for libraries. Among other recommendations we ask the Commission to increase and improve options for special construction where libraries do not have access to affordable, scalable high-capacity broadband by:

  • Clarifying the amortization rules;
  • Eliminating the ban on special construction for dark fiber;
  • Allowing longer term contracts where there is special construction involved; and
  • Requiring service providers to lock in affordable prices for a significant number of years for agreements involving special construction.

As to the overall funding question, ALA is engaged with partners to gather data that will give us an understanding of the costs necessary for libraries to achieve the Commission’s capacity goals. We plan to submit information to the Commission in the next several weeks.

For more details on our comments you can certainly read the whole thing. Or, we prepared a summary (pdf) to start with. With reply comments due at the end of the month, it’s time to get started reading other submissions and picking up where we left off (and with the FCC filing system intermittently down—all those net neutrality filers, no doubt). We will continue connecting with our library colleagues and will begin more meetings at the Commission. More to come!

The post I left the E-rate Order on my desk last night appeared first on District Dispatch.

David Rosenthal: Two Sidelights on Short-Termism

planet code4lib - Tue, 2014-09-16 17:26
I've often referred to the empirical work of Haldane & Davies and the theoretical work of Farmer and Geanakoplos, both of which suggest that investors using Discounted Cash Flow (DCF) to decide whether an investment now is justified by returns in the future are likely to undervalue the future. This is a big problem in areas, such as climate change and digital preservation, where the future is some way off.
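
As a back-of-the-envelope illustration (my numbers are purely illustrative), the present value of a payoff F received n years from now at discount rate r is F / (1 + r)^n, and at long horizons the choice of r swamps everything else:

```javascript
// Toy Discounted Cash Flow calculation -- illustrative numbers only.
function presentValue(future, rate, years) {
  return future / Math.pow(1 + rate, years);
}

console.log(presentValue(100, 0.03, 50).toFixed(2)); // ~22.81: $100 in 50 years at 3%
console.log(presentValue(100, 0.08, 50).toFixed(2)); // ~2.13: the same $100 at 8%
```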

Now Harvard's Greenwood & Shleifer, in a paper entitled Expectations of Returns and Expected Returns, reinforce this:
We analyze time-series of investor expectations of future stock market returns from six data sources between 1963 and 2011. The six measures of expectations are highly positively correlated with each other, as well as with past stock returns and with the level of the stock market. However, investor expectations are strongly negatively correlated with model-based expected returns.

They compare investors' beliefs about the future of the stock market, as reported in various opinion surveys, with the outputs of various models used by economists to predict the future based on current information about stocks. They find that when these models, all enhancements to DCF of one kind or another, predict low performance, investors expect high performance, and vice versa. If they have experienced poor recent performance and see a low market, they expect this to continue and are unwilling to invest. If they see good recent performance and a high market, they expect this to continue. Their expected return from investment will be systematically too high, or in other words they will suffer from short-termism.

Yves Smith at Naked Capitalism has a post worth reading critiquing a Washington Post article entitled America’s top execs seem ready to give up on U.S. workers. It reports on a Harvard Business School survey of its graduates entitled An Economy Doing Half Its Job. Yves writes:
In the early 2000s, we heard regularly from contacts at McKinsey that their clients had become so short-sighted that it was virtually impossible to get investments of any sort approved, even ones that on paper were no-brainers. Why? Any investment still has an expense component, meaning some costs will be reported as expenses on the income statement, as opposed to capitalized on the balance sheet. Companies were so loath to do anything that might blemish their quarterly earnings that they’d shun even remarkably attractive projects out of an antipathy for even a short-term lowering of quarterly profits.

Note "Companies were so loath". The usually careful Yves falls into the common confusion between companies (institutions) and their managers (individuals). Managers evaluate investments not in terms of their longer-term return to the company, but in terms of their short-term effect on the stock price, and thus on their stock-based compensation. It's the IBGYBG (I'll Be Gone, You'll Be Gone) phenomenon, which amplifies the underlying problems of short-termism.

Galen Charlton: Libraries, the Ada Initiative, and a challenge

planet code4lib - Tue, 2014-09-16 16:28

I am a firm believer in the power of open source to help libraries build the tools we need to help our patrons and our communities.

Our tools focus our effort. Our effort, of course, does not spring out of thin air; it’s rooted in people.

One of the many currencies that motivates people to contribute to free and open source projects is acknowledgment.

Here are some of the women I’d like to acknowledge for their contributions, direct or indirect, to projects I have been part of. Some of them I know personally, others I admire from afar.

  • Henriette Avram – Love it or hate it, where would we be without the MARC format? For all that we’ve learned about new and better ways to manage metadata, Avram’s work at the LC started the profession’s proud tradition of sharing its metadata in electronic format.
  • Ruth Bavousett – Ruth has been involved in Koha for years and served as QA team member and translation manager. She is also one of the most courageous women I have the privilege of knowing.
  • Karen Coyle – Along with Diane Hillmann, I look to Karen for leadership in revamping our metadata practices.
  • Nicole Engard – Nicole has also been involved in Koha for years as documentation manager. Besides writing most of Koha’s manual, she is consistently helpful to new users.
  • Katrin Fischer – Katrin is Koha’s current QA manager, and has and continues to perform a very difficult job with grace and less thanks than she deserves.
  • Ruth Frasur – Ruth is director of the Hagerstown Jefferson Township Public Library in Indiana, which is a member of Evergreen Indiana. Ruth is one of the very few library administrators I know who not only understands open source, but actively contributes to some of the nitty-gritty work of keeping the software documented.
  • Diane Hillmann – Another leader in library metadata.
  • Kathy Lussier – As the Evergreen project coordinator at MassLNC, Kathy has helped to guide that consortium’s many development contributions to Evergreen.  As a participant in the project and a member of the Evergreen Oversight Board, Kathy has also supplied much-needed organizational help – and a fierce determination to help more women succeed in open source.
  • Liz Rea – Liz has been running Koha systems for years, writing patches, maintaining the project’s website, and injecting humor when most needed – a true jill of all trades.

However, there are unknowns that haunt me. Who has tried to contribute to Koha or Evergreen, only to be turned away by a knee-jerk “RTFM” or simply silence? Who might have been interested, only to rightly judge that they didn’t have time for the flack they’d get? Who never got a chance to go to a Code4Lib conference while her male colleague’s funding request got approved three years in a row?

What have we lost? How many lines of code, pages of documentation, hours of help have not gone into the tools that help us help our patrons?

The ideals of free and open source software projects are necessary, but they’re not sufficient to ensure equal access and participation.

The Ada Initiative can help. It was formed to support women in open technology and culture, and runs workshops, assists communities in setting up and enforcing codes of conduct, and promotes ensuring that women have access to positions of influence in open culture projects.

Why is the Ada Initiative’s work important to me? For many reasons, but I’ll mention three. First, because making sure that everybody who wants to work and play in the field of open technology has a real choice to do so is only fair. Second, because open source projects that are truly welcoming to women are much more likely to be welcoming to everybody – and happier, because of the effort spent on taking care of the community. Third, because I know that I don’t know everything – or all that much, really – and I need exposure to multiple points of view to be effective building tools for libraries.

Right now, folks in the library and archives communities are banding together to raise money for the Ada Initiative. I’ve donated, and I encourage others to do the same. Even better, several folks, including Bess Sadler, Andromeda Yelton, Chris Bourg, and Mark Matienzo, are providing matching donations up to a total of $5,120.

Go ahead, make a donation by clicking below, then come back. I’ll wait.

Money talks – but whether any given open source community is welcoming, both of new people and of new ideas, depends on its current members.

Therefore, I would also like to extend a challenge to men (including myself — accountability matters!) working in open source software projects in libraries. It’s a simple challenge, summarized in a few words: “listen, look, lift up, and learn.”

Listen. Listening is hard. A coder in a library open source project has to listen to other coders, to librarians, to users – and it is all too easy to ignore or dismiss approaches that are unfamiliar. It can be very difficult to learn that something you’ve poured a lot of effort into may not work well for librarians – and it can be even harder to hear that you are stepping on somebody’s toes or thoughtlessly stomping on their ideas.

What to do? Pay attention to how you communicate while handling bugs and project correspondence. Do you prioritize bugs filed by men? Do you have a subtle tendency to think to yourself, “oh, she’s just not seeing the obvious thing right in front of her!” if a woman asks a question on the mailing list about functionality she’s having trouble with? If so, make an effort to be even-handed.

Are you receiving criticism? Count to ten, let your hackles down, and try to look at it from your critic’s point of view.

Be careful about nitpicking.  Many a good idea has died after too much bikeshedding – and while that happens to everybody, I have a gut feeling that it’s more likely to happen if the idea is proposed by a woman.

Is a woman colleague confiding in you about concerns she has with community or workplace dynamics? Listen.

Look. Look around you — around your office, the forums, the IRC channels, and Stack Exchanges you frequent. Do you mostly see men who look like yourself?  If so, do what you can to broaden your perspective and your employer’s perspective. Do you have hiring authority? Do you participate in interview panels? You can help who surrounds you.

Remember that I’m talking about library technology here — even if 70% of the employees of the library you work for are women, if the systems department only employs men, you’re missing other points of view.

Do you have no hiring authority whatsoever? Look around the open source communities you participate in. Are there proportionally far more men participating openly than the gender ratio in librarianship as a whole?  If so, you can help change that by how you choose to participate in the community.

Lift up. This can take many forms. In some cases, you can help lift up women in library technology by getting out of the way – in other words, by removing or not supporting barriers to participation such as sexist language on the mailing list, or by calling out exclusionary behavior by other men (or yourself!).

Sometimes, you can offer active assistance – but ask first! Perhaps a woman is ready to assume a project leadership role or is ready to grow into it. Encourage her – and be ready to support her publicly. Or perhaps you may have an opportunity to mentor a student – go for it, but know that mentoring is hard work.

But note — I’m not an authority on ways to support women in technology.  One of the things that the Ada Initiative does is run Ally Skills workshops that teach simple techniques for supporting women in the workplace and online.  In fact, if you’re coming to Atlanta this October for the DLF Forum, one is being offered there.

Learn. Something I’m still learning is just the sheer amount of crap that women in technology put up with. Have you ever gotten a death threat or a rape threat for something you said online about the software industry? If you’re a guy, probably not. If you’re Anita Sarkeesian or Quinn Norton, it’s a different story entirely.

If you’re thinking to yourself that “we’re librarians, not gamers, and nobody has ever gotten a death threat during a professional dispute with the possible exception of the MARC format” – that’s not good enough. Or if you think that no librarian has ever harassed another over gender – that’s simply not true. It doesn’t take a death threat to convince a woman that library technology is too hostile for her; a long string of micro-aggressions can suffice. Do you think that librarians are too progressive or simply too darn nice for harassment to be an issue? Read Ingrid Henny Abrams’ posts about the results of her survey on code of conduct violations at ALA.

This is why the Ada Initiative’s anti-harassment work is so important – and to learn more, including links to sample policies, a good starting point is their own conference policies page. (Which, by the way, was quite useful when the Evergreen Project adopted its own code of conduct). Another good starting point is the Geek Feminism wiki.

And, of course, you could do worse than to go to one of the ally skills workshops.

If you choose to take up the challenge, make a note to come back in a year and write down what you’ve learned, what you’ve listened to and seen, and how you’ve helped to lift up others. It doesn’t have to be public – though that would be nice – but the important thing is to be mindful.

Finally, don’t just take my word for it – remember that I’m not an authority on supporting women in technology. Listen to the women who are.

Update: other #libs4ada posts

DuraSpace News: REGISTER: ADVANCED DSPACE TRAINING

planet code4lib - Tue, 2014-09-16 00:00
Winchester, MA  In response to overwhelming community demand, we are happy to announce the dates for an in-person, 3-day Advanced DSpace Course in Austin October 22-24, 2014. The total cost of the course is being underwritten with generous support from the Texas Digital Library and DuraSpace. As a result, the registration fee for the course for DuraSpace Members is only $250 and $500 for Non-Members (meals and lodging not included). Seating will be limited to 20 participants.  

DuraSpace News: AVAILABLE: The April-June 2014 Quarterly Report from Fedora

planet code4lib - Tue, 2014-09-16 00:00

From The Fedora Steering Group

The Quarterly Report from Fedora
April-June 2014

Fedora Development - In the past quarter, the development team released one Alpha and three Beta releases of Fedora 4; detailed release notes are here:

Roy Tennant: I’m So Very Sorry

planet code4lib - Mon, 2014-09-15 21:56

Two different but very related things happened last week which brought my own fallibility into painful focus for me.

One is that I blogged in support of the work of the Ada Initiative. They do great work to advance women in open technology and culture. If you are not familiar with their work, then by all means go and find out.

The other is that I discovered I had acted badly in exactly the kind of situation where I should have known better. The wake-up call came in the form of a blog post where the writer was kind enough not to call me out by name. But I will. It was me. Go ahead, read it. I’ll wait.

This, from someone who had fancied himself a feminist. I mean, srlsy. To me this shows just how deeply these issues run.

I was wrong, for which I am now apologizing. But allow me to be more specific. What am I sorry about?

  • I’m sorry that I shoved my way into a conversation where I didn’t belong. 
  • I’m sorry that I was wrong in what I advocated.
  • I’m sorry that my privilege and reputation can be unwittingly used to silence someone else.
  • I’m sorry that ignorance of my innate privilege has tended to support ignorance of my bad behavior.

I can’t change the past, but I can change the future. My slowly growing awareness of the effects of my words and actions can only help reduce my harmful impacts, while hopefully reinforcing my positive actions.

Among the things that the Ada Initiative lists as ways that they are making a difference is this:

Asking men and influential community members to take responsibility for culture change.

I hear you, and I’m trying, as best as I can, to do this. It isn’t always quick, it isn’t always pretty, but it’s something. Until men stand up and own their own behavior and change it, things aren’t going to get better. I know this. I’m sorry for what I’ve done to perpetuate the problem, and I’m taking responsibility for my own actions, both in the past and in the future. Here’s hoping that the future is much brighter than the past.

 

Photo by butupa, Creative Commons License Attribution 2.0 Generic.

District Dispatch: Libraries, E-rate, and ALA featured at TPRC

planet code4lib - Mon, 2014-09-15 20:59

The scene at the 2014 Telecommunications Policy Research Conference. Photo by TPRC.

Last Friday, the American Library Association (ALA) made its first appearance (and through a whole panel no less) at the Telecommunications Policy Research Conference (TPRC), the most prestigious conference in information policy. The topic, not surprisingly, was the telecommunications policy issue that has dominated our time for the past year: E-rate.

The panel “900 Questions: A Case Study of Multistakeholder Policy Advocacy through the E-rate Lens” was moderated by Larra Clark, director of the Program on Networks for ALA’s Office for Information Technology Policy (OITP). The panel featured Jon Peha, professor of Engineering and Public Policy at Carnegie Mellon University and former chief technologist of the Federal Communications Commission (FCC); and Tom Koutsky, chief policy counsel for Connected Nation and a former Attorney-Advisor at the FCC. Rounding out the panel were Marijke Visser, ALA’s own Empress of E-rate and OITP Director Alan S. Inouye.

The panel served as a great opportunity for ALA to consider, as a whole, the extensive effort we have expended on the current proceeding since June 2013. Of course, it was rather a challenge to pack it all into 90 minutes!

Marijke Visser, Larra Clark, and Alan S. Inouye focused on the key tradeoffs that arose over the past year. One was supporting the FCC proposal that led to the first order, even though it focused on Wi-Fi—important, but not ALA’s top priority, which is broadband to libraries (and schools)—based on the promise of a second order focusing on broadband to the building. Another was working hard to stand with our long-standing coalitions while not being in full agreement with some coalition positions. The panel also explored tensions between school and library interests and the importance of both differentiation and collaboration; between rural and urban concerns; between near-term and long-term considerations; and around the risks and rewards of creative disruption.

Tom Koutsky and Jon Peha provided context and analysis beyond the library lens. The E-rate proceeding emanated from a multi-year process that began with the National Broadband Plan and investments in the Broadband Technology Opportunities Program (BTOP). Koutsky and Peha illuminated the oft-hidden complexity behind advocate groups, who on the surface may seem to represent similar interests or organizations, but in fact engage in considerable conflict and compromise among themselves. They also discussed the challenges with new stakeholder entrants and their competing interests, both in the short run and long run.

This TPRC session is an important milestone for OITP. The Policy Revolution! Initiative is predicated upon reaching decision makers and influencers outside of the library community who affect critical public policies of interest to our community. Thus, increasing the ALA and library presence at key venues such as TPRC represents important progress for us as we continue to work through re-imagining and re-engineering national public policy advocacy. Also in the September-October timeframe, OITP representatives will present at the conferences of the International City/County Management Association (ICMA), NTCA—the Rural Broadband Association, and the National Association of Telecommunications Officers and Advisors (NATOA).

The E-rate saga continues: ALA will submit comments in the most recent round—due tonight (September 15th)—and will submit further comments in the weeks ahead, as well as continue our discussions with the commissioners and staff of the FCC and our key contacts on Capitol Hill.

The post Libraries, E-rate, and ALA featured at TPRC appeared first on District Dispatch.

Manage Metadata (Diane Hillmann and Jon Phipps): Who ya gonna call?

planet code4lib - Mon, 2014-09-15 19:31

Some of you have probably noted that we’ve been somewhat quiet recently, but as usual, it doesn’t mean nothing is going on, more that we’ve been too busy to come up for air to talk about it.

A few of you might have noticed a tweet from the PBCore folks on a conversation we had with them recently. There’s a fuller note on their blog, with links to other posts describing what they’ve been thinking about as they move forward on upgrading the vocabularies they already have in the OMR.

Shortly after that, a post from Bernard Vatant of the Linked Open Vocabularies project (LOV) came over the W3C discussion list for Linked Open Data. Bernard is a hero to those of us toiling in this vineyard, and LOV (lov.okfn.org/dataset/lov/) is one of the go-to places for those interested in what’s available in the vocabulary world and the relationships between those vocabularies. Bernard was criticizing the recent release of the DBpedia Ontology, having seen the announcement and, as is his habit, gone in to try to add the new ontology to LOV. His gripes fell into several important categories:

* the ontology namespace was dereferenceable, but what he found there was basically useless (his word)
* finding the ontology content itself required making a path via the documentation at another site to get to the goods
* the content was available as an archive that needed to be opened to get to the RDF
* there was no versioning available, thus no way to determine when and where changes were made

I was pretty stunned to see that a big, important ontology was released in that way–and so was Bernard, apparently, although since that release there has been a meeting of the minds, and the DBpedia Ontology is now resident in LOV. But as I read the post and its critique, my mind harkened back to the conversation with PBCore. The issues Bernard brought up were exactly the ones we were discussing with them–how to manage a vocabulary, what tools are available to distribute the vocabulary to ensure easy re-use and understanding, the importance of versioning, providing documentation, etc.
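
(For readers unfamiliar with the dereferenceability complaint: the test is simply whether asking the namespace URI for RDF, via content negotiation, gets you the ontology itself. A rough, hypothetical check, not part of LOV or our own tooling, might look like the sketch below; run it from Node 18+ or anywhere cross-origin requests aren't an issue.)

```javascript
// Rough dereferenceability check for a vocabulary namespace (hypothetical helper).
// Asks for RDF serializations via the Accept header and reports what comes back.
function checkNamespace(namespaceUri) {
  return fetch(namespaceUri, {
    headers: { Accept: 'text/turtle, application/rdf+xml;q=0.9, application/ld+json;q=0.8' }
  }).then(function (response) {
    console.log(response.status, response.headers.get('content-type'));
    return response.text();
  }).then(function (body) {
    console.log(body.slice(0, 300)); // eyeball it: RDF, an HTML landing page, or nothing useful?
  });
}

// e.g. checkNamespace('http://dbpedia.org/ontology/');
```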

These were all issues we’d been working hard on for RDA, and are still working on behind the RDA Registry. Clearly, there are a lot of folks out there looking for help figuring out how to provide useful access to their vocabularies and to maintain them properly. We’re exploring how we might do similar work for others (so ask us!).

Oh, and if you’re interested on our take on vocabulary versioning, take a look at our recent paper on the subject, presented at the IFLA satellite meeting on LOD in Paris last month.

I plan on posting more about that paper and its ideas later this week.

Andromeda Yelton: What I learned teaching jQuery (part 1)

planet code4lib - Mon, 2014-09-15 13:30

On August 11-12, I taught an Introduction to Programming Concepts via jQuery course at the DLF/Code4Lib unconference at the George Washington University. I was playing with several theories in developing this course:

  • Porting to jQuery so that it could be 100% browser-based: familiar environment, no installfest, maximizes time available for actual programming concepts.
  • Porting to jQuery so that it could be 100% visual (on which more below).
  • Simply giving up on the idea of getting true novices to the point of being able to write real-world-applicable code in a day-and-a-half workshop, and focusing instead on building a foundation that makes existing code-learning resources more intelligible, and leaves students with enough good feelings about code that they’ll be inclined to learn more.

Bottom line: I think it worked really well!

Today I’m going to talk about my theoretical inspiration for the course; upcoming posts will cover teaching techniques I used to operationalize that, and then future plans. (Look, there’s a jquery workshop tag so you can find them all!)

yo dawg i heard you like tests…

The whole workshop was, in a sense, a way to play with this paper: “A fresh look at novice programmers’ performance and their teachers’ expectations”. Its jaw-dropping result was that providing novice programming students with a test apparatus for a programming task quadrupled the number of subtasks they could successfully complete (students without the tests completed an average of 0.83 out of 4 tasks, compared to 3.26 for students who could check their work against the tests — in other words, students without tests didn’t even get one subtask working, on average).

Well gosh. If tests are that effective, I’m obligated to provide them. This is consistent with my intuitive observations of the CodingBat section of Boston Python Workshop — being asked to write provably correct code is the point where students discover whether their existing mental models are right, and start to iterate them. But the CodingBat interface is confusing, and you need to sink some instructional time just into making sense of it. And, honestly, with a lot of conventional intro programming tasks, it’s hard to tell if you’ve succeeded; you’ve got a command-line-ish interface (unfamiliar to many of my students) and a conceptual problem with abstract success criteria. I wanted something that would give immediate, obvious feedback.

Hence, jQuery. Manipulating the DOM produces instant visual effects. If you were asked to make a button disappear, it’s super obvious if you succeeded. (Well. Possibly assuming sightedness, and (with some of my tasks) possibly also non-colorblindness — I stayed away from red/green semantic pairs, but I didn’t audit for all the forms of colorblindness. I need to mull this one over.) And as it turns out, when you ask your students to add a class that changes a bunch of things to have a kitten pic background, it’s also super obvious to you as the instructor when they’ve succeeded (wait for it…wait…“awwww!”).
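
To give a flavor of those exercises (the selectors and the class name below are made up for illustration, not the actual workshop materials; they assume a page with jQuery loaded):

```javascript
// "Make the button disappear" -- success or failure is instantly visible.
$('#submit-button').hide();

// "Give every book in the list the kitten background" -- assuming the page's
// stylesheet defines something like .kitten { background-image: url('kitten.jpg'); }
$('li.book').addClass('kitten');
```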

My hope for this class was that it would provide students who were genuinely novices at coding with the conceptual background they needed to get mileage out of the many intro-programming learning options out there. As Abigail Goben notes, these courses tend to implicitly assume that you already know how to code and just need to be told how to do it in this language, even when they brand themselves as intro courses. People will need much more practice than a day-and-a-half bootcamp to get from novice to proficient enough to write things they can use in everyday work, so I want to get them to a place where that practice will feel manageable. And for the students who do have some experience, hopefully I can introduce them to a language they don’t know yet in a way that has enough meat not to bore them.

Tomorrow, teaching techniques I used to get there, part 1: pacing.

LITA: Technology Skills and Your Resume/CV

planet code4lib - Mon, 2014-09-15 13:00

As I thought about what I wanted to write for my first LITA post, I really wasn’t sure until inspiration struck as I procrastinated by scrolling down my Facebook feed. I had been tagged in a status written by a library student who felt unsure of how she was displaying her tech skills on her CV. She asked for opinions. Was it even relevant to put a tech section on her CV if she wasn’t applying for a digital library job? If she already mentioned tech skills in a cover letter, did they need to be put on a CV, too?

The thread got a lot of different responses, some aligning with my thoughts on the subject and others that befuddled me. Why, for instance, was someone suggesting that you should only list tech skills you got in the classroom and not those you picked up on the job? Why did people seem to think that if you were writing a cover letter you should list your tech skills there and not on a CV?

Today, I thought I would share a few brief thoughts on how I list tech skills on my professional documents and how that connects to how I talk about them in a cover letter. Keep in mind that I am an academic librarian with a job in digital libraries, so the usefulness of my perspective beyond this specific area may be limited. And just to clarify, I recognize that everyone has different opinions on content, formatting, and length of professional documents. Just check out one of the myriad library resources for job hunters. It’s a good thing to have varying perspectives, actually, and I welcome all the opinions out there, whether they agree or disagree with my take on the subject.

What I Do

Why would I write a paragraph about it when I can just show you? This is how the tech section of my resume and CV looks now (very similar to when I applied for jobs in late 2013/early 2014).

  • Coding – HTML5, CSS
  • Digital Collection/Content Management – Drupal, Omeka
  • Digitization Software  – Epson Scan, Silverfast
  • Document Design – Adobe Creative Suite 5, Microsoft Office 2010 suite
  • Markup Languages & Standards – EAD, MODS, RDF, TEI, XML
  • Operating Systems – Mac OS X, Windows, UNIX
  • Social Media – Facebook, Twitter, WordPress, podcasting, wikis
  • Repository Software – DSpace, Fedora
  • Other – ArcGIS, Neatline

This section is listed under the header “Technology” and does not include bullet points (used in this post for formatting reasons). Check out my entire CV to see how this section fits in with the rest of my content.

Conveying my tech skills in this way provides a quick way for a potential employer to understand the different software I know. It doesn’t provide a lot of usable information, since there’s no indication of my skill level or familiarity with these tools. I consider this section of my CV a catch-all for my tech knowledge, but it’s up to my cover letter to educate the reader about my depth of understanding of specific tools relevant to the job description. I don’t include any tools here for which I couldn’t easily answer the question, “So tell me how you have used ___ in the past?”

I have tinkered around with this section more times than I can count over the past few years.  Even now, writing this blog post, I’m looking at it and thinking, “Is that really relevant to me anymore?” I haven’t looked at other people’s CVs in a long time, and though those might be good to reference in this post, let’s be real: it’s a gloomy Friday afternoon as I type this and I just can’t bring myself to do a quick search.

My laziness aside, I’m particularly interested in how different types of info professionals, from archivists to public, academic, and special librarians, convey their tech skills in professional documents. So many jobs in libraries involve working with technology. I would think you’d be hard-pressed to find a new job that doesn’t involve tech in some way. So is there a way to standardize how we convey this type of information, or are our jobs so diverse that there’s really no way to do so?

I’m curious: How do you highlight your technology skills on professional documents like a resume or CV? Tell me in the comments!

District Dispatch: Reminder: Social Security webinar this week

planet code4lib - Mon, 2014-09-15 06:16

Photo by Jessamyn West via flickr

Reminder: The American Library Association (ALA) is encouraging librarians to participate in “My SSA,” a free webinar that will teach participants how to use My Social Security (MySSA), the online Social Security resource.

Do you know how to help your patrons locate information on Supplemental Security Income or Social Security? Presented by leaders and members of the MySSA development team, this session will give attendees an overview of the service. In addition to the benefits information it provides in print, the Social Security Administration is encouraging librarians to create an online MySSA account to view and track benefits.

Attendees will learn about viewing earnings records and receiving instant estimates of their future Social Security benefits. Those already receiving benefits can check benefit and payment information and manage their benefits.

Speakers include:

  • Maria Artista-Cuchna, Acting Associate Commissioner, External Affairs
  • Kia Anderson, Supervisory Social Insurance Specialist
  • Arnoldo Moore, Social Insurance Specialist
  • Alfredo Padilia Jr., Social Insurance Specialist
  • Diandra Taylor, Management Analyst

Date: Wednesday, September 17, 2014
Time: 2:00 PM – 3:00 PM EDT
Register for the free event

If you cannot attend this live session, a recorded archive will be available. To view past webinars also hosted collaboratively with iPAC, please visit Lib2Gov.org.

The post Reminder: Social Security webinar this week appeared first on District Dispatch.

Mita Williams: The story of our future : This changes everything

planet code4lib - Mon, 2014-09-15 01:54
In the middle of her column that is ostensibly about the television series Red Band Society, New Yorker critic Emily Nussbaum summarized John Green's YA bestseller The Fault in Our Stars with insight:

Among the many appealing qualities of Green's novel is how much it's about storytelling itself, and the way in which books function as a badge of identity, a marker of taste and values... For all its romantic contours, "The Fault in Our Stars" is centrally a dialectic about why people seek out stories, one that never quite takes a stand on the question of whether we're right to wish for greater clarity in our art, characters we can "relate" to, or, for that matter, a happy ending.
If you had to encapsulate the future of libraries as a story, what story would that be?

Stewart Brand's How Buildings Learn?

In this world, technology creates a fast, globalised world where digital services and virtual presence are commonplace. Overall, the mood is fairly optimistic, but digitalisation and connectivity soon create too much information and format instability, so there is a slight feeling of unease amongst the general population. Physical books are in slight decline in this world although library services are expanding. The reason for this is that public libraries now take on a wide range of e-government services and are important as drop-in centres for information and advice relating to everything from education and childcare to immigration. In this scenario, libraries have also mutated into urban hubs and hangouts; vibrant meeting places for people and information that house cafés, shops, gyms, crèches, theatres, galleries and various cultural activities and events.
William Gibson's Neuromancer?

This is a world gone mad. Everything is accelerating and everything is in short supply and is priced accordingly. Electricity prices are sky-high and the internet is plagued by a series of serious issues due to overwhelming global demand. In this scenario, public libraries are initially written off as digital dinosaurs, but eventually there is a swing in their favour as people either seek out reliable internet connections or because there is a real need for places that allow people to unplug, slow down and reflect. In this world, information also tends to be created and owned by large corporations and many small and medium-sized firms cannot afford access. Therefore, public libraries also become providers of business information and intelligence. This creates a series of new revenue streams but funding is still tight and libraries are continually expected to do more with less and less funding and full-time staff.
Ray Bradbury's Fahrenheit 451?

This world is a screenager's paradise. It is fast-paced, global and screen-based. Digitalisation has fundamentally changed the way that people consume information and entertainment, but it has also changed the way that people think. This is a post-literate world where physical books are almost dead and public libraries focus on digital collections and virtual services. In this scenario, books take up very little physical space so more space is given over to internet access, digital books and various other forms of digital entertainment. Public libraries blur the boundaries with other retailers of information and entertainment and also house mental health gyms, technology advice desks, download centres and screening rooms. Despite all this, public libraries struggle to survive due to a combination of ongoing funding cuts, low public usage and global competition.
Or Rachel Carson's Silent Spring?

In this scenario, climate change turns out to be much worse than expected. Resource shortages and the high cost of energy in particular mean that the physical movement of products and people is greatly reduced and individuals are therefore drawn back to their local communities. It is a world where globalisation slows down, digital technology is restrained and where all activities are related to community impact. Public libraries do well in this world. People become voracious consumers of physical books (especially old books) and libraries are rediscovered and revered by the majority of the population due to their safety and neutrality. They are also highly valued because they are free public spaces that promote a wide variety of community-related events. Nevertheless, there are still pressures caused by the high cost of energy and the need to maintain facilities. The phrase ‘dark euphoria’ (Bruce Sterling) sums up the mood in this scenario, because on one level the world is falling apart but on another level people are quite content.
These scenarios come from a remarkable document produced five years ago in 2009 for The Library Council of New South Wales called The Bookends Scenarios [pdf].

It's the only document in the library literature that I've seen that seriously addresses our global warming future.  It's the only one that I've come across that confronts us and forces us to consider how we may shape our institution and our services now so we can be there for our community when it's in greatest need.


If you had to encapsulate the future as a story, what story would that be?

I suffer from dark euphoria.  I worry about global warming.

That's why I'm going to take part in the People's Climate March in New York City on September 21st, 2014.

I'm going because our leaders are not even talking about taking the necessary action to reduce atmospheric carbon and to mitigate the effects of climate change.  This is a movement that requires all of us to become the leaders that we so desperately need.

There's a book that goes with this march: This Changes Everything.

I'm not normally one for marches. I share the suspicion that gatherings and marches themselves don't change anything.

But events change people. There are events that define movements.

You couldn't have an Occupy Movement without Occupy Wall Street.  And without Occupy Wall Street, we wouldn't have had Occupy Sandy.


Fight to #EndRacism...for #ClimateJustice. #peoplesclimate BOOM pic.twitter.com/nOJSoLMUJd
— REEP (@reep_ace) September 14, 2014
I understand the feelings of helplessness and darkness when reading or hearing about another terrifying warning about the threat of global warming. I struggle with these feelings more than I care to admit.

I find solace from these feelings in a variety of sources beyond my family, friends and community.  Of these, the study of history, oddly enough, gives me great comfort.  It has helped me find stories that help me understand the present.

There are those who call the Climate Change Movement the second Abolition Movement, and I think this description is fitting for several reasons. For one, it gets across that we need to draw upon our shared moral fortitude to make it politically necessary to force those in power to forfeit profit from oil and coal, which, unchecked, will continue to cost us grievous human suffering.

It also conveys the sheer enormity of the work that must be done. The analogy makes clear that, at this point, mitigating climate change will require changing every aspect of society.

And yet, it has happened before.  Ordinary people came together to stop slavery.

On that note, and I hope I'm not spoiling it for you, I took great comfort in the last passage of David Mitchell's Cloud Atlas, a book of several pasts and a future.

Upon my return to San Francisco, I shall pledge myself to the abolitionist cause, because I owe my life to a self-freed slave & because I must begin somewhere.
I hear my father-in-law’s response:  “Oho, fine, Whiggish sentiments, Adam.  But don’t tell me about justice!  Ride to Tennessee on an ass and convince the rednecks they are merely white-washed negroes and their negroes are black-washed whites!  Sail to the Old World, tell ‘em their imperial slaves’ rights are as inalienable as the Queen of Belgium’s!  Oh, you’ll grow hoarse, poor and gray in caucuses!  You’ll be spat upon, shot at, lynched, pacified with medals, spurned by backwoodsmen! Crucified!  Naïve, dreaming Adam.  He who would do battle with the many headed hydra of human nature must pay a world of pain and his family must pay it along with him! And only as you gasp your dying breath shall you understand your life amounted to no more than one drop in a limitless ocean!”

Yet what is any ocean but a multitude of drops?
