
District Dispatch: FOIA reform unanimously passed by Senate faces final hurdle

planet code4lib - Wed, 2016-03-16 15:28

Legislation to reform FOIA will offer greater government transparency and provide the public with greater and more timely access to government information.

It’s taken just over a year for the Senate to vote on S. 337, the FOIA Improvement Act, but its unanimous approval yesterday is a wonderful way to celebrate Sunshine Week 2016! ALA and many other advocates’ attention will now be focused on clearing the final hurdles to marking FOIA’s 50th anniversary (fittingly on July 4th) with a White House signing ceremony.

Before that can happen, however, Senate and House negotiators first must reconcile S. 337 with the House’s own version of FOIA reform, H.R. 653, passed unanimously in that chamber in January of this year. While similar, the bills are not identical in several substantive ways as this excellent Congressional Research Service history and side-by-side comparison details. With an extra-long summer recess to accommodate the major parties’ political conventions looming, and a legislative calendar further truncated by the 2016 elections themselves, time will be tight if Congress and the public are to avoid the sad situation we were left in at the end of the 113th Congress when time simply ran out to enact FOIA reform in 2014!

As just passed by the Senate, key provisions of the FOIA Improvement Act would: strengthen the Office of Government Information Services (OGIS); “require the Director of the Office of Management and Budget to ensure the operation of a consolidated online request portal that allows a member of the public to submit a request for records to any agency from a single website;” and codify the President’s “presumption of openness” policy instituted for all federal agencies at the very start of this Administration.

ALA sincerely thanks Senator John Cornyn (R-TX), Senate Judiciary Committee Chairman Charles Grassley (R-IA) and Judiciary Ranking Member Patrick Leahy (D-VT) not only for introducing and supporting S. 337 in the current Congress, but for their longstanding commitment to meaningful FOIA reform over many years and multiple Congresses. With their continued leadership, ALA will continue to push with our allies for the House and Senate to quickly “conference” their two bills so that both chambers of Congress can vote again before time runs out to send broad FOIA reform to the President for the first time in many years.

Stay tuned for more on how you can help support that effort, and secure the President’s signature, soon.

The post FOIA reform unanimously passed by Senate faces final hurdle appeared first on District Dispatch.

DPLA: Job Opportunity: DPLA Network Manager

planet code4lib - Wed, 2016-03-16 13:00

The Digital Public Library of America has an opening for the position of DPLA Network Manager.

The Digital Public Library of America is growing its Hubs Network. DPLA Hubs include Content and Service Hubs and represent almost 2,000 cultural heritage institutions throughout the country. Over the next several years, our goal is for cultural heritage institutions in every state to have an on-ramp to DPLA. This position will play a critical role in helping to grow, document, and coordinate activities for the Hubs Network.

Reporting to the DPLA Director for Content, the DPLA Network Manager will perform the following job duties:

  • assist the DPLA Director for Content in building the DPLA Network by working with potential Hubs to assure their success as members of the network
  • manage communications with the Hub network
  • coordinate the Hubs application process
  • provide documentation for the network on various activities
  • oversee website updates related to Hubs network information needs, e.g., materials designed to help plan new Hubs, information about the Hubs network, application materials, etc.
  • field inquiries about joining the network
  • keep statistics related to the network and network activities
  • provide education and training to the Hub network, including assisting with the implementation of
  • assist in curation activities, such as building Exhibitions, Primary Source Sets or Network-owned ebook collections

Experience required: The ideal candidate will have 5+ years of experience working in digital libraries or a related setting, preferably in a collaborative environment. The Network Manager will understand the operations of the DPLA Hubs, including digitization, aggregation, metadata standards and normalization, rights status determination, and the human resources required to carry out these activities. The ideal candidate will also possess excellent written and verbal communication skills and a strong customer service orientation. The ability to travel to Hub locations and/or other locations to deliver education, training or project presentations is required.

Experience preferred: Direct experience with aggregation of metadata; knowledge of metadata aggregation tools; knowledge of the resources required to build and maintain a DPLA Hub; prior experience with project management and/or personnel management.

Education Required: MLS or related degree

Like its collection, DPLA is strongly committed to diversity in all of its forms. We provide a full set of benefits, including health care, life and disability insurance, and a retirement plan. Starting salary is commensurate with experience.

This position is full-time. DPLA is a geographically-distributed organization, with roughly half of its employees in its headquarters in Boston, Massachusetts, and most in the Northeast corridor between Washington and Boston. Given the significant travel and collaboration associated with this position, proximity to the majority of DPLA’s staff is helpful, and easy access to a major airport is essential.

About DPLA

The Digital Public Library of America strives to contain the full breadth of human expression, from the written word, to works of art and culture, to records of America’s heritage, to the efforts and data of science. Since launching in April 2013, it has aggregated over 11 million items from nearly 2,000 institutions. DPLA is a registered 501(c)(3) non-profit.

To apply, send a letter of interest detailing your qualifications, resume and a list of 3 references in a single PDF to  First review of applications will begin April 15, 2016 and will continue until the position is filled.

pinboard: Twitter

planet code4lib - Wed, 2016-03-16 02:03
It’s me, yr girl who went to her tech job orientation in a #code4lib shirt

Access Conference: Call for Proposals 2016

planet code4lib - Tue, 2016-03-15 20:10

The Access 2016 Program Committee invites proposals for participation in this year’s Access Conference, which will be held on the beautiful campus of the University of New Brunswick in the hip city of Fredericton, New Brunswick from 4-7 October.

There’s no special theme to this year’s conference, but — in case you didn’t know — Access is Canada’s annual library technology conference, so … we’re looking for presentations about cutting-edge library technologies that would appeal to librarians, technicians, developers, programmers, and managers.

Access is a single-stream conference that will feature:
• 45-minute sessions,
• lightning talks (speakers have five minutes to talk while slides—20 in total—automatically advance every 15 seconds),
• a half-day workshop on the last day of the conference,
• and maybe a surprise or two: if you have a bright idea for something different (panel, puppet show, etc.), we’d love to hear it

To submit your proposal, please fill out the form by 15 April.

Please take a look at the Code of Conduct too.

If you have any questions, check out the site at or write to David Ross, Chair of the Program Committee.

We’re looking forward to hearing from you!

Access Conference: 2016 Ticket Prices

planet code4lib - Tue, 2016-03-15 17:57

Ticket prices have been set for 2016.

Full conference tickets include admission to hackfest, two and a half days of our amazing single-stream conference and a half-day workshop on the last day. It is all you can eat for one amazingly low price. All prices in Canadian dollars.

Ticket Options

1. Early Bird – $350

A limited number of tickets will be available and should go on sale in June. Don’t miss out on this amazing deal.

2. Regular – $450

Standard ticket rates are still unbeatable. You can’t go wrong for four days at this price.

3. Speaker Rate – $300

Want to pitch in? Have you got a great project to share? Get your proposal approved and we’ll cut you a great deal. Speakers are provided discounted tickets and can register once approved. As with other tickets, the speaker rate includes hackfest, the two-and-a-half-day conference, and the half-day workshop.

4. Student Rate – $200 (limited)

Limited to 25 tickets, these should go on sale in June as well. The student rate includes hackfest, the two-and-a-half-day conference, and the half-day workshop. Educational identification will be required.

5. One-Day Pass – $225

Only interested or only have time for one great day? We’ve got you covered.

We’ll let you know when tickets are on sale!

David Rosenthal: Elsevier and the Streisand Effect

planet code4lib - Tue, 2016-03-15 15:00
Nearly a year ago I wrote The Maginot Paywall about the rise of research into the peer-to-peer sharing of academic papers via mechanisms including Library Genesis, Sci-Hub and #icanhazpdf. Although these mechanisms had been in place for some time they hadn't received a lot of attention. Below the fold, a look at how and why this has recently changed.

In 2001 the World Health Organization worked with the major publishers to set up Hinari, a system whereby researchers in developing countries could get free or very-low-cost access to health journals. There are similar systems for agriculture, the environment and technology. Why would the publishers give access to their journals to researchers at institutions that hadn't paid anything?

The answer is that the publishers were not losing money by doing so. There was no possibility that institutions in developing countries could pay the subscription. Depriving them of access would not motivate them to pay; they couldn't possibly afford to pay. Cross-subsidizing their access cost almost nothing and had indirect benefits, such as cementing the publishers' role as gatekeepers for research, and discouraging the use of open access.

Similarly, peer-to-peer sharing of papers didn't actually lose the major publishers significant amounts of money. Institutions that could afford to subscribe were not going to drop their subscriptions and encourage their researchers to use these flaky and apparently illegal alternatives. The majority usage of these mechanisms was from researchers whose institutions would never subscribe, and who could not afford the extortionate pay-per-view charges. Effective techniques to suppress them would be self-defeating. As I wrote in The Maginot Paywall:
Copyright maximalists, such as the major academic publishers, are in a similar position. The more effective and thus intrusive the mechanisms they implement to prevent unauthorized access, the more they incentivize "guerilla open access".

Then last June Elsevier filed a case in New York trying to shut down Library Genesis and Sci-Hub. Both are apparently based in Russia, which is not highly motivated to send more of its foreign reserves to Western publishers. So the case was not effective at shutting them down. It turned out, however, to be a classic case of the Streisand Effect, in which attempting to suppress information on the Web causes it to attract far more attention.

The Streisand Effect started slowly, with pieces at Quartz and BBC News in October. The EFF weighed in on the topic in December with What If Elsevier and Researchers Quit Playing Hide-and-Seek?:
Sci-Hub and LibGen have now moved to new domains, and Sci-Hub has set up a .onion address; this allows users to access the service anonymously through Tor. How quickly the sites have gotten back on their feet after the injunction underscores that these services can't really be stopped. Elsevier can't kill unauthorized sharing of its papers; at best, it can only make sharing incrementally less convenient.

But the Streisand Effect really kicked in early last month with Simon Oxenham's Meet the Robin Hood of Science, which led to Fiona MacDonald's piece at Science Alert, Kaveh Waddell's The Research Pirates of the Dark Web and Kieran McCarthy's Free science journal library gains notoriety, lands injunctions. Mike Masnick's Using Copyright To Shut Down 'The Pirate Bay' Of Scientific Research Is 100% Against The Purpose Of Copyright went back to the Constitution:
Article 1, Section 8, Clause 8 famously says that Congress has the following power:
To promote the progress of science and useful arts, by securing for limited times to authors and inventors the exclusive right to their respective writings and discoveries.

Masnick also pointed to the 1790 Copyright Act, which was subtitled "An Act for the Encouragement of Learning." Encouragement of learning is what Sci-Hub is for. Mike Taylor's Barbra Streisand, Elsevier, and Sci-Hub was AFAIK the first to point out that Elsevier had triggered the Streisand Effect. Simon Oxenham followed up with The Robin Hood of Science: The Missing Chapter, making the connection with the work of the late Aaron Swartz.

Barbara Fister made the very good point that universities don't just supply the publishers with free labor in the form of authoring and reviewing:

Because it is labor - lots of labor - to maintain link resolvers, keep license agreements in order, and deal with constant changes in subscription contents. We have to work a lot harder to be publishers' border guards than people realize.

She also clearly lays out the impossible situation librarians are in:
We feel we are virtually required to provide access to whatever researchers in our local community ask for while restricting access from anyone outside that narrowly-defined community of users. Instead of curators, we're personal shoppers who moonlight as border guards. This isn't working out well for anyone. Unaffiliated researchers have to find illegal work-arounds, and faculty who actually have access through libraries are turning to the black market for articles because it seems more efficient than contacting their personal shopper, particularly when the library itself doesn't figure in their work flow. In the meantime, all that money we spend on big bundles of articles (or on purchasing access to articles one at a time when we can't afford the bundle anymore) is just a really high annual rent. We can't preserve what we don't own, and we don't curate because our function is to get what is asked for.

The Library Loon has a series of posts that are worth reading (together with some of their comments). She links to A Short History of The Russian Digital Shadow Libraries by Balázs Bodó, a must-read analysis starting in Soviet times showing that Sci-Hub is but one product of a long history of resistance to censorship. Bodó has a more reflective piece In the Name of Humanity in Limn's Total Archive issue, where he makes the LOCKSS argument:

This is the paradox of the total piratical archive: they collect enormous wealth, but they do not own or control any of it. As an insurance policy against copyright enforcement, they have already given everything away: they release their source code, their databases, and their catalogs; they put up the metadata and the digitalized files on file-sharing networks. They realize that exclusive ownership/control over any aspects of the library could be a point of failure, so in the best traditions of archiving, they make sure everything is duplicated and redundant, and that many of the copies are under completely independent control.

The Loon's analysis of the PR responses from the publishers is acute:
Why point this effluent at librarians specifically rather than academe generally? Because publishers are not stupid; libraries are their gravy train and they know that. The more they can convince librarians that it is somehow against the rules (whether “rules” means “law” or “norms” or even merely “etiquette,” and this does vary across publisher sallies) to cross or question them, the longer that gravy train keeps rolling. Researchers, you simply do not matter to publishers in the least until you credibly threaten a labor boycott or (heaven forfend) actually support librarian budget-reallocation decisions. The money is coming from librarians.

Last weekend the Streisand Effect reached the opinion pages of the New York Times with Kate Murphy's Should All Research Papers Be Free?, replete with quotes from Michael Eisen, Alicia Wise, Peter Suber and David Crotty. Alas, Murphy starts by writing "Her protest against scholarly journals’ paywalls". Sci-Hub isn't a protest. Calling something a protest is a way of labelling it ineffectual. Sci-Hub is a tool that implements a paywall-free world. Occupy Wall Street was a protest, but had it actually built a functioning alternative financial system no-one would be describing it that way.

The result of the Streisand Effect has been, among other things, to sensitize the public to the issue of open access. Oxenham writes:
vast numbers of people who read the story thought researchers or universities received a portion of the fees paid by the public to read the journals, which contain academic research funded by taxpayers.

This clearly isn't in Elsevier's interest. So, having failed to shut down the services and having garnered them a lot of free publicity, where does Elsevier go from here? I see four possible paths:
  • They can try to bribe the Russians to clamp down on the services, for example by offering Russian institutions very cheap subscriptions as a quid pro quo. But they only control a minority of the content, and they would be showing other countries how to reduce their subscription costs by hosting the services.
  • They can try to punish the Russians for not clamping down, for example by cutting the country off from Elsevier content. But this would increase the incentive to host the services.
  • They can sue their customers, the institutions whose networks are being used to access new content. In 2008 publishers sued Georgia State for "pervasive, flagrant and ongoing unauthorized distribution of copyrighted materials". Eight years later the case is still being argued on appeal. But in the meantime the landscape has changed. Many research funders now require open access. Many institutions now require (but fail to enforce) deposit of papers in institutional repositories. Institutions facing publisher lawsuits would have a powerful incentive to enforce deposit, because their network isn't needed to leak open access content to Sci-Hub.
  • They can sue the sources of their content, the individual researchers who they may be able to trace as the source of Sci-Hub materials. This would be a lot easier if the publishers stopped authenticating via IP address and moved to a system based on individual logins. Although this would make life difficult for Sci-Hub-like services if they used malware-based on-campus proxies, it would also make using subscription journals miserable for the vast majority of researchers and thus greatly increase the attractiveness of open access journals. But the Library Loon correctly points out that Sci-Hub's database of credentials is a tempting target for the publishers and others to attempt to compromise.
None of these look like a winning strategy in the longer term. One wonders if Elsevier gamed out the consequences of their lawsuit. The cost of pay-per-view access is the reason Elbakyan gives for starting Sci-Hub:
“Prices are very high, and that made it impossible to obtain papers by purchasing. You need to read many papers for research, and when each paper costs about 30 dollars, that is impossible.”

It seems I was somewhat prophetic in pointing to the risk pay-per-view poses for the publishers in my 2010 JCDL keynote:
Libraries implementing PPV have two unattractive choices:
  • Hide the cost of access from readers. This replicates the subscription model but leads to overuse and loss of budget control.
  • Make the cost of access visible to readers. This causes severe administrative burdens, discourages use of the materials, and places a premium on readers finding the free versions of content.
Placing a premium on finding the open access copy is something publishers should wish to avoid.

Elsevier and the other major publishers have a fundamental problem. Their customers are libraries, but libraries don't actually use the content access they buy. The libraries' readers are the ones that use the access. What the readers want is a single portal, preferably Google, that provides free, instant access to the entire corpus of published research. As Elbakyan writes:
On the Internet, we obviously need websites like Sci-Hub where people can access and read research literature. The problem is, such websites oftenly cannot operate without interruptions, because current system does not allow it.
The system has to be changed so that websites like Sci-Hub can work without running into problems. Sci-Hub is a goal, changing the system is one of the methods to achieve it.

Sci-Hub is as close as anyone has come to providing what the readers want. None of the big publishers can provide it, not merely because doing so would destroy their business model, but also because none of them individually control enough of the content. And the publishers' customers don't want them to provide it, because doing so would reduce even further the libraries' role in their institutions. No-one would need "personal shoppers who moonlight as border guards".

D-Lib: RAMLET: a Conceptual Model for Resource Aggregation for Learning, Education, and Training

planet code4lib - Tue, 2016-03-15 14:13
Article by Katrien Verbert, KU Leuven, Belgium; Nancy J. Hoebelheinrich, Knowledge Motifs, USA; Kerry Blinco, Northern Territory Library, Australia; Scott Lewis, Austin, Texas, USA; and Wilbert Kraan, University of Bolton, UK

D-Lib: Humanities Data in the Library: Integrity, Form, Access

planet code4lib - Tue, 2016-03-15 14:13
Article by Thomas Padilla, Michigan State University

D-Lib: Transforming User Knowledge into Archival Knowledge

planet code4lib - Tue, 2016-03-15 14:13
Article by Tarvo Karberg, University of Tartu and National Archives of Estonia and Koit Saarevet, National Archives of Estonia

D-Lib: Grappling with Data

planet code4lib - Tue, 2016-03-15 14:13
Editorial by Laurence Lannom, CNRI

D-Lib: A New Approach to Configuration Management for Private LOCKSS Networks

planet code4lib - Tue, 2016-03-15 14:13
Article by Tobin M. Cataldo, Birmingham Public Library, Birmingham, Alabama, USA

LITA: Another upcoming LITA web course and webinar, register now!

planet code4lib - Tue, 2016-03-15 14:00

Register now for the next great LITA continuing education web course and webinar offerings.

Don’t miss out on this repeat of last spring’s sold-out LITA webinar:

Yes, You Can Video: A how-to guide for creating high-impact instructional videos without tearing your hair out
Presenters: Anne Burke, Undergraduate Instruction & Outreach Librarian, North Carolina State University Libraries; and Andreas Orphanides, Librarian for Digital Technologies and Learning, North Carolina State University Libraries
Tuesday, April 12, 2016
1:00 pm – 2:30 pm Central Time
Register Online, page arranged by session date (login required)

Have you ever wanted to create an engaging and educational instructional video, but felt like you didn’t have the time, ability, or technology? Are you perplexed by all the moving parts that go into creating an effective tutorial? In this 90 minute session, Anne Burke and Andreas Orphanides will help to demystify the process, breaking it down into easy-to-follow steps, and provide a variety of technical approaches suited to a range of skill sets. They will cover choosing and scoping your topic, scripting and storyboarding, producing the video, and getting it online. They will also address common pitfalls at each stage. This webinar is for anyone wanting to learn more about making effective videos.

Details here and Registration here.

Make the investment in deeper learning with this web course:

Universal Design for Libraries and Librarians
Instructors: Jessica Olin, Director of the Library, Robert H. Parker Library, Wesley College; and Holly Mabry, Digital Services Librarian, Gardner-Webb University.
Starting Monday, April 11, 2016, running for 6 weeks
Register Online, page arranged by session date (login required)

Universal Design is the idea of designing products, places, and experiences to make them accessible to as broad a spectrum of people as possible, without requiring special modifications or adaptations. This course will present an overview of universal design as a historical movement, as a philosophy, and as an applicable set of tools. Students will learn about the diversity of experiences and capabilities that people have, including disabilities (e.g. physical, learning, cognitive, resulting from age and/or accident), cultural backgrounds, and other abilities. The class will also give students the opportunity to redesign specific products or environments to make them more universally accessible and usable. By the end of this class, students will be able to…

  • Articulate the ethical, philosophical, and practical aspects of Universal Design as a method and movement – both in general and as it relates to their specific work and life circumstances
  • Demonstrate the specific pedagogical, ethical, and customer service benefits of using Universal Design principles to develop and recreate library spaces and services in order to make them more broadly accessible
  • Integrate the ideals and practicalities of Universal Design into library spaces and services via a continuous critique and evaluation cycle

Details here and Registration here.

And don’t miss the other upcoming LITA continuing education offerings by checking the Online Learning web page.

Questions or Comments?

For all other questions or comments related to the course, contact LITA at (312) 280-4268 or Mark Beatty,

HangingTogether: Metadata for archived websites

planet code4lib - Tue, 2016-03-15 02:39

That was the topic discussed recently by OCLC Research Library Partners metadata managers, initiated by Dawn Hale of Johns Hopkins University. For some years now, archives and libraries have been archiving web resources of scholarly or institutional interest to ensure their continuing access and long-term survival. Some websites are ephemeral or intentionally temporary, such as those created for a specific event. Institutions would like to archive and preserve the content of their websites as part of their historical record. A large majority of web content is harvested by web crawlers, but the metadata generated by harvesting alone is considered insufficient to support discovery.

Examples of archived websites among OCLC Research Library Partnership institutions include:

  • Ivy-Plus collaborative collections: Collaborative Architecture, Urbanism, and Sustainability Web Archive (CAUSEWAY) and Contemporary Composers Web Archive (CCWA);
  • The New York Art Resources Consortium (NYARC), which captures dynamic web-based versions of auction catalogs and artist, gallery and museum websites;
  • Thematic collections supporting a specific research area, such as Columbia University’s Human Rights, Historic Preservation and Urban Planning, and New York City Religions;
  • Teaching materials, such as MIT’s OpenCourseWare (OCW), which aspires to make the content available to scholars and instructors for reuse for the foreseeable future;
  • Government archives, such as the Australian Government Web Archive.

Approaches to web archiving are evolving. Libraries are developing policies regarding content selection, exploring potential uses of archived content and considering the requirements for long-term preservation. Our discussion focused on the challenges for creating and managing the metadata needed to enhance machine-harvested metadata from websites.

Some of the challenges raised in the discussions:

  • Descriptive metadata requirements may depend on the type of website archived, e.g., transient sites, research data, social media, or organizational sites. Sometimes only the content of the sites is archived when the look-and-feel of the site is not considered significant.
  • Practices vary. Some characteristics of websites are not addressed by existing descriptive rules such as RDA (Resource Description and Access) and DACS (Describing Archives: A Content Standard). Metadata tends to follow bibliographic description traditions or archival practice depending on who creates the metadata.
  • Metadata requirements may differ depending on the scale of material being archived and its projected use. For example, digital humanists look at web content as data and analyze it for purposes such as identifying trends, while other users merely need individual pages.
  • Many websites are updated repeatedly, requiring re-crawling when the content has changed. Some types of change can result in capture failures.
  • The level of metadata granularity (collection, seed/URL, document) may vary based on anticipated user needs, scale of material being crawled, or available staffing.
  • Some websites are archived by more than one institution. Each may have captured the same site on different dates and with varying crawl specifications. How can they be searched and used in conjunction with one another?

Some of the issues raised such as deciding on the correct level of granularity, determining relevance to one’s existing collection and handling concerns about copyright are routinely addressed by archivists. Jackie Dooley’s The Archival Advantage: Integrating Archival Experience into Management of Born-Digital Library Materials is applicable to archiving websites as well.

Focus group members agreed we had a common need for community-level metadata best practices applicable to archived websites, perhaps a “core metadata set”. Since the focus group discussions started in January, my colleagues Jackie Dooley and Dennis Massie have convened a 26-member OCLC Research Library Partnership Web Archiving Metadata Working Group with a charge to “evaluate existing and emerging approaches to descriptive metadata for archived websites” and “recommend best practices to meet user needs and to ensure discoverability and consistency”. Stay tuned!


About Karen Smith-Yoshimura

Karen Smith-Yoshimura, senior program officer, works on topics related to creating and managing metadata with a focus on large research libraries and multilingual requirements.


William Denton: Passing blocks to a thread in Sonic Pi

planet code4lib - Tue, 2016-03-15 00:47

A short example of how to pass a block to a thread in Sonic Pi, where it will be run and control will return to you immediately. The block can contain whatever you want. (Thanks to Sam Aaron, creator of Sonic Pi, for this; he sent it to the mailing list but I can’t find the original now.)

define :play_in_a_thread do |&block|
  in_thread do
    # run the passed-in block inside the new thread
    block.call
  end
end

play_in_a_thread do
  play :c4
  sleep 180
  play :e4
  sleep 180
  play :g4
end

play :c3

The play_in_a_thread function (you can call it whatever you want) will take whatever block you give it, and combined with Sonic Pi’s in_thread this parcels the block off, runs it, and returns control. In this example the C major chord will take six minutes to play, but the two Cs (:c4 and :c3) will play at the same time. You can call play_in_a_thread as many times as you want, from anywhere. You can pass in whatever you want, simple or complex; it doesn’t matter, Sonic Pi will handle it and let the program flow continue uninterrupted.

The &block parameter is part of Ruby (in and on which Sonic Pi is built), and tells the function to expect a block to be passed in. The block operator (it’s also called the ampersand operator) section of the Ruby documentation on methods explains more.
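The same pattern works in plain Ruby, outside Sonic Pi. Here is a minimal sketch (the method name run_in_background is my own invention for illustration, not anything from Sonic Pi):

```ruby
# &block captures the block passed to the method as a Proc;
# block.call runs it later -- here, inside a background thread.
def run_in_background(&block)
  Thread.new { block.call }
end

# Control returns to us immediately; the block runs concurrently.
t = run_in_background do
  sleep 0.1
  :finished
end

# Thread#value joins the thread and returns the block's result.
puts t.value
```

Thread#value is the plain-Ruby way to wait on the result; Sonic Pi’s in_thread handles that bookkeeping for you.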

Thanks to Sonic Pi, I’ve learned something new about Ruby.
