Feed aggregator

Open Library Data Additions: Talis MARC records

planet code4lib - Wed, 2016-03-23 09:18

5.5 million MARC records contributed by Talis to Open Library under the ODC PDDL (http://www.opendatacommons.org/odc-public-domain-dedication-and-licence/).

This item belongs to: data/ol_data.

This item has files of the following types: Data, Data, Metadata

Open Library Data Additions: Amazon Crawl: part 9

planet code4lib - Wed, 2016-03-23 05:24

Part 9 of Amazon crawl.

This item belongs to: data/ol_data.

This item has files of the following types: Data, Data, Metadata, Text

FOSS4Lib Recent Releases: Evergreen - 2.10.1

planet code4lib - Wed, 2016-03-23 03:08

Last updated March 22, 2016. Created by gmcharlt on March 22, 2016.

Package: Evergreen
Release Date: Tuesday, March 22, 2016

Evergreen ILS: Evergreen 2.10.1 released

planet code4lib - Wed, 2016-03-23 03:05

Evergreen 2.10.1 is now available for download.

This is a bugfix release that fixes the following significant bug:

  • LP#1560174: Importing MARC records can fail in database upgraded to 2.10.0

This bug affected only databases that were upgraded to 2.10.0 from a previous version; fresh installations of 2.10.0 are not affected.

Evergreen users who prefer not to perform a full upgrade from 2.10.0 to 2.10.1 can fix the bug by applying the database update script 2.10.0-2.10.1-upgrade-db.sql (found in the source directory Open-ILS/src/sql/Pg/version-upgrade).
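
For example, a minimal invocation with psql might look like the following sketch, which assumes the stock database and user names; adjust credentials and paths for your installation:

  cd Open-ILS/src/sql/Pg/version-upgrade
  psql -U evergreen -d evergreen -f 2.10.0-2.10.1-upgrade-db.sql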

For more information about what’s in the release, check out the release notes.

Open Library Data Additions: Amazon Crawl: part gq

planet code4lib - Wed, 2016-03-23 01:41

Part gq of Amazon crawl.

This item belongs to: data/ol_data.

This item has files of the following types: Data, Data, Metadata, Text

David Rosenthal: The Dawn of DAWN?

planet code4lib - Tue, 2016-03-22 22:00
At the 2009 SOSP, David Andersen and co-authors from CMU presented FAWN, the Fast Array of Wimpy Nodes. It inspired me to suggest, in my 2010 JCDL keynote, that the cost savings FAWN realized without performance penalty by distributing computation across a very large number of very low-power nodes might also apply to storage.

The following year Ian Adams and Ethan Miller of UC Santa Cruz's Storage Systems Research Center and I looked at this possibility more closely in a Technical Report entitled Using Storage Class Memory for Archives with DAWN, a Durable Array of Wimpy Nodes. We showed that it was indeed plausible that, even at then current flash prices, the total cost of ownership over the long term of a storage system built from very low-power system-on-chip technology and flash memory would be competitive with disk while providing high performance and enabling self-healing.

Although flash remains more expensive than hard disk, since 2011 the gap has narrowed from a factor of about 12 to about 6. Pure Storage recently announced FlashBlade, an object storage fabric composed of large numbers of blades, each equipped with:
  • Compute – 8-core Xeon system-on-a-chip – and Elastic Fabric Connector for external, off-blade, 40GbitE networking,
  • Storage – NAND storage with 8TB or 52TB of raw capacity and on-board NV-RAM with a super-capacitor-backed write buffer, plus a pair of ARM CPU cores and an FPGA,
  • On-blade networking – PCIe card to link compute and storage cards via a proprietary protocol.
Chris Mellor at The Register has details and two commentaries.

FlashBlade clearly isn't DAWN. Each blade is much bigger, much more powerful and much more expensive than a DAWN node. No-one could call a node with an 8-core Xeon, 2 ARMs, and 52TB of flash "wimpy", and it'll clearly be too expensive for long-term bulk storage. But it is a big step in the direction of the DAWN architecture.

DAWN exploits two separate sets of synergies:
  • Like FlashBlade, it moves the computation to where the data is, rather than moving the data to where the computation is, reducing both latency and power consumption. The further data moves on wires from the storage medium, the more power and time it takes. This is why Berkeley's Aspire project's architecture is based on optical interconnect technology, which when it becomes mainstream will be both faster and lower-power than wires. In the meantime, we have to use wires.
  • Unlike FlashBlade, it divides the object storage fabric into a much larger number of much smaller nodes, implemented using the very low-power ARM chips used in cellphones. Because the power a CPU needs tends to grow faster than linearly with performance, the additional parallelism provides comparable performance at lower power (a rough arithmetic sketch follows this list).
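As a rough, idealized arithmetic sketch of that second synergy (my illustration, not from the original argument): dynamic CMOS power scales roughly as P ∝ f·V², and since supply voltage must rise roughly in step with clock frequency f, power grows roughly as f³. Replacing one core running at frequency f with two cores running at f/2 then keeps aggregate throughput the same while cutting power to 2 × (f/2)³ = f³/4, a quarter of the original.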
So FlashBlade currently exploits only one of the two sets of synergies. But once Pure Storage has deployed this architecture in its current relatively high-cost and high-power technology, re-implementing it in lower-cost, lower-power technology should be easy and non-disruptive. They have done the harder of the two parts.

Library of Congress: The Signal: A National Digital Stewardship Resident at the U.S. Senate

planet code4lib - Tue, 2016-03-22 17:43

This is a guest post by John Caldwell.

Meeting in the Dirksen Senate Office Building. Image courtesy of Brandon Hirsch.

On Friday, January 29, 2016, I hosted my fellow National Digital Stewardship residents, their mentors, and the NDSR program staff for our cohort's first enrichment session at the US Senate.

The morning started with two presentations. First, Mark Evans, Director of Digital Archives and Information Resources Management Services at History Associates talked about the challenges of preserving a Senator’s digital legacy. Second, Brandon Hirsch, IT Specialist at the Center for Legislative Archives, a division of the National Archives, shared with us the difficulty in preserving the permanent electronic records of Congress.

Mark talked about how History Associates performs a digital assessment and how one of the important elements of the assessment is to use digital preservation tools such as DROID to identify file formats. A graphical representation of the format distribution can show you where to focus future preservation and description efforts. Unfortunately, the long tail can also be misleading; in this collection, there were only 116 email files, but email can be a treasure trove of digital information.

Brandon shared an experience where, in order to extract permanent records that were transferred to the National Archives in proprietary software, Center for Legislative Archives’ staff leveraged virtualization to build a temporary instance of the proprietary application. The recovery operation allowed Center staff to extract and preserve the records in their native format, dissociated from the proprietary container.

Statue of Ben Franklin on the 2nd floor of the Senate Wing. Image courtesy of Valerie Collins.

While this project demonstrated a significant achievement for the Center in terms of preservation, it is not a viable strategy for ongoing and future preservation work. Aside from the increased staff resources devoted to this single operation, the underlying technology used for virtualization changes rapidly. In addition to technological changes, business strategies also change and may alter the long-term support for virtualization products and formats.

This not only introduces additional format-sustainability problems; it also suggests that, in the world of digital preservation, archivists, curators and librarians are at the mercy of the technology sector. In a short amount of time, tools and systems we rely on can disappear or cease to be supported.

Some more general discussion topics included the idea that considerations for preservation should be baked in at the point of creation; concern over the growing volume of electronic records, especially in government records (the Center doubled its holdings between FY14 and FY15); and how digital preservation concerns will be managed by emulation or migration.

After the presentations, Mark and Brandon demonstrated digital preservation tools and we talked about how these tools can be integrated into digital preservation workflows (for a longer discussion of digital preservation tools, see my Signal blog post from November 2015).

District Dispatch: Google Policy Fellowship applications due this Friday (3/25)!

planet code4lib - Tue, 2016-03-22 16:46

Application window for 2016 Google Policy Fellowship closing soon.

Earlier this month, ALA announced the opening of the application process for the 2016 Google Policy Fellowship program. Consider this post a friendly reminder that applications for the program are due on Friday, March 25th (that’s right, just three days from today).

The program is a great opportunity for graduate students to gain experience working on information policy within the dynamic beltway ecosystem. As we mentioned in the announcement, Fellows work on a wide gamut of issues that may include digital copyright, e-book licenses and access, future of reading, international copyright policy, broadband deployment, online privacy, telecommunications policy (including e-rate and network neutrality), digital divide, open access to information, free expression, digital literacy, the future of libraries generally, and more.

This summer, the selected fellow will spend 10 weeks learning about national policy from ALA’s Washington Office staff, and completing a major project. Google provides the $7,500 stipend for the summer, but the work agenda is determined by the ALA and the selected fellow. Throughout the summer, Google’s Washington Office will provide an educational program for all of the fellows, such as lunchtime talks and interactions with Google Washington staff.

Johnna Percell, a graduate of the College of Information Studies at the University of Maryland, served as our fellow last summer. Margaret Kavaras, our fellow from the summer of 2014, now serves as an Office for Information Technology Policy Research Associate.

ALA encourages all interested graduate students – and especially those in the library science and information fields – to apply for the program. Further information on the program is available here.

The post Google Policy Fellowship applications due this Friday (3/25)! appeared first on District Dispatch.

Peter Murray: Modify Islandora objects on-the-fly using Devel “Execute PHP Code”

planet code4lib - Tue, 2016-03-22 15:45

Alan Stanley taught me this trick at an Islandora Camp a few years ago, and when trying to remember it this morning I messed up one critical piece. So I’ll post it here so I have something to refer back to when I need to do this again.

The Drupal Devel module includes a menu item for executing arbitrary PHP code on the server. (This is, of course, something you want to set permissions on very tightly because it can seriously wreak havoc on your day if someone uses it to do bad things.) Navigate to /devel/php on your Islandora website (with the Devel module enabled), and you’ll get a nice, big <textarea> and an “Execute” button:

Execute arbitrary PHP using Drupal Devel module.

In this case, I’m generating the TECHMD datastream using the FITS module and displaying the results of the function call on the HTML page using the Devel module’s dpm() function:

include drupal_get_path('module', 'islandora_fits') . '/includes/derivatives.inc';
// Load the target object and generate its TECHMD datastream with FITS.
$object = islandora_object_load('demo:6');
$results = islandora_fits_create_techmd($object, FALSE, array('source_dsid' => 'OBJ'));
// Show the function's return value on the page via Devel's dpm().
dpm($results);

Works like a charm!

Open Library Data Additions: Amazon Crawl: part fz

planet code4lib - Tue, 2016-03-22 14:33

Part fz of Amazon crawl.

This item belongs to: data/ol_data.

This item has files of the following types: Data, Data, Metadata, Text

HangingTogether: Impact of identifiers on authority workflows

planet code4lib - Tue, 2016-03-22 12:00

That was the topic discussed recently by OCLC Research Library Partners metadata managers, initiated by Chew Chiat Naun of Cornell University. Using identifiers now to point to “things” rather than relying on text strings will facilitate transforming legacy data into linked data later. By linking to authoritative sources through identifiers, libraries can reduce the need for local maintenance of authority files. A number of institutions have already started adding identifiers to their catalog records, including the national libraries of France and Germany.

A Program for Cooperative Cataloging task group is developing a plan to incorporate identifiers (URIs) in MARC bibliographic and authority records as mainstream practice. One challenge is to differentiate real-world objects from descriptions about real-world objects. This distinction may be difficult for catalogers to make, but maybe tools could be created to make this differentiation easier. The goal is to align library practices with those of the semantic web. The task group has focused on MARC fields that support the $0 for identifiers, and the British Library is preparing a proposal to use $4 to specify relationships with identifiers.

These identifiers could point to non-library resources. For example, Wikidata already has identifiers for such roles as trumpeter, violinist, translator, librettist, and narrator. The task group’s focus has been on identifiers in bibliographic records because all catalogers can create bibliographic records while only a much smaller subset can create authority records. In some countries, only the national libraries create national authority records. Opportunities for batch enhancement of authorities are limited currently. Ideally, the bibliographic record would have a $0 URI pointing to a real-world object described by an authority record.
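
As a hypothetical illustration (the heading and identifier below follow the general id.loc.gov pattern but are illustrative, not drawn from the discussion), a bibliographic field carrying such a $0 URI might look like:

  100 1# $a Chomsky, Noam, $e author. $0 http://id.loc.gov/authorities/names/n78095330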

OCLC’s recent Person Entity Lookup pilot indicated how identifiers might impact authority workflows. By looking up a person and retrieving a number of identifiers, libraries could aggregate associated information from other authorities or sources having a “same as” relationship. For example, Wikidata shows that Noam Chomsky is affiliated with MIT, information that neither the LC/NAF authority file nor VIAF (Virtual International Authority File) includes. One of the most important—and powerful—aspects of adding identifiers is to reduce the amount of copying/pasting in the library environment when the identifier is stewarded elsewhere. Identifiers could provide a bridge between MARC and non-MARC environments and to non-library resources. Librarians wouldn’t have to be the experts in all domains.

Other potential areas of impact:

  • Much journal literature is described by non-library agencies. Identifiers could link the forms of name in journal articles vs. scholarly profiling services vs. library catalogs, thus transcending currently siloed domains. This should also help catalogers disambiguate names more easily.
  • Identifiers could provide links to digital collections and other resources that are not under authority control currently.
  • Identifiers linking to other sources could allow us to present users with labels in non-Latin scripts for entities that are represented only by romanization in our current authority files.
  • In a linked data environment, identifiers could bypass authority records. Content negotiation could determine the preferred labels to display to the user. Ultimately, there could be much less emphasis on establishing an “authoritative text string”.

Tools mentioned during the discussions:

  • Terry Reese’s MARCEdit (“Build Links Data” enhancements) and the editor produced by LC for its BIBFRAME project include lookups of remote authority services that allow incorporating a range of identifier schemes into cataloging workflows.
  • The RIMMF (RDA in Many Metadata Formats) tool captures various attributes of an entity. Its focus is on concatenating elements the cataloger has selected rather than establishing an authorized access point. The application can decide what data to extract or display, such as an English or a Chinese language version.
  • W3C SHACL (Shapes Constraint Language) helps define the shapes that our data will need, such as what types of descriptions we’ll want for various attributes. These could include the attributes catalogers might want to add to enhance an entity such as a missing birthplace, institutional affiliation or discipline.
  • Catmandu is a data processing toolkit developed to build up digital libraries and research services.

Many challenges lie ahead. We’re going to need a larger vocabulary of relationships between entities.  Libraries will want their book vendors to also include identifiers in the records they supply. We will still have many name authorities without dates or other attributes that cannot be matched by algorithms alone, still requiring human curation. It is unclear how libraries—or their support systems—will deal with multiple identifiers referring to the same object or resource.  We need more editing tools that add URIs in the process of editing records. Libraries must educate their local systems vendors on the need for identifiers for both cataloging and discovery to avoid their stripping out the data added. Identifiers’ impact on authority workflows will depend on tools that don’t exist yet.

About Karen Smith-Yoshimura

Karen Smith-Yoshimura, senior program officer, works on topics related to creating and managing metadata with a focus on large research libraries and multilingual requirements.


Open Knowledge Foundation: #OpenDataDay 2016 – Lima, Peru

planet code4lib - Tue, 2016-03-22 11:43

For the third consecutive year, Open Data Peru organised the #OpenDataDay 2016, an international event about #OpenData.

Currently, open data is becoming a trend adopted by governments to provide information about public spending, budgets, etc. in open formats, free to use and available to any citizen. In this way it seeks to create a more transparent and participatory system. Because the data is released under an open license, any citizen can access and use the information to build distribution platforms, data visualizations, and so on. This not only benefits citizens; it also allows specialists, academics, journalists and organizations to process this information to generate research, articles, and much more complete applications.

On March 5, Open Data Peru invited different specialists, citizens and organizations to discuss and learn about the use and contribution of open data. During the morning, attendees heard various talks on the implementation of open data in Peru. Several initiatives were presented, such as the collective @QDatosAbiertos (We Want Open Data), which seeks to inform and engage citizens through communication campaigns and workshops aimed at demonstrating how simple technologies allow this data to be used without specialist skills. The group Ciudadanos al Día presented an initiative for Best Practices of Open Data in the public sector, which aims to reward public institutions that publish information as open data.

Another great presentation was that of the Municipality of San Isidro, which has been promoting a culture of technology and innovation since 2015. They implemented an open data portal and organized a hackathon. They also signed up to the International Open Data Charter.

Open Data Peru presented a summary of the work carried out during 2015 (workshops, hackathons, the Dataton, etc.). One of the main activities undertaken was the National Scholars Program, which focused on the decentralization of open data at the national level. Throughout this program, we worked with technology communities in different departments of Peru and selected leaders to become data trainers. With this work, Open Data Peru seeks to create a network of trainers and specialists who can work steadily and advise on creating platforms and applications using open data, and to create a space for experimentation and citizen participation.

During the afternoon, we held simultaneous workshops with different specialists. Participants were able to learn more about data journalism, visualization, narrative, the semantic web, usability and internet governance.

The #OpenDataDay 2016 in Peru finished with dynamic lightning talks during #PiscoyDatos.

Open Data Peru is constantly in search of volunteers to work on open data technology projects, train more journalists in the dynamics of working with data, and promote a more transparent system through the release of open data. We continue to improve the collection and contribution of open data on our platform, d.odpe.org.

We thank all the communities and organizations that were part of #OpenDataDay: StoryCode, OjoPúblico, Hiperderecho, Hack IT Labs, Ciudadanos al Día and each of the speakers.

The event was sponsored by Hack IT Labs, Municipality of San Isidro, the Peruvian Press Council and the Latin American Open Data Initiative (ILDA), thank you for your contribution and support of the event.

Eric Lease Morgan: Failure to communicate

planet code4lib - Tue, 2016-03-22 10:35

In my humble opinion, what we have here is a failure to communicate.

Libraries, especially larger libraries, are increasingly made up of many different departments, including but not limited to: cataloging, public services, collections, preservation, archives, and now-a-days departments of computer staff. From my point of view, these various departments fail to see the similarities between themselves, and instead focus on their differences. This focus on the differences is amplified by the use of dissimilar vocabularies and subdiscipline-specific jargon. This use of dissimilar vocabularies causes a communications gap that, left unresolved, ultimately creates animosity between groups. I believe this is especially true between the more traditional library departments and the computer staff. This communications gap is an impediment to achieving the goals of librarianship, and any library — whether it be big or small — needs to address these issues lest it waste both its time and money.

Here are a few examples outlining failures to communicate:

  • MARC – MARC is a data structure. The first 24 characters are called the leader. The second section is called the directory, and the third section is intended to contain bibliographic data. The whole thing is sprinkled with ASCII characters 29, 30, and 31, denoting the ends of the record itself, fields, and subfields, respectively. (See the first sketch after this list.) MARC does not denote the kinds of data it contains. Yet, many catalogers say they know MARC. Instead, what they really know are sets of rules defining what goes into the first and third sections of the data structure. These rules are known as AACR2/RDA. Computer staff see MARC (and MARCXML) as a data structure. Librarians see MARC as the description of an item akin to a catalog card.
  • Databases & indexes – Databases & indexes are two sides of the same information-retrieval coin. “True” databases are usually relational in nature and normalized accordingly. “False” databases are flat files — simple tables akin to Excel spreadsheets. Librarians excel (no pun intended) at organizing information, and this usually manifests itself through the creation of various lists. Lists of books. Lists of journals. Lists of articles. Lists of authoritative names. Lists of websites. Etc. In today’s world, the most scalable way to maintain lists is through the use of a database, yet most librarians wouldn’t be able to draw an entity-relationship diagram — the literal illustration of a database’s structure — to save their lives. With advances in computer technology, the problem of find is no longer solved through the searching of databases but instead through the creation of an index. In reality, modern indexes are nothing more than enhancements of traditional back-of-the-book indexes — lists of words and associated pointers to where those words can be found in a corpus. (See the second sketch after this list.) Computer staff see databases as MySQL and indexes as Solr. Librarians see databases as a matrix of rows & columns, and think of searching them in terms of licensed content such as JSTOR, Academic Search Premier, or the New York Times.
  • Collections – Collections, from the point of view of a librarian, are sets of curated items with a common theme. Taken as a whole, these collections embody a set of knowledge or a historical record intended for use by students & researchers for the purposes of learning & scholarship. The physical arrangement of the collection — especially in archives — as well as the intellectual arrangement of the collection is significant because they bring together like items or represent the development of an idea. This is why libraries have classification schemes and archives physically arrange their materials in the way they do. Unfortunately, computer staff usually do not understand the concept of “curation” and usually see the arrangements of books — classification numbers — as rather arbitrary.
  • Services – Many librarians see the library profession as being all about service. These services range from literacy programs to story hours. They range from the answering of reference questions to the circulation of books. They include social justice causes, stress relievers during exam times, and free access to computers with Internet connections. Services are important because they provide the means for an informed public, teaching & learning, and the improvement of society in general. Many of these concepts are not in the forefront of the minds of computer staff. Instead, their idea of service is making sure the email system works, people can log into their computers, computer hardware & software are maintained, and the connections to the Internet are continual.
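
To make the MARC point concrete, here is a minimal sketch in PHP (my illustration, not code from any real cataloging module; the input file name is hypothetical) that walks a MARC21 record purely as the data structure just described, leader, directory, and data, without knowing anything about the AACR2/RDA content rules:

  <?php
  // A minimal sketch, not production code: parse a raw MARC21 record
  // using only its structure. 'record.mrc' is a hypothetical input file.
  $raw = file_get_contents('record.mrc');

  $leader = substr($raw, 0, 24);            // the 24-character leader
  $base   = (int) substr($leader, 12, 5);   // offset where field data begins

  // The directory runs from byte 24 up to the field terminator (ASCII 30)
  // preceding the data section; each 12-byte entry holds a 3-byte tag,
  // a 4-byte field length, and a 5-byte starting position.
  $directory = substr($raw, 24, $base - 24 - 1);

  foreach (str_split($directory, 12) as $entry) {
      $tag    = substr($entry, 0, 3);
      $length = (int) substr($entry, 3, 4);
      $start  = (int) substr($entry, 7, 5);
      $field  = rtrim(substr($raw, $base + $start, $length), chr(30));
      // ASCII 31 delimits subfields; ASCII 29 ends the record itself.
      echo $tag, ': ', str_replace(chr(31), ' $', $field), "\n";
  }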
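
And to make the index point concrete, here is an equally small PHP sketch of an inverted index, a back-of-the-book index in code (again my illustration; systems like Solr do essentially this at much larger scale):

  <?php
  // A toy inverted index: each word points to the documents containing it.
  $documents = array(
      1 => 'the library curates collections of books',
      2 => 'an index maps words to the documents that contain them',
  );

  $index = array();
  foreach ($documents as $id => $text) {
      foreach (array_unique(str_word_count(strtolower($text), 1)) as $word) {
          $index[$word][] = $id;   // word => list of pointers (document ids)
      }
  }

  // "Searching" is now a direct lookup rather than a scan of the corpus.
  print_r(isset($index['index']) ? $index['index'] : array());   // Array ( [0] => 2 )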

As a whole, what the profession does not understand is that everybody working in a library has more things in common than differences. Everybody is (supposed to be) working towards the same set of goals. Everybody plays a part in achieving those goals, and it behooves everybody to learn & respect the roles of everybody else. A goal is to curate collections. This is done through physical, intellectual, and virtual arrangement, but it also requires the use of computer technology. Collection managers need to understand more of the computer technology, and the technologists need to understand more about curation. The application of AACR2/RDA is an attempt to manifest inventory and the dissemination of knowledge. The use of databases & indexes also manifests inventory and the dissemination of knowledge. Catalogers and database administrators ought to communicate on similar levels. Similarly, there is much more to the preservation of materials than putting bits on tape. “Yikes!”

What is the solution to these problems? In my opinion, there are many possibilities, but the solution ultimately rests with individuals willing to take the time to learn from their co-workers. It rests in the ability to respect — not merely tolerate — another point of view. It requires time, listening, discussion, reflection, and repetition. It requires getting to know other people on a personal level. It requires learning what others like and dislike. It requires comparing & contrasting points of view. It demands “walking a mile in the other person’s shoes”, and can be accomplished by things such as the physical intermingling of departments, cross-training, and simply by going to coffee on a regular basis.

Again, all of us working in libraries have more similarities than differences. Learn to appreciate the similarities, and the differences will become insignificant. The consequence will be a more holistic set of library collections and services.

DuraSpace News: Recordings Available: “VIVO plus SHARE: Closing the Loop on Tracking Scholarly Activity”

planet code4lib - Tue, 2016-03-22 00:00

Austin, TX  DuraSpace launched its 14th Hot Topics Community Webinar Series, “VIVO plus SHARE: Closing the Loop on Tracking Scholarly Activity” last month.  Curated by Rick Johnson, Program Co-Director, Digital Initiatives and Scholarship Head, Data Curation and Digital Library Solutions Hesburgh Libraries, University of Notre Dame and Visiting Program Officer for SHARE at the Association of Research Libraries, this series explored how the effort to link VIVO and SHARE together will bring us closer to a wider picture of today’s scholarship.

M. Ryan Hess: The L Word

planet code4lib - Mon, 2016-03-21 23:42

I’ve been working with my team on a vision document for what we want our future digital library platform to look like. This exercise keeps bringing us back to defining the library of the future. And that means addressing the very use of the term, ‘Library.’

When I first exited my library (and information science) program, I was hired by Adobe Systems to work in a team of other librarians. My manager warned us against using the word ‘Librarian’ among our non-librarian colleagues. I think the gist was: too much baggage there.

So, we used the word ‘Information Specialist.’

Fast forward a few years to my time in an academic environment at DePaul University Library and this topic came up in the context of services the library provided. Faculty and students associated the library in very traditional ways: a quiet, book-filled space. But the way they used the library was changing despite the lag in their semantic understanding.

The space and the virtual tools we put in place online helped users not only find and evaluate information, but also create, organize and share information. A case in point was our adoption of digital publishing tools like Bepress and Omeka, but also the Scholar’s Lab.

I’m seeing a similar contradiction in the public library space. Say library and people think books. Walk into a public library and people do games, meetings, trainings and any number of online tasks.

This disconnect between what the word ‘Library’ evokes in the mind’s eye and what it means in practice is telling. We’ve got a problem with our brand.

In fact, we may need a new word.

Taken literally, a library has been a word for a physical collection of written materials. The Library of Alexandria held scrolls, for example. Even code developers rely on ‘libraries’ today, which are collections of materials. In every case, the emphasis is on the collection of things.

Now, I’m not suggesting that we move away from books. Books are vessels for ideas and libraries will always be about ideas.

In fact, this focus on ideas rather than any one mode of transmitting ideas is key. In today’s libraries, people not only read about ideas, they meet to discuss ideas, they brainstorm ideas.

I don’t pretend to have the magic word. In fact, maybe it’s taking so long for us to drop ‘Library’ because there is no good word in existence. Maybe we need to create a new one.

One tactic that comes to mind as we navigate this terminological evolution is to retain the library, but subsume it inside of something new. I’ve seen this done to various degrees in other libraries. For example, Loyola University in Chicago built an entirely new building adjacent to the book-filled library. Administratively, the building is run by the library, but it is called the Klarchek Information Commons. In that rather marvelous space looking out over Lake Michigan, you’ll find the modern ‘library’ in all its glory. Computers, Collaboration booths, etc. I like this model for fixing our identity problem and I think it would work without throwing the baby out with the bathwater.

However it’s done, one thing is for sure. Our users have moved on from ‘the library’ and are left with no accurate way to describe that place they love to go to when they want to engage with ideas. Let’s put our thinking caps on and put a word on their lips that does justice to what the old library has become. Let’s get past the L Word.


Islandora: Islandora CLAW Lessons - Update!

planet code4lib - Mon, 2016-03-21 13:31

We are three lessons into our series of webinars detailing how to develop in Islandora CLAW, led by CLAW Committer Diego Pino (METRO.org). If you haven't been attending, you've missed out on some great expressions of the CLAW stack via colorful doodles, such as the difference between the Islandora 7.x-1.x hamburger and the Islandora 7.x-2.x lobster chimera.


If that doesn't make any sense to you, then good news! You can catch up on what you've missed by viewing the recorded sessions:

Week One: Intro to Fedora 4.x

Week Two: Hands-on Creating Fedora 4.x Resources

Week Three: Data Flow in the CLAW

Once you have caught up, why not join us for the rest of the lessons in real time? They will continue on for another six weeks, every Tuesday at 11AM Eastern time, on Adobe Connect. Here's what's in store:

General Outline:
  • Basic Notions of Fedora 4 (sessions 1 and 2)
    • How Fedora 4 Works - General Intro and differences between Fedora 3 and 4
      • RDF instead of XML
      • Fedora 4 REST API
  • Introduction to CLAW
    • How Data Flows (session 3)
    • Sync Gateway (how to trigger the sync) (session 3)
      • Basics of Camel (session 4)
    • Adding/creating new content type (session 5 - 6)
    • PHP Microservices (session 7 - 8)
      • Intro/Overview
      • Basics of Silex
      • Dissecting a Service
      • Interacting with Fedora via Microservices
  • How to Join a Sprint (session 9)

With thanks to our hosts.


Access Conference: Discounts and Scholarships 2016

planet code4lib - Mon, 2016-03-21 05:23

We all know that Access is one of the best deals around for a tech conference. There are great speakers, great activities and great food, all for one amazingly low price. You get the hackfest, two and a half days of our single-stream conference and the workshop for one great price of $450 Canadian (I know, right?).

The keeners and well-organized won’t miss out on the Early Bird tickets, which should hit store shelves in June for an incredible $350. That translates into less than $100 per day of awesomeness and we feed you. Shut the front door!

If you are working on an even tighter budget, there are still some options for you:

  1. Students – if you are a full-time student and trying to save for Access, we are also making available 25 deeply discounted tickets just for you at the rock-bottom price of $200.
  2. Be a Presenter – submit a proposal, rock our world and we’ll hook you up for $300. That’s a 33.33333% savings for sharing your awesome project, idea or words of wisdom with your peers. What can go wrong?

Finally, if attending Access is still a stretch for your budget, we will once again have two Diversity Scholarships available. To qualify, you need to be from a “traditionally underrepresented and/or marginalized group,” be unable to attend the conference without some financial assistance and must not have received a scholarship to attend either of the previous two conferences. Meet the criteria and you’ll be eligible for a draw for one of our $1000 Diversity Scholarships to help you attend the conference.

We hope to see you in Fredericton.

Jonathan Rochkind: “Apple Encryption Engineers, if Ordered to Unlock iPhone, Might Resist”

planet code4lib - Mon, 2016-03-21 03:33

From the NYTimes, “Apple Encryption Engineers, if Ordered to Unlock iPhone, Might Resist”:

SAN FRANCISCO — If the F.B.I. wins its court fight to force Apple’s help in unlocking an iPhone, the agency may run into yet another roadblock: Apple’s engineers.

Apple employees are already discussing what they will do if ordered to help law enforcement authorities. Some say they may balk at the work, while others may even quit their high-paying jobs rather than undermine the security of the software they have already created, according to more than a half-dozen current and former Apple employees.

Do software engineers have professional ethical responsibilities to refuse to do some things even if ordered by their employers?



DuraSpace News: NOW AVAILABLE: DSpace 5.5 With Security Fixes/Bug Fixes to 5.x

planet code4lib - Mon, 2016-03-21 00:00

From Tim Donohue, DSpace Tech Lead on behalf of the DSpace developers

Austin, TX  DSpace 5.5 is now available providing security fixes to both the XMLUI and JSPUI, along with bug fixes to the DSpace 5.x platform.

DuraSpace News: VIVO Updates for March 20–User Group Meeting, Summit Recap++

planet code4lib - Mon, 2016-03-21 00:00

From Mike Conlon, VIVO Project Director
