
Feed aggregator

Open Library Data Additions: Miami University of Ohio MARC

planet code4lib - Sat, 2016-03-26 08:08

MARC records from the Miami University of Ohio.

This item belongs to: data/ol_data.

This item has files of the following types: Data, Data, Metadata

Mita Williams: Knight News Challenge: Library Starter Deck: a 21st-century game engine and design studio for libraries

planet code4lib - Sat, 2016-03-26 02:38
The Library Starter Deck from FutureCoast on Vimeo.

Last week, Ken Eklund and I submitted our proposal for the 2016 Knight News Challenge, which asks, "How might libraries serve 21st century information needs?"

Our answer is this: The Library Starter Deck: a 21st-century game engine and design studio for libraries. We also have shared a brief on some of the inspirations behind our proposal (pdf).


Two years ago I reviewed the 680+ applications to the 2014 Knight News Challenge for Libraries and shared some of my favourites. It was, and still is, a very useful exercise because there are not many opportunities to read grant applications (if you are not the one handing out the grant), and this particular set offers applications from both professionals and members of the public.

You can also review the entries as an act of finding signals of the future, as the IFTF might put it. That's what I've chosen to do for this year's review. This means I've chosen not to highlight the applications I think are the best or most deserving to win (that's up to these good people); instead, I made note of the applications that, for lack of a better word, surprised me:


I'd like to add that there are many other deserving submissions I have given a 'heart' to on the Knight News Challenge website, and if you are able to, I'd encourage you to do the same.

FOSS4Lib Recent Releases: Koha - 3.22.5

planet code4lib - Fri, 2016-03-25 20:59
Package: Koha
Release Date: Wednesday, March 23, 2016

Last updated March 25, 2016. Created by David Nind on March 25, 2016.

Koha 3.22.5 is a security and maintenance release. It includes one security fix and 63 bug fixes (this includes enhancements, as well as fixes for problems).

As this is a security release, we strongly recommend that anyone running Koha 3.22.* upgrade as soon as possible.

See the release announcement for the details:

Open Library Data Additions: MIT Barton Catalog MODS

planet code4lib - Fri, 2016-03-25 20:53

Catalog records from MIT's Barton Catalog in MODS format. Downloaded from http://simile.mit.edu/rdf-test-data/barton/.

This item belongs to: data/ol_data.

This item has files of the following types: Data, Data, Metadata

NYPL Labs: Introducing the Photographers’ Identities Catalog

planet code4lib - Fri, 2016-03-25 18:27

Today the New York Public Library is pleased to announce the launch of Photographers’ Identities Catalog (PIC), a collection of biographical data for over 115,000 photographers, studios, manufacturers, dealers, and others involved in the production of photographs. PIC is world-wide in scope and spans the entire history of photography. So if you’re a historian, student, archivist, cataloger or genealogist, we hope you’ll make it a first stop for your research. And if you’re into data and maps, you’re in luck, too: all of the data and code are free to take and use as you wish.

Each entry has a name, nationality, dates, relevant locations and the sources from which we’ve gotten the information—so you can double check our work, or perhaps find more information that we don’t include. Also, you might find genders, photo processes and formats they used, even collections known to have their work. It’s a lot of information for you to query or filter, delimit by dates, or zoom in and explore on the map. And you can share or export your results.

Blanche Bates. Image ID: 78659

How might PIC be useful for you? Well, here’s one simple way we make use of it in the Photography Collection: dating photographs. NYPL has a handful of cabinet card portraits of the actress Blanche Bates, but they are either undated or have a very wide range of dates given.

The photographer’s name and address are given: the Klein & Guttenstein studio at 164 Wisconsin Street, Milwaukee. Search by the studio name, and select them from the list. In the locations tab you’ll find them at that address for only one year before they moved down the street; so, our photos were taken in 1899. You could even get clever and see if you can find out the identities of the two partners in the studio (hint: try using the In Map Area option).

But there’s much more to explore with PIC: you can find female photographers with studios in particular countries, learn about the world’s earliest photographers, and find photographers in the most unlikely places…

Often PIC has a lot of information or can point you to sources that do, but there may be errors or missing information. If you have suggestions or corrections, let us know through the Feedback form. If you’re a museum, library, historical society or other public collection and would like to let us know what photographers you’ve got, talk to us. If you’re a scholar or historian with names and locations of photographers and studios—particularly in under-represented areas—we’d love to hear from you, too!

 

Open Library Data Additions: Amazon Crawl: part hf

planet code4lib - Fri, 2016-03-25 08:33

Part hf of Amazon crawl.

This item belongs to: data/ol_data.

This item has files of the following types: Data, Data, Metadata, Text

FOSS4Lib Upcoming Events: DC Fedora Users Group

planet code4lib - Thu, 2016-03-24 19:07
Date: Wednesday, April 27, 2016 - 08:00 to Thursday, April 28, 2016 - 17:00
Supports: Fedora Repository, Hydra

Last updated March 24, 2016. Created by Peter Murray on March 24, 2016.

Our next DC Fedora Users Group meeting will be held on April 27 + 28 at the National Library of Medicine.

Registration

Please register in advance (registration is free) by completing this brief form:
https://docs.google.com/forms/d/1TAvx6n2GaOSwHPy4SsCE4qZD75aUcz4eooGfpsL...

As indicated on the form, we are also looking for sponsors for snacks - this could be for one or both days.

Schedule

DPLA: Announcing the 2016 DPLA+DLF “Cross-Pollinator” grant awardees

planet code4lib - Thu, 2016-03-24 17:05

We are pleased to announce the recipients of the 2016 DPLA + DLF Cross-Pollinator Travel Grants, three individuals from DLF member organizations who will be attending DPLAfest 2016 April 14-15 in Washington, D.C.

The DPLA + DLF Cross-Pollinator Travel Grants are part of a broader vision for partnership between the Digital Library Federation (DLF) and the Digital Public Library of America. It is our belief that robust community support is key to the sustainability of large-scale national efforts. Connecting the energetic and talented DLF community with the work of the DPLA is a positive way to increase serendipitous collaboration around this shared digital platform. The goal of this program is to bring “cross-pollinators” to DPLAfest— DLF community contributors who can provide unique personal perspectives, help to deepen connections between our organizations, and bring DLF community insight to exciting areas of growth and opportunity at DPLA.

Meet the 2016 DPLA + DLF Cross-Pollinators

Jasmine Burns
Image Technologies and Visual Literacy Librarian and Interim Head, Fine Arts Library
Indiana University Bloomington

Twitter: @jazz_with_jazz

Jasmine Burns’ primary duties are to manage and curate the libraries’ multimedia image collections for teaching and research in the fine arts, including studio, art history, apparel merchandising, and fashion design. She holds an MLIS from the University of Wisconsin-Milwaukee with a concentration in Archive Studies, and an MA in Art History from Binghamton University. She has worked previously as an assistant curator of a slide library, a museum educator, a junior fellow at the Library of Congress, and as a digitization assistant for a university archives.

Burns writes:

As a new emerging professional, one major limitation that I face is that I have yet to build a strong foundation in organizations outside of those few that guide my daily work… Attending DPLAfest would offer an alternate conference experience that would enhance my understanding of the field of digital cultural heritage, and introduce me to how individuals within this and other allied fields are approaching similar issues in collections building and support, teaching with a variety of visual materials, and the presentation and preservation of digital images. My participation in DPLAfest would give me broad ideas on how to expand the scope of my projects in a way that addresses a larger community, instead of limiting my sphere to art and art history. My ultimate professional goal is to explore, create, and enhance open access image collections through digital platforms. My DLF colleagues provide me with guidance for the technical and data management aspects of managing digital image collections, while DPLAfest would expose me to the nuances of managing and curating the content within such collections.

Nancy Moussa
Web Developer, University of Michigan Library
DPLA Community Rep

In her role at University of Michigan, Nancy Moussa has worked on various projects including Islamic Manuscripts, Omeka online exhibits, and with other open source platforms such as Drupal and WordPress. Her background is in information science, with a B.S. in Computer Science from American University in Cairo, an MMath in Computer Science from University of Waterloo, Canada, and an MSI in Human Computer Interaction from School of Information at University of Michigan, Ann Arbor. She is also a member of DPLA’s newest class of community reps.

Moussa writes:

In the past three years my focus has been on customizing and building plugins for Omeka… I would like to research and investigate the DPLA API to understand how to integrate open source platforms with DLPA resources and digital objects. My second interest is to understand how DPLA’s growing contents can benefit teachers in schools, librarians, researchers and students. I hope there is more collaboration between DPLA and DLF. It is a very important step. The collaboration will reveal more incredible digital works that are contributed by DLF members. I am envisioning that DLF members (institutions) will have more opportunities to access digital works provided by other members through the DPLA portal & DPLA API /Apps. Therefore, I am looking forward to attending DPLAfest to increase my understanding and to network with other DPLA representatives and DPLA community in general.

 

T-Kay Sangwand
Librarian for Digital Collection Development
Digital Library Program, UCLA

Twitter: @tttkay

Prior to her current position at UCLA, T-Kay Sangwand served as the Human Rights Archivist and Librarian for Brazilian Studies at University of Texas at Austin. In 2015, she was named one of Library Journal’s “Movers and Shakers” in the Advocate category for her collaborative work with human rights organizations through the UT Libraries Human Rights Documentation Initiative. She is currently a Certified Archivist and completed the Archives Leadership Institute in 2013. Sangwand holds an MLIS and MA degree in Latin American Studies from UCLA with specializations in Archives, Spanish and Portuguese and a BA in Gender Studies and Latin American Studies from Scripps College.

Sangwand writes:

As an information professional that is committed to building a representative historical record that celebrates the existence and contributions of marginalized groups (i.e. people of color, women, queer folks), I am particularly excited about the possibility of attending DPLAfest and learning about how the DPLA platform can be leveraged in pursuit of this more representative historical record…. While UCLA is not yet a contributor to DPLA, this is something we are working towards and a process I am looking forward to being a part of as my current position focuses on digital access for a wide cross-section of materials from Chicano Studies, Gender Studies, UCLA Oral History Center and more. Since DLF explicitly describes itself as a “robust community of practice advancing research, learning, social justice & the public good [my emphasis],” I am hopeful that DLF community members, including UCLA, can form a critical mass around building out a representative and diverse historical record in support of the values espoused by DLF.

 

Congratulations to all — we look forward to meeting you at DPLAfest!

David Rosenthal: Long Tien Nguyen & Alan Kay's "Cuneiform" System

planet code4lib - Thu, 2016-03-24 15:00
Jason Scott points me to Long Tien Nguyen and Alan Kay's paper from last October entitled The Cuneiform Tablets of 2015. It describes what is in effect a better implementation of Raymond Lorie's Universal Virtual Computer. They attribute the failure of the UVC to its complexity:
They tried to make the most general virtual machine they could think of, one that could easily emulate all known real computer architectures easily. The resulting design has a segmented memory model, bit-addressable memory, and an unlimited number of registers of unlimited bit length. This Universal Virtual Computer requires several dozen pages to be completely specified and explained, and requires far more than an afternoon (probably several weeks) to be completely implemented.

They are correct that the UVC was too complicated, but the reasons why it was a failure are far more fundamental and, alas, apply equally to Chifir, the much simpler virtual machine they describe. Below the fold, I set out these reasons.

The reasons are strongly related to the reason why the regular announcements of new quasi-immortal media have had almost no effect on practical digital preservation. And, in fact, the paper starts by assuming the availability of a quasi-immortal medium in the form of a Rosetta Disk. So we already know that each preserved artefact they create will be extremely expensive.

Investing money and effort now in things that only pay back in the far distant future is simply not going to happen on any scale because the economics don't work. So at best you can send an insignificant amount of stuff on its journey to the future. By far the most important reason digital artefacts, including software, fail to reach future scholars is that no-one could afford to preserve them. Suggesting an approach whose costs are large and totally front-loaded implicitly condemns a vastly larger amount of content of all forms to oblivion because the assumption of unlimited funds is untenable.

It's optimistic, to say the least, to think you can solve all the problems that will happen to stuff in the next, say, 1000 years in one fell swoop - you have no idea what the important problems are. The Cuneiform approach assumes that the problems are (a) long-lived media and (b) the ability to recreate an emulator from scratch. These are problems, but there are many other problems we can already see facing the survival of software over the next millennium. And it's very unlikely that we know all of them, or that our assessment of their relative importance is correct.

Preservation is a continuous process, not a one-off thing. Getting as much as you can to survive the next few decades is do-able - we have a pretty good idea what the problems are and how to solve them. At the end of that time, technology will (we expect) be better and cheaper, and we will understand the problems of the next few decades. The search for a one-time solution is a distraction from the real, continuing task of preserving our digital heritage.

And I have to say that analogizing a system designed for careful preservation of limited amounts of information for the very long term with cuneiform tablets is misguided. The tablets were not designed or used to send information to the far future. They were the equivalent of paper, a medium for recording information that was useful in the near future, such as for accounting and recounting stories. Although:
Between half a million and two million cuneiform tablets are estimated to have been excavated in modern times, of which only approximately 30,000 – 100,000 have been read or published.

The probability that an individual tablet would have survived to be excavated in the modern era is extremely low. A million or so survived, many millions more didn't. The authors surely didn't intend to propose a technique for getting information to the far future with such a low probability of success.

Open Knowledge Foundation: Open Data Day 2016 Birmingham, UK

planet code4lib - Thu, 2016-03-24 11:34

This blogpost was written by Pauline Roche, MD of the voluntary sector infrastructure support agency RnR Organisation, co-organiser of Open Mercia, co-Chair of the West Midlands Open Data Forum, steering group member of the Open Data Institute (ODI) Birmingham node, and founder of Data in Brum

20 open data aficionados from sectors as diverse as big business, small and medium enterprises, and higher education, including volunteers and freelancers, gathered in Birmingham, UK on Friday, March 4th to share our enthusiasm for and knowledge of open data in our particular fields, to meet and network with each other, and to plan future activities around open data in the West Midlands. We met on the day before Open Data Day 2016 to accommodate most people’s schedules.

The event was organised by Open Mercia colleagues Pauline Roche and Andrew Mackenzie, and hosted at ODI Birmingham by Hugo Russell, Project Manager at Innovation Birmingham. The half-day event formally started with introductions, a brief introduction to the new ODI Birmingham node, and a livestream of the weekly ODI Friday lecture, ‘Being a Data Magpie’. In the lecture, ODI Senior Consultant Leigh Dodds explained how to find small pieces of data that are shared – whether deliberately or accidentally – in our cities. Delegates were enthralled with Leigh’s stories about data on bins, bridges, lamp posts and trains.

We then moved on to lightning talks about open data with reference to various subjects: highways (Teresa Jolley), transport (Stuart Harrison), small charities (Pauline Roche), mapping (Tom Forth), CartoDB (Stuart Lester), SharpCloud (Hugo Russell) and air quality (Andrew Mackenzie). These talks were interspersed with food and comfort breaks to encourage the informality which tends to generate the sharing and collaboration which we were aiming to achieve.

During the talks, more formal discussion focused on Birmingham’s planned Big Data Corridor, incorporating real-time bus information from the regional transport authority Centro, and including community engagement through the east of Birmingham to validate pre/post contract completion, for example in road works and traffic management changes. Other discussion focussed on asset condition data, Open Contracting, and visualisation for better decisions.

Teresa Jolley’s talk (delivered via Skype from London) showed that the 120 local authorities (LAs) in England alone are responsible for 98% of the road network but have only 20% of the budget; each LA gets £30m but actually needs £93m to bring the network back to full maintenance. The talk highlighted the need for more collaboration, improved procurement, new sources of income, and data on asset condition, which is available in a variety of places, including in people’s heads! The available research data is not open, which is a barrier to collaboration. Delegates concluded from Teresa’s talk that opening the contracts between private and public companies is the main challenge.

Stuart Harrison, ODI Software Superhero, talked about integrated data visualisation and decision making, showing us the London Underground: Train Data Demonstrator. He talked about visualisation for better decisions on train capacity and using station heat maps to identify density of use.

Pauline Roche, MD of the voluntary sector infrastructure support agency RnR Organisation, shared the Small Charities Coalition’s definition of a small charity (annual income of less than £1m) and explained that under this definition, 97% of the UK’s 164,000 charities are small. In the West Midlands region alone, the latest figures show 20,000 local groups (not all are charities), 34,000 FTE paid staff, 480,000 volunteers and an annual £1.4bn turnover.

Small charities could leverage their impact through the use of open data to demonstrate transparency, better target their resources, carry out gap analysis (for example, NGOs in Nepal found that opening and sharing their data reduced duplication amongst other NGOs in the country) and measure impact. One small charity which Pauline worked with on a project to open housing data produced a comprehensive open data “wishlist” including data on health, crime and education. Small charities need active support from the council and other data holders to get the data out.

Tom Forth from the ODI Leeds node showed delegates how he uses open data for mapping, with lots of fun demonstrations. Pauline shared some of Tom’s specific mapped data on ethnicity with two relevant charities and we look forward to examining that data more closely in the future. It was great to have a lighter, though no less important, view of what can often be seen as a very serious subject. Tom invited delegates to the upcoming Floodhack at ODI Leeds the following weekend. He also offered to run another mapping event the following week for some students present, with more assistance proffered by another delegate, Mike Cummins.

Stuart Lester of Digital Birmingham gave an introduction to CartoDB and reminded delegates of the Birmingham Data Factory, where various datasets are available under an open license.

The second-to-last talk of the day was a demonstration of SharpCloud from Hugo Russell, who described using this and other visualisation tools, such as Kumu, to tell a story and spot issues and relationships.

Finally, Andrew Mackenzie presented on air quality and gave some pollution headlines, relating his presentation topically to the LEP, Centro and HS2. He said that some information, while public, is not yet published as data, but it can be converted. There were some questions about the position of the monitoring stations and a possible project: “What is the quality of the air near me/a location?”. Andrew said it currently costs £72,000 to build an air quality monitoring station and gave some examples of work in the field, e.g. http://www.treehugger.com/clean-technology/environmental-sensors.html, http://airpi.es/ and Smart Citizen. He also mentioned the local organisation Birmingham Friends of the Earth and a friendly data scientist, Dr Andy Pryke. One of the delegates tweeted a fascinating visualisation of air pollution data.

Summary

Our diverse audience represented many networks and organisations: two of the Open Data Institute nodes (Birmingham and Leeds), West Midlands Open Data Forum, Open Mercia, Open Data Camp, Birmingham Insight, Hacks and Hackers (Birmingham), Brum by Numbers and Data in Brum. Our primary themes were transport and social benefit, and we learned about useful visualisation tools like CartoDB, SharpCloud and Kumu. The potential markets we explored included an open commercialisation model linked to the Job Centre, a collaboration in which a business could work with a transport authority and an ODI node to access Job Centres of applicable government departments on a revenue share, and an air quality project.

Future events information shared included the unconference Open Data Camp 3 in Bristol, 14-15 May (next ticket release 19 March), an Open Government Partnership meeting on 7 April at Impact Hub Birmingham, a mapping workshop with Tom Forth (date TBC), and offers of future events: CartoDB with Stuart Lester (½ day), OpenStreetMap with Andy Mabbett (½ day) and WikiData with Andy Mabbett (½ day). Pauline also compiled a post-event Storify: https://storify.com/RnROrganisation/open-data-day-2016-birmingham-uk

LITA: Jobs in Information Technology: March 23, 2016

planet code4lib - Thu, 2016-03-24 00:57

New vacancy listings are posted weekly on Wednesday at approximately 12 noon Central Time. They appear under New This Week and under the appropriate regional listing. Postings remain on the LITA Job Site for a minimum of four weeks.

New This Week:

Yale University, Senior Systems Librarian / Technical Lead, ID 36160BR, New Haven, CT

Misericordia University, University Archivist and Special Collections Librarian, Dallas, PA

University of Arkansas, Accessioning and Processing Archivist, Fayetteville, AR

University of the Pacific, Information and Educational Technology Services (IETS) Director, Stockton, CA

Visit the LITA Job Site for more available jobs and for information on submitting a job posting.

Open Library Data Additions: University of Michigan PD Scan Records

planet code4lib - Wed, 2016-03-23 22:09

Records retrieved from the OAI interface to the University of Michigan's collection of scanned public domain books. Crawl done on 2007-01-11 at http://quod.lib.umich.edu/cgi/o/oai/oai.

This item belongs to: data/ol_data.

This item has files of the following types: Data, Data, Metadata

SearchHub: Secure Fusion: Leveraging LDAP

planet code4lib - Wed, 2016-03-23 20:37

This is the third in a series of articles on securing your data in Lucidworks Fusion. Secure Fusion: SSL Configuration covers transport layer security and Secure Fusion: Authentication and Authorization covers general application-level security mechanisms in Fusion. This article shows you how Fusion can be configured to use an LDAP server for authentication and authorization.

Before discussing how to configure Fusion for LDAP, it’s important to understand when and why to do this. Given that Fusion’s native security realm can manage authentication and passwords directly, why bother to use LDAP? And conversely, if you can use LDAP for authentication and authorization, why not always use LDAP?

The answer to the latter question is that Fusion’s native security realm is necessary to bootstrap Fusion. Because all requests to Fusion require authentication and authorization, you must start building a Fusion application by first logging in as the native user named “admin”. Built-in authentication provides a fallback mechanism in case of LDAP server or communication failure.

Why use LDAP? Using LDAP simplifies the task of user administration. Individual user accounts are managed directly by LDAP. Access to services and data is managed by mapping LDAP users and groups to Fusion roles and permissions.

A common use case for an LDAP security realm is search over a collection of documents with ACLs that restrict access to specific users or groups, e.g. indexing an MS SharePoint repository managed by Active Directory. To make sure that search respects the access permissions on these documents, you must index the access permissions as well as the document contents. At query time, the user account information is sent along with the search query and Fusion restricts the search results set to only those documents that the user is allowed to access.

LDAP for Noobs

If you understand LDAP and are comfortable configuring LDAP-based systems, you can skip this section and go to section Fusion Configuration.

The LDAP protocol is used to share information about users, systems, networks, and services between servers on the internet. LDAP servers are used as a central store for usernames, passwords, and user and group permissions. Applications and services use the LDAP protocol to send user login and password information to the LDAP server. The server performs name lookup and password validation. LDAP servers also store Access Control Lists (ACLs) for file and directory objects which specify the users and groups and kinds of access allowed for those objects.

LDAP is an open standard protocol and there are many commercial and open-source LDAP servers available. Microsoft environments generally use Active Directory. *nix servers use AD or other LDAP systems such as OpenLDAP, although many *nix systems don’t use LDAP at all. To configure Fusion for LDAP, you’ll need to get information about the LDAP server(s) running on your system either from your sysadmin or via system utilities.

Directories and Distinguished Names

An LDAP information store is a Directory Information Tree (DIT). The tree is composed of entry nodes; each node has a single parent and zero or more child nodes. Every node must have at least one attribute that uniquely distinguishes it from its siblings; this attribute is used as the node's Relative Distinguished Name (RDN). A node's Distinguished Name (DN) is a globally unique identifier.

The string representation of a DN is specified in RFC 4514. It consists of the node's RDN, followed by a comma, followed by the parent node's DN. The string representation of an RDN is the attribute name and value connected by an equals sign ("="). This recursive definition means that the DN of a node is composed by working from the node back through its parent and ancestor nodes up to the root node.

Here is a small example of a DIT:

The person entry in this tree has the DN: “uid=babs, ou=people, dc=example, dc=com”.
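To make the recursive composition concrete, here is a minimal sketch in plain Python (illustrative only, not part of LDAP or Fusion) that builds the DN above from its RDNs, working from the node back up to the root:

    # RDNs ordered from the node itself up to the root of the tree.
    rdns = [("uid", "babs"), ("ou", "people"), ("dc", "example"), ("dc", "com")]

    # Per RFC 4514, each RDN is written as attribute=value and the pieces
    # are joined with commas, node first, root last.
    dn = ",".join("{}={}".format(attr, value) for attr, value in rdns)

    print(dn)  # uid=babs,ou=people,dc=example,dc=com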

Attribute names include many short strings based on English words and abbreviations, e.g.:

Name   Description
cn     commonName
dc     domainComponent
mail   email address
ou     organizationalUnitName
sn     surname
uid    userId

LDAP entry attributes can refer to other LDAP entries by using the DN of the entry as the value of that attribute. The following example of a directory which contains user and group information shows how this works:

This tree contains two organizational units: “ou=people” and “ou=groups”. The children of the “groups” organizational unit are specific named groups, just as the child nodes of the “people” organizational unit are specific users. There are three user entries with RDNs “uid=bob”, “uid=alice”, and “uid=bill”, and two groups with RDNs “cn=user” and “cn=admin”. The dotted lines and group labels around the person nodes indicate group membership. This relationship is declared on the group nodes by adding an attribute named “member” whose value is a user’s DN. In the LDAP Data Interchange Format (LDIF), this is written:

cn=user,ou=groups,dc=acme,dc=org
member: uid=bob,ou=people,dc=acme,dc=org
member: uid=alice,ou=people,dc=acme,dc=org

cn=admin,ou=groups,dc=acme,dc=org
member: uid=bill,ou=people,dc=acme,dc=org

See Wikipedia’s LDAP entry for details.

LDAP Protocol Operations

For authentication purposes, Fusion sends Bind operation requests to the LDAP server. The Bind operation authenticates clients (and the users or applications behind them) to the directory server, establishes the authorization identity used for subsequent operations on that connection, and specifies the LDAP protocol version that the client will use.

Depending on the way that the host system uses LDAP to store login information about users and groups, it may be necessary to send Search operation requests to the LDAP server as well. The Search operation retrieves partial or complete copies of entries matching a given set of criteria.

LDAP filters specify which entries should be returned. These are specified using prefix notation. Boolean operators are “&” for logical AND, “|” for logical OR, e.g., “A AND B” is written “(&(A)(B))”. To tune and test search filters for a *nix-based LDAP system, see the ldapsearch command line utility documentation. For Active Directory systems, see AD Syntax Filters.
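To give a concrete sense of what a filtered Search looks like from a client's point of view, here is a minimal sketch using the open-source Python ldap3 library (one of several LDAP client libraries; Fusion does not use this code), run against the hypothetical acme.org directory from the examples above. The bind DN and password are placeholders:

    from ldap3 import Server, Connection, SUBTREE

    # Connect to the example directory server over SSL.
    server = Server("ldap.acme.org", port=636, use_ssl=True)
    conn = Connection(server,
                      user="uid=bill,ou=people,dc=acme,dc=org",
                      password="not-a-real-password")
    conn.bind()

    # Prefix-notation filter: entries that are people AND have uid=bob.
    conn.search(search_base="ou=people,dc=acme,dc=org",
                search_filter="(&(objectClass=person)(uid=bob))",
                search_scope=SUBTREE,
                attributes=["cn", "mail"])

    for entry in conn.entries:
        print(entry)

    conn.unbind()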

 

Fusion Configuration for an LDAP Realm

To configure Fusion for LDAP, you’ll need to get information about the LDAP server(s) running on your system, either from your system or your sysadmin.

To configure an LDAP realm from the Fusion UI, you must be logged in as a user with admin-level privileges. From the “Applications” menu, menu item “Access Control”, panel “Security Realms”, click on the “Add Security Realm” button:

This opens an editor panel for a new Security Realm, containing controls and inputs for all required and optional configuration information.

Required Configuration Step One: Name and Type

The first step in setting up an LDAP security realm is filling out the required information at the top of the realm config panel:

The first three required configuration items are:

  • name – must be unique, should be descriptive yet short
  • type – choice of “LDAP” or “Kerberos”
  • “enabled” checkbox – default is true (i.e., the box is checked). The “enabled” setting controls whether or not Fusion allows user logins for this security realm.

Required Configuration Step Two: Server and Port

The name and port of the LDAP server are required, along with whether or not the server is running over SSL. In this example, I’m configuring a hypothetical LDAP server for company “Acme.org”, running a server named “ldap.acme.org” over SSL, on port 636:

Required Configuration Step Three: Authentication Method and DN Templates

Next, you must specify the authentication method. There are three choices:

  • Bind – the LDAP authentication operation is carried out via a single “Bind” operation.
  • Search – LDAP authentication is carried out indirectly via a Search operation followed by a Bind operation.
  • Kerberos – Kerberos authenticates Fusion and an LDAP Search operation is carried out to find group-level authorizations.

The Bind authentication method is used when the Fusion login username matches a part of the LDAP DN. The rest of the LDAP DN is specified in the “DN Template” configuration entry, which uses a single pair of curly brackets (“{}”) as a placeholder for the value of the Fusion username.
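As an illustration of that substitution (plain Python, not Fusion code), a DN Template and a Fusion username combine into the DN used for the Bind request like this; "alice" is one of the hypothetical acme.org users from the earlier examples:

    # DN Template from the realm configuration, with "{}" as the placeholder.
    dn_template = "uid={},ou=people,dc=acme,dc=org"
    fusion_username = "alice"

    # The username is dropped into the placeholder to form the bind DN.
    bind_dn = dn_template.replace("{}", fusion_username)
    print(bind_dn)  # uid=alice,ou=people,dc=acme,dc=org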

The Search authentication method is used when the username used for Fusion login doesn’t match a part of the LDAP DN. The search request returns a valid user DN, which is used together with the user password for authentication via a Bind request.

The Search authentication method is generally required when working with Microsoft Active Directory servers. In this case, you need to know the username and password of some user who has sufficient privileges to query the LDAP server for user and group memberships; this user doesn’t have to be the superuser. In addition to a privileged user DN and password, the Search authentication method requires crafting a search request. There are two parts to the request: the first part is the base DN of the LDAP directory tree which contains user account objects. The second part of the request is a Search Filter object which restricts the results to a matching subset of the information.
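The overall search-then-bind flow looks roughly like the following sketch, again using the Python ldap3 library and hypothetical acme.org values (Fusion performs the equivalent steps internally; the attribute used to match the login name varies by directory, e.g. Active Directory installations commonly match on sAMAccountName rather than uid):

    from ldap3 import Server, Connection, SUBTREE

    server = Server("ldap.acme.org", port=636, use_ssl=True)

    # 1. Bind as the privileged (but not necessarily superuser) search account.
    search_conn = Connection(server,
                             user="uid=bill,ou=people,dc=acme,dc=org",
                             password="search-account-password")
    search_conn.bind()

    # 2. Search for the DN of the user who is logging in to Fusion.
    search_conn.search(search_base="ou=people,dc=acme,dc=org",
                       search_filter="(uid=alice)",
                       search_scope=SUBTREE)
    user_dn = search_conn.entries[0].entry_dn

    # 3. Bind again as that user with the password supplied at login;
    #    a successful bind means the user is authenticated.
    user_conn = Connection(server, user=user_dn, password="password-from-login")
    authenticated = user_conn.bind()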

As a simple example, I configure Fusion for acme.org’s Linux-based LDAP server via the Bind authentication method:

In the LDAP directory example for organization “acme.org” above, the DNs for the three nodes in the “people” organizational unit are: “uid=bob,ou=people,dc=acme,dc=org”, “uid=alice,ou=people,dc=acme,dc=org”, and “uid=bill,ou=people,dc=acme,dc=org”. This corresponds to the DN Template string:

uid={},ou=people,dc=acme,dc=org

Testing the Configured Connection

The last part of the form allows you to test the LDAP realm config using a valid username and password:

When the “Update and test settings” button is clicked, the username from the form is turned into a DN according to the DN template, and a Bind operation request is sent to the configured LDAP server. Fusion reports whether or not authentication was successful:

Optional Configuration: Roles and Groups Mappings

A Fusion role is a bundle of permissions tailored to the access needs of different kinds of users. Access to services and data for LDAP-managed users is controlled by mappings from LDAP users and groups to Fusion roles.

Roles can be assigned globally or restricted to specific LDAP groups. The security realm configuration panel contains a list of all Fusion roles with a checkbox for each, used to assign that role to all users in that realm. LDAP group names can be mapped directly to specific Fusion roles and LDAP group search and filter queries can also be used to map kinds of LDAP users to specific Fusion roles.

Putting It All Together

To see how this works, while logged in as the Fusion native realm admin user, I edit the LDAP security realm named “test-LDAP” so that all users from this realm have admin privileges:

At this point my Fusion instance contains two users:

I log out as admin user:

Now I log in using the “test-LDAP” realm:

Because all users from “test-LDAP” realm have admin privileges, I’m able to use the Access Controls application to see all system users. Checking the USERS panel again, I see that there’s now a new entry for username “mitzi.morris”:

The listing for username “mitzi.morris” in the USERS panel doesn’t show roles, API or UI permissions because this information isn’t stored in Fusion’s internal ZooKeeper. The only information stored in Fusion is the username, realm, and uuid. Permissions are managed by LDAP. When the user logs in, Fusion’s LDAP realm config assigns roles according to the user’s current LDAP status.

Fusion manages all of your Solr data. Fusion’s security mechanisms ensure that your users see all of their data and only their data, no more, no less. This post shows you how Fusion can be configured to use an external LDAP server for authentication and how to map user and group memberships to Fusion permissions and roles. Future posts in this series will show how to configure Fusion datasources so that document-level permission sets (ACLs) are indexed, and how to configure search pipelines so that the results set contains only those documents that the user is authorized to see.

The post Secure Fusion: Leveraging LDAP appeared first on Lucidworks.com.

ACRL TechConnect: Evaluating Whether You Should Move Your Library Site to Drupal 8

planet code4lib - Wed, 2016-03-23 16:00

After much hard work over years by the Drupal community, Drupal users rejoiced when Drupal 8 came out late last year. The system has been completely rewritten and does a lot of great stuff–but can it do what we need Drupal websites to do for libraries?  The quick answer seems to be that it’s not quite ready, but depending on your needs it might be worth a look.

For those who aren’t familiar with Drupal, it’s a content management system designed to manage complex sites with multiple types of content, users, features, and appearances.  Certain “core” features are available to everyone out of the box, but even more useful are the “modules”, which extend the features to do all kinds of things from the mundane but essential backup of a site to a flashy carousel slider. However, the modules are created by individuals or companies and contributed back to the community, and thus when Drupal makes a major version change they need to be rewritten, quite drastically in the case of Drupal 8. That means that right now we are in a period where developers may or may not be redoing their modules, or they may be rethinking about how a certain task should be done in the future. Because most of these developers are doing this work as volunteers, it’s not reasonable to expect that they will complete the work on your timeline. The expectation is that if a feature is really important to you, then you’ll work on development to make it happen. That is, of course, easier said than done for people who barely have enough time to do the basic web development asked of them, much less complex programming or learning a new system top to bottom, so most of us are stuck waiting or figuring out our own solutions.

Despite my knowledge of the reality of how Drupal works, I was very excited at the prospect of getting into Drupal 8 and learning all the new features. I installed it right away and started poking around, but realized pretty quickly I was going to have to do a complete evaluation for whether it was actually practical to use it for my library’s website. Our website has been on Drupal 7 since 2012, and works pretty well, though it does need a new theme to bring it into line with 2016 design and accessibility standards. Ideally, however, we could be doing even more with the site, such as providing better discovery for our digital special collections and making the site information more semantic web friendly. It was those latter, more advanced, feature desires that made me really wish to use Drupal 8, which includes semantic HTML5 integration and schema.org markup, as well as better integration with other tools and libraries. But the question remains–would it really be practical to work on migrating the site immediately, or would it make more sense to spend some development time on improving the Drupal 7 site to make it work for the next year or so while working on Drupal 8 development more slowly?

A bit of research online will tell you that there’s no right answer, but that the first thing to do in an evaluation is determine whether any of the modules on which your site depends are available for Drupal 8, and if not, whether there is a good alternative. I must add that while all the functions I am going to mention can be done manually or through custom code, a lot of that work would take more time to write and maintain than I expect to have going forward. In fact, we’ve been working to move more of our customized code to modules already, since that makes it possible to distribute some of the workload to others outside of the very few people at our library who write code or even know HTML well, not to mention taking advantage of all the great expertise of the Drupal community.

I tried two different methods for the evaluation. First, I created a spreadsheet with all the modules we actually use in Drupal 7, their versions, and the current status of those modules in Drupal 8 or if I found a reasonable substitute. Next, I tried a site that automates that process, d8upgrade.org. Basically you fill in your website URL and email, and wait a day for your report, which is very straightforward with a list of modules found for your site, whether there is a stable release, an alpha or beta release, or no Drupal 8 release found yet. This is a useful timesaver, but will need some manual work to complete and isn’t always completely up to date.

My manual analysis determined that there were 30 modules on which we depend to a greater or lesser extent. Of those, 10 either moved into Drupal core (so would automatically be included) or the functions for which we used them moved into another piece of core. 5 had versions available in Drupal 8, with varying levels of release (i.e. several in alpha release, so questionable to use for production sites but probably fine), and 5 were not migrated but it was possible to identify substitute Drupal 8 modules. That’s pretty good: 18 modules were available in Drupal 8, and in several cases one module could do the job that two or more had done in Drupal 7. Of the additional 11 modules that weren’t migrated and didn’t have an easy substitution, three of them are critical to maintaining our current site workflows. I’ll talk about those in more detail below.

d8upgrade.org found 21 modules in use, though I didn’t include all of them on my own spreadsheet if I didn’t intend to keep using them in the future. I’ve included a screenshot of the report, and there are a few things to note. This list does not have all the modules I had on my list, since some of those are used purely behind the scenes for administrative purposes and would have no indication of use without administrative access. The very last item on the list is Core, which of course isn’t going to be upgraded to Drupal 8–it is Drupal 8. I also found that it’s not completely up to date. For instance, my own analysis found a pre-release version of Workbench Moderation, but that information had not made it to this site yet. A quick email to them fixed it almost immediately, however, so this screenshot is out of date.

I decided that there were three dealbreaker modules for the upgrade, and I want to talk about why we rely on them, since I think my reasoning will be applicable to many libraries with limited web development time. I will also give honorable mention to a module that we are not currently using, but I know a lot of libraries rely on and that I would potentially like to use in the future.

Webform is a module that creates a very simple-to-use interface for creating webforms and doing all kinds of things with them beyond simply sending emails. We have many, many custom PHP/MySQL forms throughout our website and intranet, but there are only two people on the staff who can edit those or download the submitted entries from them. They also occasionally have dreadful spam problems. We’ve been slowly working on migrating these custom forms to the Drupal Webform module, since that allows much more distribution of effort across the staff and provides easier ways to stop spam using, for instance, the Honeypot module or Mollom. (We’ve found that the Honeypot module stopped nearly all our spam problems, so we didn’t need to move to Mollom, since we don’t have user comments to moderate.) The thought of going back to coding all those webforms myself is not appealing, so for now I can’t move forward until I come up with a Drupal solution.

Redirect does a seemingly tiny job that’s extremely helpful. It allows you to create redirects for URLs on your site, which is incredibly helpful for all kinds of reasons. For instance, if you want to create a library site branded link that forwards somewhere else like a database vendor or another page on your university site, or if you want to change a page URL but ensure people with bookmarks to the old page will still find it. This is, of course, something that you can do on your web server, assuming you have access to it, but this module takes a lot of the administrative overhead away and helps keep things organized.

Backup and Migrate is my greatest helper in my goal to be at least in the neighborhood of best practices for web development when web development is only half my job, or some weeks more like a quarter of my job. It makes keeping my development, staging, and production sites in sync a very quick process, and since I created a workflow using this module I have been far more successful in keeping my development processes sane. It provides an interface for creating a backup of your site database, files directories, or both, which you can later use to completely restore a site. I use it at least every two weeks, or more often when working on a particular feature, to move the database between servers (I don’t move the files with the module for this process, but that’s useful for backups that are for emergency restoration of the site). There are other ways to accomplish this work, but this particular workflow has been so helpful that I hate to dump a lot of time into redoing it just now.

One last honorable mention goes to Workbench, which we don’t use but I know a lot of libraries do. It allows you to create a much friendlier interface for content editors so they don’t have to deal with the administrative backend of Drupal and can just see their own content. We do use Workbench Moderation, which does have a Drupal 8 release and provides a moderation queue, so the six or so members of staff who can create or edit content but don’t have administrative rights have their content checked by an administrator. None of them particularly like the standard Drupal content creation interface, and it’s not something that we would ever ask the rest of the staff to use. We know from the lack of use of our intranet, which is also on Drupal, that no one particularly cares for editing content there. So if we wanted to expand access to website editing, which we’ve talked about a lot, this would be a key module for us to use.

Given the current status of these modules, with rewrites in progress, it seems likely that by the end of the year it will be possible to migrate to Drupal 8 with our current setup, or that in playing around with Drupal 8 on a development site we will determine a different way to approach these needs. If you have the interest and time to do this, there are worse ways to pass the time. If you are creating a completely new Drupal site and don’t have a time crunch, starting in Drupal 8 now is probably the way to go, since by the time the site is ready you may have additional modules available and get to take advantage of all the new features. If this is something you’re trying to roll out by the end of the semester, maybe wait on it.

Have you considered upgrading your library’s site to Drupal 8? Have you been successful? Let us know in the comments.

Open Knowledge Foundation: Open Data Day Cairo 2016

planet code4lib - Wed, 2016-03-23 14:07

This blog post was written by Adham Kalila from Transport for Cairo

There is a strong institutional fear of open data in Egypt. In a culture attuned to privacy and private spaces, concern about the potential negative impacts of opening up data and giving access arouses suspicion towards anyone asking too many questions. There is often a tendency to withhold information. To these institutions, it seems unlikely that some nerdy enthusiasts just want to learn more and solve what they are capable of solving, for little more than the experience and thrill of getting it done. Few imagine this because we do not do it enough. Open Data Day and the Cairo Mobility Hackathon were an excellent first step in showing everyone that some of us want to think a little harder and do a bit more with our time and skills. One by one, people and institutions will stop being so suspicious when we can offer help in exchange for their data, openly.

Transport for Cairo (TfC) is a group initiative of young professionals that aims to gather and share information about public transportation with everyone in the most convenient and practical ways: for example, printed maps and digital feeds. This project is fundamentally about open data, since this data belongs to every citizen. Leading by example, TfC released a GTFS dataset of the Cairo Metro as open data three days before the event.

To celebrate open data, TfC, in collaboration with the Access 2 Knowledge 4 Development (A2K4D) research centre and Open Knowledge International, called out to Egypt’s open data community to spend a day learning, engaging, and networking. Participants could attend the Cairo Mobility Hackathon or attend workshops held by four organizations from the Cairo community who came to speak and raise awareness about different projects and opportunities around open data in Egypt. The response was uplifting!

The day started with an ice-breaking activity that involved a tennis ball and some funny confessions. After a brief introduction by Mohamed Hegazy, TfC’s director, about the activities of the day and some much-needed coffee, the hackathon and the workshops commenced in earnest. Originally, the workshops were scheduled in parallel but after feedback from participants about wanting to attend overlapping ones, the workshops were rearranged to follow one another. The workshops focused on establishing and fostering an open data culture in Egypt and were given by a number of established organizations including Takween integrated Community Development, the Cairo node of the Open Data Institute, the Support for Information and Technology Center (SITC), and InfoTimes. At the end of the day, A2K4D held a pitching competition for data-fuelled start-ups.

One of the main achievements of the day was the crowd of around 70 people that gathered at the American University in Cairo in Tahrir for ODD. One of the first participants to show up arrived by train all the way from the coastal city of Alexandria just to attend. The hackathon that took place focused on mobility around Cairo, which is a problematic issue close to everyone’s heart. It gave participants the opportunity to learn more about the released dataset, build upon it and engage with the team that created it.

To structure the ideathon and give participants a chance to share their projects and ideas, we had a fillable schedule board on the wall for sessions to take place across six tables and four time slots. Slowly but surely, teams started forming around similar projects or topics to be discussed. In one session of the hackathon, everyone was asked to dream up public transit routes (bus, tram, and metro) that would make their daily commutes faster and easier. Different routes were drawn in various colors on a map of Cairo, and the final product has started a thought experiment on where investment is most needed and how to prioritize one route over another.

The day ended with our minds opened to new possibilities and ways to engage with the data and with one another. The one striking thing that was lacking from the day, and I dare say it was not missed, was suspicion. Nobody questioned the motives behind our interest in one another’s experiences, projects, and goals. There was a shared sense of collaboration and engagement and, above all, community. Open Data Day 2016 in Cairo was a resounding success and we hope to play a bigger role in its organization in the future. If you would like to see more pictures of the day, check out our Facebook album.

Islandora: Seeking Expressions of Interest to host the 2017 Islandoracon

planet code4lib - Wed, 2016-03-23 13:59

The Islandora Foundation is seeking Expressions of Interest to host the 2017 Islandora Conference (Islandoracon).

The first Islandora Conference was held in August, 2015 at Islandora’s birthplace in Charlottetown, PE. It was a resounding success and the community has expressed a strong interest in making it a repeat event. If you would like to host the next Islandoracon please contact community@islandora.ca with your response to the following by May 31st, 2016:

Requirements:

  • The host must cover the cost of the venue (whether by offering your own space or paying the rent on a venue). All other costs (transportation, catering, supplies, etc) will be covered by the Islandora Foundation. The venue must have:

    • Space for up to 150 attendees total, with room for at least two simultaneous tracks and additional pre-conference workshop facilities, with appropriate A/V equipment. Laptop-friendly seating is a strong preference.

    • Wireless internet capable of supporting 150+ simultaneous connections, at no extra charge for conference attendees.

    • A location convenient to an airport and hotels (or other accommodations, such as student housing).

    • A local planning committee willing to help with organization.

  • The host is not responsible for developing the Islandoracon program, pre-conference events, sponsorships, or social events.

The EOI must include:

  • The name of the institution(s)

  • Primary contact (with email)

  • Proposed location, with a brief description of amenities, travel, and other considerations that would make it a good location for the conference.

  • A proposed time of year. We do not have a set schedule, so if there is a season when your venue is particularly attractive, the conference dates can move accordingly.

The location will be selected by the Islandoracon Planning Committee, a working group of the Islandora Roadmap Committee.

Open Library Data Additions: Amazon Crawl: part 14

planet code4lib - Wed, 2016-03-23 13:55

Part 14 of Amazon crawl.

This item belongs to: data/ol_data.

This item has files of the following types: Data, Data, Metadata, Text

Open Library Data Additions: Talis MARC records

planet code4lib - Wed, 2016-03-23 09:18

5.5 million MARC records contributed by Talis to Open Library under the ODC PDDL (http://www.opendatacommons.org/odc-public-domain-dedication-and-licence/).

This item belongs to: data/ol_data.

This item has files of the following types: Data, Data, Metadata

Open Library Data Additions: Amazon Crawl: part 9

planet code4lib - Wed, 2016-03-23 05:24

Part 9 of Amazon crawl.

This item belongs to: data/ol_data.

This item has files of the following types: Data, Data, Metadata, Text

FOSS4Lib Recent Releases: Evergreen - 2.10.1

planet code4lib - Wed, 2016-03-23 03:08

Last updated March 22, 2016. Created by gmcharlt on March 22, 2016.

Package: Evergreen
Release Date: Tuesday, March 22, 2016
