At the NCSU Libraries we recently migrated from Djatoka as our image server, and from a bespoke user interface for paginated reading and search-inside, to an IIIF-compatible image server and viewers. We gained a lot from the switch, and we pretty quickly saw the interoperability wins that come from adopting the IIIF standards.
For a long while we had been using a book reader I developed. It wasn’t great, but at the time it was better in a number of ways than the paginated readers that were available. It has not aged well compared to the most recent viewers, and the move to IIIF gave us the opportunity to evaluate our options again. I really like a lot of the features that UniversalViewer (UV) provides, including a thumbnail navigation pane and support for the IIIF Content Search API with hit highlighting.
While UV is somewhat responsive, it does not work well in very narrow windows or on small mobile device screens. We had put a lot of effort into our past reader to provide a decent experience on mobile, so it was disappointing to take away functionality that we had provided to users for years.
UV has plans to make progress soon on updating the interface to work better on mobile, but we didn’t want to wait to push out the improved desktop experience to users. So, like much of the progress on the web, we relied on a fallback: desktop and other large-display users would get the more powerful UV interface, while narrow-window and mobile users would get something else. At first that something else was just the first image for a resource as a static image, along with a PDF when available. That provided no pan/zoom interface and no way to see any other images for the resource, but it at least let us get on to other things.
And here is where the story gets interesting and IIIF really comes into its own. It is worth understanding just a bit of how IIIF works to follow the next part. All we needed to create in order to implement UV was an IIIF Presentation manifest: a JSON-LD document with information on how to display (or present) the resource to a user, including an ordered list of all the images to display. You can see an example of a manifest here. While many probably still associate IIIF only with the Image API and the image servers that have developed around that specification, the real wow factor is the Presentation API. The same manifest can be used in any viewer that knows how to parse these manifests. Further, the other viewer could be on our own site or within a completely different interface developed and hosted by someone else.
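To make the manifest idea concrete, here is a drastically trimmed sketch of the Presentation 2.x shape (sequences → canvases → images), along with a few lines that walk it to recover the ordered list of Image API endpoints. The sample manifest and URLs are invented for illustration, not taken from our actual collections:

```javascript
// Extract the ordered list of Image API endpoints from a (simplified)
// IIIF Presentation 2.x manifest. Field names follow the Presentation
// 2.x structure: sequences -> canvases -> images -> resource.service.
function imageServices(manifest) {
  return manifest.sequences[0].canvases.map(function (canvas) {
    return canvas.images[0].resource.service['@id'];
  });
}

// A minimal, hypothetical manifest, just enough to show the shape:
var manifest = {
  '@context': 'http://iiif.io/api/presentation/2/context.json',
  '@type': 'sc:Manifest',
  label: 'Example pamphlet',
  sequences: [{
    canvases: [
      { label: 'Page 1',
        images: [{ resource: { service: { '@id': 'https://iiif.example.org/iiif/page1' } } }] },
      { label: 'Page 2',
        images: [{ resource: { service: { '@id': 'https://iiif.example.org/iiif/page2' } } }] }
    ]
  }]
};

console.log(imageServices(manifest));
// -> ['https://iiif.example.org/iiif/page1', 'https://iiif.example.org/iiif/page2']
```

Any viewer that understands this structure can present the same resource, which is exactly the interoperability win described below.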
We looked at other viewers and chose Leaflet-IIIF. Leaflet is a simple, mobile-friendly tiled map viewer, and Jack Reed developed a plugin that adds IIIF image support to Leaflet. This was almost all we needed, except that the default way to change images in Leaflet-IIIF is a layers control that works well enough for a few choices of map layers but does not scale to a 20+ page book. I needed another way to move through the pages. It had been a long while since I had done anything with Leaflet, but once I found Leaflet.EasyButton it was simple to create two buttons for the next and previous pages, and the EasyButton plugin also helped with changing the states of the buttons on reaching the first or last page. Here’s what our current solution looks like in action:
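A sketch of how that paging might be wired up, assuming the usual Leaflet-IIIF (`L.tileLayer.iiif`) and Leaflet.EasyButton (`L.easyButton`) plugin APIs; the function and variable names here are illustrative, not our production code. The paging arithmetic is split into a pure function so the button handlers stay trivial:

```javascript
// Pure paging logic: step by delta, clamped to [0, total - 1].
function clampPage(current, delta, total) {
  var next = current + delta;
  if (next < 0) return 0;
  if (next > total - 1) return total - 1;
  return next;
}

// Illustrative Leaflet wiring (runs in a browser with Leaflet,
// Leaflet-IIIF, and Leaflet.EasyButton loaded).
function setupPager(map, infoUrls) {
  var page = 0;
  var layer = L.tileLayer.iiif(infoUrls[page]).addTo(map);

  function show(newPage) {
    map.removeLayer(layer);
    layer = L.tileLayer.iiif(infoUrls[newPage]).addTo(map);
    page = newPage;
    // Grey out the buttons at either end of the sequence.
    page === 0 ? prev.disable() : prev.enable();
    page === infoUrls.length - 1 ? next.disable() : next.enable();
  }

  var prev = L.easyButton('fa-chevron-left', function () {
    show(clampPage(page, -1, infoUrls.length));
  }).addTo(map);
  var next = L.easyButton('fa-chevron-right', function () {
    show(clampPage(page, +1, infoUrls.length));
  }).addTo(map);
}
```

Keeping the clamping in its own function means the first/last-page button states and the layer swap can never disagree about which page is current.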
We hope this is a temporary solution, but I think it clearly shows how shared standards can be a big interoperability win. With the short deadline we had for pushing out this new functionality, the availability of multiple viewers covering different niches meant that we could forge ahead while still providing a good experience for most users, and it saved us countless hours of work.
You can see more of how we’ve implemented these viewers here: http://d.lib.ncsu.edu/collections
Thanks to Simeon Warner for pointing out how this was an IIIF interoperability win. He also created a short screencast of this in action:
Hats off to Bookshare — a global literacy initiative aimed at providing accessible books free for people unable to read standard print — as they reach a record 10 million ebook downloads by print-disabled readers. Because 90% of books published in the United States are unavailable in accessible formats, people who are dyslexic, blind or have low vision have extremely limited access to books. Bookshare helps to bridge that gap by obtaining accessible files (when available) from over 820 publishers. Bookshare also scans titles when print is the only available format. As a result, K-12 students with a qualifying disability have free access to more than 460,000 books.
Bookshare is a service provided by Benetech, a non-profit technology organization in Silicon Valley that also works on human rights and environmental issues worldwide. Jim Fruchterman, CEO and founder of Benetech, wanted to use technology to dramatically reduce the costs of creating and delivering ebooks. With grant funds from the U.S. Department of Education, Bookshare initially focused on service to the K-12 population, but last year expanded service to public libraries in Georgia, Pennsylvania and at the New York Public Library. With over 425,000 members, Bookshare joins ALA in the pursuit of providing equitable access to all people regardless of circumstance.
ALA’s Office for Information Technology Policy (OITP) has had the pleasure to work with Benetech for a number of years on advocating for national and international copyright exceptions for people with print disabilities to increase access to content and to share accessible content across borders. We have also worked with Benetech on 3D printing, exploring ways that people with disabilities can use and benefit from the latest technologies. We congratulate Benetech on this milestone and look forward to future collaborations.
Monday afternoon I attended the Department of Labor’s Customer Centered Design Challenge, which is all about how groups are implementing new workforce programs. I started out sitting next to Teresa Hitchcock from Bakersfield, California. She told me that she reached out to Nancy Kerr, her local library director, and together they have created an exciting partnership.
Nancy said that she had some empty space after downsizing her microfiche collection, but didn’t have any money for staff. They created a vibrant Teen Space with maker technology, and the Kern Youth Partnership staffs the space. Now the space is filled with teens looking for work and entrepreneurial opportunities.
When others heard our conversation, they talked about looking for public space for their programs too. They would like to integrate their services with libraries. A woman from the San Diego Career Centers suggested that local librarians reach out to their local workforce board and ask to meet. The workforce people don’t know we’re here and are looking to partner.
The Department of Labor released a “Training and Employment Notice (TEN)” in May 2016 recommending that Workforce Boards work with their local libraries: Department of Labor Training and Employment Notice 35-15
The post Department of Labor recommends Workforce Boards work with libraries appeared first on District Dispatch.
As always, I'm grateful to the Library of Congress for inviting me. I was asked to give a brief report of what happened at the DARPA workshop on "The Future of Storage" that took place at Columbia last May. There has yet to be a public report on the proceedings, so I can't be specific about who (other than me) said what.
Three broad areas were discussed. First, I and others looked at the prospects for bulk storage over the medium term. How long is the medium term? Hard disk has been shipping for 60 years. Flash as a storage medium is nearly 30 years old (Eli Harari filed the key enabling patent in 1988), and it has yet to make an impact on bulk storage. It is pretty safe to say that these two media will dominate the bulk storage market for the next 10-15 years.
[Figure: WD unit shipments]
The debate is about how quickly flash will displace hard disk in this space. Flash is rapidly displacing hard disk in every market except bulk storage. High performance, low power, high density and robustness overwhelm the higher price per byte of flash.
[Figure: WD revenues]
In unit volume terms, we have hit peak disk. Since disk manufacture is a volume business, these reduced unit volumes are causing both major manufacturers financial difficulties, resulting in layoffs at both, and manufacturing capacity reductions.
[Figure: Seagate revenues]
These financial difficulties make the investments needed to further increase densities, specifically HAMR, more difficult. The continuing real-time schedule slip of this much-delayed technology is reducing the rate at which $/GB decreases, and thus making hard disk less competitive with flash. It is worth noting, though, that shingled drives are now available; we're starting to use Seagate's very affordable 8TB archive drives.
[Figure: Exabytes shipped]
Despite these difficulties, hard disk completely dominates the bytes of storage shipped. What would it take for flash to displace hard disk in the bulk storage market?
The world's capacity to make bytes of flash would have to increase dramatically. There are two possible (synergistic) ways to do this; it could be the result of either or both of:
- Building a lot of new flash fabs. [Figure: Flash vs HDD capex] This is extremely expensive, but flash advocates point to current low interest rates and strategic investment by Far East governments as a basis for optimism.
But even if the money is available, bringing new fabs into production takes time. In the medium term it is likely that the fabs will come on-line, and accelerate the displacement of hard disk, but this won't happen quickly.
- Increasing the bytes of storage on each wafer from existing fabs. Two technologies can do this: 3D flash is in volume production, and quad-level cell (QLC, 4 bits per cell) is in development. Although both are expensive to manufacture, the investment is a lot less than a whole new fab, and the impact is quicker.
[Table: Write endurance by cell size and bits per cell]
As the table shows, the smaller the cell and the more bits it holds, the lower the write endurance (and the lower the reliability). But QLC at a larger cell size is competitive with TLC at a smaller one. QLC isn't likely to be used for write-intensive workloads, but archival uses fit its characteristics well. Whether enough of the bulk storage market has low enough write loads to use QLC economically is an open question.
- Writing data to DNA needs to get 6-8 orders of magnitude cheaper. The goal of the recently announced HGP-write project is to reduce it by only 3 orders of magnitude in a decade, and it has been getting cheaper more slowly than hard disk or flash.
- Reading the data may be cheap but is always going to be very slow, so the idea that "DNA can store all the world's data" is misleading. At best it could store all the world's backups; there needs to be another copy on some faster medium.
- The use of long-lived media whose writing cost is vastly greater than their reading cost is extremely difficult to justify. It is essentially a huge bet against technological progress.
- As we see with HAMR there is a very long way between lab demos of working storage media and market penetration. We are many years from working DNA storage media.
Thanks are due to Brian Berg and Tom Coughlin for input to this talk, which drew on the reporting of Chris Mellor at The Register, but these opinions are mine alone.
Equinox is pleased to announce that we have hired a new Office Manager. Her name is Terri Harry and we couldn’t be more thrilled to have her on board! Terri is local to the metro Atlanta area and started work in August.
Terri completed her Associate’s degree in Liberal Arts in 1985 from Polk Community College in Florida. She pursued a degree in Industrial Engineering before family obligations put her education on hold. Terri worked at Walt Disney World for ten years before moving north to Georgia. Upon moving to Georgia, Terri was a stay at home mom to her two kids and menagerie of pets. For the past 16 years, she has been heavily involved in local sci-fi conventions, giving her just the skill set she needed to take over office duties at Equinox.
Equinox Vice President Grace Dunbar said this about our newest employee, “We’re so pleased to have Terri join the team here at Equinox. I know she’ll be great at handling the ‘non-linear, non-subjective, wibbly-wobbly, timey-wimey stuff’.”
When she’s not herding cats at local conventions (Dragon Con being her favorite), she enjoys spending time with her husband of 28 years and their two kids. We’re happy to have her here at Equinox, herding all of our cats (er, employees).
- Literature Review
- Evaluating Accessibility
- Method
- Results
- Data Interpretation
- Recommendations
- Limitations and Recommendations for Further Research
- References
In the context of digital libraries or digital repositories, the word “accessibility” sometimes refers to access as in open access literature, which is freely available via the internet to anyone, anywhere (Crawford, 2011). While digital library systems fall within the context of this paper, accessibility is not used here to refer to open access. Rather, the Web Accessibility Initiative (WAI) definition of accessibility is used: enabling people with disabilities to use the web (W3C, 2005). Kleynhans and Fourie (2014) note the lack of definition surrounding accessibility and the importance of defining it. Overall, accessibility means that users with all disability types – visual, hearing, motor, and cognitive – are able to use the website (WebAIM, 2013).
According to 2013 American Community Survey data, an estimated 12.6% of the population of the United States has a disability (U.S. Census Bureau, 2013). Web accessibility is important so that people with disabilities have equal access to the information and resources available on the web. Just as in the physical world, accessibility also benefits users without disabilities: accessible websites also serve users on slower internet connections or antiquated mobile devices (W3C, 2005). Further, attention to website accessibility improves usability, or ease of use, and improves search engine optimization (SEO) (Kleynhans & Fourie, 2014; Moreno & Martinez, 2013; Nielsen, 2012; Rømen & Svanæs, 2012). Inaccessible websites widen the digital divide, because they restrict access to information on the basis of ability (Adam & Kreps, 2009).

Literature Review
Bertot, Snead, Jaeger, and McClure (2006) conducted a study to develop evaluations for assessing digital libraries. The assessment framework developed by Bertot et al. (2006) includes functionality, usability, and accessibility as determining factors for the success of digital libraries. The accessibility criteria include the provision of alternate forms of content, not using color to convey information, using clear navigation structures, and structuring tables to transform gracefully when enlarged.
Southwell and Slater (2012) evaluated academic library digital collection websites using a variety of screen reader software. Rather than evaluating the accessibility of the website overall, the focus was placed on whether the digital item selected was screen-readable. The primary aim was to determine whether digitized texts, as opposed to born-digital documents, were accessible. Thirty-eight of the libraries evaluated by Southwell and Slater used homegrown systems, and 31 used content management systems. An overwhelming majority of the libraries using content management systems, 25 (81%), used CONTENTdm for their digital library content. Results of the study indicated that 42% of the items evaluated were accessible to screen readers. Typically, the absence of a transcript for image-based information was the cause of accessibility failure.
Cervone (2013) provides an overview of accessibility considerations and evaluation tools. Notably, Cervone observes that many visually impaired people do not use screen readers, instead opting to use browser and computer settings to compensate for their impairments, and suggests using responsive design to gracefully accommodate increases in text size. However, many organizations, educational institutions, and libraries are still working to integrate responsive design into their websites (Rumsey, 2014). Organizations without responsive design should be mindful of how tables reflow (Bertot et al., 2006).
Fox (2008) suggests five principles to be mindful of when developing or redesigning a digital library website: simplicity, usability, navigability and findability, standards compliance, and accessibility. Adhering to any one of these principles supports adherence to the others. For example, standards compliance sets the stage for accessibility, and accessible websites support the findability of information (Moreno & Martinez, 2013).

Evaluating Accessibility
The task of evaluating accessibility is complex. To begin with, there are a variety of standards against which to measure accessibility compliance. The most recent is the Web Content Accessibility Guidelines (WCAG) 2.0, finalized by the W3C in 2008; it is preceded by WCAG 1.0, recommended in May 1999 (Mireia et al., 2009; W3C, 2008). Further, Section 508, Subpart 1194.22 can also be used to evaluate the accessibility of websites; eleven of the 16 Section 508 checkpoints are based on the WCAG 1.0 specification. Recent studies of accessibility typically use the WCAG 2.0 guidelines (Billingham, 2014; Ringlaben, Bray & Packard, 2014; Rømen & Svanæs, 2012). A variety of tools for automatically assessing the accessibility compliance of websites are available, and an automated validation tool is an excellent place to start when evaluating website accessibility. However, it is essential to follow automated accessibility checks with other evaluation processes (W3C, 2005b).
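To give a feel for what automated validators flag, here is a toy version of two of the checks that tools like AChecker perform: images without alt attributes, and form inputs without an associated label. This is a simplification for illustration only (real checkers parse the full DOM and run dozens of checks), and the sample markup is hypothetical:

```javascript
// A toy automated accessibility check: flag <img> elements with no alt
// attribute and <input> elements with no matching <label for=...>.
// Regex scanning is a simplification; real tools parse the DOM.
function quickAudit(html) {
  var problems = [];
  (html.match(/<img\b[^>]*>/gi) || []).forEach(function (img) {
    if (!/\balt\s*=/i.test(img)) problems.push('image missing alt text');
  });
  // Collect the ids referenced by <label for="...">.
  var labelled = {};
  (html.match(/<label\b[^>]*\bfor\s*=\s*"([^"]*)"/gi) || []).forEach(function (l) {
    labelled[l.replace(/.*\bfor\s*=\s*"([^"]*)".*/i, '$1')] = true;
  });
  (html.match(/<input\b[^>]*>/gi) || []).forEach(function (input) {
    var m = input.match(/\bid\s*=\s*"([^"]*)"/i);
    if (!m || !labelled[m[1]]) problems.push('input missing form label');
  });
  return problems;
}

console.log(quickAudit(
  '<img src="header.jpg">' +
  '<label for="q">Search</label><input id="q" type="text">' +
  '<input id="coll1" type="checkbox">'
));
// -> ['image missing alt text', 'input missing form label']
```

These two checks correspond to the "Image Missing Alt Text" and "Missing Form Labels" flags that dominate the known problems reported later in this paper.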
In addition to the complexity introduced by the variety of standards, the number and variety of accessibility assessment tools complicates the assessment process. The W3C provides a list of web accessibility evaluation tools. At the time of this writing, the list, which can be filtered by guideline, language, type of tool, type of output, automaticity, and license, contained 48 accessibility evaluation tools (W3C, 2014).

Method
Digital library websites using the CONTENTdm platform were identified using the “CONTENTdm in action” website (CONTENTdm in action, n.d.). In some cases, links to collections pointed directly to content residing on the CONTENTdm platform, while in other cases, the landing page for the collection was stand-alone, with links to the content in the CONTENTdm system for further exploration.
The differences in how digital library content is displayed provided an additional opportunity for analysis: evaluating both the collection landing page and the CONTENTdm-driven page. Analyzing the two page types makes it possible to identify and differentiate between accessibility issues on the collection landing pages and on the collection browse pages. Two academic library digital collections with a landing page separate from the “Browse Collection” interface were identified for analysis: the Carver-VCU Partnership Oral History Collection at the Virginia Commonwealth University (VCU) Libraries, and the Civil War in the American South Collection at the University of Central Florida (UCF). While some digital library collection landing pages were standalone, outside of the CONTENTdm system, both of the collection landing pages evaluated in this research project were generated within CONTENTdm.
Preliminary accessibility evaluations were conducted with several of the automated tools listed by the W3C, in order to select the most appropriate tool for formal analysis. AChecker Web Accessibility Checker was selected after evaluating the results and output format generated by the following tools: Functional Accessibility Evaluator (FAE) 2.0, HTML_CodeSniffer, WAVE Web Accessibility Evaluation Tool, Accessibility Management Platform (AMP), and Cynthia Says from HiSoftware. Each evaluation tool has strengths and weaknesses, which are outside of the scope of this paper. AChecker Web Accessibility Evaluation Tool was selected for use based on its base functionality, readability of reports and data, and data export options.
Automated evaluation was conducted using AChecker Web Accessibility Checker. Pages were evaluated at the WCAG 2.0 AA level: WCAG 2.0 level A includes the specifications websites must conform to, and level AA includes the specifications websites should conform to for accessibility. The URLs listed in Appendix A were entered into the “address” field, with the option to check against “WCAG 2.0 (Level AA)” and the view-by-guideline report format. The full accessibility review was then exported to PDF.
The WCAG 2.0 guidelines were selected for evaluation because WCAG 2.0 encompasses more disability types and usability principles than WCAG 1.0 and Section 508 (Mireia et al., 2009). To be clear, it is possible for a website to meet WCAG 2.0 standards while not being functionally accessible (Clark, 2006). However, a website is certainly not accessible if it does not meet the WCAG 2.0 guidelines. Further, the automated accessibility check does not check the accessibility of individual items in the collection, as in the Southwell and Slater (2012) research.

Results
The results of the accessibility evaluation are presented in the following three tables. Table 1 displays the highest-level overview of the results: the total number of problems identified for each page. AChecker categorizes results into three separate categories: known problems, likely problems, and potential problems. Issues AChecker can identify with certainty are categorized as known problems; more ambiguous barriers that “could go either way” and need human intervention to determine whether an issue exists are listed as likely problems; and issues that need human review for evaluation are listed as potential problems (Gay & Li, 2010).

Table 1: AChecker Accessibility Evaluation Results by Type of Problem

| Page | Known Problems (n) | Likely Problems (n) | Potential Problems (n) |
|---|---:|---:|---:|
| VCU: Oral History Landing | 4 | 0 | 160 |
| VCU: Oral History Browse | 231 | 0 | 1500 |
| UCF: Civil War Landing | 3 | 0 | 180 |
| UCF: Civil War Browse | 58 | 1 | 945 |
Table 2 and Table 3 display the specific guidelines where accessibility issues were identified by AChecker, for VCU and UCF content, respectively.

Table 2: Accessibility Evaluation Known Problem Flags by Guideline – VCU

| Criteria | Problem Detail | Landing (n) | Collection Browse (n) |
|---|---|---:|---:|
| 1.1 Text Alternatives (A) | Image Missing Alt Text | 1 | 1 |
| 1.3 Adaptable (A) | Missing Form Labels | 0 | 148 |
| 1.4 Distinguishable (AA) | Bold Element Used | 1 | 3 |
| 2.4 Navigable (AA) | Improper Header Nesting | 0 | 1 |
| 3.1 Readable (A) | Document Language Not Identified | 2 | 2 |
| 3.3 Input Assistance (A) | Element with More Than One Label | 0 | 1 |
| 3.3 Input Assistance (A) | Empty Label Text | 0 | 74 |

Table 3: Accessibility Evaluation Known Problem Flags by Guideline – UCF

| Criteria | Problem Detail | Landing (n) | Collection Browse (n) |
|---|---|---:|---:|
| 1.1 Text Alternatives (A) | Image Missing Alt Text | 0 | 0 |
| 1.3 Adaptable (A) | Missing Form Labels | 0 | 34 |
| 1.4 Distinguishable (AA) | Bold Element Used | 1 | 3 |
| 2.4 Navigable (AA) | Improper Header Nesting | 0 | 1 |
| 3.1 Readable (A) | Document Language Not Identified | 2 | 2 |
| 3.3 Input Assistance (A) | Empty Label Text | 0 | 17 |
| 4.1 Compatible (A) | Non-unique ID Attribute | 0 | 1 |

Data Interpretation
At the outset, the primary weakness of interpreting the accessibility evaluation results lies in not having direct access to a CONTENTdm system as a contributor or administrator. Interpretation therefore relies on assumptions, which are supported by the similarity of the results for the two separate digital library collections on the CONTENTdm system.
The two digital library collections evaluated presented nearly identical known accessibility issues. Two errors were identified in the VCU collection that were not identified in the UCF collection: missing image alt text (on the landing and browse pages) and an element with more than one label (collection browse page). The image missing an alt attribute is the header image for the template. Since no missing-alt issue was identified on the UCF collection, presumably the alt attribute of the header image can be modified by local administrators of the CONTENTdm system. The number of errors related to missing labels appears to be related to the number of collections available in the system. For example, 148 missing form label errors were identified on the VCU collection browse page, while only 34 were identified on the UCF collection browse page; the VCU system had 37 separate collections and the UCF system had 17. The missing form labels are related to the faceted navigation used to add or remove other collections from the search. Although the collections may be reached directly from a specified landing page, the absence of form labels could make it impossible for visitors using screen reader technology to navigate to other collections in the system, or to view other collections along with the current selection.

Recommendations
Based on the number of known problems identified on the collection browse pages in the accessibility evaluation, it is important to determine whether labels can be added by a local CONTENTdm system administrator. If so, meaningful labels should be added for each element that requires one. Because both collection browse pages presented the errors in the same structural location, it is likely that the missing labels are a function of how the system outputs the collection information onto the page. In the case of a system structure that generates inaccessible content, advocacy for the importance and necessity of accessibility is invaluable. Clients of OCLC should strongly urge the vendor to make accessibility corrections a priority for future updates and releases of the system. When customers consider features a priority, vendors should follow suit, especially in the competitive tech marketplace that currently exists. The value of accessibility advocacy to create positive change cannot be overstated.
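The fix that a "missing form label" flag calls for is ordinary HTML labeling. The collection-facet checkbox below is hypothetical markup (not CONTENTdm's actual output), shown unlabeled and then labeled in two equivalent ways:

```javascript
// Hypothetical facet-checkbox markup, for illustration only.

// Inaccessible: nothing programmatically ties the checkbox to the
// collection name, so a screen reader announces an unnamed checkbox.
var before = '<input type="checkbox" id="coll7" name="coll7">' +
             'Civil War in the American South';

// Accessible option 1: an explicit <label for=...> association.
var after1 = '<input type="checkbox" id="coll7" name="coll7">' +
             '<label for="coll7">Civil War in the American South</label>';

// Accessible option 2: aria-label, useful when the visible text
// cannot be wrapped in a label element.
var after2 = '<input type="checkbox" id="coll7" name="coll7" ' +
             'aria-label="Civil War in the American South">';
```

Either form gives assistive technology an accessible name for the control; which one applies depends on how much of the page template a local administrator can edit.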
There is plenty of work, beyond the initial automated check, that must be done to evaluate and improve the accessibility of digital library collections on the CONTENTdm platform. Each of the likely problems and potential problems identified in the AChecker report should be reviewed to determine whether additional action is needed to provide accessible content. Some of the potential problems identified by AChecker include items needing a long description, issues with table structure or display, and areas where visual context may be required. Correcting potential problems related to the need for visual context (where consuming the information requires being able to view the image) will provide at least some of the information needed to ensure individual items in the collection are accessible. After corrections are made, the pages should be re-evaluated with the AChecker tool. Automated accessibility evaluation should be followed with manual evaluation and, whenever possible, the involvement of users with disabilities in the evaluation (Henry, 2006; W3C, 2005b). Although many people with visual impairments do not use screen readers, screen readers are invaluable evaluation tools, especially for projects where users with disabilities are not directly involved in the testing process (Southwell & Slater, 2012; W3C, 2005b).

Limitations and Recommendations for Further Research
The primary weakness of this research report is that it only scratches the surface of evaluating the accessibility of digital library content using CONTENTdm. Accessibility evaluation was conducted using only one automated assessment tool, the AChecker Web Accessibility Evaluation Tool. As Gay and Li (2010) point out, different automated accessibility evaluation tools perform different checks and identify different problems. Comparing the results from a selection of automated accessibility evaluation tools would provide valuable information about the individual strengths and weaknesses of the tools, and about when one tool can prove more beneficial than another. Although a CONTENTdm-driven landing page and browse collection page were evaluated for accessibility, no individual item detail page was evaluated. While evaluating an individual item detail page would not necessarily inform the discussion regarding individual collection item accessibility, identifying other potentially inaccessible system structures is a benefit of such analysis. Another limitation of the current study is that only the accessibility issues identified as known problems were analyzed to inform the results; a great deal of data from the initial automated accessibility evaluation remains untapped. Providing additional detail regarding the issues identified as likely problems and potential problems would allow for a more comprehensive view of the accessibility of the CONTENTdm system, even though this study identified some specific structural changes that are needed for accessibility. Further, accessibility assessments using other tools, such as screen readers, and additional manual accessibility evaluation would help fill gaps in the information currently available.
Finally, conducting accessibility studies of the CONTENTdm system with users with disabilities would help to identify any lingering accessibility issues not identified by the previously mentioned methods.

References
Accessibility management platform. (n.d.). Retrieved March 1, 2015, from https://amp.ssbbartgroup.com/
Adam, A., & Kreps, D. (2009). Disability and discourses of web accessibility. Information, Communication & Society, 12(7), 1041-1058. doi: 10.1080/13691180802552940
Bertot, J. C., Snead, J. T., Jaeger, P. T., & McClure, C. R. (2006). Functionality, usability, and accessibility. Performance Measurement and Metrics, 7(1), 17-28. doi:10.1108/14678040610654828
Billingham, L. (2014). Improving academic library website accessibility for people with disabilities. Library Management, 35(8/9), 565-581. doi: 10.1108/LM-11-2013-0107
Chowdhury, S., Landoni, M., & Gibb, F. (2006). Usability and impact of digital libraries: a review. Online Information Review, 30(6), 656-680. doi:10.1108/14684520610716153
Clark, J. (2006, May 23). To Hell with WCAG 2. Retrieved March 14, 2015, from http://alistapart.com/article/tohellwithwcag2
CONTENTdm in action. (n.d.). Retrieved January 31, 2015, from http://www.oclc.org/en-US/contentdm/collections.html
Crawford, W. (2011). Open Access: What you need to know now. Chicago, IL, USA: American Library Association.
Functional Accessibility Evaluator 2.0. (n.d.). Retrieved March 1, 2015, from http://fae20.cita.illinois.edu/
Gay, G., & Li, C. Q. (2010). AChecker: open, interactive, customizable, web accessibility checking. Paper presented at the Proceedings of the 2010 International Cross Disciplinary Conference on Web Accessibility (W4A), Raleigh, North Carolina.
Henry, S. L. (2006). Understanding web accessibility. Web Accessibility (pp. 1-51): Apress.
HiSoftware Cynthia says portal. (n.d.). Retrieved March 1, 2015, from http://www.cynthiasays.com/
HTML_CodeSniffer. (n.d.). Retrieved March 1, 2015, from http://squizlabs.github.io/HTML_CodeSniffer/
Kleynhans, S. A., & Fourie, I. (2014). Ensuring accessibility of electronic information resources for visually impaired people. Library Hi Tech, 32(2), 368-379. doi: 10.1108/LHT-11-2013-0148
Mireia, R., Merce, P., Marc, B., Miquel, T., Andreu, S., & Pilar, P. (2009). Web content accessibility guidelines 2.0. Program, 43(4), 392-406. doi: 10.1108/00330330910998048
Moreno, L., & Martinez, P. (2013). Overlapping factors in search engine optimization and web accessibility. Online Information Review, 37(4), 564-580. doi:10.1108/OIR-04-2012-0063
Nielsen, J. (2012). Usability 101: Introduction to usability. Retrieved October 21, 2014, from http://www.nngroup.com/articles/usability-101-introduction-to-usability/
Ringlaben, R., Bray, M., & Packard, A. (2014). Accessibility of American university special education departments’ web sites. Universal Access in the Information Society, 13(2), 249-254. doi: 10.1007/s10209-013-0302-7
Rømen, D., & Svanæs, D. (2012). Validating WCAG versions 1.0 and 2.0 through usability testing with disabled users. Universal Access in the Information Society, 11(4), 375-385. doi: 10.1007/s10209-011-0259-3
Rumsey, E. (2014, July). Responsive design sites: Higher ed, libraries, notables. Retrieved March 14, 2015, from http://blog.lib.uiowa.edu/hardinmd/2012/05/03/responsive-design-sites-higher-ed-libraries-notables/
Southwell, K. L., & Slater, J. (2012). Accessibility of digital special collections using screen readers. Library Hi Tech, 30(3), 457-471. doi:10.1108/07378831211266609
Total validator. (n.d.). Retrieved March 1, 2015, from https://www.totalvalidator.com/
U.S. Census Bureau. (2013). DP02 Selected Social Characteristics in the United States [Data]. 2013 American Community Survey 1-Year Estimates. Retrieved from http://factfinder2.census.gov
W3C. (2014, March). Easy checks – A first review of web accessibility. Retrieved March 5, 2015, from http://www.w3.org/WAI/eval/preliminary
W3C. (2005a) Introduction to web accessibility. Retrieved October 3, 2014, from http://www.w3.org/WAI/intro/accessibility.php
W3C. (2005b). Selecting web accessibility evaluation tools. Retrieved March 5, 2015, from http://www.w3.org/WAI/eval/selectingtools.html
W3C. (2014, December 18). Web accessibility evaluation tools list. Retrieved March 1, 2015, from http://www.w3.org/WAI/ER/tools/
W3C. (2008, December 11). Web content accessibility guidelines (WCAG) 2.0. Retrieved March 5, 2015, from http://www.w3.org/TR/WCAG20/
WAVE web accessibility tool. (n.d.). Retrieved March 1, 2015, from http://wave.webaim.org/
WebAIM. (2014, April 22). Introduction to web accessibility. Retrieved March 5, 2015, from http://webaim.org/intro/
Open Knowledge Foundation: How to advance open data research: Renewing our focus on the demand for open data, user needs, and data for society.
Ahead of this year’s International Open Data Conference #iodc16, Danny Lämmerhirt and Stefaan Verhulst provide information on the Measuring and Increasing Impact Action Session, which will be held on Friday October 7, 2016 at IODC in Room E. Further information on the session can be found here.
Lord Kelvin’s famous quote “If you can not measure it, you can not improve it” equally applies to open data. Without more evidence of how open data contributes to meeting users’ needs and addressing societal challenges, efforts and policies toward releasing and using more data may be misinformed and based upon untested assumptions.
When done well, assessments, metrics, and audits can guide both (local) data providers and users to understand, reflect upon, and change how open data is designed. What we measure and how we measure is therefore decisive to advance open data.
Back in 2014, the Web Foundation and the GovLab at NYU brought together open data assessment experts from Open Knowledge International, the Organisation for Economic Co-operation and Development, the United Nations, Canada’s International Development Research Centre, and elsewhere to explore the development of common methods and frameworks for the study of open data. It resulted in a draft template, or framework, for measuring open data. Despite the increased awareness of the need for more evidence-based open data approaches, assessment methods have advanced only slowly since 2014. At the same time, governments publish more of their data openly, and more civil society groups, civil servants, and entrepreneurs employ open data for manifold ends: the broader public may detect environmental issues and advocate for policy changes, neighbourhood projects employ data to enable marginalized communities to participate in urban planning, public institutions may enhance their information exchange, and entrepreneurs embed open data in new business models.
In 2015, the International Open Data Conference roadmap made the following recommendations on how to improve the way we assess and measure open data.
- Reviewing and refining the Common Assessment Methods for Open Data framework. This framework lays out four areas of inquiry: the context of open data, the data published, use practices and users, as well as the impact of opening data.
- Developing a catalogue of assessment methods to monitor progress against the International Open Data Charter (based on the Common Assessment Methods for Open Data).
- Networking researchers to exchange common methods and metrics. This helps to build methodologies that are reproducible and increase credibility and impact of research.
- Developing sectoral assessments.
In short, the IODC called for refining our assessment criteria and metrics, connecting researchers, and applying the assessments to specific areas. It is hard to tell how much progress has been made in answering these recommendations, but there is a sense among researchers and practitioners that the first two goals are yet to be fully addressed.
Instead we have seen various disparate, yet well-meaning, efforts to enhance our understanding of the release and impact of open data. A working group was created to measure progress on the International Open Data Charter, which provides governments with principles for implementing open data policies. While this working group compiled a list of studies and their methodologies, it has not (yet) deepened the common framework of definitions and criteria to assess and measure the implementation of the Charter. In addition, there is an increase in sector- and case-specific studies that are often more descriptive and context-specific in nature, yet they do help meet the need for examples that illustrate the value proposition for open data.
As such, there seems to be a disconnect between top-level frameworks and on-the-ground research, preventing the sharing of common methods and distilling replicable experiences about what works and what does not. How to proceed and what to prioritize will be the core focus of the “Action Track: Measurement” at IODC 2016. The role of research for (scaling) open data practice and policy and how to develop a common open data research infrastructure will also be discussed at various workshops during the Open Data Research Summit, and the findings will be shared during the Action Track.
In particular, the Action Track will seek to focus on:
- Demand and use: Specifically, whether and how to study the demand for and use of open data—including user needs and data life cycle analysis (as opposed to focusing mainly on the data supply or capturing evidence of impact), given the nascent nature of many initiatives around the world; and how to identify how various variables, including local context, data supply, types of users, and impact, relate to each other, instead of regarding them as separate. To be more deductive and explanatory, and to generate insights that are operational (for instance, with regard to what data sets to release), there may be a need to expand the area of demand and use case studies (such as org).
- Informing supply and infrastructure: How to develop deeper collaboration between researchers and domain experts to help identify “key data” and inform the government data infrastructure needed to provide them. Principle 1 of the International Open Data Charter states that governments should provide key data open by default, yet the question remains how to identify “key” data (e.g., would that mean data relevant to society at large?). Which governments (and other public institutions) should be expected to provide key data, and which information do we need to better understand government’s role in providing key data? How can we evaluate progress around publishing these data coherently if countries organize the capture, collection, and publication of this data differently?
- Networking research and researchers: How to develop more and better exchange among the research community to identify gaps in knowledge, to develop common research methods and frameworks and to learn from each other? Possible topics to consider and evaluate include collaborative platforms to share findings (such as Open Governance Research Exchange – OGRX), expert networks (such as https://networkofinnovators.org/), implementing governance for collaboration, dedicated funding, research symposia (more below on ODRS), and interdisciplinary research projects.
Make the most of this Action Track: Your input is needed
To maximize outcomes, the Measurement Action Area will catalyze input from conversations prior to the IODC. Researchers who want to shape the future agenda of open data research are highly encouraged to participate and discuss in the following channels:
1) The Measurement and Increasing Impact Action Session, which will take place on Friday October 7, 2016 at IODC in Room E (more details here).
2) The Open Data Research Symposium, which is further outlined below. You can follow this event on Twitter with the hashtag #ODRS16.
The Open Data Research Symposium
The Measurement and Increasing Impact Action Session will be complemented by the second Open Data Research Symposium (#ODRS16), held prior to the International Open Data Conference on October 5, 2016 from 9:00am to 5:00pm (CEST) in Madrid, Spain (view map here for exact location). Researchers interested in the Measurement and Increasing Impact Action Session are encouraged to participate in the Open Data Research Symposium.
The symposium offers open data researchers an opportunity to reflect critically on the findings of their completed research and to formulate the open data research agenda.
Interested researchers may register here. Please note that registration is mandatory for participation.
This piece originally appeared on the IODC blog and is reposted with permission.
In Chapter 5 Nicolini takes a look at how practice theories have been informed by activity theory. Activity theory was pioneered by the psychologist Lev Vygotsky in the 1920s and 1930s. Since Vygotsky, activity theory has grown and evolved in a variety of directions that are all characterized by attention to the role of objects and to the role of conflict or dialectic in human activity. Nicolini focuses specifically on cultural and historical activity theory, which centers on practice and has been picked up in organization and management studies.
Things start off by talking about Marx again, specifically the description of work in Das Kapital, where work is broken up into a set of interdependent components:
- the worker
- the material upon which the worker works
- the instruments used to carry out the work
- the actions of the worker
- the goal towards which the worker works
- the product of the work
The identity of the worker is a net effect of this process. Vygotsky and other activity theorists took these rough categories and refined them. Vygotsky in particular focused attention on mediation, or how we as humans typically interact with our environments using cultural artifacts (things designed by people) and that language itself was an example of such an artifact. These artifacts transform the person using them, and the environment: workers are transformed by their tools.
Instead of focusing on individual behavior, activity theorists often examine how actions are materially situated at various levels: actions, activities, and operations, which are a function of thinking about the collective effort involved. This idea was introduced by Leont’ev (1978). Kuutti & Bannon (2014) is cited a few times, which is interesting because that paper is how I found out about Nicolini in the first place (small world). To illustrate the various levels, Leont’ev gives the example of using the gears in a car with manual transmission: a person starts out performing the individual actions of shifting gears as they learn, but eventually these become automatic operations that are performed without much thinking during other activities such as speeding up, stopping, going up hills, etc. The operations can also be dismantled, reassembled, and recomposed to create new actions. I’m reminded of push-starting my parents’ VW Bug when the battery was dead. The example of manual transmission is particularly poignant because of the prevalence of automatic cars today, where those shifting actions have been subsumed or embodied in the automatic transmission. The actions can no longer be decomposed, at least not by most of us non-mechanics. It makes me wonder briefly about what power dynamics are embodied in that change.
It wasn’t until Engeström (1987) that the focus came explicitly to bear on the social. Yrjö Engeström (who is referenced and linked in Wikipedia, but there is no article for him yet) is credited with starting the influential Scandinavian strand of activity theory and helping bring it to the West. The connection to Scandinavia makes me think about participatory design, which came from that region, and what connections there are between it and activity theory. Also, action research seems similarly inflected, but perhaps it’s more of a western rebranding? At any rate, Engeström got people thinking about an activity system, which Nicolini describes as a “collective, systemic, object-oriented formation”, and which is summarized with this diagram:

Activity System
This makes me wonder if there might be something in this conceptual diagram from Engeström for me to use in analyzing my interviews with web archivists. It’s kind of strange to run across this idea of object-oriented again outside of the computer science context. I can’t help but wonder how much cross-talk there was between psychology/sociology and computer science. The phrase is also being deployed in humanistic circles with the focus on object oriented ontology. It’s kind of ironic given how object-oriented programming has fallen out of favor a bit in software development, with a resurgence of interest in functional programming.
Kuutti, K., & Bannon, L. J. (2014). The turn to practice in HCI: Towards a research agenda. In Proceedings of the 32nd annual ACM Conference on Human Factors in Computing Systems (pp. 3543–3552). Association for Computing Machinery. Retrieved from http://dl.acm.org/citation.cfm?id=2557111
Leont’ev, A. N. (1978). Activity, consciousness, personality. Prentice Hall.
It all started when I polled some librarians about recent permission fees paid for journal articles, just to have more background on the current state of interlibrary loan. If permission fees were unreasonably high, it might be a data point to share if the House Judiciary Committee on the Courts, Intellectual Property, and the Internet considers the U.S. Copyright Office’s senseless proposal to rewrite Section 108. I expected to be shocked by high permission fees—and I was—but I also discovered something else that I just had to share.
I received a few examples from librarians regarding a particular journal. One in particular struck me. “I received a request today for a five page article from The Journal of Nanoscience and Nanotechnology and while processing it through ILLiad, the Copyright Clearance Center (CCC) indicated a fee of $503.50. So that would be a $100 a page — call me crazy, but something doesn’t seem right to me with that fee. I went to the publisher’s website and the article is available for $113, just over $20 a page.”
I then asked CCC to clarify why an article from CCC was five times the cost of the very same article direct from the publisher. I received a quick response from CCC that said “Unfortunately, the prices that appear in our system are subject to change at the publishers’ discretion. CCC only processes the fees that the publisher provides us.”
I discovered that the publisher—who allegedly sets the price of the permission fee—also used Ingenta document delivery as an additional online permissions service. Just as the librarian said, Ingenta only charged $113 (which is still a big number for a five-page article). I contacted the journal editor and asked about the difference, and he responded immediately via email, “You are right that article is available for $113 from Ingenta. Just download from the Ingenta website.”
The difference in price can only be explained as a huge markup by CCC. Surely processing a 5-page article request cannot cost CCC an additional $400. Think about it. CCC is giving the rights holder $113 and taking the other $390.50. Deep pockets, right?
But wait, there’s more. I discovered that the publisher of the journal is American Scientific Publishers, a publisher on the predatory journal blacklist. (Holy cow!) Predatory journals are bogus journals that charge publication fees to gullible scholars and researchers to publish in a journal essentially posing as a reputable publication. With no editorial board and no peer review, academics are duped into publishing with a journal they believe to be trustworthy.
Here’s where we are at. CCC is collecting permission fees five times the amount of other permission services for journal articles from likely bogus publications. Are they sending any of the permission fees collected to the predatory journal publishers? And if they are, isn’t this a way to help predatory journals stay in business? Trustworthy publishers surely would not like that. In any case, with predatory journals numbering in the thousands, CCC has discovered a very large cash cow.
For years, the CCC masqueraded as a non-profit organization until the Commissioner of Internal Revenue caught up with them in 1982, in Copyright Clearance Center, Inc. v. Commissioner of Internal Revenue. Now that CCC is a privately held, for-profit company, we have limited information on its financials, but we do know that in 2011 (according to a CCC press release), they distributed over 188 million dollars to rights holders. That’s a big number from five years ago. How much money they pocketed for themselves is unknown, but I think we can rest assured that it was more than enough to jointly fund (with the Association of American Publishers) Cambridge University Press et al v. Patton et al, a four year-long litigation against Georgia State University’s e-reserve service. (They lost, but are requesting an appeal).
CCC is making a lot of money collecting permission fees, even on public domain materials and disreputable journal publications. Their profit margin could be as high as Elsevier’s! Academics are duped by predatory journals that are apparently doing fairly well financially. Libraries are paying high permission fees from the CCC unless they know to pay the predatory journal directly, keeping the predatory journal people in the black. As if the traditional scholarly communication cycle could get any more absurd!
As we count down to the annual Lucene/Solr Revolution conference in Boston this October, we’re highlighting talks and sessions from past conferences. Today, we’re highlighting Solr Committer Ramkumar Aiyengar’s talk, “Building the News Search Engine”.
Meet the backend which drives News Search at Bloomberg LP. In this session, Ramkumar Aiyengar talks about how he and his colleagues have successfully pushed Solr into uncharted territories over the last three years, delivering a real-time search engine critical to the workflow of hundreds of thousands of customers worldwide.
Ramkumar Aiyengar leads the News Search backend team at the Bloomberg R&D office in London. He joined Bloomberg from his university in India and has been with the News R&D team for nine years. He started working with Apache Solr/Lucene four years ago, and is now a committer to the project. Ramkumar is especially curious about Solr’s search distribution, architecture, and cloud functionality. He considers himself a Linux evangelist, and is one of those weird geeky creatures who considers Lisp beautiful and believes that Emacs is an operating system.
Building a Real-Time News Search Engine: Presented by Ramkumar Aiyengar, Bloomberg LP from Lucidworks
Join us at Lucene/Solr Revolution 2016, the biggest open source conference dedicated to Apache Lucene/Solr on October 11-14, 2016 in Boston, Massachusetts. Come meet and network with the thought leaders building and deploying Lucene/Solr open source search technology. Full details and registration…
Presenter: Hui Zhang
Tuesday, October 11, 2016
11:00 am – 12:30 pm Central Time
Librarians and repository managers are increasingly asked to take a data-centric approach to content management and impact measurement. Usage statistics, such as page views and downloads, have been widely used to demonstrate repository impact. However, usage statistics alone limit your ability to identify user trends and patterns, such as how many visits are contributed by crawlers, originate from a mobile device, or are referred by a search engine. Knowing these figures will help librarians optimize digital content for better usability and discoverability. This 90-minute webinar will teach you the concepts of metrics and dimensions, along with hands-on activities showing how to use Google Analytics (GA) on library data from an institutional repository. Be sure to check the details page for takeaways and prerequisites.
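The metrics-versus-dimensions distinction at the heart of the webinar can be sketched in a few lines of Python. This is a hypothetical illustration, not webinar material: the CSV below merely imitates a GA-style export, with deviceCategory and source as dimensions and pageviews as the metric, and all figures are invented.

```python
import csv
from collections import defaultdict
from io import StringIO

# Hypothetical export: each row pairs two dimensions (deviceCategory, source)
# with one metric (pageviews). The numbers are made up for illustration.
GA_EXPORT = """deviceCategory,source,pageviews
desktop,google,1200
mobile,google,450
desktop,(direct),300
mobile,twitter.com,50
tablet,google,100
"""

def pageviews_by_dimension(csv_text, dimension):
    """Aggregate the pageviews metric across one chosen dimension."""
    totals = defaultdict(int)
    for row in csv.DictReader(StringIO(csv_text)):
        totals[row[dimension]] += int(row["pageviews"])
    return dict(totals)

by_device = pageviews_by_dimension(GA_EXPORT, "deviceCategory")
total = sum(by_device.values())
mobile_share = by_device["mobile"] / total
print(by_device)              # {'desktop': 1500, 'mobile': 500, 'tablet': 100}
print(f"{mobile_share:.0%}")  # 24%
```

Grouping the same metric by a different dimension (for instance, source) answers a different question, such as how many visits arrive via a search engine, without collecting any new data.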
Hui Zhang is the Digital Application Librarian at Oregon State University Libraries and Press. He has years of experience in generating impact reports for major platforms such as DSpace and Hydra Sufia using Google Analytics or a local statistics index. Other than repository development, his interests include altmetrics, data visualization, and linked data.
Social Media For My Institution; from “mine” to “ours”
Instructor: Plamen Miltenoff
Starting Wednesday October 19, 2016, running for 4 weeks
Register Online, page arranged by session date (login required)
Online Productivity Tools: Smart Shortcuts and Clever Tricks
Presenter: Jaclyn McKewan
Tuesday November 8, 2016
11:00 am – 12:30 pm Central Time
Register Online, page arranged by session date (login required)
Questions or Comments?
For questions or comments, contact LITA at (312) 280-4268 or Mark Beatty, firstname.lastname@example.org
I recorded this episode at 2 a.m. this morning, because I’ve been feeling pretty good about the consistency of this podcast lately and by gosh I am not going to ruin it over a little something like sleep. No fooling, I am pretty entertained. This one’s a shorty, in which I make some enemies and defend the use of carousels on behalf of actually good user experiences – maybe.
Also, thank you for your kind reviews! Your brief reviews wherever you listen to LibUX make the show easier to discover. Listen and please subscribe!
- 6 pm: rode bike to Regents Drive and Rte 1
- Noting that the campus is very traditional red brick
- Heavy traffic north and south
- Lots of students walking around, grabbing food, jogging
- People waiting for bus going south
- People going from one store to another: Target to bagel place
- Mostly young people, late teens/early 20s
- People walking in pairs, in groups, and alone
- Lots of mobile devices out and headphones
- Walking with pizza takeout
- 16 people went into Target in 10 minutes at 6:20
- Two women having a 15-minute conversation in front of Target
- Inside: girls talking about cooking dinner
- Fresh meat and veg; people having convos in the produce aisle
- Walking with baskets
- Frozen food; cooking equipment, towels, blankets; phones & accessories; pharmacy
- Left down College Ave: sushi place that is open late, beauty shop, pizza, cigarettes, Thai restaurant, place for lease
- Walking a dog leads to convo
- Surveillance
- Out behind, lots of parking; lots of takeout cars
- Fraternity; Japan center
- Person with shopping bag
- Feeling old
- Ledo restaurant on Knox fairly busy
- Lots of parking decks, above 12
- Sorority sisters all dressed the same (blue jeans and black) outside their house
- Red brick sororities seem like the university bldgs
- Princeton Ave gives way to what looks like more residential
- Meor Maryland house
- Playing basketball, hoop in parking lot
- Running with takeout ("wait for me"), putting sports equipment in car; 3 20-somethings
- More fraternity mixed with residential
- People walking away from campus
- Newly paved road
- Vacant lot being turned into housing near where the notice was; looks like they are building
- Girls driving and singing loudly with music
- Sound of highway and trains at Norwich
- Bungalow
- First political sign: Trump
- More mopeds than usual
- Police auxiliary 6
- Small apt complex, grey units
- NY/NJ/PA plates
- Big square with Greek orgs along perimeter, guys playing frisbee
- UMD police station nearby with parked police cars
- Zipcar pickup
- Residential parking looks full
- Gym inside parking lot; the building has apartments
- Back where I started
- Noticing it is one contiguous new building, must have been built at the same time
- What students get in here? What is the process?
- Weird to have place for lease across the road
- Why is Landmark written on the front?
- Only saw one family out with a stroller
- Zags tee bikes
- Traffic north slower after 7
- Nandos is packed
- Parking lot full in Chipotle shopping area
- 20 secs to cross Rte 1 after waiting like 4 mins
- 2 empty stores next to 711, prune real estate
- South Campus Commons, newer red brick
- Music and grilling
- People wandering, walking
- Walking with takeout
- Emptying trash in recycling
- Busy bus stop, 115 bus
- 2 women: "I picked up hitchhikers in Iceland"
- 4 Chinese girls speaking in Chinese
- Everyone getting on the 117, like 20-30 people
- Cookie store delivers until 3am, very busy at 7
Chapter 4 focuses on the idea of practice as something that is tied to tradition and community, which is something Nicolini sees Giddens and Bourdieu departing from. Nicolini is presenting this chapter mostly in order to critique the idea, because its focus on people transmitting ideas to each other, when left unexamined, tends to give solidity to social actors and groups:
I will argue that while a coherent theory of learning and transmission is a requisite element of any theory of practice, there is a fine balance to be struck between recognizing that all practices need to be recognized by a group of practitioners, and the reification of such a collective into a social body that exists independently of the practice. (p. 78)
Socialization (family and schooling) is important to the work of Durkheim, who influenced Giddens. Apprenticeship is another concept that has been used to explain how practices are transmitted–but it requires the master/pupil power dynamic, and hence the acceptance of inequality of social positions. It is also more limited in that it is focused primarily on learned skills of craftsmen or artists.
Legitimate Peripheral Participation (LPP) is a term introduced by Lave & Wenger (1991) that attempts to take apprenticeship out of the particular historical environments (the craftsman’s shop) and explain apprenticeship as a learning process. They do this by making it essential that the learner take responsibility for the thing they are doing – this is what makes it a practice. Nicolini cites Foucault in pointing out that this acceptance of responsibility also means an acceptance of the social order and power dynamics present in it. It’s interesting that the term community of practice was first introduced in Lave & Wenger (1991) as well. Well, at least for me since I find myself using that phrase quite a bit. The idea of apprenticeship is decentered, as not only happening between master and apprentice, but includes advanced novices, other apprentices, other master craftsmen, and the material artifacts used. So practice becomes socially situated.
Apparently Lave & Wenger (1991) gave rise to many ethnographic studies of situated learning that looked at learning as a social phenomenon rather than something that happens inside someone’s head. Nicolini sees two drawbacks to LPP. The first is similar to his criticism of Bourdieu’s idea of habitus: it fails to account for non-incremental change in a convincing way. The second is that it doesn’t take into account the wider socio-historical context, and specifically the role that power, ideology, and domination play in practice. This criticism can be found in Contu & Willmott (2000).
It is clear that Nicolini doesn’t particularly like the term community since he launches into a critique of its fuzziness, morality and the way that it is used ideologically to define groups of people in order to obscure power, conflict and differences. He references Foucault (1966) by calling community a discursive formation that controls what can and cannot be talked about. He sees the use of the term community with practice as problematic, because one obscures what the other is attempting to make clear. It might be interesting to look closer at this criticism, especially since I have used the term community of practice myself so often. Nicolini says that Handley, Sturdy, Fincham, & Clark (2006) has a good review of the debate.
With these criticisms in mind it does still seem like Wenger (1998) has some useful concepts for the study of practice in the idea of situated learning, which involves:
- mutual engagement
- communal negotiation
- shared repertoire
- shared history
- boundaries (Star & Griesemer, 1989)
Nicolini makes a case for dropping the use of community and instead simply talking about practice, because of the way community obscures processual, social, temporary and conflictual properties. He seems to be saying that communities do exist, but they are an effect of practices in operation. Making communities the unit of analysis obscures the way that practices create communities. But then he goes on to say that it’s not practical to remove it because it is such a useful term in management circles. More importantly it does highlight the importance of shared practices, that things don’t just happen in our heads–they are social.
Nicolini cites Barley & Orr (1997) to explain how the phrase “community of practice” can in fact be a way for “semi-professions” to legitimate themselves–which is kind of an interesting idea. In fact Barley & Orr (1997) looks like it could be a very useful example of an ethnographic study of technical work, that could possibly be a useful model for my own examination of web archiving work. Here’s the summary from Amazon:
Between Craft and Science brings together leading scholars from sociology, anthropology, industrial relations, management, and engineering to consider issues surrounding technical work, the most rapidly expanding sector of the labor force. Part craft and part science, part blue-collar and part white-collar, technical work demands skill and knowledge but is rarely rewarded with commensurate status or salary. The book first considers the anomalous nature of technical work and the difficulty of locating it in any conventional theoretical framework. Only an ethnographic approach, studying the actual doing of the work, will make sense of the subject, the authors conclude. The studies that follow report daily practice filled with disjunctures and ironies that mirror the ambiguities of technical work’s place in the larger culture. On the basis of those studies, the authors probe questions of policy, management, and education. Between Craft and Science considers the cultural difficulties in understanding technical work and advances coherent, practice-oriented insights into this anomalous phenomenon.
Now I’m kind of wondering if I need to adjust what I read next this semester…

References
Barley, S. R., & Orr, J. E. (1997). Between craft and science: Technical work in US settings. Cornell University Press.
Contu, A., & Willmott, H. (2000). Knowing in practice: A “delicate flower” in the organizational learning field. Organization, 7(2).
Foucault, M. (1966). The order of things: An archaeology of the human sciences. Pantheon.
Handley, K., Sturdy, A., Fincham, R., & Clark, T. (2006). Within and beyond communities of practice: Making sense of learning through participation, identity and practice. Journal of Management Studies, 43(3), 641–653.
Lave, J., & Wenger, E. (1991). Situated learning: Legitimate peripheral participation. Cambridge University Press.
Star, S. L., & Griesemer, J. R. (1989). Institutional ecology, ‘translations’ and boundary objects: Amateurs and professionals in Berkeley’s Museum of Vertebrate Zoology, 1907-39. Social Studies of Science, 19(3), 387–420.
Wenger, E. (1998). Communities of practice: Learning, meaning, and identity. Cambridge University Press.
The fire is a thorough & voracious reader.
Page by page my old manuscript turns gray & brittle
& when the mist thickens into rain,
the smoking pile emits a long thin sigh.
Dave Bonta, Book-burning
What happens when Elsevier – one of the most profitable publishers of scholarly journals and research materials – buys an open access working paper platform like the Social Science Research Network (SSRN)? Michael Wolfe from Authors Alliance will explore this topic at our next CopyTalk on October 6th, 2016.
After being acquired by Elsevier, SSRN has made headlines following the discovery that the popular pre-print and working paper service had started pulling user-posted works following its own, internal copyright review process. Authors Alliance has been among those to condemn the actions, and to question SSRN’s continuing reliability as a provider of important scholarly infrastructure. In this webinar, Authors Alliance executive director Mike Wolfe will discuss the controversy, Authors Alliance’s response, and what we can learn from the experience about copyright and Digital Millennium Copyright Act best practices for hosts of user-submitted scholarship.
Mike Wolfe is the executive director of Authors Alliance and a copyright research fellow at the University of California, Berkeley, School of Law. Mike has a B.A. from Harvard and J.D. from Duke, and is licensed to practice law in California.
Date/Time: Thursday, October 6, 2:00 PM (ET)
Sign in here as a guest. You’re in.
CopyTalk is a free monthly webinar brought to you by the copyright education subcommittee of ALA’s Office for Information Technology Policy.
It’s a new academic year and we are excited to be kicking it off with great educational projects underway, new additions to our team, and a brand new office suite at Boston Public Library (BPL) to officially serve as DPLA’s national headquarters.
We are thrilled to announce that DPLA now calls BPL’s Digital Partners suite home. The Digital Partners space, which we share with two of our hubs, Digital Commonwealth and Internet Archive’s Boston Scanning Center, as well as BPL’s Digital Services team, was completely redesigned as part of Boston Public Library’s $78 million renovation of the historic Central Branch in Copley Square, unveiled earlier this summer. At 6,000 square feet, the Digital Partners space represents BPL’s continued commitment to digitization, digital services, and digital collaboration with state-of-the-art facilities, technology, and room to expand.
For DPLA, our new office space represents a huge step forward and is designed to meet the needs of our growing team. We now have access to a conference room wired with a large screen and a webcam to facilitate easy collaboration between our staff in Boston and staff working across the country. Two smaller private conference rooms allow for breakout meetings and small team video calls. When cross team collaboration is in order, our open floor plan now provides a great layout for both conversation and individualized work space.
This week, DPLA staff gathered from around the country to map out plans for the coming months and, thanks to our new conference room, we were able to watch together as Carla Hayden was sworn in as the next Librarian of Congress.
To see more great photos and new features unveiled as part of Boston Public Library’s renovation, check out this feature in Boston Magazine. We would also like to send a special shout out and sincere thank you to Boston Public Library for being such a generous host and collaborator to DPLA!
Giddens is reportedly one of the most influential sociologists of the 20th century. His idea of structuration draws on the work of Marx, Weber and Durkheim. Originally I was going to focus specifically on structuration in my independent study, but I decided against it because of the breadth of Giddens’ influence, and the idea that it might be more useful to focus on the practice theory angle, which conceptually ties together Giddens’ work with the work of other folks in the fields of IS and ICT. Also, I’ll admit, once I discovered he served as an advisor to Tony Blair my interest waned a little bit.
Giddens uses the idea of structuration to resolve dualist tensions in social theory related to subjectivity and objectivity. Structuration is a recursive model of society defined by practices that are composed of actors, rules and resources.
- actors: the producers of activity, who draw on rules and resources
- rules: generalized procedures for action, not to be confused with instructions or prohibitions (Wittgenstein)
- resources: the ways in which power, or the ability to mobilize people, is manifested (Marx)
Nicolini uses language as an example. Spoken language and the rules of language mutually constitute themselves. Spoken language is based on rules of language, but the rules of language would not exist if they were not enacted and reinvented in spoken language. So there’s the recursion.
Actors are required to be knowledgeable and reflexive in structuration theory. However their knowledge and abilities are finite which is how change and mutation can get in. Giddens also emphasizes that activity is always situated in time and place, which shows his connection to Marx’s historical materialism. And finally practices are related to each other–they form interdependencies and accrete which manifests as structures and systems. Sometimes practices may result in structures that contravene each other which can result in reorderings and revolutions in practice.
Another interesting concept Giddens introduces is practical and discursive consciousness. Practical consciousness is “saturated with taken for grantedness” and has a lot of parallels to the idea of tacit knowledge, and to Heidegger’s idea of ready-to-hand that we saw earlier.
Could Giddens’ rules be comparable to algorithms? Who follows the rules in either case? Could people using technologies that embody algorithmic rules be thought of as following the rules? Or does the level of indirection break that? When the algorithms break, they become visible, kind of like infrastructure. I wonder if the controversy involving adaptive structuration theory (DeSanctis & Poole, 1994) is centered around whether the rules can be written down. I also wonder if focusing on the site of practice provides a way out of some of this controversy about the prescriptive application of structuration theory.
According to Nicolini, uptake of Giddens was low because of the rise of postmodernism at the same time, which eschewed the kind of theory building that Giddens was doing. Postmodernists were also weary of the conservative implications of his system (p. 50). His work was highly theoretical, and difficult to put into practice. He actively dissuaded people from using his concepts in their own research! They were to be used as sensitizing principles. It sounds like I could read Giddens (1991) for more about this. Giddens resisted the idea that material artifacts could be structural resources. This seems rather odd, and perhaps at odds with ANT. Orlikowski (1992) introduced the use of structuration theory into organizational studies and ICT. But in Orlikowski (2000) she moved away from it, towards practice theory. These might be useful transitions to focus on later in the semester.
Giddens appeared too busy developing a theory of society and individuals which put everything in the right place, portrayed people as reflexive and rational, and allowed almost no room for pathos, emotions, disorder, conflict, and violence. Moreover, Giddens’ structurationism failed to inspire a community that had been held to ransom for decades by the boxes, arrows, and loops of system theory. In spite of its innovative, strong, processual character, Giddens’ system theory looked suspiciously like more of the same. Finally, critical authors were somewhat unhappy with Giddens’ flat and a-conflictual view of the social, and were weary of the potentially deeply conservative implications of structurationism.

Bourdieu
According to Nicolini, Bourdieu’s core point is that representing practice, or praxeology as he called it, is not enough (anthropology)–practice needs to be explained (sociology). I interpret this as saying that descriptions of practice must reflect on the ways in which description is being performed: what is being made visible, and what is being made invisible. These are important things for Bourdieu. I find this explanation much more compelling than the strong/weak distinction that Nicolini makes in the introduction.
Habitus is a key concept or theme throughout all of Bourdieu’s work. It helps get around the problems of objectivism and subjectivism. I feel like understanding objectivism as Nicolini describes it would involve more reading, especially Lévi-Strauss and the Structuralists. Habitus isn’t a way of understanding the world – it’s more a way of being in the world. Habitus relates to the body, in ways that are similar to Merleau-Ponty’s ideas of schema and habit, as well as Polanyi’s idea of personal tacit knowledge. Schema and habit in particular really remind me of Dewey’s ideas about norms. Schema is compared to the feeling of driving a car, where the car is an extension of the body’s corporeal schema. It’s only when that meshing breaks down that the schema is noticed. Again breakdown plays an important role. It seems like this meshing is the content domain of HCI.
Tacit knowledge was used by Polanyi to explain how scientists work. Explicit knowledge is traditional scientific knowledge exemplified by the scientific method. But tacit knowledge is an awareness of knowing how to do something that defies analytical description. “We know much more than we know we know.”
Bourdieu summarizes his idea of practice using the following formula, which is described in Bourdieu (1984), p. 101:

(habitus * capital) + field = practice
Capital is anything rare and worthy of being sought after. It can be material and symbolic. Symbolic capital in particular sustains domination, because it includes the power to name, and renders the entire process invisible. Fields are domains or structured spaces in which the distribution of capital is disputed.
Habitus is a group phenomenon.
Lau (2004) is cited quite a bit for distinguishing and criticizing these ideas – which might be useful to read.
Ways of studying practice (or rather, what not to do):
- You need to participate in “daily endeavors”. You need to live, not represent. You also need to dismantle or sidestep the power relation of the Academy over the practitioner.
- Simply providing a description of practice is not enough. You need to describe how the practices are propagated and work together.
Reflexivity is important to Bourdieu – since he saw how his own work was itself problematic in the way that it theorized capital in metaphysical terms. Michel de Certeau criticizes Bourdieu’s split theoretical personality, and points to levels of practices: dominant ones that are organized by institutions, and many minor ones that operate as micro-tactics of resistance, local deformations, and reinvention. I almost put Certeau (2011) on the reading list for this semester after reading about him in an essay by Alan Liu. It’s just a matter of time, since de Certeau’s approach, much like Latour’s, seems to be a great bridging work between the humanities and social sciences, which is kinda where I live.
Nicolini sees Bourdieu’s idea of habitus as not accounting for practices, and suggests that perhaps the very idea of trying to theorize practices is at the heart of the problem. The solution is the problem. Habitus is self-contradictory: it says that practices are historically and socially contingent, but it operates at a theoretical level that is outside place and time. Bourdieu fails to account for change (only reproduction), mediation (technology), and reflexivity as part of practice.
So, I’m left feeling Bourdieu has some quite subtle theoretical ideas, almost too subtle – but from this brief introduction I feel much more aligned with his politics than with Giddens. Bourdieu’s attention to everyday life is attractive:
Bourdieu directs our attention to the fact that practice is the locus of the social reproduction of everyday life and symbolic orders, of the taken-for-grantedness of the experienced world and the power structure that such a condition both carries and conceals. (p. 69)

References
Bourdieu, P. (1984). Distinction: A social critique of the judgement of taste. Harvard University Press.
Certeau, M. de. (2011). The practice of everyday life (3rd ed.). University of California Press.
DeSanctis, G., & Poole, M. S. (1994). Capturing the complexity in advanced technology use: Adaptive structuration theory. Organization Science, 5(2), 121–147.
Giddens, A. (1991). Modernity and self-identity: Self and society in the late modern age. Polity Press.
Lau, R. W. (2004). Habitus and the practical logic of practice: An interpretation. Sociology, 38(2), 369–387.
Orlikowski, W. J. (1992). The duality of technology: Rethinking the concept of technology in organizations. Organization Science, 3(3), 398–427.
Orlikowski, W. J. (2000). Using technology and constituting structures: A practice lens for studying technology in organizations. Organization Science, 11(4), 404–428.