Archive for the 'Linked Data' Category

Penguin Archive software

Tuesday, November 13th, 2012

The open source software developed as part of the Bricolage project to manage the workflow of publishing EAD data as Linked Data is now available from GitHub: https://github.com/ilrt/bricolage

It comprises three components:

  • Bricol: the management interface
  • Fuseki: a deployment of Fuseki configured for use with Bricolage
  • Elda: a deployment of Elda configured for use with Bricolage

Each of these components is a maven project, with build instructions.

Technical Approach: Penguin Archive

Tuesday, November 6th, 2012

The workflow architecture we adopted to support the publishing of Linked Data from the Penguin Archive is illustrated here:

The workflow breaks down into these steps:

  1. Export from CALM. This is achieved via the CALM desktop client and managed by the archivist administrator. It produces EAD XML format files.
  2. Import into the Linked Data Publishing Service (LDPS). Again managed by the archivist: a browser-based interface allows one EAD file to be uploaded per collection.
  3. Convert to RDF. The LDPS produces an RDF version of a collection’s EAD document by running it through a conversion XSLT stylesheet. Both the EAD and RDF documents are simply files held on the LDPS server.
  4. Publish Linked Data. The collection’s RDF is pushed (using SPARQL Update over HTTP) to our triple store (Fuseki).
  5. Link data. The LDPS also lists the people published in the previous step (via a SPARQL query run against the triple store). These can be used to query a third-party authority service (VIAF) with the aim of identifying a matching record for each individual there. If the archivist selects a record, that link is asserted into our triple store.
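Step 4 above can be sketched as a SPARQL 1.1 Update request over HTTP. In this sketch the Fuseki endpoint URL and the per-collection graph URI are assumptions, not the project’s actual configuration:

```python
import urllib.request

# Hypothetical Fuseki update endpoint; the real deployment will differ.
FUSEKI_UPDATE = "http://localhost:3030/bricolage/update"

def build_update(graph_uri, ntriples):
    """Wrap a block of N-Triples in a SPARQL Update that replaces the
    named graph holding one collection's RDF."""
    return ("DROP SILENT GRAPH <%s> ;\n"
            "INSERT DATA { GRAPH <%s> {\n%s\n} }"
            % (graph_uri, graph_uri, ntriples))

def publish(graph_uri, ntriples):
    """POST the update to Fuseki, using the Content-Type defined by the
    SPARQL 1.1 protocol."""
    req = urllib.request.Request(
        FUSEKI_UPDATE,
        data=build_update(graph_uri, ntriples).encode("utf-8"),
        headers={"Content-Type": "application/sparql-update"},
    )
    return urllib.request.urlopen(req)
```

Replacing the whole named graph on each publish keeps re-publishing an updated collection idempotent.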

At this point the data is available as Linked Data via the Linked Data API served from an instance of Elda. This data is therefore available in formats suitable for machine-processing, but as an example here is the human-friendly HTML view for the collection at http://tc-bricol.ilrt.bris.ac.uk/elda/api/penguin/id/archivalresource/gb-3-dm1187. (Note that this view is not intended as the primary interface to this material and as such not all the Elda functionality works at present.)
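Elda conventionally exposes alternative renderings of an item via filename-style suffixes (e.g. .json, .ttl); which suffixes are enabled on this particular deployment is an assumption. A client could build format-specific request URLs like so:

```python
# Item URI from the example above; the set of enabled format suffixes
# on this Elda deployment is an assumption.
ITEM = ("http://tc-bricol.ilrt.bris.ac.uk/elda/api/penguin"
        "/id/archivalresource/gb-3-dm1187")

def render_url(item_uri, fmt):
    """Append a format suffix such as 'json' or 'ttl' to an item URI."""
    return "%s.%s" % (item_uri, fmt)
```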

The software developed to support this process will be detailed in a subsequent post.

Lessons learned: summary

Monday, October 15th, 2012

At the final Advisory Group meeting towards the end of July 2012, the following points were made in relation to the evaluation of the quality of Linked Data produced and techniques used:

  • Quality depends not only on time-consuming human-crafted links to third party datasets, but also on the quality of those datasets (the project had identified some potential mistakes in VIAF)
  • Export processes such as those developed for the Penguin Archive use case were not really sustainable with the limited resources that archives usually have
  • Limitations of some parts of Drupal mean that, in the Geology use case, we may not be able to make as much of the Linked Data as we would like
  • Although we have made considerable efforts to make the user interface to the export and publication processes as smooth as possible, they are still not integrated enough to be adopted in normal working practice
  • There is a considerable learning curve in understanding Linked Data and what is needed to create and publish them, which requires intensive support and/or time to read around the subject
  • From a technical perspective, the project has highlighted how much impact on the data the production of Linked Data has – it’s not a simple conversion process
  • The extent to which Linked Data has the potential to ‘draw in’ new audiences for collections is more limited than envisaged, as collection level descriptions are already available in the Archives Hub, ranked highly in Google searches and accessible via Calm
  • It needs more resource, more space and more time!

In terms of methodology, the bringing together of different use cases and technical expertise had worked well, despite learning curves on all sides. The project had been beneficial in raising awareness of Linked Data issues in the Special Collections and Geology teams, and of archival and cataloguing practice in the technical team. Geology and Special Collections were also more aware of each other’s collections and potential for working together in the future.


Lessons learned: sustainability of workflow

Monday, October 15th, 2012

The two use cases present different scenarios for the sustainability of processes for publishing Linked Data. As noted in an earlier blogpost, for the Penguin Archive, the process has been unexpectedly time-consuming. The production of Linked Data requires enhancement of the collection metadata way beyond the requirements of currently accepted archival standards and, for the majority of archivists, presents a considerable technical learning curve. The key lesson for the archives community and for those promoting the creation of Linked Data is around the limitations that archivists’ core values and practice, and their very constrained time, impose.

The Penguin Archive had benefitted from a funded cataloguing post for two and a half years to transfer paper catalogue records to Calm; one of the archivists took up to a week to create new records for publication as Linked Data as part of the project.

For both collections, free-text catalogue entries were particularly problematic; where structure is needed, time has to be spent manually extracting specific text into new fields, or on re-categorisation. Although the project coincided with the development of a new Drupal-based online catalogue for the Geology Museum, through which publication of Linked Data is largely automatic via the additional Drupal module, the extent of data cleaning required as part of the process came as a surprise. The project allowed for effort to be expended on data cleaning; without it, any Linked Data published would have been of very limited use. Even where collection data is internally structured, there are still issues in linking it with third-party datasets.

One recommendation may be for JISC to support a project to identify the amount of effort required for the creation and publication of Linked Data for different types of collections.

Overall, the message from both use cases is that the creation of Linked Data, links to authority files and to third party datasets should not be considered a quick and easy solution. Future upgrades of products such as Calm may well integrate creation and publication of Linked Data without there needing to be a separate process; without this, it is unlikely that data from collections like the Penguin Archive, where much human intervention is needed, will be published as Linked Data.

Lessons learned: linked data hosting

Monday, October 15th, 2012

The hosting review workpackage had originally identified Talis as the main option for external hosting. Talis wound up development of its external hosting platform during the Bricolage project’s lifetime, so this was no longer an option. The sustainability or otherwise of hosting and other platforms in relatively experimental areas of work is something to consider for future projects; having several options, in this case including internal hosting solutions, helps reduce the risk of not being able to deliver on project plans. The Geology data will in any case be hosted as an integrated part of the museum’s new enhanced online presence. The Penguin Archive data will remain as a snapshot, also hosted on internal servers.

Users and use cases – overview

Thursday, June 28th, 2012

The Bricolage project will publish catalogue metadata as Linked Open Data for two of the University of Bristol’s most significant collections: the Penguin Archive and the Geology Museum (site in development). We will also encode resource microdata into the Geology Museum’s forthcoming online catalogue with the aim of improving collection visibility via major search engines and develop two simple demonstrators to illustrate the potential of data linking and reuse.

The project’s users are therefore the archive and museum staff responsible for cataloguing and managing these important collections. The Linked Data production workflows need to be easy to use, to enable embedding in the collection teams’ routines and to maximise the sustainability of export and publication processes beyond the end of the project lifetime, so that Linked Data can continue to be produced for reuse. Separate blog posts describing the use cases for the Penguin Archive and for the Geology Museum indicate how the project affects our users and how they are being engaged and are reacting to the project.

Users and use cases: The Geology Museum

Thursday, June 28th, 2012

The Geology Museum (site under development) is based in the University of Bristol’s School of Earth Sciences. It holds historically and scientifically important collections that are unique to the institution. The museum holds an estimated 100,000 museum specimens, many of which are unique and of international importance. Highlights include: an estimated 20,000 invertebrate fossils including material with important historical associations, over 4,500 mineral specimens, including many display-quality items from mines that are now inaccessible, over 3,000 vertebrate fossils and casts and the Fry collection of over 4,000 invertebrate and plant fossils from the UK. There is also an extensive teaching collection of 16,000 specimens. Over the past 15 years 41,420 digital records have been produced on the basis of historic museum registers, card index catalogues and specimen labels. The creation of digital metadata has focused on valuable specimens and collections of national or international importance. These records represent about two thirds of the entire collection. Each metadata record contains information in 30 categories, 18 of which will be published by this project.

The School of Earth Sciences is already undertaking work to enhance the online presence of the Geology Museum by improving the museum website and online access to the collections. Included in this work is the migration of the existing collection metadata into a Drupal-backed system, which can be used to publish Linked Data automatically.

Initial work focused on moving data from the existing spreadsheet format into the Drupal database. Issues arise from the formats used, including free text, and from the need to restrict terminology. There is a huge amount of data but it is largely unstructured, so it requires manual effort to review and test. Unlike the Penguin Archive use case, the export and publication processes are largely automated by Drupal’s in-built modules for handling RDF, returning it in response to a Linked Data request. The aim is to embed data from the catalogue in the Geology Museum’s new public website using schema.org metadata in the HTML of the site, so that large search engines can find structured data.
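As an illustration of that schema.org embedding, here is a minimal sketch; the type and property choices, and the specimen values, are hypothetical, and the site might equally use microdata attributes rather than a JSON-LD block:

```python
import json

def specimen_markup(name, description, identifier):
    """Render a catalogue record as a schema.org JSON-LD block suitable
    for embedding in a page's HTML. The type and properties here are
    illustrative, not the project's actual mapping."""
    record = {
        "@context": "http://schema.org",
        "@type": "CreativeWork",
        "name": name,
        "description": description,
        "identifier": identifier,
    }
    return ('<script type="application/ld+json">\n%s\n</script>'
            % json.dumps(record, indent=2))
```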

The Collections & Practicals Manager in the School of Earth Sciences has suggested that a map demonstrator would be useful for the Geology Museum Linked Data. She is concerned, however, that much of the geo-location data about the collection is embedded as free text in description fields, which would make it difficult to plot the data on a map consistently, if at all. She has proposed using geodata for ‘type specimen’ data for the centre of the UK, although this also raises questions about the level of resolution at which these data could be plotted: for some, the catalogue may only include data about the nearest town or village rather than a precise geolocation related to OS references. Given the Museum’s relationships with local schools and geology enthusiast groups, one way of resolving this issue – and assisting the ‘clean up’ of the data and giving information on use of the site overall – could be to invite these ‘end users’ to provide feedback and correct location data via the site. She has arranged a meeting with one such group in July which could provide a starting point for this. It will need to be made clear to any users beyond the Museum staff, however, that the demonstrators are not at ‘full service grade’.

The Collection Manager has engaged fully with the project, participating in Advisory Board meetings, 1-1 meetings with the development team and piloting and providing feedback on data migration to Drupal. The demonstrator will provide a concrete example of how Linked Data published via Drupal can be used but evaluation of the value of embedding microdata to facilitate search engine optimisation is unlikely to extend beyond the lifetime of the project.

Users and use cases: The Penguin Archive

Thursday, June 28th, 2012

The Penguin Archive, housed in the Special Collections of the University of Bristol Library, contains the archives of Penguin Books Limited from its foundation in 1935 through to the 1980s. Its wide variety of materials covers the company’s establishment and business life, social events, legal cases (particularly the Lady Chatterley’s Lover trial of 1960), exhibitions on the company’s history and the private lives of prominent figures in the early history of the company. The archive also includes a large collection of Penguin books from 1935 to date. The collection comprises 2093 archive boxes of editorial files, 466 archive boxes, 24 records management boxes and 84 box files of other archival material and approximately 30,000 book titles. The digital catalogue is held in the Special Collections CALM (Computer Aided Library Management) installation. Holdings there comprise: 123 collection level descriptions containing over 4000 individual metadata records, plus detailed digital guides to areas of the archive.

JISC has already undertaken work looking at techniques for exporting Linked Data from CALM, and the current Step Change project will ensure that Linked Data support is embedded in a future release of CALM, albeit not within the Bricolage project’s lifetime. We will follow the approach developed by the LOCAH and SALDA projects: data will be exported as EAD/XML, transformed via XSLT into Linked Data expressed in RDF/XML format, based on the XSLT stylesheet developed within LOCAH, and made available as Linked Data. A handful of collection level Penguin Archive records are already lodged with the Archives Hub. Our project will augment this data with a Linked Data set containing thousands of resource-level catalogue records, which will be linked to the Archives Hub identifiers as and when these become available.
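The stability of the resulting URIs depends on how identifiers are minted from the EAD reference codes. A minimal sketch of one such normalisation, inferred from the pattern visible in published URIs like .../id/archivalresource/gb-3-dm1187; the rule itself is a guess, not taken from the project’s stylesheet:

```python
import re

# Base URI as served by the project's Elda instance.
BASE = "http://tc-bricol.ilrt.bris.ac.uk/elda/api/penguin/id/archivalresource/"

def mint_uri(unitid):
    """Derive a lower-case, hyphen-separated slug from an EAD unitid
    such as 'GB 3 DM1187'. The normalisation is inferred from the
    published URIs, not from the project's actual conversion code."""
    slug = re.sub(r"[^a-z0-9]+", "-", unitid.lower()).strip("-")
    return BASE + slug
```

Minting the URI deterministically from the catalogue reference means re-exporting a collection always yields the same identifiers.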

Initial work in the project focused on archivists trying to add authority terms to catalogue metadata, but this proved extremely labour-intensive, more so than anticipated. The process has revealed that good authority data is needed for good Linked Data and that this needs to be taken into account when initially cataloguing collections – not an option for an existing catalogue like the Penguin Archive. Issues with the CALM export process and the stability of URIs have been reported in other project blogposts.

Early development of tools to automate the workflow of metadata review and export as far as possible indicates the need to make it easy to keep the Linked Data up to date after project funding ends. A batch upload process could be used for initial publication. The archivists confirm that the catalogue is “quite fluid” and is often updated, so ease of use and maintenance of Linked Data are important to our users. One option for increasing the automation of the publishing process could be to upload exports to a folder which is monitored for changes. This may also address concerns that users have already expressed, i.e. that “any non-trivial publishing process would not be used in practice after the project ends”. The project will aim to make the process as ‘light-touch’ as possible.
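The monitored-folder option could be as simple as polling for new or modified EAD exports and handing each one to the publishing pipeline. A sketch, in which the folder layout, file extension and polling interval are all assumptions:

```python
import os
import time

def snapshot(folder):
    """Map each EAD export in the folder to its last-modified time."""
    return {name: os.path.getmtime(os.path.join(folder, name))
            for name in os.listdir(folder)
            if name.endswith(".xml")}

def changed_files(before, after):
    """Names that are new, or whose timestamps moved, since the last poll."""
    return sorted(name for name, mtime in after.items()
                  if before.get(name) != mtime)

def watch(folder, handle, interval=60):
    """Poll the folder and pass each changed export to a handler that
    runs the rest of the pipeline (convert to RDF, push to the store)."""
    seen = snapshot(folder)
    while True:
        time.sleep(interval)
        current = snapshot(folder)
        for name in changed_files(seen, current):
            handle(os.path.join(folder, name))
        seen = current
```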

The Archivist in the University’s Special Collections notes that the primary concern of archivists is to publish sufficient metadata to enable those interested in the materials to be able to identify what exists, and to visit the Penguin Archive to use them for research, journalistic or other purposes. The Archivists have considered what would make an appropriate demonstrator for Linked Data published through the project; they would like to focus on the ‘administrative history’ of the Archive, plotting collection level records against a timeline of, for example, dates when key staff were appointed. Administrative history is a familiar archival concept so the demonstrator would be of interest both to other archivists and potentially to end users of the catalogue/Linked Data. A visual representation of the timeline list of events would need to be created manually; within the scope and timeframe of the project this will only be possible for 1-2 decades, with just some key events plotted for the whole timeline.

The Penguin Archive’s archivists have engaged fully with the project, participating in Advisory Board meetings, 1-1 meetings with the development team and piloting and providing feedback on workflow processes.

Penguin Archive workflow progress

Friday, June 1st, 2012

Progress to date: we’ve combined the components described in our previous post and produced a simple UI, illustrated below. The interface allows an administrator to:

  • Upload EAD XML files exported from CALM
  • Transform to RDF (using XSLT)
  • Publish to the triple store (Fuseki)
  • ‘Unpublish’ and delete EAD/RDF files

The triple store is fronted by a Linked Data API (Elda), so this process results in the publishing of Linked Data (a web-friendly version is shown in the screenshot below).

This seems a reasonable start, though there are some obvious next steps:

  • Bulk actions. There are over 100 collections within the archive. We’ll need at least some of: bulk upload, bulk RDF conversion and bulk publishing.
  • Automation. Again, thinking of the administrator who ends up looking after this, we should have the option of non-interactive upload through to publishing.
  • Security. There is none as yet.
  • Link suggestions & validations. Need to include a process for producing lists of suggested links to third-party data, and for user validation of those links.
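For the link-suggestion step, VIAF’s public AutoSuggest service can be queried by name. A sketch of building the request URL; only the URL construction is shown, and how candidate records would be presented to the archivist for validation is left open:

```python
import urllib.parse

VIAF_AUTOSUGGEST = "http://viaf.org/viaf/AutoSuggest"

def suggest_url(person_name):
    """Build a VIAF AutoSuggest query URL for a personal name taken
    from the EAD data; the service returns JSON candidate records."""
    return VIAF_AUTOSUGGEST + "?" + urllib.parse.urlencode(
        {"query": person_name})
```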

Bricolage: demonstrators

Friday, May 25th, 2012

At the Advisory Group meeting on 20 April, we discussed potential scope and focus of the two demonstrators that the project will develop. For the Geology Museum, we may want to focus on a demonstrator that links to promoting their work in schools, which could include a mapping feature. The Penguin Archive may want to consider a timeline demonstrator linked to a specific area of the Archive.

We looked at some examples to help refine thinking on demonstrators:

Examples of the use of a timeline:

Example of a geographical view:

The Advisory Group will finalise demonstrators to be developed at its meeting in May; the key will be in demonstrating how the use of Linked Data can enhance the collections, which may in turn encourage sustainability of the tools and processes used.