Archive for March, 2012

First steps with the Penguin Archive data

Wednesday, March 21st, 2012

Over the last few weeks, Jasper and I have met with Anna and Hannah, the archivists managing the Penguin Archive held by the University Library Special Collections Department, and I’ve had a first stab at processing some sample EAD XML data for a small subset of the collections which make up the archive.

As in the case of the metadata for the Mass Observation Archive that I worked on in the SALDA project last year, the data is held and managed within a CALM data management system, and the EAD data is generated by an export process from the CALM database. In comparison with the case of the Archives Hub, where data is aggregated from diverse sources and systems, this offers the advantage that there is much less structural variation across the documents, as the XML markup is being generated by a single common process. A second benefit is that the data content has been subject to various normalisation/validation processes within the CALM system.

I’m taking an approach similar to the one I applied in the SALDA project, taking as a starting point (though we may refine or amend this) the model, URI patterns and XSLT transform used in the LOCAH and Linking Lives projects, overriding or discarding some of the elements that are specific to the Archives Hub context, and adding (so far relatively few) elements specific to the Bristol/Penguin context. (Aside: in the Linking Lives project I’ve recently been doing some work on the transform to fix bugs, extend it slightly and generally make it a bit more “robust” in terms of the range of inputs it handles, so I felt using this version was probably the best starting point. I hope something will be available about that on the Linking Lives blog very soon.)

Also within Linking Lives, I’ve spent some time tidying up the conversion processing, wrapping it up in some shell scripts, driving it from URI lists and adding some capture of (very basic) process metadata. My scripting skills are limited and I’m sure it’s not as elegant and efficient as it could be, but I was pleased to find that I could repurpose that work and get things up and running for Bricolage with a minimum amount of tweaking, and Jasper and I will be looking at getting it onto a firmer footing over the next few weeks.

The Penguin Archive data differs from the MOA data in that it is made up of a large number of collections, exported as distinct EAD documents. However, as noted above, the export process ensures a good level of structural consistency across the set. I think there are some (relatively minor) variations in the “style” of cataloguing, and we probably need to examine a larger sample to make sure the process is coping with that, but so far, the results look pretty good.

Also in contrast to the MOA data, the Penguin data does have index terms applied – in the data I’ve seen so far, personal and corporate names following the National Council on Archives’ Rules for the Construction of Personal, Place and Corporate Names. We’ve also had access to a sample of “authority record” data exported from CALM: this gives us the name data in structured form, so by transforming it alongside the EAD data we can add that structured data to the RDF output.

Currently the URI pattern for Persons (and “Conceptualisations of Persons”) makes use of a “slug” constructed from the “authority form” of the name, e.g. the EAD construct

  <persname rules="ncarules">Schmoller; Hans (1916-1985); typographer; designer</persname>

is transformed into RDF data like the following (I’ve omitted some triples for the sake of brevity as I really just want to show the URI structures):

@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
@prefix skos: <http://www.w3.org/2004/02/skos/core#> .
@prefix concept: <> .
@prefix person: <> .

concept:schmollerhans1916-1985typographerdesigner
  a skos:Concept ;
  rdfs:label "Schmoller; Hans (1916-1985); typographer; designer"@en ;
  foaf:focus person:schmollerhans1916-1985typographerdesigner .

person:schmollerhans1916-1985typographerdesigner
  a foaf:Person ;
  rdfs:label "Schmoller; Hans (1916-1985); typographer; designer"@en ;
  foaf:familyName "Schmoller" ;
  foaf:givenName "Hans" .
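
The authority form follows the NCA-rules pattern of surname; forename (dates); epithets, which is what allows the transform to populate structured fields such as the family and given names. As a rough illustration only, here is a Python sketch of decomposing such a string; the function name and edge-case handling are my own assumptions (the project’s actual transform is XSLT, and real authority data – corporate names, titles, open date ranges – has more variation than this handles):

```python
import re

def parse_ncarules_name(value):
    """Split an NCA-rules personal name string of the form
    "Surname; Forename (dates); epithet; ..." into component parts.
    A rough sketch only: corporate names and titles are not handled."""
    parts = [p.strip() for p in value.split(";")]
    surname = parts[0]
    m = re.match(r"(.+?)\s*\((\d{4})-(\d{4})?\)", parts[1]) if len(parts) > 1 else None
    forename = m.group(1) if m else (parts[1] if len(parts) > 1 else "")
    dates = (m.group(2), m.group(3)) if m else (None, None)
    epithets = parts[2:]
    return {"familyName": surname, "givenName": forename,
            "dates": dates, "epithets": epithets}

print(parse_ncarules_name(
    "Schmoller; Hans (1916-1985); typographer; designer"))
```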

However, it’s been at the back of my mind that there is possibly some “brittleness” in the URI construction here: if there are changes to the name in the source data (e.g. the addition of a new “epithet” or “title”, or of a previously unknown date of death), then when that data is reprocessed a different URI is generated. In principle we could maintain both the old and new URIs, especially if such changes are rare, but it would be preferable to ensure from the outset that our URIs are based on data that does not change. Within the CALM system the authority records do use reference numbers for identification, which raises the question of whether those numbers might be used as the basis for these URIs. But would they offer more stability than the names? Are they stable across any internal reorganisation within CALM, or across upgrades between versions of CALM? Would they survive a future migration from CALM to some other data management system? These are questions we need to explore with Anna and Hannah before making any changes.
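
To make the brittleness concrete, here is a hedged sketch of the kind of slug construction in use; the exact normalisation in the real transform may well differ:

```python
import re

def name_slug(authority_name):
    # Lower-case and strip everything except letters, digits and hyphens --
    # an approximation of the pattern behind
    # person:schmollerhans1916-1985typographerdesigner.
    return re.sub(r"[^a-z0-9-]", "", authority_name.lower())

# The same person, before and after a date of death is added to the record,
# yields two different slugs -- and hence two different URIs:
print(name_slug("Schmoller; Hans (1916-); typographer"))
# -> schmollerhans1916-typographer
print(name_slug("Schmoller; Hans (1916-1985); typographer; designer"))
# -> schmollerhans1916-1985typographerdesigner
```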

Finally, one similarity with the MOA/SALDA data is that there are sets of resources which don’t have explicit index terms in the EAD data, but for which the names of other entities are present, typically embedded in titles – names of persons in some cases, names of publications in others – which might usefully be linked to things described in other datasets. So one of our next steps is to analyse these cases further and decide whether it is useful and effective to apply some specific/“local” processing to generate additional entities in the RDF output in those cases.

The Project Plan

Thursday, March 8th, 2012

Aims, Objectives and Final Output(s) of the project

The Bricolage (University of Bristol Collections as Linked Open Data) project will work with two of its most significant collections to publish catalogue metadata as Linked Open Data.

  • The Penguin Archive, a comprehensive collection of the publisher’s papers and books.
  • The Geology Museum, a 100,000 specimen collection housing many unique and irreplaceable resources.

The project will re-apply the best practice processes and tools produced by relevant preceding projects to create persistent identifiers, identify and create links to authoritative datasets and vocabularies, and work with the two collections’ infrastructure platforms: CALM and Drupal. The Linked Data production workflows will be embedded in the collections’ teams to ensure future sustainability. The project will also produce two simple demonstrators to illustrate the potential of data linking and reuse, and will encode resource microdata into the Geology Museum’s forthcoming online catalogue with the aim of improving collection visibility via the major search engines.

The metadata will also be licensed for ease of reuse, according to JISC guidelines.

The main outputs of this project:

  • Linked Data sets for the Geology Museum and the Penguin Archive, with reuse guidance.
  • Two demonstrators illustrating data reuse: a browser-based mapping application for exploring the Geology collection via its geography, and an interactive timeline displaying the chronology of selected resources within the Penguin Archive.
  • A case study report on the experiences of embedding microdata into the Geology Museum website.

Wider Benefits to Sector & Achievements for Host Institution

One of the main achievements for the project’s host institution will be the sustainable production of public open Linked Data for two of its largest collections. As well as increasing the profile, visibility and potential for reuse of the catalogues in question, the experience gained during the project will provide a solid grounding for the reapplication of the methods to other collections in future.

For the sector, the wider benefits of this work include the following.

  • The addition of two significant collections to the Linked Data ecosystem. The new datasets will be interlinked with existing public vocabularies and datasets, so aiding their ease of discovery and reuse.
  • Both CALM and Drupal are widely used within the HE sector. CALM is an established library tool and Drupal, though originally a content management system, is increasingly found in cataloguing environments. The project’s work with both platforms will provide useful learning for the community.
  • Other valuable lessons will also be shared with the community, in particular as regards working with subject experts to identify authoritative public schemas and datasets, embedding sustainable processes within collection teams, our experience of using microdata and of using the data to produce examples of reuse.

Risk Analysis and Success Plan

Action to Prevent/Manage Risk

  • Staff departures: The staff named below all have significant experience within their areas of expertise. IT Services and Bristol University in general offer a pool of staff with suitably equivalent skills in the event of any staff departures occurring during the project.
  • Cross-departmental management: The need to manage a team spanning three departments has been considered when allocating the proportion of project management effort.
  • Project delivery: The project remit is highly focused and builds upon work already done in this area. In addition, the team has experience of Linked Data gained from previous JISC projects, and the project has two hosting options.
  • Licensing: Licensing issues that might limit the reuse of the data produced have been considered and are not deemed to be a barrier. Both collections have committed to use permissive licences, and any software produced will be available under an open source licence.
  • Stakeholder engagement: Engagement with stakeholders is important to the project, and the workplan includes effort to support engagement activities; these will also be evaluated. The main issue that would arise if the project’s outputs proved popular would be managing any excessive demands on the hosting resources. Simple downloads of the data set would not be problematic in this regard, but interfaces requiring server-side processing (e.g. SPARQL) could be; these questions will be considered when the project conducts its review of the data hosting options.


Both collections within the project have committed to release their catalogue metadata as Linked Data for reuse under the ODC-PDDL or CC0 licence, as per the guidance given by the Open Bibliographic Data Guide. This commitment will ensure that the Linked Data produced will be open to reuse, and it also meets the requirement for involvement with the Talis Platform Connected Commons scheme.

Any source code produced will be the copyright of the University of Bristol. It will be made available under an open source licence for free and non-commercial use and will be available to the UK Higher Education and Further Education community in perpetuity.

Project Team Relationships and End User Engagement

The team and their roles:

  • Professor Mike Benton, Professor of Vertebrate Palaeontology. Advisory Group.
  • Claudia Hildebrandt, Collections and Practicals Manager. Geology catalogue expertise and Advisory Group.
  • Pete Johnston, Eduserv. Metadata Consultant.
  • Hannah Lowery, Archivist, Special Collections. Advisory Group.
  • Anna Riggs, Archivist. Penguin catalogue expertise.
  • Jasper Tredgold, Senior Technical Developer.
  • Geology Studentship. Working on the Geology catalogue.

The project team will devise a dissemination plan at an early stage. We will disseminate good practice and lessons learned through the project blog and JISC events. We will seek to collaborate with related JISC projects where possible and will make our reports freely available online. We have a strong record of collaboration and regularly disseminate project outputs to the HEI community.

The project will also engage with stakeholders via the existing relationships held by the Penguin Archive and the Geology Museum. These include researchers, teachers and other academics. The engagement will raise awareness of the project and seek to gain input into potential applications for data reuse.

Projected Timeline, Workplan & Overall Project Methodology

The workplan spans seven months (M1–M7), with activities grouped as follows.

  • Governance and Engagement: establish mailing lists, project blog and project wiki; establish the advisory group and hold its meetings; maintain a detailed work plan (evaluated monthly).
  • Linked Data: hosting review; collection metadata review and preparation; export process development; identifiers and linking; export implementation; documentation for reuse.
  • Microdata: schema review; microdata markup creation; embedding in the Geology online catalogue.
  • Sustainability: embed Linked Data maintenance processes.
  • Demonstrators: produce two demonstrations of reuse.
  • Evaluation: evaluation of the Linked Data produced and the techniques used; the project methodology will also be evaluated.
  • Final Reporting & Dissemination: lessons learned and findings of value to the JISC community; final release of the Linked Data with documentation.

A few more details on selected workpackages follow.

Linked Data: Hosting review

The project has the commitment of both the Geology Museum and the Library Special Collections as regards the hosting of the Linked Data produced. In addition the team has experience of hosting Linked Data from previous projects. However, at an early stage we will also assess the suitability of using the Talis Platform Connected Commons scheme to host the project’s Linked Data outputs. This scheme supports the publishing and the reuse of Linked Data by removing, for qualifying data sets, the associated hosting costs.

Linked Data: Collection metadata review and preparation

One of the first tasks of the project will be to review the current collection metadata, with particular regard to its structure. While labour-intensive changes are not in scope, the team will seek to make edits that ensure consistency and aid the subsequent transformation of the data into a format that supports reuse – for example, date, place-name and person-name formats.

The project will also assess the scope for the archivists to undertake some limited manual enrichment of the data. An example, related to the Penguin Archive in particular, might be to add event information. So metadata for a set of minutes of a committee meeting would be extended with data describing the meeting as an event associated with a time, place, people etc.
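
As an illustration only, such an enrichment might look like the following in Turtle, here borrowing the LODE event ontology; the vocabulary choice, URIs and property selection are my own assumptions, not decisions the project has made:

```turtle
@prefix lode: <http://linkedevents.org/ontology/> .
@prefix dcterms: <http://purl.org/dc/terms/> .
@prefix ex: <http://example.org/id/> .   # placeholder namespace

# The catalogue record for a set of minutes, linked to the meeting it documents
ex:minutes-1960-10 dcterms:subject ex:meeting-1960-10 .

# The meeting described as an event with a time and the people involved
ex:meeting-1960-10
  a lode:Event ;
  dcterms:description "Editorial committee meeting, October 1960"@en ;
  dcterms:date "1960-10"^^<http://www.w3.org/2001/XMLSchema#gYearMonth> ;
  lode:involvedAgent ex:schmollerhans1916-1985typographerdesigner .
```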

Linked Data: Export process development

The Penguin Archive is held in Special Collections’ CALM installation. JISC has already funded work on techniques for exporting Linked Data from CALM, and this project will reference and build on that work, in particular the SALDA and LOCAH projects. It will also maintain links with the recently funded JISC Step Change project, which will ensure Linked Data support is embedded in a future release of CALM. Although that release will not occur within the lifetime of Bricolage, by keeping up to date with this and other developments relevant to the Discovery programme we will ensure our outputs are compatible with those of current infrastructure projects.

For the catalogue data held in CALM, the project will follow the approach developed by LOCAH and SALDA: data will be exported as EAD XML and transformed via XSLT into Linked Data expressed in RDF/XML. The starting point for the transformation will be the XSLT stylesheet developed within LOCAH and made available by the Archives Hub.
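
The real pipeline is CALM’s EAD export plus the LOCAH XSLT, but the shape of the mapping can be illustrated with a toy stdlib-only Python sketch (the namespace and URI pattern below are placeholders, not the project’s actual choices), pulling persname index terms out of an EAD fragment and emitting N-Triples:

```python
import re
import xml.etree.ElementTree as ET

# A small EAD fragment of the kind exported from CALM
EAD = """<ead><archdesc><controlaccess>
  <persname rules="ncarules">Schmoller; Hans (1916-1985); typographer; designer</persname>
</controlaccess></archdesc></ead>"""

BASE = "http://example.org/id/person/"   # placeholder base URI

def triples(ead_xml):
    """Emit one N-Triples label statement per persname index term."""
    root = ET.fromstring(ead_xml)
    for el in root.iter("persname"):
        name = el.text.strip()
        # Slug construction approximating the pattern described in the post
        slug = re.sub(r"[^a-z0-9-]", "", name.lower())
        yield f'<{BASE}{slug}> <http://www.w3.org/2000/01/rdf-schema#label> "{name}" .'

for t in triples(EAD):
    print(t)
```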

Linked Data: Identifiers and linking

Part of the project’s aim is to use the metadata it releases in conjunction with existing open metadata. In pursuit of this goal the subject experts within the team will identify appropriate open datasets and vocabularies and lead the work to interlink the Bristol datasets with them. Obvious examples include DBpedia and the LCSH (or FAST) and VIAF authority services. The Linked Data version of the British National Bibliography will be of particular interest for the Penguin Archive; the CIDOC Conceptual Reference Model (CRM), and perhaps the BBC Wildlife Finder, will be of interest for Geology. The project will also reuse the RDF vocabulary produced by the LOCAH project. We anticipate that the techniques our subject experts develop for this process will provide interesting lessons for the community.

As noted in the Discovery programme’s draft high-level technical principles, resource discovery “relies on persistent global identifiers”. The project will follow best practice in this area and use carefully designed URIs, in consultation with other on-going institutional work in this area. These URIs will be created with interoperability and persistence in mind.

Within the Geology domain the project envisages linking geographical information about museum specimens with open-access geographical databases (e.g. GeoNames) and GIS systems and interfaces. We believe this will allow users not only to search the collection database but also to visualise geographical distributions of specimens and familiarise themselves with local and regional geology – a useful tool for scientists and schools.


Microdata

The project will also work with the University’s online enhancement team (co-located with the project team) to embed microdata derived from the Geology metadata into their new museum website. This microdata work will seek to use and extend the schemas found at schema.org and, as a result, will provide structured data recognisable by the major search providers. This strategy aims to improve the discoverability of the museum’s collections, as described in the Discovery programme’s draft technical principles.
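
For illustration, microdata markup of a specimen page might look like the following; the schema.org type and properties chosen here are hypothetical, not the museum’s actual mapping, and the specimen details are invented:

```html
<!-- Hypothetical markup for a specimen page: the itemtype and itemprop
     choices are illustrative assumptions, not the project's mapping. -->
<div itemscope itemtype="http://schema.org/CreativeWork">
  <h2 itemprop="name">Ichthyosaurus specimen (partial skeleton)</h2>
  <span itemprop="description">Partial skeleton, Lower Jurassic.</span>
  <span itemprop="contentLocation" itemscope itemtype="http://schema.org/Place">
    Collected at <span itemprop="name">Lyme Regis, Dorset</span>
  </span>
</div>
```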


Demonstrators

For the Geology data the demonstrator will be a browser-based mapping application, allowing users to navigate the collection via the geographic locations of its resources. This will utilise the links made from the resource metadata to open-access geographical databases and provide an example of a new and versatile way to explore the museum’s collection.

For the Penguin Archive the project will produce an interactive timeline-based interface to aspects of the collection, in particular the resources associated with the Lady Chatterley’s Lover trial. This will provide a chronological view of the data not possible using traditional catalogue data and interfaces.


Total project cost: £81,557, of which £43,095 is from JISC and £38,462 from the University of Bristol.