“Is there a library-shaped black hole in the web?” was the question posed by an OCLC event at the Royal College of Surgeons last week, which focused on exploring the potential benefits of using linked data to make library data available to users through the web. For a comprehensive overview of the event, I’ve put together a Storify of tweets here: https://storify.com/LornaMCampbell/oclc-linked-data
Following a truly dreadful pun from Laura J Wilkinson…
…Owen Stephens kicked off the event with an overview of linked data and its potential to be a lingua franca for publishing library data. The benefits that linked data can afford to libraries include improved search, discovery and display of library catalogue record information, improved data quality and data correction, and the ability to work with experts across the globe to harness their expertise. Owen also introduced the Open World Assumption which, despite the coincidental title of this blog, was a new concept to me. The Open World Assumption states that
“there may exist additional data, somewhere in the world to complement the data one has at hand”.
This contrasts with the Closed World Assumption which assumes that
“data sources are well-known and tightly controlled, as in a closed, stand-alone data silo.”
Learning Linked Data
http://lld.ischool.uw.edu/wp/glossary/
Traditional library catalogues worked on the basis of the Closed World Assumption, whereas linked data takes an open world approach and recognises that other people will know things you don’t. Owen quoted Karen Coyle: “the catalogue should be an information source, not just an inventory”, and noted that while data on the web is messy, linked data provides the option to select sources we can trust.
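As a minimal illustration of the distinction (my own sketch, not an example from the talk): under the closed world assumption a fact missing from the catalogue is treated as false, while under the open world assumption it is merely unknown, because other sources may hold it. The identifiers below are hypothetical placeholders.

```python
# A tiny set of catalogue "facts" as (subject, predicate, object) triples.
# All identifiers are hypothetical placeholders.
facts = {
    ("book:123", "author", "person:coyle"),
    ("book:123", "title", "Catalogue Rules"),
}

def closed_world_query(triple):
    """Closed world: anything not in our silo is assumed to be false."""
    return triple in facts

def open_world_query(triple):
    """Open world: absence only means 'unknown' - other sources may know."""
    return True if triple in facts else "unknown"

print(closed_world_query(("book:123", "translator", "person:x")))  # False
print(open_world_query(("book:123", "translator", "person:x")))    # unknown
```

The open world query never concludes that the book has no translator; it simply admits that the local data cannot settle the question, which is exactly the stance linked data takes towards the wider web.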
Cathy Dolbear of Oxford University Press gave a very interesting talk from the perspective of a publisher providing data to libraries and other search and discovery services. OUP provides data to library discovery services, search engines, Wikidata and other publishers. Most OUP products tend to be discovered via search engines; only a small proportion of referrals, 0.7%, come from library discovery services. OUP has two OAI-PMH APIs, but they are not widely used and the press is very keen to learn why. The publisher’s requirements are primarily driven by search engines, but they would like to hear more from library discovery services.
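For context, OAI-PMH exposes metadata for harvesting through plain HTTP requests built from a base URL, a verb and a few parameters. A sketch of constructing such a harvesting request (the endpoint URL is an invented placeholder, not OUP’s actual API):

```python
from urllib.parse import urlencode

# Hypothetical OAI-PMH endpoint; OUP's real endpoints are not given in the talk.
base_url = "https://example.oup.com/oai"
params = {
    "verb": "ListRecords",       # OAI-PMH verb for bulk harvesting
    "metadataPrefix": "oai_dc",  # ask for records as simple Dublin Core
}
request_url = f"{base_url}?{urlencode(params)}"
print(request_url)
```

Any OAI-PMH-aware harvester can issue requests of this shape, which is part of the protocol’s appeal: the barrier to reusing a publisher’s metadata is a single well-formed URL.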
Neil Jeffries of the Bodleian Digital Library was unable to be present on the day, but he overcame the inevitable technical hitches to present remotely. He began by arguing that digital libraries should not be seen as archives or museums; digital libraries create knowledge and artefacts of intellectual discourse rather than just holding information. In order to enable this knowledge creation, libraries need to collaborate, connect and break down barriers between disciplines. Neil went on to highlight a wide range of projects and initiatives, including VIVO, LD4L and CAMELOT, that use linked data and the semantic web to facilitate these connections. He concluded by encouraging libraries to be proactive and to understand the potential of both data and linked data in their own domain.
Ken Chad posed a question that often comes up in discussions about linked data and the semantic web: why bother? What’s the value proposition for linked data? Gartner currently places linked data in the trough of disillusionment, so how do we cross the chasm to reach the plateau of productivity? This prompted my colleague Phil Barker to comment:
Ken recommended using the Jobs-to-be-Done framework to cross the chasm: concentrate on users, but rather than just asking them what they want, focus on asking them what they are trying to do and identify their motivating factors – e.g. how will linked data help to boost my research profile?
For those willing to take the leap of faith across the chasm, Gill Hamilton of the National Library of Scotland presented a fantastic series of Top Tips! for linked data adoption which can be summarised as follows:
- Strings to things, aka people smart, machines stupid – library databases are full of strings; people are really smart at interpreting strings, but machines are really stupid. Turn strings into things with URIs so machines can make sense of them.
- Never, ever, ever dumb down your data.
- Open up your metadata – license your metadata CC0 and put a representation of it into the Open Metadata Registry. Open metadata is an advert for your collections and enables others to work with you.
- Concentrate on what is unique in your collections – one of the unique items from the National Library of Scotland that Gill highlighted was the order for the Massacre of Glencoe. Ahem. Moving swiftly on…
- Use open vocabularies.
Simples! Linked data is still risky though: services go down, URIs get deleted, and there’s still more playing around than actual doing. However, it’s still worth the risk to help us link up all our knowledge.
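Gill’s first tip, turning strings into things, can be sketched as replacing a free-text value with a URI-identified entity. The record shapes and URIs below are hypothetical placeholders of my own, purely for illustration:

```python
# "Strings to things": replace an ambiguous free-text string with a
# URI-identified entity that machines can follow and disambiguate.
# All names and URIs here are hypothetical placeholders.

string_record = {"author": "Scott, Walter"}  # just a string - ambiguous

thing_record = {
    "author": {
        "@id": "http://example.org/person/walter-scott",  # the 'thing'
        "name": "Scott, Walter",  # label kept for human readers
    }
}

# A machine can now merge statements about the same entity from other
# sources by matching the URI, instead of guessing from string similarity.
print(thing_record["author"]["@id"])
```

The human-readable label survives alongside the URI, which is also why this approach never requires dumbing down the data: the string is kept, and the thing is added.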
Richard J Wallis brought the day to a close by asking how libraries can exploit the web of data to liberate their data. The web of data is becoming a web of related entities, and it’s the relationships that add value. Google recognised this early on when they based their search algorithm on the links between resources. The web now deals with entities and relationships, not static records.
One way to encode these entities and relationships is with Schema.org, which aims to help search engines interpret information on web pages so that it can be used to improve the display of search results. Schema.org has two components: an ontology for naming the types and characteristics of resources, their relationships with each other, and constraints on how to describe these characteristics and relationships; and the expression of this information in machine-readable formats such as microdata, RDFa Lite and JSON-LD. Richard noted that Schema.org is a form of linked data, but “it doesn’t advertise the fact”, and added that libraries need to “give the web what it wants, and what it wants is Schema.org.”
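As an illustration of one of those serialisations, here is a minimal Schema.org description of a book expressed as JSON-LD, built in Python; the bibliographic values are invented for the example:

```python
import json

# A minimal Schema.org description of a book in JSON-LD, one of the
# machine-readable serialisations mentioned above (values are invented).
book = {
    "@context": "https://schema.org",
    "@type": "Book",
    "name": "Waverley",
    "author": {"@type": "Person", "name": "Walter Scott"},
    "inLanguage": "en",
}

# Embedded in a page inside <script type="application/ld+json">...</script>,
# this is the kind of structured data search engines parse to enrich results.
print(json.dumps(book, indent=2))
```

Note how the author is a typed entity nested inside the book rather than a bare string, which is the same strings-to-things principle at work.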
If you’re interested in finding out more about Schema.org, Phil Barker and I wrote a short Cetis Briefing Paper on the specification, which is available here: What is Schema.org? Richard Wallis will also be presenting a Dublin Core Metadata Initiative webinar on Schema.org and its applicability to the bibliographic domain on the 18th of November; registration here: http://dublincore.org/resources/training/#2015wallis.
ETA: Phil Barker has also written a comprehensive summary of this event over at his own blog, Sharing and Learning, here: A library shaped black hole in the web?