The world of galleries, libraries, archives and museums, and the diverse domains of the humanities, arts and social sciences represented under the umbrella term “digital humanities”, are likely to share interests in metadata at different levels and for different reasons: interoperability and record sharing at the higher level, and exactitude closer to the research process, among others.
The session topic proposed is based on two related questions:
- How do, and how can, these worlds of work and practice work together effectively using linked open data methods?
- What do we need, whether moving in tandem or not, to help demonstrate the value of a linked open data approach to supporting resource discovery or the integration or fusion of data?
As a follow-up to the 2013 LODLAM “Great War, Linked Open Data and [too much] Chinese Food”, there will be another get-together on WW1 Linked Open Data sets at the Harts Pub in Sydney, Australia on Sunday the 28th at about 6:30 pm.
Wouldn’t it be fantastic if any search engine could give you highly accurate information about libraries, archives, galleries, and museums in your area? They can, of course, and some do just that by relying on data maintained in social media sites like Google+, as long as that secondary source of data is kept up to date. However, most GLAM institutions already publish web pages that include their name, location, opening hours, contact information, and events in human-readable format, but fail to maintain even a subset of that information in secondary data sources such as the WorldCat Registry.
The http://schema.org/Library, http://schema.org/ArtGallery, and http://schema.org/Museum classes (note: an Archive class is missing!) and their related properties offer most of the vocabulary we need to express this information as linked open data. Using serializations such as RDFa or JSON-LD, GLAM institutions should be able to publish linked open data that matches the human-readable data, and the Evergreen library system has been doing just that for the past year. Sadly, however, the schema.org pages for GLAM institutions (Library, ArtGallery, Museum) currently claim that fewer than a thousand domains have used this approach to make their locations available to the world. We can do better.
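As a rough sketch of what that markup could look like, here is a minimal schema.org Library description expressed as JSON-LD; every name, address, phone number, and URL below is invented for illustration and should be replaced with an institution's real data:

```python
import json

# A hypothetical library, used only to illustrate the shape of the markup.
library = {
    "@context": "http://schema.org",
    "@type": "Library",
    "name": "Example City Public Library",
    "url": "http://library.example.org/",
    "telephone": "+1-555-0100",
    "address": {
        "@type": "PostalAddress",
        "streetAddress": "1 Example Street",
        "addressLocality": "Example City",
        "addressCountry": "AU",
    },
    "openingHours": "Mo-Fr 09:00-17:00",
}

# Embed the resulting JSON in a page inside
# <script type="application/ld+json"> ... </script>
print(json.dumps(library, indent=2))
```

The same statements could equally be expressed as RDFa attributes woven into the existing human-readable HTML; JSON-LD simply keeps the structured data in one block.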
Let’s discuss how we can enable GLAM institutions to publish this fundamental linked open data, possibly through the adoption of templates in content management systems such as Drupal and Joomla, and figure out what practical steps we can take to make this happen in our own parts of the world.
Updated 2015-06-28 to be more inclusive of GLAM institutions in general.
Creating links between resources has become easier thanks to big pivot open vocabularies and datasets such as VIAF and DBpedia. These vocabularies are easy to find, and they are generic enough to be used across different services. But what happens when you want to develop services for a specific domain? How do you find more specialised LOD vocabularies and datasets?
Europeana is currently working on redesigning its portal and developing more thematic access points: the first thematic channel to be developed will be around Art.
I would like to seize the opportunity of LODLAM to gather resources related to the Art domain, but also to discuss how we can better connect our resources.
How can we better connect our Art-related cultural heritage objects? What is needed? What datasets are already available?
The ‘museum API’ wiki has been running for a number of years, listing various museum, library and archive APIs, image and data collections and linked open data, plus ‘cool things’ made with open cultural data sources.
How could it be improved to better meet the needs of data providers and users?
Session notes at: LODLAM Open Cultural Data portal (museum API wiki)
Making your data open can be as simple as dumping the data somewhere on the internet. A baby step beyond that is to build a simple REST API on top of your data. This approach is becoming more popular within visual arts and design institutions, mainly large ones, and there are a few really solid Collection APIs out there now. Properly linking this data is the next step, but one that almost no institutions take.
I’ll be offering a sneak peek at the soon-to-be-released SFMOMA Collection API during the LODLAM Summit and asking the question: “now what?”
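To illustrate what that linking step might look like, here is a sketch of lifting a plain collection API record into JSON-LD. The record shape, field names, and URIs below are all invented for the example and do not describe the actual SFMOMA API; the point is the transformation, not the fields:

```python
import json

# Hypothetical record as it might come back from a collection REST API.
api_record = {
    "id": 12345,
    "title": "Untitled",
    "artist": "Jane Doe",
    "date": "1972",
}

def to_jsonld(record, base_uri):
    """Turn a plain API record into a linkable JSON-LD resource:
    give it a URI of its own and map fields to schema.org terms."""
    return {
        "@context": "http://schema.org",
        "@type": "VisualArtwork",
        "@id": "%s/%d" % (base_uri, record["id"]),
        "name": record["title"],
        "creator": {"@type": "Person", "name": record["artist"]},
        "dateCreated": record["date"],
    }

artwork = to_jsonld(api_record, "http://collection.example.org/objects")
print(json.dumps(artwork, indent=2))
```

Giving every object a dereferenceable `@id` is what separates linked data from a JSON dump: other datasets can then point at your objects, and you at theirs.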
Backed by the major commercial search engines; promoted as the way to mark up web pages to tell them about your products, to feature in their rich snippets and knowledge panels, and to power so-called Semantic Search; a pragmatic approach to vocabulary development – is this for the LAM communities?
650 Types (Classes), 980 properties; adopted by millions of domains; CreativeWorks are a core focus; a live evolving vocabulary; a community approach to enhancement; introducing extension domains (bib is one of the first); recognised by the search engines – is this for the LAM Communities?
After a brief update on Schema.org progress, the Schema Bib Extend Group’s work on the bib.schema.org extension, and a short insight into its application in WorldCat, join the discussion to explore how relevant it is to the world of LODLAM and how proactive we should be about adopting it.
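As one small illustration of the kind of bibliographic modelling this enables, the sketch below marks up an invented book using the schema.org work/instance pattern that the bib community helped popularise, via the `workExample` property; all titles and ISBNs are placeholders:

```python
import json

# A hypothetical work with two published instances (hardcover and ebook).
work = {
    "@context": "http://schema.org",
    "@type": "Book",
    "name": "An Example Monograph",
    "author": {"@type": "Person", "name": "A. N. Author"},
    "workExample": [
        {
            "@type": "Book",
            "isbn": "9780000000000",
            "bookFormat": "http://schema.org/Hardcover",
        },
        {
            "@type": "Book",
            "isbn": "9780000000017",
            "bookFormat": "http://schema.org/EBook",
        },
    ],
}
print(json.dumps(work, indent=2))
```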
A vast amount of information is locked away inside visual content. We can see it but not search it. We can apply metadata to the whole image but not access its interior content. OASIS is a new way to access the visual features and conceptual ideas inside images, including ultra-high-definition images, using Linked Data. This session can include a quick demo of how the system works and a discussion of the technologies and standards used and the challenges encountered and overcome in the process. The following link has 6 screenshots to illustrate:
Much work has been put in on ontology design and data modelling for LAM resources, but until recently there has not been as much discussion of practical applications for publishing or consuming LOD in Libraries, Archives and Museums. This has started to change with the emergence of tools like Canvas, Karma, and many of the 2015 LODLAM Challenge entrants.
This proposed session will focus on gaps in this tool chain and open questions. Possible topics include:
- ETL for linked data resources
- Graph selection and management for application ingest
- Value vocabulary caching and proxying
- Graph-based data validation
- Linked Data “Records” & RDF Sources
- Linked Data Platform
- Linked Data Fragments
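As a small illustration of the last item on that list, a Triple Pattern Fragments server exposes a dataset through simple subject/predicate/object selectors rather than full SPARQL. The sketch below only builds such a request URL; the endpoint name is invented:

```python
from urllib.parse import urlencode

def tpf_query_url(fragment_endpoint, subject=None, predicate=None, obj=None):
    """Build a Triple Pattern Fragments request URL: a TPF server
    serves the matching triples (plus paging and count metadata)
    for a ?subject=&predicate=&object= pattern."""
    params = {}
    if subject:
        params["subject"] = subject
    if predicate:
        params["predicate"] = predicate
    if obj:
        params["object"] = obj
    return fragment_endpoint + "?" + urlencode(params)

# e.g. request all triples about one resource (endpoint is illustrative)
url = tpf_query_url(
    "http://fragments.example.org/dataset",
    subject="http://dbpedia.org/resource/World_War_I",
)
print(url)
```

The appeal for low-budget publishers is that such an interface is cheap to host while still letting clients evaluate richer queries themselves.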
Many of us are here from large organizations with substantial research and development budgets, or from smaller organizations that have found external, short-term funding for experimentation with LOD application development. However, I would wager that most of our institutions rely heavily on vendor-supplied infrastructure for operational technology needs. Many institutions will not be able to truly participate in the LOD ecosystem until viable vendor solutions are available. Until recently, most vendors have been waiting for a clear LOD business opportunity before investing.
This proposed session will look at vendor engagement and discuss strategies for influencing vendor development strategies. Possible topics include:
- Making a business case to your vendor
- Collecting and presenting customer use cases
- Working with vendor User Groups
- Identifying gaps and “quick wins” in current generation software
We are fortunate to have a number of vendors at LODLAM this year, including Archivematica, Ex Libris, and OCLC. It is possible that they might also be able to help us understand what vendors need from the LODLAM community.
In the current LODLAM landscape, most of the published datasets are metadata describing digital objects. Europeana, for instance, only publishes metadata for digital objects. However, the report on the digitisation status of cultural heritage in Europe shows that only 10% of our cultural heritage is currently digitised. The question is: should we also spend effort on publishing bibliographic data without digital objects as LOD? The European Library, for instance, last year released one of the largest open library datasets. However, the use cases and benefits of bringing bibliographic data together are still not clearly defined. How can bibliographic data complement the other datasets? What information is useful?
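One sketch of what a bibliographic record can contribute even without a digital object: linking its author to a shared identifier such as a VIAF URI, so the record becomes a bridge between datasets. All identifiers and names below are invented for illustration:

```python
import json

# A hypothetical bibliographic record with no digital surrogate; its
# value as LOD comes from the links it carries (here, a sameAs to VIAF).
record = {
    "@context": "http://schema.org",
    "@type": "Book",
    "@id": "http://data.library.example.org/bib/000001",
    "name": "An Undigitised Title",
    "author": {
        "@type": "Person",
        "name": "Example Author",
        "sameAs": "http://viaf.org/viaf/0000000000",
    },
}
print(json.dumps(record, indent=2))
```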
Let’s discuss it together!