The Linked Data Center is an initiative to improve discovery of digital items held locally and beyond using linked data concepts and tools.
Current Members: Lynne Jacobsen, Grace Ye, and Cory Aitchison
Former Members: Kevin Miller, Casey Ann Mitchell, Paul Stenis
March 28, 2018
The Linked Data group is attending a series of ALCTS webinars this spring covering the creation of linked data for annotations, performed music, and works of art; improving the discoverability of non-Roman-script materials; and describing cartographic resources.
November 3, 2017
Our participation in the OCLC Strategic Advisory Group for Digital Collections officially ended and we received a final update on the Metadata Refinery Project. We were able to work with experimental tools for data cleanup, mapping, reconciliation, and transformation of bibliographic data from CONTENTdm into linked data. We look forward to further development of the services needed to manage digital object descriptions in a linked data environment.
December 19, 2016
A Pepperdine University Libraries Wikipedia page has been created and published by Cory Aitchison, with some help from Melissa Nykanen (Associate University Librarian for Special Collections and University Archives). https://en.wikipedia.org/wiki/
October 19, 2016
We are attending a series of linked data webinars hosted by the Association for Library Collections and Technical Services (ALCTS) that cover BIBFRAME, embedded URIs, encoding serials, cataloging workflows, and much more.
July 19, 2016
We are participating in an OCLC pilot project entitled Metadata Refinery. The project lets testers take metadata within a collection (subject headings, name authorities, and keywords) and create Linked Data triples from it, using schema.org URIs and mappings to authority files such as FAST and VIAF. The goal is to give these collections a more visible presence on the web and, potentially, more meaningful connections between resources. We are currently refining our example collection and are excited to see the results.
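A minimal sketch, in plain Python, of the kind of mapping described above: local metadata fields become schema.org property URIs, and reconciled values become authority URIs. This is not the Metadata Refinery's actual code, and all URIs below are illustrative placeholders, not real identifiers.

```python
SCHEMA = "http://schema.org/"

# Hypothetical digital-object record after reconciliation; the VIAF and
# FAST URIs are placeholders, not real identifiers.
record = {
    "title": "Malibu Coastline Photograph",
    "creator": "http://viaf.org/viaf/PLACEHOLDER",
    "subject": "http://id.worldcat.org/fast/PLACEHOLDER",
}

# Local field name -> schema.org property URI.
field_map = {
    "title": SCHEMA + "name",
    "creator": SCHEMA + "creator",
    "subject": SCHEMA + "about",
}

def to_triples(item_uri, record, field_map):
    """Emit one (subject, predicate, object) triple per mapped field."""
    return [(item_uri, field_map[f], v) for f, v in record.items() if f in field_map]

triples = to_triples("http://example.org/items/42", record, field_map)
for s, p, o in triples:
    print(s, p, o)
```

Each resulting tuple is one Linked Data statement: the item URI as subject, a schema.org property as predicate, and either a literal or an authority URI as object.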
June 8, 2016
I attended a webinar entitled "Linked Open Data for Digitized Special Collections." This research project by the University of Illinois at Urbana-Champaign was funded by the Andrew W. Mellon Foundation. The team mapped CONTENTdm metadata to RDF for three collections in an effort to maximize the usefulness of their collections, bring more context to these collections, and try out a social network view of one collection. They applied schema.org semantics, improved authority control, added functionality to the user interface, and provided a way for users to annotate a network graph. I learned how they adapted schema.org to the needs of their collections and how they used the Google structured data testing tool. This was the most visually appealing implementation of linked open data I have seen to date.
April 7, 2016
We are part of an OCLC study of linked data in CONTENTdm. Participating libraries have contributed digital collections with metadata to a test system that can be described as a next-generation digital repository. We are exploring what discovery services will be needed as we move toward entity-based linked data metadata.
April 1, 2016
I completed my ALA "Linked Data" class, taught by the University of North Texas's Oksana Zavalina. We covered quite a lot in six weeks, including metadata schemes (their structure, syntax, and semantics), their applications, and the role of metadata in the Semantic Web. These skills will be invaluable when Pepperdine begins creating its own Linked Open Data. In addition, I have completed my research paper on Linked Data for my coursework.
We are also in the process of completing a Wikipedia page for Pepperdine University Libraries. Doing this will create URIs through DBpedia, a site that takes Wikipedia pages and structures them into Linked Data, which can then be used to generate triples, the building blocks of Linked Data.
March 4, 2016
Casey shared a link to a webinar entitled "Evolving MarcEdit: Leveraging Semantic Data in MarcEdit." MarcEdit is a tool for batch editing MARC records and metadata in other schemas. The program has been expanded to embed URIs into MARC data and to reconcile headings in large sets. Twenty-four data sources have been profiled, including AAT and TGM. URIs are being added to 1XX, 6XX, and 7XX headings in $0. This is currently experimental, as a proof of concept.
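A simplified sketch (plain Python, not MarcEdit's actual code) of what "adding URIs to headings in $0" amounts to: a heading field gains a $0 subfield holding the matching authority URI. The field is modeled as a tag plus a list of subfield pairs, the lookup dict stands in for reconciliation against a profiled vocabulary, and the LCSH identifier below is a placeholder.

```python
def add_authority_uri(tag, subfields, lookup):
    """Append a $0 URI subfield when the $a heading matches an authority."""
    heading = dict(subfields).get("a")
    uri = lookup.get(heading)
    if uri and ("0", uri) not in subfields:
        subfields = subfields + [("0", uri)]
    return (tag, subfields)

# Placeholder reconciliation table: heading label -> authority URI.
lookup = {"Semantic Web": "http://id.loc.gov/authorities/subjects/sh-PLACEHOLDER"}

tag, subfields = add_authority_uri("650", [("a", "Semantic Web")], lookup)
print(tag, subfields)
```

Real reconciliation is fuzzier than a dict lookup (variant forms, punctuation, near matches), which is part of why the MarcEdit work profiles each vocabulary individually.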
February 22, 2016
We completed participation in the OCLC Name Entity Pilot Project and gave input on apps for searching entities across multiple data sources. We appreciate being included in these types of test cases and are excited about the new developments at OCLC in this area, which will significantly impact libraries' successful use of linked data.
February 17, 2016
Two team members attended a webinar entitled "Linked Data Fragments: Querying Multiple Linked Data Sources on the Web" presented by Ruben Verborgh from Ghent University in Belgium, which we will share with the rest of the group. From a pragmatic point of view, he explained how to query linked data triples at low cost and with adequate speed. He demonstrated a way to execute queries over multiple Linked Data sources. He also provided instructions on how to publish Linked Data at low cost so that others can query it. All of this information is useful to our group as we learn about SPARQL as a language and a protocol, understand the difference between a data dump and a SPARQL endpoint, and consider the landscape for finding relevant published triple stores.
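A toy illustration, in plain Python with made-up data, of the idea behind Linked Data Fragments: the server answers only simple (subject, predicate, object) patterns, which is cheap to host, and the client combines pattern results itself to answer richer queries.

```python
# A tiny in-memory triple store; prefixes like "ex:" are illustrative.
triples = {
    ("ex:book1", "ex:author", "ex:personA"),
    ("ex:book2", "ex:author", "ex:personA"),
    ("ex:book1", "ex:title", "Linked Data Basics"),
}

def match(pattern):
    """Return triples matching a pattern; None acts as a wildcard.

    This is the whole interface a triple-pattern server needs to offer.
    """
    s, p, o = pattern
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# Client-side join across two patterns: titles of books by personA.
books = [t[0] for t in match((None, "ex:author", "ex:personA"))]
titles = [t[2] for b in books for t in match((b, "ex:title", None))]
print(titles)
```

A full SPARQL endpoint would execute the join on the server; the Linked Data Fragments approach trades that convenience for much lower publishing cost, which is the pragmatic point Verborgh made.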
January 20, 2016
I organized and attended the Southern California Technical Processes Group's (SCTPG) Linked Data Symposium on January 15th. Three wonderful presentations covered a range of current linked data projects.
Rachel Fewell from Denver Public Library discussed DPL's partnership with Zepheira's LibHub Initiative. DPL's MARC records were converted to BIBFRAME and the resulting linked data has improved discovery of their resources. Xiaoli Li from University of California Davis updated us on Davis' work with BIBFRAME. Davis is also working with Zepheira. Their BIBFLOW project is working towards creating new workflows and tools, such as BIBFRAME Scribe, to describe resources in a linked data environment. Finally, Cory Lampert from University of Nevada, Las Vegas described working with digital collections metadata to convert it to linked data. Using open source tools, such as PivotViewer, UNLV was able to visualize their collections in new and exciting ways.
- Casey Ann Mitchell
January 15, 2016
I have begun an ALA online eCourse entitled "Introduction to Metadata and Linked Data," taught by Dr. Oksana Zavalina of the University of North Texas. We will be covering metadata schemes (their structure, syntax, and semantics), their applications, and the role of metadata in the Semantic Web. We will be looking at various schemas, including Dublin Core and MARCXML. I'm looking forward to what the class has to offer!
- Cory Aitchison
January 8, 2016
Committee members have started working on a Wikipedia entry for Pepperdine University Libraries which will establish the libraries as an entity in RDF, as well as link to other datasets on the Web. This will increase traffic to the library and provide a better user experience.
December 14, 2015
Pepperdine University Libraries joined phase 2 of the OCLC Person Entity pilot project, which involves testing string searching for person entities. This project will conclude at the end of January 2016.
December 3, 2015
We attended the second installment of the schema.org webinar given by Richard Wallis. The webinar provided a more in-depth look at the bib.schema.org extension, which features library-specific properties. Wallis also gave a live demonstration of RDFa in action, showing us an example website with linked data (www.smarttrees.co.uk/) and various tools for visualizing and validating linked data, such as RDFa/Play (https://rdfa.info/play/), Structured Data Linter (http://linter.structured-data.org/), and Google's Structured Data Testing Tool (https://developers.google.com/structured-data/testing-tool/). Wallis also briefly demonstrated how to extend schema.org by working with its GitHub repository from a terminal and loading his own properties into the vocabulary.
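The webinar showed RDFa embedded in HTML; JSON-LD is a sibling syntax for the same schema.org vocabulary and is easy to generate programmatically. The sketch below is illustrative, not the libraries' actual published markup.

```python
import json

# A hypothetical schema.org description of a library, built as a plain
# Python dict and serialized to JSON-LD.
doc = {
    "@context": "http://schema.org/",
    "@type": "Library",
    "name": "Pepperdine University Libraries",
    "address": {
        "@type": "PostalAddress",
        "addressLocality": "Malibu",
        "addressRegion": "CA",
    },
}

print(json.dumps(doc, indent=2))
```

Pasted into a web page inside a script tag of type application/ld+json, a block like this is exactly what validation tools such as the Structured Data Linter and Google's testing tool inspect.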
- Cory Aitchison & Casey Ann Mitchell
November 26, 2015
I attended the Semantic Web in Libraries conference held in Hamburg, Germany Nov. 23-25. 170 people from 27 countries heard presentations on Schema.org vocabulary, tools for validating linked data, identity management, sustainability of linked data, metadata enrichment practices, and much more. A brief report is available at http://www.bi-international.de/download/file/401_Lynne%20Jacobsen_Report%20SWIB%202015.pdf
November 19, 2015
Three of us attended a webinar on Schema.org yesterday. Richard Wallis traced the history of the Schema.org vocabulary, explained how a bibliographic extension was proposed by the W3C Community Group, and described the current status of the vocabulary. Schema.org is a linked data vocabulary constructed of RDF triples made up of URIs that describe things and the relationships between them. It is published on over 10 million web domains. Schema.org is used by OCLC to create linked bibliographic data on the Web, making library materials available through search engines such as Google, Bing, and Yahoo.
- Lynne Jacobsen
November 17, 2015
As a second-year MLIS student, I have had the opportunity to incorporate the Linked Data Center into my schooling. Having opted into an independent study program, I will be creating an extensive annotated bibliography of the relevant and useful readings this group comes across, as well as writing a research paper exploring our progress with this initiative and the various methods of linked data as a whole.
- Cory Aitchison
November 16, 2015
To better understand the languages of the semantic web and the tools available to prepare our data for it, I have been taking a series of online classes at Library Juice Academy. Through readings and exercises, I am gaining a greater understanding of XML, schemas, ontologies, and RDF. The ability to manipulate and transform our data will help open our collections to the greater web. This new knowledge will be shared with the group and will help build a foundation for our linked data work.
- Casey Ann Mitchell
Q. What is Linked Data?
Linked data is about making connections between information on the web, surfacing helpful correlations and linking to outside information to create new and meaningful knowledge. In the context of libraries, linked data is being used to take the information within our catalogs and make it readable by search engines, giving users a rich new resource for research and discovery.
Q. What is the benefit to Pepperdine University?
Pepperdine will benefit in a number of ways from linked data. Patrons will be able to explore materials related to their searches in a way that has not been possible in conventional library catalogs. It will also make Pepperdine's collection more broadly accessible. Library catalogs hold a wealth of information that is not structured in a way that machines on the web can read. By applying linked data principles to our catalog, the information it contains will be made available through search engines, opening up our collection to a larger group of people.
Q. How does Linked Data fit in with the Semantic Web?
The semantic web is also known as the "Web of Data." By creating linked data we are helping to build and expand the semantic web. Traditionally, one navigates the web in a hierarchical or relational fashion, going from document to document. When the semantics of the data are encoded as well as the documents, the data itself can be described in terms of relationships and searched and connected in new and exciting ways.
Q. What is RDF?
RDF stands for Resource Description Framework. The framework provides a common way to describe and define data so that many different systems can interoperate. RDF is not the only framework used for the semantic web, but it is the one created by the World Wide Web Consortium (W3C). RDF statements, also known as triples, name and define the relationships between pieces of data. These relationships can then be understood, processed, and shared across various systems, applications, and platforms.
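A triple is simply a subject-predicate-object statement, each part usually a URI. A minimal sketch in plain Python, modeling triples as tuples (the DBpedia and schema.org URIs follow real patterns but are used here illustratively):

```python
# One RDF statement: "Pepperdine University is located in Malibu, California."
triples = [
    ("http://dbpedia.org/resource/Pepperdine_University",   # subject
     "http://schema.org/location",                          # predicate
     "http://dbpedia.org/resource/Malibu,_California"),     # object
]

def objects_of(triples, subject, predicate):
    """Return every object asserted for the given subject and predicate."""
    return [o for s, p, o in triples if s == subject and p == predicate]

print(objects_of(triples,
                 "http://dbpedia.org/resource/Pepperdine_University",
                 "http://schema.org/location"))
```

Because each part of the statement is a URI that others also use, independently published triples about the same subject can be merged and queried together, which is what makes the data "linked."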
Q. What's an example of Linked Data in action?
A great example of linked data is the "knowledge card" that shows up beside Google search results. Google pulls data from various sources on the web and presents this information to its users in a card-like format.