Advancing Ontology Alignment in the Labor Market: Combining Large Language Models with Domain Knowledge
DOI: https://doi.org/10.1609/aaaiss.v3i1.31208
Keywords: Ontology Alignment, Mapping Refinement, Generative Large Language Models, Labor Market, ESCO, O*NET
Abstract
One approach to addressing the demand and supply problem in the labor market domain is to shift from degree-based to skill-based hiring. The links between occupations, degrees, and skills are captured in domain ontologies such as ESCO in Europe and O*NET in the US. Several countries are also building or extending such ontologies. Aligning these ontologies is important, as it should be clear how they all relate. Creating a mapping between two ontologies manually is a tedious task, and with the rise of generative large language models such as GPT-4, we explore how language models and domain knowledge can be combined in matching the instances of the ontologies and in finding the specific relation between instances (mapping refinement). We specifically focus on the process of updating a mapping, but the methods could also be used to create a first-time mapping. We compare the performance of several state-of-the-art methods, such as GPT-4 and fine-tuned BERT models, on the mappings between ESCO and O*NET and between ESCO and CompetentNL (the Dutch variant), for both ontology matching and mapping refinement. Our findings indicate that: 1) Match-BERT-GPT, an integration of BERT and GPT, performs best in ontology matching, while 2) TaSeR outperforms GPT-4, albeit marginally, in the task of mapping refinement. These results show that domain knowledge remains important in ontology alignment, especially when updating a mapping in our use cases in the labor domain.
Published: 2024-05-20
How to Cite
Snijder, L. L., Smit, Q. T. S., & de Boer, M. H. T. (2024). Advancing Ontology Alignment in the Labor Market: Combining Large Language Models with Domain Knowledge. Proceedings of the AAAI Symposium Series, 3(1), 253-262. https://doi.org/10.1609/aaaiss.v3i1.31208
Section: Empowering Machine Learning and Large Language Models with Domain and Commonsense Knowledge