Prompting or Fine-tuning? A Comparative Study of Large Language Models for Taxonomy Construction
McGill Univ, Canada.
McGill Univ, Canada.
Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering. McGill Univ, Canada. ORCID iD: 0000-0002-8790-252X
2023 (English). In: 2023 ACM/IEEE International Conference on Model Driven Engineering Languages and Systems Companion (MODELS-C), IEEE Computer Soc, 2023, p. 588–596. Conference paper, Published paper (Refereed)
Abstract [en]

Taxonomies represent hierarchical relations between entities and are frequently applied in various software modeling and natural language processing (NLP) activities. They are typically subject to a set of structural constraints restricting their content. However, manual taxonomy construction can be time-consuming, incomplete, and costly to maintain. Recent studies of large language models (LLMs) have demonstrated that appropriate user inputs (called prompting) can effectively guide LLMs, such as GPT-3, in diverse NLP tasks without explicit (re-)training. However, existing approaches for automated taxonomy construction typically involve fine-tuning a language model by adjusting model parameters. In this paper, we present a general framework for taxonomy construction that takes into account structural constraints. We subsequently conduct a systematic comparison between the prompting and fine-tuning approaches performed on a hypernym taxonomy and a novel computer science taxonomy dataset. Our results reveal the following: (1) Even without explicit training on the dataset, the prompting approach outperforms fine-tuning-based approaches. Moreover, the performance gap between prompting and fine-tuning widens when the training dataset is small. However, (2) taxonomies generated by the fine-tuning approach can be easily post-processed to satisfy all the constraints, whereas handling violations in the taxonomies produced by the prompting approach can be challenging. These evaluation findings provide guidance on selecting the appropriate method for taxonomy construction and highlight potential enhancements for both approaches.
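The abstract notes that taxonomies are subject to structural constraints and that LLM-generated taxonomies may need post-processing to satisfy them. As a minimal illustration of what such a constraint check could look like, the sketch below validates a set of hypernym edges against three common taxonomy constraints (at most one parent per node, a single root, no cycles). The specific constraint set and the function name are assumptions for illustration, not the authors' implementation.

```python
def check_taxonomy(edges):
    """Check that a list of (child, parent) hypernym edges forms a valid
    taxonomy: each node has at most one parent, there is exactly one
    root, and the parent relation contains no cycles."""
    parents = {}
    for child, parent in edges:
        if child in parents:           # constraint: at most one parent
            return False
        parents[child] = parent
    nodes = set(parents) | set(parents.values())
    roots = nodes - set(parents)       # nodes never appearing as a child
    if len(roots) != 1:                # constraint: exactly one root
        return False
    for node in parents:               # constraint: no cycles
        seen = {node}
        while node in parents:
            node = parents[node]
            if node in seen:
                return False
            seen.add(node)
    return True
```

For example, `check_taxonomy([("dog", "mammal"), ("mammal", "animal")])` accepts a well-formed hierarchy, while an edge set containing a cycle or two disconnected roots is rejected; a post-processing step of the kind the paper describes would repair such violations rather than merely detect them.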

Place, publisher, year, edition, pages
IEEE Computer Soc, 2023. p. 588–596
Keywords [en]
taxonomy construction; domain-specific constraints; large language models; few-shot learning; fine-tuning
National Category
Computer Sciences
Identifiers
URN: urn:nbn:se:liu:diva-201054
DOI: 10.1109/MODELS-C59198.2023.00097
ISI: 001137051500079
ISBN: 9798350324983 (electronic)
ISBN: 9798350324990 (print)
OAI: oai:DiVA.org:liu-201054
DiVA, id: diva2:1840366
Conference
ACM/IEEE International Conference on Model Driven Engineering Languages and Systems (MODELS), Västerås, Sweden, Oct 1–6, 2023
Note

Funding agencies: FRQNT-B2X project [319955]; Wallenberg AI, Autonomous Systems and Software Program (WASP), Sweden

Available from: 2024-02-23 Created: 2024-02-23 Last updated: 2024-02-23

Open Access in DiVA

No full text in DiVA

Search in DiVA

By author/editor: Varro, Daniel
By organisation: Software and Systems; Faculty of Science & Engineering
In the same subject category: Computer Sciences
