Using a Character-Based Language Model for Caption Generation
Linköping University, Department of Computer and Information Science, Human-Centered systems.
2019 (English). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
Alternative title: Användning av teckenbaserad språkmodell för generering av bildtext (Swedish)
Abstract [en]

Using AI to automatically describe images is a challenging task. The aim of this study has been to compare the use of character-based language models with one of the current state-of-the-art token-based language models, im2txt, for generating image captions, with a focus on morphological correctness.

Previous work has shown that character-based language models are able to outperform token-based language models in morphologically rich languages. Other studies show that simple multi-layered LSTM blocks are able to learn to replicate the syntax of their training data.

To study the usability of character-based language models, an alternative model based on TensorFlow im2txt was created. The model changes the token-generation architecture to handle character-sized tokens instead of word-sized tokens.
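The thesis code itself is not reproduced in this record; as a minimal sketch of what swapping word-sized tokens for character-sized ones involves on the vocabulary side, the encoding step might look like the following (the function names and the choice of special tokens are illustrative assumptions, not taken from im2txt):

```python
def build_char_vocab(captions):
    """Build a character-level vocabulary instead of a word-level one.

    Every distinct character in the caption corpus becomes a token,
    so the vocabulary stays tiny compared to a word vocabulary.
    """
    chars = sorted(set("".join(captions)))
    # Reserve low ids for padding and sequence-boundary markers.
    vocab = {c: i + 3 for i, c in enumerate(chars)}
    vocab.update({"<pad>": 0, "<s>": 1, "</s>": 2})
    return vocab


def encode_caption(caption, vocab):
    """Encode a caption as a sequence of character ids, wrapped in <s>...</s>."""
    return [vocab["<s>"]] + [vocab[c] for c in caption] + [vocab["</s>"]]
```

The LSTM decoder is then trained on these much longer, character-granular sequences; the trade-off is that the model must learn spelling as well as syntax.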

The results suggest that a character-based language model could outperform current token-based language models, although, due to constraints on time and computing power, this study cannot draw a firm conclusion.

A problem with one of the methods, subsampling, is discussed: applied directly to character-sized tokens, it removes individual characters (including special characters) instead of full words. To solve this, a two-phase approach is suggested: the training data is first split into word-sized tokens, on which subsampling is performed; the remaining tokens are then split into character-sized tokens.
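The two-phase approach can be sketched as follows. This is a minimal illustration, assuming word2vec-style frequency subsampling (drop a word with probability 1 − √(t/f), where f is its corpus frequency and t a threshold); the function names, the boundary-marker choice, and the default threshold are assumptions, not the thesis implementation:

```python
import math
import random
from collections import Counter


def subsample_words(words, threshold=1e-3, seed=0):
    """Phase 1: word2vec-style subsampling on word-sized tokens.

    A word with corpus frequency f is kept with probability
    min(1, sqrt(threshold / f)), so very frequent words are thinned out
    while whole words -- never individual characters -- are removed.
    """
    rng = random.Random(seed)
    counts = Counter(words)
    total = len(words)
    kept = []
    for w in words:
        freq = counts[w] / total
        if rng.random() < min(1.0, math.sqrt(threshold / freq)):
            kept.append(w)
    return kept


def two_phase_char_tokens(text, threshold=1e-3):
    """Phase 2: split the surviving words into character-sized tokens."""
    kept = subsample_words(text.split(), threshold=threshold)
    chars = []
    for w in kept:
        chars.extend(w)
        chars.append(" ")  # preserve word boundaries in the character stream
    return chars
```

Because the drop decision is made before the split into characters, special characters inside a surviving word are kept intact, which is exactly the property the original character-level subsampling lost.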

Future work applying the modified subsampling and fine-tuning the hyperparameters is suggested, to reach a clearer conclusion about the performance of character-based language models.

Place, publisher, year, edition, pages
2019, p. 49
Keywords [en]
Natural Language Processing, NLP, Machine Learning, ML, Neural Network, Caption Generation, Deep Learning, Recurrent Neural Network, Long-Short-Term-Memory, LSTM, word2vec, Language Model
National Category
Language Technology (Computational Linguistics)
Identifiers
URN: urn:nbn:se:liu:diva-163001
ISRN: LIU-IDA/LITH-EX-A--19/095--SE
OAI: oai:DiVA.org:liu-163001
DiVA, id: diva2:1383356
Subject / course
Computer science
Presentation
2019-11-26, Alan Turing, Linköpings Universitet, Linköping, 13:15 (English)
Supervisors
Examiners
Available from: 2020-01-09. Created: 2020-01-07. Last updated: 2020-01-09. Bibliographically approved.

Open Access in DiVA

fulltext (1515 kB), 9 downloads
File information
File name: FULLTEXT01.pdf
File size: 1515 kB
Checksum: SHA-512
8ee38a7071963124f495921eb39614582ccd05e693a86c0cd37f0fafef6ceac11210d96b378e6f0a19a4811e272fde5148a4b093e73b40498f5e7c4dcb0ec339
Type: fulltext
Mimetype: application/pdf

By author/editor: Keisala, Simon
By organisation: Human-Centered systems

