Numeric Reward Machines
Linköping University, Department of Computer and Information Science, Artificial Intelligence and Integrated Computer Systems; Linköping University, Faculty of Science & Engineering; Ericsson Research, Stockholm, Sweden (Machine Reasoning). ORCID iD: 0009-0008-3959-3508
Linköping University, Department of Computer and Information Science, Database and information techniques; Linköping University, Faculty of Science & Engineering. ORCID iD: 0000-0003-4416-7702
Ericsson Research, Stockholm, Sweden.
Ericsson Research, Stockholm, Sweden.
2024 (English). In: Workshop on Bridging the Gap Between AI Planning and Reinforcement Learning, 2024. Conference paper, published paper (refereed).
Abstract [en]

Reward machines inform reinforcement learning agents about the reward structure of the environment and often drastically speed up the learning process. However, reward machines accept only Boolean features such as robot-reached-gold. Consequently, many inherently numeric tasks cannot benefit from the guidance offered by reward machines. To address this gap, we extend reward machines with numeric features such as distance-to-gold. We present two types of reward machines: numeric-Boolean and numeric. In a numeric-Boolean reward machine, distance-to-gold is emulated by two Boolean features, distance-to-gold-decreased and robot-reached-gold. In a numeric reward machine, distance-to-gold is used directly alongside the Boolean feature robot-reached-gold. We compare both new approaches to a baseline reward machine in the Craft domain, where the numeric feature is the agent-to-target distance. For learning, we use cross-product Q-learning, Q-learning with counterfactual experiences, and the options framework. Our experimental results show that both new approaches significantly outperform the baseline. Extending reward machines with numeric features thus opens up new possibilities for using reward machines in inherently numeric tasks.
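The contrast the abstract draws can be illustrated with a minimal sketch. The class names, the toy 1-D "reach the gold" setting, and the concrete reward values (1.0 on reaching the gold, 0.1 as a shaping bonus) are all assumptions for illustration, not the authors' implementation:

```python
# Illustrative sketch (assumed, not the paper's code) of the three
# reward-machine variants: a Boolean baseline, a numeric-Boolean
# variant, and a numeric variant, in a toy 1-D reach-the-gold task.

class BooleanRM:
    """Baseline: rewards only the Boolean event robot-reached-gold."""
    def reward(self, prev_dist: float, dist: float) -> float:
        return 1.0 if dist == 0 else 0.0


class NumericBooleanRM:
    """Emulates distance-to-gold with two Boolean features:
    distance-to-gold-decreased and robot-reached-gold."""
    def reward(self, prev_dist: float, dist: float) -> float:
        if dist == 0:
            return 1.0
        return 0.1 if dist < prev_dist else 0.0  # assumed shaping bonus


class NumericRM:
    """Uses distance-to-gold directly alongside robot-reached-gold."""
    def reward(self, prev_dist: float, dist: float) -> float:
        if dist == 0:
            return 1.0
        return prev_dist - dist  # dense signal from the numeric feature
```

Under this reading, the baseline gives no feedback until the goal event fires, while the two extended variants reward progress toward the gold, either as a Boolean "distance decreased" event or as the raw change in distance.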

Place, publisher, year, edition, pages
2024.
Keywords [en]
Reward Machine, Reinforcement Learning, Numeric Feature, Artificial Intelligence, WASP
National Category
Computer Sciences
Identifiers
URN: urn:nbn:se:liu:diva-208331
OAI: oai:DiVA.org:liu-208331
DiVA, id: diva2:1904177
Conference
The 34th International Conference on Automated Planning and Scheduling (ICAPS 2024), Banff, Alberta, Canada, June 1-6, 2024
Funder
Wallenberg AI, Autonomous Systems and Software Program (WASP), 310129
Available from: 2024-10-08. Created: 2024-10-08. Last updated: 2024-10-18. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

https://prl-theworkshop.github.io/prl2024-icaps/papers/16.pdf

Authority records

Levina, Kristina; Pappas, Nikolaos; Seipp, Jendrik
