Inlärning i Emotional Behavior Networks: Online Unsupervised Reinforcement Learning i kontinuerliga domäner
Independent thesis, Advanced level (degree of Master, Two Years), 20 credits / 30 HE credits. Student thesis. Alternative title:
Learning in Emotional Behavior Networks: Online Unsupervised Reinforcement Learning in Continuous Domains (English)
The largest project at the AICG lab at Linköping University, Cognitive models for virtual characters, focuses on creating an agent architecture for intelligent, virtual characters. The goal is to create an agent that acts naturally and gives a realistic user experience. The purpose of this thesis is to develop and implement an appropriate learning model that fits the existing agent architecture, using an agile project methodology. The model developed can be seen as an online unsupervised reinforcement learning model that reinforces experiences through reward. The model is based on Maes' model, where new effects are created depending on whether the agent is fulfilling its goals or not.
The model we have developed is based on constant monitoring of the system. When an action is chosen, it is saved in a short-term memory. This memory is continually updated with current information about the environment and the agent's state. These memories are evaluated against user-defined classes that specify which values a memory must satisfy to be considered successful. Once the oldest memory in the list can be evaluated, it is moved to a long-term memory. This long-term memory continually serves as the basis for how the agent's network is structured. The long-term memory is filtered based on where the agent is, how it feels, and its current state.
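The memory pipeline above can be sketched as follows. This is a minimal illustration, not the thesis implementation: the class names, the predicate-based success classes, and the `location`/`emotion` fields are assumptions for the sake of the example.

```python
from dataclasses import dataclass


@dataclass
class Memory:
    action: str
    state: dict          # snapshot of environment and agent values
    reward: float = 0.0


class LearningModel:
    """Sketch of the short-term / long-term memory loop."""

    def __init__(self, classes):
        # User-defined classes, here modeled as success predicates.
        self.classes = classes
        self.short_term = []
        self.long_term = []

    def record(self, action, state):
        # Every chosen action is saved in short-term memory.
        self.short_term.append(Memory(action, dict(state)))

    def update(self, state):
        # Short-term memories are refreshed with current information.
        for m in self.short_term:
            m.state.update(state)

    def evaluate(self):
        # Once the oldest memory can be judged against the success
        # classes, it is rewarded and moved to long-term memory.
        if self.short_term:
            oldest = self.short_term[0]
            if all(cls(oldest.state) for cls in self.classes):
                oldest.reward = 1.0
            self.long_term.append(self.short_term.pop(0))

    def relevant(self, location, emotion):
        # Long-term memory is filtered on location and emotional
        # state before it shapes the behavior network.
        return [m for m in self.long_term
                if m.state.get("location") == location
                and m.state.get("emotion") == emotion]
```

A network-building step would then draw only on `relevant(...)` memories, which is why the same agent can exhibit different network structures in different situations.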
Our model is evaluated in a series of tests that measure the agent's ability to adapt and how repetitive its behavior is.
In reality, a learning agent will acquire a dynamic network based on input from the user, but after a short period the network may look completely different, depending on the situations the agent has experienced and where it has been. An agent will have one network structure in the vicinity of food at location x and a completely different structure near an enemy at location y. If the agent enters a new situation where past experience does not favor it, it will explore all possible actions it can take, thus creating new experiences.
A comparison with an implementation without classification and learning indicates that the user needs to create fewer classes than the effects that would otherwise be needed to cover all possible combinations: K_S + K_B classes create effects for S * B state/behavior combinations, where K_S and K_B are the numbers of state classes and behavior classes, and S and B are the numbers of states and behaviors in the network.
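The saving can be made concrete with a small worked example; the sizes below are hypothetical and only illustrate the K_S + K_B versus S * B count.

```python
# Without classes, every state/behavior pair needs a hand-authored effect.
S, B = 10, 8                      # hypothetical network sizes
effects_without_classes = S * B   # 10 * 8 = 80 effects

# With classification, states and behaviors are grouped into classes,
# and only the classes need to be defined by the user.
K_S, K_B = 4, 3                   # hypothetical class counts
classes_needed = K_S + K_B        # 4 + 3 = 7 classes

# The 7 classes still generate effects covering all 80 combinations.
print(effects_without_classes, classes_needed)
```

The gap widens as the network grows, since the class count scales additively while the effect count scales multiplicatively.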
Place, publisher, year, edition, pages
2010, 100 p.
Keywords [en]: learning, emotional, behavior networks, online, reinforcement, unsupervised, AI
Keywords [sv]: inlärning, beteendenätverk, förstärkt lärande, AI
Subject categories: Computer Science; Other Computer and Information Science
Identifiers: URN: urn:nbn:se:liu:diva-54442; ISRN: LIU-ITN-TEK--A--10/014--SE; OAI: oai:DiVA.org:liu-54442; DiVA: diva2:303878
2010-03-05, K52, Linköpings Universitet, Norrköping, 13:15 (Swedish)
Dell'Acqua, Pierangelo, univ. lektor (senior lecturer)
Projects: Cognitive models for virtual characters