liu.se: Search for publications in DiVA
Publications (10 of 83)
Tiger, M. & Heintz, F. (2020). Incremental Reasoning in Probabilistic Signal Temporal Logic. International Journal of Approximate Reasoning, 119, 325-352, Article ID j.ijar.2020.01.009.
2020 (English) In: International Journal of Approximate Reasoning, ISSN 0888-613X, E-ISSN 1873-4731, Vol. 119, p. 325-352, article id j.ijar.2020.01.009. Article in journal (Refereed). Published
Abstract [en]

Robot safety is of growing concern given recent developments in intelligent autonomous systems. For complex agents operating in uncertain, complex and rapidly-changing environments it is difficult to guarantee safety without imposing unrealistic assumptions and restrictions. It is therefore necessary to complement traditional formal verification with monitoring of the running system after deployment. Runtime verification can be used to monitor that an agent behaves according to a formal specification. The specification can contain safety-related requirements and assumptions about the environment, environment-agent interactions and agent-agent interactions. A key problem is the uncertain and changing nature of the environment. This necessitates requirements on how probable a certain outcome is and on predictions of future states. We propose Probabilistic Signal Temporal Logic (ProbSTL) by extending Signal Temporal Logic with a sub-language to allow statements over probabilities, observations and predictions. We further introduce and prove the correctness of the incremental stream reasoning technique progression over well-formed formulas in ProbSTL. Experimental evaluations demonstrate the applicability and benefits of ProbSTL for robot safety.
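The progression technique summarized in the abstract can be illustrated with a minimal sketch. The class names and the restriction to propositional "always" formulas are assumptions made here for illustration; the paper's ProbSTL progression additionally handles probabilities, observations and predictions.

```python
# Minimal sketch of formula progression: each incoming state rewrites the
# formula so that only the remaining obligation is kept.
from dataclasses import dataclass

class Formula: pass

@dataclass
class Top(Formula): pass       # trivially true
@dataclass
class Bot(Formula): pass       # trivially false (violation)
@dataclass
class Prop(Formula):
    name: str
@dataclass
class And(Formula):
    left: Formula
    right: Formula
@dataclass
class Always(Formula):
    sub: Formula

def progress(f: Formula, state: set) -> Formula:
    """Progress formula f through one state (a set of true propositions)."""
    if isinstance(f, (Top, Bot)):
        return f
    if isinstance(f, Prop):
        return Top() if f.name in state else Bot()
    if isinstance(f, And):
        l, r = progress(f.left, state), progress(f.right, state)
        if isinstance(l, Bot) or isinstance(r, Bot):
            return Bot()
        if isinstance(l, Top):
            return r
        if isinstance(r, Top):
            return l
        return And(l, r)
    if isinstance(f, Always):
        now = progress(f.sub, state)
        if isinstance(now, Bot):
            return Bot()       # violation detected at this time point
        if isinstance(now, Top):
            return f           # obligation continues unchanged
        return And(now, f)
    raise TypeError(f)

# Monitor "always safe" over a stream of states:
phi = Always(Prop("safe"))
for state in [{"safe"}, {"safe"}, set()]:
    phi = progress(phi, state)
print(type(phi).__name__)  # Bot: the property was violated on the third state
```

The key property, which the paper proves for well-formed ProbSTL formulas, is that the progressed formula is equivalent to the original one evaluated over the remainder of the stream.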

Place, publisher, year, edition, pages
Elsevier, 2020
Keywords
Knowledge representation, Stream reasoning, Incremental reasoning, Probabilistic logic, Temporal logic, Runtime verification
National Category
Computer Sciences
Identifiers
urn:nbn:se:liu:diva-163327 (URN); 10.1016/j.ijar.2020.01.009 (DOI); 000517653700018
Funder
Wallenberg AI, Autonomous Systems and Software Program (WASP); CUGS (National Graduate School in Computer Science); Swedish Research Council; ELLIIT - The Linköping‐Lund Initiative on IT and Mobile Communications; Linnaeus research environment CADICS
Note

Funding agencies: National Graduate School in Computer Science, Sweden (CUGS); Swedish Research Council (VR) Linnaeus Center CADICS; ELLIIT Excellence Center at Linköping-Lund for Information Technology; Wallenberg AI, Autonomous Systems and Software Program (WASP)

Available from: 2020-01-31 Created: 2020-01-31 Last updated: 2020-03-29
de Leng, D. & Heintz, F. (2019). Approximate Stream Reasoning with Metric Temporal Logic under Uncertainty. In: Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence (AAAI). Paper presented at AAAI Conference on Artificial Intelligence (AAAI) (pp. 2760-2767). Palo Alto: AAAI Press
2019 (English) In: Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence (AAAI), Palo Alto: AAAI Press, 2019, p. 2760-2767. Conference paper, Published paper (Refereed)
Abstract [en]

Stream reasoning can be defined as incremental reasoning over incrementally-available information. The formula progression procedure for Metric Temporal Logic (MTL) makes use of syntactic formula rewritings to incrementally evaluate formulas against incrementally-available states. Progression however assumes complete state information, which can be problematic when not all state information is available or can be observed, such as in qualitative spatial reasoning tasks or in robotics applications. In those cases, there may be uncertainty as to which state out of a set of possible states represents the ‘true’ state. The main contribution of this paper is therefore an extension of the progression procedure that efficiently keeps track of all consistent hypotheses. The resulting procedure is flexible, allowing a trade-off between faster but approximate and slower but precise progression under uncertainty. The proposed approach is empirically evaluated by considering the time and space requirements, as well as the impact of permitting varying degrees of uncertainty.
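The core idea of tracking all consistent hypotheses can be sketched in a few lines. The reduction of formulas to the two statuses "alive"/"violated" and the function names are simplifications for illustration; the paper progresses full MTL formulas and studies the approximate/precise trade-off.

```python
# Sketch of progression under state uncertainty: when the true state is only
# known to lie in a set of possible states, every live hypothesis is
# progressed against every candidate state and all distinct outcomes are kept.

def progress_always(prop: str, state: frozenset) -> str:
    """Progress 'always prop' through one concrete state: the obligation
    either survives ('alive') or is violated ('violated')."""
    return "alive" if prop in state else "violated"

def progress_uncertain(hypotheses: set, prop: str, possible_states) -> set:
    """Progress each hypothesis against each possible state, collecting
    all consistent outcomes."""
    out = set()
    for h in hypotheses:
        if h == "violated":
            out.add("violated")   # a violation can never be undone
            continue
        for s in possible_states:
            out.add(progress_always(prop, s))
    return out

# Observation 1 is certain; observation 2 is ambiguous (e.g. sensor dropout):
hyps = {"alive"}
hyps = progress_uncertain(hyps, "safe", [frozenset({"safe"})])
hyps = progress_uncertain(hyps, "safe", [frozenset({"safe"}), frozenset()])
print(sorted(hyps))  # ['alive', 'violated']: both outcomes remain consistent
```

The approximate variant in the paper bounds the number of hypotheses kept, trading precision for speed.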

Place, publisher, year, edition, pages
Palo Alto: AAAI Press, 2019
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:liu:diva-153444 (URN); 000485292602095
Conference
AAAI Conference on Artificial Intelligence (AAAI)
Funder
CUGS (National Graduate School in Computer Science)
Available from: 2018-12-17 Created: 2018-12-17 Last updated: 2019-10-24
Selin, M., Tiger, M., Duberg, D., Heintz, F. & Jensfelt, P. (2019). Efficient Autonomous Exploration Planning of Large Scale 3D-Environments [Letter to the editor]. IEEE Robotics and Automation Letters
2019 (English) In: IEEE Robotics and Automation Letters, ISSN 2377-3766, E-ISSN 1949-3045. Article in journal, Letter (Refereed). Epub ahead of print
Abstract [en]

Exploration is an important aspect of robotics, whether it is for mapping, rescue missions or path planning in an unknown environment. Frontier Exploration planning (FEP) and Receding Horizon Next-Best-View planning (RH-NBVP) are two different approaches with different strengths and weaknesses. FEP explores a large environment consisting of separate regions with ease, but is slow at reaching full exploration due to moving back and forth between regions. RH-NBVP shows great potential and efficiently explores individual regions, but has the disadvantage that it can get stuck in large environments not exploring all regions. In this work we present a method that combines both approaches, with FEP as a global exploration planner and RH-NBVP for local exploration. We also present techniques to estimate potential information gain faster, to cache previously estimated gains and to exploit these to efficiently estimate new queries.
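The hybrid strategy described above can be sketched as a simple decision rule: plan locally with a next-best-view criterion and fall back to global frontier exploration when local gain stalls. The grid, the gain model and the threshold below are toy assumptions, not the paper's implementation.

```python
import math

def local_gain(pos, unknown):
    """Toy information gain: number of unknown cells within sensor range."""
    return sum(1 for c in unknown if math.dist(pos, c) <= 1.5)

def nearest_frontier(pos, frontiers):
    """Global step (FEP-style): go to the closest frontier cell."""
    return min(frontiers, key=lambda f: math.dist(pos, f))

def choose_goal(pos, candidates, unknown, frontiers, min_gain=1):
    """Prefer the best local view (RH-NBVP-style); escape to a frontier
    when no local candidate yields enough gain."""
    best = max(candidates, key=lambda c: local_gain(c, unknown))
    if local_gain(best, unknown) >= min_gain:
        return ("local", best)
    return ("global", nearest_frontier(pos, frontiers))

unknown = {(5.0, 5.0), (5.0, 6.0)}    # unexplored region far away
frontiers = {(4.0, 5.0)}              # boundary of the known map
print(choose_goal((0.0, 0.0), [(1.0, 0.0), (0.0, 1.0)], unknown, frontiers))
# ('global', (4.0, 5.0)): no local gain left, so jump to the frontier
```

This illustrates why the combination avoids both failure modes: the local planner explores a region efficiently, and the global fallback prevents getting stuck once the region is exhausted.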

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2019
Keywords
Search and Rescue Robots, Motion and Path Planning, Mapping
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:liu:diva-154335 (URN); 10.1109/LRA.2019.2897343 (DOI)
Projects
FACT (SSF); WASP
Funder
Swedish Foundation for Strategic Research; Wallenberg AI, Autonomous Systems and Software Program (WASP)
Available from: 2019-02-05 Created: 2019-02-05 Last updated: 2019-03-06. Bibliographically approved
Källström, J. & Heintz, F. (2019). Multi-Agent Multi-Objective Deep Reinforcement Learning for Efficient and Effective Pilot Training. In: Ingo Staack and Petter Krus (Ed.), Proceedings of the 10th Aerospace Technology Congress. Paper presented at FT2019, the 10th Aerospace Technology Congress, October 8-9, 2019, Stockholm, Sweden (pp. 101-111).
2019 (English) In: Proceedings of the 10th Aerospace Technology Congress / [ed] Ingo Staack and Petter Krus, 2019, p. 101-111. Conference paper, Published paper (Refereed)
Abstract [en]

The tactical systems and operational environment of modern fighter aircraft are becoming increasingly complex. Creating a realistic and relevant environment for pilot training using only live aircraft is difficult, impractical and highly expensive. The Live, Virtual and Constructive (LVC) simulation paradigm aims to address this challenge. LVC simulation means linking real aircraft, ground-based systems and soldiers (Live), manned simulators (Virtual) and computer-controlled synthetic entities (Constructive). Constructive simulation enables realization of complex scenarios with a large number of autonomous friendly, hostile and neutral entities, which interact with each other as well as manned simulators and real systems. This reduces the need for personnel to act as role-players through operation of e.g. live or virtual aircraft, thus lowering the cost of training. Constructive simulation also makes it possible to improve the availability of training by embedding simulation capabilities in live aircraft, making it possible to train anywhere, anytime. In this paper we discuss how machine learning techniques can be used to automate the process of constructing advanced, adaptive behavior models for constructive simulations, to improve the autonomy of future training systems. We conduct a number of initial experiments, and show that reinforcement learning, in particular multi-agent and multi-objective deep reinforcement learning, allows synthetic pilots to learn to cooperate and prioritize among conflicting objectives in air combat scenarios. Though the results are promising, we conclude that further algorithm development is necessary to fully master the complex domain of air combat simulation.

Series
Linköping Electronic Conference Proceedings, ISSN 1650-3686, E-ISSN 1650-3740 ; 162
Keywords
Pilot Training, Embedded Training, LVC Simulation, Artificial Intelligence, Autonomy, Sub-system and System Technology, Aircraft and Spacecraft System Analysis
National Category
Computer Sciences
Identifiers
urn:nbn:se:liu:diva-161707 (URN); 10.3384/ecp19162011 (DOI); 978-91-7519-006-8 (ISBN)
Conference
FT2019. Proceedings of the 10th Aerospace Technology Congress, October 8-9, 2019, Stockholm, Sweden
Funder
Vinnova, 2017-04885; Wallenberg AI, Autonomous Systems and Software Program (WASP)
Available from: 2019-11-07 Created: 2019-11-07 Last updated: 2019-11-07
Källström, J. & Heintz, F. (2019). Reinforcement Learning for Computer Generated Forces using Open-Source Software. In: Proceedings of the 2019 Interservice/Industry Training, Simulation, and Education Conference. Paper presented at Interservice/Industry Training, Simulation, and Education Conference, December 2-6, 2019, Orlando, USA (pp. 1-11), Article ID 19197.
2019 (English) In: Proceedings of the 2019 Interservice/Industry Training, Simulation, and Education Conference, 2019, p. 1-11, article id 19197. Conference paper, Published paper (Refereed)
Abstract [en]

The creation of behavior models for computer generated forces (CGF) is a challenging and time-consuming task, which often requires expertise in programming of complex artificial intelligence algorithms. This makes it difficult for a subject matter expert with knowledge about the application domain and the training goals to build relevant scenarios and keep the training system in pace with training needs. In recent years, machine learning has shown promise as a method for building advanced decision-making models for synthetic agents. Such agents have been able to beat human champions in complex games such as poker, Go and StarCraft. There is reason to believe that similar achievements are possible in the domain of military simulation. However, in order to efficiently apply these techniques, it is important to have access to the right tools, as well as knowledge about the capabilities and limitations of algorithms.   

This paper discusses efficient applications of deep reinforcement learning, a machine learning technique that allows synthetic agents to learn how to achieve their goals by interacting with their environment. We begin by giving an overview of available open-source frameworks for deep reinforcement learning, as well as libraries with reference implementations of state-of-the-art algorithms. We then present an example of how these resources were used to build a reinforcement learning environment for CGF software intended to support training of fighter pilots. Finally, based on our exploratory experiments in the presented environment, we discuss opportunities and challenges related to the application of reinforcement learning techniques in the domain of air combat training systems, with the aim to efficiently construct high-quality behavior models for computer generated forces.
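The kind of environment wrapper described above can be sketched as a minimal class following the reset/step convention of the OpenAI Gym API. The dynamics below are a stand-in toy, not the actual CGF software, and the class name is invented for illustration.

```python
class ToyInterceptEnv:
    """Agent moves on a line and is rewarded for closing on a target."""

    def __init__(self, target=10, horizon=20):
        self.target, self.horizon = target, horizon

    def reset(self):
        self.pos, self.t = 0, 0
        return self.pos

    def step(self, action):            # action in {-1, 0, +1}
        before = abs(self.target - self.pos)
        self.pos += action
        self.t += 1
        reward = before - abs(self.target - self.pos)  # reward = progress made
        done = self.pos == self.target or self.t >= self.horizon
        return self.pos, reward, done, {}

env = ToyInterceptEnv()
obs, total = env.reset(), 0
done = False
while not done:
    obs, r, done, _ = env.step(1)      # a trivial "always advance" policy
    total += r
print(total)  # 10: the full distance to the target is closed within the horizon
```

Wrapping a simulator behind this small interface is what lets off-the-shelf reinforcement learning libraries train against it unchanged.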

Keywords
Pilot Training, Computer Generated Forces, Machine Learning, Reinforcement Learning
National Category
Computer Sciences
Identifiers
urn:nbn:se:liu:diva-162589 (URN)
Conference
Interservice/Industry Training, Simulation, and Education Conference, December 2-6, 2019, Orlando, USA
Funder
Vinnova, 2017-04885; Wallenberg AI, Autonomous Systems and Software Program (WASP)
Available from: 2019-12-09 Created: 2019-12-09 Last updated: 2019-12-10
Källström, J. & Heintz, F. (2019). Tunable Dynamics in Agent-Based Simulation using Multi-Objective Reinforcement Learning. Paper presented at Adaptive and Learning Agents Workshop (ALA-19) at AAMAS, Montreal, Canada, May 13-14, 2019 (pp. 1-7).
2019 (English) Conference paper, Oral presentation only (Refereed)
Abstract [en]

Agent-based simulation is a powerful tool for studying complex systems of interacting agents. To achieve good results, the behavior models used for the agents must be of high quality. Traditionally these models have been handcrafted by domain experts. This is a difficult, expensive and time-consuming process. In contrast, reinforcement learning allows agents to learn how to achieve their goals by interacting with the environment. However, after training, the behavior of such agents is often static, i.e. it can no longer be affected by a human. This makes it difficult to adapt agent behavior to specific user needs, which may vary among different runs of the simulation. In this paper we address this problem by studying how multi-objective reinforcement learning can be used as a framework for building tunable agents, whose characteristics can be adjusted at runtime to promote adaptiveness and diversity in agent-based simulation. We propose an agent architecture that allows us to adapt popular deep reinforcement learning algorithms to multi-objective environments. We empirically show that our method allows us to train tunable agents that can approximate the policies of multiple species of agents.
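One common way to obtain tunable behavior with multi-objective reinforcement learning is to learn a vector of per-objective values and scalarize with a preference weight supplied at runtime; the sketch below shows that selection step. The hand-made Q-table and action names are assumptions for illustration; the paper learns such values with deep reinforcement learning.

```python
# Q[action] = (value for "aggressive" objective, value for "cautious" objective)
Q = {
    "engage":  (8.0, 1.0),
    "retreat": (1.0, 7.0),
    "patrol":  (4.0, 4.0),
}

def act(weights):
    """Pick the action maximizing the weighted sum of per-objective values."""
    return max(Q, key=lambda a: sum(w * q for w, q in zip(weights, Q[a])))

print(act((0.9, 0.1)))  # engage: the aggressive objective dominates
print(act((0.1, 0.9)))  # retreat: the same agent, retuned at runtime
```

The point of the architecture is that the weights are an input, not part of training, so one trained agent can exhibit a whole family of behaviors.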

Keywords
Modelling for agent based simulation, Reward structures for learning, Learning agent capabilities (agent models, communication, observation)
National Category
Computer Sciences
Identifiers
urn:nbn:se:liu:diva-161093 (URN)
Conference
Adaptive and Learning Agents Workshop (ALA-19) at AAMAS, Montreal, Canada, May 13-14, 2019
Funder
Vinnova, 2017-04885; Wallenberg AI, Autonomous Systems and Software Program (WASP)
Available from: 2019-10-22 Created: 2019-10-22 Last updated: 2019-10-23. Bibliographically approved
Präntare, F. & Heintz, F. (2018). An Anytime Algorithm for Simultaneous Coalition Structure Generation and Assignment. In: Tim Miller, Nir Oren, Yuko Sakurai, Itsuki Noda, Bastin Tony Roy Savarimuthu and Tran Cao Son (Ed.), PRIMA 2018: Principles and Practice of Multi-Agent Systems: 21st International Conference, Tokyo, Japan, October 29-November 2, 2018, Proceedings. Paper presented at PRIMA 2018: Principles and Practice of Multi-Agent Systems: 21st International Conference, Tokyo, Japan, October 29-November 2, 2018 (pp. 158-174). Cham, 11224
2018 (English) In: PRIMA 2018: Principles and Practice of Multi-Agent Systems: 21st International Conference, Tokyo, Japan, October 29-November 2, 2018, Proceedings / [ed] Tim Miller, Nir Oren, Yuko Sakurai, Itsuki Noda, Bastin Tony Roy Savarimuthu and Tran Cao Son, Cham, 2018, Vol. 11224, p. 158-174. Conference paper, Published paper (Refereed)
Abstract [en]

A fundamental problem in artificial intelligence is how to organize and coordinate agents to improve their performance and skills. In this paper, we consider simultaneously generating coalitions of agents and assigning the coalitions to independent tasks, and present an anytime algorithm for the simultaneous coalition structure generation and assignment problem. This optimization problem has many real-world applications, including forming goal-oriented teams of agents. To evaluate the algorithm’s performance, we extend established methods for synthetic problem set generation, and benchmark the algorithm against CPLEX using randomized data sets of varying distribution and complexity. We also apply the algorithm to solve the problem of assigning agents to regions in a major commercial strategy game, and show that the algorithm can be utilized in game-playing to coordinate smaller sets of agents in real-time.
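The search space of the problem can be made concrete with a brute-force sketch: every mapping of agents to tasks induces one coalition per task, scored by a coalition-value function. The value table below is invented for illustration; the paper's contribution is an anytime algorithm that searches this space with bounds and pruning rather than exhaustively.

```python
from itertools import product

agents = ["a1", "a2", "a3"]
tasks = ["attack", "defend"]

def value(coalition: frozenset, task: str) -> int:
    """Toy utility: 'attack' rewards a big team, 'defend' a small one."""
    n = len(coalition)
    return n * n if task == "attack" else max(0, 3 - n)

best_val, best_assignment = float("-inf"), None
for choice in product(tasks, repeat=len(agents)):   # |tasks|^|agents| options
    coalitions = {t: frozenset(a for a, c in zip(agents, choice) if c == t)
                  for t in tasks}
    v = sum(value(coal, t) for t, coal in coalitions.items())
    if v > best_val:
        best_val, best_assignment = v, coalitions

print(best_val)  # 12: all three agents attack (3*3 = 9), empty defence scores 3
```

The exponential loop above is exactly what makes an anytime algorithm attractive: it can report the best assignment found so far at any point and improve it as time allows.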

Place, publisher, year, edition, pages
Cham: Springer, 2018
Series
Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349; 11224. Lecture Notes in Artificial Intelligence; 11224
National Category
Computer Sciences
Identifiers
urn:nbn:se:liu:diva-152438 (URN); 10.1007/978-3-030-03098-8_10 (DOI); 9783030030971 (ISBN); 9783030030988 (ISBN)
Conference
PRIMA 2018: Principles and Practice of Multi-Agent Systems: 21st International Conference, Tokyo, Japan, October 29-November 2, 2018
Available from: 2018-10-31 Created: 2018-10-31 Last updated: 2018-11-20
Heintz, F. & Mannila, L. (2018). Computational Thinking for All - An Experience Report on Scaling up Teaching Computational Thinking to All Students in a Major City in Sweden. In: Proceedings of the 49th ACM Technical Symposium on Computer Science Education (SIGCSE). Paper presented at ACM Technical Symposium on Computer Science Education (SIGCSE), Baltimore, Maryland, USA, February 21-24, 2018.
2018 (English) In: Proceedings of the 49th ACM Technical Symposium on Computer Science Education (SIGCSE), 2018. Conference paper, Published paper (Refereed)
National Category
Computer Sciences
Identifiers
urn:nbn:se:liu:diva-141853 (URN)
Conference
ACM Technical Symposium on Computer Science Education (SIGCSE), Baltimore, Maryland, USA, February 21-24, 2018
Funder
VINNOVA
Available from: 2017-10-09 Created: 2017-10-09 Last updated: 2018-04-03. Bibliographically approved
Tiger, M. & Heintz, F. (2018). Gaussian Process Based Motion Pattern Recognition with Sequential Local Models. In: 2018 IEEE Intelligent Vehicles Symposium (IV). Paper presented at Intelligent Vehicles Symposium 2018, 26-30 June 2018, Changshu, China (pp. 1143-1149). Institute of Electrical and Electronics Engineers (IEEE)
2018 (English) In: 2018 IEEE Intelligent Vehicles Symposium (IV), Institute of Electrical and Electronics Engineers (IEEE), 2018, p. 1143-1149. Conference paper, Published paper (Refereed)
Abstract [en]

Conventional trajectory-based vehicular traffic analysis approaches work well in simple environments such as a single crossing but they do not scale to more structurally complex environments such as networks of interconnected crossings (e.g. urban road networks). Local trajectory models are necessary to cope with the multi-modality of such structures, which in turn introduces new challenges. These larger and more complex environments increase the occurrences of non-consistent lack of motion and self-overlaps in observed trajectories, which impose further challenges. In this paper we consider the problem of motion pattern recognition in the setting of sequential local motion pattern models. That is, classifying sub-trajectories of observed trajectories according to the motion pattern that best explains them. We introduce a Gaussian process (GP) based modeling approach which outperforms the state-of-the-art GP based motion pattern approaches at this task. We investigate the impact of varying local model overlap and the length of the observed trajectory trace on the classification quality. We further show that introducing a pre-processing step filtering out stops from the training data significantly improves the classification performance. The approach is evaluated using real GPS position data from city buses driving in urban areas for extended periods of time.
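The classification step can be sketched independently of the GP machinery: score an observed sub-trajectory under each motion-pattern model and pick the most likely one. Reducing each pattern to a mean function with fixed Gaussian noise, and the pattern names themselves, are assumptions for illustration; the paper uses full Gaussian process models.

```python
import math

patterns = {
    "straight":   lambda t: 0.0,       # mean lateral offset over time
    "right_turn": lambda t: 0.5 * t,
}

def log_likelihood(traj, mean_fn, sigma=0.3):
    """Independent Gaussian residuals around the pattern's mean function."""
    return sum(-0.5 * ((y - mean_fn(t)) / sigma) ** 2
               - math.log(sigma * math.sqrt(2 * math.pi))
               for t, y in traj)

def classify(traj):
    """Pick the pattern under which the observed trace is most likely."""
    return max(patterns, key=lambda p: log_likelihood(traj, patterns[p]))

observed = [(0, 0.05), (1, 0.55), (2, 0.95)]   # drifting right over time
print(classify(observed))  # right_turn
```

With GP models in place of the fixed mean functions, the same maximum-likelihood comparison also accounts for pattern-specific uncertainty, which is what the sequential local models refine.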

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2018
Series
IEEE Intelligent Vehicles Symposium, ISSN 1931-0587 ; 2018
Keywords
Motion Pattern Recognition, Situation Analysis and Planning, Traffic Flow and Management, Vision Sensing and Perception, Autonomous Driving
National Category
Computer Sciences; Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:liu:diva-148724 (URN); 10.1109/IVS.2018.8500676 (DOI); 9781538644522 (ISBN); 9781538644515 (ISBN); 9781538644539 (ISBN)
Conference
Intelligent Vehicles Symposium 2018, 26-30 June 2018, Changshu, China
Projects
CUGS; VR; CADICS; ELLIIT; WASP
Funder
CUGS (National Graduate School in Computer Science)
Available from: 2018-06-18 Created: 2018-06-18 Last updated: 2020-03-10. Bibliographically approved
Andersson, O., Ljungqvist, O., Tiger, M., Axehill, D. & Heintz, F. (2018). Receding-Horizon Lattice-based Motion Planning with Dynamic Obstacle Avoidance. In: 2018 IEEE Conference on Decision and Control (CDC). Paper presented at 2018 IEEE 57th Annual Conference on Decision and Control (CDC), 17-19 December, Miami, Florida, USA (pp. 4467-4474). Institute of Electrical and Electronics Engineers (IEEE)
2018 (English) In: 2018 IEEE Conference on Decision and Control (CDC), Institute of Electrical and Electronics Engineers (IEEE), 2018, p. 4467-4474. Conference paper, Published paper (Refereed)
Abstract [en]

A key requirement of autonomous vehicles is the capability to safely navigate in their environment. However, outside of controlled environments, safe navigation is a very difficult problem. In particular, the real world often contains both complex 3D structure and dynamic obstacles such as people or other vehicles. Dynamic obstacles are particularly challenging, as a principled solution requires planning trajectories with regard to both vehicle dynamics and the motion of the obstacles. Additionally, the real-time requirements imposed by obstacle motion, coupled with real-world computational limitations, make classical optimality and completeness guarantees difficult to satisfy. We present a unified optimization-based motion planning and control solution that can navigate in the presence of both static and dynamic obstacles. By combining optimal and receding-horizon control with temporal multi-resolution lattices, we can precompute optimal motion primitives and allow real-time planning of physically-feasible trajectories in complex environments with dynamic obstacles. We demonstrate the framework by solving difficult indoor 3D quadcopter navigation scenarios, where it is necessary to plan in time, including waiting on, and taking detours around, the motions of other people and quadcopters.
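Planning "in time" over precomputed motion primitives can be sketched as a search over a space-time lattice: because time is a state dimension, waiting for a dynamic obstacle to pass becomes an ordinary edge. The 1-D world, the two primitives and the uniform time cost are toy assumptions; the paper uses dynamically feasible primitives for a quadcopter.

```python
import heapq

primitives = [(+1, 1), (0, 1)]      # (dx, dt): move forward, or wait in place

def obstacle_at(x, t):
    """A dynamic obstacle occupies cell 2 during t = 1..2."""
    return x == 2 and 1 <= t <= 2

def plan(start, goal, max_t=10):
    """Dijkstra over the space-time lattice; cost = elapsed time."""
    frontier = [(0, start, [])]     # (time, position, path so far)
    seen = set()
    while frontier:
        t, x, path = heapq.heappop(frontier)
        if x == goal:
            return path
        if (x, t) in seen or t > max_t:
            continue
        seen.add((x, t))
        for dx, dt in primitives:
            nx, nt = x + dx, t + dt
            if not obstacle_at(nx, nt):
                heapq.heappush(frontier, (nt, nx, path + [(nx, nt)]))
    return None

print(plan(0, 3))  # the plan waits until the obstacle clears cell 2, then passes
```

A purely spatial planner would declare cell 2 blocked and fail or detour; the space-time lattice instead finds the earliest-arrival plan that waits one step and reaches the goal at t = 4.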

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2018
Series
Conference on Decision and Control (CDC), ISSN 2576-2370 ; 2018
Keywords
Motion Planning, Optimal Control, Autonomous System, UAV, Dynamic Obstacle Avoidance, Robotics
National Category
Control Engineering
Identifiers
urn:nbn:se:liu:diva-152131 (URN); 10.1109/CDC.2018.8618964 (DOI); 9781538613955 (ISBN); 9781538613948 (ISBN); 9781538613962 (ISBN)
Conference
2018 IEEE 57th Annual Conference on Decision and Control (CDC), 17-19 December, Miami, Florida, USA
Funder
VINNOVA; Knut and Alice Wallenberg Foundation; Swedish Foundation for Strategic Research; ELLIIT - The Linköping‐Lund Initiative on IT and Mobile Communications; Swedish Research Council; Linnaeus research environment CADICS; CUGS (National Graduate School in Computer Science)
Note

This work was partially supported by FFI/VINNOVA, the Wallenberg Artificial Intelligence, Autonomous Systems and Software Program (WASP) funded by Knut and Alice Wallenberg Foundation, the Swedish Foundation for Strategic Research (SSF) project Symbicloud, the ELLIIT Excellence Center at Linköping-Lund for Information Technology, Swedish Research Council (VR) Linnaeus Center CADICS, and the National Graduate School in Computer Science, Sweden (CUGS).

Available from: 2018-10-18 Created: 2018-10-18 Last updated: 2020-03-26. Bibliographically approved
Organisations
Identifiers
ORCID iD: orcid.org/0000-0002-9595-2471
