Safe Reinforcement Learning via a Model-Free Safety Certifier
Sharif University of Technology, Iran.
Sharif University of Technology, Iran.
Michigan State University, MI 48863 USA.
Linköping University, Department of Electrical Engineering, Automatic Control. Linköping University, Faculty of Science & Engineering. ORCID iD: 0000-0002-6665-5881
2024 (English). In: IEEE Transactions on Neural Networks and Learning Systems, ISSN 2162-237X, E-ISSN 2162-2388, Vol. 35, no 3, p. 3302-3311. Article in journal (Refereed). Published.
Abstract [en]

This article presents a data-driven safe reinforcement learning (RL) algorithm for discrete-time nonlinear systems. A data-driven safety certifier is designed to intervene in the actions of the RL agent to ensure both the safety and stability of those actions. This is in sharp contrast to existing model-based safety certifiers, which can result in convergence to an undesired equilibrium point or in conservative interventions that jeopardize the performance of the RL agent. To this end, the proposed method directly learns a robust safety certifier while completely bypassing identification of the system model. The nonlinear system is modeled using linear parameter-varying (LPV) systems with polytopic disturbances. To avoid learning an explicit model of the LPV system, data-based $\lambda$-contractivity conditions are first provided for the closed-loop system to enforce robust invariance of a prespecified polyhedral safe set and the system's asymptotic stability. These conditions are then leveraged to directly learn a robust data-based gain-scheduling controller by solving a convex program. A significant advantage of the proposed direct safe learning over model-based certifiers is that it completely resolves conflicts between safety and stability requirements while assuring convergence to the desired equilibrium point. Data-based safety certification conditions are then provided using Minkowski functions. They are then used to seamlessly integrate the learned backup safe gain-scheduling controller with the RL controller. Finally, we provide a simulation example to verify the effectiveness of the proposed approach.
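
The certifier logic described in the abstract lends itself to a compact illustration. The Python sketch below shows how a polyhedral safe set and its Minkowski (gauge) function can be used to decide when an RL action is allowed through and when a backup gain-scheduling controller must take over. It is a minimal sketch under stated assumptions: the names minkowski_gauge, certified_action, predict_next, and backup_gain are hypothetical, and the one-step predictor stands in for the paper's data-based certification conditions, which the paper derives without an explicit model.

```python
import numpy as np

# Minimal sketch of the safety-certifier idea from the abstract (not the
# paper's exact construction). The safe set is the polytope
# S = {x : F x <= 1}, assumed compact with the origin in its interior,
# so its Minkowski (gauge) function is Psi(x) = max_i (F x)_i.

def minkowski_gauge(F, x):
    """Gauge of x with respect to the polytope {x : F x <= 1}."""
    return float(np.max(F @ x))

def certified_action(F, x, u_rl, predict_next, backup_gain, lam=0.95):
    """Pass the RL action through if it keeps the state lambda-contractive;
    otherwise intervene with the learned backup gain-scheduling controller.

    predict_next(x, u) and backup_gain(x) are hypothetical stand-ins: the
    paper checks contractivity through data-based conditions rather than
    an explicit one-step model.
    """
    psi_now = minkowski_gauge(F, x)
    psi_next = minkowski_gauge(F, predict_next(x, u_rl))
    # lambda-contractivity: the gauge must shrink by a factor lam < 1 at
    # each step, which enforces robust invariance of S as well as
    # asymptotic stability.
    if psi_next <= lam * psi_now:
        return u_rl               # RL action certified as safe
    return backup_gain(x)         # backup controller takes over
```

The geometric decrease of the gauge is what reconciles safety (invariance of the set S) with stability (convergence to the desired equilibrium), which is the conflict between model-based certification and performance that the abstract highlights.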

Place, publisher, year, edition, pages
IEEE (Institute of Electrical and Electronics Engineers), 2024. Vol. 35, no 3, p. 3302-3311
Keywords [en]
Data-driven control; gain-scheduling control; reinforcement learning (RL); safe control
National Category
Computer Sciences
Identifiers
URN: urn:nbn:se:liu:diva-193589
DOI: 10.1109/TNNLS.2023.3264815
ISI: 000973264800001
PubMedID: 37053065
OAI: oai:DiVA.org:liu-193589
DiVA, id: diva2:1755871
Note

Funding agencies: Excellence Center at Linköping-Lund in Information Technology (ELLIIT); ZENITH

Available from: 2023-05-09. Created: 2023-05-09. Last updated: 2024-10-10. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
PubMed

Authority records

Adib Yaghmaie, Farnaz
