Safe Reinforcement Learning via a Model-Free Safety Certifier
Sharif Univ Technol, Iran.
Sharif Univ Technol, Iran.
Michigan State Univ, MI 48863 USA.
Linköping University, Department of Electrical Engineering, Automatic Control. Linköping University, Faculty of Science & Engineering. ORCID iD: 0000-0002-6665-5881
Show others and affiliations
2024 (English). In: IEEE Transactions on Neural Networks and Learning Systems, ISSN 2162-237X, E-ISSN 2162-2388, Vol. 35, no. 3, pp. 3302-3311. Article in journal (Refereed). Published.
Abstract [en]

This article presents a data-driven safe reinforcement learning (RL) algorithm for discrete-time nonlinear systems. A data-driven safety certifier is designed to intervene with the actions of the RL agent to ensure both safety and stability of its actions. This is in sharp contrast to existing model-based safety certifiers, which can result in convergence to an undesired equilibrium point or in conservative interventions that jeopardize the performance of the RL agent. To this end, the proposed method directly learns a robust safety certifier while completely bypassing the identification of the system model. The nonlinear system is modeled using linear parameter-varying (LPV) systems with polytopic disturbances. To avoid the need to learn an explicit model of the LPV system, data-based λ-contractivity conditions are first provided for the closed-loop system to enforce robust invariance of a prespecified polyhedral safe set and the system's asymptotic stability. These conditions are then leveraged to directly learn a robust data-based gain-scheduling controller by solving a convex program. A significant advantage of the proposed direct safe learning over model-based certifiers is that it completely resolves conflicts between safety and stability requirements while assuring convergence to the desired equilibrium point. Data-based safety certification conditions are then provided using Minkowski functions. They are then used to seamlessly integrate the learned backup safe gain-scheduling controller with the RL controller. Finally, we provide a simulation example to verify the effectiveness of the proposed approach.
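The gating idea described in the abstract can be sketched in a few lines. The following is a minimal illustration, not the paper's data-driven construction: it assumes a known toy linear model `f`, a hand-picked backup gain `K`, and a unit-box safe set, all of which are hypothetical stand-ins. The certifier evaluates the Minkowski (gauge) function of the polyhedral safe set S = {x : Hx ≤ 1} at the candidate next state and accepts the RL action only if it keeps the state λ-contractive; otherwise it falls back to the backup controller.

```python
import numpy as np

# Hypothetical stand-ins for illustration only: the paper learns the backup
# gain-scheduling controller and certification conditions directly from data,
# without a model. Here a toy linear model makes the gating logic concrete.

H = np.array([[1.0, 0.0],
              [-1.0, 0.0],
              [0.0, 1.0],
              [0.0, -1.0]])   # safe set S = {x : Hx <= 1} (unit box)
lam = 0.99                    # contraction factor lambda < 1

def gauge(x):
    """Minkowski function of S: gauge(x) <= 1 iff x lies in S."""
    return np.max(H @ x)

def f(x, u):
    # Placeholder dynamics (assumption; the paper is model-free).
    A = np.array([[0.9, 0.2], [0.0, 0.8]])
    B = np.array([[0.0], [1.0]])
    return A @ x + B @ u

def backup_controller(x):
    # Stand-in for the learned backup gain-scheduling controller.
    K = np.array([[-0.1, -0.5]])
    return K @ x

def certified_action(x, u_rl):
    """Pass the RL action through if it keeps the state lambda-contractive
    with respect to the safe set's gauge; otherwise use the backup action."""
    if gauge(f(x, u_rl)) <= lam * max(gauge(x), 1e-9):
        return u_rl
    return backup_controller(x)
```

For a state x = [0.5, 0.2], a mild action is accepted while a large action that would push the state out of the contractive level set is overridden by the backup controller.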

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2024. Vol. 35, no. 3, pp. 3302-3311
Keywords [en]
Data-driven control; gain-scheduling control; reinforcement learning (RL); safe control
HSV category
Identifiers
URN: urn:nbn:se:liu:diva-193589
DOI: 10.1109/TNNLS.2023.3264815
ISI: 000973264800001
PubMedID: 37053065
OAI: oai:DiVA.org:liu-193589
DiVA id: diva2:1755871
Note

Funding agencies: Excellence Center at Linköping-Lund in Information Technology (ELLIIT); ZENITH

Available from: 2023-05-09. Created: 2023-05-09. Last updated: 2024-10-10. Bibliographically approved.

Open Access in DiVA

Full text is missing in DiVA

Other links

Publisher's full text
PubMed

Person

Adib Yaghmaie, Farnaz
