
Self-triggered control of probabilistic Boolean control networks: A reinforcement learning approach / Bajaria, P.; Yerudkar, A.; Glielmo, L.; Del Vecchio, C.; Wu, Y. - In: JOURNAL OF THE FRANKLIN INSTITUTE. - ISSN 0016-0032. - 359:12(2022), pp. 6173-6195. [10.1016/j.jfranklin.2022.06.004]

Self-triggered control of probabilistic Boolean control networks: A reinforcement learning approach

Glielmo, L.; Del Vecchio, C.
2022

Abstract

In this work, strategies to devise an optimal feedback control of probabilistic Boolean control networks (PBCNs) are discussed. Reinforcement learning (RL) based control is explored in order to minimize model design effort and to regulate PBCNs of high complexity. A Q-learning random forest (QLRF) algorithm is proposed; using this algorithm, state feedback controllers are designed to stabilize a PBCN at a given equilibrium point. Further, for the QLRF-stabilized closed-loop PBCNs, a Lyapunov function is defined and a method to construct it is presented. Using such Lyapunov functions, a novel self-triggered control (STC) strategy is proposed, whereby the controller is recomputed according to a triggering schedule, yielding an optimal control strategy while retaining closed-loop PBCN stability. Finally, the results are verified by computer simulations.
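The control pipeline summarized in the abstract can be illustrated with a toy sketch. Everything below is an assumption made for illustration: the two-node network, its Boolean rules, and their probabilities are invented; plain tabular Q-learning stands in for the paper's QLRF algorithm; and a simple state-change trigger stands in for the Lyapunov-based triggering schedule.

```python
import random

random.seed(0)

# Toy 2-node PBCN (invented for illustration): state x = (x1, x2),
# scalar control u in {0, 1}. Node x1 updates by one of two Boolean
# rules chosen at random each step -- the "probabilistic" part of a PBCN.
def step(x, u):
    x1, x2 = x
    if random.random() < 0.7:
        n1 = x2 and u            # rule A, chosen with probability 0.7
    else:
        n1 = x1 or u             # rule B, chosen with probability 0.3
    n2 = x1 ^ u                  # node x2 updates deterministically here
    return (n1, n2)

TARGET = (0, 0)                  # desired equilibrium (fixed point under u = 0)

def reward(x):
    return 1.0 if x == TARGET else -0.1

# Tabular Q-learning; the paper's QLRF replaces the table with a
# random-forest approximator, which a 4-state toy does not need.
Q = {(a, b): [0.0, 0.0] for a in (0, 1) for b in (0, 1)}
alpha, gamma, eps = 0.1, 0.9, 0.1

for episode in range(2000):
    x = (random.randint(0, 1), random.randint(0, 1))
    for t in range(20):
        u = random.randint(0, 1) if random.random() < eps else Q[x].index(max(Q[x]))
        nx = step(x, u)
        Q[x][u] += alpha * (reward(nx) + gamma * max(Q[nx]) - Q[x][u])
        x = nx

# Greedy state-feedback law extracted from the learned Q-values.
policy = {x: Q[x].index(max(Q[x])) for x in Q}

# Toy self-triggered execution: hold the current input and recompute the
# feedback law only when the trigger fires (here, a state change stands in
# for the paper's Lyapunov-based triggering condition).
x, u, prev_x, recomputations = (1, 1), None, None, 0
for t in range(15):
    if u is None or x != prev_x:
        u = policy[x]
        recomputations += 1
    prev_x = x
    x = step(x, u)

print(policy)
```

Under these toy dynamics the target (0, 0) is an equilibrium under u = 0, so the learned policy should hold u = 0 there, and the triggered loop recomputes the input only on the steps where the state actually moves.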
Files in this record:
1-s2.0-S0016003222003933-main.pdf — Adobe PDF, 1.37 MB — not available (copy on request)

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11588/910699
Citations
  • PMC: ND
  • Scopus: 17
  • Web of Science: 16