Article

Optimal Tracking Control of a Nonlinear Multiagent System Using Q-Learning via Event-Triggered Reinforcement Learning

College of Electronic and Information Engineering, Southwest University, Chongqing 400700, China
* Author to whom correspondence should be addressed.
Submission received: 13 December 2022 / Revised: 25 January 2023 / Accepted: 27 January 2023 / Published: 5 February 2023
(This article belongs to the Section Multidisciplinary Applications)

Abstract

This article offers an optimal tracking control method using an event-triggered technique and the internal reinforcement Q-learning (IrQL) algorithm to address the tracking control issue of unknown nonlinear multiagent systems (MASs). Relying on the internal reinforcement reward (IRR) formula, a Q-learning function is calculated, and the iterative IrQL method is then developed. In contrast to time-triggered mechanisms, an event-triggered algorithm reduces the transmission rate and computational load, since the controller is only updated when the predetermined triggering conditions are met. In addition, in order to implement the suggested scheme, a reinforce-critic-actor (RCA) neural network structure is created that can assess the performance indices and learn the event-triggering mechanism online. This strategy is intended to be data-driven, without requiring in-depth knowledge of the system dynamics. We develop an event-triggered weight-tuning rule that only modifies the parameters of the actor neural network (ANN) at triggering instants. In addition, a Lyapunov-based convergence study of the reinforce-critic-actor neural network (NN) is presented. Lastly, an example demonstrates the accessibility and efficiency of the suggested approach.

1. Introduction

Recently, distributed coordination control of MASs has received a great deal of attention as a result of its extensive applications in power systems [1,2], multi-vehicle systems [3], multi-area power systems [4], and other fields. MASs involve a variety of problems, such as consensus control [5,6,7], synchronization control [8,9], anti-synchronization control [10], and tracking control [11]. Reinforcement learning (RL) [12] and adaptive dynamic programming (ADP) methods [13,14] have been employed by researchers as a means of solving optimal control problems. Due to their excellent global approximation ability, neural networks are well suited for dealing with nonlinearities and uncertainties [15], and ADP gains strong online learning and adaptation ability when it uses neural networks. Furthermore, researchers have applied RL/ADP algorithms to optimal coordination control problems in many directions, including tracking control [16,17,18,19], graphical games [19], consensus control [20], containment control [21], and formation control [22]. The controllers in the above works rely on traditional time-triggered mechanisms; in [23,24], it was suggested that the traditional implementation be replaced by an event-triggered one.
With an increase in the number of agents, MASs must bear a large computational cost related to information exchange. Traditionally, the controller or actuator is updated constantly, using a predetermined sampling period, during system operation. To lessen the computational burden and save resources, aperiodic sampling is used in the event-triggering scheme to improve the associated controller’s computational efficiency. Researchers have developed event-based methods to address discrete-time systems [25] as well as continuous-time systems [26,27]. In these results, the system dynamics are assumed to be accurately known ahead of time. However, it is not always possible to know the dynamics precisely in practice. In [24], an event-triggered controller was proposed that was designed for systems with inaccurate or unknown dynamics.
Early applications of reinforcement learning (RL) included the application of Q-learning to process control [28], chemical process control, industrial process automation, and other areas. The Q-learning algorithm provides a model-free, data-driven method for solving control problems. It is important to note that all potential actions in the present state [29] are evaluated in the Q-learning method, relying on the Q-function. At present, Q-learning is used primarily for routing optimization and reception processing in network communication in the domain of network management [30]. With the emergence of AlphaGo, research on game theory and real-time reinforcement learning has become active [31]. At present, there is some research on tracking control issues for nonlinear MASs based on Q-learning, such as in [32].
As mentioned above, the optimal control problem of MASs has been solved using RL/ADP methods. The majority of the above results share two common features. First, the immediate reward (IR) signal is used directly to define each agent’s performance index function, which limits the learning opportunities. Second, a state value function is used to derive the Hamilton–Jacobi–Bellman (HJB) equation, and the corresponding controller is designed using RL/ADP. In a wide range of realistic applications, it is beneficial to provide each agent with richer information signals in order to enhance its learning capability. In addition to merely considering performance in terms of the state, performance can also be viewed from a broader perspective. The purpose of our research is to overcome the limitations described above.
Taking the aforementioned findings into consideration, this work investigates the optimal control problem for MASs with unknown nonlinearity in order to enhance both the learning process and the effectiveness of the control system. Using graph theory, the coordination control problem is first formulated. Based on the gathered IR information, internal reinforcement reward (IRR) signals are constructed to provide a longer-term reward. A Q-function is then developed from the IRR function to assess the efficacy of each agent’s control scheme. In addition, an iterative IrQL-based tracking control technique is developed to derive the HJB equation for each agent. Then, based on the IrQL technique, a triggering mechanism is employed to establish the tracking control system. Finally, an optimal event-triggered controller based on a reinforce-critic-actor network structure is created. The event-triggering mechanism in the closed-loop approach guarantees that the network weights converge and the system remains stable. The main contributions of this study to the literature are as follows:
(1) For nonlinear MAS tracking control, the authors of [32] proposed an IrQL framework. Differing from [18,33,34], a new long-term IRR signal is designed on the basis of neighbors’ data to provide more information to each agent. A Q-function is defined using the IRR function, and an iterative IrQL method is proposed to obtain optimally distributed control schemes.
(2) A new triggering condition is designed in an asynchronous and distributed manner, following [24]. As a result, each agent triggers at its own instants, and there is no need to update the controller periodically. To achieve online learning, a reinforce-critic-actor neural network based on triggered events is established to determine the optimal event-triggered control scheme. Compared with other papers [18,33,35,36], this paper adjusts the weights non-periodically, and the ANN is only adjusted when a trigger occurs.
(3) The objective of this paper is to develop an effective tracking control method by combining a new triggering mechanism with the IrQL method. For the event-triggered optimal control mechanism, the Lyapunov approach is used to establish a rigorous stability guarantee for the closed-loop multi-agent network. The designed RCA-NN framework [32] offers an effective means of executing the proposed method online without requiring any knowledge of the system dynamics. We compare the proposed method with the traditional time-triggered method and the IrQL method, and the simulation results show that the designed algorithm solves the tracking control problem with good tracking performance.
This article is organized as follows. Section 2 reviews graph theory and formulates the problem. In Section 3, the IrQL-based HJB equations are obtained. Section 4 designs the event-triggered optimal controller to build the proposed algorithm. Section 5 develops the RCA-NN, and convergence of the neural network weights is established using the Lyapunov method. The effectiveness and correctness of the method are demonstrated through a simulation example and comparisons in Section 6. Section 7 concludes the article.

2. Preliminaries

2.1. Theoretical Basis of Graphs

The exchange of information between agents can be modeled by a directed graph $\mathcal{G} = (\mathcal{V}, \mathcal{E}, \mathcal{A})$, in which $\mathcal{V} = \{\upsilon_1, \upsilon_2, \ldots, \upsilon_N\}$ represents $N$ nonempty nodes and $\mathcal{E} = \{(\upsilon_i, \upsilon_j) \mid \upsilon_i, \upsilon_j \in \mathcal{V}\} \subseteq \mathcal{V} \times \mathcal{V}$ represents an edge set, indicating that agent $i$ can obtain data from agent $j$. We define the adjacency matrix $\mathcal{A} = [a_{ij}]$ with nonnegative elements $a_{ij}$, where $a_{ij} > 0$ if $(i, j) \in \mathcal{E}$ and $a_{ij} = 0$ otherwise. $N_i = \{j \mid (i, j) \in \mathcal{E}\}$ is defined as the set of neighbors of node $i$, and $a_{ij} > 0$ holds for each $j \in N_i$. We denote the in-degree matrix $D = \mathrm{diag}\{d_i\}$, where $d_i = \sum_{j \in N_i} a_{ij}$. The Laplacian matrix is then defined as $L = D - \mathcal{A} \in \mathbb{R}^{N \times N}$.
This article considers the relationship between a leader and its followers. To describe the leader-follower interactions, we use an augmented directed graph $\hat{\mathcal{G}} = (\hat{\mathcal{V}}, \hat{\mathcal{E}})$, in which $\hat{\mathcal{V}} = \{0, 1, 2, \ldots, N\}$ and $\hat{\mathcal{E}} \subseteq \hat{\mathcal{V}} \times \hat{\mathcal{V}}$. The communication between the leader and follower $i$ is described by the pinning gain $b_i$: if $b_i > 0$, follower $i$ receives information from the leader; otherwise, $b_i = 0$. $B = \mathrm{diag}\{b_1, \ldots, b_N\} \in \mathbb{R}^{N \times N}$ is defined as the corresponding connection matrix.
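To make the graph notation concrete, the following short Python sketch builds the adjacency, in-degree, Laplacian, and pinning matrices for the six-follower topology used later in Section 6 (numpy is assumed; the variable names are illustrative only):

```python
import numpy as np

# Directed edges (i, j): a_ij = 1 means agent i receives data from agent j.
# These follow the topology of Section 6 (a14 = a21 = a32 = a43 = a52 = a65 = 1).
N = 6
A_adj = np.zeros((N, N))
for (i, j) in [(1, 4), (2, 1), (3, 2), (4, 3), (5, 2), (6, 5)]:
    A_adj[i - 1, j - 1] = 1.0

D = np.diag(A_adj.sum(axis=1))          # in-degree matrix, d_i = sum_j a_ij
L_graph = D - A_adj                     # Laplacian matrix L = D - A
B_pin = np.diag([1.0, 0, 0, 0, 0, 0])   # pinning gains b_i (only agent 1 hears the leader)
```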

2.2. Problem Formulation

Consider a MAS with one leader and $N$ followers, where the dynamics of the $i$th follower are given as follows:
$$x_i(k+1) = A x_i(k) + B_i u_i(k) \tag{1}$$
In this case, $x_i \in \mathbb{R}^n$ represents the system state, $u_i \in \mathbb{R}^{p_i}$ represents the control input, and $A \in \mathbb{R}^{n \times n}$ and $B_i \in \mathbb{R}^{n \times p_i}$ represent the unknown plant and input matrices.
The leader is written as follows:
$$x_0(k+1) = A x_0(k) \tag{2}$$
where $x_0 \in \mathbb{R}^n$ represents the leader state.
Assumption 1.
The communication graph $\hat{\mathcal{G}}$ contains a spanning tree with the leader as the root node, and $\hat{\mathcal{G}}$ does not contain repeated edges.
Definition 1.
The goal is to design a control scheme $u_i(k)$ that only requires local agent information such that the followers can track the leader, i.e., the following condition is satisfied [32]:
$$\lim_{k \to \infty} \| x_i(k) - x_0(k) \| = 0, \quad i = 1, 2, \ldots, n \tag{3}$$
The MAS’s local consensus error is expressed as follows:
$$e_i(k) = \sum_{j \in N_i} a_{ij} \big( x_i(k) - x_j(k) \big) + b_i \big( x_i(k) - x_0(k) \big) \tag{4}$$
Then, the global error vector is presented as follows:
$$e(k) = \big( (L + B) \otimes I_n \big) \big( x(k) - \hat{x}_0(k) \big) \tag{5}$$
where $e(k) = (e_1^T(k), e_2^T(k), \ldots, e_n^T(k))^T \in \mathbb{R}^{nN}$, $x(k) = (x_1^T(k), x_2^T(k), \ldots, x_n^T(k))^T \in \mathbb{R}^{nN}$, $\hat{x}_0(k) = \mathbf{1}_N \otimes x_0 \in \mathbb{R}^{nN}$, $\mathbf{1}_N$ is the $N$-dimensional vector of ones, and $I_n$ is the $n \times n$ identity matrix.
The tracking error is written as $\zeta_i(k) = x_i(k) - x_0(k)$, which has the vector form
$$\zeta(k) = x(k) - \hat{x}_0(k) \tag{6}$$
where $\zeta(k) = (\zeta_1^T(k), \zeta_2^T(k), \ldots, \zeta_n^T(k))^T \in \mathbb{R}^{nN}$ and $\hat{x}_0(k) = (x_0^T(k), x_0^T(k), \ldots, x_0^T(k))^T$.
Consequently, in view of Equations (1) and (4), the local neighborhood error $e_i(k)$ evolves as
$$e_i(k+1) = A e_i(k) + (d_i + b_i) B_i u_i(k) - \sum_{j \in N_i} a_{ij} B_j u_j(k) = F_i \big( e_i(k), u_i(k) \big) \tag{7}$$
Given Equations (5) and (6), $e(k)$ and $\zeta(k)$ are related as follows: $\lim_{k \to \infty} e(k) = 0$ is equivalent to $\lim_{k \to \infty} \zeta(k) = 0$, since $(L + B) \otimes I_n$ is nonsingular under Assumption 1. Consequently, once the local neighborhood error converges to zero, the tracking control problem is solved.
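As an illustration of Equations (4) and (5), the following hedged Python sketch computes the local neighborhood error for one agent and the stacked global error (function and variable names are hypothetical):

```python
import numpy as np

def local_error(i, x, x0, A_adj, b):
    """Local neighborhood consensus error e_i(k) of Eq. (4).
    x: (N, n) array of follower states, x0: (n,) leader state,
    A_adj: (N, N) adjacency matrix, b: (N,) pinning gains."""
    e_i = b[i] * (x[i] - x0)
    for j in range(A_adj.shape[0]):
        e_i += A_adj[i, j] * (x[i] - x[j])
    return e_i

def global_error(x, x0, L_graph, B_pin):
    """Stacked error e(k) of Eq. (5): ((L + B) kron I_n)(x - 1_N kron x0)."""
    n = x0.size
    return np.kron(L_graph + B_pin, np.eye(n)) @ (x - x0[None, :]).ravel()
```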

3. Design of the IrQL Method

To resolve the tracking control issue for multiagent systems, the authors of [32] developed the IrQL method. The key point is that, to provide each agent with more local information from other agents or the environment, IRR information is introduced, thereby improving control and learning efficiency. In addition, a Q-function is defined for each agent, and the relevant HJB equation is obtained using the IrQL method.
Consider the following IR function for the $i$th agent:
$$j_i \big( e_i(k), u_i(k), u_{-i}(k) \big) = e_i^T(k) R_{ii} e_i(k) + u_i^T(k) Q_{ii} u_i(k) + \sum_{j \in N_i} u_j^T(k) Q_{ij} u_j(k) \tag{8}$$
where $u_{-i} = \{u_j \mid j \in N_i\}$ denotes the inputs of agent $i$'s neighbors, and the weight matrices $R_{ii} > 0$, $Q_{ii} > 0$, and $Q_{ij} > 0$ are positive definite.
Based on the IR function, the IRR function is expressed as
$$R_i \big( e_i(k), u_i(k), u_{-i}(k) \big) = \sum_{s=k}^{\infty} \varrho^{\,s-k} j_i \big( e_i(s), u_i(s), u_{-i}(s) \big) \tag{9}$$
where $\varrho \in (0, 1]$ is the discount factor of the IRR function.
To solve the optimal tracking control problem, the following performance index must be minimized for every agent:
$$J_i \big( e_i(0), u_i(0), u_{-i}(0) \big) = \sum_{t=0}^{\infty} \beta^t R_i \big( e_i(t), u_i(t), u_{-i}(t) \big) \tag{10}$$
where $\beta \in (0, 1]$ is the discount factor of the performance index.
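A minimal sketch of how the IR reward (8), the IRR (9), and the performance index (10) could be evaluated numerically is given below; the infinite sums are truncated, the same $Q_{ij}$ is assumed for all neighbors, and the helper names are illustrative:

```python
import numpy as np

def immediate_reward(e_i, u_i, u_nbrs, R_ii, Q_ii, Q_ij):
    """Immediate reward j_i of Eq. (8); the same Q_ij is assumed for every neighbor."""
    r = e_i @ R_ii @ e_i + u_i @ Q_ii @ u_i
    r += sum(u_j @ Q_ij @ u_j for u_j in u_nbrs)
    return r

def irr(ir_sequence, rho):
    """Truncated IRR of Eq. (9): R_i(k) ~ sum_{s>=k} rho^(s-k) j_i(s)."""
    return sum(rho ** t * j for t, j in enumerate(ir_sequence))

def performance_index(irr_sequence, beta):
    """Truncated performance index of Eq. (10)."""
    return sum(beta ** t * R for t, R in enumerate(irr_sequence))
```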
Remark 1.
The designed IRR function accumulates prospective long-term reward data from the IR function. Unlike the majority of existing methods, the performance index is measured based on the IRR rather than the IR. The advantage is that the control actions can be enhanced, and the learning process can be accelerated by exploiting more data.
Remark 2.
Intrinsic motivation (IM) provides a possible way to enhance the ability to abstract actions or to overcome the difficulty of exploring the environment in reinforcement learning. The IRR acts as an intrinsic motivation that drives the agent to learn skills [32].
Definition 2.
To resolve the MAS's tracking control issue, we propose a distributed tracking control scheme under which $e_i(k) \to 0$ as the time step $k$ approaches infinity, while the performance index (10) is simultaneously minimized.
Based on the control schemes of the agent and its neighbors, $u_i(t)$ and $u_{-i}(t)$, the state value function is obtained as follows:
$$V_i \big( e_i(k) \big) = \sum_{t=k}^{\infty} \beta^{\,t-k} R_i \big( e_i(t), u_i(t), u_{-i}(t) \big) \tag{11}$$
Equation (11) can also be expressed as the following formula:
$$V_i \big( e_i(k) \big) = R_i \big( e_i(k), u_i(k), u_{-i}(k) \big) + \beta V_i \big( e_i(k+1) \big) \tag{12}$$
By the optimality principle, the optimal state value function satisfies
$$V_i^* \big( e_i(k) \big) = \min_{u_i(k)} \Big\{ R_i \big( e_i(k), u_i(k), u_{-i}(k) \big) + \beta V_i^* \big( e_i(k+1) \big) \Big\} \tag{13}$$
Here, the IRR function is expressed in Bellman form as
$$R_i \big( e_i(k), u_i(k), u_{-i}(k) \big) = j_i \big( e_i(k), u_i(k), u_{-i}(k) \big) + \varrho R_i \big( e_i(k+1), u_i(k+1), u_{-i}(k+1) \big) \tag{14}$$
Based on the stationarity condition $\partial V_i \big( e_i(k) \big) / \partial u_i(k) = 0$, the optimal distributed control scheme is given by
$$u_i^*(k) = \arg\min_{u_i(k)} \Big\{ R_i \big( e_i(k), u_i(k), u_{-i}(k) \big) + \beta V_i^* \big( e_i(k+1) \big) \Big\} = -\frac{1}{2} \beta (d_i + b_i) Q_{ii}^{-1} h_i^T \big( x_i(k) \big) \nabla V_i^* \big( e_i(k+1) \big) \tag{15}$$
where $\nabla V_i^* \big( e_i(k+1) \big) = \partial V_i^* \big( e_i(k+1) \big) / \partial e_i(k+1)$ and $h_i(x_i(k))$ denotes the input dynamics of agent $i$ (for the model (1), $h_i(x_i(k)) = B_i$).
Remark 3.
As is well known, the state value function $V_i(e_i(k))$ depends strongly on the state space. Following the idea of the state-action value function, the Q-learning method is designed within the RL framework. Each agent can use the Q-function to evaluate all possible decisions in the current situation, and the best action at each step can thus be determined from the Q-function.
The Q-function is written as follows:
$$Q_i \big( e_i(k), u_i(k), u_{-i}(k) \big) = R_i \big( e_i(k), u_i(k), u_{-i}(k) \big) + \beta V_i \big( e_i(k+1) \big) \tag{16}$$
In accordance with the optimal scheme, the optimal Q-function is given by
$$Q_i^* \big( e_i(k), u_i(k), u_{-i}(k) \big) = R_i \big( e_i(k), u_i(k), u_{-i}(k) \big) + \beta Q_i^* \big( e_i(k+1), u_i^*(k+1), u_{-i}^*(k+1) \big) \tag{17}$$
Based on Equations (16) and (17), we can express the optimal solution as follows:
$$u_i^*(k) = \arg\min_{u_i(k)} Q_i^* \big( e_i(k), u_i(k), u_{-i}(k) \big) \tag{18}$$
In contrast to the control scheme of Equation (15), the optimal Q-function here directly yields the optimal control scheme without requiring the system dynamics. Therefore, we aim to solve Equation (17).

4. Design of the Event-Triggered Controller

In a previous work [18], a time-triggered controller was developed. In this section, a new event-triggering mechanism is designed to minimize the computational cost.
Let $\{ k_{t_s}^i \}_{s=0}^{\infty}$ denote the sequence of triggering instants of agent $i$. At the triggering instant, the sampled disagreement error is expressed as $\hat{e}_i^s = e_i(k_{t_s}^i)$.
The triggering instants are determined by the threshold value and the triggering error. The control scheme is only updated at $k = k_{t_s}^i$ and is held constant otherwise:
$$u_i(k) = u_i(k_{t_s}^i), \quad k \in [k_{t_s}^i, k_{t_s+1}^i) \tag{19}$$
To design a triggering condition, we define a function that measures the gap between the current error and the previously sampled error:
$$\epsilon_i^s(k) = \hat{e}_i^s - e_i(k), \quad k \in [k_{t_s}^i, k_{t_s+1}^i) \tag{20}$$
The triggering error is reset to zero at $k = k_{t_s}^i$.
The local error dynamics under the event-triggered control approach can be written as
$$e_i(k+1) = F_i \big( e_i(k), u_i(k_{t_s}^i) \big) \tag{21}$$
Thus, the event-triggered Bellman equations are obtained:
$$V_i^* \big( e_i(k) \big) = \min_{u_i(k_{t_s}^i)} \Big\{ R_i \big( e_i(k), u_i(k_{t_s}^i), u_{-i}(k_{t_s}^i) \big) + \beta V_i^* \big( F_i ( e_i(k), u_i(k_{t_s}^i) ) \big) \Big\} \tag{22}$$
$$Q_i^* \big( e_i(k) \big) = R_i \big( e_i(k), u_i(k_{t_s}^i), u_{-i}(k_{t_s}^i) \big) + \beta Q_i^* \big( F_i ( e_i(k), u_i(k_{t_s}^i) ) \big) \tag{23}$$
The event-triggered optimal tracking control can be expressed as
$$u_i^*(k) = \arg\min_{u_i(k_{t_s}^i)} Q_i^* \big( e_i(k) \big) \tag{24}$$
Assumption 2.
There exists a constant $L$ such that the following inequality holds:
$$\big\| F_i \big( e_i(k), u_i(k_{t_s}^i) \big) \big\| \leq L \| e_i(k) \| + L \| \epsilon_i^s(k) \| \tag{25}$$
Assumption 3.
The triggering condition is given as follows:
$$\| \epsilon_i^s(k) \|^2 \leq \frac{1 - 2L^2}{2L^2} \| e_i(k) \|^2 = \pi_i^T \tag{26}$$
where $\pi_i^T$ represents the triggering threshold and $L \in (0, \sqrt{2}/2)$ [24]. Once the multi-agent system dynamics have stabilized, the followers are able to track the leader.
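The triggering test of Equation (26) can be implemented in a few lines; the sketch below (names assumed, numpy assumed) fires an event whenever the squared triggering error exceeds the threshold $\pi_i^T$:

```python
import numpy as np

def should_trigger(e_hat, e_k, L_const):
    """Return True when the trigger fires, i.e., when ||eps_i^s(k)||^2 exceeds
    pi_i^T = (1 - 2L^2) / (2L^2) * ||e_i(k)||^2 of Eq. (26), with
    L_const in (0, sqrt(2)/2)."""
    eps = e_hat - e_k                              # triggering error of Eq. (20)
    threshold = (1.0 - 2.0 * L_const ** 2) / (2.0 * L_const ** 2) * (e_k @ e_k)
    return eps @ eps > threshold
```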

5. Neural Network Implementation for the Event-Triggered Approach Using the IrQL Method

This section discusses the three-NN structure, also known as the RCA-NN. Three virtual networks are included in this structure: a reinforce NN, a critic NN, and an actor NN.

5.1. Reinforce Neural Network (RNN) Learning Model

The reinforce NN is employed to approximate the IRR signal as follows:
$$\hat{R}_i \big( Z_{ri}(k) \big) = \varphi_{ri} \Big( \omega_{r2i}^T(k) \cdot \varphi_{ri} \big( \omega_{r1i}^T(k) \cdot Z_{ri}(k) \big) \Big) \tag{27}$$
where $Z_{ri}(k)$ represents the input vector, which consists of $e_i(k)$, $u_i(k)$, and $u_{-i}(k)$; $\omega_{r1i}$ represents the input-to-hidden layer weight matrix; $\omega_{r2i}$ represents the hidden-to-output layer weight matrix; and $\varphi_{ri}(\cdot)$ represents the activation function [24].
The error function associated with the reinforce NN is as follows:
$$e_{ri}(k) = j_i \big( e_i(k-1), u_i(k-1), u_{-i}(k-1) \big) + \varrho \hat{R}_i \big( Z_{ri}(k) \big) - \hat{R}_i \big( Z_{ri}(k-1) \big) \tag{28}$$
The loss function is written as
$$E_{ri}(k) = \frac{1}{2} e_{ri}^2(k) \tag{29}$$
For convenience’s sake, only the matrices ω r 2 i are updated, and the matrices ω r 1 i remain unchanged during the training process.
The RNN’s update law is expressed as
$$\omega_{r2i}(k+1) = \omega_{r2i}(k) - \alpha_{ri} \cdot \frac{\partial E_{ri}(k)}{\partial \omega_{r2i}(k)} \tag{30}$$
In this equation, α r i represents the rate at which the RNN learns.
The gradient descent rule (GDR) is used to obtain the update law for the reinforce NN weights, which yields the following result:
$$\omega_{r2i}(k+1) = \omega_{r2i}(k) - \alpha_{ri} \cdot \frac{\partial E_{ri}(k)}{\partial e_{ri}(k)} \cdot \frac{\partial e_{ri}(k)}{\partial \hat{R}_i ( Z_{ri}(k) )} \cdot \frac{\partial \hat{R}_i ( Z_{ri}(k) )}{\partial \omega_{r2i}(k)} = \omega_{r2i}(k) - \alpha_{ri} \varrho e_{ri}(k) \Big( 1 - \varphi_{ri}^2 \big( \omega_{r2i}^T(k) \cdot \Delta_{ri}(k) \big) \Big) \Delta_{ri}(k) \tag{31}$$
where $\Delta_{ri}(k) = \varphi_{ri} \big( \omega_{r1i}^T(k) \cdot Z_{ri}(k) \big)$.
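A hedged Python sketch of the reinforce-NN forward pass (27) and the gradient step (31) is given below; tanh activations and a scalar IRR output are assumed, and the dependence of $\hat{R}_i(Z_{ri}(k-1))$ on the weights is ignored, exactly as in (31):

```python
import numpy as np

def rnn_forward(w1, w2, z):
    """Reinforce NN of Eq. (27) with tanh activations (assumed).
    w1: (len(z), H) input-to-hidden weights, w2: (H,) hidden-to-output weights."""
    h = np.tanh(w1.T @ z)                 # hidden features Delta_ri(k)
    return np.tanh(w2 @ h), h             # scalar IRR estimate R_hat and features

def rnn_update(w1, w2, z_k, z_km1, j_km1, rho, alpha_r):
    """One gradient-descent step on w2 following Eqs. (28)-(31)."""
    R_k, h_k = rnn_forward(w1, w2, z_k)
    R_km1, _ = rnn_forward(w1, w2, z_km1)
    e_r = j_km1 + rho * R_k - R_km1               # reinforce error of Eq. (28)
    grad = rho * e_r * (1.0 - R_k ** 2) * h_k     # chain rule through the tanh output
    return w2 - alpha_r * grad
```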

5.2. Critic Neural Network (CNN) Learning Model

The critic NN is designed to closely approximate the Q-function:
$$\hat{Q}_i \big( Z_{ci}(k) \big) = \omega_{c2i}^T(k) \cdot \varphi_{ci} \big( \omega_{c1i}^T(k) \cdot Z_{ci}(k) \big) \tag{32}$$
where $Z_{ci}(k)$ represents the input vector, which consists of $\hat{R}_i(k)$, $e_i(k)$, $u_i(k)$, and $u_{-i}(k)$, while $\omega_{c1i}(k)$ and $\omega_{c2i}(k)$ represent the input-to-hidden and hidden-to-output layer weight matrices, respectively.
The error function of the CNN can be expressed as
$$e_{ci}(k) = \hat{R}_i \big( Z_{ri}(k-1) \big) + \beta \hat{Q}_i \big( Z_{ci}(k) \big) - \hat{Q}_i \big( Z_{ci}(k-1) \big) \tag{33}$$
Its loss function is written as
$$E_{ci}(k) = \frac{1}{2} e_{ci}^2(k) \tag{34}$$
In accordance with the operation of RNNs, only ω c 2 i is updated, and ω c 1 i remains unchanged.
With the help of the gradient descent rule (GDR), the weight update law can be expressed as
$$\omega_{c2i}(k+1) = \omega_{c2i}(k) - \alpha_{ci} \frac{\partial E_{ci}(k)}{\partial \omega_{c2i}(k)} \tag{35}$$
where $\alpha_{ci}$ represents the critic NN's learning rate. Furthermore, we can obtain the weight update scheme for the critic NN:
$$\omega_{c2i}(k+1) = \omega_{c2i}(k) - \alpha_{ci} \frac{\partial E_{ci}(k)}{\partial e_{ci}(k)} \cdot \frac{\partial e_{ci}(k)}{\partial \hat{Q}_i ( Z_{ci}(k) )} \cdot \frac{\partial \hat{Q}_i ( Z_{ci}(k) )}{\partial \omega_{c2i}(k)} = \omega_{c2i}(k) - \alpha_{ci} \beta \Big[ \hat{R}_i \big( Z_{ri}(k-1) \big) + \beta \omega_{c2i}^T(k) \cdot \Delta_{ci}(k) - \omega_{c2i}^T(k-1) \cdot \Delta_{ci}(k-1) \Big] \Delta_{ci}(k) \tag{36}$$
where $\Delta_{ci}(k) = \varphi_{ci} \big( \omega_{c1i}^T(k) Z_{ci}(k) \big)$.
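Similarly, the critic update (33)–(36) can be sketched as follows; a tanh hidden layer with a linear output is assumed, matching the structure of Equation (32), and the helper names are illustrative:

```python
import numpy as np

def critic_forward(wc1, wc2, z_c):
    """Critic NN of Eq. (32): linear output over a tanh hidden layer."""
    d_c = np.tanh(wc1.T @ z_c)            # hidden features Delta_ci(k)
    return wc2 @ d_c, d_c                 # scalar Q estimate and features

def critic_update(wc1, wc2, z_c_k, z_c_km1, R_hat_km1, beta, alpha_c):
    """One gradient-descent step on wc2 following Eqs. (33)-(36)."""
    Q_k, d_k = critic_forward(wc1, wc2, z_c_k)
    Q_km1, _ = critic_forward(wc1, wc2, z_c_km1)
    e_c = R_hat_km1 + beta * Q_k - Q_km1          # critic error of Eq. (33)
    grad = beta * e_c * d_k                       # d(0.5*e_c^2)/d(wc2), linear output layer
    return wc2 - alpha_c * grad
```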

5.3. Actor Neural Network (ANN) Learning Model

Based on the actor NN, an approximate optimal scheme is defined as follows:
$$\hat{u}_i(k) = \omega_{a2i}^T \cdot \varphi_{ai} \big( \omega_{a1i}^T \cdot Z_{ai}(k) \big) \tag{37}$$
where the input of the ANN is $Z_{ai}(k) = e_i(k)$, $\omega_{a1i}$ represents the input-to-hidden layer weight matrix, and $\omega_{a2i}$ represents the hidden-to-output layer weight matrix.
The prediction error of the actor NN is defined as
$$e_{ai}(k) = \hat{Q}_i \big( Z_{ci}(k) \big) - U_c \tag{38}$$
where $U_c$ denotes the desired ultimate objective value (usually taken as zero).
The loss function of the ANN can be expressed as
$$E_{ai}(k) = \frac{1}{2} e_{ai}^2(k) \tag{39}$$
As with RNNs and CNNs, ω a 1 i must remain unchanged throughout the learning process. The actor NN update laws are defined as follows:
$$\omega_{a2i}(k+1) = \omega_{a2i}(k) - \alpha_{ai} \frac{\partial E_{ai}(k)}{\partial \omega_{a2i}(k)} \tag{40}$$
where $\alpha_{ai}$ represents the ANN learning rate. The weight-tuning scheme of the ANN can be designed as follows:
$$\omega_{a2i}(k+1) = \omega_{a2i}(k) - \alpha_{ai} \cdot \frac{\partial E_{ai}(k)}{\partial e_{ai}(k)} \cdot \frac{\partial e_{ai}(k)}{\partial \hat{Q}_i ( Z_{ci}(k) )} \times \frac{\partial \hat{Q}_i ( Z_{ci}(k) )}{\partial \hat{u}_i(k)} \cdot \frac{\partial \hat{u}_i(k)}{\partial \omega_{a2i}(k)} = \omega_{a2i}(k) - \alpha_{ai} \Delta_{ai}(k) \, \omega_{c2i}^T(k) \times c_i(k) \, \omega_{c1i}^T(k) \, \nabla_{\hat{u}_i} Z_{ci}(k) \, \omega_{c2i}^T(k) \Delta_{ci}(k) \tag{41}$$
where $\Delta_{ai}(k) = \varphi_{ai} \big( \omega_{a1i}^T(k) Z_{ai}(k) \big)$, $c_i(k) = \partial \varphi_{ci} \big( \omega_{c1i}^T(k) Z_{ci}(k) \big) / \partial \big( \omega_{c1i}^T(k) Z_{ci}(k) \big)$, and $\nabla_{\hat{u}_i} Z_{ci}(k) = \partial Z_{ci}(k) / \partial \hat{u}_i(k)$.
Furthermore, we can obtain
$$\omega_{a2i}(k+1) = \begin{cases} \omega_{a2i}(k) - \alpha_{ai} \Delta_{ai}(k) \, \omega_{c2i}^T(k) \, c_i(k) \, \omega_{c1i}^T(k) \, \nabla_{\hat{u}_i} Z_{ci}(k) \, \omega_{c2i}^T(k) \Delta_{ci}(k), & k = k_{t_s}^i \\ \omega_{a2i}(k), & k \in ( k_{t_s}^i, k_{t_s+1}^i ) \end{cases} \tag{42}$$
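The event-triggered actor step (38)–(42) can be sketched as below: the output weights change only at triggering instants, and the gradient of the critic output with respect to $\hat{u}_i$ is back-propagated through the critic. The layout of $Z_{ci}$ (which entries hold $u_i$) and $U_c = 0$ are assumptions of this sketch:

```python
import numpy as np

def actor_update_event_triggered(wa1, wa2, wc1, wc2, e_i, z_c, u_idx,
                                 alpha_a, triggered):
    """Event-triggered actor update in the spirit of Eqs. (38)-(42).
    wa1: (n, Ha), wa2: (Ha, p) actor weights; wc1: (m, Hc), wc2: (Hc,) critic weights;
    u_idx: indices of z_c occupied by u_i (assumed layout); U_c is taken as 0."""
    if not triggered:
        return wa2                                 # hold weights between events, Eq. (42)
    h_a = np.tanh(wa1.T @ e_i)                     # actor hidden features Delta_ai(k)
    d_c = np.tanh(wc1.T @ z_c)                     # critic hidden features Delta_ci(k)
    Q_hat = wc2 @ d_c                              # critic output; e_a = Q_hat - 0, Eq. (38)
    # dQ/du_i back-propagated through the critic hidden layer
    dQ_du = (wc2 * (1.0 - d_c ** 2)) @ wc1[u_idx, :].T
    grad = np.outer(h_a, Q_hat * dQ_du)            # d(0.5*e_a^2)/d(wa2)
    return wa2 - alpha_a * grad
```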
Algorithm 1 describes in detail how the controller is designed using the RCA-NNs and event triggering. The actor NN is updated only when the triggering condition is met.
The stability and convergence analysis based on the Lyapunov method is presented below.
Assumption 4.
The following conditions are assumed to hold: $\| \omega_{r2i}(k) \| \leq \omega_{rim}$, $\| \omega_{c2i}(k) \| \leq \omega_{cim}$, $\| \omega_{a2i}(k) \| \leq \omega_{aim}$. The activation functions are bounded, i.e., $\| \Delta_{ri}(k) \| \leq \Delta_{rim}$, $\| \Delta_{ci}(k) \| \leq \Delta_{cim}$, $\| \Delta_{ai}(k) \| \leq \Delta_{aim}$. Moreover, the activation function $\varphi_{ai}(\cdot)$ is Lipschitz and satisfies $\| \varphi_{ai}(e_i(k_{t_s}^i)) - \varphi_{ai}(e_i(k)) \| \leq \theta_{ai} \| e_i(k_{t_s}^i) - e_i(k) \| = \theta_{ai} \| \epsilon_i^s(k) \| \leq \theta_{ai} \pi_i^T$, where $\theta_{ai}$ and $\pi_i^T$ are positive constants. The approximation errors of the NN outputs are defined as $\delta_{ci}(k) = \tilde{\omega}_{c2i}^T(k) \Delta_{ci}(k)$, $\delta_{ai}(k) = \tilde{\omega}_{a2i}^T(k) \Delta_{ai}(k)$, and $\vartheta_{ri}(k) = \tilde{\omega}_{r2i}^T(k) \Delta_{ri}(k)$.
Theorem 1.
Suppose Assumptions 1 and 2 hold, and let the CNN and ANN weights be updated according to (36) and (42). Then, upon satisfying the triggering condition (26), the local consensus error $e_i(k)$, the critic estimation error, and the actor estimation error are uniformly ultimately bounded. Furthermore, the control scheme $u_i$ converges to the optimal value $u_i^*$.
Proof. Let $\tilde{\omega}_{r2i}(k) = \omega_{r2i}(k) - \omega_{r2i}^*$ denote the weight estimation error between the estimate $\omega_{r2i}(k)$ and the optimal RNN weights $\omega_{r2i}^*$; let $\tilde{\omega}_{c2i}(k) = \omega_{c2i}(k) - \omega_{c2i}^*$ denote the weight estimation error between the estimate $\omega_{c2i}(k)$ and the ideal CNN weights $\omega_{c2i}^*$; and let $\tilde{\omega}_{a2i}(k) = \omega_{a2i}(k) - \omega_{a2i}^*$ denote the weight estimation error between the estimate $\omega_{a2i}(k)$ and the ideal ANN weights $\omega_{a2i}^*$.
Algorithm 1 RCA neural networks based on the IrQL method with event triggering.
Initialization:
1: Set initial values of $\omega_{r2i}(0)$, $\omega_{a2i}(0)$, $\omega_{c2i}(0)$ within $(0, 1)$;
2: Set a small computational precision threshold $\mathcal{E}$;
3: Initialize the states $x_i(0)$, $x_0(0)$ within $(0, 1)$;
Iterative process: Set $k = 0$ and compute the local neighborhood error $e_i(k)$;
4: Repeat;
5: Based on the actor NN, estimate $\hat{u}_i(k)$ by (37);
6: Update the reinforce NN:
7: Input $[e_i(k), u_i(k), u_{-i}(k)]$ into the reinforce NN to obtain the estimated IRR function $\hat{R}_i(Z_{ri}(k))$ via (27);
8: Obtain $e_{ri}(k)$ by (28);
9: Update the weight matrix $\omega_{r2i}(k)$ by (31);
10: Update the critic NN:
11: Input $[\hat{R}_i(Z_{ri}(k)), e_i(k), u_i(k), u_{-i}(k)]$ into the critic NN to obtain the estimated Q-function via (32);
12: Obtain $e_{ci}(k)$ by (33);
13: Update the weight matrix $\omega_{c2i}(k)$ by (36);
14: Update the actor NN:
15: Input $e_i(k)$ into the actor NN to obtain the estimated control $\hat{u}_i(k)$ via (37);
16: Calculate $e_{ai}(k)$ via (38);
17: If the triggering condition is met, update the weight matrix $\omega_{a2i}(k)$ of the actor NN using (41);
18: Otherwise, do not update the weight matrix $\omega_{a2i}(k)$;
19: Until $\| \omega_{c2i}(k+1) - \omega_{c2i}(k) \| \leq \mathcal{E}$; otherwise, set $k = k + 1$ and go to step 5;
20: Keep $\omega_{r2i}(k)$, $\omega_{c2i}(k)$, $\omega_{a2i}(k)$ as the optimal weights.
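For concreteness, the sketch below strings the pieces together into a single-agent training loop in the spirit of Algorithm 1. It reuses the helper functions sketched in Sections 2 and 4–5 (local_error, should_trigger, rnn_forward, rnn_update, critic_update, actor_update_event_triggered); neighbor inputs are omitted and identity reward weights are assumed purely to keep the outline short, so this is an illustration rather than the authors' implementation:

```python
import numpy as np

def run_agent(i, x, x0, A_sys, B_list, A_adj, b, W, steps=500,
              rho=0.57, beta=0.9, alphas=(0.95, 0.07, 0.90), L_const=0.5):
    """Compact single-agent sketch of Algorithm 1. W is a dict of weight arrays
    {'r1', 'r2', 'c1', 'c2', 'a1', 'a2'}; neighbor controls are omitted here."""
    alpha_r, alpha_c, alpha_a = alphas
    e_hat = local_error(i, x, x0, A_adj, b)            # last sampled error
    z_r_prev = z_c_prev = None
    j_prev, R_hat_prev = 0.0, 0.0
    for k in range(steps):
        e_k = local_error(i, x, x0, A_adj, b)
        u_i = W['a2'].T @ np.tanh(W['a1'].T @ e_hat)   # control held between events (37)
        z_r = np.concatenate([e_k, u_i])               # reinforce-NN input (neighbors omitted)
        if z_r_prev is not None:                       # steps 6-9: reinforce NN
            W['r2'] = rnn_update(W['r1'], W['r2'], z_r, z_r_prev, j_prev, rho, alpha_r)
        R_hat, _ = rnn_forward(W['r1'], W['r2'], z_r)
        z_c = np.concatenate([[R_hat], e_k, u_i])      # critic input
        if z_c_prev is not None:                       # steps 10-13: critic NN
            W['c2'] = critic_update(W['c1'], W['c2'], z_c, z_c_prev, R_hat_prev, beta, alpha_c)
        triggered = should_trigger(e_hat, e_k, L_const)
        if triggered:                                  # steps 14-18: actor NN at events only
            e_hat = e_k
        W['a2'] = actor_update_event_triggered(W['a1'], W['a2'], W['c1'], W['c2'],
                                               e_k, z_c, np.arange(1 + e_k.size, z_c.size),
                                               alpha_a, triggered)
        j_prev = e_k @ e_k + float(u_i @ u_i)          # IR with identity weights (assumed)
        x[i] = A_sys @ x[i] + (B_list[i] @ u_i.reshape(-1, 1)).ravel()
        x0[:] = A_sys @ x0                             # leader evolves autonomously
        z_r_prev, z_c_prev, R_hat_prev = z_r, z_c, R_hat
    return W
```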
(1) At the triggering instants, we consider the following Lyapunov function candidate:
$$L(k) = L_1(k) + L_2(k) + L_3(k) + L_4(k) + L_5(k)$$
where
$$L_1(k) = \frac{1}{\alpha_{ri}} \mathrm{tr}\big( \tilde{\omega}_{r2i}^T(k) \tilde{\omega}_{r2i}(k) \big), \quad L_2(k) = \frac{1}{\alpha_{ci}} \mathrm{tr}\big( \tilde{\omega}_{c2i}^T(k) \tilde{\omega}_{c2i}(k) \big), \quad L_3(k) = \frac{1}{\alpha_{ai}} \mathrm{tr}\big( \tilde{\omega}_{a2i}^T(k) \tilde{\omega}_{a2i}(k) \big), \quad L_4(k) = \varrho^k \hat{R}_i(k), \quad L_5(k) = \beta^k \hat{Q}_i(k)$$
The first-order difference $\Delta L_1(k)$ is written as
$$\Delta L_1(k) = \frac{1}{\alpha_{ri}} \mathrm{tr}\big( \tilde{\omega}_{r2i}^T(k+1) \tilde{\omega}_{r2i}(k+1) - \tilde{\omega}_{r2i}^T(k) \tilde{\omega}_{r2i}(k) \big)
$$
where
$$\tilde{\omega}_{r2i}(k+1) = \omega_{r2i}(k+1) - \omega_{r2i}^* = \tilde{\omega}_{r2i}(k) - \alpha_{ri} \varrho \big[ j(k-1) + \varrho \hat{R}(k) - \hat{R}(k-1) \big] \delta_{ri}(k) \Delta_{ri}(k)$$
Furthermore, we have
$$\Delta L_1(k) = -2 \varrho^2 \delta_{ri}(k) \big[ \varrho^{-1} j(k) + \hat{R}(k) - \hat{R}(k-1) \big] + \alpha_{ri} \varrho^4 \big[ \varrho^{-1} j(k) + \hat{R}(k) - \hat{R}(k-1) \big]^2 \vartheta_{ri}^2(k) = -\big\| \delta_{ri}(k) - \varrho^2 \big[ \varrho^{-1} j(k) + \hat{R}(k) - \hat{R}(k-1) \big] \big\|^2 - \big( 1 - \alpha_{ri} \Delta_{ri}^2(k) \big) \varrho^4 \big[ \varrho^{-1} j(k) + \hat{R}(k) - \hat{R}(k-1) \big]^2 \times \vartheta_{ri}^2(k) + \delta_{ri}^2(k)$$
$\Delta L_2(k)$ can be written as
$$\Delta L_2(k) = \frac{1}{\alpha_{ci}} \mathrm{tr}\big( \tilde{\omega}_{c2i}^T(k+1) \tilde{\omega}_{c2i}(k+1) - \tilde{\omega}_{c2i}^T(k) \tilde{\omega}_{c2i}(k) \big)$$
where
$$\tilde{\omega}_{c2i}(k+1) = \omega_{c2i}(k+1) - \omega_{c2i}^* = \tilde{\omega}_{c2i}(k) - \alpha_{ci} \beta \Delta_{ci}(k) \big[ \hat{R}_i(k-1) + \beta \big( \tilde{\omega}_{c2i}(k) + \omega_{c2i}^* \big)^T \Delta_{ci}(k) - \omega_{c2i}^T(k-1) \Delta_{ci}(k-1) \big]$$
Furthermore, we have
$$\Delta L_2(k) = \frac{1}{\alpha_{ci}} \big[ D_1 + D_2 + D_3 - \tilde{\omega}_{c2i}^T(k) \tilde{\omega}_{c2i}(k) \big]$$
where
$$D_1 = \tilde{\omega}_{c2i}^T(k) \big( I - \alpha_{ci} \beta^2 \Delta_{ci}(k) \Delta_{ci}^T(k) \big)^2 \tilde{\omega}_{c2i}(k) = \big\| \tilde{\omega}_{c2i}(k) \big\|^2 - 2 \alpha_{ci} \beta^2 \big\| \delta_{ci}(k) \big\|^2 + \alpha_{ci}^2 \beta^4 \big\| \Delta_{ci}(k) \big\|^2 \big\| \delta_{ci}(k) \big\|^2$$
$$D_2 = -2 \alpha_{ci} \beta^2 \delta_{ci}(k) \big[ \beta^{-1} \hat{R}_i(k-1) + ( \omega_{c2i}^* )^T \Delta_{ci}(k) - \beta^{-1} \omega_{c2i}^T(k-1) \Delta_{ci}(k-1) \big] \big\| \Delta_{ci}(k) \big\|^2$$
$$D_3 = \alpha_{ci}^2 \beta^4 \big[ \beta^{-1} \hat{R}_i(k-1) + ( \omega_{c2i}^* )^T \Delta_{ci}(k) - \beta^{-1} \omega_{c2i}^T(k-1) \Delta_{ci}(k-1) \big]^T \times \big[ \beta^{-1} \hat{R}_i(k-1) + ( \omega_{c2i}^* )^T \Delta_{ci}(k) - \beta^{-1} \omega_{c2i}^T(k-1) \Delta_{ci}(k-1) \big]$$
The following result is obtained by computation:
$$\Delta L_2(k) = -\beta^2 \big\| \delta_{ci}(k) \big\|^2 - \beta^2 \big( 1 - \alpha_{ci} \beta^2 \big\| \Delta_{ci}(k) \big\|^2 \big) \times \big\| \delta_{ci}(k) + \beta^{-1} \hat{R}_i(k-1) + ( \omega_{c2i}^* )^T \Delta_{ci}(k) - \beta^{-1} \omega_{c2i}^T(k-1) \Delta_{ci}(k-1) \big\|^2 + \big\| \hat{R}_i(k-1) + \beta ( \omega_{c2i}^* )^T \Delta_{ci}(k) - \omega_{c2i}^T(k-1) \Delta_{ci}(k-1) \big\|^2$$
For the first-order difference of $L_3(k)$, we can obtain
$$\Delta L_3(k) = \frac{1}{\alpha_{ai}} \big( \tilde{\omega}_{a2i}^T(k+1) \tilde{\omega}_{a2i}(k+1) - \tilde{\omega}_{a2i}^T(k) \tilde{\omega}_{a2i}(k) \big)$$
where
$$\tilde{\omega}_{a2i}(k+1) = \omega_{a2i}(k+1) - \omega_{a2i}^* = \tilde{\omega}_{a2i}(k) - \alpha_{ai} \Delta_{ai}(k) \, \omega_{c2i}^T(k) C(k) \times \big[ \omega_{c2i}^T(k) \Delta_{ci}(k) \big]$$
Therefore, we have
$$\Delta L_3(k) = \frac{1}{\alpha_{ai}} \big( E_1 - \tilde{\omega}_{a2i}^T(k) \tilde{\omega}_{a2i}(k) \big)$$
where
$$E_1 = \big\| \tilde{\omega}_{a2i}(k) \big\|^2 - 2 \alpha_{ai} \omega_{c2i}^T(k) C(k) \delta_{ai}(k) \big[ \omega_{c2i}^T(k) \Delta_{ci}(k) \big] + \alpha_{ai} \big\| \omega_{c2i}^T(k) \Delta_{ci}(k) \big\|^2 \big\| \Delta_{ai}(k) \big\|^2 \big\| \omega_{c2i}^T(k) C(k) \big\|^2$$
For $\Delta L_3(k)$, the simplified formula is given below:
$$\Delta L_3(k) = -\big( 1 - \alpha_{ai} \big\| \Delta_{ai}(k) \big\|^2 \big) \big\| \omega_{c2i}^T(k) \Delta_{ci}(k) \big\|^2 \times \big\| \omega_{c2i}^T(k) C(k) \big\|^2 - \big\| \delta_{ai}(k) \big\|^2 + \big\| \omega_{c2i}^T(k) C(k) \Delta_{ai}(k) - \omega_{c2i}^T(k) \delta_{ai}(k) \big\|^2$$
By combining the above differences, we can obtain $\Delta L(k)$ as follows:
$$\Delta L(k) = \Delta L_1(k) + \Delta L_2(k) + \Delta L_3(k) + \Delta L_4(k) + \Delta L_5(k) = -\beta^2 \| \delta_{ci}(k) \|^2 - \beta^2 \big( 1 - \alpha_{ci} \beta^2 \| \Delta_{ci}(k) \|^2 \big) \times \| \delta_{ci}(k) + \beta^{-1} \hat{R}_i(k-1) + ( \omega_{c2i}^* )^T \Delta_{ci}(k) - \beta^{-1} \omega_{c2i}^T(k-1) \Delta_{ci}(k-1) \|^2 - \big( 1 - \alpha_{ai} \| \Delta_{ai}(k) \|^2 \big) \times \| \omega_{c2i}^T(k) \Delta_{ci}(k) \|^2 \| \omega_{c2i}^T(k) C(k) \|^2 + \| \hat{R}_i(k-1) + \beta ( \omega_{c2i}^* )^T \Delta_{ci}(k) - \omega_{c2i}^T(k-1) \Delta_{ci}(k-1) \|^2 + \| \omega_{c2i}^T(k) C(k) \Delta_{ci}^T(k) - \omega_{c2i}^T(k) \delta_{ai}(k) \|^2 - \big( 1 - \alpha_{ri} \Delta_{ri}^2(k) \big) \varrho^4 \times \| \varrho^{-1} j(k) + \hat{R}_i(k) - \varrho^{-1} \hat{R}_i(k-1) \|^2 \vartheta^2(k) + \| \delta_{ri}(k) - \varrho^2 [ \varrho^{-1} j(k) + \hat{R}_i(k) - \varrho^{-1} \hat{R}_i(k-1) ] \|^2 - \| \delta_{ai}(k) \|^2 - \| \delta_{ri}(k) \|^2 + \beta^{k+1} Q_i(k+1) - \beta^k Q_i(k) + \varrho^{k+1} R_i(k+1) - \varrho^k R_i(k)$$
Therefore, we can obtain
$$\Delta L(k) = -\beta^2 \| \delta_{ci}(k) \|^2 - \beta^2 \big( 1 - \alpha_{ci} \beta^2 \| \Delta_{ci}(k) \|^2 \big) \times \| \delta_{ci}(k) + \beta^{-1} V_1(k) \|^2 - \big( 1 - \alpha_{ai} \| \Delta_{ai}(k) \|^2 \big) \| X_1(k) \|^2 \times \| W_1(k) \|^2 + \| V_1(k) \|^2 + \| W_1(k) X_1^T(k) - \delta_{ai}(k) \|^2 - \big( 1 - \alpha_{ri} \| \Delta_{ri}(k) \|^2 \big) \varrho^4 \| \varrho^{-1} Y_1(k) \|^2 \vartheta_{ri}^2(k) - \| \delta_{ai}(k) \|^2 - \| \delta_{ri}(k) \|^2 + \beta^{k+1} Q_i(k+1) - \beta^k Q_i(k) + \varrho^{k+1} R_i(k+1) - \varrho^k R_i(k)$$
where $V_1(k) = \hat{R}_i(k-1) + \beta ( \omega_{c2i}^* )^T \Delta_{ci}(k) - \omega_{c2i}^T(k-1) \Delta_{ci}(k-1)$, $W_1(k) = \omega_{c2i}^T(k) C(k)$, $X_1(k) = \omega_{c2i}^T(k) \Delta_{ci}(k)$, and $Y_1(k) = j(k) + \varrho \hat{R}_i(k) - \hat{R}_i(k-1)$, with $\| V_1(k) \| \leq V_{1m}$, $\| W_1(k) \| \leq W_{1m}$, $\| X_1(k) \| \leq X_{1m}$, and $\| Y_1(k) \| \leq Y_{1m}$. Next, we can obtain
$$\Delta L(k) \leq -\beta^2 \| \delta_{ci}(k) \|^2 - \beta^2 \big( 1 - \alpha_{ci} \beta^2 \| \Delta_{ci}(k) \|^2 \big) \times \| \delta_{ci}(k) + \beta^{-1} V_1(k) \|^2 - \big( 1 - \alpha_{ai} \| \Delta_{ai}(k) \|^2 \big) \| X_1(k) \|^2 \| W_1(k) \|^2 + 2 \| W_1(k) X_1^T(k) \|^2 + \| \delta_{ai}(k) \|^2 - \big( 1 - \alpha_{ri} \| \Delta_{ri}(k) \|^2 \big) \varrho^2 \| Y_1(k) \|^2 \vartheta_{ri}^2(k) + 2 \| \delta_{ri}(k) \|^2 + 2 \| Y_1(k) \|^2 - \beta^k Q_i(k) - \varrho^k R_i(k)$$
Moreover, we can obtain
$$\Delta L(k) \leq -\beta^2 \| \delta_{ci}(k) \|^2 - \beta^2 \big( 1 - \alpha_{ci} \beta^2 \| \Delta_{ci}(k) \|^2 \big) \times \| \delta_{ci}(k) + \beta^{-1} V_1(k) \|^2 - \big( 1 - \alpha_{ai} \| \Delta_{ai}(k) \|^2 \big) \| X_1(k) \|^2 \| W_1(k) \|^2 + V_{1m}^2 + 2 W_{1m}^2 X_{1m}^2 + 2 \| ( \omega_{a2i}^* )^T \Delta_{ai}(k) \|^2 + 2 \| \omega_{a2i}^T \Delta_{ai}(k) \|^2 - \big( 1 - \alpha_{ri} \| \Delta_{ri}(k) \|^2 \big) \varrho^2 \| Y_1(k) \|^2 \vartheta_{ri}^2(k) + 2 \| \delta_{ri}(k) \|^2 + 2 \| Y_1(k) \|^2 - \beta^k Q_i(k) - \varrho^k R_i(k) \leq -\beta^2 \| \delta_{ci}(k) \|^2 - \beta^2 \big( 1 - \alpha_{ci} \beta^2 \| \Delta_{ci}(k) \|^2 \big) \times \| \delta_{ci}(k) + \beta^{-1} V_1(k) \|^2 - \big( 1 - \alpha_{ai} \| \Delta_{ai}(k) \|^2 \big) \| X_1(k) \|^2 \| W_1(k) \|^2 + V_{1m}^2 + 2 W_{1m}^2 X_{1m}^2 + 4 \omega_{aim}^2 \Delta_{aim}^2 - \big( 1 - \alpha_{ri} \| \Delta_{ri}(k) \|^2 \big) \varrho^2 \| Y_1(k) \|^2 \vartheta_{ri}^2(k) + 2 \delta_{rim}^2 + 2 Y_{1m}^2 - \beta^k Q_i(k) - \varrho^k R_i(k)$$
If the following conditions are met:
$$\alpha_{ri} \leq \frac{1}{\| \Delta_{ri}(k) \|^2}, \quad \alpha_{ci} \leq \frac{1}{\beta^2 \| \Delta_{ci}(k) \|^2}, \quad \alpha_{ai} \leq \frac{1}{\| \Delta_{ai}(k) \|^2}, \quad \| \delta_{ci}(k) \| > \sqrt{ \big( V_{1m}^2 + 2 W_{1m}^2 X_{1m}^2 + 4 \omega_{aim}^2 \Delta_{aim}^2 + 2 \delta_{rim}^2 + 2 Y_{1m}^2 \big) / \beta^2 }$$
then we can derive $\Delta L(k) \leq 0$. This completes the first case.
(2) When the triggering condition is not satisfied (i.e., between triggering instants), consider the following:
$$L(k) = L_1(k) + L_2(k) + L_4(k)$$
where
$$L_1(k) = \frac{1}{\alpha_{ri}} \mathrm{tr}\big( \tilde{\omega}_{r2i}^T(k) \tilde{\omega}_{r2i}(k) \big), \quad L_2(k) = \frac{1}{\alpha_{ci}} \mathrm{tr}\big( \tilde{\omega}_{c2i}^T(k) \tilde{\omega}_{c2i}(k) \big), \quad L_4(k) = e_i^T(k) e_i(k)$$
$$\Delta L(k) = \Delta L_1(k) + \Delta L_2(k) + \Delta L_4(k) = -\beta^2 \| \delta_{ci}(k) \|^2 - \beta^2 \big( 1 - \alpha_{ci} \beta^2 \| \Delta_{ci}(k) \|^2 \big) \times \| \delta_{ci}(k) + \beta^{-1} V_1(k) \|^2 + \| V_1(k) \|^2 - \big( 1 - \alpha_{ri} \| \Delta_{ri}(k) \|^2 \big) \varrho^2 \| Y_1(k) \|^2 \vartheta_{ri}^2(k) + 2 \delta_{rim}^2 + 2 Y_{1m}^2 + e_i^T(k+1) e_i(k+1) - e_i^T(k) e_i(k)$$
$$\Delta L(k) \leq -\beta^2 \| \delta_{ci}(k) \|^2 - \beta^2 \big( 1 - \alpha_{ci} \beta^2 \| \Delta_{ci}(k) \|^2 \big) \times \| \delta_{ci}(k) + \beta^{-1} V_1(k) \|^2 + \| V_1(k) \|^2 - \big( 1 - \alpha_{ri} \| \Delta_{ri}(k) \|^2 \big) \varrho^2 \| Y_1(k) \|^2 \vartheta_{ri}^2(k) + 2 \delta_{rim}^2 + 2 Y_{1m}^2 + \big( \iota \| e_i(k) \| + \iota \| \epsilon_i^s \| \big)^2 - \| e_i(k) \|^2 \leq -\beta^2 \| \delta_{ci}(k) \|^2 - \beta^2 \big( 1 - \alpha_{ci} \beta^2 \| \Delta_{ci}(k) \|^2 \big) \times \| \delta_{ci}(k) + \beta^{-1} V_1(k) \|^2 + V_{1m}^2 - \big( 1 - \alpha_{ri} \| \Delta_{ri}(k) \|^2 \big) \varrho^2 \| Y_1(k) \|^2 \vartheta_{ri}^2(k) + 2 \delta_{rim}^2 + 2 Y_{1m}^2 - \big( 1 - 2 \iota^2 \big) \| e_i(k) \|^2 + 2 \iota^2 \| \epsilon_i^s \|^2$$
If it is satisfied that $\alpha_{ri} \leq 1 / \| \Delta_{ri}(k) \|^2$, $\alpha_{ci} \leq 1 / ( \beta^2 \| \Delta_{ci}(k) \|^2 )$, $\alpha_{ai} \leq 1 / \| \Delta_{ai}(k) \|^2$, and $\| \delta_{ci}(k) \| > \sqrt{ ( V_{1m}^2 + 2 \delta_{rim}^2 + 2 Y_{1m}^2 ) / \beta^2 }$, then $\Delta L(k) \leq 0$. Thus, we can derive $\Delta L(k) \leq 0$ in both cases, and the proof is completed.

6. Statistical Data Illustration

To demonstrate the viability of the proposed method, a simulation example is presented in this section.

Nonlinear MAS Consisting of One Leader and Six Followers

A nonlinear MAS with six followers and one leader was considered. Figure 1 depicts the communication graph of the studied MAS, where node 0 is the leader and nodes 1–6 are the followers. The corresponding adjacency matrix satisfies $a_{14} = a_{21} = a_{32} = a_{43} = a_{52} = a_{65} = 1$. The weights between the leader and the followers are $b_1 = 1$ and $b_2 = b_3 = b_4 = b_5 = b_6 = 0$, so agent 1 can directly receive the leader's information. The system model parameters for the MAS with one leader and six followers are as follows: $A = \begin{bmatrix} 0.995 & 0.09980 \\ -0.09982 & 0.995 \end{bmatrix}$, $B_1 = [0, 0.2]^T$, $B_2 = [0, 0.5]^T$, $B_3 = [0, 0.4]^T$, $B_4 = [0, 0.3]^T$, $B_5 = [0, 0.6]^T$, and $B_6 = [0, 0.7]^T$.
The weight matrices are as follows: $Q_{11} = Q_{22} = Q_{33} = Q_{44} = Q_{55} = Q_{66} = 1$, $R_{11} = R_{22} = R_{33} = R_{44} = R_{55} = R_{66} = I_{2 \times 2}$, and $Q_{14} = Q_{21} = Q_{32} = Q_{43} = Q_{52} = Q_{65} = I_{2 \times 2}$. The learning rates are $\alpha_{ri} = 0.95$, $\alpha_{ai} = 0.90$, and $\alpha_{ci} = 0.07$ ($i = 1, 2, \ldots, 6$), with discount factors of $\varrho = 0.57$ and $\beta = 0.9$.
For the agents, the input vectors of the RNNs and ANNs are as follows: $Z_{r1}(k) = [e_1^T(k), u_1^T(k_{t_s}^1), u_4^T(k_{t_s}^4)]^T$, $Z_{a1}(k) = e_1(k_{t_s}^1)$, $Z_{r2}(k) = [e_2^T(k), u_2^T(k_{t_s}^2), u_1^T(k_{t_s}^1)]^T$, $Z_{a2}(k) = e_2(k_{t_s}^2)$, $Z_{r3}(k) = [e_3^T(k), u_3^T(k_{t_s}^3), u_2^T(k_{t_s}^2)]^T$, $Z_{a3}(k) = e_3(k_{t_s}^3)$, $Z_{r4}(k) = [e_4^T(k), u_4^T(k_{t_s}^4), u_3^T(k_{t_s}^3)]^T$, $Z_{a4}(k) = e_4(k_{t_s}^4)$, $Z_{r5}(k) = [e_5^T(k), u_5^T(k_{t_s}^5), u_2^T(k_{t_s}^2)]^T$, $Z_{a5}(k) = e_5(k_{t_s}^5)$, $Z_{r6}(k) = [e_6^T(k), u_6^T(k_{t_s}^6), u_5^T(k_{t_s}^5)]^T$, and $Z_{a6}(k) = e_6(k_{t_s}^6)$. The initial values of the leader and followers are $x_0(0) = [0.6675, 0.7940]^T$, $x_1(0) = [0.5734, 0.6000]^T$, $x_2(0) = [0.5667, 0.7348]^T$, $x_3(0) = [0.8694, 0.7140]^T$, $x_4(0) = [1.0212, 1.3842]^T$, $x_5(0) = [0.8606, 1.5565]^T$, and $x_6(0) = [0.5274, 1.3235]^T$.
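For reference, the simulation setup above can be expressed directly in code; the sign of the off-diagonal entry of $A$ is an assumption made so that the leader trajectory remains bounded, as in Figure 2:

```python
import numpy as np

# System matrices from Section 6 (the minus sign in A[1, 0] is assumed).
A_sys = np.array([[0.995, 0.09980],
                  [-0.09982, 0.995]])
B_list = [np.array([[0.0], [b]]) for b in (0.2, 0.5, 0.4, 0.3, 0.6, 0.7)]

x0 = np.array([0.6675, 0.7940])          # leader initial state
x = np.array([[0.5734, 0.6000],          # follower initial states x_1(0) ... x_6(0)
              [0.5667, 0.7348],
              [0.8694, 0.7140],
              [1.0212, 1.3842],
              [0.8606, 1.5565],
              [0.5274, 1.3235]])

rho, beta = 0.57, 0.9                    # discount factors
alpha_r, alpha_a, alpha_c = 0.95, 0.90, 0.07   # learning rates

# Leader rollout x_0(k+1) = A x_0(k); followers would add B_i u_i(k) from the learned policy.
leader_traj = [x0.copy()]
for k in range(200):
    leader_traj.append(A_sys @ leader_traj[-1])
```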
According to Figure 2, all followers were able to accurately track the leader, and the whole MAS achieved synchronization. Figure 3 illustrates the cumulative number of triggering instants of the six agents. On average, the number of triggering instants per agent was approximately 220, whereas with the traditional RL method it was approximately 1000. As a result, the computational burden was reduced by 78.0% compared with the conventional time-triggered method. Figure 4 illustrates the triggering instants of each agent, which indicates that the actor network weights are updated only when the triggering condition is satisfied. As can be seen in Figure 5, the triggering error $\| \epsilon_i^s(k) \|^2$ stays related to the triggering thresholds $\pi_i^T$, and over time the triggering error converges. Figure 6 and Figure 7 illustrate the local neighborhood errors under the proposed control method, which converge to 0 at about $k = 60$. The local neighborhood errors of [32] are shown in Figure 8 and Figure 9. Compared with Figure 8 and Figure 9, the proposed control method produces a better convergence effect. Figure 10 and Figure 11 show the estimation of the ANN weight parameters. With the proposed control method, the actor network weights stabilize faster than with the IrQL method of [32].

7. Conclusions

In this study, the event-triggered optimal control problem for model-free MASs was examined using the RL-based IrQL method. A new IrQL method was introduced by adding an additional IRR function [32], so that more information could be obtained by each agent. Based on the defined IRR formula, we defined the Q-function and derived the corresponding HJB equation. An iterative IrQL approach was then designed to calculate the optimal control strategy. Building on the IrQL algorithm, an event-triggered controller was presented, which updates the controller only at triggering instants to reduce the burden on computing resources and the transmission network. An RCA-NN was used to implement the suggested approach, which eliminates the need for a model of the system. The convergence of the neural network weights was established using the Lyapunov method. A simulation example was used to assess the performance and control efficiency of the suggested algorithm. Further research will be conducted on the effect of the discount factors on system reliability.

Author Contributions

Software, Y.T., Y.L. and J.H.; Writing—review & editing, Z.W.; Supervision, X.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Wen, G.; Yu, X.; Liu, Z.W.; Yu, W. Adaptive consensus-based robust strategy for economic dispatch of smart grids subject to communication uncertainties. IEEE Trans. Ind. Inform. 2018, 14, 2484–2496. [Google Scholar] [CrossRef]
  2. Li, P.; Hu, J.; Qiu, L.; Zhao, Y.; Ghosh, B.K. A distributed economic dispatch strategy for power-water networks. IEEE Trans. Control Netw. Syst. 2021, 9, 356–366. [Google Scholar] [CrossRef]
  3. Fax, J.A.; Murray, R.M. Information flow and cooperative control of vehicle formations. IEEE Trans. Autom. Control 2004, 49, 1465–1476. [Google Scholar] [CrossRef]
  4. Wen, S.; Yu, X.; Zeng, Z.; Wang, J. Event-triggering load frequency control for multiarea power systems with communication delays. IEEE Trans. Ind. Electron. 2016, 63, 1308–1317. [Google Scholar] [CrossRef]
  5. Wen, G.; Wang, P.; Huang, T.; Lü, J.; Zhang, F. Distributed consensus of layered multi-agent systems subject papers. IEEE Trans. Circuits Syst. 2020, 67, 3152–3162. [Google Scholar] [CrossRef]
  6. Wu, Z.G.; Xu, Y.; Pan, Y.J.; Su, H.; Tang, Y. Event-triggered control for consensus problem in multi-agent systems with quantized relative state measurements and external disturbance. IEEE Trans. Circuits Syst. 2018, 65, 2232–2242. [Google Scholar] [CrossRef]
  7. Liu, H.; Cheng, L.; Tan, M.; Hou, Z.G. Exponential finite-time consensus of fractional-order multiagent systems. IEEE Trans. Syst. Man Cybern. Syst. 2020, 50, 1549–1558. [Google Scholar] [CrossRef]
  8. Shi, K.; Wang, J.; Zhong, S.; Zhang, X.; Liu, Y.; Cheng, J. New reliable nonuniform sampling control for uncertain chaotic neural networks under Markov switching topologies. Appl. Math. Comput. 2019, 347, 169–193. [Google Scholar] [CrossRef]
  9. He, W.; Chen, G.; Han, Q.L.; Du, W.; Cao, J.; Qian, F. Multi-agent systems on multilayer networks: Synchronization analysis and network design. IEEE Trans. Syst. 2017, 47, 1655–1667. [Google Scholar]
  10. Hu, J.; Wu, Y. Interventional bipartite consensus on coopetition networks with unknown dynamics. J. Frankl. Inst. 2017, 354, 4438–4456. [Google Scholar] [CrossRef]
  11. Hu, J.P.; Feng, G. Distributed tracking control of leader follower multi-agent systems under noisy measurement. Automatica 2010, 46, 1382–1387. [Google Scholar] [CrossRef] [Green Version]
  12. Wu, X.; Tang, Y.; Cao, J. Input-to-State Stability of Time-Varying Switched Systems with Time Delays. IEEE Trans. Autom. Control 2019, 64, 2537–2544. [Google Scholar] [CrossRef]
  13. Chen, D.; Liu, X.; Yu, W. Finite-time fuzzy adaptive consensus for heterogeneous nonlinear multi-agent systems. IEEE Trans. Netw. Sci. Eng. 2021, 7, 3057–3066. [Google Scholar] [CrossRef]
  14. Wang, J.L.; Wang, Q.; Wu, H.N.; Huang, T. Finite-time consensus and finite-time H consensus of multi-agent systems under directed topology. IEEE Trans. Netw. Sci. Eng. 2020, 7, 1619–1632. [Google Scholar] [CrossRef]
  15. Ren, Y.; Zhao, Z.; Zhang, C.; Yang, Q.; Hong, K.S. Adaptive neural-network boundary control for a flexible manipulator with input constraints and model uncertainties. IEEE Trans. Cybern. 2021, 51, 4796–4807. [Google Scholar] [CrossRef]
  16. Mu, C.; Zhao, Q.; Gao, Z.; Sun, C. Q-learning solution for optimal consensus control of discrete-time multiagent systems using reinforcement learning. J. Frankl. Inst. 2019, 356, 6946–6967. [Google Scholar] [CrossRef]
  17. Peng, Z.; Zhao, Y.; Hu, J.; Ghosh, B.K. Data-driven optimal tracking control of discrete-time multi-agent systems with two-stage policy iteration algorithm. Inf. Sci. 2019, 481, 189–202. [Google Scholar] [CrossRef]
  18. Zhang, H.; Jiang, H.; Luo, Y.; Xiao, G. Data-driven optimal consensus control for discrete-time multi-agent systems with unknown dynamics using reinforcement learning method. IEEE Trans. Ind. Electron. 2017, 64, 4091–4100. [Google Scholar] [CrossRef]
  19. Abouheaf, M.I.; Lewis, F.L.; Vamvoudakis, K.G.; Haesaert, S.; Babuska, R. Multi-agent discrete-time graphical games and reinforcement learning solutions. Automatica 2014, 50, 3038–3053. [Google Scholar] [CrossRef]
  20. Peng, Z.; Zhao, Y.; Hu, J.; Luo, R.; Ghosh, B.K.; Nguang, S.K. Input–output data-based output antisynchronization control of multiagent systems using reinforcement learning approach. IEEE Trans. Ind. Inform. 2021, 17, 7359–7367. [Google Scholar] [CrossRef]
  21. Peng, Z.; Hu, J.; Ghosh, B.K. Data-driven containment control of discrete-time multi-agent systems via value iteration. Sci. China Inf. Sci. 2020, 63, 189205. [Google Scholar] [CrossRef]
  22. Wen, G.; Chen, C.P.; Feng, J.; Zhou, N. Optimized multi-agent formation control based on an identifier-actor-critic reinforcement learning algorithm. IEEE Trans. Fuzzy Syst. 2018, 26, 2719–2731. [Google Scholar] [CrossRef]
  23. Bai, W.; Li, T.; Long, Y.; Chen, C.P. Event-triggered multigradient recursive reinforcement learning tracking control for multiagent systems. IEEE Trans. Neural Netw. Learn. Syst. 2021, 34, 366–379. [Google Scholar] [CrossRef] [PubMed]
  24. Peng, Z.; Luo, R.; Hu, J.; Shi, K.; Ghosh, B.K. Distributed optimal tracking control of discrete-time multiagent systems via event-triggered reinforcement learning. IEEE Trans. Circuits Syst. 2022, 69, 3689–3700. [Google Scholar] [CrossRef]
  25. Hu, J.; Chen, G.; Li, H.X. Distributed event-triggered tracking control of leader-follower multi-agent systems with communication delays. Kybernetika 2011, 47, 630–643. [Google Scholar]
  26. Eqtami, A.; Dimarogonas, D.V.; Kyriakopoulos, K.J. Event-triggered control for discrete-time systems. In Proceedings of the American Control Conference, Baltimore, MD, USA, 30 June–2 July 2010; pp. 4719–4724. [Google Scholar]
  27. Chen, X.; Hao, F. Event-triggered average consensus control for discrete-time multi-agent systems. IET Control Theory Appl. 2012, 6, 2493–2498. [Google Scholar] [CrossRef]
  28. Jiang, Y.; Fan, J.; Chai, T.; Li, J.; Lewis, F.L. Data-driven flotation industrial process operational optimal control based on reinforcement learning. IEEE Trans. Ind. Inform. 2018, 14, 1974–1989. [Google Scholar] [CrossRef]
  29. Watkins, C.J.C.H.; Dayan, P. Q-learning. Mach. Learn. 1992, 8, 279–292. [Google Scholar] [CrossRef]
  30. Alsheikh, M.A.; Lin, S.; Niyato, D.; Tan, H.P. Machine learning in wireless sensor networks: Algorithms, strategies, and applications. IEEE Commun. Surv. Tutor. 2014, 16, 1996–2018. [Google Scholar] [CrossRef]
  31. Vamvoudakis, K.G.; Modares, H.; Kiumarsi, B.; Lewis, F.L. Game theory-based control system algorithms with real-time reinforcement learning: How to solve multiplayer games online. IEEE Control Syst. 2017, 37, 33–52. [Google Scholar]
  32. Peng, Z.; Luo, R.; Hu, J. Optimal tracking control of nonlinear multiagent systems using internal reinforce Q-learning. IEEE Trans. Neural Netw. Learn. Syst. 2021, 33, 4043–4055. [Google Scholar] [CrossRef] [PubMed]
  33. Wang, D.; Liu, D.; Wei, Q.; Zhao, D.; Jin, N. Optimal control of unknown nonaffine nonlinear discrete-time systems based on adaptive dynamic programming. Automatica 2012, 48, 1825–1832. [Google Scholar] [CrossRef]
  34. Peng, Z.; Hu, J.; Shi, K.; Luo, R.; Huang, R.; Ghosh, B.K.; Huang, J. A novel optimal bipartite consensus control scheme for unknown multi-agent systems via model-free reinforcement learning. Appl. Math. Comput. 2020, 369, 124821. [Google Scholar] [CrossRef]
  35. Zhang, H.; Yue, D.; Dou, C.; Zhao, W.; Xie, X. Data-driven distributed optimal consensus control for unknown multiagent systems with input-delay. IEEE Trans. Cybern. 2019, 49, 2095–2105. [Google Scholar] [CrossRef] [PubMed]
  36. Si, J.; Wang, Y.-T. Online learning control by association and reinforcement. IEEE Trans. Neural Netw. 2001, 12, 264–276. [Google Scholar] [CrossRef]
Figure 1. The topology structure for leader-follower MASs.
Figure 2. The tracks for the leader and followers.
Figure 3. The comparison of the trigger time number involving the suggested method as well as the conventional approach.
Figure 4. The triggering instant for each agent.
Figure 5. The triggering error trajectory $\| \epsilon_i^s(k) \|^2$ in addition to triggering thresholds $\pi_i^T$ $(i = 1, 2, 3, 4, 5, 6)$.
Figure 6. Local neighborhood errors $e_{i1}(k)$ with the proposed control method.
Figure 7. Local neighborhood errors $e_{i2}(k)$ with the proposed control method.
Figure 8. Local neighborhood errors $e_{i1}(k)$ of [32].
Figure 9. Local neighborhood errors $e_{i2}(k)$ of [32].
Figure 10. The estimation of weight parameters of the ANN of [32].
Figure 11. Estimation of the weight parameters of an ANN using the proposed control method.

