Article

Adaptive Iterative Learning Constrained Control for Linear Motor-Driven Gantry Stage with Fault-Tolerant Non-Repetitive Trajectory Tracking

Chaohai Yu
Research Institute of Intelligent Control and Systems, Harbin Institute of Technology, Harbin 150001, China
Mathematics 2024, 12(11), 1673; https://doi.org/10.3390/math12111673
Submission received: 25 March 2024 / Revised: 12 May 2024 / Accepted: 22 May 2024 / Published: 27 May 2024
(This article belongs to the Special Issue Application of Mathematical Method in Robust and Nonlinear Control)

Abstract

This article introduces an adaptive fault-tolerant control method for non-repetitive trajectory tracking of linear motor-driven gantry platforms under state constraints. It provides a comprehensive solution to real-world issues involving state constraints and actuator failures in gantry platforms, alleviating the challenges associated with precise modeling. Through the integration of iterative learning and backstepping cooperative design, this method achieves system stability without requiring a priori knowledge of system dynamic models or parameters. Leveraging a barrier composite energy function, the proposed controller can effectively regulate the stability of the controlled system, even when operating under state constraints. Instability issues caused by actuator failures are properly addressed, thereby enhancing controller robustness. The design of a trajectory correction function further extends applicability. Experimental validation on a linear motor-driven gantry platform serves as empirical evidence of the effectiveness of the proposed method.

1. Introduction

In recent years, with the ongoing advancement of modern industry towards intelligence and efficiency [1,2,3], linear motor-driven gantry stages (LMDGSs) have found widespread application [4,5] in the high-speed, high-precision intelligent equipment industry, such as surface mount technology (SMT), mechanical machining centers, and lithography machines, owing to their rapid and precise positioning capabilities [6,7]. Consequently, a plethora of complex control issues have emerged [8,9], and solutions to these problems are still being actively researched and explored [10,11].
The first challenge addressed in this article pertains to the state constraints of LMDGSs [12,13,14]. In the practical applications of these platforms, it is common practice to impose state constraints on the system to ensure safety and control effectiveness. For instance, due to the limited travel range of LMDGSs, maximum value constraints are placed on the position states of the moving platform to ensure safety [15]. Introducing time-varying dynamic position constraints on the motion segment of LMDGSs aims at achieving the desired tracking error in line with actual motion [16,17,18]. Using the barrier composite energy function and backstepping cooperative design, the controller in [19] can effectively regulate the stable operation of the controlled system, even when the state of the linear motor is constrained.
In addition to dynamic state constraints, actuator faults [20,21,22] also demand attention, as system reliability is of paramount importance for LMDGSs. In practical applications, LMDGSs commonly operate 24 hours a day, and under prolonged, uninterrupted, high-intensity operation, wear gradually accumulates on the actuators. Moreover, unforeseen events may cause permanent degradation of the actuator output capabilities. Since these changes are usually difficult to detect and repair promptly, this article also explores solutions to actuator fault issues [23,24]. By introducing an event-triggered adaptive control law [25], online adaptive adjustments are achieved, enhancing applicability and robustness to actuator faults. Such a design also mitigates the impact of both actuator faults and measurement errors [26].
In practical applications of LMDGSs, most scenarios involve extended periods of repetitive tasks. This feature allows iterative learning control (ILC) to achieve better tracking performance with lower control complexity than adaptive robust control (ARC) and proportional-integral-derivative (PID) control [19]. This paper builds on the work of [19] and further improves the iterative learning controller by adding non-repetitive trajectory tracking capabilities and fault-tolerant control. This article applies ILC as the cornerstone for devising a comprehensive solution to the aforementioned issues. Nevertheless, real-world applications inevitably introduce certain non-repetitive constraints, such as obstacle avoidance, which disrupt the repetitive nature of the tasks [27]. In [28], an ILC method is introduced based on an advanced internal model, along with convergence conditions, achieving robust tracking for non-repetitive system tasks. To enhance practicality and broaden the applicability of the control strategy, this article introduces a trajectory correction function to address non-repetitive trajectory challenges.
Considering the circumstances outlined above, this article introduces a non-repetitive trajectory adaptive iterative learning fault-tolerant controller for LMDGSs with state constraints, even in the absence of precise model information. The key contributions of this work are as follows:
  • The designed controller achieves the desired tracking precision without requiring precise modeling or precise a priori knowledge of the system;
  • By formulating a Barrier Composite Energy Function (BCEF) and integrating it into the controller design, the tracking control challenge associated with an LMDGS subject to state-variable constraints is properly addressed;
  • By incorporating the contribution of the controller's output fault term, the designed control scheme effectively mitigates the adverse effects of actuator failures and ensures the system's convergence;
  • By introducing the desired trajectory correction function into the design, non-repetitive desired trajectories can be effectively tracked while ensuring convergence of the system's tracking error.
The subsequent sections of this article are structured as follows. In Section 2, the tracking problem for an LMDGS is defined and a simplified plant model is presented. The trajectory correction function is designed in Section 3. Section 4 delves into a comprehensive exploration of the controller design and the BCEF. In Section 5, convergence and finiteness aspects of the proposed ILC tracking scheme are analyzed. Experimental results are presented and discussed in Section 6. Finally, Section 7 presents the concluding remarks and findings of this work.

2. Problem Statement and Preliminaries

The LMDGS depicted in Figure 1 is a two-dimensional multiple-input multiple-output system. This system exhibits parametric and non-parametric uncertainties due to discrepancies between nominal and real values, as well as losses incurred during actual system operation. A simplified continuous-time model of the LMDGS can be represented as

$\dot{s} = v, \quad m\dot{v} = g u^{F} - S_f k_c - k_v v + d_1, \quad u^{F} = p u + \phi, \quad y = \{s, v\}, \quad (1)$

where $m = [m_x, m_y]^T$ denotes the masses of the X- and Y-axis moving parts, $s = [s_x(t), s_y(t)]^T$ represents the position of the motion axes, and $v = [v_x(t), v_y(t)]^T$ is the corresponding velocity. $S_f k_c \in \mathbb{R}^{2\times 1}$ refers to approximate Coulomb friction, where $S_f = S_f(v)$ is an unknown continuous function, $k_v v \in \mathbb{R}^{2\times 1}$ is the viscous friction, and $k_c$ and $k_v$ denote friction coefficients. $f = S_f k_c + k_v v$ denotes the total resistance opposing the motion, and $d_1(s, v, t) = [d_x(t), d_y(t)]^T$ represents external disturbances and unknown system dynamics. $g = \mathrm{diag}[g_x(t), g_y(t)]$ is the gain function, which is not fully known. The controller input is $u = u(t) \in \mathbb{R}^{2\times 1}$. When affected by actuator failure, the actual input to the controlled plant is $u^F$. $p = \mathrm{diag}[p_1, p_2] \in \mathbb{R}^{2\times 2}$ represents the multiplicative actuator fault and $\phi \in \mathbb{R}^{2\times 1}$ denotes the additive actuator fault. Both actuator faults can vary with time and iteration, whereas $0 < p_1, p_2 \le 1$. $y$ is the system output. To facilitate subsequent operations, system (1) is transformed into the following form by introducing the new variables $X_1 = s$, $X_2 = v$, $X \triangleq \{X_1^T, X_2^T\}^T$, $G = g$, $\Phi^T F = -S_f k_c/m$, and $d = (d_1(X,t) - k_v v)/m$:
$\dot{X}_1 = X_2, \quad \dot{X}_2 = \Phi^T F + d + G u^{F}, \quad u^{F} = p u + \phi, \quad y = \{X_1, X_2\}. \quad (2)$
Then, the system (2) is transformed into the corresponding n-th iterative form as follows:
$\dot{X}_{1,n} = X_{2,n}, \quad \dot{X}_{2,n} = \Phi^T F_n + d_n + G_n u_n^{F}, \quad u_n^{F} = p_n u_n + \phi_n, \quad y = \{X_{1,n}, X_{2,n}\}. \quad (3)$
System states $X_{1,n} = X_{1,n}(t) \in \mathbb{R}^{2\times 1}$ and $X_{2,n} = X_{2,n}(t) \in \mathbb{R}^{2\times 1}$ represent the state of the system at the n-th iteration, with $n \in \mathbb{Z}^{+}$, $X \triangleq \{X_1^T, X_2^T\}^T$, $X_n \triangleq \{X_{1,n}^T, X_{2,n}^T\}^T$, and $t \in [0, \breve{T}]$, where $\breve{T} \in \mathbb{R}^{+}$ is the duration of each iteration. $d_n = d_n(t) \in \mathbb{R}^{2\times 1}$ denotes the unmodeled dynamics and unknown external disturbances at the n-th iteration. $\Phi^T F_n$ refers to the parametric uncertainty, where the unknown iteration-invariant part is given by $\Phi = \Phi(t) \in \mathbb{R}^{2\times 2}$ and $F_n = F(X_n, t) \in \mathbb{R}^{2\times 1}$ is a known state-dependent nonlinear function at the n-th iteration. In addition, $G_n = G_n(t) \in \mathbb{R}^{2\times 2}$ is the unknown control input gain function at the n-th iteration. The controller input is $u_n = u_n(t) \in \mathbb{R}^{2\times 1}$ at the n-th iteration. When affected by actuator failure, the actual input to the controlled plant at the n-th iteration is $u_n^F$. $p_n = p_n(t) = \mathrm{diag}[p_{n,1}(t), p_{n,2}(t)] \in \mathbb{R}^{2\times 2}$ represents the multiplicative actuator fault and $\phi_n \in \mathbb{R}^{2\times 1}$ denotes the additive actuator fault at the n-th iteration. Both actuator faults can vary with time and iteration, whereas $0 < p_{n,1}, p_{n,2} \le 1$.
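To make the fault-affected plant description above concrete, the following minimal Python sketch simulates one iteration of a single axis of system (3) under a multiplicative/additive actuator fault. All numerical values (mass, viscous friction, fault levels, disturbance, sample time) are illustrative assumptions and not the identified parameters of the experimental stage.

```python
import numpy as np

def simulate_axis(u, dt=0.001, m=15.0, k_v=0.13, p=0.9, phi=0.001):
    """Simulate one iteration of a single axis of system (3).

    u   : control input samples over one iteration
    p   : multiplicative actuator fault, 0 < p <= 1
    phi : additive actuator fault
    All values are illustrative assumptions, not identified stage parameters.
    """
    x1, x2 = 0.0, 0.0                              # position and velocity states
    X1, X2 = np.zeros(len(u)), np.zeros(len(u))
    for k, uk in enumerate(u):
        u_fault = p * uk + phi                     # u^F = p u + phi, as in (3)
        d = 0.01 * np.sin(2.0 * np.pi * k * dt)    # assumed bounded disturbance d_n
        acc = (u_fault - k_v * x2) / m + d         # simplified axis dynamics
        x2 += dt * acc                             # forward-Euler integration
        x1 += dt * x2
        X1[k], X2[k] = x1, x2
    return X1, X2
```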
ILC is a control method in which the controller is refined through repetition. It is designed for processes that repeat in cycles, allowing the controller to learn from previous cycles to improve performance in subsequent ones [29]. ILC updates the control signal using the tracking errors recorded in earlier iterations, continuously improving the response and stability of the system over n iterations. A simple P-type iterative learning controller takes the form

$u_n = u_{n-1} + K_p \big(X_{1,n} - X_{1r,n}\big), \quad (4)$

where $K_p$ is the learning rate and $X_{1r,n}$ refers to the system reference position state at the n-th iteration. For the convenience of further discussion, the following assumptions are considered:
Assumption 1.
The generalized control input gain function $G_{u,n} = \frac{1}{2}\big(G_n p_n + (G_n p_n)^T\big)$ is positive definite and bounded.
Remark 1.
This assumption is very practical in many engineering systems. In real engineering systems, the control input gain is generally positive and bounded, owing to the limited power of the actuators themselves. This condition is essential for maintaining consistency of the control direction during operation. Note that complete information regarding the control input gain function is usually not available, and its specific upper and lower bounds are unknown. Thus, $0 < \underline{b} I \le G_{u,n}$, where $\underline{b}$ is unknown.
Assumption 2.
Regarding the control input gain, actuator fault, and unmodeled dynamics, $\|d_n + G_n \phi_n\|$ has an unknown upper bound $\bar{d} > 0$.
Remark 2.
In real-world engineering applications, disturbances during system operation are typically bounded. These disturbances can disrupt system stability but are not typically large enough to cause system shutdown directly. Assumption 2 can be readily extended to encompass constraints with Lipschitz-like characteristics.
Assumption 3.
$\Phi_n$ has an unknown bound $\bar{\theta}$, i.e., $0 < \|\Phi_n\| \le \bar{\theta}$.
Remark 3.
Φ n represents the system’s parameter uncertainty. In real engineering scenarios, the parameters of physical systems are generally limited and tend to change slowly. In common practical scenarios, Φ n remains independent of the iteration index n. However, in this article, the possibility of Φ n varying with iterations is considered, thus expanding the applicability of the proposed strategy.
Lemma 1
(see [30]). For any $\epsilon > 0$ and $z \in \mathbb{R}$, it holds that $0 \le |z| - z \tanh(z/\epsilon) \le k_l \epsilon$, with $k_l = 0.2785$.
Lemma 2
(see [31]). For a sequence $\epsilon_k = q/k^{l}$, where $k \in \mathbb{Z}^{+}$, $q \in \mathbb{R}$, $l \in \mathbb{Z}^{+}$, $q > 0$, and $l > 1$, one has $\lim_{k \to \infty} \sum_{j=1}^{k} \epsilon_j \le 2q$.
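Before the trajectory modification is introduced, the iteration-domain mechanism behind a P-type law such as (4) can be illustrated with a deliberately simple toy example. The sketch below applies a P-type update to a fast first-order plant (relative degree one, where a suitable gain yields monotone convergence); the plant, reference, and gain are illustrative assumptions and are unrelated to the gantry model, and the error is taken as reference minus output so that a positive gain reduces it.

```python
import numpy as np

dt, T = 0.01, 4.0
t = np.arange(0.0, T, dt)
ref = 0.05 * np.sin(np.pi / 2.0 * t)        # reference trajectory over one iteration
a, b = 80.0, 1.0                            # toy first-order plant: x' = -a x + b u

def run_iteration(u):
    """Run the toy plant over one iteration for a given input profile."""
    x, y = 0.0, np.zeros_like(u)
    for k, uk in enumerate(u):
        x += dt * (-a * x + b * uk)         # forward-Euler step
        y[k] = x
    return y

u = np.zeros_like(t)                        # u_0 = 0: nothing learned yet
Kp = 60.0                                   # learning gain (assumed); |1 - Kp*b*dt| < 1
for n in range(15):                         # iteration-domain loop
    y = run_iteration(u)
    e = ref - y                             # tracking error of iteration n
    u = u + Kp * e                          # P-type update in the spirit of (4)
print(np.max(np.abs(e)))                    # maximum error after the last iteration
```

For this well-damped toy plant the error contracts from iteration to iteration; on the gantry stage itself the position output has relative degree two and the states are constrained, which is precisely why the more elaborate design of Section 4 is needed.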

3. Modification of Reference Trajectories

In this article, the reference trajectories $X_{1r,n}$ and $X_{2r,n}$ can vary between iterations. Hence, the system's initial states can also vary between iterations; that is, the initial state tracking errors may be nonzero and may even change from iteration to iteration. Therefore, it is impossible for the system states to track the desired trajectories well over the entire interval $[0, \breve{T}]$ of every iteration. In response, a short initial interval $[0, \breve{T}_1]$ is sacrificed to ensure good tracking over the remaining interval $[\breve{T}_1, \breve{T}]$. A new reference trajectory modification method for ILC is therefore proposed, as follows:
$X_{1d,n}(t) = X_{1r,n}(t) - \sum_{j=1}^{2} \omega_j(t)\big(X_{jr,n}(0) - X_{j,n}(0)\big), \quad (5)$
and, from the derivative definition,
$X_{2d,n}(t) = \dot{X}_{1d,n}(t) = \dot{X}_{1r,n}(t) - \sum_{j=1}^{2} \dot{\omega}_j(t)\big(X_{jr,n}(0) - X_{j,n}(0)\big) = X_{2r,n}(t) - \sum_{j=1}^{2} \dot{\omega}_j(t)\big(X_{jr,n}(0) - X_{j,n}(0)\big), \quad (6)$
where $X_{1d,n}(t)$ and $X_{2d,n}(t)$ are the modified desired reference trajectories and $\omega_j(t) \in \mathbb{R}$, $j = 1, 2$, are "trajectory modifier functions" satisfying the following properties:
(1) $\omega_j(t)$, $j = 1, 2$, are twice differentiable;
(2) $\omega_j(t)$, $j = 1, 2$, are uniformly bounded over $[0, \breve{T}]$;
(3) $\omega_j^{(j-1)}(0) = 1$, $j = 1, 2$, whereas $\omega_j^{(i)}(0) = 0$ for $i = 0, 1, 2$, $i \neq j-1$, where $\omega_j^{(i)}$ represents the i-th order time derivative;
(4) $\omega_j^{(i)}(t) = 0$, $j = 1, 2$, $i = 0, 1, 2$, over $t \in [\breve{T}_1, \breve{T}]$, where $0 < \breve{T}_1 < \breve{T}$.
The example used in this paper is as follows:

$\omega_j(t) = \begin{cases} \dfrac{t^{\,j-1}}{(j-1)!}\cos^3\!\Big(\dfrac{\pi}{2}\sin^3\!\Big(\dfrac{\pi t^2}{2\breve{T}_1^2}\Big)\Big), & t \in [0, \breve{T}_1), \\[4pt] 0, & t \in [\breve{T}_1, \breve{T}], \end{cases} \quad (7)$

where $j = 1, 2$, and $!$ denotes the factorial, namely $j! = j(j-1)\cdots 1$.
It can be seen from Figure 2 and (7) that the trajectory modifier function $\omega_1$ equals 1 at $t = 0$ and 0 at $t = \breve{T}_1$, while $\omega_2$ equals 0 at both $t = 0$ and $t = \breve{T}_1$, and both functions vary continuously and are differentiable. Combined with (5) and (6), this shows that, regardless of the value of $X_{1r,n}(0)$, the modified trajectory satisfies $X_{1d,n}(0) = X_{1,n}(0)$, so the initial tracking error with respect to $X_{1d,n}$ is zero in every iteration, which satisfies the identical-initial-condition requirement of iterative learning control. Moreover, $X_{1d,n}$ coincides with $X_{1r,n}$ over $[\breve{T}_1, \breve{T}]$. The same holds for $X_{2d,n}$. Therefore, the trajectory modification method composed of (5)–(7) not only satisfies the conditions of ILC but also does not affect the tracking target of the system.
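A small numerical sketch of the modifier functions (7) and the modified reference (5) is given below; it reproduces the boundary values discussed above ($\omega_1(0)=1$, $\omega_2(0)=0$, and both vanish at $\breve{T}_1$). The value $\breve{T}_1 = 1$ s is taken from the experiments in Section 6 and is otherwise an assumption.

```python
import numpy as np
from math import factorial

T1 = 1.0  # length of the corrected initial interval [0, T1) (assumed, as in Section 6)

def omega(j, t):
    """Trajectory modifier function (7): omega_j(t) for j = 1, 2."""
    t = np.asarray(t, dtype=float)
    inner = np.cos(np.pi / 2.0 * np.sin(np.pi * t**2 / (2.0 * T1**2))**3)**3
    return np.where(t < T1, t**(j - 1) / factorial(j - 1) * inner, 0.0)

def modified_reference(x1r, x2r, x1_0, x2_0, t):
    """Modified desired position trajectory (5): the measured initial states
    x1_0, x2_0 are blended in so that the initial tracking error is nulled."""
    return (x1r - omega(1, t) * (x1r[0] - x1_0)
                - omega(2, t) * (x2r[0] - x2_0))

t_check = np.array([0.0, T1])
print(omega(1, t_check))   # [1. 0.]
print(omega(2, t_check))   # [0. 0.]
```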
Remark 4.
Typically, conventional ILC controllers require the initial conditions to be identical in every iteration, meaning that the system's initial tracking error is zero. However, this paper addresses the problem of non-repetitive trajectory tracking, which does not meet this requirement. The trajectory correction function nullifies the initial tracking error, allowing the system to align with the target trajectory within a very short time frame. This is why the ILC-based controller in this paper can ensure system stability and maintain robust tracking performance. That is, with the trajectory modifier functions $\omega_j(t)$, one has $X_{jd,n}(0) = X_{j,n}(0)$ and $X_{jd,n}(t) = X_{jr,n}(t)$, $j = 1, 2$, over $t \in [\breve{T}_1, \breve{T}]$. The proposed control strategy follows the modified trajectories $X_{jd,n}(t)$. This implies that, when dealing with non-repetitive trajectory tracking and random nonzero initial tracking errors, tracking performance may be slightly compromised during the short interval $t \in [0, \breve{T}_1]$ at the beginning of each iteration. It is important to note that this trade-off is made to ensure that the control objectives are achieved within $t \in [\breve{T}_1, \breve{T}]$. Since $\breve{T}_1$ can be configured to be as short as necessary, $[\breve{T}_1, \breve{T}]$ can approximate the entire interval $[0, \breve{T}]$ to a high degree of precision.
System state tracking errors are designed as follows:
$e_{j,n}(t) = X_{j,n}(t) - X_{jr,n}(t), \quad j = 1, 2, \quad n \in \mathbb{Z}^{+}. \quad (8)$
Thus, system state constraints in every iteration are given by
$\|e_{1,n}(t)\| < k_{b,n}(t), \quad t \in [\breve{T}_1, \breve{T}], \quad (9)$

where $\|\cdot\|$ represents the Euclidean norm and $k_{b,n} > 0$ over $t \in [\breve{T}_1, \breve{T}]$ is the constraint function of the system position state tracking error at the n-th iteration.
Remark 5.
In practice, ensuring the stable operation of a system subject to state constraints is a common and demanding control challenge. Many systems require position and velocity regulation to maintain stability; this is essential, for instance, in SMT machines that place components on circuit boards and in lithography presses used for chip patterning [32]. Various other system states, such as acceleration and jerk, may also be constrained by system limitations or actuator torque boundaries.
For the sake of notation simplicity, time and state variables may be omitted in the subsequent analysis whenever this does not lead to any ambiguity.

4. Controller Design and BCEF

In this section, the control algorithm design is first presented. Fictitious state tracking errors are introduced such that
$z_{1,n} = X_{1,n} - X_{1d,n}, \quad z_{2,n} = X_{2,n} - \sigma_{1,n}, \quad (10)$
where σ 1 , n is the stabilizing term as follows:
$\sigma_{1,n} = \dot{X}_{1d,n} + K_{\sigma,n} z_{1,n} - K_1 z_{1,n}\cos^2\!\Big(\dfrac{\pi z_{1,n}^T z_{1,n}}{2 k_{b,n}^2}\Big) - \dfrac{2 K_{\sigma,n} k_{b,n}^2}{\pi}\,\dfrac{z_{1,n}}{z_{1,n}^T z_{1,n}}\sin\!\Big(\dfrac{\pi z_{1,n}^T z_{1,n}}{2 k_{b,n}^2}\Big)\cos\!\Big(\dfrac{\pi z_{1,n}^T z_{1,n}}{2 k_{b,n}^2}\Big), \quad (11)$

where $K_1$ is a positive design constant and $K_{\sigma,n} = \dot{k}_{b,n}/k_{b,n}$.
The following controller is proposed by employing the BCEF, the backstepping design, and the structure of system (3):

$u_n = -\breve{u}_n v_n \tanh\!\Big(\dfrac{z_{2,n}^T \breve{u}_n v_n}{\epsilon_n}\Big), \quad (12)$

where $\epsilon_n > 0$ is designed according to Lemma 2 at each iteration n, and $\breve{u}_n$ is the estimate at the n-th iteration of $\bar{u} = 1/\underline{b}$ from Remark 1, which is designed as
$\breve{u}_n = \breve{u}_{n-1} + K_{\breve{u}}\, z_{2,n}^T v_n, \quad \breve{u}_0 = 0, \quad (13)$
where $K_{\breve{u}} > 0$ is a design parameter and $v_n$ is designed as

$v_n = \breve{v}_n + v_n^0, \quad (14)$

$\breve{v}_n = \breve{v}_{n-1} + K_v z_{2,n}, \quad \breve{v}_0 = 0, \quad (15)$

$v_n^0 = -\dot{\sigma}_{1,n} + K_2 z_{2,n} + z_{1,n}\cos^{-2}\!\Big(\dfrac{\pi z_{1,n}^T z_{1,n}}{2 k_{b,n}^2}\Big) + \breve{d}_n \tanh\!\Big(\dfrac{z_{2,n}}{\epsilon_n}\Big) + \breve{\theta}_n \breve{F}_n \tanh\!\Big(\dfrac{z_{2,n}^T \breve{F}_n}{\epsilon_n}\Big), \quad (16)$
where $K_v > 0$ is a design parameter and $K_2 > 0$ is a designed control gain. $\breve{d}_n$ and $\breve{\theta}_n$ are the estimates of $\bar{d}$ and $\bar{\theta}$ from Assumptions 2 and 3 at the n-th iteration, respectively:
$\breve{d}_n = \breve{d}_{n-1} + K_d\, z_{2,n}^T \tanh\!\Big(\dfrac{z_{2,n}}{\epsilon_n}\Big), \quad (17)$

$\breve{\theta}_n = \breve{\theta}_{n-1} + K_\theta\, z_{2,n}^T \breve{F}_n \tanh\!\Big(\dfrac{z_{2,n}^T \breve{F}_n}{\epsilon_n}\Big), \quad (18)$

where $K_d > 0$ and $K_\theta > 0$ are design parameters.
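For clarity, the sketch below collects the learning laws (13), (15), (17), (18) and the control law (12) for a single axis at one time sample of iteration n. It is a scalar illustration only: the previous-iteration quantities are assumed to be stored per time sample, the sign conventions follow the reconstruction above, all gains are assumed values, and the terms $\dot{\sigma}_{1,n}$ and $z_{1,n}\cos^{-2}(\cdot)$ of (16) are assumed to be computed elsewhere from (11).

```python
import numpy as np

def ilc_control_sample(z2, F, sigma1_dot, z1_barrier, prev, gains, eps):
    """Scalar, single-axis sketch of (12)-(18) at one time sample of iteration n.

    prev       : previous-iteration values {u_brv, v_brv, d_brv, th_brv} at this
                 time sample (all zero for the first iteration)
    sigma1_dot : derivative of the stabilizing term (11) at this sample
    z1_barrier : the term z1 / cos^2(pi*z1^2 / (2*kb^2)) from (16)
    gains      : dict with K2, Ku, Kv, Kd, Kth (illustrative assumptions)
    """
    d_brv  = prev["d_brv"]  + gains["Kd"]  * z2 * np.tanh(z2 / eps)            # (17)
    th_brv = prev["th_brv"] + gains["Kth"] * z2 * F * np.tanh(z2 * F / eps)    # (18)
    v0 = (-sigma1_dot + gains["K2"] * z2 + z1_barrier
          + d_brv * np.tanh(z2 / eps) + th_brv * F * np.tanh(z2 * F / eps))    # (16)
    v_brv = prev["v_brv"] + gains["Kv"] * z2                                   # (15)
    v = v_brv + v0                                                             # (14)
    u_brv = prev["u_brv"] + gains["Ku"] * z2 * v                               # (13)
    u = -u_brv * v * np.tanh(z2 * u_brv * v / eps)                             # (12)
    new = {"u_brv": u_brv, "v_brv": v_brv, "d_brv": d_brv, "th_brv": th_brv}
    return u, new
```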
The following BCEF is applied to facilitate further analysis:

$E_n(t) = V_{1,n}(t) + V_{2,n}(t) + V_{v,n}(t) + V_{\breve{u},n}(t) + V_{d,n}(t) + V_{\theta,n}(t), \quad (19)$

$V_{1,n}(t) = \dfrac{k_{b,n}^2(t)}{\pi}\tan\!\Big(\dfrac{\pi z_{1,n}^T(t) z_{1,n}(t)}{2 k_{b,n}^2(t)}\Big), \quad (20)$

$V_{2,n}(t) = \dfrac{1}{2} z_{2,n}^T(t) z_{2,n}(t), \quad (21)$

$V_{v,n}(t) = \dfrac{1}{2 K_v}\int_0^t \breve{v}_n^T(\tau)\breve{v}_n(\tau)\,d\tau, \quad (22)$

$V_{\breve{u},n}(t) = \dfrac{\underline{b}}{2 K_{\breve{u}}}\int_0^t \tilde{u}_n^2(\tau)\,d\tau, \quad (23)$

$V_{d,n}(t) = \dfrac{1}{2 K_d}\int_0^t \tilde{d}_n^2(\tau)\,d\tau, \quad (24)$

$V_{\theta,n}(t) = \dfrac{1}{2 K_\theta}\int_0^t \tilde{\theta}_n^2(\tau)\,d\tau, \quad (25)$

where $\tilde{u} = \breve{u} - \bar{u}$, $\tilde{d} = \breve{d} - \bar{d}$, and $\tilde{\theta} = \breve{\theta} - \bar{\theta}$.
Remark 6.
When designing the BCEF, the following barrier Lyapunov function (BLF) from the author's previous research [19] is chosen as the basis:

$V_n = \dfrac{k_{b,n}^2}{\pi}\tan\!\Big(\dfrac{\pi z_{1,n}^T z_{1,n}}{2 k_{b,n}^2}\Big), \quad z_{1,n}(0) = 0, \quad (26)$

where $z_{1,n} = X_{1,n} - X_{1d,n}$ is the fictitious position tracking error of the LMDGS. It follows from (26) that $V_n$ tends to infinity as $\|z_{1,n}\|$ approaches the predefined bound $k_{b,n}$. Subsequently, a comprehensive analysis of the complete BCEF is carried out to prove the convergence of the system's state tracking error and to ensure compliance with the state constraints.
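To make the barrier mechanism of (26) explicit, note its two limiting behaviours (a restatement of known BLF properties, not a new result):

$V_n = \dfrac{k_{b,n}^2}{\pi}\tan\!\Big(\dfrac{\pi z_{1,n}^T z_{1,n}}{2 k_{b,n}^2}\Big) \;\longrightarrow\; +\infty \quad \text{as } \|z_{1,n}\| \to k_{b,n}^{-}, \qquad V_n \approx \dfrac{1}{2}\, z_{1,n}^T z_{1,n} \quad \text{for } \|z_{1,n}\| \ll k_{b,n},$

so keeping $V_n$ (and hence $E_n$) finite automatically keeps the position error strictly inside the constraint, while near the origin the BLF behaves like an ordinary quadratic Lyapunov function.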
Remark 7.
The control algorithm in this paper is derived from the designed BCEF together with the backstepping method. Based on Remark 6, $V_{1,n}$ is designed to constrain the system's position state, $V_{2,n}$ ensures effective tracking of the system's velocity, $V_{d,n}$ enables the system to adaptively handle external disturbances, $V_{\theta,n}$ enhances robustness against parameter uncertainty and unmodeled dynamics, $V_{\breve{u},n}$ accounts for the actuator fault, and $V_{v,n}$ ensures stability of the iterative process. The iteration-domain update laws of the controller are of P-type, and, according to Lemmas 1 and 2, the corresponding terms are derived through the designed BCEF and the backstepping method. The controller is represented by (12) and (13), while (15) to (18) describe the iterative computation process. Although it may appear complex, it involves only simple mathematical operations. For a comparison with other algorithms, please refer to Section 6.2.

5. Convergence Analysis

This section establishes the convergence of the system tracking errors and the satisfaction of the constraints on the system output and states.
Theorem 1.
(1) For the LMDGS system (3) under actuator faults and Assumptions 1–3, given control law (12) with ILC laws (13)–(16), the system state tracking errors $e_{1,n}(t)$ and $e_{2,n}(t)$ uniformly converge to 0 over $t \in [\breve{T}_1, \breve{T}]$ as $n \to \infty$; that is, $\lim_{n\to\infty} e_{j,n} = 0$ over $t \in [\breve{T}_1, \breve{T}]$, $j = 1, 2$.
(2) The constraints on the system states are ensured within each iteration, that is, $\|e_{1,n}\| < k_{b,n}$, and all system states are finite for $t \in [0, \breve{T}]$ during each iteration.
Proof of Theorem 1.
The proof consists of three steps. First, it is shown that E n ( t ) is finite. Next, uniform convergence of state tracking errors is demonstrated. Finally, it is proved that system states are finite. □

5.1. Finiteness of E n ( t )

First, it will be proved that the designed BCEF is bounded over $t \in [0, \breve{T}]$ in every iteration. For any iteration index n, the difference between two consecutive iterations is

$\Delta E_n(t) = \Delta V_{1,n} + \Delta V_{2,n} + \Delta V_{v,n} + \Delta V_{\breve{u},n} + \Delta V_{d,n} + \Delta V_{\theta,n}, \quad (27)$

where $\Delta(\cdot)_n = (\cdot)_n - (\cdot)_{n-1}$.
The terms in (27) are examined in turn. For $\Delta V_{1,n}$, one has
$\Delta V_{1,n} = \dfrac{k_{b,n}^2(0)}{\pi}\tan\!\Big(\dfrac{\pi z_{1,n}^T(0) z_{1,n}(0)}{2 k_{b,n}^2(0)}\Big) - \dfrac{k_{b,n-1}^2}{\pi}\tan\!\Big(\dfrac{\pi z_{1,n-1}^T z_{1,n-1}}{2 k_{b,n-1}^2}\Big) + \displaystyle\int_0^t \Big( \dfrac{2 k_{b,n}\dot{k}_{b,n}}{\pi}\tan\!\Big(\dfrac{\pi z_{1,n}^T z_{1,n}}{2 k_{b,n}^2}\Big) - \dfrac{\dot{k}_{b,n}}{k_{b,n}}\,\dfrac{z_{1,n}^T z_{1,n}}{\cos^2\!\big(\frac{\pi z_{1,n}^T z_{1,n}}{2 k_{b,n}^2}\big)} + \dfrac{z_{1,n}^T\big(z_{2,n} + \sigma_{1,n} - \dot{X}_{1d,n}\big)}{\cos^2\!\big(\frac{\pi z_{1,n}^T z_{1,n}}{2 k_{b,n}^2}\big)} \Big)\,d\tau. \quad (28)$
With the stabilizing function $\sigma_{1,n}$ designed as in (11), and noting that $z_{1,n}(0) = 0$ in every iteration thanks to the designed trajectory modifier functions, one has

$\Delta V_{1,n} < \displaystyle\int_0^t \Big( -K_1 z_{1,n}^T z_{1,n} + \dfrac{z_{1,n}^T z_{2,n}}{\cos^2\!\big(\frac{\pi z_{1,n}^T z_{1,n}}{2 k_{b,n}^2}\big)} \Big)\,d\tau - \dfrac{k_{b,n-1}^2}{\pi}\tan\!\Big(\dfrac{\pi z_{1,n-1}^T z_{1,n-1}}{2 k_{b,n-1}^2}\Big). \quad (29)$
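As a sanity check on the step from (28) to (29), write $A_n = \pi z_{1,n}^T z_{1,n}/(2 k_{b,n}^2)$ and substitute (11) into $\dot{z}_{1,n} = z_{2,n} + \sigma_{1,n} - \dot{X}_{1d,n}$; the terms generated by the time-varying bound $k_{b,n}$ cancel and only the integrand of (29) survives:

$\dot{V}_{1,n} = \dfrac{2 k_{b,n}\dot{k}_{b,n}}{\pi}\tan A_n + \dfrac{z_{1,n}^T \dot{z}_{1,n} - K_{\sigma,n}\, z_{1,n}^T z_{1,n}}{\cos^2 A_n} = -K_1 z_{1,n}^T z_{1,n} + \dfrac{z_{1,n}^T z_{2,n}}{\cos^2 A_n}.$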
Similarly, $\Delta V_{2,n}$ leads to

$\Delta V_{2,n} = \dfrac{1}{2} z_{2,n}^T(0) z_{2,n}(0) + \displaystyle\int_0^t z_{2,n}^T \dot{z}_{2,n}\,d\tau - \dfrac{1}{2} z_{2,n-1}^T z_{2,n-1}. \quad (30)$
Note that

$\dot{X}_{1d,n}(t) = \dot{X}_{1r,n}(t) - \sum_{j=1}^{2}\dot{\omega}_j(t)\big(X_{jr,n}(0) - X_{j,n}(0)\big), \quad (31)$

where, at $t = 0$, $\dot{\omega}_2(0) = 1$ and $\dot{\omega}_1(0) = 0$, so $\dot{X}_{1d,n}(0) = X_{2r,n}(0) - \big(X_{2r,n}(0) - X_{2,n}(0)\big) = X_{2,n}(0)$. Thus, $z_{2,n}(0) = X_{2,n}(0) - \sigma_{1,n}(0) = 0$.
At the final step of the backstepping process, the control input appears explicitly. Examining the dynamics of $z_{2,n}$ yields

$\dot{z}_{2,n} = -\dot{\sigma}_{1,n} + \Phi^T F_n + d_n + G_n p_n u_n + G_n \phi_n. \quad (32)$
Furthermore,
$\Delta V_{2,n} = \dfrac{1}{2} z_{2,n}^T(0) z_{2,n}(0) + \displaystyle\int_0^t z_{2,n}^T \dot{z}_{2,n}\,d\tau - \dfrac{1}{2} z_{2,n-1}^T z_{2,n-1}, \quad (33)$
where $z_{2,n}(0) = 0$ follows from the design of the modified reference and the modifier functions. For the term $z_{2,n}^T \dot{z}_{2,n}$, one has

$z_{2,n}^T \dot{z}_{2,n} = -z_{2,n}^T \dot{\sigma}_{1,n} + z_{2,n}^T \Phi^T F_n + z_{2,n}^T\big(d_n + G_n \phi_n\big) + z_{2,n}^T G_n p_n u_n. \quad (34)$
Note that

$z_{2,n}^T \Phi^T F_n \le \|z_{2,n}\|\,\bar{\theta}\,\|F_n\| < \bar{\theta} k_l \epsilon_n + \bar{\theta}\, z_{2,n}^T F_n \tanh\!\Big(\dfrac{z_{2,n}^T F_n}{\epsilon_n}\Big), \quad (35)$
and
$z_{2,n}^T\big(d_n + G_n \phi_n\big) \le \|z_{2,n}\|\,\bar{d} < \bar{d} k_l \epsilon_n + \bar{d}\, z_{2,n}^T \tanh\!\Big(\dfrac{z_{2,n}}{\epsilon_n}\Big). \quad (36)$
Concerning the control input, given Assumption 1, Lemma 1, and $\bar{u} = 1/\underline{b}$, one obtains

$z_{2,n}^T G_n p_n u_n < \epsilon_n \underline{b} - \underline{b}\,\breve{u}_n z_{2,n}^T v_n + \underline{b}\,\bar{u}\, z_{2,n}^T v_n - \underline{b}\,\bar{u}\, z_{2,n}^T v_n = \epsilon_n \underline{b} - z_{2,n}^T v_n - \underline{b}\,\tilde{u}_n z_{2,n}^T v_n. \quad (37)$
Considering (14)–(16), (33) is transformed into
$\Delta V_{2,n} < \displaystyle\int_0^t \Big( -z_{2,n}^T \breve{v}_n - \underline{b}\,\tilde{u}_n z_{2,n}^T v_n + \epsilon_n\big(\underline{b} + \bar{d} k_l + \bar{\theta} k_l\big) - \dfrac{z_{1,n}^T z_{2,n}}{\cos^2\!\big(\frac{\pi z_{1,n}^T z_{1,n}}{2 k_{b,n}^2}\big)} - K_2 z_{2,n}^T z_{2,n} - \tilde{d}_n z_{2,n}^T \tanh\!\Big(\dfrac{z_{2,n}}{\epsilon_n}\Big) - \tilde{\theta}_n z_{2,n}^T F_n \tanh\!\Big(\dfrac{z_{2,n}^T F_n}{\epsilon_n}\Big) \Big)\,d\tau - \dfrac{1}{2} z_{2,n-1}^T z_{2,n-1}. \quad (38)$
Regarding the terms $\Delta V_{v,n}$, $\Delta V_{\breve{u},n}$, $\Delta V_{d,n}$, and $\Delta V_{\theta,n}$: starting with $\Delta V_{v,n}$, it can be obtained that

$\Delta V_{v,n}(t) = \dfrac{1}{2 K_v}\displaystyle\int_0^t \big(\breve{v}_n + \breve{v}_{n-1}\big)^T\big(\breve{v}_n - \breve{v}_{n-1}\big)\,d\tau = \dfrac{1}{2 K_v}\displaystyle\int_0^t \big(2\breve{v}_n - K_v z_{2,n}\big)^T\big(K_v z_{2,n}\big)\,d\tau \le \displaystyle\int_0^t z_{2,n}^T \breve{v}_n\,d\tau. \quad (39)$
Similarly,
$\Delta V_{\breve{u},n}(t) = \dfrac{\underline{b}}{2 K_{\breve{u}}}\displaystyle\int_0^t \big(\tilde{u}_n^2 - \tilde{u}_{n-1}^2\big)\,d\tau \le \displaystyle\int_0^t \underline{b}\,\tilde{u}_n z_{2,n}^T v_n\,d\tau, \quad (40)$

$\Delta V_{d,n}(t) = \dfrac{1}{2 K_d}\displaystyle\int_0^t \big(\tilde{d}_n^2 - \tilde{d}_{n-1}^2\big)\,d\tau \le \displaystyle\int_0^t \tilde{d}_n z_{2,n}^T \tanh\!\Big(\dfrac{z_{2,n}}{\epsilon_n}\Big)\,d\tau, \quad (41)$

$\Delta V_{\theta,n}(t) = \dfrac{1}{2 K_\theta}\displaystyle\int_0^t \big(\tilde{\theta}_n^2 - \tilde{\theta}_{n-1}^2\big)\,d\tau \le \displaystyle\int_0^t \tilde{\theta}_n z_{2,n}^T F_n \tanh\!\Big(\dfrac{z_{2,n}^T F_n}{\epsilon_n}\Big)\,d\tau. \quad (42)$
Therefore, from (29), (38)–(42),
$\Delta E_n < -\displaystyle\int_0^t \sum_{j=1}^{2} K_j z_{j,n}^T z_{j,n}\,d\tau + \epsilon_n\big(\underline{b} + \bar{d} k_l + \bar{\theta} k_l\big)\breve{T} - \dfrac{k_{b,n-1}^2}{\pi}\tan\!\Big(\dfrac{\pi z_{1,n-1}^T z_{1,n-1}}{2 k_{b,n-1}^2}\Big) - \dfrac{1}{2} z_{2,n-1}^T z_{2,n-1}. \quad (43)$
To prove that $E_n$ is finite, applying (43) recursively from iteration 2 to n gives

$E_n(t) \le E_1(t) - \displaystyle\sum_{k=2}^{n}\int_0^t \sum_{j=1}^{2} K_j z_{j,k}^T z_{j,k}\,d\tau + \sum_{k=2}^{n}\epsilon_k\big(\underline{b} + \bar{d} k_l + \bar{\theta} k_l\big)\breve{T} - \sum_{k=2}^{n}\dfrac{k_{b,k-1}^2}{\pi}\tan\!\Big(\dfrac{\pi z_{1,k-1}^T z_{1,k-1}}{2 k_{b,k-1}^2}\Big) - \sum_{k=2}^{n}\dfrac{1}{2} z_{2,k-1}^T z_{2,k-1}. \quad (44)$
From Lemma 2, $\sum_{k=2}^{n}\epsilon_k(\underline{b} + \bar{d} k_l + \bar{\theta} k_l)\breve{T}$ is bounded. It therefore remains to show that $E_1(t)$ is bounded. For $n = 1$, differentiating $E_1(t)$ gives
$\dot{E}_1 = \dot{V}_{1,1} + \dot{V}_{2,1} + \dot{V}_{v,1} + \dot{V}_{\breve{u},1} + \dot{V}_{d,1} + \dot{V}_{\theta,1} \le -z_{2,1}^T \breve{v}_1 - \underline{b}\,\tilde{u}_1 z_{2,1}^T v_1 + \epsilon_1\big(\underline{b} + \bar{d} k_l + \bar{\theta} k_l\big) - \sum_{j=1}^{2} K_j z_{j,1}^T z_{j,1} - \tilde{d}_1 z_{2,1}^T \tanh\!\Big(\dfrac{z_{2,1}}{\epsilon_1}\Big) - \tilde{\theta}_1 z_{2,1}^T F_1 \tanh\!\Big(\dfrac{z_{2,1}^T F_1}{\epsilon_1}\Big) + \dfrac{1}{2 K_v}\breve{v}_1^T\breve{v}_1 + \dfrac{\underline{b}}{2 K_{\breve{u}}}\tilde{u}_1^2 + \dfrac{1}{2 K_d}\tilde{d}_1^2 + \dfrac{1}{2 K_\theta}\tilde{\theta}_1^2. \quad (45)$
From (13), (15), (17) and (18), one can obtain
$\breve{u}_1 = K_{\breve{u}}\, z_{2,1}^T v_1, \quad \breve{v}_1 = K_v z_{2,1}, \quad \breve{d}_1 = K_d\, z_{2,1}^T \tanh\!\Big(\dfrac{z_{2,1}}{\epsilon_1}\Big), \quad \breve{\theta}_1 = K_\theta\, z_{2,1}^T F_1 \tanh\!\Big(\dfrac{z_{2,1}^T F_1}{\epsilon_1}\Big). \quad (46)$
By introducing (46) into (45) and simplifying it,
$\dot{E}_1 < \epsilon_1\big(\underline{b} + \bar{d} k_l + \bar{\theta} k_l\big) + \dfrac{\underline{b}\,\tilde{u}_1^2}{2 K_{\breve{u}}} + \dfrac{\tilde{d}_1^2}{2 K_d} + \dfrac{\tilde{\theta}_1^2}{2 K_\theta} < \infty. \quad (47)$
Hence, $E_1(t)$ is finite over $t \in [0, \breve{T}]$. For $n = 2$, from (44) and Lemma 2, $E_1(t)$, $\epsilon_2(\underline{b} + \bar{d} k_l + \bar{\theta} k_l)$, and $\frac{k_{b,1}^2}{\pi}\tan\big(\frac{\pi z_{1,1}^T z_{1,1}}{2 k_{b,1}^2}\big)$ are finite and nonnegative; thus, $E_2(t)$ is also finite over $t \in [0, \breve{T}]$. By induction, it can be concluded that, for any iteration n, $E_n(t)$ is finite over $t \in [0, \breve{T}]$.

5.2. Convergence of State Tracking Errors

5.2.1. Fictitious State Tracking Errors

Based on the previous analysis, as $n \to \infty$ it follows that

$\lim_{n\to\infty} E_n(t) \le E_1(t) - \lim_{n\to\infty}\sum_{k=2}^{n}\displaystyle\int_0^t \sum_{j=1}^{2} K_j z_{j,k}^T z_{j,k}\,d\tau + \lim_{n\to\infty}\sum_{k=2}^{n}\epsilon_k\big(\underline{b} + \bar{d} k_l + \bar{\theta} k_l\big)\breve{T} - \lim_{n\to\infty}\sum_{k=2}^{n}\dfrac{k_{b,k-1}^2}{\pi}\tan\!\Big(\dfrac{\pi z_{1,k-1}^T z_{1,k-1}}{2 k_{b,k-1}^2}\Big) - \lim_{n\to\infty}\sum_{k=2}^{n}\dfrac{1}{2} z_{2,k-1}^T z_{2,k-1}. \quad (48)$
Note that $E_n(t)$ is nonnegative, and both $C_1 = E_1(t)$ and $C_2 = \lim_{n\to\infty}\sum_{k=2}^{n}\epsilon_k(\underline{b} + \bar{d} k_l + \bar{\theta} k_l)\breve{T}$ are bounded. Since $\frac{k_{b,k-1}^2}{\pi}\tan\big(\frac{\pi z_{1,k-1}^T z_{1,k-1}}{2 k_{b,k-1}^2}\big) \ge \frac{1}{2} z_{1,k-1}^T z_{1,k-1}$, it follows that

$0 \le \lim_{n\to\infty} E_n(t) \le C_1 + C_2 - \lim_{n\to\infty}\sum_{k=2}^{n}\sum_{j=1}^{2}\dfrac{1}{2} z_{j,k}^T z_{j,k} - \lim_{n\to\infty}\sum_{k=2}^{n}\displaystyle\int_0^t \sum_{j=1}^{2} K_j z_{j,k}^T z_{j,k}\,d\tau. \quad (49)$
Since $K_j > 0$ and $E_n(t) \ge 0$, the series above must converge, which implies $\lim_{n\to\infty}\sum_{j=1}^{2} z_{j,n}^T z_{j,n} = 0$. Thus,

$\lim_{n\to\infty} z_{j,n}(t) = 0, \quad j = 1, 2, \quad \forall t \in [0, \breve{T}], \quad (50)$
and uniform convergence of the system fictitious full state tracking error is demonstrated.

5.2.2. State Tracking Errors

Next, the convergence of the true state tracking errors is considered. For the position state tracking error $e_{1,n}$, from (5), (7), (8), (10), and (50), it follows that $\lim_{n\to\infty} e_{1,n}(t) = 0$ for $t \in [\breve{T}_1, \breve{T}]$. Similarly, from (6)–(8), (10), and (50), as $n \to \infty$, $\sigma_{1,n} \to \dot{X}_{1d,n} = X_{2r,n}$ and $z_{2,n} = X_{2,n} - \sigma_{1,n} \to e_{2,n}$ over $t \in [\breve{T}_1, \breve{T}]$. Since $z_{2,n}$ converges uniformly, $e_{2,n}$ also converges uniformly to 0 as $n \to \infty$ for $t \in [\breve{T}_1, \breve{T}]$. In other words,

$\lim_{n\to\infty} e_{j,n}(t) = 0, \quad t \in [\breve{T}_1, \breve{T}], \quad j = 1, 2. \quad (51)$

5.3. Boundedness of System States

Since $E_n(t)$ has been proven to be finite, the barrier term satisfies $\frac{k_{b,n}^2}{\pi}\tan\big(\frac{\pi z_{1,n}^T z_{1,n}}{2 k_{b,n}^2}\big) \le E_n < \infty$. Hence, inverting the barrier function, one can obtain

$\|z_{1,n}\| \le \sqrt{\dfrac{2 k_{b,n}^2}{\pi}\tan^{-1}\!\Big(\dfrac{E_n \pi}{k_{b,n}^2}\Big)} < \sqrt{\dfrac{2 k_{b,n}^2}{\pi}\cdot\dfrac{\pi}{2}} = k_{b,n}. \quad (52)$

Therefore, the fictitious position tracking error $z_{1,n}$ always remains smaller than the bound $k_{b,n}$, so $X_{1,n}$ is bounded during every iteration over $[\breve{T}_1, \breve{T}]$. Moreover, from (5) and (10), during $[0, \breve{T}_1]$ one has $\|z_{1,n}\| = \|X_{1,n} - X_{1r,n} + \sum_{j=1}^{2}\omega_j\big(X_{jr,n}(0) - X_{j,n}(0)\big)\|$. Since $X_{1r,n}$, $\|X_{jr,n}(0) - X_{j,n}(0)\|$, and $\omega_j$, $j = 1, 2$, are bounded, $X_{1,n}$ is bounded during $t \in [0, \breve{T}]$ in each iteration.
Similarly, it can be concluded that $X_{2,n}$ is finite during $t \in [0, \breve{T}]$ in every iteration: since $\sigma_{1,n}$ is finite from (11), $z_{2,n}$ is finite from the boundedness of $E_n$, and $z_{2,n} = X_{2,n} - \sigma_{1,n}$, $X_{2,n}$ is also bounded.
Remark 8.
As $n \to \infty$, $z_{1,n}$ and $z_{2,n}$ converge to zero uniformly, and $e_{1,n}$ and $e_{2,n}$ tend to 0 as well. Hence, as the iterations progress, the system's state tracking error achieves uniform convergence, while the boundedness of both the system output and the states is ensured.
Remark 9.
Theoretically, T ˘ 1 can be chosen arbitrarily small to allow [ T ˘ 1 , T ˘ ] to approximate the entire iteration period [ 0 , T ˘ ] with high precision. However, since T ˘ 1 appears in the denominator of the desired trajectory correction function, making T ˘ 1 excessively small can result in excessively large control signals, which is detrimental. Therefore, the value of T ˘ 1 should be balanced according to the specific requirements of practical applications.
Remark 10.
The analysis in this section provides comprehensive evidence for the theoretical effectiveness of the control algorithm proposed in this paper. The approach combines iterative learning with backstepping cooperative design, achieving system stability without requiring prior knowledge of dynamic models or parameters. By using the barrier composite energy function, the controller effectively regulates system stability even when operating under state constraints. The instability issues caused by actuator failures are properly addressed, thus enhancing the robustness of the controller. The design of the trajectory correction function further extends its applicability.

6. Experimental Results and Discussion

In this section, experimental results are presented and discussed, demonstrating the efficacy and practicality of the proposed methodology.

6.1. Experimental Platform

Experiments have been conducted on the LMDGS (Akribis System, Shanghai, China) depicted in Figure 3. Both the X and Y axes are driven by flat linear motors. Given the extended span of the X-axis, identical motor drives are used on either side of the Y-axis. It is important to note that this article does not delve into the synchronization issue of the Y-axis. Hence, in the experiments, the Y-axis is considered as a single-drive axis. To provide accurate feedback for the entire system, optical encoders with 50 nm resolution are strategically positioned alongside the rail. The control algorithm runs on an industrial computer connected to an EtherCAT network, enabling closed-loop control of the entire experimental platform. The master controller operates with a control period of h = 1 ms. The remaining parameters of the experimental platform are detailed in Table 1.

6.2. Validation Experiments

The practical effectiveness and merits of the proposed controller are demonstrated in two experimental scenarios. A varying curvilinear ellipse where the X and Y frequencies are identical is used as the target trajectory to show the versatility of the approach. The first experiment validates the usefulness of the developed ILC scheme for tracking the elliptical target trajectory. By introducing variations in state and input constraints, the study proves the ability of the system to properly work in state- and input-constrained scenarios. The second experiment involves the application of three different methods for tracking the elliptical target trajectory, followed by a comparative analysis of the tracking and control performance among these approaches.
The target trajectory is an ellipse that varies with the iteration index n, described as follows (units are meters):

$x(t) = 0.05\Big(\sin\Big(\dfrac{\pi}{2} t\Big) + \sin(n)\Big), \quad y(t) = 0.02\Big(1 - \cos\Big(\dfrac{\pi}{2} t\Big) + \sin(n)\Big), \quad t \ge 0.$
For comparison purposes, two additional commonly used control methods, namely PID and adaptive robust control (ARC), are also implemented for tracking the same trajectory. Notably, the learning law of the proposed ILC is also based on the PD form. For a fair comparison, the same trajectory modifier function is used in all three methods, subsequently referred to as TMPID, TMARC, and TMILC. (“Position” is abbreviated as “Pos.” and “Velocity” as “Vel.”.)

6.2.1. TMILC Performance Experiment

To validate the effectiveness of the proposed TMILC in mitigating the impact of actuator faults, actuator power faults are introduced on both the X and Y axes in this experiment as $p_n = 0.8 + 0.2 e^{-5(t-(n-1)\breve{T})}$ and $\phi_n = 0.0005(n-1)$. The modifier function parameters are set as $\breve{T} = 4$ and $\breve{T}_1 = 1$. Detailed control parameters can be found in the next subsubsection.
Remark 11.
The form of actuator faults simulated in this experiment largely encompasses common scenarios in engineering applications [33]. The multiplicative component of the actuator fault represents the loss of actuator power, indicating a gradual decrease. The additive component accounts for changes in friction and other resistance. During the experiment, the fault becomes active from the second iteration ( n = 2 ) at t = 4 s, and it continues to change with each subsequent iteration.
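For reference, the injected fault signals can be generated as in the sketch below. The decaying exponent follows the reconstructed fault expression above (a power loss that relaxes within each iteration, starting from the second one); the sampling grid matches the stated $\breve{T} = 4$ s iteration length and 1 ms control period, and the implementation details are otherwise assumptions.

```python
import numpy as np

T_iter, dt = 4.0, 0.001                    # iteration length and control period
t_local = np.arange(0.0, T_iter, dt)       # time within one iteration

def fault_signals(n):
    """Multiplicative (p_n) and additive (phi_n) actuator faults for iteration n >= 1."""
    if n < 2:                               # the fault becomes active from n = 2
        return np.ones_like(t_local), np.zeros_like(t_local)
    p_n = 0.8 + 0.2 * np.exp(-5.0 * t_local)          # t_local = t - (n-1)*T_iter
    phi_n = 0.0005 * (n - 1) * np.ones_like(t_local)
    return p_n, phi_n
```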
Figure 4 depicts the changes in position and velocity states of the X-axis during the first and twentieth iterations, respectively. It is clear that the position and velocity states of the X-axis are closer to the corresponding target trajectory as iteration progresses, nearly overlapping it. Additionally, throughout all the iterations, the position state of the X-axis is consistently well-constrained within the predetermined limits. It is worth noting that these effects persist even during the trajectory-modified period (t = [0, 1]). Similar conclusions can be drawn from the state curve plots of the Y-axis in Figure 5. Therefore, under the predefined position states and actuator faults, the proposed TMILC method effectively tracks the non-repetitive target trajectory, validating the contributions and usefulness of this work.

6.2.2. Comparison of TMARC, TMPID, and TMILC

For the sake of fair comparison, in the experiments, the same reference trajectory is used in all methods, and the parameters of each method are carefully selected as follows to obtain the configuration with the best performance.
TMILC: Control parameters are selected as K 1 = [ 6 , 8 ] and K 2 = [ 2 , 3 ] . TMILC gains are chosen as K u ˘ = [ 0.2 , 0.1 ] , K v = [ 0.2 , 0.1 ] , K d = [ 0.2 , 0.1 ] and K θ = [ 0.2 , 0.1 ] . System constraints are k b , n = [ 0.01 , 0.01 ] .
TMARC: The Coulomb friction nonlinear term is S f ( x 2 ) = arctan(800 x 2 ), where arctan(800 x 2 ) is a continuous smooth function and x 2 is the system velocity. V s = K s p K e e K a e 2 p . Gains are set to K s = diag[9.3, 3.2], K e = diag[320, 310], and K a = diag[500, 500]. The initial parameter vector is chosen as θ = [0.05, 0.1, 0.3, 0.08, 0.03, 0.02, 0, 0]. The range of θ is selected as θ min = [0.03, 0.7, 0.2, 0.06, 0.01, 0.01, −0.02, −0.04] and θ max = [0.07, 0.13, 0.4, 0.1, 0.05, 0.03, 0.02, 0.04].
TMPID: Gains are P = 960, I = 75, and D = 9850.
After carrying out numerous experiments, $t \in [80, 90]$ s turned out to be the time interval in which the results for all three methods stabilized with minimal errors. From Figure 6a, it is evident that the X-axis tracking error of TMILC is smaller than that of TMARC and TMPID. Moreover, the X-axis tracking error curve of TMILC exhibits better smoothness with minimal fluctuations, whereas noticeable peaks appear for TMARC and TMPID. At $t = 4n$, when the correction function is activated, TMILC exhibits smaller and faster-decaying peaks. Figure 6b shows the same behavior for the Y-axis tracking error. Figure 7 depicts the output curves for the X and Y axes of all three methods during the interval $t \in [80, 90]$ s. It is clear that TMILC's output curves have smaller amplitudes, smoother trajectories, and lower fluctuations, resulting in a lower power demand on the actuators.
To further demonstrate the superiority of the proposed method, a quantitative analysis of the position and velocity tracking errors has been performed. The results are listed in Table 2 and Table 3, where the root-mean-square error over the evaluated samples is

$\mathrm{RMS\ Error} = \Big(\dfrac{1}{n}\sum_{j=1}^{n} e^2(j)\Big)^{\frac{1}{2}},$

with n denoting here the number of samples in the evaluation interval.
From Figure 8 and the data in Table 2, it is evident that TMILC exhibits superior position tracking performance on both the X and Y axes. The absolute maximum of its X-axis position tracking error is roughly half that of TMARC and about one-seventh that of TMPID, while its RMS error is roughly two-thirds that of TMARC and about one-third that of TMPID. Similar conclusions apply to the Y-axis position tracking error, as both the absolute maximum and the RMS values of TMILC are smaller than those of TMARC and TMPID. From Table 3 and Figure 8, it is also clear that TMILC outperforms TMARC and TMPID in terms of velocity tracking.

7. Conclusions

This article introduces a novel adaptive iterative learning fault-tolerant controller to address state constraints and actuator failures in LMDGSs without requiring a priori knowledge of the system's precise model. With this adaptive iterative learning fault-tolerant controller, the LMDGS states comply with predefined constraint conditions. Furthermore, the controller effectively handles both multiplicative and additive actuator faults in LMDGSs. These conclusions also apply to scenarios involving non-repetitive target trajectories, thanks to the introduction of a trajectory modifier function. The advantages of the proposed method outlined above have been validated in practical experiments on an LMDGS, where it exhibited excellent performance, surpassing that of the comparative methods, even in scenarios involving actuator faults and system constraints. Future research will focus on developing a chattering-free adaptive control algorithm for LMDGSs to achieve perfect trajectory tracking performance.

Funding

This work has been supported in part by the National Natural Science Foundation of China under Grant U20A20188, in part by the Major Scientific and Technological Research Project of Ningbo under Grant 2021Z040, and in part by the Fundamental Research Funds for the Central Universities.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The author declares no conflicts of interest.

Nomenclature

The following abbreviations are used in this manuscript:
n: Number of iterations
j: Order index, j = 1, 2
$X_{j,n}$: System states at the n-th iteration
$X_{jr,n}$: Expected (reference) trajectory at the n-th iteration
$X_{jd,n}$: Modified expected trajectory at the n-th iteration
$\omega_j$: Trajectory modifier functions
$e_{j,n}$: System state tracking errors at the n-th iteration
$z_{j,n}$: System fictitious state tracking errors at the n-th iteration
$\Phi^T F_n$: Parametric uncertainty at the n-th iteration
$d_n$: External disturbances and unknown system dynamics at the n-th iteration
$G_n$: Unknown control input gain function
$u_n$: Control input at the n-th iteration
$u_n^F$: Control input subject to actuator faults at the n-th iteration
y: System output, $y = \{X_{1,n}, X_{2,n}\}$
$E_n$: Barrier composite energy function at the n-th iteration
$\dot{\bullet}$: First time derivative of $\bullet$
$\breve{\bullet}$: Estimate of $\bullet$
$\tilde{\bullet}$: Estimation error between $\breve{\bullet}$ and $\bullet$, i.e., $\tilde{\bullet} = \breve{\bullet} - \bullet$
$k_{b,n}$: Constraint function of the system position state tracking error at the n-th iteration

References

  1. Zhou, S.; Shi, Y.; Wang, D.; Xu, X.; Xu, M.; Deng, Y. Election Optimizer Algorithm: A New Meta-Heuristic Optimization Algorithm for Solving Industrial Engineering Design Problems. Mathematics 2024, 12, 1513. [Google Scholar] [CrossRef]
  2. Pan, H.; Chang, X.; Sun, W. Multitask Knowledge Distillation Guides End-to-End Lane Detection. IEEE Trans. Ind. Inform. 2023, 19, 9703–9712. [Google Scholar] [CrossRef]
  3. Pan, H.; Hong, Y.; Sun, W.; Jia, Y. Deep Dual-Resolution Networks for Real-Time and Accurate Semantic Segmentation of Traffic Scenes. IEEE Trans. Intell. Transp. Syst. 2023, 24, 3448–3460. [Google Scholar] [CrossRef]
  4. Kang, Z.; Lin, W.; Liu, Z.; Xu, R. Accurate Contour Error Estimation-based robust contour control for dual-linear-motor-driven gantry stages. Mechatronics 2024, 100, 103174. [Google Scholar] [CrossRef]
  5. Shi, P.; Yu, X.; Yang, X.; Rodríguez-Andina, J.J.; Sun, W.; Gao, H. Composite Adaptive Synchronous Control of Dual-Drive Gantry Stage with Load Movement. IEEE Open J. Ind. Electron. Soc. 2023, 4, 63–74. [Google Scholar] [CrossRef]
  6. Li, T.; Tao, L.; Xu, B. Linear Parameter Varying Observer-Based Adaptive Dynamic Surface Sliding Mode Control for PMSM. Mathematics 2024, 12, 1219. [Google Scholar] [CrossRef]
  7. Gao, H.; Liu, Y.; Sun, W.; Yu, X. Adaptive Wavelet Tracking Control of Dual-Linear-Motor-Driven Gantry Stage with Suppression of Crossbeam Rotation. IEEE/ASME Trans. Mechatron. 2023, 29, 97–105. [Google Scholar] [CrossRef]
  8. Sun, W.; Liu, J.; Gao, H. A Composite High-Speed and High-Precision Positioning Approach for Dual-Drive Gantry Stage. IEEE Trans. Autom. Sci. Eng. 2024, 1–12. [Google Scholar] [CrossRef]
  9. Liu, Y.; Sun, W. High-Performance Position Control for Repetitive Tasks of Motor-Driven Servo Systems Based on Periodic Disturbance Observer. IEEE/ASME Trans. Mechatron. 2023, 28, 2461–2470. [Google Scholar] [CrossRef]
  10. Yuan, Y.; Sun, W. An Integrated Kinematic Calibration and Dynamic Identification Method with Only Static Measurements for Serial Robot. IEEE/ASME Trans. Mechatron. 2023, 28, 2762–2773. [Google Scholar] [CrossRef]
  11. Hou, S.; Qiu, Z.; Chu, Y.; Gao, J.; Fei, J. Emotional Intelligent Finite-Time Tracking Control for a Class of Nonlinear Systems. IEEE Internet Things J. 2024, 11, 20421–20432. [Google Scholar] [CrossRef]
  12. Liu, Y.; Wang, J.; Gomes, L.; Sun, W. Adaptive Robust Control for Networked Strict-Feedback Nonlinear Systems with State and Input Quantization. Electronics 2021, 10, 2783. [Google Scholar] [CrossRef]
  13. Pan, H.; Luo, M.; Wang, J.; Huang, T.; Sun, W. A Safe Motion Planning and Reliable Control Framework for Autonomous Vehicles. IEEE Trans. Intell. Veh. 2024, 1–14. [Google Scholar] [CrossRef]
  14. Guo, Z.; Zhen, S.; Liu, X.; Zhong, H.; Yin, J.; Chen, Y.H. Design and application of a novel approximate constraint tracking robust control for permanent magnet synchronous motor. Comput. Chem. Eng. 2023, 173, 108206. [Google Scholar] [CrossRef]
  15. Xu, Q.Y.; He, W.Y.; Zheng, C.T.; Xu, P.; Wei, Y.S.; Wan, K. Adaptive Fuzzy Iterative Learning Control for Systems with Saturated Inputs and Unknown Control Directions. Mathematics 2022, 10, 3462. [Google Scholar] [CrossRef]
  16. Li, C.; Yao, B.; Wang, Q. Modeling and Synchronization Control of a Dual Drive Industrial Gantry Stage. IEEE/ASME Trans. Mechatron. 2018, 23, 2940–2951. [Google Scholar] [CrossRef]
  17. Zhou, L.; Trumper, D.L. Magnetically Levitated Linear Stage with Linear Bearingless Slice Hysteresis Motors. IEEE/ASME Trans. Mechatron. 2021, 26, 1084–1094. [Google Scholar] [CrossRef]
  18. Li, C.; Sun, Y.; Pu, S. Accurate physical modeling and synchronization control of dual-linear-motor-driven gantry with dynamic load. AIP Adv. 2021, 11, 025133. [Google Scholar] [CrossRef]
  19. Yu, C.; Ma, J.; Pan, H.; Basin, M.V. Adaptive Iterative Learning Constrained Control for Linear-Motor-Driven Gantry Stage. IEEE/ASME Trans. Mechatron. 2023, 1–12. [Google Scholar] [CrossRef]
  20. Wang, Y.; Wang, Z. Distributed model free adaptive fault-tolerant consensus tracking control for multiagent systems with actuator faults. Inf. Sci. 2024, 664, 120313. [Google Scholar] [CrossRef]
  21. Pan, H.; Zhang, C.; Sun, W. Fault-Tolerant Multiplayer Tracking Control for Autonomous Vehicle via Model-Free Adaptive Dynamic Programming. IEEE Trans. Reliab. 2023, 72, 1395–1406. [Google Scholar] [CrossRef]
  22. Wang, C.; Zhou, Z.; Wang, J. Distributed Fault Diagnosis via Iterative Learning for Partial Differential Multi-Agent Systems with Actuators. Mathematics 2024, 12, 955. [Google Scholar] [CrossRef]
  23. Zhu, Z.; Zhao, H.; Sun, H. Stackelberg-Theoretic Optimal Robust Control for Constrained Permanent Magnet Linear Motor with Inequality Constraints. IEEE/ASME Trans. Mechatron. 2022, 27, 5439–5450. [Google Scholar] [CrossRef]
  24. Gong, J.; Yu, X.; Pan, H.; Rodríguez-Andina, J.J. Adaptive Fault Tolerant Control of Linear Motors under Sensor and Actuator Faults. IEEE Trans. Transp. Electrif. 2024, 1. [Google Scholar] [CrossRef]
  25. Pan, H.; Zhang, D.; Sun, W.; Yu, X. Event-Triggered Adaptive Asymptotic Tracking Control of Uncertain MIMO Nonlinear Systems with Actuator Faults. IEEE Trans. Cybern. 2022, 52, 8655–8667. [Google Scholar] [CrossRef] [PubMed]
  26. Wang, J.; Pan, H.; Zhang, D. Event-Triggered Adaptive Finite-Time Control for MIMO Nonlinear Systems with Actuator Faults. IEEE Trans. Ind. Electron. 2023, 70, 7343–7352. [Google Scholar] [CrossRef]
  27. Chen, Y.; Jiang, W.; Charalambous, T. Machine learning based iterative learning control for non-repetitive time-varying systems. Int. J. Robust Nonlinear Control 2023, 33, 4098–4116. [Google Scholar] [CrossRef]
  28. Meng, D.; Zhang, J. Robust Tracking of Nonrepetitive Learning Control Systems with Iteration-Dependent References. IEEE Trans. Syst. Man Cybern. Syst. 2021, 51, 842–852. [Google Scholar] [CrossRef]
  29. Wang, L.; Dong, L.; Yang, R.; Chen, Y. Dynamic ILC for Linear Repetitive Processes Based on Different Relative Degrees. Mathematics 2022, 10, 4824. [Google Scholar] [CrossRef]
  30. Pan, H.; Chang, X.; Zhang, D. Event-Triggered Adaptive Control for Uncertain Constrained Nonlinear Systems with Its Application. IEEE Trans. Ind. Inform. 2020, 16, 3818–3827. [Google Scholar] [CrossRef]
  31. Li, J.; Li, J. Adaptive iterative learning control for coordination of second-order multi-agent systems. Int. J. Robust Nonlinear Control 2014, 24, 3282–3299. [Google Scholar] [CrossRef]
  32. Gao, H.; Li, Z.; Yu, X.; Qiu, J. Hierarchical Multiobjective Heuristic for PCB Assembly Optimization in a Beam-Head Surface Mounter. IEEE Trans. Cybern. 2022, 52, 6911–6924. [Google Scholar] [CrossRef] [PubMed]
  33. Jin, X. Fault tolerant finite-time leader–follower formation control for autonomous surface vessels with LOS range and angle constraints. Automatica 2016, 68, 228–236. [Google Scholar] [CrossRef]
Figure 1. LMDGS structure.
Figure 2. The curves of trajectory modifier functions ω 1 and ω 2 .
Figure 3. Experimental platform.
Figure 4. Evolution of X-axis states $x_{1,n}$, $x_{1r,n}$, $x_{2,n}$, and $x_{2r,n}$ under constraints $W_{xH,n}$ and $W_{xL,n}$. (a) During the first iteration, n = 1. (b) During the twentieth iteration, n = 20.
Figure 5. Evolution of Y-axis states $y_{1,n}$, $y_{1r,n}$, $y_{2,n}$, and $y_{2r,n}$ under constraints $W_{yH,n}$ and $W_{yL,n}$. (a) During the first iteration, n = 1. (b) During the twentieth iteration, n = 20.
Figure 6. The position tracking error for TMARC, TMPID, and TMILC. (a) X-axis. (b) Y-axis.
Figure 7. Control inputs of X and Y axes for TMARC, TMPID, and TMILC.
Figure 8. Max and RMS of the state tracking errors $e_1$ and $e_2$ for TMARC, TMPID, and TMILC.
Table 1. Experimental platform parameters.
Indexm k v k c k m k f f
UnitkgN·s/mmN1N/V1
X150.130.03109.738
Y250.130.038030.531
Table 2. Position tracking error $e_1$ for TMILC, TMARC, and TMPID.

Index | TMILC | TMARC | TMPID
Max $|e_1|$ X | 1.0315 × 10^−5 | 1.9304 × 10^−5 | 7.3413 × 10^−5
Max $|e_1|$ Y | 1.1047 × 10^−5 | 1.2986 × 10^−5 | 3.9029 × 10^−5
RMS $e_1$ X | 3.3453 × 10^−6 | 5.3372 × 10^−6 | 9.7100 × 10^−6
RMS $e_1$ Y | 5.3772 × 10^−6 | 7.6146 × 10^−6 | 6.6105 × 10^−6
Table 3. Velocity tracking error $e_2$ for TMILC, TMARC, and TMPID.

Index | TMILC | TMARC | TMPID
Max $|e_2|$ X | 6.9881 × 10^−4 | 2.6380 × 10^−3 | 4.3013 × 10^−3
Max $|e_2|$ Y | 7.2846 × 10^−4 | 1.7175 × 10^−3 | 1.4549 × 10^−3
RMS $e_2$ X | 1.4865 × 10^−4 | 4.0457 × 10^−4 | 6.8210 × 10^−4
RMS $e_2$ Y | 1.1971 × 10^−4 | 7.3033 × 10^−4 | 4.3452 × 10^−4