FINITE TIME STABILIZATION OF NON-AUTONOMOUS, NONLINEAR SECOND-ORDER SYSTEMS BASED ON MINIMUM TIME PRINCIPLE

This paper proposes a controller design method to stabilize a class of nonlinear, non-autonomous second-order systems in finite time. The method is developed based on exact linearization and Pontryagin's minimum-time principle. It is shown that the system can be stabilized within a finite time whose upper bound can be adjusted according to the initial states of the system. Simulation results are given to validate the theoretical analysis.


INTRODUCTION
In recent years, there has been increasing research interest in designing finite/fixed-time stabilization (FTS) laws for second-order control-affine systems with a single control input, i.e., the autonomous system modeled by

ẋ = f(x) + g(x)·u,   (1)

where x = (x1, x2)^T is the state vector, f(0) = 0, and 0 is the origin. Several notable works along this avenue include [1-6].
The problem of designing FTS laws is of theoretical significance for two reasons. First, an FTS control law can drive the states to the origin in finite time. Second, FTS is crucial for designing sliding mode controllers, since the states of the system must be driven to the sliding surface in finite time [7-9].

In the literature, almost all existing solutions to the FTS problem hinge on Lyapunov stability theory [6]. The common approach is to find a continuously differentiable, positive definite function V(x) such that there is at least one state-feedback controller u(x) making V(x(t)) = 0 for all t ≥ T, where T is a finite, strictly positive number. The value T is also referred to as the stabilizing time of system (1) under the FTS controller u(x): by the Krasovskii-LaSalle invariance principle, it must hold that x(t) = 0 for t ≥ T. These approaches, however, share the same difficulty of finding a suitable Lyapunov candidate function, and until now there is still no complete solution for constructing such a function.

In this paper, we propose a novel approach to the FTS problem that does not invoke Lyapunov stability theory. The theoretical foundation of the proposed approach is the minimum-time optimal control theory built on Pontryagin's maximum principle [10] and a suitable exact-linearization technique from differential geometry [11-13]. The proposed approach gives a simple way to stabilize a class of non-autonomous, nonlinear second-order systems in a finite time T. An explicit formula for the switching time and the stabilizing time T, which can be used as a guideline for performance design, is also given. It is noted that minimum-time control has been studied for second-order linear systems in [10,14,15]. The authors of [16] studied minimum-time control of second-order systems with partly unknown nonlinear dynamics.
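As a minimal numerical illustration of the finite-time property (this example is not from the paper), consider the scalar system ẋ = −√|x|·sgn(x). With V(x) = x² one gets V̇ = −2|x|^(3/2) = −2·V^(3/4), so V, and hence x, reaches zero exactly at T = 2·√|x(0)| rather than only asymptotically:

```python
import numpy as np

# Finite-time stabilization of the scalar system x' = -sqrt(|x|)*sgn(x).
# With V(x) = x^2 one gets V' = -2*V^(3/4), which forces V (and hence x)
# to reach zero at T = 2*sqrt(|x0|), not just asymptotically.
dt, x = 1e-4, 1.0
for i in range(int(2.5 / dt)):
    x -= dt * np.sqrt(abs(x)) * np.sign(x)   # explicit Euler step
# predicted settling time: T = 2*sqrt(1.0) = 2.0 s < 2.5 s, so x is now ~0
```

An exponentially stable law such as ẋ = −x would, in contrast, never reach the origin exactly.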
In the preliminary work [17] of this paper, a minimum-time controller was designed for autonomous, nonlinear second-order systems in strict feedback form, which is structurally simpler than the terminal sliding mode controller presented in [18], also for second-order autonomous systems in feedback form. In this paper, the results of [17] are extended to non-autonomous systems, and a formula to determine the precise switching and stabilizing times is additionally given. In fact, we are not aware of any other work that uses the minimum-time principle for stabilizing non-autonomous, nonlinear systems in finite time.

MAIN RESULTS
In this paper, we consider a non-autonomous, nonlinear second-order system with the following feedback structure:

ẋ1 = f1(x1, t) + x2
ẋ2 = u   (5)

where x = (x1, x2)^T is the state vector, u is the control input, and f1(x1, t) is smooth with f1(0, t) = 0 for all t. Introducing the new coordinates

z1 = x1,  z2 = f1(x1, t) + x2,

the system (5) can be rewritten as

ż1 = z2,  ż2 = (∂f1/∂x1)·(f1(x1, t) + x2) + ∂f1/∂t + u.

Hence, by using the time-varying state-feedback controller

u = v − (∂f1/∂x1)·(f1(x1, t) + x2) − ∂f1/∂t,

we obtain the double integrator

ż1 = z2,  ż2 = v,   (11)

which is linear and time-invariant in the whole state space z.
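The exact-linearization step can be verified symbolically. The sketch below assumes the strict-feedback form ẋ1 = f1(x1, t) + x2, ẋ2 = u with a generic smooth drift f1, and checks that the time-varying feedback indeed yields ż2 = v:

```python
import sympy as sp

t, x1, x2, u, v = sp.symbols('t x1 x2 u v')
f1 = sp.Function('f1')(x1, t)            # generic smooth drift f1(x1, t)

# assumed form (5):  x1' = f1 + x2,  x2' = u
x1dot, x2dot = f1 + x2, u

# new coordinates: z1 = x1, z2 = f1 + x2, hence by the chain rule
z2 = f1 + x2
z2dot = sp.diff(z2, x1)*x1dot + sp.diff(z2, x2)*x2dot + sp.diff(z2, t)

# linearizing feedback: u = v - (df1/dx1)*(f1 + x2) - df1/dt
u_lin = v - sp.diff(f1, x1)*(f1 + x2) - sp.diff(f1, t)

residual = sp.simplify(z2dot.subs(u, u_lin) - v)   # should be 0, i.e. z2' = v
```

A vanishing residual confirms that the transformed dynamics are exactly the double integrator, independently of the particular f1.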

Time optimal control
We now come to the next task, i.e., the finite-time stabilization of the LTI system (11). In the literature, there are many methods available for solving this task based on Lyapunov theory [1-7]. However, since the stabilizing time T should be flexibly adjustable, the use of the minimum-time optimal control principle appears preferable [10,14,15,17]. Therefore, we will use this principle to carry out the second task.
Based on the time-optimal control principle, for any starting point z0 driven to the fixed endpoint z(T) = 0 we obtain the state-feedback time-optimal controller for the LTI system (11) as follows [17]:

v = −k·sgn(z1 + z2·|z2|/(2k)) if z1 + z2·|z2|/(2k) ≠ 0, and v = −k·sgn(z2) otherwise,   (12)

where the curve

z1 = −z2·|z2|/(2k)   (13)

consists of all trajectories of (11) under v = ±k that end at the origin, and k > 0 is arbitrarily chosen. The time-optimal controller (12) is essentially the same state-feedback controller given previously in [17] for autonomous systems. It is furthermore shown in [10,17] that the bigger k is chosen, the faster the stabilization. Moreover, from (12) it can be seen that each optimal trajectory ends at the origin and has at most one switching point on the curve (13). This means that this curve contains all switching points of the control input v as well as the end part of all optimal trajectories starting outside it. Therefore, from now on, we refer to the curve (13) as the "switching curve".
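A minimal simulation sketch of controller (12) on the double integrator (11); the gain k = 3.5 and initial state (3, −5)^T are taken from Simulation 1 below, and a forward-Euler loop stands in for the continuous dynamics:

```python
import numpy as np

def bang_bang(z1, z2, k):
    """Time-optimal state feedback (12) for the double integrator (11)."""
    s = z1 + z2 * abs(z2) / (2 * k)      # switching function; (13) is s = 0
    return -k * np.sign(s) if s != 0 else -k * np.sign(z2)

k, dt = 3.5, 1e-4
z = np.array([3.0, -5.0])                # initial state used in Simulation 1
norms = []
for i in range(int(3.0 / dt)):
    v = bang_bang(z[0], z[1], k)
    z = z + dt * np.array([z[1], v])     # explicit Euler step
    norms.append((i * dt, float(np.hypot(z[0], z[1]))))

# first time the state enters a small ball around the origin
t_hit = next(t for t, r in norms if r < 0.05)
```

The state first enters the 0.05-ball around the origin shortly before T ≈ 2.24 s and then stays there (the discretized bang-bang input chatters in a tiny neighborhood of 0).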
Coming back to the original state space x, we obtain the finite-time optimal controller

u(x, t) = v − (∂f1/∂x1)·(f1(x1, t) + x2) − ∂f1/∂t,  with v from (12) evaluated at z1 = x1, z2 = f1(x1, t) + x2,   (14)

and the switching curve (13), expressed in the x coordinates, as

x1 = −(f1(x1, t) + x2)·|f1(x1, t) + x2|/(2k).   (15)

Determination of stabilizing time
In this subsection, we give an explicit formula for determining the stabilizing time.

Lemma 1: Let z0 = (z10, z20)^T be the initial state of the LTI system (11). The state-feedback controller (12) drives the state of (11) to the origin in the finite time

T = |z20|/k, if z0 lies on the switching curve (13),
T = (σ·z20 + 2·√(z20²/2 + σ·k·z10))/k with σ = sgn(z10 + z20·|z20|/(2k)), otherwise,   (16)

and in the second case the control input changes its sign exactly once, at the switching time t1 = (σ·z20 + √(z20²/2 + σ·k·z10))/k.

Proof: Consider the following three cases, illustrated in Fig. 2.

Case 1: z0 is on the switching curve (13), i.e., z10 = −z20·|z20|/(2k). In this case the optimal control signal v = −k·sgn(z20) does not change its sign, and the optimal trajectory satisfies z2(t) = z20 − k·sgn(z20)·t while staying on the curve. It reaches the origin at T = |z20|/k, which is consistent with (16).

Case 2: z0 lies above the switching curve, i.e., σ = 1. The optimal input

v(t) = −k for 0 ≤ t < t1,  v(t) = k for t1 ≤ t ≤ T,

changes its sign one time, as illustrated in Fig. 2. Hence the first part of the optimal trajectory is given by z2(t) = z20 − k·t, z1(t) = z10 + z20·t − k·t²/2. It meets the branch z1 = z2²/(2k), z2 ≤ 0 of the switching curve AOB (Fig. 2) when k·t² − 2·z20·t + z20²/(2k) − z10 = 0, which gives the switching time t1 = (z20 + √(z20²/2 + k·z10))/k. From the switching point, Case 1 applies for t1 ≤ t ≤ T and takes the remaining time |z2(t1)|/k = t1 − z20/k, so the total stabilizing time is T = 2·t1 − z20/k = (z20 + 2·√(z20²/2 + k·z10))/k.

Case 3: z0 lies below the switching curve, i.e., σ = −1. Here the optimal input is v(t) = k for 0 ≤ t < t1 and v(t) = −k for t1 ≤ t ≤ T; by the symmetry z → −z of system (11), the same computation yields t1 = (−z20 + √(z20²/2 − k·z10))/k and T = 2·t1 + z20/k, which is again consistent with (16). ■

Based on Lemma 1, we can prove the main result of this paper.
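The switching and stabilizing times of Lemma 1 can be evaluated in a few lines. The function below is a reconstruction of formula (16) from the standard minimum-time solution of the double integrator, checked against the values t1 = 1.8326 s and T = 2.2367 s reported in Simulation 1:

```python
import math

def min_time(z10, z20, k):
    """Switching time t1 and stabilizing time T from (16) for system (11)."""
    sigma = z10 + z20 * abs(z20) / (2 * k)   # switching function at z0
    if sigma == 0:                           # Case 1: start on the curve (13)
        return None, abs(z20) / k
    s = math.copysign(1.0, sigma)            # the sign sigma in the lemma
    root = math.sqrt(z20**2 / 2 + s * k * z10)
    return (s * z20 + root) / k, (s * z20 + 2 * root) / k

t1, T = min_time(3.0, -5.0, 3.5)             # data of Simulation 1
```

For z0 on the switching curve the function returns no switching time (Case 1, the input never switches).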

Theorem:
The state-feedback controller (14) stabilizes the nonlinear, non-autonomous system (5) from any initial state x0, driving it to the origin in the finite time T determined in (16).

Proof: Since the state-feedback controller (12) stabilizes the LTI system (11) in the finite time T given by Lemma 1, we have z(T) = 0, i.e., z1(T) = x1(T) = 0 and z2(T) = f1(x1(T), T) + x2(T) = 0. Hence, with f1(0, t) = 0 for all t, we finally obtain x(T) = 0. ■

Remark 1:
The formula (16) shows that the stabilizing time T for non-autonomous nonlinear systems can be made smaller by increasing k. ■

Remark 2:
Consider a more general class of non-autonomous, nonlinear systems given as

ẋ1 = f1(x1, t) + x2
ẋ2 = f2(x, t) + g2(x, t)·u

where f1(x1, t) is smooth with f1(0, t) = 0 for all t, and g2(x, t) is smooth and invertible in an open region containing the origin. Applying the preliminary feedback u = g2(x, t)^(−1)·(w − f2(x, t)) reduces this class to the form (5) with the new input w, so the proposed design applies here as well. ■

Simulation 1
To illustrate the proposed FTS controller design method, consider the following system (20). Comparing (20) with system (5) identifies the corresponding drift term f1(x1, t). It is clear that this non-autonomous nonlinear system cannot be stabilized by conventional methods related to Lyapunov theory, because f1(x1, t) is not bounded in t. The corresponding FTS controller for system (20) can be written as in (14) and (15). Figure 3 depicts the simulated states of the system with k = 3.5 and x0 = (3, −5)^T, obtained with the simulation program FTS1.m (see Appendix). By using the formula (16), we obtain the switching time and the stabilizing time of the closed-loop system as t1 = 1.8326 s and T = 2.2367 s. The simulation result exhibited in Fig. 3 is consistent with this analysis.

Fig. 3: x1 (solid) and x2 (dashed) vs. time.
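The closed loop can be reproduced numerically. Since the concrete right-hand side of system (20) is not restated here, the sketch below assumes the illustrative drift f1(x1, t) = t·x1, which is smooth, vanishes at x1 = 0, and is unbounded in t, as required in the text; with x0 = (3, −5)^T and k = 3.5 the state should reach the origin near T ≈ 2.2367 s:

```python
import numpy as np

# Closed-loop simulation of the FTS controller (14) in the original x space.
# NOTE: the drift f1(x1, t) = t*x1 is an illustrative ASSUMPTION standing in
# for the unspecified right-hand side of (20).
k, dt = 3.5, 1e-4
x = np.array([3.0, -5.0])                    # x0 of Simulation 1
hist = []
for i in range(int(2.5 / dt)):
    t = i * dt
    z1, z2 = x[0], t * x[0] + x[1]           # exact-linearizing coordinates
    s = z1 + z2 * abs(z2) / (2 * k)          # switching function (13)
    v = -k * np.sign(s) if s != 0 else -k * np.sign(z2)
    # controller (14): u = v - (df1/dx1)*(f1 + x2) - df1/dt, with f1 = t*x1
    u = v - t * (t * x[0] + x[1]) - x[0]
    x = x + dt * np.array([t * x[0] + x[1], u])
    hist.append((t, float(np.hypot(x[0], x[1]))))

t_hit = next(t for t, r in hist if r < 0.05)   # close to T = 2.2367 s
```

With this choice the linearizing coordinates satisfy z(0) = x0, so the times t1 and T computed from (16) apply unchanged; the simulated arrival time agrees with T up to the discretization error.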