<E-book>
Stabilization of Linear Systems

Statement of responsibility
Author(s)
Language
Publisher
Year of publication
Place of publication
Related information
Abstract One of the main problems in control theory is the stabilization problem, which consists of finding a feedback control law ensuring stability; when the linear approximation is considered, the natural problem is stabilization of a linear system by linear state feedback or by using a linear dynamic controller. This problem was intensively studied during the last decades and many important results have been obtained. The present monograph is based mainly on results obtained by the authors. It focuses on stabilization of systems with slow and fast motions, on stabilization procedures that use only poor information about the system (high-gain stabilization and adaptive stabilization), and also on discrete time implementation of the stabilizing procedures. These topics are important in many applications of stabilization theory. We hope that this monograph may illustrate the way in which mathematical theories do influence advanced technology. This book is not intended to be a textbook nor a guide for control designers. In engineering practice, control design is a very complex task in which stability is only one of the requirements and many aspects and facets of the problem have to be taken into consideration. Even if we restrict ourselves to stabilization, the book does not provide just recipes, but it focuses more on the ideas lying behind the recipes. In short, this is not a book on control, but on some mathematics of control.
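The central notion in the abstract, stabilization of a linear system by linear state feedback, can be sketched in a few lines. The following is a minimal illustration, not taken from the book: the system matrices and the gain are invented for the example. For x' = Ax + Bu with feedback u = -Kx, the closed loop is x' = (A - BK)x, and stability means all eigenvalues of A - BK have negative real parts.

```python
import numpy as np

# Hypothetical example: a double integrator x' = A x + B u,
# which is not asymptotically stable in open loop (both eigenvalues at 0).
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])

# Hand-picked gain K = [2, 3]: the closed-loop characteristic polynomial is
# det(sI - (A - BK)) = s^2 + 3s + 2 = (s + 1)(s + 2),
# so the closed-loop poles are placed at -1 and -2.
K = np.array([[2.0, 3.0]])

closed_loop = A - B @ K
eigs = np.linalg.eigvals(closed_loop)
print(sorted(e.real for e in eigs))  # → [-2.0, -1.0], all negative: stable
```

Checking that every eigenvalue of A - BK lies in the open left half-plane is exactly the stability criterion recalled in Chapter 1 of the table of contents (stability by the linear approximation).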
Contents
1. Introduction
1.1 Stability Concepts: The Problem of Stabilization
1.2 Linear Systems with Constant Coefficients: The Theorem on Stability by the Linear Approximation
1.3 An Overview of Some Stabilization Problems
Notes and References
2. Stabilization of Linear Systems
2.1 Controllability
2.2 Stabilizability: Stabilization Algorithms
2.3 Observability and Detectability: State Estimators: A Parametrization of Stabilizing Controllers
2.4 Liapunov Equations
2.5 Optimal Stabilization of Linear Systems: The Kalman-Lurie-Yakubovich-Popov Equations
2.6 Estimate of the Cost Associated with a Stabilizing Feedback Control: Loss in Performance Due to the Use of a Dynamic Controller
2.7 Stabilization with Disturbance Attenuation
Notes and References
3. Stabilization of Linear Systems with Two Time Scales
3.1 Separation of Time Scales
3.2 Controllability and Stabilizability
3.3 State Estimators
3.4 Optimal Stabilization for Systems with Two Time Scales
Notes and References
4. High-Gain Feedback Stabilization of Linear Systems
4.1 An Example
4.2 Square Systems with Minimum Phase
4.3 Invariant Zeros of a Linear System
4.4 Systems with Stable Invariant Zeros and with rank CB = rank C = p
4.5 High-Gain Feedback Stabilization of Linear Systems with Higher Relative Degree
4.6 The Special Popov Form of Linear Systems
4.7 High-Gain Stabilization of Linear Systems: The General Case
Notes and References
5. Adaptive Stabilization and Identification
5.1 Adaptive Stabilization in the Fundamental Case
5.2 Adaptive Stabilization in the Case of Unmodeled Fast Dynamics
5.3 Asymptotic Structure of the Invariant Zeros of a System with Two Time Scales and Adaptive Stabilization
5.4 Adaptive Stabilization of Some Linear Systems of Relative Degree Two
5.5 An Algorithm of Adaptive Identification
Notes and References
6. Discrete Implementation of Stabilization Procedures
6.1 Discrete Time Implementation of a State Feedback Control
6.2 Discrete-Time Implementation of a Stabilizing Dynamic Controller
6.3 Performance Estimates
6.4 Discrete Implementation of a Linear Feedback Control for Systems with Two Time Scales
6.5 Discrete Implementation of a High-Gain Feedback Control
6.6 Discrete Implementation of the Adaptive Stabilization Algorithm
Notes and References
View full text Full text available from SpringerLink ebooks - Mathematics and Statistics (Archive)

Details

Record ID
Subject
SSID
eISBN
Date registered 2020.06.27
Date updated 2020.06.28