State-Space Control
The basic idea behind state-space control is to use the state of the system in order to set the input. Namely, if we are given

$$\dot{x} = Ax + Bu, \qquad y = Cx,$$

then we can let $u = Kx$. If we do this, then the equivalent state-evolution equation becomes

$$\dot{x} = (A + BK)x.$$
Notice that if our system is in phase variable form, then the controlled state-evolution equation is

$$\dot{x} = \begin{bmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 1 \\ -a_0 + k_1 & -a_1 + k_2 & -a_2 + k_3 & \cdots & -a_{n-1} + k_n \end{bmatrix} x,$$

where the $a_i$ are the coefficients of the open-loop characteristic polynomial and $K = \begin{bmatrix} k_1 & k_2 & \cdots & k_n \end{bmatrix}$.
This makes it very convenient to place our poles where we want, since the entries of the last row are (up to sign) the coefficients of the closed-loop characteristic polynomial.
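For a concrete, purely illustrative second-order case: suppose the open-loop characteristic polynomial is $\lambda^2 + a_1\lambda + a_0$ and we want the closed-loop polynomial to be $\lambda^2 + c_1\lambda + c_0$. In phase variable form the last row of $A + BK$ is

$$\begin{bmatrix} -a_0 + k_1 & -a_1 + k_2 \end{bmatrix},$$

so choosing $k_1 = a_0 - c_0$ and $k_2 = a_1 - c_1$ gives exactly the desired polynomial.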
Suppose we have a system

$$\dot{x} = Ax + Bu$$

which is not in phase variable form. To place it into phase variable form, first assume that $x = Tz$ for some invertible matrix $T$, where $z$ is the state of an equivalent system $\dot{z} = A_zz + B_zu$ that is in phase variable form.
Since our transformation is invertible, the controllability of the system is unchanged, so

$$\mathcal{C}_x = \begin{bmatrix} B & AB & \cdots & A^{n-1}B \end{bmatrix} = T\begin{bmatrix} B_z & A_zB_z & \cdots & A_z^{n-1}B_z \end{bmatrix} = T\,\mathcal{C}_z.$$
Assuming the system is controllable, $\mathcal{C}_z$ is invertible and $T = \mathcal{C}_x\mathcal{C}_z^{-1}$. Now we can apply state feedback to the phase variable system: design $K_z$ so that $u = K_zz$ places the poles, and then $u = K_zT^{-1}x$ implements the same feedback on the original state.
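A minimal numerical sketch of this procedure, assuming NumPy is available; the two-state system and the desired poles below are placeholder values chosen for illustration, not values from the notes.

```python
import numpy as np

# Hypothetical two-state system (placeholder values, not from the notes).
A = np.array([[1.0, 1.0],
              [0.0, -2.0]])
B = np.array([[1.0],
              [1.0]])

# Open-loop characteristic polynomial lambda^2 + a1*lambda + a0.
_, a1, a0 = np.poly(A)

# Phase variable (controllable canonical) form of the same system.
Az = np.array([[0.0, 1.0],
               [-a0, -a1]])
Bz = np.array([[0.0],
               [1.0]])

# Controllability matrices; T = Cx * Cz^{-1} maps z-coordinates to x-coordinates.
Cx = np.hstack([B, A @ B])
Cz = np.hstack([Bz, Az @ Bz])
assert np.linalg.matrix_rank(Cx) == A.shape[0], "system must be controllable"
T = Cx @ np.linalg.inv(Cz)

# Desired closed-loop polynomial lambda^2 + c1*lambda + c0 (poles at -3 and -5).
c1, c0 = 8.0, 15.0
Kz = np.array([[a0 - c0, a1 - c1]])   # last row of Az + Bz@Kz becomes [-c0, -c1]
K = Kz @ np.linalg.inv(T)             # feedback in the original coordinates: u = K x

print(np.linalg.eigvals(A + B @ K))   # approximately -3 and -5
```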
Suppose we wanted to get rid of the steady-state error using state-space control. We would do this using an integrator over the error between the observed output and the reference $r$,

$$x_i(t) = \int_0^t \left(y(\tau) - r(\tau)\right)d\tau.$$

If we treat this as a new state, its evolution will be

$$\dot{x}_i = y - r = Cx - r.$$

When we apply our feedback rule $u = Kx + k_ix_i$, we get

$$\frac{d}{dt}\begin{bmatrix} x \\ x_i \end{bmatrix} = \begin{bmatrix} A + BK & Bk_i \\ C & 0 \end{bmatrix}\begin{bmatrix} x \\ x_i \end{bmatrix} - \begin{bmatrix} 0 \\ 1 \end{bmatrix}r,$$

and we can choose $K$ and $k_i$ to place the poles of this augmented system.
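A small sketch of this augmentation, assuming `scipy.signal.place_poles` is available; the system matrices and augmented poles are placeholders, and the sign flip accounts for the $u = Kx$ convention used in these notes (SciPy places the poles of $A - BK$).

```python
import numpy as np
from scipy.signal import place_poles

# Same hypothetical system as above (placeholder values).
A = np.array([[1.0, 1.0],
              [0.0, -2.0]])
B = np.array([[1.0],
              [1.0]])
C = np.array([[1.0, 0.0]])

# Augment the state with the integral of the output error x_i.
A_aug = np.block([[A, np.zeros((2, 1))],
                  [C, np.zeros((1, 1))]])
B_aug = np.vstack([B, np.zeros((1, 1))])

# Place the poles of the augmented system; with u = [K k_i] [x; x_i]
# (positive-feedback convention), the gain is the negative of place_poles' result.
K_aug = -place_poles(A_aug, B_aug, [-3.0, -4.0, -5.0]).gain_matrix
K, k_i = K_aug[:, :2], K_aug[:, 2:]

print(np.linalg.eigvals(A_aug + B_aug @ K_aug))  # approximately -3, -4, -5
```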
When doing state feedback, we often don't have access to the states themselves because we only have access to $y$. In that case, we can't use $u = Kx$ because we don't know $x$. One idea is to keep track of an estimated state $\hat{x}$ and estimated output $\hat{y}$ which follow the same system dynamics as the actual state and the actual output and receive the same input:

$$\dot{\hat{x}} = A\hat{x} + Bu, \qquad \hat{y} = C\hat{x}.$$

Because both systems receive the same input, the error obeys $\frac{d}{dt}(x - \hat{x}) = A(x - \hat{x})$. If $A$ is a stable matrix, then $x(t) - \hat{x}(t) = e^{At}(x_0 - \hat{x}_0) \to 0$, where $x_0$ is the initial state of the system. This means that even if there is a discrepancy between the estimated state and the true state in the beginning, the estimate will match the true state after some time.
Suppose now that we want to control the error between the true state $x$ and the estimated state $\hat{x}$, so we add a gain $L$ to the error in the outputs $y - \hat{y}$:

$$\dot{\hat{x}} = A\hat{x} + Bu + L(y - \hat{y}).$$

The error $e = x - \hat{x}$ then evolves according to

$$\dot{e} = (A - LC)e.$$
Thus we can design $L$ to place the eigenvalues of $A - LC$ and get quick error convergence. Notice that if our system is not observable, then we will not be able to place the poles of the observer system where we want them to be.
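A minimal sketch of computing $L$ numerically, using the duality between observer design and state feedback (placing the poles of $A^T - C^TL^T$). The matrices and observer poles are placeholder choices.

```python
import numpy as np
from scipy.signal import place_poles

# Same hypothetical system as above (placeholder values).
A = np.array([[1.0, 1.0],
              [0.0, -2.0]])
C = np.array([[1.0, 0.0]])

# Observability check: observer poles can only be placed if (A, C) is observable.
O = np.vstack([C, C @ A])
assert np.linalg.matrix_rank(O) == A.shape[0], "system must be observable"

# By duality, placing the eigenvalues of A - LC is a state-feedback problem
# on the pair (A^T, C^T).
observer_poles = [-8.0, -10.0]   # typically chosen faster than the controller poles
L = place_poles(A.T, C.T, observer_poles).gain_matrix.T

print(np.linalg.eigvals(A - L @ C))  # approximately -8 and -10
```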
Now, if we do state feedback using the estimated state, then

$$u = K\hat{x}, \qquad \dot{x} = Ax + BK\hat{x}.$$

Looking at the combined system of the true state and the estimation error $e = x - \hat{x}$,

$$\frac{d}{dt}\begin{bmatrix} x \\ e \end{bmatrix} = \begin{bmatrix} A + BK & -BK \\ 0 & A - LC \end{bmatrix}\begin{bmatrix} x \\ e \end{bmatrix}.$$

Notice that the poles of this system are just the poles of the original state-feedback system and the poles of the observer system, so we can choose $K$ and $L$ independently.
Written out explicitly, if our new control law is $u = K\hat{x}$, then our new state-space equations are

$$\dot{x} = Ax + BK\hat{x}, \qquad \dot{\hat{x}} = (A + BK - LC)\hat{x} + LCx,$$

which is the controller we actually implement, since the observer only needs the measured output $y = Cx$.
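A small numerical check of this separation, reusing the placeholder system above; the controller and observer poles are again arbitrary illustrative choices, and the sign flip matches the $u = K\hat{x}$ convention.

```python
import numpy as np
from scipy.signal import place_poles

# Placeholder system from the earlier sketches.
A = np.array([[1.0, 1.0],
              [0.0, -2.0]])
B = np.array([[1.0],
              [1.0]])
C = np.array([[1.0, 0.0]])

# Controller and observer gains.
K = -place_poles(A, B, [-3.0, -5.0]).gain_matrix
L = place_poles(A.T, C.T, [-8.0, -10.0]).gain_matrix.T

# Combined closed-loop system in (x, xhat) coordinates.
A_cl = np.block([[A,     B @ K],
                 [L @ C, A + B @ K - L @ C]])

# The eigenvalues of the combined system are the union of eig(A + BK) and eig(A - LC),
# illustrating that K and L can be designed independently.
print(np.sort(np.linalg.eigvals(A_cl)))  # approximately -10, -8, -5, -3
```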
Suppose we want to control our system to send the state to $0$ over an infinite time horizon using the input $u$. We want to do this by optimizing a cost function that penalizes control effort and the state error. In particular, we want to minimize the cost function

$$J = \int_0^\infty \left(x(t)^TQx(t) + u(t)^TRu(t)\right)dt,$$

where $Q \succeq 0$ is positive semi-definite and $R \succ 0$ is positive definite (typically both diagonal), and they determine how much we penalize the state error and the control effort, respectively.
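This is the standard LQR problem, and the minimizing input turns out to be a linear state feedback obtained from the continuous-time algebraic Riccati equation. A minimal SciPy sketch, with placeholder matrices and weights, using the same $u = Kx$ sign convention as above:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Placeholder system and weights (illustrative values only).
A = np.array([[1.0, 1.0],
              [0.0, -2.0]])
B = np.array([[1.0],
              [1.0]])
Q = np.diag([10.0, 1.0])   # penalize state error
R = np.array([[0.1]])      # penalize control effort

# Solve the continuous-time algebraic Riccati equation for the infinite-horizon cost.
P = solve_continuous_are(A, B, Q, R)

# The optimal input is u = -R^{-1} B^T P x; with the u = Kx convention,
# K is that quantity negated.
K = -np.linalg.solve(R, B.T @ P)

print(np.linalg.eigvals(A + B @ K))  # closed-loop poles chosen by the LQR trade-off
```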