Modeling Systems
Systems are most easily modeled using linear, constant-coefficient differential equations (LCCDEs). They can be represented either as a set of state-space equations or as a transfer function in the Laplace domain.
In electrical systems, there are three basic components: resistors, capacitors, and inductors. See Table 1 for their Laplace-domain relationships. Around any closed loop, the voltages sum to zero ($\sum_k v_k = 0$) by Kirchhoff's Voltage Law, and at any electrical junction, the currents sum to zero ($\sum_k i_k = 0$) by Kirchhoff's Current Law.
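For reference, the standard Laplace-domain voltage-current relationships (assuming zero initial conditions) are

$$V_R(s) = R\,I(s), \qquad V_C(s) = \frac{1}{Cs}\,I(s), \qquad V_L(s) = Ls\,I(s),$$

so resistors, capacitors, and inductors have impedances $R$, $\frac{1}{Cs}$, and $Ls$ respectively.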
In mechanical systems, there are also three basic components: dampers, springs, and masses, along with their rotational counterparts. See Table 1 for their Laplace-domain relationships. At a massless node, the forces must sum to zero ($\sum_k F_k = 0$) by Newton's 2nd law. Because we consider dampers and springs to be massless, the forces at the two ends of a damper or spring must be equal. In rotational systems, we can also have a gear train. Rotational impedances are reflected through gear trains by multiplying by the square of the gear ratio, $\left(N_{\text{destination}}/N_{\text{source}}\right)^2$.
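As a concrete example (a standard mass-spring-damper, not taken from the tables above): summing forces on a mass $M$ attached to a spring $K$ and a viscous damper $f_v$, driven by a force $f(t)$, gives

$$M\ddot{x} + f_v\dot{x} + Kx = f(t) \quad\xrightarrow{\;\mathcal{L}\;}\quad \left(Ms^2 + f_v s + K\right)X(s) = F(s) \quad\implies\quad \frac{X(s)}{F(s)} = \frac{1}{Ms^2 + f_v s + K}.$$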
It turns out that electrical and mechanical systems are analogous to each other. In other words, given an electrical system, we can convert it into a mechanical system and vice versa. Capacitors act like springs as energy storage, resistors act like dampers which dissipate energy, and inductors act like inertial masses which resist movement. These are clear from their force/voltage differential equations (in the Laplace domain) in Table 1. Under these analogies, forces are like voltages, currents are like velocities, and charge is like position.
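For example, under the force-voltage analogy, the mass-spring-damper above maps onto a series RLC circuit driven by a voltage source: writing the mesh equation in terms of charge $q$ gives $L\ddot{q} + R\dot{q} + \frac{1}{C}q = v(t)$, which has exactly the same form with $M \leftrightarrow L$, $f_v \leftrightarrow R$, and $K \leftrightarrow \frac{1}{C}$.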
Because nonlinear systems often have dynamics that are complicated to analyze, a standard trick to make them simpler is to linearize them.
Using Definition 5, we can see that around our operating point $x_0$, we have

$$f(x) \approx f(x_0) + \left.\frac{df}{dx}\right|_{x = x_0}(x - x_0). \tag{1}$$
System variables are variables which depend on either the input or the system's internal state.
We can easily go from State-Space Equations to a transfer function via the unilateral Laplace transform. After taking the Laplace transform of both sides of Equation 2 and Equation 3,

$$sX(s) - x(0) = AX(s) + BU(s), \qquad Y(s) = CX(s) + DU(s).$$
We can also derive state-space equations from their transfer functions. First, we assume that the transfer function comes from the LCCDE

$$\frac{d^n y}{dt^n} + a_{n-1}\frac{d^{n-1}y}{dt^{n-1}} + \cdots + a_1\frac{dy}{dt} + a_0 y = b_m\frac{d^m u}{dt^m} + \cdots + b_1\frac{du}{dt} + b_0 u,$$

meaning our transfer function will be of the form

$$H(s) = \frac{Y(s)}{U(s)} = \frac{b_m s^m + b_{m-1}s^{m-1} + \cdots + b_1 s + b_0}{s^n + a_{n-1}s^{n-1} + \cdots + a_1 s + a_0}.$$

Using an intermediary variable $X(s)$ (introduced carefully in the derivation below), we can now let

$$\left(s^n + a_{n-1}s^{n-1} + \cdots + a_0\right)X(s) = U(s), \qquad Y(s) = \left(b_m s^m + \cdots + b_0\right)X(s).$$

Converting this back to the time-domain,

$$\frac{d^n x}{dt^n} = -a_{n-1}\frac{d^{n-1}x}{dt^{n-1}} - \cdots - a_0 x + u, \qquad y = b_m\frac{d^m x}{dt^m} + \cdots + b_0 x.$$
When we design controllers in State-Space Control, this form makes it easier to place the system poles where we want them to be.
Notice that Equation 2 is an inhomogeneous, first-order vector ODE, so it can be solved explicitly with the matrix exponential $e^{At}$; the integrating-factor derivation is given below. Combining the homogeneous and particular solutions, we see that

$$x(t) = e^{At}x(0) + \int_0^t e^{A(t-\tau)}Bu(\tau)\,d\tau. \tag{7}$$

Notice that Equation 7 is broken into two pieces: the zero-input response $e^{At}x(0)$, which depends only on the initial state, and the zero-state response $\int_0^t e^{A(t-\tau)}Bu(\tau)\,d\tau$, which depends only on the input.
By the Cayley-Hamilton Theorem (see Cayley-Hamilton), the matrix exponential can be written as a finite sum

$$e^{At} = \sum_{k=0}^{n-1}\alpha_k(t)A^k,$$

so the zero-state response in Equation 7 always lies in the span of the columns of $B, AB, \ldots, A^{n-1}B$. This observation is what makes the controllability test below work.
A theorem analogous to Theorem 1 exists for observability.
Linearization is when a nonlinear system is approximated by the first two terms of its Taylor series about a particular operating point.
Equation 1 will hold so long as $\delta x = x - x_0$ is small enough to be within the linear regime (i.e., where the Taylor series expansion is a good approximation). If $f$ is a multi-variable function, then Equation 1 becomes

$$f(\mathbf{x}) \approx f(\mathbf{x}_0) + \nabla f(\mathbf{x}_0)^T(\mathbf{x} - \mathbf{x}_0).$$
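As a quick sketch of linearization in practice (a hypothetical pendulum example, not from these notes), we can approximate the Jacobian numerically and read off the linearized state-evolution matrix:

```python
import numpy as np

# Hypothetical example: linearize the pendulum dynamics
# theta'' = -(g/l) sin(theta) about the operating point theta = 0.
g, l = 9.81, 1.0

def f(x):
    """Nonlinear state-evolution function, x = [theta, omega]."""
    theta, omega = x
    return np.array([omega, -(g / l) * np.sin(theta)])

def jacobian(f, x0, eps=1e-6):
    """Numerically approximate df/dx at x0 (the first-order Taylor term)."""
    n = x0.size
    J = np.zeros((n, n))
    fx0 = f(x0)
    for j in range(n):
        dx = np.zeros(n)
        dx[j] = eps
        J[:, j] = (f(x0 + dx) - fx0) / eps
    return J

x0 = np.array([0.0, 0.0])   # operating point (hanging at rest)
A = jacobian(f, x0)         # linearized state-evolution matrix
print(A)                    # ≈ [[0, 1], [-g/l, 0]]
```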
The state variables of a system are the smallest set of linearly independent system variables that can uniquely determine all the other system variables for all $t \geq t_0$.
One can think of the state variables as capturing the internal dynamics of the system. The dynamics are described by the matrices $A$ (the state-evolution matrix) and $B$ (the input matrix):

$$\dot{x}(t) = Ax(t) + Bu(t), \tag{2}$$

where $u$ is the input to the system. Sometimes the states are not directly observable, but instead the sensor in Figure 2 only provides a linear combination of the states determined by the output matrix $C$ and the feedforward matrix $D$:

$$y(t) = Cx(t) + Du(t). \tag{3}$$

Together, Equation 2 and Equation 3 are the state-space equations of the system.
If the system is Single-Input, Single-Output (SISO) and the initial condition is $x(0) = 0$, then

$$\frac{Y(s)}{U(s)} = C(sI - A)^{-1}B + D. \tag{4}$$

Equation 4 makes it very clear that the poles of the system are the same as the eigenvalues of the $A$ matrix: since $(sI - A)^{-1} = \frac{\operatorname{adj}(sI - A)}{\det(sI - A)}$, the denominator of Equation 4 is the characteristic polynomial of $A$.
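To sanity-check Equation 4 numerically, here is a short sketch (with arbitrarily chosen example matrices) comparing the transfer-function poles against the eigenvalues of $A$ using scipy:

```python
import numpy as np
from scipy import signal

# Hypothetical 2-state SISO system (values chosen for illustration).
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

# Equation 4: Y(s)/U(s) = C (sI - A)^{-1} B + D.
num, den = signal.ss2tf(A, B, C, D)
print(np.roots(den))        # poles of the transfer function
print(np.linalg.eig(A)[0])  # eigenvalues of A — the same values
```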
It is possible that $m < n$, in which case $b_{m+1} = \cdots = b_n = 0$. In other words, the numerator can have fewer terms than the denominator. We now introduce an intermediary variable $X(s)$ so

$$\left(s^n + a_{n-1}s^{n-1} + \cdots + a_0\right)X(s) = U(s), \qquad Y(s) = \left(b_m s^m + \cdots + b_1 s + b_0\right)X(s).$$
We can now choose our state-variables to be the derivatives $x_1 = x,\; x_2 = \dot{x},\; \ldots,\; x_n = \frac{d^{n-1}x}{dt^{n-1}}$, giving us the state-evolution equation

$$\dot{\mathbf{x}} = \begin{bmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 1 \\ -a_0 & -a_1 & -a_2 & \cdots & -a_{n-1} \end{bmatrix}\mathbf{x} + \begin{bmatrix} 0 \\ 0 \\ \vdots \\ 0 \\ 1 \end{bmatrix}u. \tag{5}$$
Applying the state-variables to $Y(s) = \left(b_m s^m + \cdots + b_0\right)X(s)$,

$$y = \begin{bmatrix} b_0 & b_1 & \cdots & b_m & 0 & \cdots & 0 \end{bmatrix}\mathbf{x}. \tag{6}$$
Together, Equation 5 and Equation 6 are known as Phase Variable Form. Notice that the characteristic polynomial of the $A$ matrix when it is in phase variable form is

$$\Delta(s) = s^n + a_{n-1}s^{n-1} + \cdots + a_1 s + a_0,$$

which is exactly the denominator of the transfer function.
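As an illustration, here is a small sketch (the helper below is our own, not from the notes) that builds $(A, B, C)$ in phase variable form from the transfer-function coefficients and checks that the characteristic polynomial of $A$ matches the denominator:

```python
import numpy as np

def phase_variable_form(b, a):
    """Build (A, B, C) in phase variable form for
    Y(s)/U(s) = (b[m] s^m + ... + b[0]) / (s^n + a[n-1] s^{n-1} + ... + a[0]).
    `a` holds a_0..a_{n-1}; `b` holds b_0..b_m with m < n."""
    n = len(a)
    A = np.zeros((n, n))
    A[:-1, 1:] = np.eye(n - 1)   # superdiagonal of ones (chain of integrators)
    A[-1, :] = -np.asarray(a)    # last row: negated denominator coefficients
    B = np.zeros((n, 1))
    B[-1, 0] = 1.0
    C = np.zeros((1, n))
    C[0, :len(b)] = b            # numerator coefficients, padded with zeros
    return A, B, C

# Example: H(s) = (s + 2) / (s^3 + 4 s^2 + 5 s + 6)
A, B, C = phase_variable_form(b=[2.0, 1.0], a=[6.0, 5.0, 4.0])
print(np.poly(A))  # -> [1, 4, 5, 6]: characteristic polynomial = denominator
```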
For transfer functions, the time domain solution for a particular input is given by $y(t) = \mathcal{L}^{-1}\{H(s)U(s)\}$. How do we do the same for state-space equations? Equation 2 is an inhomogeneous, first-order vector ordinary differential equation. If it were a scalar homogeneous ODE $\dot{x} = ax$, then we know the solution would be $x(t) = e^{at}x(0)$, so for our vector case, let us first define

$$e^{At} = I + At + \frac{(At)^2}{2!} + \cdots = \sum_{k=0}^{\infty}\frac{(At)^k}{k!}$$
using the Taylor series expansion. With this definition, we can solve Equation 2 using integrating factors. If we let $e^{-At}$ be our integrating factor, then multiplying it to both sides of Equation 2 gives

$$e^{-At}\dot{x}(t) - e^{-At}Ax(t) = e^{-At}Bu(t) \implies \frac{d}{dt}\left(e^{-At}x(t)\right) = e^{-At}Bu(t).$$
Integrating both sides from $0$ to $t$,

$$e^{-At}x(t) - x(0) = \int_0^t e^{-A\tau}Bu(\tau)\,d\tau \implies x(t) = e^{At}x(0) + \int_0^t e^{A(t-\tau)}Bu(\tau)\,d\tau,$$

recovering Equation 7.
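For intuition, the matrix-exponential solution can be cross-checked against brute-force numerical integration of Equation 2 (zero input, arbitrary example values):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-2.0, -3.0]])  # arbitrary example dynamics
x0 = np.array([1.0, 0.0])
t = 0.5

# Closed form: the zero-input piece of Equation 7.
print(expm(A * t) @ x0)

# Brute force: forward-Euler integration of x' = Ax with a tiny step.
x, dt = x0.copy(), 1e-5
for _ in range(int(t / dt)):
    x = x + dt * (A @ x)
print(x)  # matches the closed-form result to several decimal places
```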
The zero-state response is how the system responds to an input when its initial state is $x(0) = 0$. It is the convolution of the input with $e^{At}B\,\mathbb{1}(t)$, where $\mathbb{1}(t)$ is the unit step.
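Equivalently, the zero-state response can be simulated directly; a sketch with scipy (same arbitrary example system as above):

```python
import numpy as np
from scipy import signal

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

t = np.linspace(0.0, 10.0, 1000)
u = np.ones_like(t)                            # unit-step input
tout, y, x = signal.lsim((A, B, C, D), u, t)   # x(0) = 0 by default
print(y[-1])  # ≈ 0.5 here, the DC gain H(0) = C(-A)^{-1}B + D
```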
A system is controllable if, for any initial state $x(0)$, we can reach a new state $x_f$ in finite time with no constraints on the input $u$.
Let us assume that we have a controllable system and we want to reach the state $x_f$ from $x(0) = 0$, and we reach it at time $t_f$. Then using Equation 7,

$$x_f = \int_0^{t_f} e^{A(t_f-\tau)}Bu(\tau)\,d\tau = \sum_{k=0}^{n-1}A^k B\underbrace{\int_0^{t_f}\alpha_k(t_f-\tau)u(\tau)\,d\tau}_{c_k} = \begin{bmatrix} B & AB & \cdots & A^{n-1}B \end{bmatrix}\begin{bmatrix} c_0 \\ c_1 \\ \vdots \\ c_{n-1} \end{bmatrix},$$

where the second equality uses the Cayley-Hamilton expansion of $e^{At}$. Notice that if $\begin{bmatrix} B & AB & \cdots & A^{n-1}B \end{bmatrix}$ is invertible, then we can find the coefficients $c_k$ (and hence an input $u$) which will recover $x_f$, but if it is not invertible, then we may not be able to do this.
If the controllability matrix $\mathcal{C} = \begin{bmatrix} B & AB & \cdots & A^{n-1}B \end{bmatrix}$ is invertible, then the system is controllable.
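Theorem 1 is easy to check numerically; a minimal sketch (the helper name is ours) for the same example system:

```python
import numpy as np

def controllability_matrix(A, B):
    """Stack [B, AB, ..., A^{n-1} B] column-wise (Theorem 1's test matrix)."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
ctrb = controllability_matrix(A, B)
# For a square ctrb (SISO), invertibility is equivalent to full rank n.
print(np.linalg.matrix_rank(ctrb) == A.shape[0])  # True -> controllable
```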
A system is observable if, for any initial state $x(0)$, we can determine $x(0)$ from the input $u(t)$ and the output $y(t)$ over a finite time interval.
If the observability matrix $\mathcal{O} = \begin{bmatrix} C \\ CA \\ \vdots \\ CA^{n-1} \end{bmatrix}$ is invertible, then the system is observable.
Sometimes systems have a time-delay in them. This is equivalent to placing a system before the plant with impulse response $\delta(t - T)$, since $u(t) * \delta(t - T) = u(t - T)$. In the Laplace domain, this is the same as the transfer function $e^{-sT}$, as shown in Figure 3.