64  Control, feedback, and stability

Making a dynamic system behave

Solving a differential equation tells you what a system will do. Control asks a different question: whether that behaviour is acceptable, how fast it settles, how much it overshoots, and how sensitive it is to disturbances and modelling error.

The central move is this. You measure the output y(t), compare it to the target r(t), form the error e(t) = r(t) - y(t), and choose a control input u(t) that reshapes the dynamics in your favour. Stability and transient response are not side remarks — they are the point.


64.1 What this chapter helps you do

Symbols to keep handy

These are the bits of notation you'll see a lot. If a line of symbols feels like a fence, read it out loud once, then keep going.

  • λ(A-BK): lambda of A minus B K — the eigenvalues of A-BK, the closed-loop modes under state feedback

  • T(s) = Y(s)/R(s): T of s — the closed-loop transfer function

  • K (row vector): K — state-feedback gain row vector in u equals minus K x; distinct from the scalar proportional gain

  • A, B: A and B — the system and input matrices in the state-space model dot-x equals Ax plus Bu

  • x: x — the state vector in state-space models

  • C(s): C of s — the controller transfer function

  • e(t) = r(t) - y(t): e of t equals the error: target minus output

  • G(s): G of s — the plant transfer function

Definitions to keep handy

These are the words we keep coming back to. If one feels slippery, come back here and steady it before you push on.

  • plant: The system you’re trying to control (motor, drone, furnace, reactor).

  • feedback: Using the measured output to decide what input to apply next.

  • open loop vs closed loop: Open loop does not look at the output; closed loop does, and corrects using the error.

  • stability: Whether the system settles down after a disturbance instead of blowing up or drifting away.

  • transfer function: An input-output description in the Laplace domain that turns dynamics into algebra.

Here is the main move this chapter is making, in plain terms. You do not need to be fast. You just need to keep the thread.

  • Coming in: A system evolves in time, and its response matters more than its closed-form solution.

  • Leaving with: Feedback reshapes dynamics. Stability, transient response, and robustness become design questions, not just analytical observations.

64.2 The control problem

Most control problems look complicated because the diagram has a lot of boxes. The underlying idea is homely: compare what you want to what you have, then push in the direction that closes the gap.

If you have ever adjusted a shower to keep the temperature steady, you have done control. You looked at the output (water temperature), compared it to the target, and changed the input (the tap) until the error was small.

Start with four signals. They are just roles in the story:

  • r(t) is the reference: what you want.
  • y(t) is the output: what the system is actually doing.
  • e(t) = r(t) - y(t) is the error.
  • u(t) is the control input you apply to the plant.

Watch for this

The letters are not sacred. What matters is the job:

  • reference = target
  • output = reality
  • error = target minus reality
  • input = the shove you are allowed to apply

The plant is the system being controlled: a motor, furnace, vehicle, converter, or biochemical process. In Laplace-transform language the plant is often represented by a transfer function G(s), read “G of s.” The controller is represented by C(s), read “C of s.”

In open loop, you choose u(t) without using the measured output. In closed loop, you feed back the measured output and let the error signal drive the control action. That is the essential difference.
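The difference shows up immediately in a toy simulation. Below is a minimal sketch (the first-order plant, gain, step size, and disturbance are all invented for illustration): the same plant is driven once in open loop, with an input computed from the model alone, and once in closed loop, with u proportional to the error.

```python
# Open loop vs closed loop for an illustrative plant dy/dt = -y + u + d.
# Target r = 1; constant unmodelled disturbance d = 0.5.

def simulate(closed_loop, K=10.0, d=0.5, dt=0.01, steps=2000):
    r, y = 1.0, 0.0
    for _ in range(steps):
        if closed_loop:
            u = K * (r - y)        # feedback: the error drives the input
        else:
            u = r                  # open loop: u fixed by the (wrong) model
        y += dt * (-y + u + d)     # forward-Euler step of the plant
    return y

y_open = simulate(closed_loop=False)   # settles near r + d = 1.5: the disturbance passes straight through
y_closed = simulate(closed_loop=True)  # settles near 0.95: feedback absorbs most of the disturbance
print(round(y_open, 3), round(y_closed, 3))
```

The open-loop run cannot know about d, so the disturbance appears in the output at full strength. The closed-loop run never sees d directly either, but the error signal reacts to its effect.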

Unity feedback means the measured output Y(s) is fed back directly, with no additional gain in the feedback path. The block-diagram algebra is:

E(s) = R(s) - Y(s)

U(s) = C(s)E(s)

Y(s) = G(s)U(s)

Read those as simple cause-and-effect:

  • error = target minus output
  • controller turns error into an input
  • plant turns input into an output

Substitute in sequence:

Y(s) = G(s)C(s)\bigl(R(s) - Y(s)\bigr)

Rearrange:

\bigl(1 + C(s)G(s)\bigr)Y(s) = C(s)G(s)R(s)

So the closed-loop transfer function is

T(s) = \frac{Y(s)}{R(s)} = \frac{C(s)G(s)}{1 + C(s)G(s)}

Closed loop, in words

The fraction T(s) is not something you memorise. It is a story:

  • the top is “how strongly the controller and plant push the output”
  • the bottom is “the same push, plus the self-correction from feedback”

As the loop gain C(s)G(s) gets large (where the model is valid), T(s) tends toward 1: the output tracks the reference. When the loop gain is small, T(s) looks like the open-loop behaviour.

This means feedback does not merely add another block to the diagram. It changes the input-output law of the whole system.
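At s = 0 the “large loop gain means good tracking” claim is one line of arithmetic. A sketch, assuming a plant DC gain G(0) = 2 chosen purely for illustration:

```python
# DC tracking: T(0) = K*G0 / (1 + K*G0) approaches 1 as the loop gain grows.
G0 = 2.0   # assumed plant DC gain (illustrative, not from the chapter)

for K in (0.1, 1.0, 10.0, 100.0):
    L = K * G0            # loop gain at s = 0
    T0 = L / (1.0 + L)    # closed-loop DC gain
    print(K, round(T0, 4))
```

Each tenfold increase in K pushes T(0) closer to 1, which is the algebraic face of “more feedback, better tracking”.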

Characteristic equation, in words

The equation

1 + C(s)G(s) = 0

is where feedback “decides” the system’s natural behaviour. Its solutions are the closed-loop poles: they are the modes the system will naturally express after a disturbance.

The roots of 1 + C(s)G(s) = 0 are the closed-loop poles. They determine whether the response decays, oscillates, grows, or diverges.

Adjust the gain K below. Watch the step response, pole location, and error readouts change simultaneously.

64.3 What stability means

A control loop is stable if small disturbances do not produce outputs that grow without bound. In continuous-time linear systems, the quick rule is:

  • if every closed-loop pole has negative real part, the response decays
  • if any closed-loop pole has positive real part, the response grows
  • if poles sit on the imaginary axis, the system is marginal and small model errors may push it into trouble

This is why pole location is not bookkeeping. It is a statement about physical behaviour.
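The rule is mechanical enough to code. A sketch, using invented second-order characteristic polynomials s² + bs + c = 0 (these plants are hypothetical; only the classification logic matters):

```python
import cmath

def classify(poles, tol=1e-9):
    """Classify stability from closed-loop pole locations."""
    reals = [p.real for p in poles]
    if all(r < -tol for r in reals):
        return "stable"      # every pole strictly in the left half-plane
    if any(r > tol for r in reals):
        return "unstable"    # at least one pole in the right half-plane
    return "marginal"        # poles on (or numerically at) the imaginary axis

def quad_roots(b, c):
    """Roots of s^2 + b*s + c = 0 via the quadratic formula."""
    disc = cmath.sqrt(b * b - 4 * c)
    return [(-b + disc) / 2, (-b - disc) / 2]

print(classify(quad_roots(3, 5)))    # s^2 + 3s + 5: decaying oscillation
print(classify(quad_roots(-1, 2)))   # s^2 - s + 2: growing oscillation
print(classify(quad_roots(0, 4)))    # s^2 + 4: sustained oscillation at ±2i
```

The three test cases land in the three bullets above: real parts all negative, one positive, and exactly zero.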

The same stability condition applies in state-space form — vectors and matrices instead of transfer functions, but the same underlying dynamics. A transfer function G(s) and a state-space model (A, B) are two descriptions of the same system.

Two different K’s in this chapter

This chapter uses K in two roles:

  • scalar K in C(s)=K: a single proportional gain in a transfer-function loop
  • row vector K in u=-Kx: state-feedback gains in a state-space loop

Same letter, different job. When you see K next, check which story you are in.

For a state-space model

\dot{x} = Ax + Bu, \qquad u = -Kx

substituting u = -Kx into the equation gives

\dot{x} = Ax + B(-Kx) = (A - BK)x

Now the eigenvalues \lambda(A-BK) play the same role that closed-loop poles play in transfer-function form. They tell you what the state does in time.

Note: Why this works

Feedback changes the equation you are solving. Without control, the plant’s dynamics are built into G(s) or A. With control, the measured output feeds back into the input channel, so the effective denominator changes from the plant’s own poles to a new set determined by the loop.

You are not accepting the poles, modes, or eigenvalues the plant came with. You are choosing them.
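Here is what “choosing them” looks like concretely. A sketch for a plant already in companion form, A = [[0, 1], [-a0, -a1]] with B = (0, 1)^T, where placing the closed-loop eigenvalues reduces to a subtraction; the numbers a0 = 2, a1 = 1 are the ones used in Worked example 2 below:

```python
# Pole placement for a 2-state companion-form plant.
# A = [[0, 1], [-a0, -a1]], B = [[0], [1]], u = -K x with K = (k1, k2).
# Then A - BK = [[0, 1], [-(a0 + k1), -(a1 + k2)]], so the closed-loop
# characteristic polynomial is  l^2 + (a1 + k2)*l + (a0 + k1).

def place(a0, a1, p0, p1):
    """Gains that make the closed-loop polynomial l^2 + p1*l + p0."""
    return (p0 - a0, p1 - a1)

a0, a1 = 2.0, 1.0                         # open loop: l^2 + l + 2
k1, k2 = place(a0, a1, p0=5.0, p1=3.0)    # target:    l^2 + 3l + 5
print(k1, k2)
```

For a general A the bookkeeping is heavier, but the principle is the same: the gains are whatever closes the gap between the polynomial you have and the polynomial you want.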

64.4 The core method

For a first pass through a control problem, the workflow is usually:

  1. Write the plant model in transfer-function or state-space form.
  2. Decide what “good behaviour” means: stable, fast enough, low overshoot, small steady-state error, acceptable control effort.
  3. Choose a controller structure: proportional, PI, PID, state feedback, or another architecture appropriate to the system.
  4. Form the closed-loop model.
  5. Inspect the poles or eigenvalues and connect them back to the time response.
  6. Tune, then check what the tuning costs you elsewhere.

The important habit is to keep the interpretation attached to the algebra. A gain value is not just a number. It changes speed, error, sensitivity, and sometimes noise amplification.

64.5 Worked example 1: cruise control with proportional feedback

Suppose a simplified vehicle-speed model has plant transfer function

G(s) = \frac{1}{5s + 1}

where input is throttle command and output is speed deviation from the desired operating point. Use a proportional controller

C(s) = K

with unity feedback.

The closed-loop transfer function is

T(s) = \frac{KG(s)}{1 + KG(s)} = \frac{K/(5s+1)}{1 + K/(5s+1)} = \frac{K}{5s + 1 + K}

So the closed-loop pole is at

s = -\frac{1+K}{5}

For any K > -1, the pole is in the left half-plane, so the linear model is stable. If K > 0, increasing K moves the pole further left, which makes the response faster.

The steady-state gain is

T(0) = \frac{K}{1+K}

so proportional feedback alone does not remove step-tracking error completely. For a unit step reference, the steady-state output is K/(1+K) and the steady-state error is

1 - \frac{K}{1+K} = \frac{1}{1+K}

This is the standard tradeoff. Larger K reduces error and speeds the loop, but in a more realistic model it also increases sensitivity to unmodelled dynamics, actuator limits, and measurement noise.
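The two formulas in this example can be tabulated directly; the K values below are arbitrary sample points:

```python
# Cruise-control example: G(s) = 1/(5s + 1), C(s) = K, unity feedback.
# Closed-loop pole: s = -(1 + K)/5.  Steady-state step error: 1/(1 + K).

for K in (1.0, 4.0, 20.0):
    pole = -(1.0 + K) / 5.0
    ess = 1.0 / (1.0 + K)
    print(f"K={K:5.1f}  pole={pole:6.2f}  steady-state error={ess:.3f}")
```

Reading down the table, larger K pushes the pole further left (faster settling) and shrinks the error, exactly the tradeoff described above; what the table cannot show is the sensitivity cost.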

The two-panel chart below shows both sides of this tradeoff at once. Drag the cursor to explore.

64.6 Worked example 2: state feedback for a two-state system

Consider the linear system

\dot{x} = Ax + Bu

with

A = \begin{pmatrix} 0 & 1 \\ -2 & -1 \end{pmatrix}, \qquad B = \begin{pmatrix} 0 \\ 1 \end{pmatrix}

Choose state feedback

u = -Kx, \qquad K = \begin{pmatrix} 3 & 2 \end{pmatrix}

Then

A - BK = \begin{pmatrix} 0 & 1 \\ -2 & -1 \end{pmatrix} - \begin{pmatrix} 0 \\ 1 \end{pmatrix} \begin{pmatrix} 3 & 2 \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ -5 & -3 \end{pmatrix}

Its characteristic polynomial is \det(\lambda I - (A-BK)), evaluated as:

\det(\lambda I - (A-BK)) = \det\begin{pmatrix} \lambda & -1 \\ 5 & \lambda+3 \end{pmatrix} = \lambda(\lambda+3) + 5 = \lambda^2 + 3\lambda + 5

The eigenvalues are

\lambda = \frac{-3 \pm \sqrt{9-20}}{2} = -\frac{3}{2} \pm \frac{\sqrt{11}}{2}i

Both eigenvalues have negative real part, so the closed-loop system is stable. Compared with the uncontrolled system, the controller has shifted the system’s modes. That is the state-space version of pole placement.
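The eigenvalue computation is easy to check numerically. A sketch using only the 2×2 trace/determinant formula, no external libraries:

```python
import cmath

# Closed-loop matrix from this example: A - BK = [[0, 1], [-5, -3]].
a, b, c, d = 0.0, 1.0, -5.0, -3.0
tr, det = a + d, a * d - b * c        # trace = -3, determinant = 5

disc = cmath.sqrt(tr * tr - 4 * det)  # sqrt(9 - 20) = i*sqrt(11)
lam1 = (tr + disc) / 2
lam2 = (tr - disc) / 2
print(lam1, lam2)                     # -1.5 ± (sqrt(11)/2) i

assert lam1.real < 0 and lam2.real < 0  # both modes decay: stable
```

The same two lines (trace, determinant, quadratic formula) settle stability for any 2×2 closed-loop matrix.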

The interactive below lets you explore how the gain vector K = (k_1, k_2) shapes the eigenvalues and phase portrait.

This is the language used constantly in robotics, flight control, and embedded control software. The code may be digital and the sensors noisy, but the core mathematical question is still: what did your feedback law do to the modes?

64.7 Worked example 3: stabilising a scientific instrument

An optics experiment requires a laser intensity to hold near a target despite thermal drift. A simplified model of the plant is slow and first-order, so the experimentalist begins with a proportional controller for the same reason an engineer would: it is low-complexity and straightforward to analyse.

If the gain is too low, the output drifts and tracking is poor. If the gain is too high, noise and delay in the sensor chain start to matter. The mathematics is identical to the cruise-control example. What changes is the vocabulary.

Control theory is not owned by electrical engineering. It is a general mathematical structure for shaping dynamics under measurement.

64.8 Where this goes

The most direct continuation is Estimation, inverse problems, and filtering (Volume 8, Chapter 5). Real control systems often cannot measure every state they need directly. Once you ask how to control a system, the next question is usually how to estimate what you cannot observe cleanly.

This chapter also informs how you read later Volume 8 material on sampled systems and computational models. A simulation is not a controller. A model can be numerically stable and still be a poor control design. That distinction runs through the rest of the volume.

Tip: Where this appears in practice
  • motor-speed control in electric drives
  • altitude hold and attitude stabilisation in drones
  • furnace and reactor temperature control
  • active vibration suppression in flexible structures
  • instrument stabilisation in optics and experimental physics
  • state-feedback and observer design in robotics and autonomous systems

64.9 Exercises

Each exercise asks you to connect the algebra to physical behaviour.

64.9.1 Exercise 1

A thermal plant is modelled by

G(s) = \frac{2}{10s + 1}

with proportional controller C(s) = K and unity feedback.

  1. Derive the closed-loop transfer function.
  2. Find the closed-loop pole.
  3. For a unit-step reference, compute the steady-state error.
  4. Compare what changes when K = 1 and when K = 4.

64.9.2 Exercise 2

A two-state model has

A = \begin{pmatrix} 0 & 1 \\ -1 & -0.5 \end{pmatrix}, \qquad B = \begin{pmatrix} 0 \\ 1 \end{pmatrix}, \qquad K = \begin{pmatrix} 4 & 1.5 \end{pmatrix}

Compute A-BK and decide whether the closed-loop system is stable.

64.9.3 Exercise 3

A lab instrument is modelled by a first-order plant G(s) = a/(Ts+1). The current proportional gain gives a closed-loop pole at s=-0.2. Following the same pole formula as Worked example 1, the operator proposes increasing the gain so the pole moves to s=-0.8.

Write a short design note answering:

  1. What qualitative change will the lab see in the time response?
  2. Why might this still be a bad idea if the sensor is noisy or delayed?

64.9.4 Exercise 4

Choose one system from your own field: a drive, drone, room heater, queueing server with autoscaling, or experimental instrument. Identify:

  1. the reference
  2. the measured output
  3. the error signal
  4. the control input
  5. one reason high gain might help
  6. one reason high gain might hurt

Write the answer as a one-page systems sketch, not as prose only.

The diagram below shows the same unity-feedback loop relabelled for five different domains. Use it as a template for your sketch.