Why Do We Study Automatic Control?

This is the third post in the series “Why Do We Study X?” In this post, I will explain why it is important to study automatic control if you are a computer engineer. First, we need to understand what automatic control is. Essentially, automatic control is the science that allows us to design and manage systems that operate automatically without human intervention.

In other words, it is about designing systems that can respond to changing conditions and change their behavior to achieve a desired outcome. That outcome could be navigating an obstacle course, correctly positioning a robotic arm, maintaining a certain speed in the cruise control of a car, or maintaining altitude in the autopilot of an airplane.

At this point, you will probably begin to have an inkling about why it is important for computer engineers to study automatic control. A large part of what we do is designing systems that can operate autonomously.

A typical automatic control system consists of a sensor or a set of sensors, a controller, and an actuator or a set of actuators. The sensors take readings from the environment and, after the appropriate signal processing, send them to the controller. The controller is the brain; it is where the actions necessary to achieve the desired outcome are computed. Once an action is computed, the actuators carry it out. This action results in a change in the environment, which is detected by the sensors and sent back to the controller for processing. The cycle continues in this way until the desired outcome is accomplished.
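This sense-compute-actuate cycle can be sketched in a few lines of Python. Everything here is a toy stand-in: the "plant", the gain of 0.5, and the lambda "hardware" are invented purely to show the shape of the loop, not any real system.

```python
def control_loop(setpoint, plant_state, sense, control, actuate, steps=50):
    """Generic sense -> compute -> actuate feedback loop."""
    for _ in range(steps):
        measurement = sense(plant_state)             # sensors read the environment
        action = control(setpoint, measurement)      # controller computes the action
        plant_state = actuate(plant_state, action)   # actuators change the environment
    return plant_state

# Toy demonstration: a "plant" whose state moves by whatever the controller commands.
final = control_loop(
    setpoint=10.0,
    plant_state=0.0,
    sense=lambda s: s,                     # perfect sensor
    control=lambda sp, m: 0.5 * (sp - m),  # simple proportional rule
    actuate=lambda s, a: s + a,            # actuator shifts the state
)
```

After a few dozen iterations the state settles at the setpoint; every concrete controller discussed below is just a different choice of the `control` function.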

What Are the Types of Automatic Control?

We will begin by mentioning some of the basic types of automatic control and then move on to the more advanced types.

The most basic type of automatic control is on-off control. Think of the thermostat in your air-conditioning unit. It measures the temperature of the room, and once the temperature has dropped to a certain level, it turns off the cooling unit. The thermostat continues reading the temperature of the room and turns the unit back on when the temperature rises above a certain level. This is a very simple control system. The sensor is the thermostat's temperature sensor, the actuator is some sort of on-off switch, probably a power transistor, and the controller is a simple if statement that switches the unit off or on based on the sensor reading.
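That if statement can be written out directly. This is a minimal sketch; the setpoint and the hysteresis band (the small dead zone real thermostats use so the unit does not switch on and off rapidly around a single threshold) are invented numbers.

```python
def thermostat(temp, cooling_on, setpoint=24.0, band=1.0):
    """On-off control with a hysteresis band around the setpoint.

    Returns True if the cooling unit should be running."""
    if temp > setpoint + band:
        return True        # too warm: switch cooling on
    if temp < setpoint - band:
        return False       # cool enough: switch cooling off
    return cooling_on      # inside the band: keep the current state
```

Note that inside the band the controller simply keeps its previous decision; that memory is what prevents rapid on-off chatter right at the threshold.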

Another type of controller is the proportional controller. In this type of controller, the output of the controller is proportional to the error between the desired output and the actual output of the system. A typical use of this type of controller would be in the speed control of a motor.

Assume that we want the motor to run at a speed of x km/h. To change the speed of the motor, we vary the voltage it receives as input. One way to do this is to measure its actual speed with a sensor and then feed both this speed and the desired speed into a controller. The controller calculates the difference between them and changes the voltage sent to the motor in proportion to this difference. This allows us to maintain the speed of the motor at the desired level.
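Here is a minimal sketch of that idea. The gain, the setpoint, and the toy "motor" with its constant load are all invented numbers; notice that the loop settles slightly below the setpoint, which is exactly the offset error discussed below.

```python
def p_controller(setpoint, measured, kp=0.8):
    """Proportional control: the output is kp times the error."""
    return kp * (setpoint - measured)

# Toy motor simulation: each step, the voltage adjustment changes the speed,
# while a constant load (the -2.0) drags the speed down.
speed = 0.0
for _ in range(200):
    speed += p_controller(setpoint=50.0, measured=speed) - 2.0
```

With these numbers the speed settles at 47.5 rather than 50: the controller stops pushing harder once its proportional output exactly cancels the load, which happens while a small error still remains.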

A third type of controller is the integral controller. In this type of controller, the output of the controller is proportional to the integral of the error between the desired outcome and the actual outcome over time. So it is essentially a running sum of errors. When this sum stops changing, the error has reached zero and we have achieved the desired outcome. Until then, the system keeps responding by adjusting the actuator.

The main difference between this type of controller and the proportional controller is that since the output of the proportional controller is, by definition, proportional to the error between the desired and actual outputs, the corrective response weakens as the actual output approaches the desired output. The response becomes smaller and smaller until it no longer makes a significant difference in the output. This means that the system can settle in a steady state that is not exactly equal to the desired outcome. This is typically referred to as an offset error.

By adding up all previous errors, the integral controller ensures that the response does not slow down, it continues as usual. It only stops when the integral becomes constant, because this means that there are no more errors to be added, since the error between the desired and actual outcome is now zero.

One disadvantage of the integral controller is that it does not respond quickly to changes in the system. For example, consider a system that is in the steady state, i.e., its desired and actual outcomes are considered equal by that system. Now consider an outside force that acts on this system and changes its actual output. Since the proportional controller only depends on the difference between the actual and desired output, it will immediately measure a large error and respond accordingly. So it quickly responds to changes in the environment.

Now consider the integral controller. Since it adds up all previous errors, it will have a larger baseline value of “residual error”, if I may coin a term. When the change above occurs, the new error will be added to this residual error. Since the residual error was accumulated over time, it is expected to be large compared to the new error. So the system does not respond as quickly as in the proportional system.

In practice, several types of controllers are combined to take advantage of the strong points that each of them offers. We will discuss this more later. Let us now turn our attention to another type of controller — the derivative or differential controller.

The derivative controller bases its response on the rate of change of the error, not its actual value. Such a controller responds very quickly to changes in the system, so it is typically used in systems that require a rapid response, such as aircraft controllers. However, it is prone to amplifying noise. To see why, consider the following. Noise is typically small compared to the signal you are measuring, and since the proportional and integral controllers use the error either directly or summed over time, the noise component stays small compared to the signal that drives the controller.

On the other hand, the derivative controller uses the rate of change of the error. It essentially subtracts consecutive errors from each other. This results in a smaller value, compared to which the noise could be significant. Consider the worst-case scenario in which the actual error in the system is the same for two consecutive time periods, but the noise in measuring it is different. Ideally, since the rate of change of the error is zero, the system should not mount a significant response. But because of the differing noise, the controller sees the two measurements as different and mounts a response anyway.
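This worst case is easy to reproduce in a toy simulation. The true error below is held constant, so the ideal derivative response is zero at every step; the noise amplitude and the derivative gain are invented numbers.

```python
import random

random.seed(0)

kd = 5.0
true_error = 1.0   # the real error never changes, so its true rate of change is 0
prev = true_error + random.uniform(-0.05, 0.05)

responses = []
for _ in range(20):
    measured = true_error + random.uniform(-0.05, 0.05)  # small sensor noise
    responses.append(kd * (measured - prev))             # derivative term
    prev = measured

# Every response should ideally be 0; instead the noise differences (up to 0.1)
# are multiplied by kd into spurious corrections of up to 0.5.
```

In practice this is why the derivative term is usually paired with some low-pass filtering of the measurement, a detail omitted here for brevity.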

As previously mentioned, the different controllers are typically combined to form a PID controller. A PID controller combines proportional, integral, and derivative terms to take advantage of the best features of all three. There are, of course, more advanced controllers that use techniques such as fuzzy logic, genetic algorithms, or machine learning to design the control system, but discussing them all would turn this blog post into a book rather than a post, so I will refrain from doing so.
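Putting the three terms together gives the textbook discrete PID update. This is a bare-bones sketch: real implementations add details like integral windup limits and derivative filtering, and the gains are whatever you tune them to for your system.

```python
class PID:
    """Minimal discrete PID controller."""

    def __init__(self, kp, ki, kd, dt=0.01):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt                  # I: sum of past errors
        derivative = (error - self.prev_error) / self.dt  # D: rate of change
        self.prev_error = error
        return (self.kp * error            # P: react to the current error
                + self.ki * self.integral  # I: remove steady-state offset
                + self.kd * derivative)    # D: react to how fast it changes
```

Tuning kp, ki, and kd against each other is exactly where the theory mentioned at the end of this post (transient and steady-state analysis) earns its keep.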

How Do We Use All This in Computer Engineering?

This is the fun part of the post and the actual subject. Using the techniques above, you can design many systems that control themselves autonomously. The most obvious systems are those with mechanical components, like self-driving cars, robots, or aerial vehicles.

Consider, for example, industrial robots. These robots have many moving parts that need to be controlled autonomously. It would not be very productive if a human had to supervise industrial robots. As a matter of fact, their raison d’être is to minimize the need for human labor.

Mechanical engineers design the motors and bodies of the robots, electrical and electronics engineers may design their sensors, but it requires a computer engineer to program their intelligence. Part of that intelligence is going to be automatic controllers that allow them to do what is required of them.

Of course, there will be other things involved in the robot, such as a vision system that uses ideas from pattern recognition and image processing to allow the robot to do tasks led by visual cues, but in the end, the output of these systems will feed into a controller that governs the motion of the robot to the desired degree, or its speed, or a combination thereof.

Let me explain this more. For the purpose of this explanation, I am going to give a very artificial example that is easy to follow, but hopefully, you will be able to understand how this generalizes to more complex cases. Assume that you have a robot that you want to move as fast as possible towards a wall, but as the robot approaches the wall, it needs to slow down to avoid an abrupt halt at the wall that may spill its payload.

So this robot needs to move quickly but slow down as it approaches the wall. Assume that the robot is driven by a single motor. What we need for this artificial example is an automatic speed control system driven by the distance between the robot and the wall. Thus, you can design a PID controller, for example, that measures the distance from the robot to the wall using, say, a lidar or computer vision. This controller will then set the speed of the motor in proportion to the robot's distance from the wall.
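Sticking to just the proportional part of that controller, the idea can be sketched as follows. The gain, speed limits, and time step are invented, and the "robot" is a single number rather than real hardware.

```python
def approach_speed(distance_m, kp=0.5, max_speed=2.0, min_speed=0.05):
    """Command a speed proportional to the remaining distance to the wall,
    clamped to the motor's limits."""
    return max(min_speed, min(kp * distance_m, max_speed))

# Simulated approach: every 0.1 s the robot moves at the commanded speed.
distance = 5.0
steps = 0
while distance > 0.1:
    distance -= approach_speed(distance) * 0.1
    steps += 1
```

Far from the wall the command saturates at full speed; close to it, the commanded speed shrinks with the distance, so the robot glides to a stop instead of halting abruptly.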

This is just an artificial example, but you can see that automatic control is very important in robotics. It is what allows you to control robots precisely according to the desired state you want to achieve. The same goes for autonomous vehicles or drones. You need them to read values from their sensors and then process these readings in a controller you design to allow them to respond to the environment they are in. Learning how to properly design controllers is essential if you are to be active in this interesting and important part of computer engineering.

Another, perhaps less obvious, use of automatic control in computer engineering is in network traffic control. It is possible to design an automatic control algorithm that adjusts the setting of routing algorithms to automatically respond to changes in network traffic. Automatic control can also be used to provide differentiated services to network traffic according to quality of service (QoS) settings. For example, voice and video traffic may require some QoS guarantees to make sure that the video or audio is not interrupted during transmission. By deploying controllers, perhaps using fuzzy logic, network administrators can allow the network to respond to these changes and demands automatically, thus ensuring a better experience for all.

You can also use automatic control for power management in mobile devices or laptops. In these devices, it is sometimes necessary to change CPU speed, screen brightness, and other settings to conserve power and maximize battery life. This can be done using automatic control algorithms that measure current levels of battery consumption and then use a controller to adjust all these settings until the rate of battery consumption approaches the desired rate.
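As a rough sketch of that idea, the function below nudges two settings in proportion to how far the drain rate exceeds a target. The units, limits, gain, and the notion of scaling both settings by the same factor are all invented for illustration; a real power manager would be far more involved.

```python
def adjust_power(drain_rate, target_rate, cpu_freq, brightness, kp=0.1):
    """Nudge CPU frequency and brightness in proportion to how far the
    battery drain rate exceeds the target."""
    error = drain_rate - target_rate      # positive: draining faster than desired
    scale = 1.0 - kp * error
    cpu_freq = min(max(cpu_freq * scale, 0.4), 1.0)      # fraction of max clock
    brightness = min(max(brightness * scale, 0.1), 1.0)  # fraction of max level
    return cpu_freq, brightness
```

Called periodically with fresh battery readings, this forms the same feedback loop as before: measure, compare to the target, adjust the actuators (here, the device settings), and repeat.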

Yet another use of automatic control by computer engineers is in process control. If you are a computer engineer and you are designing a system for an industrial process that requires adjusting settings or ingredients, you will need an automatic control system to be able to precisely reach your desired goal. For example, you can use it to regulate temperature and pressure in a chemical process or the rate of flow of material on an assembly line.

Automatic control is a very important topic for computer engineers, and learning how to properly design a controller for whatever you want to control is an essential skill that you need to master. The theory you study, such as transient analysis, steady-state analysis, and controller parameter design, may seem like unnecessarily abstract ideas, but when you try to design a control system for your project or job, find yourself relying on trial and error, and then watch your system behave unpredictably when you finish designing it, you will understand that all that theory is very important.


Automatic control is very important for computer engineers. Without it, we are basically programmers with some knowledge of hardware who try their luck when building autonomous systems. With it, we are competent engineers who can design our system on paper, perhaps simulate it in software, and then implement it to precise specifications. Consider this the next time you sit down for a quiz in automatic control and your brain rebels, asking you, “Why do we study this course?”.
