Engineering is concerned with producing technical solutions to problems, following the pattern of requirements analysis, system and subsystem design, followed by synthesis, verification and validation. At its heart, engineering is based on two principles: recursively breaking complex technical problems into smaller pieces, down to a level where a solution becomes possible for each piece, and then integrating those pieces in reverse order at all levels to build a complete whole. The resulting system is therefore built of components or modules in a nested manner, with nearly “atomic” elements at the bottom; the behaviour of the latter can usually be described by a model¹ with a finite set of parameters to a specified degree of fidelity. Hence the complete system can, in most cases, be described by a finite albeit larger number of parameters, too. Through this process quite complex systems can be designed and built, such as the Horloge astronomique de Besançon, France, or the Large Hadron Collider at CERN.
It comes as no surprise that technical systems built by engineers with this approach are predictable by design; they are complex but, if well designed, in most cases not complicated. For instance, mechanical engineering uses simple basic components such as nuts and bolts, literally; they have very well-defined properties, captured in a few numbers such as diameter or tensile strength. Many of these basic components became standardised as so-called “norm components” and make up the basic building blocks of mechanical systems. Similarly, there are well-defined components in electrical engineering, from simple resistors, capacitors and inductors to integrated circuits of the highest complexity. The principle is the same: design and manufacture well-defined simple components, then use those in a nested fashion to build complex systems.
At the centre of breaking down problems and building up complex solutions is the understanding of these simple components, whose behaviour can be described by a finite set of dominant parameters to a desired degree of fidelity. Training and education in engineering is mostly about these models, whose purpose is not only to describe behaviour but to make predictions. This approach is reductionist, as an infinitely complex real system is reduced to a finite set of parameters in a mathematical model. For instance, let’s take a pendulum for a clock, consisting of a thin rod with a mass of 0.05kg and a length of 0.5m, fitted with a flat metal disc at the end with a mass of 0.5kg and a radius of 50mm. Let’s try to model this pendulum in order to predict its oscillation frequency, which determines the accuracy of the clock:
- Using the simplest mathematical model, with no friction, an idealised rod and all the mass concentrated at the end of the pendulum, the frequency depends only on the pendulum’s length and the Earth’s gravitational acceleration. According to this model, the pendulum would be expected to swing at approximately 0.705Hz.
- If the fact that both disc and rod have a moment of inertia about the pivot point is taken into account, the model needs to be more complex. Remaining in the realm of small motion around the point of equilibrium, this more sophisticated model predicts a frequency of 0.638Hz, slightly below the first estimate but not too far away.
- Taking into account friction, both at the pivot bearing and from air resistance, a further refined model yields a slightly lower frequency of 0.612Hz. Larger oscillations would require even more complex models.
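The first, simplest model above can be checked in a few lines: the rod and disc masses and the disc radius play no role in it, only the length and the gravitational acceleration (a minimal sketch, using the values given in the text):

```python
import math

# Simplest pendulum model from the text: no friction, idealised rod,
# all mass concentrated at the end. Frequency f = (1/2pi) * sqrt(g/L).
g = 9.81   # gravitational acceleration, m/s^2
L = 0.5    # pendulum length, m

f_simple = math.sqrt(g / L) / (2 * math.pi)
print(f"{f_simple:.3f} Hz")  # approximately 0.705 Hz, as stated above
```

The refined models add the moments of inertia of rod and disc and a friction term, each introducing further parameters while only nudging the predicted frequency.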
The good news is that the predictions of refined models do not completely deviate from the simple ones; they converge towards a better and better solution. This stability in modelling, model prediction convergence, is perhaps the secret of why engineering works: a) components can be described by a comprehensible model using a finite set of parameters; b) the more parameters are added, the closer the modelled system behaviour converges to the true behaviour, without ever actually reaching it, as any model remains merely a representation of reality². Most importantly, we can trust in the convergence of the model prediction with increasing model complexity and fidelity, i.e. no divergence of predictions. Engineers build systems that follow this assumption: models are comprehensible and converge³; this is what makes engineering relatively easy.
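This convergence can be made concrete with a classic pendulum refinement: the large-amplitude correction to the period. The exact period involves a complete elliptic integral, while the standard series expansion T/T₀ = 1 + θ₀²/16 + 11θ₀⁴/3072 + … adds one correction term at a time; each added term lands closer to the exact value (a sketch, independent of the specific clock pendulum above; the 30° amplitude is an illustrative choice):

```python
import math

theta0 = math.radians(30)  # swing amplitude, chosen for illustration

# Exact period ratio T/T0 = (2/pi) * K(k) with k = sin(theta0/2), where the
# complete elliptic integral K is computed via the arithmetic-geometric mean:
# K(k) = pi / (2 * AGM(1, sqrt(1 - k^2)))
k = math.sin(theta0 / 2)
a, b = 1.0, math.sqrt(1 - k * k)
while abs(a - b) > 1e-15:
    a, b = (a + b) / 2, math.sqrt(a * b)
exact = (math.pi / (2 * a)) * (2 / math.pi)

# Successive "model refinements": partial sums of the series expansion
terms = [1.0,
         theta0**2 / 16,
         11 * theta0**4 / 3072,
         173 * theta0**6 / 737280]
partial_sums = [sum(terms[:n]) for n in range(1, len(terms) + 1)]
errors = [abs(s - exact) for s in partial_sums]
# Each added term shrinks the error: predictions converge towards the truth
```

Here the errors of the successive partial sums drop by roughly two orders of magnitude per added term, the well-behaved convergence the text describes.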
In contrast, other disciplines have a harder time: their systems were not designed following the engineer’s paradigm, or were not even conceived by any human. With nature, for instance, scientists do try to isolate certain phenomena and find models of certain aspects. As an example, climate modelling is quite complex, and it took a while for models to become stable enough to make reliable predictions, such as the warming of the planet by 2100 CE given a certain fossil fuel consumption profile⁴. The more complex the system under investigation, the more complex the model has to be, too. Unfortunately, the model predictions do not necessarily exhibit the convergence property of engineering systems, e.g. economic models⁵. I have yet to see a dynamic model, in a set of differential equations with a limited set of parameters, that describes the global financial market and allows the prediction of inflation rates for the next few years and their dependence on central bank interest rates.
Bottom line: given the complexity and lack of model convergence in natural or social systems, and the convergence of models in technical systems, engineering is a comparatively easy discipline.
1. In this context, a model is understood as an abstract, idealised mathematical representation of a real entity (physical or not), mostly developed from first principles; it could equally be a “model” in the sense of convolutional neural networks. Modelling in this context is hence an extension of our innate ability to create mental representations of our environment and its behaviour in the hippocampus, which allow us to navigate the world; and these are indeed neural networks, albeit natural ones.
2. Statements such as “nature follows the laws of physics” always ring wrong to me. Nature does what nature does. Over the course of the years, humans came up with models to describe nature, or aspects of it. Claiming that nature follows human-made models or is constrained by human-made laws is confusing cause and consequence. What we mean to say is: “our understanding of nature, expressed in the laws of physics, suggests that this or that is not possible or will behave in a certain way.”
3. Note that this convergence is not to be confused with parameter sensitivity, i.e. the variance in prediction as a function of the variance of given parameter or input values. Here it is the improvement of the model prediction by adding another parameter or making the model more complex, e.g. going from a linear to a non-linear model.
4. Note that any model above a very small level of complexity is actually capable of showing chaotic behaviour, whether it is a technical, natural or social system. It takes only two rods assembled as a pendulum, or a ball bouncing on a table tennis racket, to create chaotic behaviour.
5. The problem of model reliability is exacerbated by the fact that some of these “models” are mere spreadsheets and do not undergo solid verification and peer review, e.g. Growth in a Time of Debt.