In our recent article about the implications of a war with Iran, we mentioned Iran’s potential to mount a bloodless cyber-attack against us, with their hackers attacking our infrastructure’s computers from the comfort of their homes and offices in Iran, rather than soldiers attacking us more directly.
It is our feeling that few people appreciate the dangers and risks of a cyber-attack, and in the last couple of days there have been a couple of interesting news items that help to put this in context. We discuss these below.
But, first, as a quick summary of cyber-vulnerabilities, do you remember the fuss about the Y2K bug? That concern happily did not translate into a nightmare reality – not because the concern was unfounded, but because of the enormous efforts (and many billions of dollars) spent rewriting and updating software in the several years before that date, once people realized there was a problem that would otherwise occur.
The concern back then was what would happen if all sorts of computers started to malfunction due to date logic errors – computers as diverse as those that operate lifts, those that operate food refrigeration facilities, and so on. Think of anything you do in your life today, and you’ll quickly find some sort of computer/controller is directly related to the smooth experience you expect and usually enjoy. Indeed, we challenge you to think of something that is moderately important in your life which could work if the ‘behind the scenes’ computers malfunctioned.
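To see how a mere date-format shortcut could cause such chaos, here is a minimal sketch (our own illustration, not code from any real system) of the two-digit-year arithmetic at the heart of the Y2K problem, together with the ‘windowing’ style of fix that much of that remediation money paid for:

```python
# Many legacy systems stored years as two digits ("99" for 1999).
# Naive interval arithmetic on those values breaks when the century rolls over.

def years_between(start_yy, end_yy):
    """Buggy legacy-style calculation using raw two-digit years."""
    return end_yy - start_yy

def years_between_fixed(start_yy, end_yy, pivot=50):
    """A 'windowing' fix: two-digit years below the pivot are treated as
    20xx, and the rest as 19xx, before doing the arithmetic."""
    def expand(yy):
        return 2000 + yy if yy < pivot else 1900 + yy
    return expand(end_yy) - expand(start_yy)

# A maintenance interval running from 1998 ("98") to 2000 ("00"):
print(years_between(98, 0))        # -98 -- the bug: a negative interval
print(years_between_fixed(98, 0))  # 2   -- the correct answer
```

A controller that computed a negative service interval, or a negative food-storage age, could easily shut equipment down or behave unpredictably – which is exactly what the remediation effort was racing to prevent.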
The insidious nature of cyber-attacks is that to defend against them, the computer systems being attacked must be 100% invulnerable and bug-free. As you surely know, the 100% perfect, bug-free computer program or operating system does not exist. Such paragons of computing perfection may have existed decades ago, when computers were very much simpler. But nowadays, with millions of lines of programming in modern computer programs, and many more millions of different combinations of scenarios/events, it is close to impossible to make software bug-free. As evidence of this impossibility, a decade or two ago software developers rewrote their guarantees: they no longer warrant their software to be bug-free, and they certainly disclaim any liability for any problems arising from bugs in their software.
Because we don’t know what, where, and how such bugs exist and can be exploited (if we did know, the bugs would presumably be resolved), it is very hard to safeguard computers from cyber-attack. Even if we completely disconnect computers from the internet, they are not truly isolated. The underlying operating system and the even lower-level firmware and BIOS type programming built into the actual hardware all had to come from somewhere – there are plenty of examples of infected distribution disks that people have used to load computer operating systems onto fresh new computers, or infected software direct from a manufacturer, or of actual hardware with ‘back doors’ (see below) deliberately engineered into them.
The vulnerabilities continue. Every time a person shares a file, there is a chance that the file carries some sort of infection. Even a simple, safe-seeming word processing document can contain embedded programs (macros) these days.
An Example of a Current Cyber-Attack
With that as background, it is helpful to see the latest real-world example of a military style cyber-attack. As we mentioned in our earlier article about Iran, while Iran is one of only five nations known to have a cyber army, Iran is – to date – more notable for having been on the receiving end of cyber-attacks than for generating them. The Stuxnet virus was the highest profile (but not the only) example of a cyber-attack on Iran when our article was written, but now news has come out of a newer, more sophisticated attack, using what is termed the Flame virus.
Here’s a good article about what Flame is; be sure to look at the graphic that sets out some of the things this virus can do as well. Amazingly, it seems that the Flame virus has been ‘in the wild’ – ie, out there, infecting computers, and collecting/distributing data – for anywhere between two and five years, and is only now being publicly disclosed.
At present, the big difference between Stuxnet and Flame is that the former was used to destroy equipment controlled by virally infected controllers (here’s an explanation), whereas the latter currently operates in an intelligence gathering mode. But who knows what else Flame might be capable of, and who knows what other independent infections Flame may have subsequently created in the machines it now inhabits.
Our point is simply this. If a country as ‘closed’ as Iran, a country that has already been subject to past cyber-attacks, can be re-infected again and again with viruses, and if it can take up to five years for these infections to be discovered, who knows what is already residing on key computers here in the US, let alone what might infect them in the future?
Planes Falling From the Skies – An Example of a Potential Risk
Now for something a bit closer to home. Until recently, all planes were controlled mechanically. The pilot would turn the control column in the cockpit, and a series of cables would then carry that movement back to the ailerons, elevators and rudder, making them move in direct response to the movement of the control column. Similarly, moving the throttle levers on the quadrant in the cockpit also directly controlled the engines.
It used to be the same in our cars, too. In nearly all cases, our brakes are still directly controlled, albeit with ‘power’ boosting systems, and the same goes for the steering; but most modern cars these days no longer have a physical link between the gas pedal and the carburetor (of course, most cars don’t even have a carburetor now; they use fuel injection instead).
The reason our cars still have direct links between the controls we operate and the wheels is for safety. There’s much less that can go wrong with a mechanical series of levers and rods and wires.
With planes as with cars, the increasing complexity of modern engines has seen the mechanical linkage between throttle levers and the engines replaced by computer controls. Your foot on the gas pedal, or the pilot’s movement of the throttle lever, merely sends a signal to an engine management computer that you want more or less power; the computer then decides how to interpret that request, not just adjusting the fuel flow but also adjusting timings, compression levels, and possibly gear selections too. This makes our cars (and planes) run more smoothly and fuel-efficiently, and is generally a good thing.
For a plane, movements of the flight controls – the control column – also interact with the plane’s speed and need for engine power, in a complex and changing relationship that depends on many factors; so airplane manufacturers are replacing the previous mechanical linkages to the flight control surfaces on the plane’s wings, rudder and elevator with computerized controls.
Now, when the pilot moves the stick back and to the right, the computer considers that instruction and decides how best to interpret it, using an optimized combination of engine setting adjustments and movements of all three primary flight control surfaces, as well as of secondary control surfaces (trim tabs, air brakes, etc). The computer is supposed to be cleverer than the pilot, and won’t allow dangerous flying commands to be accepted (although usually there is a manually selectable mode in which the computer is told to obediently do everything exactly as instructed, even if it thinks the command is wrong). The flight controls on a modern Airbus plane are almost exactly the same as the joystick and throttle lever you can buy at a computer store to connect to your computer and play a flight simulator game.
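As a very loose illustration of this ‘the computer decides whether to obey’ behavior, here is a toy sketch (entirely our own; the limits and logic are invented, and real avionics software is vastly more complex and written to far stricter standards) of protection applied to a single pitch command:

```python
# Toy sketch of fly-by-wire "envelope protection": the flight computer
# clamps the pilot's pitch command to safe limits, unless the pilot has
# selected a direct-control mode. All numbers are invented for illustration.

MAX_PITCH_CMD = 15.0   # degrees nose-up, hypothetical safe limit
MIN_PITCH_CMD = -10.0  # degrees nose-down, hypothetical safe limit

def interpret_pitch(pilot_command_deg, direct_mode=False):
    """Return the pitch command actually sent to the control surfaces."""
    if direct_mode:
        # "Do exactly as instructed" mode: no protection applied.
        return pilot_command_deg
    # Normal mode: clamp the command inside the protected envelope.
    return max(MIN_PITCH_CMD, min(MAX_PITCH_CMD, pilot_command_deg))

print(interpret_pitch(30.0))                    # 15.0 -- clamped to the limit
print(interpret_pitch(30.0, direct_mode=True))  # 30.0 -- obeyed verbatim
```

The point of the sketch is simply that there is a layer of software deciding what the control surfaces actually do – and anything that corrupts that layer corrupts the plane’s behavior.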
Fly by Wire Introduces Vulnerabilities as well as Conveniences
This new type of airplane control is called ‘fly by wire’ in the sense of flying by computer control rather than by direct pilot control. It is usually considered to be a good thing, although there are possible cases where a ‘miscommunication’ between the pilots and the flight computer may have resulted in airplane crashes (most recently the Air France flight AF447 that crashed in the Atlantic en route from Rio de Janeiro to Paris in June 2009).
However, what happens if the computer that interprets the pilot’s requests, and decides how to translate a movement of the joystick into changes in the airplane’s control surfaces and engine power settings, deliberately does the wrong thing? What if the request for the computer to do exactly what the pilot is asking is ignored? Or what if the computer misunderstands what the pilot is asking? This sounds like the HAL 9000 computer from the movie/book 2001: A Space Odyssey, and indeed, that is a great example of the possible outcomes.
The famous ‘blue screen of death’ crash in Windows could become a literal blue screen of death on a plane, with a misbehaving computer causing the sea to fill the pilots’ windshield as the plane plunges unstoppably out of the sky and into the ocean beneath (as happened with AF447).
It is rather scary that we now risk our lives on planes controlled by computers when we know, from personal experience, that computer ‘crashes’ are common events. The number of fly-by-wire airplanes is increasing, not only with every new Airbus plane sold, but now with new Boeing planes also being fly-by-wire.
We have been talking about inadvertent errors and logic bugs. What if the computer controllers were deliberately infected with malicious code designed to make planes crash unstoppably? What if, for example, these controllers carried a virus that said ‘at exactly a particular time on a particular day, move engine power to maximum and put the plane into a crash dive’, so that at the same instant, all around the world, hundreds (more likely, thousands) of planes simultaneously went into nose dives and crashed into whatever was below, in every case killing everyone on board?
That could never happen, right? Wrong! It is all too easy to see how such a thing could happen. Maybe while we are protecting our airports and airplanes with metal detectors and scanners to check the passengers, the real threat to our aviation system is something very different indeed – an ‘invisible’ passenger – a cyber threat that the airport security guards have no way of detecting.
For a specific example of a specific vulnerability, please see this article about how one of the control chips in modern military and civilian planes has a ‘back door’ written into it – a way for instructions to be secretly inserted into its control code, bypassing the normal way of doing so and the controls/restrictions placed on that normal way.
Think of this back door as being like a secret passage in an old house. If you know exactly where to press the secret opening lever, all of a sudden, a wall in the study/library swings open, and you can then roam around the house at will, using secret spy holes to peek in on what people are doing in other rooms in the house, and using other secret doors to appear in other parts of the house unexpectedly. Other people in the house might suspect there are secret passages, but if they don’t know exactly where and how to press the hidden lever, they’ll never get into the secret passages.
It is the same with computers. There might be an entire set of instructions hidden inside a computer chip, lying dormant until some trigger event occurs, at which point these extra instructions suddenly start executing. A similar thing is relatively common for benign purposes – what are called ‘Easter eggs’: hidden extra routines in programs that you can trigger if you know exactly what set of key strokes to enter. Here is one such list of computer Easter eggs to give you examples of what they are and how they appear.
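In software terms, a back door or Easter egg is simply a hidden branch of code waiting for its trigger. Here is a deliberately simplified, entirely hypothetical sketch (the trigger string and behavior are invented for illustration; nothing here is from any real chip or product) of how a few extra lines can lie dormant inside otherwise normal code:

```python
# Hypothetical sketch of a hidden routine inside otherwise normal code.
# It illustrates how extra instructions can sit dormant until a specific
# trigger value appears, while everyday inputs behave entirely normally.

SECRET_TRIGGER = "open-sesame"   # invented trigger for illustration

def handle_command(command):
    if command == SECRET_TRIGGER:
        # The hidden branch: normal users and normal testing never see
        # this path, because they never supply the exact trigger value.
        return "back door activated: full access granted"
    # The documented, visible behavior everyone expects.
    return "normal handling of: " + command

print(handle_command("status"))       # behaves normally
print(handle_command("open-sesame"))  # the hidden branch executes
```

Notice that no amount of ordinary use would ever reveal the hidden branch; only someone who already knows the trigger (or who reads every line of the code or circuitry) can find it – which is exactly why back doors in chips are so hard to detect.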
The article also obliquely and delicately points out a vulnerability that impacts nearly every piece of computer control circuitry these days. Although the chips may be designed and developed in the US or another ‘friendly’ country, they tend to be manufactured in a third country outside of our direct control.
What is to stop the chip manufacturer (in this particular case, in China) from deliberately changing part of the specification so as to create an obscured ‘back door’ for future exploitation? With millions of transistors and other devices on a single chip, and space for thousands/millions of lines of built-in programming, how can such vulnerabilities be completely tested for prior to deployment of each batch of chips?
Implications for Preppers
We’re not saying don’t fly on modern planes. And we’re not saying turn off every computer controlled device in your home, office, car, retreat, wherever.
We’re simply pointing out that there are unseen and unthought-of risks and vulnerabilities in our lives that could suddenly create major havoc in the world as we currently know and enjoy it. A Y2K-bug type of scenario might be unleashed upon us by a foreign power, and with even a small part of our computer controlled lives destroyed, our entire lives could be upended. Kill the computers that manage our water system. Or the computers that manage oil refineries and pipelines. Or the computers that run the electricity grid.
What would you do if water no longer appeared by magic every time you turned on a tap or flushed a toilet? What would food processors do without water, too?
If we lost the ability to refine and transport bulk oil/gas products, how would you get to work each day? No cars, no buses. If your business has to close down, how will that impact other businesses that rely on its products/services (assuming they haven’t already had to close down too)? And how would food get to the supermarket without trucks to transport it there? Even if it got there, how would you go to the supermarket to get the food and bring it home? And all those oil and gas-fired power stations? Take those away and our electricity supply starts to crash, even without needing to infect computer control systems for the electricity grid.
Modern society is an example of the old rhyme ‘For want of a nail, the kingdom was lost’. With all the layers of interlocking dependencies that go into every part of our lives, if a single one of those dependencies should fail, the whole lot might fail.
There’s nothing we can individually do about this. But we can, individually and in our families and communities, prepare for the consequences of a failure.