Hack attacks are not just a nuisance; they cause costly harm and could threaten critical systems. Can they be stopped?
Excerpted from The Future Postponed, Massachusetts Institute of Technology, 2015
Howard E. Shrobe: Principal Research Scientist at Computer Science and Artificial Intelligence Laboratory
The recent cyberattack on Sony released embarrassing private emails, temporarily stalled the release of a film, and caused other reputational and economic harm. But while dramatic, this incident is hardly unusual. Hacking of computer systems, theft of commercial and personal data, and other cyberattacks cost the nation billions of dollars per year. The number of attacks is increasing rapidly, and so is the range of targets: major retailers (Target and Home Depot), newspapers (The New York Times), major banks (JPMorgan Chase), even savvy IT companies (Microsoft). The global cost of continuing to use insecure IT systems is estimated at about $400 billion per year.
Cyber insecurity also has national security implications, stemming from theft of military technology secrets. For example, China is believed to be copying designs of our most advanced aircraft and may be developing the technology to attack or disable our weapons systems through cyber means. Likewise, because computer processors linked to networks are now embedded almost everywhere in our mechanical devices and industrial infrastructure—a high-end car uses almost 100 separate processors, for example—attacks that could damage or take control of cars, fuel pipelines, electric power grids, or telecommunications networks are a proven possibility. Large-scale damage—as Sony found out—cannot be ruled out.
Are such vulnerabilities inevitable? It might seem so, because of the complexity of computer systems and the millions of lines of software “code” that direct them—and given that a single programming mistake could result in a major vulnerability. And if that were so, then the only strategy would seem to be changing passwords and other cybersecurity good practices, sharing risk information, and a never ending sequence of “patch and pray”. But there is good reason to believe that fundamentally more secure systems—where security is built in, and doesn’t depend on programmers never making mistakes or users changing their passwords—are possible.
One fundamental cause of cyber insecurity is a set of core weaknesses in the architecture of most current computer systems that are, in effect, a historical legacy. These architectures have their roots in the late 1970s and early 1980s, when computers were roughly 10,000 times slower than today's processors and had much smaller memories. At that time, nothing mattered as much as squeezing out a bit more performance, and so enforcing certain key safety properties (having to do with the way in which access to computer memory is controlled and the ability of operating systems to differentiate among different types of instructions) was deemed to be of lesser importance. Moreover, at that time most computers were not networked and the threat environment was minimal. The result is that widely used programming languages such as C and C++ have features such as memory buffers that are easy to inject malicious code into, along with other structural flaws. Today's world is quite different, and priorities need to change.
A second fundamental cause of cyber insecurity is a weakness in our means of identifying individuals and authorizing access, which today mostly comes down to a typed-in password. Cyber criminals have developed means of exploiting human laziness and credulity to steal such credentials—guessing simple passwords, sending bogus emails that get people to reveal their passwords, and similar tricks. Even more sophisticated passwords are not really safe: a contest at a DEFCON conference several years ago showed that sophisticated password-guessing software could guess 38,000 of 53,000 passwords within a 48-hour period. It often takes only one such theft to gain access to a machine within a corporate setting or a government agency; this machine, in turn, is accorded greater trust than an outside machine, allowing the attacker to then gain access to other machines within the organization.
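The mechanics of such password guessing are simple. To give a sense of why weak passwords fall so quickly, here is a minimal sketch (not the contest software described above) of a dictionary attack against a stolen list of unsalted SHA-256 password hashes; the function name, word list, and example passwords are all hypothetical illustrations:

```python
import hashlib


def dictionary_attack(stolen_hashes, wordlist):
    """Hash every candidate word once, then look up each stolen hash.

    Returns a dict mapping recovered hashes to the plaintext passwords.
    With unsalted hashes, one pass over the wordlist cracks every user
    who chose a dictionary word.
    """
    lookup = {hashlib.sha256(w.encode()).hexdigest(): w for w in wordlist}
    return {h: lookup[h] for h in stolen_hashes if h in lookup}


# Hypothetical breach: two users picked common dictionary words.
wordlist = ["password", "letmein", "dragon", "sunshine", "qwerty"]
stolen = [hashlib.sha256(p.encode()).hexdigest() for p in ["letmein", "sunshine"]]

recovered = dictionary_attack(stolen, wordlist)
print(sorted(recovered.values()))  # → ['letmein', 'sunshine']
```

Real attack tools extend this idea with large word lists, mutation rules, and GPUs that test billions of candidates per second—which is why guessable passwords offer so little protection.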
Both of these fundamental weaknesses could be overcome, if we decided to do so, by redesigning computer systems to eliminate structural cybersecurity flaws, using well-understood architecture principles—a conceptually simple project but difficult and costly to implement because of the need to replace legacy systems—and by introducing what are called multi-factor authentication systems for user access. This latter fix is far easier—a computer user would be required both to have a password and to provide at least one other source of identity proof. This could include a smart card or other physical token; a second password generated and sent in real time for the user to enter (such as a PIN sent by text to the user's mobile phone, a system already offered by Google for its email and required by many banks for certain transactions); or a biometric ID such as the fingerprint reader in Apple's new iPhones. Breaking such a system requires the theft of the token or second ID, or the physical capture of the computer user, and would be almost impossible to do on a large scale.
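The one-time PINs described above are typically derived from a shared secret and the current time, so the code is useless seconds after it is issued. As one concrete sketch, the time-based one-time password algorithm standardized in RFC 6238 (one common scheme, though not the only way second factors are implemented) can be written in a few lines of Python:

```python
import hashlib
import hmac
import struct
import time


def totp(secret: bytes, for_time=None, digits: int = 6, step: int = 30) -> str:
    """Time-based one-time password (RFC 6238, HMAC-SHA1 variant).

    The secret is shared once between server and device; afterwards both
    sides can compute the same short-lived code independently.
    """
    counter = int(time.time() if for_time is None else for_time) // step
    msg = struct.pack(">Q", counter)                # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)


# RFC 6238 published test vector: ASCII secret "12345678901234567890"
# at time T=59 seconds yields the 8-digit code "94287082".
print(totp(b"12345678901234567890", for_time=59, digits=8))  # → 94287082
```

Because each code expires within the 30-second window, a stolen password alone no longer grants access—the attacker would also need the device holding the secret.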
Several research activities would make such a transition to a cybersecure world much easier and more feasible. These include:
The design of a new prototype computer system that can run the bulk of today’s software and that is demonstrated through rigorous testing and/or formal methods (i.e. mathematical proofs of correctness) to be many orders of magnitude more secure than today’s systems.
Economic/behavioral research into incentives that could speed the transition to such new architectures and the adoption by consumers of multi-factor authentication. At present, the cost of providing more secure computer systems would fall primarily on the major chip vendors (Intel, AMD, Apple, ARM) and the major operating system vendors (Apple, Google, Microsoft), without necessarily any corresponding increases in revenue. Consumers, too, will likely require incentives to adopt new practices. There has been little research into the design of such incentives.
Consideration of how continued cyber insecurity or the introduction of new, more secure cyber technologies would impact international relations. What national security doctrines make sense in a world where virtually all nations depend on vulnerable cyber systems and in which it is virtually impossible to attribute an attack to a specific enemy with certainty? Deterrence—as used to prevent nuclear war—is not a good model for cybersecurity, because cyber conflict is multilateral, lacks attribution, and is scalable in its impacts.
The opportunity exists to markedly reduce our vulnerability and the cost of cyberattacks. But current investments in these priority areas, especially in non-defense systems, are either non-existent or too small to enable development and testing of a prototype system with demonstrably better security and with performance comparable to commercial systems. Small-scale efforts have demonstrated that new, clean-slate designs offer a way out of the current predicament. But a sustained effort over multiple years would be required.