DarkSide, the hacking group that shut down a key U.S. oil pipeline earlier this month, has collected more than $90 million in hard-to-trace Bitcoin from 47 victims, according to the blockchain analytics firm Elliptic.
The pipeline hack ended only after Colonial Pipeline Co. paid nearly $5 million in ransom to regain control of the computer systems needed to supply gasoline to much of the eastern U.S. The incident was widely dubbed a “wake-up call” to batten down loose digital hatches. Following the subsequent release of President Joe Biden’s “Executive Order on Improving the Nation’s Cybersecurity,” the Department of Homeland Security is now moving to regulate cybersecurity in the pipeline industry. The Transportation Security Administration is expected to issue mandatory rules and reporting requirements for safeguarding pipelines against cyberattacks.
But there are key gaps. In all the recent reporting on cyberattacks, there’s been scant coverage of how they actually occur. You’d almost think that bad guys are breaking into corporate data centers in the dead of night armed with sinister thumb drives, or sneaking lines of malevolent code past snoozing information security officers. It’s as if malware materializes spontaneously on a server, then worms its way in to seize control of operational assets.
Companies are reluctant to correct the misimpressions by discussing the details of a breach, because it makes for terrible press and inevitably reveals some sloppy security. The absence of information creates a sense of bystander apathy, leaving many in the industry unprepared for the next attack.
In real life, corporate servers are often breached through remote login services as employees connect to the office from compromised home networks. Once an attacker has gained an initial foothold in an enterprise network, other hacking tools can be used to exploit software flaws and infiltrate critical control systems. The rise of remote work during the pandemic has drastically expanded this attack surface.
Most people don’t think of their personal computers as vectors for infectious malware, but that’s what they are. Laptops are thought of as places to store private photos and files, and manufacturers tend to downplay the vulnerabilities. It came as a surprise last week when Apple’s senior vice president of software engineering, Craig Federighi, admitted that Mac has a malware problem. According to Federighi, there have been 130 types of Mac malware in the past year, one of which infected 300,000 systems. And all this is coming from a company that historically advertised its machines as a more secure alternative to Microsoft Windows.
Brutal honesty could encourage greater consumer vigilance. In 2016, the comedian John Oliver featured a satirical clip of Apple engineers scrambling to put out fires and patch software vulnerabilities while a malicious hacker steals intimate photos from user devices. It’s a fairly accurate depiction of the challenges of information security, where a few engineers must hold off potential hackers in 24 different time zones.
The lack of transparency is not just the fault of corporate public relations. Software vulnerabilities are often kept secret for national security purposes. Nobody likes to talk about it, but the U.S. government exploits security flaws all the time for intelligence-gathering and counterterrorism measures. The National Security Agency and Central Intelligence Agency notoriously stockpile hacking tools, many of which have fallen into the wrong hands. In 2019, hackers used a leaked NSA exploit to disrupt government services in Baltimore.
Biden’s executive order addresses part of the problem by directing the migration of government data and services from local servers to the cloud. A reputable cloud-hosting provider has full-time staff monitoring the infrastructure and staying on top of security updates, so newly disclosed vulnerabilities can be patched immediately.
This may be sensible for government agencies, but perhaps not for the private companies operating critical infrastructure. The cloud computing market is dominated by three players: Google Cloud, Microsoft Azure and Amazon Web Services. Greater dependence on tech giants would make the internet more susceptible to catastrophic failure by concentrating prime hacking targets in a handful of providers. The internet was designed as a distributed communications system that could survive a nuclear strike; today, a malfunction at a single major cloud provider can disable services for the entire country. Also, people concerned about the increasing monopoly power of big tech would have their own reasons to object.
The TSA’s new cybersecurity rules will probably draw from the Cybersecurity Framework maintained by the Commerce Department’s National Institute of Standards and Technology. The framework, prompted by an executive order signed by President Barack Obama in 2013, establishes industry best practices for cyber risk management, but adherence has been limited because implementation requires a huge investment. Security measures are easy to undervalue because the consequences of sloppiness are unknowable. Laziness is a competitive advantage until the day the bad guys strike.
Even with TSA-enforced security standards, the industry would benefit from greater transparency about breaches and software vulnerabilities. Cybersecurity ultimately comes down to human behavior, and people are prone to cut corners when they underestimate risk. The worst outcome would be for cybersecurity to turn into a checkbox-ticking exercise like the pointless ritual that we suffer at the airport.