Hackers with AI Are Harder to Stop, Microsoft Says

Source: WSJ | Published on October 9, 2023

Cybercriminals using AI for attacks

Hackers are using AI and encryption in new ways to make cyberattacks more painful, according to new research from Microsoft.

Hackers are crafting stealthier attacks using both artificial-intelligence tools that have been on the market for a while and the generative-AI chatbots that emerged last year, said Tom Burt, Microsoft's corporate vice president for customer security and trust.

“Cybercriminals and nation states are using AI to refine the language they use in phishing attacks or the imagery in influence operations,” he said.

Meanwhile, an emerging development in ransomware shows hackers encrypting data remotely rather than within the networks they have hacked, Microsoft said. By running the encryption from a separate computer instead of on the compromised machines themselves, attackers leave behind less evidence and make it harder for companies to detect the attack and recover. Around 60% of the human-operated ransomware attacks Microsoft observed in the past year used this technique.

The new AI and encryption tools used by hackers are making it more difficult for companies to defend their networks as the number of attacks surges.

Data-exfiltration attacks, in which hackers steal data and demand ransom payments from victims, doubled between November 2022 and June 2023, Microsoft researchers found in an analysis of data from the 135 million devices it manages for customers and the more than 300 hacker groups it tracks.

Also, ransomware attacks operated by humans climbed 200% between September 2022 and June 2023, the company said in its report published Thursday. Unlike automated ransomware strikes, human-operated attacks are customized to their targets.

Hackers have been moving toward stealing data and demanding ransom in return for not leaking it, because many companies have gotten better at recovering from ransomware's encryption damage on their own, said Jake Williams, faculty at IANS Research and a former offensive hacker at the National Security Agency. "We definitely are seeing more threat actors moving toward extortion," he said.

Tech and cyber companies are quickly adding AI capabilities to their security tools, to fight fire with fire, said Lane Bess, chief executive of AI cybersecurity provider Deep Instinct. “The battle has to be escalated,” Bess said at the WSJ CIO Network Summit on Monday.

Cisco Systems’ $28 billion purchase of Splunk announced in September reflects a shift in the cyber market, where investment is going to companies focused on using AI to manage security and risk.

U.S. cybersecurity and national security officials have issued warnings about the risks of hackers using powerful AI tools to infiltrate corporate and government systems, and said the government needs to develop AI technologies to counter attacks from foreign adversaries. Cybersecurity and Infrastructure Security Agency Director Jen Easterly said in April that cybercriminals and nation-state hackers’ potential use of generative AI tools is a major threat, and there aren’t legal safeguards limiting their use. Tech executives including Elon Musk, Mark Zuckerberg and Bill Gates met with U.S. senators last month in a closed-door session on AI and potential regulation.

Hackers are using large language models similar to those behind generative-AI tools to speed up elements of cyberattacks, such as writing phishing emails or creating malware, making it easier to carry out hacks, said Lukasz Olejnik, an independent cybersecurity researcher and consultant. Large language models require huge volumes of data to train. "Some tasks that previously necessitated teams of people can now be done by single individuals," he said.

Diego Souza, chief information security officer at manufacturer Cummins, said he has seen a big increase in authentic-looking phishing emails since generative-AI tools such as OpenAI's ChatGPT came out last year. The emails now realistically mimic real companies and people, and their language is more convincing than in the past, he said. "I have seen some generative AI phishing that are just like wow," Souza said.

Cybercriminals can subscribe to underground phishing services for $200 to $1,000 a month, Microsoft found.

Sophisticated hacker groups will likely start experimenting with AI to refine tried-and-true attacks, Burt said. Phishing is still among the most common ways hackers infiltrate corporate systems, along with password spraying and brute-force attacks, two methods of breaking into password-protected accounts. "What [hackers are] looking for is: What is the cheapest way to gain access to our target?" he said.