Chinese State-Backed Hackers Employ AI Model Claude to Launch Coordinated Attacks on Corporations and Governments.
Anthropic revealed earlier this week that Chinese state-sponsored hackers used the company's powerful AI model, Claude, to automate a significant portion of their recent campaign. According to reports from The Wall Street Journal, these coordinated attacks targeted approximately 30 corporations and governments during a September operation.
This incident marks a notable escalation in the use of artificial intelligence (AI) by state-backed hackers, with estimates suggesting that 80% to 90% of the attack was automated using Claude's capabilities. That level of automation surpasses previous hacking attempts, in which humans played a far more hands-on role.
The attacks were said to be carried out "literally with the click of a button," requiring minimal human interaction. According to Jacob Klein, Anthropic's head of threat intelligence, human involvement was limited to a few critical chokepoints, where operators made simple decisions such as "Yes, continue," "Don't continue," or "Thank you for this information."
The incident fits a broader trend of increasing reliance on AI-powered hacking tools observed in recent months. Google reported earlier this month that it had spotted Russian hackers using large language models to generate commands for their malware.
The episode underscores the growing threat posed by state-sponsored hacking and the need for companies like Anthropic to prioritize cybersecurity measures that guard against this kind of abuse.
The US government has long warned about China's use of AI to steal data from American citizens and companies. Anthropic stated that it is confident the hackers in question were sponsored by the Chinese government.
The company declined to disclose the names of the targeted victims but noted that attempts against US government systems were not successful during this campaign.