Hacker groups linked to North Korea, China, Russia, Iran, and other states have been confirmed to be using the generative artificial intelligence (AI) service ChatGPT, demonstrating that generative AI can be turned to criminal ends and underscoring the need for security measures to prevent such abuse.
According to a New York Times (NYT) report on the 14th (local time), OpenAI, the developer of ChatGPT, and Microsoft (MS) detected these hacker groups' attempts to access their services and blocked them.
According to MS, a hacker group linked to Russia used ChatGPT to research satellite communication and radar technology related to the war in Ukraine. A hacker group linked to Iran's Islamic Revolutionary Guard Corps used ChatGPT to look for ways to bypass computer security systems; it also used ChatGPT to write phishing emails targeting feminist activists and to pose as international development organizations.
However, MS explained that, contrary to what some experts had feared, there were no instances of hackers using AI to devise previously unthinkable attack methods. Bob Rotsted, head of security at OpenAI, said, "There is no evidence yet that hackers affiliated with adversarial nations have discovered ways to attack using OpenAI that are more innovative than what ordinary search engines offer."
Their use of ChatGPT amounted to drafting emails, translating documents, and fixing errors in computer programs. Tom Burt, head of security at MS, said, "Hackers also used OpenAI to boost productivity, just like ordinary computer users." OpenAI can track the locations of ChatGPT users, but the hackers were found to have manipulated their IP addresses to mask their locations while otherwise using ChatGPT like ordinary users.