Since its launch in November, ChatGPT has been a hit with the internet, racking up 1 million users within days. People have put the AI-driven natural language processing tool to work writing wedding speeches, hip-hop lyrics, academic essays and even computer code.
The cybersecurity industry, long wary of AI’s implications, is taking notice amid concerns that hackers with no technical expertise could abuse the tool.
Just weeks after ChatGPT’s debut, Check Point demonstrated how the web-based chatbot, paired with OpenAI’s code-writing system Codex, could generate a phishing email carrying a malicious payload. According to Sergey Shykevich, Check Point’s threat intelligence group manager, the exercise shows ChatGPT’s potential to transform the cyber threat landscape and accelerate the development of dangerous cyber capabilities.
Many security experts believe ChatGPT’s ability to craft convincing phishing emails – the most common way ransomware is spread – will see it widely adopted by cybercriminals, particularly those who don’t speak English as their first language.
Chester Wisniewski, a principal research scientist at Sophos, warned that ChatGPT could be exploited for social engineering attacks in which perpetrators craft convincing messages in American English.
“I’ve used it to write successful phishing lures, and I anticipate it could be employed for more realistic interactive conversations in business email compromise and social media attacks, such as those carried out over Facebook Messenger or WhatsApp,” Wisniewski told TechGround.
Chatbots that can hold convincing conversations and interactions are not far-fetched. Hanah Darley, head of threat research at Darktrace, points to ChatGPT as one example: instructed to mimic a GP surgery, it “can generate life-like text in seconds.” It’s easy to see how malicious actors could leverage such a tool to multiply their efforts.
Check Point recently warned of the chatbot’s potential to help cybercriminals write malicious code. Its researchers observed hackers with no technical skills using ChatGPT to create malicious programs, including one that steals files and compresses them for transmission over the web. Another user posted a Python script capable of encrypting a machine without any user interaction; the same user had previously been found selling access to hacked servers and stolen data.
What challenge lies ahead?
Suleyman Ozarslan, a security researcher and co-founder of Picus Security, believes ChatGPT and other similar tools will democratize cybercrime: “It’s concerning that ransomware code can be bought ‘off-the-shelf’ on the dark web, and now virtually anyone can create it themselves.”
News of ChatGPT’s ability to write malicious code has raised concerns across the industry, but some experts have moved to debunk such fears. Posting on Mastodon, the independent security researcher known as The Grugq mocked Check Point’s claim that ChatGPT “super charges cyber criminals who suck at coding.”
The Grugq noted that cybercriminals have to do much more than just deploy malware: they need to register domains, maintain infrastructure, update websites with fresh content, test software and monitor the health of their systems. They also have to stay on top of the news so their campaigns don’t end up in an article about the ‘top 5 most embarrassing phishing fails.’ In other words, actually deploying malware is only a small part of the grunt work of being a bottom-feeder in the cybercriminal world.
Many also see a positive side to ChatGPT’s ability to write code, particularly for defenders.
“ChatGPT makes it easy for defenders to generate code to simulate adversaries and automate tasks,” said Laura Kankaala, F-Secure’s threat intelligence lead. “It has been used for amazing tasks like personalized education, drafting newspaper articles, and writing computer code. But be wary: the generated text or code could have security issues or factual errors.”
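Kankaala’s point about defenders automating routine work is easy to picture in code. Below is a minimal sketch, assuming the OpenAI Python SDK and an illustrative model name, of how a security team might ask the model for a first-pass triage of a suspicious email. The triage_email helper and the prompt are hypothetical, not part of any tool mentioned above, and, as Kankaala cautions, the output should be treated as untrusted advice for a human analyst rather than something to act on automatically.

```python
# Hypothetical sketch: asking ChatGPT (via the OpenAI Python SDK) for a
# first-pass triage of a suspicious email. Model name and prompt are
# illustrative assumptions; the response is advisory only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def triage_email(raw_email: str) -> str:
    """Return the model's short, structured opinion on a suspicious email.

    Nothing here blocks, deletes or quarantines mail automatically; a human
    analyst reviews the response, which may contain factual errors.
    """
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; substitute whatever is available
        messages=[
            {
                "role": "system",
                "content": (
                    "You are assisting a security analyst. Summarise the email, "
                    "list any phishing indicators (urgency, look-alike domains, "
                    "mismatched links), and end with LIKELY-PHISHING or "
                    "LIKELY-BENIGN plus a one-line justification."
                ),
            },
            {"role": "user", "content": raw_email},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    sample = (
        "From: it-support@examp1e-corp.com\n"
        "Subject: Urgent: your password expires in 1 hour\n"
        "Click https://examp1e-corp-login.example to keep your account active."
    )
    print(triage_email(sample))
```

In practice the same pattern could be wrapped around other routine chores, such as summarising alerts or drafting detection-rule descriptions, with the same caveat that everything the model returns is reviewed by a person.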
ESET’s Jake Moore believes that as the technology progresses, ChatGPT could soon be capable of assessing possible attacks in real time and providing guidance to bolster security.
Security professionals are split on the role ChatGPT will play in the future of cybersecurity, so we put the question to the chatbot itself. In its reply, ChatGPT said its future impact on cybersecurity is uncertain and depends on how it is implemented and used, and that users should be mindful of the potential risks and take steps to safeguard against them.