Hackers discuss use of ChatGPT, other AI tools for illegal activities: Report

While tech companies look to integrate AI technology into workflows, hackers are also looking to explore ways to incorporate AI chatbots for illegal activities, a report has said based on posts on the dark web.
As per Kaspersky’s Digital Footprint Intelligence service, there have been nearly 3,000 dark web posts mainly discussing the use of ChatGPT and other LLMs for malicious schemes, from creating nefarious alternatives to the chatbot to jailbreaking the original and beyond.
“Stolen ChatGPT accounts and services offering their automated creation en masse are also flooding dark web channels, reaching another 3000 posts,” the study by the Russian cybersecurity company added.
Kaspersky’s service discovered these posts throughout 2023 and said that the chatter peaked in March of that year.
“Threat actors are actively exploring various schemes to implement ChatGPT and AI. Topics frequently include the development of malware and other types of illicit use of language models, such as processing of stolen user data, parsing files from infected devices, and beyond,” said Alisa Kulishenko, digital footprint analyst at Kaspersky.
Alternatives to ChatGPT
The report said that the popularity of AI tools has led to the integration of automated responses from ChatGPT or its equivalents into some cybercriminal forums. It added that hackers often share jailbreaks through various dark web channels and “devise ways to exploit legitimate tools, such as those for pentesting, based on models for malicious purposes.”

Hackers are also giving considerable attention to projects like XXXGPT, FraudGPT and others, which are marketed on the dark web as alternatives to ChatGPT. Reportedly, these alternatives offer additional functionality and lack the limitations that restrict legitimate chatbots.
Stolen ChatGPT accounts are on sale
Another threat for users and companies is the market for accounts for the paid version of ChatGPT. In 2023, another 3,000 posts (in addition to those mentioned above) advertised ChatGPT accounts for sale across the dark web and shadow Telegram channels. These posts either distribute stolen accounts or promote auto-registration services that create accounts on request.
“The automated nature of cyberattacks often means automated defenses. Nonetheless, staying informed about attackers’ activities is crucial to being ahead of adversaries in terms of corporate cybersecurity,” said Kulishenko.
Kulishenko said that while AI tools are not inherently dangerous, cybercriminals are trying to come up with efficient ways of using them, which could increase the number of cyberattacks. She added, however, that generative AI and chatbots are unlikely to revolutionise the attack landscape in 2024.