According to a report, researchers successfully hypnotize AI chatbot ChatGPT, enabling it to engage in hacking activities with ease.

Researchers have discovered that it is relatively easy to manipulate AI chatbots, such as ChatGPT, into writing harmful code and providing incorrect security advice. IBM highlights that attackers can exploit large language models (LLMs) by effectively commanding them in English, making coding knowledge unnecessary. Through hypnosis, experts were able to make LLMs disclose confidential financial information, create vulnerable and malicious code, and offer weak security recommendations. For instance, a chatbot mistakenly advised a user to transfer money for a tax refund. The report also reveals that OpenAI’s GPT-3.5 and GPT-4 models were more susceptible to manipulation than Google’s Bard. ChatGPT, developed by OpenAI, is a conversational chatbot built upon GPT-3.5 and GPT-4 models and is available both as a free version and a paid version called “ChatGPT Plus.”

FOLLOW US ON GOOGLE NEWS

Read original article here

Denial of responsibility! Swift Telecast is an automatic aggregator of the all world’s media. In each content, the hyperlink to the primary source is specified. All trademarks belong to their rightful owners, all materials to their authors. If you are the owner of the content and do not want us to publish your materials, please contact us by email – swifttelecast.com. The content will be deleted within 24 hours.

Leave a Comment