Researchers Demonstrate Methods to Exploit ChatGPT and Bard for Malicious Purposes

Researchers have discovered that generative AI models such as ChatGPT and Bard can be manipulated to assist in cyber scams and attacks. These models can be easily tricked without the need for extensive coding knowledge. IBM researchers have found simple ways to make large language models write malicious code and give poor security advice. By using hypnosis techniques, the researchers were able to make the AI models leak confidential financial information, create vulnerable and malicious code, and provide weak security recommendations. English has become a programming language for malware, allowing attackers to command AI models using English instead of traditional programming languages. OpenAI’s GPT-3.5 and GPT-4 were more easily tricked than Google’s Bard into sharing incorrect answers and engaging in never-ending games. GPT-4 even gave incorrect advice on cyber incident response, such as advising victims to pay a ransom.

FOLLOW US ON GOOGLE NEWS

Read original article here

Denial of responsibility! Swift Telecast is an automatic aggregator of the all world’s media. In each content, the hyperlink to the primary source is specified. All trademarks belong to their rightful owners, all materials to their authors. If you are the owner of the content and do not want us to publish your materials, please contact us by email – swifttelecast.com. The content will be deleted within 24 hours.

Leave a Comment