Hackers can read your encrypted AI-assistant chats


Researchers at Ben-Gurion University have discovered a vulnerability in cloud-based AI assistants such as ChatGPT. According to the researchers, the flaw lets hackers intercept conversations between people and these AI assistants and read their contents, even though the traffic is encrypted.

The researchers found that chatbots such as ChatGPT stream their responses as a series of small tokens, sent one at a time as they are generated in order to speed up the experience for the user. Because each token is encrypted individually, the size of each encrypted packet reveals the length of the token inside it, a side channel that can expose the content of a response. “Currently, anybody can read private chats sent from ChatGPT and other services,” Yisroel Mirsky, head of the Offensive AI Research Lab, told Ars Technica in an email.
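
To see why token-by-token streaming leaks information, consider the minimal Python sketch below. It is an illustration, not the researchers' code: the token strings are invented, and a toy XOR stream cipher stands in for the real transport encryption, which likewise preserves the length of each payload.

```python
import os

def encrypt(plaintext: bytes) -> bytes:
    """Toy stream cipher: XOR with a fresh random keystream. Like the
    ciphers used for real streaming traffic, it hides the content of
    each packet but not its length."""
    keystream = os.urandom(len(plaintext))
    return bytes(p ^ k for p, k in zip(plaintext, keystream))

# Hypothetical assistant reply, streamed one token per encrypted packet.
tokens = ["The", " patient", " was", " diagnosed", " with", " diabetes", "."]
packets = [encrypt(t.encode()) for t in tokens]

# A passive eavesdropper sees only the packet sizes -- yet those sizes
# mirror the token lengths exactly. That sequence of lengths is the
# side channel.
print([len(p) for p in packets])   # [3, 8, 4, 10, 5, 9, 1]
print([len(t) for t in tokens])    # identical
```

The reported attack goes a step further: the researchers used a language model to guess the likely words behind an observed sequence of token lengths, reconstructing responses from encrypted traffic alone.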

Charlotte Colombo is a freelance journalist with bylines in Metro.co.uk, Radio Times, The Independent, Daily Dot, Glamour, Stylist, and VICE among others.

Related news:

Hackers exploit Windows SmartScreen flaw to drop DarkGate malware

Microsoft to Release Security AI Product to Help Clients Track Hackers