Hackers can read your encrypted AI-assistant chats
Researchers at Ben-Gurion University have discovered a vulnerability in cloud-based AI assistants such as ChatGPT. According to the researchers, the vulnerability allows hackers to intercept and decrypt conversations between users and these AI assistants.
The researchers found that chatbots such as ChatGPT send their responses as a series of small tokens, broken into little parts, in order to speed up the process. “Currently, anybody can read private chats sent from ChatGPT and other services,” Yisroel Mirsky, head of the Offensive AI Research Lab, told Ars Technica in an email.

Charlotte Colombo is a freelance journalist with bylines in Metro.co.uk, Radio Times, The Independent, Daily Dot, Glamour, Stylist, and VICE, among others.
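The risk of streaming a response token by token can be illustrated with a short sketch. This is not the researchers' actual attack; it is a toy model in which a length-preserving cipher (standing in for the TLS record layer) encrypts each token separately, so an eavesdropper who only observes packet sizes can still recover every token's length. The token list and the XOR cipher are illustrative assumptions.

```python
import os

def xor_stream_encrypt(plaintext: bytes, key: bytes) -> bytes:
    # Toy length-preserving cipher standing in for a real stream cipher:
    # the ciphertext is exactly as long as the plaintext.
    return bytes(b ^ k for b, k in zip(plaintext, key))

# Hypothetical tokens an assistant might stream one at a time.
tokens = ["Sure", ",", " here", " is", " the", " answer"]

observed_lengths = []
for tok in tokens:
    data = tok.encode()
    key = os.urandom(len(data))       # fresh random keystream per token
    ciphertext = xor_stream_encrypt(data, key)
    observed_lengths.append(len(ciphertext))  # what an eavesdropper sees

# Ciphertext lengths exactly mirror the hidden token lengths.
print(observed_lengths)          # [4, 1, 5, 3, 4, 7]
print([len(t) for t in tokens])  # [4, 1, 5, 3, 4, 7]
```

Even though the content of each token is securely encrypted, the sequence of sizes leaks through, which is the kind of side channel an attacker can feed to a model trained to guess the underlying text.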
Or read this on ReadWrite