Get the latest tech news

Fine-Tuning Increases LLM Vulnerabilities and Risk


Large Language Models (LLMs) have become very popular and have found use cases in many domains, such as chatbots, auto-task completion agents, and much more. However, LLMs are vulnerable to different types of attacks, such as jailbreaking, prompt injection attacks, and privacy leakage attacks. Foundational LLMs undergo adversarial and alignment training to learn not to generate malicious and toxic content. For specialized use cases, these foundational LLMs are subjected to fine-tuning or quantization for better performance and efficiency. We examine the impact of downstream tasks such as fine-tuning and quantization on LLM vulnerability. We test foundation models like Mistral, Llama, MosaicML, and their fine-tuned versions. Our research shows that fine-tuning and quantization reduces jailbreak resistance significantly, leading to increased LLM vulnerabilities. Finally, we demonstrate the utility of external guardrails in reducing LLM vulnerabilities.

View a PDF of the paper titled Increased LLM Vulnerabilities from Fine-tuning and Quantization, by Divyanshu Kumar and 2 other authors View PDFHTML (experimental) Abstract:Large Language Models (LLMs) have become very popular and have found use cases in many domains, such as chatbots, auto-task completion agents, and much more. Our research shows that fine-tuning and quantization reduces jailbreak resistance significantly, leading to increased LLM vulnerabilities.

Get the Android app

Or read this on Hacker News

Read more on:

Photo of risk

risk

Photo of llm vulnerabilities

llm vulnerabilities

Photo of tuning increases

tuning increases

Related news:

News photo

A key chemistry journal disappeared from the web. Others are at risk

News photo

Puppies, kittens, data at risk after 'cyber incident' at veterinary giant

News photo

Larian publishing director on industry layoffs: "None of these companies are at risk of going bankrupt"