Your LLM Is a Capable Regressor When Given In-Context Examples


We analyze how well pre-trained large language models (e.g., Llama2, GPT-4, Claude 3) can perform linear and non-linear regression when given in-context examples, without any additional training or gradient updates. Our findings reveal that several large language models (e.g., GPT-4, Claude 3) are able to perform regression tasks with performance rivaling, or even outperforming, that of traditional supervised methods such as Random Forest, Bagging, or Gradient Boosting. For example, on the challenging Friedman #2 regression dataset, Claude 3 outperforms many supervised methods such as AdaBoost, SVM, Random Forest, KNN, and Gradient Boosting. We then investigate how the performance of large language models scales with the number of in-context exemplars. Borrowing the notion of regret from online learning, we empirically show that LLMs are capable of obtaining sub-linear regret.
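To make the setup concrete, here is a minimal sketch (not the authors' code; the prompt template, feature naming, and the `call_llm` helper are assumptions) of how numeric in-context exemplars from the Friedman #2 dataset could be serialized into a text prompt for an LLM and compared against a supervised baseline trained on the same exemplars, plus a small helper for the regret bookkeeping mentioned above:

```python
import numpy as np
from sklearn.datasets import make_friedman2
from sklearn.ensemble import RandomForestRegressor

# Friedman #2: 4 input features, non-linear target (the dataset cited in the abstract).
X, y = make_friedman2(n_samples=101, noise=0.0, random_state=0)
X_train, y_train = X[:100], y[:100]   # in-context exemplars shown to the LLM
x_query, y_query = X[100], y[100]     # held-out query point

def to_prompt(X_train, y_train, x_query):
    """Serialize numeric exemplars as plain text; the LLM is asked to complete
    the final 'Output:' value. The exact template is an assumption."""
    lines = []
    for x_row, y_val in zip(X_train, y_train):
        feats = ", ".join(f"Feature {i}: {v:.2f}" for i, v in enumerate(x_row))
        lines.append(f"{feats}, Output: {y_val:.2f}")
    feats = ", ".join(f"Feature {i}: {v:.2f}" for i, v in enumerate(x_query))
    lines.append(f"{feats}, Output:")
    return "\n".join(lines)

prompt = to_prompt(X_train, y_train, x_query)
# llm_pred = float(call_llm(prompt))  # hypothetical LLM call, not a real API

# Supervised baseline fit on the same 100 exemplars, for comparison.
rf = RandomForestRegressor(random_state=0).fit(X_train, y_train)
rf_pred = rf.predict(x_query.reshape(1, -1))[0]
print(f"Random Forest: {rf_pred:.2f}  |  true value: {y_query:.2f}")

def cumulative_regret(llm_losses, comparator_losses):
    """Regret after t rounds: the LLM's cumulative per-round loss minus a fixed
    comparator's. Sub-linear regret means regret(t) / t -> 0 as t grows."""
    return np.cumsum(np.asarray(llm_losses) - np.asarray(comparator_losses))
```

The choice of Random Forest as the comparator and squared or absolute error as the per-round loss are illustrative; the paper evaluates a broader set of supervised baselines.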

Paper: "From Words to Numbers: Your Large Language Model Is Secretly A Capable Regressor When Given In-Context Examples," by Robert Vacareanu and 3 other authors.



Related news:

Three major LLM releases in 24 hours

Meta confirms that its Llama 3 open source LLM is coming in the next month

Hello OLMo: A truly open LLM