
I. From GPT-4 to AGI: Counting the OOMs


AGI by 2027 is strikingly plausible. GPT-2 to GPT-4 took us from ~preschooler to ~smart high-schooler abilities in 4 years. Tracing trendlines in compute (~0.5 orders of magnitude or OOMs/year), algorithmic efficiencies (~0.5 OOMs/year), and “unhobbling” gains (from chatbot to agent), we should expect another preschooler-to-high-schooler-sized qualitative jump by 2027. Look. The models, they just want to learn.
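To make the trendline arithmetic concrete, here is a rough back-of-the-envelope sketch in Python. The ~0.5 OOMs/year figures for compute and algorithmic efficiency come from the text above; the 2023-to-2027 window, and treating the two contributions as simply additive in log-space while leaving unhobbling gains uncounted, are illustrative assumptions rather than figures from the source.

```python
# Back-of-the-envelope "counting the OOMs" arithmetic.
# Per-year rates are taken from the text above; the year range and the
# additive log-space treatment are illustrative assumptions.

COMPUTE_OOMS_PER_YEAR = 0.5   # physical compute scaleup
ALGO_OOMS_PER_YEAR = 0.5      # algorithmic efficiency gains

def effective_compute_ooms(years: float) -> float:
    """Orders of magnitude of effective compute gained over `years`,
    treating compute and algorithmic progress as additive in log-space."""
    return years * (COMPUTE_OOMS_PER_YEAR + ALGO_OOMS_PER_YEAR)

if __name__ == "__main__":
    years = 2027 - 2023  # GPT-4 era to the hypothesized 2027 jump (assumed window)
    ooms = effective_compute_ooms(years)
    print(f"~{ooms:.1f} OOMs of effective compute over {years} years "
          f"(roughly a {10**ooms:,.0f}x scaleup), before counting unhobbling gains")
```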

Industry insiders seem very bullish that the data wall will be overcome. Dario Amodei (CEO of Anthropic) recently said on a podcast: “if you look at it very naively we’re not that far from running out of data […] My guess is that this will not be a blocker […] There’s just many different ways to do it.” Of course, any research results on this are proprietary and not being published these days.

The other big lever is unhobbling: teaching models an agentic “outer loop” of long-horizon reasoning. If we succeed at teaching this outer loop, then instead of a short chatbot answer of a couple of paragraphs, imagine a stream of millions of words (coming in more quickly than you can read them) as the model thinks through problems, uses tools, tries different approaches, does research, revises its work, coordinates with others, and completes big projects on its own (see the sketch below).

It seems plausible that the schlep will take longer than the unhobbling: by the time the drop-in remote worker is able to automate a large number of jobs, intermediate models won’t yet have been fully harnessed and integrated, so the jump in economic value generated could be somewhat discontinuous.
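The “outer loop” described above can be pictured as a plan-act-observe-revise cycle. Below is a deliberately minimal, hypothetical sketch, not the author’s design and not any real API: `call_model` and `run_tool` are placeholder stubs standing in for a language-model call and a tool execution.

```python
# Minimal, hypothetical sketch of an agentic "outer loop":
# the model repeatedly plans a step, acts (e.g. calls a tool), observes
# the result, and revises its notes, instead of returning one short answer.
from dataclasses import dataclass, field

@dataclass
class AgentState:
    goal: str
    scratchpad: list[str] = field(default_factory=list)  # the model's running notes
    done: bool = False

def call_model(state: AgentState) -> str:
    """Placeholder stub: ask the model for its next action given goal and notes."""
    raise NotImplementedError

def run_tool(action: str) -> str:
    """Placeholder stub: execute a tool call (search, code execution, etc.)."""
    raise NotImplementedError

def outer_loop(goal: str, max_steps: int = 1000) -> AgentState:
    state = AgentState(goal=goal)
    for _ in range(max_steps):
        action = call_model(state)       # plan the next step
        if action == "FINISH":
            state.done = True
            break
        observation = run_tool(action)   # try an approach, do research, etc.
        state.scratchpad.append(f"{action} -> {observation}")  # revise working memory
    return state
```

The point of the sketch is structural: the gain comes from letting the model iterate over many steps and tools on its own, rather than emitting a single short reply.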
