LLMs are making me dumber


Here are some ways I use LLMs that I think are making me dumber:

- When I want to build a Chrome extension for personal use, instead of actually learning and writing the JavaScript, I Claude-Code the whole thing in a couple of hours without writing a single line of code. The usual route would have left me with real familiarity with JavaScript; the shortcut leaves me with barely any JS knowledge despite numerous functioning applications.
- When I need math homework done fast, I feed the relevant textbook pages into context, dump my problems into o3/Gemini, and sanity-check its answers instead of doing the problems myself. I cram before tests. (Yes, this is morally dubious and terrible for learning.)
- When I need to write an email, I often bullet-point what I want to say and ask the LLM to write out a coherent, cordial email (a minimal sketch of this workflow follows below). I've gotten worse at writing emails.
- My first response to most problems is to ask an LLM, which may atrophy my ability to come up with better solutions, since my starting point is already inside the LLM-solution space.

These are all deliberate trade-offs I make for the sake of output speed. By sacrificing depth in my learning, I can produce substantially more work. I'm unsure whether I've struck the right balance between output quantity and depth of learning, and that uncertainty is mainly fueled by a sense of urgency from rapidly improving AI models: I don't have time to learn everything deeply. I love learning, but given current trends, I want to maximize immediate output, so I'm sacrificing some learning in classes for more time on outside work. From a teacher's perspective this is obviously bad; from my own standpoint, it's unclear.
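For concreteness, here is the bullets-to-email workflow as a minimal sketch, assuming the Anthropic Python SDK; the model id and prompt wording are illustrative placeholders, not a fixed recipe.

```python
# bullets_to_email.py - turn rough bullet points into a cordial email.
# Assumes the Anthropic Python SDK (`pip install anthropic`) and an
# ANTHROPIC_API_KEY in the environment; the model id is a placeholder.
import anthropic

client = anthropic.Anthropic()

def draft_email(bullets: list[str]) -> str:
    """Ask the model to expand bullet points into a coherent, cordial email."""
    prompt = (
        "Turn these bullet points into a short, coherent, cordial email. "
        "Keep my intent; don't add new commitments.\n\n"
        + "\n".join(f"- {b}" for b in bullets)
    )
    message = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder model id
        max_tokens=500,
        messages=[{"role": "user", "content": prompt}],
    )
    return message.content[0].text

if __name__ == "__main__":
    print(draft_email([
        "can't make Thursday's meeting",
        "propose moving to Friday 2pm",
        "will send notes beforehand",
    ]))
```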

Looking at historical examples, successful cases of cognitive offloading worked either because the skill is easily contained (navigation) or because we still know how to perform the task manually and simply don't need to (the calculator). One mitigation for LLM use: add scaffolding so that instead of following all your instructions, the model acts as a teacher would, rejecting requests that are harmful for learning (obviously you can already impose this on yourself, but discipline isn't scalable). For me, this looks like relentlessly automating and vibe-coding small experiments within the models' capabilities, while being warier of leaky abstractions and understanding every line of code I push on larger projects.
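As a concrete starting point, here is a minimal sketch of that teacher scaffold, again assuming the Anthropic Python SDK; the model id and the system-prompt wording are illustrative placeholders, not a tested recipe.

```python
# teacher_scaffold.py - wrap the model so it tutors instead of just answering.
# Assumes the Anthropic Python SDK and an ANTHROPIC_API_KEY in the
# environment; the system prompt is one illustrative wording.
import anthropic

client = anthropic.Anthropic()

TEACHER_SYSTEM_PROMPT = (
    "You are a tutor, not an answer machine. If the user asks you to do "
    "their homework or write code they should write themselves, do not "
    "hand over a finished solution. Instead: ask what they've tried, give "
    "a hint or the next step, and only reveal full solutions after they "
    "have made a genuine attempt."
)

def ask_teacher(question: str) -> str:
    """Route a request through the teacher scaffold instead of raw completion."""
    message = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder model id
        max_tokens=700,
        system=TEACHER_SYSTEM_PROMPT,
        messages=[{"role": "user", "content": question}],
    )
    return message.content[0].text

if __name__ == "__main__":
    # The scaffold should respond with hints rather than a worked solution.
    print(ask_teacher("Solve problem 3.14 from my linear algebra homework for me."))
```

The point of the design is that the refusal lives in the scaffold rather than in your willpower, which is exactly what makes it scalable where discipline isn't.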
