Against Slopthink
Early AI adopters spent all of 2023 and 2024 trying stuff and making mistakes before finally figuring out what LLMs are good for. Now that these tools have worked their way down to the normies, I’m seeing new people make the same mistakes all over again.
AI is good for a lot of stuff, but it’s no substitute for original thinking and personal communication.
Dilution
Every week I see writing - an email, a blog post, a task description, etc. - from someone I know personally that doesn't sound anything like them. It's well-written (sometimes suspiciously so) but without any personality. It doesn't use the shorthand phrases and style that I'm accustomed to from that person.
It’s also usually way too long, full of extraneous details that both of us know. And at the same time, it’s often missing (or breezing over) the very few elements that are actually worth emphasizing: the whole reason we need to communicate in the first place.
This is AI idea dilution. Someone feeds three bullets to the AI, which turns them into fluffy paragraphs, which the reader then has to summarize to get the original three bullets back (maybe even using AI to summarize).
It should be obvious, but if your message can be conveyed clearly in three bullets, just send the bullets.
Many people seem to believe that good writing means long sentences and obscure words, but writing for communication is all about getting your idea across effectively. That usually means short sentences, simple well-chosen words, and empathy for the reader.
If your bullets are so confusing that a human can’t understand them, an LLM won’t understand them either; it’ll just make them into confusing sentences that take longer to read. And the human on the other end will still have to ask you what it means.
AI for proofreading and feedback is fine, but it shouldn’t be producing any of your written communication itself.
Rubberstamping
AI isn’t going to talk you out of your ideas, so it’s not a good thought partner. It can be helpful when you need information, but I’m not convinced it can help you think through problems. It just gives the illusion of being good at that by saying smart-sounding things and then signing off on your ideas.
I see people "brainstorming" with LLMs by just feeding it their idea, asking it for feedback, and then arguing with its critiques. Of course the AI will eventually agree with you; current models are still exceptionally sycophantic, especially when you push them.
It’s useful to ask about how other people are solving similar problems, but “is this idea good? and let me explain why it’s good” only leads to one kind of answer.
For Your Review
Last is a complaint that I’m already hearing often in the workplace: submitting work for others’ review that is straight from an AI.
There are things AI can do on its own, sure. But if humans are reviewing it, you’d better be one of them.
This issue is most common in pull requests (proposed changes to an application's code): a developer will autogenerate code with an agent and then request a review from another dev before merging it.
Depending on the expectations of your organization, something as simple as providing evidence that the new code works fine might be enough oversight. But you should do something before the reviewer has to look at it. The same goes for anything else you can generate with AI: roadmaps, fitness plans, spreadsheets.
It’s smart, but only sometimes
These are miraculous tools. They're capable of doing many tasks that used to require a human: translating product ideas into code, compiling extensive research on a topic, navigating the web and filling out forms¹, and on and on.
But they don’t yet replace human creativity, thinking, and empathy. Don’t give in to slopthink.
-
¹ Through the browser control integrations in Claude and ChatGPT.