25 Comments

Konrad Ribeiro:

The only thing the movie Wall-E got wrong was how quickly the laziness would come. In our AI-driven, automated future, being fully human will be an act of resistance ✊

Jason B:

I work in e-commerce and I hear “just ChatGPT it” way too often at work. I also feel like when we do have successful breakthroughs, no one can explain why, because ChatGPT did the hard work. I am also the oldest person in the office, so if I say something they just shake their heads and say I’m too old school. I’ll just keep flexing my cognitive muscles as long as I can get away with it!

Clementine:

Worst of all, people are now getting used to mechanical work, hungering for control and perfection over humanity and hard work.

Jason B:

Agreed. True. Prime directive: stay busy at all costs and keep the numbers up.

Karl Rysted:

Brad, thanks for so many points in this article. First, for background, I should state that I'm a semi-retired attorney. I went to a Continuing Legal Education seminar here in town this spring, and even though it was in person (definitely not the norm these days), it was about AI in the law and the ethical issues surrounding it. Surprisingly, most of the lawyers in the room used AI in their practices. The argument was that in private practice it might be unethical to bill clients for old-fashioned research that takes hours when AI can do it so much more quickly and therefore more cheaply. The presenter did point out the need to check AI's work because it makes up cases out of thin air. Disturbing. There's more I could say about AI in the legal profession, so maybe I'll write a post about it. Finally, thanks for the heads-up on Derek Thompson. I just subscribed to his Substack and have been reading his column on the workforce in the Atlantic for quite a while.

Brad Stulberg:

Karl - thanks for sharing this. My wife is an attorney, so I am no stranger to discussions about AI in the legal field. As I tried to convey in the piece, I think this is going to be an ongoing case of "it depends" and "a tool's impact results from how you use it." Thanks for reading and stopping by to discuss!

Fraser Davies:

Thanks for this, Brad. This research and your musings on it reflect what I've seen in the world of software: when engineers have used LLMs to write all their code, they don't know how it works, are not able to follow detailed code reviews, and are unable to slow down enough to think through the code they *do* write. Whereas many senior engineers have played with LLMs to aid their coding and have largely relegated them to simple tasks (like writing so-called boilerplate code), reserving harder code and higher-level architecture decisions for humans.

Scott Carney:

I’m so glad that the last line in this piece was not “this piece was written by AI”. Because that would have been waaaay too 2025.

Robert Bowen:

It is also possible that writers with less cognitive processing ability to begin with were more likely to use ChatGPT for their writing. However, I would not be the least bit surprised if not using our brains creatively is bad for cognitive functioning.

Thom Markham:

Based on a study of 54 high-end college students in Boston writing SAT-style essays, assessed with brain scans and questionable guidelines, you choose to write a fear-mongering article like this? This research is not just thin; it’s nonexistent.

Arjun Ajesh:

They should have compared brain connectivity metrics with people who didn’t write essays at all during that time (AI-assisted or otherwise). Would theirs be better or worse than the GPT users’?

j.e. moyer, LPC:

Because they are using LLMs all wrong! You use them as an editing tool to check your own writing, not to cut and paste their output into your essay.

Ishita:

Can you please share the reference for this research paper?

Patrick:

I have been outsourcing most of my work to AI for some months now, giving it papers to summarise. It has been efficient, but I realised I lost a lot. A few hours prior to reading this post I was reading a short research paper, and I found it extremely hard to concentrate and make meaning out of it, not because I didn't understand the topic but because I got bored easily and spaced out several times. I realised I had lost my ability to concentrate and pay attention. Realising this, I made a note to myself to stop outsourcing everything to AI, because I was becoming too reliant and losing my cognitive skills in the process. I am really glad I found this piece. I didn't know there was even a study to back up exactly what I have been experiencing lately.

james paterson:

There is a line of thinking that concerns like this echo those raised when the calculator was developed. I’m not sure. It seems as though perhaps the calculator is like equipment at the gym that aids your workout, not something that replaces it.

LL:

I doubt this replicates.

Brad Stulberg:

We'll see. In these circumstances, I tend to follow the precautionary principle: if the result is striking but also makes sense, and the action (in this case, being very intentional about how you use ChatGPT and other LLMs) is simple and practical, I take the action.

leissa gebert:

I found this article interesting and thought-provoking. Yes, I still have thoughts after embracing AI as an assistant in my work. :-) I agree that caution is required and that it is important to stay aware of how this impacts your ability to think things through. However, in my experience, I have found AI to push me to think even further on certain subjects. It has returned results that cause me to go deeper in my thinking. It has taken on a lot of the mundane tasks and freed me up for greater work. Yet... I appreciated this article and its reminder to keep the guardrails in place so that I remain in control of how it enhances my world and am not pushed to the sidelines as an observer.

Adam Brinegar:

Nice article. I would argue that the problem is also familiar to anyone who has managed large teams, and that ChatGPT is simply a virtual team member. If you've managed large groups, you know that over-delegating some of the cognitive tasks of the job risks your credibility and your ability to make fast decisions. Perhaps we need to provide delegation training to a much larger group of people?

Raymond Drye:

Solid piece, Brad. I appreciate the light you're shining on the cognitive consequences of AI use, and I dig your physical fitness analogy. However, I have one suggestion: I would have appreciated some in-text citations. It took some work to find the figures you were referencing from the MIT study, and that could make less diligent people question your credibility.

Overall, solid work. Keep it up!
