How about using LLMs to improve developer experience instead? I've had a lot of failures with "AI" even on small projects; even with the best things I've tried, like agentic-project-management, I still had to go back to traditional coding.
Not sure if everyone shares this sentiment, but the reason I use AI as a crutch is the poor documentation that's out there; even simple terminal commands don't show usage examples, like when you type man ls. I just end up putting up with the code output because it works well enough for the short term, but this doesn't seem like a sustainable plan long term either.
There is also this dread I feel: what would I do if AI went down permanently? The tools I tried, like Zeal, really didn't do it for me for documentation either; not sure who decided on the documentation format, but this "Made by professionals, for professionals" style isn't really cutting it anymore. Apologies in advance if I missed any tools, but in my 4+ years of university nobody ever mentioned any quality tools either, and I'm sure this trend is happening everywhere.
> On their own initiative workers did more because AI made “doing more” feel possible, accessible, and in many cases intrinsically rewarding.
Love this quote. For me, barely a few weeks in, I feel exactly this. To clarify: I feel this only when working on dusty old side projects. When I use it to build for the org, it's still a slog, just faster.
This article is scratching the surface of the concept of desynchronization from the theory of social acceleration and the sociology of speed. Any technology that is supposed to create idle time, once it reaches mass adoption, has the opposite effect of speeding up everything else.
We have been on this track for a long time: cars were supposed to save time in transit, but people started living farther from city centres (cf. Marchetti's constant). Email and instant messaging were supposed to eliminate the wait time of postal services, but we now send orders of magnitude more messages, and social norms have shifted such that faster replies are expected.
"AI"-backed productivity increases are only impressive relative to non-AI users. The idyllic dream of working one or two days a week with agents in the background "doing the rest" is delusional. Like all previous technologies, once it reaches mass adoption everyone will be working at a faster pace, because our society is obsessed with speed.
If anyone is saying "yeah, but this time will be different", just look at our society now.
Arguably the only jobs which are necessary in society are related to food, heating, shelter and maybe some healthcare. Everything else - what most people are doing - is just feeding the never ending treadmill of consumer desire and bureaucratic expansion. If everyone adjusted their desired living standards and possessions to those of just a few centuries ago, almost all of us wouldn't need to work.
Yet here we are, still on the treadmill! It's pretty clear that making certain types of work no longer needed will just create new demands and wants, and new types of work for us to do. That appears to be human nature.
I've noticed firsthand how the scope of responsibilities is broadened by integrating AI into workflows. Personally it feels like a positive feedback loop: I take on more responsibilities; since they are outside my scope, I have a harder time reviewing AI output; this increases fatigue and makes me more prone to just accepting more AI output; and with the increased reliance on AI output, I get to a point where I'm managing things that are way outside my scope and can't do it unless I rely on AI entirely. In my opinion this also increases Imposter Syndrome effects.
But I doubt companies and management will think for a second that this voluntary increase in "productivity" is a bad thing, and it will probably be encouraged.
I'm not sure if intensifies is the word. AI just has awkward time dynamics that people are adapting to.
Sometimes you end up with tasks that are low intensity long duration. Like I need to supervise this AI over the course of three hours, but the task is simple enough that I can watch a movie while I do it. So people measuring my work time are like "wow he's working extra hours" but all I did during that time is press enter 50 times and write 3 sentences.
AI speeds things up at the beginning. It helps you get unstuck, find answers quickly without jumping between different solutions from the internet, write boilerplate, and explore ideas faster. But over time I reach for it faster than I probably should. Instead of digging into basic code, I jump straight to AI. I've been using it for even basic code searches. AI just makes it easier to outsource thinking. And your understanding of the codebase can get thinner over time.
It's totally expected because the bar goes way up. There used to be a time when hosting a dynamic website with borderline default HTML components was difficult enough that companies spent hundreds of thousands of dollars. These days, a single person's side project is infinitely more complex.
Can’t miss the opportunity to share my favourite aphorism:
“I distinguish four types. There are clever, hardworking, stupid, and lazy officers. Usually two characteristics are combined. Some are clever and hardworking; their place is the General Staff. The next ones are stupid and lazy; they make up 90 percent of every army and are suited to routine duties. Anyone who is both clever and lazy is qualified for the highest leadership duties, because he possesses the mental clarity and strength of nerve necessary for difficult decisions. One must beware of anyone who is both stupid and hardworking; he must not be entrusted with any responsibility because he will always only cause damage.”
As with most productivity gains in tooling historically, I think one way we should reckon with this is through workers' rights.
The industrial revolution led to gains that allowed for weekends and the elimination of child labor, but those didn't come for free; they had to be fought for.
If we don't fight for it, what are we gaining? More intense work in exchange for what?
The cognitive overload is more about people not understanding the slop they are generating. Slop piles on top of slop until a situation arrives where you actually need to understand everything, and you don't, because you didn't do the work yourself.
Things have improved significantly since then. Copying and pasting code from o1/o3 versus letting codex 5.3 xhigh assemble its own context and do it for you.
Since when? You're quoting the timeframe, not the period under study.
It's also not a study of just engineers; it's people across engineering, product, design, research, and operations. For a lot of non-code tasks, AI needs pasted context, since that work isn't usually in a repo the way code is.
(And their comments about intensifying engineering workload also aren't really changed by AI copy/paste vs. context.)
At some point, the “but everything has radically changed in the past 15 minutes” counterargument to all material evidence that undermines AI marketing has to become boring and unpersuasive.
Last night I tried out Opus 4.6 on a personal project involving animating in Gaussian Splats where the final result is output as a video.
In the past, AI coding agents could usually reason about the code well enough that they had a good chance of success, but I’d have to manually test since they were bad at “seeing” the output and characterizing it in a way that allowed them to debug if things went wrong, and they would never ever check visual outputs unless I forced them to (probably because it didn’t work well during RL training).
Opus 4.6 correctly reasoned (on its own, I didn’t even think to prompt this) that it could “test” the output by grabbing the first, middle and last frame, and observing that the first frame should be empty, the middle frame half full of details, and the final frame resembling the input image. That alone wouldn’t have impressed me that much, but it actually found and fixed a bug based on visual observation of a blurry final frame (we hadn’t run the NeRF training for enough iterations).
In a sense this is an incremental improvement in the model’s capabilities. But in terms of what I can now use this model for, it’s huge. Previous models struggled at tokenizing/interpreting images beyond describing the contents in semantic terms, so they couldn’t iterate based on visual feedback when the contents were abstract or broken in an unusual way. The fact that they can do this now means I can set them on tasks like this unaided and have a reasonable probability that they’ll be able to troubleshoot their own issues.
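For anyone who wants to replicate that spot-check by hand, it's only a few lines. A minimal sketch, assuming OpenCV is installed and the render landed in a hypothetical out.mp4 (the filename and structure are my illustration, not the model's actual code):

```python
# Minimal sketch of the first/middle/last frame spot-check described above.
# Assumes OpenCV (pip install opencv-python) and a hypothetical "out.mp4".
import cv2

cap = cv2.VideoCapture("out.mp4")
total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
for label, idx in [("first", 0), ("middle", total // 2), ("last", total - 1)]:
    cap.set(cv2.CAP_PROP_POS_FRAMES, idx)  # seek to the target frame
    ok, frame = cap.read()
    if not ok:
        raise RuntimeError(f"could not read {label} frame")
    cv2.imwrite(f"{label}.png", frame)  # dump for inspection, human or agent
cap.release()
```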
I understand your exhaustion at all the breathless enthusiasm, but every new model radically changes the game for another subset of users/tasks. You're going to keep hearing that counterargument for a long time, and the worst part is, it's going to be true even if it's annoying.
You're right that the argument will become boring, but I think it's gonna be a minute before it does. I spent much of yesterday playing with the new "agent teams" experimental feature of Claude Code, and it's pretty remarkable. It basically one-shotted both a rather complex Ansible module (including packaging for release to Galaxy) and a game that teaches stock options trading.
On Thursday I had a FAC with a coworker, and he predicted 2026 is going to be the year of acceleration; based on what I've seen over the last 2-3 years, it's hard to argue with that.
I've been saying this for the past 2 years. Just think about the stereotypical "996" work schedule that is all the rage in SF and AI founder communities.
It only takes thinking about it for 5 seconds to see the contradiction: if AI were so good at reducing work, why does every company engaging with AI see its workload increase?
20 years ago SV was stereotyped for "lazy" or fun-loving engineers who barely worked but cashed huge paychecks. Now I would say the stereotype is overworked engineers who, at the mid level, are making less than their counterparts did 20 years back.
I see it across other disciplines too. Everyone I know, from sales to lawyers, who engages with AI seems to get stuck in a loop where the original task is easier, but it reveals 10 more smaller tasks that fill up their time even more than before AI.
That's not to say productivity gains with AI aren't found. It just seems like the gains get people onto a flywheel of increasing work.
Talking about "productivity" is a red herring.
Are the people leveraging LLMs making more money while working the same number of hours?
Are the people leveraging LLMs working fewer hours while making the same amount of money?
If neither of these are true, then LLMs have not made your life better as a working programmer.
Regardless of that, LLMs could be a Moloch problem.
That is, if anyone uses it, your life will be worse; but if you don't use it, your life will be even worse than the lives of those using it.
Too bad you programmers didn't unionize when you had the chance so you could fight this. Guess you'll have to pull yourself up by your bootstraps.
Classical prisoner's dilemma.
>Are the people leveraging LLMs making more money while working the same number of hours?
Nobody is getting a raise for using AI. So no.
>Are the people leveraging LLMs working fewer hours while making the same amount of money?
Early adopters, maybe, as they offload some work to agents. As AI commoditizes and becomes the baseline, that will invert, especially as companies shed people to have those remaining "multiply" their output with AI.
So the answer will be no and no.
Well they don't call it being a wage slave for nothing. You aren't getting a raise because you're still selling the same 40-60 hours of your time. If the business is getting productivity wins they'll buy less time via layoffs.
(USSR National Anthem plays) But if you owned the means of production and kept the fruits of your labor, say as a founder or as a sole proprietor side hustle, then it's possible those productivity gains do translate into real time gains on your part.
What about co-ops? Or partnerships?
The very reason why we object to state ownership, that it puts a stop to individual initiative and to the healthy development of personal responsibility, is the reason why we object to an unsupervised, unchecked monopolistic control in private hands. We urge control and supervision by the nation as an antidote to the movement for state socialism. Those who advocate total lack of regulation, those who advocate lawlessness in the business world, themselves give the strongest impulse to what I believe would be the deadening movement toward unadulterated state socialism.
--Theodore Roosevelt
> Are the people leveraging LLMs making more money while working the same number of hours?
> Are the people leveraging LLMs working fewer hours while making the same amount of money?
Yes, absolutely. Mostly because being able to leverage LLMs effectively (which is not "vibe coding" and requires both knowing what you're doing and having at least some hunch of how the LLM is going to model your problem, whether it's been given the right data, directed properly, etc.) is a rare skill.
Can you name an example? Who do you know that made more money by using LLMs?
Did high-level languages and compilers make life better for working programmers? Is it even a meaningful question to ask? Like what would we change depending on the outcome?
Lots of people have jobs today, thanks to high-level languages, who wouldn't have had a job before them; they don't need to know how to manage memory manually.
Maybe that will happen for LLM programming as well, but I haven't seen many "vibe coder wanted" job ads yet that don't also require regular coding skills. So today LLM coding is just a supplementary skill, not a primary one; it's not like higher-level languages, since those let you skip a ton of steps.
> Did high-level languages and compilers make life better for working programmers
Yes.
Of course not. In the world of capitalism and employment, money earned is not a function of productivity, it is a function of competency. It is all relative.
Oh, you sweet summer child. Under capitalism money is a function of how little you can pay your fungible organic units before they look for other opportunities or, worse, unionize (but that can be dealt with relatively easily nowadays). Except for a few exceptional locations and occupations, the scale is tilted waaay against the individual, especially in the land of the free (see H-1B visas, medical debt, and workers on food stamps). (See also the record profits of big companies since Covid.)
Lines of code are not a good metric for productivity.
Neither are the hours worked.
Nor is the money.
Just think of the security guard on site walking around, or someone who has a dozen monitors.
I feel this. Since my team has jumped into an AI-everything working style, expectations have tripled, stress has tripled, and actual productivity has only gone up by maybe 10%.
It feels like leadership is putting immense pressure on everyone to prove their investment in AI is worth it, and we all feel the pressure to show them it is, while actually having to work longer hours to do so.
I laughed at all the Super Bowl commercials showing frazzled office workers transformed into happy loafers after AI has done all their work for them...
I chuckled at the Genspark one while imagining what the internal discussions must have been.
Obviously, "take a day off" is not the value prop they're selling to buyers (company leadership), but they can't be so on the nose in a public commercial that they scare individual contributors.
As one of the AI people doing 996 (997?), I will at least say I can watch YouTube videos, play bass, etc. while directing 4-5 agents without much trouble. I have my desktop set up as a terminal grid, and I just hover over the window I want to talk to and give voice instructions. Since I'm working on stuff I'm into, the time passes pleasantly.
Can you describe what stack you're using for this?
Hyprland, Voxtype, Claude Code + Pi.
Yeah, why would billionaires sell us something that lets us chill out all day, instead of using it themselves and capturing the value directly? You claim to have a perpetual motion machine and a Star Trek replicator rolled into one, what do you need me for?
Those ads are not for workers, they're for the employers.
There's an old saying among cyclists, attributed to Greg LeMond: "It doesn't get easier, you just go faster."
I don't think it's super complicated. I think that prompting takes generally less mental energy than coding by hand, so on average one can work longer days if they're prompting than if they were coding.
I can pretty easily do a 12h day of prompting but I haven't been able to code for 12h straight since I was in college.
For me it’s the opposite. Coding I enter flow and can do 5 hours at a stretch while barely noticing.
Prompting has so many distractions and context switches I get sick of it after an hour.
Isn’t the grander question why on earth people would tolerate, let alone desire, more hours of work every day?
The older I get, the more I see the wisdom in the ancient ideas of reducing desires and being content with what one has.
---
Later Addition:
The article's essential answer is that workers voluntarily embraced (and therefore tolerated) the longer hours because of the novelty of it all. Reading between the lines, this is likely to cause shifts in expectation (and ultimately culture) — just when the novelty wears off and workers realize they have been duped into increasing their work hours and intensity (which will put an end to the voluntary embracing of those longer hours and intensity). And the dreaded result (for the poor company, won't anyone care about it?!) is cognitive overload, hence worker burnout and turnover, and ultimately reduced work quality and higher HR transaction costs. Therefore, TFA counsels, companies should set norms regarding limited use of generative language models (GLMs, so-called "AI").
I find it unlikely that companies will limit GLM use or set reasonable norms: instead they'll crack the whip!
---
Even Later Addition:
As an outsider, I find it at once amusing and dystopian to consider the suggestions offered at the end of the piece: in the brutalist, reverse-centaur style, workers are now to be programmed with modifications to their "alignment … reconsider[ation of] assumptions … absor[ption of] information … sequencing … [and] grounding"!
The worker is now thought of in terms of the tool, not vice versa.
While I agree that prompting is easier to get started with, is it actually less work? More hours don't mean they're equally productive. More, lower-quality hours just make work:life balance worse with nothing to show for it.
I agree. However, I'm finding that I'm drastically leveling up what I do day to day. I'm a former founder and former Head of Engineering, back in an IC role.
The coding is now assumed "good enough" for me, but the problem definition and context that go into that code aren't. I'm now able to spend more time on the upstream components of my work (where the real, actual, hard thinking happens) while letting the AI figure out the little details.
> I can pretty easily do a 12h day of prompting
Do you want to though?
That's a bingo.
Additionally, I can eke out about 4 hours of really deep diving in a day, and I've structured my workday around that, delegating low-mental-cost tasks to after that initial dive. Now diving is a low enough mental cost that I can do 8-12 hours of it.
It's a bicycle. Truly.
>so on average one can work longer days if they're prompting than if they were coding
It's 2026 for god's sake. I don't want to work __longer__ days, I want to work __shorter__ days.
If you're in the office for 12h it won't matter if you're proompting, pushing pens or working your ass off. You gave that company 12h of your life. You're not getting those back.
> If AI was so good at reducing work, why is it every company engaging with AI has their workload increase.
Isn't it simple?
Because of competition, which has increased because the entry barrier for building new software products has been lowered a lot.
You output a lot, but so does your competition.
> If AI was so good at reducing work, why is it every company engaging with AI has their workload increase.
Heavy machinery replaces shovels. It reduces the workload on the shovel holders; however, someone still needs to produce the heavy machinery.
Some of these companies are shovel holders realizing they need to move upstream. Some of these companies are already upstream, racing to bring a product to market.
The underlying bet for nearly all of these companies is "If I can replace one workflow with AI, I can repeat that with other workflows and dominate."
Same story with hardware and software. Hardware gets more efficient and faster, so software devs shove more CPU-intensive stuff into their applications, or just get lazy and write inefficient code.
The software experience is always going to feel about the same speed perceptually, and employers will expect you to work the same amount (or more!)
I think you're missing the point. The folks pushing 996 (and willingly working 996) feel like they are in a land rush, and that AI is going to accelerate their ability to "take the most land." No one is optimizing for the "9 to 5" oriented engineer.
> If AI was so good at reducing work, why is it every company engaging with AI has their workload increase.
Throughout human history, we have chosen more work over keeping output stable.
Throughout human history we were never given the choice. We were forced into it like cattle.
You could always choose to work less, but would have less as a result.
These days, that choice is more viable than ever, as the basic level of living supported by even a few hours a week of minimum wage affords you luxuries unimaginable 50 or 100 years ago.
See a lot of people on this site doing it willingly. I think a lot of people will always choose perceived convenience over anything
You are correct; however, it should be noted that even the top 1% overwork themselves to some extent (e.g. American CEOs work 63 hours per week on average). They do it for a different reason, though.
Maybe ask the friendly AI about reducing project scope? But we probably won’t if we’re having too much fun.
Many people in Silicon Valley truly believe that AI will take over everything. Therefore, this is the last chance to get in, so you'd better be working really, really hard.
There's a palpable desperation that makes this wave different from mobile or cloud. It's not about making things better so much as it's about not being left behind.
I'm not sure of the reason for this shift. It has a lot of overlap with the grindset culture you see on Twitter where people caution against taking breaks because your (mostly imaginary) competition may catch up with you.
Jevons Paradox applies to labor.
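To put a toy model on that (my own sketch, not from the article): say AI cuts the effort per task from c to c/k, and the number of tasks demanded has constant elasticity epsilon with respect to that effort.

```latex
% Toy rebound-effect model (illustrative assumption, not from the article).
% Effort per task falls from c to c/k (k > 1); task demand N has constant
% elasticity \varepsilon with respect to effort.
\[
  N(c) \propto c^{-\varepsilon}
  \quad\Rightarrow\quad
  H(c) = N(c)\,c \propto c^{1-\varepsilon}
  \quad\Rightarrow\quad
  \frac{H(c/k)}{H(c)} = k^{\varepsilon - 1}.
\]
% Total hours H rise whenever \varepsilon > 1: demand for work grows faster
% than the effort per task shrinks. That's Jevons applied to labor.
```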
now everyone gets to be a manager!
996 is a Chinese term, not American.
There is a lot of work to do; just because you are doing more work with your time doesn't mean you can somehow count that as less work.
I've only seen it in job postings and LinkedIn posts from SF founders.
China outlawed it.
What? 007 is the norm here now.
I think the article nails it, on multiple counts. From personal experience, the cognitive overload is sneaky but real. You do end up taking on more than you can handle; just because your mob of agents can do the minutiae of the tasks doesn't mean you're freed from comprehending, evaluating, and managing the work. It's intense.
For a very small number of people the hard part is writing the code. For most of us, it’s writing the correct code. AI generates lots of code but for 90% of my career writing more code hasn’t helped.
> you do end up taking on more than you can handle; just because your mob of agents can do the minutiae of the tasks doesn't mean you're freed from comprehending, evaluating, and managing the work
I’m currently in an EM role and this is my life but with programmers instead of AI agents.
Does AI write 100% correct code? No, but under my watch it writes code that is more correct than anything anyone else on the team has contributed in the past year or more. Even better, when it is wrong I don't have to spend literal hours arguing with it, nor do I have to be mindful of how what I'm saying affects others' feelings, so I get to spend more time on actual work. All in all, it's a net positive.
I agree.
I provide specific instructions and gotchas when prompting the agent to write the code, and I churn out instructions more quickly by using my voice.
Yes, it makes mistakes, but it can correct them quickly as well. This correction loop takes more time when it's a human on my team doing the work.
I never said it’s not a net positive - I said that writing more code won’t solve the problem.
> under my watch it writes code that is more correct than anything that anyone else on the team contributed in past year or more
This I don’t believe.
> For most of us, it’s writing the correct code.
I am not sure about this statement; aren't we always cutting corners to make things ~95% correct at scale, to meet deadlines given our staffing/constraints?
Most of us who don't work on the Linux kernel, space shuttles, or near-realtime OSes were writing just-good-enough code to meet business requirements.
My point is that coming up with the business requirements was always the hard part (unless you’re writing a scheduler)
Also an EM, and it feels like I now have a team of juniors on my personal projects, except they need constant micromanaging in a way I never would with real people.
So you're saying AI doesn't help, and having reports is just like using AI (which you said doesn't help).
What's stopping you from becoming an IC and producing as much as your full team then? What's the point of having reports in this case?
Started referring to it as "speed of accountability".
A responsible developer will only produce code as fast as they can sign it off.
An irresponsible one will just shit all over the codebase.
I'm not sure I would agree in toto. Freeing up the minutiae allows for a higher cognitive load on the bigger picture. I use AI primarily for research gathering and for refining what I have, which has freed up a lot of time to focus on the bigger issues and, specifically in my case, zeroing in on the diamond in the rough.
This has been my experience too. I feel freed up from the "manual labor" slice of software development and can focus on more interesting design problems and product alignment, which feels like a bit of a drug right now; I'm actually working harder and more hours.
do you think this is inherent in AI-related work, or largely due to the current state of the world, where it's changing rapidly and we're struggling to adapt our entire work systems to the shifting landscape, while under intense (and often false) pressures to "disrupt ourselves"? Put another way, if this was similarly true twenty years ago with the rise of Google, is it still true today?
That is fun though.
I hated the old world where some boomer-mentality "senior" dev(s) would take days or weeks to deliver ONE fucking thing, and it would still have bugs and issues.
I like the new world where individuals can move fast and ship, and if there are bugs and issues they can be resolved quickly.
The boomer-mentality and other mids get fired which is awesome, and orgs become way leaner.
Just because there is an excess of CS majors and programmers doesn't mean we need to make benches for them to keep warm.
That has more to do with where you work than AI.
Some places have military-grade paperwork, where mistakes are measured in millions of dollars per minute. Other places are 'just push it in, fix it later'.
AI is not going to change that. That is a people problem. Not something you can automate away. But you can fire your way out of it.
For sure. I was replying to people not in that situation; it seems from the commenters here that that is where they (and I) have worked or are working now, whether at their own company or some other place.
I've only ever worked at places that are at the bleeding edge and even there we had total slackers.
"Explain to me like I am five what you just did"
Then "Make a detailed list of changes and reasoning behind it."
Then feed that to another AI and ask: "Does it make sense and why?"
Then get rid of most of them. They can keep 1/10 of the humans and have them run such agents.
Garbage In, Garbage Out. If you're working under the illusion any tool relieves you from the burden of understanding wtf it is you're doing, you aren't using it safely, and you will offload the burden of your lack of care on someone else down the line. Don't. Ever. Do. That.
I've started calling it "revenge of the QA/Support engineers", personally.
Our QA & support engineers have now started creating MRs to fix customer issues, satisfy customer requests, and fix bugs.
They're AI-sloppy and a bunch of work to fix up, but they're a way better description of the problem than the tickets they used to send.
So now instead of me creating a whole bunch of work for QA/Support engineers when I ship sub-optimal code to them, they're creating a bunch of work for me by shipping sub-optimal code to me.
I wonder how well a coding agent would do if you asked one to review the change and then to rewrite the merge request to fix the things it criticized?
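That loop is cheap to try. A rough sketch, assuming the Claude Code CLI is on PATH and using its -p (print one response and exit) mode; the prompt wording and branch range are made up for illustration, not a tested workflow:

```python
# Rough sketch of a review-then-rewrite pass using the Claude Code CLI.
# Assumes "claude" is installed; -p prints a single response and exits.
import subprocess

def run(cmd: list[str]) -> str:
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

diff = run(["git", "diff", "main...HEAD"])
review = run(["claude", "-p", "Review this diff and list concrete problems:\n\n" + diff])
# Second pass: hand the agent its own critique and let it edit the working tree.
run(["claude", "-p", "Apply fixes in this repo for these review findings:\n\n" + review])
print(review)
```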
It does quite well and definitely catches/fixes things I miss. But I still catch significant things it misses. And I am using AI to fix the things I catch.
Which is then more slop I have to review.
Our product is not SaaS, it's software installed on customer computers. Any bug that slips through is really expensive to fix. Careful review and code construction is worth the effort.
Jesus Christ.
This is Jevons paradox at its purest. Who really thought companies were just going to let everyone go home earlier? Work is easier, so now you will do even more. Congratulations.
Feels like there may be a gap in the market for businesses that do let people go home earlier, though, since keeping everyone at work leads to burnout and isn't an edge.
Having well-rested employees who don't burn out is, though.
The capital class wants you naked and afraid. If you're well rested you might have thoughts like "Why am I working for this guy? Why don't I become a competitor?" Having them think "Shit, I need to work 5 more hours even though I've already worked 8 today, so I can keep my health insurance" is far more beneficial for their control of everything.
Exactly as happened with the computer revolution... Expectations rose in line with productivity. In HN parlance, being a 10x engineer just becomes "being an engineer," and "100x engineer" is the new 10x engineer. And from what I can see in myself and others right now, being a 100x anything, while exhilarating, is also mentally and physically taxing.
If people are realistically a baseline of 10x more productive, where are all the features, games, applications, and SaaS products that are suddenly possible that weren't before?
AI might be 100x faster than me at writing code - but writing code is a tiny portion of the work I do. I need to understand the problem before I can write code. I need to understand my whole system. I need to test that my code really works - this is more than 50% of the work I do, and automated tests are not enough - too often the automated test doesn't model the real world in some unexpected way. I review code that others write. I answer technical questions for others. There is a lot of release work, mandatory training, and other overhead as well.
Writing code is what I expect a junior or mid level engineer to spend 20% of their time doing. By the time you reach senior engineer it should be less (though when you write code you are faster and so might write more code despite spending less time on it).
I can tell you, with absolute certainty, that before AI ~0 junior/mid level devs spent just 20% of their time programming. At least not at tech companies
At every company I've worked for, that has been the case. Note, though, that I split out a lot of the work that might otherwise be counted as coding.
My experience is very different. A junior should be spending >90% of their time coding, a mid-level about 75%, and a senior about the same. It only really splits after that point. But a senior spending 25% more time on the wrong thing is worse than them spending 25% less time on the right thing.
It’s only when you get to “lead” (as in people manager) or staff (as in tech unblocked) you should spend the rest of your time improving the productivity of someone who spends more of their time coding than you do.
They're out there. I've noticed a surge of Show HNs right here. A lot of it is vibe coded.
I would like to see GitHub's project creation and activity charts from today compared to 5 years ago. Similar trends must be happening behind closed doors as well. Techy managers are happy to be building again. Fresh grads are excited about how productive they can be. Scammers are deploying in record time. Business is boomin'.
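You can approximate that chart yourself against GitHub's public search API. A quick sketch (the dates are arbitrary examples, and unauthenticated requests are heavily rate-limited, so add a token for anything serious):

```python
# Quick sketch: compare how many public repos were created on two dates,
# via GitHub's repository search API. Dates are arbitrary examples.
import requests

def repos_created_on(day: str) -> int:
    r = requests.get(
        "https://api.github.com/search/repositories",
        params={"q": f"created:{day}", "per_page": 1},
        headers={"Accept": "application/vnd.github+json"},
    )
    r.raise_for_status()
    return r.json()["total_count"]

for day in ("2021-02-14", "2026-02-14"):
    print(day, repos_created_on(day))
```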
It's likely that all this code will do more harm than good to the rest of us, but it's definitely out there.
Windows 11
Being a 100x developer means you can work just 1% of the time you used to work, right?
When the expectation is 100x, it's working 100% of the time at maximum speed.
And that brings us back to square one - if everyone is a 100x engineer, then everyone's again a 1x engineer. Lewis Carroll nailed it with the Red Queen's Race.
Think of some 100x folks you know of. Are they working more or less than before?
They sure as hell don’t make 100x more. Maybe from ads they serve selling AI/productivity snake oil.
>Think of some 100x folks you know of.
This mythical class of developer doesn't exist. Are you trying to tell me that there are a class of developers out there that are doing three months worth of work every single day at the office?
It's odd that kind of developer doesn't exist, but that type of CEO does. Maybe we need to replace CEOs with AI.
HBR is analyzing this through an old-world lens. It might very well be that the effects are, as they say, temporary. But the reason this is happening is that AI is in fact replacing human labor, and the puppeteers are trying to remain employed. The steady-state outcome is human replacement, which means AI does in fact reduce human labor, even if the remaining humans in the loop are more overloaded. The equation is not workload per capita but how many humans it takes to accomplish a goal.
If AI was indeed replacing human labor I would expect HBR to be among the first publications to cover it.
Why? I don't recall HBR ever being at the forefront of a change. It seems to me that they're just good at identifying a change when it's already widespread and giving it a catchy name, plus an explanatory model that they themselves can debunk in a future issue.
My dad was a stockbroker in the 1970s and he had a great line:
“When computers first came out we were told:
‘Computers will be so productive and save you so much time you won’t know what to do with all of your free time!’
Unsurprisingly, that didn’t happen.”
Aka Jevons paradox in practice.
The Mythical Man-Month was published in '75, with a deep technical insider's perspective.
The kinds of productivity scaling they had been seeing to that point could be reasonably extrapolated to all kinds of industrial re-alignment.
Then we ran out of silver bullets.
[Still waiting to see what percentage of LLM hype is driven by people not having read The Mythical Man Month.]
This would be true without competition.
What really happens is everybody adopts the same strategy and raises the work floor while demanding more.
Until we get rid of unlimited greed in humans we shouldn't expect a change.
"It never gets easier, you just go faster" - Greg LeMond
That quotation pops up on cycling subreddits occasionally and I've always disliked it because I think it discourages people from casual bike riding.
I've been biking to work occasionally for a few years now and it definitely gets easier.
Yeah, the quote assumes you're riding without speed limits. On a typical commute it does get easier, once your cardiovascular ability exceeds what the route's speed limits let you use.
No, the quote assumes you want to go faster. I don't really. I enjoy my ride and if I wanted to get to work 5 minutes faster, I would leave 5 minutes earlier.
I read that quote as speaking more to the human condition and less about cycling. Humanity has a tendency to keep pushing to the edge of its current abilities no matter how much they expand.
Only if you continue to push yourself while training. What used to be difficult absolutely gets easier after endurance training.
When you’re sailing, just going as fast as possible won’t necessarily get you where you need to go.
Not just won’t get you there fastest; it won’t get you there at all.
It intensifies work, and shortens time to burnout, which still almost nobody talks about. Ingesting these huge slops of information can be super tiring.
> it intensifies work, and shortens time to burnout
This is most likely correct. Everyone talks about how AI makes it possible to "do multiple tasks at the same time", but no one seems to care that the cognitive (over)load is very real.
IME you don't even have to do multiple things at the same time to reach that cognitive fatigue. The pace alone, which is now much higher, could be enough to saturate your cognitive capabilities.
For me one unexpected factor is how much it strains my executive function to try and maintain attention on the task at hand while I’m letting the agent spin away for 5-10 minutes at a stretch. It’s even worse than the bad old days of long compile times because at least then I could work on tests or something like that while I wait. But with coding agents I feel like I need to be completely hands off because they might decide to touch literally any file in the repository.
It reminds me a bit of how a while back people were finding that operating a level 3 autonomous vehicle is actually more fatiguing than driving a vehicle that doesn’t even have cruise control.
For me it's the volume of things I'm now capable of doing in a much shorter amount of time: it leaves almost no space for resting but puts much more strain on my cognitive limits.
> But with coding agents I feel like I need to be completely hands off because they might decide to touch literally any file in the repository.
Why not just have another worktree?
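A minimal sketch of what I mean, with made-up paths and branch name (plain git, scripted via Python here just for illustration): park the agent in a disposable worktree so it can touch anything there while your main checkout stays clean.

    import subprocess

    # Hypothetical layout: your checkout stays where it is, the agent
    # gets ../agent-scratch. "git worktree add" creates a second working
    # copy sharing the same .git, so the agent can touch any file there
    # without disturbing what you're editing.
    subprocess.run(
        ["git", "worktree", "add", "../agent-scratch", "-b", "agent/task"],
        check=True,
    )

    # ...point the coding agent at ../agent-scratch and keep working here...

    # When it's done, review/merge the agent's branch, then clean up.
    subprocess.run(
        ["git", "worktree", "remove", "../agent-scratch", "--force"],
        check=True,
    )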
So the thing about task switching is, everyone is bad at it. And the studies indicate that people who think they’re good at it are even worse at it.
I was responding to:
> It’s even worse than the bad old days of long compile times because at least then I could work on tests or something like that while I wait.
To me it seems like the exact same context switching situation that it always was.
> it intensifies work, and shortens time to burnout
On the bright side, that would address the employability crisis for new grads.
Would it though? If you have 10 people on the team now doing the work of 20, 30, 50, ...
The title alone maximizes the word-to-LLMism ratio
How about using LLMs to improve developer experience instead? I've had a lot of failures with "AI" even on small projects; even with the best things I've tried, like agentic-project-management, I still had to go back to traditional coding.
Not sure if everyone shares this sentiment, but the reason I use AI as a crutch is the poor documentation that's out there; even simple terminal commands don't show usage examples, e.g. typing man ls gives you no examples for ls. I just end up putting up with the code output because it works well enough for the short term; this doesn't seem like a sustainable plan long term either.
There is also this dread I feel: what would I do if AI went down permanently? The tools I tried, like Zeal, really didn't do it for me for documentation either; not sure who decided on the documentation format, but this "made by professionals, for professionals" style isn't really cutting it anymore. Apologies in advance if I missed any tools, but in my 4+ years of university nobody ever mentioned any quality ones either, and I'm sure this trend is happening everywhere.
> On their own initiative workers did more because AI made “doing more” feel possible, accessible, and in many cases intrinsically rewarding.
Love this quote. For me, barely a few weeks in, I feel exactly this. To clarify: I feel it only when working on dusty old side projects. When I use it to build for the org, it's still a slog, just faster.
This article is scratching the surface of the concept of desynchronization from the theory of social acceleration and the sociology of speed. Any technology that is supposed to create idle time, once it reaches mass adoption, has the opposite effect of speeding up everything else.
We have been on this track for a long time: cars were supposed to save time in transit, but people started living farther from city centres (cf. Marchetti's constant). Email and instant messaging were supposed to eliminate the wait time of postal services, but we now send orders of magnitude more messages, and social norms have shifted such that faster replies are expected.
"AI" backed productivity increases are only impressive relative to non-AI users. The idilliac dream of working one or two days a week with agents in the background "doing the rest" is delusional. Like all previous technologies once it reaches mass adoption everyone will be working at a faster pace, because our society is obsessed with speed.
If anyone is saying "yeah, but this time will be different", just look at our society now.
Arguably the only jobs which are necessary in society are related to food, heating, shelter and maybe some healthcare. Everything else - what most people are doing - is just feeding the never ending treadmill of consumer desire and bureaucratic expansion. If everyone adjusted their desired living standards and possessions to those of just a few centuries ago, almost all of us wouldn't need to work.
Yet here we are, still on the treadmill! It's pretty clear that making certain types of work no longer needed will just create new demands and wants, and new types of work for us to do. That appears to be human nature.
You are wrong. Cars have made it possible for people to commute far more easily than in a world where cars don't exist.
Cars allowed people to live farther away, which allows for lower housing costs and therefore a decreased overall cost of living.
I've noticed first hand how the scope of responsibilities broadens when AI is integrated into workflows. Personally it feels like a positive feedback loop: I take on more responsibilities; since they are outside my scope, I have a harder time reviewing AI output; this increases fatigue and makes me more prone to just accepting more AI output; as my reliance on AI output grows, I get to a point where I'm managing things way outside my scope and can't do it unless I rely on AI entirely. In my opinion this also amplifies Imposter Syndrome.
But I doubt companies and management will think for a second that this voluntary increase in "productivity" is at all bad, and it will probably be encouraged.
I mean this has been occurring for years/decades now.
Have 10 people on staff.
Fire 5.
The remaining 5 have to do all the duties or get fired, for the same pay.
I'm not sure if intensifies is the word. AI just has awkward time dynamics that people are adapting to.
Sometimes you end up with tasks that are low intensity long duration. Like I need to supervise this AI over the course of three hours, but the task is simple enough that I can watch a movie while I do it. So people measuring my work time are like "wow he's working extra hours" but all I did during that time is press enter 50 times and write 3 sentences.
AI speeds things up at the beginning. It helps you get unstuck, find answers quickly without sifting through different solutions from the internet, write boilerplate, and explore ideas faster. But over time I reach for it faster than I probably should. Instead of digging into basic code, I jump straight to AI. I've been using it even for basic code searches. AI just makes it easier to outsource thinking, and your understanding of the codebase can get thinner over time.
How is basic code search "thinking"? This is not a skill I need to keep around; it used to be a means to an end and now it's a vestige.
It's totally expected because the bar goes way up. There used to be a time when hosting a dynamic website with borderline default HTML components was difficult enough that companies spent hundreds of thousands of dollars. These days, a single person's side project is infinitely more complex.
The only sustainable thing to do is to reduce people's work hours but keep their weekly pay the same.
If before AI we were talking about 6-hour days as an aim, we should now be talking about a 4-hour work day, without any reduction in pay.
Otherwise everyone is going to burn out.
“lol”
- Average manager
Every other advancement in office productivity and software has intensified work. AI will too. It will also further commodify it.
In the Ford matrix of smart to dumb and hardworking to lazy, AI will enable the dumb and hardworking to 100x their damage to a company overnight.
Can’t miss the opportunity to share my favourite aphorism:
“I distinguish four types. There are clever, hardworking, stupid, and lazy officers. Usually two characteristics are combined. Some are clever and hardworking; their place is the General Staff. The next ones are stupid and lazy; they make up 90 percent of every army and are suited to routine duties. Anyone who is both clever and lazy is qualified for the highest leadership duties, because he possesses the mental clarity and strength of nerve necessary for difficult decisions. One must beware of anyone who is both stupid and hardworking; he must not be entrusted with any responsibility because he will always only cause damage.”
— Kurt von Hammerstein-Equord
As with most productivity gains in tooling historically, I think one way we should reckon with this is through workers' rights.
The industrial revolution led to gains that allowed for weekends and the elimination of child labor, but they didn't come for free; they had to be fought for.
If we don't fight for it, what are we gaining? More intense work in exchange for what?
Delete the llmism after the dash and the title is correct.
The cognitive overload is more about people not understanding the slop they are generating. Slop piles on top of slop until a situation arrives where you actually need to understand everything, and you don't, because you didn't do the work yourself.
Related:
AI makes the easy part easier and the hard part harder
https://news.ycombinator.com/item?id=46939593
> In an eight-month study
Things have improved significantly since then. Copying and pasting code from o1/o3 versus letting codex 5.3 xhigh assemble its own context and do it for you.
Since when? You're quoting the study's duration, not the period it covered.
It's also not a study of just engineers; it's people across engineering, product, design, research, and operations. For a lot of non-code tasks, AI needs pasted context, since it's not usually in a repo the way code is.
(And their comments about intensifying engineering workload also aren't really changed by AI copy/paste vs. context.)
At some point, the “but everything has radically changed in the past 15 minutes” counterargument to all material evidence that undermines AI marketing has to become boring and unpersuasive.
I humbly propose that point is today.
Last night I tried out Opus 4.6 on a personal project involving animating in Gaussian Splats where the final result is output as a video.
In the past, AI coding agents could usually reason about the code well enough that they had a good chance of success, but I’d have to manually test since they were bad at “seeing” the output and characterizing it in a way that allowed them to debug if things went wrong, and they would never ever check visual outputs unless I forced them to (probably because it didn’t work well during RL training).
Opus 4.6 correctly reasoned (on its own, I didn’t even think to prompt this) that it could “test” the output by grabbing the first, middle and last frame, and observing that the first frame should be empty, the middle frame half full of details, and the final frame resembling the input image. That alone wouldn’t have impressed me that much, but it actually found and fixed a bug based on visual observation of a blurry final frame (we hadn’t run the NeRF training for enough iterations).
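For the curious, the check it set up boils down to something like this (my own rough reconstruction, assuming opencv-python; the filename and thresholds are invented, not the model's actual code):

    import cv2  # assumes opencv-python is installed

    def grab_frame(cap, idx):
        # Seek to a frame index and decode it.
        cap.set(cv2.CAP_PROP_POS_FRAMES, idx)
        ok, frame = cap.read()
        return frame if ok else None

    cap = cv2.VideoCapture("output.mp4")  # hypothetical output file
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    first, middle, last = (grab_frame(cap, i) for i in (0, total // 2, total - 1))
    cap.release()
    assert first is not None and middle is not None and last is not None

    # Crude sanity check: pixel variance is a proxy for detail, which
    # should build up over the animation: near-empty first frame,
    # partially detailed middle, full detail at the end. The threshold
    # is arbitrary and would need tuning per scene.
    assert first.std() < 5.0, "first frame should be nearly empty"
    assert first.std() < middle.std() < last.std(), "detail should build up"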
In a sense this is an incremental improvement in the model’s capabilities. But in terms of what I can now use this model for, it’s huge. Previous models struggled at tokenizing/interpreting images beyond describing the contents in semantic terms, so they couldn’t iterate based on visual feedback when the contents were abstract or broken in an unusual way. The fact that they can do this now means I can set them on tasks like this unaided and have a reasonable probability that they’ll be able to troubleshoot their own issues.
I understand your exhaustion at all the breathless enthusiasm, but every new model radically changes the game for another subset of users/tasks. You’re going to keep hearing that counterargument for a long time, and the worst part is, it’s going to be true even if it’s annoying.
Not surprising, since being able to "see" images effectively is key to unblocking LLM augmentation for use in web and app frontend work.
>I humbly propose that point is today.
You're right that the argument will become boring, but I think it's gonna be a minute before it does. I spent much of yesterday playing with the new "agent teams" experimental feature of Claude Code, and it's pretty remarkable. It basically one-shotted both a rather complex Ansible module (including packaging for release to Galaxy) and a game that teaches stock options.
On Thursday I had a FAC with a coworker, and he predicted 2026 is going to be the year of acceleration; based on what I've seen over the last 2-3 years, I'd say it's hard to argue with that.