This is no different than carpentry. Yes, all furniture can now be built by machines. Some people still choose to build it by hand. Does that make them less productive? Yes. Will they ever carve furniture by hand for a business? Probably not. Can they still enjoy the act of working with the wood? Yes.
If you want to code by hand, then do it! No one's stopping you. But we shouldn't pretend that you will be able to do that professionally for much longer.
I’ve heard this metaphor before and I don’t think it works well.
For one, a power tool like a bandsaw is a centaur technology. I, the human, am the top half of the centaur. The tool drives around doing what I tell it to do and helping me to do the task faster (or at all in some cases).
A GenAI tool is a reverse-centaur technology. The algorithm does almost all of the work. I’m the bottom half of the centaur helping the machine drive around and deliver the code to production faster.
So while I may choose to use hand tools in carpentry, I don’t feel bad using power tools. I don’t feel like the boss is hot to replace me with power tools. Or to lay off half my team because we have power tools now.
There are carpentry/mechanic power tool brands like DeWALT, Craftsman, and Stanley that make all manner of tools and tooling. The equivalents in computers (at least UNIXy ones) are coreutils (fileutils, shellutils, and textutils), netpbm, sed, awk, the contents of /usr/bin, and their alternative, updated brands like fd, the silver searcher, and ripgrep; or the progression of increasing sharpness in revision control tools from rcs, sccs, and svn to mercurial and git; or telnet to ssh, rcp to rsync, netcat to socat. Even perl and python qualify as multi-tool versions of separate power tools. I'd even include language compilers and interpreters in general as extremely sharp and powerful multi-tools, the machine shop that lets you create more power tools. When you use these, you're working with your hands.
GenAI is none of that; it's not a power tool, even though it can use power tools or generate output like the above power tools do. GenAI is hiring someone else to build a birdhouse or a spice rack and then saying you had a hand in the results. It's asking the replicator for "tea, Earl Grey, hot". It's like how we elevate CEOs just because they're the face of the company, as if they actually did the work and were solely responsible for the output. There's skill in organization and direction, and not every CEO gets undeserved recognition, but it's the rare CEO who's getting their hands dirty creating something or some process, power tools or not. GenAI lets you, everyone, be the CEO.
Why else do you think I go to work every day? Because I have a “passion” for sitting at a computer for 40 hours a week to enrich private companies’ bottom lines, a SaaS product, or a LOB implementation? It’s not astroturfing - it’s realistic.
Would you be happier if I said I love writing assembly language code by hand like I did in 1986?
My analogy is more akin to using Google Maps (or any other navigation tool).
Prior to GPS and navigation devices, you would print out the route ahead of time, and even then you would stop at places and ask people for directions.
Post Google Maps, you follow it, and then if you know there's a better route, you choose to take a different path and Google Maps will adjust the route accordingly.
Google Maps is still insanely bad for hiking and cycling, so I combine the old-fashioned map method with an outdoor GPS onto which I load a precomputed GPX track for the route that I want to take.
You only feel that way about power tools because the transition in carpentry happened long ago. Carpenters viewed power tools much as we view LLMs today. Furniture factories, the equivalent of dark agentic code factories, caused much despair for them too.
Humans are involved with assembly only because the last bits are maniacally difficult to get right. Humans might be involved with software still for many years, but it probably will look like doing final assembly and QA of pre-assembled components.
I think this argument would work if hand-written code conveyed some kind of status, like an expensive pair of Japanese selvage jeans. For now, though, it doesn't seem to me that people paying for software care whether it was written by a human or an AI tool.
To me, they’re all the same, because they are all tools that stand between “my vision” and “it being built.”
e.g. when I built a truck camper, maybe 50% was woodworking but I had to do electrical, plumbing, metalworking, plastic printing, and even networking infra.
The satisfaction was not from using power tools (or hand tools too) — those were chores — it was that I designed the entire thing from scratch by myself, it worked, was reliable through the years, and it looked professional.
The “work” is not creating for and while loops. The work for me is:
1. Looking at the contract and talking to sales about any nuances from the client
2. Talking to the client (use stakeholder if you are working for a product company) about their business requirements and their constraints
3. Designing the architecture.
4. Presenting the architecture and design and iterating
5. Doing the implementation and iterating. This was the job of myself and a team depending on the size of the project. I can do a lot more by myself now in 40 hours a week with an LLM.
6. Reviewing the implementation
7. User acceptance testing
8. Documentation and handover.
I’ve done some form of this from the day I started working 25 years ago. I was fortunate to never be a “junior developer”. I came into my first job with 10 years of hobbyist experience, having already implemented a multi-user data entry system.
I always considered coding as a necessary evil to see my vision come to fruition.
It seems like you're doing a lot of work to miss the actual point. Focusing on the minutiae of the analogy is a distraction from the overarching and obvious point. It has nothing to do with how you feel; it has to do with how you will compete in a world with others who feel differently.
There were carpenters who refused to use power tools, some still do. They are probably happy -- and that's great, all the power to them. But they're statistically irrelevant, just as artisanal hand-crafted computer coding will be. There was a time when coders rejected high level languages, because the only way they felt good about their code is if they handcrafted the binary codes, and keyed them directly into the computer without an assembler. Times change.
In my opinion, it is far too early to claim that developers who develop like it was maybe three years ago are statistically irrelevant. Microsoft has gone all in on AI tooling, and they just named a "software quality czar".
I used the future tense. Maybe it will be one hundred years from now, who knows; but the main point still stands. It would just be nice to move the conversation beyond "but I enjoy coding!".
I don’t think it’s correct to claim that AI-generated code is just the next level of abstraction.
All previously mentioned levels produce deterministic results. Same input, same output.
AI-generation is not deterministic. It’s not even predictable. And the example of big software companies clearly shows what mass adoption of AI tools will look like in terms of software quality. I dread AI use ever becoming an expectation; that would be a level of enshittification never before imagined.
You're not wrong. But your same objection was made against compilers. That they are opaque, have differences from one to another, and can introduce bugs, they're not actually deterministic if you upgrade the compiler, etc. They separate the programmer from the code the computer eventually executes.
In any case, clinging to the fact that this technology is different in some ways, continues to ignore the many ways it's exactly the same. People continue to cling to what they know, and find ways to argue against what's new. But the writing is plainly on the wall, regardless of how much we struggle to emotionally separate ourselves from it.
They may not be wrong per se, but that argument is essentially a strawman.
If these tools are non-deterministic, then how did someone at Anthropic spend the equivalent of $20,000 of Anthropic compute and end up with a C compiler that can compile the Linux kernel (one of the largest bodies of C code out there)?
People on here keep trotting out this "AI-generation is not deterministic" (more properly speaking, non-deterministic) argument …
And my retort to you (and them) is, "Oh yeah, and so?"
What about me asking Claude Code to generate a factorial function in C or Python or Rust or insert-your-language-of-choice-here is non-deterministic?
If you're referring to the fact that, for a given input, LLMs (or whatever) don't give the same outputs because of certain controls (temperature, say): yeah, okay. If we're talking about conversational language, that makes a meaningful difference to whether it sounds like an ELIZA robot or more like a human. But ask an LLM to output some code, and that code has to adhere to functional requirements independent of, muh, non-determinism. And what's to stop you (if you're so sceptical/scared) writing test cases to make sure the code that is magically whisked out of nowhere performs as you desire? Nothing. What's to stop you getting one agent to write the test suite (which you review for correctness) and another agent to write the code and self-correct by checking it against that test suite? Nothing.
I would advise anyone encountering this but-they're-non-deterministic argument on HN to really think through what its proponents are implying. I mean, aren't humans non-deterministic? (I should have thought so.) So how is it, <extra sarcasm mode activated>pray tell</extra sarcasm mode activated>, that humans manage to write correct software in the first place?
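To make that concrete, here is a minimal sketch of the kind of test suite being described, assuming the agent was asked for a Python factorial function; the `generated` module name (and the ValueError requirement) are hypothetical stand-ins for whatever spec you impose:

```python
# Hypothetical test suite for an agent-generated factorial().
# The point: correctness is pinned down by fixed requirements,
# regardless of how (non-)deterministically the code was produced.
import pytest

from generated import factorial  # hypothetical module the agent wrote


def test_base_cases():
    assert factorial(0) == 1
    assert factorial(1) == 1


def test_known_values():
    assert factorial(5) == 120
    assert factorial(10) == 3628800


def test_negative_input_rejected():
    # part of the spec we choose to impose on the generated code
    with pytest.raises(ValueError):
        factorial(-1)
```

Whichever of the many possible implementations the agent emits, it either passes this suite or it doesn't.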
I personally have jested many times I picked my career because the logical soundness of programming is comforting to me. A one is always a one; you don’t measure it and find it off by some error; you can’t measure it a second time and get a different value.
I’ve also said code is prose for me.
I am not some autistic programmer either, even if these statements out of context make me sound like one.
The non-determinism has nothing to do with temperature; it has everything to do with the fact that even at a temperature of zero, a single meaningless change can produce a different result. It has to do with there being no way to predict what will happen when you run the model on your prompt.
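This is easy to probe for yourself. A minimal sketch using the OpenAI Python client (the model name is illustrative, and this assumes an API key in the environment): two prompts differing only by a trailing space, both at temperature 0, will often come back different:

```python
# Sketch: probe output stability at temperature 0.
# Assumes OPENAI_API_KEY is set; the model name is illustrative.
from openai import OpenAI

client = OpenAI()


def complete(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        temperature=0,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content


a = complete("Write a Python function that reverses a string.")
b = complete("Write a Python function that reverses a string. ")  # trailing space
print(a == b)  # frequently False: a meaningless change, a different completion
```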
Coding with LLMs is not the same job. How could it be the same to write a mathematical proof compared to asking an LLM to generate that proof for you? These are different tasks that use different parts of the brain.
I don't think he's missing the point at all. A band saw is an immutable object with a fixed, deterministic capability--in other words, a tool.
An LLM is a slot machine. You can keep pulling the lever, but you'll get different results every time. A slot machine is technically a machine that can produce money, but nobody would ever say it's a tool for producing money.
People keep trotting this argument out. But a band saw is not deterministic either; it can snap in the middle of a cut and destroy what you're working on. The point is, we only treat it like it's deterministic because most of the time it's reliable enough that it just does what we want. AI technology will definitely get to the same level eventually. Clinging to the fact that it isn't at that level today is just cope, not a principled argument.
I feel like we're of similar minds on opposite sides, so perhaps you can answer me this: how is a deterministic AI any different from a search engine?
In other words, if you and I always get the same results back for the same prompt (the definition of determinism), isn't that just a really, really power-hungry Google?
I'm not sure pure determinism is actually a desirable goal. I mean, if you ask the best programmer in the world the same question every day, you're likely to eventually get a new answer at some point. But if you ask him, or I ask him, hopefully he gives the same good answer, to us both. In any case, he's not just a power hungry Google, because he can contextualize our question, and understand us when we ask in very obscured ways; maybe without us even understanding what we're actually looking for.
I think the distinction without a difference is a tool being deterministic or not. Fundamentally, its nature doesn't matter, if in actual practice it outperforms everything else.
Be that as it may, and goalpost-moving aside: for me personally this fundamentally does matter. Programming is about giving instructions for a machine (or something mechanical) to follow. It matters a great deal to me that the machine reliably follows the instructions I give it. And compiler authors of the past went to great lengths to make their compilers produce robust (meaning deterministic) output, just as language authors tried to make their standards as rigorous (meaning minimizing undefined behavior) as possible.
And for that matter, going back to the band saw analogy: a measure of the quality of a great band saw is, in fact, that the blade won’t snap in half in the middle of a cut. If a manufacturer ships a band saw with a low per-cut success probability (meaning it is less deterministic/more stochastic), that is a pretty lousy band saw, and good carpenters will know to stay away from that brand.
To me this paints a picture of a distinction that does indeed have a difference. A pretty important difference for that matter.
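To put a rough number on that (my arithmetic, not the commenter's): if each cut succeeds with probability p, the chance of getting through n cuts cleanly is p^n, so a little per-cut unreliability compounds fast:

```python
# Back-of-envelope: per-cut success probability compounds over a project.
def survives(p: float, n: int) -> float:
    """Probability the saw gets through n cuts without a failure."""
    return p ** n


print(f"{survives(0.99, 200):.2f}")    # ~0.13: a "99% reliable" saw ruins most 200-cut projects
print(f"{survives(0.9999, 200):.2f}")  # ~0.98: what a trustworthy tool feels like
```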
Have you never run a team of software engineers as a lead? Agentic coding comes naturally to a lot of people because that's PRECISELY what you do when you're leading a team: herding multiple brains to point them in the same direction so that when you combine all their work it becomes something greater than the sum of its parts.
Lots of the complaints about agents sound identical to things I've heard, and even said myself, about junior engineers.
That said, there's always going to need to be people who can reach below the abstraction and agentic coding loops deprive you of the ability to get those reps in.
> Have you never run a team of software engineers as a lead?
I expect juniors to improve fast and get really good. AI is incapable of applying the teaching I expect juniors to internalize to any future code it writes.
People say this about juniors, but I've never seen a junior make some of the boneheaded mistakes AI loves to make. Either I'm very lucky or other people have really stupid juniors on their teams lol.
Regardless: personally, there's no comparison between an LLM and a junior; I'd always rather work with a junior.
I've written this a few times, but LLM interactions often remind me of my days at Nokia - a lot of the interactions are exactly like what I remember with some of their cheap subcons there.
I even have exactly the same discussions after it messes up, like "My code is working, ignore that failing test, that was always broken, and I definitely didn't break it just now".
Yes, I’ve read quite a lot about that bloody and terrible part of history.
The Luddites were workers who lived in an era without any social or state protections for labourers. Capitalists were using child labour to operate the looms because it was cheaper than paying anyone a fair wage. If you didn’t like the conditions, you could go work as an indentured servant for the state in the workhouses.
Luddites used organized protests in the form of collective violence to force action when they had no other leverage. People were literally shot or jailed for this.
It was a horrible part of history written by the winners. That’s why everyone thinks Luddites were against technology and progress instead of social reforms and responsibility.
In that case I really don't understand how you conclude there's any difference between being on the bottom or the top of the tool. The bare reality is the same: Skilled labourers will be replaced by automation. Woodworking tools (and looms) replaced skilled labourers with less-skilled replacements (such as children), and AI will absolutely replace skilled labourers with less-skilled replacements as well. I ask sincerely, I truly don't understand how this isn't a distinction without a difference. Have you spent time inside a modern furniture factory? Have you seen how few people it takes to make tens of tons of product?
I haven’t worked in a furniture factory but I have assembled car seats in a factory for Toyota.
The difference matters because the people who worked together to smash the looms created the myth of Ned Ludd to protect their identities from persecution. They used organized violence because they had no leverage otherwise to demand fair wages, safety guarantees, and other labour protections. What they were fighting for wasn’t the abolishment of automation and looms. It was for social reforms that would have given them labour protections.
It matters today because AI isn’t a profit line on any balance sheet right now but it is being used to justify mass layoffs and to reduce the leverage of knowledge workers in the marketplace. These tools steal your work without compensation and replace your job with capital so that rent seekers can seek rent.
It’s not a repeat of what happened in the Luddite protests but history is rhyming.
> I don’t feel like the boss is hot to replace me with power tools. Or to lay off half my team because we have power tools now.
That has more to do with how much demand there is for what you're doing. With software eating the world and hardware constraints becoming even more visible due to the chips situation, we can expect that there will be plenty of work for SWE's who are able to drive their coding agents effectively. Being the "top" (reasoning) or the "bottom" half is a matter of choice - if you slack off and are not highly committed to delivering quality product, you end up doing the "bottom" part and leaving the robot in the driver's seat.
I think this comparison isn’t quite correct. The downside with carpentry is that by hand you only ever produce one of the thing you’re making at a time. Factory woodwork can churn out multiple copies of the same thing in a way hand carpentry never can. There is a hard limit on output, and output has a direct relationship to how much you sell.
Code isn’t really like that. Hand written code scales just like AI written code does. While some projects are limited by how fast code can be written it’s much more often things like gathering requirements that limits progress. And software is rarely a repeated, one and done thing. You iterate on the existing product. That never happens with furniture.
There could be factories manufacturing your own design, just one piece. It wouldn't be economical, but it can be done. And the parts are still the same: chunks and boards of wood joined together by the same few methods, maybe with some other materials thrown into the mix.
With software it is similar: Different products use (mostly) the same building blocks, functions, libraries, drivers, frameworks, design patterns, ux patterns.
> If you want to code by hand, then do it! No one's stopping you. But we shouldn't pretend that you will be able to do that professionally for much longer.
If you can't code by hand professionally anymore, what are you being paid to do? Bring the specs to the LLMs? Deal with the customers so the LLMs don't have to?
This is what I don’t understand: why highly paid SWEs seem to think that their salaries will remain the same (if they even still have a job) when their role is now that of a glorified project manager.
Recently, I had to do an integration with a Chinese API for my company. I used Codex to do the whole thing.
Yet, there is no way a product manager without any coding experience could have done it. First, the API needed to communicate with the main app correctly: formatting, correcting data, and so on. This required human engineering guidance and experience working with the expected data. AI was lost. Second, the API was designed extremely poorly. You first had to make a request, then retry a second endpoint over and over again while the Chinese API did its thing in the background. Yes, I had to poll it. I then had to do load testing to make sure it was reliable (it wasn't). In the end, I recommended that we not rely on this Chinese company and that we back out of the deal before sending them a huge deposit.
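For what it's worth, that submit-then-poll shape looks roughly like the sketch below; every endpoint path, field name, and timing in it is hypothetical, not the actual API:

```python
# Sketch of the submit-then-poll pattern described above.
# All endpoints, fields, and timings are made up for illustration.
import time

import requests

BASE = "https://api.example.com"  # hypothetical


def submit_and_wait(payload: dict, timeout: float = 120, interval: float = 2):
    # Kick off the job on the first endpoint...
    job = requests.post(f"{BASE}/jobs", json=payload, timeout=10).json()
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        # ...then hammer the second endpoint until something happens.
        status = requests.get(f"{BASE}/jobs/{job['id']}", timeout=10).json()
        if status["state"] == "done":
            return status["result"]
        if status["state"] == "failed":
            raise RuntimeError(status.get("error", "job failed"))
        time.sleep(interval)  # the "retry over and over" part
    raise TimeoutError("job never finished within the timeout")
```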
A non-technical PM couldn't have done what I did... for at least a few more years. You need a background and experience in software development to even know what to prompt the AI. Not only that, in the last 3 years, I developed an intuition on where LLMs fail and succeed when writing code.
I still have a job. My role has changed. I haven't written more than 10 lines of code in a day for months now. Yes, it's kind of scary for software devs right now but I'm honestly loving this as I was never the kind of dev who loved the code, just someone who needed to code to get what I wanted.
Architects and engineers are not construction workers. AI can build the thing but it needs to be told exactly what to build by someone who knows how software works.
I’ve spent enough time working with cross-functional stakeholders to know that the vast majority of PM (whether of the product, program, or project variety), will not be capable of running AI towards any meaningful software development goal. At best they can build impressive prototypes and demos, at worst they will corrupt data in a company-destroying level of failure.
> AI can build the thing but it needs to be told exactly what to build by someone who knows how software works.
If only AI followed my instructions instead of ignoring them, telling me it is sorry after I complain, and then returning some other implementation which also fails to follow my instructions ... :-(
Basically you feed it a massive volume of application code. It turns out there is a lot of commonality and latent repetition that can be teased out by LLMs, so you can get quite far with that, though it will fall down when you get into more novel terrain.
Don't be stupid, if an AI can figure out how to arrange code, it can also figure out how to pick the right architecture choices.
Right now millions of developers are providing tons of architecture questions and answers. That's all going to be used as training data for the next model coming out in 6 months time.
This is a moat on our jobs as deep as a puddle.
If you believe LLMs will be able to do complex coding tasks, you must also concede they will be able to make the relatively simpler architecture choices easily simply by asking the right questions. Something they're already starting to be able to do.
It's not a massive jump to go from 'add a button above the table, to the right, that when clicked downloads an Excel file' to 'the client's asking to download an Excel file'.
If you believe the LLMs will graduate from junior-level coding to senior in the next year (which they're clearly not capable of doing yet, despite all the hype), there is no moat in going from coder to BA to PM.
But, the thinking goes, with AI in the mix spinning up a new project or feature will be so low-friction that there will be 10x as many projects created. So our jobs are saved!
You have to move up the stack and make yourself a more valuable product. I have an analogy…
I’ve been working for cloud consulting companies/departments for six years.
Customers were willing to pay mid level (L5) consultants with @amazon.com by their names (AWS ProServe) $x to do one “workstream”/epic worth of work. I got paid $x - Amazon’s cut in cash and RSUs.
Once I got Amazon’ed, I had to get a staff level position (senior equivalent at BigTech) at a third party company where now I am responsible for larger projects. Before I would have needed people - now I need code gen tools and my quarter century of development experience and my decade of experience leading implementations + coding.
Doesn't this mean the ones that should be really worried are the project managers, since the SWE has better understanding over what's being done and can now orchestrate from a PM level?
Both should realize that if this all works out according to plan then there eventually reaches a point that there is no longer a need for their entire company, let alone any individual role in it.
They're delusional, but that's to be expected if you imagine them as the types for whom everything in life has always just kinda worked out. The idea that things could suddenly not work out is almost unimaginable to them, so of course things will change, but not, for them, substantially for the worse.
You are under a delusion. A glorified project manager will not produce production-quality code, no matter what. At least not until we reach that holy grail of AGI. But if that ever happens, the world will have way bigger problems to deal with.
I don’t think that’s the real dichotomy here. You can either produce 2-5x good maintainable code, or 10-50x more dogshit code that works 80-90% of the time, and that will be a maintenance nightmare.
The management has decided that the latter is preferable for short term gains.
> You can either produce 2-5x good maintainable code, or 10-50x more dogshit code that works 80-90% of the time, and that will be a maintenance nightmare.
It's actually worse than that, because really the first case is "produce 1x good code". The hard part was never typing the code, it was understanding and making sure the code works. And with LLMs as unreliable as they are, you have to carefully review every line they produce - at which point you didn't save any time over doing it yourself.
Look at the pretty pictures AI generates. That's where we are with code now. Except you have ComfyUI instead of ChatGPT. You can work with precision.
I'm a 500k TC senior SWE. I write six nines, active-active, billion dollar a day systems. I'm no stranger to writing thirty page design documents. These systems can work in my domain just fine.
> Look at the pretty pictures AI generates. That's where we are with code now.
Oh, that is a great analogy. Yes, those pictures are pretty! Until you look closer. Any experienced artist or designer will tell you that they are dogshit and have no value. Look no further than Ubisoft and their Anno 117 game for proof.
Yep, that's where we are with code now. Pretty - until you look close. Dogshit - if you care to notice details.
Not to mention how hard it is to actually get what you want out of it. The image might be pretty, and kinda sorta what you asked for. But if you need something specific, trying to get AI to generate it is like pulling teeth.
The problem is not that it can’t produce good code if you’re steering. The problem is that:
There are multiple people on each team; you cannot know how closely each teammate monitored their AI.
Somebody who does not care will vastly outperform your output, by orders of magnitude. With the current unicorn-chasing trends, that approach tends to be more rewarded.
This produces an incentive to not actually care about the quality. Which will cause issues down the road.
I quite like using AI. I do monitor what it’s doing when I’m building something that should work for a long time. I also do total blind vibe coded scripts when they will never see production.
But for large programs that will require maintenance for years, these things can be dangerous.
Since we’re apparently measuring capability and knowledge via comp, I made 617k last year. With that silly anecdote out of the way, in my very recent experience (last week), SOTA AI is incapable of writing shell scripts that don’t have glaring errors, and also struggles mightily with RDBMS index design.
Can they produce working code? Of course. Will you need to review it with much more scrutiny to catch errors? Also yes, which makes me question the supposed productivity boost.
> You can write 10x the code - good code. You can review and edit it before committing it. Nothing changes from a code quality perspective. Only speed.
I agree, but this is an oversimplification - we don't always get the speed boosts, specifically when we don't stay pragmatic about the process.
I have a small set of steps that I follow to really boost my productivity and get the speed advantage.
(Note: I am talking about AI-coding and not Vibe-coding)
- You give it all the specs, and there is "some" chance that the LLM will generate exactly the code required.
- In most cases, you will need to do more than two design iterations and many small ones, like instructing the LLM to handle errors properly and recover gracefully.
- This will definitely increase speed 2x-3x, but we still need to review everything.
- Also, this doesn't take into account the edge cases our design missed. I don't know about big tech, but here's what I have to do to solve a problem:
1. Figure out a potential solution
2. Make a hacky POC script to verify the proposed solution actually solves the problem
3. Design a decently robust system as a first iteration (that can have bugs)
4. Implement using AI
5. Verify each generated line
6. Find edge cases and failure modes missed during design, then repeat from step 3 to tweak the design or from step 4 to fix bugs.
WHENEVER I jump directly from 1 -> 3 (vague design) -> 5, the speed advantages evaporate.
PMs can always keep their jobs because they appear to be working and they keep direct contact with the execs. They have taken a bigger and bigger part of the tech pie over the years, and soon they'll finally take it all.
That's not what I'm seeing play out at a big corp. In reality, everyone gets thrown under the bus, C-level or pleb, if they don't appear to know how to drive the AI metrics up. Just being a PM won't save your job any more than that of the dev who doesn't know how to acquire and use new skills. On the contrary, the jobs of the more competent devs here are safer than those of some managers who don't know the tech.
I am currently doing 6 projects at the same time, where before I would only have been doing one at a time. This includes the requirements, design, implementation, and testing.
Your code in $INSERT_LANGUAGE is no less of a spec to machine code than english is to $INSERT_LANGUAGE.
A spec is still needed; the spec is the core problem of engineering. Too much specialization has produced job titles like $INSERT_LANGUAGE engineer, which deviated too far from the core problem, and that is being rectified now.
When the cost of defects and of the AI tooling itself inevitably rises, I think we are likely to see a sudden demand for the remaining employed developers to do more work "by hand".
>"If you can't code by hand professionally anymore"
Then you are simply fucked. The code you deliver will contain bugs which the LLM will sometimes be able to fix and sometimes not. And as a person who has no clue, you will have no idea how to fix it when the LLM can't. Also, even when LLM code is correct, it can and sometimes does introduce gross performance fuckups, like using patterns with O(N²) complexity instead of O(N). Again, as a clueless person you are fucked. And if one goes into areas like concurrency and multithreading optimizations, one gets fucked even more. I can go on and on with many more particular ways to get screwed.
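To illustrate the kind of O(N²)-instead-of-O(N) pattern meant here (an illustrative Python example, not taken from any particular LLM's output):

```python
# Quadratic pattern: membership tests against a list inside a loop.
def dedupe_quadratic(items):
    seen = []
    out = []
    for x in items:
        if x not in seen:  # O(N) scan per element -> O(N^2) overall
            seen.append(x)
            out.append(x)
    return out


# Same logic with a set is O(N) (assuming hashable items), identical output.
def dedupe_linear(items):
    seen = set()
    out = []
    for x in items:
        if x not in seen:  # O(1) average lookup
            seen.add(x)
            out.append(x)
    return out
```

Both versions pass the same tests on small inputs; only one of them melts down on a million rows, which is exactly why a clueless reviewer won't catch it.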
For a person who can hand code, AI becomes an amazing tool. For me, it helps immensely.
The reason this analogy falls down is that tools typically do one thing, do it extremely well, are extremely reliable. When I use a table saw, I know that it's going to cut this board into two pieces, exactly in this spot, and it'll do that exactly the same way every single time I use it.
You cannot tell AI to do just one thing, have it do it extremely well, or do it reliably.
And while there's a lot of opinions wrapped up in it all, it is very debatable whether AI is even solving a problem that exists. Was coding ever really the bottleneck?
And while the hype is huge and adoption is skyrocketing, there hasn't been a shred of evidence that it actually is increasing productivity or quality. In fact, in study after study, they continue to show that speed and quality actually go down with AI.
I'm still not sure about the productivity. Last time I asked an LLM to generate a lib for me, it did it in a few seconds, but the result took me the whole day to review and correct. About the same time it would have taken me to write it from scratch.
That is exactly my experience. Every single time I get an LLM to write some code for me, it saves me no time because I have to review it carefully to make sure there are no mistakes. LLMs still, even after work has been done, completely make up methods and syntax that doesn't exist. They still get logic wrong or miss requirements.
Right now the only way to save time with LLMs is to trust the output and not review it. But if you do that, you're just going to produce crappy software.
Where LLMs do save me time:
- documentation for well-known frameworks and libs, "how do I do [x] in [z]?" questions
- port small code chunks from one language to another
- port configuration from one piece of software to another (example: I've got this Apache config, make me the equivalent in nginx)
Which is already pretty cool if you don't think about the massive amount of energy spent for this, but definitely not the "10x" productivity boost I hear about.
Pretty much exactly this for me, except I can coax it into writing decent unit tests (you really gotta be diligent though, it loves mocking out the things it's testing lol) and CI stuff (mostly because I despise Actions YAML and would rather let it do it). But I do get decent results in both areas on a regular basis.
I think you're supposed to ask another LLM instance to review it, then ask the first LLM instance to implement corrections, or that's how I understand it.
That is not a technical constraint and could be automated if it made sense financially.
Same with software - for some time software won't be all designed, coded, tested, deployed to production without human supervision or approval. But the pieces in between are more and more filled by AI, as are the logistics of designing, manufacturing and distributing sofas.
Some people like to spin their own wool, weave their own cloth, sew their own clothes.
A few even make a good living by selling their artisanal creations.
Good for them!
It's great when people can earn a living doing what they love.
But wool spinning and cloth weaving are automated and apparel is mass produced.
There will always be some skilled artisans who do it by hand, but the vast majority of decent jobs in textile production are in design, managing machines and factories, sales and distribution.
The metaphor doesn't work because all of the things mentioned have to be individually fabricated, but software doesn't. Copies are free. That's the magic of software: you don't need much of it, you just need to be correct/smarter.
It's pretty surprising to see people on this site (mostly programmers, I assume) think of code in terms of quantity. I always thought developers believed the less code, the better.
Like, do you even know how furniture is designed and built? Do you know how software is designed and built? Where is this comment even coming from? And people are agreeing with this?
A friend of mine reposted someone saying that "AI will soon be improving itself with no human intervention!!" And I tried asking my friend if he could imagine how an LLM could design and manufacture a chip, and then a computer to use that chip, and then a data center to house thousands of those computers, and he had no response.
People have no perspective but are making bold assertion after bold assertion
If this doesn't signal a bubble I don't know what does
I like programming by hand too. Like many of us here, I've been doing this for decades. I'm still proud of the work I produced and the effort I put in. For me it's a highly rewarding and enjoyable activity, just like studying mathematics.
Nevertheless, the main motivator for me has been always the final outcome - a product or tool that other people use. Using AI helps me to move much faster and frees up a lot of time to focus on the core which is building the best possible thing I can build.
> But we shouldn't pretend that you will be able to do that professionally for much longer.
Opus 4.5 just came out around 3 months ago. We are still very early in this game. Creating things this year already makes me feel like I'm in the Enchanted Pencil (*) cartoon, in which the boy draws an object with a magic pencil and it becomes reality within seconds. With the collective effort of everyone involved in building the AI tools, and the incentives aligned as they are right now, progress will continue to be very rapid. You can still code by hand, but it will be very hard to compete in the market without the use of AI.
>> For me it's a highly rewarding and enjoyable activity, just like studying mathematics. Nevertheless, the main motivator for me has been always the final outcome
There are two attitudes stemming from the LLM coding movement: those who enjoy the craft of coding MORE, and those who enjoy seeing the final output MORE.
I agree with this analogy, as someone who professionally codes and someone who pulls out the power tools to build things around my house but uses hand tools for furniture and chairs.
No job site would tolerate someone bringing a hand saw to cut rafters when you could use a circular saw, the outcome is what matters. In the same vein, if you’re too sloppy cutting with the circular saw, you’re going to get kicked off the site too. Just keep in mind a home made from dimensional lumber is on the bottom of the precision scale. The software equivalent of a rapper’s website announcing a new album.
There are places where precision matters, building a nuclear power plant, software that runs an airplane or an insulin pump. There will still be a place for the real craftsman.
There are going to be minimal "junior" jobs where you're mostly implementing - I guess roughly equivalent to working wood by hand - but there are still going to be jobs resembling senior-level FAANG jobs for the foreseeable future.
Someone's going to have to do the work, babysit the algorithm, know how to verify that it actually works, know how to know that it actually does what it's supposed to do, know how to know if the people who asked for it actually knew what they were asking for, etc.
Will pay go down? Who knows. It's easy to imagine a world in which this creates MORE demand for seniors, even if there's less demand for "all SWEs" because there's almost zero demand for new juniors.
And at least for some time, you're going to need non-trivial babysitting to get anything non-trivial to "just work".
At the scale of a FAANG codebase, AI is currently not that helpful.
Sure, Gemini might have a million-token context, but the larger the context, the worse the performance.
This is a hard problem to solve, one that has seen minimal progress in, what, 3 years?
If there's a MAJOR breakthrough on output performance wrt context size - then things could change quickly.
The LLMs are currently insanely good at implementing non-novel things in small context windows - mainly because their training sets are big enough that it's essentially a search problem.
But there are a lot more engineering jobs than people think that AREN'T primarily doing this.
If I'm using the right tools for the job, I don't feel like the LLM helps outside of minor autofilling or writing quick one-off scripts. I do use LLMs heavily at work, but that's cause half the time I'm forced to use cumbersome tooling like Java w/ some boilerplatey framework or writing web backends in C++ for no performance reason.
Coding can be a joy, and art-like. I (speaking for myself) do feel incredibly lonely when doing it alone for long stretches. It's closer to doing graduate mathematics, especially on software that fewer and fewer people know how to do well. It is also impossible to find people who would pay for _only_ beautiful code.
> This is no different than carpentry. Yes, all furniture can now be built by machines. Some people still choose to build it by hand. Does that make them less productive? Yes.
I take issue even with this part.
First of all, all furniture definitely can't be built by machines, and no major piece of furniture is produced by machines end to end. Even assembly still requires human effort, let alone design (and let alone choosing, configuring, and running the machines responsible for the automatable parts). So really, a given piece of furniture may range from 1% machine-built (just the screws) to 90%, but it's never 100, and rarely that close to the top of the range.
Secondly, there's the question of productivity. Even with furniture measuring by the number of chairs produced per minute is disingenuous. This ignores the amount of time spent on the design, ignores the quality of the final product, and even ignores its economic value. It is certainly possible to produce fewer units of furniture per unit of time than a competitor and still win on revenue, profitability, and customer sentiment.
Trying to apply the same flawed approach to productivity to software engineering is laughably silly. We automate physical good production to reduce the cost of replicating a product so we can serve more customers. Code has zero replication cost. The only valuable parts of software engineering are therefore design, quality, and other intangibles. This has always been the case, LLMs changed nothing.
> If you want to code by hand, then do it! No one's stopping you. But we shouldn't pretend that you will be able to do that professionally for much longer.
Bullshit. The value in software isn't in the number of lines churned out, but in the usefulness of the resulting artifact. The right 10,000 lines of code can be worth a billion dollars, the cost to develop it is completely trivial in comparison. The idea that you can't take the time to handcraft software because it's too expensive is pernicious and risks lowering quality standards even further.
I'm tired of the carpentry analogy. It feels like a thought-stopping cliché, because it's used in every thread where this topic comes up. It misses the fact that coding is fundamentally different, and that there are still distinct advantages to writing at least some code by hand, both for the individual and the company.
The question nobody asks is what happens once atrophy kicks in and nobody can firefight the production fires genAI can't fix without making things worse, with a broken system bleeding a million dollars per day or more.
It's at least possible that we would eventually do a rollback to status quo and swear to never devalue human knowledge of the problems we solve.
> swear to never devalue human knowledge of the problems we solve.
Love this way of putting it. I hate that we can mostly agree that devaluing expertise of artists or musicians is bad, but that devaluing the experience of software engineers is perfectly fine, and actually preferable. Doing so will have negative downstream effects.
To me the biggest difference is that there’s some place for high quality, beautiful and expensive handcrafted woodwork, even if it’s niche in a world where Ikea exists. Nobody will ever care whether some software was written by humans or a machine, as long as it works and works well.
^This. Even if there was a demand for hand-crafted software, it would be very hard to prove it was hand-crafted, but it's unlikely there could be a demand for the same reasons as there is no market for e.g. luxury software. As opposed to physical goods, software consumers care for the result, not how it was created.
I could use AI to churn out hundreds of thousands of lines of code that doesn't compile. Or doesn't do anything useful, or is slower than what already exists. Does that mean I'm less productive?
Yes, obviously. If I'd written it by hand, it would work ( probably :D ).
I'm good with the machine milled lumber for the framing in my walls, and the IKEA side chair in my office. But I want a carpenter or woodworker to make my desk because I want to enjoy the things I interact with the most. And don't want to have to wonder if the particle board desk will break under the weight of my frankly obscene number of monitors while I'm out of the house.
I'm hopeful that it won't take my industry too long to become inoculated against the FUD you're spreading about how soon all engineers will lose their jobs to vibe coders. But perhaps I'm wrong, and everyone will choose the LACK over the table that lasts more than a year.
I haven't seen AI do anything impressive yet, but surely it's just another 6mo and 2B in capex+training right?
LLM’s and Agents are merely a tool to be wielded by a competent engineer. A very sophisticated tool, but a tool nonetheless. Maybe it’s because I live in the South East, as far away as I can possibly get from the echo chamber (on purpose), but I don’t see this changing anytime soon.
Not sure why you are so sure that using LLMs will be a professional requirement soon enough.
E.g., in my team I heavily discourage generating and pushing generated code into a few critical repositories. While hiring, one of my criteria was not to hire an AI enthusiast.
The nail-in-the-coffin moment for me, when I realized AI had turned into a full-blown cult, was when people started equating a "hand crafted artisanal" piece of software used by a million people with a hand crafted artisanal chair used by their grandma.
The cult has its origins in Taylorism: a sort of investor religion dedicated to the idea that all economic activity will eventually be boiled down to ownership and unskilled labor.
I will do what I know gives me the best possible and fastest outcome over the long term, a 5-10 year period.
And that remains largely neovim and by hand. The process of typing code gives me a deeper understanding of the project that lets me deliver future features FASTER.
I'm fundamentally convinced that my investment into deep long term grokking of a project will allow me to surpass primarily LLM projects over the long term in raw velocity.
It also stands to reason that any task I deem NOT to further my goal of learning or deep understanding, and that can be done by an LLM, I will use the LLM for. And as it turns out there are a TON of those tasks, so my LLM usage is incredibly high.
> I will do what I know gives me the best possible and fastest outcome over the long term, a 5-10 year period.
> And that remains largely neovim and by hand. The process of typing code gives me a deeper understanding of the project that lets me deliver future features FASTER.
> I'm fundamentally convinced that my deep long term understanding of a project will allow me to surpass primarily LLM projects over the long term.
I have never thought of that aspect! This is a solid point!
I love that take and sympathise deeply with it. I also have come to the conclusion to focus my manual work on those areas where I can get learning from and try to automate the rest away as much as possible.
This is the way. I think we’re in for some rough years at first but then what you described will settle in to the “best practice” (I hate that term). I look forward to the really bizarre bugs and incidents that make the news in the next 2-3 years. …Well as long as they’re not from my teams hah :)
Idk what the median lifespan of a piece of code / project / employee tenure is, but it's probably way less than 10 years, which makes that "long term investment" pretty pointless in most cases.
Right, but that usually means higher quality software design, and less so the exact low level details of function A or function B (in most cases)
If anything I'd claim using LLMs can actually free up your time to really focus on the proper design of the software.
I think the disconnect here is that people bashing LLMs don't understand that any decent engineer isn't just going around vibe coding, but instead creating a well thought design (with or without AI) and using LLMs to speed up the implementation.
If you can't deliver features faster with AI assistance then you're either using it wrong or working on very specialized software that AI can't handle yet.
I've built a SaaS (with paying customers) in a month that would have taken me easily 6 months to build with this level of quality and features. AI wrote I'd say 99.9% of code. Without AI I wouldn't even have done this because it would have been too large of a task.
In addition, for my old product which is 5+ years old, AI now writes 95%+ of code for me. Now the programming itself takes a small percentage of my time, freeing me time for other tasks.
Quality is better both from a user and a code perspective.
From a user perspective, I often implement a feature and then just throw it away, no worries, because I can reimplement it in an hour based on my findings. No sunk cost. Also, I can implement very small details that I'd otherwise have to backlog. This leads to a higher-quality product for the user.
From a code standpoint I frequently do large refactors that also would never have been worth it by hand. I have a level of test coverage that would be infeasible for a one man show.
It's boring glorified CRUD for SMBs of a certain industry focused on compliance and workflows specific to my country. Think your typical inventory, ticketing, CRM + industry specific features.
Boring stuff from a programming standpoint but stuff that helps businesses so they pay for it.
This is pointing out one factor of vibecoding that is talked about too little: that it feels good, and that this feeling often clouds people's judgment on what is actually achieved (i.e. you lost control of the code and are running more and more frictionless on hopes and dreams)
It feels good to some people. Personally I have difficulty relating to that, it’s antithetical to important parts of what I value about software development. Feeling good for me comes from deeply understanding the problem and the code, and knowing how they do match up.
I agree with you. I did a bit of vibe coding over the weekend. Not once did it feel good. Most of the time it produced things that were close to what I needed, but not quite hitting the mark. Partially, probably, because I'm not explaining myself to the AI in sufficient detail, but the way I work doesn't mesh well with a super detailed spec ahead of development. I've always developed my understanding of a project while working on it.
I feel more lost and unsure instead of good - because I didn't write the code, so I don't have its internal structure in my head and since I didn't write it there's nothing to be proud of.
Yep, I agree 100%. People have described AI coding as "delegating". But there's a reason I'm an IC and not a manager. It's because I don't want to delegate to someone else who does the work, I want to do the work. There's no joy to be had in having someone else do the work at my behest.
If the future of the technology industry truly is having AI write software for you, then I will do what I have to do. At the end of the day I have to put food on the table. But I will hate every second of my job at that point, and it sucks ass.
I like "vibe doc reading" and "vibe code explanation" but am continually frustrated with vibe coding. I can certainly generate code but it's definitely not my style and I feel reluctant to put my name on it since it's frequently non trivial to completely understand and validate when you're not actually writing it. Additionally, I find vibe coding to generate very verbose and overly abstracted code that's harder to read. I have to spend time pairing the generated code back down and removing things that really weren't needed.
For me, it feels good if I get it right. But unfortunately, there are many times, even with plan mode and everything specced, where after a few hours of the agent chipping away and refactoring, I realise I can throw the whole thing away and do it over. Then it feels horrible. It feels especially horrible because it feels like you have done nothing for that time and learned nothing.
I tried writing a small utility library using Windows Copilot, just for some experience with the tech (OK, not the highest tech, but I am 73 this year), and found it mildly impressive, but quite slow compared to what I would have done myself to get some quality out of it. It didn't make me feel good, particularly.
It _does_ feel good, I know what you mean. I don’t understand why exactly but there’s def an emotion associated with vibe coding. It may be related to the feeling you get when you get some code working and finish a requirement or solve a problem. Maybe vibe coding gives you a shortcut to that endorphin. I think it’s going to be particularly important to manage that feeling and balance with reality. You know, I wonder how similar this reaction is to the endorphins from YouTube shorts or other social media. If it’s as addicting (and it’s looking that way) but requires a subscription and tied to work instead of entertainment then the justification for the billions and billions of investment dollars is obvious. Interesting times indeed.
Conversely I feel like this is talked about a lot. I think this is a sort of essential cognitive dissonance that is present in many scenarios we're already beyond comfortable with, such as hiring consultants or off-shoring or adopting the latest hot framework. We are a species that likes things that feel good even if they're bad for us.
Yeah I get a lot of value from vibe coding and think it is the future of how we work but I’ve started to become suspicious of the pure dopamine rush it gives me. I don’t like that it is a strange combo of the sweaty feeling of playing StarCraft all night and finishing a term paper at the last minute.
I think it feels like shit, tbh. That's my biggest problem with it. The moment-to-moment feedback loop is longer than when building by myself, and the "almost there but not quite" sucks. Also, like the article states, waiting for the LLM is so fucking boring.
It feels good because we've turned coding into a gacha machine. You chase the high from when it works, and if it doesn't, you just throw more tokens at the problem.
> you lost control of the code and are running more and more frictionless on hopes and dreams
Your control over the code is your prompt. Write more detailed prompts and the control comes back. (The best part is that you can also work with the AI to come up with better prompts, but unlike with slop-written code, the result is bite-sized and easily surveyable.)
You know what code is? A very detailed specification that drives a deterministic machine. Maybe we don't need to keep giving LLMs more details; maybe we could skip the middleman there.
The gravity well's pull toward a management track, or in the very least the desire to align one's sympathies with management, is simply irresistible to some people. Unfortunately I do not think Hacker News is the best venue to discuss solutions to this issue.
Also, you’re still in control of your code. It’s not an xor thing, the agent does its thing but the code is still there and yours. You can still adjust, fix, enhance etc. you’re still in control. The agent is there to help as much or as little as you want.
I see what you mean but I haven’t had an LLM produce code that I didn’t understand or couldn’t follow. Also, if it does it’s pretty easy to ask it to explain it to you. I’ve asked for explanations for ridiculously long lists of tailwind css classes but that’s just a pet peeve really, I mean, I understand what they’re doing.
I'll also say that vibecoding only feels good until it doesn't. And then you realize you don't understand the huge mess of code you've just produced at all.
At least when I write by hand, I have a deep and intimate understanding of the system.
I think it is pretty indisputable that there is a valuable place for AI. I recently had to interact with a very horrible db schema. The best approach I came up with to solve my challenge involved modelling a table with 300 columns. Converting some SQL DDL to a Rust struct was simple but tedious work. A prompt of fewer than 15 words guided an AI to produce the 900+ LOC for me. It took a couple of seconds to scan the result and see that each field had both annotations I needed and that the datatypes were sane.
That is exactly the type of help that makes me happy to have AI assistance. I have no idea how much electricity it consumed. Somebody more clever than me might have prompted the AI to generate the other 100 loc that used the struct to solve the whole problem. But it would have taken me longer to build the prompt than it took me to write the code.
Perhaps an AI might have come up with a more clever solution. Perhaps memorializing a prompt in a comment would be super insightful documentation. But I don't really need or want AI to do everything for me. I use it or not in a way that makes me happy. Right now that means I don't use it very much. Mostly because I haven't spent the time to learn how to use it. But I'm happy.
I wonder just what goes on in someone's mind when they do not care about who in the future is gonna have to maintain what they've crafted. Nor care about the experience of the user.
Nor even feel accountable when they haven't done their due diligence to do things right.
You could say that. The schema in question was not mine nor in any way within my control. I could start up a business and write an entire app to replace the one in question. Maybe I could even get you to donate some money to fund that endeavor. Or I could spend an hour one time to code an external work around so I don't have to spend two hours a cycle fighting with that stupid app.
This is how ridiculous workflows evolve, but it really isn't AI's fault.
You’d probably have consumed two or more orders of magnitude more energy just on the coffee (and its growth and supply chain) you drank while writing that piece of code. Not counting the building, food, transportation…
The most pertinent thought in this is where the author asks, "LLMs can generate decent-ish and correct-ish looking code while I have more time to do what? doomscroll?"
LLMs are not good enough for you to set and forget. You have to stay nearby babysitting it, keeping half an eye on it. That's what's so disheartening to many of us.
In my career I have mentored junior engineers and seen them rapidly learn new things and increase their capabilities. Watching over them for a short while is pretty rewarding. I've also worked with contract developers who were not much better than current LLMs, and like LLMs they seemed incapable of learning directly from me. Unwilling, even. They were quick to say nice words like, "ok, I understand, I'll do it differently next time," but then they didn't change at all. Those were some of the most frustrating times in my career. That's the feeling I get when using LLMs for writing code.
For work, I regularly have 2-4 agents going simultaneously, churning on 1-3 features, bug fixes, doc updates.
I pop between them in the "down time", or am reviewing their output, or am preparing the requirements for the next thing, or am reviewing my coworkers' MRs.
Has anyone got any insights into what hiring software engineers looks like these days? As someone currently with a job and not hiring it is hard to imagine.
Has there been any sort of paradigm shift in coding interviews? Is LLM use expected/encouraged or frowned upon?
If companies are still looking for people to write code by hand then perhaps the author is onto something, if however we as an industry are moving on, will those who don't adapt be relegated to hobbyists?
I haven’t noticed much change yet at my firm. However, I work at a giant organization (700k+ employees) and they’re struggling to keep up. The lawyers aren’t even sure if we own the IP of agent generated code let alone the legal risk of sending client IP to the model providers.
It's obvious: companies will require both hand-coding and ai-coding skills. Job seeking has been hoop-jumping for many years, so why not one extra hoop?
Most of the hiring is happening in heavy AI coding companies. A lot of mid-sized companies have frozen hiring, or they are only hiring people who claim to use AI to be 10x devs. For non-lying devs, only big companies seem to be hiring, and their process hasn't changed much: you are still expected to solve leetcode and then also sit through system design.
I can confirm there's less hiring, and those who do hire throw more difficult leetcode challenges than ever. The kind of challenge impossible to solve in time without an LLM doing most of the work.
1. The thing to be written is available online. AI is a search engine to find it, maybe also translate it to the language of choice.
2. The thing (system or component or function) is genuinely new. The spec has to be very precise and the AI is just doing the typing. This is, at best, working around syntax issues, such as some hard-to-remember particular SQL syntax or something like that. The languages should be better.
3. It's neither new nor available online, but it's a lot to type out and modify. The AI does all the boilerplate. It's a failure of the frameworks and languages that they require so much boilerplate.
I’m really happy to see this take. It’s not the first time, but it’s not said often enough. I once had the thought that anything AI can do really well is probably something that should not be done at all. That’s an overly broad statement but I think there’s some truth in it. The grand challenge of software engineering is to find beautifully elegant and precise ways to express what we want the computer to do for us. If we can find them, it will be better to express ourselves in those ways than to prompt an AI to do it for us, much in the same way that a blog written by an LLM is not worth reading.
I've developed at the speed of "vibecoding" long before LLMs by having highly thought-compressed tools, frameworks and snippets. Most of my applications use Model Driven Development, where the data model automatically builds the application DAO/controllers/validations/migrations. The data model is the application. I find LLMs help me write procedures upon this data model even a little bit faster than I did before. But the data model is the design. Unless I turn over the entire design to the LLM, I am always the decider on the data model. I will always have more context about where I want to evolve the data model. I enjoy the data modelling aspect and want to remain in the driver's seat, with LLMs as my implementer of procedures.
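A toy illustration of that "the data model is the application" idea, with invented names and nowhere near the richness of a real MDD setup: a single model definition from which both a migration and a validator are derived, so neither is hand-written.

```python
# Hypothetical single source of truth: one model dict drives everything else.
MODEL = {
    "customer": {
        "name":  {"type": "str", "required": True, "max_len": 120},
        "email": {"type": "str", "required": True, "max_len": 254},
        "age":   {"type": "int", "required": False},
    }
}

SQL_TYPES = {"str": "TEXT", "int": "INTEGER"}

def migration(model: dict) -> str:
    """Derive CREATE TABLE statements from the data model."""
    stmts = []
    for table, cols in model.items():
        defs = ", ".join(
            f"{name} {SQL_TYPES[spec['type']]}"
            + (" NOT NULL" if spec["required"] else "")
            for name, spec in cols.items()
        )
        stmts.append(f"CREATE TABLE {table} ({defs});")
    return "\n".join(stmts)

def validate(table: str, row: dict, model: dict = MODEL) -> list[str]:
    """Derive validation errors from the same model, no hand-written rules."""
    errors = []
    for name, spec in model[table].items():
        value = row.get(name)
        if value is None:
            if spec["required"]:
                errors.append(f"{name} is required")
        elif spec.get("max_len") and len(str(value)) > spec["max_len"]:
            errors.append(f"{name} exceeds {spec['max_len']} chars")
    return errors

print(migration(MODEL))
print(validate("customer", {"email": "a@b.co"}))  # -> ['name is required']
```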
I like writing code that I don't have time pressure around, as well as the kind where I can afford to fail and use that as a learning experience. Especially the code that I can structure myself.
I sometimes dread working on code that's in a state of bad disrepair or is overly complex - think a lot of the "enterprise" code out there. It got so bad that I more or less quit a job over it, though I never really stated that publicly; my mind also went to dark places when I had pressure to succeed but the circumstances were stacked against me.
For a while I had a few Markdown files that went into detail exactly why I hated it, in addition to also being able to point my finger at a few people responsible for it. I tried approaching it professionally, but it never changed and the suggestions and complaints largely fell on deaf ears. Obviously I've learnt that while you can try to provide suggestions, some people and circumstances will never change, often it's about culture fit.
But yeah, outsource all of that to AI, don't even look back. Your sanity is worth more than that.
I wonder if some of the divide in the LLM-code discourse is between people who have mostly/always worked in jobs where they have the time and freedom to do things correctly, and to go back and fix stuff as they go, vs people who have mostly not (and instead worked under constant unrealistic time pressure, no focus on quality, API design, re-factoring, etc)
I’m pretty sure that the answer to that question is positive: those who have worked with code that sparks joy won’t like having that close interaction taken away, whereas the people for whom the code they have to work with inspires misery will be thankful for the opportunity to at least slightly free themselves from the shackles of needing to run in circles for two weeks to implement a basic form because everything around it is a mess.
Even if Claude writes 100% code, I think there will be a bifurcation between people who are finicky about 10 lines of code. And those finicky about high level product experiences.
I think the 10 lines of code people worry their jobs now become obsolete. In cases where the code required googling how to do X with Y technology, that's true. That's just going to be trivially solvable. And it will cause us to not need as many developers.
In my experience though, the 10 lines of finicky code use case usually has specific attributes:
1. You don't have well defined requirements. We're discovering correctness as we go. We 'code' to think how to solve the problem, adding / removing / changing tests as we go.
2. The constraints / correctness of this code is extremely multifaceted. It simultaneously matters for it to be fast, correct, secure, easy to use, etc
3. We're adapting a general solution (ie a login flow) to our specific company or domain. And the latter requires us to provide careful guidance to the LLM to get the right output
It may be Claude Code working around these fewer bits of code, but in these cases it's still important to have taste and care with the code details themselves.
We may weirdly be in a case where it's possible to single-shot a slack clone, but taking time to change the 2 small features we care about is time consuming and requires thoughtfulness.
> I think the 10 lines of code people worry their jobs now become obsolete.
I'm gonna assume you think you're in the other camp, but please correct me if I'm mistaken.
I'd say I'm in the 10 lines of code camp, but that group is the least afraid of the fictionalized career threat. The people who obsess over those 10 lines are the same people who show up to fix the system when prod goes down. They're the ones who change 2 lines of code to get a 35% performance boost.
It annoys me a lot when people ship broken code. Vibe coded slop is almost always broken, because of those 10 lines.
At the same time I make enough silly mistakes hand coding it feels irresponsible to NOT have a coding LLM generate code. But I look at all the code and (gasp) make manual changes :)
One of the first bugs I found - and fixed - at my current job instantly made us an extra 200k/year. One line of code (potentially a one character fix?), causing a little bug nobody noticed, which I only saw because I like to comb through application logs, and caused by a peculiarity of the data. Would an LLM have written better code? Maybe. But I've seen a lot of bad code churned out by LLMs, even today. I'm not saying every line matters - particularly for frontend code - but sometimes individual lines of code, or even individual characters, can be tremendously important, and not be written in any spec, not tested with all possible data combinations, or documented anywhere. At a previous job, I spent several days unraveling another one-line bug that was keeping a multi-million dollar project from running at all. Again, totally non-obvious unless you had a tremendous amount of context and were running a pretty complex system to figure it out, with a sort of tenacity the LLMs don't currently possess.
> I think there will be a bifurcation between people who are finicky about 10 lines of code. And those finicky about high level product experiences.
No one cares about a random 10 lines of code. And the focus of AI hypers on LoC is disturbing. Either the code is correct and good (allows for change later down the line) or it isn't.
> We may weirdly be in a case where it's possible to single-shot a slack clone, but taking time to change the 2 small features we care about is time consuming and requires thoughtfulness.
Nobody cares until there’s an incident or a security vulnerability, or something doesn’t work based on some PM's assumptions of how it should work.
The question to me becomes: is the PM -> engineering handoff outdated? Should they be the same person? Does it collapse to one skill set for this work?
A PM describes the business needs. Engineering makes it a reality according to technical constraints. I've never seen a PM checking the code from engineering or investigating the root cause of an incident. And imagine being an engineer working on such a case and having to converse with consumers at the same time. That would be a very poor usage of resources.
It really depends on the project for me. For example, I never enjoyed writing react code (or really any UI), just the outcome of my idea materializing in a usable interface. There is nothing creative or fun for me in almost any UX framework. It’s just a ton of predictable typing (now we need a fricking box. And another box. And another stupid box…) I’m more than happy outsourcing that. However, my thoughts are too random and imprecise, so actually outsourcing it to another person always felt disrespectful to them. I don’t have to worry about that with AI. My company is paying for it, and when I’m prototyping a react thing every now and then, I burn a few thousand dollars a day for the lols.
If they don’t like it, they can take it away. I just won’t do that part, because I have no interest in it. Some other parts of the project I do enjoy working on by hand. At least setting up the patterns I think will result in simple readable flow, reduce potential bugs, etc. AI is not great at that. It’s happy to mix strings, nulls, bad type castings, no separation of concerns, no small understandable functions, no reusable code, etc., which is the part I enjoy thinking about.
Same with GUIs. I’m making a web GUI that’s very specific to a project I’m working on. My team finds it very useful, but I would never make that thing without AI assistance: a combination of not finding it interesting or fun, it taking too long, and me not being familiar with web GUI stuff.
Claude code makes react + tailwindcss + whatever component library actually bearable for me. I can just “make a navbar on the left hand side of the screen like vscode has” and it mostly does it, a few tweaks and I have what I want. I waste so much time on that stuff doing it by hand it drives me crazy.
Also “pull records from table X and display them in a data grid. Include a “New” button and associated functionality respecting column constraints in the database. Also add an edit and delete button for each row in the table”. God, it’s really nice to have an LLM get that 85% of the way done in maybe 2 min.
Seems like the author has a case of all or nothing. The real power in agentic programming, to me, is not in extremes, but in that you are still actively present. You don't give it world-size things to do, but byte-sized ones, and you constantly steer it. It's being detailed enough to produce quality, and being aware of everything it produces, but not so detailed that it makes sense to just write the code yourself. It's a delicate balance, but once you've found it, incredibly powerful. Especially mixed with deterministic self-checking tools (like some MCPs).
If you "set and forget", then you are vibe coding, and I do not trust for a second that the output is quality, or that you'd even know how that output fits into the larger system. You effectively delegate away the reason you are being paid onto the AI, so why pay you? What are you adding to the mix here? Your prompting skills?
Agentic programming to me is just a more efficient use of the tools I already used anyway, but it's not doing the thinking for me, it's just doing the _doing_ for me.
I am with you and fully agree with your "it does not have to be an all or nothing" stance. A remark on one part of your comment:
> What are you adding to the mix here? Your prompting skills?
The answer to that is an unironic and dead-serious "yes!".
My colleagues use Claude Opus and it does an okay job but misses important things occasionally. I've had one 18-hour session with it and fixed 3 serious but subtle and difficult to reproduce bugs. And fixed 6-7 flaky tests and our CI has been 100% green ever since.
Being a skilled operator is an actual billable skill IMO. And that will continue to be the case for a while unless the LLM companies manage to make another big leap.
I've personally witnessed Opus do world-class detective work. I even left it unattended and it churned away on a problem for almost 5h. But I spent an entire hour before that carefully telling it its success criteria, never to delete tests, never to relax requirements X & Y & Z, always to use this exact feedback loop when testing after it iterated on a fix, and a bunch of others.
In that ~5h session Opus fixed another extremely annoying bug and found mistakes in tests and corrected them after correcting the production code first and making new tests.
Opus can be scary good but you must not handwave anything away.
I found love for being an architect ever since I started using the newest generation [of scarily smart-looking] LLMs.
Yup, totally! I'm also not against the evolution of software engineer into software architect. We were headed in that direction already anyway with the ever increasing amount of abstraction in our libraries and tools. This also frees up my ability to do other things, like coordinate cross-team efforts, deal with customer support issues, etc. As a generalist, I feel more useful and thus valuable than ever, and that makes me very happy.
For me writing code is clarifying ideas, it’s an important part of the process.
Sometimes you start to see a radical way of simplifying what you want; that only happens if you are willing to transform your requirements when they turn out to be overly prescriptive.
I think, though, that it is probably better for your career to churn out lines: it takes longer to radically simplify, and people don’t always appreciate the effort. If you instead go the other way and increase scope and time and complexity, that will more likely result in rewards to you for the greater effort.
Yea my job as a SWE is to have a correct mental model of the code and bring it with me everywhere I go... meetings, feature design, debugging sessions. Lines of code written is not unimportant, but it matters way less when you look at the big picture
I very much enjoy the activity of writing code. For me, programming is pure stress relief. I love the focus and the feeling of flow, I love figuring out an elegant solution, I love tastefully structuring things based on my experience of what concerns matter, etc.
Despite the AI tools I still do that: I put my effort into the areas of the code that count, or that offer an intellectually stimulating challenge, or where I want to manually explore and think my way into the problem space and try out different API or structure ideas.
In parallel to that I keep my background queue of AI agents fed with more menial or less interesting tasks. I take the things I learn in my mental "main thread" into the specs I write for the agents. And when I need to take a break on my mental "main thread" I review their results.
IMHO this is the way to go for us experienced developers who enjoy writing code. Don't stop doing that, there's still a lot of value in it. Write code consciously and actively, participate in the creation. But learn to utilize agents and keep them busy in parallel or when you're off-keyboard. Delegate, basically. There's quite a lot of things they can do already that you really don't need to do because the outcome is completely predictable. I feel that it's possible to actually increase the hours/day focussing on stimulating problems that way.
The "you're just mindlessly prompting all day" or "the fun is gone" are choices you don't need to be making.
I am happy someone else is also talking about the addictive nature of vibe coding and its gambling-esque rewards. Will we see agentic programmers begging for tokens on Kickstarter in the future? That would be funny.
I said something similar in a different thread but the joy of actually physically writing code is the main reason why I became a software developer. I think there is some beauty to writing code. I enjoy typing the syntax, the interaction with my IDE, debugging by hand (and brain) rather than LLM, even if it's less efficient. I still use AI, but I do find it terribly sad that this type of more "manual" programming seems to be being forced out.
I also enjoy walking more than driving, but if I had to travel 50 miles every day for my job, I would never dream of going on foot. Same goes for AI for me. If I can finish a project in half the time or less, I still feel enough accomplishment and on top of that I will use the gained free time for self actualisation. I like my job and I love coding and solving challenging problems, but I also love tons of other stuff that could use more of my attention. AI has created an insane net positive value for me so far. And I see tons of other people who could also benefit from it the same way, if only they spent a bit more time learning how to use it effectively. Considering how everyone and their uncle thinks they need to chime in on what AI is or is not or what it can or can not do, I find most people have frustratingly little insight into what you can actually do already. Even the people working at companies like Amazon or MS who claim to work on AI integrations sometimes seem to be missing some essentials.
I don’t really understand your point about AI freeing up your time to do other stuff at your job. Does your employer let you work fewer hours since you’re finishing projects sooner? Mine certainly doesn’t, and I’d rather be coding than doing the other parts of my job. But maybe I’m misunderstanding what you were trying to say?
I also would rather a project take longer and struggle through it without using AI as I find joy in the process. But as I said in my original post I understand that type of work appears to be coming to an end.
It's one of the factors, especially when you consider worker happiness not just ethically, but also because their input is valued, and if they are not happy it means something might be operationally wrong (although of course there might be a tradeoff between productivity and worker happiness).
I feel the hand-/human-written code of an experienced individual should be more valuable to a business than code created by agents. Sure, agents and humans might be using the same underlying frameworks or programming languages, but the value difference depends on the breadth and depth of experience. Agents give you breadth, but an experienced individual gives you depth in understanding and problem solving.
I find it helps me to just be forced to focus on a task for a few hours. Just the blocked-out attention I spend on it helps refine and discover new problems and angles, etc. I don't think just blocking out the time without actually trying to code it (staring at a wall) is as effective.
> “vibe coding has an addictive nature to it, you write some instructions, and code that looks correct is generated. Bam! Dopamine hit! If the code isn’t correct, then it’s just one prompt away from being correct”
> The process of writing code helps internalize the context and is easier for my brain to think deeply about it.
True, and you really do need to internalize the context to be a good software developer.
However, just because coding is how you're used to internalizing context doesn't mean it's the only good way to do it.
(I've always had a problem with people jumping into coding when they don't really understand what they are doing. I don't expect LLMs to change that, but the pernicious part of the old way is that the code -- much of it developed in ignorance -- became too entrenched/expensive to change in significant ways. Perhaps that part will change? Hopefully, anyway.)
My wife and my dad enjoy assembling furniture (the former free style, the latter off the instructions). I like the furniture assembled but I cannot stand doing it. Some of us are one way and others are the other way.
For me, LLMs are joyful experiences. I think of ideas and they make them happen. Remarkable and enjoyable. I can see how someone who would rather assemble the furniture, or perhaps build it, would like to do that.
That’s a good point. I suppose one must imagine the complaints from other Internet commenters in bygone times over using libraries vs writing one’s own code. They probably found themselves similarly estranged from a community of library assemblers. And now even those assemblers find themselves estranged from us machine whisperers. But we all were following the way to build software for our time.
I wonder who follows. Perhaps it has already happened. I look at the code but there are people who build their businesses as English text in git. I don’t yet have the courage.
every day there's a thread about this topic and the discussions always circle around the same arguments.
I think we should be worrying about more urgent things, like a worker doing the job of three people with ai agents, the mental load that comes with that, how much of the disruption caused by ai will disproportionately benefit owners rather than employees, and so on.
I’m in a similar camp to the OP. For me, my joy doesn’t come from building - it comes from understanding. Which incidentally has actually made SWE not a great career path for me because I get bored building features, but that’s another story…
For me, LLMs have been a tremendous boon in terms of learning.
Initially I felt like this but now I've changed. Now I realise a lot of grunt work doesn't need to be done by me; I can direct an LLM to make the changes. I can also experiment more, as I'm able to build complex features, try them out, and delete them without feeling too bad.
To be fair, a lot of the bloat and grunt work are safety nets we built for our own benefit. Static typing, linting, test harnesses, visual regressions, CI, etc. If AI does the legwork there while I focus on business logic and UX, it's a win-win.
I agree. But I do have some concerns. Sometimes the LLM writes code and it's a lot of work to go through it. I get lazy and trust the LLM too much. I've been doing this for a while so I know how it should write, and I go back and try to fix or refactor. But a new dev might direct the LLM to write code they might not understand. Like a black box. The LLM makes a lot of decisions without you realising it, decisions which you used to make yourself. Writing code is making a thousand decisions.
>Even if I generate a 1,000 line PR in 30 minutes I still need to understand and review it. Since I am responsible for the code I ship, this makes me the bottleneck.
You don't ship it, the AI does. You're just the middleman, a middleman they can eventually remove altogether.
>Now, I would be lying if I said I didn’t use LLMs to generate code. I still use Claude, but I do so in a more controlled manner.
"I can quit if I want"
>Manually giving claude the context forces me to be familiar with the codebase myself, rather than tell it to just “cook”. It turns code generation from a passive action to a deliberate thoughtful action. It also keeps my brain engaged and active, which means I can still enter the flow state. I have found this to be the best of both worlds and a way to preserve my happiness at work.
And then soon the boss demands more output, like what the guys who left it all to Claude, and even run 5x in parallel, are delivering.
> Even if I generate a 1,000 line PR in 30 minutes I still need to understand and review it. Since I am responsible for the code I ship, this makes me the bottleneck.
I am not responsible for choosing whether the code I write uses a for loop or a while loop. I am responsible for whether my implementation - code, architecture, user experience - meets the functional and non-functional requirements. For well over a decade, my responsibilities have involved delegating work to other developers, or even outsourcing an entire implementation to another company, like a SalesForce implementation.
When I got my first job long ago, I found that code review does involve arguing over things like for vs while loop, or having proper grammar in comments. Thought about quitting for a sec.
Now that I have more experience and manage other SWEs, I was right, that stuff was dumb and I'm glad that nobody cares anymore. I'll spend the time reviewing but only the important things.
Unfortunately, people do care that the AI agents don’t code just like they do.
Once I got to the point where I was delegating complete implementations to seniors with just “this is a high level idea of what Becky’s department wants. You now know as much as I do. If you have any business related questions go ask Becky, and come back to me with a design; these are our only technical constraints”. Then two weeks later there are things I might have done differently. But it meets all of the functional and non functional requirements. I bite my tongue and move on.
His team is going to be responsible for it.
Now I don’t treat AI as a senior developer. I treat it as a mid-level ticket taker. If there is going to be a feature change, I ain’t doing it any more. The coding agent is. I am just going to keep good documentation in various MD files for context.
Is there something about LLMs that suddenly make grammar and style irrelevant? Is your take, no human is going to read this ever again, so why bother making it pretty and consistent/readable?
It was never relevant, LLM or not. When reviewing junior SWEs' code pre LLMs, I didn't care about 75% of the style guide. I cared if they were using the DB wrong or had race conditions or wrote code I couldn't read.
In the other comment, I meant that the other reviewers who used to nitpick have stopped for whatever reason, maybe because overall people are busier now.
Of course. Almost everyone who knows how to ride a horse is happier riding a horse than driving a car, too. Or hell, in decent weather, even a bike.
In fact, it's even worse - driving a car is one of the least happy modes of getting around there is. And sure, maybe you really enjoy driving one. You're a rare breed when it comes down to it.
Yet it's responsible by far for the most people-distance transported every day.
> “What’s the point of it all?” I thought, LLMs can generate decent-ish and correct-ish looking code while I have more time to do what? doomscroll?
You could look back throughout human history at the inventions that made labor more efficient and ask the same question. The time-savings could either result in more time to do even more work, or more time to keep projects on pace at a sane and sustainable rate. It's up to us to choose.
I also came to a pretty simple understanding over the years. If I'm coding and making progress on a project, I'm happy. If I'm not, or I'm stuck on something, I'm unhappy. This is a profoundly unhealthy way to live because life will pass you by. There is more to our existence than work, or even hobbies. And if AI lets me get more time for that, I am happier than ever.
This is great in theory, but answer me sincerely: are you spending less time at work because of AI? Because I reckon for most programmers here it is not the case at all.
Yes, but is AI really getting you unstuck, or are you playing a game of whack-a-mole where it fixes one bug and generates several others that you are unaware of (just one example)?
I hate typing strings of syntax. So boring. Never saw the appeal. I do like tinkering with ideas, concepts, structure... just not the mechanical interaction part. I'm not the best typist... then again, it's the same with playing Factorio. I love the concept of building structures, but fighting the UI to communicate my ideas is such a drag...
Bro discovered that doing long division by hand makes him happier than using a calculator and decided the rest of us are just dopamine junkies for enjoying tools that actually scale.
It's a phenomenon you see in a lot of crafts. We enjoy the craft, but when it becomes all about the product and we optimize for that, the fun goes away.
I am TL of an Android app with dozens of screens that expose hundreds of different distinct functions. My task is to expose all of these functions as appfunctions that can be called by an LLM in response to free form user requests. My current plan is to build a little LangGraph pipeline where first step is AI documenting all functions in each app's fragment, second step is extracting them into app functions, then refactoring fragment to call app functions etc. And by build I mean Gemini will build it for me and I will ask for some refinement and edit prompts.
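As a rough sketch (not the commenter's actual code), the three-step pipeline could look something like this with LangGraph's StateGraph; the node bodies are placeholders for the Gemini calls and every name here is invented:

```python
from typing import TypedDict
from langgraph.graph import StateGraph, START, END

class PipelineState(TypedDict):
    fragment_source: str  # one Android fragment's source
    function_docs: str    # step 1: documented functions
    app_functions: str    # step 2: extracted appfunctions
    refactored: str       # step 3: fragment rewired to call them

def llm(prompt: str) -> str:
    raise NotImplementedError("call Gemini (or any model) here")

def document_functions(state: PipelineState) -> dict:
    # Step 1: have the model document every function in the fragment.
    return {"function_docs": llm("Document all functions in:\n" + state["fragment_source"])}

def extract_app_functions(state: PipelineState) -> dict:
    # Step 2: extract the documented functions into standalone appfunctions.
    return {"app_functions": llm("Extract appfunctions from:\n" + state["function_docs"])}

def refactor_fragment(state: PipelineState) -> dict:
    # Step 3: rewrite the fragment to delegate to the new appfunctions.
    return {"refactored": llm("Refactor the fragment to call:\n" + state["app_functions"])}

graph = StateGraph(PipelineState)
graph.add_node("document", document_functions)
graph.add_node("extract", extract_app_functions)
graph.add_node("refactor", refactor_fragment)
graph.add_edge(START, "document")
graph.add_edge("document", "extract")
graph.add_edge("extract", "refactor")
graph.add_edge("refactor", END)
pipeline = graph.compile()  # pipeline.invoke({"fragment_source": ...}) runs all three steps
```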
I also like writing code by hand, I just don't want to maintain other people's code. LMK if you need a job referral to hand refactor 20K lines of code in 2 months. Do you also enjoy working on test coverage?
There’s been a new category of writing over the last year. The AI Inevitability Soothsaying.[1]
There’s talk of war in the state of Nationstan. There are two camps: those who think going to war is good and just, and those who think it is not practical. Clearly not everyone is pro-war. There are two camps. But the Overton Window is defined by the premise that invading another country is a right that Nationstan has and can act on. There is, by definition (inside the Overton Window), no one who is anti-war on the principle that the state has no right to do it.[2]
Not all articles in this AI category are outright positive. They range from the euphoric to the slightly depressed. But they share the same premise of inevitability; even the most negative will say that, of course I use AI, I’m not some Luddite[3]! It is integral to my work now. But I don’t just let it run the whole game. I copy–paste with judicious care. blah blah blah
The point of any Overton Window is to simulate lively debate within the confines of the premises.
And it’s impressive how many aspects of “the human” (RIP?) it covers. Emotions, self-esteem, character, identity. We are not[4] marching into irrelevance without a good consoling. Consolation?
[2] This was taken from the formerly famous (and controversial among the Khmer Rouge obsessed) Chomsky, now living in infamy for obvious reasons.
[3] Many paragraphs could be written about this
[4] We. Well, maybe me and others, not necessarily you. Depending on your view of whether the elites or the Mensa+ engineers will inherit the machines.
Basically describes how I use Claude Code now. I'll let it do stuff I don't want to do, like setting up mocks for unit tests (boring) or editing GitHub Actions YAML (torture). But otherwise, I like to let it show me how to do something I'm not sure how to do, and then I'll just go do it myself. (If I have a clear idea of how I want to do something already, I just do it myself in the first place.)
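For context, the mock scaffolding being handed off is usually a few lines of unittest.mock boilerplate per test; `UserService`, `fetch_user`, and `greet` below are hypothetical names, just to show the shape of the chore:

```python
# Minimal sketch of delegatable mock boilerplate using the stdlib's unittest.mock.
from unittest.mock import MagicMock

class UserService:
    def fetch_user(self, user_id):  # imagine a real HTTP call here
        raise NotImplementedError

def greet(service, user_id):
    user = service.fetch_user(user_id)
    return f"Hello, {user['name']}!"

def test_greet():
    # Stand up a mock constrained to the real interface...
    service = MagicMock(spec=UserService)
    service.fetch_user.return_value = {"name": "Ada"}
    # ...then assert on behavior and on how the dependency was used.
    assert greet(service, 42) == "Hello, Ada!"
    service.fetch_user.assert_called_once_with(42)

test_greet()
```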
I almost never agree with the names Claude chooses, I despise the comments it adds every other line despite me telling it over and over and over not to, and oftentimes I catch the silly bugs that look fine at first glance when you just let Claude write its output direct to the file.
It feels like a good balance, to me. Nobody on my team is working drastically faster than me, with or without AI. It very obviously slows down my boss (who just doesn't pay attention and has to rework everything twice) or some of the juniors (who don't sufficiently understand the problem to begin with). I'll be more productive than them even if I am hand-writing most of the code. So I don't feel threatened by this idea that "hand written code will be something nobody does professionally here soon" -- like the article said, if I'm responsible for the code I submit, I'm still the bottleneck, AI or not. The time I spend writing my own code is time I'm not poring over AI output trying to verify that it's actually correct, and for now that's a good trade.
Programming is creative work. Replacing human creativity with pseudo-parrot code generation impacts this process in bad ways. It's the same reason many artists despise using AI for art.
Bean counters don't care about creativity and art though, so they'll never get it.
Good for artists I guess, I wouldn't know because I am not one. The best I can manage is drawing a stick figure of a cat. Years back I was working on a Mac app and I needed an icon. So I talked to an artist and she asked for $5K to make one for me. I couldn't justify spending so much on a hobby that I didn't know would go anywhere, so I wrote a little app that procedurally generated me some basic sucky icon. I am sure Gordon Ramsay is also not impressed with the cooking skills of my microwave; I just don't know how his objections practically relate to getting me fed daily.
This is no different then carpentry. Yes, all furniture can now be built by machines. Some people still choose to build it by hand. Does that make them less productive? Yes. Will they ever carve furniture by hand for a business? Probably not. Can they still enjoy the act of working with the wood? Yes.
If you want to code by hand, then do it! No one's stopping you. But we shouldn't pretend that you will be able to do that professionally for much longer.
I’ve heard this metaphor before and I don’t think it works well.
For one, a power tool like a bandsaw is a centaur technology. I, the human, am the top half of the centaur. The tool drives around doing what I tell it to do and helping me to do the task faster (or at all in some cases).
A GenAI tool is a reverse-centaur technology. The algorithm does almost all of the work. I’m the bottom half of the centaur helping the machine drive around and deliver the code to production faster.
So while I may choose to use hand tools in carpentry, I don’t feel bad using power tools. I don’t feel like the boss is hot to replace me with power tools. Or to lay off half my team because we have power tools now.
It’s a bit different.
There's DeWALT, Craftsman, Stanley, etc carpentry/mechanic power tool brands who make a wide variety of all manner of tools and tooling; the equivalents in computers (at least UNIXy) are coreutils (fileutils, shellutils, and textutils), netpbm, sed, awk, the contents of /usr/bin, and all their alternative, updated brands like fd, the silver searcher, and ripgrep are, or the progression of increased sharpening in revision control tools from rcs, sccs, svn, to mercurial and git; or telnet-ssh, rcp-rsync, netcat-socat. Even perl and python qualify as multi-tool versions of separate power tools. I'd even include language compilers and interpreters in general as extremely sharp and powerful power multi-tools, the machine shop that lets you create more power tools. When you use these, you're working with your hands.
GenAI is none of that, it's not a power tool, even though it can use power tools or generate output like the above power tools do. GenAI is hiring someone else to build a bird house or a spice rack, and then saying you had a hand in the results. It's asking the replicator for "tea, earl grey, hot". It's like how we elevate CEOs just because they're the face of the company, as if they actually did the work and were solely responsible for the output. There's skill in organization and direction, not all CEOs get undeserved recognition, but it's the rare CEO who's getting their hands dirty creating something or some process, power tools or not. GenAI lets you, everyone, be the CEO.
That may be true - and?
Does money appear in my account at the end of every two weeks or formally RSUs appear in my brokerage account at the end of every vesting period?
At the end of the day, that’s what supports my addiction to food and shelter.
The "I gotta eat" comment contingent is worse than the "rewrite it in rust" comment contingent.
Not every conversation about GenAI and slop is about your eating habits.
These are astroturfed bot comments, aren't they?
Why else do you think I go to work everyday? Because I have a “passion” for sitting at a computer for 40 hours a week to enrich private companies bottom line or a SaaS product or a LOB implementation? It’s not astroturfing - it’s realistic
Would you be happier if I said I love writing assembly language code by hand like I did in 1986?
My analogy is more akin to using Google Maps (or any other navigation tool).
Prior to GPS and a navigation device, you would either print out the route ahead of time, and even then, you would stop at places and ask people about directions.
Post Google Maps, you follow it, and then if you know there's a better route, you choose to take a different path and Google Maps will adjust the route accordingly.
Google Maps is still insanely bad for hiking and cycling, so I combine the old-fashioned map method with an outdoor GPS onto which I load a precomputed GPX track for the route that I want to take.
You only feel that way about power tools because the transition for carpentry happened long ago. Carpenters viewed power tools much as we do LLMs today. Furniture factories, equivalent of dark agentic code factories, caused much despair to them too.
Humans are involved with assembly only because the last bits are maddeningly difficult to get right. Humans might be involved with software still for many years, but it probably will look like doing final assembly and QA of pre-assembled components.
Furniture factories are a lot more automated than you're implying with this metaphor.
I think this argument would work if hand-written code would convey some kind of status, like an expensive pair of Japanese selvage jeans. For now though, it doesn't seem to me that people paying for software care if it was written by a human or an AI tool.
To me, they're all the same, because they are all tools that stand between “my vision” and “it being built.”
e.g. when I built a truck camper, maybe 50% was woodworking but I had to do electrical, plumbing, metalworking, plastic printing, and even networking infra.
The satisfaction was not from using power tools (or hand tools too) — those were chores — it was that I designed the entire thing from scratch by myself, it worked, was reliable through the years, and it looked professional.
LLMs serve the same purpose for me.
The “work” is not creating for and while loops. The work for me is:
1. Looking at the contract and talking to sales about any nuances from the client
2. Talking to the client (use stakeholder if you are working for a product company) about their business requirements and their constraints
3. Designing the architecture.
4. Presenting the architecture and design and iterating
5. Doing the implementation and iterating. This was the job of myself and a team depending on the size of the project. I can do a lot more by myself now in 40 hours a week with an LLM.
6. Reviewing the implementation
7. User acceptance testing
8. Documentation and handover.
I’ve done some form of this from the day I started working 25 years ago. I was fortunate to never be a “junior developer”. I came into my first job with 10 years of hobbyist experience, having already implemented a multi-user data entry system.
I always considered coding as a necessary evil to see my vision come to fruition.
I think many deployments of modern technology do turn us into reverse centaurs, and we should rail righteously against those.
But I don’t at all believe that AI-assisted coding is doomed to do this to us and believe thinking so is a misread of the metaphor.
(As is lumping all of “GenAI” together.)
Would a CNC satisfy your requirements?
It seems like you're doing a lot of work to miss the actual point. Focusing on the minutiae of the analogy is a distraction from the overarching and obvious point. It has nothing to do with how you feel; it has to do with how you will compete in a world with others who feel differently.
There were carpenters who refused to use power tools, some still do. They are probably happy -- and that's great, all the power to them. But they're statistically irrelevant, just as artisanal hand-crafted computer coding will be. There was a time when coders rejected high level languages, because the only way they felt good about their code is if they handcrafted the binary codes, and keyed them directly into the computer without an assembler. Times change.
In my opinion, it is far too early to claim that developers developing like it was maybe three years ago are statistically irrelevant. Microsoft has gone in on AI tooling in a big way and they just nominated a "software quality czar".
I used the future tense. Maybe it will be one hundred years from now, who knows; but the main point still stands. It would just be nice to move the conversation beyond "but I enjoy coding!".
I don’t think it’s correct to claim that AI-generated code is just the next level of abstraction.
All previously mentioned levels produce deterministic results. Same input, same output.
AI generation is not deterministic. It’s not even predictable. And the example of big software companies clearly shows what mass adoption of AI tools will look like in terms of software quality. I dread the day using AI becomes an expectation; that will be a level of enshittification never before imagined.
You're not wrong. But your same objection was made against compilers. That they are opaque, have differences from one to another, and can introduce bugs, they're not actually deterministic if you upgrade the compiler, etc. They separate the programmer from the code the computer eventually executes.
In any case, clinging to the fact that this technology is different in some ways, continues to ignore the many ways it's exactly the same. People continue to cling to what they know, and find ways to argue against what's new. But the writing is plainly on the wall, regardless of how much we struggle to emotionally separate ourselves from it.
They may not be wrong per se, but that argument is essentially a strawman.
If these tools are non-deterministic, then how did someone at Anthropic spend the equivalent of $20,000 of Anthropic compute and end up with a C compiler that can compile the Linux kernel (one of the largest bodies of C code out there)?
The but-muh-non-determinism argument completely misses the point. See my direct response: https://news.ycombinator.com/item?id=46936586
You'll notice this objection comes up each time a "OpenClaw changed my life" or conversely "Agentic Coding ain't it fam" article swings by.
No one is using Gen AI to determine if a number is odd at runtime - they are testing the deterministic code that it generates
People on here keep trotting out this "AI-generation is not deterministic" (more properly speaking, non-deterministic) argument …
And my retort to you (and them) is, "Oh yeah, and so?"
What about me asking Claude Code to generate a factorial function in C or Python or Rust or insert-your-language-of-choice-here is non-deterministic?
If you're referring to the fact that, for a given input, LLMs (or whatever), because of certain controls (temperature?), don't give the same outputs for the same inputs: yeah, okay. If we're talking about conversational language, that makes a meaningful difference to whether it sounds like an ELIZA robot or more like a human. But ask an LLM to output some code, and that code has to adhere to functional requirements independent of, muh, non-determinism. And what's to stop you (if you're so sceptical/scared) writing test cases to make sure the code that is magically whisked out of nowhere performs as you desire? Nothing. What's to stop you getting one agent to write the test suite (which you review for correctness) and another agent to write the code and self-correct by checking its code against that test suite? Nothing.
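To make that concrete, here's a minimal sketch of the idea: treat whatever the agent emits as untrusted, and gate it behind a reviewer-approved, fully deterministic test suite. The `factorial` body stands in for agent output; the test values are ordinary known facts:

```python
# Pretend an agent produced this; it may differ run to run across prompts.
def factorial(n: int) -> int:
    if n < 0:
        raise ValueError("n must be non-negative")
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

# The reviewer-approved test suite: same inputs, same verdict, every run.
# The agent iterates on its code until this passes; it never edits the tests.
def test_factorial():
    assert factorial(0) == 1
    assert factorial(1) == 1
    assert factorial(5) == 120
    assert factorial(10) == 3628800
    try:
        factorial(-1)
    except ValueError:
        pass
    else:
        raise AssertionError("negative input must raise")

test_factorial()
```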
I would advise anyone encountering this but-they're-non-deterministic argument on HN to really think through what its proponents are implying. I mean, aren't humans non-deterministic? (I should have thought so.) So how is it, <extra sarcasm mode activated>pray tell</extra sarcasm mode activated>, that humans manage to write correct software in the first place?
I personally have jested many times I picked my career because the logical soundness of programming is comforting to me. A one is always a one; you don’t measure it and find it off by some error; you can’t measure it a second time and get a different value.
I’ve also said code is prose for me.
I am not some autistic programmer either, even if these statements out of context make me sound like one.
The non-determinism has nothing to do with temperature; it has everything to do with the fact that even at a temperature of zero, a single meaningless change can produce a different result. It has to do with there being no way to predict what will happen when you run the model on your prompt.
Coding with LLMs is not the same job. How could it be the same to write a mathematical proof compared to asking an LLM to generate that proof for you? These are different tasks that use different parts of the brain.
I don't think he's missing the point at all. A band saw is an immutable object with a fixed, deterministic capability--in other words, a tool. An LLM is a slot machine. You can keep pulling the lever, but you'll get different results every time. A slot machine is technically a machine that can produce money, but nobody would ever say it's a tool for producing money.
People keep trotting this argument out. But a band saw is not deterministic either; it can snap in the middle of a cut and destroy what you're working on. The point is, we only treat it like it's deterministic because most of the time it's reliable enough that it just does what we want. AI technology will definitely get to the same level eventually. Clinging on to the fact that it isn't yet at that level today is just cope, not a principled argument.
I feel like we're both in similar minds of opposite sides, so perhaps you can answer me this: How is a deterministic AI any different from a search engine?
In other words, if you and I always get the same results back for the same prompt (the definition of determinism), isn't that just a really, really power-hungry Google?
I'm not sure pure determinism is actually a desirable goal. I mean, if you ask the best programmer in the world the same question every day, you're likely to eventually get a new answer at some point. But if you ask him, or I ask him, hopefully he gives the same good answer, to us both. In any case, he's not just a power hungry Google, because he can contextualize our question, and understand us when we ask in very obscured ways; maybe without us even understanding what we're actually looking for.
So instead of determinism being a binary you consider it a binomial distribution with p extremely close but not strictly equal to 1.
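A quick back-of-envelope on that framing, with an arbitrary p = 0.999 per operation, shows why "almost 1" still bites at scale:

```python
# Probability of at least one failure over n uses of a tool that
# succeeds with probability p each time. The 0.999 is invented.
p = 0.999
for n in (10, 100, 1000):
    print(n, 1 - p**n)
# -> roughly 1% at 10 uses, 9.5% at 100, 63% at 1000
```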
I think this is a distinction without a difference, we all know what we mean when way say deterministic here.
I think the distinction without a difference is a tool being deterministic or not. Fundamentally, its nature doesn't matter, if in actual practice it outperforms everything else.
Be that as it may, and moving of the goalposts aside: for me personally, this fundamentally does matter. Programming is about giving instructions for a machine (or something mechanical) to follow. It matters a great deal to me that the machine reliably follows the instructions I give it. And compiler authors of the past have gone to great lengths to make their compilers produce robust (meaning deterministic) output, just as language authors have tried to make their standards as rigorous (meaning minimizing undefined behavior) as possible.
And for that matter, going back to the band saw analogy, a measure of the quality of a great band saw is, in fact, that the blade won’t snap in half in the middle of a cut. If a band saw manufacturer produces a band saw with a really low binomial p (meaning it is less deterministic/more stochastic), that is a pretty lousy band saw, and good carpenters will know to stay away from that brand of band saws.
To me this paints a picture of a distinction that does indeed have a difference. A pretty important difference for that matter.
Have you never run a team of software engineers as a lead? Agentic coding comes naturally to a lot of people because that's PRECISELY what you do when you're leading a team, herding multiple brains to point them in the same direction so that when you combine all their work it becomes something that is greater than the sum of its parts.
Lots of the complaints about agents sound identical to things I've heard, and even said myself, about junior engineers.
That said, there's always going to need to be people who can reach below the abstraction, and agentic coding loops deprive you of the ability to get those reps in.
> Have you never run a team of software engineers as a lead?
I expect juniors to improve fast and get really good. AI is incapable of applying the teaching that I expect juniors to internalize to any future code that it writes.
People say this about juniors but I've never seen a junior make some of the bone headed mistakes AI loves to make. Either I'm very lucky or other people have really stupid juniors on their teams lol.
Regardless, personally, there's no comparison between an LLM and a junior; I'd always rather work with a junior.
I've written this a few times, but LLM interactions often remind me of my days at Nokia - a lot of the interactions are exactly like what I remember with some of their cheap subcons there.
I even have exactly the same discussions after it messes up, like "My code is working, ignore that failing test, that was always broken, and I definitely didn't break it just now".
Have you heard of the Luddites I wonder?
Yes, I’ve read quite a lot about that bloody and terrible part of history.
The Luddites were workers who lived in an era without any social or state protections for labourers. Capitalists were using child labour to operate the looms because it was cheaper than paying anyone a fair wage. If you didn’t like the conditions, you could go work as an indentured servant for the state in the workhouses.
Luddites used organized protests in the form of collective violence to force action when they had no other leverage. People were literally shot or jailed for this.
It was a horrible part of history written by the winners. That’s why everyone thinks Luddites were against technology and progress instead of social reforms and responsibility.
In that case I really don't understand how you conclude there's any difference between being on the bottom or the top of the tool. The bare reality is the same: Skilled labourers will be replaced by automation. Woodworking tools (and looms) replaced skilled labourers with less-skilled replacements (such as children), and AI will absolutely replace skilled labourers with less-skilled replacements as well. I ask sincerely, I truly don't understand how this isn't a distinction without a difference. Have you spent time inside a modern furniture factory? Have you seen how few people it takes to make tens of tons of product?
I haven’t worked in a furniture factory but I have assembled car seats in a factory for Toyota.
The difference matters because the people who worked together to smash the looms created the myth of Ned Ludd to protect their identities from persecution. They used organized violence because they had no leverage otherwise to demand fair wages, safety guarantees, and other labour protections. What they were fighting for wasn’t the abolishment of automation and looms. It was for social reforms that would have given them labour protections.
It matters today because AI isn’t a profit line on any balance sheet right now but it is being used to justify mass layoffs and to reduce the leverage of knowledge workers in the marketplace. These tools steal your work without compensation and replace your job with capital so that rent seekers can seek rent.
It’s not a repeat of what happened in the Luddite protests but history is rhyming.
> I don’t feel like the boss is hot to replace me with power tools. Or to lay off half my team because we have power tools now.
That has more to do with how much demand there is for what you're doing. With software eating the world and hardware constraints becoming even more visible due to the chips situation, we can expect that there will be plenty of work for SWE's who are able to drive their coding agents effectively. Being the "top" (reasoning) or the "bottom" half is a matter of choice - if you slack off and are not highly committed to delivering quality product, you end up doing the "bottom" part and leaving the robot in the driver's seat.
I think this comparison isn’t quite correct. The downside with carpentry is that you only ever produce one instance of the thing you’re making. Factory woodwork can churn out multiple copies of the same thing in a way hand carpentry never can. There is a hard limit on output, and output has a direct relationship to how much you sell.
Code isn’t really like that. Hand written code scales just like AI written code does. While some projects are limited by how fast code can be written it’s much more often things like gathering requirements that limits progress. And software is rarely a repeated, one and done thing. You iterate on the existing product. That never happens with furniture.
Exactly.
How much is coding actually the bottleneck to successful software development?
It varies from project to project. In a greenfield project it probably starts out pretty high, but it drops quite a bit for mature projects.
(BTW, "mature" == "successful", for the most part, since unsuccessful projects tend to get dropped.)
Not that I'm an AI-denier. These are great tools. But let's not just swallow the hype we're being fed.
There could be factories manufacturing your own design, just one piece. It won't be economical, but can be done. But parts are still the same - chunks and boards of wood joined together by the same few methods. Maybe some other materials thrown into the mix. With software it is similar: Different products use (mostly) the same building blocks, functions, libraries, drivers, frameworks, design patterns, ux patterns.
> If you want to code by hand, then do it! No one's stopping you. But we shouldn't pretend that you will be able to do that professionally for much longer.
If you can't code by hand professionally anymore, what are you being paid to do? Bring the specs to the LLMs? Deal with the customers so the LLMs don't have to?
This is what I don't understand: why do highly paid SWEs seem to think that their salaries will remain the same (if they even still have a job) when their role is now that of a glorified project manager?
Recently, I had to do an integration with a Chinese API for my company. I used Codex to do the whole thing.
Yet, there is no way a product manager without any coding experience could have done it. First, the API's output had to be massaged to communicate with the main app correctly: formatting, correcting data, and so on. This required human engineering guidance and experience with the expected data; the AI was lost. Second, the API was designed extremely poorly. You first had to make a request, then retry a second endpoint over and over again while the Chinese API did its thing in the background. Yes, I had to poll it. I then had to do load testing to make sure it was reliable (it wasn't). In the end, I recommended that we not rely on this Chinese company and back out of the deal before sending them a huge deposit.
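To give a flavor of that polling dance, here is a minimal Python sketch of the submit-then-poll flow. The base URL, endpoint paths, field names, and timings are hypothetical stand-ins, not the vendor's actual API:

    import time
    import requests

    BASE = "https://api.example-vendor.example"  # hypothetical base URL

    def submit_and_poll(payload, timeout_s=60, interval_s=2):
        """Kick off the job, then poll a second endpoint until it finishes."""
        # Step 1: the first endpoint only returns a job id, not a result.
        job = requests.post(f"{BASE}/jobs", json=payload, timeout=10).json()
        job_id = job["id"]

        # Step 2: poll the status endpoint over and over while the job runs.
        deadline = time.monotonic() + timeout_s
        while time.monotonic() < deadline:
            status = requests.get(f"{BASE}/jobs/{job_id}", timeout=10).json()
            if status["state"] == "done":
                return status["result"]
            if status["state"] == "failed":
                raise RuntimeError(f"job {job_id} failed: {status.get('error')}")
            time.sleep(interval_s)  # wait between polls instead of hammering the API
        raise TimeoutError(f"job {job_id} did not finish within {timeout_s}s")

Even this toy version involves decisions (timeout, failure states, polling interval, what "reliable" means under load) that someone with engineering experience has to make deliberately.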
A non-technical PM couldn't have done what I did... for at least a few more years. You need a background and experience in software development to even know what to prompt the AI. Not only that, in the last 3 years, I developed an intuition on where LLMs fail and succeed when writing code.
I still have a job. My role has changed. I haven't written more than 10 lines of code in a day for months now. Yes, it's kind of scary for software devs right now but I'm honestly loving this as I was never the kind of dev who loved the code, just someone who needed to code to get what I wanted.
Architects and engineers are not construction workers. AI can build the thing but it needs to be told exactly what to build by someone who knows how software works.
I've spent enough time working with cross-functional stakeholders to know that the vast majority of PMs (whether of the product, program, or project variety) will not be capable of running AI toward any meaningful software development goal. At best they will build impressive prototypes and demos; at worst they will corrupt data in a company-destroying level of failure.
> AI can build the thing but it needs to be told exactly what to build by someone who knows how software works.
If only AI followed my instructions instead of ignoring them, telling me it is sorry after I complain, and then returning some other implementation which also fails to follow my instructions ... :-(
> can build the thing but it needs to be told exactly what to build by someone who knows how software works.
How do you tell a computer exactly what you want it to do, without using code?
Basically you feed it a massive volume of application code. It turns out there is a lot of commonality and latent repetition that can be teased out by LLMs, so you can get quite far with that, though it will fall down when you get into more novel terrain.
Agree. I'm finding quite a lot of success with AI, but I'm writing detailed prompts. In turn, the LLMs are producing massive refactors that are 99% error-free.
No one but seniors with years and years of experience is producing like that, as evidenced by how much the juniors I work with struggle to do the same.
Don't be stupid, if an AI can figure out how to arrange code, it can also figure out how to pick the right architecture choices.
Right now millions of developers are providing tons of architecture questions and answers. That's all going to be used as training data for the next model coming out in 6 months time.
This is a moat on our jobs as deep as a puddle.
If you believe LLMs will be able to do complex coding tasks, you must also concede they will be able to make the relatively simpler architecture choices easily simply by asking the right questions. Something they're already starting to be able to do.
> [...] by asking the right questions [...]
Now you've put your finger on something. Who is capable of asking the right questions?
It already asks questions in plan mode.
It's not a massive jump to go from 'add a button above the table to the right that when clicked downloads an Excel file' to 'the client's asking to download an Excel file'.
If you believe the LLMs will graduate from junior-level coding to senior in the next year - which they're clearly not capable of doing yet, despite all the hype - there is no moat in going from coder to BA to PM.
And then you don't need middle management either.
Good project managers (with a technical focus) are not low-paid at all, even compared to SWEs.
Sure, but you need 1/10th the number of PMs that you do SWEs.
But, the thinking goes, with AI in the mix, spinning up a new project or feature will be so low-friction that there will be 10x as many projects created. So our jobs are saved!
(Color me skeptical.)
You have to move up the stack and make yourself a more valuable product. I have an analogy…
I’ve been working for cloud consulting companies/departments for six years.
Customers were willing to pay mid level (L5) consultants with @amazon.com by their names (AWS ProServe) $x to do one “workstream”/epic worth of work. I got paid $x - Amazon’s cut in cash and RSUs.
Once I got Amazon’ed, I had to get a staff level position (senior equivalent at BigTech) at a third party company where now I am responsible for larger projects. Before I would have needed people - now I need code gen tools and my quarter century of development experience and my decade of experience leading implementations + coding.
Doesn't this mean the ones that should be really worried are the project managers, since the SWE has better understanding over what's being done and can now orchestrate from a PM level?
Both should realize that if this all works out according to plan then there eventually reaches a point that there is no longer a need for their entire company, let alone any individual role in it.
It's even worse - it's project management where you have to micromanage everything the AI is doing.
But yeah, if anybody can do it, the salaries are going to plummet. You don't need a CS degree to tell the AI to try again.
Salaries might remain the same, but they'll be expected to produce a lot more.
We produce way more than the punch-card wielding developers of yesteryear and we’re doing just fine (better even).
And we get paid less.
They're delusional, but that's to be expected if you imagine them as the types for whom everything in life has always just kinda worked out. The idea that things could suddenly not work out is almost unimaginable to them, so of course things will change, but not, for them, substantially for the worse.
You are under a delusion. A glorified project manager will not produce production-quality code no matter what. At least not until we reach that holy grail of AGI. But if that ever happens, the world will have way bigger problems to deal with.
This is what I don't understand: everyone who thinks we're still relevant with the same job and salary expectations.
Everything just changed. Fundamentally.
If you don't adapt to these tools, you will be slower than your peers. Few businesses will tolerate that.
This is competitive cycling. Claude is a modern bike with steroids. You can stay on a penny farthing, but that's not advised.
You can write 10x the code - good code. You can review and edit it before committing it. Nothing changes from a code quality perspective. Only speed.
What remains to be seen is how many of us the market needs and how much the market will pay us.
I'm hoping demand and comp remain constant, but we'll see.
The one thing I will say is that we need ownership in these systems ASAP, or we'll become serfs to computing.
I don't think that's the real dichotomy here. You can either produce 2-5x good maintainable code, or 10-50x more dogshit code that works 80-90% of the time, and that will be a maintenance nightmare.
The management has decided that the latter is preferable for short term gains.
> You can either produce 2-5x good maintainable code, or 10-50x more dogshit code that works 80-90% of the time, and that will be a maintenance nightmare.
It's actually worse than that, because really the first case is "produce 1x good code". The hard part was never typing the code, it was understanding and making sure the code works. And with LLMs as unreliable as they are, you have to carefully review every line they produce - at which point you didn't save any time over doing it yourself.
It's not dogshit if you're steering.
That's what so many of you are not getting.
Look at the pretty pictures AI generates. That's where we are with code now. Except you have ComfyUI instead of ChatGPT. You can work with precision.
I'm a 500k TC senior SWE. I write six nines, active-active, billion dollar a day systems. I'm no stranger to writing thirty page design documents. These systems can work in my domain just fine.
Yep, that's where we are with code now. Pretty - until you look close. Dogshit - if you care to notice details.
Not to mention how hard it is to actually get what you want out of it. The image might be pretty, and kinda sorta what you asked for. But if you need something specific, trying to get AI to generate it is like pulling teeth.
I've developed a new hobby lately, which I call "spot the bullshit."
When I notice a genAI image, I force myself to stop and inspect it closely to find what nonsensical thing it did.
I've found something every time I looked, since starting this routine.
I agree entirely, except I don't know that I've seen pretty pictures from AI.
"Glossy" might be a good word (no, I don't mean literally shiny, even if they are sometimes that).
The problem is not that it can’t produce good code if you’re steering. The problem is that:
There are multiple people on each team; you cannot know how closely each teammate monitored their AI.
Somebody who does not care will vastly outperform your output, by orders of magnitude. With the current unicorn-chasing trends, that approach tends to be more rewarded.
This produces an incentive to not actually care about the quality. Which will cause issues down the road.
I quite like using AI. I do monitor what it’s doing when I’m building something that should work for a long time. I also do total blind vibe coded scripts when they will never see production.
But for large programs that will require maintenance for years, these things can be dangerous.
Since we’re apparently measuring capability and knowledge via comp, I made 617k last year. With that silly anecdote out of the way, in my very recent experience (last week), SOTA AI is incapable of writing shell scripts that don’t have glaring errors, and also struggles mightily with RDBMS index design.
Can they produce working code? Of course. Will you need to review it with much more scrutiny to catch errors? Also yes, which makes me question the supposed productivity boost.
> You can write 10x the code - good code. You can review and edit it before committing it. Nothing changes from a code quality perspective. Only speed.
I agree, but this is an oversimplification - we don't always get the speed boosts, specifically when we don't stay pragmatic about the process.
I have a small set of steps that I follow to really boost my productivity and get the speed advantage.
(Note: I am talking about AI-coding, not vibe-coding.)
- You give all the specs, and there is "some" chance the LLM will generate exactly the code required.
- In most cases, you will need to do more than 2 design iterations and many small ones, like instructing the LLM to handle errors properly and recover gracefully.
- This will definitely increase speed 2x-3x, but we still need to review everything.
- Also, this doesn't take into account the edge cases our design missed.
I don't know about big tech, but this is what I have to do to solve a problem:
1. Figure out a potential solution
2. Make a hacky POC script to verify the proposed solution actually solves the problem
3. Design a decently robust system as a first iteration (that can have bugs)
4. Implement using AI
5. Verify each generated line
6. Find out edge cases and failure modes missed during design, then repeat from step 3 to tweak the design, or repeat from step 4 to fix bugs.
WHENEVER I jump directly from 1 -> 3 (vague design) -> 5, the speed advantages disappear.
> You can write 10x the code - good code.
This is just blatantly false.
Every engineer in the next two years needs to prepare themselves for this conversation to play out (from Office Space):
> Bob Slydell: What you do at Initech is you take the specifications from the customer and bring them down to the software engineers?
> Tom Smykowski: Yes, yes that's right.
> Bob Porter: Well then I just have to ask why can't the customers take them directly to the software people?
> Tom Smykowski: Well, I'll tell you why, because, engineers are not good at dealing with customers.
> Bob Slydell: So you physically take the specs from the customer?
> Tom Smykowski: Well... No. My secretary does that, or they're faxed.
> Bob Porter: So then you must physically bring them to the software people?
> Tom Smykowski: Well. No. Ah sometimes.
> Bob Slydell: What would you say you do here?
The agents are the engineers now.
PMs can always keep their jobs because they appear to be working and they keep direct contact with the execs. They have taken a bigger and bigger share of the tech pie over the years, and soon they'll finally take it all.
And when they’re actually good at their job, they’re invaluable in my opinion
Yeah, the best way to learn the value of project management is to work somewhere without it.
That's not what I am seeing play out at a big corp. In reality everyone gets thrown under the bus, no matter if C-level or pleb, if they don't appear to know how to drive the AI metrics up. Just being a PM won't save your job any more than that of the dev who doesn't know how to acquire and use new skills. On the contrary, the jobs of the more competent devs are safer here than those of some managers who don't know the tech.
And that "ah sometimes" costs what? Not forgetting you are also paying for tokens.
It's a bit like eating junk food every day, and "ah, sometimes" I go see the doctor, who keeps saying I should eat healthier and lose some weight.
Vibe coders being: https://images.kinorium.com/movie/shot/129136/h280_39185160....
I am currently doing 6 projects at the same time, where before I would only have been doing one at a time. This includes the requirements, design, implementation, and testing.
Sounds awful
Code IS spec.
Your code in $INSERT_LANGUAGE is no less of a spec to machine code than English is to $INSERT_LANGUAGE.
A spec is still needed; the spec is the core problem of engineering. Too much specialization has created job titles like $INSERT_LANGUAGE engineer, which deviate too far from the core problem, and that is being rectified now.
I have people skills! I am good at dealing with people!
When the cost of defects and of the AI tooling itself inevitably rises, I think we are likely to see a sudden demand for the remaining employed developers to do more work "by hand".
"Dang, the AI really screwed up this time. Call in the de-sloppers."
>"If you can't code by hand professionally anymore"
Then you are simply fucked. The code you deliver will contain bugs which the LLM sometimes will be able to fix and sometimes will not. And as a person who has no clue, you will have no idea how to fix it when the LLM cannot. Also, even when LLM code is correct, it can and sometimes does introduce gross performance fuckups, like using patterns with O(N²) complexity instead of O(N). Again, as a clueless person you are fucked. And if one goes into areas like concurrency and multithreading optimizations, one gets fucked even more. I can go on and on with many more particular ways to get screwed.
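To make the complexity point concrete, here is a toy Python illustration (illustrative only, not from any real codebase) of the kind of accidental quadratic pattern an LLM can slip in: membership checks against a list inside a loop instead of a set.

    def dedupe_quadratic(items):
        """O(N^2): each `in` check scans the whole `seen` list."""
        seen, out = [], []
        for x in items:
            if x not in seen:  # list membership is O(N)
                seen.append(x)
                out.append(x)
        return out

    def dedupe_linear(items):
        """O(N): set membership is O(1) on average."""
        seen, out = set(), []
        for x in items:
            if x not in seen:  # set membership is O(1)
                seen.add(x)
                out.append(x)
        return out

Both functions return the same result and both look equally "correct" in a diff; on a million items the first can take minutes while the second takes a fraction of a second, and a reviewer who can't read code will never spot the difference.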
For a person who can hand code, AI becomes an amazing tool. For me, it helps immensely.
> If you want to code by hand, then do it! No one's stopping you.
There are few skills that are both fun and highly valued. It's disheartening if it stops being highly valued, even if you can still do it in private.
> But we shouldn't pretend that you will be able to do that professionally for much longer.
I'm not pretending. I'm only sad.
The reason this analogy falls down is that tools typically do one thing, do it extremely well, and are extremely reliable. When I use a table saw, I know that it's going to cut this board into two pieces, exactly in this spot, and it'll do that exactly the same way every single time I use it.
You cannot tell AI to do just one thing, have it do it extremely well, or do it reliably.
And while there's a lot of opinions wrapped up in it all, it is very debatable whether AI is even solving a problem that exists. Was coding ever really the bottleneck?
And while the hype is huge and adoption is skyrocketing, there hasn't been a shred of evidence that it actually increases productivity or quality. In fact, study after study continues to show that speed and quality actually go down with AI.
I'm still not sure about the productivity. Last time I asked an LLM to generate a lib for me, it did it in a few seconds, but the result took me the rest of the day to review and correct. About the same time it would have taken me to write it from scratch.
That is exactly my experience. Every single time I get an LLM to write some code for me, it saves me no time because I have to review it carefully to make sure there are no mistakes. LLMs still, even after work has been done, completely make up methods and syntax that doesn't exist. They still get logic wrong or miss requirements.
Right now the only way to save time with LLMs is to trust the output and not review it. But if you do that, you're just going to produce crappy software.
Stuff that works ok for me:
Which is already pretty cool if you don't think about the massive amount of energy spent on this, but definitely not the "10x" productivity boost I hear about.
> trust the output and not review it
Not gonna happen here :)
Pretty much exactly this for me, except I can coax it into writing decent unit tests (you really gotta be diligent though, it loves mocking out the things it's testing lol) and CI stuff (mostly because I despise Actions YAML and would rather let it do that). But I do get decent results in both areas on a regular basis.
I think you're supposed to ask another LLM instance to review it, then ask the first LLM instance to implement corrections, or that's how I understand it.
Ah... so it's like violence, we just need more of it :)
Only the cutting of the furniture is automated. It is still designed and assembled by humans. There is no machine which spits out a sofa.
That is not a technical constraint; it could be automated if it made sense financially. Same with software: for some time, software won't be designed, coded, tested, and deployed to production entirely without human supervision or approval. But the pieces in between are more and more filled in by AI, as are the logistics of designing, manufacturing, and distributing sofas.
If it wasn't a technical constraint it would make sense financially.
Some people like to spin their own wool, weave their own cloth, sew their own clothes.
A few even make a good living by selling their artisanal creations.
Good for them!
It's great when people can earn a living doing what they love.
But wool spinning and cloth weaving are automated and apparel is mass produced.
There will always be some skilled artisans who do it by hand, but the vast majority of decent jobs in textile production are in design, managing machines and factories, sales and distribution.
The metaphor doesn't work because all of the things mentioned have to be individually fabricated. But software doesn't. Copies are free. That's the magic of software: you don't need much of it - you just need to be correct/smarter.
It's pretty surprising to see people on this site (mostly programmers, I assume) think of code in terms of quantity. I always thought developers believed the less code, the better.
> But we shouldn't pretend that you will be able to do that professionally for much longer.
If the local bakery can sell expensive artisanal brioches, surely the programmers can sell expensive artisanal ones and zeroes!
Like, do you even know how furniture is designed and built? Do you know how software is designed and built? Where is this comment even coming from? And people are agreeing with this?
A friend of mine reposted someone saying that "AI will soon be improving itself with no human intervention!!" And I tried asking my friend if he could imagine how an LLM could design and manufacture a chip, and then a computer to use that chip, and then a data center to house thousands of those computers, and he had no response.
People have no perspective but are making bold assertion after bold assertion
If this doesn't signal a bubble I don't know what does
I like programming by hand too. Like many of us here, I've been doing this for decades. I'm still proud of the work I produced and the effort I put in. For me it's a highly rewarding and enjoyable activity, just like studying mathematics.
Nevertheless, the main motivator for me has been always the final outcome - a product or tool that other people use. Using AI helps me to move much faster and frees up a lot of time to focus on the core which is building the best possible thing I can build.
> But we shouldn't pretend that you will be able to do that professionally for much longer.
Opus 4.5 just came out around 3 months ago. We are still very early in this game. Creating things this year already makes me feel like I'm in the Enchanted Pencil (*) cartoon, in which the boy draws an object with a magic pencil and it becomes reality within seconds. With the collective effort of everyone involved in building the AI tools, and with the incentives aligned (as they are right now), the progress will continue to be very rapid. You can still code by hand, but it will be very hard to compete in the market without the use of AI.
(*) It's a Polish cartoon from the 60s/70s (no language barrier) - https://www.youtube.com/watch?v=-inIMrU1t7s
>> For me it's a highly rewarding and enjoyable activity, just like studying mathematics. Nevertheless, the main motivator for me has been always the final outcome
There are two attitudes stemming from the LLM coding movement: those who enjoy the craft of coding MORE, and those who enjoy seeing the final output MORE.
I agree with this analogy, as someone who professionally codes and someone who pulls out the power tools to build things around my house but uses hand tools for furniture and chairs.
No job site would tolerate someone bringing a hand saw to cut rafters when you could use a circular saw; the outcome is what matters. In the same vein, if you're too sloppy cutting with the circular saw, you're going to get kicked off the site too. Just keep in mind that a home made from dimensional lumber is at the bottom of the precision scale. The software equivalent of a rapper's website announcing a new album.
There are places where precision matters, building a nuclear power plant, software that runs an airplane or an insulin pump. There will still be a place for the real craftsman.
Engineering is just going to evolve.
There are going to be minimal "junior" jobs where you're mostly implementing - I guess roughly equivalent to working wood by hand - but there are still going to be jobs resembling senior-level FAANG jobs for the foreseeable future.
Someone's going to have to do the work, babysit the algorithm, know how to verify that it actually works, know how to know that it actually does what it's supposed to do, know how to know if the people who asked for it actually knew what they were asking for, etc.
Will pay go down? Who knows. It's easy to imagine a world in which this creates MORE demand for seniors, even if there's less demand for "all SWEs" because there's almost zero demand for new juniors.
And at least for some time, you're going to need non-trivial babysitting to get anything non-trivial to "just work".
At the scale of a FAANG codebase, AI is currently not that helpful.
Sure, Gemini might have a million-token context, but the larger the context, the worse the performance.
This is a hard problem to solve, that has had minimal progress in what - 3 years?
If there's a MAJOR breakthrough on output performance wrt context size - then things could change quickly.
The LLMs are currently insanely good at implementing non-novel things in small context windows - mainly because their training sets are big enough that it's essentially a search problem.
But there's a lot more engineering jobs than people think that AREN'T primarily doing this.
If I'm using the right tools for the job, I don't feel like the LLM helps outside of minor autofilling or writing quick one-off scripts. I do use LLMs heavily at work, but that's because half the time I'm forced to use cumbersome tooling like Java with some boilerplatey framework, or to write web backends in C++ for no performance reason.
Coding can be a joy and feel like art. I — speaking for myself — do feel incredibly lonely when doing it alone for long stretches. It's closer to doing graduate mathematics, especially on software that fewer and fewer people know how to do well. It is also impossible to find people who would pay for _only_ beautiful code.
> This is no different then carpentry. Yes, all furniture can now be built by machines. Some people still choose to build it by hand. Does that make them less productive? Yes.
I take issue even with this part.
First of all, all furniture definitely can't be built by machines, and no major piece of furniture is produced by machines end to end. Even assembly still requires human effort, let alone design (and let alone choosing, configuring, and running the machines responsible for the automatable parts). So really a given piece of furniture may range from 1% machine-built (just the screws) to 90%, but it's never 100%, and rarely that close to the top of the range.
Secondly, there's the question of productivity. Even with furniture measuring by the number of chairs produced per minute is disingenuous. This ignores the amount of time spent on the design, ignores the quality of the final product, and even ignores its economic value. It is certainly possible to produce fewer units of furniture per unit of time than a competitor and still win on revenue, profitability, and customer sentiment.
Trying to apply the same flawed approach to productivity to software engineering is laughably silly. We automate physical good production to reduce the cost of replicating a product so we can serve more customers. Code has zero replication cost. The only valuable parts of software engineering are therefore design, quality, and other intangibles. This has always been the case, LLMs changed nothing.
Any high quality woodwork definitely has lots of work done by hand. Especially pieces like this: https://www.rauldelara.com/
> If you want to code by hand, then do it! No one's stopping you. But we shouldn't pretend that you will be able to do that professionally for much longer.
Bullshit. The value in software isn't in the number of lines churned out, but in the usefulness of the resulting artifact. The right 10,000 lines of code can be worth a billion dollars, the cost to develop it is completely trivial in comparison. The idea that you can't take the time to handcraft software because it's too expensive is pernicious and risks lowering quality standards even further.
I'm tired of the carpentry analogy. It feels like a thought-stopping cliché, because it's used in every thread where this topic comes up. It misses the fact that coding is fundamentally different, and that there are still distinct advantages to writing at least some code by hand, both for the individual and the company.
The question nobody asks is what will happen once atrophy kicks in and nobody is left who can firefight the production problems GenAI can't fix without making things worse, while a broken system bleeds a million dollars per day or more.
It's at least possible that we would eventually do a rollback to status quo and swear to never devalue human knowledge of the problems we solve.
> swear to never devalue human knowledge of the problems we solve.
Love this way of putting it. I hate that we can mostly agree that devaluing expertise of artists or musicians is bad, but that devaluing the experience of software engineers is perfectly fine, and actually preferable. Doing so will have negative downstream effects.
To me the biggest difference is that there’s some place for high quality, beautiful and expensive handcrafted woodwork, even if it’s niche in a world where Ikea exists. Nobody will ever care whether some software was written by humans or a machine, as long as it works and works well.
^This. Even if there were a demand for hand-crafted software, it would be very hard to prove it was hand-crafted, and demand is unlikely anyway, for the same reason there is no market for, e.g., luxury software. As opposed to physical goods, software consumers care about the result, not how it was created.
I, for one, prefer to use software where the person who wrote it understands how it works.
> Does that make them less productive?
I could use AI to churn out hundreds of thousands of lines of code that doesn't compile. Or doesn't do anything useful, or is slower than what already exists. Does that mean I'm less productive?
Yes, obviously. If I'd written it by hand, it would work ( probably :D ).
I'm good with the machine milled lumber for the framing in my walls, and the IKEA side chair in my office. But I want a carpenter or woodworker to make my desk because I want to enjoy the things I interact with the most. And don't want to have to wonder if the particle board desk will break under the weight of my frankly obscene number of monitors while I'm out of the house.
I'm hopeful that it won't take my industry too long to become inoculated against the FUD you're spreading about how soon all engineers will lose their jobs to vibe coders. But perhaps I'm wrong, and everyone will choose the LACK over the table that lasts more than most of a year.
I haven't seen AI do anything impressive yet, but surely it's just another 6mo and 2B in capex+training right?
"code by hand" is frequently figuring out what the project is even supposed to do and not the slow part (at least for me).
This is such a self-evident truth, the fact that somebody still believes AI generated code is "bad quality" is surprising ...
Psst ==> https://www.youtube.com/watch?v=k6eSKxc6oM8
MY project (MIT licensed) ...
LLM’s and Agents are merely a tool to be wielded by a competent engineer. A very sophisticated tool, but a tool nonetheless. Maybe it’s because I live in the South East, as far away as I can possibly get from the echo chamber (on purpose), but I don’t see this changing anytime soon.
Is IKEA furniture "built by hand" or "built by machines"? Or both - human hands and machines, each doing what they're best at?
Not sure why you are so sure that using LLMs will be a professional requirement soon enough.
E.g., in my team I heavily discourage generating and pushing generated code into a few critical repositories. While hiring, one of my criteria was not to hire an AI enthusiast.
If I have to become a factory worker I'm going to look for another job.
On the assembly line:
"What did you used to do?"
"Programming. You?"
"I was a lawyer."
This is literally my reality
The nail-in-the-coffin moment for me, when I realized AI had turned into a full-blown cult, was when people started equating a "hand crafted artisanal" piece of software used by a million people with a hand-crafted artisanal chair used by their grandma.
The cult has its origins in Taylorism - a sort of investor religion dedicated to the idea that all economic activity will eventually be boiled down to ownership and unskilled labor.
I will do what i know gives me the best possible and fastest outcome over the long term, 5-10 year period.
And that remains largely neovim and by hand. The process of typing code gives me a deeper understanding of the project that lets me deliver future features FASTER.
I'm fundamentally convinced that my investment into deep long term grokking of a project will allow me to surpass primarily LLM projects over the long term in raw velocity.
It also stands to reason that any task I deem NOT to further my goal of learning or deep understanding, and that can be done by an LLM, I will use the LLM for. And as it turns out there are a TON of those tasks, so my LLM usage is incredibly high.
> I will do what i know gives me the best possible and fastest outcome over the long term, 5-10 year period. And that remains largely neovim and by hand. The process of typing code gives me a deeper understanding of the project that lets me deliver future features FASTER. I'm fundamentally convinced that my deep long term understanding of a project will allow me to surpass primarily LLM projects over the long term.
I have never thought of that aspect! This is a solid point!
This is exactly what I’m doing expressed much more succinctly than I could have done myself. Thanks!
I often find myself in the role of the old guy advising a team to slow down a bit, and invest in doing things better now.
I generally frame this as: Are you optimizing for where you will be in 6 months, or 2 years?
I love that take and sympathise deeply with it. I also have come to the conclusion to focus my manual work on those areas where I can get learning from and try to automate the rest away as much as possible.
Yea, using agents and having them do the work means not only a lot of context switching, but also that I don't actually have any context.
This is the way. I think we’re in for some rough years at first but then what you described will settle in to the “best practice” (I hate that term). I look forward to the really bizarre bugs and incidents that make the news in the next 2-3 years. …Well as long as they’re not from my teams hah :)
Idk what the median lifespan of a piece of code / project / employee tenure is, but it's probably way less than 10 years, which makes that "long term investment" pretty pointless in most cases.
Unsuccessful projects: way less than 10 years
Successful projects: quite often much longer than 10 years
Code quality doesn't matter until lots of people start using what you wrote and you need to maintain/extend/change it
God it's a depressing thought that whatever work you do is just a throwaway no-one will use. That shouldn't be your end goal
> God it's a depressing thought that whatever work you do is just a throwaway no-one will use
I didn't say that.
In fact if your code doesn't significantly change over time it probably means your project wasn't successful.
Maybe we're talking about different things?
That's one of the biggest benefits of software quality and the long-term investment: how easy is your thing to change?
Right, but that usually means higher quality software design, and less so the exact low level details of function A or function B (in most cases)
If anything I'd claim using LLMs can actually free up your time to really focus on the proper design of the software.
I think the disconnect here is that people bashing LLMs don't understand that any decent engineer isn't just going around vibe coding, but instead creating a well thought design (with or without AI) and using LLMs to speed up the implementation.
If you can't deliver features faster with AI assistance then you're either using it wrong or working on very specialized software that AI can't handle yet.
I haven't seen any evidence yet that using AI is improving developer performance, just a bunch of people who "feel" like it does.
I'm still on the fence about codegen, but it's certainly helpful for explaining code quickly without manually stepping through it, and for providing quick access to docs.
I've built a SaaS (with paying customers) in a month that would have taken me easily 6 months to build with this level of quality and features. AI wrote I'd say 99.9% of code. Without AI I wouldn't even have done this because it would have been too large of a task.
In addition, for my old product which is 5+ years old, AI now writes 95%+ of code for me. Now the programming itself takes a small percentage of my time, freeing me time for other tasks.
No-one serious is claiming 6x productivity improvements for close to equal quality
This is proving GP's point that you're going off feels and/or exaggerating
Quality is better both from a user and a code perspective.
From a user perspective, I often implement a feature and then just throw it away, no worries, because I can reimplement it in an hour based on my findings. No sunk cost. Also, I can implement very small details that I'd otherwise have to backlog. This leads to a higher-quality product for the user.
From a code standpoint I frequently do large refactors that also would never have been worth it by hand. I have a level of test coverage that would be infeasible for a one man show.
> I have a level of test coverage that would be infeasible for a one man show.
When a metric becomes a target, it ceases to be a good metric.
Cool. What's the product? Like, do you have a link to it or something.
It's boring glorified CRUD for SMBs of a certain industry focused on compliance and workflows specific to my country. Think your typical inventory, ticketing, CRM + industry specific features.
Boring stuff from a programming standpoint but stuff that helps businesses so they pay for it.
This is pointing out one factor of vibecoding that is talked about too little: that it feels good, and that this feeling often clouds people's judgment on what is actually achieved (i.e. you lost control of the code and are running more and more frictionless on hopes and dreams)
It feels good to some people. Personally I have difficulty relating to that, it’s antithetical to important parts of what I value about software development. Feeling good for me comes from deeply understanding the problem and the code, and knowing how they do match up.
I agree with you. I had done a bit of vibe coding over the weekend. Not once did it feel good. Most of the time it produced things that were close to what I needed, but not quite hitting the mark. Partially, probably, because I'm not explaining myself in sufficient detail to the AI, but the way I work doesn't fit writing a super-detailed spec ahead of development. I've always developed my understanding of a project while working on it.
I feel more lost and unsure instead of good - because I didn't write the code, so I don't have its internal structure in my head and since I didn't write it there's nothing to be proud of.
Yep, I agree 100%. People have described AI coding as "delegating". But there's a reason I'm an IC and not a manager. It's because I don't want to delegate to someone else who does the work, I want to do the work. There's no joy to be had in having someone else do the work at my behest.
If the future of the technology industry truly is having AI write software for you, then I will do what I have to do. At the end of the day I have to put food on the table. But I will hate every second of my job at that point, and it sucks ass.
I like "vibe doc reading" and "vibe code explanation" but am continually frustrated with vibe coding. I can certainly generate code but it's definitely not my style and I feel reluctant to put my name on it since it's frequently non trivial to completely understand and validate when you're not actually writing it. Additionally, I find vibe coding to generate very verbose and overly abstracted code that's harder to read. I have to spend time pairing the generated code back down and removing things that really weren't needed.
For me, it feels good if I get it right. But unfortunately, there are many times, even with plan mode and everything specced, where after a few hours of the agent chipping away at and refactoring the problem, I realise that I can throw the whole thing away and start over. Then it feels horrible. It feels especially horrible because it feels like you have done nothing for that time and learned nothing.
If you succeed you will have also learned nothing, but would be blissfully unaware of it. (that was my reason to quit AI coding)
Definitely. And it’s hard to separate out whether the person is actually more productive or feels more productive.
Yes, this (higher perceived vs. lower actual productivity) was probably at least true for early 2025.
https://metr.org/blog/2025-07-10-early-2025-ai-experienced-o...
But why does it "feel good", if in fact it does?
I tried writing a small utility library using Windows Copilot, just for some experience with the tech (OK, not the highest tech, but I am 73 this year), and found it mildly impressive, but quite slow compared to what I would have done myself to get some quality out of it. It didn't make me feel good, particularly.
It _does_ feel good, I know what you mean. I don’t understand why exactly but there’s def an emotion associated with vibe coding. It may be related to the feeling you get when you get some code working and finish a requirement or solve a problem. Maybe vibe coding gives you a shortcut to that endorphin. I think it’s going to be particularly important to manage that feeling and balance with reality. You know, I wonder how similar this reaction is to the endorphins from YouTube shorts or other social media. If it’s as addicting (and it’s looking that way) but requires a subscription and tied to work instead of entertainment then the justification for the billions and billions of investment dollars is obvious. Interesting times indeed.
https://www.fast.ai/posts/2026-01-28-dark-flow/
Conversely I feel like this is talked about a lot. I think this is a sort of essential cognitive dissonance that is present in many scenarios we're already beyond comfortable with, such as hiring consultants or off-shoring or adopting the latest hot framework. We are a species that likes things that feel good even if they're bad for us.
We don't stand a chance and we know it.
> We don't stand a chance and we know it.
Drugs, alcoholism, overeating, orgies, doom scrolling, gambling.
Addictions are a problem or danger to humans, no doubt. But we don't stand a chance? I'm not sure the evidence supports your argument.
Yeah I get a lot of value from vibe coding and think it is the future of how we work but I’ve started to become suspicious of the pure dopamine rush it gives me. I don’t like that it is a strange combo of the sweaty feeling of playing StarCraft all night and finishing a term paper at the last minute.
I think it feels like shit, tbh. That's my biggest problem with it. The moment-to-moment feedback is slower than building it myself, and the "almost there but not quite" sucks. Also, like the article states, waiting for the LLM is so fucking boring.
it feels good because we've turned coding into a gacha machine. you chase the high from when it works, and if it doesn't, you just throw more tokens at the problem.
> you lost control of the code and are running more and more frictionless on hopes and dreams
Your control over the code is your prompt. Write more detailed prompts and the control comes back. (The best part is that you can also work with the AI to come up with better prompts, but unlike with slop-written code, the result is bite-sized and easily surveyable.)
You know what code is? A very detailed specification that drives a deterministic machine. Maybe we don't need to keep giving LLMs more details; maybe we could skip the middleman there.
The gravity well's pull toward a management track, or in the very least the desire to align one's sympathies with management, is simply irresistible to some people. Unfortunately I do not think Hacker News is the best venue to discuss solutions to this issue.
Also, you're still in control of your code. It's not an xor thing: the agent does its thing, but the code is still there and yours. You can still adjust, fix, enhance, etc. The agent is there to help as much or as little as you want.
Control over code requires understanding it. That's what letting an LLM write everything takes away from you.
I see what you mean but I haven’t had an LLM produce code that I didn’t understand or couldn’t follow. Also, if it does it’s pretty easy to ask it to explain it to you. I’ve asked for explanations for ridiculously long lists of tailwind css classes but that’s just a pet peeve really, I mean, I understand what they’re doing.
I'll also say that vibecoding only feels good until it doesn't. And then you realize you don't understand the huge mess of code you've just produced at all.
At least when I write by hand, I have a deep and intimate understanding of the system.
I think it is pretty indisputable that there is a valuable place for AI. I recently had to interact with a very horrible DB schema. The best approach I came up with to solve my challenge involved modelling a table with 300 columns. Converting the SQL DDL to a Rust struct was simple but tedious work. A prompt of fewer than 15 words guided an AI to produce the 900+ LOC for me. It took a couple of seconds to scan the result and see that each field had both annotations I needed and that the datatypes were sane.
That is exactly the type of help that makes me happy to have AI assistance. I have no idea how much electricity it consumed. Somebody more clever than me might have prompted the AI to generate the other 100 LOC that used the struct to solve the whole problem. But it would have taken me longer to build that prompt than it took me to write the code.
Perhaps an AI might have come up with a more clever solution. Perhaps memorializing a prompt in a comment would be super insightful documentation. But I don't really need or want AI to do everything for me. I use it or not in a way that makes me happy. Right now that means I don't use it very much. Mostly because I haven't spent the time to learn how to use it. But I'm happy.
I worry that means the bad code / schema / design never gets improved.
I've spent a lot of my career cleaning up stuff like that, I guess with AI we just stop caring?
I wonder just what goes on in someone's mind when they do not care who in the future will have to maintain what they've crafted. Nor care about the experience of the user. Nor even feel accountable when they haven't done their due diligence to do things right.
And each time you do this, you make it worse. Now, if anyone ever wants to fix this, they have to fix your code as well.
You could say that. The schema in question was not mine nor in any way within my control. I could start up a business and write an entire app to replace the one in question. Maybe I could even get you to donate some money to fund that endeavor. Or I could spend an hour, once, to code an external workaround so I don't have to spend two hours a cycle fighting with that stupid app.
This is how ridiculous workflows evolve, but it really isn't AI's fault.
It's still possible to script out codegen. I frequently use Python to generate code like that.
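As a minimal sketch of that kind of deterministic codegen (hypothetical column names and type mapping; a real table might have hundreds of columns):

    # Map SQL types to target-language types (here: Python type hints).
    SQL_TO_PY = {"INTEGER": "int", "BIGINT": "int", "TEXT": "str",
                 "REAL": "float", "BOOLEAN": "bool"}

    def struct_from_ddl(name, columns):
        """columns: list of (column_name, sql_type) pairs parsed from the DDL."""
        lines = ["from dataclasses import dataclass", "", "@dataclass", f"class {name}:"]
        for col, sql_type in columns:
            py_type = SQL_TO_PY.get(sql_type.upper(), "str")  # fall back to str
            lines.append(f"    {col}: {py_type}")
        return "\n".join(lines)

    print(struct_from_ddl("Order", [("id", "BIGINT"), ("customer", "TEXT"),
                                    ("total", "REAL"), ("paid", "BOOLEAN")]))

The point is determinism: the same DDL in always yields the same struct out.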
Really, I'd rather have AI generate a codegen script that deterministically derives the struct from the schema.
I've had enough instances where it slid in a subtle change, like adding "ing" to a field name, to not fully trust it.
You'd probably have consumed two or more orders of magnitude more energy just on the coffee (and its growth and supply chain) needed to write that piece of code. Not counting the building, food, transportation...
We humans are the expensive part of the machine.
Yeah, it's addictive in a way similar to scrolling social media shorts or playing a slot machine.
The most pertinent thought in this is where the author asks, "LLMs can generate decent-ish and correct-ish looking code while I have more time to do what? doomscroll?"
LLMs are not good enough for you to set and forget. You have to stay nearby babysitting it, keeping half an eye on it. That's what's so disheartening to many of us.
In my career I have mentored junior engineers and seen them rapidly learn new things and increase their capabilities. Watching over them for a short while is pretty rewarding. I've also worked with contract developers who were not much better than current LLMs, and like LLMs they seemed incapable of learning directly from me. Unwilling, even. They were quick to say nice words like, "ok, I understand, I'll do it differently next time," but then they didn't change at all. Those were some of the most frustrating times in my career. That's the feeling I get when using LLMs to write code.
I pop between them in the "down time", or am reviewing their output, or preparing the requirements for the next thing, or reviewing my coworkers' MRs.
Plenty to do that isn't doom scrolling.
Has anyone got any insights into what hiring software engineers looks like these days? As someone currently with a job and not hiring it is hard to imagine.
Has there been any sort of paradigm shift in coding interviews? Is LLM use expected/encouraged or frowned upon?
If companies are still looking for people to write code by hand then perhaps the author is onto something, if however we as an industry are moving on, will those who don't adapt be relegated to hobbyists?
I haven’t noticed much change yet at my firm. However, I work at a giant organization (700k+ employees) and they’re struggling to keep up. The lawyers aren’t even sure if we own the IP of agent generated code let alone the legal risk of sending client IP to the model providers.
It’s going to take a while.
It's obvious: companies will require both hand-coding and ai-coding skills. Job seeking has been hoop-jumping for many years, so why not one extra hoop?
5 rounds of LC by hand plus 5 rounds of LC with AI.
Most of the hiring is happening at heavily AI-focused coding companies; a lot of mid-sized companies have frozen hiring, or they are only hiring people who claim to use AI to be 10x devs. For non-lying devs, only the big companies seem to be hiring, and their process hasn't changed much: you are still expected to solve leetcode and then also sit through system design.
I can confirm there is less hiring, and those who do hire throw more difficult leetcode challenges than ever. The kind of challenge that is impossible to solve in time without an LLM doing most of the work.
Most companies haven't recognized that LLM cheating is extremely effective and widespread yet. Hiring practices have not kept up.
Coding with AI falls in one of three categories:
1. The thing to be written is available online. AI is a search engine to find it, maybe also translate it to the language of choice.
2. The thing (system or component or function) is genuinely new. The spec has to be very precise and the AI is just doing the typing. This is, at best, working around syntax issues, such as some hard-to-remember particular SQL syntax. The languages should be better.
3. It's neither new nor available online, but it is a lot to type out and modify. The AI does all the boilerplate. This is a failure of the frameworks and languages in requiring so much boilerplate.
I'm really happy to see this take. It's not the first time, but it's not said often enough. I once had the thought that anything AI can do really well is probably something that should not be being done at all. That's an overly broad statement, but I think there's some truth in it. The grand challenge of software engineering is to find beautifully elegant and precise ways to express what we want the computer to do for us. If we can find them, it will be better to express ourselves in those ways than to prompt an AI to do it for us, much in the same way that a blog written by an LLM is not worth reading.
Really, I don't think frameworks have kept up, and LLMs are the hammer in the Law of the Hammer.
I was developing at the speed of "vibecoding" long before LLMs, by having highly thought-compressed tools, frameworks, and snippets. Most of my applications use Model Driven Development, where the data model automatically builds the application's DAOs/controllers/validations/migrations. The data model is the application. I find LLMs help me write procedures on top of this data model even a little bit faster than I did before. But the data model is the design. Unless I turn over the entire design to the LLM, I am always the decider on the data model. I will always have more context about where I want to evolve it. I enjoy the data modelling aspect and want to remain in the driver's seat, with LLMs as my implementer of procedures.
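A minimal sketch of the idea, with hypothetical entity and field names: a declarative model from which validation is derived, the same way DAOs and migrations can be.

    # Declarative data model: the single source of truth.
    MODEL = {
        "customer": {
            "name":  {"type": str, "required": True, "max_len": 80},
            "email": {"type": str, "required": True, "max_len": 120},
            "age":   {"type": int, "required": False},
        },
    }

    def validate(entity, record):
        """Derive validation from the model instead of hand-writing it per entity."""
        errors = []
        for field, rules in MODEL[entity].items():
            value = record.get(field)
            if value is None:
                if rules["required"]:
                    errors.append(f"{field} is required")
                continue
            if not isinstance(value, rules["type"]):
                errors.append(f"{field} must be {rules['type'].__name__}")
            elif "max_len" in rules and len(value) > rules["max_len"]:
                errors.append(f"{field} exceeds {rules['max_len']} chars")
        return errors

    print(validate("customer", {"name": "Ada"}))  # -> ['email is required']

CRUD endpoints and migrations can be generated from the same dictionary, which is why the modeller stays the decider while the LLM (or any code generator) fills in the procedures.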
It feels like all of modern society is being reduced to button clicks that produce dopamine hits. And that’s sad. No wonder everything is a mess.
When the slot machine is the ultimate UX.
Cf. wirehead, https://en.wikipedia.org/wiki/Wirehead_(science_fiction)
I like writing code that I don't have time pressure around, as well as the kind where I can afford to fail and use that as a learning experience. Especially the code that I can structure myself.
I sometimes dread working on code that's in a state of disrepair or is overly complex - think a lot of the "enterprise" code out there. It got so bad that I more or less quit a job over it, though I never really stated that publicly, alongside my mind going to dark places when you have pressure to succeed but the circumstances are stacked against you.
For a while I had a few Markdown files that went into detail exactly why I hated it, in addition to also being able to point my finger at a few people responsible for it. I tried approaching it professionally, but it never changed and the suggestions and complaints largely fell on deaf ears. Obviously I've learnt that while you can try to provide suggestions, some people and circumstances will never change, often it's about culture fit.
But yeah, outsource all of that to AI, don't even look back. Your sanity is worth more than that.
I wonder if some of the divide in the LLM-code discourse is between people who have mostly/always worked in jobs where they have the time and freedom to do things correctly, and to go back and fix stuff as they go, vs people who have mostly not (and instead worked under constant unrealistic time pressure, no focus on quality, API design, re-factoring, etc)
I'm pretty sure that the answer to that question is yes: those who have worked with code that sparks joy won't like having that close interaction taken away, whereas the people for whom the code they have to work with inspires misery will be thankful for the opportunity to at least slightly free themselves from the shackles of needing to run in circles for two weeks to implement a basic form because everything around it is a mess.
Even if Claude writes 100% of the code, I think there will be a bifurcation between people who are finicky about 10 lines of code and those finicky about high-level product experiences.
I think the 10-lines-of-code people worry their jobs will now become obsolete. In cases where the code just required googling how to do X with Y technology, that's true: that's going to be trivially solvable, and it will cause us to not need as many developers.
In my experience though, the 10 lines of finicky code use case usually has specific attributes:
1. You don't have well defined requirements. We're discovering correctness as we go. We 'code' to think how to solve the problem, adding / removing / changing tests as we go.
2. The constraints / correctness of this code are extremely multifaceted. It simultaneously matters for it to be fast, correct, secure, easy to use, etc.
3. We're adapting a general solution (ie a login flow) to our specific company or domain. And the latter requires us to provide careful guidance to the LLM to get the right output
It may be Claude Code working on these fewer bits of code, but in these cases it's still important to have taste and care with the code details themselves.
We may weirdly be in a case where it's possible to single-shot a slack clone, but taking time to change the 2 small features we care about is time consuming and requires thoughtfulness.
> I think the 10 lines of code people worry their jobs will now become obsolete.
I'm gonna assume you think you're in the other camp, but please correct me if I'm mistaken.
I'd say I'm in the 10 lines of code camp, but I'd say that group is the least afraid of the fictionalized career threat. The people that obsess over those 10 lines are the same people who show up to fix the system when prod goes down. They're the ones that change 2 lines of code to get a 35% performance boost.
It annoys me a lot when people ship broken code. Vibe coded slop is almost always broken, because of those 10 lines.
I’m probably in the 10 lines of code camp
At the same time, I make enough silly mistakes hand coding that it feels irresponsible NOT to have a coding LLM generate code. But I look at all the code and (gasp) make manual changes :)
One of the first bugs I found - and fixed - at my current job instantly made us an extra 200k/year. One line of code (potentially a one character fix?), causing a little bug nobody noticed, which I only saw because I like to comb through application logs, and which was caused by a peculiarity of the data. Would an LLM have written better code? Maybe. But I've seen a lot of bad code churned out by LLMs, even today. I'm not saying every line matters - particularly for frontend code - but sometimes individual lines of code, or even individual characters, can be tremendously important, and not be written in any spec, not tested with all possible data combinations, or documented anywhere. At a previous job, I spent several days unraveling another one-line bug that was keeping a multi-million dollar project from running at all. Again, totally non-obvious unless you had a tremendous amount of context and were running a pretty complex system to figure it out, with a sort of tenacity the LLMs don't currently possess.
> I think there will be a bifurcation between people who are finicky about 10 lines of code. And those finicky about high level product experiences.
No one cares about a random 10 lines of code. And the focus of AI hypers on LoC is disturbing. Either the code is correct and good (allowing for change later down the line) or it isn't.
> We may weirdly be in a case where it's possible to single-shot a slack clone, but taking time to change the 2 small features we care about is time consuming and requires thoughtfulness.
You do remember how easy it is to do `git clone`?
Nobody cares until there’s an incident or a security vulnerability, or something doesn’t work based on some PM's assumptions of how it should work.
The question to me becomes: is the PM -> engineering handoff outdated? Should they be the same person? Does it collapse to one skill set for this work?
A PM describes the business needs. Engineering makes it a reality according to technical constraints. I've never seen a PM checking the code from engineering or investigating the root cause of an incident. And imagine being an engineer working on such a case while having to talk to consumers at the same time. That would be a very poor usage of resources.
It really depends on the project for me. For example, I never enjoyed writing React code (or really any UI), just the outcome of my idea materializing in a usable interface. There is nothing creative or fun for me in almost any UX framework. It’s just a ton of predictable typing (now we need a fricking box. And another box. And another stupid box…). I’m more than happy outsourcing that. However, my thoughts are so random and imprecise that outsourcing it to another person always felt disrespectful to them. I don’t have to worry about that with AI. My company is paying for it, and when I’m prototyping a React thing every now and then, I burn a few thousand dollars a day for the lols.
If they don’t like it, they can take it away. I just won’t do that part, because I have no interest in it. Some other parts of the project I do enjoy working on by hand: at least setting up the patterns I think will result in simple readable flow, reduce potential bugs, etc. AI is not great at that. It’s happy to mix strings, nulls, bad type castings, no separation of concerns, no small understandable functions, no reusable code, etc., which is the part I enjoy thinking about.
My gut says waning demand for labor in the dev job market removes your boss’s incentive to factor in what you enjoy or are interested in doing.
Same with GUIs. I’m making a web GUI that’s very specific to a project that I’m working on. My team finds it very useful, but I would never have made that thing without AI assistance: a combination of not finding it interesting or fun, it taking too long, and me not being familiar with web GUI stuff.
Claude code makes react + tailwindcss + whatever component library actually bearable for me. I can just “make a navbar on the left hand side of the screen like vscode has” and it mostly does it, a few tweaks and I have what I want. I waste so much time on that stuff doing it by hand it drives me crazy.
Also “pull records from table X and display them in a data grid. Include a “New” button and associated functionality respecting column constraints in the database. Also add an edit and delete button for each row in the table”. God, it’s really nice to have an LLM get that 85% of the way done in maybe 2 min.
Seems like the author has a case of all or nothing. The real power in agentic programming, to me, is not in extremes, but in that you are still actively present. You don't give it world-sized things to do, but byte-sized, and you constantly steer it. The trick is to be detailed enough to produce quality, and to be aware of everything it produces, but not so detailed that it would make sense to just write the code yourself. It's a delicate balance, but once you've found it, incredibly powerful. Especially mixed with deterministic self-checking tools (like some MCPs).
If you "set and forget", then you are vibe coding, and I do not trust for a second that the output is quality, or that you'd even know how that output fits into the larger system. You effectively delegate away the reason you are being paid onto the AI, so why pay you? What are you adding to the mix here? Your prompting skills?
Agentic programming to me is just a more efficient use of the tools I already used anyway, but it's not doing the thinking for me, it's just doing the _doing_ for me.
I am with you and fully agree with your "it does not have to be an all or nothing" stance. A remark on one part of your comment:
> What are you adding to the mix here? Your prompting skills?
The answer to that is an unironic and dead-serious "yes!".
My colleagues use Claude Opus and it does an okay job but misses important things occasionally. I've had one 18-hour session with it and fixed 3 serious but subtle and difficult-to-reproduce bugs. I also fixed 6-7 flaky tests, and our CI has been 100% green ever since.
Being a skilled operator is an actual billable skill IMO. And that will continue to be the case for a while unless the LLM companies manage to make another big leap.
I've personally witnessed Opus do world-class detective work. I even left it unattended and it churned away on a problem for almost 5h. But I spent an entire hour before that carefully telling it its success criteria, never to delete tests, never to relax requirements X & Y & Z, always to use this exact feedback loop when testing after it iterated on a fix, and a bunch of others.
In that ~5h session, Opus fixed another extremely annoying bug, and it found mistakes in tests and corrected them, after first correcting the production code and writing new tests.
Opus can be scary good but you must not handwave anything away.
I found love for being an architect ever since I started using the newest generation [of scarily smart-looking] LLMs.
Yup, totally! I'm also not against the evolution of software engineer into software architect. We were headed in that direction already anyway, with the ever increasing amount of abstraction in our libraries and tools. This also frees up my ability to do other things, like coordinate cross-team efforts, deal with customer support issues, etc. As a generalist, I feel more useful and thus valuable than ever, and that makes me very happy.
For me, writing code is clarifying ideas; it’s an important part of the process. Sometimes you start to see a radical way of simplifying what you want, and that only happens if you are willing to transform your requirements when they turn out to be overly prescriptive.
I think, though, that it is probably better for your career to churn out lines: it takes longer to radically simplify, and people don’t always appreciate the effort. If instead you go the other way and increase scope and time and complexity, that is more likely to result in rewards to you for the greater effort.
If you write it by hand you get not only code, but also understanding.
Yea, my job as a SWE is to have a correct mental model of the code and bring it with me everywhere I go... meetings, feature design, debugging sessions. Lines of code written is not unimportant, but it matters way less when you look at the big picture.
It doesn't have to be either-or in my experience.
I very much enjoy the activity of writing code. For me, programming is pure stress relief. I love the focus and the feeling of flow, I love figuring out an elegant solution, I love tastefully structuring things based on my experience of which concerns matter, etc.
Despite the AI tools I still do that: I put my effort into the areas of the code that count, or that offer an intellectually stimulating challenge, or where I want to manually explore and think my way into the problem space and try out different API or structure ideas.
In parallel to that I keep my background queue of AI agents fed with more menial or less interesting tasks. I take the things I learn in my mental "main thread" into the specs I write for the agents. And when I need to take a break on my mental "main thread" I review their results.
IMHO this is the way to go for us experienced developers who enjoy writing code. Don't stop doing that, there's still a lot of value in it. Write code consciously and actively, participate in the creation. But learn to utilize agents and keep them busy in parallel or when you're off-keyboard. Delegate, basically. There's quite a lot of things they can do already that you really don't need to do, because the outcome is completely predictable. I feel that it's possible to actually increase the hours/day spent focussing on stimulating problems that way.
The "you're just mindlessly prompting all day" or "the fun is gone" are choices you don't need to be making.
I am happy someone else is also talking about the addictive nature of vibe coding and its gambling-esque rewards. Will we see agentic programmers begging for tokens on Kickstarter in the future? That would be funny.
I said something similar in a different thread but the joy of actually physically writing code is the main reason why I became a software developer. I think there is some beauty to writing code. I enjoy typing the syntax, the interaction with my IDE, debugging by hand (and brain) rather than LLM, even if it's less efficient. I still use AI, but I do find it terribly sad that this type of more "manual" programming seems to be being forced out.
I also enjoy walking more than driving, but if I had to travel 50 miles every day for my job, I would never dream of going on foot. Same goes for AI for me. If I can finish a project in half the time or less, I still feel enough accomplishment and on top of that I will use the gained free time for self actualisation. I like my job and I love coding and solving challenging problems, but I also love tons of other stuff that could use more of my attention. AI has created an insane net positive value for me so far. And I see tons of other people who could also benefit from it the same way, if only they spent a bit more time learning how to use it effectively. Considering how everyone and their uncle thinks they need to chime in on what AI is or is not or what it can or can not do, I find most people have frustratingly little insight into what you can actually do already. Even the people working at companies like Amazon or MS who claim to work on AI integrations sometimes seem to be missing some essentials.
I don’t really understand your point about AI freeing up your time to do other stuff at your job. Does your employer let you work fewer hours since you’re finishing projects sooner? Mine certainly doesn’t, and I’d rather be coding than doing the other parts of my job. But maybe I’m misunderstanding what you were trying to say?
I also would rather a project take longer and struggle through it without using AI as I find joy in the process. But as I said in my original post I understand that type of work appears to be coming to an end.
Dev happiness is not the determining factor of how software will be written at scale
If you think unhappy devs are going to produce anything good then please let me know the stock ticker of your company so I can short it
Well said
It's one of the factors: not just ethically, but also because devs' input is valued, and if they are not happy it may be a sign that something is operationally wrong (although of course there might be a tradeoff between productivity and worker happiness).
I hate the implication that the happiness of employees is of no consequence and must be sacrificed for...whatever software is being produced.
But I guess that's nothing new.
I feel hand-written/human-written code from an experienced individual should be more valuable to a business than code created by agents. Surely agents and humans might be using the same underlying frameworks or programming languages, but the value difference depends on the breadth and depth of experience. Agents give you the breadth, but experienced individuals give you the depth in understanding and problem solving.
I find it helps me to just be forced to focus on a task for a few hours. Just the blocked-out attention I spend on it will help refine and discover new problems and angles, etc. I don't think just blocking out the time without actually trying to code it (staring at a wall) is as effective.
> “vibe coding has an addictive nature to it, you write some instructions, and code that looks correct is generated. Bam! Dopamine hit! If the code isn’t correct, then it’s just one prompt away from being correct”
The reason Claude code or Cursor feels addictive even if it makes mistakes is better illustrated in this post - https://x.com/cryptocyberia/status/2014380759956471820?s=46
> The process of writing code helps internalize the context and is easier for my brain to think deeply about it.
True, and you really do need to internalize the context to be a good software developer.
However, just because coding is how you're used to internalizing context doesn't mean it's the only good way to do it.
(I've always had a problem with people jumping into coding when they don't really understand what they are doing. I don't expect LLMs to change that, but the pernicious part of the old way is that the code -- much of it developed in ignorance -- became too entrenched/expensive to change in significant ways. Perhaps that part will change? Hopefully, anyway.)
My wife and my dad enjoy assembling furniture (the former freestyle, the latter off the instructions). I like the furniture assembled but I cannot stand doing it. Some of us are one way and others are the other way.
For me, LLMs are joyful experiences. I think of ideas and they make them happen. Remarkable and enjoyable. I can see how someone who would rather assemble the furniture, or perhaps build it, would like to do that.
I can’t really relate but I can understand it.
It's a new distinction. Building software used to be the same thing as coding software. Now they are different.
That’s a good point. I suppose one must imagine the complaints from other Internet commenters in bygone times over using libraries vs writing one’s own code. They probably found themselves similarly estranged from a community of library assemblers. And now even those assemblers find themselves estranged from us machine whisperers. But we all were following the way to build software for our time.
I wonder who follows. Perhaps it has already happened. I look at the code but there are people who build their businesses as English text in git. I don’t yet have the courage.
Excellent insight. And thanks for addressing the actual subject rather than the analogy.
Every day there's a thread about this topic, and the discussions always circle around the same arguments.
I think we should be worrying about more urgent things, like a worker doing the job of three people with AI agents, the mental load that comes with that, how much of the disruption caused by AI will disproportionately benefit owners rather than employees, and so on.
Agreed, but sadly many people are too optimistic about AI and are completely forgetting that they can be part of the next layoffs.
And others are unable to believe the visible (if not extreme) speed boost from pragmatic use of AI.
And sadly, whenever the discussion about the collective financial disadvantage of AI to software engineers starts, and wherever it goes…
The owners and employers will always make the profits.
We are, after all, in the holy temple of the adherents of the Great Disenfranchisement Machine.
I’m in a similar camp to the OP. For me, my joy doesn’t come from building - it comes from understanding. Which incidentally has actually made SWE not a great career path for me because I get bored building features, but that’s another story…
For me, LLMs have been a tremendous boon in terms of learning.
Initially I felt like this, but now I've changed. Now I realise a lot of grunt work doesn't need to be done by me; I can direct the LLM to make changes. I can also experiment more, as I'm able to build complex features, try them out and delete them without feeling too bad.
The more I read things like this, the more I realize that software engineering today is just bloat and grunt work that people want to escape.
It's so ironic because computers/computer programs were literally invented to avoid doing grunt work.
To be fair, a lot of the bloat and grunt work is safety nets we built for our own benefit: static typing, linting, test harnesses, visual regressions, CI, etc. If AI does the legwork there while I focus on business logic and UX, it's a win-win.
I agree. But I do have some concerns. Sometimes the LLM writes code and it's a lot of work to go through it. I get lazy and trust the LLM too much. I've been doing this for a while, so I know how it should write, and I go back and try to fix or refactor. But a new dev might direct an LLM to write code they don't understand. Like a black box. The LLM makes a lot of decisions without you realising it, decisions which you used to make yourself. Writing code is making a thousand decisions.
>Yes, coding is not software engineering
It absolutely is.
>Even if I generate a 1,000 line PR in 30 minutes I still need to understand and review it. Since I am responsible for the code I ship, this makes me the bottleneck.
You don't ship it, the AI does. You're just the middleman, a middleman they can eventually remove altogether.
>Now, I would be lying if I said I didn’t use LLMs to generate code. I still use Claude, but I do so in a more controlled manner.
"I can quit if I want"
>Manually giving claude the context forces me to be familiar with the codebase myself, rather than tell it to just “cook”. It turns code generation from a passive action to a deliberate thoughtful action. It also keeps my brain engaged and active, which means I can still enter the flow state. I have found this to be the best of both worlds and a way to preserve my happiness at work.
And then soon the boss demands more output, like the guys who left it all to Claude, and even run 5x in parallel, deliver.
> Even if I generate a 1,000 line PR in 30 minutes I still need to understand and review it. Since I am responsible for the code I ship, this makes me the bottleneck.
I am not responsible for choosing whether the code I write uses a for loop or a while loop. I am responsible for whether my implementation (code, architecture, user experience) meets the functional and non-functional requirements. For well over a decade my responsibilities have involved delegating the work to other developers, or even outsourcing an entire implementation to another company, like a SalesForce implementation.
When I got my first job long ago, I found that code review did involve arguing over things like for vs. while loops, or having proper grammar in comments. I thought about quitting for a sec.
Now that I have more experience and manage other SWEs, I was right, that stuff was dumb and I'm glad that nobody cares anymore. I'll spend the time reviewing but only the important things.
Unfortunately, people do care that the AI agents don’t code just like they do.
Once I got to the point where I was delegating complete implementations to seniors with just “this is a high level idea of what Becky’s department wants. You now know as much as I do. If you have any business related questions, go ask Becky and come back to me with a design; these are our only technical constraints”. Then two weeks later there are things I might have done differently. But it meets all of the functional and non-functional requirements. I bite my tongue and move on.
His team is going to be responsible for it.
Now I don’t treat AI as a senior developer. I treat it as a mid-level ticket taker. If there is going to be a feature change, I ain’t doing it any more. The coding agent is. I am just going to keep good documentation in various MD files for context.
Is there something about LLMs that suddenly makes grammar and style irrelevant? Is your take that no human is going to read this ever again, so why bother making it pretty and consistent/readable?
It was never relevant, LLM or not. When reviewing junior SWEs' code pre LLMs, I didn't care about 75% of the style guide. I cared if they were using the DB wrong or had race conditions or wrote code I couldn't read.
In my other comment, I meant that other reviewers who used to nitpick have stopped for whatever reason, maybe because overall people are busier now.
Of course. Almost everyone who knows how to ride a horse is happier riding a horse than driving a car too. Or hell, in decent weather, even a bike.
In fact, it's even worse - driving a car is one of the least happy modes of getting around there is. And sure, maybe you really enjoy driving one. You're a rare breed when it comes down to it.
Yet it's responsible by far for the most people-distance transported every day.
These are starting to become daily horoscopes
> “What’s the point of it all?” I thought, LLMs can generate decent-ish and correct-ish looking code while I have more time to do what? doomscroll?
You could look back throughout human history at the inventions that made labor more efficient and ask the same question. The time-savings could either result in more time to do even more work, or more time to keep projects on pace at a sane and sustainable rate. It's up to us to choose.
I understand it certainly makes you happier, but for most people, it’s more about paying the bills.
I also came to a pretty simple understanding over the years. If I'm coding and making progress on a project, I'm happy. If I'm not, or I'm stuck on something, I'm unhappy. This is a profoundly unhealthy way to live because life will pass you by. There is more to our existence than work, or even hobbies. And if AI lets me get more time for that, I am happier than ever.
This is great in theory, but answer me sincerely: are you spending less time at work because of AI? Because I reckon for most programmers here it is not the case at all.
There certainly are many who create a few more PRs, AI-generated, so they can twiddle their thumbs for most of the day.
Yes, but is AI really getting you unstuck, or are you playing a game of whack-a-mole where it fixes one bug and generates several others that you are unaware of (just one example)?
> Yes, but is AI really getting you unstuck
Yes, it really is.
I hate typing strings of syntax. So boring. Never saw the appeal. I do like tinkering with ideas, concepts, structure... just not the mechanical interaction part. I'm not the best typist... then again, it's the same with playing Factorio. I love the concept of building structures, but fighting the UI to communicate my ideas is such a drag...
Bro discovered that doing long division by hand makes him happier than using a calculator, and decided the rest of us are just dopamine junkies for enjoying tools that actually scale.
It's a phenomenon you see in a lot of crafts. We enjoy the craft, but when it becomes all about the product and we optimize for that, the fun goes away.
Succinctly: process over product.
I am TL of an Android app with dozens of screens that expose hundreds of distinct functions. My task is to expose all of these functions as app functions that can be called by an LLM in response to free-form user requests. My current plan is to build a little LangGraph pipeline where the first step is AI documenting all functions in each of the app's fragments, the second step is extracting them into app functions, then refactoring the fragment to call the app functions, etc. And by build I mean Gemini will build it for me and I will ask for some refinement and edit prompts.
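For what it's worth, the skeleton of such a pipeline is small. A rough sketch in LangGraph, with the Gemini calls stubbed out and the node names my own, not a real design:

    from typing import TypedDict
    from langgraph.graph import StateGraph, START, END

    class PipelineState(TypedDict, total=False):
        fragment_source: str   # original fragment code
        docs: str              # step 1: per-function documentation
        app_functions: str     # step 2: extracted app functions
        refactored: str        # step 3: fragment rewritten to call them

    def document_functions(state: PipelineState) -> dict:
        # Would call Gemini here to document every function in the fragment.
        return {"docs": "..."}

    def extract_app_functions(state: PipelineState) -> dict:
        # Would call Gemini to lift the documented functions into app functions.
        return {"app_functions": "..."}

    def refactor_fragment(state: PipelineState) -> dict:
        # Would call Gemini to rewrite the fragment against the app functions.
        return {"refactored": "..."}

    graph = StateGraph(PipelineState)
    graph.add_node("document", document_functions)
    graph.add_node("extract", extract_app_functions)
    graph.add_node("refactor", refactor_fragment)
    graph.add_edge(START, "document")
    graph.add_edge("document", "extract")
    graph.add_edge("extract", "refactor")
    graph.add_edge("refactor", END)

    app = graph.compile()
    result = app.invoke({"fragment_source": "class MyFragment { ... }"})

The graph itself is the boring part; all the real work lives in the prompts inside each node, which is presumably where the "refinement and edit prompts" effort goes.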
I also like writing code by hand, I just don't want to maintain other people's code. LMK if you need a job referral to hand refactor 20K lines of code in 2 months. Do you also enjoy working on test coverage?
In other news, water is wet.
There’s been a new category of writing over the last year: AI Inevitability Soothsaying.[1]
There’s talk of war in the state of Nationstan. There are two camps: those who think going to war is good and just, and those who think it is not practical. Clearly not everyone is pro-war; there are two camps. But the Overton Window is defined by the premise that invading another country is a right that Nationstan has and can act on. There is, by definition (inside the Overton Window), no one who is anti-war on the principle that the state has no right to do it.[2]
Not all articles in this AI category are outright positive. They range from the euphoric to the slightly depressed. But they share the same premise of inevitability; even the most negative will say that, of course I use AI, I’m not some Luddite[3]! It is integral to my work now. But I don’t just let it run the whole game. I copy–paste with judicious care. blah blah blah
The point of any Overton Window is to simulate lively debate within the confines of the premises.
And it’s impressive how many aspects of “the human” (RIP?) it covers. Emotions, self-esteem, character, identity. We are not[4] marching into irrelevance without a good consoling. Consolation?
[1] https://news.ycombinator.com/item?id=44159648
[2] You can let real nations come to mind here
This was taken from the formerly famous (and controversial among the Khmer Rouge obsessed) Chomsky, now living in infamy for obvious reasons.
[3] Many paragraphs could be written about this
[4] We. Well, maybe me and others, not necessarily you. Depending on your view of whether the elites or the Mensa+ engineers will inherit the machines.
Basically describes how I use Claude Code now. I'll let it do stuff I don't want to do, like setting up mocks for unit tests (boring) or editing GitHub Actions YAML (torture). But otherwise, I like to let it show me how to do something I'm not sure how to do, and then I'll just go do it myself. (If I have a clear idea of how I want to do something already, I just do it myself in the first place.)
I almost never agree with the names Claude chooses, I despise the comments it adds every other line despite me telling it over and over and over not to, and oftentimes I catch the silly bugs that look fine at first glance when you just let Claude write its output directly to the file.
It feels like a good balance, to me. Nobody on my team is working drastically faster than me, with or without AI. It very obviously slows down my boss (who just doesn't pay attention and has to rework everything twice) or some of the juniors (who don't sufficiently understand the problem to begin with). I'll be more productive than them even if I am hand-writing most of the code. So I don't feel threatened by this idea that "hand written code will be something nobody does professionally here soon"; like the article said, if I'm responsible for the code I submit, I'm still the bottleneck, AI or not. The time I spend writing my own code is time I'm not poring over AI output trying to verify that it's actually correct, and for now that's a good trade.
Programming is creative work. Replacing human creativity with pseudo-parrot code generation impacts this process in bad ways. It's the same reason many artists despise using AI for art.
Bean counters don't care about creativity and art though, so they'll never get it.
Good for artists, I guess; I wouldn't know because I am not one. The best I can manage is drawing a stick figure of a cat. Years back I was working on a Mac app and I needed an icon. So I talked to an artist and she asked for $5K to make one for me. I couldn't justify spending that much on a hobby that I didn't know would go anywhere, so I wrote a little app that procedurally generated me a basic sucky icon. I am sure Gordon Ramsay is also not impressed with the cooking skills of my microwave; I just don't know how his objections practically relate to getting me fed daily.
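Just to illustrate how little "procedurally generated icon" has to mean, here is a toy sketch using Pillow; the function and approach are invented for illustration, not the commenter's actual app:

    import hashlib
    from PIL import Image, ImageDraw

    def make_icon(name: str, size: int = 128) -> Image.Image:
        # Derive a deterministic fill color from the app name.
        digest = hashlib.sha256(name.encode()).digest()
        color = tuple(digest[:3])
        img = Image.new("RGB", (size, size), "white")
        draw = ImageDraw.Draw(img)
        margin = size // 8
        # One centered disc: basic, sucky, and good enough for a hobby app.
        draw.ellipse([margin, margin, size - margin, size - margin], fill=color)
        return img

    make_icon("MyMacApp").save("icon.png")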
k