I have 30+ years of industry experience and I've been leaning heavily into spec-driven development at work, and it is a game changer. I love programming, and now I get to program at one level higher: the spec.
I spend hours on a spec, working with Claude Code to first generate and iterate on all the requirements, then going over the requirements using self-reviews, first in Claude using Opus 4.5 and then in Copilot using GPT-5.2. The self-reviews are prompts to review the spec using all the roles and perspectives it thinks are appropriate. This self-review process is critical and really polishes the requirements (I normally run 7-8 rounds of self-review).
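To give a flavor, a self-review prompt of this kind looks roughly like this (illustrative wording, not my exact prompt):

    Review the attached requirements spec from every role and perspective
    you think is appropriate (e.g., security engineer, QA, operations,
    end user). For each role, list ambiguities, contradictions, missing
    requirements, and unstated assumptions. Do not propose solutions yet;
    just critique the requirements.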
Once the requirements are polished and any questions answered by stakeholders, I use Claude Code again to create an extremely detailed and phased implementation plan with full code, again all in the spec (using a new file if the requirements doc is so large it fills the context window). The implementation plan then goes through the same multi-round self-review using two models to polish (again, 7 or 8 rounds), finalized with a review by me.
The result? I can then tell Claude Code to implement the plan and it is usually done in 20 minutes. I've delivered major features using this process with zero changes in acceptance testing.
What is funny is that everything old is new again. When I started in industry I worked in defense contracting, working on the project to build the "black box" for the F-22. When I joined the team they were already a year into the spec writing process with zero code produced and they had (iirc) another year on the schedule for the spec. At my third job I found a literal shelf containing multiple binders that laid out the spec for a mainframe hosted publishing application written in the 1970s.
Looking back, I've come to realize the agile movement, which was a backlash against this kind of heavy waterfall process I experienced at the start of my career, was basically an attempt to "vibe code" the overall system design. At least for me, AI-assisted mini-waterfall ("augmented cascade"?) seems a path back to producing better-quality software that doesn't suffer from the agile "oh, I didn't think of that".
About 15 years ago, I worked on code that delivered working versions to customers, repeatedly, who used it and reported zero bugs. It simply did what it was meant to, what had been agreed, from the moment they started using it.
The key was this: "the requirements are polished and any questions answered by stakeholders"
We simply knew precisely what we were meant to be creating before we started creating it. I wonder to what degree the magic of "spec driven development", as you call it, is just that, and using Claude Code or some similar tool is actually just the expression of being forced to understand and express clearly what you actually want to create (compared to the much more prevalent model of just making things in the general direction and seeing how it goes).
Waterfall can work great when: 1/ the focus is long-term, both in terms of knowing that the company can take a few years to get the thing live and that it will be around for many more years, 2/ the people writing the spec and the code are largely the same people.
Agile was really pushing to make sure companies could get software live before they died (number 1) and to remedy the anti-pattern that appeared with number 2, where non-technical business people would write the (half-assed) spec and then technical people would be expected to do the monkey work of implementing it.
Agile's core is the feedback loop. I can't believe people still don't get it. Feedback from reality is always faster than guessing in the air.
Waterfall is never great. The only time you need something other than Agile is when lives are at stake; there you need formal specifications and rigorous testing.
SDD allows for better output than traditional programming. It is similar to waterfall in the sense that the model helps you write design docs in hours instead of days, and you take more into account as a result. But the feedback loop is there, and it is still the key part of the process.
The only software I ever worked on that delivered on time, under budget, and with users reporting zero bugs over multiple deliveries, was done with heavy waterfall. The key was knowing in advance what we were meant to be making, before we made it. This did demand high-quality customers; most customers are just not good enough.
> Feedback from reality is always faster than guessing in the air
Only if you have no idea what the results will be.
Professional engineering takes parts with specific tolerances, tested for a specific application, using a tried-and-true design, combines them into the solution that other people have already made, and watches it work, exactly as predicted. That's how we can build a skyscraper "the first time" and have it not fall down. We don't need to build 20 tiny versions of a building until we get a working skyscraper.
But when you build a skyscraper you don’t one-shot a completed building that stays static its entire life - you build a set of empty floors that someone else designs & fits out, sometimes years after the building as a whole is commissioned, usually several times in the lifespan of the superstructure.
And in the fitting out there often are things that exist only to get customer feedback (or sales), such as model apartments, sample cubicle layouts, etc.
So yes, you are right that engineering can guide us to building something right the first time. The hard part from a software perspective is usually building the right thing, not building the thing right.
An interesting analogy I came across once but could never find again is that with software systems, we’re not building a building, we’re designing a factory that produces an output - the example was a mattress factory that took in raw rubber feedstock & cloth and produced mattresses.
Are you running a mattress factory? Or are you trying to run a hotel, and need mattresses, so you build a mattress factory? The "software industry" is that - dysfunctional with perverse incentives.
We should not be building the same software over and over and over and over. I've built the same goddamn app 10 times in my career. And I watch other people build it, making the same old mistakes over and over, like a thousand other people haven't already gone through this and could easily tell you how not to do it. In other engineering professions, they write that stuff down, and say "follow this plan" because it avoids all the big problems. Thank god we have a building code and not "agile buildings".
Agile sucks because it incentivizes those obvious mistakes and reinventing of wheels. Planning allows someone to stop and look up the correct way of building the skyscraper before it's 100 feet in the air with a cracked foundation.
Waterfall, weeks of planning, write 3 times anyway.
The point is, people don't know what they want or are asking for, until it's in front of them. No system is perfect, but waterfall leads to bigger disasters.
Any real software (that delivers value over time) is constantly rewritten and that's a good thing. The question is whether the same people are rewriting it that wrote it and what percentage of that rewriting is based off of a spec or based off of feedback from elsewhere in the system.
> The only time you need something other than Agile is when lives are at stake; there you need formal specifications and rigorous testing.
Lives are always at stake, given that we use software everywhere, and often in unintended ways, even outside its spec (isn't that a definition of a "hack"?).
People think of medical-device software, space/air-traffic software, defense systems, or real-time embedded systems as the only environments where "lives are at stake", but actually, in subtle ways, a violation of user expectancy (in some software companies, UX issues count as serious bugs) in a word processor, a web browser, or the sort command can kill a human.
Two real-life examples:
(1) A few years ago, a Chinese factory worker was killed by a robot. It was not in the spec that a human could ever walk in the robot's path (the first attested example of "AI" killing a human that I found at the time). This was way before deep learning entered the stage, and the factory was a closed and fully automated environment.
(2) Also a few years back, the Dutch software for social benefits management screwed up, and thousands of families simply were not paid any money at all for an extended period. Allegedly, this led to starvation (I don't have details, but if any Dutch read this, please share), and eventually the whole Dutch government was forced to resign over the scandal.
That's a very narrow definition of engineering. What about property? Sensitive information?
It's a fine "whoopsie-doodle" when your software erases the life savings of a few thousand people. "We'll fix that in the next release" is already too little, too late.
This is correct. Agile is control theory applied to software engineering.
The plant to control here isn't something simple like a valve. You're performing cascaded control of another process where the code base is the interface to the plant you're controlling.
I spent my career building software for executives that wanted to know exactly what they were going to get and when because they have budgets and deadlines i.e. the real world.
Mostly I’ve seen agile as, let’s do the same thing 3x we could have done once if we spent time on specs. The key phrase here is “requirements analysis” and if you’re not good at it either your software sucks or you’re going to iterate needlessly and waste massive time including on bad architecture. You don’t iterate the foundation of a house.
I see scenarios where Agile makes sense (scoped, in house software, skunk works) but just like cloud, jwts, and several other things making it default is often a huge waste of $ for problems you/most don’t have.
Talk to the stakeholders. Write the specs. Analyze. Then build. “Waterfall” became like a dirty word. Just because megacorps flubbed it doesn’t mean you switch to flying blind.
> The key phrase here is “requirements analysis” and if you’re not good at it either your software sucks or you’re going to iterate needlessly and waste massive time including on bad architecture. You don’t iterate the foundation of a house.
This depends heavily on the kind of problem you are trying to solve. In a lot of cases requirements are not fixed but evolve over time, either reacting to changes in the real-world environment or by just realizing that things which are nice in theory are not working out in practice.
You don’t iterate the foundation of a house because we have done it enough times and also the environment the house exists in (geography, climate, ...) is usually not expected to change much. If that were the case we would certainly build houses differently than we usually do.
> making it default is often a huge waste of $ for problems you/most don’t have.
It's the opposite — knowing the exact spec of your program up front is vanishingly rare, probably <1% of all projects. Usually you have no clue what you're doing, just a vague goal. The only way to find out what to build is to build something, toss it over to the users and see what happens.
No developer or, dear god, "stakeholder" can possibly know what the users need. Asking the users up front is better, but still doesn't help much — they don't know what they want either.
No plan survives first contact with the enemy and there's no substitute for testing — reality is far too complex for you to be able to model it up front.
> You don’t iterate the foundation of a house.
You do, actually. Or rather, we have — over thousands of years we've iterated and written up what we've learned, so that nobody has to iterate from scratch for every new house anymore. It's just that our physics, environment, and requirements for "a house" don't change constantly, like they do for software, and we've had thousands of years to perfect the craft, not some 50 years.
Also, civil engineers mess up in exactly the same ways. Who needs testing? [1]. Who needs to iterate as they're building? [2].
> knowing the exact spec of your program up front is vanishingly rare, probably <1% of all projects
I don't have anything useful to add, but both of you speak and write with conviction from your own experience and perspective, yet refuse to accept that the situation might be different for others.
"Software engineering" is a really broad field, some people can spend their whole life working on projects where everything is known up front, others the straight opposite.
Kind of feel like you both need to be clearer up front about your context and where you're coming from, otherwise you're probably both right, but just in your own contexts.
My experience is that such one-shotted projects never survive the collision with reality. Even with extremely detailed specs, the end result will not be what people had in mind, because human minds cannot fully anticipate the complexity of software, and all the edge cases it needs to handle. "Oh, I didn't think that this scheduled alarm is super annoying, I'd actually expect this other alarm to supersede it. It's great we've built this prototype, because this was hard to anticipate on paper."
I'm not saying I don't believe your report - maybe you are working in a domain where everything is super deterministic. Anyway, I don't.
I've been doing spec-driven development for the past 2 months, and it's been a game changer (especially with Opus 4.5).
Writing a spec is akin to "working backwards" (or future backwards thinking, if you like) -- this is the outcome I want, how do I get there?
The process of writing the spec actually exposes the edge cases I didn't think of. It's very much in the same vein as "writing as a tool of thought". Just getting your thoughts and ideas onto a text file can be a powerful thing. Opus 4.5 is amazing at pointing out the blind spots and inconsistencies in a spec. The spec generator that I use also does some reasoning checks and adds property-based test generation (Python Hypothesis -- similar to Haskell's Quickcheck), which anchors the generated code to reality.
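For a flavor of the property-based part, here is a toy Hypothesis sketch (my own illustration, not actual output from the spec generator). The property, not a list of hand-picked examples, is what encodes the spec:

    # Toy property-based test with Hypothesis (illustrative example).
    from hypothesis import given, strategies as st

    def dedupe_keep_order(items):
        # Function under test: drop duplicates, keep first occurrences in order.
        seen = set()
        out = []
        for x in items:
            if x not in seen:
                seen.add(x)
                out.append(x)
        return out

    @given(st.lists(st.integers()))
    def test_dedupe_properties(xs):
        result = dedupe_keep_order(xs)
        assert len(result) == len(set(result))    # no duplicates survive
        assert set(result) == set(xs)             # no elements lost or invented
        assert result == list(dict.fromkeys(xs))  # first-occurrence order kept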
Also, I took to heart Grant Slatton's "Write everything twice" [1] heuristic -- write your code once, solve the problem, then stash it in a branch and write the code all over again.
> Slatton: A piece of advice I've given junior engineers is to write everything twice. Solve the problem. Stash your code onto a branch. Then write all the code again. I discovered this method by accident after the laptop containing a few days of work died. Rewriting the solution only took 25% the time as the initial implementation, and the result was much better. So you get maybe 2x higher quality code for 1.25x the time — this trade is usually a good one to make on projects you'll have to maintain for a long time.
This is effective because initial mental models of a new problem are usually wrong.
With a spec, I can get a version 1 out quickly and (mostly) correctly, poke around, and then see what I'm missing. Need a new feature? I tell Opus to first update the spec, then code it.
And here's the thing -- if you don't like version 1 of your code, throw it away but keep the spec (those are your learnings and insights). Then generate a version 2 free of any sunk-cost bias, which, as humans, we're terrible at resisting.
Spec-driven development lets you "write everything twice" (throwaway prototypes) faster, which improves the quality of your insights into the actual problem. I find this technique lets me 2x the quality of my code, through sheer mental model updating.
And this applies not just to coding, but most knowledge work, including certain kinds of scientific research (s/code/LaTeX/).
My experience with both Opus and GPT-codex is that they both just forget to implement big chunks of specs unless you give them the means to self-validate their spec conformance. I sometimes find myself spending more time coming up with tooling to enable this than on the actual work.
The key is generating a task list from the spec. Kiro IDE (not cli) generates tasks.md automatically. This is a checklist that Opus has to check off.
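Roughly, the generated tasks.md is a markdown checklist of this shape (illustrative, not verbatim Kiro output):

    - [ ] 1. Set up the data model and config loading
    - [ ] 2. Implement input validation for required fields
      - [ ] 2.1 Reject unknown keys with a clear error
    - [ ] 3. Wire the feature into the CLI entry point
    - [ ] 4. Add tests covering the acceptance criteria in the spec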
Try Kiro. It's just an all-round excellent spec-driven IDE.
You can still use Claude Code to implement code from the spec, but Kiro is far better at generating the specs.
p.s. if you don't use Kiro (though I recommend it), there’s a new way too — Yegge’s beads. After you install it, prompt Claude Code to `write the plan in epics, stories and tasks in beads`. Opus will -- through tool use -- ensure every bead is implemented. But this is a higher-variance approach -- whereas Kiro is much more systematic.
I’ve even built my own todo tool in zig, which is backed by SQLite and allows arbitrary levels of todo hierarchy. Those clankers just start ignoring tasks or checking them off with a wontfix comment the first time they hit adversity. Codex is better at this because it keeps going at hard problems. But then it compacts so many times over that it forgets the todo instructions.
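The core of such a tool is just a self-referencing table plus a recursive query. A simplified sketch of the idea (shown here in Python/SQLite for illustration; the actual tool is zig):

    # Sketch of an arbitrary-depth todo hierarchy on SQLite (illustrative).
    import sqlite3

    conn = sqlite3.connect("todos.db")
    conn.execute("""
        CREATE TABLE IF NOT EXISTS todo (
            id        INTEGER PRIMARY KEY,
            parent_id INTEGER REFERENCES todo(id),  -- NULL for top-level items
            title     TEXT NOT NULL,
            status    TEXT NOT NULL DEFAULT 'open'  -- 'open' | 'done' | 'wontfix'
        )
    """)

    def subtree(todo_id):
        # Recursive CTE: fetch a task and all its descendants, any depth.
        return conn.execute("""
            WITH RECURSIVE sub(id, parent_id, title, status) AS (
                SELECT id, parent_id, title, status FROM todo WHERE id = ?
                UNION ALL
                SELECT t.id, t.parent_id, t.title, t.status
                FROM todo t JOIN sub s ON t.parent_id = s.id
            )
            SELECT * FROM sub
        """, (todo_id,)).fetchall()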
I think there's a difference between people getting a system and realising it isn't actually what they wanted, and systems that "never survive collision with reality".
They survive by being modified and I don't think that invalidates the process that got them in front of people faster than would otherwise have been possible.
This isn't a defence of waterfall though. It's really about increasing the pace of agile and the size of the loop that is possible.
I believe the future of programming will be specs, so I'm curious to ask you, as someone who already operates this way: are there any public specs worth learning from, ones you revere, that you could point to? I'm thinking that the same way past generations were referred to John Carmack's Quake code, next generations will celebrate great specs.
While the environment is changing. That's the key.
If you already know the requirements, and they aren't going to change for the duration of the project, then you don't need agile.
And if you have the time. I recently was on a project with a compressed timeline. The general requirements were known, but not in perfect detail. We began implementation anyway, because the schedule did not permit a fully phased waterfall. We had to adjust somewhat to things not being as we expected, but only a little - say, 10%. We got our last change of requirements 3 or 4 weeks before the completion of implementation. The key to making this work was regular, detailed, technical conversations between the customer's engineers, the requirements writers, and our implementers.
What does the resulting code look like, though? I found that while <insert your favorite LLM> can spit out barely working C++ code fast, I then have to spend 10x the time prodding it to refactor the code into something at least somewhat acceptable.
No matter how much I tell it that it is a "professional experienced 10x developer versed in modern C++, a second coming of Stroustrup" in per-project or global config files, it still keeps spewing the same crap, big (manual memory management instead of RAII here and there, initializing fields in the ctor body instead of the initializer list, manual init/cleanup methods in classes instead of a proper ctor/dtor design that ensures objects are always in a consistent state, a bunch of other anti-patterns, etc.) and small (checking for nullptr before passing the pointer to delete/free, manually instantiating objects as arguments to the shared_ptr ctor instead of using make_shared, endlessly casting things back and forth instead of designing data types properly, etc.).
Which makes sense, I guess, because that is unfortunately how average C++ code on GitHub looks, and that is what all those models were trained on. But I keep feeling like my job is turning into performing endless code review for a not-very-bright junior developer who just refuses to learn...
This could be a language specific failure mode. C++ is hard for humans too, and the training code out there is very uneven (most of it pre-C++11, much of it written by non-craftspeople to do very specific things).
On the other hand, LLMs are great at Go because Go was designed for average engineers at scale, and LLMs behave like fast average engineers. Go as a language was designed to support minimal cleverness (there's only so many ways to do things, and abstractions are constrained). This kind of uniformity is catnip for LLM training.
This. I feel like the sentiment on HN is very bimodal. My experience with LLMs is very much what you describe: anything outside of generic tasks fails miserably. I’m really curious how people make it work so well.
Agile isn’t against spec writing. Specs can be a task in your story, and so can automated tests. Both can be deliverables in your acceptance criteria. But that’s not how it went - because human nature is to look for the least effort.
With AI, the least-effort path is the specs, so that’s the “greatest thing to do” again.
Perhaps a better way than to view them as alternative choices is to view them as alternative modes of working, between which it is sometimes helpful to switch?
We know old-style classic waterfall lacks flexibility and agile lacks planning, but I don't see a reason why not to switch back and forth multiple times in the same project.
Yep. I've been into spec-driven development for a long time (when we had humans as agents) and it's never really failed me. We just have literally more attention (hah!) from LLMs than from humans.
What's amusing to me is that PRIDE, the oldest generally available software methodology and perhaps the least appreciated, is basically just "spec-driven development with human programmers". Most of the time, and personnel, involved in development goes into elucidating the requirements and developing the spec; programmers only get involved at the end, and their contribution is about 15%. For a few decades this was considered the "correct" way to develop software. But then PCs happened, mom-and-pop software vendors stuffing floppy disks into Ziploc bags happened, the myth of the lone "genius programmer" took hold of the industry, and programmers experienced such prestige inflation that they thought they were able to call the shots, and by and large management acquiesced. And that's how we got Agile.
With the rise of AI, maybe programmers will be put back in their rightful place, as contributors of the final small piece of the development process: a translation from business terms to the language of the computer. Programming as a profession should, by all rights, be obsolete. We should be able to express the solution directly in business terms and have the translation take place automatically. Maybe that day will be here soon.
As is so often the case in life, extreme approaches are bad. If you do pure waterfall, you risk finding out very late that your plan might not work out, whether because of unforeseen technical difficulties implementing it, because the given requirements were actually wrong/incomplete, or because you simply miss the point in time where you've planned enough. If you do extreme agile, you often end up with a shit architecture, which, among other things, actually hurts your future agility, but you get a result which you can validate against reality. The "oh, I didn't think of that" is definitely present in both extremes.
Agile is really about removing managers. The twelve principles does encourage short development cycles, but that's to prevent someone from going off into the weeds — having no manager to tell them to stop.
> Pre-training is, actually, our collective gift that allows many individuals to do things they could otherwise never do, like if we are now linked in a collective mind, in a certain way.
It's not a gift if it was stolen.
Anyway, in my opinion the code that was generated by the LLM is yours as long as you're responsible for it. When I look at a PR I'm reading the output of a person, independently of the tools that person used.
There's conflict perhaps when the submitter doesn't take full ownership of the code. So I agree with Antirez on that part.
I don't respect the "license" because the very concept is broken and offensive to man and to God.
In an age when computers have enabled infinite duplication of everything for practically free, it is a barbarian and a vampire who would stand there with his hand held out expecting payment, or expecting his permission to be asked, every time bytes are copied from A to B.
I have been pirating everything since day 1 and thank God for it. Otherwise I'd still be an ignorant backwards rube like the rest of the huddled masses, with no money to buy any of these books etc to educate myself or gas money to constantly be driving to the library. Now I have one of the largest, best curated private libraries in the world, all for free.
I make no apologies to anyone, nor do I ever ask anyone's permission to copy bits and bytes around or share them freely.
Now somebody came out with a new technological innovation which takes advantage of the power of freely shared information in order to do something great and extraordinarily useful--and of course the Copyright Crew is there to scream loudly at the injustice of it all. I'm sick of these people.
Licenses mean nothing if AI training on your data is fair use, which courts have yet to determine.
You can have a license that says "NO AI TRAINING EVER" in no uncertain terms and it would mean absolutely nothing because fair use isn't dictated by licenses.
God has already made the determination. It has been determined that licenses just mean nothing, period. Any claim to the contrary is only mafia figures with guns trying to enforce illegitimate "ownership" claims on something that can't actually be owned by anyone.
The War on Copying Data Freely will surely end in happiness and utopia, just like the War on Drugs did.
It is knowledge; it can't be stolen. It is "stolen" only in the sense of someone gatekeeping knowledge, which is, to say the least, a dubious practice. Is math stolen? If you "stole" math to build your knowledge on top of it, then you own nothing and could yourself be accused of stealing.
Code is the expression of knowledge and can be protected by copyright.
A lot of the popular licenses on GitHub (like MIT) permit you to use a piece of code on the condition that you credit the original author. If an LLM outputs code from such a project (or remixes code from several such projects) then it needs to credit the original authors or be in violation.
If Disney's intellectual property can be stolen and needs to be protected for 95+ years by copyright then surely the bedroom programmers' labor deserves the same protections.
We're not talking about the expression of knowledge. What is used in AI models is the knowledge from that expression. That code is not copied as is, instead knowledge is extracted from it and used to produce similar code. Copyright does not apply, IMHO
So you can train AI on Disney movies to generate and sell your own Disney movies because "knowledge is extracted" from them? Betcha that won't fly in the courts. Here is "Slim Cinderella" - trained and extracted from all Disney Cinderella movies!
Yes, I can train AI on Disney Movies and sell my own Disney movies. I might have to move to a more civilized nation free of thugs with guns who attempt to stop me from doing this, or I might have to lie very low and be careful not to attract their attention, but in either case it's quite possible, and even easy for me to do this thing. People will soon be doing it all the time on their 10 year old Dell.
Independent of ones philosophical stance on the broader topic: I find it highly concerning that AI companies, at least right now, seem to be largely exempt from all those rules which apply to everyone else, often enforced rigorously.
I draw from this that no one should be subject to those rules, and we should try to use the AI companies as a wedge to widen that crack. Instead, most people who claim that their objection is really only consistency, not love for IP, spend their time trying to tighten the definitions of fair use, widen the definitions of derivative works, and in general make IP even stronger, which will affect far more than just the AI companies they're going after. This doesn't look to me like the behavior of people who truly only want consistency but don't like IP.
And before you say that they're doing it because it's always better to resist massive, evil corporations than to side with them, even if it might seem expedient to do so, the people who are most strongly fighting against AI companies in favor of IP, in the name of "consistency" are themselves siding with Disney, one of the most evil companies — from the perspective of the health of the arts and our culture — that's working right now. So they're already fine with siding with corporations; they just happened to pick the side that's pro-IP.
oh hey, let's have a thought experiment in this world with no IP rules
suppose I write a webnovel that I publish for free on the net, and I solicit donations. Kinda like what's happening today anyway.
Now suppose I'm not good at marketing, but this other guy is. He takes my webnovel, changes some names, and publishes it online under his name. He is good at social media and marketing, and so makes a killing from donations. I don't see a dime. People accuse me of plagiarism. I have no legal recourse.
There are also unfair situations that can happen, equally as often, if IP does exist, and likewise, in those situations, those with more money, influence, or charisma will win out.
Also, the idea that that situation is unfair relies entirely on the idea that we own our ideas and have a right to secure (future, hypothetical) profit from them. So you're essentially begging the question.
You're also relying on a premise that, when drawn out, seems fundamentally absurd to me: that you should own not just the money you earn, but the rights to any money you might earn in the future, had someone not done something that caused unrelated others to never have paid you. If you extend that logic, any kind of competition is wrong!
There are two programmers.
The first is very talented technically but weak at negotiation, so he earns median pay.
The second is average technically but very good at negotiation, and he earns much more.
In China, engineers hold the most power, yet the country prospers. I don't think the problem is giving engineers power; rather, it's a cultural thing. In China there is a general feeling of contributing to society; in the US everyone is trying to screw each other over, for political or monetary reasons.
This is obviously false on the face of it. Let’s say I have a patent, song, or book that I receive large royalty payments for. It would obviously not be logical for me to be in favor of abolishing something that’s beneficial to me.
Declaring that your side has a monopoly on logic is rarely helpful.
I agree with GP, and so, yes, I release everything I do — code and the hundreds of thousands of painstakingly researched, drafted, deeply thought through words of writing that I do — using a public domain equivalent license (to ensure it's as free as possible), the zero clause BSD.
I release all my code in the public domain too. I would never think of enlisting thugs with guns to dictate to other people how they use "my" code. It didn't come from me in the first place. It was all that pirating/reading I did online FOR FREE that enabled me to write this code. It was the genetics I inherited from my ancestors who made it happen.
I know you didn't ask me, and I don't care. I'm telling you.
The entire purpose of copyright and patent law is to enable uncreative fat cats like Kevin O'Leary to claim ownership over the creations of others. Period. It has nothing to do with "protecting" the creator, and never did. The ability to create is its own protection.
Ever watch Shark Tank? Mr. O'Leary--self-proclaimed Mr. Wonderful--LOVES deals involving patents and royalties. That's how he made all of his money, and without them he would be destitute.
Notice that the people with the fewest ideas are the biggest hoarders of whatever they have, and the most jealous of anyone who approaches "their" stuff--while those who are most creative are the most giving and sharing.
Personal blog: https://neonvagabond.xyz/ (591,305 total words, written over 6 years; feel free to do whatever you want with it)
My personal github page: https://github.com/alexispurslane/ (I only recently switched to Zero-Clause BSD for my code, and haven't gotten around to re-licensing all my old stuff, but I give you permission to send a PR with a different license to any of them if you wanna use any of it)
I've arrived at a very similar conclusion since trying Claude Code with Opus 4.5 (a huge paradigm shift in terms of tech and tools). I've been calling it "zen coding", where you treat the codebase like a zen garden. You maintain a mental map of the codebase, spec everything before prompting for the implementation, and review every diff line by line. The AI is a tool to implement the system design, not the system designer itself (at least not for now...).
The distinction drawn between both concepts matters. The expertise is in knowing what to spec and catching when the output deviates from your design. Though, the tech is so good now that a carefully reviewed spec will be reliably implemented by a state-of-the-art LLM. The same LLM that produces mediocre code for a vague request will produce solid code when guided by someone who understands the system deeply enough to constrain it. This is the difference between vibe coding and zen coding.
Zen coders are masters of their craft; vibe coders are amateurs having fun.
And to be clear, nothing wrong with being an amateur and having fun. I "vibe code" several areas with AI that are not really coding, but other fields where I don't have professional knowledge in. And it's great, because LLMs try to bring you closer to the top of human knowledge on any field, so as an amateur it is incredible to experience it.
If you're this meticulous, is it really any faster than writing code manually? I have found that in cases where I do care about the line-by-line, it's actually slower to run it through Claude. It's only where I want to shovel it out that it's faster.
> I'm a programmer, and I use automatic programming. The code I generate in this way is mine. My code, my output, my production. I, and you, can be proud.
I disagree. The code you wrote is a collaboration with the model you used. To frame it this way, you are taking credit for the work the model did on your behalf. There is a difference between I wrote this code entirely by myself and I wrote the code with a partner. For me, it is analogous to the author of the score of an opera taking credit for the libretto because they gave the libretto author the rough narrative arc. If you didn't do it yourself, it isn't yours.
I generally prefer integrated works or at least ones that clearly acknowledge the collaboration and give proper credit.
Copyright infringement is a tort. “Illegal” is almost always used to refer to breaking of criminal law.
This seems like intentionally conflating them to imply that appropriating code for model training is a criminal offense, when, even in the most anti-AI, pro-IP view, it is plainly not.
> There are four essential elements to a charge of criminal copyright infringement. In order to sustain a conviction under section 506(a), the government must demonstrate: (1) that a valid copyright; (2) was infringed by the defendant; (3) willfully; and (4) for purposes of commercial advantage or private financial gain.
I think it’s very much an open debate if training a model on publicly available data counts as infringement or not.
I was about to argue, and then I suddenly remembered some past situations where a project manager clearly considered the code I wrote to be his achievement and proudly accepted the company's thanks.
The way I put it is: AI assistance in programming is a service, not a tool. It's like you're commissioning the code to be written by an outside shop. A lot of companies do this with human programmers, but when you commission OpenAI or Anthropic, the code they provide was written by machine.
Prompting the AI is indeed “do[ing] it yourself”. There’s nobody else here, and this code is original and never existed before, and would not exist here and now if I hadn’t prompted this machine.
Sure. But the sentence "I am a programmer" doesn't fit with prompting, just as much as me prompting for a drawing that resembles something doesn't make me a painter.
We may be witnessing the last generation of master software artisans like antirez.
It is beautiful to see their mastery harnessing the power of the intelligent machine tools to design, understand, and build.
This is like seeing a master of image & light like Michelangelo receiving a camera, Photoshop, and a printer. It's an exponential elevation of the art.
But to become a master like Michelangelo, one had to dedicate oneself to the craft of manually mixing and applying materials to bend and modulate light, slowly building and consolidating those neural pathways by reflection and, most of all, practice, until those skills became as natural as getting up or bringing a hand to the mouth. When that happened, art flowed from the mind to the physical world, and the body became the vessel of intuition.
A master like antirez had to wrap his head around concepts alien to the human mind. Bits, bytes, arrays, memory layout, processors, compilers, interfaces, abstractions, constraints, types, concurrency do not exist in the savannas that forged our brains. He had to comprehend and learn to use his own cognitive capabilities and restrictions to know at what level to break up the code units and the abstraction boundaries. At the very top, he mastered this at a level so high that the software became like Redis: beautiful, powerful, and so elevated in the art that it became simpler, not more complex. It's Picasso drawing a dog.
The intelligent software-building machines can do things no human manually can (given the same time; humans die, get old, or get bored), but they are not brush and canvas. They function in another way; the mind needs other paths to master them. The path to mastering them is not the same path as mastering artisanal software building.
So this new generation, wanting to build things not possible for the artisan, will become masters of another craft, one we right now cannot even comprehend or imagine, in the same way Michelangelo could never imagine the level of control over light that modern photography masters have.
I am not a master, but having dedicated my whole life to artisanal software building, I am excited to receive and use the new tools and to experiment with the new craft. I am also frightened by the uncertainty of this new world.
> We may be witnessing the last generation of master software artisans like antirez
What? He is mostly an AI influencer at this stage, even without getting paid for it (I think). There are always gonna be people writing code, people writing music; just because a machine can write code doesn't change the fact that coding itself is a fun exercise.
More relevantly, I've been seeing an explosion of (ostensibly) human-produced artwork in my SM feed, even though Stable Diffusion and the like are supposed to bypass the need for artistic skill and make your anime waifu come to laifu with a paragraph of prompt.
>A master like antirez had to wrap his head around concepts alien to the human mind. Bits, bytes, arrays, memory layout, processors, compilers, interfaces, abstractions, constraints, types, concurrency do not exist in the savannas that forged brains.
You still need to know these things if you're doing anything more complicated than making some CRUD dashboard. LLMs assist with some code generation, and assist with some knowledge lookup. That's pretty much it.
What seems to be the case is that you need to know everything you needed to know before, and* become good at leveraging AI tooling to make you go faster.
*Even this is optional. There is absolutely nothing stopping anyone from just ignoring everything about AI and continuing to develop software like it's pre-2022. The efficiency difference isn't even significant in the grand scheme of things. It's not like people had reams of perfect software specs just lying around waiting to be implemented. That's just not how people develop software; usually the spec emerges while you're writing the program.
Every time I hear someone mention they vibed a thing or claude gave them something, it just reads as a sort of admission that I'm about to read some _very_ "first draft"-feeling code. I get this even from people who spend a lot of time talking about needing to own code you send up.
People need to stop apologizing for their work product because of the tools they use. Just make the work product better and you don't have to apologize or waste people's time.
Especially given that you have these tools to make cleanup easier (in theory)!
I feel like this wording isn't great when there are many impactful open source programmers who have explicitly stated that they don't want their code used to train these models and licensed their work in a world where LLMs didn't exist. It wasn't their "gift", it was unwillingly taken from them.
> I'm a programmer, and I use automatic programming. The code I generate in this way is mine. My code, my output, my production. I, and you, can be proud.
I've seen LLMs generate code that I have immediately recognized as being copied from a book or technical blog post I've read before (e.g. exact same semantics, very similar comment structure and variable names). Even if not legally required, crediting where you got ideas and code from is the least you can do. LLMs, meanwhile, just launder code as completely your own.
I don't think it's possible to separate any open source contribution from the ones that came before it, as we're all standing on the shoulders of giants. Every developer learns from their predecessors and adapts patterns and code from existing projects.
Exactly that. And all the books about, for instance, operating systems are totally based on the work of others: their ideas were collected and documented, the exact algorithms, and so forth. All of human culture has worked this way. Moreover there is a strong pattern of the most prolific / known open source developers being NOT against the fact that their code was used for training: they can't speak for everybody, but it is a signal that for many, this use is within the scope of making source code available.
Yeah, documented *and credited*. I'm not against the idea of disseminating knowledge, and even with my misgivings about LLMs, I wouldn't have said anything if this blog post was simply "LLMs are really useful".
My comment was in response to you essentially saying "all the criticisms of LLMs aren't real, and you should be uncompromisingly proud about using them".
> Moreover there is a strong pattern of the most prolific / known open source developers being NOT against the fact that their code was used for training
I think it's easy to get "echo-chambered" by who you follow online with this; my experience has been the opposite. I don't think it's clear what the reality is.
If you fork an open source project and nuke the git history, that's considered to be a "dick move" because you are erasing the record of people's contributions.
The hard truth is that if you're big enough (and the original creator is small enough) you can just do whatever you want and to hell with what any license says about it.
To my understanding, the expensive lawyers hired by the biggest people around, filtered through layers of bureaucracy and translated to software teams, still result in companies mostly avoiding GPL code.
Which was in fact the very intent of the GPL from day one, putting all the marketing material and lies aside: to cripple and hinder the burgeoning open source ecosystem as long as possible.
This also explains decades of questionable decisions of projects like gcc, glibc, gimp (it's right there in the name!), gnome, etc. Richard Stallman is a plant.
Note his recent speech at the Georgia Tech, where he says a lot of very nice things, I'm sure...while wearing some goofy face mask like he's still scared to death of COVID. He is also well known for his lack of personal hygiene and well developed body odor, which is quite curious actually, as at least one person who put this character up for a few days reports that he is a fan of long, hot showers. It's almost like the whole "crusty bearded geek weirdo" thing is just an act, meant to give Free Software a bad reputation.
Reminds me very much of David McGowan's book Weird Scenes Inside the Canyon, in which he explains exactly who created the hippie movement and to what end. Exactly like that, in fact.
Great thing so many open source projects have willingly donated all their copyright ownership to the hands of this GNU organization, right? It will be closely guarded and protected, I'm sure.
I’ve been thinking that information provenance would be very useful for LLMs. Not just for attribution (git authors), but so the LLM would know (and be able to control) which outputs are derived from reliable sources (e.g. Wikipedia vs. a Reddit post; also which outputs are derived from ideologically aligned sources, which would make LLMs more personal and subjectively better, but also easier to bias and to use for deliberate misinformation).
“Information provenance” could (and, although I’m very unfamiliar with LLM internals, I think most likely would) be a matter of which sources most plausibly derive an output, so even output that exists today could eventually get proper attribution.
At least today if you know something’s origin, and it’s both obvious and publicly online, you have proof via the Internet Archive.
> I don't think it's possible to separate any open source contribution from the ones that came before it, as we're all standing on the shoulders of giants. Every developer learns from their predecessors and adapts patterns and code from existing projects.
Yes, but you can also ask the developer (whether on Libera IRC or, say, at any FOSS talk if it's a FOSS project) which books and blogs they followed for code patterns and inspiration, and just talk to them.
I do feel like some aspects of this are gonna get eaten away by the black box if we do spec-driven development, imo.
> there are many impactful open source programmers who have explicitly stated that they don't want their code used to train these models and licensed their work in a world where LLMs didn't exist. It wasn't their "gift", it was unwillingly taken from them.
There are subtle legal differences between "free open source" licensing and putting things in the public domain.
If you use an open source license, you could forbid LLM training (in licensing law, contrary to all other areas of law, anything that is not granted to licensees is forbidden). Then you can take the big guys (MSFT, Meta, OpenAI, Google) to court if you can demonstrate they violated your terms.
If you place your software into the public domain, any use is fair, including ways to exploit the code or its derivatives not invented at the time of release.
Curiously, doesn't the GPL even imply that if you pre-train an LLM on GPLed code and use it to generate code (Claude Code etc.), all the generated code -- as the derived intellectual property it clearly is -- must also be open sourced as per GPL terms? (It would seem in the spirit of the licensors.) I haven't seen this raised or discussed anywhere yet.
> If you use an open source license, you could forbid LLM training
Established OSS licenses are all from before anyone imagined that LLMs would come into existence, let alone train on and then generate code. Discrimination by purpose is counter to OSI principles (https://opensource.org/osd):
> 6. No Discrimination Against Fields of Endeavor
> The license must not restrict anyone from making use of the program in a specific field of endeavor. For example, it may not restrict the program from being used in a business, or from being used for genetic research.
The GPL argument you describe hinges on making the legal case that LLMs produce "derived works". When the output can't be clearly traced to source input (even the system itself doesn't know how) it becomes rather difficult to argue that in court.
One thing I'd love to point out here, to anyone wading through this discussion:
Step back and notice the VAST AMOUNT OF TIME AND ENERGY being wasted here and elsewhere, arguing about who claims to own what. What a giant waste, in an age where digital machines can reproduce any information infinitely for practically free.
Thanks, copyright law, and all the parasites (lawyers, etc.) who depend on it for their big, expensive livelihood. Thanks to the government and corporations who have squeezed us all and made it so hard to make a living that everyone feels they now have to monetize and profit from everything to survive. The gift that keeps on giving.
You presuppose that the output is a derived work (not a given) and that training is not fair use (also not a given).
If the courts decide to apply the law as you assume, the AI companies are all dead. But they are all betting that's not going to be the case. And since so much of the industry is taking the bet with them... the courts will take that into account.
> I feel like this wording isn't great when there are many impactful open source programmers who have explicitly stated that they don't want their code used to train these models
That’s been the fate of many creators since the dawn of time. Kafka explicitly stated that he wanted his works to be burned after his death. So when you’re reading about Gregor’s awkward interactions with his sister, you’re literally consuming the private thoughts of a stranger who stated plainly that he didn’t want them shared with anyone.
Yet people still talk about Kafka’s “contribution to literature” as if it were otherwise, with most never even bothering to ask themselves whether they should be reading that stuff at all.
No, in the same way that I wouldn't cite Euler every time I used one of his theorems - because it's so well known that its history is well documented in countless places.
However, if I was using a more recent/niche/unknown theorem, it would absolutely be considered bad practice not to cite where I got it from.
If I was implementing any known (named) algorithm intentionally I think I would absolutely say so in a comment (`// here we use quick sort to...` and maybe why it's the choice) and then it's easy for someone to look up and see it's due to Hoare or whoever on Wikipedia etc.
Now many will downvote you because this is an algorithm and not some code. But the reality is that programming is in large part built by looking at somebody else's code / techniques, internalizing them, and reproducing them again with changes. So actually it works like that for code as well.
If you publish your code to others under permissive licenses, people using it to do things you do not want is not something being unwillingly taken from you.
You can do whatever you want with a gift. Once you release your code as free software, it is no longer yours. Your opinions about what is done with it are irrelevant.
But the license terms state under which conditions the code is released.
For example, the MIT license has this clause: "The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software."
It stands to reason that if an LLM outputs something based on MIT-licensed code, then that output should at least contain that copyright notice, because that is what the original author wished.
And I saw a comment below arguing that knowledge cannot be copyrighted, but the code is an expression of that knowledge and that most certainly can be protected by copyright.
> It wasn't their "gift", it was unwillingly taken from them.
Yes. Exactly. As a developer, in that case I feel almost violated in my trust in “the internet.” Well, it’s even worse: I did not really trust it, but I did not think it could be that bad.
I don't understand this perspective. Programmers often scoff at most other examples of intellectual property, some throwing it out altogether. I remember reading Google v. Oracle, where Oracle sued Google for stealing code to perform a range check, about 9 lines long, used to check array index bounds.
I guess the difference is AI companies bad? This is transformative technology creating trillions in value and democratizing information, all subsidized by VC money. Why would anyone in open source who claims to have noble causes be against this? Because their repo will no longer get stars? Because no one will read their asinine stack overflow answer?
Hot take: The Supreme Court should have sided with Oracle. APIs are a clear example of unique expression, and there is no statute exempting them specifically from copyright protection. If they are not protected by copyright, is anything really? What meaning has copyright law then?
Why is copyright law more important than anything else? AI is likely to drive the next stage of humanity's intellectual evolution, while copyright is a leaky legal abstraction that we pulled out of our asses a couple hundred years ago.
One of these is much more important than the other. If the copyright cartels insist on fighting AI, then they must lose decisively.
In the 1950s/1960s, the term "automatic programming" referred to compiler construction: instead of writing assembler code by hand, a FORmula TRANslator (FORTRAN) could "magically" turn a mathematical formula into code "by itself".
"4GL" was a phase in the 1980s when very high level languages very provided by software companies, often integrating DB access and especially suited for particular domains. The idea was that one could focus more on the
actual problem rather than having to write boilerplate
needed to solving it.
LLMs let one go from a natural-language specification to a draft implementation. If one is lucky, it runs and produces the desired results right away; more often, one needs to revise the code base iteratively, again navigated by NL commands, to fix errors, to change the design based on reviewing the first shot at it, to add features, etc.
> That said, if vibe coding is the process of producing software without much understanding of what is going on [...], automatic programming is the process of producing software that attempts to be high quality and strictly following the producer's vision of the software [...], with the help of AI assistance.
He is absolutely right here, and I think in this article he has "shaped" the direction of future software engineering (which is actually already happening): we are moving closer and closer to a new way of writing code. But this time, for real. I mean that it will increasingly become the standard. Just as in the past an architect used to draw every detail by hand, while today much of the operational work is delegated to parametric software, CAD, BIM, and so on. The architect does not "draw less" because they know less, but because the value of their work has shifted. This is a concept we've repeated often in recent months, with the advent of Opus 4.5 and 5.2-Codex. But I think that here antirez has given it the right shape, and he also did well to distinguish it from mere vibecoding; as far as I'm concerned, the two are radically different approaches.
This is a classic false dichotomy. Vibe coding, automatic coding, and coding are clearly on a spectrum. And I can employ all the shades during a single project.
> Users should claim the output of LLMs as their own, for the following reason. LLMs are tools; tools can be used with varying degrees of skill; the output of tools (including LLMs) is a function of the user's skill; and therefore the output is attributable to and belongs to the user.
> Furthermore, we should use tools, including LLMs, actively and mindfully. We shouldn't switch off our brains and accept the output uncritically. We should iterate and improve as we go along.
I agree with you that the author seems to inappropriately convert differences in degree of skill into differences of kind.
Friendly reminder that almost nobody is working this way now. You (reader) don't have to spend 346742356 tokens on that refactor. antirez won't magically swoop in and put your employer out of business with the Perfect Prompt (and accompanying AI blog post). There's a lot of software out there and MoltBook isn't going to spontaneously put your employer out of business either.
Don't fall into the trap of thinking "if I don't heavily adopt Claude Code and agentic flows today I'll be working at Subway tomorrow." There's an unhealthy AI hype cottage industry right now and you aren't beholden to it. Change comes slowly, is unpredictable, and believe it or not writing Redis and linenoise.c doesn't make someone clairvoyant.
Putting your head in the sand and ignoring it all isn't a good strategy either. Like it or not, AI will be a part of the rest of your career in some quantity. Not just because we collectively decide that we want to use these tools, but because tools that undeniably provide a huge productivity boost when used correctly are something the economy cannot ignore.
My advice would be to avoid feeling compelled to try every new tool immediately, but at least try to stay aware of major developments. A career in software engineering also dooms you to life-long learning in a very fast changing environment. This is no different. Agents are tools that work quite differently from what we're used to, and need cognitive effort and learning to wield effectively.
Waking up one day to realise you're now expected to work naturally in tandem with an AI agent but lack the experience is not a far-fetched scenario.
Like with most technological change I think there is no need for FOMO. You run into problems if you completely ignore already established and proven tools and practices for years to come but you don't have to jump onto every "this changes everything, trust me bro" hype.
"Vibe coding" is good for describing a certain style of coding with AI.
"Automatic programming" is what I get paid for in my 9-5, things have to work and they have to work correctly. Things I write run in real production with real money at stake. Thus, I behave like an adult and a professional.
a better term might be “feedback engineering” or “verification engineering” (what feedback loop do I need to construct to ensure that the output artifact from the agent matches my specification)
This includes standard testing strategies, but also much more general processes
I think of it as steering a probability distribution
At least to me, this makes it clear where “vibe coding” sits … someone who doesn’t know how to express precise verification or feedback loops is going to get “the mean of all software”
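A minimal sketch of such a loop, in Python (assumptions: `run_agent` is a hypothetical stand-in for whatever agent CLI you drive, and the project's pytest suite is the feedback signal; none of this is a real tool's API):

    import subprocess

    def run_agent(prompt: str) -> None:
        """Hypothetical stand-in: ask a coding agent to edit the working tree."""
        ...

    def verification_loop(spec: str, max_rounds: int = 5) -> bool:
        prompt = spec
        for _ in range(max_rounds):
            run_agent(prompt)
            # The feedback signal: do the checks derived from the spec pass?
            result = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
            if result.returncode == 0:
                return True  # artifact matches the executable spec
            # Steer the distribution: feed the failures back as the next prompt.
            prompt = spec + "\n\nThe tests failed with:\n" + result.stdout + "\nFix the code."
        return False

The stronger the checks you derive from the spec, the further the output moves from "the mean of all software".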
I disagree with referring to this as automatic, as if it's a binary statement. It's very much a spectrum, and this kind of software development is not fully automatic.
May be a language issue, but "automatic" would imply something happening without any intervention. Also, I don't like that everyone is trying to coin a term for this, but there is already a term called lite coding for this sort of setup, I just coined it.
>Vibe coding is the process of generating software using AI without being part of the process at all.
Even the most one-shot-prompt vibe coding still involves getting high-level intent from the person and then having them test it in person. There is no "without being part of the process at all".
And from there it's a gradient as to how much input & guidance is given.
This entire distinction he's trying to make here just doesn't make sense frankly. Trying to impose two categories on something that is clearly a continuous spectrum.
I don’t think that is a good term. We generally designate processes as “automatic” or “automation” that work without any human guidance or involvement at all. If you have to control and steer something, it’s not automatic.
There's a hidden assumption in the waterfall vs agile debate that AI might actually dissolve: the cost of iteration.
Waterfall made sense when changing code was expensive. Agile made sense when you couldn't know requirements upfront. But what if generating code becomes nearly free?
I've been experimenting with treating specs as the actual product - write the spec, let AI generate multiple implementations, throw them away daily. The spec becomes the persistent artifact that evolves, while code is ephemeral.
The surprising part: when iteration is cheap, you naturally converge on better specs. You're not afraid to be wrong because being wrong costs 20 minutes, not 2 sprints.
Anyone else finding that AI is making them more willing to plan deeply precisely because execution is so cheap that plans can be validated quickly?
You will say "I programmed it"; there is no longer a need for this distinction. But then you can add that you used automatic programming in the process. Though shortly there will be no need to refer to this term, similarly to how today you don't specify that you used an editor...
(Yes?) but the editor isn't claiming to take your job in 5 years.
Also I do feel like this is a very substantial leap.
This is sort of like the difference between some and many.
Your editor has some effect on the final result, so crediting it/mentioning it doesn't really impact things (but people still do mention their editor choices, and I know some git repos with a .vscode folder which shows that the creator used vscode; I'm unfamiliar with whether the same might be true for other editors too).
But especially with AI, the difference is that I personally feel like it's doing most of the work. It's literally writing the code which turns into the binary which runs on the machine, while being a black box.
I don't really know, because it's something I'm conflicted about too, but I just want to speak my mind, even if it may be a little contradictory on the whole AI distinction thing, which is why I wish to discuss it with ya.
LLMs translate specs into code. If you master computational thinking like Antirez, you basically reduce LLMs to intelligent translators of the stated computational ideas and specifications into a(ny) formal language, plus the typing. In that scenario LLMs are a great tool and speed up the coding process. I like how the power is in semantics, whereas syntax becomes more and more a detail (and rightfully so)!
Thanks! I'm sharing a lot on X / BlueSky + YouTube, but once the C course on YouTube is finished, I'll start a new course on programming in this way. I need a couple more lessons to declare the C course closed (later I'll likely restart it with the advanced part). Then I can start with the AP course.
I do not agree at all with his contrasting definitions of “vibe coding” vs “automatic programming”. If a knowledgeable software engineer can say that Claude’s code is actually theirs, so can everyone else. Otherwise, we could argue that Hell has written a book about itself using Dante Alighieri as its tool, given how much we still do not know about our brains, language, creative process, etc.
"When the process is actual software production where you know what is going on, remember: it is the software you are producing. Moreover remember that the pre-training data, while not the only part where the LLM learns (RL has its big weight) was produced by humans, so we are not appropriating something else."
What does that even mean? You are a failed novelist who does not have ideas and is now selling out your fellow programmers because you want to get richer.
> if vibe coding is the process of producing software without much understanding of what is going on (which has a place, and democratizes software production, so it is totally ok with me)
Strongly disagree. This is a huge waste of currently scarce compute/energy both in generating that broken slop and in running it. It's the main driver for the shortages. And it's getting worse.
A reminder that your LLM output isn't your intellectual property, no matter how much effort you feel went into its prompting.
Copyright protects human creations and the US Copyright Office has made it clear that AI output cannot be copyrighted without significant creative alterations from humans of the output after it is generated.
I stopped reading at "soon to become the practice of writing software".
That belief has no basis at this point, and it's been demonstrated not only that AI doesn't improve coding but also that the associated costs are not sustainable.
Because typing in text and syntax is now becoming irrelevant and mostly taken care of by language models. Computational thinking and semantics, on the other hand, will remain essential in the craft, and always have been.
Care to link your sources? At least one of the studies that got attention here was basically done with a bunch of programmers who had no prior experience with the tools.
It's getting silly. Every 3 days someone is trying to coin a new term for programming.
At the end of the day, you produce code for a compiler to produce other code, and then eventually run it.
It's called programming.
When carpenters got powertools, they didn't rename themselves automatic carpenters.
When architects started working with CAD instead of paper, they didn't become vibe architects, even though they literally copy-paste 3/5 of the content they produce.
Programming is evolving; there is a lot of senseless flailing because heads are spinning.
> I'm a programmer, and I use automatic programming. The code I generate in this way is mine. My code, my output, my production. I, and you, can be proud.
Disagree.
So when there is a bug / outage / error due to "automatic programming", are you ready to be first in line to accept accountability (the LLM cannot be) when it all goes wrong in production? I do not think that would even be enough, or that this would work in the long term.
No excuses like "I prompted it wrong" or "Claude missed something" or "I didn't check it over because 8 other AI agents said it was "absolutely right"™".
We will then have lots of issues such as this case study [0], where everything seemingly looked fine at first and all tests passed, but in production the logic had been misinterpreted by the LLM with a wrong keyword during a refactor.
> So when there is a bug / outage / error due to "automatic programming", are you ready to be first in line to accept accountability when it all goes wrong in production?
Absolutely yes. Automatic programming does not mean software developers are no longer accountable for their errors. Also because you can use AP to do way more QA than was possible in the past. If you decide to just add things without a rigorous process, it is your fault.
Agree. Much of the value of devs is understanding the thing they're working on, so they know what to do when it breaks, and know what new features it can easily support. It doesn't matter whether they wrote the code, a colleague wrote it, or an AI.
>> are you ready to be first in line to accept accountability
I'm accountable for the code I push to production. I have all the power and agency in this scenario, so I am the right person to be accountable for what's in my PR / CL.
That is the policy I set up for our team as well—when you push, you declare your absolute responsibility for any changes you made to the repository, regardless of the way they were conceived.
That is really about the least confusing part of the story.
Owning the issue is one thing, but being able to fix issues with a reasonable amount of resources is another.
To me, code created like this smells like technical debt. When bugs appear after 6 months in production (as they do), if you didn't fully understand the code when developing it, how much time, energy, and money will it cost to fix the problem later on?
More often than I'd like, I have had to deal with code where it felt like the developer didn't actually understand what they were writing.
Sometimes I was this developer, and it always creates issues.
I hope you aren't missing the point. My position is similar to the author's. I WILL take responsibility for the code I push to production, and rather than input a prompt and roll the dice on the outcome, I am strategic in my prompts, ensuring the LLM has the right context each time I invoke it, some of that context being accurate descriptions of what I want built, and I am in charge of ensuring it has been properly vetted. Many times I will erase what the LLM has written and redo it myself, depending on the situation.
Replace "LLM" with "IDE" and re-read. The LLM is another tool. Of course tools can't be held responsible, the person wielding the tool is.
> Many times I will erase what the LLM has written and redo it myself, depending on the situation.
The contention here is that antirez doesn't think this is necessary anymore. 100% code gen, with the occasional "stepping in and telling the AI how to write a certain function".
Vibe Engineering. Automatic Programming. “We need to get beyond the arguments of slop vs sophistication..."
Everyone seems to want to invent a new word for 'programming with AI' because 'vibe coding' seems to have come to equate to 'being rubbish and writing AI slop'.
...buuuut, it doesn't really matter what you call it does it?
If the result is slop, no amount of branding is going to make it not slop.
People are not stupid. When I say "I vibe coded this shit" I do not mean, "I used good engineering practices to...". I mean... I was lazy and slapped out some stupid thing that sort of worked.
/shrug
When AI assisted programming is generally good enough not to be called slop, we will simply call it 'programming'.
Until then, it's slop.
There is programming, and there is vibe coding. People know what they mean.
That's kind of Salvatore's point though; programming without some kind of AI contribution will become rare over time, like people writing assembly by hand is rare now. So the distinction becomes meaningless.
I prefer "LLM-assisted programming", as it captures the value/responsibility boundary pretty exactly. I think it was coined by simonw here, but unfortunately "vibe coding" became all-encompassing instead, rather than proper software engineers using "LLM-assisted" to properly distinguish themselves from vibe bros with very shallow knowledge.
"OpenAI is exploring licensing models tied to customer outcomes, including pharma partnerships." [1]
"OpenAI CFO Sarah Friar sketched a future in which the company's business models evolve beyond subscriptions and could include royalty streams tied to customer results." [1]
"Speaking on a recent podcast, Friar floated the possibility of "licensing models" in which OpenAI would get paid when a customer's AI-enabled work produces measurable outcomes." [1]
$30 a month or whatever amount of $$ per token does not justify the valuation of these companies. But you know what does? 5% of revenue from your software that their AI helped you to create. I can see a world in which you must state you've used their AI to write code and must use specific licenses for that code, which grant them part of your revenue.
I posted yesterday about how I'd invented a new compression algorithm and used an AI to code it. The top comment was like "You or Claude? ... also ... maybe consider more than just 1-shotting some random idea." This was apparently based on the signal that I had incorrectly added ZIP to the list of tools that use LZW (LZW being a tweak of LZ78, the dictionary-based sibling of LZ77, the back-reference variant by the same Lempel-Ziv team and the thing actually used in ZIP). This mistake was apparently a signal that I had no idea what I was doing, was a script kiddie who had just tried to one-shot some crap idea, and ended up with slop.
This was despite the code working and the results table being accurate. Admittedly the readme was hyped, and that probably set this person off too. But they were so far off in their belief that this was Claude's idea, Claude's solution, and just a one-off shot that they not only totally misrepresented me and my work, but also the whole process it would actually take to make something like this.
I feel that perhaps someone making such comments does not have much familiarity with automatic programming. Because here's what actually happened: the path from my idea (intuited in 2013, but beyond my skills to pull off easily until using AI) to a working implementation was about as far from a 'one-shot' as you can get.
The first iteration (Basic LZW + unbounded edit scripts + Huffman) was roughly 100x slower. I spent hours guiding the implementation through specific optimization attempts:
- BK-trees for lookups (eventually discarded as slow).
- Then going to arithmetic coding: first encoding codes + edit scripts together, later splitting them.
- Various strategies for pruning/resetting unbounded dictionaries.
- Finally landing on a fixed dict size with a Gray-Code-style nearest neighbor search to cap the exploration.
The AI suggested some tactical fixes (like capping the Levenshtein table, or splitting edits/codes in arithmetic coding), but the architectural pivots came from me. I had to find the winning path.
I stopped when the speed hit 'sit-there-and-watch-it-able' (approx 15s for 2MB) and the ratio consistently beat LZW (interestingly, for smaller dictionaries, which makes sense, as the edit scripts make each word more expressive).
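For reference, textbook LZW, the baseline being beaten here, is tiny; a rough Python sketch of the classic algorithm (just the standard version, nothing to do with the edit-script variant described above):

    def lzw_compress(data: bytes) -> list[int]:
        # Dictionary starts with all single-byte strings (codes 0-255).
        dictionary = {bytes([i]): i for i in range(256)}
        w = b""
        out: list[int] = []
        for b in data:
            wc = w + bytes([b])
            if wc in dictionary:
                w = wc
            else:
                out.append(dictionary[w])
                dictionary[wc] = len(dictionary)  # grow dictionary (unbounded here)
                w = bytes([b])
        if w:
            out.append(dictionary[w])
        return out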
That was my bar: Is it real? Does it work? Can it beat LZW? Once it did, I shared it. I was focused on the benchmark accuracy, not the marketing copy. I let the AI write the hyped readme; I didn't really think it mattered. Yes, this person fixated on a small mistake there, and completely misrepresented, or had the wrong model of, what it actually took to produce this.
I believe that kind of misperception must be the result of a lack of familiarity with using these tools in practice. I consider this kind of "disdain from the unserious & inexperienced" to be low quality, low effort commentary that essentially equates AI with clueless engineers and slop.
As antirez lays out: the same LLMs perform very differently depending on the human guiding the process with their intuition, design, continuous steering, and idea of the software.
Maybe some people are just pissed off - maybe their dev skills sucked before AI, and maybe they still suck with AI, and now they are mad at everything good people are doing with AI, and at AI itself?
Idk, man. I just reckon this is the age where you can really make things happen that you couldn't make before, and you should be into it and positive, if you are serious about making stuff. And making stuff is never easy. And it's always about you. A master doesn't blame his tools.
How big of a Carmack fan are you really, if you don't know one of his most well known takes on programming? (And you definitely don't need to be a fan.) Carmack has been heavily in favor of leveraging power tools since way back.
Direct quote from the man himself:
> I will engage with what I think your gripe is — AI tooling trivializing the skillsets of programmers, artists, and designers.
> My first games involved hand assembling machine code and turning graph paper characters into hex digits. Software progress has made that work as irrelevant as chariot wheel maintenance.
> Building power tools is central to all the progress in computers.
> Game engines have radically expanded the range of people involved in game dev, even as they deemphasized the importance of much of my beloved system engineering.
> AI tools will allow the best to reach even greater heights, while enabling smaller teams to accomplish more, and bring in some completely new creator demographics.
> Yes, we will get to a world where you can get an interactive game (or novel, or movie) out of a prompt, but there will be far better exemplars of the medium still created by dedicated teams of passionate developers.
> The world will be vastly wealthier in terms of the content available at any given cost.
I've seen that before. Re-reading it, I don't really get the same "vibe" as antirez's level of AI advocacy. You also conveniently omitted the last paragraph of the tweet:
> Will there be more or less game developer jobs? That is an open question. It could go the way of farming, where labor saving technology allow a tiny fraction of the previous workforce to satisfy everyone, or it could be like social media, where creative entrepreneurship has flourished at many different scales. Regardless, “don’t use power tools because they take people’s jobs” is not a winning strategy.
But yeah, it (almost) sounds like an ad for AI, but I like to believe it's still a measured somewhat neutral stance. The difference is that Carmack doesn't consistently post things like this unprompted, unlike antirez.
This. Thanks. It's a relief to see I am not the only one completely disappointed. I still believe that these posts are just an ad stunt to publicize their soon-to-be released AI tool. If they really believe what they're writing, it's really sad.
How does it feel to read yet another unbelievably unenlightening article about LLM usage voted to the top of the frontpage for the thousandth day in a row?
You either die as a programmer hero or live long enough to be a Linkedin-style influencer.
On a more serious note, the technology & use cases of AI are pretty divisive, especially within software engineering. I would consider the financial incentives driving it, and the ~3 TRILLION $ invested in AI, to be driving up some of this divide too.
How many times are we going to reinvent the wheel of LLM usage and applaud? Why is there, every day, another LLM usage article voted to the top of the frontpage while adding essentially nothing educational or significant to the discourse? Am I just jaded? It feels like the bar for "successful article on Hacker News" is so much lower for LLM discourse than for any other subject.
This was just such a worthless post that it made me sad. No arguments with moral weight or clarity. Just another hollowed out shell beeping out messages of doom...
I think if a manager just gave some high-level instructions and then went mostly hands-off, stepping in only when team members started quitting, dying, etc., that would be vibe managing. Normal managing would be much more supervision and guidance through feedback. This aligns 100% with TFA.
> Pre-training is, actually, our collective gift that allows many individuals to do things they could otherwise never do, like if we are now linked in a collective mind, in a certain way.
The question is whether you can have it all. Can you get faster results and still be growing your skills? Can we 10x the collective mind's knowledge with the use of AI, or do we need to spend a lot of time learning the old way™ to move the industry forward?
Also, nobody needs to justify what tools they are using. If there is pressure to justify them, we are doing something wrong.
No.
Agile's core is the feedback loop. I can't believe people still don't get it. Feedback from reality is always faster than guessing in the air.
Waterfall is never great. The only time you need something other than Agile is when lives are at stake; there you need formal specifications and rigorous testing.
SDD allows better output than traditional programming. It is similar to waterfall in the sense that the model helps you to write design docs in hours instead of days and take more into account as a result. But the feedback loop is there and it is still the key part in the process.
"Waterfall is never great."
The only software I ever worked on that delivered on time, under budget, and with users reporting zero bugs over multiple deliveries, was done with heavy waterfall. The key was knowing in advance what we were meant to be making, before we made it. This did demand high-quality customers; most customers are just not good enough.
> Feedback from reality is always faster than guessing on the air
Only if you have no idea what the results will be.
Professional engineering takes parts with specific tolerances, tested for a specific application, using a tried-and-true design, combines them into the solution that other people have already made, and watches it work, exactly as predicted. That's how we can build a skyscraper "the first time" and have it not fall down. We don't need to build 20 tiny versions of a building until we get a working skyscraper.
But when you build a skyscraper you don’t one shot a completed building that stays static its entire life - you build a set of empty floors that someone else designs & fits out, sometimes years after the building as a whole is commissioned, usually several times in the lifespan of the superstructure.
And in the fitting out there often are things that exist only to get customer feedback (or sales), such as model apartments, sample cubicle layouts, etc.
So yes, you are right that engineering can guide us to building something right the first time - the hard part from a software perspective is usually building the right thing, not building the thing right.
An interesting analogy I came across once but could never find again is that with software systems, we’re not building a building, we’re designing a factory that produces an output - the example was a mattress factory that took in raw rubber feedstock & cloth and produced mattresses.
Are you running a mattress factory? Or are you trying to run a hotel, and need mattresses, so you build a mattress factory? The "software industry" is that - dysfunctional with perverse incentives.
We should not be building the same software over and over and over and over. I've built the same goddamn app 10 times in my career. And I watch other people build it, making the same old mistakes over and over, like a thousand other people haven't already gone through this and could easily tell you how not to do it. In other engineering professions, they write that stuff down, and say "follow this plan" because it avoids all the big problems. Thank god we have a building code and not "agile buildings".
Agile sucks because it incentivizes those obvious mistakes and reinventing of wheels. Planning allows someone to stop and look up the correct way of building the skyscraper before it's 100 feet in the air with a cracked foundation.
I've lived through both eras...
Agile, hardly any planning, write 3 times.
Waterfall, weeks of planning, write 3 times anyway.
The point is, people don't know what they want or are asking for, until it's in front of them. No system is perfect, but waterfall leads to bigger disasters.
Any real software (that delivers value over time) is constantly rewritten and that's a good thing. The question is whether the same people are rewriting it that wrote it and what percentage of that rewriting is based off of a spec or based off of feedback from elsewhere in the system.
> The only time you need something other than Agile is when lives are at stake; there you need formal specifications and rigorous testing.
Lives are always at stake, given that we use software everywhere, and often in unintended ways, even outside its spec (isn't that a definition of a "hack"?).
People think of medical appliance software, space/air traffic software, defense systems or real-time embedded systems as the only environments where "lives are stake", but actually, in subtle ways, a violation of user expectancy (in some software companies, UX issues count as serious bugs) in a Word processor, Web browser or the sort command can kill a human.
Two real-life examples:
(1) A few years ago, a Chinese factory worker was killed by a robot. It was not in the spec that a human could ever walk in the robot's path (the first attested example of "AI" killing a human that I found at the time). This was way before deep learning entered the stage, and the factory was a closed and fully automated environment.
(2) Also a few years back, the Dutch software for social benefits management screwed up, and thousands of families just did not get paid any money at all for an extended period. Allegedly this led to starvation (I don't have details - but if any Dutch read this, please share), and eventually the whole Dutch government was forced to resign over the scandal.
That's a very narrow definition of engineering. What about property? Sensitive information?
It's a fine "whoopsie-doodle," when your software erases the life savings of a few thousand people. "We'll fix that in the next release," is already too little, too late.
This is correct. Agile is control theory applied to software engineering.
The plant to control here isn't something simple like a valve. You're performing cascaded control of another process where the code base is the interface to the plant you're controlling.
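To make the analogy concrete, here is a purely illustrative sketch (my own, not from the comment above): a proportional controller nudging a measured state toward a target, which is roughly what per-sprint feedback does to a plan.

    def p_controller(target: float, measured: float, gain: float = 0.5) -> float:
        # Proportional control: the correction is proportional to the error.
        return gain * (target - measured)

    # Each "sprint" measures reality and feeds the error back into the plan.
    state = 0.0
    for sprint in range(8):
        state += p_controller(target=10.0, measured=state)
        print(f"sprint {sprint}: {state:.2f}")  # converges toward 10.0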
I spent my career building software for executives that wanted to know exactly what they were going to get and when because they have budgets and deadlines i.e. the real world.
Mostly I’ve seen agile as, let’s do the same thing 3x we could have done once if we spent time on specs. The key phrase here is “requirements analysis” and if you’re not good at it either your software sucks or you’re going to iterate needlessly and waste massive time including on bad architecture. You don’t iterate the foundation of a house.
I see scenarios where Agile makes sense (scoped, in house software, skunk works) but just like cloud, jwts, and several other things making it default is often a huge waste of $ for problems you/most don’t have.
Talk to the stakeholders. Write the specs. Analyze. Then build. “Waterfall” became like a dirty word. Just because megacorps flubbed it doesn’t mean you switch to flying blind.
> The key phrase here is “requirements analysis” and if you’re not good at it either your software sucks or you’re going to iterate needlessly and waste massive time including on bad architecture. You don’t iterate the foundation of a house.
This depends heavily on the kind of problem you are trying to solve. In a lot of cases requirements are not fixed but evolve over time, either reacting to changes in the real word environment or by just realizing things which are nice in theory are not working out in practice.
You don’t iterate the foundation of a house because we have done it enough times and also the environment the house exists in (geography, climate, ...) is usually not expected to change much. If that were the case we would certainly build houses differently than we usually do.
> making it default is often a huge waste of $ for problems you/most don’t have.
It's the opposite — knowing the exact spec of your program up front is vanishingly rare, probably <1% of all projects. Usually you have no clue what you're doing, just a vague goal. The only way to find out what to build is to build something, toss it over to the users and see what happens.
No developer or, dear god, "stakeholder" can possibly know what the users need. Asking the users up front is better, but still doesn't help much — they don't know what they want either.
No plan survives first contact with the enemy and there's no substitute for testing — reality is far too complex for you to be able to model it up front.
> You don’t iterate the foundation of a house.
You do, actually. Or rather, we have: over thousands of years we've iterated and written up what we've learned, so that nobody has to iterate from scratch for every new house anymore. It's just that our physics, environment, and requirements for "a house" don't change constantly, like they do for software, and we've had thousands of years to perfect the craft, not some 50 years.
Also, civil engineers mess up in exactly the same ways. Who needs testing? [1]. Who needs to iterate as they're building? [2].
[1]: https://youtu.be/jxNM4DGBRMU?t=397
[2]: https://youtu.be/jxNM4DGBRMU?t=837
> knowing the exact spec of your program up front is vanishingly rare, probably <1% of all projects
I don't have anything useful to add, but both of you speak and write with conviction from your own experience and perspective, yet refuse to consider that the situation might be different for others.
"Software engineering" is a really broad field, some people can spend their whole life working on projects where everything is known up front, others the straight opposite.
Kind of feel like you both need to be clearer up front about your context and where you're coming from, otherwise you're probably both right, but just in your own contexts.
My experience is that such one-shotted projects never survive the collision with reality. Even with extremely detailed specs, the end result will not be what people had in mind, because human minds cannot fully anticipate the complexity of software, and all the edge cases it needs to handle. "Oh, I didn't think that this scheduled alarm is super annoying, I'd actually expect this other alarm to supersede it. It's great we've built this prototype, because this was hard to anticipate on paper."
I'm not saying I don't believe your report - maybe you are working in a domain where everything is super deterministic. Anyway, I don't.
I've been doing spec-driven development for the past 2 months, and it's been a game changer (especially with Opus 4.5).
Writing a spec is akin to "working backwards" (or future backwards thinking, if you like) -- this is the outcome I want, how do I get there?
The process of writing the spec actually exposes the edge cases I didn't think of. It's very much in the same vein as "writing as a tool of thought". Just getting your thoughts and ideas onto a text file can be a powerful thing. Opus 4.5 is amazing at pointing out the blind spots and inconsistencies in a spec. The spec generator that I use also does some reasoning checks and adds property-based test generation (Python Hypothesis -- similar to Haskell's Quickcheck), which anchors the generated code to reality.
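To give a flavor of what such a property test looks like (a generic illustration I'm making up here, not the actual output of my spec generator): a spec sentence like "whitespace normalization is idempotent" becomes an executable property that Hypothesis hammers with random inputs.

    from hypothesis import given, strategies as st

    def normalize(s: str) -> str:
        # Hypothetical function under test: collapse runs of whitespace.
        return " ".join(s.split())

    @given(st.text())
    def test_normalize_is_idempotent(s: str) -> None:
        assert normalize(normalize(s)) == normalize(s)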
Also, I took to heart Grant Slatton's "Write everything twice" [1] heuristic -- write your code once, solve the problem, then stash it in a branch and write the code all over again.
> Slatton: A piece of advice I've given junior engineers is to write everything twice. Solve the problem. Stash your code onto a branch. Then write all the code again. I discovered this method by accident after the laptop containing a few days of work died. Rewriting the solution only took 25% the time as the initial implementation, and the result was much better. So you get maybe 2x higher quality code for 1.25x the time — this trade is usually a good one to make on projects you'll have to maintain for a long time.
This is effective because initial mental models of a new problem are usually wrong.
With a spec, I can get a version 1 out quickly and (mostly) correctly, poke around, and then see what I'm missing. Need a new feature? I tell Opus to first update the spec, then code it.
And here's the thing -- if you don't like version 1 of your code, throw it away but keep the spec (those are your learnings and insights). Then generate a version 2 free of any sunk-cost bias, which, as humans, we're terrible at resisting.
Spec-driven development lets you "write everything twice" (throwaway prototypes) faster, which improves the quality of your insights into the actual problem. I find this technique lets me 2x the quality of my code, through sheer mental model updating.
And this applies not just to coding, but most knowledge work, including certain kinds of scientific research (s/code/LaTeX/).
[1] https://grantslatton.com/software-pathfinding
My experience with both Opus and GPT-codex is that they both just forget to implement big chunks of specs, unless you give them the means to self-validate their spec conformance. I'm finding myself sometimes spending more time coming up with tooling to enable this than on the actual work.
The key is generating a task list from the spec. Kiro IDE (not cli) generates tasks.md automatically. This is a checklist that Opus has to check off.
Try Kiro. It's just an all-round excellent spec-driven IDE.
You can still use Claude Code to implement code from the spec, but Kiro is far better at generating the specs.
p.s. if you don't use Kiro (though I recommend it), there’s a new way too — Yegge’s beads. After you install, prompt Claude Code to `write the plan in epics, stories and tasks in beads`. Opus will -- through tool use -- ensure every bead is implemented. But this is a more high variance approach -- whereas Kiro is much more systematic.
I’ve even built my own todo tool in zig, which is backed by SQLite and allows arbitrary levels of todo hierarchy. Those clankers just start ignoring tasks or checking them off with a wontfix comment the first time they hit adversity. Codex is better at this because it keeps going at hard problems. But then it compacts so many times over that it forgets the todo instructions.
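The arbitrary-depth hierarchy is just a self-referencing table plus a recursive query. A rough sketch of the idea in Python/SQLite terms (the real tool is in zig; the table and column names here are made up):

    import sqlite3

    conn = sqlite3.connect("todos.db")
    conn.executescript("""
    CREATE TABLE IF NOT EXISTS todos (
        id        INTEGER PRIMARY KEY,
        parent_id INTEGER REFERENCES todos(id),  -- NULL for top-level items
        title     TEXT NOT NULL,
        done      INTEGER NOT NULL DEFAULT 0
    );
    """)

    def subtree(conn: sqlite3.Connection, root_id: int) -> list[tuple]:
        # Recursive CTE walks the tree to any depth, so tasks can nest arbitrarily.
        return conn.execute("""
            WITH RECURSIVE tree(id, title, done, depth) AS (
                SELECT id, title, done, 0 FROM todos WHERE id = ?
                UNION ALL
                SELECT t.id, t.title, t.done, tree.depth + 1
                FROM todos t JOIN tree ON t.parent_id = tree.id
            )
            SELECT id, title, done, depth FROM tree
        """, (root_id,)).fetchall()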
I think there's a difference between people getting a system and realising it isn't actually what they wanted, and "never survive collision with reality".
They survive by being modified and I don't think that invalidates the process that got them in front of people faster than would otherwise have been possible.
This isn't a defence of waterfall though. It's really about increasing the pace of agile and the size of the loop that is possible.
I think I agree with what you’re saying? But that’s not the waterfall approach GP pitched.
I believe the future of programming will be specs, so I'm curious to ask you, as someone who operates this way already: are there any public specs you could point to that are worth learning from, that you revere? I'm thinking that just as past generations were referred to John Carmack's Quake code, the next generations will celebrate great specs.
Agile solves the problem of discovering a workable set of requirements while the environment is changing.
If you already know the requirements, it doesn't need to come into play.
While the environment is changing. That's the key.
If you already know the requirements, and they aren't going to change for the duration of the project, then you don't need agile.
And if you have the time. I recently was on a project with a compressed timeline. The general requirements were known, but not in perfect detail. We began implementation anyway, because the schedule did not permit a fully phased waterfall. We had to adjust somewhat to things not being as we expected, but only a little - say, 10%. We got our last change of requirements 3 or 4 weeks before the completion of implementation. The key to making this work was regular, detailed, technical conversations between the customer's engineers, the requirements writers, and our implementers.
Isn't this just a new name for "Design by Contract"?
https://www.goodreads.com/book/show/15182720-design-by-contr...
but using a Large-Language-Model rather than a subordinate team?
c.f., https://se.inf.ethz.ch/~meyer/publications/old/dbc_chapter.p...
What does the resulting code look like, though? I found that while <insert your favorite LLM> can spit out barely working C++ code fast, I then have to spend 10x the time prodding it to refactor the code to look at least somewhat acceptable.
No matter how much I tell it that it is a "professional experienced 10x developer versed in modern C++, a second coming of Stroustrup" in per-project or global config files, it still keeps spewing the same crap, big (manual memory management instead of RAII here and there, initializing fields in the ctor body instead of the initializer list, manual init/cleanup methods in classes instead of a proper ctor/dtor design to ensure that objects are always in a consistent state, a bunch of other anti-patterns, etc.) and small (checking for nullptr before passing the pointer to delete/free, manually instantiating objects as an argument to the shared_ptr ctor instead of using make_shared, endlessly casting stuff back and forth instead of designing data types properly, etc.).
Which makes sense, I guess, because that is unfortunately what average C++ code on GitHub looks like, and that is what all those models were trained on. But I keep feeling like my job is turning into performing endless code review for a not-very-bright junior developer that just refuses to learn...
This could be a language specific failure mode. C++ is hard for humans too, and the training code out there is very uneven (most of it pre-C++11, much of it written by non-craftspeople to do very specific things).
On the other hand, LLMs are great at Go because Go was designed for average engineers at scale, and LLMs behave like fast average engineers. Go as a language was designed to support minimal cleverness (there's only so many ways to do things, and abstractions are constrained). This kind of uniformity is catnip for LLM training.
This. I feel like the sentiment on HN is very bimodal. For me, my experience with LLMs is very much what you describe. Anything outside of generic tasks fails miserably. I'm really curious how people make it work so well.
Agile isn't against spec writing. Specs can be a task in your story, and so can automated tests. Both can be deliverables in your acceptance criteria. But that's not how it went - because human nature is to look for the least effort.
With AI, the least effort is the specs, so that's the "greatest thing to do" again.
Perhaps a better way than to view them as alternative choices is to view them as alternative modes of working, between which it is sometimes helpful to switch?
We know old-style classic waterfall lacks flexibility and agile lacks planning, but I don't see a reason why not to switch back and forth multiple times in the same project.
Yep. I've been into spec-driven development for a long time (when we had humans as agents) and it's never really failed me. We just have literally more attention (hah!) from LLMs than from humans.
> "using a new file is the requirements doc is so large is fills the context window"
using a new file IF the requirements doc is so large IT fills the context window
I need Claude to review my HN comments.
What's amusing to me is that PRIDE, the oldest generally available software methodology and perhaps the least appreciated, is basically just "spec driven development with human programmers". Most of the time and personnel involved in development is spent on elucidating the requirements and developing the spec; programmers only get involved at the end, and their contribution is about 15%. For a few decades this was considered the "correct" way to develop software. But then PCs happened, mom-and-pop software vendors stuffing floppy disks into Ziploc bags happened, and the myth of the lone "genius programmer" took hold of the industry, and programmers experienced such prestige inflation that they thought they were able to call the shots, and by and large management acquiesced. And that's how we got Agile.
With the rise of AI, maybe programmers will be put back in their rightful place, as contributors of the final small piece of the development process: a translation from business terms to the language of the computer. Programming as a profession should, by all rights, be obsolete. We should be able to express the solution directly in business terms and have the translation take place automatically. Maybe that day will be here soon.
As so often in life, extreme approaches are bad. If you do pure waterfall, you risk finding out very late that your plan might not work out, either because of unforeseen technical difficulties implementing it, because the given requirements were actually wrong/incomplete, or simply because you misjudged the point at which you had planned enough. If you do extreme agile, you often end up with a shit architecture, which, among other things, hurts your future agility, but you get a result you can validate against reality. The "oh, I didn't think of that" is definitely present in both extremes.
Agile is really about removing managers. The twelve principles does encourage short development cycles, but that's to prevent someone from going off into the weeds — having no manager to tell them to stop.
> Pre-training is, actually, our collective gift that allows many individuals to do things they could otherwise never do, like if we are now linked in a collective mind, in a certain way.
It's not a gift if it was stolen.
Anyway, in my opinion the code that was generated by the LLM is yours as long as you're responsible for it. When I look at a PR I'm reading the output of a person, independently of the tools that person used.
There's conflict perhaps when the submitter doesn't take full ownership of the code. So I agree with Antirez on that part
> It's not a gift if it was stolen.
Yeah, I had a visceral reaction to that statement.
Yet nobody is changing their licenses to exclude AI use. So I assume they are OK with it.
I would, and people do, but no one respects the license because it is largely unenforceable.
I don't respect the "license" because the very concept is broken and offensive to man and to God.
In an age when computers have enabled infinite duplication of everything for practically free, it is a barbarian and a vampire who would stand there with his hand held out expecting payment, or expecting his permission to be asked, every time bytes are copied from A to B.
I have been pirating everything since day 1 and thank God for it. Otherwise I'd still be an ignorant backwards rube like the rest of the huddled masses, with no money to buy any of these books etc to educate myself or gas money to constantly be driving to the library. Now I have one of the largest, best curated private libraries in the world, all for free.
I make no apologies to anyone, nor do I ever ask anyone's permission to copy bits and bytes around or share them freely.
Now somebody came out with a new technological innovation which takes advantage of the power of freely shared information in order to do something great and extraordinarily useful--and of course the Copyright Crew is there to scream loudly at the injustice of it all. I'm sick of these people.
Licenses mean nothing if AI training on your data is fair use, which courts have yet to determine.
You can have a license that says "NO AI TRAINING EVER" in no uncertain terms and it would mean absolutely nothing because fair use isn't dictated by licenses.
God has already made the determination. It has been determined that licenses just mean nothing, period. Any claim to the contrary is only mafia figures with guns trying to enforce illegitimate "ownership" claims on something that can't actually be owned by anyone.
The War on Copying Data Freely will surely end in happiness and utopia, just like the War on Drugs did.
What's the point of changing the license? It will be scraped anyway.
'The only winning move is not to play' - stop contributing to OSS.
It is knowledge; it can't be stolen. It is "stolen" only in the sense of someone gatekeeping knowledge, which as a practice is, to say the least, dubious. Because is math stolen? If you "stole" math to build your knowledge on top of it, then you own nothing yourself and could just as well be accused of stealing.
I disagree.
Code is the expression of knowledge and can be protected by copyright.
A lot of the popular licenses on GitHub (like MIT) permit you to use a piece of code on the condition that you credit the original author. If an LLM outputs code from such a project (or remixes code from several such projects), then it needs to credit the original authors or be in violation.
If Disney's intellectual property can be stolen and needs to be protected for 95+ years by copyright then surely the bedroom programmers' labor deserves the same protections.
We're not talking about the expression of knowledge. What is used in AI models is the knowledge from that expression. The code is not copied as is; instead, knowledge is extracted from it and used to produce similar code. Copyright does not apply, IMHO.
So you can train AI on Disney movies to generate and sell your own Disney movies, because "knowledge is extracted" from them? Betcha that won't fly in the courts. Here is "Slim Cinderella" - trained and extracted from all the Disney Cinderella movies!
Yes, I can train AI on Disney Movies and sell my own Disney movies. I might have to move to a more civilized nation free of thugs with guns who attempt to stop me from doing this, or I might have to lie very low and be careful not to attract their attention, but in either case it's quite possible, and even easy for me to do this thing. People will soon be doing it all the time on their 10 year old Dell.
Are you against copyright, patents, and IP in all forms then?
Independent of ones philosophical stance on the broader topic: I find it highly concerning that AI companies, at least right now, seem to be largely exempt from all those rules which apply to everyone else, often enforced rigorously.
I draw from this that no-one should be subject to those rules, and we should try to use the AI companies as a wedge to widen that crack. Instead, most people who claim that their objection is really only consistency, not love for IP, spend their time trying to tighten the definitions of fair use, widen the definitions of derivative works, and in general make IP even stronger, which will affect far more than just the AI companies they're going after. This doesn't look to me like the behavior of people who truly only want consistency but don't like IP.
And before you say that they're doing it because it's always better to resist massive, evil corporations than to side with them, even if it might seem expedient to do so, the people who are most strongly fighting against AI companies in favor of IP, in the name of "consistency" are themselves siding with Disney, one of the most evil companies — from the perspective of the health of the arts and our culture — that's working right now. So they're already fine with siding with corporations; they just happened to pick the side that's pro-IP.
oh hey, let's have a thought experiment in this world with no IP rules
suppose I write a webnovel that I publish for free on the net, and I solicit donations. Kinda like what's happening today anyway.
Now suppose I'm not good at marketing, but this other guy is. He takes my webnovel, changes some names, and publishes it online under his name. He is good at social media and marketing, and so makes a killing from donations. I don't see a dime. People accuse me of plagiarism. I have no legal recourse.
Is this fair?
There are also unfair situations that can happen, equally as often, if IP does exist, and likewise, in those situations, those with more money, influence, or charisma will win out.
Also, the idea that that situation is unfair relies entirely on the idea that we own our ideas and have a right to secure (future, hypothetical) profit from them. So you're essentially begging the question.
You're also relying on a premise that, when drawn out, seems fundamentally absurd to me: that you should own not just the money you earn, but the rights to any money you might earn in the future, had someone not done something that caused unrelated others to never have paid you. If you extend that logic, any kind of competition is wrong!
Let's have another thought experiment:
There are two programmers. The first is very talented technically but weak at negotiations, so he earns median pay. The second is average technically but very good at negotiations, and he earns much more.
is it fair?
life is not fair.
Surely one can easily see that the second programmer didn't take the first programmer's talent (or his knowledge) and claim it as their own...
Engineers man… of all the problems we see today, giving real power to engineers is probably a root cause of many.
In China, engineers hold the most power, yet the country prospers. I don't think the problem is giving engineers power; rather, it's a cultural thing. In China there is a general feeling of contributing towards society; in the US everyone is trying to screw over each other, for political or monetary reasons.
I am.
Absolutely. As any logical person should be.
This is obviously false on the face of it. Let's say I have a patent, song, or book that I receive large royalty payments for. It would obviously not be logical for me to be in favor of abolishing something that's beneficial to me.
Declaring that your side has a monopoly on logic is rarely helpful.
If you are so adamant about this, why don't you release all your own code in the public domain? Aren't you gatekeeping knowledge too?
I agree with GP, and so, yes, I release everything I do — code and the hundreds of thousands of painstakingly researched, drafted, deeply thought through words of writing that I do — using a public domain equivalent license (to ensure it's as free as possible), the zero clause BSD.
That's commendable, but unfortunately I asked GP.
I release all my code in the public domain too. I would never think of enlisting thugs with guns to dictate to other people how they use "my" code. It didn't come from me in the first place. It was all that pirating/reading I did online FOR FREE that enabled me to write this code. It was the genetics I inherited from my ancestors who made it happen.
I know you didn't ask me, and I don't care. I'm telling you.
The entire purpose of copyright and patent law is to enable uncreative fat cats like Kevin O'Leary to claim ownership over the creations of others. Period. It has nothing to do with "protecting" the creator, and never did. The ability to create is its own protection.
Ever watch Shark Tank? Mr. O'Leary--self-proclaimed Mr. Wonderful--LOVES deals involving patents and royalties. That's how he made all of his money, and without them he would be destitute.
Notice that the people with the fewest ideas are the biggest hoarders of whatever they have, and the most jealous of anyone who approaches "their" stuff--while those who are most creative are the most giving and sharing.
Is there a link?
Sure!
Personal blog: https://neonvagabond.xyz/ (591,305 total words, written over 6 years; feel free to do whatever you want with it)
My personal github page: https://github.com/alexispurslane/ (I only recently switched to Zero-Clause BSD for my code, and haven't gotten around to re-licensing all my old stuff, but I give you permission to send a PR with a different license to any of them if you wanna use any of it)
I arrived at a very similar conclusion since trying Claude Code with Opus 4.5 (a huge paradigm shift in terms of tech and tools). I've been calling it "zen coding", where you treat the codebase like a zen garden. You maintain a mental map of the codebase, spec everything before prompting for the implementation, and review every diff line by line. The AI is a tool to implement the system design, not the system designer itself (at least not for now...).
The distinction drawn between both concepts matters. The expertise is in knowing what to spec and catching when the output deviates from your design. Though, the tech is so good now that a carefully reviewed spec will be reliably implemented by a state-of-the-art LLM. The same LLM that produces mediocre code for a vague request will produce solid code when guided by someone who understands the system deeply enough to constrain it. This is the difference between vibe coding and zen coding.
Zen coders are masters of their craft; vibe coders are amateurs having fun.
And to be clear, nothing wrong with being an amateur and having fun. I "vibe code" several things with AI that are not really coding, but belong to other fields where I don't have professional knowledge. And it's great, because LLMs try to bring you closer to the top of human knowledge in any field, so as an amateur it is incredible to experience it.
> review every diff line by line
If you're this meticulous is it really any faster than writing code manually? I have found that in cases where I do care about the line-by-line it's actually slower to run it through Claude. It's only where I want to shovel it out that it's faster.
> Zen coders
Please don’t, it’s just my day job.
> I'm a programmer, and I use automatic programming. The code I generate in this way is mine. My code, my output, my production. I, and you, can be proud.
I disagree. The code you wrote is a collaboration with the model you used. By framing it this way, you are taking credit for the work the model did on your behalf. There is a difference between "I wrote this code entirely by myself" and "I wrote the code with a partner." For me, it is analogous to the author of the score of an opera taking credit for the libretto because they gave the libretto's author the rough narrative arc. If you didn't do it yourself, it isn't yours.
I generally prefer integrated works or at least ones that clearly acknowledge the collaboration and give proper credit.
Or for another analogy, just substitute the LLM for an outsourced firm. Instead of hiring a firm to do the work, you're hiring a LLM.
Also it's not only the work of "the model" it's the work of human beings the model is trained on, often illegally.
Copyright infringement is a tort. “Illegal” is almost always used to refer to breaking of criminal law.
This seems like intentionally conflating them to imply that appropriating code for model training is a criminal offense, when, even in the most anti-AI, pro-IP view, it is plainly not.
> “Illegal” is almost always used to refer to breaking of criminal law.
This is false, at least in general usage. It is very common to hear about civil offenses being referred to as illegal behavior.
https://www.justice.gov/archives/jm/criminal-resource-manual...
> There are four essential elements to a charge of criminal copyright infringement. In order to sustain a conviction under section 506(a), the government must demonstrate: (1) that a valid copyright; (2) was infringed by the defendant; (3) willfully; and (4) for purposes of commercial advantage or private financial gain.
I think it’s very much an open debate if training a model on publicly available data counts as infringement or not.
I'm replying to your comment about infringement being a civil tort versus a crime, it can be both.
I was about to argue, and then I suddenly remembered some past situations where a project manager clearly considered the code I wrote to be his achievement and proudly accepted the company's thanks.
The way I put it is: AI assistance in programming is a service, not a tool. It's like you're commissioning the code to be written by an outside shop. A lot of companies do this with human programmers, but when you commission OpenAI or Anthropic, the code they provide was written by machine.
The line gets blurrier the more auto-complete you use.
Agentic programming is, at the end of the day, a higher-level autocomplete, with extremely fuzzy matching on English.
But when you write a block and let Copilot complete 3, 4, 5 statements, are you really writing the code?
Truth is the highest level of autocomplete
How many JavaScript libraries does the average Fortune 1000 developer invoke when programming?
That average Fortune 1000 developer is still expected to abide by the licensing terms of those libraries.
And in practice, tools like NPM make sure to output all of the libraries' licenses.
Prompting the AI is indeed “do[ing] it yourself”. There’s nobody else here, and this code is original and never existed before, and would not exist here and now if I hadn’t prompted this machine.
Sure. But the sentence "I am a programmer" doesn't fit with prompting, just as much as me prompting for a drawing that resembles something doesn't make me a painter.
Exactly. He's acting as something closer to a technical manager (who can dip into the code if need be but mostly doesn't) than a programmer.
So, what's your take on Andy Warhol, or sampling in music?
We may be witnessing the last generation of master software artisans like antirez.
This is beautiful to see, their mastery harnessing the power of the intelligent machine tools to design, understand and build.
This is like seeing a master of image & light like Michelangelo receiving a camera, Photoshop and a printer. It's an exponential elevation of the art.
But to become a master like Michelangelo one had to dedicate herself to the craft of manually mixing and applying materials to bend and modulate light, slowly building and consolidating those neural pathways by reflection and, most of all, practice, until those skills became as natural as getting up or bringing a hand to the mouth. When that happened, art flowed from her mind to the physical world and the body became the vessel of intuition.
A master like antirez had to wrap his head around concepts alien to the human mind. Bits, bytes, arrays, memory layout, processors, compilers, interfaces, abstractions, constraints, types, concurrency do not exist in the savannas that forged brains. Had to comprehend and learn to use his own cognitive capabilities and restrictions to know at what level to break the code units and the abstraction boundaries. At the very top, master this in a level so high that software became like Redis: beautiful, powerful and so elevated in the art that it became simpler, not more complex. It's Picasso drawing a dog.
The intelligent software building machines can do things no human manually can (given the same time, humans die, get old or get bored), but they are not brush and canvas. They function in another way, the mind needs other paths to master them. The path to master them is not the same path to master artisanal software building.
So, this new generation, wanting to build things not possible for the artisan, will become masters of another craft, one we right now cannot even comprehend or imagine, in the same way Michelangelo could never imagine the level of control over light that modern photography masters have.
Me, not a master, but having dedicated my whole life to artisanal software building, am excited to receive and use the new tools, to experiment with the new craft. Also frightened by the uncertainty of this new world.
What a time to be alive.
> We maybe witnessing the last generation of master software artisans like antirez
What? He is mostly an AI influencer at this stage, even without getting paid for it (I think). There are always gonna be people writing code, people writing music; just because a machine can write code doesn't change the fact that coding itself is a fun exercise.
> We maybe witnessing the last generation of master software artisans like antirez.
I'm told that chess is more popular than ever, despite it being decades since a human could dream of beating a top computer at it.
More relevantly, I've been seeing an explosion of (ostensibly) human-produced artwork in my SM feed, even though Stable Diffusion and the like are supposed to bypass the need for artistic skill and make your anime waifu come to laifu with a paragraph of prompt.
Sorry, "SM"?
social media
not... not the other kind of SM
Not really.
>A master like antirez had to wrap his head around concepts alien to the human mind. Bits, bytes, arrays, memory layout, processors, compilers, interfaces, abstractions, constraints, types, concurrency do not exist in the savannas that forged brains.
You still need to know these things if you're doing anything more complicated than making some CRUD dashboard. LLMs assist with some code generation, and assist with some knowledge lookup. That's pretty much it.
What seems to be the case is that you need to know everything you needed to know before, and* become good at leveraging AI tooling to make you go faster.
*Even this is optional. There is absolutely nothing stopping anyone from just ignoring everything about AI and continuing to develop software like pre-2022. The efficiency difference isn't even significant in the grand scheme of things. It's not like people had reams of perfect software specs just lying around waiting to be implemented. That's just not how people develop software; usually the spec emerges while you're writing the program.
Well said. I think you see where things are going clearer than most here.
Every time I hear someone mention they vibed a thing or claude gave them something, it just reads as a sort of admission that I'm about to read some _very_ "first draft"-feeling code. I get this even from people who spend a lot of time talking about needing to own code you send up.
People need to stop apologizing for their work product because of the tools they use. Just make the work product better and you don't have to apologize or waste people's time.
Especially given that you have these tools to make cleanup easier (in theory)!
The problem is that it's become feasible to get further in a design without caring about architecture or code quality.
Yep. And they should also raise their expectations and work on delivering better vibecoded apps. Likely also with automation.
> Pre-training is, actually, our collective gift
I feel like this wording isn't great when there are many impactful open source programmers who have explicitly stated that they don't want their code used to train these models and licensed their work in a world where LLMs didn't exist. It wasn't their "gift", it was unwillingly taken from them.
> I'm a programmer, and I use automatic programming. The code I generate in this way is mine. My code, my output, my production. I, and you, can be proud.
I've seen LLMs generate code that I have immediately recognized as being copied from a book or technical blog post I've read before (e.g. exact same semantics, very similar comment structure and variable names). Even if not legally required, crediting where you got ideas and code from is the least you can do. LLMs, meanwhile, just launder code as completely your own.
I don't think it's possible to separate any open source contribution from the ones that came before it, as we're all standing on the shoulders of giants. Every developer learns from their predecessors and adapts patterns and code from existing projects.
Exactly that. And all the books about, for instance, operating systems are totally based on the work of others: their ideas were collected and documented, the exact algorithms, and so forth. All human culture has worked this way. Moreover, there is a strong pattern of the most prolific / well-known open source developers NOT being against the fact that their code was used for training: they can't speak for everybody, but it is a signal that for many this use is within the scope of making source code available.
> their ideas were collected and documented
Yeah, documented *and credited*. I'm not against the idea of disseminating knowledge, and even with my misgivings about LLMs, I wouldn't have said anything if this blog post was simply "LLMs are really useful".
My comment was in response to you essentially saying "all the criticisms of LLMs aren't real, and you should be uncompromisingly proud about using them".
> Moreover there is a strong pattern of the most prolific / known open source developers being NOT against the fact that their code was used for training
I think it's easy to get "echo-chambered" with this by who you follow online; my experience has been the opposite. I don't think it's clear what the reality is.
If you fork an open source project and nuke the git history, that's considered to be a "dick move" because you are erasing the record of people's contributions.
LLMs are doing this on an industrial scale.
I don't really understand how that isn't allowed/disallowed simply on the basis of whether the licence permits use without attribution?
The hard truth is that if you're big enough (and the original creator is small enough) you can just do whatever you want and to hell with what any license says about it.
To my understanding, the expensive lawyers hired by the biggest people around, filtered through layers of bureaucracy and translated to software teams, still result in companies mostly avoiding GPL code.
Which was in fact the very intent of the GPL from day one, putting all the marketing material and lies aside: to cripple and hinder the burgeoning open source ecosystem as long as possible.
This also explains decades of questionable decisions of projects like gcc, glibc, gimp (it's right there in the name!), gnome, etc. Richard Stallman is a plant.
Note his recent speech at the Georgia Tech, where he says a lot of very nice things, I'm sure...while wearing some goofy face mask like he's still scared to death of COVID. He is also well known for his lack of personal hygiene and well developed body odor, which is quite curious actually, as at least one person who put this character up for a few days reports that he is a fan of long, hot showers. It's almost like the whole "crusty bearded geek weirdo" thing is just an act, meant to give Free Software a bad reputation.
Reminds me very much of David McGowan's book Weird Scenes Inside the Canyon, in which he explains exactly who created the hippie movement and to what end. Exactly like that, in fact.
Great thing so many open source projects have willingly donated all their copyright ownership to the hands of this GNU organization, right? It will be closely guarded and protected, I'm sure.
I’ve been thinking that information provenance would be very useful for LLMs. Not just for attribution (git authors), but the LLM would know (and be able to control) which outputs are derived from reliable sources (e.g. Wikipedia vs a Reddit post; also which outputs are derived from ideologically-aligned sources, which would make LLMs more personal and subjectively better, but also easier to bias and generate deliberate misinformation).
“Information provenance” could (and I think most likely would, although I’m very unfamiliar with LLM internals) be which sources most plausibly derive an output, so even output that exists today could eventually get proper attribution.
At least today if you know something’s origin, and it’s both obvious and publicly online, you have proof via the Internet Archive.
You can say that about literally everything, yet we have robust systems for protecting intellectual property, anyway.
> I don't think it's possible to separate any open source contribution from the ones that came before it, as we're all standing on the shoulders of giants. Every developer learns from their predecessors and adapts patterns and code from existing projects.
Yes, but you can also ask the developer (whether on Libera IRC or, if it's a FOSS project, at any FOSS talk) which books and blogs they followed for code patterns and inspiration, and just talk to them.
I do feel like some aspects of this are gonna get eaten away by the black box if we do spec-development imo.
> there are many impactful open source programmers who have explicitly stated that they don't want their code used to train these models and licensed their work in a world where LLMs didn't exist. It wasn't their "gift", it was unwillingly taken from them.
There are subtle legal differences between "free open source" licensing and putting things in the public domain. If you use an open source license, you could forbid LLM training (in licensing law, contrary to all other areas of law, anything that is not granted to licensees is forbidden). Then you can take the big guys (MSFT, Meta, OpenAI, Google) to court if you can demonstrate they violated your terms.
If you place your software into the public domain, any use is fair, including ways to exploit the code or its derivatives not invented at the time of release.
Curiously, doesn't the GPL even imply that if you pre-train an LLM on GPLed code and use it to generate code (Claude Code etc.), all generated code -- as the derived intellectual property it clearly is -- must also be open sourced as per GPL terms? (It would seem in the spirit of the licensors.) I haven't seen this raised or discussed anywhere yet.
> If you use an open source license, you could forbid LLM training
Established OSS licenses are all from before anyone imagined that LLMs would come into existence, let alone train on and then generate code. Discrimination on purpose is counter to OSI principles (https://opensource.org/osd):
> 6. No Discrimination Against Fields of Endeavor
> The license must not restrict anyone from making use of the program in a specific field of endeavor. For example, it may not restrict the program from being used in a business, or from being used for genetic research.
The GPL argument you describe hinges on making the legal case that LLMs produce "derived works". When the output can't be clearly traced to source input (even the system itself doesn't know how) it becomes rather difficult to argue that in court.
One thing I'd love to point out here, to anyone wading through this discussion:
Step back and notice the VAST AMOUNT OF TIME AND ENERGY being wasted here and elsewhere, arguing about who claims to own what. What a giant waste, in an age where digital machines can reproduce any information infinitely for practically free.
Thanks, copyright law, and all the parasites (lawyers, etc) who depend on it for their big, expensive livelihoods. Thanks to the government and corporations who have squeezed us all and made it so hard to make a living that everyone feels they now have to monetize and profit from everything to survive. The gift that keeps on giving.
You presuppose that the output is a derived work (not a given) and that training is not fair use (also not a given).
If the courts decide to apply the law as you assume, the AI companies are all dead. But they are all betting that's not going to be the case. And since so much of the industry is taking the bet with them... the courts will take that into account.
> I feel like this wording isn't great when there are many impactful open source programmers who have explicitly stated that they don't want their code used to train these models
That’s been the fate of many creators since the dawn of time. Kafka explicitly stated that he wanted his works to be burned after his death. So when you’re reading about Gregor’s awkward interactions with his sister, you’re literally consuming the private thoughts of a stranger who stated plainly that he didn’t want them shared with anyone.
Yet people still talk about Kafka’s “contribution to literature” as if it were otherwise, with most never even bothering to ask themselves whether they should be reading that stuff at all.
If he didn't want us to read Metamorphosis he probably shouldn't have had it published. It was, long before his death.
But it's true much of his work was unpublished when he died and was "rescued" or "stolen", depending on what narrative you prefer.
when you implement a quicksort, do you credit Hoare in the comments?
No, in the same way that I wouldn't cite Euler every time I used one of his theorems - because it's so well known that its history is well documented in countless places.
However, if I was using a more recent/niche/unknown theorem, it would absolutely be considered bad practice not to cite where I got it from.
If I was implementing any known (named) algorithm intentionally I think I would absolutely say so in a comment (`// here we use quick sort to...` and maybe why it's the choice) and then it's easy for someone to look up and see it's due to Hoare or whoever on Wikipedia etc.
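To illustrate, here is a minimal sketch of what such an attribution comment could look like (Python; the function and comment wording are purely illustrative, not anyone's actual code):

```python
def quicksort(items: list) -> list:
    # Quicksort (C. A. R. Hoare, 1961): chosen for average-case O(n log n)
    # sorting; see any algorithms text or Wikipedia for history and analysis.
    if len(items) <= 1:
        return items
    pivot, *rest = items
    smaller = [x for x in rest if x < pivot]
    larger = [x for x in rest if x >= pivot]
    return quicksort(smaller) + [pivot] + quicksort(larger)
```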
Now many will downvote you because this is an algorithm and not some code. But the reality is that programming is in large part built by looking at somebody else's code / techniques, internalizing them, and reproducing them again with changes. So actually it works like that for code as well.
Intellectual property is not absolute and can be expropriated, just like any other property.
"Expropriated" usually means a government order, though. Do LLMs have one?
If you publish your code to others under permissive licenses, people using it to do things you do not want is not something being unwillingly taken from you.
You can do whatever you want with a gift. Once you release your code as free software, it is no longer yours. Your opinions about what is done with it are irrelevant.
But the license terms state under which conditions the code is released.
For example, the MIT license has this clause: "The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software."
It stands to reason that if an LLM outputs something based on MIT-licensed code, then that output should at least contain that copyright notice, because it's what the original author wished.
And I saw a comment below arguing that knowledge cannot be copyrighted, but the code is an expression of that knowledge and that most certainly can be protected by copyright.
> It wasn't their "gift", it was unwillingly taken from them.
Yes. Exactly. As a developer in that case I feel almost violated in my trust in “the internet.” Well it’s even worse, I did not really trust it, but did not think it could be that bad.
I don't understand this perspective. Programmers often scoff at most other examples of intellectual property, some throwing it out altogether. I remember reading Google v. Oracle, where Oracle sued Google for stealing code to perform a range check, about 9 lines long, used to check array index bounds.
I guess the difference is AI companies bad? This is transformative technology creating trillions in value and democratizing information, all subsidized by VC money. Why would anyone in open source who claims to have noble causes be against this? Because their repo will no longer get stars? Because no one will read their asinine stack overflow answer?
https://en.wikipedia.org/wiki/Google_LLC_v._Oracle_America,_....
Hot take: The Supreme Court should have sided with Oracle. APIs are a clear example of unique expression, and there is no statute exempting them specifically from copyright protection. If they are not protected by copyright, is anything really? What meaning has copyright law then?
Why is copyright law more important than anything else? AI is likely to drive the next stage of humanity's intellectual evolution, while copyright is a leaky legal abstraction that we pulled out of our asses a couple hundred years ago.
One of these is much more important than the other. If the copyright cartels insist on fighting AI, then they must lose decisively.
Hot take: Intellectual property law is stifling innovation and humanity would be better served scrapping it.
In the 1950s/1960s, the term "automatic programming" referred to compiler construction: instead of writing assembler code by hand, a FORmula TRANslator (FORTRAN) could "magically" turn a mathematical formula into code "by itself".
"4GL" was a phase in the 1980s when very high level languages very provided by software companies, often integrating DB access and especially suited for particular domains. The idea was that one could focus more on the actual problem rather than having to write boilerplate needed to solving it.
LLMs permit going from a natural-language specification to a draft implementation. If one is lucky, it runs and produces the desired results right away; more often, one needs to revise the code base iteratively, again navigated by NL commands, to fix errors, to change the design based on reviewing the first shot at it, to add features, etc.
> That said, if vibe coding is the process of producing software without much understanding of what is going on [...], automatic programming is the process of producing software that attempts to be high quality and strictly following the producer's vision of the software [...], with the help of AI assistance.
He is absolutely right here, and I think in this article he has "shaped" the direction of future software engineering (which is actually already happening): we are moving closer and closer to a new way of writing code. But this time, for real. I mean that it will increasingly become the standard. Just as in the past an architect used to draw every detail by hand, while today much of the operational work is delegated to parametric software, CAD, BIM, and so on. The architect does not "draw less" because they know less, but because the value of their work has shifted. This is a concept we've repeated often in recent months, with the advent of Opus 4.5 and 5.2-Codex. But I think that here antirez has given it the right shape, and also did well to distinguish it from mere vibecoding; as far as I'm concerned, they are two radically different approaches.
This is a classic false dichotomy. Vibe coding, automatic coding and coding is clearly on a spectrum. And I can employ all the shades during a single project.
Distilled:
> Users should claim the output of LLMs as their own, for the following reason. LLMs are tools; tools can be used with varying degrees of skill; the output of tools (including LLMs) is a function of the user's skill; and therefore the output is attributable to and belongs to the user.
> Furthermore, we should use tools, including LLMs, actively and mindfully. We shouldn't switch off our brains and accept the output uncritically. We should iterate and improve as we go along.
I agree with you that the author seems to inappropriately convert differences in degree of skill into differences of kind.
AI is like an instrument which can be played in various ways, different styles and intensities.
One might say it's spec strumming.
Friendly reminder that almost nobody is working this way now. You (reader) don't have to spend 346742356 tokens on that refactor. antirez won't magically swoop in and put your employer out of business with the Perfect Prompt (and accompanying AI blog post). There's a lot of software out there and MoltBook isn't going to spontaneously put your employer out of business either.
Don't fall into the trap of thinking "if I don't heavily adopt Claude Code and agentic flows today I'll be working at Subway tomorrow." There's an unhealthy AI hype cottage industry right now and you aren't beholden to it. Change comes slowly, is unpredictable, and believe it or not writing Redis and linenoise.c doesn't make someone clairvoyant.
Putting your head in the sand and ignoring it all isn't a good strategy either. Like it or not, AI will be a part of the rest of your career in some quantity. Not just because we collectively decide that we want to use these tools, but because tools that undeniably provide a huge productivity boost when used correctly are something the economy cannot ignore.
My advice would be to avoid feeling compelled to try every new tool immediately, but at least try to stay aware of major developments. A career in software engineering also dooms you to life-long learning in a very fast changing environment. This is no different. Agents are tools that work quite differently from what we're used to, and need cognitive effort and learning to wield effectively.
Waking up one day to realise you're now expected to work naturally in tandem with an AI agent but lack the experience is not a far-fetched scenario.
Like with most technological change I think there is no need for FOMO. You run into problems if you completely ignore already established and proven tools and practices for years to come but you don't have to jump onto every "this changes everything, trust me bro" hype.
Yes! Hooray! Automatic Programming!
I embrace this new term.
"Vibe coding" is good for describing a certain style of coding with AI.
"Automatic programming" is what I get paid for in my 9-5, things have to work and they have to work correctly. Things I write run in real production with real money at stake. Thus, I behave like an adult and a professional.
Thank you 'antirez for introducing this language.
a better term might be “feedback engineering” or “verification engineering” (what feedback loop do I need to construct to ensure that the output artifact from the agent matches my specification)
This includes standard testing strategies, but also much more general processes
I think of it as steering a probability distribution
At least to me, this makes it clear where “vibe coding” sits … someone who doesn’t know how to express precise verification or feedback loops is going to get “the mean of all software”
I disagree with referring to this as automatic programming, as if it's a binary statement. It's very much a spectrum, and this kind of software development is not fully automatic.
There's actually a wealth of literature on defining levels of software automation (such as: https://doi.org/10.1016/j.apergo.2015.09.013).
This raises some questions:
* Does the spec become part of the repository?
* Does "true open source" require that?
* Is the spec what you edit?
Maybe a language issue, but "automatic" would imply something happening without any intervention. Also, I don't like that everyone is trying to coin a term for this, but there is already a term called lite coding for this sort of setup; I just coined it.
>Vibe coding is the process of generating software using AI without being part of the process at all.
Even the most one-shot-prompt vibecoding still involves getting high-level intent from the person, who then tests the result in person. There is no "without being part of the process at all".
And from there its a gradient as to how much input & guidance is given.
This entire distinction he's trying to make here just doesn't make sense frankly. Trying to impose two categories on something that is clearly a continuous spectrum.
Or call it agentic coding..
Whatever you call it, for an experienced engineer to gain so much leverage in so little time while maintaining quality, it’s vibey and a ton of fun.
Have we ever had autocomplete programming? Then why have a new term for LLM-assisted programming?
Everyone wants to take credit for a naming convention, become part of history I suppose!
I'll do my own, narcissistically: Typeless programming!
Only if you exclude those long pages of markdown spec you had to type!
I don’t think that is a good term. We generally designate processes as “automatic” or “automation” that work without any human guidance or involvement at all. If you have to control and steer something, it’s not automatic.
There's a hidden assumption in the waterfall vs agile debate that AI might actually dissolve: the cost of iteration.
Waterfall made sense when changing code was expensive. Agile made sense when you couldn't know requirements upfront. But what if generating code becomes nearly free?
I've been experimenting with treating specs as the actual product - write the spec, let AI generate multiple implementations, throw them away daily. The spec becomes the persistent artifact that evolves, while code is ephemeral.
The surprising part: when iteration is cheap, you naturally converge on better specs. You're not afraid to be wrong because being wrong costs 20 minutes, not 2 sprints.
Anyone else finding that AI is making them more willing to plan deeply precisely because execution is so cheap that plans can be validated quickly?
"I automatically programmed it" doesn't really roll off the tongue, nor does it make much sense - I reckon we need a better term.
It's certainly quicker (and at times, more fun!) to develop this way, that is for certain.
You will say "I programmed it"; there is no longer a need for this distinction. But then you can add that you used automatic programming in the process. And shortly there will be no need to refer to this term at all, similar to how today you don't specify that you used an editor...
I like to think that the prompt is dark magic and the outputs are conjured. I get to feel like a wizard.
(Yes?) but the editor isn't claiming to take your job in 5 years.
Also I do feel like this is a very substantial leap.
This is sort of like the difference between some and many.
Your editor has some effect on the final result, so crediting it / mentioning it doesn't really impact things (but people still do mention their editor choices, and I know some git repos with .vscode directories which can show that the creator used VS Code; I'm unfamiliar with whether the same might be true for other editors too).
But especially with AI, the difference is that I personally feel like it's doing many/most of the work. It's literally writing the code which turns into the binary which runs on the machine, all while being a black box.
I don't really know, because it's something that I am conflicted about too, but I just want to speak my mind, even if it may be a little contradictory on the whole AI distinction thing, which is why I wish to discuss it with ya.
LLMs translate specs into code. If you master computational thinking like Antirez, you basically reduce LLMs to intelligent translators of the stated computational ideas and specifications into a(ny) formal language, plus the typing. In that scenario LLMs are a great tool and speed up the coding process. I like how the power is in semantics, whereas syntax becomes more and more a detail (and rightfully so)!
I coined the term lite coding for this after reading this article and now my chatGPT has convinced me that I am a genius
"Throwaway prototype" - that's the traditional term for this.
@antirez, if you're reading this, I think it would be insightful if you could share your current AI workflow, the tools you use, etc. Thanks!
Thanks! I'm sharing a lot on X / BlueSky + YouTube, but once the C course on YouTube is finished, I'll start a new course on programming in this way. I need a couple more lessons to declare the C course closed (later I'll likely restart it with the advanced part). Then I can start with the AP course.
Looking forward! The C course is great!
Appreciated :)
I do not agree at all with his contrasting definitions of “vibe coding” vs “automatic programming”. If a knowledgeable software engineer can say that Claude’s code is actually theirs, so can everyone else. Otherwise, we could argue that Hell has written a book about itself using Dante Alighieri as its tool, given how much we still do not know about our brains, language, creative process, etc.
It’s very healthy to have the “strong anti-disclosure” position expressed with clarity and passion.
"When the process is actual software production where you know what is going on, remember: it is the software you are producing. Moreover remember that the pre-training data, while not the only part where the LLM learns (RL has its big weight) was produced by humans, so we are not appropriating something else."
What does that even mean? You are a failed novelist who does not have ideas and is now selling out his fellow programmers because he wants to get richer.
> if vibe coding is the process of producing software without much understanding of what is going on (which has a place, and democratizes software production, so it is totally ok with me)
Strongly disagree. This is a huge waste of currently scarce compute/energy both in generating that broken slop and in running it. It's the main driver for the shortages. And it's getting worse.
I would hate a future without personal computing.
A reminder that your LLM output isn't your intellectual property, no matter how much effort you feel went into its prompting.
Copyright protects human creations and the US Copyright Office has made it clear that AI output cannot be copyrighted without significant creative alterations from humans of the output after it is generated.
There are other ways to protect code assets than through Copyright.
Not that I necessarily disagree with any of it, but one word comes to mind as I read through it: “copium”
I stopped reading at "soon to become the practice of writing software".
That belief has no basis at this point and it's been demonstrated not only that AI doesn't improve coding but also that the costs associated are not sustainable.
I continued reading, but you're right. Why did the author feel that it was necessary to include that?
Because typing in text and syntax is now becoming irrelevant and mostly taken care of by language models. Computational thinking and semantics, on the other hand, will remain essential to the craft, and always have been.
Care to link your sources? At least one of the studies that got attention here was basically done with a bunch of programmers who had no prior experience with the tools.
It's getting silly. Every 3 days someone is trying to coin a new term for programming.
At the end of the day, you produce code for a compiler to produce other code, and then eventually run it.
It's called programming.
When carpenters got powertools, they didn't rename themselves automatic carpenters.
When architects started working with CAD instead of paper, they didn't become vibe architects, even though they literally copy-paste 3/5 of the content they produce.
Programming is evolving; there is a lot of senseless flailing because heads are spinning.
> I'm a programmer, and I use automatic programming. The code I generate in this way is mine. My code, my output, my production. I, and you, can be proud.
Disagree.
So when there is a bug / outage / error due to "automatic programming", are you ready to be first in line to accept accountability (the LLM cannot be) when it all goes wrong in production? I am not sure that would even be enough, or whether this would work in the long term.
No excuses like "I prompted it wrong" or "Claude missed something" or "I didn't check over because 8 other AI agents said it was "absolutely right"™".
We will then have lots of issues such as this case study [0], where everything seemingly looks fine at first and all tests pass, but in production the logic was misinterpreted by the LLM with a wrong keyword during a refactor.
[0] https://sketch.dev/blog/our-first-outage-from-llm-written-co...
> So when there is a bug / outage / error, due to "automatic programming" you are first in line and ready to accept accountability when it all goes wrong in production?
Absolutely yes. Automatic programming does not mean software developers are no longer accountable for their errors. Also because you can use AP to do way more QA than was possible in the past. If you decide to just add things without a rigorous process, it is your fault.
Agree. Much of the value of devs is understanding the thing they're working on so they know what to do when it breaks, and knows what new features it can easily support. Doesn't matter whether they wrote the code, a colleague wrote it, or an AI.
Yep, writing the code might have gotten a little bit easier, but it never was the hard part to begin with.
>> are you ready to be first in line to accept accountability
I'm accountable for the code I push to production. I have all the power and agency in this scenario, so I am the right person to be accountable for what's in my PR / CL.
That is the policy I set up for our team as well—when you push, you declare your absolute responsibility for any changes you made to the repository, regardless of the way they were conceived.
That is really about the least confusing part of the story.
Owning the issue is one thing, but being able to fix issues with a reasonable amount of resources is another.
To me, code created like this smells like technical debt. When bugs appear after 6 months in production (as they do), if you didn't fully understand the code when developing it, how much time, energy, and money will it cost to fix the problem later on?
More often than I'd like, I've had to deal with code where it felt like the developer didn't actually understand what they were writing. Sometimes I was that developer, and it always creates issues.
I hope you aren't missing the point. My position is similar to the author's. I WILL take responsibility for the code I push to production, and rather than input a prompt and roll the dice on the outcome, I am strategic in my prompts, ensuring the LLM has the right context each time I invoke it, some of that context being accurate descriptions of what I want built, and I am in charge of ensuring it has been properly vetted. Many times I will erase what the LLM has written and redo it myself, depending on the situation.
Replace "LLM" with "IDE" and re-read. The LLM is another tool. Of course tools can't be held responsible, the person wielding the tool is.
> Many times I will erase what the LLM has written and redo it, by myself depending on the situation.
The contention here is that antirez doesn't think this is necessary anymore: 100% code gen, with the occasional "stepping in and telling the AI how to write a certain function".
Your position is more balanced and quite similar to https://mitchellh.com/writing/non-trivial-vibing
Vibe Engineering. Automatic Programming. “We need to get beyond the arguments of slop vs sophistication..."
Everyone seems to want to invent a new word for 'programming with AI' because 'vibe coding' seems to have come to equate to 'being rubbish and writing AI slop'.
...buuuut, it doesn't really matter what you call it does it?
If the result is slop, no amount of branding is going to make it not slop.
People are not stupid. When I say "I vibe coded this shit" I do not mean, "I used good engineering practices to...". I mean... I was lazy and slapped out some stupid thing that sort of worked.
/shrug
When AI assisted programming is generally good enough not to be called slop, we will simply call it 'programming'.
Until then, it's slop.
There is programming, and there is vibe coding. People know what they mean.
We don't need new words.
That's kind of Salvatore's point though; programming without some kind of AI contribution will become rare over time, like people writing assembly by hand is rare now. So the distinction becomes meaningless.
There is no perfect black or perfect white, so the distinction is meaningless, everything is gray.
Describes appropriation and then says "so it's not appropriation". Wat.
I prefer "LLM-assisted programming" as it captures the value/responsibilty boundary pretty exactly. I think it was coined by simonw here, but unfortuantely "vibe coding" become all encompassing instead of proper software engineers using "LLM-assistant" to properly distinguish themselves from vibe bros with very shallow knowledge.
I think Antirez is gonna change his tune about this as soon as OpenAI et al. start requesting royalties from software you built using their AI.
No problem, we'll switch to EU/Chinese models in a blink. Just started using Kimi 2.5 a couple days ago and it's almost on par with Opus 4.5.
I think he would too, but they’re obviously not going to do that.
"OpenAI is exploring licensing models tied to customer outcomes, including pharma partnerships." [1]
"OpenAI CFO Sarah Friar sketched a future in which the company's business models evolve beyond subscriptions and could include royalty streams tied to customer results." [1]
"Speaking on a recent podcast, Friar floated the possibility of "licensing models" in which OpenAI would get paid when a customer's AI-enabled work produces measurable outcomes." [1]
$30 a month or whatever amount of $$ per token does not justify the valuation of these companies. But you know what does? 5% of revenue from the software that their AI helped you create. I can see a world in which you must state that you've used their AI to write code and must use specific licenses for that code, which grant them part of your revenue.
[1] https://www.businessinsider.com/openai-cfo-sarah-friar-futur...
This won’t happen with general software though, that ship has sailed, the space is too competitive.
I hope they try.
Thank you. I and you can be proud. Yes we can! :)
I posted yesterday about how I'd invented a new compression algorithm and used an AI to code it. The top comment was like "You or Claude? ... also ... maybe consider more than just 1-shotting some random idea." This was apparently based on the signal that I had incorrectly added ZIP to the list of tools that use LZW (which is a tweak of LZ78, the dictionary-based counterpart, by the same Lempel-Ziv team, of the back-reference variant LZ77, the thing actually used in ZIP). This mistake was apparently a signal that I had no idea what I was doing, was a script kiddie who had just tried to one-shot some crap idea, and had ended up with slop.
This was despite the code working and the results table being accurate. Admittedly the readme was hyped, and that probably set this person off too. But they were so far off in their belief that this was Claude's idea, Claude's solution, and just a one-off, that they not only totally misrepresented me and my work, but also the whole process it would actually take to make something like this.
I feel that perhaps someone making such comments does not have much familiarity with automatic programming. Because here's what actually happened: the path from my idea (intuited in 2013, but beyond my skills to do easily until using AI) to a working implementation was about as far from a "one-shot" as you can get.
The first iteration (Basic LZW + unbounded edit scripts + Huffman) was roughly 100x slower. I spent hours guiding the implementation through specific optimization attempts (for readers unfamiliar with the baseline, a minimal LZW sketch follows this list):
- BK-trees for lookups (eventually discarded as slow).
- Then going to Arithmetic coding. First both codes + scripts, later splitting.
- Various strategies for pruning/resetting unbounded dictionaries.
- Finally landing on a fixed dict size with a Gray-Code-style nearest neighbor search to cap the exploration.
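For context, a minimal sketch of the classic LZW baseline referred to above (Python; illustrative only -- the edit-script layer, the Huffman/arithmetic coding stage, and the fixed-size dictionary with nearest-neighbor search are deliberately not shown, and all names here are my own):

```python
def lzw_compress(data: bytes) -> list[int]:
    # Classic LZW: start from a dictionary of all single bytes, greedily
    # extend the current phrase, and emit a code whenever the extension
    # is unseen. Note the dictionary grows without bound -- the pruning /
    # reset / fixed-size strategies listed above address exactly this.
    dictionary = {bytes([i]): i for i in range(256)}
    phrase = b""
    codes: list[int] = []
    for byte in data:
        candidate = phrase + bytes([byte])
        if candidate in dictionary:
            phrase = candidate
        else:
            codes.append(dictionary[phrase])
            dictionary[candidate] = len(dictionary)
            phrase = bytes([byte])
    if phrase:
        codes.append(dictionary[phrase])
    return codes
```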
The AI suggested some tactical fixes (like capping the Levenshtein table, and splitting edits/codes in arithmetic coding), but the architectural pivots came from me. I had to find the winning path.
I stopped when the speed hit "sit-there-and-watch-it-able" (approx. 15s for 2MB) and the ratio consistently beat LZW (interestingly, for smaller dictionaries, which makes sense, as the edit scripts make each word more expressive).
That was my bar: Is it real? Does it work? Can it beat LZW? Once it did, I shared it. I was focused on the bench accuracy, not the marketing copy. I let the AI write the hyped readme; I didn't really think it mattered. Yes, this person fixated on a small mistake there, and completely misrepresented, or had the wrong model of, what it actually took to produce this.
I believe that kind of misperception must be the result of a lack of familiarity with using these tools in practice. I consider this kind of "disdain from the unserious and inexperienced" to be low-quality, low-effort commentary that essentially equates AI with clueless engineers and slop.
As antirez lays out: the same LLMs perform differently depending on the human guiding the process with their intuition, design, continuous steering, and idea of the software.
Maybe some people are just pissed off - maybe their dev skills sucked before AI, maybe they still suck with AI, and now they are mad at everything good people are doing with AI, and at AI itself?
Idk, man. I just reckon this is the age where you can really make things happen that you couldn't make before, and you should be into it and positive about it, if you are serious about making stuff. And making stuff is never easy. And it's always about you. A master doesn't blame his tools.
cope
[stub for offtopicness]
How does it feel to see all your programming heroes turn into Linkedin-style influencers?
Please don't cross into personal attack, no matter how much you dislike an article.
You may not owe programming heroes or Linkedin influencers better, but you owe this community better if you're participating in it.
https://news.ycombinator.com/newsguidelines.html
I used to aspire to reach the same and now I lose a bit more respect with their every drag of the AI-pipe.
The trick always was to not heroify people. Mentally putting people on a pedestal is almost always a mistake.
I don't see Carmack or Torvalds doing this, so it's all good (for now).
How big of a Carmack fan are you really, if you don't know one of his most well known takes on programming? (And you definitely don't need to be a fan.) Carmack has been heavily in favor of leveraging power tools since way back.
Direct quote from the man himself:
> I will engage with what I think your gripe is — AI tooling trivializing the skillsets of programmers, artists, and designers.
> My first games involved hand assembling machine code and turning graph paper characters into hex digits. Software progress has made that work as irrelevant as chariot wheel maintenance.
> Building power tools is central to all the progress in computers.
> Game engines have radically expanded the range of people involved in game dev, even as they deemphasized the importance of much of my beloved system engineering.
> AI tools will allow the best to reach even greater heights, while enabling smaller teams to accomplish more, and bring in some completely new creator demographics.
> Yes, we will get to a world where you can get an interactive game (or novel, or movie) out of a prompt, but there will be far better exemplars of the medium still created by dedicated teams of passionate developers.
> The world will be vastly wealthier in terms of the content available at any given cost.
https://x.com/ID_AA_Carmack/status/1909311174845329874
I've seen that before. Re-reading it, I don't really get the same "vibe" as antirez's level of AI advocacy. You also conveniently omitted the last paragraph of the tweet:
> Will there be more or less game developer jobs? That is an open question. It could go the way of farming, where labor saving technology allow a tiny fraction of the previous workforce to satisfy everyone, or it could be like social media, where creative entrepreneurship has flourished at many different scales. Regardless, “don’t use power tools because they take people’s jobs” is not a winning strategy.
But yeah, it (almost) sounds like an ad for AI, but I like to believe it's still a measured somewhat neutral stance. The difference is that Carmack doesn't consistently post things like this unprompted, unlike antirez.
The closest we've gotten with Torvalds was him using LLMs for non-important tasks.
In case you didn’t know, Linus does vibe code now:
https://github.com/torvalds/AudioNoise/blob/main/README.md
Just the visualizer:
> Also note that the python visualizer tool has been basically written by vibe-coding.
Also, that readme is still fairly technical, without any kind of advocacy or heavy pro-AI sentiment.
Or Knuth.
This is why you should never meet--nor listen too much to--your heroes.
This. Thanks. It's a relief to see I am not the only one completely disappointed. I still believe that these posts are just an ad stunt to publicize their soon-to-be released AI tool. If they really believe what they're writing, it's really sad.
I feel slightly disappointed. At the same time nobody is obliged to live like the public (or his "fans") think that person should live.
How does it feel to read yet another unbelievably unenlightening article about LLM usage voted to the top of the frontpage for the thousandth day in a row?
You either die as a programmer hero or live long enough to be a Linkedin-style influencer.
On a more serious note, the technology and use cases of AI are pretty divisive, especially within software engineering. I would consider the financial incentives driving it, and the ~3 TRILLION $ invested in AI, to be driving up some of this divide too.
It’s not automatic programming, any more than compiling is. It’s a form of high level programming.
It’s also sloppy and irresponsible. But hey, you can fake your work faster and more convincingly than ever before.
Call it slop coding.
How many times are we going to reinvent the wheel of LLM usage and applaud? Why every day is there another LLM usage article adding essentially nothing educational or significant to the discourse voted to the top of the frontpage? Am I just jaded? It feels like the bar for "Successful article on Hacker News" is so much lower for LLM discourse than for any other subject
This was just such a worthless post that it made me sad. No arguments with moral weight or clarity. Just another hollowed out shell beeping out messages of doom...
Vibe coding is an idiotic term and it's a shame that it stuck. If I'm a project lead and just giving directions to the devs I'm also "vibe coding"?
I guess a large part of that is that 1-2 years ago the whole process was much more non-deterministic, and actually getting a sensible result was much harder.
I think if a manager just gave some high-level instructions and then went mostly hands-off until team members started quitting, dying, etc., and only then stepped in, that would be vibe managing. Normal managing involves much more supervision and guidance through feedback. This aligns 100% with TFA.
Sculpt coding??
Sculding??
Rice by any other name??
> Pre-training is, actually, our collective gift that allows many individuals to do things they could otherwise never do, like if we are now linked in a collective mind, in a certain way.
The question is whether you can have it all. Can you get faster results and still grow your skills? Can we 10x the collective mind's knowledge with the use of AI, or do we need to spend a lot of time learning the old way™ to move the industry forward?
Also, nobody needs to justify what tools they are using. If there is pressure to justify them, we are doing something wrong.
People feel ripped off by AI and by products which use AI. That is why you have to justify using AI as a tool.