I’ve noticed this trend in comments across the internet. Someone will ask or say something, then someone else will reply with “I asked ChatGPT and it says…” or “According to AI…”
ChatGPT is free and available to everyone, and so are a dozen other LLMs. If the person making the comment wanted to know what ChatGPT had to say, they could just ask it themselves. I guess people feel like they’re being helpful, but I just don’t get it.
Though with that said, I’m happy when they at least say it’s from an LLM. At least then I know I can ignore it. Worse is replying as if it’s their own answer when really it’s just copy-pasted from an LLM. Those are more insidious.
Isn't it the modern equivalent of "let me Google that for you"?
My experience is that the vast majority of people do 0 research (AI assisted or not) before asking questions online. Questions that could have usually been answered in a few seconds if they had tried.
If someone prefaces a question by saying they've done their research but would like validation, then yes, it's in incredibly poor taste.
There's seemingly a difference in motive. The AI responses seem to come from people who are fascinated by AI generally and want to share the response.
"Let me Google that for you" was more about trying to get people to look up trivial things on their own, rather than query some forum repeatedly.
Exactly. The "I asked ChatGPT" people give off "I'm helping" vibes, but in reality they are just annoying and clogging up the internet with spam that nobody asked for.
It is the modern equivalent of "let me Google that for you", except that most of the people doing it don't seem to realize that they're telling the person to fuck off, while that absolutely was the intent with lmfgtfy.
> Isn't it the modern equivalent of "let me Google that for you"?
No. With Google you get many answers. With AI you get one. Also, we know that AI is unreliable to some degree, so it's highly probable that you can find a better source on Google than the AI's answer. This is especially bad when the question is something niche. So it's definitely a worse version of lmgtfy.
It was meant to be - the whole point of that meme was to shame/annoy lazy people into looking things up themselves, instead of asking random strangers on the internet.
And also just spouting unhinged shit divorced from reality, which is pretty common. You get tired enough dealing with these people that an AI response is warranted.
That's not like pasting in a screenshot or a copy/paste of an AI answer, it's being intentionally dismissive. You weren't actually doing the "work" for them, you were calling them lazy.
The way I usually see the AI paste being used is by people trying to refute something somebody said, about a subject they themselves don't know anything about.
To modify a Hitchens-ism:
> What can be asserted without evidence can also be dismissed without evidence.
Becomes
> That which can be asserted without thought can be dismissed without thought.
Since no current AI thinks, but humans do, I’m just going to dismiss anything an AI says out of hand, because you are pushing the cost of parsing what it said onto me and off yourself. Nah, I ain’t accepting that.
Elegant and correct.
It seems so obvious to me that if someone wanted a ChatGPT answer they would have sought it out for themselves, and yet... it's happened to me more than a few times. I think some people think they are being clever and resourceful (or 'efficient'), but it just dilutes their own authority on the very thing they were asked to opine on.
The irony is that the disclosure of “I asked ChatGPT and it says…” is done as a courtesy to let the reader be informed. Given the increasing backlash against that disclosure, people will just stop disclosing which is worse for everyone.
The only workaround is to just take text as-is and call it out when it's wrong or bad, AI-generated or otherwise, as we did before 2023.
I think it's fine to not disclose it. Like, don't you find "Sent from my iPhone" that iPhones automatically add to emails annoying? Technicalities like that don't bring anything to the conversation.
I think typically, the reason people are disclosing their usage of LLMs is that they want to offload responsibility. To me it's important to see them taking responsibility for their words. You wouldn't blame Google for bad search results, would you? You can only blame the entity that you can actually influence.
That’s true. Unfortunately the ideal takeaway from that sentiment should be “don’t reply with copy pasted LLM answers”, but I know that what you’re saying will happen instead.
Exactly, it is important and courteous still to cite your resources and tools.
I find a good workaround is to just say "some very quick research of my own leads me to ...", and then summarize what ChatGPT said. Especially if you are using e.g. an LLM with search enabled, this is borderline almost literally true, but makes it clear you aren't just stating something completely on your own.
Of course, you should still actually verify the outputs. If you do, there is not much wrong with not mentioning using the LLM, since you've done the most important thing anyway (not been lazy in your response). If you don't verify, you had better say that.
Except it isn't. It's a disclosure to say "If I'm wrong, it's not my fault".
Because if they'd actually read the output, then cross-checked it and developed some confidence in the opinion, they wouldn't put what they perceive as the most important part up front ("I used ChatGPT") - they'd put the conclusion.
It isn't this cut and dry. You can cross-check and verify, but still have blind spots (or know that the tools have biases as well), and so consider it still important to mention the LLM use up front.
Or, if you preface a comment with "I am not an expert, but...", it is often not about seeking to avoid all blame, but simply about giving the reader reasonable context.
Of course, you are right, it is also sometimes just lazy dodging.
> someone else will reply with “I asked ChatGPT and it says…” or “According to AI…”
I had a consultant I’m working with have an employee do that to me. I immediately insisted that every hour they’ve billed on that person’s name be refunded. (They ended up removing the analyst from our project and crediting the hours, along with a bonus.)
Indeed. On the other hand, there's a difference between "I one-prompted some mini LLM" and "A deep-thinking LLM aided me through research with fact-checking, agents, tools and lots of input from me." While both can be phrased with “I asked ChatGPT and it says…” or “According to AI…”, the latter would not annoy me.
> LLMs are incapable of fact checking. They have no concept of facts.
I think you are misreading the GP; they said "A deep-thinking LLM aided me through research *with fact-checking, agents, tools and lots of input from me*", which I read as implying that they did the fact-checking, not the LLM.
I have coworkers who, on a zoom meeting, will respond to questions and discussions by spamming the zoom chat with paste-dumps of ChatGPT, etc. So frustrating and tiresome.
If possible I just go silent when people start copy and pasting ChatGPT at me. Only works in certain cases like Teams/Slack DMs, but it does remove a distraction from my day.
I start responding again when they can be bothered to formulate their own thoughts.
It must be the randomness built into LLMs that makes people think it's something worth sharing. I guess it's no different from sharing a cool Minecraft map with your friends or something. The difference is Minecraft is fun, reading LLM content is not.
I do not use AI for engineering work and never will, because doing the work of thinking for myself is how I maintain the neural capacity for forming my own original thoughts and ideas no AI has seen before.
If anyone gives me an opinion from an AI, they disrespect me and themselves to a point they are dead to me in an engineering capacity. Once someone outsources their brain they are unlikely to keep learning or evolving from that point, and are unlikely to have a future in this industry as they are so easily replaceable.
If this pisses you off, ask yourself why.
It is increasingly important here to make a distinction between using an LLM with and without search. Without search, I agree with you.
But e.g. ChatGPT with search enabled is often an invaluable research tool, and dramatically speeds up finding relevant sources. It basically automates the spidering of references and links, and also handles the basic checks for semantic relevance quite well, and this task requires little real intelligence or thought. Only once you hit a highly specific and niche technical domain will it start to fail you here (since it will match on common-language semantics that do not often align with technical usage).
For a lot of topics, I now have the reverse feeling that a person who has NOT used an LLM to facilitate search—on basic or even intermediate questions—is increasingly more of a concern.
Like any reference or other person, one needs to question whether those ideas fit into their mental models and verify things anyhow. One never could just trust that something is true without at least quick mental tests. AI is no different than other sources here. As was drilled into us in high school, use multiple sources and verify them.
Soc. At the Egyptian city of Naucratis, there was a famous old god, whose name was Theuth; the bird which is called the Ibis is sacred to him, and he was the inventor of many arts, such as arithmetic and calculation and geometry and astronomy and draughts and dice, but his great discovery was the use of letters. Now in those days the god Thamus was the king of the whole country of Egypt; and he dwelt in that great city of Upper Egypt which the Hellenes call Egyptian Thebes, and the god himself is called by them Ammon. To him came Theuth and showed his inventions, desiring that the other Egyptians might be allowed to have the benefit of them; he enumerated them, and Thamus enquired about their several uses, and praised some of them and censured others, as he approved or disapproved of them. It would take a long time to repeat all that Thamus said to Theuth in praise or blame of the various arts. But when they came to letters, This, said Theuth, will make the Egyptians wiser and give them better memories; it is a specific both for the memory and for the wit. Thamus replied: O most ingenious Theuth, the parent or inventor of an art is not always the best judge of the utility or inutility of his own inventions to the users of them. And in this instance, you who are the father of letters, from a paternal love of your own children have been led to attribute to them a quality which they cannot have; for this discovery of yours will create forgetfulness in the learners' souls, because they will not use their memories; they will trust to the external written characters and not remember of themselves. The specific which you have discovered is an aid not to memory, but to reminiscence, and you give your disciples not truth, but only the semblance of truth; they will be hearers of many things and will have learned nothing; they will appear to be omniscient and will generally know nothing; they will be tiresome company, having the show of wisdom without the reality.
Phaedr. Yes, Socrates, you can easily invent tales of Egypt, or of any other country.
For me, I am not sure it has eliminated thinking.
I have recently started to use codex on the command line. Before I put the prompt in, I get an idea in my head of what should happen.
Then I give it the instructions, sometimes clarifying my own thoughts while doing it. These are high-level instructions, not "change this file". Then it bumps away for minutes at a time, after which I diff the results and consider whether they match what I would expect, giving lower-level instructions at that point if appropriate.
I consider whether its solution was better than mine or not, then ask questions around the edges that I thought were wrong.
It turns my work from typing in code to pretty much code design and review. Those are the hard tasks.
> Would you mind sharing what career stage you are in? I don't feel empowered to take such a stance, though I'd like to.
I am 25 years into my career at this point, and co-leading two companies; however, I expect the same from people I work with and mentor. Vibe coders have no chance of keeping up with me in doing things that have never been done before.
Maybe someone vibe codes their way through some webapp stuff and then reviews and cleans it up; I will not stop them. But if someone turns in obvious AI slop that they clearly do not understand, they are dead to me.
I do security auditing and engineering though, which is in very high demand and something LLMs are incredibly bad at right now.
> I do not use AI for engineering work and never will
> Once someone outsources their brain they are unlikely to keep learning or evolving from that point
It doesn't piss me off, it makes me feel sorry for you. Sorry that you're incapable of imagining those with curiosity who use AI to do exactly what you're claiming they don't - learning. The uncurious were uncurious before AI and remain so. They never asked why, they still don't. Similarly, the curious still ask why as much as ever. Rather than Google though, they now have a more reliable source to ask things to, that explains things much better and more quickly.
I hear you grunt! Reliable? Hah! It's a stochastic parrot hallucinating half the time!
Note that I said more reliable than Google, which was the status quo. Google is unreliable. Yes, even when consulting three separate sources. Not that one has the time for that, not in reality.
You've got it the wrong way around. LLMs do the exact opposite: they increase the gap between the curious and the nots, and accelerate the learning-rate gap between them. The nots are in for a tough time: LLMs are so convenient that they'll cruise through life copy-pasting answers, until they're asked to demonstrate competence in a setting where no LLM is available and everything falls apart.
If you still find this hard to imagine, here's how it goes. In your mind LLM usage by definition goes like this - and for the uncurious, this is indeed how it would go.
User: Question. LLM: Answer. End of conversation.
By the curious, it's used like this.
User: Question. LLM: Answer. User: Why A? Why B? LLM: Answer. User: But C. What's the tradeoff? LLM: Answer. User: Couldn't also X? LLM: Answer. User: I'm not familiar with Y, explain.
The problem with current LLMs is that they are so sycophantic that they don't tell you when you're asking the wrong questions. I've been down the path of researching something with an LLM, and I came to a conclusion. I later revisited the subject and this time I instead read a longer-form authoritative source and it was clear that "interrogating" the matter meant I missed important information.
> Is copy-pasting from Wikipedia an "opinion" from Wikipedia?
No, but it's equally not a useful contribution. If Wikipedia says something, then I'm going to link the article and give a quick summary of what in the article relates to whatever my point is.
Not write "Wikipedia says..." and paste the entire article verbatim.
Even that annoys me because who knows how accurate that is at any moment. Wikipedia is great for getting a general intro to a thing, but it is not a source.
I would rather people go find the actual whitepaper or source in the footnotes and give me that, and/or give me their own opinion on it.
I’m in the same boat, and what tipped me there is the ethical non-starter that OpenAI and Anthropic represent. They strip-mined the Web and ripped off copyrighted works en masse, admitting that going through the proper channels was a waste of business resources.
They believe that the entirety of human ingenuity should be theirs at no cost, and then they have the audacity to SELL their ill-gotten collation of that knowledge back to you? All the while persuading world governments that their technology is the new operating system of the 21st century.
On top of which, the most popular systems are proprietary applications running on someone else's machines. After everything GNU showed us for 40 years, I'm surprised programmers are so quick to hand off so much of their process to non-free SaaSS.
I actually do not use any proprietary software of any kind in my work. Any tools I can not alter to my liking are not -my- tools and could be taken away from me or changed at any time.
Why do you believe it's disingenuous when a capable person tells you that they don't need the crutches that you are advertising? We've accomplished millennia of engineering without the need for a bullshit generator, but you somehow assume everyone needs such a tool just because you are such a fanboy for it? Talk about disingenuous...
Using an AI to think for me would be like going to a gym and paying a robot to lift weights for me.
Like sure that is cool that is possible, but if I do not do the work myself I will not get stronger.
Our brains are the same way.
I also do not use a GPS, because there are literally MRI studies showing that relying on one makes an entire section of the brain go dark, compared to London taxi drivers, who are required by law to navigate with their brains.
I also navigate life without a smartphone at all, and it has given me what feels like focus super powers compared to those around me, when in reality probably most people had that level of focus before smartphones were a thing.
All said AI is super interesting when doing specialized work at scale no human has time for, like identifying cancer by training on massive datasets.
No idea if it actually makes me smarter, but I have noticed I have an atypically high level of mental pain tolerance to pursue things many told me were impossible and quickly gave up on.
Eh… your complaint describes every single piece of information available on the internet.
Let’s try it with other stuff:
“Looking at solutions on stack overflow outsources your brain”
“Searching arxiv for literature on a subject outsources your brain”
“Reading a tutorial on something outsources your brain”
There’s nothing that makes ChatGPT et al appreciably different from the above, other than the tendency to hallucinate.
ChatGPT is a better search engine than search engines for me, since it gives links to cite what it’s talking about and I can check those, but it pays attention to precisely what I asked about and generally doesn’t include unrelated crap.
The only complaint I have is the hallucinations, but it just means I have to check its sources, which is exactly the case already for something as mundane as Wikipedia.
Ho hum. Maybe take some time to reevaluate your conclusions here.
That tendency to hallucinate that you so conveniently downplay is a major problem. I'll take reading the reference manual myself all day rather than sifting through the output of a bullshit generator.
Sure it is, as long as those engineers apply an honest effort and learn from their mistakes. Even if they don't do things faster than you initially, at least they learned something.
Unfortunately that logic does not apply to models.
Then I’m lost. I thought this was about the laziness of outsourcing thinking. Why would the outsourcee’s ability to learn impact whether it’s lazy or not?
I do not use books for engineering work and never will, because doing the work of thinking for myself is how I maintain the neural capacity for forming my own original thoughts and ideas no writer has seen before.
If anyone gives me an opinion from a book, they disrespect me and themselves to a point they are dead to me in an engineering capacity. Once someone outsources their brain they are unlikely to keep learning or evolving from that point, and are unlikely to have a future in this industry as they are so easily replaceable.
If this pisses you off, ask yourself why.
(You can replace AI with any resource and it sounds just as silly :P)
Yes, if you find a book that is as bad as AI advice, you should definitely throw it away and never read it. If someone is quoting a known-bad book, you should ignore their advice (and as a courtesy, tell them their book is bad)
It's so strange that pro-AI people don't see this obvious fact and keep trying to compare AI with things that are actually correct.
It's so strange that anti-AI people think the advice and information you can get from a good model (if you know how to operate it well) is less valuable than the advice and information you can get from a book.
That "a good model (if you know how to operate it well)" is doing a lot of lifting. To be sure, there are a lot of bad books, and you can get negative advice from them, but a book has fixed content that can gain and lose a reputation, whereas a model (even a good one!) has highly variable content dependent on "if you know how to operate it well". So when someone or some group that I respect recommends a book, I can read the words with some amount of trust that the content is valuable. When someone quotes a model's response without any commentary or affirmation, it does not inspire any trust that the content is valuable. It just indicates that the person has abdicated their thought process.
I agree that quoting a model's answer to someone else is bad form - you can get a model to say ANYTHING if you prompt it to, so a screenshot of a ChatGPT conversation to try and prove a point is meaningless slop.
I find models vastly more useful than most technical books in my own work because I know how to feed in the right context and then ask them the right questions about it.
There isn't a book on earth that could answer the question "which remaining parts of my codebase still use the .permission_allowed() method and what edge-cases do they have that would prevent them from being upgraded to the new .allowed() mechanism"?
And as long as you don't copy-paste its advice into comments, that's fine.
No one really cares how you found all those .permission_allowed() calls to replace - was it grep, or intense staring, or an AI model. All that matters is that you stand behind it, and act as an author. The original post said it very well:
> ChatGPT isn’t on the team. It won’t be in the post-mortem when things break. It won’t get paged at 2 AM. It doesn’t understand the specific constraints, tech debt, or your business context. It doesn’t have skin in the game. You do.
Further, grep (and any of its similar siblings) works just fine for such a task, is deterministic, won't feed you bullshit, and doesn't charge you tokens to do a worse job than existing free tools will do well. Better yet, from my experience with the dithering pace of LLMs, you'll get your answer quicker, too.
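To make the grep route concrete: a minimal sketch for the mechanical half of that question, i.e. just listing the remaining call sites. The src/ path here is only illustrative, and the edge-case analysis still has to come from a human reading the results.

    # list every remaining call site of the old method (path is illustrative)
    rg -n '\.permission_allowed\(' src/

    # the same with plain POSIX grep
    grep -rn '\.permission_allowed(' src/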
>There isn't a book on earth that could answer the question "which remaining parts of my codebase still use the .permission_allowed() method and what edge-cases do they have that would prevent them from being upgraded to the new .allowed() mechanism"?
You're so close to realising why the book counter argument doesn't make any sense!
> anti-AI people think the advice and information you can get from a good model (if you know how to operate it well) is less valuable than the advice and information you can get from a book
Those people exist and they’re wrong.
More frequently, however, I find I’m judging the model less than its user. If I get an email that smells of AI, I ignore it. That’s partly because I have the luxury to do so. It’s largely because engaging has commonly proven fruitless.
You see a similar effect on HN. Plenty of people use AI to think through problems. But the comments that quote it directly are almost always trash.
"But the comments that quote it directly are almost always trash."
Because the output is almost always trash, and the only way it isn't is if someone re-does the work themselves and then claims it came from the LLM.
These tools are being sold to me as similar to what a Jr engineer offers me, but that's not at all true, because I would fire a Jr who came to me with such bullshit and needed such significant hand-holding as often as what I see coming out of an LLM.
Of course I respect humans, I am a human myself! And I learned a lot from others, asking them (occasionally stupid) questions and listening to their explanations. Doing the same for others is just being fair. Explain a thing and make someone more knowledgeable! Maybe next time _they_ will help you!
This does not apply to AI of course. In most cases, if a person did an AI PR/comment once, they will keep doing AI PRs/comments, so your explanation will be forgotten next time they clear context. Might as well not waste your time and dismiss it right away.
Congratulations on misunderstanding and misrepresenting the point. (This is sarcasm, btw.)
It’s not the source that matters. It’s not the source that he’s complaining about. It’s the nature of the interaction with the source.
I’m not against watching video, but I won’t watch TikTok videos, because they are done in a way that is dangerously addictive. The nature of engagement with TikTok is the issue, not “I can’t learn from electrical devices.”
Each of us must beware of the side effects of using tools. Each kind of tool has its hazards.
I do not use Jr developers for engineering work and never will, because doing the work of a Jr.....
You don't have to outsource your thinking to find value in AI tools; you just have to find the right tasks for them, the same as you would with any developer junior to you.
I'm not going to use AI to engineer some new complex feature of my system but you can bet I'm going to use it to help with refactoring or test writing or a second opinion on possible problems with a module.
> unlikely to have a future in this industry as they are so easily replaceable.
The reality is that you will be unlikely to compete with people who use these tools effectively. Same as the productivity difference between a developer with a good LSP and one without or a good IDE or a good search engine.
When I was a kid I had a text editor and a book and it worked. But now that better tools are around I'm certainly going to make use of them.
> The reality is that you will be unlikely to compete with people who use these tools effectively.
If you looked me or my work up, I think you would likely feel embarrassed by this statement. I have a number of world firsts under my belt that AI would have been unable to meaningfully help with.
It is also unlikely I would have ever developed the skill to do any of that aside from doing everything the hard way.
I just looked and I'm not sure what I'm meant to be seeing that would cause me to feel embarrassed but congrats on whatever it is. How much more could you have developed or achieved if you didn't limit yourself?
Do you do all your coding in ed or are you already using technology to offload brain power and memory requirements in your coding?
I don't know, just a quick glance at that repo and I feel like AI could have written your shell scripts (which took several tries from multiple people to get right) about as well as the humans did.
So you're ok with using tools to offload thinking and memory as long as they are FOSS?
It took some iteration and hands on testing to get that right across multiple operating systems. Also to pass shellcheck, etc.
Even if an LLM -could- do that sort of thing as well as my team and I can, we would lose a lot of the arcane knowledge required to debug things, and spot sneaky bugs, and do code review, if we did not always do this stuff by hand.
It is kind of like how writing things down helps commit them to memory. Typing to a lesser extent does the same.
Regardless those scripts are like <1% of the repo and took a few hours to write by hand. The rest of the repo requires extensive knowledge of linux internals, compiler internals, full source bootstrapping, brand new features in Docker and the OCI specs, etc.
That took a lot of reasoning from humans to get right, in spite of the actual code being just a bunch of shell commands.
There are just no significant shortcuts for that stuff, and again if there were, taking them is likely to rob me of building enough cache in my brain to solve the edge cases.
Also yes, I only use FOSS tools with deterministic behavior I can modify, improve, and rely on to be there year after year, and thus any time spent mastering them is never wasted.
I decided to see if I could get an old Perl and C codebase running via WebAssembly in the browser, having Claude brute-force figuring out how to compile the various components to WASM. Details here: https://simonwillison.net/2025/Oct/22/sloccount-in-webassemb...
I'm not saying it could have created your exact example (I doubt that it could) but you may be under-estimating how promising it's getting for problems of that shape.
I do not doubt that LLMs might some day be able to generate something like my work in stagex, but it would only be because someone trained one on my work and that of other people that insist on solving new problems by hand.
Even then, I would never use it, because it would be some proprietary model I have to pay some corpo for either with my privacy, my money, or both... and they could take it away at any time. I do not believe in or use centralized corpotech. Centralized power is always abused eventually. Also is that regurgitated code under an incompatible license? Who knows.
Also, again, I would rob myself of the experience and neural pathway growth and rote memory that come from doing things myself. I need to lift my own weights to build physical strength just as I need to solve my own puzzles to build patience and memory for obscure details that make me better at auditing the code of others and spotting security bugs other humans and machines miss.
I know when I can get away with LTO, and when I cannot, without causing issues with determinism, and how to track down over linking and under linking. Experience like that you only get by experimenting and compiling shit hundreds of times, and that is why stagex is the first Linux distro to ever hit 100% determinism.
Circling back, no, I am not worried about being unemployable because I do not use LLMs.
And hey, if I am totally wrong and LLMs can create perfectly secure projects better than I can in the future, and spot security bugs better than I can, and I am unemployable, then I will go be an artist or something, because there are always people out there that appreciate hard work done by humans by hand, because that is how I am wired.
> Even then, I would never use it, because it would be some proprietary model I have to pay some corpo for either with my privacy, my money, or both... and they could take it away at any time.
Have you been following the developments in open source / open weight models you can run on your own hardware?
They're getting pretty good now, especially the ones coming out of China. The GLM, Qwen and DeepSeek models out of China are all excellent. Mistral's open weight models (from France) are good too, as are the OpenAI gpt-oss models.
No privacy or money cost involved in running those.
I get your concern about learning more if you do everything yourself. All I can say there is that the rate and depth of technical topics I'm learning has been expanded by my LLM usage because I'm able to take on a much wider range of technical projects, all of which teach me new things.
You're not alone in this - there are many experienced developers who are choosing not to engage with this new family of technology. I've been thinking of it similar to veganism - there are plenty of rational reasons to embrace a vegan lifestyle and I respect people who do it but I've made different choices myself.
You're criticizing me for directly crediting the original here. That's the correct and ethical thing to do!
Honestly, I've seen the occasional bad faith argument from people with a passionate dislike of AI tooling but this one is pretty extreme even by those standards.
I hope you don't ever use open source libraries in your own work.
Actually, my criticism was the result of my own misunderstanding of what you were claiming. My apologies for that, although I'm still unlikely to use these tools based upon the example when my own personal counterexamples have shown me that it's often as much or more work to get there via prompting than it is to simply do the thinking myself. Have a good day.
Originally I tried to get it working loading code directly but as far as I can tell there's no stable CDN build of that, so I had to vendor it instead.
FFS stop it with the “it’s just the same as a human” BS. It’s not just like working with a junior engineer! Please spend 60 seconds genuinely reflecting on that argument before letting it escape like drool from the lips of your writing fingers.
We work with junior engineers because we are investing in them. We will get a return on that investment. We also work with other humans because they are accountable for their actions. AI does not learn and grow anything like the satisfying way that our fellow humans do, and it cannot be held responsible for its actions.
As the OP said, AI is not on the team.
You have ignored the OP’s point, which is not that AI is a useless tool, but that merely being an AI jockey has no future. Of course we must learn to use tools effectively. No one is arguing with that.
I'm not saying it's the same as working with a jr developer. I'm saying that not using something less skilled than yourself for less skilled tasks is stupid and self defeating.
Yes, when someone builds a straw man you ignore it. There is a huge canyon between "never use AI in engineering" (the OP's proposal) and "only use AI for all your engineering" (the OP's complaint).
There's a very good argument for not using tools vended by folks who habitually lie as much as the AI vendors (and their tools). I don't want their fingers anywhere in my engineering org, quite honestly. Given their ethics around intellectual property in general, I must assume that my company's IP is being stolen every time a junior engineer lazily uses one of these tools.
I'm sure you never use any Google or Microsoft products at all, such as Google Search, Maps or Android, and none of the companies and engineering teams you've ever worked with have used such products, given how habitually they lie (and the fact that they're two major AI vendors).
If so, congratulations for being old or belonging to the 0.01%. Good luck finding a first job where that holds in 2025.
Not at all true, though. You see, I expect the Jr will grow and learn from those off-loaded tasks in such a way that they will eventually become another Sr in the atelier. That development of the society of engineers is precisely what I do not wish to ever outsource to some oligarch's rental fleet of bullshit machines.
I think this could pop up as policy at work, and I'd personally push for that. "If you're pasting AI responses without filtering through the lens of your own thoughts and experience..."
Like, it's fine for you to use AI, just like one would use Google. But you wouldn't paste "here are 10 results I got from Google". So don't paste whatever AI said without doing the work, yourself, of reviewing and making sense of it. Don't push that work onto others.
The scenario the author describes is bound to happen more and more frequently, and IMO the way to address it is by evolving the culture and best practices for code reviews.
A simple solution would be to mandate that while posting conversations with AI in PR comments is fine, all actions and suggested changes should be human-generated.
The human-generated actions can't be a lazy "Please look at the AI suggestion and incorporate as appropriate" or "What do you think about this AI suggestion?".
Acceptable comments could be:
- I agree with the AI for xyz reasons, please fix.
- I thought about the AI's suggestions, and here are the pros and cons. Based on that, I feel we should make xyz changes for abc reasons.
If these best practices are documented, and the reviewer does not follow them, the PR author can simply link to the best practices and kindly ask the reviewer to re-review.
Relying heavily on information supplied by LLMs is a problem, but so is this toxic negativity towards technology. It's a tool, sometimes useful, and other times crap. Critical thinking and literacy is the key skill that helps you tell the difference, and a blanket rejection (just like absolute reliance) is the opposite of critical thinking.
I'm starting to run into the other end of this as a reviewer, and I hate it.
Stories full of nonsensical, clearly LLM-generated acceptance requirements containing implementation details which are completely unrelated to how the feature actually needs to work in our product. Fine, I didn't need them anyway.
PRs with those useless, uniformly-formatted LLM-generated descriptions which don't do what a PR description should do, with a half-arsed LLM attempt at summary of the code changes and links to the files in the PR description. It would have been nice if you had told me what your PR is for and what your intent as the author is, and maybe to call out things which were relevant to the implementation I might have "why?" questions about. But fine, I guess, being able to read, understand and evaluate the code is part of my job as a reviewer.
---- < the line
PRs littered with obvious LLM comments you didn't care enough to take out, where something minor and harmless, but _completely pointless_ has been added (as in if you'd read and understood what this code does, you'd have removed it), with an LLM comment left in above it AND at the end of the line, where it feels like I'm the first person to have tried to read and understand the code, and I feel like asking open-ended questions like "Why was this line added?" to get you to actually read and think about what's supposed to be your code, rather than a review comment explaining why it's not needed acting as a direct conduit from me to your LLM's "You're absolutely right!" response.
This absolutely has been my more recent frustration as well, specifically this:
> uniformly-formatted LLM-generated descriptions which don't do what a PR description should do, with a half-arsed LLM attempt at summary of the code changes and links to the files in the PR description. It would have been nice if you had told me what your PR is for and what your intent as the author is, and maybe to call out things which were relevant to the implementation I might have "why?" questions about.
If I want to see what the code changes do, I will read the code. I want your PR description to tell me things like:
- What the tradeoffs, if any, to this implementation are
- If there were potential other approaches you decided not to follow for XYZ reason so that I don't make a comment asking about it
- If there is more work to be done, and if so what it is
- Any impacts this change might have on other systems
- etc.
Sure, if you want to add a handful of sentences summarizing the change at a high level just to get me in context, that's fine, but again if I want to see what changed, I will go look at what changed.
I've come down pretty hard on friends who, when I ask for advice about something, come back with a ChatGPT snippet (mostly D&D-related, not work-related).
I know ChatGPT exists. I could have fucking copied-and-pasted my question myself. I'm not asking you to be the interface between me and it. I'm asking you, what you think, what your thoughts and opinions are.
I just throw everyone who tells me “chatgpt said” or “ask chatgpt” into the idiot pile in my brain. It’s not nice, but usually these are people who tell me incorrect things in the first place, or turn in half-finished, unoptimized work. Maybe LLMs are just a way to identify the mentally lazy?
No one I know who says this kind of thing would read this article. People love being lazy.
It's kinda hilarious to watch people make themselves redundant. Like you're essentially saying "you don't need me, you could have just asked ChatGPT for a review".
I wrote before about just sending me the prompt[0], but if your prompt is literally my code then I don't need you at all.
I'm surprised nobody else has gone meta yet, so I suppose I must. Anyway, "ChatGPT said this" ... about this thread.
----
In many of the Hacker News comments, a core complaint was not just that AI is sometimes used lazily, but that LLM outputs are fundamentally unreliable—that they generate confidently stated nonsense (hallucinations, bullshit in the Frankfurtian philosophical sense: speech unconcerned with truth).
Here’s a more explicitly framed summary of that sentiment:
⸻
Central Critique: AI as a Bullshit Generator
Many commenters argue that:
• LLMs don’t “know” things—they generate plausible language based on patterns, not truth.
• Therefore, any use of them without rigorous verification is inherently flawed.
• Even when they produce correct answers, users can’t trust them without external confirmation, which defeats many of the supposed productivity gains.
• Some assert that AI output should be treated not as knowledge but as an unreliable guess-machine.
Examples of the underlying sentiment:
• “LLMs produce bullshit that looks authoritative, and people post it without doing the work to separate truth from hallucination.”
• “It costs almost nothing to generate plausible nonsense now, and that cheapness is actively polluting technical discourse.”
• “‘I asked ChatGPT’ is not a disclaimer; it’s an admission that you didn’t verify anything.”
A few participants referenced Harry Frankfurt’s definition of bullshit:
• The bullshitter’s goal isn’t to lie (which requires knowledge of the truth), but simply to produce something that sounds right.
• Many commenters argue LLMs embody this: they’re indifferent to truth, tailored to maximize coherence, authority, and user satisfaction.
This wasn’t a side issue—it was a core rejection of uncritical AI use.
⸻
So to clarify: the strong anti-AI sentiment isn’t just about laziness.
It’s about:
• Epistemic corruption: degrading the reliability of discourse.
• False confidence: turning uncertainty into authoritative prose.
• Pollution of knowledge spaces: burying truth under fluent fabrication.
It is similar enough. People would just find the first thing in a disagreement that had a headline corroborating their opinion; this was often Wikipedia or the summary on Google.
People did this with code as well. DDG used to show you the first Stack Overflow post that was close to what you searched. However, sometimes this was obviously wrong, and people would just copy and paste it wholesale.
I think the difference is people use those as citations for specific facts, not to logically analyze your code. If you're asked how a technical detail of C++ works, then simply citing Google is acceptable. If you're asked about broader details that depend on certain technicalities specific to your codebase, Googling would be silly.
Does that particularly matter in the context of this post? Either way, it sounds like OP was handed homework by the responder, and farming that out to yet another LLM seems kind of pointless, when OP could just ask the LLM for its opinion directly.
While LLM code feedback might be wordy and dubious, I have personally found that asking Claude to review a PR, and the related feedback, does provide some value. From my perspective anyway, Claude seems able to cut through the BS and say whether a recommendation is worth the squeeze, or in which contexts the feedback has merit or is just pedantic. Of course, your mileage may vary, as they say.
Careful with this idea, I had someone take a thread we were engaged in and feed it to an LLM, asking it to confirm his feelings about the conversation, only to post it back to the group thread. It was used to attack me personally in a public space.
Fortunately
1. The person was transparent about it, even posting a link to the chat session
2. They had to use a follow-on prompt to really engage the sycophancy
3. The forum admins stepped in to speak to this individual even before I was aware of it
I actually did what you suggested and fed everything back into another LLM, but did so with various prompts to test things out. The responses were... interesting; the positive prompt did return something quite good. A (paraphrased) quote from it:
"LLMs are a powerful rhetorical tool. Bringing one to a online discussion is like bringing a gun to a knife fight."
That being said, how you prompt will get you wildly different responses from the same (other) inputs. I was able to get it to play sycophant to my (not actually) hurt feelings.
Counterpoint: "Chatgpt said this" is an entirely legitimate approach in many contexts and this attitude is toxic.
One example: Code reviews are inherently asymmetrical. You may have spent days building up context, experimenting, and refactoring to make a PR. Then the reviewer is expected to have meaningful insight in (generously) an hour? AI code reviews help bring balance; it may notice stuff a human wouldn't, and it's ok for the human reviewer to say "hey, chatgpt says this is an issue but I'm not sure - what do you think?"
We run all our PRs through automated (claude) reviews, and it helps a LOT. (A rough sketch of what that wiring can look like follows below.)
Another example: Lots of times we have several people debugging an issue and nobody has full context. Folks are looking at code, folks are running LLM prompts, folks are searching slack, etc. Sometimes the LLMs come up with good ideas but nobody is sure, because none of us have all the context we need. "Chatgpt says..." is a way of bringing it to everyone's attention.
I think this can be generalized to forum posts. "Chatgpt says" is similar to "Wikipedia says". It's not the end of the conversation, but it helps get everyone on the same page, especially when nobody is an expert.
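On the automated PR reviews mentioned above: a minimal sketch of what that wiring can look like, assuming the GitHub CLI and some LLM tooling of your choice. llm-review is a placeholder, not a real command, and hosted review bots will obviously differ.

    #!/bin/sh
    # Sketch: post an LLM-generated review as a PR comment. `llm-review` is a placeholder.
    PR_NUMBER="$1"
    gh pr diff "$PR_NUMBER" > /tmp/pr.diff        # fetch the diff via the GitHub CLI
    llm-review < /tmp/pr.diff > /tmp/review.md    # swap in whatever model/tooling you actually use
    gh pr comment "$PR_NUMBER" --body-file /tmp/review.md

The human reviewer still reads both the diff and the bot's comment; the point is only to reduce the context asymmetry, not to replace the review.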
I'd agree. Certainly mentioning that information came from an LLM is important so people know to discount or manage it. It's possibly incorrect but still useful as an averaged answer of some parts of the internet.
Certainly citing GPT is better than just assuming it's right and not citing it along with an assertion.
I’ve noticed this trend in comments across the internet. Someone will ask or say something, the someone else will reply with “I asked ChatGPT and it says…” or “According to AI…”
ChatGPT is free and available to everyone, and so are a dozen other LLMs. If the person making the comment wanted to know what ChatGPT had to say, they could just ask it themselves. I guess people feel like they’re being helpful, but I just don’t get it.
Though with that said, I’m happy when they at least say it’s from an LLM. At least then I know I can ignore It. Worse is replying as if it’s their own answer, but really it’s just copy pasted from an LLM. Those are more insidious.
Isn't it the modern equivalent of "let me Google that for you"?
My experience is that the vast majority of people do 0 research (AI assisted or not) before asking questions online. Questions that could have usually been answered in a few seconds if they had tried.
If someone preface a question by saying they've done their research but would like validation, then yes it's in incredibly poor taste.
There's seemingly a difference in motive. The people sharing AI responses seem to be from people fascinated by AI generally, and want to share the response.
The "let me Google that for you" was more trying to get people to look up trivial things on their own, rather than query some forum repeatedly.
exactly, the "i asked chatgpt" people give off 'im helping' vibes but in reality they are just annoying and clogging up the internet with spam that nobody asked for
they're more clueless than condescending
It is the modern equivalent of "let me Google that for you" except for that most of the people doing it don't seem to realize that they're telling the person to fuck off, while that absolutely was the intent with lmfgtfy.
> Isn't it the modern equivalent of "let me Google that for you"?
No. With Google you get many answers. With AI you get one. Also we know that AI is unreliable with some possibility, it’s highly probable that you can get a better source on Google than that. This is especially bad when the question is something niche. So, it’s definitely a worse version of lmgtfy.
> Isn't it the modern equivalent of "let me Google that for you"?
When you put it that way I guess it kind of is.
> If someone preface a question by saying they've done their research but would like validation, then yes it's in incredibly poor taste.
100% agree with you there
>Isn't it the modern equivalent of "let me Google that for you"?
Which was just as irritating.
It was meant to be - the whole point of that meme was to shame/annoy lazy people into looking things up themselves, instead of asking random strangers on the internet.
And also just spouting unhinged shit divorced from reality, which is pretty common. You get tired enough dealing with these people that an AI response is warranted.
Let me google that for you was when a person e.g. asked "what's a tomato?", and you'd paste in the link http://www.google.com/search?q=what's+a+tomato
That's not like pasting in a screenshot or a copy/paste of an AI answer, it's being intentionally dismissive. You weren't actually doing the "work" for them, you were calling them lazy.
The way I usually see the AI paste being used is from people trying to refute something somebody said, but about a subject that they don't know anything about.
To modifying a hitchism.
> What can be asserted without evidence can also be dismissed without evidence.
Becomes
> That which can be asserted without thought can be dismissed without thought.
Since no current AI thinks but humans do I’m just going to dismiss anything an AI says out of hand because you are pushing the cost of parsing what it said onto me and off you and nah, ain’t accepting that.
Elegant and correct. It seems so obvious to me that if someone wanted a ChatGPT answer they would have sought it out for themselves and yet... it's happened to me more than a few times. I think some people think they are being clever and resourceful (or 'efficient') but it just dilutes their own authority on that which they were asked to opine.
People often don't want an AI answer because it won't readily agree with their worldview, particularly in politics.
That's wonderfully succinct argument.
It hinges assuming that ChatGPT does not thinking, which is clearly false.
Hell, Feynman said as much in 1985. https://www.youtube.com/watch?v=ipRvjS7q1DI
I really don't think Feynman said ChatGPT was "thinking" in 1985.
He was referring to computers and program execution in general.
He was wrong. He wasn't right about everything just because he's reddit's favourite physicist (after Neil Degrass Tyson).
The irony is that the disclosure of “I asked ChatGPT and it says…” is done as a courtesy to let the reader be informed. Given the increasing backlash against that disclosure, people will just stop disclosing which is worse for everyone.
The only workaround is to just text as-is and call it out when it's wrong/bad, AI-generated or otherwise, as we've done before 2023.
I think it's fine to not disclose it. Like, don't you find "Sent from my iPhone" that iPhones automatically add to emails annoying? Technicalities like that don't bring anything to the conversation.
I think typically, the reason people are disclosing their usage of LLMs is that they want offload responsibility. To me it's important to see them taking responsibility for their words. You wouldn't blame Google for bad search results, would you? You can only blame the entity that you can actually influence.
That’s true. Unfortunately the ideal takeaway from that sentiment should be “don’t reply with copy pasted LLM answers”, but I know that what you’re saying will happen instead.
Exactly, it is important and courteous still to cite your resources and tools.
I find a good workaround is to just say "some very quick research of my own leads me to ...", and then summarize what ChatGPT said. Especially if you are using e.g. an LLM with search enabled, this is borderline almost literally true, but makes it clear you aren't just stating something completely on your own.
Of course, you should still actually verify the outputs. If you do, there is not much wrong with not mentioning using the LLM, since you've don't the most important thing anyway (not be lazy in your response). If you don't verify, you had better say that.
Except it isn't. It's a disclosure to say "If I'm wrong, it's not my fault".
Because if they'd actually read the output, then cross-checked it and developed some confidence in the opinion, they wouldn't put what they perceive as the most important part up front ("I used ChatGPT") - they'd put the conclusion.
It isn't this cut and dry. You can cross-check and verify, but still have blind spots (or know that the tools have biases as well), and so consider it still important to mention the LLM use up front.
Or, if you preface a comment with "I am not an expert, but...", it is not often about seeking to avoid all blame, but to simply give the reader reasonable context.
Of course, you are right, it is also sometimes just lazy dodging.
> someone else will reply with “I asked ChatGPT and it says…” or “According to AI…”
I had a consultant I’m working with have an employee do that to me. I immediately insisted that every hour they’ve billed on that person’s name be refunded.
How did that go?
> How did that go?
They removed the analyst from our project and credited the hours along with a bonus.
Indeed. On the other hand, there's a difference between "I one-prompted some mini LLM" and "A deep-thinking LLM aided me through research with fact-checking, agents, tools and lots of input from me." While both can be phrased with “I asked ChatGPT and it says…” or “According to AI…”, the latter would not annoy me.
LLMs are incapable of fact checking. They have no concept of facts.
I think you are misreading the GP, they said "A deep-thinking LLM aided me through research *with fact-checking, agents, tools and lots of input from me*", which I read as implying they did the fact checking, and not the LLM.
I have coworkers who, on a zoom meeting, will respond to questions and discussions by spamming the zoom chat with paste-dumps of ChatGPT, etc. So frustrating and tiresome.
If possible I just go silent when people start copy and pasting ChatGPT at me. Only works in certain cases like Teams/Slack DMs, but it does remove a distraction from my day.
I start responding again when they can be bothered to formulate their own thoughts.
It must be the randomness built into LLMs that makes people think it's something worth sharing. I guess it's no different from sharing a cool Minecraft map with your friends or something. The difference is Minecraft is fun, reading LLM content is not.
I do not use AI for engineering work and never will, because doing the work of thinking for myself is how I maintain the neural capacity for forming my own original thoughts and ideas no AI has seen before.
If anyone gives me an opinion from an AI, they disrespect me and themselves to a point they are dead to me in an engineering capacity. Once someone outsources their brain they are unlikely to keep learning or evolving from that point, and are unlikely to have a future in this industry as they are so easily replaceable.
If this pisses you off, ask yourself why.
It is increasingly incredibly important here to make a distinction between using an LLM with and without search. Without search, I agree with you.
But e.g. ChatGPT with search enabled is often an invaluable research tool, and dramatically speeds up finding relevant sources. It basically automates the spidering of references and links, and also handles the basic checks for semantic relevance quite well, and this task requires little real intelligence or thought. Only once you hit a highly specific and niche technical domain will it start to fail you here (since it will match on common-language semantics that do not often align with technical usage).
For a lot of topics, I now have the reverse feeling that a person who has NOT used an LLM to facilitate search—on basic or even intermediate questions—is increasingly more of a concern.
Like any reference or other person, one needs to question whether those ideas fit into their mental models and verify things anyhow. One never could just trust that something is true without at least quick mental tests. AI is no different than other sources here. As was drilled into us in high school, use multiple sources and verify them.
For me, I am not sure it has eliminated thinking.
I have recently started to use codex on the command line. Before I put the prompt in, I get an idea in my head of what should happen.
Then I give it the instructions, sometimes clarifying my own thoughts while doing it. These are high-level instructions, not "change this file". Then it chugs away for minutes at a time, after which I diff the results and consider whether they match what I would expect. At that point I give lower-level instructions if appropriate.
I consider whether its solution was better than mine, then ask questions around the edges that I thought were wrong.
It turns my work from typing in code into mostly code design and review. These are the hard tasks.
This is absolutely spot on how I feel about all of it.
Would you mind sharing what career stage you are in? I don't feel empowered to take such a stance, though I'd like to.
I am 25 years into my career at this point and co-leading two companies; however, I expect the same from the people I work with and mentor. Vibe coders have no chance of keeping up with me in doing things that have never been done before.
Maybe someone vibe codes their way through some webapp stuff and they review and clean it up; I will not stop them. But if someone turns in obvious AI slop that they clearly do not understand, they are dead to me.
I do security auditing and engineering though, which is in very high demand and something LLMs are incredibly bad at right now.
> I do not use AI for engineering work and never will
> Once someone outsources their brain they are unlikely to keep learning or evolving from that point
It doesn't piss me off, it makes me feel sorry for you. Sorry that you're incapable of imagining those with curiosity who use AI to do exactly what you're claiming they don't - learning. The uncurious were uncurious before AI and remain so. They never asked why, they still don't. Similarly, the curious still ask why as much as ever. Rather than Google though, they now have a more reliable source to ask things to, that explains things much better and more quickly.
I hear you grunt! Reliable? Hah! It's a stochastic parrot hallucinating half the time!
Note that I said more reliable than Google, which was the status quo. Google is unreliable. Yes, even when consulting three separate sources. Not that one has the time for that, not in reality.
You've got it the wrong way around. LLMs do the exact opposite - they increase the gap between the curious and the nots, and they accelerate the learning-rate gap between them. The nots... they're in for a tough time. LLMs are so convenient, they'll cruise through life copypasting their answers, until they are asked to demonstrate competence in a setting where none are available and everything falls apart.
If you still find this hard to imagine, here's how it goes. In your mind LLM usage by definition goes like this - and for the uncurious, this is indeed how it would go.
User: Question. LLM: Answer. End of conversation.
By the curious, it's used like this.
User: Question. LLM: Answer. User: Why A? Why B? LLM: Answer. User: But C. What's the tradeoff? LLM: Answer. User: Couldn't also X? LLM: Answer. User: I'm not familiar with Y, explain.
The problem with current LLMs is that they are so sycophantic that they don't tell you when you're asking the wrong questions. I've been down the path of researching something with an LLM, and I came to a conclusion. I later revisited the subject and this time I instead read a longer-form authoritative source and it was clear that "interrogating" the matter meant I missed important information.
Is copy-pasting from Wikipedia an "opinion" from Wikipedia?
No, but it's also equally not a useful contribution. If wikipedia says something then I'm going to link the article, then give a quick summary of what in the article relates to whatever my point is.
Not write "Wikipedia says..." and paste the entire article verbatim.
Even that annoys me because who knows how accurate that is at any moment. Wikipedia is great for getting a general intro to a thing, but it is not a source.
I would rather people go find the actual whitepaper or source in the footnotes and give me that, and/or give me their own opinion on it.
I’m in the same boat, and what tipped me there is the ethical non-starter that OpenAI and Anthropic represent. They strip-mined the Web and ripped off copyrighted works in meatspace, admitting that going through the proper channels was a waste of business resources.
They believe that the entirety of human ingenuity should be theirs at no cost, and then they have the audacity to SELL their ill-gotten collation of that knowledge back to you? All the while persuading world governments that their technology is the new operating system of the 21st century.
Give me a dystopian break, honestly.
On top of which, the most popular systems are proprietary applications running on someone else's machines. After everything GNU showed us for 40 years, I'm surprised programmers are so quick to hand off so much of their process to non-free SaaSS.
<3
> I do not use AI for engineering work and never will
So, working with CLAUDE doesn't count. Gotcha.
> If this pisses you off, ask yourself why.
It doesn't piss me off, but your comment is disingenuous at best.
I really dislike how the companies try to anthropomorphize their software offerings.
At my previous company they called it 'sparring with <name of the software>'. You don't 'work' with Claude.
You use the software, you instruct it what to do. And it gives you an output that you can then (hopefully) utilize. It's not human.
I actually do not use any proprietary software of any kind in my work. Any tools I can not alter to my liking are not -my- tools and could be taken away from me or changed at any time.
Why do you believe it's disingenuous when a capable person tells you that they don't need the crutches that you are advertising? We've accomplished millennia of engineering without the need for a bullshit generator, but you somehow assume everyone needs such a tool just because you are such a fanboy for it? Talk about disingenuous...
> If this pisses you off, ask yourself why.
Why would it piss me off that you’re so closed minded about an incredible technology?
Using an AI to think for me would be like going to a gym and paying a robot to lift weights for me.
Like sure that is cool that is possible, but if I do not do the work myself I will not get stronger.
Our brains are the same way.
I also do not use a GPS, because there are literally studies with MRI scans showing that an entire section of the brain goes dark in GPS users compared to London taxi drivers, who are required by law to navigate with their brains.
I also navigate life without a smartphone at all, and it has given me what feels like focus super powers compared to those around me, when in reality probably most people had that level of focus before smartphones were a thing.
All said AI is super interesting when doing specialized work at scale no human has time for, like identifying cancer by training on massive datasets.
All tools have uses and abuses.
> Using an AI to think for me
People are using LLMs to generate code without doing this.
> Our brains are the same way.
How many IQ points do you gain per year of subjecting yourself to this?
No idea if it actually makes me smarter, but I have noticed I have an atypically high level of mental pain tolerance to pursue things many told me were impossible and quickly gave up on.
Sounds fun!
Eh… your complaint describes every single piece of information available on the internet.
Let’s try it with other stuff:
“Looking at solutions on stack overflow outsources your brain”
“Searching arxiv for literature on a subject outsources your brain”
“Reading a tutorial on something outsources your brain”
There’s nothing that makes ChatGPT et al appreciably different from the above, other than the tendency to hallucinate.
ChatGPT is a better search engine than search engines for me, since it gives links to cite what it’s talking about and I can check those, but it pays attention to precisely what I asked about and generally doesn’t include unrelated crap.
The only complaint I have is the hallucinations, but it just means I have to check its sources, which is exactly the case already for something as mundane as Wikipedia.
Ho hum. Maybe take some time to reevaluate your conclusions here.
That tendency to hallucinate that you so conveniently downplay is a major problem. I'll take reading the reference manual myself all day rather than sifting through the output of a bullshit generator.
Is it OK to outsource to engineers who are either more senior or more junior, or must one do every aspect of every project entirely oneself?
Sure it is, as long as those engineers apply an honest effort and learn from their mistakes. Even if they don't do things faster than you initially, at least they learned something.
Unfortunately that logic does not apply to models.
Then I’m lost. I thought this was about the laziness of outsourcing thinking. Why would the outsourcee’s ability to learn impact whether it’s lazy or not?
LLM are plenty useful, don't get me wrong, but:
If your interaction with the junior dev is not much different than interacting with an LLM, something is off.
Training a junior dev will make you a better dev. Teaching is learning. And a junior dev will ask questions that challenge your assumptions.
It's the opposite of "outsourcing."
I do not use books for engineering work and never will, because doing the work of thinking for myself is how I maintain the neural capacity for forming my own original thoughts and ideas no writer has seen before.
If anyone gives me an opinion from a book, they disrespect me and themselves to a point they are dead to me in an engineering capacity. Once someone outsources their brain they are unlikely to keep learning or evolving from that point, and are unlikely to have a future in this industry as they are so easily replaceable.
If this pisses you off, ask yourself why.
(You can replace AI with any resource and it sounds just as silly :P)
Yes, if you find a book that is as bad as AI advice, you should definitely throw it away and never read it. If someone is quoting a known-bad book, you should ignore their advice (and as a courtesy, tell them their book is bad)
It's so strange that pro-AI people don't see this obvious fact and keep trying to compare AI with things that are actually correct.
It's so strange that anti-AI people think the advice and information you can get from a good model (if you know how to operate it well) is less valuable than the advice and information you can get from a book.
That "a good model (if you know how to operate it well)" is doing a lot of lifting. To be sure, there are a lot of bad books, and you can get negative advice from them, but a book has fixed content that can gain and lose a reputation, whereas a model (even a good one!) has highly variable content dependent on "if you know how to operate it well". So when someone or some group that I respect recommends a book, I can read the words with some amount of trust that the content is valuable. When someone quotes a model's response without any commentary or affirmation, it does not inspire any trust that the content is valuable. It just indicates that the person has abdicated their thought process.
I agree that quoting a model's answer to someone else is bad form - you can get a model to say ANYTHING if you prompt it to, so a screenshot of a ChatGPT conversation to try and prove a point is meaningless slop.
I find models vastly more useful than most technical books in my own work because I know how to feed in the right context and then ask them the right questions about it.
There isn't a book on earth that could answer the question "which remaining parts of my codebase still use the .permission_allowed() method and what edge-cases do they have that would prevent them from being upgraded to the new .allowed() mechanism"?
And as long as you don't copy-paste its advice into comments, that's fine.
No one really cares how you found all those .permission_allowed() calls to replace - was it grep, intense staring, or an AI model. All that matters is that you stand behind it and act as an author. The original post said it very well:
> ChatGPT isn’t on the team. It won’t be in the post-mortem when things break. It won’t get paged at 2 AM. It doesn’t understand the specific constraints, tech debt, or your business context. It doesn’t have skin in the game. You do.
Further, grep (and any of its similar siblings) works just fine for such a task, is deterministic, won't feed you bullshit, and doesn't charge you tokens to do a worse job than existing free tools will do well. Better yet, from my experience with the dithering pace of LLMs, you'll get your answer quicker, too.
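To make that concrete, here's a minimal sketch of the deterministic route (a plain grep -rn '\.permission_allowed(' would do the same job; the Python version below just assumes a Python codebase and reuses the illustrative method name from the comment above):

    # Minimal sketch: deterministically list every call site of a method.
    # The ".permission_allowed(" name is only the example from above.
    from pathlib import Path

    METHOD = ".permission_allowed("

    def find_call_sites(root="."):
        hits = []
        for path in Path(root).rglob("*.py"):
            text = path.read_text(errors="ignore")
            for lineno, line in enumerate(text.splitlines(), start=1):
                if METHOD in line:
                    hits.append((str(path), lineno, line.strip()))
        return hits

    if __name__ == "__main__":
        for path, lineno, line in find_call_sites():
            print(f"{path}:{lineno}: {line}")

Same output every run, no tokens spent, and the edge-case analysis still has to happen in a human head either way.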
>There isn't a book on earth that could answer the question "which remaining parts of my codebase still use the .permission_allowed() method and what edge-cases do they have that would prevent them from being upgraded to the new .allowed() mechanism"?
You're so close to realising why the book counter argument doesn't make any sense!
> anti-AI people think the advice and information you can get from a good model (if you know how to operate it well) is less valuable than the advice and information you can get from a book
Those people exist and they’re wrong.
More frequently, however, I find I’m judging the model less than its user. If I get an email that smells of AI, I ignore it. That’s partly because I have the luxury to do so. It’s largely because engaging has commonly proven fruitless.
You see a similar effect on HN. Plenty of people use AI to think through problems. But the comments that quote it directly are almost always trash.
"But the comments that quote it directly are almost always trash."
Because the output is almost always trash, and it takes re-doing the work and then claiming that it came from LLMs for it not to be.
These tools are being sold to me as similar to what a Jr engineer offers me, but that's not at all true because I would fire a Jr that came to me with such bullshit and needing such significant hand-holding so often as what I see coming out of an LLM.
So if anyone with an IQ below 120 gives you their opinion, is that disrespectful because they are stupid?
---
It’s interesting that we have to respect human “stupid” opinions but anything from AI is discarded immediately.
I’d advocate for respecting any opinion, and considering it a good, or at least good-willed, one.
Of course I respect humans, I am a human myself! And I learned a lot from others, asking them (occasionally stupid) questions and listening to their explanations. Doing the same for others is just being fair. Explain a thing and make someone more knowledgeable! Maybe next time _they_ will help you!
This does not apply to AI of course. In most cases, if a person did an AI PR/comment once, they will keep doing AI PRs/comments, so your explanation will be forgotten next time they clear context. Might as well not waste your time and dismiss it right away.
Congratulations on misunderstanding and misrepresenting the point. (This is sarcasm, btw.)
It’s not the source that matters. It’s not the source that he’s complaining about. It’s the nature of the interaction with the source.
I’m not against watching video, but I won’t watch TikTok videos, because they are done in a way that is dangerously addictive. The nature of engagement with TikTok is the issue, not “I can’t learn from electrical devices.”
Each of us must beware of the side effects of using tools. Each kind of tool has its hazards.
Yeah except it's not quite the same thing, is it?
The fact that you're presenting this as a comically absurd comparison tells me that you know well that it's an absurd comparison.
At least you can counter with an argument. You just seem to agree both are absurd.
Nah, I thought OP was spot on. A book isn't in the same class of things as an automated bullshit generator.
What is this new breed of interactive books that give you half baked opinions and incorrect facts in response to a prompt?
It's called a "scam". You're welcome.
I do not use Jr developers for engineering work and never will, because doing the work of a Jr.....
You don't have to outsource your thinking to find value in AI tools you just have to find the right tasks for them. The same as you would with any developer jr to you.
I'm not going to use AI to engineer some new complex feature of my system but you can bet I'm going to use it to help with refactoring or test writing or a second opinion on possible problems with a module.
> unlikely to have a future in this industry as they are so easily replaceable.
The reality is that you will be unlikely to compete with people who use these tools effectively. Same as the productivity difference between a developer with a good LSP and one without or a good IDE or a good search engine.
When I was a kid I had a text editor and a book and it worked. But now that better tools are around I'm certainly going to make use of them.
> The reality is that you will be unlikely to compete with people who use these tools effectively.
If you looked me or my work up, I think you would likely feel embarrassed by this statement. I have a number of world firsts under my belt that AI would have been unable to meaningfully help with.
It is also unlikely I would have ever developed the skill to do any of that aside from doing everything the hard way.
I just looked and I'm not sure what I'm meant to be seeing that would cause me to feel embarrassed but congrats on whatever it is. How much more could you have developed or achieved if you didn't limit yourself?
Do you do all your coding in ed or are you already using technology to offload brain power and memory requirements in your coding?
AI would have been near useless when I was creating https://stagex.tools https://codeberg.org/stagex/stagex, for instance.
Also I use VIM. Any FOSS tools with predictable deterministic behavior I can fully control are fine.
I don't know, just a quick glance at that repo and I feel like AI could have written your shell scripts, which took several tries from multiple people to get right, about as well as the humans did.
So you're OK with using tools to offload thinking and memory as long as they are FOSS?
Take this one for example https://codeberg.org/stagex/stagex/src/branch/main/src/compa...
It took some iteration and hands on testing to get that right across multiple operating systems. Also to pass shellcheck, etc.
Even if an LLM -could- do that sort of thing as well as my team and I can, we would lose a lot of the arcane knowledge required to debug things, and spot sneaky bugs, and do code review, if we did not always do this stuff by hand.
It is kind of like how writing things down helps commit them to memory. Typing to a lesser extent does the same.
Regardless those scripts are like <1% of the repo and took a few hours to write by hand. The rest of the repo requires extensive knowledge of linux internals, compiler internals, full source bootstrapping, brand new features in Docker and the OCI specs, etc.
Absolutely 0 chance an LLM could have helped with bootstrapping a primitive c toolchain from 180 bytes of x86 machine code like this: https://codeberg.org/stagex/stagex/src/branch/main/packages/...
That took a lot of reasoning from humans to get right, in spite of the actual code being just a bunch of shell commands.
There are just no significant shortcuts for that stuff, and again if there were, taking them is likely to rob me of building enough cache in my brain to solve the edge cases.
Also yes, I only use FOSS tools with deterministic behavior I can modify, improve, and rely on to be there year after year, and thus any time spent mastering them is never wasted.
That x86 machine code link reminded me of an LLM project I did just last week - https://tools.simonwillison.net/sloccount
I decided to see if I could get an old Perl and C codebase running via WebAssembly in the browser by having Claude brute-force figuring out how to compile the various components to WASM. Details here: https://simonwillison.net/2025/Oct/22/sloccount-in-webassemb...
Here are notes it wrote for me on the compilation process it figured out: https://github.com/simonw/tools/blob/473e89edfebc27781b43443...
I'm not saying it could have created your exact example (I doubt that it could) but you may be under-estimating how promising it's getting for problems of that shape.
I do not doubt that LLMs might some day be able to generate something like my work in stagex, but it would only be because someone trained one on my work and that of other people that insist on solving new problems by hand.
Even then, I would never use it, because it would be some proprietary model I have to pay some corpo for either with my privacy, my money, or both... and they could take it away at any time. I do not believe in or use centralized corpotech. Centralized power is always abused eventually. Also is that regurgitated code under an incompatible license? Who knows.
Also, again, I would rob myself of the experience and neural pathway growth and rote memory that come from doing things myself. I need to lift my own weights to build physical strength just as I need to solve my own puzzles to build patience and memory for obscure details that make me better at auditing the code of others and spotting security bugs other humans and machines miss.
I know when I can get away with LTO, and when I cannot, without causing issues with determinism, and how to track down over linking and under linking. Experience like that you only get by experimenting and compiling shit hundreds of times, and that is why stagex is the first Linux distro to ever hit 100% determinism.
Circling back, no, I am not worried about being unemployable because I do not use LLMs.
And hey, if I am totally wrong and LLMs can create perfectly secure projects better than I can in the future, and spot security bugs better than I can, and I am unemployable, then I will go be an artist or something, because there are always people out there that appreciate hard work done by humans by hand, because that is how I am wired.
> Even then, I would never use it, because it would be some proprietary model I have to pay some corpo for either with my privacy, my money, or both... and they could take it away at any time.
Have you been following the developments in open source / open weight models you can run on your own hardware?
They're getting pretty good now, especially the ones coming out of China. The GLM, Qwen and DeepSeek models out of China are all excellent. Mistral's open weight models (from France) are good too, as are the OpenAI gpt-oss models.
No privacy or money cost involved in running those.
I get your concern about learning more if you do everything yourself. All I can say there is that the rate and depth of technical topics I'm learning has been expanded by my LLM usage because I'm able to take on a much wider range of technical projects, all of which teach me new things.
You're not alone in this - there are many experienced developers who are choosing not to engage with this new family of technology. I've been thinking of it similar to veganism - there are plenty of rational reasons to embrace a vegan lifestyle and I respect people who do it but I've made different choices myself.
"This tool uses the WebAssembly build of Perl running actual SLOCCount algorithms from licquia/sloccount."
The best form of (AI) plagiarism is to simply wrap the original tool in your own facade and pretend like you built anything of value.
Is this intended to be a bad joke?
You're criticizing me for directly crediting the original here. That's the correct and ethical thing to do!
Honestly, I've seen the occasional bad faith argument from people with a passionate dislike of AI tooling but this one is pretty extreme even by those standards.
I hope you don't ever use open source libraries in your own work.
Actually, my criticism was the result of my own misunderstanding of what you were claiming. My apologies for that, although I'm still unlikely to use these tools based upon the example when my own personal counterexamples have shown me that it's often as much or more work to get there via prompting than it is to simply do the thinking myself. Have a good day.
Thanks - this was a misunderstanding, apology accepted!
For whatever it's worth, this is exactly the kind of awful that I never want in any code base that I'm working on:
<https://github.com/simonw/tools/blob/473e89edfebc27781b43443...>
At least run a pretty-printer on the code so that it can be reviewed by anything but a robot.
Part of developing a good software system is about exercising taste in vendored-in libraries and, especially, the structure around them.
P.S. I've gone to look at other chunks of Javascript and see that I was unlucky to grab this steaming pile first.
That was vendored in from this project: https://webperl.zero-g.net/ - it's one of the files distributed in the zip file listed here: https://webperl.zero-g.net/using.html#basic-usage
Originally I tried to get it working loading code directly but as far as I can tell there's no stable CDN build of that, so I had to vendor it instead.
FFS stop it with the “it’s just the same as a human” BS. It’s not just like working with a junior engineer! Please spend 60 seconds genuinely reflecting on that argument before letting it escape like drool from the lips of your writing fingers.
We work with junior engineers because we are investing in them. We will get a return on that investment. We also work with other humans because they are accountable for their actions. AI does not learn and grow anything like the satisfying way that our fellow humans do, and it cannot be held responsible for its actions.
As the OP said, AI is not on the team.
You have ignored the OP’s point, which is not that AI is a useless tool, but that merely being an AI jockey has no future. Of course we must learn to use tools effectively. No one is arguing with that.
You fanboys drive me nuts.
I'm not saying it's the same as working with a jr developer. I'm saying that not using something less skilled than yourself for less-skilled tasks is stupid and self-defeating.
Yes, when someone builds a straw man you ignore it. There is a huge canyon between never using AI in engineering (OP's proposal) and only using AI for all your engineering (OP's complaint).
There's a very good argument for not using tools vended by folks who habitually lie as much as the AI vendors (and their tools). I don't want their fingers anywhere in my engineering org, quite honestly. Given their ethics around intellectual property in general, I must assume that my company's IP is being stolen every time a junior engineer lazily uses one of these tools.
I'm sure you never use any Google or Microsoft products at all, such as Google Search, Maps or Android, and none of the companies and engineering teams you've ever worked with have used such products, given how habitually they lie (and the fact that they're two major AI vendors).
If so, congratulations for being old or belonging to the 0.01%. Good luck finding a first job where that holds in 2025.
Not at all true, though. You see, I expect the Jr will grow and learn from those off-loaded tasks in such a way that they will eventually become another Sr in the atelier. That development of the society of engineers is precisely what I do not wish to ever outsource to some oligarch's rental fleet of bullshit machines.
But happily serve the collective oligarchs by training their next generation of knights...
I'm far more fond of knights than kings. So, yes, I would much more happily train another wave of humans at my craft than salt the earth behind me.
I think this could pop up as policy at work, and I'd personally push for it: "If you're pasting AI responses without filtering through the lens of your own thoughts and experience..."
Like, it's fine for you to use AI, just like one would use Google. But you wouldn't paste "here are 10 results I got from Google". So don't paste whatever AI said without doing the work, yourself, of reviewing and making sense of it. Don't push that work onto others.
The scenario the author describes is bound to happen more and more frequently, and IMO the way to address it is by evolving the culture and best practices for code reviews.
A simple solution would be to mandate that, while posting conversations with AI in PR comments is fine, all actions and suggested changes should be human generated.
The human-generated actions can’t be a lazy “Please look at the AI suggestion and incorporate as appropriate” or “What do you think about this AI suggestion?”
Acceptable comments could be:
- I agree with the AI for xyz reasons, please fix.
- I thought about the AI’s suggestions, and here are the pros and cons. Based on that I feel we should make xyz changes for abc reasons.
If these best practices are documented, and the reviewer does not follow them, the PR author can simply link to the best practices and kindly ask the reviewer to re-review.
Relying heavily on information supplied by LLMs is a problem, but so is this toxic negativity towards technology. It's a tool, sometimes useful, and other times crap. Critical thinking and literacy is the key skill that helps you tell the difference, and a blanket rejection (just like absolute reliance) is the opposite of critical thinking.
Counterpoint: asking GPT can provide useful calibration not to facts but to median mental models
Think of it as a dynamic opinion poll -- the probabilistic take on this thing is such and such.
As a bonus you can prime the respondent's persona.
// After posting, I see another comment at bottom opening with "Counterpoint:"... Different point though.
I'm starting to run into the other end of this as a reviewer, and I hate it.
Stories full of nonsensical, clearly LLM-generated acceptance requirements containing implementation details which are completely unrelated to how the feature actually needs to work in our product. Fine, I didn't need them anyway.
PRs with those useless, uniformly-formatted LLM-generated descriptions which don't do what a PR description should do, with a half-arsed LLM attempt at summary of the code changes and links to the files in the PR description. It would have been nice if you had told me what your PR is for and what your intent as the author is, and maybe to call out things which were relevant to the implementation I might have "why?" questions about. But fine, I guess, being able to read, understand and evaluate the code is part of my job as a reviewer.
---- < the line
PRs littered with obvious LLM comments you didn't care enough to take out, where something minor and harmless but _completely pointless_ has been added (as in, if you'd read and understood what this code does, you'd have removed it), with an LLM comment left in above it AND at the end of the line. It feels like I'm the first person to have tried to read and understand the code, and I'm tempted to ask open-ended questions like "Why was this line added?" to make you actually read and think about what's supposed to be your code, rather than write a review comment explaining why it's not needed, which would just act as a direct conduit from me to your LLM's "You're absolutely right!" response.
This absolutely has been my more recent frustration as well, specifically this:
> uniformly-formatted LLM-generated descriptions which don't do what a PR description should do, with a half-arsed LLM attempt at summary of the code changes and links to the files in the PR description. It would have been nice if you had told me what your PR is for and what your intent as the author is, and maybe to call out things which were relevant to the implementation I might have "why?" questions about.
If I want to see what the code changes do, I will read the code. I want your PR description to tell me things like:
- What the tradeoffs, if any, to this implementation are
- If there were potential other approaches you decided not to follow for XYZ reason so that I don't make a comment asking about it
- If there is more work to be done, and if so what it is
- Any impacts this change might have on other systems
- etc.
Sure, if you want to add a handful of sentences summarizing the change at a high level just to get me in context, that's fine, but again if I want to see what changed, I will go look at what changed.
What is coming/accelerating is the mental form of obesity, with very similar corporate interests and dynamics.
I've come down pretty hard on friends who, when I ask for advice about something, come back with a ChatGPT snippet (mostly D&D-related, not work-related).
I know ChatGPT exists. I could have fucking copied-and-pasted my question myself. I'm not asking you to be the interface between me and it. I'm asking you, what you think, what your thoughts and opinions are.
We - humans - are getting ready for A"G"I
It's also a good reason to not believe what's being said given the still extremely high rate of hallucinations.
I just throw everyone who tells me “chatgpt said” or “ask chatgpt” into the idiot pile in my brain. It’s not nice, but usually these are people who tell me incorrect things in the first place, or turn in half-finished, unoptimized work. Maybe LLMs are just a way to identify the mentally lazy?
No one I know who says this kind of thing would read this article. People love being lazy.
It's kinda hilarious to watch people make themselves redundant. Like you're essentially saying "you don't need me, you could have just asked ChatGPT for a review".
I wrote before about just sending me the prompt[0], but if your prompt is literally my code then I don't need you at all.
[0] https://blog.gpkb.org/posts/just-send-me-the-prompt/
I'm surprised nobody else has gone meta yet, so I suppose I must. Anyway, "ChatGPT said this" ... about this thread.
----
In many of the Hacker News comments, a core complaint was not just that AI is sometimes used lazily, but that LLM outputs are fundamentally unreliable—that they generate confidently stated nonsense (hallucinations, bullshit in the Frankfurtian philosophical sense: speech unconcerned with truth).
Here’s a more explicitly framed summary of that sentiment:
⸻
Central Critique: AI as a Bullshit Generator
Many commenters argue that:
• LLMs don’t “know” things—they generate plausible language based on patterns, not truth.
• Therefore, any use of them without rigorous verification is inherently flawed.
• Even when they produce correct answers, users can’t trust them without external confirmation, which defeats many of the supposed productivity gains.
• Some assert that AI output should be treated not as knowledge but as an unreliable guess-machine.
Examples of the underlying sentiment:
• “LLMs produce bullshit that looks authoritative, and people post it without doing the work to separate truth from hallucination.”
• “It costs almost nothing to generate plausible nonsense now, and that cheapness is actively polluting technical discourse.”
• “‘I asked ChatGPT’ is not a disclaimer; it’s an admission that you didn’t verify anything.”
⸻
Philosophical framing (which commenters alluded to)
A few participants referenced Harry Frankfurt’s definition of bullshit:
• The bullshitter’s goal isn’t to lie (which requires knowledge of the truth), but simply to produce something that sounds right.
• Many commenters argue LLMs embody this: they’re indifferent to truth, tailored to maximize coherence, authority, and user satisfaction.
This wasn’t a side issue—it was a core rejection of uncritical AI use.
⸻
So to clarify: the strong anti-AI sentiment isn’t just about laziness.
It’s about:
• Epistemic corruption: degrading the reliability of discourse.
• False confidence: turning uncertainty into authoritative prose.
• Pollution of knowledge spaces: burying truth under fluent fabrication.
"Google said this" ... "Wikipedia said this" ... "Encyclopedia Britannica said this"
It is not the same. It needs some searching, reading and comprehension to cite Google etc. Copying a LLM output "costs" almost no energy.
It is similar enough. People would just find the first thing in a disagreement that had a headline corroborating their opinion; this was often Wikipedia or the summary on Google.
People did this with code as well. DDG used to show you the first Stack Overflow post that was close to what you searched. However, sometimes this was obviously wrong, and people just copied and pasted it wholesale.
well. "Google said this" is pretty close nowadays.
the other two are still incomparably better in practice though.
I think the difference is people use those as citations for specific facts, not to logically analyze your code. If you're asked how technical detail of C++ works then simply citing Google is acceptable. If you're asked about broader details that depend on certain technicalities specific to your codebase, Googling would be silly.
This is an honest question. Did you try pasting your PR and the ChatGPT feedback into Claude and asking it for an analysis of the code and feedback?
Does that particularly matter in the context of this post? Either way, it sounds like OP was handed homework by the responder, and farming that out to yet another LLM seems kind of pointless, when OP could just ask the LLM for its opinion directly.
While LLM code feedback might be wordy and dubious, I have personally found that asking Claude to review a PR and the related feedback provides some value. From my perspective anyway, Claude seems able to cut through the BS and say whether a recommendation is worth the squeeze, or in what contexts the feedback has merit or is just pedantic. Of course, your mileage may vary, as they say.
Sure. But again, that's not what OP's post is about.
Careful with this idea, I had someone take a thread we were engaged in and feed it to an LLM, asking it to confirm his feelings about the conversation, only to post it back to the group thread. It was used to attack me personally in a public space.
Fortunately
1. The person was transparent about it, even posting a link to the chat session
2. They had to use a follow-on prompt to really engage the sycophancy
3. The forum admins stepped in to speak to this individual even before I was aware of it
I actually did what you suggested and fed everything back into another LLM, but did so with various prompts to test things out. The responses were... interesting; the positive prompt did return something quite good. A (paraphrased) quote from it:
"LLMs are a powerful rhetorical tool. Bringing one to a online discussion is like bringing a gun to a knife fight."
That being said, how you prompt will get you wildly different responses from the same (other) inputs. I was able to get it to sycophant my (not actually) hurt feelings.
Counterpoint: "Chatgpt said this" is an entirely legitimate approach in many contexts and this attitude is toxic.
One example: Code reviews are inherently asymmetrical. You may have spent days building up context, experimenting, and refactoring to make a PR. Then the reviewer is expected to have meaningful insight in (generously) an hour? AI code reviews help bring balance; it may notice stuff a human wouldn't, and it's ok for the human reviewer to say "hey, chatgpt says this is an issue but I'm not sure - what do you think?"
We run all our PRs through automated (Claude) reviews, and it helps a LOT.
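For anyone curious, here's a rough sketch of the shape of that automation (not our actual setup; it assumes the anthropic Python SDK, an ANTHROPIC_API_KEY in the environment, and a placeholder model name):

    # Rough sketch of an automated diff review step, under the assumptions above.
    import subprocess
    from anthropic import Anthropic

    def review_diff(base="origin/main"):
        # Collect the diff against the target branch.
        diff = subprocess.run(
            ["git", "diff", base],
            capture_output=True, text=True, check=True,
        ).stdout
        client = Anthropic()  # reads ANTHROPIC_API_KEY from the environment
        response = client.messages.create(
            model="claude-sonnet-4-5",  # placeholder model name
            max_tokens=1024,
            messages=[{
                "role": "user",
                "content": "Review this diff for bugs and risky changes. "
                           "Flag anything you are unsure about.\n\n" + diff,
            }],
        )
        return response.content[0].text

    if __name__ == "__main__":
        print(review_diff())

The output is a starting point for the human reviewer, not a verdict; the reviewer still decides what to raise, exactly in the "hey, chatgpt says this is an issue but I'm not sure - what do you think?" spirit.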
Another example: Lots of times we have several people debugging an issue and nobody has full context. Folks are looking at code, folks are running LLM prompts, folks are searching slack, etc. Sometimes the LLMs come up with good ideas but nobody is sure, because none of us have all the context we need. "Chatgpt says..." is a way of bringing it to everyone's attention.
I think this can be generalized to forum posts. "Chatgpt says" is similar to "Wikipedia says". It's not the end of the conversation, but it helps get everyone on the same page, especially when nobody is an expert.
I'd agree. Certainly mentioning that information came from an LLM is important so people know to discount or manage it. It's possibly incorrect but still useful as an averaged answer of some parts of the internet.
Certainly citing GPT is better than just assuming it's right and not citing it along with an assertion.