This part really caught my attention (along with the rest of the preceding paragraph):
> Our inability to see opportunities and fulfillment in life as it is, leads to the inevitable conclusion that life is never enough, and we would always rather be doing something else.
I agree with the article completely, as it effectively names an uneasy feeling of hesitation I’ve had all along with how I use LLMs. I have found them tremendously valuable as sounding boards when I’m going in circles in my own well-worn cognitive (and sometimes even emotional) ruts. I have also found them valuable as research assistants, and I feel grateful that they arrived right around the time that search engines began to feel all but useless. I haven’t yet found them valuable in writing on my behalf, whether it’s prose or code.
During my formal education, I was very much a math and science person. I enjoyed those subjects. They came easily to me, which I also enjoyed. I did two years of liberal arts in undergrad, and they kicked my butt academically in a way that I didn’t realize was possible. I did not enjoy having to learn how to think and articulate those thoughts in seminars and essays. I did not enjoy the vulnerability of sharing myself that way, or of receiving feedback. If LLMs had existed, I’m certain I would have leaned hard on them to get some relief from the constant feeling of struggle and inadequacy. But then I wouldn’t have learned how to think or how to articulate myself, and my life and career would have been significantly less meaningful, interesting, and satisfying.
For those unaware, the phrase "the lump of cognition fallacy" is a derivative of the classic economic fallacy known as the lump of labor fallacy (or lump of jobs).
Google AI describes it as:
This is the most common form, often used in debates about technology, immigration, or retirement.
Definition: The belief that there is a set, finite amount of work to be done in an economy.
The Fallacy: Assuming that if one person works more, or if a machine does a job, there is less work left for others.
Reality: An increase in labor or technology (like AI or automation) can increase productivity, lower costs, and boost economic activity, which actually creates more demand for labor.
Examples:
"If immigrants come to this country, they will take all our jobs" (ignoring that immigrants also consume goods and create demand for more jobs).
"AI will destroy all employment" (ignoring that technology typically shifts the nature of work rather than eliminating it).
I really liked this piece, and I share the concern, but I think “outsourcing thinking” is slightly the wrong frame.
In my own work, I found the real failure mode wasn’t using AI, it was automating the wrong parts. When I let AI generate summaries or reflections for me, I lost the value of the task. Not because thinking disappeared, but because the meaning-making did.
The distinction that’s helped me is:
- If a task’s value comes from doing the thinking (reflection, synthesis, judgment), design AI as a collaborator, asking questions, prompting, pushing back.
- If the task is execution or recall, automate it aggressively.
So the problem isn’t that we outsource thinking, it’s that we sometimes bypass the cognitive loops that actually matter. The design choice is whether AI replaces those loops or helps surface them.
Ever since Google started experimenting with LLMs in Gmail, it has bothered me a lot. I firmly believe every word, and the way you put them together, portrays who you are. Using an LLM for direct communication is harmful to human connections.
It can be. It can also not be. A friend of mine had a PITA boss. Thanks to ChatGPT he salvaged his relationship with him even though he hated working with him.
He went on to something else but his stress levels went way down.
All this is to say: I agree with you if the human connection is in good faith. If it isn’t then LLMs are helpful sometimes.
It sounds like that relationship was not supposed to be salvaged to begin with. ChatGPT perhaps prolonged your friend's suffering; he ended up moving on in the end anyway, perhaps after an unnecessary delay.
My knee-jerk reaction is that outsourcing thinking and writing to an LLM is a defeat of massive proportions, a loss of authenticity in an increasingly less authentic world.
On the other hand, before LLMs came along, didn't we ask a friend or colleague for their opinion on an email we were about to write to our boss about an important professional or personal matter?
I have been asked several times to give advice on the content and tone of emails or messages that some of my friends were about to send. On some occasions, I have written emails on their behalf.
Is it really any different to ask an LLM instead of me? Do I have a better understanding of the situation, the tone, the words, or the content to use?
Firstly, when you ask a friend or colleague you're asking a favour that you know will take them some time and effort. So you save it for the important stuff, and the rest of the time you keep putting in the effort yourself. With an LLM it's much easier to lean on the assistance more frequently.
Secondly, I think when a friend is giving advice the responses are more likely to be advice, i.e. more often generalities like "you should emphasize this bit of your resume more strongly" or point fixes to grammar errors, partly because that's less effort and partly because "let me just rewrite this whole thing the way I would have written it" can come across as a bit rude if it wasn't explicitly asked for. Obviously you can prompt the LLM to only provide critique at that level, but it's also really easy to just let it do a lot more of the work.
But if you know you're prone to getting into conflicts in email, an LLM-powered filter on outgoing email that flagged up "hey, you're probably going to regret sending that" mails before they went out the door seems like it might be a helpful tool.
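Roughly what I have in mind, as a minimal sketch (the ask_llm call is just a placeholder for whatever model you wire in; nothing here is a real mail-client or Gmail API):

    # Sketch of an outgoing-mail "regret filter". ask_llm is a stand-in for
    # whatever LLM client you actually have; it is not a real API.
    def ask_llm(prompt: str) -> str:
        raise NotImplementedError("plug in your model of choice")

    def likely_to_regret(draft: str) -> bool:
        """Ask the model whether the draft reads as hostile or rash."""
        verdict = ask_llm(
            "Answer YES or NO only: is this email likely to come across as "
            "hostile, sarcastic, or something the sender may regret?\n\n" + draft
        )
        return verdict.strip().upper().startswith("YES")

    def send_with_guardrail(draft: str, send) -> None:
        # Flag the mail instead of silently rewriting it; the human stays in charge.
        if likely_to_regret(draft):
            print("Heads up: you may regret sending this. Take another look first.")
        else:
            send(draft)

The point being that it only flags the draft and never rewrites it, so the words stay yours.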
"Firstly, when you ask a friend or colleague you're asking a favour that you know will take them some time and effort. So you save it for the important stuff, and the rest of the time you keep putting in the effort yourself. With an LLM it's much easier to lean on the assistance more frequently."
- I find this a point in favor of LLMs, not a flaw. The objection is a philosophical stance, one which holds that whatever does not require effort or time is intrinsically not valuable (see using GLP-1 peptides vs. sucking it up to lose weight). Sure, it requires effort and dedication to clean your house, but given the means (money), wouldn't you prefer to have someone else clean your place?
"Secondly, I think when a friend is giving advice the responses are more likely to be advice"
- You can ask an LLM for advice, rather than having it write directly for you and accepting the model's writing without further reflection.
Here I find parallels with therapy, which in its modern version, does not provide answers, but questions, means of investigation, and tools to better deal with the problems of our lives.
But if you ask people who go to therapy, the vast majority of them would much prefer to receive direct guidance (“Do this/don't do that”).
In the cases in which I wrote a message or email on behalf of someone else, I was asked to do it: can you write it for me, please? I even had to write recommendation letters for myself--I was asked to do that by my PhD supervisor.
I wasn't arguing that getting LLMs to do this is necessarily bad -- I just think it really is different from having in the past been able to ask other humans for help, and so that past experience isn't a reliable guide to whether we might find we have problems with unexpected effects of this new technology.
If you are concerned about possible harms in "outsourcing thinking and writing" (whether to an LLM or another human) then I think that the frequency and completeness with which you do that outsourcing matters a lot.
It can become an indispensable asset over time, or a tool that can be used at certain times to solve, for example, mundane problems that we have always found annoying and that we can now outsource, or a coaching companion that can help us understand something we did not understand before. Since humans are naturally lazy, most will default to the first option.
It's a bit like the evolution of driving. Today, only a small percentage of people are able to describe how an internal combustion engine works (<1%?), something that was essential in the early decades after the invention of the car. But I don't think that those who don't understand how an engine works feel that their driving experience is limited in any way.
Certainly, thinking and reasoning are universal tools, and it could be that in the near future we will find ourselves dumber than we were before, unable to do things that were once natural and intuitive.
But LLMs are here to stay, they will improve over time, and it may well be that in a few decades, the human experience will undergo a downgrade (or an upgrade?) and consist mainly of watching short videos, eating foods that are engineered to stimulate our dopamine receptors, and living a predominantly hedonistic life, devoid of meaning and responsibility. Or perhaps I am describing the average human experience of today.
This comment has made me glad for LLMs in Gmail. If someone is going to over-analyze my every word because he firmly believes it portrays who I am, I'd appreciate the layer of obfuscation between me and this creepazoid.
Actions? I generally judge people by what they do, not what they say - though of course I have to admit that saying things does fall under "doing something", if it's impactful.
What I am worried about (and it's something about regular internet search that has worried me for the past ~10 years or so) is that, after they've trained a generation of folks to rely on this tech, they're going to start inserting things into the training data (or whatever the method would be) to bias it towards favoring certain agendas wrt the information it presents to the users in response to their queries.
> after they've trained a generation of folks to rely on this tech ... bias it towards favoring certain agendas
previously, this happened with print media. Then it happened with the airwaves. It only makes logical sense that the trend continues with LLMs.
Basically, the fundamental issue is that the source of information is under someone else's control, and that someone will always have an agenda.
But with LLMs, it's crucial to try to change the trend. IMHO, it should be possible for a regular person to own their computing - this should include the LLM capability/hardware, as well as the model(s). Without such capabilities, exactly the same thing will happen as has happened in the past with new technologies.
> it should be possible for a regular person to own their computing
And regular persons will not care about this and will select a model with the biases of whoever they deem "works better for me at this one task that I needed".
Just like you said:
> previously, this happened with print media. Then it happened with the airwaves. It only makes logical sense that the trend continues with LLMs.
I wish it wasn't so, but I have no idea how to make people care about not being under someone's control.
I worried about this a lot more at the tail end of 2023, when OpenAI's GPT-4 (out since March) was still very clearly ahead of every other model. It briefly looked like control of the most useful model would stay with a single organization, giving them outsized influence over how LLMs shape human society.
I don't worry about that any more because there's so much competition: dozens of organizations now produce usable LLMs and the "best" is no longer static. We have frontier models from the USA, France (Mistral) and China now.
The risk of a model monopoly centralizing cultural power feels a lot lower now than it did a couple of years ago.
I don't think model competition is necessarily the fix to this issue. We're not even sure if the setup as it exists today will be the norm. It could be that other entities license out the models for their own projects which then become the primary contact point for users and LLMs. They are obviously going to want to fine-tune the models to their use-case and this could result in intentional commercial or ideological biases.
And commercial biases wouldn't necessarily be affected by competition in the way that you're describing.
For example, if it becomes profitable for one of these companies to offer to insert links to buy ingredients at WalMart (or wherever) for the goulash recipe you asked for, that's going to become the thing that companies go after.
And all of this assumes that these biases will be obvious rather than subtle.
Model competition does nothing to address monopoly consolidation of compute. If you have control over compute, you can exert control over the masses. It doesn't matter how good my open source model is if I can't acquire the resources to run it. And I have no doubt that the big players will happily buy legislation to both entrench their compute monopoly/cartel and control what can be done using their compute (e.g. making it a criminal offence to build a competitor).
Model competition means that users have multiple options to choose from, so if it turns out one of the models has biases baked in, they can switch to another.
Which incentivizes the model vendors not to mess with the models in ways that might lose them customers.
Absolutely. Like most things on the Internet, it will get enshittified. I think it is very likely that at some point there will be "ads" in the form of the chat bot giving recommendations that favor certain products and services.
This is already happening. People are conditioned to embrace capitalism, where a small percentage of the population are born into the owning class, and a majority who labour.
I think that's called feudalism. Maybe our reality doesn't work the way we name it, and we are starting to have a different system despite what we are calling it.
Having been told how my grandma ran into problems and was eventually ordered by the police in Communist Poland to shut down her knitting production (done in her free time, on top of regular work), I believe that it's better to have a somewhat upgraded capitalism than to try to build a good communism just one more time.
It still earned her enough extra money to build a house in the city after moving out of the village.
The opposition to capitalism has such a disastrous track record, economically and in terms of body count, that embracing capitalism is far more sensible.
I'm not saying that the other systems, by which I assume you mean the various Marxist political projects, are good (and we won't even get into how many of those alternatives were actually not-capitalism), but I think dismissing the "body count" of capitalism while simultaneously ascribing every death under those alternative systems directly to {otherSystem} is extremely disingenuous. Doubly so given that modern first-world capitalism often outsources the human cost of its milieu to the third world so that middle-class suburbanites don't have to see the real price of their mass-produced lifestyles.
The alternative systems were just as willing to plunder their satellite states and the third world, IIRC, as the capitalists were, so it would be an equal demerit for both, I'd think?
Modern Western countries mostly drifted towards a mix of capitalism and social democracy.
"modern first-world capitalism often outsources the human cost of it's milieu to the third world"
This is a bit of "damned if you do, damned if you don't".
If you don't do any business with poorer countries, you can be called a heartless isolationist who does not want to share any wealth and only hoards his money himself.
If you do business with poorer countries, but let them determine their own internal standards, you will be accused of outsourcing unpleasant features of capitalism out of your sight.
If you do business with poorer countries and simultaneously demand that they respect your standards in ecology, human rights etc., you will be accused of ideological imperialism and making impossible demands that a poorer country cannot realistically meet.
I really like the “reversibility” framing. The difference for me is whether the tool is doing a step (like “check my grammar” or “summarize these notes”) vs quietly doing the whole loop (generate→decide→act) while I just approve.
Once you’re mostly rubber-stamping, you stop building the internal model you’d need to notice when it’s subtly wrong — and you also stop practicing the taste/judgment layer that’s hard to recover.
This feels like a UI/design problem as much as a model problem: tools that start from your draft and offer diffs keep you in the loop; tools that start from a blank page train “acceptance,” not thinking.
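For what it's worth, the diff-first shape is easy to build around your own drafts. A minimal sketch (pure Python stdlib; the model call itself is left out of scope):

    import difflib

    def propose_edit(my_draft: str, model_rewrite: str) -> str:
        """Show the model's suggestion as a diff against my draft,
        so I review changes instead of rubber-stamping a fresh page of text."""
        diff = difflib.unified_diff(
            my_draft.splitlines(keepends=True),
            model_rewrite.splitlines(keepends=True),
            fromfile="my_draft",
            tofile="model_suggestion",
        )
        return "".join(diff)

Reviewing a unified diff of your own words feels very different from accepting a page you never wrote.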
This is something I noticed myself. I let AI handle some of my project and later realized I didn't even understand my own project well enough to make decisions about it :)
But that's exactly what you should be doing, technically. Human in the loop is a dead concept, you should never need to understand your code or even know what changes to make. All you should be concerned about is having the best possible harness so your LLM can do everything as efficiently as possible.
If it gets stuck, use another LLM as the debugger. If that gets stuck then use another LLM. Turtles all the way down.
This list of things not to use AI for is so quaint. There's a story on the front page right now from The Atlantic: "Film students who can no longer sit through films". But why? Aren't they using social media, YouTube, Netflix, etc responsibly? Surely they know the risks, and surely people will be just as responsible with AI, even given the enormous economic and professional pressures to be irresponsible.
> Surely they know the risks, and surely people will be just as responsible with AI
I can't imagine even half of students understand the short- and long-term risks of using social media and AI intensively.
At least I couldn't when I was a student.
What is the lesson in the anecdote about film students? To me, it’s that people like the idea of studying film more than they like actually studying film. I fail to see the connection to social media or AI.
I am, actually; we haven't owned a car for years. We also rarely watch TV and eschew social media, so I can still pay attention and analyze things.
But this makes me super weird! This is the whole point of social media bans for kids: if you make it optional, it'll still be prevalent and people making healthy choices will be social weirdos. Healthy paths need to be free and accessible, and things need to be built around them (eg don't assume everyone has a smartphone, etc)
It's a funnily relevant parallel you're making, because designing everything around the car has absolutely been one of the biggest catastrophes of 2nd half of the 20th century. Much like "AI" in the past couple years, the personal automobile is a useful tool but making anything and everything subservient towards its use has had catastrophic consequences.
It is political. Designing everything around cars benefits the class of people called "Car Owners". Not so much people who don't have the money or desire to buy a car.
Although, congestion pricing is a good counter-example. On the surface it looks like it is designed to benefit users of public transportation. But turns out it also benefits car-owners, because it reduces traffic jams and lets you get to your destination with your own car faster.
But having a car is kind of bad. Maybe you remember when everyone smoked, and there was stuff for smokers everywhere. Sure that made it easier for smokers, but ultimately that wasn't good for them (nor anyone around them).
No, it benefits car manufacturers and sellers, and mechanics and gas stations.
Network/snowball effects are not all good. If local businesses close because everybody drives to WalMart to save a buck, now other people around those local businesses also have to buy a car.
I remember a couple of decades ago when some bus companies in the UK were privatized, and they cut out the "unprofitable" feeder routes.
Guess what? More people in cars, and those people didn't just park and take the bus when they got to the main route, either.
Recently a side discussion came up - people in the Western world are "rediscovering" fermented, and pickled, foods that are still in heavy use in Asian cultures.
Fermentation was a great way to /preserve/ food, but it can be a bit hit and miss. Pickling can be outright dangerous if not done correctly - botulism is a constant risk.
When canning of foods came along it was a massive game changer, many foods became shelf stable for months or years.
Fermentation and pickling were dropped almost universally (in the West).
The “lump of cognition” framing misses something important: it’s not about how much thinking we do, but which thinking we stop doing. A lot of judgment, ownership, and intuition comes from boring or repetitive work, and outsourcing that isn’t free. Lowering the cost of producing words clearly isn’t the same as increasing the amount of actual thought.
I'm grateful that I spent a significant part of my life forced to solve problems and forced to struggle to produce the right words. In hindsight I know that that's where all the learning was. If I'd had a shortcut machine when I was young I'd have used it all the time, learned much less, and grown up dependent on it.
I'd argue that choosing words is a key skill because language is one of our tools for examining ideas and linking together parts of our brains in new ways.
Even just writing notes you'll never refer to again, you're making yourself codify vaguer ideas or impressions, test assumptions, and then compress the concept for later. It's a new external information channel between different regions of your head which seems to provide value.
Looking at the words that get produced at this lowered cost, and observing how satisfactory they apparently are to most people (and observing the simplicity of the heuristics people use to try to root out "cheap" words), has been quite instructive (and depressing).
One bothersome aspect of generative assistance for personal and public communication not mentioned is that it introduces a lazy hedge, where a person can always claim that "Oh, but that was not really what I meant" or "Oh, but I would not express myself in that way" - and use it as a tool to later modify or undo their positions - effectively reducing honesty instead of increasing it.
> where a person can always claim that "Oh, but that was not really what I meant"
that already happens today - previously people claimed autocorrect or spell check instead of AI.
I don't accept these excuses as valid (even if the claim is real). It does not give them a valid out to change their mind, regardless of the source of the text.
Arguably, excusing oneself because of autocorrect is comparable to the classic "Dictated but not read" [0] disclaimer of old. Excusing oneself because an LLM wrote what was ostensibly your own text is more akin to confessing that your assistant wrote the whole thing and you tried to pass it off as your own without even bothering to read it.
Yep! However, the problem will increase by many orders of magnitude as the volume of generated content far surpasses the content created by autocorrect mechanisms; in addition, autocorrect is a far more local modification that does not generate entire paragraphs or segments of content, making it harder to use as an excuse for large changes in meaning.
I agree that they make for poor excuses - but as generative content seeps into everything I fear it will become more commonly invoked.
Yep, but invoking it doesn't force you to accept it. The only thing you get to control is your own personal choices. That's why I am telling you not to accept it, and I hope that people reading this will consider this their default stance.
Never in my life would I accept that as a valid excuse. If you sent the mail, committed the code or whatever, you take responsibility for it. Anything else is just pathetic.
Good question. I certainly commit that error sometimes, like everyone else. But the issue here is people using LLMs to write eg emails and then not taking responsibility for what they write. That has nothing to do with attribution, only accountability.
"I was having a bad day, my mother had just died" is a very valid explanation for a poorly worded email. "It was AI" is not.
I mean, he put it in what is IMO too harsh a way (e.g. “pathetic”), but I do think it raises the point: if you don’t own up to your actions, then how can you be held accountable for anything?
Unless we want to live in a world where accountability is optional, I think taking responsibility for your actions is the only choice.
And to be honest, today I don’t know where we stand on this. It seems a lot of people don’t care enough about accountability but then again a lot of people do. That’s just my take.
Yes, thank you. I used "pathetic" in the sense of something which makes me feel sorry for them, not something despicable. I fully expect people to stand by what they write and not blame AI etc, but my comment came across as too aggressive.
I mean we're only human. We all make mistakes. Sure, some mistakes are worse than others but in the abstract, even before AI, who hasn't sent an email that they later regretted?
Yes, we all make mistakes. But when I make mistakes when sending an email you can be damn sure that they are my own mistakes which I take full accountability for.
I guess you got hung up on the word "pathetic". See my comment below; I used it not as "despicable" but rather as "something to feel sorry for". Indeed, people writing emails using LLMs and then blaming the AI for the consequences, that is something that makes me feel sorry for them.
Implying mental health issues? That makes me think you were triggered by my comment.
> The category of writing that I like to call "functional text", which are things like computer code and pure conveyance of information (e.g., recipes, information signs, documentation), is not exposed to the same issues.
I hate this take; computer code is just as rich in personality as writing. I can tell a tremendous amount about what kind of person someone is solely based off their code. Code is an incredibly personal expression of one's mental state, even if you might not realize it. LLMs have dehumanized this, and the functional outcomes become FAR more unpredictable.
I think we can make an analogy with our own brains, which have evolutionarily older parts (the limbic system) and evolutionarily younger parts (the neocortex). Now AI, I think, will be our new neocortex, another layer to our brain. And you can see the limbic system didn't "outsource" thinking to the neocortex - it's still doing it; but it can take (mostly good) advice from it.
Applying this analogy to human relationships - the neocortex allowed us to be more social. Social communication with the limbic system was mostly "you smell like a member of our species and I want to have sex with you". So having a neocortex expanded our social skills to having friends etc.
I think AI will have a similar effect. It will allow us to individually communicate with a large number of other people (millions). But it will be a different relationship than what we today call "personal communication", face to face, driven by our neocortex. It will be as incomprehensible to our neocortex as our language is incomprehensible to the limbic system.
Very interesting, thanks for sharing this. Reading Karpathy's recent tweet about "A few random notes from claude coding quite [...]" got me thinking a lot about offloading thinking, and more specifically about failure. Failure is important for learning. When I use AI and it makes mistakes, I often tend to blame the AI and offload the failure. I think this post explores similar thoughts, without talking much about failure. It will be interesting to see the long-term effects.
My fundamental argument: The way the average person is using AI today is as "Thinking as a Service" and this is going to have absolutely devastating long term consequences, training an entire generation not to think for themselves.
I think you hit the nail on the head. Without years of learning by doing, experience in the saddle as you put it, who would be equipped to judge or edit the output of AI? And as knowledge workers with hands-on experience age out of the workforce, who will replace us?
The critical difference between AI and a tool like a calculator, to me, is that a calculator's output is accurate, deterministic and provably true. We don't usually need to worry that a calculator might be giving us the wrong result, or an inferior result. It simply gives us an objective fact. Whereas the output of LLMs can be subjectively considered good or bad - even when it is accurate.
So imagine teaching an architecture student to draw plans for a house, with a calculator that spit out incorrect values 20% of the time, or silently developed an opinion about the height of countertops. You'd not just have a structurally unsound plan, you'd also have a student who'd failed to learn anything useful.
> The critical difference between AI and a tool like a calculator, to me, is that a calculator's output is accurate, deterministic and provably true.
This really resonates with me.
If calculators returned even 99.9% correct answers, it would be impossible to reliably build even small buildings with them.
We are using AI for a lot of small tasks inside big systems, or even for designing the entire architecture, and we still need to validate the answers by ourselves, at least for the foreseeable future.
But outsourcing thinking erodes much of the brainpower needed to do that, because validating an answer often requires understanding the problem's detailed structure and the internal path of reasoning.
In the current situation, by vibing and YOLOing most problems, we are losing the very ability we still need and can't replace with AI or other tools.
If you don't have building codes, you can totally yolo build a small house, no calculator needed. It may not be a great house, just like vibeware may not be great, but also, you have something.
I'm not saying this is ideal, but maybe there's another perspective to consider as well, which is lowering barriers to entry and increased ownership.
Many people can't/won't/don't do what it takes to build things, be it a house or an app, if they're starting from zero knowledge. But if you provide a simple guide they can follow, they might end up actually building something. They'll learn a little along the way, make it theirs, and end up with ownership of their thing. As an owner, change comes from you, and so you learn a bit more about your thing.
Obviously whatever gets built by a noob isn't likely to be of the same caliber as a professional who spent half their life in school and job training, but that might be ok. DIY is a great teacher and motivator to continue learning.
Contrast to high barriers to entry, where nothing gets built and nothing gets learned, and the user is left dependent on the powers that be to get what he wants, probably overpriced, and with features he never wanted.
If you're a rocket surgeon and suddenly outsource all your thinking to a new and unpredictable machine, while you get fat and lazy watching tv, that's on you. But for a lot of people who were never going to put in years of preparation just to do a thing, vibing their idea may be a catalyst for positive change.
To continue the analogy, there’s something called renting, and the range of choices. If there’s no code and you can’t build your own house, you’re left with bad houses built by someone else. It’s more likely to be bad when the owner already knows he will not be living in it, as building it right can be expensive and time consuming.
When slop becomes easier to make, there are a lot more people ready to push it onto others than people who try to produce genuine work. Especially when the two are hard to distinguish superficially.
There's another category error compounding this issue: People think that because past revolutions in technology eventually led to higher living standards after periods of disruption, this one will too. I think this one is the exception for the reasons enumerated by the parent's blog post.
In point of fact, most technological revolutions have fairly immediately benefited a significant number of people in addition to those in the top 1% -- either by increasing demand for labor, or reducing the price of goods, or both.
The promise of LLMs is that they benefit people in the top 1% (investors and highly paid specialists) by reducing the demand for labor to produce the same stuff that was already being produced. There is an incidental initial increase in (or perhaps just reallocation of) labor to build out infrastructure, but that is possibly quite short-lived, and simultaneously drives a huge increase in the cost of electricity, buildings, and computer-related goods.
But the benefits of new technologies are never spread evenly.
When the technology of travel made remote destinations more accessible, it created tourist traps. Some well placed individuals and companies do well out of this, but typically, most people living near tourist traps suffer from the crowds and increased prices.
When power plants are built, neighbors suffer noise and pollution, but other people can turn their lights on.
We haven't yet begun to be able to calculate all the negative externalities of LLMs.
I would not be surprised if the best negative-externality comparisons were to the work of Thomas Midgley, who gifted the world both leaded gasoline and CFC refrigerants.
It's funny, I'm working on trying to get LLMs to place electrical devices, and it silently developed opinions that my switches above countertops should be at 4 feet and not the 3'10" I'm asking for (the top cannot be above 4').
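My workaround, sketched very roughly (the 4' limit and the 3'10" request are from my case; the inch of tolerance is a made-up value), is to keep the hard constraints outside the model and post-check whatever it proposes:

    # Post-check the model's placement against the spec instead of trusting it.
    # Heights are in feet.
    MAX_TOP_OF_BOX_FT = 4.0             # limit: top of the switch box at or below 4'
    REQUESTED_HEIGHT_FT = 3 + 10 / 12   # the 3'10" I actually asked for
    TOLERANCE_FT = 1 / 12               # allow an inch of slop

    def placement_ok(proposed_ft: float) -> bool:
        """Reject placements that drift from the request or exceed the limit."""
        within_request = abs(proposed_ft - REQUESTED_HEIGHT_FT) <= TOLERANCE_FT
        within_limit = proposed_ft <= MAX_TOP_OF_BOX_FT
        return within_request and within_limit

    assert placement_ok(3 + 10 / 12)   # what I asked for passes
    assert not placement_ok(4.0)       # the model's silent "opinion" gets flagged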
That's quite funny, and almost astonishing, because I'm not an architect, and that scenario just came out of my head randomly as I wrote it. It seemed like something an architect friend of mine who passed away recently, and was a big fan of Douglas Adams, would have joked about. Maybe I just channeled him from the afterlife, and maybe he's also laughing about it.
On the whole, not trusting one's own tools is a regression, not an advancement. The cognitive load it imposes on even the most capable and careful person can lead to all sorts of downstream effects.
There's an Isaac Asimov story where people are "educated" by programming knowledge into their brains, Matrix style.
A certain group of people have something wrong with their brain where they can't be "educated" and are forced to learn by studying and such. The protagonist of the story is one of these people and feels ashamed at his disability and how everyone around him effortlessly knows things he has to struggle to learn.
He finds out (SPOILER) that he was actually selected for a "priesthood" of creative/problem solvers, because the education process gives knowledge without the ability to apply it creatively. It allows people to rapidly and easily be trained on some process but not the ability to reason it out.
That would have devastating consequences in the pre-LLM era, yes. What is less obvious is whether it'll be an advantage or disadvantage going forward. It is like observing that cars will make people fat and lazy and have devastating consequences on health outcomes - that is exactly what happened but the net impact was still positive because cars boost wealth, lifestyles and access to healthcare so much that the net impact is probably positive even if people get less exercise.
It is unclear that a human thinking about things is going to be an advantage in 10, 20 years. Might be, might not be. In 50 years people will probably be outraged if a human makes an important decision without deferring to an LLM's opinion. I'm quite excited that we seem to be building scaleable superintelligences that can patiently and empathetically explain why people are making stupid political choices and what policy prescriptions would actually get a good outcome based on reading all the available statistical and theoretical literature. Screw people primarily thinking for themselves on that topic, the public has no idea.
Eh 1953 was more about what’s going to happen to the people left behind, e.g. Childhood’s End. The vast majority of people will be better off having the market-winning AI tell them what to do.
Or how about that vast majority gets a decent education and higher standard of living so they can spend time learning and thinking on their own? You and a lot of folks seem to take for granted our unjust economy and its consequences, when we could easily change it.
How is that relevant? You can give whatever support you like to humans, but machine learning is doing the same thing in general cognition that it has done in every competitive game. It doesn't matter how much education the humans get - if they try to make complex decisions using their brain, then silicon will outperform them at planning to achieve desirable outcomes. Material prosperity is a desirable outcome; machines will be able to plot a better path to it than some trained monkey. The only question is how long it'll take to resolve the engineering challenges.
There are some facts which makes it not outside the realm of possibility. Like computers being better at chess and go and giving directions to places or doing puzzles. (The picture-on-cardboard variety.)
I think the comparison to giving change is a good one, especially given how frequently the LLM hype crowd uses the fictitious "calculator in your pocket" story. I've been in the exact situation you've described, long before LLMs came out and cashiers have had calculators in front of them for longer than we've had smartphones.
I'll add another analogy. I tell people when I tip I "round off to the nearest dollar, move the decimal place (10%), and multiply by 2" (generating a tip that will be in the ballpark of 18%), and am always told "that's too complicated". It's a 3 step process where the hardest thing is multiplying a number by 2 (and usually a 2 digit number...). It's always struck me as odd that the response is that this is too complicated rather than a nice tip (pun intended) for figuring out how much to tip quickly and with essentially zero thinking. If any of those three steps appear difficult to you then your math skills are below that of elementary school.
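Spelled out as code, the whole "too complicated" procedure is just this (a toy sketch, the bill amount is made up):

    def quick_tip(bill: float) -> float:
        rounded = round(bill)        # step 1: round to the nearest dollar
        ten_percent = rounded / 10   # step 2: move the decimal point
        return ten_percent * 2       # step 3: double it

    quick_tip(43.70)  # 44 -> 4.4 -> 8.8, right in the ballpark I'm aiming for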
I also see a problem with how we look at math and coding. I hear so often "abstraction is bad", yet that is all coding (and math) is. It is fundamentally abstraction. The ability to abstract is what makes humans human. All creatures abstract, it is a necessary component of intelligence, but humans certainly have a unique capacity for it. Abstraction is no doubt hard, but when in life was anything worth doing easy? I think we unfortunately are willing to put significantly more effort into justifying our laziness than into not being lazy. My fear is that we will abdicate doing worthwhile things because they are hard. It's a thing people do every day. So many people love to outsource their thinking. Be it to a calculator, Google, "the algorithm", their favorite political pundit, religion, or anything else. Anything to abdicate responsibility. Anything to abdicate effort.
So I think AI is going to be no different from calculators, as you suggest. They can be great tools to help people do so much. But it will be far more commonly used to outsource thinking, even by many people considered intelligent. Skills atrophy. It's as simple as that.
I briefly taught a beginner CS course over a decade ago, and at the time it was already surprising and disappointing how many of my students would reach for a calculator to do single-digit arithmetic; something that was a requirement to be committed to memory when I was still in school. Not surprisingly, teaching them binary and hex was extremely frustrating.
> I tell people when I tip I "round off to the nearest dollar, move the decimal place (10%), and multiply by 2" (generating a tip that will be in the ballpark of 18%), and am always told "that's too complicated".
I would tell others to "shift right once, then divide by 2 and add" for 15%, and get the same response.
However, I'm not so sure what you mean by a problem with thinking that abstraction is bad. Yes, abstraction is bad --- because it is a way to hide and obscure the actual details, and one could argue that such dependence on opaque things, just like a calculator or AI, is the actual problem.
I believe that collectively we passed that point long before the onset of LLMs. I have a feeling that throughout human history vast numbers of people were happy to outsource their thinking and even pay to do so. We just used to call those arrangements religions.
Religions may outsource opinions on morality, but no one went to their spiritual leader to ask about the Pythagorean theorem or the population of Zimbabwe.
Obviously I was using the Pythagorean theorem as a random not literal example. But I’m also curious about what you mean. Mind linking to the specific relevant parts? Linking to humongous articles doesn’t help much.
I was linking it partially tongue in cheek, but oracles and the auspices in antiquity were specifically not about morality. They were about predicting the future. If you wanted to know if you should invade Carthage on a certain day, you'd check the chickens. Literally. And plenty of medical practices were steeped in religious fare, too. If you go back further, a lot of shamanistic practices divine the facts about the present reality. In the words of Terrence McKenna, "[Shamans] cure disease (and another way of putting that is: they have a remarkable facility for choosing patients who will recover), they predict weather (very important), they tell where game has gone, the movement of game, and they seem to have a paranormal ability to look into questions, as I mentioned, who’s sleeping with who, who stole the chicken, who—you know, social transgressions are an open book to them." All very much dealing with facts, not morality.
> The cosmos of the acusmata, however, clearly shows a belief in a world structured according to mathematics, and some of the evidence for this belief may have been drawn from genuine mathematical truths such as those embodied in the “Pythagorean” theorem and the relation of whole number ratios to musical concords.
There are numerous sections throughout both of these entries that discuss Pythagoras, mathematics, and religion. Plato too is another fruitful avenue, if you wanted to explore that further.
That’s a bit cynical. Religion is more like a technology. It was continuously invented to solve problems and increase capacity. Newer religions superseded older and survived based on productive and coercive supremacy.
If religion is a technology, it's inarguably one that prevented the development of a lot of other technologies for long periods of time. Whether that was a good thing is open to interpretation.
On the other hand it produced a lot of related technology. Calendars, mathematics, writing, agricultural practices, government and economic systems. Most of this stuff emerged as an effort to document and proliferate spiritual ideas.
I see your point, but I'd say religion's main technological purpose is as a storage system for the encoding of other technologies (and social patterns) into rituals, the reasons for which don't need to be understood; to the point that it actively discourages examination of their reasons, as what we could call an error-checking protocol. So a religion tends to freeze those technologies in the time at the point of inception, and to treat any reexamining of them as heresy. Calendars are useful for iron age farming, but you can't get past a certain point as a civilization if you're unwilling to reconsider your position that the sun and stars revolve around the earth, for example.
I'll say that I'm still kinda on the fence here, but I will point out that your argument is exactly the same as the argument against calculators back in the 70s/80s, computers and the internet in the 90s, etc.
You could argue that a lot of the people who grew up with calculators have lost any kind of mathematical intuition. I am always horrified by how bad a lot of people are with simple math, interest rates and other things. This has definitely opened up a lot of opportunities for companies to exploit that ignorance.
The difference is a calculator always returns 2+2=4. And even then, if you ended up with 6 instead of 4, the fact that you know how to do addition leads you to believe you fat-fingered the last entry, not that 2+2 equals 6.
Can’t say the same for LLMs. Our teachers were right about the internet too, of course. If you remember those early internet wild-west school days, no one was using the internet to actually look up a good source. No one even knew what that meant. Teachers had to say “cite from these works or references we discussed in class” or they’d get junk back.
Right so apply the exact same logic to LLMs as you did to the internet.
At first the internet was unreliable. Nobody could trust the information it gave you. So teachers insisted that students only use their trusted sources. But eventually the internet matured and now it would be seen as ridiculous for a teacher to tell a student not to do research on the internet.
Too late. Outsourcing has already accomplished this.
No one is making cool shit for themselves. Everyone is held hostage ensuring Wall Street growth.
The "cross our fingers and hope for the best" position we find ourselves in politically is entirely due to labor capture.
The US benefited from a social network topology of small businesses. No single business being a lynch pin that would implode everything.
Now the economy is a handful of too big to fails eroding links between human nodes by capturing our agency.
I argued as hard as I could against shipping electronics manufacturing overseas so the next generation would learn real engineering skills. But 20 something me had no idea how far up the political tree the decision was made back then. I helped train a bunch of people's replacements before the telecom focused network hardware manufacturer I worked for then shut down.
American tech workers are now primarily cloud configurators and that's being automated away.
This is a decades long play on the part of aging leadership to ensure Americans feel their only choice is capitulate.
What are we going to do, start our own manufacturing business? Muricans are fish in a barrel.
The interesting axis here isn’t how much cognition we outsource, it’s how reversible the outsourcing is. Using an LLM as a scratchpad (like a smarter calculator or search engine) is very different from letting it quietly shape your writing, decisions, and taste over years. That’s the layer where tacit knowledge and identity live, and it’s hard to get back once the habit forms.
We already saw a softer version of this with web search and GPS: people didn’t suddenly forget how to read maps, but schools and orgs stopped teaching it, and now almost nobody plans a route without a blue dot. I suspect we’ll see the same with writing and judgment: the danger isn’t that nobody thinks, it’s that fewer people remember how.
Yet it does feel different with LLMs compared to your examples. Yes, people can’t navigate without Apple/Google maps, but that’s still very different from losing critical thinking skills.
That said, LLMs are perhaps accelerating that but aren’t the only cause (lack of reading, more short form content, etc)
Humans are highly adaptable. It's hard to go back while the thing we're used to still exists, but if it vanished from the world we'd adapt within a few weeks.
The author says it's too long. So let's tighten it up.
A criticism of the use of large language models (LLMs) is that it can deprive us of cognitive skills.
Are some kinds of use better than others? Andy Masley's blog says "thinking often leads to more things to think about", so we shouldn't worry about letting machines do the thinking for us — we will be able to think about other things.
My aim is not to refute all his arguments, but to highlight issues with "outsourcing thinking".
Masley writes that it's "bad to outsource your cognition when it:"
- Builds tacit knowledge you'll need in future.
- Is an expression of care for someone else.
- Is a valuable experience on its own.
- Is deceptive to fake.
- Is focused on a problem that is deathly important to get right, and where you don't totally trust who you're outsourcing it to.
How we choose to use chatbots is about how we want our lives and society to be.
That's what he has to say. Plus some examples, which help make the message concrete. It's a useful article if edited properly.
I think that this summary is oversimplifying: the rest of the blog post elaborates on how the author and Masley have completely different interpretations of that bullet-point list. The rest of the text is not only examples; it provides elaborations of the thought processes that led him to his conclusions. I found the nuancing of the two opposing interpretations, not the conclusion, the most enjoyable part of the post.
(This comment could also be shortened to “that’s oversimplifying”. I think my longer version is both more convincing and enjoyable.)
I feel like your comment is in itself a great analogy for the "beware of using LLMs in human communication" argument. LLMs are in the end statistical models that regress to the mean, so by design they flatten out our communication, much like a reductionist summary does. I care about the nuance that we lose when communicating through "LLM filters", but apparently others don't.
That makes for a tough discussion, unfortunately. I see a lot of value lost by having LLMs in email clients, and I don't observe the benefit; LLMs are a net time sink because I have to rewrite their output myself anyway. Proponents seem to not see any value loss, and they do observe an efficiency gain.
I am curious to see how the free market will value LLM communication. Will the lower quality, higher quantity be a net positive for job seekers sending applications or sales teams nursing leads? The way I see it either we end up in a world where eg job matching is almost completely automated, or we find an effective enough AI spam filter and we will be effectively back to square one. I hope it will be the latter, because agents negotiating job positions is bound to create more inequality, with all jobs getting filled by applicants hiring the most expensive agent.
Either way, so much compute and human capital will go to waste.
> Proponents seem to not see any value loss, and they do observe an efficiency gain.
You get to start by dumping your raw unfiltered emotions into the text box and have the AI clean it up for you.
If you're in customer support and have to deal with dumbasses all day long who are too stupid to read the fucking instructions, I imagine being able to type that out, and then have the AI remove the profanity and not insult customers, would be rather cathartic. Then, substitute "read the manual" for an actually complicated-to-explain thing.
I don’t understand this summary - isn’t this a summary of the author’s recitation of Masley’s position? It’s missing the part that actually matters: the author’s position and how it differs from Masley’s.
The main difference is that the computer you use for writing is not requiring you to pay for every word. And that's the difference in the business models being pushed right now all around the world.
I like this imaginary world you propose that gives free computers, free electricity, a free place to store it, and is free from danger from other tribes.
If an AI thinks for you, you're no longer "outsourcing" parts of your mind. What we call "AI" now is technically impressive but is not the end point for where AI is likely to end up. For example, imagine an AI that is smart enough to emotionally manipulate you, at what point in this interaction do you lose your agency to "outsource" yourself instead of acting as a conduit to "outsource" the thoughts of an artificial entity? It speaks to our collective hubris that we seek to create an intellectually superior entity and yet still think we'll maintain control over it instead of the other way around.
1. Do you think it's impossible for AI to have its own volition?
2. We don't have full control over the design of AI. Current AI models are grown rather than fully designed, the outcomes of which are not predictable. Would you want to see limits placed on AI until we had a better grasp of how to design AI with predictable behaviour?
I still read the LLMs output quite critically and I cringe whenever I do. LLMs are just plain wrong a lot of the time. They’re just not very intelligent. They’re great at pretending to be intelligent. They imitate intelligence. That is all they do. And I can see it every single time I interact with them. And it terrifies me that others aren’t quite as objective.
I usually feed my articles to it and ask for insight into what's working. I usually wait to initiate any sort of AI insight until my rough draft is totally done...
Working in this manner, it is so painfully clear it doesn't really follow the flow of the article. It misses so many critical details and just sorta fills in its own blanks wrong... When you tell it that it's missing a critical detail, it treats you like some genius, every single time.
It is hard for me to imagine growing up with it, and using it to write my own words for me. The only time I copy-paste AI-generated words to a fellow human is for totally generic customer-service-style replies, for questions I don't consider worthy of any real time.
AI has kinda taken away my flow state for coding, rare as it was... I still get it when writing stuff I am passionate about, and I can't imagine I'll ever wanna outsource that.
> And it terrifies me that others aren’t quite as objective.
I have been reminded constantly throughout this that a very large fraction of people are easily impressed by such prose. Skill at detecting AI output (in any given endeavour), I think, correlates with skill at valuing the same kind of work generally.
Put more bluntly: slop is slop, and it has been with us for far longer than AI.
I really enjoyed and agree with the majority of the article, but this was my nit as well. My hatred of vacation planning is often the reason I don't go on more vacations. It seems like automating a task that is experienced by the individual as completely monotonous ( and only affects that individual) would be a great example of something worth handing off to a text generator.
For me there’s a lot of risk in vacationing in a new area I have no idea about. ChatGPT helps me here.
It all comes down to people being comfortable in their own workflows; it takes mental load to change them, and then they work backwards to find reasons to justify not liking AI.
One perspective I’m circling right now about this topic is that maybe we’re coming to realize as a society that what we considered intelligence (or symbolic intelligence, whatever you want to call that thing we measure with traditional IQ tests, verbal fluency, etc.) is actually a far less essential cognitive aspect of us as humans than we had previously assumed, and is in fact far more mechanical in nature than we had formerly believed.
This ties with how I sometimes describe current generation AI as a form of mechanized intelligence: like Babbage’s calculating machine, but scaled up to be able to represent all kinds of classes of things.
And in this perspective that I’m circling these days, where I’m currently coming down is that maybe the effect of this realization will be something like the dichotomy outlined in the Dune series: namely, that between mechanized intelligence embodied by the Mentats and the more intuitive and prescient aspects of cognition embodied by the Bene Gesserit and Paul’s lineage.
A simple but direct way to describe this transition in perspective may be that we come to see what we formerly thought of as intelligence in the Western/reductive tradition as a form of mechanized calculation that it’s possible to outsource to automatic non-biological processes, and we start to lean more deeply into the more intuitive and prescient aspects of cognition.
One thing I’m reminded of is how Indian yogic texts describe various aspects of mind.
I’m not sure if it’s a one-to-one mapping because I’m not across that material but merely the idea of distinguishing between different aspects of mind is something with precedent; and central to that is the idea of removing association between self identity and the aspects of mind.
And so maybe one of the effects for us as a society will be something akin to that.
To his point: personally, I find it shifts 'where and when' I have to deal with the 'cognitive load'. I've noticed (at times) feeling more impatient, that I tend to skim the results more often, and that it takes a bit more mental energy to maintain my attention.
How many of you know how to do home improvement? Fix your own clothes? Grow your own food? Cook your own food? How about making a fire or shelter? People used to know all of those things. Now they don't, but we seem to be getting along in life fine anyway. Sure we're all frightened by the media at the dangers lurking from not knowing more, but actually our lives are fine.
The things that are actually dangerous in our lives? Not informing ourselves enough about science, politics, economics, history, and letting angry people lead us astray. Nobody writes about that. Instead they write about spooky things that can't be predicted and shudder. It's easier to wonder about future uncertainty than deal with current certainty.
Executive function is not the same as weaving or carpentry. The scary problem comes from people who are trying to abdicate their entire understand-and-decide phase to an outside entity.
What's more, that's not fundamentally a new thing, it's always been possible for someone to helplessly cling to another human as their brain... but we've typically considered that to be a mental-disorder and/or abuse.
I know how to cook! You open the freezer, grab a Hot Pocket, unwrap it, put it in the microwave, hit 2, and wait 3 minutes (it has to cool). That's what you meant, right?
I mean grill a steak, cook a chicken in the oven, chop some vegetables and prepare a salad, cook some pasta with a simple tomato sauce, etc. Do people really not know how to do this? It's not rocket science.
It seems wild to me to assume most people on HN don't know how to cook even basic stuff...
Systems used to be robust; now they’re fragile due to extreme outsourcing and specialization. I challenge the belief that we’re getting along fine. I argue that systems are headed for failure because of over-optimization that prioritized output over resilience.
A lot of this stuff depends on how a person chooses to engage, but my contrarian take is that actually throughout history whenever anyone said X technology will lead to the downfall of humanity for y reasons, that take was usually correct.
The article he references gives this example:
“Is it lazy to watch a movie instead of making up a story in your head?”
Yes, yes it is, this was a worry when we transitioned from oral culture to written culture, and I think it was probably prescient.
For many if not most people cultural or technological expectations around what skills you _have_ to learn probably have an impact on total capability. We probably lost something when Google Maps came out and the average person didn’t have to learn to read a map.
When we transitioned from paper and evening news to 24 hour partisan cable news, I think more people outsourced their political opinions to those channels.
> We probably lost something when Google Maps came out and the average person didn’t have to learn to read a map.
Even in my mid 30s I see this issue with people around my age. Even for local areas, it seems like no one really understands what direction they are heading, they just kinda toggle on the GPS and listen for what to do... forever?
On pretty much every modern GPS, there is a button to show the full route instead of just the current step the user is on (as well as keeping the map in a static orientation). I feel like just making that the default most of the time would help a ton of people.
When it comes to what we believe, humans see what they want to see. In other words, we have what Julia Galef calls a soldier mindset. From tribalism and wishful thinking, to rationalizing in our personal lives and everything in between, we are driven to defend the ideas we most want to believe--and shoot down those we don't.

But if we want to get things right more often, argues Galef, we should train ourselves to have a scout mindset. Unlike the soldier, a scout's goal isn't to defend one side over the other. It's to go out, survey the territory, and come back with as accurate a map as possible. Regardless of what they hope to be the case, above all, the scout wants to know what's actually true.

In The Scout Mindset, Galef shows that what makes scouts better at getting things right isn't that they're smarter or more knowledgeable than everyone else. It's a handful of emotional skills, habits, and ways of looking at the world--which anyone can learn. With fascinating examples ranging from how to survive being stranded in the middle of the ocean, to how Jeff Bezos avoids overconfidence, to how superforecasters outperform CIA operatives, to Reddit threads and modern partisan politics, Galef explores why our brains deceive us and what we can do to change the way we think.
> Social media has given me a rather dim view of the quality of people's thinking, long before AI. Outsourcing it could well be an improvement.
Cogito, ergo sum
The corollary is: absence of thinking equals non-existence. I don't see how that can be an improvement. Improvement can happen only when it's applied to the quality of people's thinking.
The converse need not hold. Cognition implies existence; it is sufficient but not necessary. Plenty of things exist without thinking.
(And that's not what the Cogito means in the first place. It's a statement about knowledge: I think therefore it is a fact that I am. Descartes is using it as the basis of epistemology; he has demonstrated from first principles that at least one thing exists.)
I know the trivialities. I didn't intend to make a general or formal statement; we're talking about people. In a competitive world, those who've been reduced to idiocracy won't survive. AI not only isn't going to help them, it will be used against them.
> Plenty of things exist without thinking.
Existence in an animal farm isn't human existence.
Thinking developed naturally as a tool that helps our species to stay dominant on the planet, at least on land. (Not by biomass but by the ability to control.)
If outsourcing thought is beneficial, those who practice it will thrive; if not, they will eventually cease to practice it, one way or another.
Thought, as any other tool, is useful when it solves more problems than it creates. For instance, an ability to move very fast may be beneficial if it gets you where you want to be, and detrimental, if it misses the destination often enough, and badly enough. Similarly, if outsourced intellectual activities miss the mark often enough, and badly enough, the increased speed is not very helpful.
I suspect that the best results would be achieved by outsourcing relatively small intellectual acts in a way that guarantees very rare, very small errors. That is, AI will become useful when AI becomes dependable, comparable to our other tools.
> If outsourcing thought is beneficial, those who practice it will thrive
It makes them prey to and dependent on those who are building and selling them the thinking.
> I suspect that the best results would be achieved by outsourcing relatively small intellectual acts in a way that guarantees very rare, very small errors. That is, AI will become useful when AI becomes dependable, comparable to our other tools.
That's like saying ultra processed foods provide the best results when eaten sparingly, so it will become useful when people adopt overall responsible diets. Okay, sure, but what does that matter in practice since it isn't happening?
I really liked this piece, and I share the concern, but I think “outsourcing thinking” is slightly the wrong frame.
In my own work, I found the real failure mode wasn’t using AI, it was automating the wrong parts. When I let AI generate summaries or reflections for me, I lost the value of the task. Not because thinking disappeared, but because the meaning-making did.
The distinction that’s helped me is: - If a task’s value comes from doing the thinking (reflection, synthesis, judgment), design AI as a collaborator, asking questions, prompting, pushing back. - If the task is execution or recall, automate it aggressively.
So the problem isn’t that we outsource thinking, it’s that we sometimes bypass the cognitive loops that actually matter. The design choice is whether AI replaces those loops or helps surface them.
I wrote more about that here if useful: https://jonmagic.com/posts/designing-collaborations-not-just...
Ever since Google experimented LLM in Gmail it bothers me alot. I firmly believe every word and the way you put them together portrays who you are. Using LLM for direct communication is harmful to human connections.
It can be. It can also not be. A friend of mine had a PITA boss. Thanks to ChatGPT he salvaged his relationship with him even though he hated working with him.
He went on to something else but his stress levels went way down.
All this is to say: I agree with you if the human connection is in good faith. If it isn’t then LLMs are helpful sometimes.
It sounds like that relationship was not supposed to be salvaged to begin with. ChatGPT perhaps prolonged the suffering of your friend, who ended up moving on in the end anyway. Perhaps the move was unnecessarily delayed.
My knee-jerk reaction is that outsourcing thinking and writing to an LLM is a defeat of massive proportions, a loss of authenticity in an increasingly less authentic world.
On the other hand, before LLMs came along, didn't we ask a friend or colleague for their opinion on an email we were about to write to our boss about an important professional or personal matter? I have been asked several times to give advice on the content and tone of emails or messages that some of my friends were about to send. On some occasions, I have written emails on their behalf.
Is it really any different to ask an LLM instead of me? Do I have a better understanding of the situation, the tone, the words, or the content to use?
I think there are a couple of differences here:
Firstly, when you ask a friend or colleague you're asking a favour that you know will take them some time and effort. So you save it for the important stuff, and the rest of the time you keep putting in the effort yourself. With an LLM it's much easier to lean on the assistance more frequently.
Secondly, I think when a friend is giving advice the responses are more likely to be advice, i.e. more often generalities like "you should emphasize this bit of your resume more strongly" or point fixes to grammar errors, partly because that's less effort and partly because "let me just rewrite this whole thing the way I would have written it" can come across as a bit rude if it wasn't explicitly asked for. Obviously you can prompt the LLM to only provide critique at that level, but it's also really easy to just let it do a lot more of the work.
But if you know you're prone to getting into conflicts in email, an LLM powered filter on outgoing email that flagged up "hey, you're probably going to regret sending that" mails before they went out the door seems like it might be a helpful tool.
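A minimal sketch of what such a filter might look like, assuming the OpenAI Python SDK; the model name, prompt, and threshold are placeholders I'm making up, not an existing product:

```python
# Sketch only: flag outgoing mail you might regret before it leaves the outbox.
# Assumes the OpenAI Python SDK; model name and threshold are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def regret_score(draft: str) -> float:
    """Ask the model to rate, 0-10, how likely the sender is to regret this email."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "Rate from 0 to 10 how likely the sender is to regret "
                        "sending this email. Reply with a single number."},
            {"role": "user", "content": draft},
        ],
    )
    return float(resp.choices[0].message.content.strip())

draft = "Frankly, this is the laziest proposal I have ever read."
if regret_score(draft) >= 7:  # arbitrary threshold
    print("You may want to sleep on this one before hitting send.")
```

The point is that the model critiques text you already wrote rather than writing it for you.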
"Firstly, when you ask a friend or colleague you're asking a favour that you know will take them some time and effort. So you save it for the important stuff, and the rest of the time you keep putting in the effort yourself. With an LLM it's much easier to lean on the assistance more frequently."
- I find this a point in favor of LLMs, not a flaw. It is a philosophical stance, one in which whatever does not require effort or time is intrinsically not valuable (see using GLP-1 peptides vs. sucking it up when trying to lose weight). Sure, it requires effort and dedication to clean your house, but given the means (money), wouldn't you prefer to have someone else clean your place?
"Secondly, I think when a friend is giving advice the responses are more likely to be advice"
- You can ask an LLM for advice instead of having it write for you directly, and instead of using the model's writing without further reflection. Here I find parallels with therapy, which in its modern version does not provide answers, but questions, means of investigation, and tools to better deal with the problems of our lives.
But if you ask people who go to therapy, the vast majority of them would much prefer to receive direct guidance (“Do this/don't do that”).
In the cases in which I wrote a message or email on behalf of someone else, I was asked to do it: can you write it for me, please? I even had to write recommendation letters for myself--I was asked to do that by my PhD supervisor.
I wasn't arguing that getting LLMs to do this is necessarily bad -- I just think it really is different from having in the past been able to ask other humans for help, and so that past experience isn't a reliable guide to whether we might find we have problems with unexpected effects of this new technology.
If you are concerned about possible harms in "outsourcing thinking and writing" (whether to an LLM or another human) then I think that the frequency and completeness with which you do that outsourcing matters a lot.
It all depends on the use one makes of it.
It can become an indispensable asset over time, or a tool that can be used at certain times to solve, for example, mundane problems that we have always found annoying and that we can now outsource, or a coaching companion that can help us understand something we did not understand before. Since humans are naturally lazy, most will default to the first option.
It's a bit like the evolution of driving. Today, only a small percentage of people are able to describe how an internal combustion engine works (<1%?), something that was essential in the early decades after the invention of the car. But I don't think that those who don't understand how an engine works feel that their driving experience is limited in any way.
Certainly, thinking and reasoning are universal tools, and it could be that in the near future we will find ourselves dumber than we were before, unable to do things that were once natural and intuitive.
But LLMs are here to stay, they will improve over time, and it may well be that in a few decades, the human experience will undergo a downgrade (or an upgrade?) and consist mainly of watching short videos, eating foods that are engineered to stimulate our dopamine receptors, and living a predominantly hedonistic life, devoid of meaning and responsibility. Or perhaps I am describing the average human experience of today.
Not really, he was looking for other jobs. One can't just be without a job unless they have enough savings which he didn't.
https://en.wikipedia.org/wiki/NPC_(meme)
This comment has made me glad for LLMs in Gmail. If someone is going to overanalyze my every word because he firmly believes it portrays who I am, I'd appreciate the layer of obfuscation between me and this creepazoid.
If your words don’t portray who you are, what does?
People make mistakes in the words they use, I often think “oops, I shouldn’t have said it like that”.
If said once, yes.
Actions? I generally judge people by what they do, not what they say - though of course I have to admit that saying things does fall under "doing something", if it's impactful.
The truth is that both words and actions communicate something, especially in combination. And sometimes words are the action.
Assuming you did not use an LLM to craft your comment, I’d say “case in point”.
What I am worried about (and it's something about regular internet search that has worried me for the past ~10 years or so) is that, after they've trained a generation of folks to rely on this tech, they're going to start inserting things into the training data (or whatever the method would be) to bias it towards favoring certain agendas wrt the information it presents to the users in response to their queries.
> after they've trained a generation of folks to rely on this tech ... bias it towards favoring certain agendas
previously, this happened with print media. Then it happened with the airwaves. It only makes logical sense that the trend continues with LLMs.
Basically, the fundamental issue is that the source of information is under someone else's control, and that someone will always have an agenda.
But with LLMs, it's crucial to try to change the trend. IMHO, it should be possible for a regular person to own their computing - this should include the LLM capability/hardware as well as the model(s). Without such capabilities, exactly the same thing will happen as has happened in the past with new technologies.
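To make the "own your computing" part concrete, here is a minimal sketch using the llama-cpp-python bindings against a GGUF model file you downloaded yourself; the path and model are just examples:

```python
# Sketch: local inference with llama-cpp-python; nothing leaves your machine.
# The model path points at whatever GGUF file you downloaded yourself.
from llama_cpp import Llama

llm = Llama(model_path="./models/example-7b-instruct.Q4_K_M.gguf")

out = llm(
    "Q: What is the capital of France? A:",
    max_tokens=48,
    stop=["Q:"],
)
print(out["choices"][0]["text"].strip())
```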
> it should be possible for a regular person to own their computing
And regular persons will not care about this and will select a model with biases of anyone who they deem "works better for me at this one task that I needed".
Just like you said:
> previously, this happened with print media. Then it happened with the airwaves. It only makes logical sense that the trend continues with LLMs.
I wish it wasn't so, but I have no idea how to make people care about not being under someone's control.
I worried about this a lot more at the tail end of 2023, when OpenAI's GPT-4 (out since March of that year) was still very clearly ahead of every other model. It briefly looked like control of the most useful model would stay with a single organization, giving it outsized influence over how LLMs shape human society.
I don't worry about that any more because there's so much competition: dozens of organizations now produce usable LLMs and the "best" is no longer static. We have frontier models from the USA, France (Mistral) and China now.
The risk of a model monopoly centralizing cultural power feels a lot lower now than it did a couple of years ago.
I don't think model competition is necessarily the fix to this issue. We're not even sure if the setup as it exists today will be the norm. It could be that other entities license out the models for their own projects which then become the primary contact point for users and LLMs. They are obviously going to want to fine-tune the models to their use-case and this could result in intentional commercial or ideological biases.
And commercial biases wouldn't necessarily be affected by competition in the way that you're describing. For example, if it becomes profitable for one of these companies to insert links to buy the ingredients at WalMart (or wherever) for the goulash recipe you asked for, that's going to become the thing that companies go after.
And all of this assumes that these biases will be obvious rather than subtle.
Model competition does nothing to address monopoly consolidation of compute. If you have control over compute, you can exert control over the masses. It doesn't matter how good my open source model is if I can't acquire the resources to run it. And I have no doubt that the big players will happily buy legislation to both entrench their compute monopoly/cartel and control what can be done using their compute (e.g. making it a criminal offence to build a competitor).
Model competition means that users have multiple options to choose from, so if it turns out one of the models has biases baked in, they can switch to another.
Which incentivizes the model vendors not to mess with the models in ways that might lose them customers.
> to bias it towards favoring certain agendas wrt the information it presents to the users in response to their queries.
Do you mean like Grok is already doing in such a ham-fisted way?
Absolutely. Like most things on the Internet, it will get enshittified. I think it is very likely that at some point there will be "ads" in the form of the chat bot giving recommendations that favor certain products and services.
This is already happening. People are conditioned to embrace capitalism, where a small percentage of the population are born into the owning class, and a majority who labour.
I think that's called feudalism. Or maybe our reality doesn't work the way we name it, and we are starting to have a different system despite what we keep calling it.
Having been told how my grandma had problems and was eventually ordered by the police to shut down her knitting production (done in her free time, in addition to regular work) in Communist Poland, I believe it's better to have a somewhat upgraded capitalism than to try to build a good communism just one more time.
It still earned her enough extra money to build a house in the city after moving out of the village.
Communism is neither the opposite of laissez-faire capitalism nor the only alternative.
The opposition to capitalism have such a disastrous track record, economically and in terms of body count, that embracing capitalism is far more sensible.
I'm not saying that the other systems, by which I assume you mean the various Marxist political projects, are good (and we won't even get into how many of those alternatives were actually not-capitalism), but I think dismissing the "body count" of capitalism while simultaneously ascribing all deaths under those alternative systems directly to {otherSystem} is extremely disingenuous. Doubly so given that modern first-world capitalism often outsources the human cost of its milieu to the third world so that middle-class suburbanites don't have to see the real price of their mass-produced lifestyles.
The alternative systems were just as willing to plunder their satellite states and the third world IIRC as the capitalists were so it would be an equal demerit for both, I'd think?
Modern Western countries mostly drifted towards a mix of capitalism and social democracy.
"modern first-world capitalism often outsources the human cost of it's milieu to the third world"
This is a bit of "damned if you do, damned if you don't".
If you don't do any business with poorer countries, you can be called a heartless isolationist who does not want to share any wealth and only hoards his money himself.
If you do business with poorer countries, but let them determine their own internal standards, you will be accused of outsourcing unpleasant features of capitalism out of your sight.
If you do business with poorer countries and simultaneously demand that they respect your standards in ecology, human rights etc., you will be accused of ideological imperialism and making impossible demands that a poorer country cannot realistically meet.
Pick your poison.
I really like the “reversibility” framing. The difference for me is whether the tool is doing a step (like “check my grammar” or “summarize these notes”) vs quietly doing the whole loop (generate→decide→act) while I just approve.
Once you’re mostly rubber-stamping, you stop building the internal model you’d need to notice when it’s subtly wrong — and you also stop practicing the taste/judgment layer that’s hard to recover.
This feels like a UI/design problem as much as a model problem: tools that start from your draft and offer diffs keep you in the loop; tools that start from a blank page train “acceptance,” not thinking.
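As a toy illustration of the diff-style interaction, using only the Python standard library (the draft text is made up):

```python
# Toy illustration: present the model's suggestion as a diff against *your* draft,
# so the human reviews changes instead of rubber-stamping a finished page.
import difflib

my_draft = [
    "We should delay the launch until the auth bug is fixed.\n",
    "I can have a patch ready by Friday.\n",
]
suggested = [
    "We should delay the launch until the authentication bug is fixed.\n",
    "A patch should be ready by Friday.\n",
]

for line in difflib.unified_diff(my_draft, suggested,
                                 fromfile="my_draft", tofile="suggestion"):
    print(line, end="")
```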
This is something I noticed myself. I let AI handle some of my project and later realized I didn't even understand my own project well enough to make decisions about it :)
But that's exactly what you should be doing, technically. Human in the loop is a dead concept, you should never need to understand your code or even know what changes to make. All you should be concerned about is having the best possible harness so your LLM can do everything as efficiently as possible.
If it gets stuck, use another LLM as the debugger. If that gets stuck then use another LLM. Turtles all the way down.
/s
No thinking allowed! Only vibes
(I don’t have the link to the video, but hopefully someone else remembers this joke)
I didn't get it (cat feeling high and looking around meme)
This list of things not to use AI for is so quaint. There's a story on the front page right now from The Atlantic: "Film students who can no longer sit through films". But why? Aren't they using social media, YouTube, Netflix, etc responsibly? Surely they know the risks, and surely people will be just as responsible with AI, even given the enormous economic and professional pressures to be irresponsible.
What is the lesson in the anecdote about film students? To me, it’s that people like the idea of studying film more than they like actually studying film. I fail to see the connection to social media or AI.
AI performs strictly in the Platonic world, as does the social media experience. As does the film student.
Yikes, that was too real
Social media's rotted their attention span
> Film students who can no longer sit through films
Everyone loves watching films until they get a curriculum with 100 of them along with a massive reading list, essays, and exams coming up.
I learned that when I decided to become a competitive Warcraft 3 player.
Apparently, my competitiveness lasts for a month.
Gaming is much more fun when you get to decide when to quit and how to play.
> surely people will be just as responsible with AI
That's exactly what worries us.
We lose something when we give up horses for cars.
Have too many of us outsourced our ability to raise horses for transport?
Surely you're capable of walking all day without break?
I am actually; we haven't owned a car for years. We also rarely watch TV and eschew social media, so I can still pay attention and analyze things.
But this makes me super weird! This is the whole point of social media bans for kids: if you make it optional, it'll still be prevalent and people making healthy choices will be social weirdos. Healthy paths need to be free and accessible, and things need to be built around them (eg don't assume everyone has a smartphone, etc)
It's a funnily relevant parallel you're making, because designing everything around the car has absolutely been one of the biggest catastrophes of 2nd half of the 20th century. Much like "AI" in the past couple years, the personal automobile is a useful tool but making anything and everything subservient towards its use has had catastrophic consequences.
It is political. Designing everything around cars benefits the class of people called "Car Owners". Not so much people who don't have the money or desire to buy a car.
Although, congestion pricing is a good counter-example. On the surface it looks like it is designed to benefit users of public transportation. But turns out it also benefits car-owners, because it reduces traffic jams and lets you get to your destination with your own car faster.
>Designing everything around cars benefits the class of people called "Car Owners".
Designing everything around cars hurts everyone including car owners. Having no option but to drive everywhere just sucks.
But the ad for my Cadillac says I'm an incredible person for driving it; that can't be wrong.
But having a car is kind of bad. Maybe you remember when everyone smoked, and there was stuff for smokers everywhere. Sure that made it easier for smokers, but ultimately that wasn't good for them (nor anyone around them).
No, it benefits car manufacturers and sellers, and mechanics and gas stations.
Network/snowball effects are not all good. If local businesses close because everybody drives to WalMart to save a buck, now other people around those local businesses also have to buy a car.
I remember a couple of decades ago when some bus companies in the UK were privatized, and they cut out the "unprofitable" feeder routes.
Guess what? More people in cars, and those people didn't just park and take the bus when they got to the main route, either.
>No, it benefits car manufacturers and sellers, and mechanics and gas stations.
Everybody thinks they're customers when they buy a car, but they're really the product. These industries, and others, are the real customers
> Everybody thinks they're customers
So much so that my comment attracted downvotes.
C'est la vie.
Perhaps the films just weren't worth sitting through?
Recently a side discussion came up - people in the Western world are "rediscovering" fermented, and pickled, foods that are still in heavy use in Asian cultures.
Fermentation was a great way to /preserve/ food, but it can be a bit hit and miss. Pickling can be outright dangerous if not done correctly - botulism is a constant risk.
When canning of foods came along it was a massive game changer, many foods became shelf stable for months or years.
Fermentation and pickling was dropped almost universally (in the West).
> Fermentation and pickling was dropped almost universally (in the West).
What are you talking about? What do you think pickles are? Or sauerkraut, for that matter?
Or cheese or beer?
They're making a (strong) comeback (although sauerkraut is still seen as "ethnic" in the anglosphere), sure
How often have you made them yourself? How often does your friend at work make them (if ever)?
Edit: I'm sure you can add to https://news.ycombinator.com/item?id=46733306
Pickles are in McDonald's burgers which is probably as mainstream across the globe as you can get.
The “lump of cognition” framing misses something important. It's not about how much thinking we do, but which thinking we stop doing. A lot of judgment, ownership, and intuition comes from boring or repetitive work, and outsourcing that isn't free. Lowering the cost of producing words clearly isn't the same as increasing the amount of actual thought.
I'm grateful that I spent a significant part of my life forced to solve problems and forced to struggle to produce the right words. In hindsight I know that that's where all the learning was. If I'd had a shortcut machine when I was young I'd have used it all the time, learned much less, and grown up dependent on it.
I'd argue that choosing words is a key skill because language is one of our tools for examining ideas and linking together parts of our brains in new ways.
Even just writing notes you'll never refer to again, you're making yourself codify vaguer ideas or impressions, test assumptions, and then compress the concept for later. It's a new external information channel between different regions of your head, which seems to provide value.
Looking at the words that get produced at this lowered cost, and observing how satisfactory they apparently are to most people (and observing the simplicity of the heuristics people use to try to root out "cheap" words), has been quite instructive (and depressing).
I think a good workflow is to use the LLM, at a minimum, to fix typos and grammar issues.
From there you can also go from first draft to feedback.
One bothersome aspect of generative assistance for personal and public communication not mentioned is that it introduces a lazy hedge, where a person can always claim that "Oh, but that was not really what I meant" or "Oh, but I would not express myself in that way" - and use it as a tool to later modify or undo their positions - effectively reducing honesty instead of increasing it.
> where a person can always claim that "Oh, but that was not really what I meant"
That already happens today - people previously blamed autocorrect or spell check instead of AI.
I don't accept these excuses as valid (even if they were real). It does not give them a valid out to change their mind, regardless of the source of the text.
Arguably, excusing oneself because of autocorrect is comparable to the classic "Dictated but not read" [0] disclaimer of old. Excusing oneself because an LLM wrote what was ostensibly your own text is more akin to confessing that your assistant wrote the whole thing and you tried to pass it off as your own without even bothering to read it.
[0] https://en.wikipedia.org/wiki/Dictated_but_not_read
Yep! However, the problem will increase by many orders of magnitude as the volume of generated content far surpasses the content touched by autocorrect mechanisms. In addition, autocorrect is a far more local modification that does not generate entire paragraphs or segments of content, which makes it harder to use as an excuse for large changes in meaning.
I agree that they make for poor excuses - but as generative content seeps into everything I fear it will become more commonly invoked.
> I fear it will become more commonly invoked.
Yep, but invoking it doesn't force you to accept it. The only thing you get to control is your own personal choices. That's why I am telling you not to accept it, and I hope that people reading this will consider this their default stance.
Never in my life would I accept that as a valid excuse. If you sent the mail, committed the code or whatever, you take responsibility for it. Anything else is just pathetic.
Are you embracing the fundamental attribution error?
Good question. I certainly commit that error sometimes, like everyone else. But the issue here is people using LLMs to write eg emails and then not taking responsibility for what they write. That has nothing to do with attribution, only accountability.
"I was having a bad day, my mother had just died" is a very valid explanation for a poorly worded email. "It was AI" is not.
You must be a delightful person to work with.
> If you sent the mail, committed the code or whatever, you take responsibility for it. Anything else is just pathetic.
Have you discussed this with your therapist?
I mean, he put it in what is IMO too harsh a way (e.g. “pathetic”), but I do think it raises the point: if you don't own up to your actions, then how can you be held accountable for anything?
Unless we want to live in a world where accountability is optional, I think taking responsibility for your actions is the only choice.
And to be honest, today I don’t know where we stand on this. It seems a lot of people don’t care enough about accountability but then again a lot of people do. That’s just my take.
Yes, thank you. I used "pathetic" to mean something that makes me feel sorry for them, not something despicable. I fully expect people to stand by what they write and not blame AI etc., but my comment came across as too aggressive.
I mean we're only human. We all make mistakes. Sure, some mistakes are worse than others but in the abstract, even before AI, who hasn't sent an email that they later regretted?
Yes, we all make mistakes. But when I make mistakes when sending an email you can be damn sure that they are my own mistakes which I take full accountability for.
Making mistakes and regretting is of course perfectly ok!
What I reacted to was blaming the LLM: "I am sorry, I meant it like this..." versus "it wasn't me, it was the AI".
Therapists are also supposed to take responsibility for their work.
I guess you got hung up on the word "pathetic". See my comment below; I used it not as "despicable" but rather as "something to feel sorry for". Indeed, people writing emails using LLMs and then blaming the AI for the consequences is something that makes me feel sorry for them.
Implying mental health issues? That makes me think you were triggered by my comment.
> The category of writing that I like to call "functional text", which are things like computer code and pure conveyance of information (e.g., recipes, information signs, documentation), is not exposed to the same issues.
I hate this take; computer code is just as rich in personality as writing. I can tell a tremendous amount about what kind of person someone is solely based on their code. Code is an incredibly personal expression of one's mental state, even if you might not realize it. LLMs have dehumanized this, and the functional outcomes become FAR more unpredictable.
I think we can make an analogy with our own brains, which have evolutionary older parts (limbic system) and evolutionary younger parts (neocortex). Now AI, I think it will be our new neocortex, another layer to our brain. And you can see limbic system didn't "outsource" thinking to neocortex - it's still doing it; but it can take (mostly good) advice from it.
Applying this analogy to human relationships - neocortex allowed us to be more social. Social communication with limbic system was mostly "you smell like a member of our species and I want to have sex with you". So having neocortex expanded our social skills to having friends etc.
I think AI will have a similar effect. It will allow us to individually communicate with large amount of other people (millions). But it will be a different relationship than what we today call "personal communication", face to face, driven by our neocortex. It will be as incomprehensible for our neocortex as our language is incomprehensible for the limbic system.
Very interesting, thanks for sharing this. After reading Karpathy's recent tweet about "A few random notes from claude coding quite [...]", I got thinking a lot about offloading thinking and, more specifically, failure. Failure is important for learning. When I use AI and it makes mistakes, I often tend to blame the AI and offload the failure. I think this post explores similar thoughts, without talking much about failure. It will be interesting to see the long-term effects.
I actually wrote up quite a few thoughts related to this a few days ago but my take is far more pessimistic: https://www.neilwithdata.com/outsourced-thinking
My fundamental argument: The way the average person is using AI today is as "Thinking as a Service" and this is going to have absolutely devastating long term consequences, training an entire generation not to think for themselves.
I think you hit the nail on the head. Without years of learning by doing, experience in the saddle as you put it, who would be equipped to judge or edit the output of AI? And as knowledge workers with hands-on experience age out of the workforce, who will replace us?
The critical difference between AI and a tool like a calculator, to me, is that a calculator's output is accurate, deterministic and provably true. We don't usually need to worry that a calculator might be giving us the wrong result, or an inferior result. It simply gives us an objective fact. Whereas the output of LLMs can be subjectively considered good or bad - even when it is accurate.
So imagine teaching an architecture student to draw plans for a house, with a calculator that spit out incorrect values 20% of the time, or silently developed an opinion about the height of countertops. You'd not just have a structurally unsound plan, you'd also have a student who'd failed to learn anything useful.
In the current situation, by vibing and YOLOing most problems, we are losing the very ability we still need and cannot replace with AI or other tools.
If you don't have building codes, you can totally yolo build a small house, no calculator needed. It may not be a great house, just like vibeware may not be great, but also, you have something.
I'm not saying this is ideal, but maybe there's another perspective to consider as well, which is lowering barriers to entry and increased ownership.
Many people can't/won't/don't do what it takes to build things, be it a house or an app, if they're starting from zero knowledge. But if you provide a simple guide they can follow, they might end up actually building something. They'll learn a little along the way, make it theirs, and end up with ownership of their thing. As an owner, change comes from you, and so you learn a bit more about your thing.
Obviously whatever gets built by a noob isn't likely to be of the same caliber as a professional who spent half their life in school and job training, but that might be ok. DIY is a great teacher and motivator to continue learning.
Contrast to high barriers to entry, where nothing gets built and nothing gets learned, and the user is left dependent on the powers that be to get what he wants, probably overpriced, and with features he never wanted.
If you're a rocket surgeon and suddenly outsource all your thinking to a new and unpredictable machine, while you get fat and lazy watching tv, that's on you. But for a lot of people who were never going to put in years of preparation just to do a thing, vibing their idea may be a catalyst for positive change.
To continue the analogy, there's also renting and the range of choices available. If there's no code and you can't build your own house, you're left with bad houses built by someone else. A house is more likely to be bad when the owner already knows he will not be living in it, since building it right can be expensive and time-consuming.
When slop becomes easier to make, there are a lot more people ready to push it onto others than people who try to produce genuine work, especially when the two are hard to distinguish superficially.
> If calculators returned even 99.9% correct answers, it would be impossible to reliably build even small buildings with them.
I think past successes have led to a category error in the thinking of a lot of people.
For example, the internet, and many constituent parts of the internet, are built on a base of fallible hardware.
But mitigated hardware errors, whether equipment failures, alpha particles, or other, are uncorrelated.
If you had three uncorrelated calculators that each worked 99.99% of the time, and you used them to check each other, you'd be fine.
But three seemingly uncorrelated LLMs? No fucking way.
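Rough numbers for the calculator case, assuming fully independent failures, which is exactly the assumption that LLMs trained on the same internet break:

```python
# Three independent devices, each wrong 0.01% of the time, checked against each other.
p = 1e-4  # per-device error rate (99.99% correct)

# An error can slip through only if at least two devices fail on the same input
# (and even then they would also have to agree on the same wrong answer,
# so this is an upper bound on undetected errors).
p_two_or_more_fail = 3 * p**2 * (1 - p) + p**3
print(p_two_or_more_fail)  # ~3e-08, roughly one slip per 33 million queries
```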
There's another category error compounding this issue: People think that because past revolutions in technology eventually led to higher living standards after periods of disruption, this one will too. I think this one is the exception for the reasons enumerated by the parent's blog post.
Agreed.
In point of fact, most technological revolutions have fairly immediately benefited a significant number of people in addition to those in the top 1% -- either by increasing demand for labor, or reducing the price of goods, or both.
The promise of LLMs is that they benefit people in the top 1% (investors and highly paid specialists) by reducing the demand for labor to produce the same stuff that was already being produced. There is an incidental initial increase in (or perhaps just reallocation of) labor to build out infrastructure, but that is possibly quite short-lived, and simultaneously drives a huge increase in the cost of electricity, buildings, and computer-related goods.
But the benefits of new technologies are never spread evenly.
When the technology of travel made remote destinations more accessible, it created tourist traps. Some well placed individuals and companies do well out of this, but typically, most people living near tourist traps suffer from the crowds and increased prices.
When power plants are built, neighbors suffer noise and pollution, but other people can turn their lights on.
We haven't yet begun to be able to calculate all the negative externalities of LLMs.
I would not be surprised if the best negative-externality comparisons were to the work of Thomas Midgley, who gifted the world both leaded gasoline and CFC refrigerants.
The LLMs are not uncorrelated, though, they're all trained on the same dataset (the Internet) and subject to most of the same biases
Agreed.
This is why I differentiated "uncorrelated" from "seemingly uncorrelated." Sorry if that wasn't clear.
It's funny, I'm working on trying to get LLMs to place electrical devices, and it silently developed opinions that my switches above countertops should be at 4 feet and not the 3'10 I'm asking for (the top cannot be above 4')
That's quite funny, and almost astonishing, because I'm not an architect, and that scenario just came out of my head randomly as I wrote it. It seemed like something an architect friend of mine who passed away recently, and was a big fan of Douglas Adams, would have joked about. Maybe I just channeled him from the afterlife, and maybe he's also laughing about it.
They tend to develop silent opinions based on rules of thumb, so it's not actually reasoning that my symbol is to the center not the top.
I fear for trying to get it to unlearn code from the last building code cycle when there's changes
On the other hand the incorrect values may drive architects to think more critically about what their tools are producing.
On the whole, not trusting one's own tools is a regression, not an advancement. The cognitive load it imposes on even the most capable and careful person can lead to all sorts of downstream effects.
There's an Isaac Asimov story where people are "educated" by programming knowledge into their brains, Matrix style.
A certain group of people have something wrong with their brain where they can't be "educated" and are forced to learn by studying and such. The protagonist of the story is one of these people and feels ashamed at his disability and how everyone around him effortlessly knows things he has to struggle to learn.
He finds out (SPOILER) that he was actually selected for a "priesthood" of creative/problem solvers, because the education process gives knowledge without the ability to apply it creatively. It allows people to rapidly and easily be trained on some process but not the ability to reason it out.
Do you remember the title of that story, by chance?
Profession (1957)
https://en.wikipedia.org/wiki/Profession_(novella)
Profession as sibling said, available here: https://www.inf.ufpr.br/renato/profession.html
The wikipedia entry also has link to the text but the above is nicer IMHO, just the raw text. From a previous HN discussion some weeks ago!
That would have devastating consequences in the pre-LLM era, yes. What is less obvious is whether it'll be an advantage or disadvantage going forward. It is like observing that cars will make people fat and lazy and have devastating consequences on health outcomes - that is exactly what happened but the net impact was still positive because cars boost wealth, lifestyles and access to healthcare so much that the net impact is probably positive even if people get less exercise.
It is unclear that a human thinking about things is going to be an advantage in 10, 20 years. Might be, might not be. In 50 years people will probably be outraged if a human makes an important decision without deferring to an LLM's opinion. I'm quite excited that we seem to be building scaleable superintelligences that can patiently and empathetically explain why people are making stupid political choices and what policy prescriptions would actually get a good outcome based on reading all the available statistical and theoretical literature. Screw people primarily thinking for themselves on that topic, the public has no idea.
If you told me this was a verbatim cautionary sci-fi short story from 1953 I'd believe it.
Perhaps Asimov in 1958?
https://en.wikipedia.org/wiki/The_Feeling_of_Power
That said, I maintain there are huge qualitative differences between using a calculator versus "hey computer guess-solve this mess of inputs for me."
At long last, we have created the Torment Nexus from classic sci-fi novel "Don't Create The Torment Nexus"!
Eh 1953 was more about what’s going to happen to the people left behind, e.g. Childhood’s End. The vast majority of people will be better off having the market-winning AI tell them what to do.
Or how about that vast majority gets a decent education and higher standard of living so they can spend time learning and thinking on their own? You and a lot of folks seem to take for granted our unjust economy and its consequences, when we could easily change it.
How is that relevant? You can give whatever support you like to humans, but machine learning is doing the same thing in general cognition that it has done in every competitive game. It doesn't matter how much education the humans get - if they try to make complex decisions using their brains, then silicon will outperform them at planning to achieve desirable outcomes. Material prosperity is a desirable outcome, and machines will be able to plot a better path to it than some trained monkey. The only question is how long it'll take to resolve the engineering challenges.
That is absurd and is not supported by any facts
There are some facts which makes it not outside the realm of possibility. Like computers being better at chess and go and giving directions to places or doing puzzles. (The picture-on-cardboard variety.)
You'd make a great dictator.
I think the comparison to giving change is a good one, especially given how frequently the LLM hype crowd uses the fictitious "calculator in your pocket" story. I've been in the exact situation you've described, long before LLMs came out and cashiers have had calculators in front of them for longer than we've had smartphones.
I'll add another analogy. I tell people when I tip I "round off to the nearest dollar, move the decimal place (10%), and multiply by 2" (generating a tip that will be in the ballpark of 18%), and am always told "that's too complicated". It's a 3 step process where the hardest thing is multiplying a number by 2 (and usually a 2 digit number...). It's always struck me as odd that the response is that this is too complicated rather than a nice tip (pun intended) for figuring out how much to tip quickly and with essentially zero thinking. If any of those three steps appear difficult to you then your math skills are below that of elementary school.
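For anyone who still finds that complicated, the whole procedure is three lines (the example bill is arbitrary):

```python
def quick_tip(bill: float) -> float:
    """Round the bill, move the decimal for 10%, then double it."""
    rounded = round(bill)        # round off to the nearest dollar
    ten_percent = rounded / 10   # move the decimal place
    return ten_percent * 2       # multiply by 2: a high-teens-to-20% ballpark

print(quick_tip(47.60))  # 48 -> 4.8 -> 9.6
```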
I also see a problem with how we look at math and coding. I hear so often "abstraction is bad" yet, that is all coding (and math) is. It is fundamentally abstraction. The ability to abstract is what makes humans human. All creatures abstract, it is a necessary component of intelligence, but humans certainly have a unique capacity for it. Abstraction is no doubt hard, but when in life was anything worth doing easy? I think we unfortunately are willing to put significantly more effort into justifying our laziness than we will to be not lazy. My fear is that we will abdicate doing worthwhile things because they are hard. It's a thing people do every day. So many people love to outsource their thinking. Be it to a calculator, Google, "the algorithm", their favorite political pundit, religion, or anything else. Anything to abdicate responsibility. Anything to abdicate effort.
So I think AI is going to be no different from calculators, as you suggest. They can be great tools to help people do so much. But it will be far more commonly used to outsource thinking, even by many people considered intelligent. Skills atrophy. It's as simple as that.
I briefly taught a beginner CS course over a decade ago, and at the time it was already surprising and disappointing how many of my students would reach for a calculator to do single-digit arithmetic; something that was a requirement to be committed to memory when I was still in school. Not surprisingly, teaching them binary and hex was extremely frustrating.
> I tell people when I tip I "round off to the nearest dollar, move the decimal place (10%), and multiply by 2" (generating a tip that will be in the ballpark of 18%), and am always told "that's too complicated".
I would tell others to "shift right once, then divide by 2 and add" for 15%, and get the same response.
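Spelled out with an arbitrary bill, that 15% rule is just 10% plus half of it again:

```python
def quick_15(bill: float) -> float:
    """'Shift right once, then divide by 2 and add': 10% plus half of that 10%."""
    ten_percent = bill / 10              # shift the decimal once
    return ten_percent + ten_percent / 2

print(quick_15(64.00))  # 6.4 + 3.2 = 9.6
```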
However, I'm not so sure I see the problem with thinking that abstraction is bad. Yes, abstraction is bad, because it is a way to hide and obscure the actual details, and one could argue that such dependence on opaque things, just like a calculator or AI, is the actual problem.
> shift right once, then divide by 2
So, shift right twice? ;)
I believe that collectively we passed that point long before the onset of LLMs. I have a feeling that throughout human history vast numbers of people were happy to outsource their thinking, and even to pay to do so. We just used to call those arrangements religions.
Religions may outsource opinions on morality, but no one went to their spiritual leader to ask about the Pythagorean theorem or the population of Zimbabwe.
Well, now, that's not actually true:
[1] https://plato.stanford.edu/entries/pythagoreanism/ [2] https://en.wikipedia.org/wiki/Pythia
Obviously I was using the Pythagorean theorem as a random not literal example. But I’m also curious about what you mean. Mind linking to the specific relevant parts? Linking to humongous articles doesn’t help much.
I was linking it partially tongue in cheek, but oracles and the auspices in antiquity were specifically not about morality. They were about predicting the future. If you wanted to know if you should invade Carthage on a certain day, you'd check the chickens. Literally. And plenty of medical practices were steeped in religious fare, too. If you go back further, a lot of shamanistic practices divine the facts about the present reality. In the words of Terrence McKenna, "[Shamans] cure disease (and another way of putting that is: they have a remarkable facility for choosing patients who will recover), they predict weather (very important), they tell where game has gone, the movement of game, and they seem to have a paranormal ability to look into questions, as I mentioned, who’s sleeping with who, who stole the chicken, who—you know, social transgressions are an open book to them." All very much dealing with facts, not morality.
With regards to Pythagoreanism, Pythagoras himself thought of mathematics in religious ways. From the entry on Pythagoras (https://plato.stanford.edu/entries/pythagoras/) in the SEP:
> The cosmos of the acusmata, however, clearly shows a belief in a world structured according to mathematics, and some of the evidence for this belief may have been drawn from genuine mathematical truths such as those embodied in the “Pythagorean” theorem and the relation of whole number ratios to musical concords.
There are numerous sections throughout both of these entries that discuss Pythagoras, mathematics, and religion. Plato too is another fruitful avenue, if you wanted to explore that further.
That’s a bit cynical. Religion is more like a technology. It was continuously invented to solve problems and increase capacity. Newer religions superseded older and survived based on productive and coercive supremacy.
If religion is a technology, it's inarguably one that prevented the development of a lot of other technologies for long periods of time. Whether that was a good thing is open to interpretation.
On the other hand it produced a lot of related technology. Calendars, mathematics, writing, agricultural practices, government and economic systems. Most of this stuff emerged as an effort to document and proliferate spiritual ideas.
I see your point, but I'd say religion's main technological purpose is as a storage system for the encoding of other technologies (and social patterns) into rituals, the reasons for which don't need to be understood; to the point that it actively discourages examination of their reasons, as what we could call an error-checking protocol. So a religion tends to freeze those technologies in the time at the point of inception, and to treat any reexamining of them as heresy. Calendars are useful for iron age farming, but you can't get past a certain point as a civilization if you're unwilling to reconsider your position that the sun and stars revolve around the earth, for example.
This is ahistorical, whiggish nonsense. The actual world is not a game of Civilization II.
Eh? I was talking about Galileo's trial for heresy.
> Can you audit/review/identify issues in a codebase if you've never written code?
Actual knowledge about systems works much better more often than not; LLMs are not sentient and still need to be driven to get decent results.
I'll say that I'm still kinda on the fence here, but I will point out that your argument is exactly the same as the argument against calculators back in the 70s/80s, computers and the internet in the 90s, etc.
You could argue that a lot of the people who grew up with calculators have lost any kind of mathematical intuition. I am always horrified at how bad a lot of people are with simple math, interest rates, and other things. This definitely opened up a lot of opportunities for companies to exploit this ignorance.
This implies that people had better mathematical intuition, on average, pre-calculator, which seems difficult to believe.
The difference is that a calculator always returns 2+2=4. And even then, if you ended up with 6 instead of 4, the fact that you know how to do addition already leads you to believe you fat-fingered the last entry and that 2+2 does not equal 6.
Can’t say the same for LLMs. Our teachers were right about the internet too, of course. If you remember those early wild-west internet school days, no one was using the internet to actually look up a good source. No one even knew what that meant. Teachers had to say “cite from these works or references we discussed in class” or they’d get junk back.
Right so apply the exact same logic to LLMs as you did to the internet.
At first the internet was unreliable. Nobody could trust the information it gave you. So teachers insisted that students only use their trusted sources. But eventually the internet matured and now it would be seen as ridiculous for a teacher to tell a student not to do research on the internet.
Now replace "the internet" with "LLMs".
To some extent, the argument against calculators is perfectly valid.
The cash register says you owe $16.23, you give the cashier $21.28, and all hell breaks loose.
My experience is more that you give €20.28 and the cashier asks you whether you have €1.
Um, no? The cashier punches your $21.28 into the register, and it tells her that she needs to give you $5.05 in change.
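Since the change-making arithmetic in that exchange trips people up, here's a minimal sketch (in Python, with the dollar amounts from this thread assumed) of why handing over the "odd" $21.28 makes sense: it turns your change into a five and a nickel instead of a fistful of coins.

    # Hypothetical illustration of the cashier exchange above.
    def change_breakdown(cents):
        """Greedy breakdown of change into common US denominations (in cents)."""
        denominations = [2000, 1000, 500, 100, 25, 10, 5, 1]
        breakdown = {}
        for d in denominations:
            breakdown[d], cents = divmod(cents, d)
        return {d: n for d, n in breakdown.items() if n}

    owed = 1623                              # $16.23 owed
    print(change_breakdown(2000 - owed))     # pay $20.00 -> $3.77 back: {100: 3, 25: 3, 1: 2}
    print(change_breakdown(2128 - owed))     # pay $21.28 -> $5.05 back: {500: 1, 5: 1}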
Too late. Outsourcing has already accomplished this.
No one is making cool shit for themselves. Everyone is held hostage ensuring Wall Street growth.
The "cross our fingers and hope for the best" position we find ourselves in politically is entirely due to labor capture.
The US benefited from a social-network topology of small businesses, with no single business being a linchpin that would implode everything.
Now the economy is a handful of too big to fails eroding links between human nodes by capturing our agency.
I argued as hard as I could against shipping electronics manufacturing overseas, so that the next generation would learn real engineering skills. But 20-something me had no idea how far up the political tree the decision was made back then. I helped train a bunch of people's replacements before the telecom-focused network hardware manufacturer I worked for then shut down.
American tech workers are now primarily cloud configurators and that's being automated away.
This is a decades long play on the part of aging leadership to ensure Americans feel their only choice is capitulate.
What are we going to do, start our own manufacturing business? Muricans are fish in a barrel.
And some pretty well connected people are hinting at similar sense of what's wrong: https://www.barchart.com/story/news/36862423/weve-done-our-c...
The interesting axis here isn’t how much cognition we outsource, it’s how reversible the outsourcing is. Using an LLM as a scratchpad (like a smarter calculator or search engine) is very different from letting it quietly shape your writing, decisions, and taste over years. That’s the layer where tacit knowledge and identity live, and it’s hard to get back once the habit forms.
We already saw a softer version of this with web search and GPS: people didn’t suddenly forget how to read maps, but schools and orgs stopped teaching it, and now almost nobody plans a route without a blue dot. I suspect we’ll see the same with writing and judgment: the danger isn’t that nobody thinks, it’s that fewer people remember how.
Yet it does feel different with LLMs compared to your examples. Yes, people can’t navigate without Apple/Google maps, but that’s still very different from losing critical thinking skills.
That said, LLMs are perhaps accelerating that but aren’t the only cause (lack of reading, more short form content, etc)
How is navigation not critical thinking? Anyone should be able to use a map to plan a route. Navigation is critical to survival, imo.
> it’s hard to get back once the habit forms.
Humans are highly adaptable. It's hard to go back while the thing we're used to still exists, but if it vanished from the world we'd adapt within a few weeks.
The author says it's too long. So let's tighten it up.
A criticism of the use of large language models (LLMs) is that it can deprive us of cognitive skills. Are some kinds of use better than others? Andy Masley's blog says "thinking often leads to more things to think about", so we shouldn't worry about letting machines do the thinking for us; we will be able to think about other things.
My aim is not to refute all his arguments, but to highlight issues with "outsourcing thinking".
Masley writes that it's "bad to outsource your cognition when it:"
- Builds tacit knowledge you'll need in future.
- Is an expression of care for someone else.
- Is a valuable experience on its own.
- Is deceptive to fake.
- Is focused on a problem that is deathly important to get right, and where you don't totally trust who you're outsourcing it to.
How we choose to use chatbots is about how we want our lives and society to be.
That's what he has to say. Plus some examples, which help make the message concrete. It's a useful article if edited properly.
I think this summary is oversimplifying: the rest of the blog post elaborates on how the author and Masley have completely different interpretations of that bullet-point list. The rest of the text is not only examples; it elaborates on the thought processes that led him to his conclusions. I found the nuancing of the two opposing interpretations, not the conclusion, the most enjoyable part of the post.
(This comment could also be shortened to “that’s oversimplifying”. I think my longer version is both more convincing and enjoyable.)
I feel like your comment is in itself a great analogy for the "beware of using LLMs in human communication" argument. LLMs are in the end statistical models that regress to the mean, so they by design flatten out our communication, much like a reductionist summary does. I care about the nuance that we lose when communicating through "LLM filters", but others don't, apparently.
That makes for a tough discussion, unfortunately. I see a lot of value lost by having LLMs in email clients, and I don't observe the benefit; LLMs are a net time sink because I have to rewrite their output myself anyway. Proponents seem not to see any value loss, and they do observe an efficiency gain.
I am curious to see how the free market will value LLM communication. Will the lower quality, higher quantity be a net positive for job seekers sending applications or sales teams nursing leads? The way I see it either we end up in a world where eg job matching is almost completely automated, or we find an effective enough AI spam filter and we will be effectively back to square one. I hope it will be the latter, because agents negotiating job positions is bound to create more inequality, with all jobs getting filled by applicants hiring the most expensive agent.
Either way, so much compute and human capital will go wasted.
> Proponents seem to not see any value loss, and they do observe an efficiency gain.
You get to start by dumping your raw unfiltered emotions into the text box and have the AI clean it up for you.
If you're in customer support and have to deal all day long with dumbasses who are too stupid to read the fucking instructions, I imagine being able to type that out, and then have the AI remove the profanity and not insult customers, would be rather cathartic. Then substitute "read the manual" for something actually complicated to explain.
> You get to start by dumping your raw unfiltered emotions into the text box and have the AI clean it up for you.
Anyone semi-literate can write down what they're feeling.
It's sometimes called "journaling".
Thinking through what they've written, why they've written it, and whether they should do anything about it is often called "processing emotions."
The AI can't do that for you. The only way it could would be by taking over your brain, but then you wouldn't be you any more.
I think using the AI to skip these activities would be very bad for the people doing it.
It took me decades to realize there was value in doing it, and my life changed drastically for the better once I did.
I don’t understand this summary - isn’t this a summary of the author’s recitation of Masley’s position? It’s missing the part that actually matters: the author’s position and how it differs from Masley’s.
Yep - it honestly reads like an LLM’s summary, which often misses critical nuances.
I know, especially with the bullet points.
The meat there is when not to use an LLM. The author seems to mostly agree with Masley on what's important.
It actually isn’t very long. I was expecting it to be much longer after the author’s initial warning.
This here is why I always read the comments /first/ on HN
Outsourcing thinking is exactly what I do with our developers: they are hired to do the kind of thinking I’d rather not do.
Some of humanity’s most significant inventions are language (symbolic communication), writing, the scientific method, electricity, the computer.
Notice something subtle.
Early inventions extend coordination. Middle inventions extend memory. Later inventions extend reasoning. The latest inventions extend agency.
This suggests that human history is less about tools and more about outsourcing parts of the mind into the world.
The main difference is that the computer you use for writing does not require you to pay for every word. And that's the difference in the business models being pushed right now all around the world.
I like this imaginary world you propose that gives free computers, free electricity, a free place to store it, and is free from danger from other tribes.
Sign me up for this utopia.
If an AI thinks for you, you're no longer "outsourcing" parts of your mind. What we call "AI" now is technically impressive but is not where AI is likely to end up. For example, imagine an AI that is smart enough to emotionally manipulate you: at what point in that interaction do you stop "outsourcing" your own thinking and instead become a conduit for the thoughts of an artificial entity? It speaks to our collective hubris that we seek to create an intellectually superior entity and yet still think we'll maintain control over it instead of the other way around.
> we seek to create an intellectually superior entity and yet still think we'll maintain control over it instead of the other way around.
Intellect is not the same thing as volition.
> Intellect is not the same thing as volition.
Two questions...
1. Do you think it's impossible for AI to have its own volition?
2. We don't have full control over the design of AI. Current AI models are grown rather than fully designed, the outcomes of which are not predictable. Would you want to see limits placed on AI until we had a better grasp of how to design AI with predictable behaviour?
There's a parallel there to drugs. They are most definitely not "intelligent", yet they can still destroy our agency or free-will.
Surely we can do better than reading TFA and manually commenting on it.
We are going to be able to think plenty about other things than what we are doing, yes. That is called anxiety.
I still read the LLMs output quite critically and I cringe whenever I do. LLMs are just plain wrong a lot of the time. They’re just not very intelligent. They’re great at pretending to be intelligent. They imitate intelligence. That is all they do. And I can see it every single time I interact with them. And it terrifies me that others aren’t quite as objective.
I usually feed my articles to it and ask for insight into what's working. I usually wait to initiate any sort of AI insight until my rough draft is totally done...
Working in this manner, it is so painfully clear it doesn't even really follow the flow of the article. It misses so many critical details and just sorta fills in its own blanks wrong... When you tell it that it's missing a critical detail, it treats you like some genius, every single time.
It is hard for me to try to imagine growing up with it, and using it to write my own words for me. The only time I copy-paste AI-generated words to a fellow human is for totally generic customer-service-style replies, for questions I don't consider worthy of any real time.
AI has kinda taken away my flow state for coding, rare as it was... I still get it when writing stuff I am passionate about, and I can't imagine I'll ever wanna outsource that.
> When you tell it that its missing a critical detail, it treats you like some genius, every single time.
Yeah, or as I say, Uriah Heep.
To be fair, telling everybody they are geniuses is the obvious next step after participation awards.
Because people have figured out that participation awards are worthless, so let's give them all first place.
> And it terrifies me that others aren’t quite as objective.
I have been reminded constantly throughout this that a very large fraction of people are easily impressed by such prose. Skill at detecting AI output (in any given endeavour), I think, correlates with skill at valuing the same kind of work generally.
Put more bluntly: slop is slop, and it has been with us for far longer than AI.
Not to nitpick, but I find his point about automating vacation planning with AI so silly.
Apparently he thinks of planning a vacation as some kind of artistic expression.
I really enjoyed and agree with the majority of the article, but this was my nit as well. My hatred of vacation planning is often the reason I don't go on more vacations. It seems like automating a task that the individual experiences as completely monotonous (and that only affects that individual) would be a great example of something worth handing off to a text generator.
You might want to read this then: https://www.rnz.co.nz/news/top/585370/ai-on-australian-trave...
For me there’s a lot of risk in vacationing in a new area I have no idea about. ChatGPT helps me here.
It all comes down to people who are comfortable in their own workflows; it takes mental load to change them, and then they find reasons to work backwards and justify not liking AI.
One perspective I’m circling right now on this topic is that maybe we’re coming to realize as a society that what we considered intelligence (or symbolic intelligence, whatever you wanna call that thing we measure with traditional IQ tests, verbal fluency, etc.) is actually a far less essential cognitive aspect of us as humans than we had previously assumed, and is in fact far more mechanical in nature than we had formerly believed.
This ties with how I sometimes describe current generation AI as a form of mechanized intelligence: like Babbage’s calculating machine, but scaled up to be able to represent all kinds of classes of things.
And in this perspective that I’m circling these days, where I’m currently coming down is that maybe the effect of this realization will be something like the dichotomy outlined in the Dune series: namely, that between mechanized intelligence, embodied by the mentats, and the more intuitive and prescient aspects of cognition, embodied by the Bene Gesserit and Paul’s lineage.
A simple but direct way to describe this transition in perspective may be that we come to see what we formerly thought of as intelligence in the Western/reductive tradition as a form of mechanized calculation that it’s possible to outsource to automatic non-biological processes, and we start to lean more deeply into the more intuitive and prescient aspects of cognition.
One thing I’m reminded of is how Indian yogic texts describe various aspects of mind.
I’m not sure if it’s a one-to-one mapping because I’m not across that material but merely the idea of distinguishing between different aspects of mind is something with precedent; and central to that is the idea of removing association between self identity and the aspects of mind.
And so maybe one of the effects for us as a society will be something akin to that.
Interesting read..
To his point: personally, I find it shifts 'where and when' I have to deal with the 'cognitive load'. I've noticed (at times) feeling more impatient, that I tend to skim the results more often, and that it takes a bit more mental energy to maintain my attention..
Great blog post, and I fully agree. The human touch in communication and reflection cannot be emphasized enough.
How many of you know how to do home improvement? Fix your own clothes? Grow your own food? Cook your own food? How about making a fire or shelter? People used to know all of those things. Now they don't, but we seem to be getting along in life fine anyway. Sure we're all frightened by the media at the dangers lurking from not knowing more, but actually our lives are fine.
The things that are actually dangerous in our lives? Not informing ourselves enough about science, politics, economics, history, and letting angry people lead us astray. Nobody writes about that. Instead they write about spooky things that can't be predicted and shudder. It's easier to wonder about future uncertainty than deal with current certainty.
Executive function is not the same as weaving or carpentry. The scary problem comes from people who are trying to abdicate their entire understand-and-decide phase to an outside entity.
What's more, that's not fundamentally a new thing, it's always been possible for someone to helplessly cling to another human as their brain... but we've typically considered that to be a mental-disorder and/or abuse.
> How many of you know how to [...] cook your own food?
That's a very low bar. I expect most people know how to cook, at least simple dishes.
I know how to cook! You open the freezer, grab a Hot Pocket, Unwrap it, put it in the microwave, hit 2, and wait 3 minutes (it has to cool). That's what you meant, right?
Some people really don't.
I mean grill a steak, cook a chicken in the oven, chop some vegetables and prepare a salad, cook some pasta with a simple tomato sauce, etc. Do people really not know how to do this? It's not rocket science.
It seems wild to me to assume most people on HN don't know how to cook even basic stuff...
Systems used to be robust, now they’re fragile due to extreme outsourcing and specialization. I challenge the belief that we’re getting along fine. I argue systems are headed to failure, because of over optimization that prioritized output over resilience.
A lot of this stuff depends on how a person chooses to engage, but my contrarian take is that, throughout history, whenever anyone said technology X will lead to the downfall of humanity for reasons Y, that take was usually correct.
The article he references gives this example:
“Is it lazy to watch a movie instead of making up a story in your head?”
Yes, yes it is, this was a worry when we transitioned from oral culture to written culture, and I think it was probably prescient.
For many if not most people, cultural or technological expectations around what skills you _have_ to learn probably have an impact on total capability. We probably lost something when Google Maps came out and the average person didn’t have to learn to read a map.
When we transitioned from paper and evening news to 24 hour partisan cable news, I think more people outsourced their political opinions to those channels.
> We probably lost something when Google Maps came out and the average person didn’t have to learn to read a map.
Even in my mid 30s I see this issue with people around my age. Even for local areas, it seems like no one really understands what direction they are heading, they just kinda toggle on the GPS and listen for what to do... forever?
On pretty much every modern GPS, there is a button to show the full route instead of the current step the user is on (as well as keeping it in a static orientation). I feel like just that being the default most of the time would help a ton of people.
Distributed verification. 8 billion of us can divide up the topics and subjects and pool together our opinions and best conclusions.
What is that saying again, a person is smart, a group is dumb?
That's the risk involved with opinions and conclusions.
https://www.goodreads.com/book/show/42041926-the-scout-minds...
When it comes to what we believe, humans see what they want to see. In other words, we have what Julia Galef calls a soldier mindset. From tribalism and wishful thinking, to rationalizing in our personal lives and everything in between, we are driven to defend the ideas we most want to believe--and shoot down those we don't. But if we want to get things right more often, argues Galef, we should train ourselves to have a scout mindset. Unlike the soldier, a scout's goal isn't to defend one side over the other. It's to go out, survey the territory, and come back with as accurate a map as possible. Regardless of what they hope to be the case, above all, the scout wants to know what's actually true. In The Scout Mindset, Galef shows that what makes scouts better at getting things right isn't that they're smarter or more knowledgeable than everyone else. It's a handful of emotional skills, habits, and ways of looking at the world--which anyone can learn. With fascinating examples ranging from how to survive being stranded in the middle of the ocean, to how Jeff Bezos avoids overconfidence, to how superforecasters outperform CIA operatives, to Reddit threads and modern partisan politics, Galef explores why our brains deceive us and what we can do to change the way we think.
Linus's law is the assertion that "given enough eyeballs, all bugs are shallow".
"A person is smart, people are dumb." I heard this for the first time from Men in Black, lol.
See Scott Alexander’s The Whispering Earring (2012):
https://gwern.net/doc/fiction/science-fiction/2012-10-03-yva...
Wasn't there a follow-up to this where Scott denied that the story was "about" the obvious thing for it to be about?
Social media has given me a rather dim view of the quality of people's thinking, long before AI. Outsourcing it could well be an improvement.
> Social media has given me a rather dim view of the quality of people's thinking, long before AI. Outsourcing it could well be an improvement.
Cogito, ergo sum
The corollary is: absence of thinking equals non-existence. I don't see how that can be an improvement. Improvement can happen only when it's applied to the quality of people's thinking.
The converse need not hold. Cognition implies existence; it is sufficient but not necessary. Plenty of things exist without thinking.
(And that's not what the Cogito means in the first place. It's a statement about knowledge: I think therefore it is a fact that I am. Descartes is using it as the basis of epistemology; he has demonstrated from first principles that at least one thing exists.)
I know the trivialities. I didn't intend to make a general or formal statement, we're talking about people. In a competitive world, those who've been reduced to idiocracy won't survive, AI not only isn't going to help them, it will be used against them.
> Plenty of things exist without thinking.
Existence in an animal farm isn't human existence.
Thinking developed naturally as a tool that helps our species to stay dominant on the planet, at least on land. (Not by biomass but by the ability to control.)
If outsourcing thought is beneficial, those who practice it will thrive; if not, they will eventually cease to practice it, one way or another.
Thought, as any other tool, is useful when it solves more problems than it creates. For instance, an ability to move very fast may be beneficial if it gets you where you want to be, and detrimental, if it misses the destination often enough, and badly enough. Similarly, if outsourced intellectual activities miss the mark often enough, and badly enough, the increased speed is not very helpful.
I suspect that the best results would be achieved by outsourcing relatively small intellectual acts in a way that guarantees very rare, very small errors. That is, AI will become useful when AI becomes dependable, comparable to our other tools.
> If outsourcing thought is beneficial, those who practice it will thrive
It makes them prey to and dependent on those who are building and selling them the thinking.
> I suspect that the best results would be achieved by outsourcing relatively small intellectual acts in a way that guarantees very rare, very small errors. That is, AI will become useful when AI becomes dependable, comparable to our other tools.
That's like saying ultra processed foods provide the best results when eaten sparingly, so it will become useful when people adopt overall responsible diets. Okay, sure, but what does that matter in practice since it isn't happening?
Outsourcing thinking is not a skill. It is the same as skipping the gym. Nothing to practice here.
A lot of people practice not going to a gym! I bet it reflects e.g. on their dating outcomes, at least statistically.
I suspect that outsourcing thinking may reflect on quite some outcomes, too. We just need time to gather the statistics.