From one perspective, what made us unique was our ability to get a hint of the future: pattern recognition, superstition, religion, and then science gave us a grasp of the likely outcome of events, usually ones following our own actions. Things bring bad luck, or are a sin, or should not be done because of some negative outcome. So, following that line of thinking, what will come next are better predictive capabilities, taking more things out of the realm of guessing or "this is random" and making them more deterministic. Think of psychohistory predicting what will happen to cultures and civilizations over centuries, given the current state of things.
Anyway, the idea that "we have AI, so soon there will be nothing left to discover" is similar to the belief at the end of the XIX century that everything had been discovered and only increasing precision was left. At the very least, we have a lot to learn about ourselves and about how we understand reality, in light of what AI could uncover with methods different from the traditional ones.
> XIX century
I’m curious. Why did you write it this way vs. “19th”? People from 400 AD to 1400 AD used to write it that way. I’m assuming you’re either very old or a history buff.
Or a human doing the LLM trick of recalling how the first reference I read about it was written.
> So, following that line of thinking, what will come next are better predictive capabilities
You can also view science as a rejection of the ability to predict (arbitrary) things. Any illusion otherwise is simply seemingly reliable knowledge of the past and present. The rise of, e.g., disinformation and misinformation, siloed communication, and the replication crisis could presage a future where confidence is generally lower than in the past, and predictive power is more limited.
I caution heavily against the idea that what you perceive as "progress" is inevitable or will follow past trends.
Reality is complicated. The future may be unknowable in a strict sense. But educated guesses are better than random ones. Not for lotto numbers, but for making better decisions. Deciding that everything is potentially false, biased, or unreliable, and so doing whatever your gut (which is also biased) tells you, may lead to a worse outcome.
There was an attitude at a university about 20 years ago when I was an undergrad, around, hmm, stochastic learning algorithms. And the attitude was, "we don't care why or how it works - we want to make the outcome happen".
I found it intellectually reprehensible then, and now.
> "we don't care why or how it works - we want to make the outcome happen".
That's the primary difference between science and engineering.
In science, understanding how it works is critical, and doing something with that understanding is optional. In engineering, getting the desired outcome is critical, and understanding why it works is optional.
> With the emergence of AI in science, we are witnessing the prelude to a curious inversion – our human ability to instrumentally control nature is beginning to outpace human understanding of nature, and in some instances, appears possible without understanding at all.
A while ago I read "Against Method" by Paul Feyerabend and there's a section that really stuck with me, where he talks about the "myth" of Galileo. His point is that Galileo serves as sort of the mythological prototype of a scientist, and that by picking at the loose ends of the myth one can identify some contradictory elements of the popular conception of the "scientific method". One of his main points of contention is Galileo's faith in the telescope, his novel implementation of bleeding-edge optics technology. Feyerabend argues that Galileo developed his telescope primarily as a military instrument: it revolutionized the capabilities of artillery guns (and specifically naval artillery). Having secured his finances with some wealthy patrons, he then began to hunt for nobler uses of his new tool, and landed on astronomy.
Feyerabend's point (and what I'm slowly working up to) is that applying this new (and untested) military tool to what was a very ancient and venerable domain of inquiry was actually kind of scandalous. Up until that point all human knowledge of astronomy had been generated by direct observation of the phenomena; by introducing this new tool between the human and the stars, Galileo was creating a layer of separation which had never been there before, and this was the source of much of the contemporary controversy that led to his original censure. It was one thing to base your cosmology on what could be detected by the human eye, but it seemed very "wrong" (especially to the church) to insert an unfeeling lump of metal and glass into what had before been a very "pure" interaction, one totally comprehensible to the typical educated human.
I feel like this article is expressing a very similar fear, and I furthermore think that it's kind of "missing the point" in the same way. Human comprehension is frequently augmented by technology; no human can truly "understand" a gravitational wave experientially. At best we understand the n-th order 'signs' that the phenomenon imprints on the tools we construct. I'd argue that LLMs play a similar role in their application in math, for example. It's about widening our sensor array, more than it is delegating the knowledge work to a robot apprentice.
Fascinating point, and one I think can definitely apply here.
Though there is a key difference – Galileo could see through his telescope the same way, every time. He also understood what the telescope did to deliver his increased knowledge.
Compare this with LLMs, which provide different answers every time, and whose internal mechanisms are poorly understood. It presents another level of uncertainty which further reduces our agency.
> Though there is a key difference – Galileo could see through his telescope the same way, every time.
Actually this is a really critical error: a core point of contention at the time was that he didn't see the same thing every time. Small variations in lens quality, weather conditions, and user error all contributed to the discovery of what we now call "instrument noise" (not to mention natural variation in the astronomical system which we just couldn't detect with the naked eye, for example the rings of Saturn). Indeed this point was so critical that it led to the invention of least-squares curve fitting (which, ironically, is how we got to where we are today). OLS allowed us to "tame" the parts of the system that we couldn't comprehend, but it was emphatically not a given that telescopes had inter-measurement reliability when they first debuted.
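(As a concrete aside, not from the original comment: for repeated noisy measurements of a single quantity, the least-squares estimate under a constant model is just the sample mean, the value that minimizes the sum of squared residuals. A minimal sketch in Python, with made-up numbers:)

    import numpy as np

    # Simulated repeated measurements of one angle, each corrupted by
    # "instrument noise" (lens quality, weather, user error).
    rng = np.random.default_rng(0)
    true_angle = 23.5
    measurements = true_angle + rng.normal(0.0, 0.3, size=50)

    # Ordinary least squares with a constant model: the estimate that
    # minimizes the sum of squared residuals is the sample mean.
    estimate = measurements.mean()
    residuals = measurements - estimate
    print(f"OLS estimate: {estimate:.3f}")
    print(f"sum of squared residuals: {(residuals ** 2).sum():.3f}")

The same machinery generalizes from a constant to lines and orbital parameters, which is the "taming" of incomprehensible variation referred to above.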
LLMs can be deterministic machines: you just need to control the random seeds and run them on the same hardware to avoid numerical differences.
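For instance, a minimal sketch of what "control the seeds and decode deterministically" could look like, assuming the Hugging Face transformers library (the model name and prompt are just placeholders):

    from transformers import AutoModelForCausalLM, AutoTokenizer, set_seed

    set_seed(0)  # fix the Python / NumPy / torch RNGs

    name = "gpt2"  # example model; any causal LM works the same way
    tok = AutoTokenizer.from_pretrained(name)
    model = AutoModelForCausalLM.from_pretrained(name)

    inputs = tok("The telescope showed", return_tensors="pt")
    # Greedy decoding removes sampling randomness entirely; with
    # do_sample=True, the fixed seed makes the sampling reproducible.
    out = model.generate(**inputs, max_new_tokens=20, do_sample=False)
    print(tok.decode(out[0], skip_special_tokens=True))

    # Re-running this with the same library versions on the same hardware
    # should give identical output; different GPUs or kernels can still
    # introduce floating-point differences that change the result.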
Gradient descent is not a total black box, although it works so well as to be unintuitive. There is ongoing "interpretability" research too, with several key results already.
Deterministic doesn't necessarily mean it can be understood by a human mind. You can imagine a process that is entirely deterministic but so complex, with so many moving parts (and probably chaotic), that a humble human cannot comprehend it.
I have been meaning to read Feyerabend for a while but never have. I think Against Method sounds like a good starting point.
Did Feyerabend not also argue that Galileo's claim that Copernicus's theory was proven was false, given that it was not the hypothesis best supported by the evidence available at the time?
I very much agree with your last paragraph. Telescopes are comprehensible.
> Did Feyerabend not also argue that Galileo's claim that Copernicus's theory was proven was false
My reading of AM was that it's less about what's "true" or "false" and more about how the actual structure of the scientific argument compares to what's claimed about it. The (rough) point (as I understand it) is that Galileo's scientific "findings" were motivated by human desires for wealth and success (what we might call historically contingent or "political" factors) as much as they were by "following the hard evidence".
> Telescopes are comprehensible.
"Comprehensible" is a relative measure, I think. Incomprehensible things become comprehensible with time and familiarity.
> With the emergence of AI in science, we are witnessing the prelude to a curious inversion – our human ability to instrumentally control nature is beginning to outpace human understanding of nature, and in some instances, appears possible without understanding at all.
This is not entirely new. For example, we had working (if inefficient) steam engines and pumps long before the development of thermodynamics. We had beer and cheese long before microbiology.
I guess after AI figures out how many r's there are in Strawberry it'll move on to quantum gravity.
I think it's probably best to wait until we get to science, and then figure out an after.
The whole article seemed a little tautological to me.
You could say: Scientific advances have massively accelerated with the use of the new tool of electricity, but there are serious concerns about the "black-box" nature of electricity, since no one has ever answered the question "what is charge?".
Modern semiconductors depend on quantum effects that no one has ever "explained", but they are highly repeatable, and make useful predictions that can be confirmed.
My expectation is that every advance cited in the article, and attributed to LLMs, is in fact the output of a team of human scientists using LLMs as a tool to expand their scope and increase productivity.
All of these examples are actually human endeavors.
The only qualitative difference I see is that LLMs are a human invention, whereas electricity and quantum effects are natural phenomena that were discovered, and utilized as tools, by humans.
While LLMs, and subsequent s/w advances, may well lead us into new, even unexpected, realms of science, the need to confirm repeatable results and verify the accuracy of predictions will always be necessary.
As such, I would still call this science, not "after"...
To be pedantic, engineering.
Philosophy
Blindness?