You're still too vague. Do you mean the less-than-100-year-old peer review system? And that it's better than all the scientific discoveries of the past thousands of years?
Science doesn't provide a priest who will show up and sit with you in your time of grief or despair at handling the unpredictable. Priests in all religions are trained to occupy that space. And that is the prime reason religions have survived for thousands of years, long past the death of empires, kings, and nations, who all get tired or bored of showing up and occupying the unpredictability space.
A lot of that despair is thanks to how the architecture of the chimp brain handles unpredictability over different time horizons: what's the system going to do tomorrow / next month / next year / next decade? Confidence decreases, anxiety increases. You want to break the architecture, keep feeding it the unpredictable.
So we get the Corporal Hudson in Aliens cycle: "I'm ready, man. Check it out. I am the ultimate badass. State of the badass art" > unpredictability > "What's happening, man? Now what are we supposed to do? That's it, man. Game over, man. Game over!"
I have considered the problem of how the strong social benefits and cohesion of religion might be reproduced in some way not tied to the very strange attractor of identity-based beliefs and shibboleths.
Science, democracy, religion. Three curses. Each embodying ideals. Each the best choice we have for the areas where they do or have functioned well. Each presenting challenges, and dysfunctional local maxima, as maintenance/optimization problems.
Of the three, science's self-correcting basis does make it the least problematic.
In the highest contrast, mathematics' foundations are only weak if you look! Whereas the piles built on debatably wobbly foundations hold up extremely well.
> Over time you slowly build out a list of good "node papers" (mostly literature reviews) and useful terms to speed up this process, but it's always gonna be super time consuming.
One of the many many (for me) indispensable uses for language models is ... language.
Whether I am carefully creating terminology for ideas I want to be able to make sense of in a time-resilient way, or discovering the esoteric terminology of some area I am unfamiliar with, the vocabulary, epistemological reasoning, and genuinely creative coining of terms that LLMs excel at are an incredible upgrade.
As far as I can tell, the biggest difference between areas where the literature is a nightmare and areas where it's like math is the degree to which things self-correct due to the interlocking of disparate parts. If you publish a bad end-result study, say you're measuring the effect of an environmental toxin on human cognitive decline, that's it. If it's right it's right, if it's wrong it's wrong. In contrast, if you discover a fundamental pathway in secondary metabolite biosynthesis, nobody else's research will make sense unless you get it right.
Some European languages have a word for "science". Some have a word for "Wissenschaft". I'm not aware of any language that has separate words for both concepts. Confusion ensues when "science" inevitably gets translated to "Wissenschaft", or the other way around.
Science is centered around the scientific method. A naive understanding of it can lead to an excessive focus on producing disconnected factoids called results. Wissenschaft has different failure modes, but because you are supposed to study your chosen topic systematically by any means necessary, you have to think more about what you are trying to achieve and how. For example, whether you want to produce results, explanations, case studies, models, predictions, or systems.
The literature tends to be better when people see results as intermediate steps they can build on, rather than as primary goals.
"Science", "Research", and "Study" (as in scholarly study, not as in test prep) are often related and intertwined, but aren't the same things.
You can do historical research that is clearly scholarly, but not scientific.
You can use the scientific method for non-research purposes. And you can perform scholarly study without doing research or science.
Huh, I never really thought deeply about this. My mother tongue is Dutch which has the word “Wetenschap” which maps directly to Wissenschaft.
But I don't consciously distinguish that from the English "science", although obviously the connotation of "science" leans toward the scientific method, whilst "Wetenschap" is more about the "gaining of knowledge".
While there is no single English-word translation I can think of, I guess “knowledge building” or “the effort to expand knowledge” might be good approximations.
Interesting, never thought about this distinction too much.
The word "science" predates modern natural science, so I'm not sure these are really different words.
Thomas Aquinas asks if theology is a science. Spoiler alert: The answer is Yes.
Aquinas predated Popper, whose definition is more influential today. Nothing about theology is falsifiable, so no, the answer is "No."
Theology is a "science" in the same way as social science is a science. They don't use the scientific approach as defined by Popper, but they still try to find out stuff in the best possible way.
Specifically theology searches for knowledge by appealing to scripture and tradition as sources of fact, in the same way that modern science appeals to empirical observation. A theologian of Aquinas's time would have been confident that both methods of study would lead to the same conclusions.
It's certainly the case that the term "science" now refers strictly to empirical science.
> Some European languages have a word for "science". Some have a word for "Wissenschaft". I'm not aware of any language that has separate words for both concepts
not really European, but in Russian it's neither. The word for science, "наука", is literally closest to "teaching" or "education" (edit: and historically "punishment").
There is no stem for knowledge ("знать") or for science (the word doesn't even exist in Russian) in that word :)
> the word for science "наука"
It's literally "na-oo-ka"? What in the hell is the etymology of that?
The stem "ук/уч" is in "учить" (to teach, or in old times to punish) and other teaching-related words; idk the etymology.
2021 with two past discussions:
https://news.ycombinator.com/item?id=27892615 - 168 comments
https://news.ycombinator.com/item?id=27891102 - 16 comments
Maybe some readers will come across this and, instead of foaming at the mouth, want solutions.
> Over time you slowly build out a list of good "node papers" (mostly literature reviews) and useful terms to speed up this process, but it's always gonna be super time consuming.
This maybe didn't exist at the time of writing of this blog post, but it's not super difficult nowadays, though it does take some time. You can use services like connectedpapers.com, which will build out graphs of references and tell you at a glance which papers are cited more. You can find the more reliable stuff, i.e. the "node papers".
The review paper is the traditional way. It's usually okay, but very biased towards the author's background.
If it's very "fresh off the press" stuff, then you judge it based on the journal's reputation and hope the reviewers did their jobs. You will have more garbage to wade through. To me, recent is generally bad...
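For intuition, the "node paper" idea can be sketched as simple in-degree counting over a reference graph, which is roughly the signal such tools surface. The papers and edges below are invented placeholders, not real literature:

```python
from collections import Counter

# Toy citation graph: paper -> list of papers it cites.
# All titles here are hypothetical stand-ins.
citations = {
    "review-2020": ["method-A", "method-B", "method-C", "survey-2015"],
    "method-A":    ["survey-2015"],
    "method-B":    ["survey-2015", "method-A"],
    "method-C":    ["method-A"],
    "survey-2015": [],
}

# In-degree: how often each paper is cited within this set.
cited_by = Counter(c for refs in citations.values() for c in refs)

# Candidate "node papers": cited often, or citing broadly (reviews).
for paper in citations:
    print(paper, "cited:", cited_by[paper], "cites:", len(citations[paper]))
```

Here "survey-2015" and "method-A" stand out by in-degree, while "review-2020" stands out by how broadly it cites, which matches the two kinds of node papers the comment describes.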
> This maybe didn't exist at the time of writing of this blog post, but it's not super difficult nowadays, though it does take some time. You can use services like connectedpapers.com, which will build out graphs of references and tell you at a glance which papers are cited more. You can find the more reliable stuff, i.e. the "node papers".
True, but the aim isn't really finding which ones are cited the most, although it does help you in the ordeal. In a sense those tools help you build a macro understanding, but they are very prone to an initial seed bias. It is difficult to get out of a closed sub-section of the field. This is especially the case with technical papers, which often fail to address surrounding issues. These issues might still be technical, just not within the grasp of that particular sub-group of authors.
In the end, like you said, it is very time consuming. You do need to go through each one individually and build an understanding and intuition for what to look for, and how to get out of those "cycles" for a deeper understanding. And you really are better off reading them yourself.
> The review paper is the traditional way. It's usually okay, but very biased towards the author's background.
>
> If it's very "fresh off the press" stuff, then you judge it based on the journal's reputation and hope the reviewers did their jobs. You will have more garbage to wade through. To me, recent is generally bad...
Guidelines like PRISMA, or the various assessments of self-bias, are generally good indicators that the author cared. Having sections like these will help you build the aforementioned intuition for what else to look through, since you have an acknowledgement from the source itself of its bias (your own assessment may be biased, so some ground truth is good). Plus a really thorough description of their methods for gathering the information (databases, queries, and themes they spent time on).
Agreed, recent is generally bad; you need to allow some time for things to have a chance to get looked at.
> True, but the aim isn't really finding which ones are cited the most, although it does help you in the ordeal.
Yeah it turns out that, much like with websites, PageRank is a fantastic tool for ranking quality research papers until the researchers realize that's how they're being ranked.
Goodhart strikes again.
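As a toy illustration of that ranking signal, here is a minimal PageRank power iteration over a hypothetical four-paper citation graph (the graph, damping factor, and iteration count are all illustrative, not tuned):

```python
# Minimal PageRank by power iteration over a toy citation graph.
links = {               # paper -> papers it cites (no dangling nodes)
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}
d = 0.85                # standard damping factor
n = len(links)
rank = {p: 1.0 / n for p in links}

for _ in range(50):
    # Each paper redistributes its current rank across its citations.
    new = {p: (1 - d) / n for p in links}
    for p, outs in links.items():
        share = rank[p] / len(outs)
        for q in outs:
            new[q] += d * share
    rank = new

# "C" collects the most citation weight, so it ranks highest --
# exactly the signal that citation rings can then game (Goodhart).
best = max(rank, key=rank.get)
```

The moment authors know this is the metric, mutual-citation clusters can inflate exactly the quantity being measured, which is the comment's point.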
All of these Agile CI/CD guys should have the stats. We know exactly how many lines of code and labor hours it takes to solve a bug that was not in the test suite.
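For what it's worth, a rough version of those stats can be pulled from version control. This is only a sketch under assumptions: the "fix/bug in the commit subject" heuristic and the embedded sample log are invented for illustration, and in practice you would feed in real `git log --numstat` output instead:

```python
import re

# Sketch: lines changed per "fix" commit, parsed from the output shape of:
#   git log --numstat --format="__C__ %s"
# SAMPLE is a made-up stand-in for that output.
SAMPLE = """\
__C__ Fix off-by-one in pager
3\t1\tsrc/pager.c
__C__ Add dark mode
120\t4\tsrc/theme.c
__C__ bugfix: handle empty input
7\t2\tsrc/input.c
1\t0\ttests/test_input.c
"""

def fix_commit_sizes(log: str) -> list[int]:
    sizes, in_fix = [], False
    for line in log.splitlines():
        if line.startswith("__C__ "):
            # Crude heuristic: subject mentions "fix" or "bug".
            in_fix = re.search(r"\b(fix|bug)", line, re.I) is not None
            if in_fix:
                sizes.append(0)
        elif in_fix and "\t" in line:
            added, deleted, _path = line.split("\t", 2)
            if added.isdigit() and deleted.isdigit():  # skip binary "-" rows
                sizes[-1] += int(added) + int(deleted)
    return sizes

print(fix_commit_sizes(SAMPLE))  # → [4, 10]
```

Labor hours are the part no repository records, which is presumably why those stats stay folklore.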
I did a PhD and keep saying this: the system is completely broken because the incentives are completely broken. Researchers have many things to keep in mind for their career, but truth isn’t really one of them. Citations are more important for career than truth and even false papers can get many citations.
This is sad for people who are not involved in science. Since platonic-ideal science is axiomatically infallible by agreed definition (as is the platonic free market and other similar models), it is assumed that anyone from outside academia who questions whatever the currently published results are is a quack, a luddite, and psychotic. It does not matter how sus the process leading to those results is; only scientists are allowed to call out scientists.
I didn't finish my PhD, but when I was doing it, it was upsetting how easy it was to come up with a conclusion first, and then find a paper to support it.
It was like finding out Santa Claus wasn't real to me.
The “cost of bugs” curve is incoherent nonsense for these reasons:
- We don’t care how much a bug costs to fix if we choose never to fix it. We do not automatically fix all bugs.
- The cost to find certain bugs early on is prohibitive, compared to waiting until the product is fully realized, when many subtle or systemic bugs become quite easy to find.
- It’s not hard to imagine an early stage “bug fix” in the requirements that leads to missed opportunities. For instance, if I were present at the beginning of Uber I think I could have persuaded them that Uber is totally unworkable as a business idea.
- Nothing whatsoever in the nature of bug fixing makes a bug necessarily harder to fix over time. It’s just that certain requirements bugs can snowball.
Science is very good.
Pseudoscience like measuring the cost to fix a bug in a classroom setting is bad. Especially if it literally puts "cost" and "classroom" together. That's just a sad way to grab some more research funding to keep the machine going.
The "garbage pile" of papers is not a new problem. It's been plaguing the science world for quite a long time. And it's only going to get worse because the metric and the target are the same (Goodhart's Law).
From the article itself, each mentioned paper screams "the author never had to write actual functional code for a living" to me.
Creating metrics for code quality is about as reliable as doing it for essay quality.
> The "garbage pile" of papers is not a new problem. It's been plaguing the science world for quite a long time. And it's only going to get worse because the metric and the target are the same (Goodhart's Law).
I don't think this observation is valid. Papers are not expected to be infallible truth-makers. They are literally a somewhat standardized way for anyone to address a community of their peers and say "hey guys, check out this neat thing I noticed".
Papers are subject to review because of the editorial bars of each publication (that is, you can't just write a few sentences with a crayon and expect it to be published), and a paper should not just clone whatever people already wrote before. Other than this, unless you are committing academic fraud, you can still post something and later find that your conclusions were off or that you missed something.
Not a bad argument in an ideal world.
On such a trajectory, science is bound to cross the information-overload / false-equivalence threshold, where the "hey, check this out" scenario won't scale and the cost of validating all other people's papers outweighs the (theoretical) gains.
Not sure if you think that threshold has been crossed already or not.
The "garbage pile" of papers is not a problem, they are deliverables, representing completed work for which people have been payed.
If we stop paying for those "garbages", the problem might disappear. But what about the researchers or scientists who depend on that funding to live?
What needs to change is the very way academic work is organized, but nothing comes for free.
The article is somewhat confused. The Scientific Method (https://en.wikipedia.org/wiki/Scientific_method) itself is an empirical process with caveats. We need to use our own intelligence in judging and deciding what and how to interpret some data/hypothesis. It is "trial-and-error" but with established principles/laws/heuristics added in to guide our "trials".
Thus, for example, the answer to the question "Are late-stage bugs more expensive?" is "yes, generally": at a later stage in development more of the design/implementation is done, so we have a larger number of interacting components and thus increased complexity. The probability that the bug lies in the interaction/intersection of various components is therefore higher, which may require us to rework (both design and implementation of) a large part of the system. This is the reason we accept separation of concerns, modularization, and frequent feedback loops between design/implementation/testing as "standard" Software Engineering practices.
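The combinatorial core of that argument can be made concrete: the number of possible pairwise interactions grows quadratically with the number of components, so a late-stage bug has ever more places "between" components to hide. A trivial sketch:

```python
from math import comb

# Toy illustration: potential pairwise interactions grow as n*(n-1)/2,
# i.e. quadratically in the number of components n.
for n in [2, 5, 10, 20, 40]:
    print(f"{n:3d} components -> {comb(n, 2):4d} possible pairwise interactions")
```

Doubling the component count roughly quadruples the interaction surface, which is one way to read the comment's claim without any dubious classroom cost studies.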
Lee Smolin in his essay There is No Scientific Method (https://bigthink.com/articles/there-is-no-scientific-method/) states the following which i think is very applicable to "Software Engineering" since it is more of a Human Process than Hard Science;
Science works because Scientists form communities and traditions based not on a common set of methods, but a common set of ethical principles. And there are two ethical principles that I think underlie the success of science...
The first one is that we agree to tell the truth and we agree to be governed by rational argument from public evidence. So when there is a disagreement it can be resolved by referring to a rational deduction from public evidence. We agree to be so swayed.
Whether we originally came to that point of view or not to that point of view, whether that was our idea or somebody else’s idea, whether it’s our research program or a rival research program, we agree to let evidence decide. Now one sees this happening all the time in science. This is the strength of science.
The second principle is that when the evidence does not decide, when the evidence is not sufficient to decide from rational argument, whether one point of view is right or another point of view is right, we agree to encourage competition and diversification amongst the professionals in the community.
Here I have to emphasize I’m not saying that anything goes. I’m not saying that any quack, anybody without an education is equal in interest or is equal in importance to somebody with his Ph.D. and his scientific training at a university...
I’m talking about the ethics within a community of people who have accreditation and are working within the community. Within the community it’s necessary for science to progress as fast as possible, not to prematurely form paradigms, not to prematurely make up our mind that one research program is right to the exclusion of others. It’s important to encourage competition, to encourage diversification, to encourage disagreement in the effort to get us to that consensus which is governed by the first principle.
Oh come on. They're obviously using the word "science" in this context as a shorthand for the institutions and processes we've set up to do research. Mostly because that's too many words for a title and nobody has come up with a catchy name that's not politically coded. It's also pretty normal usage of the word out in the wild.
What Smolin is trying to point out are the meta-principles which underlie the feedback loop of the Scientific Method itself. Once those principles are adhered to, the loop becomes common sense. This is because all of "doing science" consists of human activities where we discover knowledge through three means, viz. 1) authority (textual/oral), 2) reasoning, and 3) experience. All three have to be considered to come to a definite conclusion. The submitted article ignores this trifecta and seems to conflate "empiricism" solely with external validation.
Science is amazing because it sucks and yet it's somehow still better than anything else we've come up with for thousands of years.
We can't come up with anything better because we're using a term that would include anything better we come up with. There are religious studies in science! And if most discoveries suddenly came from revelation, that'd still be part of some old or new scientific discipline. So you're mostly amazed by your own vocabulary papering over all the nonsense it includes.
I should have specified for pedants that I meant the system of peer review and scientific inquiry.
You think that's better than the work that preceded peer review, by people like Einstein, Bunsen, Kelvin, Planck, Darwin, Maxwell, Mendeleev, Michelson, Steinmetz, Faraday, Davy, Haber, Tesla, etc.? Because I have to say I find the pre-peer-review papers to generally be of much higher quality.
We can modify the human genome now, how is that not an order of magnitude more impressive?
How did you come to the conclusion that those have not been peer-reviewed? Every uni course that presents the work of these people implicitly reviews it for consistency, and the advanced practices courses repeat their experiments.
Also, survivorship bias.
You're still too vague. Do you mean the less-than-100-year-old peer review system? And that it's better than all the scientific discoveries of the past thousands of years?
Science doesn't provide a priest who will show up and sit with you in your time of grief or despair in handling the unpredictable. Priests in all religions are trained to occupy that space. And that is the prime reason religions have survived for thousands of years, long past the death of the empires, kings, and nations who all got tired or bored of showing up and occupying the unpredictability space.
A lot of that despair is thanks to how the architecture of the chimp brain handles unpredictability over different time horizons: what's the system going to do tomorrow, next month, next year, next decade? As confidence decreases, anxiety increases. If you want to break the architecture, keep feeding it the unpredictable.
So we get the Corporal Hudson cycle from Aliens: "I'm ready, man. Check it out. I am the ultimate badass. State of the badass art" > unpredictability > "What's happening, man? Now what are we supposed to do? That's it, man. Game over, man. Game over!"
Think about what science offers Corporal Hudson.
I have considered the problem of how the strong social benefits and cohesion of religion might be reproduced in some way not tied to the very strange attractor of identity-based beliefs and shibboleths.
Science, democracy, religion. Three curses. Each embodying ideals. Each the best choice we have for the areas where they do or have functioned well. Each presenting challenges, and dysfunctional local maxima, as maintenance/optimization problems.
Of the three, science's self-correcting basis does make it the least problematic.
In the starkest contrast, mathematics' foundations are only weak if you look! Whereas the piles built on those debatably wobbly foundations hold up extremely well.
So, basically:
Where science can't make you better, non-science can make you feel better.
Where truth is painful, untruth can attempt to provide comfort.
(not sure how any of this relates to the comment or the article, maybe I should have just ignored this)