On hypothesis
A lot of science consists of testing the "null hypothesis". This is where you adopt a hypothesis -- a guess about the world -- take selected groups of data and show from that data that the hypothesis is not true. It's important to my discussion here to note that the null hypothesis is expected to be found not to hold.

This is a vital part of the scientific method. It relies on the idea of "falsifiability": that science should only admit conjectures about the world and the things in it that can be proved wrong. I should immediately point out that this means "wrong in principle". We don't expect science to be wrong; we expect that the questions we ask can be answered "no". The idea was formalised by Karl Popper in the 1930s and it now dominates science, largely because it's obviously a good method for deciding scientific questions.
There's another kind of "science", or perhaps natural philosophy is a better term for it, in which you take a hypothesis you believe to be true and then look at a group of data to show that it is in fact true. Sometimes scientists do this. They look for proof of dark matter, for example. Or they run a collider in the hope of revealing the particles they say should be there.
This is fine as a way to build knowledge, on two understandings. First, the hypotheses you are testing must be very tight. What do I mean? Well, there's a big difference between saying "we'll find a superheavy particle if we go to energies above x TeV" and saying "we'll find a sporkitron at 89.7 TeV". The first is almost unfalsifiable: if you don't find the particle, you can always say the energy wasn't high enough yet. The second is easily proved or disproved. The second understanding is that you can't just accept positive evidence.
This latter point is a real problem in science. Scientists perform experiments; the experiments don't succeed, so the scientists never bother with the journal article that would have announced success. This is publication bias, and it has been recognised by requiring trials to be registered before they proceed -- essential when we're looking at the efficacy of drugs, for example. A related problem is the idea that we take a "sample" of data: when you're looking for a positive, it can always be hiding in the data you don't have.
A lot of people take this second approach. They assume something to be true and then look for things that tend to confirm it. This confirmation bias is poison to science and poison to critical thinking. An example is the belief that vaccines cause illness. Well, folks, here's the truth: vaccines *do* cause illness. No one doubts that. But if you take the times a vaccine causes illness as your acceptable evidence and dismiss the times it doesn't, you are a slave to confirmation bias. This, by the way, is why we perform randomised controlled trials. The null hypothesis, as we noted, looks at groups of data. But those groups are test and control. The test group in this case is "vaccinated" and the control group is "not vaccinated" (leaving aside how we ensure other factors are excluded). And the hypothesis we're testing is that the two groups will give the same data within a degree of tolerance.
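To make that concrete, here's a minimal sketch of what "the same data within a degree of tolerance" looks like in practice. The counts are entirely made up, and the simple two-proportion z-test is my illustration, not any real trial's method:

```python
import math

# Entirely hypothetical counts, for illustration only -- not real trial data.
vaccinated_ill, vaccinated_total = 12, 1000      # test group
unvaccinated_ill, unvaccinated_total = 15, 1000  # control group

p1 = vaccinated_ill / vaccinated_total
p2 = unvaccinated_ill / unvaccinated_total

# Pool the two groups, since the null hypothesis says they're the same.
pooled = (vaccinated_ill + unvaccinated_ill) / (vaccinated_total + unvaccinated_total)
se = math.sqrt(pooled * (1 - pooled) * (1 / vaccinated_total + 1 / unvaccinated_total))

z = (p1 - p2) / se  # how many standard errors apart the two rates sit
print(f"illness rates {p1:.3f} vs {p2:.3f}, z = {z:.2f}")
# |z| below about 1.96 means the groups agree within tolerance:
# no grounds to reject the null hypothesis.
```

Note that both groups go into the calculation. That's the whole point.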
But the "positive hypothesis" *only* looks at the test group. Look, it says, the vaccinated group had illness. Point proved. It's proved ever harder by pointing to the individual data that show what you hypothesise. Vaccines create illness because here's an ill person. Hypothesis confirmed.
But the "positive hypothesis" *only* looks at the test group. Look, it says, the vaccinated group had illness. Point proved. It's proved ever harder by pointing to the individual data that show what you hypothesise. Vaccines create illness because here's an ill person. Hypothesis confirmed.
The problems with this kind of natural philosophy are obvious. Almost *any* hypothesis can be supported by some evidence or other. And it ignores the importance of properly weighting evidence.
Let's say you're testing the hypothesis that a drug won't make any difference to a condition (remember, the null hypothesis is that the drug does nothing). You pick the first ten people you can find off the street who have the condition, give the drug to five and no drug to the other five. One of the five who had the drug improves. None of the undrugged five does. Hypothesis denied, the drug works!
Well, you see the problem here. There aren't enough people in your study and you didn't control what type of people they were. They're random, but the point of a randomised trial is that you select data of the same type and then choose from it at random, not that you select random data from the entire world. In other words, you must narrow the world first. By the same shoddy logic you could just as well declare that this study proves the null hypothesis and the drug doesn't work. In fact it proves neither.
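Just how little it proves can be shown with a quick permutation test on those ten imaginary patients: assume the drug does nothing, shuffle the group labels, and count how often chance alone reproduces the "success". This is a sketch of the idea, not anyone's actual trial analysis:

```python
import random

drug    = [1, 0, 0, 0, 0]  # one of five improved on the drug
control = [0, 0, 0, 0, 0]  # none of five improved off it
observed = sum(drug) - sum(control)  # = 1

# Under the null hypothesis the drug does nothing, so the group labels
# are arbitrary: reshuffle them and see how often chance matches the result.
pool = drug + control
rng = random.Random(1)
hits = 0
for _ in range(100_000):
    rng.shuffle(pool)
    if sum(pool[:5]) - sum(pool[5:]) >= observed:
        hits += 1

print(f"p is roughly {hits / 100_000:.2f}")
# About 0.5: a coin flip reproduces the "success", so the study gives
# no reason at all to reject the null hypothesis.
```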
So this study isn't good because its evidence is too thin. Do the same study on a thousand people, controlled for things like weight, comorbidities, age, race, sex and so on, and you have much stronger evidence.
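How much stronger? Using the same kind of two-proportion z-test as the earlier sketch, with the toy study's improvement rates scaled (hypothetically) to 500 people per arm:

```python
import math

# Hypothetical scaled-up trial: the toy study's rates at a serious size,
# 100/500 improved on the drug versus 0/500 in the control group.
drug_improved, drug_n = 100, 500
control_improved, control_n = 0, 500

p1, p2 = drug_improved / drug_n, control_improved / control_n
pooled = (drug_improved + control_improved) / (drug_n + control_n)
se = math.sqrt(pooled * (1 - pooled) * (1 / drug_n + 1 / control_n))
z = (p1 - p2) / se

print(f"z = {z:.1f}")
# Roughly 10.5 standard errors apart: chance is effectively ruled out,
# and the null hypothesis is rejected with room to spare.
```

The same apparent effect goes from a coin flip to near-certainty, purely because of the size and quality of the sample.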
The positive hypothesist tends to consider both studies to carry the same weight of evidence. And that means they feel justified in excluding the second study. Or the sources they use for "research" simply do not include the bigger study. This is why we collect studies into journals (which has its own issues) and don't just broadcast them in the Daily Mail or the Washington Post.
In this post we've considered hypotheses that can be falsified by evidence. Next post we're going to look at ideas about the world that cannot be falsified because of their nature.