So just imagine this – it’s midnight, you have to get up at 6am, go hang out on campus for 14 hours the next day enduring a 6 hour break in the middle that is just long enough to come home but the bus ride is also just long enough that it’s not worth the effort. Sounds like a good time to sleep, no?
Of course not.
It’s time for a big idea. One that absolutely MUST be written down BEFORE you sleep, because let’s face it, we always think we’ll remember in the morning, but science has shown we really just can’t. We don’t reliably consolidate anything from those last 15 minutes before sleep, so I pick up my phone and dutifully start typing.
So what was this absolutely marvelous idea that absolutely HAD to be written down? That was worth sleep deprivation on a 14-hour-Monday (which just makes it suck even more)?
It’s (another) discussion of the issues of science. I’ve talked about the whole induction-versus-deduction issue before, and about how psychology is more aware of its propensity for errors in conclusions and results, largely due to the variation that exists within individuals. But I think I’ve missed how science contradicts some of its own principles and has set up a double standard.
What is interesting is that science is largely portrayed as a man’s world. You know, as in men are rational and logical and rely more on the left hemisphere, unlike women, who are so incredibly irrational (but damn are they good in the kitchen!). And because of this “fact,” women cannot be associated with science; it must be a man thing. To be fair, more women are engaging in scientific pursuits lately thanks to various women’s and men’s movements, and some of the big thinkers in science are female (never mind that Watson and Crick used a woman’s data, Rosalind Franklin’s, and then took all the credit…). But we are presented with this wonderfully rational and emotionally sterile picture of science. Except that’s not really the case.
The problem with science is that its methods prove by deduction and inference; we never prove something is true outright. In practice we demonstrate that the alternative is extremely unlikely, to the point that our theory or proposal is more likely. Science acknowledges this in its theoretical methods, but the problem is that this information is then taken as fact and used as such in future studies. Thus our hypotheses become progressively more precarious as they come to rely on one “most likely true” that was developed from another “most likely true,” and science doesn’t acknowledge that. We’ve essentially set off a cascade of potential error each time we report results.
What becomes even more problematic is the public reaction to such results. As a student in psychology and science, it has been drilled into my head to look carefully at the results, and while I rarely track down the original study unless it bears significance for me (usually because I am already interested in the topic for personal or school-related research), I know how to read a study and discriminate between statistical and practical significance.
Curious whether the general population was as careful, I asked a few friends. All but one told me they would simply take the evidence at face value, without further skepticism or investigation, assuming it came from a reasonably credible source (i.e. the newspaper. Because you know, those guys NEVER bias their information.). At that point I asked whether they knew the difference between statistical and practical significance. It’s my bread and butter: what we want is both, but we don’t always get it. Tragically, again, only one person (the same person) understood what I was talking about.
The issue is that newspapers rarely distinguish the two, so people potentially make life decisions, or at the very least alter their opinions, somewhat needlessly. For non-psych junkies, allow me to explain it in less than 30 seconds: statistical significance says there is only a very small mathematical chance that there is no real difference between the groups, i.e. that our “best guess” was wrong; practical significance is whether or not something actually matters: is the effect size (the size of the effect of whatever manipulation was applied to one group compared to an untreated group) big enough to be a big deal?
For example, suppose a study reports that individuals on diet X lost significantly more weight than individuals on diet Y. They very well may have; there might be a mathematical difference. But then you look at the effect size and the means, and you realize there is a high degree of overlap: lots of people on diet Y lost more than people on diet X and lots lost less, and the difference in the average amount of weight lost between groups is, say, 35 lbs compared to 32 lbs.
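To make the distinction concrete, here is a small sketch in Python. The numbers are invented to echo the diet example, a 3 lb mean difference against a roughly 15 lb spread (my assumption, not a real study): with a large enough sample, the difference comes out statistically significant even though the effect size is small.

```python
# Hypothetical illustration: two "diet" groups with a 3 lb mean difference
# but a 15 lb spread. With many subjects, tiny differences turn "significant."
import math
import random
import statistics

random.seed(42)
n = 2000  # large samples make small differences statistically significant
diet_x = [random.gauss(35, 15) for _ in range(n)]  # mean loss ~35 lbs
diet_y = [random.gauss(32, 15) for _ in range(n)]  # mean loss ~32 lbs

mean_x, mean_y = statistics.mean(diet_x), statistics.mean(diet_y)
sd_x, sd_y = statistics.stdev(diet_x), statistics.stdev(diet_y)

# Two-sample z statistic (with n this large, z approximates t)
se = math.sqrt(sd_x**2 / n + sd_y**2 / n)
z = (mean_x - mean_y) / se

# Cohen's d: the mean difference scaled by the pooled standard deviation
pooled_sd = math.sqrt((sd_x**2 + sd_y**2) / 2)
d = (mean_x - mean_y) / pooled_sd

print(f"z = {z:.1f}")          # well beyond 1.96, so p < .05
print(f"Cohen's d = {d:.2f}")  # around 0.2, conventionally a "small" effect
```

The z statistic (statistical significance) grows with sample size, while Cohen's d (practical significance) does not, which is exactly why the two can disagree.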
So how science is reported and viewed is a problem. Beyond the general population’s lack of awareness of how to interpret the results is the issue of how people don’t appear to get how science does its thing. We trust the process just a little bit too much. Even I have fallen prey to reading a study, remembering to check all the appropriate statistics and methodology, and going, yup, looks good, let’s source this in my paper and call it fact, without considering how the scientists developed their theories and how shaky the foundation of their theory (or any theory for that matter) may be. Yes, I make sure to mention when things only showed increased likelihood (i.e. the study was correlational and thus proves very little according to science), but I also sort of treat it as fact, as a given, when I build it into my argument. So really, I’m no better than the general population; I’m just more aware of the fact that I’m doing it.
In most of the “pure sciences,” chemistry and physics, you are dealing with inanimate objects with no will or growth of their own. But I don’t recall there ever being a discussion of the null hypothesis. I remember taking measurements to calculate error, but the error calculations didn’t allow for confounds; they assumed that either you did something wrong or the scale or PCR machine or whatever other science machine wasn’t working properly. For example, you compare the mass you DID get against the theoretical amount you SHOULD HAVE gotten. That error is calculated against a theory, which was either thought up almost out of thin air or based on other potentially flawed measurements. Thus a theoretical answer (based on an idea, or on a potentially flawed answer) is used to judge the accuracy of another lab experiment, yet the point of comparison may be no more accurate.
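The percent-error calculation I’m describing is simple enough to sketch. The yield figures below are invented for illustration; the point is that the "theoretical" value we grade the measurement against is itself a product of theory.

```python
def percent_error(measured: float, theoretical: float) -> float:
    """Error of a measurement relative to the theoretical (expected) value."""
    return abs(measured - theoretical) / abs(theoretical) * 100

# Hypothetical numbers: a reaction whose theory predicts a 2.50 g yield,
# measured at 2.31 g. A 7.6% "error" -- but relative to a theoretical truth.
print(round(percent_error(2.31, 2.50), 1))  # → 7.6
```

Nothing in the formula can tell you whether the discrepancy lives in your technique, your instrument, or the theoretical value itself.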
To give the pure sciences credit, at least they have a tangible point of reference. In psychology we just have the complex calculations and a lot of assumptions, because we can never KNOW whether we were right or wrong about people. They change their minds too often. Psychology lacks a point of reference for its error; it can in no way calculate the true state of the world. So we work in a world of theories, but we are not immune to using those theories as near fact. We perhaps acknowledge it more, but we are no better.
See the issues?
1. We take “science” as fact, when really it’s probability, because we can never know the true state of things.
2. We judge probability from theory.
3. We develop theories from ideas and pre-existing “fact.”
4. Go back to 1.
There’s an infinite loop of probability painted as truth. But we don’t talk about it. Like if we ignore it, it’ll go away.
At the risk of sounding like some sort of hell-raiser, just out there to leave you hanging from a metaphorical cliff, I don’t really have any solutions to fix this. It’s part of being human, this notion of knowing and perceiving reality without actually having any proof of anything. The only solution I can offer is skepticism and awareness. Don’t take things at face value; find the science and judge its results for yourself, or at least learn to read the graphs and data and not fall prey to tricks such as a manipulated axis scale.