My problem with statistics

I’ve been wondering for a while now, and some articles that my advisor gave me to read only make this suspicion stronger – is the null hypothesis really worth anything?

For those not in the know, a null hypothesis is something that is drilled into budding scientists (also, when do I get to call myself a scientist? Do I have to be published?) as practically the ONLY way to properly investigate ANYTHING. If you’re wondering about the relationship between two sets of observed phenomena, say… phalangeal length and phalangeal curvature (to give you an example from my thesis), and you think that one may have an effect on the other, well, you can’t really PROVE that, can you? Instead you offer up a converse statement – that phalangeal length and phalangeal curvature have nothing to do with one another – and go ahead and test that. One of the first things I don’t like about this is the implicit assumption that the null hypothesis is somehow more acceptable. I understand that this is at least sort of a guard against bias – despite our best efforts, we scientists are human and often “want” our hypotheses to be true. So essentially what we’re trying to do is find enough evidence to reject the null hypothesis. And how do we do that? With statistics!

In my master’s thesis, I contend that biomechanical evidence suggests there should be a link between phalangeal length and curvature (and body mass and curvature, because why not?), at both an intra- and interspecific level. And even if I find a correlation, if the p-value is above 0.05 (that is, if I fall short of the 95% confidence level), I am more or less forced to admit that there is no correlation. A p-value is a way of estimating whether the correlation I found could be due to random chance alone, because of course I’m not looking at every anthropoid that ever existed – I’m measuring at most 200 individuals (I wish!), which is a minuscule fraction of the number of animals that actually exist.
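To make that "due to random chance" idea concrete, here is a minimal pure-Python sketch of one way a p-value for a correlation can be computed – a permutation test. The numbers are invented stand-ins for length-vs-curvature measurements, not my actual thesis data:

```python
import random

def pearson_r(xs, ys):
    """Sample Pearson correlation coefficient."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def permutation_p_value(xs, ys, n_perm=10_000, seed=0):
    """Fraction of label-shuffled datasets whose |r| is at least as
    extreme as the observed |r| -- an estimate of the p-value under
    the null hypothesis that xs and ys are unrelated."""
    rng = random.Random(seed)
    r_obs = abs(pearson_r(xs, ys))
    ys_shuffled = list(ys)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(ys_shuffled)
        if abs(pearson_r(xs, ys_shuffled)) >= r_obs:
            hits += 1
    return hits / n_perm

# Hypothetical numbers: "curvature" tracks "length" plus noise,
# so the two should correlate strongly.
data_rng = random.Random(42)
lengths = [20 + i for i in range(30)]
curvatures = [0.5 * L + data_rng.gauss(0, 1) for L in lengths]

r = pearson_r(lengths, curvatures)
p = permutation_p_value(lengths, curvatures)
print(f"r = {r:.3f}, p = {p:.4f}")  # a strong correlation: p well below 0.05
```

The p-value here is just the fraction of shuffled datasets that look at least as correlated as the real one – which is why, with a small sample, a real effect can still fail to clear the 0.05 bar.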

I remember my frustration at this a few years ago when I first started to read scientific papers. I had talked myself into a pretty sweet class at UMBC called “Physiological Bases of Behavior” that was essentially a literature review course. I was doing a presentation on the biology of meerkat social structure, and it appears that alpha females in meerkat groups actually become slightly bigger after they’ve attained alpha status – not just in weight, but in length and skull width. However, in no case was the difference “significant,” though if I remember correctly it was pretty close. This sort of bothered me, especially because I started to think about the evolution of eusociality in meerkats compared with naked mole rats (where the alpha females DO show a statistically significant difference in size after they achieve alpha status – MASSIVELY so). What did the statistical significance of the size difference look like early in mole rat eusociality? It was probably not statistically significant, but it was still evolutionarily significant, wasn’t it?
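Part of what's going on in the meerkat case is that "significant" depends on sample size as much as on the size of the effect. Here's a sketch of that, again with invented numbers (hypothetical skull widths, not the real meerkat data): the same average difference between alphas and non-alphas, tested at two sample sizes with a permutation test:

```python
import random

def mean_diff_p_value(a, b, n_perm=10_000, seed=0):
    """Two-sided permutation test for a difference in group means:
    pool the samples, reshuffle the group labels, and count how
    often the shuffled difference is at least as large as observed."""
    rng = random.Random(seed)
    obs = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = list(a) + list(b)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        pa, pb = pooled[:len(a)], pooled[len(a):]
        if abs(sum(pa) / len(pa) - sum(pb) / len(pb)) >= obs:
            hits += 1
    return hits / n_perm

# Invented skull widths: alphas average 0.5 mm wider, with 1 mm of
# individual scatter -- the same underlying effect at two sample sizes.
data_rng = random.Random(1)
def sample(n, mu):
    return [data_rng.gauss(mu, 1.0) for _ in range(n)]

p_small = mean_diff_p_value(sample(8, 20.5), sample(8, 20.0))
p_large = mean_diff_p_value(sample(200, 20.5), sample(200, 20.0))
print(f"n=8 per group:   p = {p_small:.3f}")
print(f"n=200 per group: p = {p_large:.4f}")
```

With 200 animals per group the difference clears the 0.05 threshold easily; with 8 per group the very same effect will usually fail it. A "non-significant" result in a small meerkat study is exactly what you'd expect from a real but modest difference.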




About alexclaxton

Paleoanthropology grad student, pop culture obsessive

One Response to My problem with statistics

  1. I’ve had the same problem. I had to create multiple null hypotheses for my thesis, but then my results weren’t statistically significant. My advisor didn’t seem worried, though – he basically said I could treat them as significant even though the stats didn’t pan out, and should present them that way. It made for a wonderful defense (which you’d know, if you’d made it). 🙂 The whole system needs reassessing.
