Revisiting my old humorous post on why my papers are like fine wine got me thinking about wine tastings. Wine tastings are often done blind so that the tasters aren't biased by knowing what they're drinking. As psychologists have shown time and again, if people think they're drinking expensive wine, they love it, even if they're actually drinking the cheap stuff. The same effect shows up with violins (played blind, a Stradivarius sounds no better to professionals than a modern instrument), and indeed with just about anything. If you know something is "supposed" to be good, you tend to decide that it is good. Duncan Watts just wrote a nice book on this.
So here's a question: do we need "blind tastings" of scientific papers? Leave aside the question of whether this is even possible (sometimes it is, sometimes it isn't). Do you think we need them? That is, do readers or reviewers tend to overrate papers by the scientific equivalent of Chateau d'Yquem*, and underrate papers by the scientific equivalent of Stag's Leap? I'm sure that papers by famous people are more widely read than papers by unknowns. But what I'm wondering about is our evaluation of a paper, given that we've read it.
*Purely as a joke, I was going to link the phrase "the scientific equivalent of Chateau d'Yquem" to a picture of a really famous ecologist, but I chickened out. I was afraid I might be misread as implying that whoever I linked to doesn't actually do good work. And since I'm not myself a really famous ecologist, I couldn't link to a picture of myself.