Posted by: Jeremy Fox | June 1, 2011

Why ecologists should refight the ‘null model wars’ (UPDATED)

Sometimes, scientific debates get resolved in favor of one side or the other. Modern birds are descended from dinosaurs, and those who thought otherwise were incorrect. Sometimes, debates get resolved in favor of some intermediate or synthetic position. The neutralist-selectionist debate in evolution has given way to a sophisticated appreciation of the importance of both processes. Something similar might be said about the density-dependence vs. -independence debate in population ecology. Clements and Gleason disagreed as to whether ecological communities are highly integrated ‘superorganisms’ or nearly random assemblages of ‘individualistic’ species, but the modern view sees interacting species as highly non-independent even though interspecific interactions don’t produce anything like a Clementsian superorganism. And sometimes, debates just stop because the original question is no longer relevant, perhaps because it was ill-posed in the first place (think of alchemical debates).

But sometimes, debates remain relevant, but don’t get resolved. Instead, we just stop talking about them. Sometimes this is because everyone involved recognizes that the available data and analytical methods aren’t sufficient to settle the issue. But sometimes, relevant debates just stop because neither side has anything new to say. The ‘SLOSS’ debate, over whether it is better to have a single large nature reserve or several small ones of equal total area, is an example. As a graduate student, I recall David Ehrenfeld, the first EiC of Conservation Biology, telling a class that he’d decided to stop publishing papers on the SLOSS debate because no one had anything new to say about it. The debate between frequentist and Bayesian statisticians, especially concerning the proper interpretation of ‘probability’, seems like another possible example. Certainly, Brian Dennis’ passionate argument for the relevance of this debate to ecologists (and in favor of the frequentist position) seems not to have sparked an ongoing public discussion in ecology, as far as I can tell.

Selective journals naturally don’t want to publish repetitive papers. More broadly, scientists (especially those not directly involved in a debate) get tired of seeing the same old arguments rehashed. We tend to see these kinds of debates as pointless, as an indication that a fruitless dead end has been reached and it’s time to direct our energies elsewhere. But I’m not so sure this is the right response. Just because a debate can’t be resolved (or can’t be resolved by data) doesn’t mean it ought to be ignored. Debates between liberal and conservative political views have been going on for centuries, but that doesn’t mean they can or should just be abandoned. The issues are too important, and too unavoidable, for that. Analogously, I think there are some arguments in science that we need to keep having, not because we hope to resolve them, but because they are arguments about important issues on which every one of us ought to have a thoughtful, considered view.

So what irresolvable old debates should we keep having? In an attempt to provoke some comments from my fellow Oikos editor Nick Gotelli, I suggest that ecologists need to refight the ‘null model wars’. (That high-pitched sound you hear is a collective scream of anguish from every community ecologist who was active in the late ‘70s and early ‘80s.) For those of you who aren’t aware, in 1975 Jared Diamond (yes, the Jared Diamond) published a paper on the distribution of bird species on islands in Papua New Guinea. Diamond argued that species were distributed so as to obey ‘assembly rules’ generated by interspecific competition (e.g., some ‘forbidden’ combinations of species never coexist on the same island). Connor and Simberloff (1979) sharply criticized Diamond’s proposed rules, arguing that some were trivial tautologies, and others were actually consistent with what would be expected under a null model in which species were distributed randomly. This touched off years of vociferous debate on a range of interrelated issues, from the appropriate choice of null model, to the value of hypothesis testing as a research approach, to the relative advantages of observational vs. experimental data (see, e.g., the famous Nov. 1983 issue of American Naturalist, the edited volumes by Strong et al. 1984 and Diamond and Case 1986, and numerous Oikos papers such as this one). In the aftermath of this debate, many community ecologists refocused on small-scale field experiments as a rigorous way to test for competition. More recently, interest in using null models to infer the causes of observed patterns in species distributions has revived, thanks largely to Nick’s work. Null models are now being applied to new problems, including species richness gradients (the ‘mid-domain effect’) and phylogenetic community ecology.

But this revival of interest in null models seems not to have been accompanied by revived interest in the very serious conceptual issues which were debated, and never fully resolved, during the null model wars. In all null model work, the goal is to compare observed data, which were presumably generated by the combined action of various processes, to data generated by a null model which omits the effect of some process of interest but retains the effects of the other processes. Obviously, the choice of null model is absolutely crucial here. You will be seriously misled if your null model inadvertently retains some effects of the process of interest (the ‘Narcissus effect’; Colwell and Winkler 1984), and/or removes effects of other processes. In light of that, here are some questions that I think we ought to be (re-)debating:

1. How do we identify the appropriate null expectation? Most recent work assumes that appropriate null expectations can be generated simply by randomizing the observed data, typically in some constrained way. Nick Gotelli has done fine work testing the ability of different randomization methods, choices of constraints, and test statistics to recover known patterns in artificial data. But that work doesn’t address the deeper issue of what patterns we should expect our data to exhibit in the absence of the process of interest. For instance, in a spatiotemporally heterogeneous world, in which species have finite dispersal abilities, do we really expect species to be distributed randomly with respect to one another even in the absence of competition?

2. How do we generate data conforming to our null expectation? Again, most recent work assumes that constrained randomization of the observed data is the way to go. But is it? For instance, if we want to detect effects of interspecific competition, it is not at all obvious that the standard sorts of constraints on randomization do a good job of removing all effects of competition (no one’s ever entirely solved the Narcissus effect), or of capturing the effects of all the non-competitive processes. Maybe it would be better to develop an explicit mechanistic model of the processes we believe generated the data, and then use that model to generate expected data when the process of interest is set to zero. For instance, this is basically how models of neutral drift and migration are used to generate null expectations in population genetics and macroecology, thereby (hopefully) allowing effects of selection to be detected. Here’s a paper of Nick’s that touches on this issue.

3. What do we do if different processes have highly non-additive effects, so that the effects of removing a given process are highly context-dependent? In particular, what do we do if an observed pattern is ‘overdetermined’, so that removal of any one process doesn’t change the pattern? Is a hypothesis-testing approach even the right way to go in these kinds of situations? (not that alternative approaches are any picnic in such situations either…)

4.  In cases where questions 1-3 don’t have fully-satisfactory answers, what do we do? Do we just forge ahead with admittedly-imperfect methods on the grounds that they’re the best available? Do we abandon null model approaches entirely and focus on alternative approaches like field experiments? Do we find creative ways to combine different, complementary lines of evidence, each of which compensates for the limitations of the others? I suspect this is the really irresolvable, eternal question, the one about the relative strengths and weaknesses of different research approaches and the best ways to combine them.
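To make questions 1 and 2 concrete, here is a minimal sketch of the kind of constrained randomization test at issue: a presence/absence matrix is repeatedly ‘swapped’ in a way that preserves every species’ number of occurrences and every site’s richness, and an observed co-occurrence statistic (here Stone and Roberts’ C-score) is compared to the resulting null distribution. The toy matrix, the number of swaps, and the simplified swap scheme are all illustrative assumptions, not any particular published implementation:

```python
import numpy as np

rng = np.random.default_rng(42)

def c_score(m):
    """Stone & Roberts' C-score: mean number of 'checkerboard units'
    over all pairs of species (rows) in a presence/absence matrix."""
    n_species = m.shape[0]
    scores = []
    for i in range(n_species):
        for j in range(i + 1, n_species):
            shared = np.sum(m[i] & m[j])           # sites holding both species
            ri, rj = m[i].sum(), m[j].sum()
            scores.append((ri - shared) * (rj - shared))
    return np.mean(scores)

def swap_randomize(m, n_swaps=100):
    """Sequential-swap randomization: find 2x2 'checkerboard' submatrices
    at random and flip them, preserving all row and column sums."""
    m = m.copy()
    n_rows, n_cols = m.shape
    done = 0
    while done < n_swaps:
        r = rng.choice(n_rows, 2, replace=False)
        c = rng.choice(n_cols, 2, replace=False)
        sub = m[np.ix_(r, c)]
        if sub[0, 0] == sub[1, 1] and sub[0, 1] == sub[1, 0] and sub[0, 0] != sub[0, 1]:
            m[np.ix_(r, c)] = 1 - sub              # flip the checkerboard
            done += 1
    return m

# Hypothetical species (rows) x islands (columns) incidence matrix
obs = np.array([[1, 0, 1, 0, 1],
                [0, 1, 0, 1, 0],
                [1, 1, 0, 0, 1],
                [0, 0, 1, 1, 0]])

observed = c_score(obs)
null_dist = [c_score(swap_randomize(obs)) for _ in range(200)]
p_upper = np.mean([s >= observed for s in null_dist])
print(f"observed C-score = {observed:.2f}, P(null >= obs) = {p_upper:.2f}")
```

Note how much is baked into the choices here: the ‘fixed-fixed’ constraints implicitly assume that row and column totals themselves carry no signal of competition, which is exactly the sort of assumption question 1 asks us to examine.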

I do mean these questions as real, honest questions, not rhetorical ones. Hopefully folks will chime in with some responses in the comments. Certainly, these questions are being discussed in particular cases. For instance, Connolly (2005) argues that the mid-domain effect is really a Narcissus effect, which largely vanishes when an appropriate mechanistic model is used to generate the null expectation (UPDATE: McClain et al. (2007) is another nice paper discussing these questions in the context of the mid-domain effect). Recent attempts to use randomized null models to detect effects of competition on species distributions have used increasingly large datasets and increasingly complex constraints in an attempt to address some of the issues I’ve raised. But there hasn’t been any broader-based debate the way there was in the late 70s and early 80s. I think that’s a shame, especially for those too young to have participated in (or maybe even heard of!) the first null model wars. Indeed, I’m too young myself to have experienced the first null model wars—but I hope I get to experience NMW II.

p.s. Just so no one gets the wrong idea and thinks I’m throwing stones from inside a glass house, I happily agree that similarly challenging questions can be, and should be, asked about any research approach. I’ve certainly been asked hard questions about my own research approach (laboratory microcosms), which is absolutely fair. Indeed, my own reasons for doing what I do have shifted over the years in response to such questions, which I plan to talk about in a future post at some point. I appreciate being obliged to think hard about why I do what I do, and I hope the questions I’ve posed in this post will be taken in the same spirit.



  1. Hi Jeremy,
    I am working up a dissertation proposal and a chapter of it concerns testing null model(s) vs. niche determinism in an extreme microbial environment with (probable) severe dispersal limitation. I’m just delving into the literature and was noticing a slight mustiness though there is some recent attention by microbial ecologists to community assembly processes. I don’t have much to contribute at this stage but I had to post to get notified via email about future comments.

  2. Question… Where does neutral theory fit into this? In some way I feel that it is a null model war. And of course it fits into some broader concepts, like metacommunity theory, in which there are several concepts that explain species patterns at focal sites.

    • Neutral models are mechanistic, process-based models which omit certain processes (selection, most obviously). So if you’re interested in detecting the effects of selection, a neutral model is an appropriate null model. One nice thing about explicitly process-based null models, such as neutral models, is that they provide a check on our intuitions about when ‘randomness’ is the appropriate null hypothesis. When we’re interested in the effects of a particular process, we sometimes naively but incorrectly assume that that process is the only one which can produce non-random structure in our data, so that we can generate ‘null’ data simply by randomizing the observed data. Maybe we make this mistake because we’re so used to thinking about manipulative experiments, where (if we’ve properly randomized the assignment of experimental units to treatments) ‘randomness’ is indeed the appropriate null hypothesis. Neutral models show that, in the absence of selection, one does not actually expect a ‘random’ world. For instance, in the absence of selection one still expects that nearby sites will be more similar in species composition than widely-separated sites. Graham Bell’s work, especially his Science paper on neutral models, makes this point quite strongly.

      The ‘war’ over neutral models also highlights the possibility of overdetermination: some patterns in observed data are highly robust, so that the same patterns emerge from a range of models incorporating various combinations of processes. Species abundance distributions with many rare and few common species are perhaps the best-known example. Overdetermination highlights the value of looking at a range of models, rather than just a single null model and a single alternative.

      Bill Wimsatt’s wonderful essay, “False models as means to truer theories” (reprinted in his book, Re-engineering Philosophy for Limited Beings) is an excellent discussion of what biologists have meant by “neutral” models and “null” models, placed in the more general context of false models and their many uses. All models, null or not, are false in some sense, but models can be false in many different ways, each of which provides us with different information.
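      The distance-decay point made above can be illustrated with a toy simulation: a ring of sites undergoing pure neutral drift with occasional dispersal from neighboring sites, and no selection at all. The parameters (30 sites, 50 individuals per site, 10% immigration) are arbitrary illustrative choices, and this is a minimal sketch in the spirit of neutral models, not Bell’s actual model:

```python
import numpy as np

rng = np.random.default_rng(0)

n_sites, n_inds, n_species, steps = 30, 50, 10, 20000
# community[s] holds the species identity of each individual at site s
community = rng.integers(0, n_species, size=(n_sites, n_inds))

for _ in range(steps):
    s = rng.integers(n_sites)            # a random death at a random site
    i = rng.integers(n_inds)
    if rng.random() < 0.1:               # occasional immigration from a neighbor
        src = (s + rng.choice([-1, 1])) % n_sites
    else:                                # otherwise local recruitment
        src = s
    community[s, i] = community[src, rng.integers(n_inds)]

def similarity(a, b):
    """Proportional similarity of two sites' species-abundance vectors."""
    pa = np.bincount(a, minlength=n_species) / n_inds
    pb = np.bincount(b, minlength=n_species) / n_inds
    return np.minimum(pa, pb).sum()

near = np.mean([similarity(community[s], community[s + 1])
                for s in range(n_sites - 1)])
far = np.mean([similarity(community[s], community[(s + n_sites // 2) % n_sites])
               for s in range(n_sites)])
print(f"mean similarity, adjacent sites: {near:.2f}; distant sites: {far:.2f}")
```

      With only drift and local dispersal operating, adjacent sites tend to end up more similar in composition than distant ones, which is exactly why ‘random with respect to one another’ is not the right null expectation for spatially structured data.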

  3. “Do we find creative ways to combine different, complementary lines of evidence, each of which compensates for the limitations of the others?”

    Yes, and this isn’t just something that we should do when we can’t generate a “fully-satisfactory” null; it’s what we should be doing all of the time. Good methods are important for telling us about structures in data. Good reasoning based on combining multiple imperfect lines of evidence is science. I am starting to think that we spend a bit too much time in ecology worrying about finding exactly the right tool to perfectly characterize the structures in the data related to a single imperfect line of evidence, and too little time doing the hard work of integrating the evidence from across different approaches. I am certainly guilty of this myself.

    • Well said, and I strongly agree.

