I teach a graduate seminar on Darwin’s On the Origin of Species. We read and discuss the Origin and some related readings. It’s a lot of fun, for me and the students. If you haven’t yet read the Origin, or read it when you were too young to fully appreciate it, or haven’t read it in ages and don’t remember it that well, you really ought to (re-)read it. It’d be a great choice for a graduate reading group–you could read a chapter a week and finish it in a semester. So here are a bunch of notes to help and encourage you to take the plunge.


  • Read the first edition. Six editions of the Origin were published in Darwin’s lifetime. If you just go to a library or bookstore and pick a random copy of the Origin off the shelf, you’re probably picking up a copy of the sixth edition (if it’s got a whole chapter devoted to refuting the objections of a guy named “Mivart”, it’s the sixth edition). The sixth edition is mainly of historical interest, as the final statement of Darwin’s views. Those views were heavily revised from the first edition, in response to the many criticisms Darwin received. Unfortunately, most of those criticisms were off base, so the first edition actually is more correct than the sixth. So as a scientist who’s likely to be curious about how much Darwin got right, and who probably wants to be able to trace back modern ideas to their Darwinian roots, you’ll want to read the first edition. The first edition also is shorter, clearer, and more tightly argued, making it an easier read. It’s been aptly remarked that the sixth edition could have been titled “On the Origin of Species By Means of Natural Selection and a Whole Bunch of Other Things.” And the first edition is the edition that started the intellectual revolution–it’s the edition that changed the world. So why not read that?
  • Consider which printing of the first edition you want. Darwin’s books have long since gone out of copyright, so you can read the first edition for free on various websites, such as this one. If you prefer a hard copy (and call me old fashioned, but I really think every biologist should own a hard copy), then I recommend The Annotated Origin. It’s a facsimile of the first edition, so it has the original pagination (helpful if you’ll also be reading scholarly articles about the Origin, as they all refer to the book using the original pagination). And as the title indicates, The Annotated Origin has extensive and very good marginal notes from biologist James T. Costa. This is the printing I plan to teach my class from in future. Another option, which I’ve used in my class in the past, is the famous Harvard facsimile edition first published for the Origin’s 100th anniversary in 1959, which includes a famous and influential introduction by Ernst Mayr.
  • Do a bit of background reading. The Origin is quite accessible. It’s not technical; it was written to be read by any educated person. And while the style may not be your cup of tea (though I actually like it, or at least don’t mind it), it’s not difficult reading. So you can get a lot out of the Origin without doing any background reading. But background reading can definitely help you get more out of it. I require my students to read Janet Browne’s Darwin’s Origin of Species: A Biography. It’s a short (readable in a few hours) introduction to the writing of the Origin, the social and scientific context, reaction to the book, etc. Browne’s mammoth two volume biography of Darwin is great too, but probably much more than you’d want to bite off for a reading group. And of course there are many other things you could read; it’s not for nothing that historians of science talk about the “Darwin industry”.
  • Read it as part of a group. Read the Origin along with others so you can talk about your reactions as you go. Or just read along with John Whitfield, a science writer who back in 2009 did a nice series of blog posts called Blogging the Origin. He read the first edition and did a post on each chapter.

Food for thought

Here are some suggestions for things to think about as you read the first edition of the Origin. Many of them reflect my own interests, of course, so just ignore them if you don’t share my interests. The Origin is a really rich book and there’s plenty in it for anyone.

  • The quotations with which Darwin prefaces the book. One is from William Whewell, an influential thinker of the generation prior to Darwin’s, and the other is from the “inventor of the scientific method”, Francis Bacon. Both quotes talk about how science, and scientific laws, don’t conflict with Christianity. These quotes are an attempt by Darwin not just to defend against charges of impiety or atheism, but also to defend against charges of being unscientific. At the time, the leading view on the origin of species was “special creation”, which actually had relatively little in common with the forms of creationism espoused (often in thinly-disguised form) by fundamentalists today. It’s important to understand that “special creation” was just one manifestation of the deep intellectual commitments of most senior scientists of the day. To those scientists, such as the geologist Adam Sedgwick (once a mentor of Darwin’s), the whole point of science was to read nature as the “Book of God”, to document natural order and patterns as a physical manifestation of God’s plan. To someone like Sedgwick, Darwin’s explanation for the origin of species wasn’t just wrong, it wasn’t even the sort of thing that counted as a scientific explanation at all. And conversely, Darwin argues in the Origin that “special creation” is not so much an incorrect explanation for the origin of species, as a non-explanation–it leaves all sorts of surprising patterns in nature “untouched and unexplained”. It’s difficult for a modern reader to really “get” the mindset of a special creationist, but it’s worth a try in order to understand the Origin as Darwin and his readers understood it.
  • Darwin’s style. Note that the style is very cautious and modest (until the final, summary chapter, which is beautifully confident). Indeed, Darwin devotes a whole chapter to raising and then addressing objections to his ideas, and it’s clear from the way he writes (and from private correspondence) that he’s not just setting up straw men. He really does worry about these objections, perhaps even too much. It’s a far cry from the way most scientists write these days.
  • Ordering of the material. Note that Darwin doesn’t start out with anything exotic (nothing about the Galapagos Islands, for instance, which are hardly mentioned in the book). Instead, he starts out talking about domestic animals. It’s an attempt to get readers on board, by talking about something ordinary and familiar. More broadly, note that in the first few chapters Darwin lays out his big conceptual idea–evolution by natural selection–and then in the remainder of the book discusses how that hypothesis fits with and explains the available data. One can ask, as philosopher Elliott Sober has, if Darwin wrote the Origin “backwards”. That is, he starts out with the mechanism of evolutionary change, and only then does he go on to argue for the fact of evolutionary change. Which seems a bit backwards, when put that way–shouldn’t you start by describing what needs explaining before you explain it? You may want to think about why Darwin ordered the material the way he did.
  • What Darwin got right, and wrong, and the risk of mixing them up. Darwin gets a lot right in the Origin, including prefiguring almost every big idea in modern ecology (even trendy ecological ideas like biodiversity and ecosystem function!). My students are always shocked at just how much he gets right and how modern he sounds. He also gets some things wrong, of course (and not always because he was unaware of facts we’re aware of). But he gets so much right that it’s tempting to read into the Origin modern ideas that Darwin himself didn’t actually hold. Case in point: the book is infamous for not fully living up to its title because Darwin doesn’t really fully grasp how natural selection can generate new species from existing ones. That’s because he doesn’t really recognize the possibilities of spatially-varying selection (different variants favored in different locations) and frequency-dependent selection (relative fitness of different variants depends on their relative abundances). Instead, Darwin has what Costa aptly calls a “success breeds success” vision of how selection works–new, superior variants arise and then sweep to fixation everywhere they can spread to, replacing the previous variants. To get this “success breeds success” process to generate and maintain diversity, Darwin invokes what he calls his “Principle of Divergence”, which is the idea that parents will be fitter if they have divergent offspring (offspring that differ from one another in their phenotypes). The idea is basically to make the production of diversity itself a cause of evolutionary success. There are contexts in which this can work–but plenty of contexts in which it can’t (there are logical as well as empirical flaws to the idea as developed by Darwin). Now, I should note that I’m not an evolutionary biologist, and there are evolutionary biologists who read the Origin as presenting pretty much a fully-modern and correct theory of how selection affects speciation.
All I can say is that I think they’re reading into the Origin, and in particular into the “Principle of Divergence”, something that just isn’t there. Read it and judge for yourself.
  • Explanation and unification. It’s often said that Darwin’s great achievement in the Origin is to unify and explain many apparently-unrelated facts. The Origin links together and explains facts about everything from animal breeding to biogeography to embryonic development to the fossil record. Which raises many deep and interesting conceptual issues. For instance, is unification always a good thing in a scientific theory? Why? Is it because unification is a mark of truth? For instance, maybe unification is a sort of “indirect” or “circumstantial” evidence. If a theory seems to work well to explain facts A, B, and C, then perhaps we ought to take that as indirect or circumstantial evidence in favor of its explanation for fact D. But on the other hand, conspiracy theories also unify many apparently-unrelated facts–which is usually taken to indicate that they’re false, not true! Or maybe unifying theories are valuable because, true or not, they’re more productive as “working hypotheses”, guiding future investigations by suggesting what questions to ask and what data to collect. Darwin famously called his theory “a theory by which to work”. Then again, maybe not. For instance, progress on understanding the causes of variation and heredity (problems that famously vexed Darwin) came not just from the rediscovery of Mendel’s work, but from breaking the problem up and disunifying it. Muller and his followers figured out what we now call transmission genetics by explicitly setting aside and ignoring what we now call developmental genetics, regarding it as a separate problem. And what exactly does it mean to “explain” some fact or set of facts, anyway, and how are explanation and unification connected, if at all? For instance, do explanations have to be unifying if they’re to count as explanations at all? The intuition here is that every theory or hypothesis has to take something for granted. 
So if you produce a separate, independent explanation for every single thing you want to explain, then you’re effectively just changing the question, substituting one set of unexplained, taken-for-granted statements for another. At best, you’re just pushing the required explanations back a step (e.g., if you “explain” the origin of life on earth by saying “it arrived on an asteroid”, all you’ve done is change the question to “where did life on the asteroid come from?”) But if you have a unifying explanation, a single explanation for a bunch of different facts, you’re “killing many birds with one stone” and reducing the number of unexplained statements we just have to take for granted. See here and here and here for some longer posts I did on issues of explanation and unification for the class blog.
  • Circular reasoning? Darwin developed his theory to explain lots of different facts about the world, and along the way he modified it in various ways as he discovered new facts. In light of that, isn’t it a bit (or even more than a bit!) circular to regard those facts as evidence for his theory, or as a test of his theory? Isn’t it circular (or maybe better, “double-dipping”) to use the facts to develop and inspire your theory, and then turn around and re-use those same facts to test the theory? After all, developing your theory so that it fits known facts guarantees that your theory will fit those facts! In philosophy of science, this is known as the “old evidence” problem: when does previously-known (“old”) evidence constitute evidence for a new theory? There are plenty of examples of the old evidence problem besides the Origin, so it’s a very general issue well worth thinking about (I emphasize I’m just throwing the issue out there as food for thought–I’m not saying whether I think Darwin’s argument is actually circular!)
  • Comparative reception of the Origin. After you read the Origin, you’ll probably find yourself wanting to dig into all sorts of related topics. One related topic my class discusses is the comparative reception of the Origin in different cultures and religions. There are obvious reasons why North American and European biologists focus so much on how Christians, especially conservative ones, react to evolution. But it’s worth remembering that there are other strains of Christianity, and other religions, and it’s very interesting to compare and contrast the ways they reacted to Darwin’s ideas.
  • UPDATE: Darwin is in the eye of the beholder. Darwin has been claimed as a model and even a hero by many groups. For instance, the Origin has been claimed as a biological justification for both communism and unregulated capitalism. John Whitfield astutely notes that Darwin has been claimed as a model by both the “lean and mean” school of evolutionary biology (theorists like Fisher, Hamilton, Maynard Smith, Dawkins, and Price, who focused on natural selection and its consequences using simple, elegant models) and the opposing “let a thousand flowers bloom” school (exemplified by Stephen Jay Gould, with his emphasis on historical contingency and the complex interplay of many evolutionary forces). Darwin has of course been claimed as a model by naturalists, especially those who bemoan the perceived decline of field-based, observational natural history within biology–even though Darwin himself did a lot of highly artificial greenhouse and lab experiments to which he attached great importance, wasn’t above collaborating with mathematicians (as in the section of the Origin in which he builds a geometry-based model of how honeybees can build perfectly hexagonal honeycombs), and was heavily criticized in his own time for engaging in ungrounded theoretical speculation rather than sticking close to the data and making inductive generalizations. Interestingly, many of today’s greatest naturalists, like E. O. Wilson and the Grants, achieved their greatness in a similar way, by seriously pursuing and integrating many different lines of work of which “natural history” was only one. So when you read the Origin and come to see Darwin as your hero (as you probably will!), pay attention to what you find heroic about him–it may well say more about you than it does about Darwin!
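The contrast drawn above between Darwin’s “success breeds success” picture and frequency-dependent selection is easy to see in a toy simulation. The following sketch is my own illustration (a minimal one-locus haploid model, nothing from the Origin or from Costa): with a constant fitness advantage the superior variant sweeps to fixation, while with negative frequency dependence (a variant is fitter only while rare) both variants persist.

```python
# Toy haploid selection model: track the frequency p of variant A,
# with variant B's fitness fixed at 1.0 as the baseline.

def iterate(p, fitness, generations=500):
    """Iterate discrete-generation selection on the frequency p of variant A.

    `fitness` maps the current frequency p to variant A's fitness,
    so it can represent either constant or frequency-dependent selection.
    """
    for _ in range(generations):
        w_a = fitness(p)                       # fitness of A (may depend on p)
        w_b = 1.0                              # fitness of B (baseline)
        p = p * w_a / (p * w_a + (1 - p) * w_b)
    return p

# "Success breeds success": A is always 5% fitter, so it sweeps to fixation.
p_constant = iterate(0.01, lambda p: 1.05)

# Negative frequency dependence: A's fitness declines as it becomes common
# (fitter than B only while p < 0.5), so a stable polymorphism results.
p_freq_dep = iterate(0.01, lambda p: 1.1 - 0.2 * p)

print(round(p_constant, 3))   # -> 1.0 (fixation)
print(round(p_freq_dep, 3))   # -> 0.5 (stable polymorphism)
```

The point of the sketch is just that the same selection machinery yields qualitatively different outcomes depending on whether fitness is constant or frequency-dependent–the latter being the possibility Darwin’s “success breeds success” vision misses.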

Over at Nothing in Biology Makes Sense, newly-minted PhD evolutionary biologist David Hembry reflects on the biggest changes in evolutionary biology and ecology since 2005. It’s a thoughtful piece, reflecting on some less-noted aspects of widely-noted trends. For instance, it’s not just the increasing availability of sequence data that makes synthesis and reanalysis of other people’s sequence data attractive, it’s also the fact that it’s cheap to do (particularly important in an era of rising fuel costs and increasing competition for funds). The same could be said of any database-based work, really, and also of theoretical work and laboratory microcosm work. It will be interesting to see how patterns of training, hiring, and publication shift in decades to come*, and if there aren’t frequency-dependent forces that will limit how far these directional trends can go (At some point, will really good field skills become highly prized precisely because of their scarcity, while good bioinformaticists/meta-analysts/theoreticians/programmers/etc. will be a dime a dozen?)

David also identifies some less-noted trends, such as the increasing focus of evolutionary biologists on “field model organisms” like sticklebacks and anoles, and how this poses problems of system choice for grad students who want to go on to academia. Do you choose the same model system as everyone else, thereby making it easier to ask big questions (after all, there are good reasons why sticklebacks and anoles are model systems!), but harder to stand out from the crowd? Or do you choose the road less traveled, which might make it harder to address big questions but also really impress people if you succeed? (sounds a bit like the handicap principle…)

Anyway, click through and read the whole thing.

*At least in ecology, there’s not yet much indication of a radical shift towards people publishing data collected by others.

I just tried to visit the Ecological Society of America website, and Google gave me this:

What the hell?! The diagnostic page says that over the past 90 days, a bunch of pages from esa.org resulted in malicious software being downloaded without user consent, including a bunch of exploits and trojans. I’m guessing this means the ESA website was hijacked sometime within the last few months?

UPDATE: Via the ESA Twitter feed, I see that the site was indeed hijacked within the last 24 hours. It’s been fixed, but it’ll take about 24 h for Google to recognize the fix.

Posted by: Jeremy Fox | June 15, 2012

Take-home messages vs. the devil in the details

As scientists, whether we’re reading a paper or listening to a talk, we often focus on the take-home message. The main conclusion. The key point. The bottom line. The gist. The summary.

But should we do that? Always?

Because the devil is in the details. And not just sometimes, but pretty much all the time. So if you don’t understand the details, if you don’t know how the “bottom line” was “calculated”, what good does it do you to know it? If you don’t know what the summary is summarizing, what’s the point of knowing the summary? Indeed, can you even be said to understand the summary?

Now, I actually think those questions have good answers–sometimes. Summaries do have their uses. There are certain times when it’s ok to ignore the details and just focus on getting the gist. But details have their uses too, and there are times when it is anything but ok to ignore them. In my experience, many of the most serious mistakes in ecology arise from insufficient attention to detail (see here and here for just two of many possible examples).

So here are some quite specific circumstances in which I think summaries have value even if you don’t know the details of what’s being summarized:

  • The summary is only a starting point; you’re going to learn the details. For instance, you read the abstract of an interesting-looking paper, or read a live-tweet of a talk, and decide to go read some papers on that topic. Or, you read a bunch of abstracts, and decide to read whichever papers sound most interesting, thereby using the abstracts as a filter on what details to learn rather than as a substitute for learning the details.
  • You’re only mildly interested in some topic and have no need to know very much about it.

Conversely, there are other circumstances in which you had better know the details, so that you know exactly what’s being summarized. Now, summaries are still valuable in these circumstances–but only because they help you understand the details, not as a substitute for the details.

  • You’re going to work on the topic.
  • You’re going to publish something on the topic.
  • You’re going to apply for a grant on the topic.
  • You’re going to present on the topic.
  • You’re going to teach the topic.
  • You’re going to cite a paper on the topic. Yes, I believe that you should read–carefully, critically, and in its entirety–every paper you cite (rule of thumb: read it as if you’re reviewing it). To cite something is to rely on it; you ought to satisfy yourself that you can rely on it. Doing otherwise is how mistakes get perpetuated. Yes, I admit to not always doing this myself. We are all sinners.

And here are some bad reasons and poor excuses for focusing on summaries to the exclusion of the underlying details:

  • When reading a theoretical paper, skipping the math and any technical explanation and just focusing on the broad-brush summary, on the grounds that you don’t know math or don’t like math. No, you don’t need to rederive all the math, any more than you have to reperform every experiment you read about. But you can hardly claim to understand the math if you’ve skipped over the math entirely! Something similar could be said for any technical paper, of course, but in my experience “math phobia” is far more common than “natural history phobia” or “experimental design phobia” or “statistics phobia” or etc.
  • Ignoring the details because you’re just looking for a way to get a quick paper. Many ecologists are understandably eager to find and use methods that seem easy to apply but yet promise great insight. This doesn’t just lead to bandwagons, it also leads to people rushing out to apply these methods without having thought about the details of how the methods work or how the results should be interpreted. In this context, focusing on “the bottom line” to the exclusion of details basically amounts to saying “I don’t want to have to think about this method, I just want to be able to ‘crank the handle’ and use it without thinking.”
  • Ignoring the details because you think “the big picture” is what really matters. Hey, I’m a big-picture person too. I love the big picture. But if you don’t know the underlying details, then you don’t know what it’s a big picture of. You say you want to see “the forest for the trees”? Well, you’d better be able to tell if that green patch in your “big picture” (your “satellite photo”, if you will) is a forest and not grassland or farmland. Heck, you’d better be able to tell if your “big picture” is a satellite photo and not a child’s fingerpainting. (oops, that snapping sound you just heard was the sound of the whole “big picture/forest for the trees” metaphor being stretched beyond the breaking point…)

It occurs to me that this is connected to a really old post of mine on hand waving in ecology, in which I illustrated by example, but struggled to articulate, the difference between “good” and “bad” hand waving. One characteristic of good hand waving is that it starts from an appreciation of the details. When someone like Mathew Leibold argues that a very simple food web model “captures the essence” of what’s going on in some complex natural community, he’s starting from a thorough understanding of what the model assumes and what it predicts. But if you only “get the gist” of that same model, you’re in no position to judge whether it does, or is likely to, “capture the essence” of what’s going on in your study system.

p.s. Protip: if it’s not obvious where a link goes, it probably goes to something funny that’s related to the post. This is true of many of my posts.

PeerJ is a new open access publishing initiative which you join by paying a flat one-time fee, entitling you to publish as many open-access articles as you want for the rest of your life. Articles are peer reviewed for technical soundness. The initiative was founded by some serious scientific publishing bigshots.

But that’s not actually what I wanted to note. In passing, a recent Nature news article on PeerJ says that

To avoid running out of peer reviewers, every PeerJ member is required each year to review at least one paper or participate in post-publication peer review.

Hmm, wonder where I’ve heard that idea before? Wait, it’ll come to me…

p.s. Just to be clear, I’m not claiming that PeerJ got this idea from Owen Petchey and me. I just feel vindicated that something like our “PubCreds” idea has been incorporated into a serious publishing business venture. More than one, actually. This also vindicates a remark which scholarly publishing consultant Joseph Esposito made to me at a publishing conference: PubCreds will happen when someone figures out how to monetize it (or in this case, monetize a larger initiative, of which something like PubCreds is one component).

Posted by: Jeremy Fox | June 13, 2012

Intuition, education, and zombie ideas (UPDATED)

Here’s an intriguing little cognitive psychology experiment, which shows that highly educated people evaluate the truth or falsehood of scientific statements less quickly and less accurately when a statement’s actual truth value conflicts with what a “naive” theory would say (e.g., “Humans are descended from chimpanzees”, which sounds right but is false, or “The Earth revolves around the sun”, which is true but conflicts with naive intuition). This suggests that pre-existing false ideas aren’t “overwritten” by education, they’re merely “suppressed”.

I wonder if something similar can explain the persistence of certain zombie ideas. Is it just inherently difficult to unlearn the first thing we ever learn about a topic, as I suggested in this old post? So that, if the first idea you ever learn about a topic is false (say, you’re taught the IDH as an undergrad), you become a zombie and it becomes very difficult to cure you? And if so, what can we do about it (besides make sure that our undergrad curricula are up to date)?

Presumably there’s a lot of research on this I’m not aware of.

p.s. Depressingly, rates of correct responses were lowest, and speed of response slowest, for questions about evolution (questions fell into 10 different subject areas across the physical and life sciences).

UPDATE: Error in first paragraph now fixed.

Posted by: Jeremy Fox | June 13, 2012

Blogging and tweeting the ESA meeting

The Ecological Society of America is encouraging bloggers to blog the ESA meeting. If you do a post on any aspect of the meeting, they’ll create a post on their EcoTone blog with your post title, an excerpt, and a link back to the full post on your blog. Details here.

Interesting idea. I’ll need to ask a few questions before I participate. Not sure if my daily previews and summaries of my experiences are the sort of thing they’re looking for. Those posts are really informal, and they’re written for the audience of this blog rather than the much broader audience of EcoTone. Plus, I sometimes include criticism of various aspects of the meeting. Even if EcoTone is just going to be doing brief excerpts and linkbacks, I want to make sure they’re ok with what they’re linking to.

And if you’re tweeting the meeting, the hashtag is #ESA2012.

Posted by: Jeremy Fox | June 13, 2012

Elinor Ostrom, 1933-2012

Elinor Ostrom, the first woman ever awarded the Nobel Prize in Economics, has died. Ecologists, including me, mostly don’t know her work (I only know of it). But we should. She did hugely important work on the management of common pool resources, and argued that Hardin’s tragedy of the commons was in fact sustainably soluble (and, as a matter of historical fact, had been sustainably solved many times) via appropriate social norms and institutions, not just by privatizing the commons as Hardin argued.

Crooked Timber has a brief remembrance and (as always over there) a remarkably good comment thread full of thoughtful remarks and useful links.


Posted by: Jeremy Fox | June 12, 2012

Advice: where to eat and drink at Evolution 2012

I lived in Ottawa for four months a couple of years ago while on sabbatical. I’ve drawn on that experience to create an annotated map of suggested places to eat and drink for Evolution 2012.

Basically, everyone is going to be eating and drinking in the Byward Market area, as it has the highest concentration of bars and restaurants close to the convention center and to major tourist attractions like Parliament. I’ve suggested a few places in that area, but also some places a bit further afield for folks who like to walk or who are willing to bike/drive/take a taxi. The other “main drags” in downtown Ottawa are Elgin St. and Bank St.

Bottom line: the best places to drink in Ottawa are The Manx (walkable from the convention center; hidden in a basement; great cozy, buzzy atmosphere; excellent, changing food menu; really popular) and Pub Italia (not walkable; has the only world-class beer selection in Ottawa; big draft list plus hundreds of bottles including lots of rarities and obscure Belgians; an essential stop for beer geeks like me).

See also this list of local pubs produced by the meeting organizers.

Posted by: Jeremy Fox | June 12, 2012

How to attend big conferences: have a focus

Over at Sociobiology Joan Strassmann has a great post on how to choose talks at big conferences like the ESA Annual Meeting. This is something we’ve talked about on this blog before, but no one ever brought up Joan’s excellent suggestion: have a focus. That is, try to see as many talks as you can in a particular area that you want to learn more about. This is a great way to get up to speed on the current state of play in that area.


Posted by: oikosasa | June 12, 2012

Accepted Today

Patterns and processes of population dynamics with fluctuating habitat size by Fukaya K (Hokkaido University, Japan), Shirotori W and Kawai M.

Competitive outcomes between two exotic invaders are modified by direct and indirect effects of a native conifer by Metlen KL (Nature Conservancy, Oregon, USA), Aschehoug ET and Callaway RM.

Soon as Early View!

Posted by: oikosasa | June 12, 2012

New Managing Editor

Hi everybody,

My name is Åsa Langefors and I’m the new Managing Editor at Oikos. After two initial months at the office, I have now started to learn the routines and to get acquainted with editors, authors and reviewers – i.e., with Oikos. And just as things started to find their place in my brain, we switched over to ScholarOne for handling manuscripts. New stuff to learn, new routines, etc. But the new system has so far run smoothly, and I hope everybody will have some patience with any confusion that might arise in the beginning.

I have my background in Molecular Ecology, having studied MHC genes in fish as a PhD student and postdoc. For the last five years, I have combined a career as a freelance journalist with work at the university, both in a research school in Genomic Ecology (with a special interest in so-called soft skills, including career development, personal development, how to deal with gender issues, etc.) and in an EU project in Soil Ecology. I am sure that several parts of my quite mosaic background will be useful in my work with Oikos.

When not working, I spend time with my family – a husband and two kids – and also a lot of time in my garden. In addition, when life is good, I think we have a duty to enjoy it. So good cappuccino, fine wine, good food, and working out (preferably outdoors: running, walking, skiing or skating) are important matters to me.

Posted by: Jeremy Fox | June 11, 2012

Against live-tweeting talks (UPDATEDx2)

A rant against live-tweeting talks, here.

I don’t tweet at all, so I don’t live-tweet. In particular, I don’t feel like I’d provide much of value to anyone by live-tweeting talks, or that I’d get a lot of value out of following others’ live tweets.* And while it doesn’t bother me much as a speaker–I just ignore people who are doing it, much as I ignore undergrads who text during class–I can understand how it would bug some people.

I do agree with the linked post that those who do it get less out of the talk. In particular, I try to make my talks as dense and fast-paced as possible without risking losing the audience. That is, I try to design my talks so that you have to pay close attention, and so that your close attention is rewarded. You’ll hopefully feel like you really got a lot out of the time you spent listening to me. I question whether you’ll be able to fully follow a talk, especially the sort of talk I try to give, if you’re live-tweeting. Studies show that even people who think they’re good at “multi-tasking” and have practiced it a lot actually aren’t good at it, by any measure.

So live-tweeting doesn’t really bother or offend me personally, though I can see why it would offend others. I think the people who do it are mostly hurting themselves, if only a little, and not for much benefit that I can see. But I’m an old guy, so I suspect some of our regular commenters are totally going to disagree with me on this.

UPDATE: Just to be clear, I certainly don’t think that the only people who ever pay less-than-full attention to a talk are the people who are live-tweeting. Far from it. And as a speaker, I personally am no more bothered by live-tweeters than I am by people who aren’t paying full attention for some other reason.

UPDATE #2: Perhaps not surprisingly, I’m not big on the way this post is being summarized on Twitter. The post isn’t really about whether “talks should be ‘tweetable'”, it’s about the benefits and costs of live-tweeting them. Anything is ‘tweetable’. But in truth, this probably says more about me than it does about Twitter or folks who use it. I tend to distrust other people’s short summaries of anything, whether tweeted or not. And just for the record, let me say that commenters here, and Joan Strassman in her own post, have articulated some good reasons why one might live-tweet (or tweet right after a talk), or follow the live-tweets of others. So while I personally still don’t see much value in live-tweeting, I can understand why others do. As with many things in life, it’s a question of doing what works for you.

*To be clear, I do see value in many other uses of Twitter.

Posted by: Jeremy Fox | June 10, 2012

Fighty crab: the meme that keeps on giving

I continue to find Zen Faulkes' "fighty crab" meme hilarious. I'm still realizing how versatile it is.

Written a confusing blog post? Fighty crab tells you:

Want to celebrate World Oceans Day? Fighty crab tells you how:

Want to win your One True Love back (assuming your One True Love was having a dalliance with, um, dolphins)? Fighty crab has you covered:

p.s. I didn’t come up with any of these.

Posted by: Jeremy Fox | June 10, 2012

Fighty crab vs. zombies

Posted by: Jeremy Fox | June 9, 2012

Advice: free e-book of presentation tips from Zen Faulkes

Zen Faulkes (‘Neurodojo‘) has compiled his many excellent blog posts on scientific presentation tips into a short e-book (i.e. a pdf file). Recommended.

Billiards is all about sequences of causal events. Your cue strikes the cue ball, causing it to roll into another ball, causing that ball to roll into the corner pocket.

Falling dominoes are sequences of causal events. You knock over the first domino, which knocks over the second, which knocks over the third.

Rube Goldberg machines are sequences of causal events. The toy car is pushed into a line of dominoes, the last of which falls onto another toy car, which rolls down a ramp and runs into a ball, which rolls down another ramp…[skipping ahead]…which causes a piano to fall…[skipping some more]…which causes paintball guns to fire at a rock band.*

When humans think about causality, they find it natural to think in terms of sequences of events. That’s why colliding billiard balls are a paradigmatic example of causality in philosophy.

But ecology is mostly not like billiards, or falling dominoes, or Rube Goldberg machines. Like history, ecology is (mostly) not "just one damned thing after another." But it's hard not to think of it that way, and to teach our students not to think of it that way.

(UPDATE: I’m not saying that ecology, or dynamical systems in general, aren’t causal systems. They are! I’m just saying that the nature of that causality is such that it’s misleading to think about it as “Event A causes event B which causes event C which causes event D…”)

(UPDATE#2: Nor am I saying that ecological systems are “nonlinear” or “nonadditive”. They are, but that’s not my point here. For instance, you can have a sequence of causal events in which the magnitude of the effect is nonlinearly related to the magnitude of the cause. See the linked post from Nick Rowe, below, for further clarification. Sorry the original post wasn’t better, it’s clear that I did a lousy job of anticipating the ways in which readers might misunderstand what I’m trying to get at here).

Ecology is about dynamical systems. Stocks and flows, not falling dominoes. Inputs and outputs, not colliding billiard balls. Simultaneity, not sequences. Feedbacks, not one-way traffic.

Here’s an example. It’s a population ecology example, but not because population ecology is the only bit of ecology that’s about dynamical systems. It’s just a bit of ecology I know well. I could equally well have picked an example from physiological ecology (e.g., to do with individual growth), or from community ecology, or from ecosystem ecology, or from island biogeography, or conservation biology, or spatial ecology, or macroecology, or etc.

The example is predator-prey dynamics. You’ve got some prey that reproduce and die, and some of those deaths are due to predators. Predators convert consumed prey into new predators, and they die. Purely for the sake of simplicity (because it doesn’t affect my argument at all), let’s say it’s a closed, deterministic, well-mixed system with no population structure or evolution or anything like that, so we can describe the dynamics with just two coupled equations, one for prey dynamics and one for predator dynamics. And again for the sake of simplicity, let’s say it’s a constant environment and there’s no particular time at which organisms reproduce or die (e.g., there’s no “mating season”), so reproduction and mortality are always happening, albeit at per-capita and total rates that may vary over time as prey and predator abundances vary.

You cannot think about this dynamical system in terms of sequences of causal events. For instance, let’s say the system is at equilibrium, meaning that predator and prey abundances aren’t changing over time. That does not mean nothing’s happening! In fact, there’s a lot happening. At every instant in time, prey are being born, and prey are dying, and those two rates are precisely equal in magnitude but opposite in sign. And at every instant in time, predators are being born and predators are dying, and those two rates are precisely equal in magnitude but opposite in sign. Inputs and outputs are in balance. You cannot think about equilibria in terms of sequences of causal events, it’s like trying to think about smells in terms of their colors, or bricks in terms of their love of Mozart. What “sequence of events” keeps the system in equilibrium?

Or, let’s say the predators and prey exhibit cyclic dynamics. For concreteness, let’s say it’s a limit cycle in the Rosenzweig-MacArthur model. Why do the predators and prey cycle? This is a case where it’s sooo tempting to think in terms of sequences of events; I know because my undergrad students do it every year. “The prey go up, which causes the predators to go up, which causes the prey to crash, which causes the predators to crash.” In lecture, even I’ve been known to slip and fall back on talking this way, and when I do the students’ eyes light up because it “clicks” with them, they feel like they “get” it, they find it natural to think that way. And it’s wrong. Not “wrong in the details, but basically right”. Not “slightly wrong, but close enough.” Wrong. Births and deaths are happening instantly and continuously. There are no sequences of events here.
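For readers who like to see this concretely, here's a minimal sketch of the Rosenzweig-MacArthur model in Python. The parameter values are just illustrative ones I picked to put the system in the cycling regime, not from any particular paper, and the crude Euler integration is for transparency, not accuracy. The key point is visible right in the code: both state variables are updated simultaneously from instantaneous rates, and there's no "sequence of events" anywhere.

```python
# Rosenzweig-MacArthur predator-prey model, integrated with crude Euler steps.
# Parameter values are illustrative only, chosen to sit in the cycling regime.
r, K = 1.0, 10.0   # prey intrinsic growth rate and carrying capacity
a, h = 1.0, 0.5    # predator attack rate and handling time
e, m = 0.5, 0.3    # conversion efficiency and predator mortality rate

def rates(N, P):
    """Instantaneous rates of change of prey (N) and predators (P).
    Both are evaluated simultaneously; there is no 'first' event."""
    feeding = a * N * P / (1.0 + a * h * N)  # type II functional response
    dN = r * N * (1.0 - N / K) - feeding
    dP = e * feeding - m * P
    return dN, dP

N, P, dt = 1.0, 1.0, 0.001
traj = []
for _ in range(200_000):
    dN, dP = rates(N, P)
    N, P = N + dN * dt, P + dP * dt  # both state variables update together
    traj.append((N, P))
```

Plot the trajectory and you'll see the familiar cycles. But notice that nothing in the code ever says "the prey peak causes the predator peak"; at every step, births and deaths of both species are happening at once.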

Now I can hear some of you saying, ok, that’s true of the math we use to describe the world, but it’s not literally true of the real world. In the real world one could in principle write down, in temporal order of occurrence, all the individual birth and death events in both species. But my point would still hold. A prey individual was born, which caused prey abundance to increase by one, which caused…what, exactly? What’s the next domino to fall in the sequence? Another prey birth? No. A prey death? No. A predator birth or death? No. What that increase in prey abundance did was slightly change the expected time until the next birth or death event, by increasing prey abundance and (in any reasonable model) feeding back to slightly change the per-capita probabilities per unit time of giving birth and dying. Now, you could try to drill down even further, down to the underlying physiological (or whatever) causes of individual births and deaths, and the underlying mechanisms linking per-capita birth and death probabilities to species’ abundances. But you’re never going to find something that lets you redescribe predator-prey dynamics in terms of sequences of events, each causing the next. (UPDATE #3: And to clarify further, no, I’m not trying to argue against the notion that population dynamics are ultimately a matter of individual organisms giving birth, dying, and moving around. I actually heartily believe that! My point is to do with how to interpret the causality of what’s going on, whatever level of organization (individuals or populations) we choose to focus on.)
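And if you want to see what the individual-events version looks like, here's a hedged sketch of a Gillespie-style (event-by-event) stochastic predator-prey simulation; again, all the parameter values are made up for illustration. Notice that an event doesn't "cause" the next event: it merely changes the current rates, and thereby the probability distribution of what happens next and when.

```python
import random

# A Gillespie-style, individual-based predator-prey simulation.
# All parameter values are made up for illustration.
random.seed(1)
b, d = 1.0, 0.2            # prey per-capita birth and background death rates
a, e, m = 0.005, 0.5, 0.4  # attack rate, conversion efficiency, predator death rate

N, P, t = 200, 50, 0.0
steps = 0
while t < 5.0 and N > 0 and P > 0 and steps < 100_000:
    steps += 1
    # Total rates of the four possible event types *at this instant*:
    event_rates = [b * N,              # a prey is born
                   d * N + a * N * P,  # a prey dies (background or predation)
                   e * a * N * P,      # a predator is born
                   m * P]              # a predator dies
    # Waiting time to the next event is exponential with the summed rate.
    # Each event merely reshuffles these rates; it doesn't trigger a "next domino".
    t += random.expovariate(sum(event_rates))
    event = random.choices(range(4), weights=event_rates)[0]
    if event == 0:
        N += 1
    elif event == 1:
        N -= 1
    elif event == 2:
        P += 1
    else:
        P -= 1
```

Even here, where there literally is a temporal sequence of individual birth and death events, the causality runs through the rates, not from one event to the next.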

Our deep-seated tendency to think in terms of causal sequences of events rather than in terms of rates of inputs and outputs (i.e. rates at which the amount of something increases or decreases) doesn’t just make it hard to teach ecology. I think it also makes it hard for professionals to do ecology. For instance, to preview a future post, much of the appeal and popularity of structural equation models (SEMs) is that they let researchers take causal diagrams (variables connected by arrows indicating which ones causally affect which others) and turn them directly into fitted statistical models. That is, SEMs mesh with and reinforce our natural tendency to think about causality in terms of colliding billiard balls. Which I think makes them positively misleading in many circumstances (as I say, much more on SEMs in a future post).

This post was inspired by a post on the same topic by Nick Rowe. Nick’s post is about economics. His post is way better than mine. You should click through and read it (no training in economics required; stop when you get to the bit at the end about “concrete steppes”, which is where the post segues into technical economics issues).

*Click the link to see what I’m talking about. 😉

The current distribution of species bears the strong stamp of “big, slow” historical events and processes–speciation events, continental drift, meteor strikes, ice ages, the rises and falls of mountain ranges and land bridges, etc. Which has often been taken to imply that, in the grand scheme of things, the sorts of “small, fast” processes that contemporary community ecologists study don’t actually matter all that much.

In this old post, I argue that, to the contrary, “big, slow” historical processes only matter if contemporary community ecology lets them matter. What we ought to be asking is not whether community ecology is “important” (because it has to be), but why community ecology is “history preserving” rather than “history erasing”.

Intrigued? Click through and read the whole thing.

p.s. I know I said in a recent post that I was done talking about macroecology for a while. But the response to that post was so positive that I changed my mind. It’s gonna be all macroecology all the time now! (just kidding)

Posted by: Jeremy Fox | June 7, 2012

Pollination ecology humor

Here’s what it probably feels like for a strawberry flower to be pollinated and develop into a strawberry. With cartoons.

“The changes can be unsettling.” LOL!

HT Jeremy Yoder

Posted by: Jeremy Fox | June 7, 2012

Is macroecology like astronomy?

Note: This post is old wine in a new bottle. It basically repeats some old posts, just in a slightly different way. I’m only doing it because the comment threads on those old posts are really good, but I felt like they petered out a bit too soon.* This is my attempt to revive them. So if you’ve been reading my old posts on macroecology and the associated comments and thinking “More please!”, this post is for you. But if, as is more likely, you’ve been reading those old posts and thinking, “Jeebus, doesn’t he have anything new to say?”, just skip this one.

Note: Man, this post ended up way longer than I originally intended! Sorry about that. Maybe everybody should skip it. Basically, all the post does is argue that macroecology is not at all like astronomy, except in a couple of superficial ways. If you really care why I argue that, and can’t guess it from having read old posts, read on…


I write a lot on this blog about how difficult it is to infer process from pattern, mechanism from observation, and causation from correlation, even tentatively. In response, ‘macroecological’ colleagues, whose work emphasizes the documentation and interpretation of observed ‘large-scale’ ecological patterns, often point to the example of astronomy. The example of astronomy, they say, illustrates how observational science can be successful science, and even causation-inferring science.

But unless I’ve missed it (which is quite possible), they never keep following that line of thought, at least not in any detail. I wish they would. Yes, absolutely, astronomy is a successful, causation-inferring science, and it does so without being able to manipulate stars or galaxies or whatever. But precisely how does it achieve its successes? After all, there are also unsuccessful observational sciences. Macroeconomists, for instance, infamously remain in vociferous disagreement on even the most basic points. So what makes astronomy successful, and can macroecology emulate it? I emphasize that in asking this question, I really don’t know the answer and I’m genuinely curious. I’ve asked this question in the comments on previous posts, but never gotten a reply. So I decided to do a post on it, in the hopes of smoking out Brian McGill and Ethan White’s inner astronomers. 😉

Just so I don’t come off as totally lazy (“Hey readers, teach me astronomy!”), I did do a bit of background research**, the fruits of which are below.

Origins of the macroecology-astronomy analogy

The macroecology-astronomy analogy seems to originate from a comment by Robert MacArthur to Jim Brown, reported in Brown’s Macroecology (p. 21):

“Astronomy was a respected, rigorous science long before ecology was, but Copernicus and Galileo never moved a star.”

Brown goes on to cite geology, specifically the theory of plate tectonics, as a second example. That mechanistic theory is well-confirmed even though geologists have never experimentally manipulated entire tectonic plates or the mantle plumes that move them around.

Brian Maurer, in his Untangling Ecological Complexity: The Macroscopic Perspective (p. 112-113) pursues this analogy a little bit further (in fact, as far as I’ve ever seen it pursued):

“Explaining these patterns [in assemblages of species] is difficult because the standard tools of science for developing mechanistic explanations cannot be used. With few exceptions, there is no way to manipulate the biodiversity of a large geographic region. Another problem is that often the same pattern might be consistent with more than one process…[These limitations] also apply to astronomy, but this has not prevented astronomers from learning a great deal about stars and galaxies. Astronomy has a strong, quantitative theoretical foundation in physics, and this foundation allows fairly precise predictions to be made about what patterns can be expected in complex systems like galaxies.”

I can totally see why early macroecological texts pushed this analogy. It’s an effective response to anyone who would claim that, if you’re not doing manipulative experiments, you can’t possibly be doing science. But while the analogy establishes the possibility of a successful, astronomy-like macroecology, I don’t know that it establishes the actuality. Indeed, based on what little I know about astronomy, I’m skeptical of the ability of macroecology to emulate astronomy. I don’t think this means macroecology can’t be successful or infer causality–but I do think its methods aren’t really analogous to those used in astronomy. But these thoughts are tentative. I hope that people who know more than I do about astronomy (or macroecology!) will chime in.

I emphasize that I’m not actually saying that any macroecologist takes the astronomy analogy as seriously as I’m about to try to take it. Nor am I saying that they should have done so–I’m not accusing them of failing to follow their own argument to its logical conclusion or anything like that. I’m actually not out to criticize their use of the analogy at all. I’m just intrigued and curious. I want to see what we can learn by pushing the analogy as far as it will go, even if that means pushing it further than macroecologists have ever taken it.

Reasons why macroecology is not like astronomy

1. Ecologists can do experiments! Ok, it’s true that you can’t manipulate, say, the biodiversity of an entire continent. But you certainly can do smaller-scale experiments that are directly relevant to interpreting large-scale patterns. One example that I’ve raised before, and which I’ll raise again because it’s such a great example***, is the work of Jon Shurin on local-regional richness relationships in freshwater zooplankton communities (Shurin 2000, Shurin et al. 2000). After correcting for an artifact to do with variation in spatial extent of different regions, lake zooplankton appear to exhibit a linear relationship between local (within-lake) species richness, and regional richness (total richness of all lakes in a large region). This has been taken to indicate that local diversity just reflects dispersal limitation, a sort of “passive sampling” from the regional “species pool”, meaning that local communities are open to colonization by whatever species happen to arrive. But here’s the thing: local zooplankton communities aren’t open. You can directly test that by trying to invade lakes with species not currently present. It turns out that invasions mostly fail, unless you first drastically reduce the densities of resident species, thereby eliminating competition from residents. Far from being open to whatever colonists happen to arrive, lakes are almost completely “saturated”. This means that linear local-regional richness relationships had been misinterpreted. As is now well-established, linear local-regional richness relationships are one of those patterns that are, as Maurer says, “consistent with more than one process”. Small-scale experiments like Jon’s were key to establishing that point in the context of local-regional richness relationships.

So macroecologists can’t “manipulate the stars” or “manipulate tectonic plates”–but they certainly can do experiments that provide information directly relevant to the macroecological equivalents of the heliocentric theory or continental drift. In this respect, macroecologists are actually in a better position than astronomers or geologists when it comes to inferring causality. They have more weapons in their arsenal.

This is one way in which I think the analogy with astronomy might actually be holding macroecology back a bit. Their emphasis on the impossibility of large-scale, “manipulate the stars”-type experiments sometimes seems to cause them to downplay the relevance of other sorts of manipulations.**** Just because the pattern is large-scale doesn’t mean the only relevant experiments are large-scale. The underlying processes that putatively generated the pattern typically operate everywhere, at all times, and so can be tested for anywhere, at any time.

As a second example, Jon Levine and others have powerfully combined small-scale experiments with larger-scale observations to show that the large-scale correlation between native and non-native species richness is down to the fact that the same environmental conditions that promote native diversity promote non-native diversity (e.g., high rates of propagule supply). This effect swamps the fact that, all else being equal, more species-rich communities are more resistant to colonization by species not already present (Levine 2000, Levine 2001 Oikos).

Now in fairness, Ethan White has argued in the comments on old posts that all the best macroecology actually recognizes this and is based on synthesis of all relevant information, including small-scale experiments. I wish I fully shared his confidence that this kind of work is what everybody is out to do, at least ideally. I’m torn between taking his word for it (since he knows the literature far better than me), and my own admittedly-limited experience as a reviewer of macroecological papers which too often neglect directly-relevant experimental work.

2. Astronomy is based on the quantitative estimation of different effects, using well-developed and -validated physical theory. My point here is basically an expansion on Brian Maurer’s brief remark about astronomy’s “strong, quantitative foundation in theoretical physics.” What that foundation allows astronomers to do is to estimate and subtract out from their observations the effects of all sorts of “nuisance” factors and sources of error, leaving them with accurate, precise estimates of the quantities of interest. For instance, see here, here, and here for layman-level discussion of how “transits”, such as the recent transit of Venus between the Sun and the Earth, allow astronomers to accurately and precisely estimate quantities like the mass of the Sun, the absolute (not just relative) distances from the Earth to other objects in the solar system, and the chemical composition of the atmospheres of other planets. See here for the estimation of stellar parallax (a very subtle effect). And see here for discussion of how subtle deviations in the orbits of planets from what would be predicted based on Newtonian mechanics and the masses of known planets led to the discovery of new planets. Note in every case that getting a good estimate does not involve just accumulating a big sample size and then averaging away the “noise”, thereby allowing the desired “signal” to reveal itself. Rather, getting a good estimate involves using well-established, quantitative background knowledge to precisely quantify, say, the Doppler shift in the spectrogram of the atmosphere of Venus due to the Earth’s movement around the Sun.

Macroecologists certainly try to do this sort of thing. They often include covariates in their statistical analyses to statistically control particular sources of variation, they can use comparisons among different datasets to estimate error sources known to affect only some of those datasets, and they can use randomization-based “null models” to try to figure out what their data would look like in the absence of some particular process like interspecific competition. But are those approaches anything like as precise and well-validated as what astronomers do? Indeed, in the case of randomization-based null models, there are reasonable arguments that they don’t, and can’t, work at all.

Further, macroecologists often deny that we could ever have fully-parameterized models of the “microscale” processes of birth, death, and dispersal that ultimately drive species distribution and abundance. I actually don’t think that’s true, at least not universally, but let’s say for the sake of argument that it is. Doesn’t that amount to a denial that we’ll ever have a “strong, quantitative foundation” that would allow us to estimate and subtract out “nuisance” effects, thereby allowing us to precisely estimate effects of interest? For instance, if you think that it’s impossible to parameterize a many-species competition model for a large, spatially-heterogeneous area, what makes you so sure that just randomizing your observed species x sites matrix while holding the row and column totals constant subtracts out all effects of interspecific competition while leaving effects of all other processes intact? Isn’t that basically like an astronomer saying “I don’t know how to quantify the Doppler shift in my spectrogram, so I’ll just randomize my data with respect to the day of observation and hope that that fixes it?” Doesn’t this lack of what economists call “microfoundations” make macroecology more like macroeconomics than astronomy?
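To make that randomization concrete, here's a toy sketch (my own made-up matrix, not real data) of the standard "checkerboard swap" approach to shuffling a presence-absence matrix while holding row and column totals fixed:

```python
import random

# Null-model randomization of a binary species x sites matrix by "checkerboard"
# swaps, which hold every row and column total fixed. The toy matrix is made up.
random.seed(42)
matrix = [[1, 0, 1, 0],
          [0, 1, 0, 1],
          [1, 1, 0, 0]]
nrow, ncol = len(matrix), len(matrix[0])
row_sums = [sum(row) for row in matrix]
col_sums = [sum(col) for col in zip(*matrix)]

def checkerboard_swap(m):
    """Pick two rows and two columns at random; if the four cells form a
    checkerboard (10/01 or 01/10), flip it. Row and column sums are unchanged."""
    i, j = random.sample(range(nrow), 2)
    k, l = random.sample(range(ncol), 2)
    if m[i][k] != m[i][l] and m[i][k] == m[j][l] and m[i][l] == m[j][k]:
        m[i][k], m[i][l] = m[i][l], m[i][k]
        m[j][k], m[j][l] = m[j][l], m[j][k]

for _ in range(1000):
    checkerboard_swap(matrix)
```

The swap itself provably preserves the totals; the contentious part is the biological assumption, which is precisely the one in question, that fixing those totals removes the signature of interspecific competition and nothing else.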

3. Astronomy isn’t really about statistical patterns. In a classic Oikos paper I discussed in one of my first posts, John Lawton (1999) argues that macroecological patterns reflect the fact that, at large scales, the “noise” of species- and system-specific details “averages out”. There are reasons to question whether that’s a good analogy for macroecology, but let’s accept it for the sake of argument. Here’s my question: is astronomy like that? I mean, when astronomers estimate the properties of some object in outer space–its distance from us, its chemical composition, its size, etc.–they’re estimating the properties of that particular object. Yes, they do that repeatedly for lots of objects, and I’m sure there are statistical patterns in the resulting data. For instance, maybe the “species-abundance” distribution of different kinds of stars has some sort of interesting shape? But it’s my impression that most (all?) of the vaunted quantitative, cause-inferring rigor of astronomy comes into play in estimating the properties of individual objects, not in studying the statistical features of collections of objects. Basically, I’m suggesting that, if statistical mechanics is your model for what macroecology is like (as both John Lawton and, in other passages in his book, Brian Maurer suggest), then astronomy is not. Astronomy and statistical mechanics are very different in terms of what they’re aiming to do and how they’re aiming to do it. In his paper, John imagines a fairy who foolishly tries to understand the intractable random movements of individual particles of a gas, totally missing the tractable macroscopic properties of large collections of gas particles. But predicting the highly non-random movements of individual “particles”–planets, say, or comets–is what astronomers live for!

Am I totally wrong about all this? Maybe I’m just focusing on the wrong bits of astronomy, and other bits actually are a great model for how macroecology works, or could work? Or maybe I’ve just shown that the whole macroecology-astronomy analogy can’t be pushed any further than the very short distance that Brown and Maurer push it? (Woohoo! I’ve saved macroecology from a shaky analogy no one would’ve thought of if I hadn’t suggested it!) Or in pushing the analogy further, have we actually highlighted some things that needed highlighting, such as the ability of macroecology to draw on small-scale experimental data? I’m honestly not sure–you tell me.

And then I promise to shut up about macroecology and talk about something else. 😉

*Also, right now I’m a bit low on both time for really substantive posts, and really new things to write substantive posts about. It’s quicker to repeat myself. 😉

**Specifically, I spent 5 minutes googling “astronomy blog” and reading the top hits. 😉

***I did warn you about old wine in a new bottle.

****It also causes them to forget that you can “manipulate the stars” if you create an artificial universe in which the stars are small enough to be manipulable. That is, you can do experimental macroecology in microcosms. Phil Warren and his collaborators have done a lot of nice work on this (e.g., Holt et al. 2002).

Posted by: Jeremy Fox | June 7, 2012

Carnival of Evolution #48

Now up at the world’s most popular evolutionary blog, Pharyngula. Check it out.

Posted by: Jeremy Fox | June 6, 2012

Advice: on the perils of “established” methods

Commenting on the previous post, Jim Bouldin notes that people often choose, or justify, their methods on the basis that those methods have been used by many others in the past.

As Jim points out, there is a problem with this:

You should choose well-justified methods, not popular ones. And you should justify your methods by justifying them. Saying that “At least I’m only making mistakes that others have made before” is not a justification.

By the way, I’m guilty of invoking past practice as a justification for my own practice, though I try to do it only as a supplement to good justifications rather than as a substitute. Occasionally referees who aren’t swayed by good arguments can be swayed by bad ones, so I sometimes use both.

I’m all for making the most of the data we already have–but no more than that. A hazard of trying to wring as much as possible from any dataset is that you’ll overstep and try to use the data to address questions or draw conclusions that can’t be addressed or drawn. In ecology, this was the motivation behind the excellent NutNet project: existing data weren’t really adequate, so they had to go collect new data.

Over at Cop in the Hood there’s a fun rant by Peter Moskos on just this point, in a social science context. A huge, information-rich Big Dataset recently was used to argue that people in poor neighborhoods have just as easy access to nutritious food as people in rich neighborhoods, so lack of easy access to nutritious food can’t explain higher incidence of obesity in poor neighborhoods. Which is total bunk because the data on what constitutes a “grocery store” are, if not total garbage, at least totally inadequate for the purpose for which this study tried to use them. A fact which the study recognized, only to dismiss it with the excuse that better data would have been difficult and expensive to obtain. Which amounts to saying “Doing it right would’ve been hard, so we decided to do it badly.” Click through to read the whole thing, it’s a great, short read and not at all technical.

I’m curious to hear from readers who work more with pre-existing data than I do: Have you ever looked into doing some sort of analysis of pre-existing data, only to drop it because you decided that the data weren’t good enough? Or have you ever reviewed a synthetic paper and told the authors, “Sorry, but your whole project is worthless because the data just aren’t good enough”?

And are there any general strategies that can be used to guard against making more of the data than is reasonable? One possibility is to involve the people who collected the data in any synthetic effort using those data. That’s certainly something my CIEE working group on plankton dynamics did, and I think it was a good thing, even if it does have its own risks (e.g., causing the synthesizer to worry about truly minor flaws in the data that don’t actually affect the results).

Note that one strategy that doesn’t guard against poor-quality data is “make sure you have a really big dataset.” Having more fundamentally-flawed numbers, or more non-flawed numbers to go with the fundamentally-flawed ones, doesn’t make the fundamentally-flawed numbers any less flawed. Put another way, flaws in your data don’t just create “noise” from which a “signal” can be extracted if only you have enough data. Flaws in your data can eliminate the signal entirely, or worse, generate false signals (as in the social science study linked above).

It’s only natural that someone like me would worry about this sort of thing, as I don’t work with pre-existing data that much. I’d be interested to hear from people who do data synthesis for a living and are really invested in it (the ‘synthesis ecologists‘). How often do you run into serious problems with data quality, bad enough to prevent you from answering the question you want to answer? Does the possibility keep you up at night? What do you do about it?

HT Andrew Gelman, who also comments.

p.s. Before anyone points this out in the comments: I freely grant that everyone always tries to push every method or approach as far as it will go, so everyone always runs the risk of overstepping what their chosen method or approach can teach them. But ‘synthesis ecology’ is what’s hot right now, so that’s the context in which I think it’s most important to raise this issue.

UPDATE: Here’s this post in cartoon form.

The summer conference season is upon us. I’ll be at Evolution 2012 in Ottawa and the ESA Annual Meeting in Portland, Oregon (where I’m talking on Thursday afternoon…sigh).*

Closer to the meetings, I’ll have meeting previews highlighting talks I’m especially looking forward to attending, and places I’m planning on eating and drinking, and I’ll be blogging from the meetings as well.

In the meantime, here are a couple of posts from the archives to help you prepare:

  • Choosing which talks to attend is always tricky at a big meeting. Which raises the question: Are there any reliable predictors of talk quality? (besides “Jeremy Fox recommended this talk.”)
  • Here are a bunch of tips on how to give a good talk, and avoid some common statistical errors. Also includes links to more comprehensive sources of advice on giving talks.
  • If you’re giving a poster, here’s my one big piece of advice: YOUR POSTER HAS TOO MUCH FRICKIN’ TEXT! I know this because EVERYONE’S posters have too much text these days. Seriously, I’m not kidding. A poster is not a paper in large, colorful, flat form. No one wants to stand there reading for 15 minutes. Your poster should be a highly digested, mostly graphical summary of your work. All the text can just be short bullet points, and not many of them (like, say, 3 for the Discussion). After all, you’re going to be standing right there–if people have questions, they can ask you! In all seriousness, posters were actually much better before the advent of color plotters. It was a pain in the neck to print out and mount a whole bunch of 8.5 x 11 sheets of paper, so you pared things down to only the essential information.
  • Here’s how to ask tough questions, and here’s how to answer them.
  • Here’s why to “network” at conferences, and here are some tips on how to do it.

*And while I won’t be there, I’m a co-author on a talk at the Society for Vertebrate Paleontology Annual Meeting. It’s on using the Price equation to separate effects of speciation, extinction, immigration, and within-lineage change on mammalian size evolution, and includes an illustrative application to a classic paleo dataset. Check it out if you’re going!

Posted by: cjlortie | June 4, 2012

Journal versus publisher webpages

I find it a little challenging and often distasteful that the top hits for some ecological journals, including Oikos, are the publisher’s pages. Often, I want to get right to the journal, imagine that, and it is tough to navigate through the publisher’s standard page to get to it. I recently enjoyed a really exciting paper in Seed Science Research, so I googled the journal and got the publisher page. I wanted to post a comment on a paper I read there. After skimming the page, http://journals.cambridge.org/action/displayJournal?jid=SSR, I found the ‘SSR home’ link on the left. Whew. I clicked it assuming I was going to get a nice home-spun page with the feel of the journal, and instead I got the exact same page again. Now, let’s try Oikos. I googled it and got the following return, http://onlinelibrary.wiley.com/journal/10.1111/(ISSN)1600-0706, that is, after the yogurt-sponsored ad (we should pay to displace that, really). Whilst this is an attractive url, I propose we get the Oikos/Nordic one higher up on the list. Or perhaps move the content there or ask for a clearer redirect? I skim, find the ‘journal home’ link on the left, click it, and voilà… the same page again. OK, maybe I am misunderstanding it. Where is the Oikos, Nordic Society home-spun page? I can’t see it. Backing out, I google Nordic Society, and whilst I am happy to see the Canadian one at the top, no Oikos again. The Nordic Soc of Irish Dancers is the most fun one, though. As a final test of my secret hypothesis here, that we are not managing our online presence as ecologists as best we could, I try the Journal of Ecology. First hit, gorgeous: www.journalofecology.org/. The url makes sense, photos pretty, nice screen, I can see it is the BES in the top right without it dominating the journal presence, and if I work at it, I can see at the very bottom that it is a Wiley publication. Better.
If someone has time, check a few more journals for us. Part of making ecology better funded, better perceived, and more useful has to include a more effective online presence.

Posted by: Jeremy Fox | June 4, 2012

Cool forthcoming Oikos papers

Some forthcoming (in press) Oikos papers that caught my eye. Lots of good stuff in the pipeline!*

Nadeem and Lele introduce a new maximum likelihood-based method of population viability analysis (PVA) and test it on song sparrow time series data. The new method, called “data cloning”, was previously developed by Lele in other contexts. It estimates observation error as well as process error (e.g., demographic stochasticity), and deals gracefully with missing data. The clever thing about it is that it has all the computational advantages of more popular Bayesian estimation methods, but it’s fully frequentist and so doesn’t need priors. Which is a good thing, because priors for the rare, poorly-known species for which we often want to construct PVAs often are pretty arbitrary guesses which have a strong effect on the outcome of the analysis because there’s not enough data to “swamp” them. You also avoid having to adopt the subjectivist Bayesian interpretation of what your probabilities (e.g., extinction probabilities) mean. In the case of song sparrows, it turns out that incorporating observation error into the analysis really changes the results. The approach is even sensitive enough to detect evidence that the data and associated PVA model omit important biological processes (here, dispersal).
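For the curious, here’s the core data-cloning trick in toy form, stripped of all the PVA machinery (the Poisson example, the deliberately bad prior, and the numbers are my inventions for illustration, not anything from the paper): clone the dataset K times, do ordinary Bayesian updating, and as K grows the posterior concentrates on the maximum likelihood estimate, washing out the prior.

```python
# Toy data-cloning demo: Poisson counts with a conjugate Gamma prior,
# so the "posterior under K clones" has a closed form and no MCMC is needed.
counts = [3, 5, 2, 4, 6]            # made-up count data
mle = sum(counts) / len(counts)     # Poisson MLE = sample mean = 4.0

a0, b0 = 50.0, 1.0                  # deliberately awful prior (prior mean 50)

for K in (1, 10, 100, 1000):
    # Cloning the data K times multiplies the sufficient statistics by K:
    a = a0 + K * sum(counts)
    b = b0 + K * len(counts)
    print(K, round(a / b, 3))       # posterior mean approaches the MLE, 4.0
```

In real applications the posterior is sampled by MCMC rather than computed in closed form, which is where the “computational advantages of Bayesian methods” come in.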

Tielbörger et al. use a massive series of carefully-controlled common garden experiments to reveal strong evidence for “bet-hedging” germination in annual plants. Roughly, bet-hedging is a way of maximizing your expected relative fitness in an uncertain environment. Germinating all your seeds every year (going “all in” in betting parlance) provides a big payoff if the year turns out to be a good one, but it is very risky. If the year turns out to be a bad one, all the resulting plants will die before reproducing (the ecological equivalent of “going bust”). But conversely, if you never germinate any seeds, so that your seeds just sit in the ground, they’ll eventually all die without reproducing (“nothing ventured, nothing gained”). So the optimal germination fraction (the one with the highest expected relative fitness compared to the others) will be some intermediate fraction, the precise value of which depends on the probability distribution of different kinds of years. That’s the theory, anyway. But strong empirical tests are almost non-existent, because they’re really difficult. For instance, you have to control for environmental and genotype x environment variation in germination fraction. The authors went to the trouble of developing inbred lines of each of three annual plant species, growing up their seeds in a common greenhouse environment to eliminate maternal effects, and then planting those seeds into common gardens along a rainfall predictability gradient in the field, and along an artificial rainfall gradient in the greenhouse. As expected, species subject to higher risk of reproductive failure exhibit lower genetically-determined germination fractions. Yes, Virginia, annual plants really do hedge their bets–and those that face more risk do more hedging!
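If you want to see where an intermediate optimum comes from, here’s a Cohen (1966)-style toy version of the calculation (parameter values are mine, purely illustrative, and have nothing to do with Tielbörger et al.’s species): each year is good with probability p, germinated seeds die without reproducing in bad years, dormant seeds survive in the soil at rate s, and the optimal germination fraction maximizes the expected log of the annual growth factor.

```python
import math

def long_run_growth(g, p, Y=10.0, s=0.9):
    """Expected log growth rate for germination fraction g."""
    good = g * Y + (1 - g) * s      # growth factor in a good year
    bad = (1 - g) * s               # growth factor in a bad year
    if bad <= 0:                    # going "all in" risks going bust
        return float("-inf")
    return p * math.log(good) + (1 - p) * math.log(bad)

def best_g(p):
    """Crude grid search for the optimal germination fraction."""
    grid = [i / 1000 for i in range(1001)]
    return max(grid, key=lambda g: long_run_growth(g, p))

# Riskier environments (lower p) favor lower germination fractions:
for p in (0.9, 0.6, 0.3):
    print(p, best_g(p))
```

The qualitative prediction the experiments confirm falls right out: more risk, more hedging, lower optimal germination fraction.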

Fraker and Lutbeg develop an individual-based model of mobile predators and prey and show how limitations to the movement rates and perception distances of individuals cause their spatial distributions to deviate from the ideal free distribution. If you have limited information (=limited perception distance) and limited ability to act on that information (=limited movement rate), you can’t attain the ideal free distribution (which assumes that you have perfect information which you’re free and fully able to act upon). Which at that level is kind of obvious, but Fraker and Lutbeg explore precisely how the resulting distributions deviate from ideal free, which is much less obvious. Bailey and McCauley (2009) is one nice experimental paper showing data illustrating some of the predicted consequences of limited information and movement rates. More broadly, I always like stuff that shows the complex and counterintuitive macroscale consequences of different microscale assumptions about the behavior and movement of individual organisms. Maybe if people write enough of these kinds of papers, other people will quit trying to infer the underlying microscale processes directly from inspection of (or some sort of randomization of) macroscale data.

Speaking of starting from microscale assumptions and deriving their macroscale consequences, Casas and McCauley ask: What’s the functional response of a predator that must divide its time between searching for prey, and other activities (broadly denoted as “handling”)? If you said “It’s an increasing saturating function and we’ve known that since Holling (1959),” you’re right–sort of. That is, you’re right only if you’re prepared to make radical simplifying assumptions about the relative timescales of the underlying processes that cause predator individuals to change “states” (here, from the state of “searching for prey” to the state of “handling captured prey” and back again). If you want to avoid such radical (and often unrealistic) assumptions, then you have to be prepared to do much more complicated math, which Casas and McCauley illustrate for both parasitoids and a predator (Mantis, the same predator considered by Holling himself in a classic 1966 study of predator functional responses). One consequence of increased realism is that the predator population never reaches an equilibrium or stationary distribution of individuals in different states, a fact which turns out to have important and testable consequences for predator-prey dynamics.
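For reference, the “sort of right” answer is Holling’s disc equation, which follows from the search/handling time budget under exactly the timescale-separation assumption Casas and McCauley relax (parameter values below are mine, purely illustrative):

```python
# Holling (1959) type II functional response: a predator with attack rate a
# and handling time h per prey, facing quasi-static prey density N, feeds at
#   f(N) = a*N / (1 + a*h*N)
def holling_type_II(N, a=0.5, h=2.0):
    return a * N / (1 + a * h * N)

for N in (1, 10, 100, 1000):
    print(N, round(holling_type_II(N), 3))
# The feeding rate saturates at the ceiling 1/h as prey density rises.
# Relax the timescale separation between predator state changes and prey
# dynamics, and no such simple closed-form curve exists; hence the much
# more complicated math in the paper.
```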

Finally, I don’t see how I can get away without mentioning Mata et al., an impressively large protist microcosm experiment manipulating disturbance intensity, disturbance frequency, nutrient enrichment, and propagule pressure in factorial fashion and examining their effects on resident community structure and invader success. As you’d expect, such a complicated experiment throws up complicated results, some of which seem to be readily interpretable (e.g., high disturbance intensity creates conditions that favor invaders with high intrinsic rates of increase), others less so. I do think it’s a little unfortunate that the authors frame their experiment as a test of Huston’s “dynamic equilibrium model”, since that “model” shares the same fatal logical flaws as zombie ideas about the intermediate disturbance hypothesis. I suggest that framing the experiment in terms of logically-valid theory might have aided interpretation, and possibly even suggested a somewhat different experimental design.

Many other interesting-looking papers coming out, but I don’t have time to dig into all of them so this’ll have to do for now. Happy reading!

*p.s. Just so you know, no one has ever told me, hinted to me, or implied to me that I should promote the journal’s content. When I highlight Oikos papers that I think are particularly interesting, it’s because I think they’re particularly interesting. It’s not like I ever think “Ok, gotta pick some Oikos papers to talk up now.” I hope you’ll take my word on that, given that I also link to a lot of non-Oikos content and criticize Oikos papers. Don’t get me wrong, I’m an Oikos editor and author, I like the journal, I want to see it do well and continue to fill what I think is an increasingly crucial niche, and I think the blog can help achieve that. And so when I do highlight interesting papers, I highlight interesting Oikos papers. But if I didn’t think there was anything worth highlighting, I wouldn’t. So I hope you find it valuable if I occasionally highlight Oikos papers I find particularly interesting, or invite the authors to do so, just like I hope you find the other posts valuable. But if for whatever reason you don’t, that’s fine.

Note as well that in saying this, I mean no criticism of any other journal blog, many of which focus much more than we do on the content of the associated journal. Different blogs are different.

Posted by: Jeremy Fox | June 1, 2012

Techniques aren’t powerful; scientists are

During a long and interesting post on storytelling in science, Andrew Gelman makes the following remark about some famous statisticians and the techniques they’ve developed:

The many useful contributions of a good statistical consultant, or collaborator, will often be attributed to the statistician’s methods or philosophy rather than to the artful efforts of the statistician himself or herself…Rubin wielding a posterior distribution is a powerful thing, as is Efron with a permutation test or Pearl with a graphical model, and I believe that (a) all three can be helping people solve real scientific problems, and (b) it is natural for their collaborators to attribute some of these researchers’ creativity to their methods.

I think this is right. Techniques and approaches, statistical or otherwise, aren’t powerful except in narrow and, in the grand scheme of things, rather unimportant senses. If techniques were really what mattered, science could be reduced to a recipe that anyone could follow.* What matters most, I think, is how and to what the technique is applied, and how the results are interpreted and linked to other evidence and ideas. None of that can be automated or routinized. Which means that what really matters is the ability of the scientist using the technique or approach. Rich Lenski wielding a vial of bacteria is a powerful thing, as is Mathew Leibold with a simple food web model, or Jon Losos with a phylogeny, or Tony Ives with an autoregression, or Peter Morin with a jar of protists, or me or Steven Frank with the Price equation.

It remains to be seen if “me wielding zombie jokes” turns out to be powerful. 😉 If it does, this is what I’m going to say. 😉

*Francis Bacon thought this was possible (“But the course I propose for the discovery of sciences is such as leaves but little to the acuteness and strength of wits, but places all wits and understandings nearly on a level.”) There’s much that he got right in “The New Organon” (1620), but I think this bit is wrong.

Posted by: cjlortie | June 1, 2012

Oikos editorial now open access

The editorial describing and announcing future endeavors is now open access. Whew. We intend to make all future editorials open access as well. Thanks for your patience.

Posted by: Jeremy Fox | June 1, 2012

Engaging with crackpots at scientific meetings

Here’s a rare problem, but one that raises some interesting issues: What’s the appropriate way to deal with a crackpot at a scientific meeting? Specifically, a crackpot presenter? Over at Doing Good Science, Janet Stemwedel raises this question in the context of a philosophical talk that was nominally about the non-crackpot topic of abductive reasoning but actually propounded conspiracy theories about the Kennedy assassination. Janet raises a number of interesting issues, some of which I’ve also blogged about, such as the ethics of tough questions (is it impolite to tell someone they’re full of it, or impolite not to?), and some of which I probably should blog about (e.g., Do scientists have an ethical obligation to use some of their finite time to correct mistakes by others, even if those others seem unlikely to recognize the error of their ways? What are the risks of being too quick to dismiss off-the-wall ideas?*)

As I said, crackpot presentations are rare, especially in ecology. I have the impression crackpots are more attracted to other fields, like philosophy, fundamental physics, and mathematics.** I’ve never actually seen a crackpot ecology presentation. I have seen one ESA talk that literally made no sense, but it was by someone who wasn’t an ecologist by training and who had done competent work in his own field, and so was merely seriously confused rather than a crackpot.

Have you ever seen a crackpot presentation at an ecology meeting? If so, what did you, or the other audience members, do?

*There certainly are examples in ecology and evolution of off-the-wall ideas being quickly dismissed when they shouldn’t have been. The Price equation is one. George Price was brilliant, but he had, at the least, strong crackpot tendencies. Nature famously had to be tricked into reviewing his initial paper on the Price equation, and Richard Lewontin initially dismissed the Price equation as trivial before changing his mind and writing to Price to apologize for his initial dismissal. And conversely, there are examples of crackpot ideas not being recognized as crackpot when they should have been. Recently, PNAS found an excuse to retract a crackpot paper on speciation by hybridization that was backed by Lynn Margulis, an eminent scientist who late in her career pushed her ideas to crackpot extremes. The line between crackpot and non-crackpot ideas, and between crackpots and non-crackpots, isn’t always easy for everyone to see.

**Well, I guess climate change denialism could be considered an example of crackpot ecology, but that’s not so much crackpot as political. I tend to think of true crackpots as being crackpots for their own idiosyncratic reasons.

Posted by: Jeremy Fox | June 1, 2012

Robert McIntosh, long-ago zombie slayer

FOOB Chris Klausmeier recently sent me a 1962 Ecology paper by Robert McIntosh.* Here’s the first paragraph:

Thomas Henry Huxley once commented, “Life is too short to occupy oneself with the slaying of the slain more than once” (Huxley 1901). Certain ideas seem to be invulnerable to attack and persist although subjected to multiple executions. One such ecological idea is that the “law of frequency” devised by Raunkiaer (1918, 1934) is useful as a simple indication of uniformity or homogeneity within a stand or between several stands of vegetation. The persistence in current sources of a concept which has been belabored by ecologists for 40 years is a testimonial to the tenacity of ideas.

Robert McIntosh: long-ago zombie slayer!

I take heart from the fact that no one talks about the “law of frequency” any more. Zombie slaying is possible! But clearly it takes a while–apparently several decades, in the case of law of frequency. Which, perhaps not surprisingly, is longer than a typical professional career. A famous line from Max Planck is relevant here…

The Thomas Henry Huxley quote with which McIntosh leads his paper is interesting, in that Huxley could hardly be said to have lived by it.

By the way, any reader who is inspired by this post to try to revive the “law of frequency” just to mess with me, be warned: You are not big, you are not clever, and I swear to the God of your choice that I will kick your a**! 😉

*Chris has so far not revealed how the heck he found this…

Posted by: Jeremy Fox | June 1, 2012

Advice: how to collaborate

Don’t tell me none of your collaborations are like this.

Posted by: Jeremy Fox | May 31, 2012

Simplifying a complex, overdetermined world

Ecology is complicated. Anything we might want to measure is affected by lots of different factors. As a researcher, how do you deal with that?

One way to deal with it is to try to focus on the most important factors. Try to “capture the essence” of what’s going on. Focus on developing an understanding of the “big picture” that’s “robust” to secondary details (meaning that the big picture would basically look and behave the same way, no matter what the secondary details). This is how I once would have justified my own interest in, say, simple theoretical food web models (e.g., Leibold 1996). Sure, they’re a caricature of any real-world system. But a caricature is a recognizable—indeed, hyper-recognizable—portrait. The whole point of a caricature is to emphasize the most important or distinctive features of the subject. A caricature that’s not recognizable, that’s not basically correct, is a very poor caricature.

But here’s a problem I’ve wondered about on and off for a long time: what’s the difference between a simplification that “captures the essence” of a more complex reality, and one that only appears to do so, but actually just gives the right answer for the wrong reasons? After all, as ecologists we aren’t in the position of an artist drawing a caricature. We don’t know for sure what our subject actually looks like, though of course we have some idea. So it’s not obvious that our caricatures are instantly-recognizable likenesses of whatever bit of nature we’re trying to caricature.

Now, one possible response to this concern is to deny that getting the right answer for the wrong reasons is even a possibility. If we develop a simplified picture of how the world works, then any omitted details which don’t change the predictions are surely unimportant, right? If our model makes basically the right predictions, then it’s basically right, at least as far as we can tell? Right?

I’m not so sure. The reason why I worry about this is what philosophers call “overdetermination”. Overdetermination is when some event or state of affairs has multiple causes, any one of which might be sufficient on its own to bring about that event or state of affairs, and perhaps none of which is necessary. Philosophers, at least the few I’ve read, are fond of silly examples like Sherlock Holmes shooting Moriarty at the exact same instant as Moriarty is struck by lightning, leaving it unclear what caused Moriarty’s death. But non-silly examples abound in ecology. Here’s one from theoretical ecology (I could easily have picked an empirical example). The Rosenzweig-MacArthur predator-prey model predicts predator-prey cycles for some parameter values. Imagine adding into this model a time lag between predator consumption of prey and predator reproduction, one which is sufficient on its own to cause predator-prey cycles. Now here’s the question: is the original Rosenzweig-MacArthur model a good approximation that “captures the essence” of why predator-prey cycles occur when there’s also a time lag? Put another way, is the original Rosenzweig-MacArthur model “robust” to violation of its assumption of no time lags? Or in this more complex situation, is the Rosenzweig-MacArthur model misleading, a bad caricature rather than a good one?
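For readers who don’t carry the model around in their heads, here’s the Rosenzweig-MacArthur setup as a minimal numerical sketch (the crude Euler integration and parameter values are mine, chosen only to put the model in its cycling regime):

```python
# Rosenzweig-MacArthur: logistic prey R, predator C with a type II
# functional response.
#   dR/dt = r*R*(1 - R/K) - a*R*C/(1 + a*h*R)
#   dC/dt = e*a*R*C/(1 + a*h*R) - m*C
# With enough enrichment (large K) the interior equilibrium is unstable
# and the model produces predator-prey cycles on its own, no time lag
# in predator reproduction required.

def rm_step(R, C, dt=0.001, r=1.0, K=10.0, a=1.0, h=1.0, e=0.5, m=0.2):
    intake = a * R / (1 + a * h * R)   # type II per-predator intake rate
    dR = r * R * (1 - R / K) - intake * C
    dC = e * intake * C - m * C
    return R + dR * dt, C + dC * dt

R, C = 5.0, 1.0
for _ in range(200_000):               # crude Euler run, 200 time units
    R, C = rm_step(R, C)
print(round(R, 4), round(C, 4))        # still cycling, not settled
```

Adding a lag between consumption and predator reproduction would mean making `dC` depend on prey consumed some time ago, giving you exactly the overdetermined situation in the text: two causes of cycles, each sufficient on its own, in one model.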

The same questions arise when different causal factors generate opposing effects rather than the same effect, and so cancel one another out. Consider a predator-prey model which has a stable equilibrium because of density-dependent prey growth. Now add in both predator interference and a time lagged predator numerical response, with the net effect being that the system still has a stable equilibrium because the stabilizing predator density-dependence due to interference cancels out the destabilizing time lag. Does the original model “capture the essence” of the more complex situation? Is it “robust” to those added complications? Or is it just giving the right answer for the wrong reasons?

I think the answer to all these questions is “no”. That is, in cases of overdetermination, I’d deny that a model that omits some causal factors is “capturing the essence”, or is “robust”, or is accurately “caricaturing” what’s really going on, no matter if its predictions are accurate or not. But I’d also deny that, in cases of overdetermination, a model that omits some causal factors is misleading or wrong. That is, I think that the alternative possibilities I set up at the beginning—our simplified picture is either “basically right” or “basically wrong”—aren’t the only possibilities. There’s at least one other possibility—our simplified picture can be right in some respects but wrong in others.

Further, I think this third possibility, though it might seem rather obvious, actually has some interesting implications. For one thing, a lot of work in ecology really does aim to “capture the essence” of some complicated situation. It’s not just theoreticians who try to do this—empirical ecologists (community ecologists especially) are always on the lookout for tools and approaches that will summarize or “capture the essence” of some complex phenomenon. Which assumes that there is an essence to be captured. Conversely, a lot of criticism of such work argues not only that ecology is too complicated to have an essence to be captured, but that all details are essential, so that omitting any detail is a misleading distortion. I’m suggesting that, at least in an overdetermined world (which our world surely is), both points of view are somewhat misplaced.

For another thing, it’s important to recognize how simplified pictures that are right in some respects but wrong in others can help us build up to more complicated and correct pictures of how our complex, overdetermined world works. Recall my examples of predator-prey models. How is it that we know that, say, density-dependence is stabilizing, while a type II predator functional response and a time-lagged numerical response are destabilizing? Basically, it’s by doing “controlled experiments”. If you compare the behavior of a model lacking, say, density-dependence to that of an otherwise-identical model with density-dependence, you’ll find that the latter model is more stable. In general, you build up an understanding of a complicated situation by studying what happens in simpler, “control” situations (often called “limiting cases” by theoreticians). The same approach even works, though it is admittedly more difficult to apply, if the effects of a given factor are context dependent (this just means your “controlled experiments” are going to give you “interaction terms” as well as “main effects”). So when I see it argued (as I have, more than once) that complex, overdetermined systems can’t be understood via such a “reductionist” approach, I admit I get confused. I mean, how else are you supposed to figure out how an overdetermined system works? How else are you supposed to figure out not only what causal factors are at work, but what effect each of them has, except by doing these sorts of “controlled experiments”? I mean, I suppose you can black box the entire system and just describe its behavior purely statistically/phenomenologically. For some purposes that will be totally fine, even essential (see this old post for discussion), but for other purposes it’s tantamount to just throwing up your hands and giving up.

Deliberately simplifying by omitting relevant causal factors is useful even when doing so doesn’t “capture the essence”, and even when there is no “essence” to capture. These sorts of simplifications aren’t caricatures so much as steps on a stairway. In a world without escalators and elevators, the only way to get from the ground floor to the penthouse is by going up the stairs, one step at a time.

Posted by: Jeremy Fox | May 29, 2012

Here’s what happens when you don’t understand math

You end up making a terrible sandwich.

(I know that’s not really the take-home message of the linked post. But it was the best teaser line I could come up with to encourage you to click through and read the whole, hilarious thing).

Posted by: Jeremy Fox | May 29, 2012

Advice for thesis writers

Just discovered The Thesis Whisperer, a blog by Inger Mewburn, who studies research student experiences. It looks to be quite honest, thoughtful, and funny, and has quite the following (most posts get dozens of comments, which puts this blog in the shade). Worth checking out, especially if you’re struggling to write your thesis and need a boost from reading about others in the same boat.

Not to say I agree with all of it. There’s a recent post comparing thesis writing to a marathon, which is a superficially-plausible but bad analogy for reasons Thomas Basbøll lays out.

I’ve been running into lots of bad analogies lately, ones that identify two totally different things based on one superficial similarity. Might have to post on this at some point…

Posted by: Jeremy Fox | May 26, 2012

I’m busy

Invited ms due in a few days, too busy to post, so here’s a video of a cockatiel singing “Rock Lobster” by the B-52s:

Posted by: cjlortie | May 25, 2012

Oikos now live on ScholarOne Manuscripts

We want to welcome authors, referees, and the board to the Manuscript Central platform used by many journals. It is our hope that this will increase speed, efficiency, and communication for the journal. I will miss the old system in some respects, but please do not hesitate to email the editor(s) when you review or submit, as we hope to continue building a community here, not just a journal.

Previous system…

Posted by: cjlortie | May 25, 2012

Important developments at Oikos

Oikos has a new EiC, Dries Bonte. An editorial describes this and other major developments, including some of the new philosophy and current goals for Oikos.

Dries in action… likely calling authors to tell them the good news.

Here is a brief summary of the changes (but I leave the thunder to the editorial):

  • The editorial team, collectively, will write editorials as frequently as we can to highlight papers illustrating effective synthesis.
  • Oikos is moving all manuscript handling to ScholarOne Manuscripts to speed things up and ensure that reviews are never lost.
  • There are now 50 handling editors, whose profiles (including some amusing pics) are listed on the Oikos website.
  • Currently, Oikos publishes approximately 15% of all submissions.
  • As a first exemplar of the insights Oikos synthesis can offer, the spatial ecology papers in that issue are described, highlighting their novel elements.

Posted by: Jeremy Fox | May 23, 2012

What makes for productive scientific debates?

Science is full of debates. Some are productive, some aren’t. What makes for a productive debate?

First, a few remarks about what I mean by a “productive” debate. I don’t mean a debate that leads to agreement on all or even any points, either among the main participants or among non-participants. For instance, consider a debate on some matter of empirical fact. If round-earthers and flat-earthers debate the shape of the Earth, and eventually agree that it’s flat (or “compromise” and agree the Earth is hemispherical), does that make their debate productive? I’m not saying that one side or the other is always right, or that compromise positions are never right, merely that resolution of any sort is not a marker of a productive debate. Now, it’s always unproductive if participants can’t agree on what questions they’re debating.* But there are lots of reasons why a debate might fail to settle on an agreed resolution, and being “unproductive” is only one of those reasons.

What I mean by a productive debate is a debate in which the participants engage with one another, meaning that they pay close attention to and understand the other side, and respond to the other side’s evidence and arguments rather than strategically ignoring them. A productive debate also is one in which all relevant evidence and argument gets fully aired, and arguments are pursued to their logical conclusions and their full implications considered. A productive debate also is one that doesn’t get sidetracked by misunderstandings. This means that the participants need to choose their words carefully and precisely, be clear and explicit, and expect the same from other participants. A productive debate also is one in which no one engages in personal attacks, and in which no one takes criticism of anyone’s views (no matter how strong) to be a personal attack.** Productive debates may not reach an agreed resolution—but if they don’t, they at least make the issues crystal-clear. That clarification of the issues is both a very useful outcome (especially to students and others learning for the first time about the subject of the debate), and the most that can be expected.

By that standard, there are lots of productive debates in ecology. Recent debates over MaxEnt, aired in part in Oikos, are an excellent example. Debates over “sampling effects” in biodiversity-ecosystem function research, sparked in part by an Oikos paper (Aarssen 1997), led to productive development of new experimental designs and statistical techniques that resolved the issue (Loreau et al. 2001). The big debate over ratio-dependent functional responses eventually led to agreement on some issues and “agreement to disagree” on others (Abrams and Ginzburg 2000).***

Actually, probably most “tit for tat” exchanges of comments in the literature are productive by my standard. After all, that’s what “tit for tat” means—you raise a point, and I respond to it rather than ignoring it or dodging it. Indeed, there’s a productive debate in ecology every time reviewers do a good job reviewing a ms and the authors respond. Of course, in such cases there’s an editor involved, who effectively can force both sides to debate productively, on pain of having their comment go unpublished, their review ignored, or their ms rejected. 😉

Notably, in the examples I listed where the protagonists eventually came to partial or complete agreement, this was only after a lengthy period of vociferous disagreement. If you are the sort of person who wants to see agreement at the end of a debate (like my fellow editor Dustin), well, I think you can only get that, if at all, by first letting the debate run its natural, debate-y course.

Which isn’t to say all debates in ecology and evolution are productive. The recent spat over inclusive fitness in evolutionary biology hasn’t, as far as I can tell, raised any issues that weren’t already familiar, and seems if anything to have muddied the waters rather than clarifying them. Further back, while I don’t know the punctuated equilibrium literature well, my impression is that that debate had both productive and unproductive elements. Productive in that it sparked interest in some important issues and prompted some new empirical research. Unproductive in that it proved difficult for the protagonists to agree on the questions at issue, and on what would count as an answer. IIRC, the punctuationists were rather shifty and difficult to pin down on what exactly they were claiming, and the same data were infamously interpreted as evidence for and against punctuated equilibrium by the opposing sides.

Anyway, that’s what I would argue. Who wants to debate me? 😉

*It’s fine, and often necessary, to debate the choice and framing of the question, and what counts as evidence for a given answer. But at some point, all sides do need to come to an agreement as to what questions are at issue and what would count as an answer if debate is to be productive.

**Which means, among other things, that “politeness” can be the enemy of productive debate, if by “politeness” you mean “not saying exactly what you mean, in order to avoid possibly offending someone.” That’s not what I mean by “politeness”. In my view, empirical evidence and logical argument are never impolite, and if anyone else takes them that way, that’s their problem. This doesn’t mean you can’t try to phrase what you have to say in a polite way, but you can’t do so at the expense of clarity and explicitness if you really want to have a productive debate.

***Note to youngsters: yes, there once was a massive debate in ecology over a particular functional response model.

Posted by: Jeremy Fox | May 23, 2012

Effort underway to save the ELA

The Canadian federal government recently announced that it will no longer fund the renowned Experimental Lakes Area. There is now an effort underway to save ELA: go here for info.

Posted by: Jeremy Fox | May 22, 2012

On confusing specific examples and general principles

Here’s something I struggle with in my teaching and writing (blogging as well as papers). How do you keep your audience from mistaking specific examples for general principles, and vice-versa?

For instance (to pick a specific example!), “density dependence” is a general principle. It just means that per-capita growth rate varies with density (in any fashion, for any reason). The logistic equation is the first specific example of density dependence taught to undergrads, because it’s the simplest example. Which causes many students to answer exam questions as if they think logistic growth is one and the same thing as density dependence.
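To make the distinction concrete, here's a toy sketch of my own (not from any textbook): two different models, both density dependent, only one of them logistic. The general principle is that per-capita growth rate varies with density; *how* it varies (linearly, for the logistic) is an example-specific detail.

```python
# Density dependence means per-capita growth rate varies with density N,
# in any fashion. The logistic is one example, not the definition.

def logistic_percap(N, r=1.0, K=100.0):
    """Logistic density dependence: per-capita growth declines linearly in N."""
    return r * (1 - N / K)

def theta_logistic_percap(N, r=1.0, K=100.0, theta=3.0):
    """Theta-logistic: per-capita growth still declines with N, but nonlinearly."""
    return r * (1 - (N / K) ** theta)

# Both decline as density rises -- both are density dependent -- but the
# shapes differ, so neither one *is* density dependence.
for N in (10.0, 50.0, 90.0):
    print(N, logistic_percap(N), theta_logistic_percap(N))
```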

The obvious way to deal with this is to use multiple examples of the general principle. Which I try to do, of course. But just using, say, two examples rather than one doesn’t magically allow your audience to extract the general principle and distinguish it from example-specific details. If your two chosen examples share other features besides being two examples of the same general principle, your audience may well latch onto those other features as being somehow crucial. I have had this happen in my teaching, using multiple examples of density dependent models with carrying capacity parameters, thereby giving some students the mistaken impression (despite my best efforts to prevent this) that “density dependence equals carrying capacity”. But if your two chosen examples are as different as you can make them, your audience may have trouble seeing that they have anything in common at all. And if you try to get around these problems by using more than two examples, well, how many examples can you possibly provide, given that you only have so much time, or so many words, to work with?

As I said, this problem doesn’t just crop up when teaching undergrads; it’s not a problem specific to that audience. For instance, consider the “storage effect”, a very general type of coexistence mechanism. A colleague of mine has been arguing to me, correctly I think, that Peter Chesson and others interested in the “storage effect” haven’t always explained it in the best way. Part of the problem may be their reliance on an overly-limited range of examples. Briefly, the storage effect is a coexistence mechanism that can operate when environmental conditions and species’ densities fluctuate over time, in such a way that the strength of competition a given species experiences covaries in an appropriate way with environmental conditions. One way for the appropriate covariances to arise is if the competing species have stage-structured life histories with a long-lived, difficult-to-kill life history stage.* The examples that always get used include annual plants with seed banks, zooplankton with resting eggs, or coral reef fish and tropical trees with long-lived adults. It’s completely understandable why such examples are emphasized. The example of coral reef fish is what originally inspired Chesson and Warner (1981) to come up with the “lottery model” of coexistence, with Peter Chesson later generalizing the “lottery” mechanism to the storage effect. The storage effect is easy to demonstrate and illustrate using the sorts of mathematical models appropriate to species with such life histories. Such life histories are common in nature, and all the best empirical examples of the storage effect involve species with such life histories. Such life histories even give the storage effect its name. When species with such life histories coexist via the storage effect, one can view each species as increasing when environmental conditions favor it, and then “storing” the gains in a long-lived, hard-to-kill life history stage, preventing it from being driven extinct even if conditions mostly disfavor it.

All of which has given many people the impression that those sorts of stage-structured life histories are essential to the storage effect. Which they’re not. The only life history feature you need is overlapping generations (Ellner and Hairston Jr. 1994). So even organisms that just reproduce and die continuously, with no stage structure at all, can exhibit a storage effect, as illustrated by the “flip-flop competition” model of Klausmeier (2010).**

Sometimes you can avoid the problem of confusing specific examples and general principles by avoiding general principles entirely. That is, if there’s a specific example or application of a general principle which is familiar to your audience, you can sometimes explain a new example or application by reference to the familiar example rather than to the general principle. I do this when I’m explaining the application of the Price equation to ecology (e.g., Fox 2010 Oikos, Fox and Kerr 2012 Oikos). The Price equation is an extremely general and abstract mathematical formalism with very broad applicability. But my audience is very familiar with one specific application: evolution by natural selection. That is, my audience is familiar with evolution by natural selection, even though they (mostly) don’t have any previous familiarity with the Price equation, or with the notion that evolution by natural selection is merely one specific example of the more general principle of “selection” (Price 1995). So rather than trying to explain the general, abstract principles and how they apply to whatever specific bit of ecology I’m talking about, I start by making an analogy between evolution by natural selection and the specific bit of ecology I’m talking about. I pitch what I’m doing not as applying an abstract, general principle in a specific ecological context, but as taking an idea that (as far as most of my audience knows) is specific to evolution, and transferring that familiar “evolutionary” insight to ecology. But if your goal is to convey the general principle, you obviously can’t get away with just talking about examples, because it wouldn’t be clear what they’re examples of.

Any ideas on how to deal with this problem? Does the order in which you introduce general principles and specific examples matter? (I feel like it doesn’t, or shouldn’t, but I’m not sure.) There must be papers on this in educational psychology, but I don’t know that literature at all…

*You also need other ingredients. For instance, species can’t all respond in exactly the same way to environmental fluctuations.

**Klausmeier (2010) doesn’t actually say that the storage effect is what generates coexistence in this model, but Chris Klausmeier and I have figured out that that’s what’s going on. See this old post for some discussion. I didn’t walk through the details then and I’m not going to now, because I doubt most readers would be interested. But trust us, it’s a storage effect.

Posted by: Jeremy Fox | May 22, 2012

My blogging is starting to have real-world impact

I’ve just been asked to review a paper by a leading journal, on a topic that I’ve never published on, but have blogged about.

Who says blogging has no real-world impact? 😉

The Canadian federal government is going to cease funding the Experimental Lakes Area. Since the late 1960s, the ELA and its 58 small lakes have been doing amazing long-term monitoring and experiments on whole lakes, including groundbreaking studies of eutrophication and acid rain. Closer to home, they have arguably the best long-term phytoplankton and zooplankton time series data in the world–very frequent sampling of many lakes, going back decades, all resolved to species level, and with consistent sampling procedures and the same taxonomists doing the identifications. I’m the lead organizer of one of the many collaborative groups to have used the ELA data.

The Feds want to transfer ownership of the facilities to a university or the provincial government, on the grounds that universities, not governments, should be doing this kind of science. As if universities and the provincial government have lots of spare money lying around to run the ELA. And as if governments don’t need the kind of science the ELA has always provided (many of their experiments have been chosen precisely for their direct policy relevance).

Most days I’m proud to call Canada home. Not today.

UPDATE: See here for a very good explanation of why this was a bad decision, which also places the decision in the context of the Canadian government’s other reductions in support for basic science (which in turn are part of the current government’s strategy of reducing all federal expenditures and revenues).

One of my pet themes on the Oikos blog is how subtle scientific errors can arise from using ordinary words to describe technical concepts (e.g., see here, here [especially the comments], and the last item on this list). Here’s a lovely passage on this, from physicist N. David Mermin. The context is a discussion of how difficult it is to teach relativity, not just because it conflicts with our intuitions about time and space, but because those intuitions are built into the grammar of our language:

Language evolved under an implicit set of assumptions about the nature of time that was beautifully and explicitly articulated by Newton: “Absolute, true, and mathematical time, of itself, and from its own nature, flows equably without relation to anything external… ” Lovely as it sounds, this is complete nonsense. Because, however, the Newtonian view of time is implicit in everyday language where it can corrupt apparently atemporal statements, to deal with relativity one must either critically reexamine ordinary language, or abandon it altogether.

Physicists traditionally take the latter course, replacing talk about space and time by a mathematical formalism that gets it right by producing a state of compact nonverbal comprehension. Good physicists figure out how to modify everyday language to bring it into correspondence with that abstract structure. The rest of them never take that important step and, I would argue that like the professor I substituted for in 1964, they never really do understand what they are talking about.

The most fascinating part of writing relativity is searching for ways to go directly to the necessary modifications of ordinary language, without passing through the intermediate nonverbal mathematical structure. This is essential if you want to have any hope of explaining relativity to nonspecialists. And my own view, not shared by all my colleagues, is that it’s essential if you want to understand the subject yourself.

Go here to read the whole thing. It’s wonderful.

It isn’t just in physics where our ordinary language and everyday experience get in the way of our understanding of the non-everyday. The same thing happens in economics (see, e.g., much of Paul Krugman’s writing, such as this). The same thing happens in evolutionary biology (famously, Darwin’s use of the word “selection” was widely misunderstood as attributing willful agency and goals to nature). And the same thing happens in ecology. I just wish I could articulate it as well as Mermin! Like an ugly duckling who hopes to grow into a swan, I dream of growing out of my natural snark-and-zombie-joke-based writing style into something like the above.

In particular, I’m still searching for a way of explaining the effects of disturbance by modifying ordinary language, without obliging readers (and my undergraduate students) to pass through the intermediate step of understanding math. But as I indicated by my recent post on another topic, I vacillate on whether that’s even possible, or whether the problem is just that I haven’t found the right words.

HT Robin Snyder, a wonderful scientist and a better friend. And a FOOB.

Posted by: Jeremy Fox | May 16, 2012

Postdoc in theoretical/experimental community dynamics

My collaborator Dave Vasseur is seeking a postdoc for an NSF-funded project combining theoretical and experimental work on environmental variability and community dynamics. Details here. Dave is a super-smart and super-nice (and super-tall) guy, so this is a super-awesome opportunity.


Posted by: Jeremy Fox | May 16, 2012

Maybe we need even more stability concepts!

p.s. to the previous post: Commenter Christopher Eliot (indirectly) makes the important point that all those stability concepts related to equilibria and other attractors assume that whatever system you’re studying can be described by a model with unchanging structure and parameter values. It’s only species’ densities (or whatever your “state variables” of interest are) that are allowed to change over time. Of course, in nature it’s probably hardly ever the case that a perturbation just changes species densities, while having no effect on any other aspect of the ecology of the system (e.g., the species’ behaviors, levels of key abiotic factors, etc.).

So maybe we need some new stability concepts! 😉 Just kidding. There are of course theoretical models which allow one or more key parameters to vary over time, often due to extrinsic variation in the environment (that’s sometimes called “external forcing”). And there are models which allow intrinsically-generated temporal variation in parameter values as well, which effectively just makes those parameters into additional state variables (e.g., models of eco-evolutionary dynamics, which allow some parameters to evolve via natural selection). So I don’t think we actually need any new stability concepts. But we probably do need a lot more work on models in which parameter values and even model “structures” can change over time. That’s really difficult and messy of course, which is why many theoreticians understandably hesitate to do those kinds of models.

Posted by: Jeremy Fox | May 16, 2012

Advice: 20 different stability concepts

The comments on a previous post indicated some understandable confusion on the part of some commenters as to the relationship (or lack thereof!) between various measures of “stability” in ecology. The term “stability” is infamous for meaning different things to different people, so that entire areas of the literature, particularly on the links between “diversity” and “stability”, are rife with confusion. Previous attempts to clear up the confusion (e.g., Pimm 1984) seem not to have had much long-term effect, so odds are that no blog post of mine is going to help much. But long odds of doing any good have never stopped me from posting before. 😉

Below is a list of every “stability” concept that I could think of off the top of my head last night, with a brief definition, a useful reference or two (sometimes to the paper defining the concept, sometimes just to an arbitrarily-chosen paper illustrating or applying the concept), and perhaps a few brief interpretive remarks.*

The single most important message you should take away from this list is that these different kinds of stability are different, and in many cases have little or nothing to do with one another. Even when they are related, it’s often in complex, non-intuitive ways. For instance, to pick just one possible example of many, the same conditions that synchronize the fluctuations of the abundances of different species (thereby increasing the temporal variability or “instability” of their total biomass) also can cause their fluctuations to decrease in amplitude and be bounded further from zero, both of which are “stabilizing” (Vasseur and Fox 2007). So the correct answer to the question “Do synchronizing factors decrease or increase stability?” is “What exactly do you mean by ‘stability’?”

I’ve grouped the different stability concepts into three rough categories, although the last category is a catch-all for stability concepts that don’t really fit with anything else.

I have not made any comments on the underlying drivers or causes of any of these stability measures. This post is just about clarifying concepts. Oikos is not paying me nearly enough** to try to write a post summarizing everything that’s ever been written on the drivers of all these different things!

You can talk about the stability of anything that can change over time, but for concreteness I’ll present definitions couched in terms of community ecology.

Cranky note to readers who are not mathematically inclined: This list is only an entry point into the literature. It is no substitute for reading and understanding the literature. In most cases this will oblige you to learn some math. I know that’s probably not what you want to hear, but that’s the way it is. Many important concepts of “stability” are mathematical. It is not possible for me, or anyone, to properly explain these concepts using only words. Math is more precise than words, so translating math into words always represents a loss of information and an increase in ambiguity. Words have multiple meanings, and “stability” is no exception. If you don’t know what meaning is being used in a particular context, there are going to be tears before bedtime. And while some theoreticians sometimes could do a better job of explaining themselves, there’s a limit to how much it’s possible for them to walk you through this stuff. Theoretical papers in the primary literature necessarily assume some mathematical expertise on the part of the reader, just like a paper reporting field experiments necessarily assumes some expertise on the part of the reader with things like experimental design and statistics. So if you want to understand this stuff well enough to work on it or teach it well, you’re going to need to put the effort in to learn some math.

So if you’re the kind of person who reads theoretical papers by skipping the equations and just reading the words, because you just want to “get the gist”, I’m sorry, but it’s pretty much inevitable that you’re going to end up confused about stability. When it comes to “stability” (and many other important ecological concepts), there either is no ‘gist’ to get, or the only way to get it is to first get the technical details. You’d be appalled, and rightly so, if some theoretician said to you, “Plants and algae are basically the same, they’re all green, the differences are just unimportant technical details that only plant ecologists need to worry about”. Which is exactly how big a mistake you’re making if you think that “All these different kinds of stability are basically the same, they’re all related, the differences are just unimportant technical details that only theoreticians need to worry about”. Yes, I know Charles Elton (1958) conflated several different stability concepts in his very influential book on invasion ecology. But he had the very good excuse of writing many decades ago. You don’t have that excuse, and shouldn’t follow his example.

And if you say, “But I don’t know enough math to really understand the details of different stability concepts,” then your choices are to learn some math, or work on something else. Just like how, if you don’t know enough about plants to really understand the plant ecology literature, your choices are to either learn more about plants, or work on something else.

And you know what? It can actually be fun to learn this stuff! Seriously. It’s like anything that takes a bit of effort to appreciate—once you get into it a little, it’s pretty cool.

Note to mathematically-inclined readers: Yes, many of my definitions are very imprecise. Please don’t hassle me in the comments. I am aware of more precise definitions. I chose to write the post in this way because it’s only an entry point into, and rough road map of, the literature, aimed at non-mathematicians. My imprecision will do no harm. No one is going to rely solely on my definitions. I’m only trying to be precise enough to give non-mathematical readers a sense of just how many different stability concepts there are, and just how different they are from one another.

Stability concepts related to equilibria and other attractors

These are the stability concepts that tend to get used most often by theoreticians.

Feasibility of equilibrium, and interior vs. boundary equilibria. Is there a set of non-zero densities at which all species have zero population growth rates? If so, the system has a feasible “interior” equilibrium. An equilibrium at which one or more species have zero density is known as a “boundary” equilibrium. An equilibrium at which one or more species have negative densities is “infeasible”, because negative densities are physically impossible.

Stability of equilibrium. Will species’ densities approach a given equilibrium over time if they’re not currently at that equilibrium (for instance, because they’ve just been perturbed away from their equilibrium values)? Equilibria that are stable in this sense are sometimes called “resilient”. The opposite is an unstable equilibrium, an equilibrium state that the community tends to move away from, rather than towards. A neutrally stable equilibrium is one the community tends to move neither towards nor away from.

(Asymptotic) rate of return to equilibrium. How fast does the community return towards, or move away from, an equilibrium state? Typically, theoreticians ask about the asymptotic rate of return, meaning how fast the system returns once it is already sufficiently close to equilibrium (systems that are sufficiently close to a stable equilibrium approach that equilibrium at a constant exponential rate). This is an asymptotic rate because, in general, if a community is far from a stable equilibrium its rate of return to that equilibrium can fluctuate greatly over time, and can even be temporarily negative (see “reactivity”). But eventually (“asymptotically”) the system gets close enough to equilibrium to approach it at a constant exponential rate.
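For the mathematically curious: for a community near equilibrium, both stability and the asymptotic return rate come from the eigenvalues of the Jacobian ("community matrix") evaluated at that equilibrium. A minimal sketch, using a hypothetical 2×2 Jacobian with made-up values:

```python
import math

def eigenvalues_2x2(a, b, c, d):
    """Eigenvalues of the matrix [[a, b], [c, d]], assuming they are real."""
    tr, det = a + d, a * d - b * c
    disc = math.sqrt(tr * tr / 4 - det)
    return tr / 2 + disc, tr / 2 - disc

# A stable equilibrium has all eigenvalues negative. The eigenvalue closest
# to zero (the dominant one) sets the asymptotic return rate: small
# perturbations eventually decay like exp(lambda_max * t).
lam1, lam2 = eigenvalues_2x2(-1.0, -0.5, -0.5, -1.0)
print(lam1, lam2)  # -0.5 and -1.5: both negative (stable), return rate set by -0.5
```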

Local vs. global stability of equilibrium. A community that returns to an equilibrium following a sufficiently-small perturbation (i.e. a perturbation that only changes species’ densities by a sufficiently small amount from their equilibrium values) is said to be locally stable. A community that returns to equilibrium following any possible perturbation that doesn’t actually eliminate a species is said to be globally stable.

Domain of attraction. How far can you perturb species’ densities away from equilibrium and still have them return to that equilibrium? By definition, a globally stable community has the largest possible domain of attraction.

Alternate stable states. A community with multiple, locally-stable equilibria has alternate stable states. Which equilibrium the community approaches in the long run depends on its initial state. For instance, a community that starts out close to one equilibrium might approach that one rather than another, more distant equilibrium.

Attractor. Any state or sequence of states which a system tends to approach if it’s not currently in that state. Stable equilibria are one kind of attractor. Others include stable limit cycles and chaotic attractors. Most of the stability properties of equilibria are also properties of other kinds of attractors. For instance, attractors can be local or global attractors, a community can have alternate local attractors, etc. Also, some kinds of attractors are sometimes considered more “stable” than other kinds (e.g., equilibria>limit cycles>chaos).

Permanence. A community is permanent if it tends to move away from boundary equilibria. Basically, this means that, if one or more species are currently rare, they tend to increase rather than decline to extinction. Permanence implies the existence of some sort of interior attractor.

Reactivity. A system is reactive if, on being perturbed away from a stable equilibrium, it initially moves even further from that equilibrium before eventually returning.
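Following Neubert and Caswell (1997), reactivity is diagnosed from the symmetric part of the Jacobian, (J + Jᵀ)/2: if its largest eigenvalue is positive, some perturbations initially grow even though the system is stable. A minimal sketch with a made-up Jacobian chosen to show the effect:

```python
import math

def eig_sym_2x2(a, b, d):
    """Eigenvalues of the symmetric 2x2 matrix [[a, b], [b, d]]."""
    tr, det = a + d, a * d - b * b
    disc = math.sqrt(tr * tr / 4 - det)
    return tr / 2 + disc, tr / 2 - disc

# Hypothetical Jacobian J = [[-1, 0], [5, -2]]: its eigenvalues are -1 and -2,
# so the equilibrium is stable. But its symmetric part (J + J^T)/2 is
# [[-1, 2.5], [2.5, -2]], and if that matrix has a positive eigenvalue,
# the system is reactive: perturbations transiently grow before decaying.
lmax, lmin = eig_sym_2x2(-1.0, 2.5, -2.0)
print(lmax)  # positive, so this stable system is reactive
```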

Probability of local asymptotic stability (or feasibility, or permanence, or etc.). Given a specified model of system dynamics, with parameters randomly chosen from specified distributions, what is the probability that the resulting system will have a locally-stable interior equilibrium, or have a feasible interior equilibrium, or be permanent, or be reactive, or etc.?

Sign stability. An equilibrium which is guaranteed to be stable, just due to the signs of the interactions among the species (e.g., predators have a negative effect on prey growth rate, while prey have a positive effect on predator growth rate), no matter what the absolute magnitudes of those interactions.

References: May 1973 (stability of equilibrium, sign stability, probability of stability), Goh and Jennings 1976 (feasibility), Law and Morton 1996 (permanence), Neubert and Caswell 1997 (reactivity), Case 2000 (entry-level textbook covering various stability concepts related to equilibria and attractors)

Stability concepts related to variability

These are the stability concepts that, at least lately, are most often used in empirical studies, although there is a fair bit of theoretical work as well.

Temporal variability of a single variable. How much the abundance of a given species, or some other variable of interest, varies over time. Can be measured by various statistics, which themselves differ from one another (e.g., variance, coefficient of variation). Some authors use “constancy” to refer to the property of having low temporal variability.

Temporal variability of the sum of a set of variables. How much the total abundance or biomass of a set of species, or the sum total of some other set of variables, varies over time. The answer depends both on how the individual variables (the summands) vary, and on how they covary. All else being equal, negative covariation among the summands reduces the variability of their sum. Can be measured by various statistics, which themselves differ from one another (e.g., variance, coefficient of variation).
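The role of covariances is just the familiar identity var(X + Y) = var(X) + var(Y) + 2 cov(X, Y). A toy illustration with made-up abundance data, showing how two variable species that fluctuate out of phase can add up to a constant total:

```python
# Variance of a sum depends on the variances AND the covariances of the
# summands; negative covariation (asynchrony) stabilizes the total.

def mean(xs):
    return sum(xs) / len(xs)

def cov(xs, ys):
    """Population covariance of two equal-length series (cov(x, x) = variance)."""
    mx, my = mean(xs), mean(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)

sp1 = [10.0, 20.0, 10.0, 20.0]  # hypothetical abundance time series
sp2 = [20.0, 10.0, 20.0, 10.0]  # fluctuates exactly out of phase with sp1
total = [x + y for x, y in zip(sp1, sp2)]

print(cov(sp1, sp1), cov(sp2, sp2))  # each species has variance 25
print(cov(sp1, sp2))                 # covariance -25
print(cov(total, total))             # total abundance is constant: variance 0
```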

Range or amplitude of variation. The range of values over which the variable of interest fluctuates (maximum minus minimum). For variables that oscillate (cycle) in a periodic fashion, this is known as the amplitude of the oscillation.

Stationarity. A stationary variable is one that fluctuates over time, but with unchanging mean, variance, and other statistical moments. So for instance, a species that’s gradually declining towards extinction (i.e. mean abundance is declining) is not stationary.

References: Ives and Hughes 2002, Loreau and de Mazancourt 2008

Other stability concepts

Resistance. A system is resistant if it’s difficult to perturb it away from its current state. For instance, a small fire that kills grassland plants (thereby perturbing their abundances) might have no effect on tree abundances, indicating that trees are more resistant than grass to small fires.

Species deletion stability. If you remove a species from a community without changing anything else, does that lead, directly or indirectly, to any other species going extinct? If not, the system is species deletion stable. If so, then you have “secondary extinctions”, which could themselves lead to further extinctions (an “extinction cascade”).

Network topology stability. If you have a network of dependencies (e.g., predators are linked to, and depend on, their prey, and plants and pollinators are linked to, and depend on, each other), and you start removing species and/or links from that network, how many and/or which links do you have to remove in order to remove all the links on which a given species, or any species, depends? I made up the name for this, there doesn’t seem to be an agreed term in the literature.

Invasion resistance. If you add a new species to the community, at initially-low abundance, can it increase and establish itself? If not, the community is invasion resistant. Actually, this is related to local stability of boundary equilibria, so maybe it doesn’t belong in this subsection… Note that, in the theoretical literature, an “invader” is just an initially-rare species. It’s not, or not necessarily, “non-native” or “exotic”. In defining “invader” in the way that they do, theoreticians are focusing on the ecological determinants of invasion success. After all, species move around and establish new populations in new locations all the time, and they always have. Whether or not “non-native” (or “exotic” or “weedy” or etc.) species tend to have the properties that allow them to invade is a separate question; the mere fact that a species is “non-native” or “exotic” does not in and of itself affect its ability to invade.

Boundedness away from zero. How closely does the abundance of a given species, or some other variable that can only take on non-negative values, approach zero? The less closely it approaches zero, the more “stable” it is reckoned to be, on the basis that random events are more likely to cause the variable to actually go to zero if it closely approaches zero on its own.

Persistence time. How long does a species, or the community, persist before that species, or one or more of the species in the community, goes extinct?

References: Pimm 1980 (species deletion stability; an Oikos classic!), Case 1991 (invasion resistance), McCann et al. 1998 (boundedness away from zero)
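Of these definitions, invasion resistance is perhaps the easiest to make concrete. Here’s a minimal sketch, assuming a Lotka-Volterra competition model (the model choice, function name, and numbers are mine, purely for illustration): the community is invasion resistant if a rare invader’s per-capita growth rate, evaluated at the residents’ densities, is negative.

```python
def invasion_resistant(r_inv, a_inv, resident_densities):
    """True if a rare invader cannot increase from low abundance.

    Assumes Lotka-Volterra competition, dN/(N dt) = r * (1 - sum_j a_j * N_j),
    so the invader's per-capita growth rate when rare depends only on the
    residents' densities, not on its own density.
    """
    growth_when_rare = r_inv * (1.0 - sum(
        a * n for a, n in zip(a_inv, resident_densities)))
    return growth_when_rare < 0.0

# A resident at density 1.0 that competes strongly with the invader (a = 1.5)
# repels it; a weaker competitor (a = 0.5) lets the invader increase.
print(invasion_resistant(0.5, [1.5], [1.0]))  # True: community is invasion resistant
print(invasion_resistant(0.5, [0.5], [1.0]))  # False: the invader can establish
```

Note this is exactly the “local stability of a boundary equilibrium” idea mentioned above: we linearize the invader’s dynamics around invader density zero.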

*I wish I could just direct readers to Wikipedia for this, but the Wikipedia page on “ecological stability” is poor. The section on local vs. global stability says “Local stability indicates that a system is stable over small short-lived disturbances, while global stability indicates a system highly resistant to change in species composition and/or food web dynamics.” Huh? A locally stable system is one that’s…stable? Local stability has something to do with small disturbances, whereas global stability has to do not with large disturbances but with resistance to change? Other sections are just as bad. I often find Wikipedia useful, but this isn’t one of those times. It’s not that the page is brief, it’s that what’s there is confusing or wrong.

**Indeed, they’re not paying me anything.

Posted by: Jeremy Fox | May 15, 2012

From the archives: bandwagons in ecology

What are some of the biggest bandwagons in ecology right now? Why do some research topics turn into bandwagons, while others don’t? How do you tell a bandwagon from a non-bandwagon? Can a bandwagon be stopped? For the answers, check out this old post.

Posted by: Jeremy Fox | May 14, 2012

Crowdfunding crowdfunding

What if you want to crowdfund your science but are having trouble developing a professional-looking and compelling pitch? xkcd has the answer!

Many ecologists expect competing species to exhibit compensatory dynamics, meaning that the densities of any two competing species should be negatively correlated over time or across space. If your competitor increases in abundance, you ought to decline, right? After all, to the extent that two species are competing, that means that when one increases, it’s at the expense of the other, right?

Um, no. Or rather, not necessarily. For instance, environmental fluctuations can cause competing species to exhibit positive rather than negative correlations in abundance. Think of a drought which causes the density of every plant species to decline, even though they’re all competing. But there’s a deeper reason why you should not necessarily expect the densities of competing species to all be strongly negatively correlated with one another: in general, it’s mathematically impossible. I don’t think this fact is as well-known as it should be, so I thought I’d post on it.

Say you have just two competitors, each of whose densities you’ve measured at a bunch of different time points, or a bunch of different spatial locations. In this special case, the correlation coefficient (Pearson’s correlation or rank correlation) between the density of species 1 and the density of species 2 can indeed take on any value from +1 to -1. So depending on how strongly the species compete and other factors, it’s possible that their densities could be perfectly compensatory (correlation = -1). So for the sake of illustration, let’s assume that the correlation between their densities is -1.

Now imagine that there’s a third competitor. How will its densities correlate with those of species 1 and 2? Well, to answer that, you’d have to specify more information about the ecology of all three species. But without knowing anything about the ecology, I can tell you what the answer won’t be. Species 3 won’t have a correlation of -1 with both species 1 and 2. Because that’s mathematically impossible. For instance, if species 1 and 3 have a correlation of -1, then by definition species 2 and 3 must have a correlation of +1, i.e. perfectly synchronous rather than perfectly compensatory dynamics. Conversely, if species 3 has correlations of -1 with both species 1 and 2, then by definition species 1 and 2 must have a correlation of +1.
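A quick numerical check of this, using numpy’s `corrcoef` (the “density” series here are made up, purely to illustrate the constraint):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100)  # densities of species 1 (arbitrary fluctuations)
y = -x                    # species 2: perfectly compensatory with species 1
z = -x                    # species 3: also perfectly compensatory with species 1

# Pairwise correlation matrix among the three "species"
r = np.corrcoef([x, y, z])
print(np.round(r, 2))
# Forcing corr(1,2) = -1 and corr(1,3) = -1 leaves no freedom:
# corr(2,3) is necessarily +1 (perfectly synchronous).
```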

This three species case is a simple illustration of a general principle: the more species you have, the less-compensatory their dynamics can possibly be. It’s mathematically possible for any number of species to all be perfectly in sync with one another. But the more species you have, the less density compensation they can possibly exhibit, on average. In general, we can describe the pairwise correlations among s competitors with a correlation matrix, a square matrix with s rows and s columns, one row and column for each species. The number in row i of column j gives the correlation between species i and j, and of course the same number will appear in row j of column i since the correlation between species i and j is the same as that between j and i. The numbers on the diagonal will all be +1, since by definition any variable is perfectly correlated with itself. Now, as a matter of mathematical necessity, correlation matrices are positive semidefinite. Which turns out to imply that, the larger s is, the less-negative the off-diagonal elements of the correlation matrix can possibly be, on average.

For instance, in the special case when every pair of species has the same correlation, the minimum possible value of that correlation equals -1/(s-1). Here’s the graph for that special case:

As you can see, even with as few as 5 species, in this special case the minimum possible correlation is only -0.25, which is pretty weakly compensatory. In the limit, as s goes to infinity, the minimum possible correlation goes to 0 (i.e. species fluctuate independently of one another).
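The bound is easy to check numerically. The eigenvalues of an s×s equicorrelation matrix with off-diagonal value r are 1−r (with multiplicity s−1) and 1+(s−1)r, so positive semidefiniteness requires r ≥ −1/(s−1). A sketch (the function names are mine):

```python
import numpy as np

def min_equal_correlation(s):
    """Smallest common pairwise correlation for which an s x s
    equicorrelation matrix is still positive semidefinite."""
    return -1.0 / (s - 1)

def is_valid_correlation_matrix(s, r):
    """Check positive semidefiniteness via the smallest eigenvalue
    (with a small tolerance for floating-point rounding)."""
    m = np.full((s, s), r)
    np.fill_diagonal(m, 1.0)
    return np.linalg.eigvalsh(m).min() >= -1e-9

for s in (2, 3, 5, 10):
    r_min = min_equal_correlation(s)
    print(s, r_min,
          is_valid_correlation_matrix(s, r_min),        # valid at the bound
          is_valid_correlation_matrix(s, r_min - 0.01)) # invalid just below it
# For s = 5, r_min = -0.25, matching the value discussed above.
```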

Of course, in reality the pairwise correlations won’t all be equal, and so even with many competing species it’s possible that some pair of them might have strongly compensatory dynamics. But if they do, that just implies that some other pair of them must have strongly synchronous dynamics. On average, the pairwise correlations can’t be more than slightly negative when you have more than a few species.

Note as well that the same basic point holds for other measures of synchrony. For instance, the exact same points hold if you want to analyze synchrony in the frequency domain by looking at phase differences.

This mathematical fact is certainly familiar to folks who do a lot of work on this stuff, like my collaborator Dave Vasseur. But it deserves to be more widely known. Lots of ecologists have the vague sense that competitors ought to exhibit compensatory dynamics, and so are somewhat surprised to learn that compensatory dynamics are actually quite rare in nature.  But the reason they’re rare is mathematical, not ecological.  Which means you cannot use the rarity of compensatory dynamics as evidence for anything about ecology. For instance, you can’t say “These species only exhibit weakly compensatory dynamics, so they must not be competing very strongly”. You can’t even say “These species only exhibit weakly compensatory dynamics, so environmental fluctuations must be generating synchrony that overrides the strongly compensatory dynamics that would otherwise occur.”

Just to be clear, there absolutely is scope for the strength of synchrony or compensation to vary among communities, and among different pairs of species, for all kinds of interesting ecological reasons. But if you aren’t clear on what dynamics are possible, you’re liable to misinterpret actual dynamics.

Posted by: Jeremy Fox | May 11, 2012

Turn your study system into a funny meme! (UPDATED)

In a previous post I expressed a bit of wariness that crowdfunded science, by requiring new forms of salesmanship on the part of scientists, might tend to favor style over substance.

But if the style is going to be this funny, I’m all for it! Zen Faulkes’ SciFund project is about the environmental and evolutionary drivers of gigantism in sand crabs. “Gigantism” is relative, of course–which Zen has illustrated with a great picture of a “giant” sand crab trying to beat the c**p out of his fingertip. The picture’s hosted on a meme generator site that lets you add your own caption, which a bunch of people have already done, and the results are hilarious. Go check it out!

Readers are encouraged to post their own pictures to the meme generator site and let us know in the comments.

(Beware that the site can be slow, which caused me to accidentally post a caption three times…)

UPDATE: Here are some samples:

Posted by: Jeremy Fox | May 10, 2012

On the use and care of mathematical models

A wonderful passage from Simon Levin (1975):

Most models which find their way into the pages of journals such as this one are not meant as literal descriptions of particular situations, and thus generate predictions which are not to be tested against precise population patterns. On the other hand, such models, if handled skillfully, can lead to robust qualitative predictions which do not depend critically on parameter values and can stimulate new directions of investigation. Such is often the role of theory throughout the basic sciences.

Further, that which mathematical models give up in reality is offset by the tightness of logic and the precision characteristic of mathematics. The author of mathematical papers thus has the same responsibility for the accurate presentation of theorems and reasoning as does the experimentalist in presenting data. Those who observe mathematicians at work often wonder at their fascination with precise statements and endless refinements of theorems and results already approximately known. The experimentalist can, however, easily appreciate this pride of craft, which has a direct analogue in the care one lavishes on the presentation of data.

As one who indulges in mathematical models and analysis, I am thus especially concerned by the tendency to state and use mathematics imprecisely. Not only do folk theorems become more and more garbled as they are handed down, but also the results which such theorems engender are suspect.

Posted by: Jeremy Fox | May 10, 2012

Wikipedia on ecological microcosms

I stumbled across the Wikipedia entry on “microcosm (experimental ecosystem)”. It’s, um, sketchy. But it does cite Rees Kassen and my former labmate Lin Jiang, so it’s got that going for it.

Posted by: Jeremy Fox | May 10, 2012

“I want to be the [famous non-scientist] of science”

A little while back, neuroethologist Zen Faulkes said in a post on crowdfunding that he “want[s] to be the Amanda Palmer of science crowdfunding”, Amanda Palmer being a musician who just had massive success crowdfunding her new album. And he’s also on record as wanting to be “the Iggy Pop of science”, Iggy Pop being a rock star who continues to rock just as hard in his 60s as he did when he was young. The analogy would be a scientist who doesn’t just do his best work when he’s young.

I’m not really into Iggy Pop (the only song of his I really like is “Candy“, mainly for the vocals from the amazing Kate Pierson). But my ambitions are similar to Zen’s. So I’ll say that I want to be the Jamie Moyer of science. Jamie Moyer, for those of you who don’t know, is a pitcher in Major League Baseball. He is 49. Yes, you read that right. He is the oldest player in Major League Baseball by a wide margin. Earlier this year he became the oldest MLB pitcher ever to win a game. He’s been around so long there are all kinds of ridiculous facts about him (which hasn’t stopped others from making a bunch up, just for fun).

Why Jamie Moyer? Well, basically because it seems like the highest I can reasonably aim at this point. I’m almost 40, an age by which even many ecologists (a field not known for child prodigies) have done, or soon will do, their best work. If I was going to win the Mercer Award, I’d almost certainly have won it by now. So it’s not realistic for me to say “I want to be the Beatles of science” or “the Stephen Spielberg of science” or anything like that. I’m never going to be that big a deal. Indeed, I’m never even going to be as big a deal as mathematician Paul Erdös, the most-published mathematician in history and the exemplar of a scientist who remained productive into old age. So basically, I wanted to pick someone who was pretty good for a long time without ever being great, peaked relatively late, and eventually became appreciated as something of a unique curiosity.

Moyer seems to fit the bill. In stark contrast to most professional baseball players, who peak from about 27-30, Moyer’s best years all came after he was 34. He’s only made one All-Star team, and he’s never won the Cy Young Award for being the best pitcher in the league (and only twice has he even received any votes for that award). And he doesn’t pass the “eye test” of what a good pitcher should look like. For a professional athlete, he’s not physically imposing. And he’s famous for not throwing very hard. A typical major league pitcher can throw about 89-90 miles per hour. Moyer’s never thrown harder than the mid-80s (which is quite slow for a major leaguer), and in his recent record-setting win he topped out at 78 mph. He succeeds by really “knowing how to pitch”, as the baseball cliche goes. He’s what you might call a craftsman rather than a genius. His skills are very real, but they’re of an unusual and subtle type most readily appreciated by his fellow professionals and hardcore baseball fans.

Analogously, I am not physically imposing, even for an ecologist (although I try to fake it). I don’t pass the “eye test” of what a good ecologist should look like: I mostly work in a lab, don’t even own any boots to get muddy, and work on fundamental topics that aren’t easily appreciated by the general public. I’m not massively productive or widely cited. But I do things my own way (I have a pretty high percentage of first- and sole-authored papers). And I’m proud that that way of doing things occasionally draws very flattering comments from colleagues who I really admire. And while my peak may never be that high, I like to think that I’m above average, and that I could sustain my current level for many years yet. Which someday might be enough for people to start making up silly facts about me. 😉

So how would you complete the sentence “I want to be the [famous non-scientist] of science”?

Friends Of Oikos Blog (‘FOOBs’) Chris Klausmeier and Elena Litchman are looking for three postdocs; advert below. They are two of the best people in the world working at the interface of theory and experiment in community ecology, specifically plankton communities. So if that’s what you want to do, you should apply (heck, I’m thinking about applying…)


Three postdoctoral research positions are available in the labs of Elena Litchman and Chris Klausmeier to develop mathematical and statistical models in plankton ecology.

1) Modeling community dynamics in Lake Baikal and analyzing long-term plankton data (job #6136). Funded by NSF grant “Dimensions: Collaborative Research: Lake Baikal Responses to Global Change: The Role of Genetic, Functional and Taxonomic Diversity in the Plankton”

2) Using trait-based and community models to optimize algal biofuel polycultures (job #6137). Funded by NSF grant “Experimental and theoretical trait-based approaches to optimizing algal biofuel polycultures”

3) Investigating community dynamics in spatially and temporally varying environments (job #6138). Funded by NSF grant “CAREER: Modeling Complexity in Plankton Communities”.

Basic qualifications are a PhD in ecology, mathematics, or a related field, and a strong interest in quantitative ecology. Knowledge of phytoplankton ecology, limnology, or oceanography is a plus. Mathematical modeling expertise is required. The postdocs will be based at Michigan State University’s Kellogg Biological Station <http://www.kbs.msu.edu>. Each position is for one year, with a possibility of renewal, given satisfactory performance.

To apply, search for the job numbers above at <https://jobs.msu.edu>. Applications should include a cover letter describing your research interests and experience and your CV. Also, email the contact information of two references to Chris Klausmeier (klausme1@msu.edu).

Review will begin June 1, 2012. For more information, visit <http://preston.kbs.msu.edu/postdocs.php> or email Chris Klausmeier (klausme1@msu.edu).

My colleagues and I worked on the top 1% of ecologists project for a long time. There was significant discussion of both the interpretation itself and the implications. We also discussed conducting a social survey of the NSERC Discovery Grant holders similar to the one we conducted for the elite (published in Scientometrics). The editorial board of Oikos (particularly Dustin Marshall) facilitated a much clearer and more direct interpretation, as I got tangled up in the implications and caveats. I just wanted to provide a few additional nuances here in case you might be interested.

I must admit that my perception of the implications has also recently changed, because I just received the news on my own NSERC Discovery Grant. I was funded at what I understand to be the lowest tier ($21,000) and for only one year (instead of 5 years). My former grant was $18,500 per year, and the importance of economy was a critical consideration for any project or idea a student and I generated. I wonder at the merit of underfunding ecologists, and at the importance of minimum and realistic thresholds (i.e., how much can one generate with $18,000-20,000 per year, and how competitive are you in the next round against others who entered the system at a higher level to begin with?). Consequently, I imagine a few intriguing questions with respect to this project and ecology in general (and my own ability to fund and conduct research).

1.  Is there a way to mix these two data streams to examine whether there are thresholds or larger relationships?
2.  Is there a point between the two sets of individuals (NSERC vs most highly cited) that is meaningful either as a minimum or mean value?
3.  Is research best forwarded by incremental increases in funding or more ‘jackpot-driven’ funding (that NSF adopts and that NSERC appears to now emulate)?

1. OK, the first question is a snap – though it is questionable to combine the two different datasets. As discussed below in the comments thread associated with the post by Jeremy, there are numerous different attributes associated with the researchers, including multiple versus single grant holders, that I cannot decouple. The time frame and allocation of funding is also different (i.e. in Canada NSERC uses a 5-year cycle if you are lucky, whereas for the most highly cited the values reported were for an ‘average’ and, I hope, representative year in their current career stage). Ideally, I would love to see an experiment on this, or at least see balanced contrasts by sub-discipline, such as between plant ecologists at similar career stages. Nonetheless, with some painful conversions, I can match up the two datasets (good idea Jeremy). This is purely an exercise to see if we can increase the scope of inference and possibly provide a roadmap for more comprehensive analyses by granting agencies.

Red is Canadian NSERC Discovery Grant holders (converted to citations per publication, with a mean annual funding value to match the most-highly-cited reporting), whilst green is the most highly cited ecologists identified by ISI. The insights/interpretations would be that (i) there is limited overlap, (ii) some of the most highly cited ‘less’ funded individuals approach the upper funding levels of Canadians, and (iii) the two lines intersect at 5.9, back-transformed to $794,328 USD. The implication of the latter point would be that NSERC would have to significantly extend mean annual funding to ecologists for this group of scientists to become the citation elite – if recognition or discovery functioned linearly. This value might be a bit too much for Canadians to hope for, given that the mean funding for 2012 in ecology and evolution was $27,167 (full stats here).

2. Let’s examine this another way. What if I simply combined both sets and ignored the fact that they come from very different groups? Similar to a funnel plot or trim-and-fill analysis used to identify publication bias in a meta-analysis, do the raw (but matched) values of the NSERC holders ‘fill in’ the missing range of values? As you might expect, they do not: the distribution is still kurtotic and significantly deviates from normal. The line of best fit between funding and citations also inflected at approximately 5.9, but the grand mean and median of the distribution fell to 4.7 and 4.6 respectively, with the first quartile at 4.9. Now, these are numbers that the Canadian system, or better yet any system, interested in funding ecologists at a reasonable level could consider. Using the median value back-transformed, we are looking at a minimum threshold of $39,810 USD. This seems viable. Detailed budget work both before and after my NSERC Discovery Grant results leads me to a conservative number for field ecology research with travel for a small lab: $36,000 CDN per year. I may not be able to pursue tangents, which is unfortunate because I suspect discovery is accelerated by these serendipitous moments, but I can mentor students and achieve critical mass within the lab in terms of collaboration and field assistance. Anything less than this is really difficult unless students are independently fully funded, but most schools require modest top-ups. In summary, I propose that this is a reasonable current minimum for ecological research.
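For what it’s worth, the dollar figures quoted above are consistent with the funding values being on a log10(dollars) scale; assuming that scale (my assumption, inferred from the numbers), the back-transformations check out:

```python
# Back-transform the two log-scale values quoted above,
# assuming (as the quoted dollar figures suggest) log10 dollars.
for log_value in (5.9, 4.6):
    print(log_value, "->", round(10 ** log_value))
# 5.9 -> 794328 (the ~$794,328 intersection); 4.6 -> 39811 (the ~$39,810 median)
```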

3. Gradual increase versus jackpot is intriguing as it relates to discovery. All of the above assumes that increases, sometimes even nominal ones, can make a difference. Realistically, however, the difference between $18,500 and $21,000 per year is not another graduate student. I am grateful for the increase (but wish it was for 5 years), yet I still cannot do the research that I proposed. I envisage several alternatives: increase the minimum, award jackpots here and there, or provide a mechanism for applicants to honestly identify funding levels associated with their specific research. To the best of my knowledge, NSERC ranks applications and each tier is associated with a funding level. What was the purpose of the budget that I carefully prepared? I assume it is to demonstrate that if I really got $52,000 I could effectively allocate the funds to do that research. However, it is not impossible to imagine that padding a budget occurs, and that it is not even an unreasonable bet-hedging strategy when soliciting funds, since ecology is done in natural environments where accidents happen. The combination of these two practices, tiered funding and the inability of applicants to communicate thresholds, leads to arbitrary and low awards. An obvious solution would be the provision of more transparent evaluation with respect to the budget and threshold reporting. My understanding of NSF grants is that the system is more jackpot based, with much lower funding success rates but larger grants. This could accelerate discovery for that specific interval when one is funded, but between grants one likely spends large amounts of time writing more grants, with limited capacity for scientific discovery. This leads me back to the second question. A hybrid of the two would be more even distributions of funds to more individuals at higher levels, i.e. > 0 and also > $27,167.
Nonetheless, when an individual hits on an amazing and important idea, the capacity to direct larger sums of funds could be critical. The hybrid should also be in between these two systems – not just in values but in funding model – with documentation- and limited-discovery research conducted and funded at modest levels, and larger discovery endeavors funded more ambitiously. I know that there are discovery accelerator supplements in place and other alternatives as well, but we should urge agencies to do more for ecology, which is often at the lower end of the priority list. I suspect that if we polled most ecologists with the following question, the answer would be yes: if you were awarded a relatively large grant, even once, could it set your research program on a totally different trajectory? I imagine mine could, but whether I would actualize that dream is another matter. Perhaps NSF grant holders pre and post large awards could be tested – provided they had some funding after the 3-year cycle.

I propose that funding discovery is similar to the scientific process of inquiry: it is advanced through multiple channels. We need publications that document and describe patterns, propose ideas, explore ideas empirically, and hit home runs with rigorous experimentation. Not every contribution needs to be hit out of the park, however; but by providing most ecologists with so few dollars, agencies are not even letting them get up to bat. Ideas are sometimes cheap; testing them is hard. I enjoy ecology and see the best in our discipline. I firmly hold the conviction that both basic and applied ecology are useful in effectively managing our little planet. With inadequate funding levels and limited alternative models to conduct research, we end up with ideas only for most of the team… discussing them in the dugout.

My fellow Oikos editor and blogger Chris Lortie has a strong interest in scientific publication practices (see, e.g., here). His latest effort, now in press at Oikos, examines patterns of funding and impact among the ecological 1%: the most-cited 1% of ecologists over the last 20 years (Lortie et al. in press). Chris is too busy right now to blog it himself, so I’m going to do it, because I think it’s a must-read.

In a previous paper (UPDATE: link fixed), Chris and his colleagues reported detailed survey data characterizing 147 of the most-cited ecologists and environmental scientists. Not surprisingly, they are overwhelmingly male, middle-aged, employed in North America and Western Europe, have large, well-funded labs, publish frequently, and have high per-paper citation rates.* But there’s a surprising amount of variation (multiple orders of magnitude) in lab size and funding level within this elite group, a feature of the data that Lortie et al. take advantage of to ask how citation rates vary with funding level within the elite.

In their previous work, Chris and his colleagues have shown that, for non-elite Canadian ecologists, more funding is associated with more “publication impact efficiency” (PIE): more highly-funded researchers also have more citations per dollar of funding. But Lortie et al. find that the same is not true of the elite. Using a slightly different measure of PIE (citations per publication per dollar), Lortie et al. find no relationship (not even a hint!) between PIE and funding within the ecological elite. Again, that’s despite multiple orders of magnitude of variation in funding level within the elite. Combined with their previous results, this indicates diminishing returns to really high funding levels (above several hundred thousand dollars, roughly). The implication is that funding agencies looking to maximize the “bang for their buck” arguably should reallocate funding away from really elite researchers and towards non-elite researchers. This wouldn’t reduce the PIE of the former group, but would increase the PIE of the latter group. Actually, Lortie et al. are careful to suggest increased funding for non-elite ecologists, rather than a reallocation of existing funding. But opportunity costs are ever-present so long as total funding is finite, and so I don’t think it’s possible to avoid the implication that these data suggest reallocation of funding away from the “elite of the elite”.

This is really thought-provoking stuff, and I hope that the folks who run our funding agencies take note.** One thing I’d like to see is for funding agencies to use this kind of information to give guidance to grant referees on how to evaluate applicants’ track records. For NSERC Discovery Grants, the “excellence of the researcher” (basically, the reviewers’ evaluation of your track record of publications and other outputs over the previous 6 years) is 1/3 of the grant score. To my knowledge, NSERC currently offers no guidance as to whether “excellence” should be scaled relative to the applicant’s funding level (which the applicant is obliged to report). These data suggest that it should be, and further that the appropriate scaling is nonlinear. Against that idea, one could note that increased funding gives researchers various advantages that let them increase their per-publication impact as well as their publication rate. So ultimately it’s impossible for reviewers (or anyone) to fully tease apart how much of an applicant’s track record is just due to their funding level (the idea being that anybody with lots of funding will have a good track record), and how much is due to their “intrinsic” excellence.

I am curious to see results for elite and non-elite researchers using exactly the same measure of PIE. Perhaps Chris can provide these numbers in the comments.

All of the usual caveats about citations as a measure of “impact” apply, obviously, and Lortie et al. recognize those caveats. But the conclusions here are, I think, robust to those caveats. Basically, there’s an upper limit to the mean quality of papers that any lab is capable of producing–and it turns out that even the most brilliant ecologist’s lab hits that limit at a funding level of several hundred thousand dollars or so.

One other caveat is that these are observational, comparative data. They aren’t necessarily a reliable guide to the effects of an “experimental manipulation” such as a reallocation of funding away from the elite. But they’re the only guide we have. Though having said that, I’d also be interested in analyses tracking changes over time in the funding level, PIE, and other relevant variables for individual researchers. Would it lead to the same conclusions?

In passing, one minor quibble I have is that Lortie et al. describe elite researchers as especially “collaborative”. But by this, they seem to mean simply that elite researchers have larger labs on average than non-elite researchers, not that they have more extensive collaborations with colleagues outside their labs. Which seems like a rather unconventional definition of “collaboration”.

There are analogies here to debates in economics over income and wealth inequality and their consequences, which are too obvious for me to ignore–but too incendiary for me to comment on!

This should be an interesting comment thread…

*Click through and read the whole thing for data on other characteristics of elite researchers–such as how hard they work and how much alcohol they drink!

**At least some of them are thinking about this stuff. I can’t find the citation, but a little while back NIH did analyses along these lines for their researchers, and discovered the same pattern of diminishing returns for really well-funded labs. Although the threshold funding level beyond which there was no further increase in efficiency was higher than in ecology, as you’d expect given the higher cost of much biomedical research.

Posted by: Jeremy Fox | May 6, 2012

Favorite popular science books about ecology?

What are your favorite popular science books about ecology?

I’m actually struggling to think of favorites. I tend not to read popular science related to ecology. I tend to read popular science books on topics I know less about because I like learning new things; I already know a lot about ecology. But I also have a vague sense that there’s not much popular ecology out there that’s not conservation-related, and I tend to go for popular science that’s more about ideas than applications. But it could well be that there’s lots of stuff I’m missing.

There are lots of great popular books about evolution. Richard Dawkins’ The Blind Watchmaker. Jonathan Weiner’s The Beak of the Finch. Marek Kohn’s A Reason for Everything (which kind of straddles the line between popular science and biography). Probably lots of others I haven’t read.

At the risk of hijacking my own thread, my favorite popular science books about other topics include Paul Hoffman’s The Man Who Loved Only Numbers (a biography of the legendary and eccentric mathematician Paul Erdős rather than a popular science book, but far too great not to include), Simon Singh’s Fermat’s Enigma (about Andrew Wiles’ proof of Fermat’s Last Theorem), and William Poundstone’s Fortune’s Formula (about the Kelly criterion for maximizing the long-run growth rate of wagers or other uncertain investments. An amazing story that, without stretching, involves everyone from Claude Shannon to Mafia bosses. It has evolutionary implications too, relating to the evolution of “bet hedging”, which aren’t noted in the book. Plus, it’s by William Poundstone, and he’s always good value).
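In passing, the Kelly criterion lends itself to a one-line sketch. This is my own illustration, not anything from Poundstone’s book: for a bet paying b-to-1 that wins with probability p, the Kelly fraction of your bankroll to wager works out to (bp − q)/b, where q = 1 − p.

```python
def kelly_fraction(p: float, b: float) -> float:
    """Kelly fraction for a bet paying b-to-1 that wins with probability p.

    Returns 0 when the edge (b*p - (1 - p)) is negative: don't bet at all.
    """
    f = (b * p - (1.0 - p)) / b
    return max(f, 0.0)

# A coin that lands heads 60% of the time, paying even money (b = 1):
print(kelly_fraction(0.6, 1.0))  # 0.2 -> bet 20% of bankroll each round
```

Betting more than the Kelly fraction actually lowers your long-run growth rate, which is the connection to bet hedging: the “cautious” strategy wins in the long run even though bolder strategies have higher expected payoffs on any single bet.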

Posted by: Jeremy Fox | May 4, 2012

Cool new Oikos papers

Some forthcoming Oikos papers that caught my eye:

Tuomisto (in press) is a “consumer’s guide” to evenness indices, showing how they are mathematically related to one another, and to partitionings of diversity into alpha and beta components. The take home message is the same as in my recent post: different “evenness” indices actually measure different things. To pick one, you need to know exactly what you’re trying to measure. And it is not interesting to simply ask whether different indices give you different results, because they will as a matter of mathematical necessity.
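Tuomisto’s point that different “evenness” indices measure different things is easy to demonstrate. Here’s a minimal sketch of my own (the index definitions are standard; the communities are made up): two common evenness indices can even rank the same pair of communities in opposite orders.

```python
# Two standard "evenness" indices applied to hypothetical communities.
import math

def pielou_J(p):
    """Pielou's evenness: Shannon entropy H' divided by ln(S)."""
    H = -sum(pi * math.log(pi) for pi in p if pi > 0)
    return H / math.log(len(p))

def simpson_evenness(p):
    """Inverse-Simpson evenness: (1 / sum p_i^2) / S."""
    return (1.0 / sum(pi * pi for pi in p)) / len(p)

A = [0.5, 0.125, 0.125, 0.125, 0.125]  # one dominant, the rest equal
B = [0.3, 0.3, 0.3, 0.05, 0.05]        # three codominant, two rare

print(pielou_J(A) > pielou_J(B))                   # True: Pielou ranks A more even
print(simpson_evenness(A) > simpson_evenness(B))   # False: Simpson ranks B more even
```

Both functions are “evenness” indices, but they weight rare vs. dominant species differently, so disagreement between them is a mathematical fact, not an empirical finding.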

Ackerman et al. (in press) use a comparative approach to test the popular and intuitively-appealing hypothesis that pollinator extinctions and temporal fluctuations in pollinator abundance select for flowers attractive to many pollinator species. By combining long-term census data on 37 species of Panamanian euglossine bees with data on bee and flower phenologies, Ackerman et al. show that the “risk hedging” hypothesis doesn’t work. There’s no tendency for generalized plants to be pollinated by more variable pollinators. Rather, the longer a plant flowers, the more bee species visit it, which suggests that plant-pollinator specificity is just a sampling phenomenon (the longer you flower, the larger the “sample size” you’re taking from the pollinator fauna). As a contrarian, I always like to see widespread intuitions put to the test–especially when they’re found wanting.

New et al. (in press) develop a stochastic, mechanistic predator-prey model for the dynamics of hen harriers and red grouse, fit it to long-term time series data using state space methods, and use the fitted model to evaluate alternative management strategies for predator suppression. Red grouse is a popular game species in the UK, but you can’t hunt grouse that hen harriers have killed. It’s illegal to cull harriers, leading to the idea of “diversionary feeding”–give the harriers alternative food so they stop hunting grouse. Which sounds like a good idea until you realize that (i) it’s expensive, and (ii) in the long run, you might just build up higher harrier abundances, which would lead to even heavier predation on grouse than would otherwise occur (a specific example of the general principle of apparent competition; Holt 1977). The results show that harriers do suppress both average grouse density and grouse cycle amplitude (red grouse are a famous example of cyclic dynamics, driven by density-dependent parasitism). That’s a really cool result in and of itself. There aren’t many good examples of “community context” mediating the stability of population cycles.  The results also show that diversionary feeding, as currently practiced, makes only a marginal difference at best.

Numerous other forthcoming papers caught my eye–we’ve got a lot of good stuff coming out. But I need to get back to marking, so they’ll have to wait for a future post.

Posted by: Jeremy Fox | May 4, 2012

Robert Sokal, 1926-2012

Biostatistician Robert Sokal died on April 9. Like most ecologists, I knew Sokal primarily through Biometry, his canonical statistics textbook with Jim Rohlf. But he was of course much more than the author of a classic textbook. In his research he was a pioneer of clustering methods, originally in the context of numerical taxonomy (phenetics). His personal life was dramatic: he fled the Nazis as a youth and was raised in China, eventually becoming part of one of the world’s finest evolutionary biology groups, at Stony Brook University. Ecology’s long-term trend toward increasing quantitative rigor owes a great deal to Robert Sokal.

Remembrances from Joe Felsenstein and Michael Bell, Chris Jensen, and Greg Mayer.

Lots of terms in ecology are only loosely defined, or can have somewhat different meanings depending on the context. Which can make it difficult to measure those things, because different measures often will behave at least slightly differently. “Diversity” is a good example–there are lots of different “diversity” indices. So how do you choose the “right” index, or the “best” index, of whatever it is you want to measure? And what do you do if your results differ depending on what index you choose?

This issue is one that I think many ecologists worry about a lot more than they should. It’s really very simple:

  • If you’re testing a precisely-defined model or hypothesis (which basically means a mathematical model, or a hypothesis derived from a mathematical model) that predicts the behavior of a particular index, then that’s the index you need to measure if you want to test that model.  For instance, if you want to test Bob May’s classic complexity-stability model, then your measure or index of stability needs to be the one May used, or one that you can show is tightly correlated with the one May used. And if that index of stability is difficult or impossible to measure (which in the case of May’s model, it is), then you either have to find some other way to test the model that doesn’t involve measuring stability, or you have to go ask some other question entirely.
  • If you’re testing an imprecisely-defined model, like a verbal or “conceptual” model, that doesn’t specify a choice of index, then the choice of index is completely arbitrary, so just pick one and don’t sweat it. Worrying (or arguing with colleagues) about which index is “best” in such contexts is totally pointless. There’s no way to choose the “best” index of something that’s imprecisely defined. You can’t choose the “best” measure of something unless you know, on independent grounds, exactly what that something is. Yes, this means your results may well depend on your choice of index. If that bothers you (and in many situations, it should), you should pick or develop a more precisely-defined model or hypothesis to test.
  • The only reason to calculate various indices of the “same” thing and then compare your results across those indices is if different indices give you complementary ecological information. For instance, if your hypothesis predicts that experimental treatment X will increase species richness but reduce Simpson’s diversity, then measuring both those “diversity” indices (species richness and Simpson’s diversity) helps you test your hypothesis. But it is not interesting or useful to calculate various indices simply to see if your results vary across different indices. Different indices are different. Of course they can behave differently. If they couldn’t, they wouldn’t be different indices.
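The richness-vs.-Simpson’s example in the last bullet is easy to make concrete. A hypothetical sketch (the communities are invented for illustration): a treatment can raise species richness while lowering Simpson’s diversity, because the two indices measure different things (a count of species vs. dominance structure).

```python
# Hypothetical abundance data: a "treatment" adds rare species but
# also makes one species strongly dominant.

def richness(counts):
    """Number of species with nonzero abundance."""
    return sum(1 for n in counts if n > 0)

def inverse_simpson(counts):
    """Simpson's diversity expressed as 1 / sum(p_i^2)."""
    N = sum(counts)
    return 1.0 / sum((n / N) ** 2 for n in counts if n > 0)

control   = [25, 25, 25, 25]     # 4 equally abundant species
treatment = [85, 5, 5, 2, 2, 1]  # 6 species, one now dominant

print(richness(treatment) > richness(control))                # True: richness went up
print(inverse_simpson(treatment) < inverse_simpson(control))  # True: Simpson's went down
```

Neither index is “wrong” here; they simply answer different questions, which is exactly why comparing them is only informative when your hypothesis predicts different behavior for each.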
Posted by: Jeremy Fox | May 2, 2012

Video: a snarky take on the kin selection debate

I’m very late on this, but it might be new to some readers, so here goes. Back in 2010 prominent evolutionary theorist Martin Nowak and two of his Harvard colleagues (one of whom is the even more prominent E. O. Wilson) published a lengthy and quite strident Nature paper attacking kin selection and inclusive fitness theory as an explanation for eusociality. They drew some equally strident replies (one of which had 137 signatures!), which you can find at the above link. For commentary on what became a major and ongoing fight, see here, here, here, and here (and probably lots of other places).

Or, just watch this Xtranormal video, from evolutionary biologist, evolutionary cartoonist (Darwin Eats Cake), and poet Jon F. Wilkins:

I think the video actually works very well as a concise summary of the replies to Nowak et al. And I find the snark both pretty funny and arguably justified as a response to the stridency of Nowak et al. But I can definitely see where some folks would disagree with me on both counts, particularly as some of the snark is directed at the authors of Nowak et al. rather than the content. So if you’re the sort of person who dislikes heated debates, or thinks that humor and satire have no place in scientific discourse, you’re probably better off not watching the video. But if you’re the sort of person whose reaction to heated debates is to grab some popcorn, you’ll love it.

Some astute commentary and background from Wilkins on the science and politics of this debate, and why he made the video, is here.

Do I dare make an Xtranormal video about the IDH? I’ve already got some dialogue written…

p.s. Full disclosure: while I can’t claim to be an expert on this debate, I do know more about it than the average bear from my work on the Price equation. I pretty much agree with the opponents of Nowak et al. on the substantive issues.

Posted by: Jeremy Fox | May 2, 2012

Crowdfunding science: the future?

Round 2 of the SciFund Challenge is now live. This is a crowdfunding initiative in which dozens of scientists get money to support their research by appealing for small donations directly from the public. Here’s an old Oikos blog guest post from one of the SciFund founders, talking about the initiative. Discussion of Round 2 at NeuroDojo and I’m a Chordata! Urochordata!. The latter has also been posting very interesting analyses of the results from Round 1 (look for posts tagged #SciFund).

I think SciFund is a great idea, and the founders deserve a lot of credit for getting it off the ground and continuing to put a lot of effort into making it better. But is it, as NeuroDojo suggests, the future of science funding? Well, there’s every reason to think it’s part of the future, as NeuroDojo documents. But while it’s very hard to say what the long-term future will bring, my guess would be that it’s not going to be more than a complement to more traditional funding sources (not that anybody’s claiming otherwise). I say that in part because massive crowdfunding successes, like musician Amanda Palmer, who raised $250,000 for her next album in one day, are exceptional. Just because she did it doesn’t mean everyone can do it. And while lots of good science is relatively cheap, lots of it isn’t. Yes, as NeuroDojo says, soon somebody is likely to break through and raise an NSF grant’s worth of funding by crowdfunding. But lots of somebodys? Or even, say, as many somebodys as NSF funds? It could well be that I’m just a traditionalist who lacks sufficient vision and imagination (seriously, that could well be the case here), but I have a hard time seeing that happening. When R.E.M. retired, I read an interview with bassist Mike Mills in which he suggested that in future the internet would allow many more bands to attain some modest degree of success than used to be the case, but that we’d never again see a band get as big as R.E.M. or U2. I suspect something similar might be true for crowdfunded science–it might allow lots of people to pay for quite modest projects, but allow few if any to do really big things (or even NSF-grant-sized things).

I also wonder (and worry) about how a future in which crowdfunding is a big fraction of all funding would shape what science gets funded. As far as I can tell, the Round 1 SciFund data are mostly about relating characteristics of investigators to their funding success. What about characteristics of their science? For instance, are conservation or other applied projects an easier sell than fundamental research? Can you crowdfund theoretical work? Conversely, how easy is it to crowdfund the scientific equivalent of snake oil (or even just a poorly-designed or boring project)? SciFund quite rightly involves investigators sharing advice on how to market their work to the public, and yes, regular research grants also involve some measure of salesmanship. But as anybody who’s ever seen commercial advertisements knows, there’s marketing and there’s marketing. I do wonder if crowdfunded science will end up being driven by what’s marketable, to an extent that would make old fogies like me uncomfortable.

Posted by: Jeremy Fox | May 1, 2012

Oikos blog used as course material

In the past few weeks the Oikos blog has been getting visits from a Moodle site associated with the undergraduate Plant Ecology course offered by Berea College. I assume this means that one or more posts are being used as course material. I tried to contact the instructor to find out more, but didn’t hear back (maybe I contacted the wrong person?). So I’ll just say that I’m flattered that anyone would find our posts sufficiently useful to make them worthy course material.

If you’re from Berea, I’d be interested to hear from you (in the comments, or via email) about how the Oikos blog is being used in the course.

Unfortunately, I can’t tell if the course site is directing students towards a specific post. But somehow I doubt that it’s this one. 😉

Posted by: Jeremy Fox | May 1, 2012

Carnival of Evolution #47

This month’s compilation of the best in evolutionary blogging is now up at Evolving Thoughts. Lots of good stuff as usual.

One post that caught my eye talks about how Stephen Jay Gould was wrong to claim that Cope’s Rule applies only to living organisms, not inanimate objects. This caught my eye not because Stephen Jay Gould was wrong (that’s hardly news), but because I like comparative analyses that probe the range of applicability of our ideas. If you find that Cope’s Rule applies to inanimate objects, or that college basketball wins have the same “species abundance distribution” as ecological communities, or that cars, like living species, have a triangular “body size-species richness” distribution*, you’ve learned something you wouldn’t have learned just by thinking about ecology. Such comparisons can suggest general, non-ecological explanations for ecological patterns. Such comparisons also can reveal patterns that are so ubiquitous that they’re perhaps not very interesting at all.

*as I believe was shown by an old paper of John Lawton’s in Oikos, but I can’t find the reference

Grad school can be daunting, especially for new grad students. It’s a totally different experience than undergrad, and it’s easy to feel like you don’t belong, like you have no idea what you’re doing, like you somehow fooled everyone just to get this far. In short, it’s easy to feel like an imposter. Over at The Contemplative Mammoth, Jacquelyn Gill has a very nice post on how she got over her own “imposter syndrome”. I particularly like the suggestion that acting as a mentor can be as helpful as receiving mentoring. Also the need to draw a line between generalized, ungrounded feelings of inadequacy, and specific respects in which you need to improve (a good adviser can help you draw this line). Some of her advice is less universal, but that’s only natural–to an extent, everyone needs to find their own source of strength.

UPDATE: For a massive compilation of grad students and profs blogging about imposter syndrome, go here. If you’ve ever felt like an imposter, you are definitely not alone!

Did you ever feel like an imposter? How did you deal with it?

Posted by: Jeremy Fox | April 27, 2012

Is ecology becoming too collaborative?

Is it just me, or is everyone overcommitting to too many collaborative projects lately? Everybody likes to collaborate these days. It’s easier to do these days, and the culture of ecology values it highly. And frankly, everyone sees it as a way to get their names on a paper while doing only part of the work. So we all agree to participate in lots of collaborations–to the point where we lack the time to really pursue any of them. Which, paradoxically, only encourages us to agree to more collaborations. After all, if your collaborators are moving slowly, you may as well start up some new collaborative project with someone else while you’re waiting for them to get their act together. And if you’re moving slowly, you may as well start up some new collaborative project with someone else while your other collaborators are waiting for you to get your act together.

Don’t get me wrong, I’m not saying we should all stop collaborating. But anyone else think this sort of dynamic is becoming more common?

Posted by: Jeremy Fox | April 23, 2012

Human instructors > computers

Like me, many of you are probably grading final exams right now. For a very funny illustration of why, for better or worse, computers will not be able to do this task for you any time soon, see here (see also here for an example of the sort of essay that computer grading algorithms like).

I leave it to you to consider the relevance of this to the use of citation indices and other computer-generated measures of “influence”.


Posted by: Jeremy Fox | April 22, 2012

Favorite ecology and evolution quotes? (UPDATED)

What are your favorite quotes about ecology and evolution?

There’s the final paragraph of Darwin’s Origin, of course–the “tangled bank” image, “There is grandeur in this view of life”, and all that. As good as anything I’ve ever read, but perhaps a bit too lengthy to be considered a “quote”. I’m thinking more of pithy one-liners here.

I really like Francis Crick’s remark, “Evolution is cleverer than you are”. Very true, very pithy.

For a bit of discussion and context about Dobzhansky’s famous line “Nothing in biology makes sense except in the light of evolution,” (which isn’t one of my all-time favorites for some reason), go here.

I’m having trouble coming up with good ecology quotes. Casual googling mostly turns up quotes about environmentalism. I do really like an old bumper sticker which some former grad student of Peter Morin’s stuck on the lab vehicle (to Peter’s annoyance, or so I heard). It read “Ecology is as easy as…” and then completed the sentence with the equilibrium conditions for the Lotka-Volterra competition equations.
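For readers who don’t remember them offhand, the sticker’s punchline presumably ran along the lines of the standard two-species Lotka-Volterra coexistence conditions (this is my reconstruction; I don’t know exactly what the sticker printed):

```latex
% Two-species Lotka-Volterra competition:
%   dN_1/dt = r_1 N_1 (K_1 - N_1 - \alpha_{12} N_2) / K_1
%   dN_2/dt = r_2 N_2 (K_2 - N_2 - \alpha_{21} N_1) / K_2
% Stable coexistence requires each species to increase when rare:
\[
K_1 > \alpha_{12} K_2
\qquad \text{and} \qquad
K_2 > \alpha_{21} K_1
\]
```

That is, each species must limit itself more strongly than it limits its competitor. “Easy”, indeed.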

UPDATE: The best ecology quotes seem to be those that insult ecology–and some of them are from ecologists! A commenter quotes Elton, writing in his famous Animal Ecology book, defining ecology as “The science which says what everyone knows in language that no one understands.” And this compilation of ecology quotes includes evolutionary biologist E. B. Ford’s remark, “It seems to me that ecology describes what animals do, when they are doing nothing interesting.”

I’d be a little cautious about the veracity of the quotes in this compilation (Box’s famous line about how “all models are wrong but some are useful” is misattributed, which makes one wonder about the accuracy of the other quotes). But there are a number of other zingers which I hope were actually said:

Peter Grant: “Pattern, like beauty, is to some extent in the eye of the beholder.”

John Maynard Smith: “Mathematics without natural history is sterile, but natural history without mathematics is muddled.”

Peter Kareiva: “Academic ecologists are renowned for arguing amongst themselves about all the things they do not know.”

Peter Morin: “There are some ecologists who put down lab experiments because we have abstracted things too much. Our response is that if you don’t start with a simple system, you won’t understand what’s going on anyway.”

This compilation also includes some lines of ecological verse, such as this bit from Ben Jonson:

Almost all the wise world
is little else in nature
but parasites or sub-parasites
Which reminds me of this bit of doggerel from Ogden Nash:

Big things have little things
Which sit on them and bite ’em
Little things have littler things
And so on ad infinitum

So, on the basis of n=2 datapoints, we can conclude that all ecological verse concerns parasites. 😉
Posted by: Jeremy Fox | April 20, 2012

Is ‘synthesis ecology’ a distinct scientific discipline?

Over at I’m a Chordata! Urochordata!, Jarrett Byrnes asks whether ‘synthesis ecology’ is a distinct scientific discipline. Interesting question, on which even current and former NCEAS postdocs can’t agree (not surprisingly, since if it is a discipline it’s presumably an emerging and therefore ill-defined one). I don’t have an answer either, but in lieu of an answer here are some random thoughts:

  • Why does it matter if ‘synthesis ecology’ is a distinct discipline or not? Is it so that people who consider themselves ‘synthesis ecologists’ can have a convenient shorthand to summarize what it is that they do? (I can certainly understand that) Or to make it easier to do things like say to your Head of Department “‘Synthesis ecology’ is a hot field and we should hire someone working in that area.” Some other reason(s)? I ask because I have a bit of a sense that some folks would really like synthesis ecology to be a field, and want to figure out how to make it come into being. Which raises the question of why you’d want to do that. I emphasize that I do mean that as an honest question, which I don’t know how to answer. I’m not asking the question because I think the answer is “it actually doesn’t matter.”
  • The entire culture of ecology has changed since the mid-90s. Data sharing is now much more common and valued, many more of us work in much more collaborative ways, meta-analyses and other syntheses of existing data are more common, and more people are choosing what questions to ask based on the available data rather than the other way around. Perhaps there’s no distinct discipline of ‘synthesis ecology’ because we’re all ‘synthetic ecologists’ now?
  • Following on from the previous thought: To the extent that the entire culture of ecology is quite ‘synthetic’ now, that may actually make it more difficult to establish ‘synthetic ecology’ as a distinct discipline. For instance, it can be difficult to argue for hiring a ‘synthesis ecologist’ if your Head of Department (or equivalent) can respond “But half the ecologists in the department already do meta-analyses, participate in working groups, and use sophisticated quantitative methods to analyze big datasets. Why do we need to hire another person who does those things? Especially someone who doesn’t have a strong grounding in any established ecological discipline?”
  • If ‘synthesis ecology’ is a distinct discipline, presumably it’s a methodologically-defined one. Which would mean it presumably shares some features in common with other methodologically-defined fields of science, and contrasts in some ways with fields of science defined by subject matter. It’s perhaps worth noting that different sorts of people tend to be attracted to these two kinds of disciplines. (I’m definitely a ‘subject matter’ guy myself)
  • How are ‘synthesis ecologists’ different from statisticians (or ‘data scientists’, as some statisticians have taken to calling themselves lately)? Because when I think of a methodologically-defined discipline that focuses on extracting information from existing data, I think of statistics. Maybe the answer is, “Synthesis ecologists also have the ecological grounding to choose the questions as well as do the analyses to answer them.” But if that’s the answer, that raises again the question of whether synthesis ecology is a distinct discipline, or just regular ol’ ecology that happens to be pursued via certain methods.
  • I’ve read some suggestions to the effect that the discipline of ‘synthesis ecology’ is culturally defined–synthesis ecologists are committed to collaboration and data sharing, for instance. Which strikes me as a slightly odd way to define a discipline. So if I pull together a big dataset and perform a meta-analysis on it without collaborators, I’m not doing ‘synthesis ecology’?
  • If it is a field, is it ok if we call it ‘synthetic ecology’ instead? Because ‘synthesis ecology’ just sounds awkward to me. 😉
Posted by: Jeremy Fox | April 18, 2012

A key problem in interpreting observational data (UPDATEDx2)

I’m a bit late on this, but I just found this very nice blog post from Bob O’Hara on a little-recognized (at least in ecology) problem in interpreting observational data: “errors in variables”. That is, for whatever reason, there’s random error in the values of your independent variables.
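The core of the problem is easy to see in a simulation. This sketch is mine, not from Bob’s post: random error in the independent variable biases the estimated regression slope toward zero (“attenuation”), by a factor of var(x) / (var(x) + var(error)).

```python
# Simulate the "errors in variables" problem: regress y on x measured
# with error, and watch the slope estimate shrink toward zero.
import random

random.seed(1)
n, true_slope = 10_000, 2.0
x = [random.gauss(0, 1) for _ in range(n)]                 # true predictor
y = [true_slope * xi + random.gauss(0, 0.5) for xi in x]   # response
x_obs = [xi + random.gauss(0, 1) for xi in x]              # x measured with error

def ols_slope(xs, ys):
    """Ordinary least squares slope of ys on xs."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(xs, ys))
    den = sum((xi - mx) ** 2 for xi in xs)
    return num / den

print(ols_slope(x, y))      # close to 2.0 (the true slope)
print(ols_slope(x_obs, y))  # close to 1.0: attenuated by var(x)/(var(x)+var(u)) = 0.5
```

Note that the attenuation doesn’t shrink with sample size; collecting more data doesn’t fix it, which is part of what makes the problem so insidious for observational studies.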

This is a really important problem, and lurking just under the surface of Bob’s post are larger issues, from the utility of popular statistical methods like path analysis, to what we mean by “causality” and how best to infer it. Will try to post on some of these issues in future.

Click through and read the whole thing. The comment thread is good too. Also, um, interesting. It’s particularly interesting to see the original Science paper Bob criticizes defended by a commenter who argues that the authors of the Science paper are famous ecologists like Nick Gotelli who know a lot about statistics. Because famous ecologists never make mistakes. Nope, never.* Also interesting to see the same commenter imply that, by publishing his criticisms in a blog post on which anyone is free to comment, Bob somehow wasn’t giving the authors a chance to respond. And to see the same commenter imply that Bob’s criticisms aren’t worthy of notice because they weren’t peer-reviewed. I can only assume the commenter is opposed to people criticizing the work of others in face-to-face conversations, since such criticisms are even less “public” and aren’t peer-reviewed. I also assume the commenter believes that Science‘s peer review process always weeds out weak comments. (UPDATE: On further thought the snark here goes too far. It’s possible that the commenter in question didn’t intend these implications. But I stand by my other criticisms.) Finally, it’s interesting to see the same commenter defend the Science paper by…noting that completely different ways of studying the same topic could also be criticized on completely different grounds. Which isn’t something Bob, or anyone else, ever denied. Not to mention it’s a blanket defense against any and all criticism of anything (since every approach has its own limitations). And it’s the same argumentative strategy used by intelligent design advocates (who see any criticism of evolution as support for their own position).

At some point I’m going to do a post highlighting some cases of productive, useful debates in ecology (feel free to suggest examples in the comments). Because I’m getting depressed seeing so many very prominent people who should know better setting such poor examples.**

*In practice, I’m sure that the commenter** who defended the original Science paper with this argument didn’t mean to claim that famous ecologists never make mistakes. But if that wasn’t the intended claim, why raise the authors’ identity at all? Why not just stick to arguing about whether or not the putative mistake was actually a mistake? What possible legitimate purpose can be served by pointing out the authors’ fame and experience? I mean, heck, Nick Gotelli, one of the authors of the Science article Bob criticizes (and whose name is invoked by the commenter in defense of the paper), is perfectly happy to address criticisms of his work on their merits, even when they’re published on blogs (see, e.g., his comments on this post). He doesn’t just respond to criticisms by invoking the authority of his own name.

**No, I won’t reveal the commenter’s identity here–think of it as incentive to click through to Bob’s post.

UPDATE #2: On still further reflection, and after correspondence with some colleagues (some initiated by me, some initiated by them), I can see how much of this post could reasonably be viewed as making a mountain out of a mole hill, treating a legitimate but minor disagreement about the appropriate response to Bob’s post with more seriousness or intensity than was called for. I decided to post because it seemed to follow on naturally from my recent post on the debate over Adler et al. Which may just be another way of saying that I was, unconsciously, looking a bit too hard for an excuse to re-emphasize some of the points made in that post. I do think that it’s important that ecologists debate one another in productive ways, and I do think that one way to help ensure that happens is to point out when others are debating in unproductive ways. I don’t plan to stop doing that, but based on the feedback I’ve received I probably need to choose my battles a bit more carefully.

This seems like a good place to say again how much I appreciate reader feedback, whether in correspondence or in the comments. Without that feedback, the blog would be much worse, I’d make even more missteps than I do, and I’d recognize fewer of them.

Posted by: Jeremy Fox | April 18, 2012

Scroll down for cool new guest post!

A new guest post by Ayco Tack and colleagues, discussing the background and implications of their interesting new Oikos paper, is now up. But due to a quirk in the way WordPress schedules posts, the post appears further down the page, inserted between two older posts. Don’t let that stop you from scrolling down to find out why they just published a paper criticizing their own previous work (!)

Posted by: Jeremy Fox | April 18, 2012

What’s the worst you’ve ever been mis-cited?

As an academic, sooner or later you’ll be mis-cited. Sometimes badly. More than once as a reviewer I’ve had to correct authors who were trying to cite my papers (and those of others) in support of a claim that was actually the opposite of what those papers demonstrated! Which I suppose is the maximum mis-citation (“MMC”). Of course, those citations weren’t actually published. But if I hadn’t been the reviewer, I presume they would’ve been.

So what’s the worst you’ve ever been mis-cited?

HT Ayco Tack for the idea for this post.

Posted by: Jeremy Fox | April 16, 2012

Yes, IDH zombies are still worth worrying about (UPDATED)

Via Twitter, Nate Hough-Snee asks whether zombie ideas about the IDH are actually taught any more, and suggests that the idea that the IDH is still popular is itself a zombie idea. That’s a question I’ve asked myself about another zombie. But I was wrong. And in this case, well, sorry Nate, I sincerely wish you were right, but…

Searching Web of Science for “intermediate disturbance hypothesis” yields the following plot of the number of citations/year, which includes citations only to items indexed in Web of Science:

See any sign of declining popularity there? Neither do I.

And here’s a plot of the citations/year in the Web of Science database of two key IDH papers. Connell (1978) coined the term “intermediate disturbance hypothesis”, and Huston (1979) is the source of a key zombie idea about how disturbance affects coexistence. I left out Hutchinson (1961) because it’s often cited for reasons that have nothing to do with Hutchinson’s zombie idea about how disturbance can promote coexistence. Also plotted are data for several papers theoretically or empirically refuting zombie ideas about the IDH (Chesson and Huntly 1997, Pacala and Rees 1998, Mackey and Currie 2001, Roxburgh et al. 2004, Shea et al. 2004). Note the log scale on the y-axis.

See any sign that non-zombie ideas about disturbance are replacing zombie ideas? Or that people are losing interest in zombie ideas about how disturbance affects coexistence and so are no longer citing the papers that first developed those ideas? Neither do I.

As to whether zombie ideas about the IDH are still taught in undergraduate curricula, they aren't any more at Calgary, but I doubt we're typical. Certainly, zombie ideas about the IDH are still in our textbooks. The current (4th) edition of Begon et al.'s Ecology: From Individuals to Ecosystems* continues to summarize an entire section (8.5) of the chapter on competition with statements like "Even when interspecific competition occurs it does not necessarily continue to completion", and continues to devote an entire subsection (8.5.3) to a summary of Hutchinson's zombie idea about how fluctuating environments promote coexistence. The 4th edition of Ricklefs and Miller's Ecology includes in its online review material for the chapter on competition theory the claim that predators can promote coexistence by reducing competitor populations to a low level, so that "resource limitation is no longer a factor", and repeats this claim in a review question for the chapter on competition in nature.

Believe me, I would love to discover that my crusade against these zombie ideas is unnecessary because everybody actually agrees with me already, and nobody teaches these zombies anymore. But all the evidence indicates that many, many ecologists still believe in these ideas, still base ongoing research on them, and still teach them. So yeah, the IDH zombies are still worth worrying about.

*UPDATE: In fairness, the 4th edition of Begon et al. actually does a better job on disturbance effects on coexistence than the second edition does. The introductory passages, while not zombie-free, are improved, and unless I missed it there’s no longer any discussion of Huston (1979).

Posted by: aycotack | April 16, 2012

Why shoot yourself in the foot?


A while back I polled readers to ask what new features they’d like to see on the Oikos blog. One popular choice was guest posts by authors of recent and forthcoming Oikos papers. Ask and ye shall receive: here’s the first one! It’s by Ayco Tack and colleagues, the authors of the lead article in our April issue (Tack et al. 2012 Oikos 121:481). Their article presents an important critique of the burgeoning field of community genetics. Based on reanalyses of published experiments, Tack et al. show that the importance of intraspecific genetic variation for community ecology has been systematically overestimated. This article caught my eye not just because it is novel and important, but because the authors include their own previous work in their critique. It’s not often you see authors criticizing themselves! I was curious to learn more about why anyone would do this, and I thought our readers would be curious as well. So I invited Ayco and his colleagues to write a guest post, which appears below. Thanks very much to Tack et al. for taking the time to share the ‘story’ of their very nice paper.

-Jeremy Fox


In a recent paper in Oikos, we examine the foundations of community genetics. As part of this paper, we reanalyze some of our own earlier data and point out that the role of community genetics may, to some extent, have been exaggerated, or at least that the quest for evidence has been tilted in a certain direction.

The publication of a study like ours may seem like a deliberate act of shooting ourselves in the foot. Yet, we believe we have done so for a good reason – and we have actually enjoyed the process.

The rationale for this paper can partly be traced to its genesis. Two of us (Ayco Tack & Marc Johnson) first met in an obscure bar during a conference on community genetics in Manchester. There, and at a later meeting, we pondered how different authors can interpret their data in very different ways. What we discovered was that while Johnson & Agrawal ("Plant genotype and environment interact to shape a diverse arthropod community on evening primrose") and Tack et al. ("Spatial location dominates over host plant genotype in structuring an herbivore community") came with very different titles and abstracts, the underlying data actually reflect a common pattern: the impact of genotypic variation simply depends on the scale at which you look at it.

More fuel was added to this notion several months later, when Marc served as the opponent at Ayco's PhD defence in Helsinki. In scrutinizing a chapter on the effects of spatial location vs. genotype, he suggested to Ayco and his coauthor Tomas that a more comprehensive re-analysis of the published literature would offer a good way to examine potential biases in our perception of genotypic effects. So we did, and the paper is now out.

In collaborating on this paper, we believe we have lived up to some of our own ideals for doing science. As scientists, should we not be open-minded in the real sense of the word? What would science be if we all got hung up on our favorite ideas and kept interpreting all our data in their favor? How much faster might we advance if we picked up new ideas, critically re-examined our own data in that light, and proposed new ways to weigh "old" and new perspectives together? And most importantly: if we stumble upon a pervasive problem in the field, might our colleagues be more willing to accept it if we are first willing to demonstrate it with our own work? (For a classic example, see Paine's paper on food webs.)

Rather than raising monuments built to last, we believe we should ask ourselves: Are we finished when we publish our data? Do we expect (or even want) results that stand rock-solid against the new waves of science, with no need for re-interpretation? Or is it, in fact, that the most valuable route to knowledge is based on an open mind, and on remoulding old truths with new ideas?

So why do we refrain from reinterpreting our own work? Do we fear that it will devalue published results? We believe it will not. By contributing ourselves to linking old findings to new truths, we are more likely to make them part of living science than to lose them in the scientific sediments.

So – why shoot yourself in the foot? Because it is actually refreshing. And it may, in fact, offer the fastest walk forwards…

Ayco Tack, Marc Johnson & Tomas Roslin

One of my Ph.D. students will be taking his candidacy exam soon. This is the exam, also known as “qualifiers” or “orals” at some universities, that tests whether you have the background knowledge to get a Ph.D.

In order for candidacy exams to fulfill their purpose, profs have to ask hard questions. An excellent way to figure out how much a student knows (and to see how they think through problems to which they don’t immediately know the answer) is to keep pursuing a line of questioning until you exhaust the student’s knowledge. Of course, that’s not the only reason profs ask hard questions. One unofficial purpose of candidacy exams everywhere is to let profs prove to students that they know more than the students do. Plus, it’s a traditional ritual profs had to go through themselves, and so they’re darn well going to make their students go through the same ritual. 😉

My own candidacy exam actually went quite well. Maybe my profs were feeling kindly that day*, but they never really pushed me to the point where I just had no idea how to answer. The question I remember best is one Peter Morin asks all his students: Name five famous female ecologists or evolutionary biologists and summarize their contributions to the field.

I know of an amphibian community ecologist from Duke whose exam began with one of her committee members sliding the latest issue of an evolutionary journal (which had just arrived in the mail that morning) across the table at her. The cover featured drawings of several fossil fish. The committee member asked, “So, what do those fossils illustrate about fish evolution?”

At Rutgers, one of my friends was once asked, “You’re reading a French biology journal and you come across the acronym ADN. What does it stand for?” Note that Rutgers is an English-language university, and neither the student nor the prof knew French. (answer below)

I don’t have any anecdotes relating to any questions I myself have asked. For better or worse (and I think it’s a bit of both), candidacy exams in my department are somewhat narrowly focused and there are rules to prevent profs from asking overly wide-ranging or off-the-wall questions. So I don’t know that any of my students will ever get to have any really good anecdotes about the ridiculous questions they were asked during their candidacy exams.

So what’s the hardest (or weirdest, or most memorable, or whatever) question you were asked during your candidacy exam?


Answer: ADN stands for DNA. Apparently the prof had just learned that the French acronym for deoxyribonucleic acid was ‘ADN’, and found this bit of trivia so interesting that he decided to ask the student about it.

*Or maybe I had left enough books on my desk that morning. The rumor in the Morin lab was that Peter would come in to your office very early on the morning of your exam, look at all the books on your desk (which you had presumably been reading in order to prepare for the exam), and not ask you any question that you could answer by reading any of those books. I don’t actually know if Peter does this. But it does raise the possibility that, if you had enough books on your desk, you could prevent Peter from asking any questions at all. 😉

Posted by: Jeremy Fox | April 16, 2012

From the archives: defending microcosm experiments

A microcosm paper on which I am a co-author just got rejected. Which seems as good an excuse as any to repost this, in which I shoot down all the commonly-voiced blanket objections to microcosms in ecology.

I have yet to hear a convincing unconditional objection to any widely-used study system or research approach in ecology. Every study system and research approach has its strengths, and its limitations. Which is something that seems to be widely acknowledged in ecology–except when it comes to microcosms.

Posted by: Jeremy Fox | April 13, 2012

Charles Darwin is going to be in an animated pirate movie

The movie version of The Pirates! In an Adventure with Scientists! comes out later this month. I can’t wait!

Posted by: Jeremy Fox | April 12, 2012

Everything you ever wanted to know about the Price equation

…is covered in this review by the great Steven Frank. I know some readers think of me as ‘that Price equation guy’. But what I know (and continue to learn) about the Price equation, I mostly got from reading (and continuing to ponder) Steven Frank’s stuff.

This one paper includes a basic derivation and explanation of the Price equation, discussion of a huge range of technical and conceptual issues including some recent criticisms of the approach, and discussion of applications to various practical problems of interest to empiricists. There’s even an entire subsection devoted to a recent Oikos paper by myself and Ben Kerr, using an extension of the Price equation to explain how biodiversity affects ecosystem function! Like I said, I really admire Steven Frank’s work and I’ve learned a lot from reading him. So I’m quite flattered that he’d find something I’ve written on the Price equation interesting enough to be worth discussing in a major review.
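For readers who haven't encountered it: in its most familiar form (setting aside the many variants and extensions Frank covers), the Price equation partitions the cross-generational change in the mean value of a trait into a selection term and a transmission term:

```latex
\bar{w}\,\Delta\bar{z} \;=\; \underbrace{\operatorname{Cov}(w_i, z_i)}_{\text{selection}} \;+\; \underbrace{\operatorname{E}\!\left(w_i\,\Delta z_i\right)}_{\text{transmission}}
```

Here $w_i$ is the fitness of the $i$th type, $z_i$ its trait value, $\Delta z_i$ the change in trait value between ancestors and their descendants, and $\bar{w}$ and $\bar{z}$ are population means. Frank's review derives this result and works through what each term does and doesn't mean.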

Posted by: Jeremy Fox | April 12, 2012

The paradigm of ‘paradigm shifts’ turns 50

Thomas Kuhn’s The Structure of Scientific Revolutions is 50 years old this year. It’s one of the most famous books on philosophy of science ever published, and one of the only ones to become well-known (albeit usually in a second-hand way) among scientists themselves. Kuhn focused on how scientific ideas get established, and later discarded or replaced, and his coinage of the term ‘paradigm shift’ proved especially resonant. See here for some ecological discussion of ‘paradigm shifts’. Kuhn was one of the first philosophers of science to pay serious attention to how science is actually practiced, and to take his philosophical inspiration from detailed consideration of the actual history of science. That’s surely a big reason why his ideas found an audience with practicing scientists.

David Kaiser has a very good overview essay at Nature, in particular noting some aspects of Kuhn’s book which are well-known among philosophers but much less well-known among scientists. Kuhn’s usage of ‘paradigm shift’ was infamously (although perhaps productively) ambiguous. And his view of alternative paradigms as ‘incommensurate’, meaning that they literally can’t be compared, is not a view to which many scientists would subscribe. Scientists tend to think of new ‘paradigms’ as improvements over the old ones, so that science exhibits cumulative progress of a sort that Kuhn denied. Kuhn himself wasn’t anti-science, but his ideas were one source of inspiration for postmodernist critiques of the objectivity of science.

I actually think ecologists would benefit from more familiarity with philosophers of science besides Kuhn (and Karl Popper, the other philosopher of science to become well-known among scientists). But I also think they’d benefit from more first-hand familiarity with Kuhn and Popper, since the second-hand ‘pop’ versions of Kuhn and Popper familiar to most practicing scientists aren’t actually all that helpful. And as Kaiser notes, Structure is actually short, clear, and very readable, and the most controversial claims are put forward in a suggestive rather than insistent style. The University of Chicago Press has just published a new edition of Structure with an introductory essay by Ian Hacking, whose own philosophical works I’ve found very accessible and helpful. So if you’re looking to get some first-hand familiarity with philosophy of science, the 50th anniversary of Structure is as good an excuse as any to take the plunge.

Posted by: Jeremy Fox | April 10, 2012

Zombie ideas about disturbance: a dialogue (UPDATED)

UPDATE: Just to be clear, disturbance, and environmental variability more generally, can promote stable coexistence. They just can’t do so via the mechanisms that the Professor is trying to teach in this dialogue. I’ve been clear about this in previous posts, but it was suggested to me that anyone who only reads this post (and doesn’t read the comments) could get the wrong idea.


The scene: An undergraduate ecology lecture. The Professor has been teaching students about the effects of disturbance on competitive exclusion.

Professor: In summary of this section of the course, the great diversity of species to be found in a community is one of the puzzles of ecology. In an ideal world the most competitive species (the one that is most efficient at converting limited resources into descendants) would be expected to drive less competitive species to extinction. However, this argument rests on two assumptions that are not necessarily always valid.

The first assumption is that organisms are actually competing, which in turn implies that resources are limiting. But there are many situations where disturbance, such as predation, storms on a rocky shore, or frequent fire, may hold down the densities of populations, so that resources are not limiting and individuals do not compete for them.

The second assumption is that when competition is operating, one species will inevitably exclude the other. But in the real world, when no year is exactly like another, the process of competitive exclusion may never proceed to its monotonous end. Any force that continually changes direction at least delays, and may prevent, an equilibrium or a stable conclusion from being reached. Any force that simply interrupts the process of competitive exclusion may prevent extinction and enhance diversity.*

Clever student: [raises hand]

Professor: Yes, a question in the back.

Clever student: I’m confused about both of those assumptions. I just don’t understand how they prevent competitive exclusion.

Professor: Can you be more specific? What exactly don’t you understand?

Clever student: Well, with the first assumption, if a species is experiencing really high mortality rates from predation or fires or whatever, how come it doesn’t just go extinct?

Professor: Because if its density is low, then resource levels will be high, which allows the species to have a very high reproductive rate.

Clever student: So there’s really rapid mortality, but really rapid reproduction, and the two balance out?

Professor: Yes.

Clever student: So then wouldn’t anything that reduces reproduction even a little, or increases mortality even a little, like just a little bit of competition, still lead to extinctions? I mean, in that simple resource competition model from that Tilman guy that you showed us to teach us about competitive exclusion, there were per-capita mortality rate parameters for each competitor. You can jack up those mortality rate parameters, and yeah, you reduce the equilibrium abundance of the dominant competitor, and you raise the equilibrium resource level, but you still get competitive exclusion. The species that can reduce the limiting resource to the lowest equilibrium level relative to its competitors still wins. It’s just that everybody’s R* value increases as mortality rates increase.

Professor: [pauses for thought] Hmm… I see what you mean. But remember, we’re envisioning a situation in which mortality is so high that resources aren’t limiting, so there’s no competition at all.

Clever student: You mean, species are there and they’re consuming resources, but their densities are so low that their consumption doesn’t reduce resource levels at all? How can that be? If they’re there, they have to be consuming some resources, right? And if they’re consuming some resources, then they’re surely at least slightly reducing resource levels, which means there’s at least some competition, right? Plus, isn’t it super-unlikely that mortality rates would be just high enough to reduce species to near-zero density, so that there’s no competition, but no higher? So that species can still persist rather than just being totally wiped out because they’re getting killed faster than they can possibly reproduce?

Professor: I think you’re over-thinking things. Rather than thinking about hypotheticals, think about real natural systems. Think of harsh environments like rocky shores and alpine meadows. There is in fact a lot of disturbance and mortality in those environments, which does reduce population densities, and species do coexist in those environments. So there you go.

Clever student: Yes, I know all that, but that doesn’t answer my question. I want to know why they coexist. I mean, how do we know they’re not just coexisting for some reason that doesn’t have anything to do with their densities being low? ‘Cause I read some microcosm papers where they manipulated mortality rates of competing species and found that you got just as much if not more competitive exclusion at high mortality, and if you didn’t it wasn’t just because competitor densities were low.

Professor: [not sure how to answer, so he bails out] We're running a bit short of time, and I'm not sure I'm familiar with the papers you refer to, although I don't know that microcosm studies are relevant to what happens in nature. Why don't you come to my office hours later and we'll talk more about it? Now, you said you also had a question about the other assumption?

Clever student: Yes, I don’t understand the whole “interruption” thing. Like, if I give my buddy $2 today, and he gives me $1 tomorrow, then I give him $2 the next day, and we keep alternating like that, eventually I’ll run out of money and he’ll end up with all the money. Even though every other day my losses are interrupted by him giving me $1.

Professor: Ok, I see where you’re confused. The point of the second assumption is that the interruptions slow down the rate of exclusion, here the rate at which you go broke, compared to what would happen if there were no interruptions. If you gave your friend $2 every day, with no interruptions, you’d go broke a lot faster.

Clever student: Yes, that’s true, but why is that the right comparison? I mean, I’d also go broke slower if I only gave my friend $0.50 every day, with no interruptions. I’m losing the same amount of money every day, I’m just losing less than if I give my friend $2 every day. That’s why I go broke slower when I give him $2 every other day, and he gives me $1 every other day–on average I’m only losing $0.50 per day in that case. So I go broke at the same rate, whether I give him $0.50 every day, or we alternate between me giving him $2 and him giving me $1.  It seems like how much money I’m losing on average is all that matters. The interruptions are just noise, aren’t they? They don’t actually have any effect on anything.

Professor: Well, remember that we learned that the frequency of environmental change matters. Hutchinson said so. The environment has to switch from favoring one competitor to favoring another with intermediate frequency, on the same timescale as competitive exclusion. In your example, you and your friend are favored on alternate days, which is a really high switching frequency, not an intermediate frequency. In that kind of case, Hutchinson says that species just average across the fluctuations.

Clever student: But what if I give my friend $2 every day for, like, a week, and then he gives me $1 every day for a week, and so on? Or a month, whatever. I’m still losing $0.50 per day on average, so I’m still going broke at the same rate.

Professor: Hmm, yes, I see what you mean. But at least when you and your friend are favored for longer periods of time, those periods of time are economically relevant. I mean, at the end of a week when you were favored, you’ll have saved up enough money to buy a beer. [laughs] As ecologists, it’s often difficult to study what happens in the long term, so we just focus on shorter, ecologically-relevant timescales.

Clever student: [does not laugh] But I thought we were trying to explain coexistence. Like, real coexistence, not just temporary blips that sort of look like coexistence. I mean yeah, sure, the long run is hard to study–but I’d still have the same question even if the long run were really easy to study. So I’m even more confused now. Are you saying that species will go extinct no matter what the frequency of disturbance, but it’s ok, because along the way they’ll sometimes increase in density? Isn’t that totally changing the question?

Professor: Ok, you’re right, average conditions do matter. So consider a case in which you give your friend $2, and the next day he gives you $2, and so on. In that case, neither of you loses or wins any money on average.

Clever student: But that’s just the same as if we each give the other no money. Or we each give each other the same amount of money on the same day rather than alternating days. Or there’s a continuous steady flow of money from his bank account to mine, and an equal continuous steady flow of money from my account back to his. The day-to-day fluctuations in who gives money to who don’t matter at all, all that matters is the fact that we’re each breaking even on average.

Professor: You seem very focused on the average conditions and the long-term outcome, to the exclusion of the fluctuating dynamics that disturbance generates. Those fluctuations are a very interesting part of ecology, you can’t just ignore them. You can’t just ignore dynamics.

Clever student: [getting frustrated] I’m not ignoring dynamics–competitive exclusion is a kind of population dynamic. The abundance keeps going down until it hits zero. And I’d be happy to pay attention to fluctuating dynamics if you gave me a reason to. It’s you who said that “Any force that simply interrupts the process of competitive exclusion may prevent extinction and enhance diversity”. You didn’t say “Any force that simply interrupts the process of competitive exclusion creates interesting fluctuations in species abundances on the way to exclusion, which is something we can’t ignore even though it has no effect on diversity.”

Professor: Ok, I see your point on dynamics, you certainly do have a way with words. I think where you’re confused is that you’re trying to separate two effects of disturbance that just can’t be separated. Adding disturbance to a disturbance-free system both changes the average conditions, and interrupts approach to equilibrium. Those two things always go hand in hand, and you don’t really need to worry about separating their effects. It’s all just effects of disturbance.

Clever student: But they don’t always go hand in hand–there are ecological systems with similar average conditions and different amounts of variability around the average. And there are ecological systems with different average conditions but similar amounts of variability. Plus, we can do experiments to manipulate average conditions and variability independently of one another, can’t we? Don’t scientists do that all the time–do experiments to tease apart effects that usually co-occur?

And if you don’t separate effects of changes in the average from effects of changes in the variability around the average, how do you know that the variability is what matters? Because that’s how you explained it–you talked about variability, you talked about “interrupting” the approach to equilibrium. I mean, that’s why we call them “disturbances”, right? They disturb what we think of as the “normal” course of events. But based on what you’ve told us, disturbances don’t actually matter as disturbances at all, since what they do isn’t really to disturb the normal course of events, it’s to change the normal course of events. That is, change the average conditions. Which of course you could also change without having any disturbances. So I don’t see what’s so special and unique about disturbances as opposed to just any old change in average environmental conditions.

So I guess that’s really my big question: what can introducing disturbances do that can’t be done by just making the equivalent change in average environmental conditions? Why does variability per se matter?

Professor: Umm…
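For readers who want to check the Clever Student's first argument for themselves, here is a minimal numerical sketch (with purely hypothetical parameter values) of the kind of Tilman-style resource competition model the student refers to: two consumers, one resource, Monod growth. Raising both species' mortality rates raises everyone's R* value, but the species with the lowest R* still excludes the other.

```python
# Minimal sketch of Tilman-style resource competition: two consumers,
# one resource, Monod growth, simulated by simple Euler integration.
# All parameter values are hypothetical, chosen only for illustration.

def r_star(r_i, K_i, m):
    """Break-even resource level R*: the R at which growth equals mortality."""
    return K_i * m / (r_i - m)

def simulate(m, steps=200_000, dt=0.01):
    S, D = 10.0, 0.2              # resource supply concentration, turnover rate
    r = [1.0, 0.9]                # maximum per-capita growth rates
    K = [1.0, 2.0]                # half-saturation constants
    c = [0.05, 0.05]              # resource consumed per unit growth
    R, N = S, [1.0, 1.0]
    for _ in range(steps):
        growth = [r[i] * R / (K[i] + R) for i in range(2)]
        dR = D * (S - R) - sum(c[i] * growth[i] * N[i] for i in range(2))
        R = max(R + dR * dt, 0.0)
        N = [max(N[i] + N[i] * (growth[i] - m) * dt, 0.0) for i in range(2)]
    return N

for m in (0.3, 0.5):              # low vs. high mortality rate
    rs = [round(r_star(1.0, 1.0, m), 2), round(r_star(0.9, 2.0, m), 2)]
    print(f"m = {m}: R* values = {rs}, final densities = "
          f"{[round(n, 3) for n in simulate(m)]}")
```

In both runs species 1, with the lower R*, excludes species 2; raising mortality just raises both R* values (from 0.43 and 1.0 to 1.0 and 2.5 here), exactly as the Clever Student argues.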


This post is directed at the many readers who are still on the fence about whether the intermediate disturbance hypothesis (IDH) is a zombie idea. My goal with this post is to try to force those fence-sitters to come down on the correct side, by reminding those readers that this isn’t a purely intellectual debate among academics. This is about what we teach to our students. And as teachers, we need to be able to answer our students’ questions. I don’t think the questions I’ve put in Clever Student’s mouth are at all unreasonable. Indeed, and in all honesty, they’re exactly the sort of questions that I’d expect my University of Calgary undergraduates to ask. They’re certainly the sorts of questions any undergraduate who reads this blog would ask (and there are many undergrads who do read this blog, at universities around the world). Maybe most students wouldn’t ask these questions in quite so articulate or pointed a manner, but they would ask them. They’re perfectly natural questions that arise from the standard way in which the IDH is explained in textbooks.

So, for those of you who are still on the fence about whether the standard, textbook explanations of the IDH are zombie ideas: How would you answer Clever Student’s questions?

Note that you can’t answer Clever Student’s questions by claiming that the IDH actually has to do with competition-colonization trade-offs, or trade-offs between disturbance tolerance and competitive ability, or successional niches, or the storage effect, or other factors not mentioned by the Professor in his opening lines. Yes, those other factors are relevant to thinking about the effects of disturbance on coexistence. They’re also irrelevant here, because the way in which the Professor explains the IDH is the standard explanation that actually appears in numerous textbooks, and in the Introduction sections of many, many papers. If you think the way to answer Clever Student’s questions is to redefine the IDH by dropping both of the Professor’s assumptions and explaining the effects of disturbance completely differently, then you’re admitting that the standard, textbook understanding of the IDH is 100% wrong. Which I suggest ought to bother you as much as it bothers me. Yes, textbook explanations have to simplify and gloss over technical details–but surely not to the point of inviting the sorts of questions Clever Student asks!

Note as well that Clever Student is very alert to attempts to change the question, which the Professor tries and fails to do several times. All of the Professor’s attempts to change the question were actually tried by commenters on my original zombie ideas post, or by folks who’ve corresponded with me privately. The Professor in this post is just trying to answer Clever Student’s questions the way those commenters and correspondents have tried to respond to my original post. The Professor here is no straw man.

If you’re tempted to respond by arguing that Clever Student’s questions are somehow ambiguous or otherwise flawed, please be aware that Clever Student’s questions can be put much more rigorously and precisely. In particular, I would discourage you from trying to argue that “students exchanging money is nothing like species competing for resources”, unless you’re prepared to explain why the analogy is a bad one. Because in every relevant respect, the analogy is perfectly consistent with the Professor’s second assumption. So if you think the monetary exchange analogy is a bad analogy to the IDH, then what you actually think is that the standard, textbook explanation of the IDH is a bad explanation.
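Indeed, the monetary-exchange version of the argument is easy to make rigorous with a quick numerical sketch (hypothetical starting balance of $100): losing $2 and gaining $1 on alternating days sends you broke at essentially the same time as simply losing the average, $0.50, every single day. The interruptions change the day-to-day wiggles, not the outcome.

```python
# Sketch of the Clever Student's monetary analogy (hypothetical numbers):
# only the average daily loss matters for when you go broke; the
# alternating "interruptions" don't rescue you.

def days_until_broke(start, daily_changes):
    """Cycle through daily_changes until the balance first reaches zero or below."""
    balance, day = start, 0
    while balance > 0:
        balance += daily_changes[day % len(daily_changes)]
        day += 1
    return day

start = 100.0
interrupted = days_until_broke(start, [-2.0, +1.0])   # lose $2, gain $1, alternating
steady = days_until_broke(start, [-0.5])              # lose the average, $0.50, daily
print(interrupted, steady)                            # → 197 200
```

197 days versus 200: the interrupted schedule actually sends you broke slightly sooner (the downward swings hit zero first), and certainly no later. Interrupting the losses does nothing to prevent them.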

The Professor here is not an unreasonable or ignorant person. He’s smart, and he’s doing his best to answer Clever Student’s questions. But those answers just don’t cut it. Hence my curiosity whether any readers can come up with better answers. Our students–real students, not Clever Student–deserve no less.

Of course, I think Clever Student’s questions don’t have good answers. I think the only legitimate response to those questions is to stop teaching the Professor’s zombie ideas in the first place.**

*These lines of the Professor’s dialogue are an abridged quote from p. 740 of the second edition of Begon, Harper, and Townsend’s textbook Ecology: Individuals, Populations, and Communities. My abridgements are minor and do not alter the meaning of the passage.

**I wonder if anyone will try to argue for “teaching the controversy”–teaching both standard ideas about the IDH, and counterarguments. Personally, I think that’s about as good an idea as “teaching the controversy” between evolution and creationism. Remember, this isn’t a controversy between alternative logically-valid claims, which simply make different assumptions about how the world works, and which we can decide between by conducting an appropriate experiment. It’s a controversy about the logical validity of one set of claims. There are scientific controversies which can be usefully taught in science classes. But this isn’t that kind of controversy.

Posted by: Jeremy Fox | April 9, 2012

Advice: why network at conferences?

Following up on my post on how to network at scientific conferences, it occurred to me that I left unstated why one might want to “network” at scientific conferences. I guess I thought it was obvious, but perusing the comments over at the original post by Scicurious made me realize that it’s not. Perhaps because “networking” seems to mean different things to different people.

So: what I mean by “networking” is “talking to other scientists about science, especially scientists you don’t already know”. Networking in this sense is something you do for all kinds of good reasons. You like talking about science. You have questions or criticisms relating to someone’s work and you want to talk to that person about them. You want advice or feedback from someone about your own work, or about some technical problem you’re struggling with. You’re planning to build on someone’s work and you need to pick their brain about the nitty-gritty details to make sure you can first repeat what they’ve done. You want to ask someone about a potential collaboration, or to share their data. You just want to tell someone that you really liked their talk, or their new paper that just came out. Etc. etc. My previous post just takes for granted that these are the sorts of things you’re trying to do, and gives some hopefully-useful advice on how to overcome any nervousness, shyness, or awkwardness that might prevent you from doing these sorts of things.

Networking in this sense is not something you do “to get your name out there” or “to ‘sell’ your work” or “to meet top people” or “to market your personal brand” or any other such silliness. It’s true that networking for the reasons I’ve suggested will have the effect of causing other people to know who you are and what you’re working on, and will hopefully give those other people a positive impression of you. But if you network for the purpose achieving those effects (e.g., “I need an excuse to meet Dr. Famous…I know, I’ll ask him a question about his latest paper!”), you’re doing it wrong.

Posted by: Jeremy Fox | April 9, 2012

Succeeding in academia: being good vs. being lucky

I’ve written about how I was very lucky to get a tenure-track academic position. To complement that anecdote, here’s some discussion from NeuroDojo of an actual data-based study of how luck vs. talent determined the career outcomes for 300 physicists tracked over 20 years.  Turns out that my experience is broadly representative–luck matters a lot early in one’s career.

It’s the time of year when many first year graduate students are preparing their research proposals. So now seems like a good time to repost my advice on how not to choose your research project.

One point I’d add to that post is that it’s always a bad idea to try to pretend that a project will test some general idea or theoretical prediction when it wasn’t actually designed to do so. A common mistake made by students (and sometimes their profs!) is to first design a project (often an applied project, or a specialized or system-specific project), and then go searching for a post-hoc rationale for that project in terms of general ideas and fundamental theory. All this does is open you up to tough questions that you won’t be able to answer. Questions like “If you’re serious about testing theory X, how come you’re not working in completely different system Y, which would make it much easier to test theory X?” If you’re serious about testing some general theoretical idea or principle, you need to take that principle as your starting point and design your study accordingly.

Similarly, it’s a bad idea to design a project to test some general theoretical idea, and then go searching for some applied rationale for that project. For instance, just because your project involves some abiotic environmental variable doesn’t mean your project is about climate change (I’m as guilty of this sort of claim as anyone, by the way).

This isn’t to say that there aren’t projects that can kill two birds with one stone–projects that both act as powerful tests of fundamental ideas, and have direct applied relevance. There are. But again, if that’s the kind of project you want, you need to design it that way from the very start.

Posted by: Jeremy Fox | April 5, 2012

On science for science’s sake

In my post on justifying fundamental research, I didn’t argue for fundamental research just for its own sake. Not because I don’t buy that argument, but just because it’s a very hard argument to make well. It can easily come off as “just give me money to do whatever idiosyncratic thing I personally want to do,” which isn’t very compelling. Indeed, I’m not sure that valuing fundamental research for its own sake is really a stance you can argue for at all, not in the usual sense of appealing to data and/or logical reasoning from agreed premises. All you can do, I think, is try to persuade people that science is valuable for its own sake. Not that data and logic are totally irrelevant. But ultimately, you need to make an emotional or moral case, not a factual or logical case.

So if you want to argue that fundamental science is valuable simply for its own sake, I think you have to do it the way physicist Robert Wilson once did (and have Wilson’s way with words). Click through for a great article on a great scientist, but here’s the money quote. It’s from Wilson’s 1969 Congressional testimony on the need for a new particle accelerator (what eventually became Fermilab), in response to a question from a senator on whether the accelerator would have anything to do with “the security of the country”:

It has only to do with the respect with which we regard one another, the dignity of man, our love of culture. It has to do with: Are we good painters, good sculptors, great poets? I mean all the things we really venerate in our country and are patriotic about. It has nothing to do directly with defending our country except to make it worth defending.

HT Brady Allred, via the comments on a related post at Jabberwocky Ecology.

Posted by: Jeremy Fox | April 4, 2012

Advice: how to network at conferences (UPDATED)

Scicurious asks a question about “networking” at scientific conferences:

HOW do you DO IT?

(emphasis in original)

Good question! But I can’t tell you the answer, because that would involve teaching you the secret handshake which is taught only to Faculty and which we use to signal to one another that we are not Students, so that we can avoid talking to Students at conferences. As a member of the Faculty, I’m not allowed to teach the secret handshake to Students, on pain of death.*

Just kidding. 😉 I actually totally understand where this question comes from. Most people don’t find it easy to just go up to a stranger and start talking to them, even when the stranger is a fellow ecologist at an ecology conference that exists mainly so that ecologists can meet and talk to one another. Especially when the stranger is your “superior”.

Which is something you really need to try to get over–the feeling that anyone is your superior. It’s easy to understand where this feeling comes from–as a student, you periodically get evaluated by certain faculty, such as the faculty teaching your courses or your committee members evaluating your thesis proposal. So it’s easy to feel like you’re always being evaluated, by every faculty member you meet. But you’re not. Seriously, you’re not. If you come and talk to me at a conference, I’m not going to be quizzing you to see if you’ve read all (or any!) of my papers, or tearing apart your research project, or anything like that. In fact, like NeuroDojo, not only do I like talking to anyone about science, I’m actually flattered that you would be interested in my work, or want my advice on your work. And most every ecologist I know feels the same way (even those who wouldn’t admit to being flattered are subconsciously flattered; having people want to pick your brain is good for your ego). And while it’s true that I’ll probably form some sort of opinion about you based on our conversation, that’s true of every conversation that every person has with another person. So have the confidence to approach others as peers–and they’ll treat you like one.

Even if you want to ask someone for advice, don’t let that shade into feeling like you’re some kind of supplicant begging the indulgence of the king. Because you know what? Faculty ask each other for advice all the time. We even ask students and postdocs for advice. That’s what colleagues do.

But of course, saying “don’t be nervous or intimidated” isn’t actually helpful advice, or even advice at all. So here are some tips for networking at conferences:

  • Ask your supervisor to introduce you to whoever it is you want to meet. A good supervisor will also tell her friends to stop by your poster or come to your talk.
  • Have some purpose for meeting people.  “I just want to meet Dr. Famous” or “I just want to make sure Dr. Famous has heard of me and my work” aren’t really good purposes. Although if all you want to do is tell someone you really enjoyed their talk, or that you love their writing for the Oikos blog ;-), that’s fine (just don’t expect a long conversation to spontaneously begin if that’s all you wanted to say).
  • Most people will see through flattery. Never try to flatter someone just to ingratiate yourself.
  • Tag along with your supervisor to some meals. I did this a lot as a young grad student at the ESA meeting. I’d just ask my supervisor Peter Morin if he had any dinner plans that night, and usually he’d either invite me along or wouldn’t object if I invited myself. As far as I recall, I was usually pretty quiet at dinner, and I don’t think Peter considered me a pest (maybe Peter will comment if my memory is fuzzy here!). I think just hanging out with Peter and all his famous friends (Peter pretty much knows everybody) helped get me used to thinking of these folks as my peers. Or, if he was chatting with someone at a poster session or something, I’d just come up and say hi to him, and then he’d introduce me to whoever he was chatting with. The side benefit of this was free food: Peter never lets students pay for their own meals when they’re out with him (I have the same policy).
  • Attend some small conferences or working groups. At a big conference like the ESA the biggest challenge to meeting someone you want to meet can be finding them in the crowds! It can also be hard to force yourself to introduce yourself, to stand out from the crowd. For students who are nervous about approaching faculty, I think there can be a subconscious tendency to sort of “hide” in the crowd at a big conference.  There’s no crowd for you, or the people you want to meet, to “hide” in at a small conference. Especially if the conference has elements designed to encourage interaction, like communal meals and discussion/breakout sessions.
  • Give a poster rather than a talk. That way, people will come by your poster and introduce themselves to you. And if they come by but don’t introduce themselves, you have a ready-made excuse to take the initiative (“Hi, I’m So-and-so, let me know if you have any questions or want me to walk you through it.”)
  • Talk with visiting seminar speakers at your home university. And not just as part of your lab group’s meeting with the visitor, or as one of 50 grad students attending the pizza lunch with the visitor. Book some one-on-one time. Put a bit of advance thought into what you want to talk about, and don’t hesitate to talk about your own stuff. Remember, the speaker will probably be keen to talk about your work, because she’ll already have given, or be giving, a seminar on her own work. Do this even if the speaker’s research isn’t exactly like, or even all that similar to, your own. Every ecologist I know is broadly curious about the world, and likes chatting with people about all kinds of ecology. Plus, visiting speakers expect and want to chat with all sorts of people about all sorts of things–that’s part of being a visiting speaker. Once you’ve chatted with a few faculty from outside your own university, you’ll stop seeing them as your superiors. Plus, you’ll have had practice explaining your work to strangers, so you’ll be good at it when it comes time to attend the next conference.
  • Email people in advance if you really want/need to talk at the conference. This is probably most appropriate if it’s someone who you want or need to have more than a brief chat with–say, someone you see as a potential collaborator or something.
  • Right after someone’s talk, or right at the end of a session, can be a good time to grab someone and ask them if they’re free to chat later.
  • If the person you want to talk to is currently talking to someone else (and they probably are), just stand close by, in their line of sight, and wait a minute for a break in the conversation so you can introduce yourself. Yes, this is a little awkward–but not because you’re a student. I find it awkward too; it’s awkward for anyone. Once there’s a break in the conversation, just say “Sorry to interrupt…”, introduce yourself, and ask the person you want to talk to if there’s a time when they could chat. This kind of interaction happens all the time, especially at larger conferences. You’re not going to annoy or offend anyone if you do this. And if the person you’re talking to responds by saying something like, “Sure, send me an email,” they’re not signalling annoyance, that’s just how they’d prefer to do their scheduling. Similar remarks hold if the person you want to talk to currently appears to be going somewhere in a hurry. Just introduce yourself and say “I know you’re busy–do you have some time to talk later?”
  • Talking to postdocs, younger faculty, and your fellow grad students often is more valuable than talking to Dr. Famous. And you may find it easier.
  • It’ll get easier with practice, just like everything else in life.
  • UPDATE: One more thing I forgot: don’t ever feel obliged to drink alcohol if you don’t want to. Just drink whatever you want. No one will hold it against you or think you’re weird or anything. I’m speaking from personal experience here: as a grad student, I basically never drank alcohol. And I met plenty of people, and hung out with them in bars. It was fine. I only really started drinking beer as a postdoc.

I can’t promise that every interaction you have will be positive. It’s possible that someone you want to talk to will be rude to you. Maybe even rude to you because you’re a student (nobody’s your “superior”, but some people think they are). Don’t let it get to you, and don’t take it personally. Some people are jerks (and some are just having a bad day, or whatever). And don’t let it color your perception of any larger group. Plenty of famous ecologists are really nice and approachable.

So, for anyone out there who will be at Evolution 2012 in Ottawa or the ESA meeting in Portland and wants to talk to me: please do!

*Oh what the heck, here it is (5:46 mark).

Posted by: Jeremy Fox | April 4, 2012

Carnival of Evolution #46

The best of last month’s evolution blogging, here. Lots of stuff on human evolution and sociality this month.

Posted by: Jeremy Fox | April 2, 2012

From the archives: how I almost quit science

This might be the best thing I’ve ever written for the Oikos blog.


Or if they do, they remain unconvinced by my arguments for the value of fundamental research.

The most important ideas in ecology are general, broadly-relevant ideas, not narrow, parochial ones. But can you have too much of a good thing? Is there such as thing as an overly general idea? Here’s an old post in which I argue that there is, and that the most highly-cited Oikos paper in history provides an example.

Posted by: Jeremy Fox | March 27, 2012

What’s your favorite ecology textbook? (UPDATED)

What’s your favorite ecology textbook? Why?

I don’t have much to contribute here, because I only teach upper-level courses that don’t use textbooks. The last ecology textbook with which I have much personal experience is the 2nd edition of Begon, Harper, and Townsend (now Begon, Townsend, and Harper), which I learned from as an undergrad*. My supervisor Peter Morin used to swear by the famous, and famously massive, Ricklefs (now Ricklefs and Miller). Those two are probably the most advanced undergraduate ecology texts. But there are lots of choices out there.

Out of curiosity, I searched the Alibris textbooks section for the subject “ecology”. The results are, um, interesting; I hadn’t been aware that the definition of “textbook” (or “ecology”!) could be stretched quite so far. But for what it’s worth, the best-selling ecology textbook of any stripe on Alibris is the 2nd edition of Dodds and Whiles’ Freshwater Ecology. Don’t know that one; the aquatic ecology text on my shelf is Lampert and Sommer, because it’s pretty strong on concepts and on links to general ecology. The best-selling general ecology textbook on Alibris is the 7th edition of Smith and Smith’s Elements of Ecology. An earlier edition of Smith and Smith was used in an ecology course I TA’ed at Rutgers. I vaguely recall thinking Smith & Smith was weak on the conceptual side. You can’t cover applications (which Smith & Smith emphasize) at the expense of concepts, or without integrating your concepts and applications (I seem to recall that Smith and Smith included one random paragraph on “chaos theory” tossed into the middle of an otherwise-unrelated chapter…).  But that was long ago and I don’t really remember very well.

UPDATE: In terms of more advanced and specialized books, I should’ve mentioned that I love Case’s Illustrated Guide to Theoretical Ecology for my upper-level population ecology and introductory mathematical modeling classes. The very strong emphasis on graphs and pictures illustrating the mathematics is unique, and a really effective way to teach math-phobic ecology students that math is just a tool for helping us think about ecology.

And in terms of graduate-level texts, I’m of course biased because Morin’s Community Ecology is basically Peter’s lecture notes for his wonderful graduate community ecology course, which I took as a grad student. Gary Mittelbach has a competing text coming out this spring. Glancing at the tables of contents suggests that the two books cover much of the same material with broadly similar organization. But that leaves plenty of scope for all sorts of differences. I’ll be curious to see how Gary’s book compares and might even do a comparative review on Oikos blog at some point.

Share your ecology textbook preferences in the comments.

Thanks to Jim Bouldin for the post suggestion.

*If you look up when the second edition of Begon, Harper, and Townsend was published, you will find that I am approximately eleventy bazillion years old.

Posted by: Jeremy Fox | March 26, 2012

Oikos blog: have we won the internet yet?

For the fourth week in a row, the Oikos blog just had its biggest week ever. And what a week it was! Thanks to some very interesting and popular posts by Chris on peer review, we got 3,956 non-syndicated views last week. Add in the syndicated views and we probably got something like 5,000 views last week.

The current month became our biggest month ever after only three weeks; we’re currently at 12,445 non-syndicated views and counting for the month.

Thanks for reading everyone!

Posted by: Jeremy Fox | March 26, 2012

Cool new statistical method not so cool after all?

A while back I posted on a cool new nonparametric method, which goes by the acronym “MINE”, for detecting associations between variables in multivariate datasets. The method can detect even nonlinear (and non-monotonic!) relationships between pairs of variables, and it provides a measure of the strength of the relationship analogous to the familiar R^2.

Turns out that this approach has some drawbacks, though, perhaps quite serious. Andrew Gelman’s blog has a good summary of recent commentary. Not surprisingly for such a flexible nonparametric method, it seems to lack power. But there may be other issues as well, to do with things like the scope and rigor of the proofs of the method’s statistical properties. I’m not qualified to pass judgment on how serious these issues are. But if you’re thinking of using this method, you should definitely click through and check out the commentary.
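
MINE’s MIC statistic involves an optimized search over many grid resolutions, but the basic reason a nonparametric dependence score can catch what correlation misses is easy to illustrate. Here’s a rough sketch (my own simplification, not the published MINE algorithm): a fixed-grid mutual information estimate applied to a noisy parabola, a relationship that Pearson correlation misses entirely.

```python
import numpy as np

def binned_mutual_info(x, y, bins=10):
    """Crude dependence score: mutual information estimated from a fixed
    2-D histogram. (For illustration only -- MIC searches over many grid
    resolutions and normalizes the result.)"""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    p = joint / joint.sum()
    px = p.sum(axis=1, keepdims=True)   # marginal distribution of x
    py = p.sum(axis=0, keepdims=True)   # marginal distribution of y
    nz = p > 0                          # avoid log(0)
    return float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 1000)
y = x**2 + rng.normal(0, 0.05, 1000)    # strong but non-monotonic relationship

r = np.corrcoef(x, y)[0, 1]             # near zero: correlation misses it
mi = binned_mutual_info(x, y)           # well above the shuffled baseline
mi_null = binned_mutual_info(x, rng.permutation(y))

print(f"Pearson r = {r:.2f}, MI = {mi:.2f}, shuffled baseline = {mi_null:.2f}")
```

The power issue raised in the commentary is the flip side of this flexibility: a score that can detect any shape of relationship needs more data to separate real signal from noise than a test aimed at one specific shape does.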

UPDATE: The MINE authors themselves show up in the comments, briefly addressing the issues I’ve raised and noting that they’ve posted a detailed reply to the comments they’ve received over on Andrew Gelman’s blog. Great to see authors and their readers engaging in such a productive and substantial discussion. So if you’re interested in the MINE method, and alternative approaches, you really ought to click through to Andrew Gelman’s blog.

Posted by: Jeremy Fox | March 23, 2012

Is the time right for NCEAS 2.0?

Apparently there was much talk at the closing NCEAS symposium about “NCEAS 2.0”–a successor institute and what it might look like.  This is a really interesting conversation from which I’m far removed (I never had any involvement with NCEAS), so this post is basically me putting my hand up from the back row of the audience and asking questions of folks in a position to answer.

I find it interesting that no one at the NCEAS closing symposium seems to have suggested “there are already a bunch of NCEAS 2.0’s: NESCent, NIMBioS, CIEE, SESYNC…” I’m curious for folks who were there, or who know more about it than me, to chime in here. What’s the need for a successor to NCEAS that’s not being filled by all the various institutes (in ecology and related disciplines) inspired by NCEAS? In asking that question, I don’t mean to imply that I think the answer is “there is no need for NCEAS 2.0”; it’s a genuine question, not a rhetorical one.

I also find it interesting that visions for NCEAS 2.0 seem to vary rather widely, and that they often seem quite removed from NCEAS 1.0. For instance, Peter Kareiva apparently suggested some sort of institute that would reach out to big corporations and provide a neutral space in which corporations and ecologists could talk about pressing environmental issues (apologies if I’ve garbled what Peter said; I wasn’t there). Which sounds awfully far from NCEAS 1.0, which was basically “host a critical mass of postdocs and working groups doing whatever ‘synthetic’ ecology they propose to do” . Again, I have no answers here, just the question.

I’m also wondering if there was any discussion at the closing symposium of the extent to which NCEAS’s success was a product of good timing. Founding a center dedicated to synthesizing existing data and promoting collaborative work was a brilliant choice in the mid-1990s, when the internet and other computing advances had just gotten to the point of making data synthesis and collaborative work much easier. Had NCEAS been founded 10 years earlier, it would’ve been much less successful. So my question is, is the timing right for NCEAS 2.0? Are there any big new opportunities out there that, if we take advantage of them in the right way, will fundamentally change and improve how we do ecology? I don’t know that there are. The time is not always ripe for big advances or sea changes in how we do science. Perhaps sometimes the best we can do is just keep on keepin’ on with what we’ve been doing. But it’s a really hard question to answer, and honestly I have no idea what the answer is. I just think it’s a question we ought to ask. NCEAS’s success was partially due to being in the right place; everyone wants an excuse to go somewhere like Santa Barbara, with its nice weather, beaches, and restaurants. I think we need to ask how much NCEAS’s success was also due to being in the right time. Maybe the only way to find out is to take the risk of founding a new center and see if it succeeds (which is kind of what NSF has already done by letting NCEAS wind down and founding SESYNC).

Posted by: Jeremy Fox | March 23, 2012

The end of NCEAS (UPDATED)

NCEAS this week held an invitation-only symposium to mark the end of its NSF funding, though apparently it has secured some other funding sources and will be continuing on in some form. If it was a wake, apparently it was an Irish one; report from Marc Cadotte here. I posted my thoughts on NCEAS a while back, when it became clear that there wouldn’t be any last-minute reprieve for NSF funding for the center. Now seems like an appropriate time to link back to those thoughts.

UPDATE: Post corrected in response to info provided in the comments. Look for NCEAS to share the input from the symposium on its homepage, on Twitter (@nceas), and elsewhere in the coming weeks. Clearly there are changes coming to NCEAS, since its funding will be changing, but fortunately I think most of the commentary from myself and others is still relevant.

Posted by: Jeremy Fox | March 22, 2012

Trying to save a zombie idea (UPDATED)

In a previous post I commented that it would be interesting to see whether the very nice paper by Adler et al. (2011) on diversity-productivity relationships in terrestrial grasslands would finally kill off the zombie idea that diversity generally is a humped function of productivity. I also suggested that this zombie might be more of an apparent zombie than a real one, because hardly anyone these days still believes in this zombie idea; it’s just that everyone thinks everyone else believes in it.

I was wrong on both counts (you’d think I of all people wouldn’t be so quick to see a zombie idea as dead or dying!) Writing in Science this week, Pan et al. and Fridley et al. criticize Adler et al., and the reply is here. Pan et al. claim that, properly analyzed, the Adler et al. data show a strongly linear diversity-productivity relationship, while Fridley et al. claim both that the data are “clearly deficient” as a test for a humped diversity-productivity relationship, and that they show a “clear” humped diversity-productivity relationship*.

In my view, the comments are a striking illustration of just how far even very good ecologists will go in an attempt to save a pet hypothesis in which they are heavily invested. I should emphasize that I don’t have a dog in this fight. The question of how plant diversity varies as a function of primary productivity is a purely empirical question. Addressing it properly requires careful study design to sample the full range of natural variation, control for confounding factors, etc. And personally I don’t care what the answer is. I just care that we get the right answer.

I think Adler et al. is the right answer, for grasslands. Pan et al. don’t, but as Adler et al. point out their comment basically amounts to cherry-picking data to obtain a linear relationship. Not consciously cherry-picking of course. But as a rule, in any large and complex dataset there is always some subset of data showing a relationship between variables different than that shown by the dataset as a whole, and you can always find some more-or-less-plausible post-hoc reason to pay special attention to that subset. As Grace et al. point out in their reply to Pan et al., Fridley et al. actually prefer to focus on a different subset of the data. That strongly suggests that both sets of commenters are, unconsciously, just looking for reasons to make the data show what they think the data ought to show.
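
This subset-hunting problem is easy to demonstrate by simulation. The sketch below uses entirely made-up data (the variable names are just for flavor) in which there is no true relationship at all, and shows that searching over post-hoc subsets will reliably turn up a much steeper "relationship" than the full dataset exhibits:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
productivity = rng.uniform(0, 10, n)       # hypothetical site productivities
diversity = rng.normal(20, 5, n)           # richness: no true relationship

def slope(x, y):
    """Ordinary least-squares slope of y on x."""
    return np.polyfit(x, y, 1)[0]

overall = slope(productivity, diversity)   # close to zero, as it should be

# Now play the cherry-picker: examine many post-hoc "special" subsets
# and keep the steepest slope found among them.
best = 0.0
for _ in range(500):
    idx = rng.choice(n, 40, replace=False)
    best = max(best, abs(slope(productivity[idx], diversity[idx])))

print(f"full-data slope: {overall:.2f}; steepest subset slope: {best:.2f}")
```

The steepest subset slope is always far larger in magnitude than the full-data slope, even though the data contain no signal whatsoever; attach a plausible-sounding story to the chosen subset, and that spurious slope can look like a finding.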

As noted above, the Fridley et al. comment is, on its face, self-contradictory*. Fridley et al. also try to change the question, arguing that “productivity”, or any index of it, should be defined to include leaf litter, which would be contrary to the bulk of previous empirical and theoretical studies. They also argue that Adler et al. should have conducted various alternative analyses…that Adler et al. actually did conduct and presented in their paper. And finally, as Grace et al. show, what little “humpiness” there is in the Adler et al. data reflects the log-normal distribution of the data, which makes it look slightly humped on an arithmetic scale.

I’m not an expert on this stuff (I’m not a plant guy), but I’m confident I know enough about it to come to an informed (not infallible, but informed) opinion. And in my view, the Adler et al. paper is totally unscathed. In grasslands, I think we now have the best data we can ever expect on the diversity-productivity relationship–collected at a large number of sites around the world, with good coverage of a big productivity range, sampled using a consistent protocol. And those data have been very carefully and thoroughly analyzed. The answer is clear: the diversity-productivity relationship is very weak and noisy, and it’s not humped. So we’d better learn to deal with it.

Even if you want to say that the humped diversity-productivity relationship is so well-established that we need extraordinary evidence to reject it, well, I disagree with your premise, plus Adler et al. is extraordinary evidence.

At the end of their comment, Fridley et al. conclude with some quite striking rhetoric. They claim that the hump-backed diversity-productivity relationship is a “cornerstone of plant ecology”, backed by “decades of careful mechanistic analysis”, and is “used by plant conservationists and restoration ecologists, as well as theoretical ecologists.” The first claim must surely be loose language on the part of Fridley et al., because taken at face value it’s patently false. Discarding the humped diversity-productivity hypothesis would not cause plant ecology to collapse the way a building would if you removed its cornerstone. If you think otherwise, your view of what “plant ecology” consists of is way too narrow. The vast bulk of research on plant physiological, population, and community ecology has nothing to do with the diversity-productivity relationship, humped or otherwise. I’m not sure what the second claim means, but I hope they’re not claiming that we’ve proven mechanistically that the diversity-productivity relationship has to be humped. There’s nothing in ecology like, say, the kinetic theory of gases that lets us rigorously derive a diversity-productivity equivalent of the ideal gas law. And their other claims are irrelevant. If conservationists, restorationists, and theoreticians based their work on the assumption that the earth was flat, that wouldn’t make the earth flat (and yes, it is possible for many experts to be wrong about a matter of empirical fact). Worse, do you really want to argue that conservation and restoration practices, or theoretical modelers, should never adjust what they do in light of new data? Please don’t misunderstand me: the authors of the Fridley et al. comment include many very good, very smart ecologists (and Jason Fridley for one is a friend of this blog).
I just find it striking that so many very good, very smart ecologists could be so reluctant to reconsider their position on what ought to be a straightforward empirical matter, and so willing to resort to what looks like quite poorly-supported rhetoric. Basically their rhetoric at the end amounts to saying “this zombie idea has been around for a long time–so therefore it can’t possibly be a zombie”.

UPDATE: Commentary from one of the co-authors of Adler et al. here.

*Grace et al. also note that Fridley et al.’s comment is self-contradictory. I agree with that characterization.

You just published a paper in Oikos. Whew. The review process is not trivial, and it took some time in spite of the amazing speed of this journal. Done, time to move on to the next one. Right? No. I have a suggestion based on a recent experience here. Take a second, and do a blog post about the Oikos paper. My recent experiences with the decline-to-review post and paper (not in Oikos sadly, but the implications were for Oikos) were very positive, and the learning continued entirely due to posts and emails by readers. See below the movie poster in that post, where I added a section of additional analyses and interpretation generated by discussion with readers.

Writing and publishing a paper involves a series of decisions, including an entire analytical and statistical pipeline. After your paper is in print, post to the blog describing the process, the decisions you made, nuances, alternatives, additional figures, pictures of the study site, or extended implications. Most of this would not really fit in the current model for a paper. Nonetheless, it is valuable information for those who need to relate their research to your paper. There are at least a few benefits to this additional activity.

Author benefits

1. Science is a process and extending it beyond a static publication is useful.

2. Readers that enjoyed your paper could swing by the blog to see if there is additional information and read a bit more about the study (or see the study, i.e. you could post pics).

3. Readers of the blog might in turn read the paper. I don’t endorse shameless promotion, but you can certainly list the strengths and limitations of the study here and explain how it relates to other studies, published anywhere.

4. You will get feedback. Metrics have some utility in understanding what people think of your paper but active discussion is better. This will improve subsequent papers and your research too.


Readers could also post questions about a paper they read. Benefits include the following.

1. Posting appropriate questions here, instead of emailing them directly to the author, makes them less likely to get lost in the hundreds of emails folks receive.

2. We can all benefit from the discussion by seeing it transparently and not in a series of emails.

3. The authors are more likely to respond since the question is public.

4. I will chase them for you. If you post a great question here, I will direct it to the author on your behalf, and to the handling subject editor as well. I manage the blog and that's my job: promoting appropriate discussion of all topics relevant to Oikos and the society.

Here is a little experiment we could also do if we started this process. We could track the relative success, in terms of views, downloads, and citations, of those papers that are discussed versus those that are not. I bet that more insight into the process, or self-synthesis of the research done, would be a positive step for ecology, with the capacity to improve it.

Thinking on the alchemy of synthesis further, I was considering the importance of identifying the various elements of ecological research that a journal like Oikos or a centre like NCEAS could combine. Here are some options. Data, maps, people, ideas, and methods are very likely candidates that I assume we routinely utilize for synthesis in ecology and evolution. People are generally an indirect consequence of collaboration and working groups. However, we should consider formalizing this process by identifying divergent perspectives on a topic and soliciting proposals for novel synthesis. Alternatively, we could identify very different research topics and examine whether there are connections between them. Of course, this would be facilitated by meta-data and datasets provided by authors of papers.

What about syntheses of the review process itself? For instance, using the review process and its associated interactions to identify hot topics, debates, and people who should meet. Given the masking of reviews and the limited readership they receive, this is challenging. On review forms, there is sometimes a box you tick about remaining anonymous. What if we added another box: may we make your review public? This might be very useful (in addition to promoting better reviews). Journals could collect and post sets of reviews to illustrate the effectiveness and variability of the review process, and to illuminate discussion that is currently unavailable to a wider readership. Even if journals simply posted the best reviews, this would still make it possible to assess how frequently similar concerns arise for some topics and how often common sets of suggestions recur more broadly. In grading term papers, by the end of the process I sometimes wish I had a stamp that said: don't just review, be critical; cite your sources; what is the implication of this study; and so on. We might see the same trends, at a much higher level of course, in the papers we review. One of the benefits of being an editor is seeing this process in action and getting a sense of what is hot or not, so to speak, but we could all benefit from these insights. Consequently, we would improve our papers and our reviews, and accelerate the handling of the work of others. A little more magic for all instead of muggling along.

Another benefit of this form of synthesis is that it provides referees with credit, indirectly at least. If you give permission for your review(s) to be made public, your review becomes a form of public communication. The real magic trick is to get credit for it. You could waive anonymity. Or you could choose to remain anonymous during the review process, tick the public-permission box, but ask that your identity be listed separately, decoupled from the posted reviews. The journal then posts all the public reviews every few months, and also publishes an annual list of the referees who agreed to share their reviews. You can then take credit for your reviews by providing the link to your employer for tenure considerations or job applications. Authors could skim the list and try to guess who did such and such a review. They might even be right, but would not know for certain. That is the price to pay for a bit of credit. Ultimately, we need to decide how much of this information is valuable to the community, but that is tough to estimate without being able to read a large number of reviews from various journals.

Tracking referee service as a discipline would also be useful. I envisage a 'referee tracker'. Names are always masked, but we use this iPhone tool or webpage to quickly log, in the cloud, the requests we receive and the reviews we do. The points are graphically displayed in a large Flash plot for the whole discipline, and you can see where you fit in (without names, as it is not a competition). Institutions, journal names, etc. are always masked as well. The point is for you to track your own individual data (and it is yours) and to provide us with a community-level curve to see how we are doing. Only you assess your relative performance. Several readers have already posted to this effect, reporting that they are above or below the curve, and it would be a fantastic form of synthesis to see this widely. Many folks get so many requests, or do so many reviews through online forms, that they don't even have a record of how much they participate in peer review. The tool could be used by any academic discipline.

Posted by: cjlortie | March 20, 2012

What is novel synthesis in ecology?

I like the idea of novel synthesis but have been trying to deconstruct what it means. I have spent some time at NCEAS and really enjoyed seeing synthesis in action. Synthesis, ecologically, appears to be bringing together datasets from different places. However, synthesis in ecology does not necessarily appear to be the combination of various entities to generate something new per se. For instance, we might all have datasets on diversity in backyards. We meet and then combine them. We now have an estimate of species diversity at a larger scale, i.e. the city, region, etc., but is this really new? Perhaps this is where the 'novel' part of novel synthesis comes in for Oikos. The goal is to bring together elements, data, ideas, and maybe people now too, under the new Oikos Forum model, to produce dialectics, experiments, and insights that otherwise could be obscured by too narrow a focus.

Are there other models of synthesis for ecology though in addition to the combination of same or different data streams from different places or taxa? What other mechanisms could we use to promote novel synthesis in ecology and evolution, particularly as they relate to important conservation issues?

Discussion, collaboration, and networks are ideal approaches. Journals have the capacity to promote syntheses along these lines. One mechanism would be special issues of Oikos bringing together individuals on a specific topic to evaluate and examine linkages within and between specific fields. Bringing together fundamental ecologists and applied ecologists to imagine and describe connections speaks to a post by Jeremy on the value of fundamental ecology. Journals sometimes publish special issues and often organize special symposia. I would like to see Oikos do more of this, and to see us find other channels to promote these linkages (the blog, editorials, advertisements or calls for sets of papers, etc.). Let's see if we can expand the actualization of novel synthesis for ecology and evolution.

Is there a peer-review crisis in ecology? This is a critical question that we need to explore to determine whether we should consider introducing changes. Referee selection, the number of reviews needed, and the relative importance of referees versus editors in improving the quality of our manuscripts are viable lines of inquiry. There are numerous ways to tackle these questions. To assess the extent to which we are in a crisis, I conducted an online survey to calculate the individual decline-to-review rate (a weighted analysis of requests received versus reviews actually done).

I am biased on this topic, as I assumed there was a crisis. I also want ecologists to examine the why and how of reviews, and a high decline-to-review rate would be a useful indication that we need to do so: it would mean that most of us are too busy to review the work of others. And the answer is… in a short editorial on the topic published in Immediate Science Ecology. It is the article at the bottom of the main page, and the pdf link is to the right of the article. The DOI is still pending, but the article is otherwise final.

In case you just want the punchline, the decline-to-review rate for ecologists is 49% (+/- 0.02), meaning that about half of all requests are declined by the most appropriate referees. This is not a large pool of respondents, nor is it necessarily indicative of all ecologists or of what individual journals such as Oikos experience. Nonetheless, I see this as a clear signal that we need to add incentives or change manuscript handling in some form.
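For readers curious how a decline-to-review rate like this could be computed, here is a minimal sketch. The numbers are invented, and the actual weighting scheme used in the editorial is not spelled out in this post, so the simple request-weighted pooling below is an assumption, not the published method:

```python
# Hypothetical survey responses: (review requests received, reviews completed)
# per respondent in the past year. All numbers are invented for illustration.
responses = [(10, 4), (6, 3), (15, 8), (4, 2), (20, 11)]

# Pool across respondents, weighting each respondent by the number of
# requests they received: declined requests / total requests.
total_requests = sum(req for req, done in responses)
total_declined = sum(req - done for req, done in responses)
decline_rate = total_declined / total_requests

print(f"Overall decline-to-review rate: {decline_rate:.0%}")
```

An alternative design choice would be to average each respondent's individual decline rate, which weights busy and quiet referees equally; the two approaches can give noticeably different answers when request loads vary widely.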

Not included in the article was the importance of gender. Analyses of productivity and of the role one serves in the process were, however, reported. Women accounted for 36% of the respondents and, interestingly, declined to review less often than the male respondents in this dataset: the decline-to-review rate of men was 1.5 times greater than that of women. There were differences by gender in the proportions of respondents participating in peer review in various capacities, with 20% of women serving as editors compared to 30% of men. Nonetheless, women do more reviews relative to the number of requests they receive (figure below), and, similar to the relationship reported in the editorial, this difference increases with productivity.

So, this is likely a very real cost that women pay in science, and thus a compelling argument for ensuring that we manage the diversity of referees we choose, not by gender but by career stage (junior versus senior) or by productivity in a specific field. After all, peer review is also an opportunity for collaboration and for building networks for practicing science. In summary, whether or not we agree on what constitutes a crisis, there is an opportunity here for journals in general, and for Oikos in particular, to consider how we select referees, whether it is meaningful to make these criteria transparent, and whether reviews should be spread between junior and senior researchers, referees and editors, or highly productive veterans and newcomers to the game.

Gender redux: a slight nuance in the race to publish. I was thinking about the comment posted by JEB on the meaning of the fit lines. Increasing productivity generates a greater divergence between requests and reviews, likely due to a visibility effect, i.e. those who publish more papers also receive more solicitations. Both genders demonstrate this effect, but I was wondering whether it is an 'important' difference (i.e. red versus blue line, like Tron light-cycle races when they crash). I did a t-test for each gender to examine the scale of the difference between the fitted lines for requests and reviews (incidentally, the r2 values are around 0.4). The two fit lines were significantly different for each gender, statistically speaking (t-test for females, t = -2.3, p = 0.02; t-test for males, t = -5.8, p = 0.0001), but the mean divergence between the fitted models was double for men relative to women. This corresponds nicely to what your eye can see instantly in the plots: the lines are more different for men. However, the nuance I was considering is whether the female lines of predicted reviews would catch up with or overlap those of males if the productivity scales fully overlapped, i.e. if additional female respondents completed the survey and reported more publications per year. They would not (t-test, t = -4, p = 0.0001): women who publish more papers would still do more reviews. Fascinating! Disclaimer: this is only a dataset of 257 respondents, so I am purely speculating here.
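The fit-line comparison described above can be sketched as follows. This is an illustrative re-creation with simulated data (the real survey data are not reproduced here), and the exact t-test specification used in the analysis is my assumption: fit a regression of requests on productivity and another of reviews on productivity, then run a paired t-test on the two sets of fitted values:

```python
# Simulated respondents: papers published per year (productivity),
# requests received, and reviews completed. Values are invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
papers = rng.uniform(1, 15, size=100)
requests = 2.0 * papers + rng.normal(0, 3, size=100)  # visibility effect
reviews = 1.2 * papers + rng.normal(0, 3, size=100)   # reviews lag requests

# Fit one line per response variable against productivity.
req_fit = stats.linregress(papers, requests)
rev_fit = stats.linregress(papers, reviews)

# Divergence between the two fitted lines at each observed productivity,
# compared with a paired t-test.
pred_req = req_fit.intercept + req_fit.slope * papers
pred_rev = rev_fit.intercept + rev_fit.slope * papers
t, p = stats.ttest_rel(pred_req, pred_rev)
print(f"slopes: requests={req_fit.slope:.2f}, reviews={rev_fit.slope:.2f}; "
      f"t={t:.1f}, p={p:.2g}")
```

Because the requests line is steeper than the reviews line, the gap between fitted values grows with productivity, which is exactly the diverging-lines pattern the post describes.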

Posted by: cjlortie | March 19, 2012

Models of journal management

I was wondering how important communication and management among the editors of a journal might be in ensuring effective and fair dissemination. I imagine that most journals are top-down, with the Editor(s) in Chief making the bulk of the decisions. However, I suspect that some journals have significantly expanded their boards with many more EiC positions and have moved to reviewing submissions more as a panel (like many granting agencies). This seems like a pretty good idea to me, but perhaps not as efficient for handling exceptionally high volumes of manuscripts.

Importantly, the flow of communication and information between editors matters (whether or not the EiC does the bulk of the processing). It is possible that many ecology journals do not have significant communication within their editorial boards of 50+ subject/associate editors. This is unfortunate. These individuals share a common interest and handle reviews, interactions with referees, and feedback to authors, with the common goal of publishing the specialty of the journal. If the board functions more like a community with active discussion, then these individuals could collectively solve problems, discuss frequent occurrences of certain sets of papers, identify hot trends, and calibrate their estimations of publishable material. This is not a set-up at all, but a preamble to the fact that the editorial board of Oikos engages in frequent discussions via email, sometimes very extensively and always collectively. I see this as a very positive avenue for change and an opportunity for more networked science, occurring not just in experimentation but in review.

Posted by: Jeremy Fox | March 19, 2012

Steve Jobs on the value of fundamental research

For what it’s worth (which may be a lot, or next to nothing, depending on your point of view), Steve Jobs agreed with me on the value of fundamental research. The passage below is from his 2005 commencement address to Stanford University. It’s quite well-put, so I thought I’d share it. He describes how he dropped out of Reed College after less than a year, with no plan as to what to do next besides just do whatever seemed interesting:

And much of what I stumbled into by following my curiosity and intuition turned out to be priceless later on. Let me give you one example:

Reed College at that time offered perhaps the best calligraphy instruction in the country. Throughout the campus every poster, every label on every drawer, was beautifully hand calligraphed. Because I had dropped out and didn’t have to take the normal classes, I decided to take a calligraphy class to learn how to do this. I learned about serif and sans serif typefaces, about varying the amount of space between different letter combinations, about what makes great typography great. It was beautiful, historical, artistically subtle in a way that science can’t capture, and I found it fascinating.

None of this had even a hope of any practical application in my life. But ten years later, when we were designing the first Macintosh computer, it all came back to me. And we designed it all into the Mac. It was the first computer with beautiful typography. If I had never dropped in on that single course in college, the Mac would have never had multiple typefaces or proportionally spaced fonts. And since Windows just copied the Mac, it’s likely that no personal computer would have them. If I had never dropped out, I would have never dropped in on this calligraphy class, and personal computers might not have the wonderful typography that they do. Of course it was impossible to connect the dots looking forward when I was in college. But it was very, very clear looking backwards ten years later.

Again, you can’t connect the dots looking forward; you can only connect them looking backwards. So you have to trust that the dots will somehow connect in your future. You have to trust in something — your gut, destiny, life, karma, whatever. This approach has never let me down, and it has made all the difference in my life.

Posted by: cjlortie | March 18, 2012

Cover letter power and job interviews

I flip-flop on the relative importance of cover letters for my submissions. Sometimes they write themselves; other times it is a struggle. I have not been cognizant enough to check whether there is a relationship between the letters I struggle with and my rejection rate 🙂 Nonetheless, in the most optimistic sense, the cover letter is the first point of contact between you and an editor. It is also an opportunity to set the tone of the conversation you will have in making your manuscript better (and hopefully getting it published). I am sure its importance varies from editor to editor, but it is likely time well spent making sure it is effective, positive, and, to some extent, makes the editor's job of reaching a decision easier.

I was wondering if there is a similarity between cover letters and job interviews in the business world. I have heard that several frequent questions pop up in interviews, including 'what is your greatest strength?', often followed by 'what is your greatest weakness?'. Apparently, it is not recommended to have the same answer to both questions, e.g. 'too organized'. Jokes and the obvious silly transparency of these questions aside, perhaps cover letters should do the same thing. With the advent of Manuscript Central and online submission systems, if a particular Editor in Chief has a strong preference for certain key pieces of information in a cover letter, tell us. Why not have a few boxes at the beginning of the submission process, called a 'cover letter form' or 'desirable editor information', as a guide to the paper right up front? So, for instance, you log in, enter the title, abstract, author names, etc., and then, instead of the usual cover letter step where you can upload your own document or paste whatever you like into a large text box, there are several questions tailored by the editor that you answer (perhaps in addition to whatever you want to upload).

How is this paper a good fit for Oikos (i.e. its greatest strength)?

This study promotes novel synthesis by combining video observation of pollinators in the alpine with assessment of trophic interactions and community-level estimates of insect diversity. Instead of monitoring individual insect behavior, as is common in the literature, we pioneer a novel video method using Apple iPod nanos to record the diversity and abundance of entire insect assemblages.

What is the most significant limitation to this study?

The video method worked very, very well. However, it was sometimes difficult to identify insects to species level. Consequently, we used RTUs (recognizable taxonomic units) in all statistical analyses. We recognize that this may be seen as a limitation by many insect/pollinator ecologists. Also, our budget was limited, so we had only 10 cameras at this point in time. Nonetheless, we see this as a very novel study.

Did I get the job, i.e. get through the first hoop? Something like the above could be very useful for the editor, or whatever the specific editor really feels she/he needs in order to quickly and effectively handle the high volume of manuscripts they most certainly see! I think a standard form, or even a few questions provided by the Editor in Chief to this effect, would ensure that the collaboration and discussion between author and review board is useful. If an editor wanted to know effect sizes, sample sizes, whether the paper is primarily theoretical or empirical, etc., all of that could be captured very quickly in the letter. I am brainstorming on this for Oikos, but I assume other journals have similar cover letter mechanisms varying in specificity. Given the ease of online forms, there could even be a different set of questions that pops up depending on whether you click theoretical versus empirical versus forum paper.

Posted by: cjlortie | March 16, 2012

Journal loyalty

Nearing the end of my PhD dissertation, I was ready to submit my first paper from that process.  I had published before, and whilst the PhD was fun, it was a long haul, and I was keen to get some work out there.  I chose the Journal of Ecology for a big chapter.  It was reviewed very quickly, fairly, and the reviews made the paper much better.  Journal of Ecology was also the first journal that I reviewed for.  I did the review on time and then asked to be provided with feedback including the outcome of the manuscript.  The handling editor at the time gave me feedback.  I thought wow, this is amazing. This journal had my loyalty forever, particularly as a referee.  I had a similar experience with Oikos early on too.  I had read some neat papers and found them both enjoyable and useful.  I did a few reviews and the handling editor, Linus, was super kind and funny.  Sold. I was treated with respect and they get mine – in addition to whatever I can do to help promote novel ecology.

With the adoption of online systems to handle papers, I hope that we can still maintain the personal aspect. It is useful to chat with the editors that handle our work because it calibrates our capacity to self-assess scientific merit. Also, it is nice to have a personal communication as it provides an indication of whether they are being fair.  I always assume the best in this respect, but the odd email reassuring me that they are human and appreciate how tiring the peer review process can be gives hope.

The Research Works Act and the discussion of the profit margins of academic publishers have stimulated both a wealth of discussion and some clear indications of future directions. I wanted to take a moment here to reflect on how this might relate to Oikos. Several online petitions have been set up to stop the act in particular, and others have called for careful consideration of where we should submit our work. At least two immediate solutions are evident: if this issue is important to you, submit to society journals and use journals that provide affordable open access. In the former instance, societies partner with publishers under arrangements that share profits, and I assume this also limits the total margin. It would be nice to know what proportion of journal sales is returned as benefits to the society, but it is nonetheless reasonable to assume that whatever profits are generated in turn benefit ecology in general and its representatives/members to some extent (relative to profits moved entirely to a publisher). Oikos is a society journal.

I chose to participate in the review process for Oikos for several reasons, including this fact. I viewed it not only as a journal but as a tool to advance novel, risky, and quirky science. Oikos thus has the capacity to provide a niche for certain sets of publications and consequently to promote riskier, more novel science. It should also serve the needs of the community, both through what it publishes and through how it communicates with its members, including referees and authors. I think we could envisage numerous strategies to improve this aspect, including referee-training workshops at ecology meetings and more opportunities for individuals to interact outside of, and in addition to, the peer-review process. Loyalty is important. We could also consider publishing papers within the journal, even just a few here and there, that speak more directly to trends, developments, and the cultural attributes of how we practice ecology.

Out of curiosity, I examined the relative scientometrics of all ecology journals published by societies. The list was surprisingly exhaustive, with many societies publishing only one journal. I wanted to examine how Oikos performs relative to the other major societies (I defined 'major' loosely as those publishing a few journals). Oikos published the second-highest volume of papers in 2010 and, unfortunately, ranked last in all metrics of use (including the 5-year impact factor). This is quite disappointing in many respects, because I see Oikos as positioned, both historically and currently, to be a leader in moving the importance of society-based ecology forward.

Interestingly, the total number of papers published strongly predicts total citations, as one would expect. Given that Oikos publishes so much content, it is surprising that it diverges so significantly from the fit line (r2 = 0.82, p = 0.0001 for the regression), with the largest negative residual from the line. The largest positive divergence was Ecological Monographs.
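The residual analysis above can be sketched in a few lines. The journal names and numbers below are invented placeholders (the real dataset is not reproduced in this post); the point is simply how one would regress citations on papers published and pick out the journal falling furthest below the fit line:

```python
# Hypothetical society journals: papers published in 2010 and total
# citations. All values are invented for illustration.
import numpy as np
from scipy import stats

journals = ["A", "B", "C", "D", "E"]
papers = np.array([120, 300, 80, 250, 150])
citations = np.array([900, 1600, 950, 2000, 1100])

fit = stats.linregress(papers, citations)
residuals = citations - (fit.intercept + fit.slope * papers)

# The journal with the largest negative residual publishes a lot but is
# cited less than the fit line predicts.
lowest = journals[int(np.argmin(residuals))]
print(f"r2 = {fit.rvalue**2:.2f}; largest negative residual: journal {lowest}")
```

In this toy dataset, journal B plays the Oikos role (high output, largest negative residual), while journal D plays the Ecological Monographs role (largest positive residual).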

I like the fact that we disseminate a lot of research per year, but my guess is that we should focus on doing a better job of promoting our own specific niche of ecology and forwarding synthesis through papers that are valuable because they advance highly novel ideas. I assume that Monographs hits a home run with its divergence by publishing far fewer, but much deeper, papers. In essence, I suspect some soul searching on what Oikos can realistically do best might be a productive agenda, followed by communicating this as transparently as possible to the greater community. I am biased about what I would like to see us do. Given that the job of a society journal is to represent the needs and values its members hold, we could work harder on ensuring that Oikos does this well.

Open access is important. It is nice to be able to share an html link to your work and have anyone access it from anywhere, even those without library-derived support. I propose that affordable open access is a viable pathway for Oikos to meet this need within the greater debate over the Research Works Act. PLoS allows one to pay less than the full amount per paper, and I always take this option because my NSERC Discovery grant is so small and is fully committed to student salaries. I like the fact that with PLoS I am free to enter the actual amount I am able to pay, which is between $500 and $1000 depending on the year. Oikos should consider offering more open access options to its authors. This is good for the society and the journal, given the higher performance of these papers. Importantly, some of us can't afford to pay $1800-$3000 per paper.

Finally, handling time, or turnaround time, on papers in the last year was very good. However, we do not communicate this well. Similar to the above analyses, I examined the performance of society journals by their handling system, since Oikos currently uses email with a limited online tracking system.

It appears that society journals using more advanced handling systems perform better. I know there are many other reasons for this (and the rate of change in general would be better to examine), but my guess is that handling time, or awareness of it, makes authors feel better. I like personal emails from handling editors, but it is also nice to log in and see how long (hopefully not long) a manuscript has been sitting in review and what stage it is at. I wonder if ecotrack offers different functions or information relative to Manuscript Central (mc). Nonetheless, the adoption by Oikos of a system (forthcoming) that provides statistics and feedback to users will facilitate reduced handling times.

Many other contrasts are possible, including to single-journal society publications, the number of members within a society, cost, and non-society journals publishing similar volumes of papers per year, but I found it useful to reflect on these differences as an exercise in brainstorming improvements for Oikos. In summary, Oikos is a society journal and has the capacity to do much, much more with this opportunity to make ecology better and benefit its members. Open access is limited at this point in time, and we should consider mechanisms to provide alternatives (reduced-rate options, freebies for reviewing, member discounts, special issues, etc.). Handling time and awareness of the progress of a manuscript are important, and Oikos is doing much better in this respect, but we could also consider more transparency in the process, including how referees are selected and compensation for their time. I recognize and support bolder initiatives in the dissemination of ecology, but I like what Oikos offers and would love to see it become a leader in respects other than metrics.

I do fundamental research. I don’t choose what questions to address, what system to work in, or any other aspect of my research based on consideration of ‘societal needs’. I’m not trying to achieve any policy goals, except those of ‘doing good science’ and ‘training good scientists’. Nothing I’ve ever done has any direct or obvious ‘applications’ (I’ve joked more than once that ‘no whales have ever been saved’ by my research). I simply work on whatever question I think is interesting. Hopefully that doesn’t just mean ‘of interest only to me personally’, but nor does it necessarily mean ‘of interest to lots of other people’, be they my scientific colleagues, policymakers, or my fellow citizens. (UPDATE: I emphasize that I do think fundamental researchers ought to be able to explain to others why their work is interesting and important–it’s just that I don’t think those explanations include ‘my work addresses a societal need’ or ‘my work is interesting/important simply because lots of people think it’s interesting/important’).

So why is fundamental research worthwhile? Why should a government agency give me, or anyone, a grant to do it? Especially in a world with pressing practical problems, ecological and otherwise: billions of people are desperately poor, the climate is changing rapidly, species are going extinct at historically-high rates, and money to address these problems is scarce due to the biggest global economic crisis since the Great Depression.

Those are good questions. They deserve good answers. And the answers aren’t obvious. Indeed, one possible answer is that fundamental research isn’t worthwhile, that science for its own sake is a luxury we can’t afford. Even many scientists and champions of science question the value of fundamental research. For instance, Daniel Sarewitz recently argued in Nature that the US government is starving the budgets of ‘mission-driven’ science agencies that actually respond to the ‘public good’, and that the ‘blue-sky bias’ favoring NIH and NSF needs to be ‘brought down to earth’.* In 2007, the UK’s Natural Environment Research Council, the main UK government agency supporting fundamental research in ecology and related fields, decided to focus future funding on seven ‘themes’, all of which are directly related to pressing global environmental problems. Closer to home, the most active Oikos Blog commenter, Jim Bouldin, has argued passionately that much ecological research effort should be reoriented towards directly addressing global environmental problems, particularly climate change. UPDATE: This isn’t a good summary of Jim’s views; see the comments.

And the question of justifying fundamental research won’t go away once the global economy recovers. Opportunity costs are ever-present. A dollar given to me to grow bugs in jars is always going to be a dollar not spent on something else. Plus, the world is always going to have pressing problems that need solving. The second US President, John Adams, famously wrote that he had to study politics and war, that his sons could have liberty to study mathematics and philosophy, so that their children could study painting, poetry, and music. It’s been over 200 years since Adams wrote that, and while I do think we’re closer to the point where everyone can feel free to study painting, poetry, and music–or do fundamental research–without having to justify it, that day is still a long ways off. So if you want to argue for fundamental research, you have to argue that it’s not a luxury good, something we can only afford to spend money on once more pressing needs have been addressed.

What follows represents my best shot at defending my life. I’m not sure how convincing it will be to anyone not already convinced. It’s not a full-on cost-benefit analysis or anything like that (although in my own defense, I’m not sure that even the best economists working on science policy would dare to attempt something like that). It’s just reasons backed up with anecdotal examples, and I’m sure anyone who disagreed with me could come up with their own anecdotes. But if I ever had to justify myself to someone I met at a dinner party or something, these are the sorts of things I’d say.

What follows is also pretty light on links to the massive literature on justifying fundamental research. That’s deliberate. This may sound strange, but this is a sufficiently important issue that I felt like I ought to be able to come up with my own answer, rather than just looking up and quoting the answers of others. For what it’s worth, here’s one randomly-googled article that seems to cover many aspects of the issue.

First of all, I don’t think fundamental research is like John Adams’ painting, poetry, and music. I don’t think we fund fundamental research (or not) primarily for the same reason we fund the arts (or not). In my view, the reasons for funding fundamental research actually do have to do with solving pressing applied problems. It’s just that fundamental research helps to solve those problems in non-obvious (but very real) ways.

Second, while there is evidence that the economic ‘return on investment’ in scientific research is positive, those estimates need to be taken with a large grain of salt. Further, AFAIK they don’t do a great job of separating out the ROI on ‘fundamental’ vs. ‘applied’ research, and so they don’t explain why we should fund one type of research vs. the other.

Third, while one good reason to fund fundamental research is that the public actually does want it, I don’t think that’s the only reason or even the strongest reason. It’s true that the public is fascinated by a lot of fundamental science, with expensive physics and astronomy instruments like the Mars Rover, the Hubble telescope, and the Large Hadron Collider being perhaps the most obvious examples. In ecology, think of the popularity of nature documentaries. But the trouble with this argument is that it implies that we should only fund those lines of fundamental research that the public likes (e.g., research on ‘charismatic megafauna’). And while there’s lots of fundamental science that many members of the public probably would find fascinating if they knew about it and if it were pitched to them in the right way, I don’t think fundamental research should be a popularity contest. I say that not because I’m an anti-democratic elitist, but because I think there are good reasons to fund fundamental research that are independent of public interest in that research. Such as:

Fundamental research is where a lot of our methodological advances come from. For instance, rapid advances over the last 10-15 years in fitting mechanistic models to time series data, which are useful for things like predicting future pest outbreaks, have come from fundamental work in population ecology (see here for discussion).

Fundamental research provides generally-applicable insights. For instance, the extinction risk criteria used to produce the IUCN Red List of threatened and endangered species are based in large part on fundamental, generally-applicable models of stochastic population dynamics developed by Russ Lande (Mace et al. 2008). Mace et al. (2008) discuss at length the reasons for this, which are not limited to lack of species-specific knowledge. But it is true that, if you lack detailed, system-specific knowledge, you do want general, broadly-applicable insights to be able to fall back on.

Current applied research often relies on past fundamental research. Isaac Newton wasn’t trying to help put satellites in orbit or a man on the moon when he developed his laws of motion, but NASA engineers rely on those laws. Mathematician G. H. Hardy proclaimed that “pure” mathematics, and especially his own field of number theory, was “useless”, which Hardy considered a virtue because that meant that number theory could never be applied to any “warlike purpose”. But it turns out that number theory is central to public key cryptography, and Hardy’s other example of useless mathematics–Einstein’s equations of relativity–was key to the development of nuclear weapons. And it’s not just fundamental physics and mathematics that turn out to be highly applicable down the road. Genetic algorithms, a routine way to solve practical optimization problems, ultimately derive from Darwin’s theory of evolution by natural selection. It would be trivially easy to keep citing examples here, but you get the point. And no, I don’t think you can argue that we already have all the fundamental knowledge we’re ever likely to need, so that while funding fundamental research was worthwhile in the past, it no longer is.

UPDATE: Here is a short-but-interesting list of the surprising fundamental science behind some well-known research applications (black hole research gave us wifi?!)

Fundamental research often is relevant to the solution of many different problems, but in diffuse and indirect ways. But because those ways are diffuse and indirect, I’m having trouble coming up with a clear-cut example off the top of my head… 😉

Fundamental research lets us address newly-relevant issues. Societal needs change. So the ‘relevance’ of different lines of research changes over time, often quite fast, and almost always unpredictably. Think of the discovery of the Antarctic ozone hole, emergence of new diseases, and even global warming (I’m old enough to remember a time when global warming wasn’t on anyone’s radar). For that reason, specialists who have only been trained to think about questions of current applied relevance often are poorly-prepared to deal with newly-relevant questions. And it is often impractical at best to rapidly shift training and hiring procedures in an attempt to tightly ‘track’ changing societal priorities. For instance, the practical expertise of government veterinarians failed to prevent the 2001 foot and mouth disease outbreak from quickly raging out of control in Britain. For advice, the British government turned to people like Roy Anderson and Matt Keeling, with fundamental training in mathematical epidemiology. Fundamental researchers, at least the good ones, are broad thinkers with skill sets that let them think intelligently about and address a wide range of problems, so that the way to respond rapidly to a newly-emerging societal problem is to have fundamental researchers who can turn their attention to that problem.

Fundamental research alerts us to relevant questions and possibilities we didn’t recognize as relevant. One function of fundamental research is to discover and evaluate the relevance of previously-unrecognized questions we didn’t even know we needed to ask. Researchers exclusively focused on addressing questions posed to them by policymakers are not well placed to recognize, or argue, that we are asking the wrong questions or trying to solve the wrong problems.

For instance, assessment of ‘instream flow needs’ (basically, how much water can we extract from rivers and streams for human uses while preserving the stream ecology) traditionally has been treated as an engineering question. My colleagues Ed McCauley, Lee Jackson, John Post, and others have argued that this amounts to a poor framing of the problem. A better starting point for thinking about instream flow needs is fundamental knowledge of density-dependent population dynamics in advection-diffusion environments.

As another example, consider alternate stable states and hysteresis, which severely complicate management and restoration since they prevent an ecological system from being easily manipulated into a desired state. The concepts of alternate stable states and hysteresis were originally discovered in dynamical systems theory. Would ‘applied’ ecologists focused on solving system-specific problems ever have discovered these ideas, which are highly relevant to management problems as diverse as lake eutrophication and the collapse of the North Atlantic cod fishery? The possibility of chaos, first widely recognized due to the work of fundamental theoretical ecologist Robert May (1976), is a third example. The fact that chaotic dynamics have proven difficult to demonstrate in nature doesn’t undermine their importance as a possibility that ought to be considered. If you think you might be managing a system that’s inherently unpredictable, you manage it differently (perhaps adaptively).
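May’s point about chaos is easy to demonstrate for yourself. Here’s a minimal sketch (not taken from May’s paper, just the standard discrete logistic map he analyzed) showing how two all-but-identical starting population densities end up on wildly different trajectories:

```python
# Discrete logistic map, x[t+1] = r * x[t] * (1 - x[t]),
# the simple single-population model May (1976) used to show that
# fully deterministic dynamics can be effectively unpredictable.

def logistic_trajectory(x0, r=4.0, steps=60):
    """Iterate the logistic map from initial density x0 (densities scaled to [0, 1])."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.2)
b = logistic_trajectory(0.2 + 1e-9)  # a one-part-in-a-billion difference in starting density

# The tiny initial difference roughly doubles each step, so within a few
# dozen steps the two trajectories bear no resemblance to each other.
max_divergence = max(abs(x - y) for x, y in zip(a, b))
```

A census accurate to nine decimal places still wouldn’t let you forecast this “population” very far ahead, which is exactly why suspecting you’re managing such a system changes how you’d manage it.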

Fundamental research suggests novel solutions to practical problems. This is related to the previous point. Research directed towards solving particular practical problems tends to focus on a narrow range of solutions to those problems, and a narrow range of obstacles that might prevent those proposed solutions from working. Supposedly ‘relevant’ research often is quite narrowly focused and fails to recognize useful linkages, analogies, and ideas drawn from other fields.

For instance, fundamental research on biodiversity and ecosystem function suggests a novel approach to biofuel production that doesn’t compete with food production or require heavy fertilizer use: sow diverse mixtures of grasses on land that can’t be used for crop production (Tilman et al. 2006).

As a second example, algal biofuel production is plagued by the problem of zooplankton contamination. You don’t get much algal biofuel if Daphnia are eating your algae. The engineers and biochemists who work on algal biofuels have tried all kinds of (often expensive) ways of dealing with this. But they never tried the first thing that would occur to any ecologist with some fundamental training in how food webs work: add some fish to eat the zooplankton. Ace fundamental ecologist Val Smith tried this, and as he reported at the last ESA meeting, it works. My buddy Jon Shurin also is taking fundamental ideas from community ecology and showing how they’re very relevant to algal biofuel production.

As a third example, writing recently in Nature, Varmus and Harlow report that the US National Cancer Institute and NIH are going to be devoting significant funding to addressing ‘provocative questions’ about cancer. One of which is the recognition that cancer cells are an evolving population and that trying to kill them with drugs selects for drug resistance. Which is something people doing fundamental work in evolutionary biology recognized years ago (Frank 2007, Pepper et al. 2009). Indeed, there are many areas of medicine that would benefit from paying more attention to fundamental ideas from evolutionary biology. Consider in particular the scary possibility that pretty much all the conventional wisdom on how to prevent evolution of drug resistance in malaria, developed by very practical malaria specialists who have largely ignored basic evolutionary ideas, is not just wrong but actually the opposite of right (Read et al. 2009).

Finally, as Dave Tilman’s young daughter showed, you can even use fundamental ideas about resource competition to keep your lawn free of dandelions. Which is probably not the first thing that would occur to someone trained in applied weed management.

The only way to train fundamental researchers is to fund fundamental research. Even fundamental research projects that don’t themselves contribute, directly or indirectly, to the solution of any particular societal problem, now or in the future, contribute by training new fundamental researchers.

So what do you think? Have I justified my existence?

*Note that ecologists are used to thinking of NIH as itself a ‘mission-driven’ agency focused on treating diseases, especially cancer and HIV. Apparently the distinction between ‘fundamental’ vs. ‘applied’ research can sometimes, like beauty, be in the eye of the beholder. But I do think the distinction is reasonably clear, and lots of folks agree with me on that. So while we can quibble about whether some specific bit of research is ‘fundamental’ or not, if you want to argue that this whole post is moot because there’s no difference between ‘fundamental’ and ‘applied’ research I think you’ve got an uphill battle.

Posted by: Jeremy Fox | March 15, 2012

The nine kinds of peer reviewers

I can’t possibly comment on how true this is.

HT Jarrett Byrnes

Posted by: Jeremy Fox | March 15, 2012

Advice: the ‘snake fight’ portion of your thesis defense

It’s the time of year when many graduate students defend their dissertations. Many students are anxious about the defense, especially the part where they have to fight a snake. Here is a FAQ that addresses common concerns about the ‘snake fight’ portion of your dissertation defense. I recommend that you check it out. You don’t want the snake fight portion of your defense to go like this.

Note that the advice in the linked article applies mainly to North American and British defenses. In many northern European countries, an external examiner known as the ‘opponent’ fights the snake on behalf of the student. In such cases, the quality of your thesis determines, not the size of the snake you have to fight, but rather the snake-fighting skill level of the opponent. If you write a very good dissertation, your opponent will be a skilled snake-fighter. If you write a weak dissertation, your opponent will be afraid of snakes.


Posted by: Jeremy Fox | March 14, 2012

Ecologist interview: Colin Kremer

Sarcozona’s last (?) ESA interview is up. She chats with Colin Kremer, a grad student in the Klausmeier-Litchman lab at Michigan State. It’s an audio file, and well worth a listen. It’s wide-ranging, touching on everything from why the ESA meeting is awesome (which it is) to choosing a research project to activism in science.

Colin and his labmate Beth were key contributors to a plankton dynamics working group I organized. Both super-sharp. I like that Sarcozona didn’t just interview old fogies like me, because old fogies aren’t the only ones with anything interesting to say.

Posted by: Jeremy Fox | March 14, 2012

From the archives: What’s your best paper?

A while back I did a post talking about what I think is my best paper, and inviting readers to share their stories of their best papers. I thought it would attract a bazillion comments, but the only response was from an old friend of mine. But since so many of you were keen to share your stories of eating your study organisms and doing crazy things in the name of science, I figured I’d give you another chance to toot your own horns.

So tell us: what’s your best paper (or your favorite paper, or the paper you’re proudest of)? Why?

Posted by: Jeremy Fox | March 13, 2012

Why do experiments? (UPDATEDx2)

As a complement to the previous post on “why do mathematical modeling”, I thought it would be fun to compile a list of all the reasons why one might conduct an experiment. But I am lazy* (though not as lazy as this man), and so rather than compiling my own list I’ll share the list from Wootton and Pfister 1998 (in Resetarits and Bernardo’s nice Experimental Ecology book).

To see what happens. At its simplest, an experiment is a way of answering questions of the form “What would happen if…?” Such experiments often are conducted simply out of curiosity. This sort of experiment teaches you something about how the system works that you couldn’t have learned through observation, it gives you a starting point for further investigation (e.g., you can develop a model and/or do follow-up experiments to explain what happened), and it can be of direct applied relevance (e.g., if you want to know what effect trampling has on a grassland you’re trying to conserve, go out and trample on randomly-selected bits of it).

There are limitations to such experiments, of course. Because they’re conducted without any hypothesis in mind, they’re typically difficult or impossible to interpret in light of existing hypotheses. And on their own, they don’t provide a good foundation for generalization (e.g., would the experiment come out the same way if you repeated it under different conditions, or in a different system?)

Interestingly, Wootton and Pfister suggest that experiments conducted just to see what happens are most usefully conducted in tractable model systems about which we already know a fair bit (analogous to developmental biologists focusing their experiments on C. elegans and a few other model species). They worry that curiosity-driven experiments, conducted haphazardly across numerous systems, leave us not only with a very incomplete understanding of any given system, but with no basis for cross-system comparative work. This illustrates how the decision as to what kind of experiment to conduct often is best made in the context of a larger research program, an issue to which I’ll return at the end of the post.

As a means of measurement. These experiments are conducted to measure the quantitative relationship between two variables. Feeding trials to measure the shape of a consumer’s functional response are a common example: you provide individual predators with different densities of prey, and then plot predator feeding rate as a function of prey density. These experiments are a good way of isolating the relationship between two variables. For instance, in nature a predator’s feeding rate will depend on lots of things besides prey density, including some things that are likely confounded with prey density, making it difficult or impossible to use observational data to reliably estimate the true shape of the predator’s functional response. Or, maybe prey density just doesn’t vary that much in nature, so in order to measure how predator feeding rate would vary if prey density were to vary (which of course it might in future), you need to experimentally create variation in prey density. This is an example of a general principle: in order to learn how natural systems work, we’re often forced to create unnatural conditions (i.e. conditions that don’t currently exist, and may never exist or have existed).

Of course, the challenge with these experiments is to make sure that the controls needed to isolate the relationship of interest don’t also distort the relationship of interest. For instance, feeding trials conducted in small arenas are infamous for overestimating predator feeding rates because prey have nowhere to hide, and because prey and predators behave differently in small arenas than they do in nature.
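To make the “measurement” idea concrete, here’s a toy version of a functional response fit: estimating the attack rate and handling time of a Holling type II functional response, f(N) = aN/(1 + ahN), from feeding-trial data. The trial data and parameter values below are invented for illustration, and the fit is a crude grid search rather than anything you’d use on real data:

```python
# Toy example: estimating a Holling type II functional response,
#   f(N) = a*N / (1 + a*h*N),
# from feeding trials at several prey densities. All numbers here
# are made up for illustration.

def type2(N, a, h):
    """Predator feeding rate at prey density N (attack rate a, handling time h)."""
    return a * N / (1.0 + a * h * N)

# Hypothetical trial data: prey densities offered, and observed feeding
# rates (generated from a=0.5, h=0.1 with a little 'noise' added by hand).
densities = [2, 5, 10, 20, 40, 80]
rates = [0.92, 1.98, 3.30, 4.95, 6.60, 8.10]

# Crude least-squares fit by grid search, to keep the sketch dependency-free.
best = None
for a in [i * 0.01 for i in range(1, 201)]:        # candidate a in (0, 2]
    for h in [j * 0.005 for j in range(1, 101)]:   # candidate h in (0, 0.5]
        sse = sum((type2(N, a, h) - r) ** 2 for N, r in zip(densities, rates))
        if best is None or sse < best[0]:
            best = (sse, a, h)

sse, a_hat, h_hat = best
```

The point of the design is visible in the data: you need trials at densities high enough that the feeding rate saturates, or the handling time h is essentially unidentifiable.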

To test theoretical predictions. Probably the most common sort of experiment reported in leading ecology journals. Again, often most usefully performed in tractable model systems**.

But as Wootton and Pfister point out, these kinds of experiments, at least as commonly conducted and interpreted by ecologists, have serious limitations that aren’t widely recognized. For instance, testing the predictions of only a single ecological model, while ignoring the predictions of alternative models, prevents you from inferring much about the truth of your chosen model. If model 1 predicts that experiment A will produce outcome X, and you conduct experiment A and find outcome X, you can’t treat that as evidence for model 1 if alternative models 2, 3, and 4 also predict the same outcome. It’s for this reason that Platt (1964) developed his famous argument for “strong inference“, with its emphasis on lining up alternative hypotheses and conducting “crucial experiments” that distinguish between those hypotheses.

There’s another limitation of experiments conducted to test theoretical predictions, which Wootton and Pfister don’t recognize, but which is well-illustrated by one of their own examples. Wootton and Pfister’s first example of an experiment testing a theoretical prediction is the experiment of Sousa (1979) testing the intermediate disturbance hypothesis (IDH). Which, as readers of this blog know, is a really, really unfortunate example. Experiments to test predictions are only as good as the predictions they purport to test. So if those predictions derive from a logically-flawed model that doesn’t actually predict what you think it predicts (as is the case for several prominent versions of the IDH), then there’s no way to infer anything about the model from the experiment. The experiment is shooting at the wrong target. Or, if the prediction actually “derives” from a vague or incompletely specified model, then the experiment isn’t really shooting at a single target at all–it’s shooting at some vaguely- or incompletely-specified family of targets (alternative models), and so allows only weak or vague inferences about those targets (this is what I think was going on in the case of Sousa 1979).

One way to avoid such ill-aimed experiments is for experimenters to rely more on mathematical models and less on verbal models for hypothesis generation. But another way to avoid such ill-aimed experiments is to quit focusing so much on testing predictions and instead conduct an experiment…

To test theoretical assumptions. It is quite commonly the case in ecology that different alternative models will make many similar predictions. For instance, models with and without selection (non-neutral and neutral models) infamously make the same predictions about many features of ecological and evolutionary systems. This makes it difficult to distinguish models by testing their predictions. So why not test their assumptions instead, thereby revealing which alternative model makes the right prediction for the right reasons, and which alternative is merely getting lucky and making the right prediction for the wrong reasons? For instance, I’ve used time series analysis techniques to estimate the strength of selection in algal communities (Fox et al. 2010), thereby directly testing whether algal communities are neutral or not (they’re not). In this context, this is a much more direct and powerful approach than trying to distinguish neutral and non-neutral models by testing their predictions (e.g., Walker and Cyr 2007 Oikos) (UPDATEx2: The example of Fox et al. 2010 isn’t the greatest example here, because while it is an assumption-testing study, it’s not actually an experiment. Probably should’ve stuck with Wootton and Pfister’s first example of testing evolution by natural selection by conducting experiments to test for heritable variation in fitness-affecting traits, which are the conditions or assumptions required for evolution by natural selection to occur. And as pointed out in the comments, the Walker and Cyr example isn’t great either because they actually were able to reject the neutral model for many of the species-abundance distributions they checked, in contrast to many similar studies).

A virtue of focusing on assumptions as opposed to predictions is that it forces you to pay attention to model assumptions and their logical link to model predictions, rather than treating models as black boxes that just spit out testable predictions. Because heck, if all you want is predictions, without caring about where they come from, you might as well get them here.

Another virtue of tests of assumptions, especially when coupled with tests of predictions, is learning which assumptions are responsible for any predictive failures of the model(s) being tested. This is really useful to know, because it sets up a powerful iterative process of modifying your model(s) appropriately, and then testing the assumptions and predictions of the modified models.

Another reason to test assumptions rather than predictions is that it might be easier to do. Of course, in some situations it could be easier to test predictions than assumptions. And in any case, you all know what I think of doing science by simply pursuing the path of least resistance (not much).

Of course, testing assumptions has its own limitations. Since theoretical assumptions are rarely if ever perfectly met, we’re typically interested in whether a model’s predictions are robust to violations of its assumptions. Does the model “capture the essence”, the important factors that drive system behavior, and over what range of circumstances does it do so? So you run into the issue of how big a violation of model assumptions is worth worrying about. I don’t have any great insight to offer on how to deal with this; sometimes it’s a judgment call. Sometimes one person’s “capturing the essence” is another person’s “wrong”. For what it’s worth, we make similar judgment calls in other contexts (e.g., how big a violation of statistical assumptions of normality and homoscedasticity is worth worrying about?)

Wootton and Pfister conclude their chapter by discussing how to choose what kind of experiment to conduct. For instance, if you’re studying a system about which not much is known (and assuming you have a good reason for doing that!), you may have no choice but to conduct a “see what happens experiment” (“kick the system and see who yells”, as my undergrad adviser David Smith put it). You might want different experiments depending on whether you’re seeking a general, cross-system understanding of some particular phenomenon, vs. intensively studying a particular system. Or different experiments depending on whether you’re setting out to test mathematical theory, or to identify the likely consequences of, say, a dam or some other human intervention in the environment. Problems arise when you don’t think this through. For instance, conducting an experiment just to see what happens, and then retroactively trying to treat it as a test of some theoretical model, hardly ever works (but it’s often tempting, which is why people keep doing it).

So what do you think? Is this a complete list?

*Or, depending on your point of view, “resourceful”.

**Someday, I need to do a post on what makes for a good “model system”.

Posted by: Jeremy Fox | March 13, 2012

Why do mathematical modeling?

Over at Just Simple Enough, Amy Hurford compiles an interesting (and comprehensive?) list of all the reasons why one might do mathematical modeling. Too many people think the only reason for scientists to do modeling is “to make predictions”; this list is a useful corrective. Go read it–especially if you’re not a modeler.

Amy also notes that different references on models in ecology (e.g., Caswell 1988, Otto and Day’s 2007 textbook) offer different reasons to do modeling. None of the references touches on all the reasons in her list, and a couple of her reasons aren’t listed in any of the references she considers.

This list is a good complement to Bill Wimsatt’s discussion of all the various ways in which models can be false, and the various ways in which those falsehoods actually help models serve the various purposes they serve.

Posted by: Jeremy Fox | March 12, 2012

A note on comment moderation

One of the things I’m proudest of about the Oikos Blog so far is the quality of our comments section. Our community comprises a relatively small number of regular commenters (many of whom are folks with blogs of their own), and a larger number of folks who comment less often. Most posts draw at least a few comments, and sometimes we manage to post something that sparks an extended conversation. Comments are almost always smart and thoughtful, and often very funny.

Well, almost always smart and thoughtful. Just now I got an anonymous one-sentence comment characterizing a recent post of mine as an “anti-MacArthur rant”, and thanking another commenter for “ending” the rant (never mind that the other commenter actually thought my post was interesting, and recognized that the post wasn’t actually “anti-MacArthur”, and so didn’t see themselves as “ending” a “rant”).

As regular readers will know, I welcome pushback, and indeed I go out of my way to invite it and engage with it (see, for instance, my debates with Karl Cottenie, Chris Klausmeier, and others in the “zombie ideas” posts, and my debates with Brian McGill in the “macroecology vs. microecology” posts). But I don’t welcome comments like the one I just mentioned, which simply state or imply, without evidence or argument, that I, or anyone, is ranting or ignorant or a jerk or whatever. Such comments do nothing to further the conversation. Accordingly, I have blocked the comment.

In the past, when I have blocked a comment (and this has literally only happened once before), I have emailed the commenter privately, politely explained my reasons for not posting the comment, and suggested ways in which the comment might be modified to make it suitable for approval. I would have done the same in this case, but the commenter provided no name or email address. If you are the commenter in question, I invite you to contact me privately so that I can extend the same courtesy. Or, if you prefer to remain anonymous, you may submit another comment elaborating your criticisms of the post, which I will duly consider. Seriously–I will consider it. My writings for the Oikos Blog often are intentionally provocative; I doubt many people would read the blog if it were written in a dry and measured style, or expressed only uncontroversial opinions. That style of writing necessarily carries some risk that I’ll cross a line. I don’t claim to be perfect, and so if I write something that you think crosses a line that shouldn’t be crossed, you are welcome and encouraged to comment and explain why you think so. But you do need to include the explanation, because if I post something it means I have already thought about it carefully and decided it’s ok, and so it’s not obvious (at least not to me) why the post crosses a line.

I follow a fair number of blogs, in various subject areas including some quite controversial ones relating to politics and economics. These blogs vary widely in how they handle comments. The ones with the best comment sections (e.g., Crooked Timber, Worthwhile Canadian Initiative) are the ones that take the same approach I try to take: the posters engage with the comments (especially reasoned criticisms), they block comments that don’t further the conversation, and as necessary they explain (both privately to problematic commenters, and publicly) their reasons for blocking the comments they’ve blocked.

As I said, the Oikos Blog is blessed with such good commenters that no matter what our comment policy was, we’d hardly ever have to block any comments. Nevertheless, consider this a chance to discuss the issue. In the comments, please share any thoughts you have on comments policies (ours specifically, or more generally).

Posted by: Jeremy Fox | March 9, 2012

Where do Oikos blog readers come from?

WordPress has started providing country-by-country data on where our (non-syndicated) page views come from. Here are some of the data, which only cover the last few days:

  • We’ve had views from 89 countries in just the past few days!
  • Top 10 countries: USA (2506 views), Canada (700), UK (323), Brazil (262), Sweden (228), Australia (185), Spain (143), Germany (141), France (97), New Zealand (74).

I’m kind of surprised that we have readers from that many countries. I’m not surprised that the US, Canada, and the UK are 1-2-3, but I am surprised at some of the other country rankings (Norway and Finland not in the top 10?) I’m also interested in how the number of page views correlates, or doesn’t, with population size and native language. Here’s a plot for the top 10 countries:

The filled points and the regression line (R^2=0.95) are English-speaking countries; open points are non-English-speaking countries. As you can see, for English-speaking countries our pageviews are closely related to population size, although much of the relationship is driven by the US. We’re disproportionately popular in Canada (and perhaps disproportionately unpopular in the UK?!) Pageviews from non-English-speaking countries are unrelated to population size. Relative to population size, we may be more popular in Sweden (the leftmost open point) than we are in some English-speaking countries (thanks for reading, Swedes!)
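For the curious, here’s roughly how that English-speaking regression works out. The view counts are from the list above; the population figures (in millions, circa 2012) are my own rough approximations, so the fit is illustrative rather than exact:

```python
# Redoing the English-speaking-country regression from the list above.
# Views are from the post; populations (millions, ca. 2012) are rough
# approximations of mine, so treat the numbers as illustrative.
views = {'USA': 2506, 'Canada': 700, 'UK': 323, 'Australia': 185, 'NZ': 74}
pop_m = {'USA': 312, 'Canada': 34.5, 'UK': 63, 'Australia': 22.5, 'NZ': 4.4}

xs = [pop_m[c] for c in views]   # population (millions)
ys = [views[c] for c in views]   # page views
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n

# Ordinary least squares by hand: slope and R^2 from the sums of squares.
sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
sxx = sum((x - mx) ** 2 for x in xs)
syy = sum((y - my) ** 2 for y in ys)

slope = sxy / sxx                 # extra views per million people
r_squared = sxy ** 2 / (sxx * syy)
```

With those approximate populations the R^2 comes out around 0.95, consistent with the plot, and you can also see why the US drives so much of the relationship: it contributes most of the variance in both variables.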

Posted by: Jeremy Fox | March 6, 2012

Carnival of Evolution #45

Your monthly compendium of evolutionary writing is here. Includes my discussion of how to use a children’s card game to teach the concept of random drift (which I refer to, somewhat imprecisely, as “neutral drift”, but I have another post making clear that “neutral” and “drift” are not the same thing).

So, when you were young, what did you want to be when you grew up? Was the answer always “ecologist”? I doubt it, unless one of your parents was an ecologist. So, if not an ecologist, what did you want to be?

I wanted to be a paleontologist, until I found out that “studying dinosaurs” might involve “scraping at dirt with a toothbrush for hours”. Then I wanted to be a professional baseball player, until it became clear that I was not going to grow into a good enough athlete to make that feasible.

I know other ecologists who got much closer to pursuing their athletic dreams. I have a colleague in my department who went to college on a golf scholarship and at least toyed with the idea of becoming a professional golfer before becoming an ecologist, and I know another ecologist who went to college on a soccer scholarship (which, he’d be the first to admit, you’d never guess by looking at him now). And Amy Hurford apparently went to college on a basketball scholarship, although just as a means to the end of becoming an ecologist.

Or perhaps you did grow up and do something else, and then switched to ecology. I know a guy who was a professional ballet dancer before he became an ecologist, and someone who got a Ph.D. in physics before switching to ecology as a postdoc.

So what did you want to be when you grew up? Or what were you, before you were an ecologist? Answer in the comments.

Posted by: Jeremy Fox | March 5, 2012

Poll: have you ever eaten your study organism? (UPDATED)

You’re encouraged to explain your answer in the comments.

UPDATE: Who’d have thought that almost half of you study delicious species? Lobster? Rainbow trout? Carrots? It’s as if you chose your PhD supervisors on an empty stomach. And hardly any of you have had the courage to actually eat disgusting/dangerous/fatal species? Where’s the excitement in that? Plus, the “craziest thing you’ve ever done for science” thread is being dominated by modelers and programmers who’ve fit billions of regressions or written hugely complex code–crazy, yes, but hardly in a way that sets the pulse racing. C’mon people, live a little! You all need to go out, eat some bugs, and then tag some sharks while using yourselves as bait. 😉

Posted by: Jeremy Fox | March 5, 2012

Craziest thing you’ve ever done for science?

So, just to keep y’all entertained as I continue cranking away on other tasks: what’s the craziest (or oddest, or most grueling, or etc.) thing you’ve ever done for science? Answer in the comments.

I write a lot about theory on this blog. This post is really for all you “muddy boots” field ecologists, since any modeler who says “I once wrote code that took 6 hours to compile” is just going to look silly. 😉

While you consider your answer, here’s some music to inspire you:

Posted by: Jeremy Fox | March 4, 2012

From the archives: the story of my first publication

I’m currently writing a big grant application on stochastic population dynamics. So now seems a good time to point readers towards this old post of mine, which explains how that was the subject of my very first publication, although I didn’t realize it at the time.

Another favorite from the archives, in which I explain why community assembly is like a chess endgame–and why sometimes neither can be comprehended by human beings.

Posted by: Jeremy Fox | March 2, 2012

“Blind tastings” for scientific papers?

Revisiting my old humorous post on why my papers are like fine wine got me thinking about wine tastings. Wine tastings are often done blind so that the tasters aren’t biased by knowing what they’re drinking. As psychologists have shown time and again, if people think they’re drinking expensive wine, they love it–even if they’re actually drinking the cheap stuff. The same effect shows up with violins (played blind, a Stradivarius sounds no better to professionals than a modern instrument), and indeed with just about anything. If you know it’s “supposed” to be good, you tend to decide that it is good. Duncan Watts just wrote a nice book on this.

So here’s a question: do we need “blind tastings” of scientific papers? Leave aside the question of whether this is even possible (sometimes it is, sometimes it isn’t). Do you think we need them? That is, do readers or reviewers tend to overrate papers by the scientific equivalent of Chateau d’Yquem*, and underrate papers by the scientific equivalent of Stag’s Leap? I’m sure that papers by famous people are more widely-read than papers by unknowns. But what I’m wondering about is our evaluation of a paper, given that we’ve read it.

*Purely as a joke, I was going to link the phrase “the scientific equivalent of Chateau d’Yquem” to a picture of a really famous ecologist, but I chickened out. I was afraid that I might be misread as implying that whoever I linked to doesn’t actually do good work. And I’m not myself a really famous ecologist, so I couldn’t link to a picture of myself.



Posted by: Jeremy Fox | March 2, 2012

From the archives: why my papers are like fine wine

An old humorous post, which actually has a semi-serious point to do with the use of citation metrics to evaluate scientific papers and authors. If you believe in bandwagons, then you should believe that sometimes being well-cited is a bad thing (because it just indicates that the author is riding a bandwagon).

Posted by: Jeremy Fox | March 1, 2012

Apologies for technical glitch

Sorry about the technical glitch just now, which resulted in the last post being published nine times. WordPress provides various buttons on its various screens that one can click to write a new post. For the last post, I used a new button that WordPress recently added, and that I’ve never used before. Apparently there’s a bug, which caused the text to be posted after every auto-save. Suffice to say, I won’t use that button again.

No time for any substantive posts for a little while, as I’m swamped with grant writing and teaching. So here’s one of my favorite posts from the archives. It’s about all the reasons why coexisting species should be similar to one another rather than (or in addition to) different. If you think that niche differences are all that matter for coexistence, or that the only reason coexisting species might be similar to one another is “habitat filtering”, you’re wrong, and you ought to read this post.

I suggest in the post that someone (not me) ought to expand the post into a provocative review paper on this topic. I’m dead serious about that, I think it would become a very widely-cited paper. And I don’t even need any credit, though if you decide to write the paper I do hope you’ll submit it to Oikos as a “Forum” article (again, seriously).

Okay, back to work.

UPDATE FEB. 29: Dave and I have lined someone up for this. Thanks to everyone who expressed an interest.

Dave Vasseur and I are preparing a big grant application to look at the effects of demographic and environmental stochasticity on spatial population dynamics (particularly spatial synchrony), using a tightly-linked combination of mathematical models and microcosm experiments. This is a developing line of research that’s been very successful for us so far (Vasseur and Fox 2007 Ecol. Lett., 2009 Nature, Fox et al. 2011 Ecol. Lett.). There may be an evolutionary component to the grant as well, involving experimental evolution in bacteria.

We plan to propose two postdocs as part of the grant, and the rules of the funding agency to which we are applying (the JSMF) oblige us to name the postdocs we plan to hire as part of the application. We already have one person lined up, but need to line up a second.

Ideally, we’d like someone who has, or is interested in gaining, experience with laboratory microcosm experiments. Strong quantitative skills are a plus. But we’re casting our net broadly here–if you’re interested, please do get in touch.

Location is negotiable (one of the postdocs will be based in Calgary, the other at Yale). Position would be for 3 years. Salary and benefits would be in line with major federally-funded postdocs in the US and Canada. UPDATE: Start date is slightly flexible, but would be fall 2012 or early winter 2013.

The application deadline is Mar. 15, so if you’re interested, please email me (jefox@ucalgary.ca) ASAP, and include your CV and contact details for a couple of referees.

Posted by: Jeremy Fox | February 26, 2012

More on changing your mind in science

A while back I asked what was the biggest scientific claim that you had changed your mind about? At the time, I wasn’t aware that hundreds of very prominent scientists had already answered a slightly broader version of this question in 2008 for the Edge website. I highly recommend browsing through the answers (warning: you can easily end up spending hours!)

For instance, here’s part of astrophysicist Piet Hut’s answer. He no longer sees simple analogies as an effective tool for explaining complicated concepts, because the same analogies also can be used for quite different (and even incorrect) concepts:

I still think I was right in thinking that any type of insight can be summarized to some degree, in what is clearly a correct first approximation when judged by someone who shares in the insight. For a long time my mistake was that I had not realized how totally wrong this first approximation can come across for someone who does not share the original insight…So for each insight there is at least some explanation possible, but the same explanation may then be given for radically different insights. There is nothing that cannot be explained, but there are wrong insights that can lead to explanations that are identical to the explanation for a correct but rather subtle insight.

I think this is a big part of why zombie ideas about the intermediate disturbance hypothesis are so hard to kill. You can summarize zombie ideas about disturbance, and correct ideas about disturbance, using the same words. Indeed, that’s precisely what many of our undergraduate ecology textbooks do. Which makes it very difficult for those who rely on the summaries to distinguish zombies from non-zombies.

The Edge posts a different question every year and then compiles the responses it receives. This year’s question–“What is your favorite deep, elegant, or beautiful explanation?”–should be a fun one. Although, as I’ve pointed out before, while the truth may be beautiful, the beautiful isn’t necessarily true. Which, to their credit, some of the respondents so far, like science writer Carl Zimmer, recognize. Zimmer’s favorite elegant explanation is Kelvin’s calculation of the age of the Earth based on the rate at which an initially-molten Earth would cool down to its current temperature. The calculation is remarkably simple and elegant–and way off base, because Kelvin didn’t know about radioactivity, and because he treated the Earth as a solid ball of rock rather than a solid outer shell surrounding a turbulent liquid mantle. The truth is much messier, more complicated, and harder to understand than Kelvin’s calculation–but it’s still the truth.
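For the curious, Kelvin's number is easy to reproduce. For conductive cooling of a semi-infinite solid, the surface temperature gradient at time t is T0/sqrt(pi * kappa * t); solve for t. The specific input values below are my rough assumptions, chosen to be close to the figures Kelvin used:

```python
import math

# Back-of-envelope version of Kelvin's conductive-cooling age of the Earth.
# All three inputs are assumptions roughly matching Kelvin's figures:
T0 = 3900.0       # K: initial excess temperature of a just-solidified Earth
kappa = 1.18e-6   # m^2/s: thermal diffusivity of rock
grad = (5.0 / 9.0) / (50 * 0.3048)   # K/m: ~1 degree F per 50 ft of depth

# Surface gradient of a cooling half-space: grad = T0 / sqrt(pi*kappa*t),
# so t = T0^2 / (pi * kappa * grad^2).
t_seconds = T0 ** 2 / (math.pi * kappa * grad ** 2)
t_years = t_seconds / (365.25 * 24 * 3600)
print(f"{t_years:.2e}")   # on the order of 1e8 years: Kelvin's ~98 Myr
```

Simple, elegant, and off by a factor of nearly fifty, for exactly the reasons Zimmer gives.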

Posted by: Jeremy Fox | February 24, 2012

Crowdfunding long-form science journalism (UPDATED)

MATTER is an interesting start-up, looking to create a home for something increasingly rare: high quality, long-form commissioned science journalism. The people behind it are experienced professional journalists who’ve written for publications like New Scientist and the Guardian; it is not an amateur effort. So if you’d like to read more long-form science journalism, and would be willing to pay small amounts for it ($0.99 USD per several-thousand-word article), then click the link and consider donating (the link goes to their Kickstarter crowdfunding page). You won’t be alone; they’re clearly going to blow past their initial funding goal.

One fun aspect of this is that donors get to be on the “editorial board”, meaning they get to help choose the stories that get commissioned. Donors also get the first few stories free, and various other goodies depending on how much they donate.

Discussion from top US financial/journalism blogger Felix Salmon here (he thinks it’s a great idea). Further discussion from a skeptic (who thinks it won’t be financially sustainable in the long run) here. UPDATE: One of the founders of Matter responds to skepticism about their business plan here.

Posted by: Jeremy Fox | February 23, 2012

Modeling challenge: explain sheep cyclones

The Art of Modelling poses a question to mathematically-inclined readers: can you build a model of individual movement that explains sheep cyclones?

Even if you’re not a modeller, you should click through to find out what a sheep cyclone is.

Inferring causality is hard. Especially in a world where lots of factors, some of them unknown, causally affect the response variable of interest (and each other), and where there are causal feedbacks (mutual causation) between variables. It’s even harder when, for whatever reason, you can’t do a properly controlled, replicated experiment. What do you do then?

One standard answer is to rely on what Jared Diamond (and probably others) have called “natural experiments”.  The basic idea is as follows. If you think that variation in variable A causes variation in variable B, compare the level of B across systems that vary in their level of A. So instead of manipulating A yourself, you’re relying on the “manipulations” (variations) in the level of A that nature happens to provide.

Unfortunately, natural experiments are infamously unreliable, not just compared to “real” experiments but in an absolute sense. As my PhD supervisor Peter Morin liked to say, “The problem with natural experiments is that there’s no such thing as a natural control.” That is, systems that vary in their level of A often vary in lots of other ways as well, some of which probably also affect the level of B. You can of course try to address this by statistically controlling for the levels of those other variables, assuming you can identify them. And you can try to simply collect lots of data from a large range of systems in the hopes that surely some of the among-system variation in variable A will be independent of all confounding variables. And you can try to get rid of any causal feedbacks from B to A by praying to the god of your choice…

Or maybe there’s a better way. Economists have to deal with all the same challenges in inferring causality that ecologists do. If anything, economists have it even worse because doing relevant experiments often is harder in economics than it is in ecology. In response, economists have come up with an interesting and potentially-powerful approach to inferring causality from natural experiments, the method of “instrumental variables” (IV).

Here’s the basic idea (for details, click the link above, which goes to the very good Wikipedia page on IV). An instrumental variable, call it X, is a variable that causally affects B only via its effect on A, and that is not itself causally affected (directly or indirectly) by B or A. Economists summarize the latter assumption by saying that X is “exogenous”. So you can estimate the causal effect of A on B by using, not just any natural variation in A, but only that natural variation in A that can be attributed to natural variation in X. Changes in X are perturbations that propagate to B via only one causal path, that running from A to B, so variation in the instrumental variable X allows you to estimate the strength of that causal path. The approach can be generalized to multiple causal paths, as long as you have multiple instrumental variables.
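To make the logic concrete, here is a toy simulation (the data-generating process, coefficient values, and sample size are all invented for illustration). A naive regression of B on A is biased by an unobserved confounder, while the IV ratio cov(X,B)/cov(X,A) recovers the true effect:

```python
import random

random.seed(42)
n = 50000
true_effect = 2.0   # the causal effect of A on B we want to recover

# Toy data-generating process: u is an unobserved confounder pushing on
# both A and B; x is the instrument. It moves A, is exogenous, and
# reaches B only through A.
xs, As, Bs = [], [], []
for _ in range(n):
    u = random.gauss(0, 1)
    x = random.gauss(0, 1)
    a = x + u + random.gauss(0, 1)
    b = true_effect * a + 2.0 * u + random.gauss(0, 1)
    xs.append(x); As.append(a); Bs.append(b)

def cov(p, q):
    mp, mq = sum(p) / len(p), sum(q) / len(q)
    return sum((pi - mp) * (qi - mq) for pi, qi in zip(p, q)) / len(p)

naive = cov(As, Bs) / cov(As, As)   # OLS slope of B on A: biased by u
iv    = cov(xs, Bs) / cov(xs, As)   # IV estimate: close to true_effect
print(round(naive, 2), round(iv, 2))
```

With these coefficients the naive slope converges to 8/3 rather than 2; the IV estimate is consistent because the only route from x to B runs through A.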

One thing I find interesting about instrumental variables is that they highlight how “more data” is not always helpful. Tempting as it is to think that, if only you had enough data on A from enough different systems, you could reliably infer the causal effect of A on B, it’s not true. What you need is not more data on the variability of A; you need the right sort of data on the variability of A (namely, that generated by an instrumental variable). Indeed, more of the wrong sort of data on variability in A can actually be harmful to inferring the effect of A on B.

The nice thing about the IV method is that it doesn’t require you to know anything about the rest of the system, such as other variables that might affect B while also covarying with A. All you have to know (and this is the hard part) is that X is what economists call a “good instrument”–that it satisfies the assumptions that make it an instrumental variable.

Which may limit the applicability of IV in ecology. In economics, IVs are often policy changes. For instance, an increase in cigarette taxes should affect health only via its effect on how much people smoke. So you can use changes in cigarette taxes to estimate the effect of smoking on health, thereby getting around the fact that lots of factors may affect both health and smoking, and that people’s health may affect their inclination to smoke. Weather events like droughts also tend to make good instruments in economics.

I’m unsure whether ecologists will often have good instruments available to them. Weather is exogenous to ecological systems as well as to economic systems. But the problem is that weather changes typically affect any variable of interest via multiple causal pathways. And many policy changes certainly have ecological as well as economic effects. But the problem with many policy changes affecting ecological variables is that they’re not exogenous–the policy changes are made in response to observed changes in the variable which the policy change is intended to affect. So if ecologists want to use policy changes as instrumental variables, they may want to focus on policies with unintended ecological consequences. And even there you still might have the problem of unintended consequences propagated via multiple causal paths. But we won’t know if IVs can be useful in ecology if we don’t try them out.

And if you do try out IV and get them to work, I hope you’ll submit the paper to Oikos. 😉

Posted by: Jeremy Fox | February 20, 2012

Biggest week ever for the Oikos blog

Last week was the biggest week ever for the Oikos blog. No surprise, since I did a bunch of posting. But still: 3972 views, including 1124 syndicated views! That’s 567 views/day for those of you scoring at home.

It was also the biggest week ever just counting non-syndicated views (2848), even though many of our non-syndicated views have been replaced by syndicated views since we started putting full posts rather than teasers in our RSS feed.

It’s a bit of a pain to add up the syndicated views since you have to do it by hand from the stats on individual posts. But assuming that the proportion of syndicated views this week was typical (it’s actually probably a bit higher than usual since we had a bunch of posts this week, but whatever), then in a typical week we’re getting 2800-3400 views, or well over 400/day.

Thanks for reading everybody!

Posted by: Jeremy Fox | February 17, 2012

Another upcoming course on models in ecology

Friend of Oikos Blog Chris Klausmeier (“lowendtheory”) writes with details of a series of one-week summer courses on Enhancing Linkages between Mathematics and Ecology (ELME), to be offered at Kellogg Biological Station (MI, USA). I know all the instructors, they’re all excellent, ranging from the world-famous (Hal Caswell) to the someday-will-be-world-famous (Colin Kremer and Don Schoolmaster). Details below.


ELME is a summer educational program at the Kellogg Biological Station devoted to Enhancing Linkages between Mathematics and Ecology.

ELME 2012 will be a sequence of three courses covering: Week 1) Maximum Likelihood Estimation, Week 2) Structural Equation Modeling, and Week 3) Matrix Population Modeling. In this hands-on environment, students will learn the basics in a lecture setting and cement their knowledge with independent and collaborative modeling projects using the computer program R.

Dates: June 4-22, 2012

Hours: Mon-Fri 9-5

Instructors: Week 1) Colin Kremer (Michigan State University), Week 2) Don Schoolmaster (National Wetlands Research Center / USGS), and Week 3) Hal Caswell (Woods Hole Oceanographic Institution)

Target audience: 12-18 graduate students or exceptional undergraduates

Prerequisites: At least one semester of statistics, undergraduate calculus, and familiarity with basic matrix manipulations. Previous exposure to theoretical ecology and R is useful but not required.

Format: A mixture of lecture, guided computer labs, and independent/team projects

To apply, email elme2012@kbs.msu.edu the following:

– your CV

– a statement of research interests and why you’d benefit from the course (< 1 page)

– a statement of relevant educational/research experience, including related coursework (< 1 page)

– the name of a reference who you’ve asked to email a letter of support

Deadline for applications: March 15, 2012

Preference will be given to students interested in all three courses.

Financial support to cover room and board and help defray transportation costs is available. Let us know if this is not necessary.

Academic credit is available; students of MSU and affiliated schools are encouraged to enroll.

For more info see <http://www.kbs.msu.edu/education/elme> or email elme2012@kbs.msu.edu.

Posted by: Jeremy Fox | February 17, 2012

Must-read blog on the art of modeling

Amy Hurford, a recent ecology PhD who worked with the brilliant Troy Day and Peter Taylor at Queen’s University, has a new blog called Just Simple Enough: The Art of Mathematical Modelling. It’s great stuff; you totally need to check it out. She’s thinking out loud, and very articulately, about what makes a great model, why build a model at all, and how coming up with a simple model is often a matter of seeing the problem from the right angle (a topic on which I’ve commented myself).

Seriously, what are you doing still reading this blog? Click the links already!

Posted by: Jeremy Fox | February 16, 2012

Models in ecology course to be offered

My northern neighbor Mark Lewis, Canada Research Chair in Mathematical Biology, will be offering a course on “Models in ecology” for advanced undergrads and grad students at Bamfield Marine Station. Marty Krkosek is the co-instructor.

The course runs Apr. 30-May 18, 2012.

You have to apply to be admitted. Application deadline is Mar. 1. To apply, go here.

The course description is below. It sounds awesome. I especially like how the course is suitable for students from both empirical and theoretical backgrounds. And Mark is one of the very best people in the world at linking math and data, as well as a great teacher and a great guy. So if you want to learn how to do ecology the way I for one think it should be done, this is the course for you.


This course develops the methods, models and tools for quantitative ecology. Students learn to formulate, analyse, parameterize, and validate quantitative models for ecological processes and data. Applications include population dynamics, species interactions, movement, and spatial processes. Approaches involve classical hypothesis testing, computer simulation, differential equations, individual-based models, least squares, likelihood, matrix equations, Markov processes, multiple working hypotheses, and stochastic processes. A computer lab covers simulation and programming methods. Course discussion entails evaluation and appraisal of current literature. This course is open to graduate and undergraduate students.

Prerequisites: Introductory calculus, and statistics/biostatistics, or permission of the instructor(s).

This course is suitable both for field-based biology students and for mathematical/theoretical students who are interested in learning about how to connect models to data in an applied ecological setting.

Posted by: Jeremy Fox | February 16, 2012

What does R-squared mean?

Not “proportion of variance explained”! At least, that’s not the most precise gloss. Nice discussion here.
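One concrete illustration of why the gloss is imprecise: the textbook formula R^2 = 1 - SS_res/SS_tot can go negative when the predictions don't come from a least-squares fit to the very data at hand, and a negative "proportion of variance explained" is nonsense. A minimal sketch (the numbers are made up):

```python
# R^2 = 1 - SS_res/SS_tot for predictions that were NOT least-squares-fit
# to these data. The "model" here gets the trend exactly backwards.
y     = [1.0, 2.0, 3.0, 4.0, 5.0]   # observations
y_hat = [5.0, 4.0, 3.0, 2.0, 1.0]   # predictions from a bad model

mean_y = sum(y) / len(y)
ss_tot = sum((yi - mean_y) ** 2 for yi in y)                     # 10.0
ss_res = sum((yi - fi) ** 2 for yi, fi in zip(y, y_hat))         # 40.0
r2 = 1 - ss_res / ss_tot
print(r2)   # -3.0: "explaining" a negative proportion of the variance
```

For ordinary least squares with an intercept, R^2 does equal the squared correlation between observed and fitted values, which is where the usual gloss comes from; step outside that setting and the gloss breaks down.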

HT Jarrett Byrnes (via Twitter)

Posted by: Jeremy Fox | February 16, 2012

Mathematics and ecology survey

The International Network of Next-Generation Ecologists is surveying ecologists about their knowledge of mathematics and their views on how to incorporate mathematics into the training of ecologists. It’s a short survey (it took me less than a minute), go take it here.

Just make all the usual judgment calls and conduct all the usual “exploratory” analyses that scientists conduct all the time!

The linked paper is the best paper I’ve read in a long time. It’s essential reading for everyone who does science, from undergraduates on up. It’s about experimental psychology, but it applies just as much to ecology, perhaps even more so. It says something I’ve long believed, but says it far better than I ever could have.

One partial solution to the problems identified in this paper is for all of us to adhere a lot more strictly to the rules of good frequentist statistical practice that we all teach, or should teach, our undergraduates. Rules like “decide the experimental design, sampling procedure, and statistical analyses in advance”, “don’t chuck outliers just because they’re ‘outliers'”, “separate exploratory and confirmatory analyses, for instance by dividing the data set in half”, “correct for multiple comparisons”, etc. Those rules exist for a very good reason: to keep us from fooling ourselves. This is not to say that judgment calls can ever be eliminated from statistics–indeed, another one of my favorite statistical papers makes precisely this point. But those judgments need to be grounded in a strong appreciation of the rules of good practice, so that the investigator can decide when or how to violate the rules without compromising the severity of the statistical test.
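The "correct for multiple comparisons" rule is easy to motivate with a quick simulation (the numbers of tests and trials below are arbitrary): run 20 tests when every null hypothesis is true, and the chance of at least one spurious "significant" result is 1 - 0.95^20, roughly 64%.

```python
import random

# Under a true null hypothesis, p-values are uniform on [0, 1]. Simulate
# many "experiments" of 20 such tests each and count how often at least
# one test comes up significant, with and without Bonferroni correction.
random.seed(0)
trials, tests, alpha = 20000, 20, 0.05
false_alarm_naive = false_alarm_bonf = 0
for _ in range(trials):
    ps = [random.random() for _ in range(tests)]
    false_alarm_naive += any(p < alpha for p in ps)          # no correction
    false_alarm_bonf  += any(p < alpha / tests for p in ps)  # Bonferroni
print(false_alarm_naive / trials)   # around 0.64 = 1 - 0.95**20
print(false_alarm_bonf / trials)    # back down near alpha = 0.05
```

The same arithmetic is why undisclosed flexibility in analyses (try a few outlier rules, a few covariates, a few subgroups, report what "works") inflates false-positive rates even when no single choice looks dishonest.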

Basically, what I’m suggesting is that, collectively, our standards about when it’s ok to violate the statistical “rules” may well be far too lax. Of course, if they were less lax, doing science would get a lot harder. Or rather, it would seem to get a lot harder. In fact, doing science that leads to correct, replicable conclusions would remain just as hard as it always has been. It would only seem to get harder because we’d stop taking the easy path of cutting statistical corners. And then justifying the corner cutting by making excuses to ourselves about the messiness of the real world and the impracticality of idealized textbook statistical practice.

The linked paper discusses another solution: to report all judgment calls and exploratory analyses, so that reviewers can evaluate their effects on the conclusions. Sounds like a great idea to me. They also note, correctly, that simply doing Bayesian stats is no solution at all. The paper is emphatically not a demonstration of inherent flaws in frequentist statistics.

Further commentary from Andrew Gelman here.

Here’s an issue which I’ve encountered occasionally as a referee over the years (though not recently, and not as a handling editor as far as I can recall). It concerns manuscripts for which a student is the lead author, and their supervisor is a co-author. Once in a while I find that such a manuscript contains one or more serious mistakes, such as confusion about basic concepts, an experimental design that completely confounds key factors, failure to measure important response variables that obviously should’ve and easily could’ve been measured, or serious statistical errors such as analyzing a nested design as if it were a factorial design. The nature of the errors is such that I would not expect to encounter them in papers lead-authored by the supervisor.

So my assumption (and I emphasize that it is an assumption) is that one of two things is going on.* Either the supervisor didn’t really read the paper carefully before it was submitted, and so wasn’t fully aware of the mistakes or of their seriousness. Or else the supervisor was fully aware of the mistakes, but decided that “it’s the student’s paper, let him make his own mistakes”. And of course, these possibilities aren’t mutually exclusive, since a supervisor who gives his students a lot of freedom and lets them make their own mistakes is the sort of supervisor who might let students submit an ms without first reading it carefully.

My question to you is: are you bothered by this? Because I am, but I’m not sure if that’s just me. I’m bothered for several reasons. First, either possibility I’ve described would seem to be a violation of the published rules of most journals, which require that all authors take responsibility for everything in the manuscript. Second, even if those journal rules didn’t exist, wouldn’t you still want to make sure that any science with your name on it was correct? Third, I’m most bothered by the apparent willingness of some supervisors to effectively force reviewers to do the training that the supervisors ought to be doing.

Note that the situation is totally different if the supervisor isn’t a co-author. As a reviewer, I’m not the least bit bothered if I’m reviewing a manuscript sole-authored by a student and find serious mistakes that a more experienced author probably wouldn’t make. Note also that I’m all in favor of allowing students a lot of freedom, including the freedom to make mistakes. But that freedom does not extend to the freedom to make serious, clear-cut mistakes with my name on them.

But then again, maybe I shouldn’t be bothered by this. One could take the view that it’s the job of reviewers to identify mistakes, no matter what the source of those mistakes. Further, even very experienced people do sometimes make serious mistakes (like believing in zombie ideas!), so maybe my annoyance here is based on the false premise that there are some mistakes that should just never happen in any paper with an experienced co-author.

What do you think? Should supervisor co-authors let student lead authors make serious mistakes? Should reviewers care if they do? Or is this whole post just based on a false premise?

*Actually, I suppose there are at least two other possibilities: the supervisor is aware of the mistakes and their seriousness, but either hopes the reviewers won’t notice or care, or else hopes to be given the opportunity to fix the mistakes in a revision. But I ignore these possibilities, because considering them is too depressing.

p.s. to my own students: this post was not inspired by you!

Posted by: Jeremy Fox | February 15, 2012

The bright side of a zombie (ideas) apocalypse

One of my favorite comics asks whether we shouldn’t just let the zombies win.

Posted by: Jeremy Fox | February 14, 2012

Want to cite the Oikos Blog? Here’s how! (UPDATED)

My fellow editor Mark Vellend just emailed me with the fruits of his research on how to formally cite blog posts. While standards are still evolving and many ecology journals have no official policy, you can find guidance here and here. (UPDATE: second link fixed)

The latter link includes advice on how to cite pseudonyms, which uses the unintentionally amusing example of citing the Dalai Lama. Mark points out that someone following the linked guidance would list the author of my posts as “oikosjeremy” rather than Jeremy Fox. Which I don’t really think is a big deal–it’s not like it’s a secret who “oikosjeremy” is. Mark suggests that I may want to stop blogging under a pseudonym, in anticipation of being cited. On the contrary, my thought is to start blogging under a different, more entertaining pseudonym. Like “Charles Darwin”. Or “thegreatestecologistintheworld”, like in the old Calvin and Hobbes cartoon where Calvin signs his homework “Calvin, Boy of Destiny”. Or maybe “Author’s Name”, so that the citation would read “A. Name”. Kind of like when two British football (soccer) players a few years ago bought a couple of racehorses and named them “Some Horse” and “Another Horse” in the hopes that one day an announcer calling a horse race would be forced to say “And down the stretch they come, it’s Some Horse leading by a length over Another Horse!” 😉 (UPDATE: I’ve chickened out and changed my display name to my real name).

I know you’re going to do it anyway, so go ahead and suggest new pseudonyms for me in the comments. Indeed, I predict that top commenter Jim Bouldin is going to spend hours thinking up suggestions on this. 😉

Posted by: Jeremy Fox | February 12, 2012

Advice: how to choose a PhD program (UPDATED)

Joan Strassmann has a nice post at Sociobiology about how to choose a PhD program. I agree with most, but not all, of what she has to say.

I don’t agree that you should just avoid M.Sc. programs if you think you might want a Ph.D. Unless you’re sure you want a Ph.D., doing an M.Sc. is a good way to hedge your bets. It’s a much smaller commitment, both on your part and on the part of your advisor. If you’re unsure exactly what you want to work on, an M.Sc. can be a good way to find out, and it gives you a natural way to change directions as your interests evolve. Lots of people do an M.Sc. in one lab and a Ph.D. on a somewhat different topic in another lab. But to do this you have to start, and finish, your M.Sc. first. If you start a Ph.D. and then for whatever reason decide you don’t want to finish it, you can often take an M.Sc. instead, but that’s colloquially known as a “terminal” M.Sc. If you do that, you’ll typically have a very hard time convincing anyone to take you on for a Ph.D. After all, you already tried and failed to finish one Ph.D.–why should anyone think you’ll succeed on your second try, or choose you over a competing applicant who hasn’t failed to finish? You may not think that’s fair, but that’s the reality.

There are other reasons to do an M.Sc. first. You’ll get a paper or two out of your M.Sc. thesis, which will make your CV stronger when you eventually finish grad school and enter the academic job market. Doing an M.Sc. and then a Ph.D. does extend your total time in grad school, but often not by much because you get all your coursework out of the way during your M.Sc. An M.Sc. also can qualify you for various jobs (e.g., certain environmental consulting positions, some technician/lab manager positions) for which a bachelor’s would not qualify you, so it’s not as if you’ve wasted your time if you end up deciding not to go for a Ph.D. And as for funding, while some universities don’t provide funding for their M.Sc. students, many do.

Note that I’m not saying you should do an M.Sc. to find out if you want to go to grad school at all. You should do your homework and figure that out before you start applying to graduate programs. You should be doing an independent study or honors research project, taking research assistant positions, and talking to your TAs to find out what doing research, and graduate school, is like.

UPDATE: Zen Faulkes has a nice post on why grad students fail. Many of these pitfalls can be avoided by doing your homework, and honestly assessing your own background and motivations, before you start applying.

Other advice I’d add:

Choosing the right advisor is more important than choosing the right program. I went to Rutgers because I wanted to work with Peter Morin, even though the Rutgers EEB program wasn’t on any lists of the top graduate programs in EEB. I’ve never regretted the choice. Don’t get me wrong, it’s fun and stimulating to be part of a top graduate program because you have the opportunity to interact with so many really good faculty and students. But your relationship with your supervisor is going to be much more important to your grad school experience. If you get on well with your supervisor and your labmates, you’ll have a good experience in grad school. If not, not. And once you leave, who your supervisor was counts for more than which program granted you your degree. When I was looking for jobs, I wasn’t viewed as a Rutgers graduate–I was viewed as a Morin lab graduate.

The exception to the above is that you definitely do not want to do your PhD at the same university where you got your bachelor’s (and your MSc, if you got one). If you don’t leave the nest, people will assume you can’t fly. Getting a bachelor’s in one place and both your graduate degrees somewhere else is fine. And getting a bachelor’s and master’s in one place and a Ph.D. someplace else is fine too. But getting your Ph.D. and all your pre-Ph.D. degrees in one place looks bad, even if you do different degrees under different advisors.

Here’s some advice I really can’t emphasize enough, because it concerns a really common mistake prospective grad students make. Before you contact any prospective advisor (at least in N. America), do your homework. Have a good look at their website, and read a couple of their papers. Then write each prospective advisor a personal email which is addressed specifically to that person and only to that person. Describe your background, interests, and long-term (i.e. post-grad school) goals, and say specifically why you want to join their lab (which doesn’t mean having a specific project in mind, of course). If you don’t do that, you’ve already gotten off on the wrong foot, and with many supervisors (including me) you’re already pretty much doomed. Most every decent advisor is very busy, and receives many, many inquiries from prospective students, many of them obviously bulk emails (many but not all of which come from students in developing countries). Every professor I know deletes such emails without reading them. If you can’t be bothered to take the time to do your homework before contacting me, why should I take the time to reply to you? Rather than signaling that you’re seriously interested in my lab, you’ve just signaled to me that you’re the kind of student who likes to cut corners and who doesn’t show initiative. Plus, if you don’t do your homework before contacting me, I’m just going to reply by asking you about your background, interests, long-term goals, and what specifically interests you about my lab. I mean, how else am I supposed to reply? So why not just save us both some time and send me a detailed, personal email to start with?

And then once you’ve corresponded with a few prospective advisors and narrowed the field to a few top choices, make sure to visit those labs before you make your final decision, and ideally before admission decisions are even made. Besides meeting your prospective advisor, you’ll get to meet their current grad students, see the facilities, and check out the city and the surrounding area. I actually insist on meeting prospective students face to face, with rare exceptions for students who are highly recommended by close colleagues whose judgment I trust. And no, skype or a phone call isn’t really a substitute. Most prospective advisors will at least encourage a visit even if they don’t insist on it, and will be happy to pay for it. Both you and your prospective advisor are considering making a big commitment to each other. It’s to the advantage of both of you to be as sure as possible that you’re a good match.

Posted by: Jeremy Fox | February 11, 2012

College vs. graduate school

Here. The diagram on the left is true. The diagram on the right isn’t, but it often feels like it is.

Posted by: Jeremy Fox | February 11, 2012

Herding professor cats

I can’t possibly comment on how true this is.

HT Denim and Tweed.

Posted by: Jeremy Fox | February 7, 2012

Drilling down vs. scaling up

Biological Posteriors asks a good question: how far down the [mechanistic] rabbit hole should one go to get an answer to any question? For instance, if you want to understand plant distributions, do you need to study plant physiology? Or even plant biochemistry?

Briefly, I’d say it depends on how you’ve framed the question, what sort of answer you’re looking for (e.g., a quantitative vs. a qualitative answer), and whether there’s anything comprehensible at the bottom of the rabbit hole.

But here I want to respond by asking a question of my own: why assume that you can only find the right mechanistic “level” by starting at a high level and then drilling down? Why not go the other way? Why not scale up? That is, start with a (possibly very detailed) “low level” mechanistic description of the physiology, life history, and behavior of individual organisms, and then ask about its higher level implications for density-dependence of population growth rate, coexistence, ecosystem function, etc.? There are lots of successful examples of this approach, indeed too many to list.

Note that this approach need not restrict you to building and simulating very computationally-intensive individual-based models. For instance, it may well be possible to derive a tractable analytical, high level approximation to your individual-based low level simulation. Importantly, that high level model, although simple, may well be different than the simple high level model you would’ve invented if you hadn’t first done the low level model and then scaled up. The work of Drew Purves, Steve Pacala and colleagues approximating the famous SORTIE model of forest dynamics is a fine example (Purves et al. 2008, Strigul et al. 2008).
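To make the "start low, then scale up" idea concrete, here's a minimal sketch of my own (a toy, not SORTIE or the Purves et al. approximation): a stochastic individual-based birth-death model in which each individual gives birth at a fixed rate and dies at a density-dependent rate. The mean-field "high level" approximation of this low-level model is just the logistic equation, so you can check the simulation's long-run abundance against the analytical equilibrium (b - d0)/dd. All parameter values here are made up for illustration.

```python
import random

def ibm_simulate(n0=10, b=1.0, d0=0.1, dd=0.001, t_max=40, seed=1):
    """Low-level individual-based model: each individual gives birth at
    rate b and dies at rate d0 + dd * N (crowding). Simulated with the
    Gillespie algorithm; returns the population size at time t_max."""
    rng = random.Random(seed)
    n, t = n0, 0.0
    while t < t_max and n > 0:
        birth_rate = b * n
        death_rate = (d0 + dd * n) * n
        total = birth_rate + death_rate
        # Time to the next event, then pick whether it's a birth or a death
        t += rng.expovariate(total)
        if rng.random() < birth_rate / total:
            n += 1
        else:
            n -= 1
    return n

def logistic_equilibrium(b=1.0, d0=0.1, dd=0.001):
    """High-level mean-field approximation: dN/dt = (b - d0 - dd*N) * N,
    i.e. logistic growth, with equilibrium N* = (b - d0) / dd."""
    return (b - d0) / dd
```

With the default parameters the analytical equilibrium is 900 individuals, and the simulation fluctuates around that value, which is the point: the simple high-level model wasn't assumed, it was derived from the individual-level rules.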

So how do you decide whether to start high and drill down, or start low and scale up? Well, it’s often good to start at a level at which you already know, or can easily find out, a fair bit. In other words, don’t think about whether to drill down or scale up, think about starting from what you know and then working (upwards or downwards) towards something you don’t know.

It’s also worth noting that, if you don’t know how to drill down, you often won’t know how to scale up either, and vice-versa. This is something I wish a lot of macroecologists would take to heart. Macroecologists often argue that we don’t know how to scale up from individual- and population-level mechanisms to their macroecological consequences. Which is true enough. But they seem to take that as an argument for starting at the macroecological level and then drilling down. Which I confess I don’t understand. For instance, writing in the most recent issue of Oikos, Gotelli and Ulrich argue that we don’t know how to specify and parameterize system-specific process-based models of species interactions and dispersal.* But they present this as a reason to focus on null models that test for certain non-random patterns in presence-absence matrices (data matrices indicating which species are present at which sites). But if we don’t know how to build and parameterize low-level process-based models, why should we be at all confident in our ability to build high level null models that omit the effects of certain processes (such as interspecific competition)? Especially null models that putatively apply, not just to one specific system, but very generally? Because take my word for it, it is really easy to come up with very plausible low-level competition models in which competition generates presence-absence matrices that look nothing like those tested for by any of the standard null models. And conversely, it’s surprisingly difficult to come up with generally-applicable low-level process-based models that produce some of the high-level patterns that null models often test for (such as “checkerboard distributions”, where sites contain species A or species B, but never both). To be fair, I think Gotelli and Ulrich are aware of this issue, although they don’t put it quite this starkly. 
But I’m not sure even they have fully taken to heart the notion that, if we don’t know how to scale up from microecology to macroecology, we don’t know how to drill down either.
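To make the "checkerboard" pattern concrete, here's a small sketch in the spirit of the checkerboard-unit count of Stone and Roberts' C-score (my own toy version, not Gotelli and Ulrich's implementation): for each pair of species and pair of sites, count the 2×2 submatrices in which each species occupies exactly one site and the two species never co-occur.

```python
from itertools import combinations

def checkerboard_units(matrix):
    """Count 'checkerboard units' in a presence-absence matrix, where
    matrix[i][j] = 1 if species i is present at site j, else 0. A unit is
    a species pair and a site pair forming (1,0 / 0,1) or (0,1 / 1,0)."""
    count = 0
    n_species, n_sites = len(matrix), len(matrix[0])
    for i, j in combinations(range(n_species), 2):
        for a, b in combinations(range(n_sites), 2):
            block = (matrix[i][a], matrix[i][b], matrix[j][a], matrix[j][b])
            if block in {(1, 0, 0, 1), (0, 1, 1, 0)}:
                count += 1
    return count

# A perfect checkerboard: species A at site 1 only, species B at site 2 only.
perfect = [[1, 0],
           [0, 1]]

# Two ubiquitous species: no checkerboard structure at all.
uniform = [[1, 1],
           [1, 1]]
```

Null models then ask whether a matrix has more such units than expected by chance; the point in the text is that it's easy to write down low-level competition models that produce few or none of them.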

*Grouchy aside: I also don’t understand why macroecologists harp on the purported impossibility of specifying and parameterizing low-level models for many species. First of all, as the example of SORTIE (and other examples) shows, it’s perfectly possible to build and parameterize very detailed process-based individual-level models of entire communities, or of dynamically-sufficient subsets of those communities. Second of all, why would anyone think that scaling from microecology to macroecology is totally impossible unless we have a fully-specified and parameterized model of the low-level microecological processes? For instance, you don’t need to build such a model to show experimentally that local communities are effectively closed to colonization (e.g., Shurin 2000). Which is all you need to show in order to refute the once-common macroecological claim that linear local-regional richness relationships imply that local communities are highly open to colonization. I guess I must be missing something here, because very smart macroecologists whose work I really respect keep emphasizing the claim that we can’t build and parameterize low-level process-based models of community dynamics. Which just seems like such an obvious straw man. Hopefully folks will weigh in in the comments on this and set me straight.

Posted by: cjlortie | February 6, 2012

Hedgerows and bees

A very nice piece of reporting on work by Jeff Ollerton forthcoming in Oikos. The newspaper is The Guardian, and here is the link.

The experimental test of the study is described very well near the end of the article.  I will post a link to the Oikos article as soon as I get it.  I think this is both a nice piece of journalistic reporting and a novel, useful study.  Good stuff (like honey).


The Buell and Braun awards go, respectively, to the best student talk and poster at the Ecological Society of America Annual Meeting. They’re nice awards: besides the prestige, you get $500 plus travel reimbursement to the following year’s meeting.

To win the awards, the first thing you have to do is register. Unfortunately, there should probably be an award for students who can figure out how to do this. Buried in the “About ESA” section of the ESA website (not in the section about the upcoming Annual Meeting) is a page describing all the ESA awards. Click on “Buell & Braun Awards” to see the application rules and download the application form. The deadline for submitting the application form is Mar. 1.

Note that registering for the awards is a separate process from submitting your abstract (the deadline for which is Feb. 23, 5 pm US ET), and registering to attend the meeting. You have to do all of these things to be considered for the Buell or Braun award.

Note as well that the application form asks for a statement of up to 250 words describing how your research will advance the field of ecology. That statement is in addition to your abstract. Note however that 250 words is only an upper limit. You could just write a couple of sentences, even ones just pulled from your abstract. Without getting into specifics, if the judging works the way it did last year, this approach would not affect your chances of winning.

If all this seems like an unnecessarily complicated process to you, I don’t disagree. Indeed, every year many very good students decide it’s not worth all the bother. In a typical year, fewer than 20 students register to be considered for the Braun award, and only a few dozen register to be considered for the Buell. Both are small fractions of the total number of students giving posters and talks, respectively. And I know from personal experience last year, as a judge for the Buell and Braun as well as for other student awards handed out by the ESA sections, that many of the very best students choose to register only for the section awards, for which the registration process typically is much easier. That’s even though the section awards are less lucrative.

Probably the silliest part is asking students to write an extra statement about how their research will advance the field of ecology. As a student commenter on last year’s post about the Braun award notes, that’s what your abstract is supposed to do. Students are quite rightly annoyed by application procedures that take up some of their scarce time while serving no obvious purpose. Frankly, I’m surprised that extra statement is still required. Last year I sat on the committee that chose the Buell and Braun winners, and we discussed the application procedure and agreed that the requirement for the extra statement should be dropped. I don’t know why there’s no change in the application procedure this year, but I’ll be asking and will update with any information I find out.*

Despite all that, I strongly urge all interested students to register for consideration for the Buell and Braun awards. The potential payoff is worth the effort, even though much of that effort probably shouldn’t be necessary. Particularly because many of your fellow students aren’t going to bother registering, thereby increasing your odds of winning!

*There is a sense in which these hurdles serve a “purpose”: by holding down the number of applicants, they make it easier to find enough judges. In my view, there are much better ways to ensure that the judges aren’t swamped by too many applicants. Just discouraging people from applying by making the application process unnecessarily complicated has the unfortunate side effect of reducing the quality of the applicant pool, since as far as I can tell there’s no positive correlation between “willingness to apply” and “competitiveness for the award”. Obviously, the powers that be could try to recruit more judges, but they already work pretty hard at that, so it is necessary to find some way to either hold down the number of applicants, or judge them more efficiently. They could reduce the number of judges per presentation. Currently, it’s 4-6, which seems like twice as many as necessary–we only get 2-3 referees on our peer-reviewed papers! They could get a few people to pre-screen the posters in the morning each day and then only send judges to meet with the top candidates during the evening poster session. They could even pre-screen the posters in advance by asking students to submit an image of their poster a couple of weeks before the meeting. As for talks, I suggest only allowing students to apply for the Buell award twice. That way students will only apply for consideration when they feel they have their best stuff to present (in most cases, a nearly-complete MSc or PhD project), and you’ll have a manageably-sized applicant pool that likely includes most of the strongest presentations. I don’t claim any of these solutions is perfect, merely that they’d be better than the status quo. Bringing in some combination of these changes, so that the application process can just be reduced to a checkbox during the abstract submission process, seems like the way to go to me.

In the comments, please provide your own suggestions for how to arrange the application process for the Buell and Braun awards.

Posted by: Jeremy Fox | February 3, 2012

Postdoc in plant population ecology

Friend of Oikos Blog* Peter Adler and colleagues are seeking applicants for a postdoc in plant population ecology. The ad is below. Peter’s a terrific plant population ecologist and this sounds like a neat project.

In the past I’ve only posted job ads for myself and other Oikos editors, so I’m stretching a bit here. But I decided I’m ok with posting the occasional job ad as a favor to a close colleague, as long as the ad is likely to be of interest to a sufficient number of blog readers, and as long as I don’t feel like the ads are “diluting” the other content of the blog. But I’d welcome feedback in the comments as to whether Oikos Blog ought to be posting job ads, and if so, under what circumstances.



Plant ecologist/population biologist

We anticipate hiring a post-doctoral researcher for a two-year position with possibility of extension, working primarily with Drs. Jeremy James (Oregon State University), Elizabeth Leger (University of Nevada Reno), and Peter Adler (Utah State University) on a USDA-NIFA funded project. The broad goal of the project is to quantify variation in the demographic processes and ecological conditions that limit native plant establishment along major environmental gradients in the Great Basin. Major duties of the position include: 1) supervise collection of demographic data by field crews in Oregon, Idaho and Nevada; 2) compile and analyze data, and work with project scientists to build and interpret population models; 3) design and implement additional studies and analyses that complement project objectives; 4) prepare and submit papers for publication.

This project provides an exciting opportunity to ask important questions about native plant recruitment and population dynamics in relation to environmental variation and environmental change. The post-doctoral researcher will have substantial creative latitude to develop complementary lines of inquiry and also will have numerous opportunities to collaborate with a diverse project team including ecologists, sociologists, economists, and education specialists.

The ideal candidate will have a PhD in ecology or a related field, excellent field skills in plant demography, and experience or interest in population modeling, as well as a demonstrated ability to lead project teams. The permanent work site is negotiable (the position could be based out of Burns, OR, Reno, NV or Logan, UT), but the post-doctoral researcher will spend a substantial amount of time overseeing and participating in data collection during the growing season at field sites in Oregon, Idaho and Nevada. The proposed starting date is June 2012, lasting through June 2014, though the start date is flexible. Salary is competitive, and includes benefits. Consideration of interested applicants will begin April 15, 2012, and continue until the position is filled. To be considered, please email a CV, a description of your research interests and background, and the names and emails of three references as one pdf to: jeremy.james@oregonstate.edu.

Please feel free to contact Dr. James (Jeremy.james@oregonstate.edu), Dr. Adler (peter.adler@usu.edu), or Dr. Leger (eleger@cabnr.unr.edu) with any questions.


Economics, even ecological economics, isn’t something I’d ordinarily write about on the Oikos Blog–it’s not really the blog’s purpose, and it’s not something I’m really qualified to write about. But I’m making an exception to plug a very interesting exercise in ecological economics by my Calgary colleague M. Scott Taylor.

From 1870 to the late 1880s, the American buffalo (bison) population declined from roughly 10-15 million to about 100. The decline itself isn’t a shock–settlers were spreading west, hunting buffalo and grazing cattle as they went. But the decline was much less steep before 1870. Why the sudden crash? Various explanations have been proposed, to do with things like changes in Native American hunting practices, hunting by the US Army, and the expansion of railroads. None of these explanations is especially convincing.

In a new paper in the Dec. 2011 issue of American Economic Review, Scott uses a combination of theoretical modeling, empirical time series analysis, and historical research to develop a much more convincing explanation, to do with technological innovation and international trade. It turns out that, just before the crash began, tanners in England came up with a way to tan buffalo hide into leather, which previously had not been possible. This instantly created a massive new international market for buffalo hides; previously there wasn’t a big market for any buffalo product. Scott uses old trading records to infer that around 6 million buffalo hides, representing a kill of about 9 million buffalo, were exported from the US from 1871-1873. The combination of a technical innovation, a huge international market, open access buffalo hunting, and fixed world prices (buffalo weren’t a large enough fraction of the world leather supply for their decline to drive up prices) appears to be what ultimately drove the American buffalo to the brink of extinction.
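The inference from hide exports back to the total kill is simple arithmetic, sketched below. The two-thirds recovery rate is just the ratio implied by the figures above (6 million hides from a roughly 9 million kill), not a number taken from Scott's paper, and the function name is my own.

```python
def implied_kill(hides_exported, recovery_rate):
    """Back out the total number of buffalo killed from hide export counts,
    given the fraction of kills that yielded a marketable, exported hide."""
    return hides_exported / recovery_rate

# Roughly 6 million hides were exported in 1871-1873; if about two thirds
# of kills yielded an exported hide, that implies a kill of about 9 million.
kills = implied_kill(6_000_000, 2 / 3)
```

The interesting empirical work is, of course, in estimating the export count from old trading records and bounding the recovery rate; the arithmetic itself is the easy part.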

The echoes of this crash still reverberate today. The buffalo slaughter of the 1870s was widely deplored at the time as wanton and wasteful. Some of those who witnessed it first hand, including Teddy Roosevelt, John Muir, and William Hornaday, founded the conservation movement in the US. One of the first and greatest successes of that movement was the creation of the national park system, including Yellowstone and its tiny remnant buffalo herd. And there’s a lesson for today as well: when small countries worry that the combination of technological innovation and global markets will decimate their natural resources, Americans ought to be willing to listen–because the US was once in the same boat.

I’ve seen Scott give a talk on this work, and can attest that it’s a really nice piece of science. I really like the fact that Scott put in the effort to develop every possible line of evidence, including combing through historical records and building a theoretical model, rather than just relying on those bits of evidence he found easiest to access or develop. Too often in ecology, and probably in every field, investigators follow the path of least resistance and focus narrowly on whichever lines of evidence they find most convenient or congenial to work with. And then they ignore or argue with people who’ve come to different conclusions based on different lines of evidence. I also like Scott’s effort to be as quantitative as possible, for instance by estimating how many buffalo hides were exported. It turns what otherwise would’ve been a theoretical model plus suggestive historical anecdotes into a quite convincing story.

See also further discussion at Conversable Economist (on which this post is based).


Posted by: Jeremy Fox | February 3, 2012

Tenure-track position in theoretical/computational ecology

My fellow Oikos editor Andre de Roos passes on word that the University of Amsterdam is hiring a tenure-track theoretical/computational ecologist. Details here.

Posted by: Jeremy Fox | February 2, 2012

Ecologist interview: Juliana Mulroy

Sarcozona resumes her series of interviews from last year’s ESA Meeting (better late than never!) with a chat with plant population ecologist Juliana Mulroy from the ESA Historical Records Committee.

Posted by: Jeremy Fox | February 2, 2012

Carnival of Evolution #44

The best of last month’s online evolutionary writings, here. Get ’em while they’re hot!

Posted by: Jeremy Fox | February 2, 2012

Advice: how to give a good presentation

Over at NeuroDojo, Zen Faulkes has been doing a lengthy series of posts on how to give a good presentation. The latest one, on the need to avoid “shortcuts to credibility” (like trying to talk differently than you usually talk), is here. The whole series is recommended for students.

« Newer Posts - Older Posts »