Simplifying a complex, overdetermined world

Submitted by drupaladmin on 31 May 2012.

Ecology is complicated. Anything we might want to measure is affected by lots of different factors. As a researcher, how do you deal with that?

One way to deal with it is to try to focus on the most important factors. Try to "capture the essence" of what's going on. Focus on developing an understanding of the "big picture" that's "robust" to secondary details (meaning that the big picture would basically look and behave the same way, no matter what the secondary details). This is how I once would have justified my own interest in, say, simple theoretical food web models (e.g., Leibold 1996). Sure, they're a caricature of any real-world system. But a caricature is a recognizable—indeed, hyper-recognizable—portrait. The whole point of a caricature is to emphasize the most important or distinctive features of the subject. A caricature that's not recognizable, that's not basically correct, is a very poor caricature.

But here's a problem I've wondered about on and off for a long time: what's the difference between a simplification that "captures the essence" of a more complex reality, and one that only appears to do so, but actually just gives the right answer for the wrong reasons? After all, as ecologists we aren't in the position of an artist drawing a caricature. We don't know for sure what our subject actually looks like, though of course we have some idea. So it's not obvious that our caricatures are instantly-recognizable likenesses of whatever bit of nature we're trying to caricature.

Now, one possible response to this concern is to deny that getting the right answer for the wrong reasons is even a possibility. If we develop a simplified picture of how the world works, then any omitted details which don't change the predictions are surely unimportant, right? If our model makes basically the right predictions, then it's basically right, at least as far as we can tell? Right?

I'm not so sure. The reason why I worry about this is what philosophers call "overdetermination". Overdetermination is when some event or state of affairs has multiple causes, any one of which might be sufficient on its own to bring about that event or state of affairs, and perhaps none of which is necessary. Philosophers, at least the few I've read, are fond of silly examples like Sherlock Holmes shooting Moriarty at the exact same instant as Moriarty is struck by lightning, leaving it unclear what caused Moriarty's death. But non-silly examples abound in ecology. Here's one from theoretical ecology (I could easily have picked an empirical example). The Rosenzweig-MacArthur predator-prey model predicts predator-prey cycles for some parameter values. Imagine adding into this model a time lag between predator consumption of prey and predator reproduction, one which is sufficient on its own to cause predator-prey cycles. Now here's the question: is the original Rosenzweig-MacArthur model a good approximation that "captures the essence" of why predator-prey cycles occur when there's also a time lag? Put another way, is the original Rosenzweig-MacArthur model "robust" to violation of its assumption of no time lags? Or in this more complex situation, is the Rosenzweig-MacArthur model misleading, a bad caricature rather than a good one?
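To make the Rosenzweig-MacArthur example concrete, here's a minimal numerical sketch of the model (crude forward-Euler integration; all parameter values are illustrative assumptions on my part, chosen only so that a low prey carrying capacity gives a stable equilibrium while a high one gives sustained cycles):

```python
# Minimal Euler-integration sketch of the Rosenzweig-MacArthur
# predator-prey model: logistic prey growth, type II functional response.
# Parameter values are illustrative, not taken from any particular study.

def simulate_rm(K, r=1.0, a=1.0, h=1.0, e=1.0, m=0.5,
                N0=0.8, P0=0.5, dt=0.01, t_max=500.0):
    """Integrate dN/dt = rN(1 - N/K) - aNP/(1 + ahN),
                 dP/dt = eaNP/(1 + ahN) - mP
    by forward Euler; return the prey trajectory."""
    N, P = N0, P0
    prey = []
    for _ in range(int(t_max / dt)):
        f = a * N / (1.0 + a * h * N)   # type II functional response
        N, P = (N + (r * N * (1.0 - N / K) - f * P) * dt,
                P + (e * f * P - m * P) * dt)
        prey.append(N)
    return prey

def late_amplitude(traj, frac=0.2):
    """Peak-to-trough range of prey density over the last `frac` of the run."""
    tail = traj[int(len(traj) * (1 - frac)):]
    return max(tail) - min(tail)

# Low carrying capacity: damped oscillations to a stable equilibrium.
# High carrying capacity: sustained predator-prey cycles.
stable = late_amplitude(simulate_rm(K=1.5))
cycling = late_amplitude(simulate_rm(K=6.0))
print(f"late-time prey amplitude, K=1.5: {stable:.4f}")
print(f"late-time prey amplitude, K=6.0: {cycling:.4f}")
```

The overdetermination question then amounts to asking: if we bolted a reproductive time lag onto this model and still saw cycles, would the lag-free version above be "capturing the essence" of those cycles, or not?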

The same questions arise when different causal factors generate opposing effects rather than the same effect, and so cancel one another out. Consider a predator-prey model that has a stable equilibrium because of density-dependent prey growth. Now add in both predator interference and a time-lagged predator numerical response, with the net effect being that the system still has a stable equilibrium because the stabilizing predator density-dependence due to interference cancels out the destabilizing time lag. Does the original model "capture the essence" of the more complex situation? Is it "robust" to those added complications? Or is it just giving the right answer for the wrong reasons?

I think the answer to all these questions is "no". That is, in cases of overdetermination, I'd deny that a model that omits some causal factors is "capturing the essence", or is "robust", or is accurately "caricaturing" what's really going on, no matter how accurate its predictions are. But I'd also deny that, in cases of overdetermination, a model that omits some causal factors is misleading or wrong. That is, I think that the alternative possibilities I set up at the beginning—our simplified picture is either "basically right" or "basically wrong"—aren't the only possibilities. There's at least one other possibility—our simplified picture can be right in some respects but wrong in others.

Further, I think this third possibility, though it might seem rather obvious, actually has some interesting implications. For one thing, a lot of work in ecology really does aim to "capture the essence" of some complicated situation. It's not just theoreticians who try to do this—empirical ecologists (community ecologists especially) are always on the lookout for tools and approaches that will summarize or "capture the essence" of some complex phenomenon. Which assumes that there is an essence to be captured. Conversely, a lot of criticism of such work argues not only that ecology is too complicated to have an essence to be captured, but that all details are essential, so that omitting any detail is a misleading distortion. I'm suggesting that, at least in an overdetermined world (which our world surely is), both points of view are somewhat misplaced.

For another thing, it's important to recognize how simplified pictures that are right in some respects but wrong in others can help us build up to more complicated and correct pictures of how our complex, overdetermined world works. Recall my examples of predator-prey models. How is it that we know that, say, density-dependence is stabilizing, while a type II predator functional response and a time-lagged numerical response are destabilizing? Basically, it's by doing "controlled experiments". If you compare the behavior of a model lacking, say, density-dependence to that of an otherwise-identical model with density-dependence, you'll find that the latter model is more stable. In general, you build up an understanding of a complicated situation by studying what happens in simpler, "control" situations (often called "limiting cases" by theoreticians). The same approach even works, though it is admittedly more difficult to apply, if the effects of a given factor are context dependent (this just means your "controlled experiments" are going to give you "interaction terms" as well as "main effects"). So when I see it argued (as I have, more than once) that complex, overdetermined systems can't be understood via such a "reductionist" approach, I admit I get confused. I mean, how else are you supposed to figure out how an overdetermined system works? How else are you supposed to figure out not only what causal factors are at work, but what effect each of them has, except by doing these sorts of "controlled experiments"? I mean, I suppose you can black box the entire system and just describe its behavior purely statistically/phenomenologically. For some purposes that will be totally fine, even essential (see this old post for discussion), but for other purposes it's tantamount to just throwing up your hands and giving up.
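Here's a toy version of that "controlled experiment" logic in code: the same predator-prey model run with and without prey density dependence, everything else held fixed (a crude forward-Euler sketch with a linear functional response; parameter values are my own illustrative assumptions):

```python
# A "controlled experiment" on models: an otherwise-identical
# predator-prey model with and without prey density dependence.
# Linear (type I) functional response; illustrative parameter values.

def simulate(dd_strength, r=1.0, a=1.0, e=1.0, m=0.5,
             N0=0.7, P0=0.7, dt=0.001, t_max=200.0):
    """Integrate dN/dt = rN(1 - dd_strength*N) - aNP,
                 dP/dt = eaNP - mP
    by forward Euler; return the prey trajectory.
    dd_strength = 0 recovers the classic Lotka-Volterra model."""
    N, P = N0, P0
    prey = []
    for _ in range(int(t_max / dt)):
        N, P = (N + (r * N * (1.0 - dd_strength * N) - a * N * P) * dt,
                P + (e * a * N * P - m * P) * dt)
        prey.append(N)
    return prey

def late_amplitude(traj, frac=0.25):
    """Peak-to-trough range of prey density over the last `frac` of the run."""
    tail = traj[int(len(traj) * (1 - frac)):]
    return max(tail) - min(tail)

# "Control": no density dependence -> oscillations that never damp out.
# "Treatment": density dependence -> a damped spiral to stable equilibrium.
no_dd = late_amplitude(simulate(dd_strength=0.0))
with_dd = late_amplitude(simulate(dd_strength=0.5))
print(f"late-time prey amplitude, no density dependence:   {no_dd:.4f}")
print(f"late-time prey amplitude, with density dependence: {with_dd:.4f}")
```

The only difference between the two runs is the single factor being tested, which is exactly what licenses the conclusion that density dependence is stabilizing here—the same inferential move as in an empirical controlled experiment.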

Deliberately simplifying by omitting relevant causal factors is useful even when doing so doesn't "capture the essence", and even when there is no "essence" to capture. These sorts of simplifications aren't caricatures so much as steps on a stairway. In a world without escalators and elevators, the only way to get from the ground floor to the penthouse is by going up the stairs, one step at a time.

Categories: 
New ideas