Drilling down vs. scaling up

Submitted by drupaladmin on 7 February 2012.

Biological Posteriors asks a good question: how far down the [mechanistic] rabbit hole should one go to get an answer to any question? For instance, if you want to understand plant distributions, do you need to study plant physiology? Or even plant biochemistry?

Briefly, I'd say it depends on how you've framed the question, what sort of answer you're looking for (e.g., a quantitative vs. a qualitative answer), and whether there's anything comprehensible at the bottom of the rabbit hole.

But here I want to respond by asking a question of my own: why assume that you can only find the right mechanistic "level" by starting at a high level and then drilling down? Why not go the other way? Why not scale up? That is, start with a (possibly very detailed) "low level" mechanistic description of the physiology, life history, and behavior of individual organisms, and then ask about its higher level implications for density-dependence of population growth rate, coexistence, ecosystem function, etc.? There are lots of successful examples of this approach, indeed too many to list.
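To make "scaling up" concrete, here's a deliberately cartoonish sketch (all parameters invented for illustration, not drawn from any real system): an individual-based model in which each individual reproduces with a probability that declines with crowding. Density dependence of the population growth rate then emerges from the individual-level rules rather than being assumed at the population level:

```python
import random

# Toy individual-based sketch (all parameters invented for illustration):
# each individual reproduces with a probability that declines with crowding
# and dies with a fixed probability. Density dependence of population
# growth emerges from these individual-level rules.
random.seed(1)

def step(n, b=0.5, d=0.1, k=500):
    """Advance one generation; per-capita birth probability falls with n/k."""
    births = sum(1 for _ in range(n) if random.random() < b * max(0.0, 1 - n / k))
    deaths = sum(1 for _ in range(n) if random.random() < d)
    return n + births - deaths

n = 10
for _ in range(100):
    n = step(n)

# n now fluctuates near the emergent equilibrium where births balance deaths:
# b * (1 - n/k) = d, i.e. n = k * (1 - d/b) = 400 for these parameters.
print(n)
```

The population-level behavior (logistic-like growth toward an equilibrium) was never written down as a population-level equation; it's an implication of the individual-level rules, which is exactly the direction of inference I'm talking about.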

Note that this approach need not restrict you to building and simulating computationally intensive individual-based models. For instance, it may well be possible to derive a tractable, analytical, high-level approximation to your individual-based, low-level simulation. Importantly, that high-level model, although simple, may well be different from the simple high-level model you would've invented if you hadn't first built the low-level model and then scaled up. The work of Drew Purves, Steve Pacala, and colleagues approximating the famous SORTIE model of forest dynamics is a fine example (Purves et al. 2008, Strigul et al. 2008).

So how do you decide whether to start high and drill down, or start low and scale up? Well, it's often good to start at a level about which you already know, or can easily find out, a fair bit. In other words, don't think about whether to drill down or scale up; think about starting from what you know and then working (upwards or downwards) towards something you don't know.

It's also worth noting that, if you don't know how to drill down, you often won't know how to scale up either, and vice-versa. This is something I wish a lot of macroecologists would take to heart. Macroecologists often argue that we don't know how to scale up from individual- and population-level mechanisms to their macroecological consequences. Which is true enough. But they seem to take that as an argument for starting at the macroecological level and then drilling down. Which I confess I don't understand.

For instance, writing in the most recent issue of Oikos, Gotelli and Ulrich argue that we don't know how to specify and parameterize system-specific process-based models of species interactions and dispersal.* But they present this as a reason to focus on null models that test for certain non-random patterns in presence-absence matrices (data matrices indicating which species are present at which sites). But if we don't know how to build and parameterize low-level process-based models, why should we be at all confident in our ability to build high-level null models that omit the effects of certain processes (such as interspecific competition)? Especially null models that putatively apply, not just to one specific system, but very generally? Because take my word for it, it is really easy to come up with very plausible low-level competition models in which competition generates presence-absence matrices that look nothing like those tested for by any of the standard null models. And conversely, it's surprisingly difficult to come up with generally applicable low-level process-based models that produce some of the high-level patterns that null models often test for (such as "checkerboard distributions", where sites contain species A or species B, but never both).

To be fair, I think Gotelli and Ulrich are aware of this issue, although they don't put it quite this starkly.
But I'm not sure even they have fully taken to heart the notion that, if we don't know how to scale up from microecology to macroecology, we don't know how to drill down either.
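For readers who haven't worked with presence-absence matrices, the "checkerboard" pattern is easy to state concretely. Here's a minimal sketch (the toy matrix and function name are invented for illustration) that counts species pairs that never co-occur at any site; standard null models of the sort Gotelli and Ulrich discuss compare such observed counts to the distribution of counts in randomized matrices:

```python
import numpy as np

# Hypothetical toy presence-absence matrix (values invented for illustration):
# rows are species, columns are sites; 1 = present, 0 = absent.
M = np.array([
    [1, 0, 1, 0],  # species A
    [0, 1, 0, 1],  # species B: a perfect checkerboard with species A
    [1, 1, 0, 0],  # species C: co-occurs with both A and B
])

def checkerboard_pairs(m):
    """Count species pairs that never co-occur at any site."""
    n_species = m.shape[0]
    count = 0
    for i in range(n_species):
        for j in range(i + 1, n_species):
            if not np.any(m[i] & m[j]):  # no site holds both species
                count += 1
    return count

print(checkerboard_pairs(M))  # -> 1: only the A-B pair never shares a site
```

A full null-model test would randomize M under constraints (e.g., fixed row and column totals) and compare the observed count to the randomized distribution; this sketch only computes the observed statistic. Note that nothing in the calculation says anything about which low-level process produced the matrix, which is the crux of my complaint.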

*Grouchy aside: I also don't understand why macroecologists harp on the purported impossibility of specifying and parameterizing low-level models for many species. First of all, as the example of SORTIE (and other examples) shows, it's perfectly possible to build and parameterize very detailed, process-based, individual-level models of entire communities, or of dynamically sufficient subsets of those communities. Second of all, why would anyone think that scaling from microecology to macroecology is totally impossible unless we have a fully specified and parameterized model of the low-level microecological processes? For instance, you don't need to build such a model to show experimentally that local communities are effectively closed to colonization (e.g., Shurin 2000). Which is all you need to show in order to refute the once-common macroecological claim that linear local-regional richness relationships imply that local communities are highly open to colonization. I guess I must be missing something here, because very smart macroecologists whose work I really respect keep emphasizing the claim that we can't build and parameterize low-level process-based models of community dynamics. Which just seems like such an obvious straw man. Hopefully folks will weigh in in the comments on this and set me straight.
