Read this delightful John Harte interview (ht @DynamicEcology), which resonates rather well with me (no surprise given our common background). Nevertheless, I have to push back on a few pieces.
> From general laws flow absolutely bullet-proof insights and this is what we most need
I must disagree. Bullet-proof insights are mathematical theorems; their generality is an empirical question about their assumptions, not their predictions. And while I largely support the “what we most need” part, it deserves to be substantiated: certain objectives could be met just fine by a black-box, crystal-ball predictor of the future that provided no general laws or insights. It remains to be demonstrated either that (a) such predictions are not all that is needed, or that (b) they are impossible without general laws.
The critique of fitting mechanistic models is a very different argument from a critique of writing down a proposed mechanism merely to study it in the absence of data. Such is the role of theorems in ecology. I have no doubt John appreciates the importance of such work, though perhaps too rarely do we acknowledge that such contributions are more fundamental and robust, not less, precisely because they do not involve fitting parameters to data. Theoretical ecology has made rich contributions independent of any observation: conditions under which populations can oscillate without being driven periodically; the necessary conditions for coexistence of \(n\) species, and the role of spatial and temporal heterogeneity therein; threshold dynamics and \(R_0\); the evolution of dispersal (e.g. why the existence of spatial heterogeneity alone is not sufficient); the demonstration that connecting two “sink” patches, in neither of which a resident population can persist alone, can enable persistence.

This kind of work has the bullet-proof status of a theorem without making any claim to generality. These are laws in a mathematical sense. Whether they are laws in an ecological sense, and whether they are general or not, is a question of how often their assumptions are met. This is why, as Tony Ives says, we must test assumptions, not (merely) predictions (or worse, “post-dictions” of model fit). How we test assumptions is not as statistically well-posed as how we fit models, but it need not be hard. The assumption that space and time are not everywhere homogeneous is relatively easy to establish across a wide range of scales. Not all assumptions are so easy: that dynamics should be largely restricted to low-dimensional manifolds is far from obvious (at almost any scale).
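The two-sink result can be made concrete with a toy simulation. Below is a minimal sketch (my own illustrative numbers and setup, not anything from the interview): two patches whose growth rates alternate out of phase between good and bad years, chosen so that each patch in isolation is a sink (geometric-mean growth below one), yet full dispersal mixing between the two sinks yields a growing metapopulation (arithmetic-mean growth above one).

```python
import numpy as np

# Hypothetical two-patch sketch. Growth alternates out of phase between
# R_GOOD and R_BAD. Each patch alone is a sink because the geometric mean
# sqrt(1.6 * 0.5) = sqrt(0.8) < 1, but the arithmetic mean 1.05 > 1,
# so a well-mixed metapopulation grows.
R_GOOD, R_BAD, T = 1.6, 0.5, 200

def log_growth(dispersal):
    """Long-run per-step log growth rate of the two-patch total population."""
    n = np.array([1.0, 1.0])
    total0 = n.sum()
    D = np.array([[1 - dispersal, dispersal],
                  [dispersal, 1 - dispersal]])
    for t in range(T):
        # out-of-phase environments: patch 1 good when patch 2 is bad
        r = np.array([R_GOOD, R_BAD]) if t % 2 == 0 else np.array([R_BAD, R_GOOD])
        n = r * n    # local growth
        n = D @ n    # dispersal mixing
    return np.log(n.sum() / total0) / T

isolated = log_growth(0.0)  # ≈ -0.112: each sink patch declines alone
coupled = log_growth(0.5)   # ≈ +0.049: two coupled sinks persist and grow
print(isolated, coupled)
```

With no dispersal each patch shrinks by a factor \( \sqrt{0.8} \) per step on average; with full mixing the total multiplies by exactly \(1.05\) per step, so persistence here hinges on dispersal through temporal heterogeneity, not on either patch being viable.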
> One of them is the ease with which we can simulate numerically and handle massive data sets. There is a risk that this will divorce people from what really matters, which is the natural world.
If we can draw meaningful conclusions from the exercises in mathematical logic above (which John no doubt appreciates, and would surely agree can guide and clarify a lot of muddled thinking), then the same can be said of numerics. Like anything else, numerical work can of course be done poorly, but numerical studies are no more inherently divorced from the natural world than good analytic theory.
It is unfortunate that ‘massive data sets’ fall in the same sentence, as the term is not synonymous with ‘numerical simulation’. Again, it is perhaps easy to misuse such data in ignorance of the natural world, just as anything else can be done poorly. Yet John has eloquently argued the danger of the ‘mechanistic approach’ allowing intuition to select a few arbitrary features we consider to be important, and the same might be said of the observations themselves. No one would argue that everything interesting happens to be observable and manipulable on the temporal and spatial scales of an individual human being (e.g. in the domain of field work). Aggregating data across larger temporal and spatial scales, and being able to take advantage of rich information about environment, climate, genetics, phylogeny, and so forth in our understanding of patterns and processes, is important.
John has an excellent bit in the interview about science advancing by failures rather than successes; that we learn the most when we can observe deviations from the model. In this I see his views and Tony’s as two sides of the same coin: it is in discovering and then understanding the mechanisms behind the deviations from general theory that we most advance. John’s example from statistical mechanics resonates strongly with me here: deviations from the ideal gas law not only exposed interesting new science (forget dipole moments, this is the whole molecular, atomized world view replacing a continuous one) but also recovered the general law in the appropriate limits (of temperature and pressure, but also of the number of atoms).