Just because I’m a theorist deeply entrenched in methodological concerns about uncertainty and decision making doesn’t mean I don’t think about practical conservation from time to time. Some musings from my comment here, copied over for my own reference.
Though I would like to believe the gap stems from the problems you discuss, I think that differing objectives between research and application may play a much larger role. I suspect that the scientific papers most useful and influential for conservation practitioners and policymakers are those which confirm what they already believe, or whatever the interests opposed to them least want to hear. Let’s call these “Cassandra” papers, since in this context they usually forecast disaster. For the practitioner it may matter little whether the math is simple or complex, clearly explained or impenetrable, or even right or not so right. The Worm et al. (2006) paper, which the media quickly decided predicted the end of global fisheries within 50 years, is perhaps a good example.
Okay, so beyond bolstering arguments already being made by those who propose, implement, or legislate conservation against their opposition, there are certainly unknowns that they might turn to research to answer. Resource allocation might be an example of this; e.g. do we prioritize purchasing pristine areas that are not likely to be threatened, or less pristine areas in more immediate danger (à la Pfaff)? Let’s call these “rule of thumb” papers, where the conclusion is an easily applied guideline. It seems doubtful that practitioners would be inhibited by their access to, or understanding of, the math in this case, since they want the research to provide an answer they can trust, without worrying so much about what math justifies it. They are more likely to use proxies of quality (journal, researcher, affiliation, popularity of the method) than to work through the assumptions to see if they like them; no?
So there is a third case, in which the conclusion is of the sort “apply my method and it will tell you what to do”, as opposed to “here’s what to do”. I think only this case is at risk of the mathematics being a barrier, though when accompanied by user-friendly software tools perhaps even that barrier can be dismissed. These “methods” papers are probably the favorite option of many researchers, as they seem the most rigorous and accurate approach, reflecting the details of the problem at hand. Scientists are probably least fond of the first category, where even the paper’s authors may feel the conclusions are being overstated, while others feel such work is wrong and counterproductive. I’d guess many researchers are lukewarm towards the middle case, seeing it as better than a coin flip. I imagine the conservation practitioner’s ranking is reversed. To what extent would you agree with this classification? If you do, how is the conservation literature distributed across these categories, and how might we want it to be distributed? Do we indeed have the greatest impact writing Cassandra papers rather than writing nice, clear methods, and if so, what are the implications?