Trying to think about a more systematic way to go about varying the parameters: the underlying parametric model has three parameters for the stock-recruitment curve's deterministic skeleton, plus growth noise. (My first exploratory phase has been just to try different things; see my various tweaks in the history log. Clearly it's time to be more systematic about both running and visualizing the various cases.)
Should I just choose a handful of parameter combinations to test? (Trying to think of a way to do this that is easy to summarize; at the least, I can summarize expected profit under each set.) Presumably, for each set of these parameters, I'd want a few (many?) stochastic realizations of the calibration/training data.
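Something along these lines, perhaps. A minimal sketch only: I use a Myers-type curve as a stand-in for the deterministic skeleton, and all parameter values, grid choices, and the `x0`/`Tmax` defaults are illustrative rather than taken from the actual analysis:

```r
# sweep a grid over the three stock-recruitment parameters plus growth noise,
# generating several stochastic realizations of training data per combination
param_grid <- expand.grid(r = c(1, 1.5), theta = c(2, 2.5), K = c(8, 10),
                          sigma_g = c(0.05, 0.2))
n_reps <- 20   # stochastic realizations per parameter set
Tmax <- 40     # length of each simulated series

# Myers stock-recruitment curve with lognormal growth noise (illustrative)
sim_one <- function(r, theta, K, sigma_g, x0 = K / 2) {
  x <- numeric(Tmax)
  x[1] <- x0
  for (t in 1:(Tmax - 1)) {
    mu <- r * x[t]^theta / (1 + x[t]^theta / K)
    x[t + 1] <- rlnorm(1, 0, sigma_g) * mu
  }
  x
}

# one (Tmax x n_reps) matrix of replicate series per parameter combination
training_sets <- lapply(seq_len(nrow(param_grid)), function(i)
  with(param_grid[i, ], replicate(n_reps, sim_one(r, theta, K, sigma_g))))
```

Expected profit under each parameter set could then be summarized over the `n_reps` replicates, giving one comparable number per row of `param_grid`.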
Would it be worth digging up some real-world datasets and basing the selection of underlying model parameters on them?
Then there's a variety of nuisance parameters:

- grid size
- discount rate
- price of fish (non-dimensionalization eliminates that one, I guess)
- cost of fishing (and whether the cost is on effort or harvest, whether linear or quadratic, etc.)
- harvest grid size, and possible constraints on maximum or minimum allowable levels for the control
- length of the calibration period (and the related dynamics, if we use any of the variable fishing-effort models you showed me today)
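It might help to gather these into a single settings object so each run records its own configuration. A sketch only; the names and default values below are hypothetical, not from the actual code:

```r
# hypothetical settings list; all names and defaults are illustrative
settings <- list(
  grid_size    = 100,       # points in the stock-space grid
  harvest_grid = 100,       # points in the harvest (control) grid
  delta        = 0.05,      # discount rate
  price        = 1,         # price of fish (scaled out by non-dimensionalization)
  cost         = 0.01,      # cost of fishing
  cost_on      = "effort",  # "effort" or "harvest"
  cost_form    = "linear",  # "linear" or "quadratic"
  h_min        = 0,         # minimum allowable harvest
  h_max        = Inf,       # maximum allowable harvest
  T_calibrate  = 40         # length of the calibration period
)
```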
Additionally, there are the MCMC-related nuisance parameters: parameters for the priors, possibly hyperpriors, and the MCMC convergence analysis (selecting the burn-in period; currently 2000 steps out of 16,000, etc.). Also the distributional shapes for the priors and, perhaps more meaningfully, the GP covariance function (using the Gaussian kernel for simplicity, but I might want to look at the Matérn, and the various linear + Gaussian covariances).
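For reference, the two covariance candidates side by side: the squared-exponential ("Gaussian") kernel currently used, and a Matérn 5/2 alternative. The hyperparameter names (`l`, `sigma`) are illustrative:

```r
# squared-exponential ("Gaussian") covariance
cov_se <- function(x, y, l = 1, sigma = 1) {
  d <- outer(x, y, "-")
  sigma^2 * exp(-d^2 / (2 * l^2))
}

# Matern 5/2 covariance: rougher sample paths than the squared exponential
cov_matern_52 <- function(x, y, l = 1, sigma = 1) {
  d <- abs(outer(x, y, "-"))
  sigma^2 * (1 + sqrt(5) * d / l + 5 * d^2 / (3 * l^2)) * exp(-sqrt(5) * d / l)
}
```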
New and progressing issues
- Vary harvest policy during the observation stage
- Add prior & posterior graphs (on same plot) to standard analysis (i.e. gaussian-process-control.R)
- GP process plots should show with and without nugget variance (see the first sketch after this list)
- examples with non-stationary dynamics (e.g. Ricker with oscillations; see the second sketch after this list)
- evaluate GP under large noise conditions
- Characterize and develop a strategy for inferred GPs that are not self-sustaining
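For the nugget item above, a bare-bones GP regression sketch that can be run with and without the nugget term; these are the standard GP conditional formulas, and the kernel, data, and parameter values are all illustrative:

```r
# squared-exponential kernel (as above)
cov_se <- function(x, y, l = 1, s = 1) s^2 * exp(-outer(x, y, "-")^2 / (2 * l^2))

# conditional GP mean/variance; nugget = 0 gives the noise-free interpolator
gp_predict <- function(x_obs, y_obs, x_new, l = 1, s = 1, nugget = 0) {
  K_oo <- cov_se(x_obs, x_obs, l, s) + diag(nugget, length(x_obs))
  K_no <- cov_se(x_new, x_obs, l, s)
  mu   <- K_no %*% solve(K_oo, y_obs)
  V    <- cov_se(x_new, x_new, l, s) - K_no %*% solve(K_oo, t(K_no))
  list(mean = drop(mu), var = diag(V) + nugget)  # predictive variance adds the nugget back
}

# same data, two predictive bands to overlay on the GP process plot
x  <- seq(0, 10, length.out = 8)
y  <- sin(x) + rnorm(8, 0, 0.1)
xs <- seq(0, 10, length.out = 100)
no_nugget   <- gp_predict(x, y, xs)
with_nugget <- gp_predict(x, y, xs, nugget = 0.1^2)
```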
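And for the non-stationary dynamics item, a quick way to generate an oscillating series: the deterministic Ricker map becomes oscillatory once r > 2 (parameter values below are illustrative):

```r
# Ricker map with lognormal growth noise; r > 2 gives oscillations
ricker_sim <- function(x0 = 1, r = 2.6, K = 10, sigma_g = 0.05, Tmax = 50) {
  x <- numeric(Tmax)
  x[1] <- x0
  for (t in 1:(Tmax - 1))
    x[t + 1] <- rlnorm(1, 0, sigma_g) * x[t] * exp(r * (1 - x[t] / K))
  x
}
plot(ricker_sim(), type = "l", xlab = "time", ylab = "stock")
```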
From the commit log today:
- prior/posterior plots added to the BH-Ricker example. Comparable but non-optimal performance by GP. 06:07 pm 2012/12/19
- oscillating ricker still does not give decent solutions 05:45 pm 2012/12/19
- shows posteriors and priors of hyperparameters 05:38 pm 2012/12/19
- fixed minor typo in calc of posterior distribution plots #18 05:15 pm 2012/12/19
- plotting for posteriors and priors added #18 04:13 pm 2012/12/19
- replicate 02:22 pm 2012/12/19
- time to start writing 09:16 am 2012/12/19
- Ah, good. With a bit more data, a very nice example of how GP can avoid problems of a structurally inaccurate parametric approach (Myers vs Ricker). 09:15 am 2012/12/19
- more data going up to positive equilibrium, better but still does not avoid crash 08:29 am 2012/12/19
- An interesting example – seems to work despite being calibrated on a crashing dataset. 08:08 am 2012/12/19
- With smaller K, GP cannot quite determine a proper policy 07:37 am 2012/12/19
- try different dynamic parameters on Myers (lower K) 05:59 am 2012/12/19
Misc: ropensci
- resolved some outstanding issues on rfigshare (rfigshare/issues?state=closed)