Determining the stochastic transition matrix is the essential computational step in solving an SDP. If stochasticity enters only in the growth process:
\[x_{t+1} = z_t f(x_t) \]
where \(z_t\) is a draw from a log-normal probability density \(\log\mathcal{N}(x; \mu, \sigma)\), then the probability that \(x_{t+1}\) falls within a bin \([x_i-\Delta, x_i+\Delta]\), given that it is currently at \(x_t = y\), is
\[ P([x_i-\Delta,x_i+\Delta] \mid y) = \int_{x_i-\Delta}^{x_i+\Delta} \log \mathcal{N}(x; \log(f(y)) + \mu, \sigma)\, dx\]
Note that we are discretizing the transition matrix to a grid of values \(x_i\) at time \(t\) and \(x_j\) at time \(t+1\), hence the binning, where \(\Delta\) is half the grid bin width.
Another note: let us assume \(z_t\) has mean unity, which is the only logical choice for the growth equation. In the case of the log-normal, the parameter \(\mu\) is the log mean, thus \(z_t \sim \log\mathcal{N}(x; \log(1) - \sigma^2/2, \sigma)\).
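As a quick sanity check on that parameterization (a minimal sketch; `sigma_g` here is just an illustrative value):
# A log-normal with meanlog = -sigma^2/2 has mean exp(meanlog + sigma^2/2) = exp(0) = 1,
# so z_t is mean-unity as assumed above.
sigma_g <- 0.2
mean(rlnorm(1e6, meanlog = -sigma_g^2 / 2, sdlog = sigma_g))  # simulated mean, ~1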
Computational implementation / comparison
On a fine grid, this seems to be the same as just evaluating the density at the midpoint and normalizing.
For instance, we could integrate following the definition above:
bw <- (x_grid[2] - x_grid[1]) / 2 # we'll go from the midpoint
F <- function(x) dlnorm(x, log(f(y,h,p)) - sigma_g ^ 2 / 2, sigma_g)
Prob <- sapply(x_grid, function(x) integrate(F, x - bw, x + bw)[[1]] )
Prob/sum(Prob)
which seems to get us the same thing as evaluating the density directly at the midpoint:
Prob <- dlnorm(x_grid, log(f(y, h, p)) - sigma_g^2 / 2, sigma_g)  # dlnorm is vectorized over x_grid
Prob/sum(Prob)
Note also that because we're normalizing, we could just as easily have rescaled the state by its mean,
dlnorm(x_grid / f(y, h, p), log(1) - sigma_g^2 / 2, sigma_g)
See this example code (requires my stochastic_dynamic_programming.R functions); the distributions agree almost exactly (the red and blue distributions overlap so completely they look purple).
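For a quick self-contained check (a minimal sketch, not the unit test itself; the Beverton-Holt f, parameters, and grid below are made-up stand-ins):
f <- function(x, h, p) p[1] * (x - h) / (1 + p[2] * (x - h))  # toy growth function
p <- c(1.5, 0.05); h <- 0; y <- 5; sigma_g <- 0.2             # illustrative values
x_grid <- seq(0.1, 10, length.out = 100)
bw <- (x_grid[2] - x_grid[1]) / 2
F <- function(x) dlnorm(x, log(f(y, h, p)) - sigma_g^2 / 2, sigma_g)
P1 <- sapply(x_grid, function(x) integrate(F, x - bw, x + bw)$value)  # bin integrals
P2 <- F(x_grid)                                                       # midpoint densities
max(abs(P1 / sum(P1) - P2 / sum(P2)))  # tiny difference once both are normalized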
Adding additional sources of uncertainty
In the above, next year's population is uncertain but this year's is known perfectly. Imagine instead that the current value of the stock, \(x_t\), is a random deviate from the value we measured, \(m_t\), given by the probability density \(P_m(x_t)\), and that the realized harvest is a random variate around whatever level we set for the quota, drawn from the distribution \(P_q(h)\). Both of these introduce additional uncertainty into the expected value of the stock next year, \(f(x_t,h_t)\):
\[ P([x_i-\Delta, x_i+\Delta] \mid m_t, q_t) = \int_{x_i-\Delta}^{x_i+\Delta} dx \int_0^{\infty} dy \int_0^{\infty} dh \; \log\mathcal{N}(x; \log(f(y,h)) + \mu, \sigma)\, P_q(h)\, P_m(y) \]
Looks like this is going to get messy. The cubature R package is supposed to handle these higher-dimensional integrals quickly, but no luck so far; even a single integral is very slow to compute:
require(cubature)
# Densities for growth noise (z_g), measurement noise (z_m), and implementation noise (z_i)
pdf_zg <- function(x, expected) dlnorm(x, log(expected) - sigma_g^2/2, sigma_g)
pdf_zm <- function(x) dlnorm(x, log(1) - sigma_m^2/2, sigma_m)
pdf_zi <- function(x, q) dlnorm(x, log(q) - sigma_i^2/2, sigma_i)
# Joint integrand over (next stock, true current stock, realized harvest)
F <- function(x) pdf_zg(x[1], f(x[2], x[3], p)) * pdf_zm(x[2]) * pdf_zi(x[3], q)
adaptIntegrate(F, c(x_grid[i] - bw, 0, 0),
               c(x_grid[i] + bw, x_grid[gridsize], h_grid[gridsize]))
(See the full example in unit_test2.)
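One possible workaround (a rough sketch of my own, not what unit_test2 does) is to replace the integrals over \(y\) and \(h\) with sums over the grids, reusing the midpoint-density shortcut from above; constant grid-spacing factors cancel once the row is normalized:
# Sketch: grid-sum approximation of the triple integral for one row element.
# Assumes pdf_zg, pdf_zm, pdf_zi, f, p, q, x_grid, h_grid from the snippet above,
# and that f is vectorized in its first two arguments.
mu_mat <- pmax(outer(x_grid, h_grid, function(y, h) f(y, h, p)), 1e-10)  # expected stock; floor avoids log(<= 0)
w_mat  <- outer(x_grid, h_grid, function(y, h) pdf_zm(y) * pdf_zi(h, q)) # P_m(y) * P_q(h) weights
row    <- sapply(x_grid, function(xj) sum(pdf_zg(xj, mu_mat) * w_mat))
row / sum(row)  # approximate transition probabilities given the measurement and quota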