Lab Notebook


Entries

Rrhack Notes

11 Dec 2014

Misc notes from #rrhack. Much more detail under our Github organization, including the issues tracker & wiki for the meeting and the start of the repositories for the various teaching modules.

Also see rrhack slack forum and twitter hashtag.

Key questions

  • Audience and motivation. Reproducibility isn’t a goal, it’s a means to an end. Accelerating science is the goal. The motivation should be to accelerate & scale your own science.

  • Reproducible by whom? Starting point: an expert in the domain and the language can reproduce the work without needing to rediscover / recreate it? Better: agnostic to language?

  • Complexity. How frequently does routine, non-computational research involve writing over 1000 lines of code? Can 1000 lines of code be managed successfully without a software development approach?

  • knitr & scaling issues

  • The download file problem.

My notes on using Docker

  • Docker installed on cluster
  • Students use the RStudio export function to download & submit (into a standard course management platform). Import data via direct calls from R (e.g. download.file()), as sketched below.
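
A minimal sketch of that direct-download pattern (the URL and filenames here are hypothetical):

# hypothetical example: pull a course dataset straight into the R session,
# avoiding manual file management on the cluster
download.file("https://example.com/course-data.csv", destfile = "course-data.csv")
dat <- read.csv("course-data.csv")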

Jenny Bryan on using Github to teach

  • Free organization account (gold) with 50 private repos
  • Prof / TA team has write access
  • Each student needs to be their own team with write access to their own repo
  • Student team has read access to everything, so they can see each other’s work

Homework submission: open an issue in their repo, tag the owners group, include the hash.

Need programmatic control of Github to set this up (see the sketch below).
Need an alter ego (a second account) to test that this works.
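
A sketch of what that programmatic control might look like via the GitHub API (untested; ORG and the repo name are placeholders, and it assumes a token with admin scope stored in GH_TOKEN):

library(httr)
## create a private repo for one student under the course organization
resp <- POST("https://api.github.com/orgs/ORG/repos",
             add_headers(Authorization = paste("token", Sys.getenv("GH_TOKEN"))),
             body = list(name = "student1", private = TRUE),
             encode = "json")
stop_for_status(resp)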

Users avoid the command line for most work to start, using just the Github and RStudio interfaces.

Not happy with the Windows version of Github’s git client (hard to connect to RStudio). Not a problem on the Mac.

Dockerized RStudio as a pointy-clicky application?

From Jenny: desktop launch for Docker. From Dan: rewrap boot2docker to launch RStudio (hadleyverse) with a click (installs virtualbox, docker, & boot2docker first if necessary; 129 MB; not self-contained though?).

New tricks

Working with multiple branches with different content (e.g. gh-pages): add a web/ subdirectory to the repo and to .gitignore, and check out the gh-pages branch there (see the sketch below). Add an indicator of the current branch to the user’s prompt.
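
A sketch of that setup (USER/REPO are placeholders; this assumes a gh-pages branch already exists):

# ignore the subdirectory in the main branch, then check gh-pages out into it
echo "web/" >> .gitignore
git clone -b gh-pages git@github.com:USER/REPO.git web
# edits under web/ now happen on gh-pages, while the top-level checkout stays on master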

Using the wiki pages rather than publishing .Rmd files: see [richfitz/ghwiki].

Organization: just stick with package compendium format?




Self Destroy Droplet On Completion

07 Dec 2014

Scott’s analogsea package provides a great way to script commands for cloud instances on the digitalocean platform. For instance, we can use analogsea to automatically start an instance of the desired size, submit a computationally intensive task, and then terminate the instance when the task completes successfully. This last step is particularly convenient since it makes it easier to use a very powerful (and thus expensive) instance for a short time, knowing it will terminate and avoid extra charges while idle.

To avoid first having to install the necessary software environment on the newly created digitalocean instance, we will simply pull a docker image that has already been provisioned. This is particularly useful both in keeping the information we need to send to the cloud machine concise (no need to list all dependencies) and fast (particularly in the case of installing any packages from source, such as R packages from CRAN; the complete installation process to generate the image we use here can itself take over an hour).

The analogsea package provides nice functions for working with docker as well, as we will illustrate here.

First, we can define a custom little function that will pull a given Github repo, run the script specified, and push the results back up.

task <- function(REPO, PATH, SCRIPT, USER = Sys.getenv("USER"), 
                 GH_TOKEN = Sys.getenv("GH_TOKEN"), 
                 EMAIL = paste0(USER, "@", USER, ".com"),
                 IMG = "rocker/hadleyverse"){
  ## Assemble the argument string for `docker run`: launch the image,
  ## configure git, clone the repo over https, run the script, and
  ## push the results back up.
  paste(
  paste0("-it ", IMG, " bash -c \"", "git config --global user.name ", USER),
  paste0("git config --global user.email ", EMAIL),
  paste0("git clone ", "https://", USER, ":", GH_TOKEN, "@github.com/", USER, "/", REPO, ".git"),
  paste0("cd ", REPO),
  paste0("cd ", PATH),
  paste0("Rscript ", SCRIPT),
  "git add -A",
  "git commit -a -m 'runs from droplet'",
  "git push origin master",
  "\"",
  sep="; ")
}

This could probably be made a bit more elegant, but the idea is simple. Note that we will clone over https, assuming a Github authentication token is available in the environment (e.g. set in .Rprofile) as GH_TOKEN.
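
For instance, a line along these lines in .Rprofile would do it (the token value shown is a placeholder):

Sys.setenv(GH_TOKEN = "0123456789abcdef") ## hypothetical personal access token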

To use this function to run the script knit.R in the inst/examples directory of my template repo, I do:

tsk <- task("template", "inst/examples", "knit.R", IMG="cboettig/strata")

which returns the commands we’ll need to run as a string. In this case I have set a custom docker image cboettig/strata that contains my standard development environment in use on my laptop, strata.

If we have Docker installed on the local system, we can verify this script works locally first:

system(paste("docker run --rm", tsk))

Now we’re ready for the analogsea part: submitting this job to a digitalocean machine which it will create and destroy on the fly. Note that this assumes we have a digitalocean account and have saved a personal authentication token to our environment as DO_PAT (otherwise analogsea will simply prompt us to authenticate manually in the browser). This also requires that we have an ssh key added to our account already (at least at this time).

library(analogsea)
docklet_create(size='512mb') %>%
  docklet_run(tsk) %>%
  droplet_delete()

analogsea will first create the droplet of the desired size (analogsea refers to digitalocean droplets which have Docker software installed as “docklets”), then run our command and destroy the droplet.

Note that the functions will only continue to the next step if the previous one succeeds. Consequently, if the script fails for some reason, the instance will persist and we can attempt to debug if we so choose. If we want the instance to be destroyed whether the script succeeds or fails, we can simply drop the last %>% pipe and issue the destroy command separately so that it still runs. (Otherwise some error handling would be required around the docklet_run code to make sure the chain continues; a sketch follows.)
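
A sketch of that error handling (untested; it reuses tsk from above):

library(analogsea)
d <- docklet_create(size = '512mb')
## don't let a failed task abort the chain; report it instead
tryCatch(docklet_run(d, tsk),
         error = function(e) message("task failed: ", conditionMessage(e)))
## destroy the droplet whether the task succeeded or failed
droplet_delete(d)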

Sometimes the droplet login will fail, either because a previous digitalocean instance had the same ip (causing ssh to warn that the host identity has changed) or because the key still needs to be approved. In this case, it may help to create the droplet in a separate step, ssh into the ip returned by droplets() manually outside of R, and then return to R to launch the task:

d <- droplets()
d[[2]] %>%  
  docklet_run(tsk) %>%
  droplet_delete()

This assumes our desired droplet is the second in the list (hence d[[2]]).

Of course our R instance needs to persist long enough for the job to complete, so we need to be sure to run this from a machine that will itself remain up, such as a desktop or even another server.




Lsn Nimble

04 Dec 2014

Some sample data:

library(sde)
library(nimble)
set.seed(123)
d <- expression(0.5 * (10-x))
s <- expression(1) 
data <- as.data.frame(sde.sim(X0=6,drift=d, sigma=s, T=20, N=100))
sigma.x not provided, attempting symbolic derivation.
plot(data)

LSN version

Test case: Set prior for m \(\approx 0\):

lsn <- modelCode({
   theta ~ dunif(1e-10, 100.0)
   sigma_x ~ dunif(1e-10, 100.0)
   sigma_y ~ dunif(1e-10, 100.0)
       m ~ dunif(-1e2, 1e2)
    x[1] ~ dunif(0, 100)
    y[1] ~ dunif(0, 100) 

  for(i in 1:(N-1)){
    mu_x[i] <- x[i] + y[i] * (theta - x[i]) 
    x[i+1] ~ dnorm(mu_x[i], sd = sigma_x) 
    mu_y[i] <- y[i] + m * t[i]
    y[i+1] ~ dnorm(mu_y[i], sd = sigma_y) 
  }
})

Constants in the model definition are the length of the dataset, \(N\), and the time points of the sample. Note we’ve made time explicit; we’ll assume uniform spacing here.

constants <- list(N = length(data$x), t = 1:length(data$x))

Initial values for the parameters

inits <- list(theta = 6, m = 0, sigma_x = 1, sigma_y = 1, y = rep(1,constants$N))

and here we go as before:

Rmodel <- nimbleModel(code = lsn, 
                      constants = constants, 
                      data = data, 
                      inits = inits)
Cmodel <- compileNimble(Rmodel)
mcmcspec <- MCMCspec(Rmodel, print=TRUE,thin=1e2)
[1] RW sampler;   targetNode: theta,  adaptive: TRUE,  adaptInterval: 200,  scale: 1
[2] RW sampler;   targetNode: sigma_x,  adaptive: TRUE,  adaptInterval: 200,  scale: 1
[3] RW sampler;   targetNode: sigma_y,  adaptive: TRUE,  adaptInterval: 200,  scale: 1
[4] RW sampler;   targetNode: m,  adaptive: TRUE,  adaptInterval: 200,  scale: 1
[5] RW sampler;   targetNode: y[1],  adaptive: TRUE,  adaptInterval: 200,  scale: 1
[6] conjugate_dnorm sampler;   targetNode: y[2],  dependents_dnorm: x[3], y[3]
...
Rmcmc <- buildMCMC(mcmcspec)
Cmcmc <- compileNimble(Rmcmc, project = Cmodel)
Cmcmc(1e4)
NULL

and examine results

samples <- as.data.frame(as.matrix(nfVar(Cmcmc, 'mvSamples')))
dim(samples)
[1] 100 206
samples <- samples[,1:4]
mean(samples$theta)
[1] 10.11174
mean(samples$m)
[1] -1.88765e-05
mean(samples$sigma_x)
[1] 0.385018
plot(samples[ , 'm'], type = 'l', xlab = 'iteration', ylab = 'm')
plot(samples[ , 'sigma_x'], type = 'l', xlab = 'iteration', ylab = expression(sigma[x]))
plot(samples[ , 'sigma_y'], type = 'l', xlab = 'iteration', ylab = expression(sigma[y]))
plot(samples[ , 'theta'], type = 'l', xlab = 'iteration', ylab = expression(theta))

hist(samples[, 'm'], xlab = 'm')
hist(samples[, 'sigma_x'], xlab = expression(sigma[x]))
hist(samples[, 'sigma_y'], xlab = expression(sigma[y]))
hist(samples[, 'theta'], xlab = expression(theta))




OU model in Nimble

03 Dec 2014

Sanity test with a simple model. Start with some sample data from an OU process:

library("sde")
library("nimble")
set.seed(123)
d <- expression(0.5 * (10-x))
s <- expression(1) 
data <- as.data.frame(sde.sim(X0=6,drift=d, sigma=s, T=20, N=100))
sigma.x not provided, attempting symbolic derivation.
plot(data)

Specify this model in Nimble BUGS code

ou <- modelCode({
   theta ~ dunif(1e-10, 100.0)
       r ~ dunif(1e-10, 20.0)
   sigma ~ dunif(1e-10, 100)
    x[1] ~ dunif(0, 100)

  for(t in 1:(N-1)){
    mu[t] <- x[t] + r * (theta - x[t]) 
    x[t+1] ~ dnorm(mu[t], sd = sigma) 
  }
})

nimble parameters

const <- list(N = length(data$x))
ou_inits <- list(theta = 6, r = 1, sigma = 1)

Create, spec, build, & compile

ou_Rmodel <- nimbleModel(code = ou, constants = const, data = data, inits = ou_inits)
ou_spec <- MCMCspec(ou_Rmodel, thin=1e2)
ou_Rmcmc <- buildMCMC(ou_spec)
ou_Cmodel <- compileNimble(ou_Rmodel)
ou_mcmc <- compileNimble(ou_Rmcmc, project = ou_Cmodel)

Run the MCMC

ou_mcmc(1e4)
NULL

and examine the results

samples <- as.data.frame(as.matrix(nfVar(ou_mcmc, 'mvSamples')))
mean(samples$theta)
[1] 10.47953
mean(samples$sigma)
[1] 0.392594
mean(samples$r)
plot(samples[ , 'r'], type = 'l', xlab = 'iteration', ylab = expression(r))
plot(samples[ , 'sigma'], type = 'l', xlab = 'iteration', ylab = expression(sigma))
plot(samples[ , 'theta'], type = 'l', xlab = 'iteration', ylab = expression(theta))
plot(samples[ , 'r'], samples[ , 'sigma'], xlab = expression(r), ylab = expression(sigma))
hist(samples[, 'theta'])

Block sampler

ou_spec$addSampler("RW_block", list(targetNodes=c('r','sigma','theta'), adaptInterval=100))
[4] RW_block sampler;   targetNodes: r, sigma, theta,  adaptive: TRUE,  adaptInterval: 100,  scale: 1,  propCov: identity
ou_Rmcmc2 <- buildMCMC(ou_spec)
ou_mcmc2 <- compileNimble(ou_Rmcmc2, project=ou_Rmodel, resetFunctions=TRUE)

(Not clear why we use the old project here, but it seems to allow us to inherit previous settings, e.g. the monitors from the MCMCspec() initialization.)

ou_mcmc2(1e4)
NULL
samples2 <- as.data.frame(as.matrix(nfVar(ou_mcmc2, 'mvSamples')))
mean(samples2$theta)
[1] 10.46894
plot(samples2[ , 'r'], type = 'l', xlab = 'iteration', ylab = expression(r))
plot(samples2[ , 'sigma'], type = 'l', xlab = 'iteration', ylab = expression(sigma))
plot(samples2[ , 'theta'], type = 'l', xlab = 'iteration', ylab = expression(theta))
plot(samples2[ , 'r'], samples2[ , 'sigma'], xlab = expression(r), ylab = expression(sigma))
hist(samples2[ , 'theta'])





Nimble Explore

03 Dec 2014

Working through the quick-start example in the nimble manual

The manual gives essentially no introduction to what appears to be a classic BUGS example model for stochastically failing pumps.

library(nimble)
pumpCode <- modelCode({
  for (i in 1:N){
    theta[i] ~ dgamma(alpha,beta)
    lambda[i] <- theta[i]*t[i]
    x[i] ~ dpois(lambda[i])
  }
  alpha ~ dexp(1.0)
  beta ~ dgamma(0.1,1.0)
})
pumpConsts <- list(N = 10, 
                   t = c(94.3, 15.7, 62.9, 126, 5.24,
                         31.4, 1.05, 1.05, 2.1, 10.5))
pumpData <- list(x = c(5, 1, 5, 14, 3, 19, 1, 1, 4, 22))
pumpInits <- list(alpha = 1, 
                  beta = 1,
                  theta = rep(0.1, pumpConsts$N))
pump <- nimbleModel(code = pumpCode, 
                    name = 'pump', 
                    constants = pumpConsts,
                    data = pumpData, 
                    inits = pumpInits)

pump$getNodeNames()
 [1] "alpha"               "beta"                "lifted_d1_over_beta"
 [4] "theta[1]"            "theta[2]"            "theta[3]"           
 [7] "theta[4]"            "theta[5]"            "theta[6]"           
[10] "theta[7]"            "theta[8]"            "theta[9]"           
[13] "theta[10]"           "lambda[1]"           "lambda[2]"          
[16] "lambda[3]"           "lambda[4]"           "lambda[5]"          
[19] "lambda[6]"           "lambda[7]"           "lambda[8]"          
[22] "lambda[9]"           "lambda[10]"          "x[1]"               
[25] "x[2]"                "x[3]"                "x[4]"               
[28] "x[5]"                "x[6]"                "x[7]"               
[31] "x[8]"                "x[9]"                "x[10]"              

Note that we can see theta has our initial conditions, while lambda has not yet been initialized:

pump$theta
 [1] 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1
pump$lambda
 [1] NA NA NA NA NA NA NA NA NA NA

Hmm, initially we cannot simulate theta values though (or rather, we just get NaNs and warnings if we do). At the moment I’m not clear on why, though it seems to be due to the lifted node:

simulate(pump, 'theta')
pump$theta
 [1] NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
pump$lifted_d1_over_beta
[1] NA

If we calculate the log probability density of the deterministic dependencies of the alpha and beta nodes (i.e. the lifted node) then we’re okay:

set.seed(0) ## This makes the simulations here reproducible
calculate(pump, pump$getDependencies(c('alpha', 'beta'), determOnly = TRUE))
[1] 0
simulate(pump, 'theta')
pump$theta
 [1] 1.79180692 0.29592523 0.08369014 0.83617765 1.22254365 1.15835525
 [7] 0.99001994 0.30737332 0.09461909 0.15720154

We still need to initialize lambda, e.g. by calculating the probability density on those nodes:

calculate(pump, 'lambda')
[1] 0
pump$lambda
 [1] 168.9673926   4.6460261   5.2641096 105.3583839   6.4061287
 [6]  36.3723548   1.0395209   0.3227420   0.1987001   1.6506161

though it’s not entirely clear to me why the guide prefers to do this via the dependencies of theta (which clearly include lambda, but also other things). It’s also not clear whether these calculate steps are necessary before proceeding to the MCMCspec, buildMCMC, or compile steps. Let’s reset the model (see note 1 below) and find out:

pump <- nimbleModel(code = pumpCode, 
                    name = 'pump', 
                    constants = pumpConsts,
                    data = pumpData, 
                    inits = pumpInits)

pump$theta
 [1] 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1
pump$lambda
 [1] NA NA NA NA NA NA NA NA NA NA

Good, we’re reset. Now we try:

Cpump <- compileNimble(pump)
pumpSpec <- MCMCspec(pump)
pumpSpec$addMonitors(c('alpha', 'beta', 'theta'))
thin = 1: alpha, beta, theta
pumpMCMC <- buildMCMC(pumpSpec)
CpumpMCMC <- compileNimble(pumpMCMC, project = pump)
CpumpMCMC(1000)
NULL
samples <- as.matrix(nfVar(CpumpMCMC, 'mvSamples'))
plot(samples[ , 'alpha'], type = 'l', xlab = 'iteration', ylab = expression(alpha))
plot(samples[ , 'beta'], type = 'l', xlab = 'iteration', ylab = expression(beta))
plot(samples[ , 'alpha'], samples[ , 'beta'], xlab = expression(alpha), ylab = expression(beta))

Note the poor mixing (which is improved by the block sampler, as shown in the manual).
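
For reference, a sketch of that block sampler, mirroring the pattern used in the OU entry above (untested here; alpha and beta are the obvious nodes to block together):

pumpSpec$addSampler("RW_block", list(targetNodes = c('alpha', 'beta'), adaptInterval = 100))
pumpMCMC2 <- buildMCMC(pumpSpec)
CpumpMCMC2 <- compileNimble(pumpMCMC2, project = pump, resetFunctions = TRUE)
CpumpMCMC2(1000)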


  1. Not completely certain that this destroys everything connected to the object (e.g. C pointers from before), but it seems like it should.




Coreos Cluster Gotchas

26 Nov 2014

Overall impression is that CoreOS is a promising way to easily set up a highly available cluster (e.g. when the most important thing is that a service stays up when a node goes down), since it can migrate a containerized app to a new machine rather than having to have the same app already running on all machines. Either way a load-balancer needs to handle the addressing, which is doable but somewhat tricky.

It is less useful in the role of a simple server: admin on the base system is somewhat more limited (e.g. network stats, NFS sharing, etc.), and more pointedly I seem to continually run afoul of stability issues in fleet when the cluster changes size, with no way to recover without destroying and relaunching the entire cluster.

The most compelling features for me, the automated updates and the restarting of containers on system reboot, can be replicated rather straightforwardly on a normal distribution.

Fleet cannot pick a leader in a cluster of size 2 (no majority) and fails when the CoreOS cluster loses a majority (a quorum requires floor(n/2)+1 members). A cluster of size 3 can replace 1 node, but if 2 nodes fail, the cluster is hosed. See optimal cluster size and etcd/issues/863. Rescaling may assign the new node a new address, and the majority must approve the new peer. If there’s no majority available (e.g. the cluster goes from 3 to 1), you’re stuck.

In etcd > 0.5.0 (alpha channel now, methinks) some recovery is possible; see etcd/issues/1242.

On Amazon, CoreOS provides the ability to launch an AWS auto-scaling group as a CloudFormation configuration, which can set a minimum cluster size and always restart when a node goes down. Setting the minimum below 3 results in an invalid cluster (failed etcd connection due to lacking a majority) that needs to be destroyed. You need to destroy the autoscaling group; you cannot simply remove instances (since they will be regenerated). Also remember to adjust the security groups to permit outside access to the appropriate service ports.

A persistent URL address is challenging when nodes keep changing. If one node is guaranteed to be up, we can have it run the nginx load balancer to redirect to the other nodes (using toml nginx templates).


CoreOS & Docker part ways?

Update: and now it seems CoreOS isn’t happy with Docker and seeks to invent its own runtime… time will tell whether it gets the critical mass to be viable, but it doesn’t seem well aligned with my own use cases of quickly deploying basic services (RStudio, Gitlab, Drone, Docker Registry).

Also, there is lots of competition/ecosystem for container orchestration alternatives to fleet/CoreOS, though the use case for many of these isn’t entirely clear for my needs.

Again, all seem to emphasize the stable, complex service model in the cloud, and aren’t really necessary for the portable research software dev model.




Coreos Docker Registries Etc

24 Nov 2014

A secure docker registry

Running one’s own docker registry is far more elegant than moving tarballs between machines (e.g. when migrating between servers, particularly for images that may contain sensitive data such as security credentials). While it’s super convenient to have a containerized version of the Docker registry ready for action, it doesn’t do much good without putting it behind an HTTPS server (otherwise we have to restart our entire docker service with the insecure flag to permit communication with an unauthenticated registry – doesn’t sound like a good idea). So this meant learning how to use nginx load balancing, which I guess is useful to know more generally.

First pass: nginx on ubuntu server

After a few false starts, I decided the digitalocean guide is easily the best (though steps 1-3 can be skipped by using a containerized registry instead). This runs nginx directly from the host OS, which is in some ways more straightforward but less portable. A few notes-to-self in working through the tutorial:

  • Note: At first, nginx refuses to run because there was a default configuration in /etc/nginx/sites-enabled that creates a conflict. Remove this and things go pretty nicely.

  • Note: Running the registry container bound explicitly to 127.0.0.1 provides an internal-only port that we can then point to from nginx; see the example below. (Actually this will no longer matter once we use a containerized nginx, since we will simply not export these ports at all, but only expose the port of the nginx load balancer.) Still, it is good to finally be aware of the difference between 127.0.0.1 (loopback only) and 0.0.0.0 (publicly visible, and the default if we supply only a port) in this context.
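
For instance (a sketch; the ports follow the registry’s defaults):

# bind the registry to loopback only, so that nginx must front it
docker run -d --name=registry -p 127.0.0.1:5000:5000 registry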

  • Note: Running and configuring nginx: note that keys are specific to the url. This is necessary for the server signing request, but I believe it could have been omitted in the root certificate. Here’s how we go about creating a root key and certificate (crt), a server key and server signing request (csr), and then signing the latter with the former to get the server certificate.

openssl genrsa -out dockerCA.key 2048
openssl req -x509 -new -nodes -key dockerCA.key -days 10000 -out dockerCA.crt -subj '/C=US/ST=Oregon/L=Portland/CN=coreos.carlboettiger.info'
openssl genrsa -out docker-registry.key 2048
openssl req -new -key docker-registry.key -out docker-registry.csr -subj '/C=US/ST=Oregon/L=Portland/CN=coreos.carlboettiger.info'
openssl x509 -req -in docker-registry.csr -CA dockerCA.crt -CAkey dockerCA.key -CAcreateserial -out docker-registry.crt -days 10000

Note that we also need the htpasswd file from above, which needs apache2-utils and so cannot be generated directly from the CoreOS terminal (though the openssl certs can):

sudo htpasswd -bc /etc/nginx/docker-registry.htpasswd $USERNAME $PASSWORD

Having created these ahead of time, I end up just copying my keys into the Dockerfile for my nginx instance (if we generated them on the container, we’d still need to get dockerCA.crt off the container to authenticate the client machines). This makes for a simple Dockerfile that we then build locally:

FROM ubuntu:14.04
RUN apt-get update && apt-get install -y apache2-utils curl nginx openssl supervisor
COPY docker-registry /etc/nginx/sites-available/docker-registry
RUN ln -s /etc/nginx/sites-available/docker-registry /etc/nginx/sites-enabled/docker-registry

## Copy over certificates ##
COPY docker-registry.crt /etc/ssl/certs/docker-registry 
COPY docker-registry.key /etc/ssl/private/docker-registry 
COPY docker-registry.htpasswd /etc/nginx/docker-registry.htpasswd


EXPOSE 8080

## use supervisord to persist
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
CMD ["/usr/bin/supervisord"]

Note that we need to install the dockerCA.crt certificate on any client that wants to access the private registry. On Ubuntu this looks like:

sudo mkdir /usr/local/share/ca-certificates/docker-dev-cert
sudo cp dockerCA.crt /usr/local/share/ca-certificates/docker-dev-cert
sudo update-ca-certificates 
sudo service docker restart

But on CoreOS we use a different directory (and restarting the docker service doesn’t seem possible or necessary):

sudo cp dockerCA.crt /etc/ssl/certs/docker-cert
sudo update-ca-certificates  

  • Note: Could not get the official nginx container to run the docker-registry config file as /etc/nginx/nginx.conf, either with or without adding daemon off; at the top of /etc/nginx/nginx.conf. With it, nginx complains this is a duplicate (despite this being recommended in the nginx container documentation, though admittedly it already appears in the default command ["nginx" "-g" "daemon off;"]). Without it, the error says that the upstream directive is not allowed here. Not sure what to make of these errors; I ended up running an ubuntu container and just installing nginx etc. following the digitalocean guide. I ended up dropping the daemon off; from the config file and running service nginx start through supervisord to ensure that the container stays up. Oh well.

  • Note: I got a 502 error when calling curl against the nginx container-provided URL (with or without SSL enabled), since from inside the nginx container we cannot access the host addresses. The simplest solution is to add --net="host" when we docker run the nginx container, but this isn’t particularly secure. Instead, we’ll link directly to the ports of the registry container like this:

docker run  --name=registry -p 8080:8080 registry
docker run --name=nginx --net=container:registry nginx

Note that we do not need to export the registry port (e.g. -p 5000:5000) at all, but we do need to export the nginx load-balancer port from the registry container first, since we will simply be linking its network with the special --net=container:registry.

Note that we would probably want to link a local directory to provide persistent storage for the registry; in the above example, images committed to the registry are lost when the container is destroyed. A sketch follows.
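
For example (the host path is hypothetical, and /tmp/registry is assumed to be the registry image’s default storage path):

docker run --name=registry -p 8080:8080 -v /var/local/registry:/tmp/registry registry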

We can now log in:

docker login https://<YOUR-DOMAIN>:8080

We can now reference our private registry by using its full address in the namespace of the image in commands to docker pull, push, run etc.
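
For example (the image name is hypothetical):

docker tag my-image <YOUR-DOMAIN>:8080/my-image
docker push <YOUR-DOMAIN>:8080/my-image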

Migrating gitlab between servers

This migration was my original motivation to configure the private docker registry; ironically it isn’t necessary for this case (though it’s useful for the drone image, for instance).

Note that there is no need to migrate the redis and postgresql containers manually. Migrating the backup file over to the corresponding location in the linked volume and then running the backup-restore is sufficient. Upgrading is also surprisingly smooth: we can backup (just in case), then stop and remove the container (leaving the redis and postgresql containers running), pull and relaunch with otherwise matched option arguments, and the upgrade runs automatically.

When first launching the gitlab container on a tiny droplet running coreos, my droplet seems invariably to hang. Rebooting from the digitalocean terminal seems to fix this. A nice feature of fleet is that all the containers are restarted automatically after reboot, unlike when running these directly from docker on my ubuntu machine.

Notes on fleet unit files

Fleet unit files are actually pretty handy and straightforward. One trick is that we must quote commands in which we want to make use of environment variables. For instance, one must write:

Environment="VERSION=1.0"
ExecStart=/bin/bash -c "/usr/bin/docker run image:${VERSION}"

in a Service block, rather than ExecStart=/usr/bin/docker run ... directly, for the substitution to work. If we use the more standard practice of environment files (which is after all the necessary approach to avoid having to edit the unit file directly one way or another anyway), it seems we can avoid the /bin/bash wrapper and insert the environment reference directly (see the sketch below).
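
Presumably something like this (a sketch; the environment-file path is hypothetical):

# /etc/myapp.env would contain e.g. VERSION=1.0
[Service]
EnvironmentFile=/etc/myapp.env
ExecStart=/usr/bin/docker run image:${VERSION}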

If we’re not doing anything fancy wrt load balancing between different servers, we don’t have that much use for the corresponding “sidekick” unit files that keep our global etcd registry up to date (see the sketch below). Perhaps these will see more use later.
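
For reference, a minimal sidekick along the lines of the CoreOS docs (the service name and etcd key are hypothetical):

[Unit]
Description=Announce myapp
BindsTo=myapp.service
After=myapp.service

[Service]
EnvironmentFile=/etc/environment
ExecStart=/bin/sh -c "while true; do etcdctl set /services/myapp ${COREOS_PRIVATE_IPV4} --ttl 60; sleep 45; done"
ExecStop=/usr/bin/etcdctl rm /services/myapp

[X-Fleet]
MachineOf=myapp.service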

Cloud-config

Note that we need to refresh the discovery url pretty much anytime we completely destroy the cluster.

A few edits to my cloud-config handle initiating swap, essential for running most things (gitlab, rstudio) on tiny droplets. It still requires one manual reboot for the allocation to take effect. Add this to the units section of #cloud-config:

    ## Configure SWAP as per https://github.com/coreos/docs/issues/52
    - name: swap.service
      command: start
      content: |
        [Unit]
        Description=Turn on swap

        [Service]
        Type=oneshot
        Environment="SWAPFILE=/1GiB.swap"
        RemainAfterExit=true
        ExecStartPre=/usr/sbin/losetup -f ${SWAPFILE}
        ExecStart=/usr/bin/sh -c "/sbin/swapon $(/usr/sbin/losetup -j ${SWAPFILE} | /usr/bin/cut -d : -f 1)"
        ExecStop=/usr/bin/sh -c "/sbin/swapoff $(/usr/sbin/losetup -j ${SWAPFILE} | /usr/bin/cut -d : -f 1)"
        ExecStopPost=/usr/bin/sh -c "/usr/sbin/losetup -d $(/usr/sbin/losetup -j ${SWAPFILE} | /usr/bin/cut -d : -f 1)"

        [Install]
        WantedBy=local.target

    - name: swapalloc.service
      command: start
      content: |
        [Unit]
        Description=Allocate swap

        [Service]
        Type=oneshot
        ExecStart=/bin/sh -c "sudo fallocate -l 1024m /1GiB.swap && sudo chmod 600 /1GiB.swap && sudo chattr +C /1GiB.swap && sudo mkswap /1GiB.swap"

This could probably be structured more elegantly, but it works. (Not much luck trying to tweak this into a bunch of ExecStartPre commands, though.)

NFS sharing on CoreOS?

Couldn’t figure this one out; my StackOverflow question is here.




Coreos And Other Infrastructure Notes

19 Nov 2014

CoreOS?

The security model looks excellent. Some things are not so clear:

  • In a single-node setup, what happens with updates? Would containers being run directly come down and not go back up automatically? In general, how effective or troublesome is it to run a single, low-demand app on a single-node CoreOS rather than, say, an ubuntu image (e.g. just to benefit from the security updates model)? For instance, would an update cause a running app to exit in this scenario (say, if the container is launched directly with docker and not through fleet)? (The documentation merely notes that cluster allocation / the fleet algorithm is fastest with between 3 & 9 nodes.)

  • If I have a heterogeneous cluster with one more powerful compute node, is there a way to direct that certain apps run on that node and that other apps do not?

  • Looks like one needs a load-balancer to provide a consistent IP for containers that might be running on any node of the cluster?

  • Enabling swap. Works, but is there a way to do this completely in cloud-config?

Setting up my domain names for DigitalOcean

In Dreamhost DNS management:

  • I have my top-level domain registered through Dreamhost, using dreamhost’s nameservers.
  • An A record for the top-level domain points to (the new) Github IP address
  • CNAME entries for www and io point to cboettig.github.io

First step

  • Add an A record, server.carlboettiger.info, pointing to the DigitalOcean server IP

Then go over to the DigitalOcean panel.

From DigitalOcean DNS management:

  • Add a new A record for server.carlboettiger.info pointing to the DO server IP (see the sketch below)
  • Delete the existing three NS entries (ns1.digitalocean.com etc.)
  • Add three new NS entries using ns1.dreamhost.com etc.
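
In zone-file terms, the record added on both sides amounts to something like this (the IP is a placeholder):

server.carlboettiger.info.  IN  A  203.0.113.10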

Things should be good to go!




Wssspe Feedback

17 Nov 2014

WSSSPE working groups: Reproducibility, Reuse, and Sharing (Neil Chue Hong)

Our group focused on journal policies regarding software papers. Our objectives were:

  • A survey of journals that publish software papers. (The Software Sustainability Institute already maintains a list.)

  • A summary of policies each of these journals has in place regarding software papers. (e.g. licensing requirements, repository requirements, required sections in the manuscripts regarding installation or tests, etc).

  • Develop a five-star rating system for ranking these policies.

  • Apply our rating system to each of these journals.

  • Solicit feedback & iterate.

We got about halfway through this for some of the most recognized journals on the list; see the Google Doc notes.

Feedback for WSSSPE:

WSSSPE’s conference-proceedings model of submitting short papers that get five very thorough expert reviews ahead of time is really excellent. This is not common practice in my field, so this was my first time participating in such a model. Not only did I benefit from the chance to write up our piece ahead of time and to get expert feedback from people with a broader range of backgrounds than I usually get to interact with; the ability to read the full papers, and not just the abstracts, of the other attendees in advance of the workshop was also an invaluable way to learn more, make the most of the time we had, and keep a record.

A full-day workshop is a big travel commitment (travel costs, 2 nights lodging, and using up most of the preceding and following day) while simultaneously not being much time to meet people, share ideas, and start working towards any actual products.

The format proposed at the end of the session, which seemed most popular in the show of hands for future WSSSPEs – a two- to three-day event uncoupled from Supercomputing, based in a US city that is easy to fly into, with more time to move ideas forward into products using a small-group / hackathon model – would address most of my criticism.

Misc notes/discussions

  • Interesting discussion/ideas for tracking usage of software based on updating patterns, from James Horowitz and company: Heartbeat (pdf).

  • Neil mentioned a similar workshop he had recently taken part in that created a reviewer’s oath, recently submitted as an opinion piece to F1000. It is certainly more of a guideline than most journals give, if a bit pedantic at times. (For instance, as much as I believe in signing my own reviews, I would not recommend it to someone else as a blanket policy in the same vein as basic ethics like acknowledging what I don’t know. I think the ‘Oath’ needs to treat this with greater nuance.) Anyway, food for thought.

(I didn’t manage to catch much on twitter this time; I guess too much was happening in the in-person discussions.)
