Docker recently released docker-machine, which makes it much easier to manage multiple remote machines from a local client. Docker distributes ready-to-go docker-machine binaries for most major platforms, potentially making it easier to get started on Windows and Mac as well.
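For instance, on a 64-bit Linux client installation can be as simple as dropping the released binary onto your PATH (a sketch; the version number below is a placeholder, so check the releases page for the current one, and you may need sudo):

curl -L https://github.com/docker/machine/releases/download/v0.3.0/docker-machine_linux-amd64 > /usr/local/bin/docker-machine
chmod +x /usr/local/bin/docker-machine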
Set credentials in environment variables so we don’t have to pass them on the command line each time:

export DIGITALOCEAN_ACCESS_TOKEN=XXX
and create the docker-machine:
docker-machine create --driver digitalocean --digitalocean-size "1gb" server-name
where server-name is any name you want to give your server; the access token is read from the DIGITALOCEAN_ACCESS_TOKEN variable we just set. Here we launch a 1GB instance; the default on DigitalOcean is 512MB. Many other providers work just as well, including VirtualBox for running locally (see the sketch below).
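For instance, assuming VirtualBox is installed, the same command spins up a local VM instead of a cloud server (the machine name local-dev is arbitrary):

docker-machine create --driver virtualbox local-dev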
You need to set your terminal to use the active docker-machine for all docker commands, instead of the local docker installation:
eval "$(docker-machine env server-name)"
sets three environment variables that point your docker commands to the new remote machine, server-name. Wow. We can now launch any service of interest:
docker run -d -p 8787:8787 -e PASSWORD=something rocker/hadleyverse
and it will run on the server. Get the IP address of the active machine with docker-machine ip; e.g., to open the server in the browser (from an Ubuntu-based client):

xdg-open http://`docker-machine ip`:8787
When we’re finished with the instance, we can destroy the machine so we will no longer be billed, using the same syntax as we would for a container:
docker-machine rm -f server-name
If we have a locally installed docker instance, we may also want to unset the environment variables set by docker-machine:
unset DOCKER_TLS_VERIFY
unset DOCKER_CERT_PATH
unset DOCKER_HOST
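(For reference, docker-machine env server-name prints these three variables; the values below are illustrative:)

export DOCKER_TLS_VERIFY="1"
export DOCKER_CERT_PATH="/home/user/.docker/machine/machines/server-name"
export DOCKER_HOST="tcp://<machine-ip>:2376"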
You can see a list of active machines with docker-machine ls and switch between machines with docker-machine env as shown above.
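The output of docker-machine ls looks something like this (illustrative):

NAME          ACTIVE   DRIVER         STATE     URL
server-name   *        digitalocean   Running   tcp://<machine-ip>:2376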
Spawning a new machine adds a bit more overhead than launching a container on an existing local or remote server instance, but not much, and it is easily scripted. Of course the latency is higher too: the DO instance takes about two minutes to start, and pulling a sizable image onto the DO machine takes another two minutes or so. docker-machine actually prints the start-up time in seconds as it brings up the machine, in case you want to compare between providers.
Docker Compose
Docker Compose is just Fig, which is just a YAML config file / wrapper for (some of) the docker run command-line options. As with docker-machine, this simplicity is definitely a strength. Rather intuitively, docker-machine respects docker-compose: after setting the environment variables as described above, docker-compose up runs on the remote machine, just like docker run does.
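For example, the docker run command above could be expressed as a compose file like this (a sketch; the service name rstudio is arbitrary, and the entries simply mirror that command’s options):

rstudio:
  image: rocker/hadleyverse
  ports:
    - "8787:8787"
  environment:
    PASSWORD: something

Running docker-compose up -d then brings the service up in the background, locally or, with the machine environment variables set, on the remote host.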
Docker swarm
Docker swarm is rather analogous to CoreOS, its essential feature being a discovery service that allows the cluster to form. Swarm is most easily set up using docker-machine, though in my googling most tutorials fail to mention this. The official docker-machine docs are probably the best reference on this.
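In outline, that setup looks something like this (a sketch based on those docs; the machine names are arbitrary, and <token> is whatever the first command prints):

docker run swarm create
docker-machine create --driver digitalocean --swarm --swarm-master --swarm-discovery token://<token> swarm-master
docker-machine create --driver digitalocean --swarm --swarm-discovery token://<token> swarm-node-01
eval "$(docker-machine env --swarm swarm-master)"

The first command obtains a discovery token from Docker’s hosted discovery service; the last points your docker commands at the swarm as a whole rather than at a single machine.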
Docker swarm provides rather limited functionality so far (there’s a nice Docker blog post introducing swarm). In particular, it doesn’t yet support two key features found in CoreOS: fault-tolerant scheduling, which can move a container to another host if a machine goes down; and master election, so the swarm breaks if the master goes down.
It currently provides only relatively obvious scheduling: a bin-packing algorithm if you put constraints on the resources a container can use, and affinities to make sure that --link, --volumes-from and other such dependent containers end up on the same instance (see the sketch below). Instances can be annotated with labels that can be used as constraints, such as storage=ssd, though it’s not clear how to add these from docker-machine. As long as swarm does not support fault-tolerant scheduling and master election, though, these features are not as essential. Dynamically moving a container when a machine has failed means that no human is around to consider what resources the swarm has and where to schedule the container. But for merely adding a new container to an existing swarm, it’s not particularly hard for a human to look at the existing resources and just pick manually where to stick the container, without the help of swarm’s algorithms.
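For what it’s worth, in the classic swarm syntax those scheduling hints are passed as environment variables at docker run time (a sketch; postgres is just an example image, and my-backup-image is made up):

docker run -d -e constraint:storage==ssd --name db postgres
docker run -d -e affinity:container==db --name backup my-backup-image

(The storage=ssd label itself has to be attached to the docker daemon on the node, e.g. by starting it with --label storage=ssd.)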
Swarm doesn’t really understand docker-compose yet either, in that compose is essentially written as a single-host tool.