As a software developer, I frequently need access to standard services such as postgresql or mysql. Since I code and deploy almost entirely on debian-based systems, I can just apt-get those services on my dev machine, say
apt-get install mysql-server
That works well for services that have been around for some time, such as the classical open-source relational database systems: their feature set changes rarely, so I mostly don't care about the differences between mysql 5.6 and 5.7. For example, debian 8 (jessie) was released in early 2015 and ships mysql 5.5. Although 5.6 and 5.7 have been released, I can work with 5.5 just as well.
The situation changes once you look at more recently developed services, think of mongodb, rabbitmq, redis, neo4j, elasticsearch or orientdb. jessie ships elasticsearch 1.0.3 while 1.7.1 has already been released, and there are substantial differences between those versions, for example new aggregation types in 1.3.0. In other words: I would like to use the upstream versions. In production, the deployment strategy for upstream software might vary depending on the specific needs of the application, but on my dev machine this should be a quick and painless process that I can repeat any time and start from scratch. This is what docker does very nicely.
I’ll assume that you know already what docker is and how to get it running on your box. Starting from there, you can execute any of the following bash scripts to get the respective service up and running. The first time you run each command, the container images will be pulled from hub.docker.com.
This spawns a neo4j instance on 127.0.0.1:7474:
#!/bin/bash -e
sudo mkdir -p /var/lib/neo4j
sudo docker run --rm -ti \
  -v /var/lib/neo4j:/data \
  -p 127.0.0.1:7474:7474 \
  frodenas/neo4j
--rm deletes the container after stopping neo4j,
-ti attaches your tty and stdin to the container so that you can stop it via Ctrl+C,
-v maps a directory from outside the container to a directory within it. This is where neo4j stores its data, so the option lets you keep your data even if the container is stopped (and deleted).
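Once the container is running, you can check that the instance answers on the mapped port. A minimal sketch, assuming the neo4j 2.x REST API that this image ships with:

```shell
#!/bin/bash -e
# Query the neo4j REST endpoint on the port mapped above; if the
# server is up, this prints a JSON document describing its API.
curl -s http://127.0.0.1:7474/db/data/
```

This only works while the container from the script above is running, of course.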
Here, we fire up an elasticsearch 1.5.2 on port 9200 on all interfaces:
#!/bin/bash -e
sudo mkdir -p /var/lib/elasticsearch
sudo docker run --rm -ti \
  -v /var/lib/elasticsearch:/usr/share/elasticsearch/data \
  -p 9200:9200 \
  elasticsearch:1.5.2
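To convince yourself that the data actually ends up under /var/lib/elasticsearch, you can index a document and fetch it back over the elasticsearch 1.x HTTP API. The index and type names (testindex, doc) are just placeholders for this sketch:

```shell
#!/bin/bash -e
# Index a single document, then retrieve it again.
curl -s -XPUT 'http://127.0.0.1:9200/testindex/doc/1' \
  -d '{"msg": "hello"}'
curl -s 'http://127.0.0.1:9200/testindex/doc/1'
```

Stop the container, start it again, and the document should still be there since the data directory lives outside the container.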
This is an orientdb example:
#!/bin/bash -e
sudo mkdir -p /var/lib/orientdb
sudo docker run --rm -ti \
  -e ORIENTDB_ROOT_PASSWORD=root \
  -v /var/lib/orientdb:/usr/local/src/orientdb/databases \
  -p 2424:2424 \
  -p 2480:2480 \
  joaodubas/orientdb:latest
Orientdb listens on two ports here, and we also set the environment variable ORIENTDB_ROOT_PASSWORD, which configures the root password for the orientdb instance (this only happens the first time you run the container).
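Port 2424 speaks orientdb's binary protocol, while 2480 serves the HTTP API and the Studio web UI. A quick sketch of a smoke test against the HTTP side, using the root password configured above:

```shell
#!/bin/bash -e
# List the databases via orientdb's HTTP API on port 2480,
# authenticating as root with the password set via
# ORIENTDB_ROOT_PASSWORD in the run script.
curl -s -u root:root http://127.0.0.1:2480/listDatabases
```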
Those are just a few examples to get you started; I encourage you to check out the docker hub, which provides images for almost every use case.