Image credits to docker.com. Fun fact: the turtle and the cat symbolize the dev and the ops roles, both having fun with Docker.
Getting Started with Docker
Cristina Negrean · 2037 words · 11 minute read
In a previous blog post I have written on how to create a Spring Boot application that exposes a discoverable REST API of a simple travel domain model. This guide provides insight on my take-aways regarding lightweight virtualization with Docker and walks you through the process of containerizing the Wanderlust Spring Boot application.
Skip the basics and get me started right away
If you cannot wait and want to try out the Dockerized Spring Boot application right away, simply follow the next steps. You need Java 8 SDK, Git, and Docker installed on your computer.
After completing the steps above, you will have a running multi-container Docker application with
a PostgreSQL datastore and a RESTful API. To check it out, use the Docker client
command docker ps
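For example (container names and IDs will differ on your machine):

```shell
# List the running containers; you should see both the API and the postgres container
docker ps

# A narrower view, if you only care about names, images, and published ports
docker ps --format "table {{.Names}}\t{{.Image}}\t{{.Ports}}"
```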
Unlike the previous blog post, where the RESTful API endpoints and the HAL browser were available at http://localhost:9000/api/opentravel/, the data-driven RESTful API is now accessible
from within the Docker host machine.
You can use the Docker client or command-line utility to check the IP address
where Docker is running:
and use the IP in the browser or a tool like cURL or Postman to test the Dockerized Spring Boot Data REST application with PostgreSQL datastore.
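On Docker Toolbox, docker-machine can report that IP; with Docker for Mac, published ports are bound on localhost. A sketch (the machine name default and the endpoint path are assumptions):

```shell
# Docker Toolbox: ask docker-machine for the VM's IP
DOCKER_IP=$(docker-machine ip default)

# Docker for Mac: the published port is bound on localhost instead
# DOCKER_IP=localhost

# Hit the API with cURL (endpoint path is illustrative)
curl -i "http://${DOCKER_IP}:9000/api/opentravel/"
```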
HAL browser:
List Destinations:
What is Docker?
Docker is an open-source Linux container management toolkit that runs natively on Linux but also works on Windows and Mac using a lightweight Linux distribution and VirtualBox.
Docker makes it easier for organizations to automate infrastructure, isolate applications, maintain consistency, and improve resource utilization.
It builds upon Linux Containers (LXC), which have been part of Linux since version 2.6.24 and provide system-level virtualization. LXC uses Linux cgroups and namespaces to isolate processes from each other so that they appear to run on their own system.
Virtual machines require a fair amount of resources, as they emulate hardware and run a full operating system stack. Linux Containers offer a lightweight alternative to full-blown virtual machines while retaining many of their benefits.
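You can see this isolation directly: inside a container, the process namespace contains only the container's own processes. A quick demo, assuming a local Docker daemon:

```shell
# Run "ps aux" inside a throwaway Alpine container:
# only the container's own processes are visible, with ps itself running as PID 1
docker run --rm alpine ps aux
```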
Docker Architecture
Docker consists of the following parts:
Docker Daemon: runs as root and orchestrates all running Docker containers
Docker Images: just as virtual machines are based on images, Docker containers are based on Docker images. These images are tiny compared to virtual machine images and are stackable thanks to AUFS storing only changes.
Docker Repositories: Docker images can be exchanged with others and versioned like source code in private or public Docker repositories
I am using Docker for Mac in this post, as I think it provides a much better Docker development experience on Mac: it makes it possible to configure a Docker container with storage outside the container without any workarounds, similar to how it works on Linux.
The Docker for Mac application does not use VirtualBox; instead it provisions a HyperKit VM based on Alpine Linux that runs the Docker Engine. With Docker for Mac you therefore get a single VM, managed by the app, as opposed to Docker Toolbox, where you could create multiple VMs with docker-machine.
An alternative to the Docker native application is Docker Toolbox, which uses VirtualBox; its installer includes the following:
Docker Client docker binary
Docker Machine docker-machine binary
Docker Compose docker-compose binary
Kitematic - Desktop GUI for Docker
Docker Quickstart Terminal app
Typical Local Workflow
Docker has a typical workflow that enables you to create images, pull images, publish images, and run containers.
From Dockerfile to Docker Image
A Dockerfile describes how to build a Docker image. The FROM command defines the base image from which we start. My Docker image derives from the
Java runtime, by using a public OracleJDK 8 image. Images are looked up locally as well as in the publicly available Docker repository. The RUN command specifies which commands to run during the build process. Generally, all Docker containers run isolated from the world with no communication allowed (a deny-all policy). If there should be communication with the outside world, it must be explicitly defined through the EXPOSE command. In this example, port 9000 is exposed. The VOLUME command specifies a mount point to which we can bind filesystems from the host operating system or other containers. This allows us to attach globally reusable and shareable mount points.
Listing 1: src/main/docker/Dockerfile
# Pull in the smallest Docker image with OracleJDK 8 (167MB)
FROM frolvlad/alpine-oraclejdk8:slim
# add bash and coreutils
RUN apk add --no-cache bash coreutils
MAINTAINER negrean.cristina@gmail.com
# We added a VOLUME pointing to "/tmp" because that is where a Spring Boot application creates working directories for
# Tomcat by default. The effect is to create a temporary file on your host under "/var/lib/docker" and link it to the
# container under "/tmp". This step is optional for the simple app that we wrote here, but can be necessary for other
# Spring Boot applications if they need to actually write in the filesystem.
VOLUME /tmp
# The project JAR file is ADDed to the container as "app.jar"
ADD open-travel-spring-boot-docker-1.0.0-SNAPSHOT.jar app.jar
# Expose Tomcat HTTP port, by default 8080; the travel API overrides it via server.port=9000
EXPOSE 9000
# You can use a RUN command to "touch" the jar file so that it has a file modification time
# (Docker creates all container files in an "unmodified" state by default)
# This actually isn't important for the simple app that we wrote, but any static content (e.g. "index.html")
# would require the file to have a modification time.
RUN bash -c 'touch /app.jar'
ENTRYPOINT ["java", "-Djava.security.egd=file:/dev/./urandom", "-jar", "/app.jar"]
Building Images: docker build and publishing them to a Docker repository: docker push
From Dockerfiles, Docker images are built, with each Dockerfile command generating a new image layer that can be individually accessed by its id, a git-commit-like fingerprint.
As the Spring Boot application we are containerizing uses a Gradle build specification, it seems most straightforward to use a Gradle plugin for this purpose.
Listing 2: build.gradle
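The original listing is not reproduced here; what follows is only a hypothetical sketch of such a configuration, assuming the se.transmode.gradle:gradle-docker plugin (the plugin coordinates, version, and task wiring are assumptions, not the blog's actual file):

```groovy
// Hypothetical sketch; the plugin, version, and task details in the real Listing 2 may differ.
buildscript {
    repositories { mavenCentral() }
    dependencies {
        classpath 'se.transmode.gradle:gradle-docker:1.2'
    }
}

apply plugin: 'docker'

// "group" becomes the organization part of the image tag, e.g. cristinatech/<app>
group = 'cristinatech'

task buildDocker(type: Docker, dependsOn: build) {
    push = false          // toggled off: the push fails unless you may write to the organization
    applicationName = jar.baseName
    dockerfile = file('src/main/docker/Dockerfile')
    doFirst {
        copy {
            from jar
            into stageDir   // stage the JAR next to the Dockerfile for the ADD command
        }
    }
}
```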
Using the plugin, you can build a tagged Docker image and then push it to a remote repository with Gradle in one command, and the outcome can be listed using the docker images command, as below:
In Listing 2: build.gradle, the buildDocker task toggles
docker push to disabled, as the push will fail unless you are part of the "cristinatech" organization at Docker Hub.
If you change the configuration to match your own Docker ID (see the group syntax), it should succeed, and you will have a new tagged image deployed
at Docker Hub.
The No Plugin Way
If you don't want to use the Gradle Docker build plugin, you can achieve
all of the above Docker workflow steps (build, tag, and push) using the command-line utility:
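A sketch of those three steps with the plain Docker CLI (the image name, tag, and build context path are illustrative):

```shell
# Build the image from the Dockerfile in src/main/docker
# (the JAR must first be copied next to the Dockerfile)
docker build -t cristinatech/open-travel-spring-boot-docker:1.0.0-SNAPSHOT src/main/docker

# Optionally add another tag, e.g. "latest"
docker tag cristinatech/open-travel-spring-boot-docker:1.0.0-SNAPSHOT \
           cristinatech/open-travel-spring-boot-docker:latest

# Push to Docker Hub (requires "docker login" and write access to the organization)
docker push cristinatech/open-travel-spring-boot-docker:latest
```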
Running Docker containers: docker run
To run a Docker image you just need to use the run command followed by a local image name or one found on Docker Hub. Commonly, a Docker image will require some additional environment variables, which can be specified with the -e option.
For long-running processes like daemons, you also need to use the -d option.
As the open travel API depends on
a PostgreSQL 9 datastore, to start the
postgres image, you would run the following command to configure
the PostgreSQL root user's password, as documented in the Docker Hub postgres repository documentation:
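A sketch of that command (the container name, password, and port mapping are illustrative; POSTGRES_PASSWORD is the variable the official postgres image documents):

```shell
# Start a PostgreSQL 9 container in the background with a root password
docker run --name wanderlust-postgres \
  -e POSTGRES_PASSWORD=mysecretpassword \
  -p 5432:5432 \
  -d postgres:9
```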
But wait! There is a much simpler way, especially when you have more than two containers: leveraging Docker Compose, which is
already installed with the Docker for Mac native application!
Docker Compose is a tool for defining and running multi-container Docker applications. It needs a Dockerfile, which we already have, so the app's
environment can be reproduced anywhere. The magic happens in docker-compose.yml,
where you define the services that make up your application, so they
can be run together in an isolated environment.
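A minimal docker-compose.yml sketch for this setup (the service names, credentials, and build context are assumptions, not the blog's original file):

```yaml
version: '2'
services:
  wanderlust-db:
    image: postgres:9
    environment:
      POSTGRES_PASSWORD: mysecretpassword
  wanderlust-api:
    build: src/main/docker     # uses the Dockerfile from Listing 1
    ports:
      - "9000:9000"
    depends_on:
      - wanderlust-db
```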
Lastly, run docker-compose up and Compose will start and run the entire app.