Docker Compose on Windows

As a recap, Compose is a way to manage the lifecycle of the entire app — which could include multiple containers.

It is NOT a way to BUILD Docker images. While Compose CAN build images from Dockerfiles, on production hosts you often do not want to do builds; you simply want to reuse a prebuilt image.

  • Start, stop, and rebuild services
  • View the status of running services
  • Stream the log output of running services
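In day-to-day use, that looks like:

docker-compose up -d      # create and start all services in the background
docker-compose ps         # view the status of running services
docker-compose logs -f    # stream the log output of running services
docker-compose stop       # stop the running services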

Here’s a simple example using Compose — it manages a web tier built from the local Dockerfile alongside a prepackaged Redis container (Alpine Linux):

version: '3'
services:
  web:
    build: .
    ports:
      - "5000:5000"
  redis:
    image: "redis:alpine"

Compose Up — Preserve volume data when containers are created

Compose preserves all volumes used by your services. When docker-compose up runs, if it finds any containers from previous runs, it copies the volumes from the old container to the new container. This process ensures that any data you’ve created in volumes isn’t lost.
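For example, if the redis service declares a named volume for its data directory, the data survives container recreation (a minimal sketch):

version: '3'
services:
  redis:
    image: "redis:alpine"
    volumes:
      - redis-data:/data
volumes:
  redis-data: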

Compose Up — Only Recreate Changed Containers

Compose caches the configuration used to create a container. When you restart a service that has not changed, Compose re-uses the existing containers. Re-using containers means that you can make changes to your environment very quickly.
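You can override this behavior from the command line:

docker-compose up -d --no-recreate       # never recreate containers, even if their config changed
docker-compose up -d --force-recreate    # recreate containers even if nothing changed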

You can also point Compose at an alternate file:

docker-compose -f docker-compose.json up

Think of an image as a powered-down container.

(As an aside: for monitoring, consider Splunk for events and Prometheus for metrics.)

Doesn’t the web service need the database service to be up and running? How do you control the startup order — the order in which services spin up?

You can control the order of service startup with the depends_on option. Compose always starts containers in dependency order, where dependencies are determined by depends_on, links, volumes_from, and network_mode: "service:...".
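For example (the db service here is illustrative):

version: '3'
services:
  web:
    build: .
    depends_on:
      - db
  db:
    image: postgres

With this file, docker-compose up starts db before web.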

However, Compose does not wait until a container is “ready” (whatever that means for your particular application) — only until it’s running. There’s a good reason for this.

The problem of waiting for a database (for example) to be ready is really just a subset of a much larger problem of distributed systems. In production, your database could become unavailable or move hosts at any time. Your application needs to be resilient to these types of failures.

To handle this, design your application to attempt to re-establish a connection to the database after a failure. If the application retries the connection, it can eventually connect to the database.
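A common way to do this is a small wrapper script that retries until the dependency accepts connections, then starts the app. A minimal sketch (the host, port, and start command are illustrative, and it assumes nc is available in the image):

#!/bin/sh
# keep retrying until the database accepts TCP connections
until nc -z db 5432; do
  echo "waiting for db..."
  sleep 1
done
exec python app.py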

Advantages of Docker Compose

  • Bring up (or tear down) an entire multi-container app with a single command
  • Volume data is preserved when containers are recreated
  • Only containers whose configuration has changed are recreated, so iteration is fast

Run a one-off command on a service
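For example, using the Compose file above, this spins up a one-off container for the web service, runs env inside it, and exits:

docker-compose run web env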

Compose Up — Preserve volume data when containers are created

Compose preserves all volumes used by your services. When docker-compose up runs, if it finds any containers from previous runs, it copies the volumes from the old container to the new container. This process ensures that any data you’ve created in volumes isn’t lost.

Only Recreate Changed Containers —

Compose caches the configuration used to create a container. When you restart a service that has not changed, Compose re-uses the existing containers. Re-using containers means that you can make changes to your environment very quickly.

docker-compose -f docker-compose.json up

Up versus Run versus Start

docker-compose up builds images (if needed), creates, starts, and attaches to containers for all services. docker-compose run spins up a one-off container to run a single command against a service, as shown above. docker-compose start is useful only to restart containers that were previously created but stopped; it never creates new containers.

Why do my services take 10 seconds to recreate or stop?

Compose stop attempts to stop a container by sending a SIGTERM. It then waits for a default timeout of 10 seconds. After the timeout, a SIGKILL is sent to the container to kill it forcefully. If you are waiting for this timeout, it means that your containers aren’t shutting down when they receive the SIGTERM signal.

To fix this issue, try the following:

JSON form versus string form — make sure you’re using the JSON form of CMD and ENTRYPOINT in your Dockerfile. The string form runs your command under a shell (/bin/sh -c), which doesn’t pass termination signals on to your process.

Compose always uses the JSON form, so don’t worry if you override the command or entrypoint in your Compose file.
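For example (app.py is a placeholder for your own entrypoint):

# String form: runs under a shell, which may not forward SIGTERM to your process
CMD python app.py

# JSON (exec) form: your process runs as PID 1 and receives SIGTERM directly
CMD ["python", "app.py"]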

How do you automate deployments (from a git repo)?

Example deploy process may look like this:

  • Build the app using docker build . in the code directory.
  • Test the image.
  • Push the new image out to a registry: docker push myorg/myimage.
  • Notify the remote app server to pull the image from the registry and run it (you can also do this directly with a configuration management tool).
  • Swap ports in an HTTP proxy.
  • Stop the old container.
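A minimal shell sketch of that pipeline (the test command, server name, proxy step, and ports are placeholders):

#!/bin/sh
set -e
docker build -t myorg/myimage .                  # build in the code directory
docker run --rm myorg/myimage ./run-tests.sh     # test the image
docker push myorg/myimage                        # push to the registry
ssh appserver 'docker pull myorg/myimage && docker run -d --name myapp-new -p 5001:5000 myorg/myimage'
# ...swap ports in the HTTP proxy, then stop and remove the old container:
ssh appserver 'docker stop myapp-old && docker rm myapp-old'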

You can consider using Amazon Elastic Beanstalk or Google App Engine with Docker.

Elastic Beanstalk / App Engine / Azure App Service will do most of the deployment for you and provide features such as auto-scaling, rolling updates, zero-downtime deployments, and more.

What is Image2Docker? And when should I use it?

Image2Docker is a PowerShell module that migrates existing Windows applications from VMs to Windows Container images. Although it supports multiple application types, the main focus is on IIS.

You can use Image2Docker to export ASP.NET websites from a VM, so you can run them in a Windows Container with no application changes.

It supports discovery of these artifacts:

  • Microsoft Windows Server Roles and Features
  • Microsoft Windows Add/Remove Programs (ARP)
  • Microsoft Windows Server Domain Name Server (DNS)
  • Microsoft Windows Internet Information Services (IIS)
  • HTTP Handlers in IIS configuration
  • IIS Websites and filesystem paths
  • ASP.NET web applications
  • Microsoft SQL Server instances
  • Apache Web Server
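A typical invocation looks like this (the VM image path and output path are placeholders):

Install-Module Image2Docker
Import-Module Image2Docker
ConvertTo-Dockerfile -ImagePath C:\vms\iis-vm.vhd -Artifact IIS -OutputPath C:\i2d\myiisapp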

My app uses Integrated Windows Authentication. How will that work once an app is Containerized?

Containers cannot currently be joined to an Active Directory domain, which Integrated Windows Authentication requires, so a workaround is needed as these applications are migrated to containers. The answer lies in something called a group Managed Service Account.

A group MSA (gMSA) is a specific user principal that is used to connect to and access network resources.

Unlike MSAs, which can only be used by a single instance of a service, a gMSA can be used by multiple instances of a service running across multiple computers, such as in a server farm or in load-balanced services.

Containerized applications can use the gMSA when accessing domain resources (file shares, databases, directory services, etc.) from within a container.

Prior to creating a Group Managed Service Account for a containerized application or service, ensure that Windows Server worker nodes that are part of your Docker Swarm cluster are joined to your Active Directory domain. This is required to access and use the gMSA. Additionally, it is highly recommended to create an Active Directory group specifically for managing the Windows Server hosts in your Docker Swarm cluster.

Step 1 — New-ADGroup "Container Hosts" -GroupScope Global
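The remaining steps, sketched with the ActiveDirectory PowerShell module (host and account names are illustrative):

Step 2 — Add-ADGroupMember "Container Hosts" -Members "ContainerHost01$"

Step 3 — New-ADServiceAccount -Name "mygmsa" -DNSHostName "mygmsa.mydomain.local" -PrincipalsAllowedToRetrieveManagedPassword "Container Hosts"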

What about logging — how and where does one see Docker Daemon logs?

It is important to distinguish between docker’s own logs and your app logs.

For your app’s logs, you can configure a different logging driver per container. By default, a container’s stdout and stderr are written to a JSON file located at /var/lib/docker/containers/[container-id]/[container-id]-json.log. The daemon’s own logs go to the host’s system log (for example, journalctl -u docker.service on systemd-based Linux, or the Windows Event Log on Windows).

Run docker inspect to find a container’s log file location

You can docker inspect each container to see where their logs are:

docker inspect --format='{{.LogPath}}' $INSTANCE_ID

Alternatively, run docker info and find the “Docker Root Dir” value, e.g. /var/lib/docker; container logs live under that directory.

My application creates its own log files.

That’s great — and you can keep that unchanged. However, you need a way for files written inside the container to reach the host filesystem, and this doesn’t happen by default. Worse, when the container is destroyed, so are your log files.

Option 1 — Bind-mount a container directory to the host:

This is the simplest approach and can be accomplished using the docker run command with the -v option (e.g.):

docker run --name=nginx -d -v ~/nginxlogs:/var/log/nginx -p 5000:80 nginx

(Note: The -v flag is very flexible. It can bind-mount a directory or create a named volume with just a slight adjustment in syntax: if the first argument begins with / or ~/, you’re creating a bind mount; remove that, and you’re naming the volume. See the examples below.)

  • -v /path:/path/in/container mounts the host directory /path at /path/in/container
  • -v path:/path/in/container creates a volume named path with no relationship to the host.

Option 2 — Create an independent Data Volume

What’s a data volume, you ask?

Containers by nature are transient, meaning that any files inside the container will be lost if the container shuts down. A data volume is defined as “a marked directory inside of a container that exists to hold persistent or commonly shared data.”
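For example, you can create a standalone named volume and attach it to a container (the names are illustrative):

docker volume create app-logs
docker run -d --name myapp -v app-logs:/var/log/nginx nginx

The app-logs volume outlives the container: destroy and recreate myapp, and the log data persists.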

What if I don’t want to use a data volume?

There are at least three more options (although data volumes are the preferred choice):

  1. Loggly — Containers can forward log events to a centralized logging service (such as Loggly) or store log events in a data volume.
  2. Docker Logging Driver — The Docker logging driver reads log events directly from the container’s stdout and stderr output; this eliminates the need to read to and write from log files, which translates into a performance gain.
  3. Dedicated Logging Container — This approach has the primary advantage of allowing log events to be managed fully within the Docker environment. Since a dedicated logging container can gather log events from other containers, aggregate them, then store or forward the events to a third-party service, this approach eliminates the dependencies on a host.

What is a ‘service’?

In swarm mode, a service defines how containers for an app should run across the cluster: the image, the number of replicas, the published ports, and so on. For example, this creates a service named web running five nginx replicas and publishes port 8080:

docker service create --replicas 5 --publish 8080:80 --name web nginx

What if my server uses SSL ? How do I add a certificate to a containerized site?

  1. Use PowerShell to create a self-signed certificate.
  2. Bind it to IIS (again, using PowerShell).
  3. Run the container — docker run -ti --entrypoint cmd -p 80:80 -p 443:443 -h myweb -v c:\demo\appfiles:c:\temp microsoft/aspnet

Step 1 — Create a self-signed certificate (shown here with openssl; on Windows, the New-SelfSignedCertificate PowerShell cmdlet does the same job)

openssl req \
-new \
-newkey rsa:4096 \
-days 3650 \
-nodes \
-x509 \
-subj "/C=US/ST=CA/L=SF/O=Docker-demo/CN=app.example.org" \
-keyout app.example.org.key \
-out app.example.org.cert

Step 2 — Create a docker-compose.yml file with the following content:


version: "3.2"services:
demo:
image: docker-demo
deploy:
replicas: 1
labels:
com.docker.lb.hosts: app.example.org
com.docker.lb.network: demo-network
com.docker.lb.port: 8080
com.docker.lb.ssl_cert: demo_app.example.org.cert
com.docker.lb.ssl_key: demo_app.example.org.key
environment:
METADATA: proxy-handles-tls
networks:
- demo-networknetworks:
demo-network:
driver: overlay
secrets:
app.example.org.cert:
file: ./app.example.org.cert
app.example.org.key:
file: ./app.example.org.key

Step 3 — Deploy the Stack

docker stack deploy --compose-file docker-compose.yml demo

Docker Expose

EXPOSE does not actually expose the network port. To expose a port to the world (or our network), you would need to specify another option when you run your container. For example, you would do the following:

docker run -p 80:80 {image}

Need an experienced AWS/GCP/Azure Docker Professional to help out with your Docker / Public Cloud Strategy? Set up a time with Anuj Varma.