Docker Architect – Architecture cannot be an afterthought (™)

How secure is docker? Is it possible to access the underlying host?

My Answer on Quora

Let me paint a very simple example. You have a containerized WordPress app (or any app that uses a database). Typically, you will need a volume mount (a container volume mapped to the underlying host). By default, this container volume has full rwx (read-write-execute) access to the underlying filesystem.

If you can introduce malware onto the WordPress volume (for example via standard Layer-7 attacks of the kind catalogued by OWASP), you have introduced it onto the underlying host. Remember, the container responds to all HTTP requests (if it is hosting a webserver) – and anything you can introduce into a normal website via HTTP, you can introduce into the containerized website.

As simple as that.

Now, if you want to be extra cautious:

  • a) Do not use volume mounts, or mount them read-only, as shown in the sketch below (a fully secure setup takes some work)
  • b) Run your container in a special 'memory isolated' mode (e.g., Hyper-V isolation on Windows, which gives each container its own kernel)
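A minimal sketch of option (a), assuming a WordPress image and illustrative paths. The :ro flag makes the bind mount read-only, and --read-only locks down the container's own filesystem (WordPress needs a few writable paths in practice, so treat this as a starting point rather than a drop-in config):

docker run -d --name wordpress \
  -v /srv/wordpress/html:/var/www/html:ro \
  --read-only --tmpfs /tmp \
  wordpress:latest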

In addition:

  • To make your container platform resilient, use network namespaces to sequester applications and environments (see the sketch after this list)
  • Attach storage via secure mounts
  • Use gMSA to accomplish Integrated Windows Authentication; this prevents unauthorized access from any computer that does not hold the gMSA credentials
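A sketch of the network-sequestering idea (names are illustrative): containers attached to different user-defined bridge networks cannot reach each other by default.

docker network create app1-net
docker network create app2-net
docker run -d --network app1-net --name app1 myorg/app1
docker run -d --network app2-net --name app2 myorg/app2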

Summary

It is entirely possible to break out of a container boundary and introduce harmful software onto the underlying host filesystem. However, simple precautions can mitigate this risk.

Scale up and Scale out in Kubernetes

Overview

Both scale-up and scale-out are lightweight operations in AKS clusters – in contrast to scaling VMs up or out.

Scaling Up

Scaling up is simply a matter of allocating more cluster resources (CPU and memory) to each pod.

This is in stark contrast to VM scale-up, which requires shutting down and restarting instances.
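A sketch of a scale-up with kubectl (the deployment name and values are illustrative); the pods are recreated with the new resource settings, and no VM restart is involved:

kubectl set resources deployment/web \
  --requests=cpu=500m,memory=512Mi \
  --limits=cpu=1,memory=1Gi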

Scaling Out

Pods can be automatically scaled out using the Horizontal Pod Autoscaler (HPA).

Cluster nodes (VMs) can also be autoscaled, via the cluster autoscaler.
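Sketches of both (names and thresholds are illustrative; the second command assumes an AKS cluster):

kubectl autoscale deployment/web --min=2 --max=10 --cpu-percent=70

az aks update --resource-group myRG --name myCluster \
  --enable-cluster-autoscaler --min-count 1 --max-count 5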




Need an experienced Docker, Cloud Networking or a Cloud Data Protection Expert?  Anuj has successfully delivered over a dozen deployments on each of the public clouds (AWS/GCP/Azure) including several DevSecOps engagements. Set up a time with Anuj Varma.

Dockerfile on Windows – Multiple Entrypoint Commands

More than one entrypoint command?

Typically, Dockerfiles allow a single ENTRYPOINT and a single CMD. In fact, if you write multiple CMD instructions, all but the last are ignored and only the final CMD is executed. By now, you should know that entrypoint and command instructions can be placed in your docker-compose.yml instead of your Dockerfile. I prefer this approach, since it keeps both files clean and more maintainable.

What do you do if you require more than one command to execute when your container starts up?

The answer is simple – but getting it to work was not trivial. The idea is to wrap those tasks in a PowerShell script (say, bootstrap.ps1) and set the ENTRYPOINT to the bootstrap script.

Why was this a challenge to get working?

The challenge wasn't with assigning the PowerShell script as the startup ENTRYPOINT command. That worked just fine.

The challenge was to do that AND to perform an additional task – i.e., start IIS (or whatever webserver / ASP.NET DLL is your application's true entrypoint) – at the same time.

Again, the fact that you can only assign ONE ENTRYPOINT command kept me searching for the answer.

It turns out you can chain your ENTRYPOINT commands, provided you run them through a shell and join them with &, as shown in the last line below (here cmd /c launches the PowerShell bootstrap script and then ServiceMonitor).

version: '3.7'

services:
  web:
    env_file: .\variables.env
    build:
      context: .\web
      dockerfile: Dockerfile
    ports:
      - "80"
      - "443:443"
    security_opt:
      - credentialspec=file://mywebsite.json
    hostname: mywebsite
    volumes:
      - c:\Data:c:\Data
      - c:\certs:c:\certs
    entrypoint: cmd /c "powershell \bootstrap.ps1 & C:\\ServiceMonitor.exe w3svc"

Summary

Dockerfile instructions are concise and thus deceptively simple. There seem to be limitations imposed by Docker, but it turns out that, with a little PowerShell, these limitations can easily be overcome. This post describes a way to assign multiple startup (entrypoint) commands – something I found a lot of people struggling with.

 





Need an experienced Cloud Networking or a Cloud Data Protection Expert?  Anuj has successfully delivered over a dozen deployments on each of the public clouds (AWS/GCP/Azure) including several DevSecOps engagements. Set up a time with Anuj Varma.

Docker Compose on Windows

As a recap, Compose is a way to manage the lifecycle of an entire app – which could include multiple containers.

It is NOT a way to BUILD Docker images. While Compose CAN build images from Dockerfiles, on production hosts you often do not want to do builds – you simply want to reuse a prebuilt image. With Compose, you can:

  • Start, stop, and rebuild services
  • View the status of running services
  • Stream the log output of running services

Here's a simple example: a Compose file that manages a web tier and a packaged Redis instance (on Alpine Linux):

version: '3'

services:
  web:
    build: .
    ports:
      - "5000:5000"
  redis:
    image: "redis:alpine"

Compose Up — Preserve volume data when containers are created

Compose preserves all volumes used by your services. When docker-compose up runs, if it finds any containers from previous runs, it copies the volumes from the old container to the new container. This process ensures that any data you’ve created in volumes isn’t lost.

Compose Up – Only Recreate Changed Containers

Compose caches the configuration used to create a container. When you restart a service that has not changed, Compose re-uses the existing containers. Re-using containers means that you can make changes to your environment very quickly.

You can also point Compose at an alternate file – even a JSON one, since YAML is a superset of JSON:

docker-compose -f docker-compose.json up

Think of an image as a powered-down container.

(On the monitoring side: Splunk for events, Prometheus for metrics.)

Doesn’t the web service need the database service to be up and running? How do you control the start up order — the order in which services spin up?

You can control the order of service startup with the depends_on option. Compose always starts containers in dependency order, where dependencies are determined by depends_on, links, volumes_from, and network_mode: "service:...".

However, Compose does not wait until a container is “ready” (whatever that means for your particular application) — only until it’s running. There’s a good reason for this.

The problem of waiting for a database (for example) to be ready is really just a subset of a much larger problem of distributed systems. In production, your database could become unavailable or move hosts at any time. Your application needs to be resilient to these types of failures.

To handle this, design your application to attempt to re-establish a connection to the database after a failure. If the application retries the connection, it can eventually connect to the database.
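At the container level, the same idea is often sketched as a wrapper entrypoint that waits for the database before starting the app (service name, port, and start command are illustrative):

until nc -z db 5432; do
  echo "waiting for database..."
  sleep 2
done
exec ./start-app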

Run a one-off command on a service
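A sketch (service and command are illustrative) – docker-compose run spins up a one-off container for a service and runs the given command in it:

docker-compose run web bash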


Up versus Run versus Start

docker-compose up (re)creates and starts all the services defined in the file; docker-compose run starts a one-off container for a single service; docker-compose start is useful only to restart containers that were previously created but stopped – it never creates new containers.

Why do my services take 10 seconds to recreate or stop?

Compose stop attempts to stop a container by sending a SIGTERM. It then waits for a default timeout of 10 seconds. After the timeout, a SIGKILL is sent to kill the container forcefully. If you are hitting this timeout, it means your containers aren't shutting down when they receive the SIGTERM signal.

To fix this issue, try the following:

JSON Form versus String Form – Make sure you're using the JSON (exec) form of CMD and ENTRYPOINT in your Dockerfile. The string (shell) form runs your command under a shell, which doesn't forward termination signals to your process.

Compose always uses the JSON form, so don’t worry if you override the command or entrypoint in your Compose file.
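A sketch of the difference in a Dockerfile (the node command is illustrative):

# Shell (string) form – runs under a shell, which may swallow SIGTERM:
CMD node server.js

# Exec (JSON) form – the process receives signals directly:
CMD ["node", "server.js"]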

How do you automate deployments (from a git repo)?

An example deploy process may look like this:

  • Build the app using docker build . in the code directory.
  • Test the image.
  • Push the new image to a registry: docker push myorg/myimage.
  • Notify the remote app server to pull the image from the registry and run it (you can also do this directly using a configuration management tool).
  • Swap ports in an HTTP proxy.
  • Stop the old container.

You can also consider using Amazon Elastic Beanstalk or Google App Engine with Docker.

Elastic Beanstalk / App Engine / Azure App Service will do most of the deployment for you and provide features such as auto-scaling, rolling updates, zero-downtime deployments and more.

What is Image2Docker? And when should I use it?

Image2Docker is a PowerShell module that migrates existing Windows applications from VMs to Windows container images. Although it supports multiple application types, the main focus is on IIS.

You can use Image2Docker to export ASP.NET websites from a VM, so you can run them in a Windows Container with no application changes.

It supports discovery of these artifacts:

  • Microsoft Windows Server Roles and Features
  • Microsoft Windows Add/Remove Programs (ARP)
  • Microsoft Windows Server Domain Name Server (DNS)
  • Microsoft Windows Internet Information Services (IIS)
  • HTTP Handlers in IIS configuration
  • IIS Websites and filesystem paths
  • ASP.NET web applications
  • Microsoft SQL Server instances
  • Apache Web Server
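A minimal usage sketch (paths are illustrative; check the module's documentation for the current parameters). This generates a Dockerfile from the VM disk image, which you can then build into a Windows container image:

Install-Module Image2Docker
ConvertTo-Dockerfile -ImagePath C:\vms\webserver.vhd -Artifact IIS -OutputPath C:\docker\webserver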

My app uses Integrated Windows Authentication. How will that work once an app is Containerized?

Containers cannot currently be joined to an Active Directory domain, as Integrated Windows Authentication requires. A workaround is therefore needed as IWA-dependent applications are migrated to containers. The answer lies in something called a group Managed Service Account (gMSA).

A group MSA (gMSA) is a specific user principal that is used to connect to and access network resources.

Unlike MSAs, which can only be used by a single instance of a service, a gMSA can be used by multiple instances of a service running across multiple computers, such as in a server farm or in load-balanced services.

Containerized applications can use the gMSA when accessing domain resources (file shares, databases, directory services, etc.) from within a container.

Prior to creating a Group Managed Service Account for a containerized application or service, ensure that Windows Server worker nodes that are part of your Docker Swarm cluster are joined to your Active Directory domain. This is required to access and use the gMSA. Additionally, it is highly recommended to create an Active Directory group specifically for managing the Windows Server hosts in your Docker Swarm cluster.

Step 1 – Create an Active Directory group for the container hosts:

New-ADGroup "Container Hosts" -GroupScope Global
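Step 2 – a sketch of creating the gMSA itself (names are illustrative, and this assumes the KDS root key already exists in the domain):

New-ADServiceAccount -Name "mywebapp" -DnsHostName "mywebapp.mydomain.local" -PrincipalsAllowedToRetrieveManagedPassword "Container Hosts"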

What about logging — how and where does one see Docker Daemon logs?

It is important to distinguish between docker’s own logs and your app logs.

For container logs, Docker lets you configure different logging drivers. By default, the stdout and stderr of each container are written to a JSON file located at /var/lib/docker/containers/[container-id]/[container-id]-json.log. (Daemon logs themselves go to the host's logging system – e.g., journald on most Linux distributions.)

You can docker inspect each container to see where its logs are:

docker inspect --format='{{.LogPath}}' $INSTANCE_ID

You can also run docker info and look for the "Docker Root Dir" value (e.g., /var/lib/docker) to see where Docker keeps container data on the host.

My application creates its own log files.

That's great – and you can keep that unchanged. However, you will need a way for the container volume to interact with the local host filesystem; this doesn't happen by default. What's worse, when the container is destroyed, so are your log files.

Option 1 – Bind-mount a container volume to the local host:

This is the simplest approach and can be accomplished using the docker run command with the -v option (e.g.):

docker run --name=nginx -d -v ~/nginxlogs:/var/log/nginx -p 5000:80 nginx

(Note: the -v flag is very flexible. It can bind-mount a directory or name a volume with just a slight adjustment in syntax: if the first argument begins with a / or ~/, you're creating a bind mount; remove that, and you're naming a volume. See the examples below.)

  • -v /path:/path/in/container mounts the host directory, /path at the /path/in/container
  • -v path:/path/in/container creates a volume named path with no relationship to the host.

Option 2 — Create an independent Data Volume

What’s a data volume, you ask?

Containers by nature are transient, meaning that any files inside the container will be lost if the container shuts down. A data volume is defined as “a marked directory inside of a container that exists to hold persistent or commonly shared data.”

What if I don’t want to use a data volume?

There are at least three more options (although data volumes are the preferred choice):

  1. Loggly — Containers can forward log events to a centralized logging service (such as Loggly) or store log events in a data volume.
  2. Docker Logging Driver — The Docker logging driver reads log events directly from the container’s stdout and stderr output; this eliminates the need to read to and write from log files, which translates into a performance gain.
  3. Dedicated Logging Container — This approach has the primary advantage of allowing log events to be managed fully within the Docker environment. Since a dedicated logging container can gather log events from other containers, aggregate them, then store or forward the events to a third-party service, this approach eliminates the dependencies on a host.

What is a 'service'?

In swarm mode, a service is the declarative unit of deployment: an image plus the desired number of replicas (along with ports, networks, etc.), which Docker then keeps running:

docker service create --replicas 5 --publish 8080:80 --name web nginx

What if my server uses SSL ? How do I add a certificate to a containerized site?

  1. Create a self-signed certificate (with PowerShell's New-SelfSignedCertificate on Windows, or openssl as shown below).
  2. Bind it to IIS (again, using PowerShell).
  3. Run the container – docker run -ti --entrypoint cmd -p 80:80 -p 443:443 -h myweb -v c:\demo\appfiles:c:\temp microsoft/aspnet

Step 1 – Create a self-signed certificate

openssl req \
-new \
-newkey rsa:4096 \
-days 3650 \
-nodes \
-x509 \
-subj "/C=US/ST=CA/L=SF/O=Docker-demo/CN=app.example.org" \
-keyout app.example.org.key \
-out app.example.org.cert

Step 2 — Create a docker-compose.yml file with the following content:


version: "3.2"services:
demo:
image: docker-demo
deploy:
replicas: 1
labels:
com.docker.lb.hosts: app.example.org
com.docker.lb.network: demo-network
com.docker.lb.port: 8080
com.docker.lb.ssl_cert: demo_app.example.org.cert
com.docker.lb.ssl_key: demo_app.example.org.key
environment:
METADATA: proxy-handles-tls
networks:
- demo-networknetworks:
demo-network:
driver: overlay
secrets:
app.example.org.cert:
file: ./app.example.org.cert
app.example.org.key:
file: ./app.example.org.key

Step 3 — Deploy the Stack

docker stack deploy --compose-file docker-compose.yml demo

Docker Expose

EXPOSE does not actually publish the network port. To expose a port to the world (or your network), you need to publish it when you run your container. For example:

docker run -p 80 {image}

Need an experienced AWS/GCP/Azure Docker Professional to help out with your Docker / Public Cloud Strategy? Set up a time with Anuj Varma.

ElasticSearch and Docker

  • It's a Java application, but running in Docker, you can treat it as a black box and manage it in the same way as all other Docker workloads – you don't need to install Java or configure the JDK.
  • Elasticsearch exposes a REST API for writing, reading, and searching data, and there are client wrappers for the API available in all major languages.
  • Data in Elasticsearch is stored as JSON documents, and every document can be fully indexed so that you can search for any value in any field. It's a clustered technology that can run across many nodes for scale and resilience. In Docker, you can run each node in a separate container and distribute them across your server estate to gain scale and resilience, but with the ease of deployment and management you get with Docker.
  • The same storage considerations apply to Elasticsearch as to any stateful workload – in development, you can save data inside the container, so that when the container is replaced you start with a fresh database. In test environments, you can use a Docker volume mounted to a drive folder on the host to keep persistent storage outside the container. In production, you can use a volume with a driver for an on-premises storage array or a cloud-storage service.
Adopting Container-First Solution Design

There's an official Elasticsearch image on Docker Hub, but it currently has only Linux variants. I have my own image on Docker Hub which packages Elasticsearch into a Windows Server 2019 Docker image. Running Elasticsearch in Docker is the same as starting any container; the command below exposes port 9200, which is the default port.
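A sketch of the run command (the post references a custom Windows image; the official Linux image is shown here for illustration):

docker run -d -p 9200:9200 -e "discovery.type=single-node" docker.elastic.co/elasticsearch/elasticsearch:7.17.10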

ElasticSearch on AWS

Amazon's managed Elasticsearch service offers open-source Elasticsearch APIs, managed Kibana, and integrations with Logstash and other AWS services, enabling you to securely ingest data from any source and search, analyze, and visualize it in real time.

Certificates in Kubernetes

Cluster certificates can be self-signed or based on an external PKI.
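For example, a self-signed certificate can be generated and handed to the cluster as a TLS secret (a sketch – names and paths are illustrative):

openssl req -x509 -newkey rsa:4096 -nodes -days 365 \
  -subj "/CN=myapp.example.org" -keyout tls.key -out tls.crt
kubectl create secret tls myapp-tls --cert=tls.crt --key=tls.key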

     




Need an experienced AWS/GCP/Azure Professional to help out with your Public Cloud Strategy? Set up a time with Anuj Varma.

Kubeadm – Bootstrap a Kubernetes Cluster

Building a cluster by hand takes many steps – kubeadm is the preferred way to stand one up. When you run kubeadm init, it performs the following, in order:

1. Runs pre-flight checks – pulls container images and checks for available host resources
2. Creates a Certificate Authority
3. Generates kubeconfig files
4. Generates static pod manifests for the control plane pods
5. Starts up the control plane
6. Taints the master, so that only system pods are scheduled on the master node
7. Generates a bootstrap token
8. Starts the add-on pods: DNS and kube-proxy
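A sketch of the bootstrap flow (the pod network CIDR depends on your CNI plugin; the token and hash are printed by kubeadm init):

kubeadm init --pod-network-cidr=10.244.0.0/16

# then, on each worker node:
kubeadm join <master-ip>:6443 --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash>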

     




Need an experienced AWS/GCP/Azure Professional to help out with your Public Cloud Strategy? Set up a time with Anuj Varma.

Kubernetes Networking

     

Three Guiding Principles

The Kubernetes network model rests on three rules: every pod gets its own IP address; all pods can communicate with all other pods without NAT; and the IP a pod sees for itself is the same IP every other pod sees for it.

     

Networking Use Cases

  • Within the same pod – just use localhost
  • On the same node, pod to pod – bridge networking
  • On different nodes, pod to pod – Layer 2 and 3 reachability (overlay network)
  • External access – kube-proxy (see the sketch below)
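For the external-access case, a sketch (deployment name and port are illustrative): expose a deployment outside the cluster via a NodePort service, which kube-proxy wires up on every node.

kubectl expose deployment/web --type=NodePort --port=80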



Need an experienced AWS/GCP/Azure Professional to help out with your Public Cloud Strategy? Set up a time with Anuj Varma.

Kubernetes Components


Master aka Control Plane

1. API Server
2. Cluster Store (stores state)
3. Scheduler
4. Controller Manager

kubectl (the CLI) interacts with the cluster through the API Server.

API Server – the front end of the control plane; every component (and every kubectl command) talks to the cluster through its REST API.

Cluster Store – persists cluster state (etcd).

Scheduler – assigns pods to nodes; supports pod affinity (two pods always stay on the same node) and anti-affinity (two pods can NEVER be on the same node).

Controller Manager

1. Runs the controller loops
2. Watches desired state and updates it via the API server
3. Includes controllers such as the ReplicaSet controller

Pod Operations

Services

A Service fronts a set of backend pods – e.g., an HTTP Service is the stable front end for a cluster of web servers.
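A minimal Service manifest sketch (name, selector, and ports are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080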

Docker – The Business Use Case

Do you have different technology stacks within your organization? Do these stacks come with their own monitoring tools, build tools and logging tools?

Do you worry about deploying rolling updates while keeping an existing production environment up and running?

Do you often end up with inconsistent PRODUCTION and DEVELOPMENT (or TEST) environments?

Containerization may just be one of the most important technologies of our time. (A related post discusses some in-production considerations when dockerizing your applications.)

1. Different Technology Stacks – Dockerizing an app works identically for a Node.js app, a .NET app or a J2EE app. The final Docker image runs on a specific host (Linux, Windows Server etc.), but the process of creating the image is identical for every technology stack.
2. Simplicity (of Packaging and Deploying Apps) – A single line of YAML can pull in an entire server OS, another line can build and install the most complex app (whatever your stack might be), and another line can install SSL/TLS certificates (or even create self-signed certs on the fly and install them) inside your container image. This is far simpler than the effort required to configure an app and its dependent components on a host VM.
3. Reusability of Packaged Apps (Environment Consistency) – Once created, the image can be used as a template for as many containers as you like, running in any environment. This makes it possible to avoid inconsistencies between DEV, STAGING and PROD environments: the exact same blueprint is used to create each container instance, so inconsistencies cannot creep in.
4. Licensing Benefits – No additional licensing required! Think about that – running a dozen Windows Server OSes, all for the cost of a single underlying host OS. This is simply not an option on VM or physical-box hosting.
5. Management – Think about a cluster of VMs and what it takes to auto-manage them. It is not a trivial task, and you need a whole set of tools for monitoring and restarting failed nodes in a cluster. Docker Swarm (or Kubernetes) will manage the entire life cycle of your hundreds of containers, diligently monitoring them and bringing up additional ones if necessary.
6. Guaranteed Uptime – Without any manual intervention, one can guarantee near-100% uptime by ensuring that there is always a fixed number of replicas (for each tier) up and running. Try doing that, while avoiding costs, using traditional VMs or physical hosts!
7. Simplified DevOps Pipeline – One can replace all of the current DevOps provisioning and configuration infrastructure with dockerized images pulled from a registry and deployed on a clean host.
8. Clustering Without the Overhead Costs – One can also start replacing clusters of VMs (and physical boxes) with clusters of containers, which are much more lightweight and far less resource-hungry. The days of clustered database servers, each requiring the horsepower of a small factory, are behind us. With containers, you get the same clustered database functionality and power at a fraction of the cost and a fraction of the host's resources.
9. Upgrades Without Downtime – Deploying an application update without ANY downtime is a feature of dockerized applications.
10. Host Configuration – The host can be pre-configured (e.g., domain-joined, set up as a certificate server, etc.) using a 'bootstrap' Docker image that executes configuration scripts on the host. The same solution (Docker) that is used for packaging and deploying an app can also be used to configure the underlying host.
11. Guaranteed Uptime (worth mentioning again) – Ensuring near-100% uptime for each containerized tier (the web server, the database server, middleware, messaging system etc.) is not just possible, but a built-in feature of the Docker platform.

Not Just for Running Continuous, Server-Side Tasks

Just as you can package an entire three-tier app into containers and have each tier up and running independently of the others, you can also use containers for more mundane, day-to-day tasks.

Consider these one-time tasks (as opposed to continuously running server-side tasks):

  • Isolating a specific exe suspected of carrying a virus. Simply package that exe, run it inside a container, and inspect it all you want – without compromising the underlying host.
  • Populating data in a database (using SQL scripts). That task can be executed from inside a container; the database can be outside the container, or anywhere for that matter.
  • Configuring an underlying host with the appropriate PowerShell or bash scripts and additional components.

Any utility task, any sensitive task (isolated exes), any one-off task that needs to be executed on demand – all are candidates for containerization (see the sketch below).
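A sketch of the database-seeding example (image, host, and paths are illustrative; DB_PASS is passed through from the host environment):

docker run --rm -e DB_PASS -v "$(pwd)/scripts:/scripts" mysql:8 \
  sh -c 'mysql -h db.example.org -u admin -p"$DB_PASS" < /scripts/seed.sql'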

Summary

Hopefully, this post has given you an introduction to the power of containerization technology. Database server clusters (with near-100% uptime), web server farms (without additional hardware costs), consistent development and production environments, ROLLING application upgrades without downtime, executing tasks in isolation without compromising the underlying host – these are just the tip of the containerization iceberg.

It is our observation that several cutting-edge technology centers in the U.S. are already 'all in' on containerization technologies. Others are still on the sidelines – not because they are unaware of containers, but because guidance and help are hard to find. It is an emerging field, and finding Docker-certified practitioners is a challenge in itself.

Next Steps?

Need an experienced AWS/GCP/Azure Professional to help out with your Public Cloud Strategy? Set up a time with Anuj Varma.
