Do you have different technology stacks within your organization? Do these stacks come with their own monitoring tools, build tools and logging tools?
Do you worry about deploying rolling updates while keeping an existing Production Environment up and running?
Do you often end up with inconsistent PRODUCTION and DEVELOPMENT (or TEST) environments?
Containerization may just be one of the most important technologies of our time. (A related post discusses some in-production considerations when dockerizing your applications.) Here are some of the reasons why:
- Different Technology Stacks: Dockerizing an app works identically for a Node.js app, a .NET app, or a J2EE app. The final Docker image runs on a specific host OS (Linux, Windows Server, etc.), but the process of creating the image is the same for every technology stack.
- Simplicity (of Packaging and Deploying Apps): A single line in a Dockerfile can pull in an entire server OS image, another line can build and install the most complex app (whatever your stack might be), and another line can install SSL/TLS certificates (or even create self-signed certs on the fly and install those) inside your container image. This is far simpler than the effort required to configure an app and its dependent components on a host VM. (A minimal Dockerfile sketch follows this list.)
- Reusability of Packaged Apps (Environment Consistency): Once created, the image above can be used as a template for one or many containers, running in any environment. This makes it possible to avoid inconsistencies between DEV, STAGING, and PROD environments: because the exact same blueprint is used to create every container instance, image-level drift between environments is eliminated.
- Licensing Benefits: No additional OS licensing required! Think about that: running a dozen Windows Server containers, all for the cost of a single licensed host OS. This is simply not an option with VM or physical-box hosting.
- Management: Think about a cluster of VMs and what it takes to manage it automatically. It is not a trivial task, and you need a whole set of tools for monitoring and restarting failed nodes in the cluster. Docker Swarm (or Kubernetes) will manage the entire life cycle of hundreds of containers, diligently monitoring them and bringing up additional ones if necessary.
- Guaranteed Uptime: Without any manual intervention, you can achieve near-100% uptime by ensuring that a fixed number of replicas (for each tier) is always up and running. Try doing that cost-effectively with traditional VMs or physical hosts!
- Simplified DevOps Pipeline: You can replace all of your current DevOps provisioning and configuration infrastructure with dockerized images pulled from a registry and deployed onto a clean host.
- Clustering Without the Overhead Costs: You can also start replacing clusters of VMs (and physical boxes) with clusters of containers, which are much more lightweight and far less resource-hungry. The days of clustered database servers, each requiring the horsepower of a small factory, are behind us. With containers, you get the same clustered database functionality and power at a fraction of the cost and a fraction of the host's resources.
- Upgrades Without Downtime: Rolling updates let you deploy a new application version without any downtime; the orchestrator replaces containers a few at a time while the remaining replicas keep serving traffic (see the rolling-update sketch after this list).
- Host Configuration: The host itself can be pre-configured (e.g., domain-joined, or set up as a certificate server) using a 'bootstrap' Docker image that executes configuration scripts on the host. The same solution (Docker) that is used for packaging and deploying an app can also be used to configure the underlying host.
- Guaranteed Uptime (worth mentioning again): Keeping each containerized tier (the web server, the database server, middleware, messaging system, etc.) running with near-100% uptime is not just possible; it is a built-in feature of the Docker platform.
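To make the packaging point concrete, here is a minimal Dockerfile sketch for a Node.js app. The base image, port, and file names are assumptions for illustration; a .NET or J2EE app would follow the same pattern with a different base image.

```dockerfile
# Illustrative Dockerfile; the image tag, port, and file layout are assumptions.
FROM node:18-alpine            # one line pulls in an entire OS + runtime image
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm ci --omit=dev          # one line installs the app's dependencies
COPY . .
EXPOSE 8080
CMD ["node", "server.js"]
```

Building and running it is just as terse: `docker build -t myapp:1.0 .` followed by `docker run -d -p 8080:8080 myapp:1.0`. The same two commands apply regardless of the technology stack baked into the image.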
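And here is a hedged sketch of the replica and rolling-update story using Docker Swarm. The service name, replica count, and image tags are assumptions; Kubernetes offers equivalent primitives (Deployments with rolling updates).

```bash
# Illustrative Docker Swarm commands; names and counts are assumptions.
docker swarm init

# Keep five replicas of the web tier running at all times; Swarm restarts
# any container that fails, giving the near-100% uptime described above.
docker service create --name web --replicas 5 -p 8080:8080 myapp:1.0

# Rolling upgrade with no downtime: replace one container at a time,
# pausing 10s between replacements, while the rest keep serving traffic.
docker service update --image myapp:1.1 \
  --update-parallelism 1 --update-delay 10s web
```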
Not Just for Running Continuous, Server-Side Tasks
Just as you can package an entire three-tier app into containers, with each tier up and running independently of the others, you can also use containers for more mundane, day-to-day tasks.
Consider these one-time tasks (as opposed to continuously running server-side tasks):
- Isolating a specific executable suspected of carrying a virus. Simply package that executable, run it inside a container, and inspect its behavior all you want without compromising the underlying host.
- Populating a database with data (using SQL scripts): the task can be executed from inside a container, while the database itself can be outside the container, or anywhere else for that matter.
- Configuring an underlying host with the appropriate PowerShell or bash scripts and additional components.
Any utility task, any sensitive task (isolated executables), any one-off task that needs to run on demand: all are candidates for containerization.
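A couple of hedged sketches of such one-off tasks follow; the image names, mount paths, and connection details are assumptions for illustration.

```bash
# Run a suspect executable in an isolated, network-less container that is
# removed on exit; the host filesystem is exposed only as a read-only mount.
docker run --rm -it --network none \
  -v "$PWD/suspect:/suspect:ro" alpine /bin/sh

# Populate a database from SQL scripts inside a throwaway container; the
# database itself can live outside the container, anywhere reachable.
# (Host, user, database, and the DB_PASSWORD variable are assumptions.)
docker run --rm -e PGPASSWORD="$DB_PASSWORD" \
  -v "$PWD/scripts:/scripts:ro" postgres:16 \
  psql -h db.example.com -U admin -d appdb -f /scripts/seed.sql
```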
Summary
Hopefully, this post has given you an introduction to the power of containerization technology. Database server clusters with near-100% uptime, web server farms without additional hardware costs, consistent development and production environments, rolling application upgrades without downtime, and executing tasks in isolation without compromising the underlying host: these are just the tip of the containerization iceberg.
It is our observation that several cutting-edge technology centers in the U.S. are already 'all in' on containerization technologies. Others are still on the sidelines, not because they are unaware of containers, but because guidance and help are hard to find. It is an emerging field, and finding Docker-certified practitioners is a challenge in itself.
Next Steps?
- Need help with Docker, Microservices or Kubernetes? Set up a call with Anuj (anuj.com), Technology Associates, Inc.