Building Images in a Heterogeneous Cluster

Recently I was troubleshooting a customer problem in their on-premises cluster, but I was not sure where the problem lay. So I switched over to using my colleague's Docker Enterprise demo cluster running in Azure. This heterogeneous cluster consists of 1 Universal Control Plane (UCP) manager, 1 Docker Trusted Registry (DTR) node, 2 Windows workers, and 1 Linux worker.

[Figure: Simple demo cluster]

I was attempting to reproduce my customer's problem. However, what should have been easy turned into a problem of its own; or else I wouldn't be writing about it. I could not even get to my customer's problem until I resolved an issue with simply building a Linux image against a heterogeneous (Windows and Linux workers) cluster. At the time it felt silly and frustrating all at once. All I could do was wring my hands and groan.

I had downloaded my client bundle and sourced it in my bash shell.

$ source env.sh

The next thing I needed was to build the Docker image from my custom Dockerfile. The Dockerfile was based on nginx and loaded a custom nginx.conf into the image.
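For reference, the Dockerfile itself was tiny. Reconstructed from the build steps shown below, it looked essentially like this:

FROM nginx:1.15.2
EXPOSE 8443
COPY nginx.conf /etc/nginx/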

$ cd ~/my-app
$ docker build -t my-app:1.0 .
Sending build context to Docker daemon  4.096kB
worker-win-2: Step 1/3 : FROM nginx:1.15.2 
worker-win-2: Pulling from library/nginx
worker-win-2: 
Failed to build image: no matching manifest for unknown in the manifest list entries

OK, based on the last line of the log output it is not obvious what the issue is. However, if you look at the machine name the build command was sent to, the problem becomes quite obvious: I cannot build a Linux-based image on a Windows machine. But how do I specify the target operating system on the command line?

I knew my friend Chuck had already encountered this problem, so this is what he told me to do: add the option --build-arg 'constraint:ostype==linux' to my build command.

$ cd ~/my-app
$ docker build --build-arg 'constraint:ostype==linux' -t my-app .
Sending build context to Docker daemon  4.096kB
worker-linux-1: Step 1/3 : FROM nginx:1.15.2
worker-linux-1: Pulling from library/nginx
worker-linux-1: Pull complete 
worker-linux-1: Digest: sha256:d85914d547a6c92faa39ce7058bd7529baacab7e0cd4255442b04577c4d1f424
worker-linux-1: Status: Downloaded newer image for nginx:1.15.2
worker-linux-1:  ---> c82521676580
worker-linux-1: Step 2/3 : EXPOSE 8443 
worker-linux-1:  ---> Running in 88e99ace1e12
worker-linux-1: Removing intermediate container 88e99ace1e12
worker-linux-1:  ---> bd98a77c3b6b
worker-linux-1: Step 3/3 : COPY nginx.conf /etc/nginx/ 
worker-linux-1:  ---> 62b9f978af24
worker-linux-1: Successfully built 62b9f978af24
worker-linux-1: Successfully tagged my-app:latest

That’s it, folks. Plain and simple.

$ docker build --build-arg 'constraint:ostype==linux' -t my-app .
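If your goal is the opposite, the same constraint can target the Windows workers (assuming a Windows-based Dockerfile; the image tag here is made up):

$ docker build --build-arg 'constraint:ostype==windows' -t my-win-app .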

In a heterogeneous cluster my builds are now targeting Linux machines and not Windows, and as shown above you can flip ostype to windows if that is your goal. Good luck, and contact us at https://capstonec.com/about

Mark Miller
Solutions Architect
Docker Accredited Consultant

Help! I need to change the pod CIDR in my Kubernetes cluster

Your Docker EE Kubernetes cluster has been working great for months. The DevOps team is fully committed to deploying critical applications as Kubernetes workloads using their pipeline, and there are several production applications already deployed in your Kubernetes cluster.

But today the DevOps team tells you something is wrong; they can't reach a group of internal corporate servers from Kubernetes pods, even though they can reach those same servers from basic Docker containers and Swarm services. You're sure it's just another firewall misconfiguration, and you enlist the help of your network team to fix it. After several hours of troubleshooting, you realize the problem: your cluster's pod CIDR (Classless Inter-Domain Routing) range overlaps the address range those servers use.
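A quick way to see the clash is to look at the IP addresses your pods are actually using and compare them against the servers' subnet (a generic check; addresses will vary by cluster):

$ kubectl get pods --all-namespaces -o wide

The IP column shows each pod's address drawn from the pod CIDR range.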

Resistance is futile; management tells you that the server IP addresses can’t be changed, so you must change the CIDR range for your Kubernetes cluster. You do a little Internet surfing and quickly figure out that this is not considered an easy task. Worse yet, most of the advice is for Kubernetes clusters installed using tools like kubeadm or kops, while your cluster is installed under Docker EE UCP.

Relax! In this blog post, I'm going to walk you through changing the pod CIDR range in Kubernetes running under Docker EE. There will be some disruption when the existing Kubernetes pods are restarted to use IP addresses from the new CIDR range, but it should be minimal if your applications use a replicated design.

Continue reading

32-bit Apps in a 64-bit Docker Container

I started my career in December of 1989 at a company named Planning Research Corporation, which contracted a considerable amount of work with the Department of Defense. I spent one year working in Fortran 77. The next 6 years were far more interesting to me as I dove into the world of ANSI C programming using the Kernighan & Ritchie bible. I still have my copy on a shelf.

Our systems ran on 3 different Unix operating systems. We managed Makefiles that targeted SunOS, DEC Ultrix, and IBM AIX platforms. At times this was quite challenging. However, everything in this environment was 32-bit architecture; but what did that matter to me at the time? 64-bit processors didn't come along for many more years.

Continue reading

SSL Options with Kubernetes – Part 2

In the first post in this series, SSL Options with Kubernetes – Part 1, we saw how to use the Kubernetes LoadBalancer service type to terminate SSL for your application deployed on a Kubernetes cluster in AWS. In this post, we will see how this can be done for a Kubernetes cluster in Azure.

In general, Kubernetes objects are portable across the various types of infrastructure underlying the cluster, i.e. public cloud, private cloud, virtualized, bare metal, etc. However, some objects are implemented through the Kubernetes concept of Cloud Providers, and the LoadBalancer service type is one of these. AWS, Azure, and GCP (as well as vSphere, OpenStack, and others) each implement the load balancer service using the existing load balancer(s) their cloud provides. As such, each implementation is different, and those differences are accounted for in the annotations on the Service object. For example, here is the specification we used for our service in the previous post.
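The full specification is in that post; as an illustration of the kind of annotations involved, here is a rough sketch (not the exact spec) of a LoadBalancer Service terminating SSL on an AWS load balancer, with hypothetical names and certificate ARN:

apiVersion: v1
kind: Service
metadata:
  name: my-app
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-east-1:111122223333:certificate/example
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - port: 443
    targetPort: 80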

Continue reading

How to securely deploy Docker EE on the AWS Cloud

Overview

This reference deployment guide provides step-by-step instructions for deploying Docker Enterprise Edition on the Amazon Web Services (AWS) Cloud. The automation uses the Docker Certified Infrastructure (DCI) template, which is based on Terraform, to launch, configure, and run the AWS compute, network, storage, and other services required to deploy a specific workload on AWS. The DCI template uses Ansible playbooks to configure the Docker Enterprise cluster environment.
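For orientation, a DCI-style deployment is driven by ordinary Terraform commands, with the Ansible playbooks invoked by the tooling afterwards. A rough sketch with placeholder file names (the actual variable files and layout come from the DCI bundle):

$ terraform init
$ terraform plan -var-file=terraform.tfvars
$ terraform apply -var-file=terraform.tfvars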

Continue reading

SSL Options with Kubernetes – Part 1

In this post (and future posts) we will continue to look into questions our clients have asked about using Docker Enterprise that prompted us to do some further research and investigation. Here we are going to look into the options for enabling secure communications with our applications running under Kubernetes container orchestration on a Docker Enterprise cluster.

The LoadBalancer service type in Kubernetes is available if you are using one of the major public clouds (AWS, Azure, or GCP) via their respective cloud provider implementations. An Ingress resource is available on any Kubernetes cluster, both on-premises and in the cloud. Both LoadBalancer and Ingress provide the capability to terminate SSL traffic. In this post, we will show how this is accomplished with an AWS LoadBalancer service.
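As a preview of the Ingress alternative, TLS termination is declared directly on the resource. A minimal sketch, with hypothetical host, secret, and service names (older clusters, including Docker EE's Kubernetes of this era, use the extensions/v1beta1 apiVersion with a slightly different backend syntax):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
spec:
  tls:
  - hosts:
    - my-app.example.com
    secretName: my-app-tls
  rules:
  - host: my-app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app
            port:
              number: 80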

Continue reading

The Power in a Name

My full name is Mark Allen Miller. You can find my profile on LinkedIn under my full name https://www.linkedin.com/in/markallenmiller/. I went to college with two other Mark Millers, and one of them also had the same middle initial as me, so my name is not exactly unique. My dad's name is Siegfried Miller. At the age of 18, because he could "change the world", he changed his last name from Mueller to Miller, and yep, he doesn't have a middle name. My grandfather's name is Karl Mueller. His Austrian surname, prior to immigrating to the US in 1950, was Müller with an umlaut, which is a mark ( ¨ ) used over a vowel to indicate a different vowel quality. Interesting trivia, you might say, but what does this have to do with Docker?

Well, Docker originally had the name dotCloud. According to Wikipedia, "Docker represents an evolution of dotCloud's proprietary technology, which is itself built on earlier open-source projects such as Cloudlets." I had never even heard of Cloudlets until I wrote this blog.

Docker containers have names too. These names give us humans something a little more interesting to work with than a typical container ID such as 648f7f486b24. The name of a container can be used to identify a running instance of an image, and it can also be used in most commands in place of the container ID.
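For example, you can assign a name at run time and then use it in place of the container ID (the name web is arbitrary):

$ docker run -d --name web nginx
$ docker logs web
$ docker stop web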

Continue reading

Kubernetes tolerations working together with Docker UCP scheduler restrictions

In this blog post we'll take a look at how the scheduler controls in Docker UCP interact with Kubernetes taints and tolerations. Both are used to control which workloads are allowed to run on manager and DTR (Docker Trusted Registry) nodes. Docker EE UCP manager nodes are also Kubernetes master nodes, and in production systems it is important to restrict what runs on the manager (master) and DTR nodes. We'll walk through deploying a Kubernetes workload on every node in a Docker EE cluster.
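As a preview, any pod that should be allowed onto a manager node has to tolerate the taint UCP places there. A minimal sketch of such a toleration in a pod spec, assuming UCP's usual manager taint key (verify yours with kubectl describe node <manager-node>):

tolerations:
- key: "com.docker.ucp.manager"
  operator: "Exists"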

Continue reading

Deploying a Docker stack file as a Kubernetes workload

Overview

Recently I've been hosting workshops for a customer who is exploring migrating from Docker Swarm orchestration to Kubernetes orchestration. The customer is currently using Docker EE (Enterprise Edition) 2.1 and plans to continue using that platform, just leveraging Kubernetes rather than Swarm. There are a number of advantages to continuing to use Docker EE, including:

  • Pre-installed Kubernetes.
  • Group (team) and user management, including corporate LDAP integration.
  • Using the Docker UCP client bundle to configure both your Kubernetes and Docker client environments (see the sketch after this list).
  • Availability of an on-premises registry (DTR) that includes advanced features such as image scanning and image promotion.
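For instance, the client bundle point above means a single download configures both CLIs against the cluster (standard commands; output omitted):

$ source env.sh
$ docker node ls
$ kubectl get nodes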

I had already conducted a workshop on deploying applications as Docker services in stack files (compose files deployed as Docker stacks), demonstrating self-healing replicated applications, service discovery and the ability to publish ports externally using the Docker ingress network.

Continue reading

Using a Private Registry in Kubernetes

Docker Trusted Registry (DTR) in a Docker Enterprise Edition (EE) cluster allows users to create a private image repository for their own use. They may want to do this when they want to use the cluster for their work but can't or don't want to use their own system, or when they're not yet ready to share their images with others. However, using a private image repository in a Kubernetes deployment requires some additional steps. In this post, I will show you how to set up the repository and use it in your deployment.
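As a preview, the additional steps center on registry credentials: you create a Docker registry secret and reference it from your deployment's pod template. A minimal sketch, with a hypothetical DTR hostname and user:

$ kubectl create secret docker-registry dtr-creds \
    --docker-server=dtr.example.com \
    --docker-username=jdoe \
    --docker-password='<password>'

And in the pod template spec:

spec:
  imagePullSecrets:
  - name: dtr-creds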
Continue reading