Deploying a Docker stack file as a Kubernetes workload

Overview

Recently I’ve been hosting workshops for a customer who is exploring migrating from Docker Swarm orchestration to Kubernetes orchestration. The customer is currently using Docker EE (Enterprise Edition) 2.1 and plans to continue using that platform, just leveraging Kubernetes rather than Swarm. There are a number of advantages to continuing to use Docker EE, including:

  • Pre-installed Kubernetes.
  • Group (team) and user management, including corporate LDAP integration.
  • Using the Docker UCP client bundle to configure both your Kubernetes and Docker client environment.
  • Availability of an on-premises registry (DTR) that includes advanced features such as image scanning and image promotion.

I had already conducted a workshop on deploying applications as Docker services in stack files (compose files deployed as Docker stacks), demonstrating self-healing replicated applications, service discovery and the ability to publish ports externally using the Docker ingress network.

The Docker stack file

The starting point for the previous Swarm workshop was pretty easy; we just wrote a simple Docker stack file from scratch. By “stack file”, I mean a Docker compose file intended to be used with the docker stack deploy command. This was fairly simple work for a team already familiar with running Docker containers. Basically, just specify the compose version and then add a service that specifies an image, a replication factor and the port to publish on. The hardest part was looking up the correct syntax for specifying a placement constraint. Something like this, only about 10 lines of YAML:

version: '3.3'
services:
  my-tomcat-svc:
    image: tomcat:8
    deploy:
      replicas: 1
      placement:
        constraints: [node.labels.my-env == dev]
    ports:
      - "9080:8080"

It’s worth noting that the service names in the stack file must follow DNS naming conventions if the stack is ever to be deployed as a Kubernetes workload; for instance, underscore characters will cause errors.
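For example (a hypothetical variant of our service name), the underscored form would be rejected by the compose-to-Kubernetes conversion, while the hyphenated form is fine:

```yaml
services:
  # my_tomcat_svc:   # invalid: underscores are not allowed in DNS-style names
  my-tomcat-svc:     # valid DNS-style name
    image: tomcat:8
```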

Exploring Kubernetes

Getting started with similar Kubernetes manifest files was going to be a little more difficult due to the more expressive (some people might say more complex) nature of the YAML in a Kubernetes manifest file. There are various API versions related to different API resource types, more details to specify in general and the need for two manifests to do the same thing as a stack file. We need to create a manifest for a Kubernetes Deployment and a manifest for a Kubernetes Service. Those manifests can be combined into a single YAML or JSON file, but we still need to write manifests for two separate Kubernetes API resources.
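As a sketch of that structure (the resource names here are placeholders), the two manifests combined into one file are simply separated by a YAML document separator:

```yaml
apiVersion: apps/v1beta2   # Deployment API version from the Kubernetes 1.11 / UCP 3.1 era
kind: Deployment
metadata:
  name: my-app
# ... Deployment spec ...
---
apiVersion: v1
kind: Service
metadata:
  name: my-app
# ... Service spec ...
```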
In a previous informal training session with the customer, we created “template” Kubernetes manifest files by re-directing the output of “dry run” kubectl commands to YAML files. We then edited those “template” YAML files to work with our application. The kubectl commands we executed were similar to:

kubectl run my-tomcat --image=tomcat:8 --dry-run -o yaml > example-deploy.yaml

kubectl create service nodeport my-service --tcp=9080:8080 --node-port=34808 --dry-run -o yaml > example-service.yaml

While this approach worked OK, it left us with some editing and trial-and-error to get everything working as needed.
This time we decided to try out the Docker EE feature that allows us to deploy a Docker compose file as a Kubernetes workload. In theory, that approach should create both the Kubernetes Deployment and Service resources that provide the exact same functionality as the Docker stack. In addition, we can then capture YAML manifest files to use as templates for our workshop (and for future work) by outputting the contents of the Kubernetes API resources as YAML files, for example:

kubectl get deploy DEPLOYMENT_NAME -o yaml > deployment-from-stack.yaml

kubectl get svc SERVICE_NAME -o yaml > service-from-stack.yaml

Let’s try it out

We will deploy our existing Docker stack file as a Kubernetes workload and see how things work out.
Here are our expectations:

  • A Kubernetes Deployment and / or ReplicaSet will be created.
  • One replica of a Pod using the tomcat:8 container will be created.
  • The Pod will run on nodes with the label my-env=dev.
  • A Kubernetes Service will be created to load balance over and expose the Pod(s) created by the Deployment and ReplicaSet.
  • We can reach the Tomcat app by accessing any node in the cluster on port 9080 (the published port from the Docker stack file).

Set up the environment

Let’s get our environment set up.

  • You’ll need to use Docker EE with a UCP version that includes Kubernetes. I’m using Docker EE 2.1 with UCP v3.1.3.
  • Label one or more worker nodes with the Docker label my-env=dev.
  • Label one or more worker nodes with the Kubernetes label my-env=dev.
  • I’m using a Docker UCP admin account to avoid getting into the details of Kubernetes RBAC in this blog post.
    • You’ll need to use an account that has permissions to create and delete Deployments and Services in our target Kubernetes namespace.
  • We’re going to work from a client workstation, not directly on a Docker node. I’m using a Windows PowerShell terminal, but a Linux or Mac terminal will work basically the same way.
    • You will need the kubectl binary downloaded and installed in the PATH on your workstation.
    • You will need the Docker UCP bundle for your account downloaded on your workstation and you will need to set it up in the terminal window you will be using.
    • We are going to create a namespace named workshop for our workshop work. You can do this from the Docker UCP GUI or from the CLI client:
      kubectl create ns workshop
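The two node-labeling steps above can be sketched from the CLI as follows (worker-01 is a placeholder node name; adjust for your cluster):

```shell
# Docker (Swarm) label, matched by the stack file's placement constraint
docker node update --label-add my-env=dev worker-01

# Kubernetes label, matched by nodeAffinity / nodeSelector in manifests
kubectl label node worker-01 my-env=dev
```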

Deploy our stack file as a Kubernetes workload

We will do this from the Docker UCP GUI:

  • Navigate to the Stacks -> Create Stack panel.

[Screenshot: create-stack-1]

1. Type in a name for your stack in the Name text field, for instance my-kub-stack.
2. Select KUBERNETES WORKLOADS as the Orchestrator Mode.
3. Select the target Namespace. We will select the workshop namespace that we created.
4. Select COMPOSE FILE as the Application File Mode.
5. Click the Next button.

[Screenshot: create-stack-2]

6. Paste or upload your stack file into the Add Application File panel.
7. Click the Create button.

[Screenshot: create-stack-3]

Explore what was created in Kubernetes

Let’s see what was created in Kubernetes:

PS demo>kubectl -n workshop get all
NAME                                READY     STATUS    RESTARTS   AGE
pod/my-tomcat-svc-8954d9668-mhtbr   1/1       Running   0          1m

NAME                    TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)     AGE
service/my-tomcat-svc   ClusterIP   None         <none>        55555/TCP   1m

NAME                            DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/my-tomcat-svc   1         1         1            1           1m

NAME                                      DESIRED   CURRENT   READY     AGE
replicaset.apps/my-tomcat-svc-8954d9668   1         1         1         1m

OK, we can see that a Deployment, ReplicaSet, Pod and Service were created, but something does not look quite right. The Kubernetes Service that was created is a ClusterIP type Service. This type of Service is only reachable from within the cluster, meaning that a port will not be exposed on each node for this Service. Also, the Service does not use port 9080 as we specified in our stack file. We can dig into the reason for this in another blog post, but it is probably related to port 9080 not being in the allowed NodePort range in Kubernetes. In the case of Docker EE, that range is 32768-35535 by default. We’ll change the port and Service type later when we edit the Service manifest file so that our Service can be accessed from outside the cluster.

Let’s look at the Deployment and the Service. Since we plan to use the generated YAML as manifest file templates for our workshop, we’ll redirect the output to files in YAML format:

PS demo>kubectl -n workshop get deploy my-tomcat-svc -o yaml --export > deployment-from-stack.yaml
PS demo>kubectl -n workshop get svc my-tomcat-svc -o yaml --export > service-from-stack.yaml

Note: we are using the --export flag to reduce the amount of run-time information that is included in the output.

View the files in a text editor. Wow! That is a lot of YAML compared to the original Docker stack file. Some sections exist because the Docker compose-to-Kubernetes tooling needs to track what it created and avoid naming collisions with other Kubernetes API resources. We can delete a lot of those lines, along with any run-time information, when we edit the generated files to create basic template files. We also note that our Docker compose placement constraint label is not mentioned anywhere in the Deployment manifest.

Generated Deployment YAML file:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    com.docker.stack.expected-generation: "1"
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: null
  generation: 1
  labels:
    com.docker.service.id: my-kub-stack-my-tomcat-svc
    com.docker.service.name: my-tomcat-svc
    com.docker.stack.namespace: my-kub-stack
  name: my-tomcat-svc
  ownerReferences:
  - apiVersion: compose.docker.com/v1beta2
    blockOwnerDeletion: true
    controller: true
    kind: Stack
    name: my-kub-stack
    uid: a1c89ae7-3896-11e9-bae3-9ae4c740a410
  selfLink: /apis/extensions/v1beta1/namespaces/workshop/deployments/my-tomcat-svc
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 3
  selector:
    matchLabels:
      com.docker.service.id: my-kub-stack-my-tomcat-svc
      com.docker.service.name: my-tomcat-svc
      com.docker.stack.namespace: my-kub-stack
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        com.docker.service.id: my-kub-stack-my-tomcat-svc
        com.docker.service.name: my-tomcat-svc
        com.docker.stack.namespace: my-kub-stack
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: beta.kubernetes.io/os
                operator: In
                values:
                - linux
              - key: beta.kubernetes.io/arch
                operator: In
                values:
                - amd64
      containers:
      - image: tomcat:8
        imagePullPolicy: IfNotPresent
        name: my-tomcat-svc
        ports:
        - containerPort: 8080
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status: {}

Generated Service yaml file:

apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    com.docker.service.id: my-kub-stack-my-tomcat-svc
    com.docker.service.name: my-tomcat-svc
    com.docker.stack.namespace: my-kub-stack
  name: my-tomcat-svc
  ownerReferences:
  - apiVersion: compose.docker.com/v1beta2
    blockOwnerDeletion: true
    controller: true
    kind: Stack
    name: my-kub-stack
    uid: a1c89ae7-3896-11e9-bae3-9ae4c740a410
  selfLink: /api/v1/namespaces/workshop/services/my-tomcat-svc
spec:
  clusterIP: None
  ports:
  - name: headless
    port: 55555
    protocol: TCP
    targetPort: 55555
  selector:
    com.docker.service.id: my-kub-stack-my-tomcat-svc
    com.docker.service.name: my-tomcat-svc
    com.docker.stack.namespace: my-kub-stack
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}

Manifest files after cleanup and changing to NodePort type Service

Let’s remove the content not required for basic Kubernetes functionality and change the Service manifest to use a NodePort type Service. I left the affinity section in the Deployment manifest rather than switching to the simpler nodeSelector approach, since affinity is more expressive and will be useful when I use the file as a template going forward. I also specified the namespace as part of the manifest files.
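For comparison, the equivalent nodeSelector form (the simpler approach mentioned above) would replace the entire affinity section of the Pod template spec with just:

```yaml
  template:
    spec:
      nodeSelector:
        my-env: dev
```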

Deployment YAML file:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    project-name: demo-project
  name: my-tomcat-svc
  namespace: workshop
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-tomcat-app
  template:
    metadata:
      labels:
        app: my-tomcat-app
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: my-env
                operator: In
                values:
                - dev
      containers:
      - image: tomcat:8
        name: my-tomcat
        ports:
        - containerPort: 8080
          protocol: TCP

Service YAML file:

apiVersion: v1
kind: Service
metadata:
  labels:
    project-name: demo-project 
  name: my-tomcat-svc
  namespace: workshop
spec:
  type: NodePort
  ports:
  - name: 9080-8080
    nodePort: 34808
    port: 9080
    protocol: TCP
    targetPort: 8080
  selector:
    app: my-tomcat-app

Exposing Service ports on our nodes

We need to use a NodePort or LoadBalancer Service type to allow access to our Tomcat Pod from outside the Kubernetes cluster. We will use a NodePort type Service in this case since that will work even if our cluster is not running in a cloud environment like Azure, AWS, or GCP. From a Kubernetes perspective we could just delete the existing Service and create a new Service using our edited Service manifest file that uses a NodePort Service type and a port in the “legal” NodePort range. Since we also edited the selector in our Service spec, we will also delete the Deployment and create a new Deployment from our edited Deployment manifest file that uses a matching label in the Pod spec. That should be pretty simple.
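For reference, in a cloud environment the same exposure could be achieved with a LoadBalancer type Service instead; a minimal sketch of the spec section (the cloud provider would then allocate the external address):

```yaml
spec:
  type: LoadBalancer
  ports:
  - port: 9080
    targetPort: 8080
  selector:
    app: my-tomcat-app
```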

Technically we could try to edit the existing Service and Deployment in place (yes, you can edit “live” objects in Kubernetes). However, some fields of live API objects are immutable and can’t be edited (the details are outside the scope of this blog post), and besides that, we want to keep the state of our Service and Deployment in sync with the contents of our manifest files.
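For the record, live editing would look something like this (shown for reference only; we won’t use it here):

```shell
# Open the live Service object in your default editor
kubectl -n workshop edit svc my-tomcat-svc

# Or patch a single mutable field non-interactively, e.g. adding a label
kubectl -n workshop patch svc my-tomcat-svc -p '{"metadata":{"labels":{"tier":"web"}}}'
```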

Delete the existing Service and Deployment

Delete the Service from command line:

PS demo>kubectl -n workshop delete svc my-tomcat-svc
service "my-tomcat-svc" deleted

The response indicates success. Let’s check to be sure…

PS demo>kubectl -n workshop get svc
NAME           TYPE       CLUSTER-IP  EXTERNAL-IP  PORT(S)    AGE
my-tomcat-svc  ClusterIP  None        <none>       55555/TCP  1m

Hmmm, our Service is still there. Well, after closer observation it is actually a new Service, based on the Age value. This is very unexpected behavior. We would expect this if we were trying to delete the Pod controlled by the Deployment and ReplicaSet, but we should be able to delete a Service. Let’s try to force the deletion and then check for the Service again:

PS demo>kubectl -n workshop delete svc my-tomcat-svc --force
service "my-tomcat-svc" deleted
PS demo>kubectl -n workshop get svc
NAME           TYPE       CLUSTER-IP  EXTERNAL-IP  PORT(S)     AGE
my-tomcat-svc  ClusterIP  None        <none>       55555/TCP   13s

Nope, still no luck…

If we try to delete the Deployment, we have the same problem. The Deployment and ReplicaSet are re-created, along with the Pod they control. Maybe there is more going on at a Docker Swarm / UCP level.
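The ownerReferences blocks in the generated YAML hint at the culprit: both resources are owned by a Stack object from the compose.docker.com/v1beta2 API, whose controller reconciles them back into existence. Assuming your UCP version exposes that custom resource through the Kubernetes API, you can list it directly:

```shell
kubectl -n workshop get stacks.compose.docker.com
```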

Check the UCP GUI for “Kubernetes stacks”

Let’s go back to the Docker UCP GUI and check to see if there was anything other than a Kubernetes Deployment, Service and the related Pod(s) created. Looking at the Stacks panel in the GUI, we see that a Kubernetes Workloads type stack was created.

[Screenshot: delete-stack-1]

Let’s try deleting that Stack from the GUI:

[Screenshot: delete-stack-2]

OK, our stack is deleted, let’s check using kubectl:

PS demo>kubectl -n workshop get all
No resources found.

Success! Our stubborn Kubernetes API resources are gone. Let’s get back to our workshop agenda. Create the Deployment and Service again, then check to see if they were created:

PS demo>kubectl apply -f tomcat-deployment.yaml -f tomcat-service.yaml
deployment.extensions "my-tomcat-svc" created
service "my-tomcat-svc" created
PS demo>kubectl -n workshop get all
NAME                                READY  STATUS   RESTARTS  AGE
pod/my-tomcat-svc-86d677d985-mml5r  1/1    Running  0         9s

NAME                   TYPE      CLUSTER-IP    EXTERNAL-IP  PORT(S)         AGE
service/my-tomcat-svc  NodePort  10.96.170.42  <none>       9080:34808/TCP  9s

NAME                           DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
deployment.apps/my-tomcat-svc  1        1        1           1          9s

NAME                                      DESIRED  CURRENT  READY  AGE
replicaset.apps/my-tomcat-svc-86d677d985  1        1        1      9s

Looks promising! Let’s see if we can access our Tomcat app externally by connecting to a node using the node port.
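Equivalently, from the command line (NODE_IP is a placeholder for any node’s address):

```shell
curl -s http://NODE_IP:34808/ | head -n 5
```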

[Screenshot: connect-to-tomcat]

Success!

Conclusion

We were generally able to deploy a Docker stack file as a Kubernetes workload, but there was some unexpected behavior:

  • Our originally published port from the Docker stack file was not externally published by the Kubernetes Service. Maybe if we had not specified a specific external port in the Docker stack file, a random port in the legal range would have been selected and used for our Kubernetes Service.
  • Our original deployment placement constraint was not properly included in the Kubernetes Deployment. This seems to work correctly with some placement labels such as hostnames, so your mileage may vary.
  • We could not directly delete the Kubernetes Service or Deployment using the Kubernetes API.

Is this a problem? Well, yes and no.

  • If you just need “starter” manifest file examples, no big problem as long as you don’t mind doing some editing, including changing the Service type. I’m not sure this is really any easier than using the kubectl create --dry-run approach, however.
  • If you don’t need placement constraints or the ability to predictably expose your Service externally to the cluster (without editing the Kubernetes API resources or manifests), the behavior may be OK.
  • If you are planning to use the generated Kubernetes API resources or manifests with some kind of API automation, the behavior could cause unanticipated problems or extra steps to delete Services and modify manifests.
