Minikube workflows

Martin Jensen · kubecloud · Feb 12, 2017

Minikube is a terrific tool for running a Kubernetes cluster on your local computer. It launches a single-node cluster inside a virtual machine, which makes it great for running Kubernetes locally. This post describes how to iterate fast when developing against a local Kubernetes cluster by avoiding unnecessary pushes/pulls of docker images, and shows how to mount folders from a local drive into the cluster.

The workflows are presented for MacOS using VirtualBox. Since the documentation in minikube’s official Github repository is very good, and a lot of great posts about minikube exist, this post serves as a supplement and shows how minikube’s building blocks can be combined.

Prerequisites

You will need the tools used throughout this post:

- minikube
- kubectl
- VirtualBox (the default VM driver)
- A docker client (e.g. Docker for Mac)
- git and make (for the example repository)

Starting minikube

A single command is enough to start a single-node Kubernetes cluster.

$ minikube start

A virtual machine (boot2docker) is created using the default VirtualBox VM-driver with 2 GB RAM and 2 CPUs assigned. Several custom flags are available to configure the amount of resources, the Kubernetes version, addons, the container runtime, and several other options. See minikube -h for the documentation.
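For example, to allocate more resources or pin a specific Kubernetes version (flag names as of early 2017; check minikube start -h for your version):

$ minikube start --vm-driver=virtualbox --cpus=2 --memory=4096 --kubernetes-version=v1.5.2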

The minikube ISO should now be downloaded and kubectl should be configured to point at your new cluster. You can verify that it works by running:

$ kubectl get nodes
NAME       STATUS    AGE
minikube   Ready     1m

Hello world from Kubernetes

In order to see something running in the cluster, a simple nginx container is launched below.

# Create deployment 
$ kubectl run nginx --image=nginx --port=80
# Create service
$ kubectl expose deployment nginx --type=NodePort
# Open service
$ minikube service nginx

You should see a boring nginx landing page.

Minikube’s service command opens your browser at the service’s location, which consists of <minikube-ip>:<service-NodePort>. If you need a more production-like setup instead of randomly assigned ports, you can take a look at ingress, but that's another blog post in itself.
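If you just want the URL instead of opening a browser, minikube can print it, and kubectl shows the randomly assigned NodePort (the IP and port below are examples; yours will differ):

$ minikube service nginx --url
http://192.168.99.100:31245
$ kubectl get service nginx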

Flow 1: Avoiding endless push/pull

The first awkward workflow you bump into is having to push your images to a registry (such as Docker Hub) just to pull them back inside your Kubernetes cluster. Luckily, minikube provides an elegant solution that saves time and spares your precious bandwidth. To see why it works, consider how the docker client, host, and registry interact: Docker for Mac configures the docker client (CLI) to point towards the docker host that Docker for Mac itself creates. Instead, we need to point the client towards the docker host running inside the virtual machine we just created.

To point the docker client towards minikube’s docker environment, run the command below.

$ eval $(minikube docker-env)
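To see what the command does, you can print the variables it evaluates. The output looks roughly like this (the IP and paths will differ on your machine):

$ minikube docker-env
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.100:2376"
export DOCKER_CERT_PATH="/Users/<your-user>/.minikube/certs"
export DOCKER_API_VERSION="1.23"
# Run this command to configure your shell:
# eval $(minikube docker-env)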

Now, when you build a docker image, the Kubernetes cluster is able to use it directly. I have created an example repository to make it easier to follow along.

$ git clone git@github.com:KubeCloud/minikube-workflows.git 
$ cd minikube-workflows/1-docker-env

The 1-docker-env folder contains a simple Dockerfile, an nginx configuration (two .conf files), and a simple website (a folder).

# Dockerfile 
FROM nginx:stable-alpine
COPY config/nginx.conf /etc/nginx/nginx.conf
COPY config/default.conf /etc/nginx/conf.d/default.conf
COPY website /usr/share/nginx/html/

Build the image and check that it appears among your images (in the VM). Since I am running Docker 1.13, I will be using the new syntax.

$ docker image build -t flow-1:1 . 
$ docker image ls

Create a Kubernetes deployment and a service as seen below.

$ kubectl run flow-1 --image=flow-1:1 --port=80 
$ kubectl expose deployment flow-1 --type=NodePort
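Because the tag flow-1:1 isn’t :latest, the container’s default imagePullPolicy is IfNotPresent, so Kubernetes uses the image you just built inside the VM instead of trying to pull it from a registry. You can check that both objects were created:

$ kubectl get deployment,service flow-1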

To launch it, use minikube’s service helper again.

$ minikube service flow-1

Now you can iterate without pushing to a docker registry every time. Try making a change to the website and invoke the make target below.

$ make local

The target builds a new docker image and sets that image on the existing deployment’s container(s), without pushing to or pulling from a registry. To get a unique image tag, a timestamp is used in the Makefile. When you finish testing locally, you can assign a proper tag and push it.

$ cat Makefile
.PHONY: local
REPO=flow-1
TIMESTAMP=tmp-$(shell date +%s)

local:
	@eval $$(minikube docker-env) ;\
	docker image build -t $(REPO):$(TIMESTAMP) -f Dockerfile .
	kubectl set image deployment $(REPO) *=$(REPO):$(TIMESTAMP)

Flow 2: Avoiding endless push/pull using yaml-files

Let’s clean up and remove the deployment and service we just created.

$ kubectl delete deploy,svc flow-1

Declaring your desired state in yaml files (or JSON) makes it easier to keep track of your configuration and move it between your local cluster and other environments. If you understand the commands from Flow 1, you can actually generate the yaml-files in the following way.

$ kubectl run flow-2 --image=flow-2 --port=80 --dry-run -o yaml > demo-deployment.yaml 
$ kubectl expose deployment flow-2 --type=NodePort --dry-run -o yaml > demo-svc.yaml
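The generated service file looks roughly like the following (trimmed; the exact fields depend on your kubectl version):

apiVersion: v1
kind: Service
metadata:
  labels:
    run: flow-2
  name: flow-2
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: flow-2
  type: NodePort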

In the repository, there is another folder for this flow called 2-docker-env-yaml. The Makefile contains two new targets, create and delete, which create/delete a deployment and a service from the yaml files placed in the kubernetes/deploy folder. Let’s see how it works.

$ cd ../2-docker-env-yaml 
$ make create
Sending build context to Docker daemon 27.14 kB
Step 1 : FROM nginx:stable-alpine
---> 4f481234b230
Step 2 : COPY config/nginx.conf /etc/nginx/nginx.conf
---> Using cache
---> 1050fe628f85
Step 3 : COPY config/default.conf /etc/nginx/conf.d/default.conf
---> Using cache
---> 521550503a34
Step 4 : COPY website /usr/share/nginx/html/
---> Using cache
---> 3dd0057fb1f6
Successfully built 3dd0057fb1f6
kubectl create -f kubernetes/deploy/
deployment "flow-2" created
service "flow-2" created

Now you can open the service again, make changes to your code, and use make local to update the image in the cluster.

$ minikube service flow-2 
# Make changes to the website
$ make local

When you’re done, you can call make delete to remove the deployment and service again.

Among the benefits of this workflow is that the tedious parts are abstracted away, which makes it possible to focus on the software you are trying out. Additionally, the Kubernetes environment is more similar to the production setup than a purely local setup. However, debugging inside a container isn’t as easy as debugging directly from an IDE. The workflow fits compiled languages well, unless the build process takes too long (which is a problem worth addressing in itself), since you compile a new version, build an image, and try it out. The current example uses a simple nginx image for the sake of simplicity across the flows. Interpreted languages, on the other hand, don’t need a new container if the files can be mounted.

Flow 3: Mounting local files into a pod

The previously mentioned repository contains a third folder, 3-mounting-local-files, with an example of how to use a local volume mount inside Kubernetes. To avoid hardcoding the mount path to a specific developer machine (across a team), I have created a target that sets up a symlink.

$ make mount 
Password: <enter your password>
Setting up mount as symlink in /Users/.minikube-mounts folder flow-3
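A minimal sketch of what such a mount target could do is shown below (an assumption; the repository’s actual Makefile may differ); the sudo calls are why you are asked for your password:

MOUNT_DIR=/Users/.minikube-mounts

mount:
	sudo mkdir -p $(MOUNT_DIR)
	# Assumption: the symlink points at this flow's website folder
	sudo ln -sfn $(CURDIR)/website $(MOUNT_DIR)/flow-3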

Start everything up and see it running.

$ make create 
$ minikube service flow-3

Now try to edit index.html under website, save it, and reload your browser. You should see the change right away without building anything.

Let’s understand how this is possible by looking at minikube’s mounted host folders. The website folder lives on the host OS, but it is accessed as a volume from inside the minikube cluster.

The volume, flow-3-volume, is declared in deployment.yaml with a hostPath, which specifies the path on the Kubernetes node (host) that a given pod is running on. The volumeMount, on the other hand, specifies the mountPath where flow-3-volume should be mounted inside the container (flow-3). Notice that the mountPath is the same as the one used in the Dockerfile, since this is where nginx looks for the website’s content.

apiVersion: extensions/v1beta1
kind: Deployment
spec:
  ...
  template:
    ...
    spec:
      containers:
      - image: flow-3:create
        name: flow-3
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /usr/share/nginx/html/
          name: flow-3-volume
      volumes:
      - name: flow-3-volume
        hostPath:
          path: /Users/.minikube-mounts/flow-3
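You can verify from inside the VM that the Kubernetes node actually sees the folder; you should see the website’s files (e.g. index.html):

$ minikube ssh
$ ls /Users/.minikube-mounts/flow-3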

The virtual machine (in VirtualBox) shares the /Users folder automatically, which is why the hostPath above is visible on the Kubernetes node.

Concluding remarks

We have now seen a few different flows that ease local development in a Kubernetes setup using scripts, yaml files, and volume mounts.

The build time of the docker images is especially important for Flow 2, and different strategies for designing the images exist. If you build everything inside a single container, you risk shipping a development environment in your production image, which in itself isn’t desirable and additionally leads to larger images and possible caching issues during clean builds. If you, on the other hand, build outside the container, you lose the determinism of building against explicitly specified dependencies. Splitting the build into an image of its own and moving the result into a runtime container could be a solution (think JDK vs JRE, or a Golang build vs a static binary). This furthermore simplifies building a CI/CD pipeline.
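As a sketch of that split for a hypothetical Go service (the image names and paths are only examples): compile in a throwaway build container, then copy only the binary into a small runtime image.

# Build in a container so the toolchain never ends up in the runtime image
$ docker run --rm -v "$PWD":/go/src/app -w /go/src/app -e CGO_ENABLED=0 golang:1.7 go build -o app .
# Minimal runtime image
$ cat Dockerfile
FROM alpine:3.5
COPY app /app
CMD ["/app"]
$ docker image build -t my-service:1 .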

Flow 3 fits, for instance, some web development workflows where the source code is watched for changes and the output is rebuilt continually. In this way, you can run the code in a more realistic setup with its dependencies available, while still having the benefit of quickly trying out new ideas.

That’s it for this time, and we only scratched the surface of what minikube can do.

Originally published at kubecloud.io on February 12, 2017.
