Minikube Start Failure – Streaming server stopped, cannot assign requested address.

My Problem

I was attempting to downgrade my minikube kubernetes version to match an EKS cluster I had running in AWS.

This should have been fairly simple:

sudo minikube delete
minikube start --vm-driver=none --kubernetes-version 1.14.9

Unfortunately, it failed! Minikube would pause on starting Kubernetes for about 4 minutes, and then fail. The kubelet was not coming up for some reason. The output was huge, but this caught my eye:

Streaming server stopped unexpectedly: listen tcp … bind: cannot assign requested address

I spent about 2 hours going back and forth, and even tried rebooting my laptop and starting a cluster on the current/new version again (which had been working), all to no avail.

The Solution

Eventually, I saw a post which suggested I had networking problems, and from that point I worked out that my /etc/hosts file was messed up.  This line was commented out from when I was toying around with some DNS name faking on another project:

#127.0.0.1 localhost

So, localhost wasn’t defined and weird things were happening (obviously). After un-commenting that line, minikube started successfully.
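
For reference, here is roughly what the restored loopback entries look like on a typical Debian/Ubuntu box; your file will likely have additional host entries as well:

127.0.0.1       localhost
::1             localhost ip6-localhost ip6-loopback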

I’m sure this error can manifest from other networking issues as well; hopefully this saves you some time or points you in the right direction at least.

Minikube helm list – error forwarding port, socat not found.

I was trying to downgrade my minikube cluster to have it match my cloud EKS cluster version-wise. I had various issues in this process, but got everything working after a while… except helm.

So, I could use kubectl to list pods, I had done a helm init to install tiller (or enabled the minikube tiller addon), and everything was working from that angle.
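
For context, the tiller setup amounted to something like the following (Helm 2 style; if you go the addon route instead, check minikube addons list for the exact addon name on your version):

# Install tiller into the cluster (Helm 2)
helm init

# Confirm the tiller pod is running before trying helm commands
kubectl get pods -n kube-system | grep tiller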

Doing a “helm list” was giving an interesting error though:

$ helm list
E0115 15:11:56.503124   32232 portforward.go:391] an error occurred forwarding 33511 -> 44134: error forwarding port 44134 to pod 9d0d87a8b6d37fc96b7947d1b21c273c3e9dd63207253570f0694ee2db91c545, uid : unable to do port forwarding: socat not found.

After a while, I found this GitHub issue: https://github.com/helm/helm/issues/1371, which says to install socat (which would seem obvious from the error message; but I don’t like to be hasty).

So, I ran:

sudo apt-get install socat -y

and then helm started working like a charm =). Hope that helps someone else too!
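
If you want to double-check before retrying, something like this confirms socat is on the PATH and that the port-forward now works:

# socat prints its version information if it is installed
socat -V

# Retry the command that was failing
helm list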

Minikube ImagePullBackOff – Local Docker Image

Background Context

Earlier today I was beginning to port our Presto cluster into kubernetes.  So, the first thing I did was containerize Presto and try to run it in a kubernetes deployment in minikube (locally).

I’m fairly new to minikube.  So far, I’m running it with vm-driver=None so that it uses the local docker instance rather than a virtualbox VM/etc.

The Error

So, I got my docker image building well and tested it within docker itself.  That all worked great.  Then I wrote my kubernetes deployment and ran it using the image… but unfortunately, it came up with the pod saying Error: ImagePullBackOff.

I went down a rabbit hole for a while after this because many posts talk about how to enable your minikube to have access to your local docker repo.   But when you’re running vm-driver=None, you are literally running in your local docker – so it should already have access to everything.

The actual error is: “Failed to pull image “qaas-presto:latest”: rpc error: code = Unknown desc = Error response from daemon: pull access denied for qaas-presto, repository does not exist or may require ‘docker login’: denied: requested access to the resource is denied”.

So, the issue is that it’s trying to do a pull and it can’t find the image… but it shouldn’t need to pull because the image already exists locally as it was built locally.
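
A quick sanity check in this situation is to confirm the image really does exist in the local Docker daemon (qaas-presto is my image name; substitute your own):

# The image was built locally, so it should show up here
docker images | grep qaas-presto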

Workaround

I found the workaround in this GitHub issue: https://github.com/kubernetes/minikube/issues/2575. Basically, in your deployment/pod spec/whatever, you just set:

imagePullPolicy: Never

This stops it from trying to pull the image, so it never fails to find it. It just assumes the image is present, which it is, and uses it and moves on. You may not want to deploy your config to production with this setting, but you can always template it out with helm or something, so it’s a viable workaround.
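
For illustration, here is a minimal sketch of where that setting lives in a deployment spec; the names here are placeholders rather than my actual Presto config:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: qaas-presto
spec:
  replicas: 1
  selector:
    matchLabels:
      app: qaas-presto
  template:
    metadata:
      labels:
        app: qaas-presto
    spec:
      containers:
        - name: qaas-presto
          image: qaas-presto:latest
          # Never pull; use the locally built image as-is.
          imagePullPolicy: Never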


Minikube – Setup Kubernetes UI

If you have not yet installed Minikube, refer to this blog first: https://coding-stream-of-consciousness.com/2018/11/08/installing-minikube-on-windows-10/.

Assuming you have Minikube running, open another command prompt (I used an administrator one).  Then run:

kubectl proxy
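
If the proxy starts correctly, you should see a line like the following (8001 is the default port):

Starting to serve on 127.0.0.1:8001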

Once that is up, you can go to this link: http://localhost:8001/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy/#!/node?namespace=default and the kubernetes UI should be available.

The Kubernetes documentation may recommend a different link that does not work. I believe this is probably because Minikube is out of date compared to the current Kubernetes releases; but maybe it is just a documentation error.

For the record in case it changes in the future, the link from the Kubernetes documentation is: http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/.

Installing Minikube on Windows 10

Installing Minikube on Windows 10 was not as straightforward as I had expected.  I tried it at home and gave up originally (instead using https://labs.play-with-k8s.com/).

I wanted to get it set up properly for a new job though, so I worked through the issues. Please note, in case your issues are different from mine, that the links referenced throughout this post are extremely enlightening.

Much of the online documentation I found was outdated. The actual GitHub repository was fine though, so I recommend using the instructions there: https://github.com/kubernetes/minikube. Mine are based on those and the previously noted links.

Before We Start – If You’re Already in a Bad State

If you’ve already started installing Minikube and failed, or if you have been hacking at this with my instructions and are stuck, you may end up needing to delete the C:\Users\<user-name>\.minikube and C:\Users\<user-name>\.kube directories to clean things up. I had to do this as I messed up the virtual switch set-up originally. Don’t do this unless you do get stuck though.

You can’t do this folder clean-up if the Hyper-V minikube VM is still running (you can see that in the Hyper-V manager). If you try to delete them while it’s running, it will partially delete files, and then you’re in a state where you can’t even stop the VM on purpose! To fix that, you need to kill the processes related to the VM and set it to not automatically start up in the Hyper-V manager. Then you can remove the VM and try again.

Some detailed information on how to kill a hung Hyper-V VM is here: http://woshub.com/how-to-stop-a-hung-virtual-machine-on-hyper-v-2016/. This was painful to fix.

The Minikube start command will hang if you’re in these bad states, so you really do need to delete those directories to get it set up cleanly.
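
Once the VM is actually stopped, the directory clean-up from an administrator command prompt looks roughly like this (it assumes the standard profile location and permanently deletes your local cluster state, so only do it when you really are stuck):

rmdir /S /Q "%USERPROFILE%\.minikube"
rmdir /S /Q "%USERPROFILE%\.kube"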

Warning: Shut Down Issues

I found after I had this all working that (1) my PC was having trouble restarting – it would never stop, and (2) if I came back and tried to do “minikube stop”, it would hang forever. It turns out the current version of minikube has issues noted here: https://github.com/kubernetes/minikube/issues/2914. So, to shut it down you need to do “minikube ssh” and then “sudo poweroff”. Even trying to manage it from Hyper-V will not work properly. They recommend downgrading to v0.27; the version I saw the issue with is v0.30. I haven’t tried the downgrade yet. So, these instructions will get it working with v0.30, but you will potentially have this shut-down issue.
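
In other words, the working shut-down sequence is just this (the second command runs inside the VM shell that minikube ssh drops you into):

minikube ssh
sudo poweroff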

For now, I’m personally just going to stay with v0.30, but I told Hyper-V to NOT automatically start the minikube VM when my PC starts up. If I use this too often and they don’t fix the issue, I may downgrade at a later date.

Installation Steps

Here is the full set of steps I had to use to get Minikube correctly running on my Windows 10 desktop:

  • Enable the “Hyper-V” Windows feature (I also enabled Containers and Windows Hypervisor Platform; they probably aren’t needed though).
  • Go into Hyper-V manager (search your start menu) and:
    • Click Virtual Switch Manager
    • Create a new virtual switch of type “External”.
    • Name it “External”.
    • Set it to use the “External network” and pick the NIC you commonly use.
    • Press OK.
  • Get an administrative command prompt open.
  • Install https://chocolatey.org/ if you don’t already use it; it’s like yum or apt-get in Linux.
  • choco install minikube
  • choco install kubernetes-cli
  • minikube start --vm-driver "hyperv" --hyperv-virtual-switch "External" --v=7
    • Note that we’re telling it to use Hyper-V and the switch we created.
    • We’re also setting a very verbose log level.

If you have issues after this, you may want to clean up and try again with the information I provided before the steps.
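
Before moving on to the validation below, a quick way to confirm the cluster actually came up is to run these standard commands (exact output varies by version):

minikube status
kubectl get nodes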

Validation

Let’s try to launch something to make sure it’s working (with instructions copied from the GitHub link).

  • Launch the “hello-minikube” sample image as a pod.
    • kubectl run hello-minikube --image=k8s.gcr.io/echoserver:1.4 --port=8080
  • Expose it as a service.
    • kubectl expose deployment hello-minikube --type=NodePort
  • Verify the pod is up and running (wait if it is still starting).
    • kubectl get pod
  • Ask Kubernetes for the service URL:
    • minikube service hello-minikube --url
  • Hit that URL in your browser or with curl (you should get back a bunch of text with CLIENT VALUES at the top of it; see the one-line example after this list).
  • Hopefully all that is working.  So, let’s remove the service and deployment we created:
    • kubectl delete service hello-minikube
    • kubectl delete deployment hello-minikube
  • Now you have Minikube running, and it’s a clean copy.
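
Putting the URL and curl steps together, the whole check (run before the delete steps) can be a one-liner, assuming a bash-style shell such as Git Bash; on plain Windows cmd, grab the URL first and paste it into a browser instead:

curl $(minikube service hello-minikube --url)

The response should begin with CLIENT VALUES followed by an echo of your request details.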

I hope that helps you get started!

User Interface

Refer to this blog for the quick user-interface setup instructions: https://coding-stream-of-consciousness.com/2018/11/08/minikube-set-up-kubernetes-ui/.

Doing Upgrades

You will want to update minikube and the kubernetes CLI after a while.  You can see their versions in the status command and then you can update them easily with chocolatey.

  • minikube status
  • choco upgrade kubernetes-cli
  • choco upgrade minikube
  • minikube update-context