⚠ Kubernetes Ingress Best Practice ⚠

Generally, when you set up an ingress controller in a k8s cluster, it is exposed as a NodePort service on every node, and the associated load balancer distributes traffic across all nodes in your cluster (round robin or similar). This happens with NGINX, the AWS ALB, and other controllers.
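
You can see this on your own cluster by checking the controller's Service. A quick sketch, assuming a default ingress-nginx install (the namespace and Service name vary by setup):

# Print the Service type and the port -> nodePort mapping for the
# ingress controller.
kubectl get svc ingress-nginx-controller -n ingress-nginx \
  -o jsonpath='{.spec.type}{"\n"}{range .spec.ports[*]}{.port} -> {.nodePort}{"\n"}{end}'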

We have seen this cause various problems at scale over the years. Two good examples are…

(1) An ALB ingress will register every node as a target for every rule, so you'll quickly hit the AWS Service Quota for targets per load balancer (1,000 by default), because the target count grows as your number of rules times your number of nodes. For example, 20 rules on a 60-node cluster is already 1,200 targets.

(2) If your cluster auto-scales a lot, there is a non-trivial chance your LB will direct traffic through a node that is scaling down on the way to your actual ingress pod. The network path is User -> LB -> Any Cluster Node -> Ingress Pod -> Service -> Target Pod. In the past we have seen this drop traffic and, e.g., interrupt a user communicating with Trino. We saw this in various cases, but it seems like proper node draining could avoid it, so I can't confirm whether it is still an issue at this point.
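
If you are removing nodes yourself rather than relying on the autoscaler's lifecycle handling, draining the node first gives connections a chance to move off of it cleanly. A minimal sketch, with a placeholder node name:

# Drain cordons the node and evicts its pods (including any ingress pods)
# so traffic can shift away before the instance is terminated.
# --ignore-daemonsets: daemonset-managed pods can't be evicted and are skipped.
# --delete-emptydir-data: acknowledge that emptyDir volumes will be lost.
kubectl drain node-being-scaled-down \
  --ignore-daemonsets \
  --delete-emptydir-data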

In any case, it is a *very* good idea to use your ingress controller's config to target, via labels, a specific, finite set of nodes that doesn't scale much. We tend to make a "main" node group to hold ingress controllers, CoreDNS, the Kubernetes dashboard, and similar things for this.
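
As one way to do that with the ingress-nginx Helm chart, a nodeSelector on the controller keeps it on the "main" group. This is a minimal sketch; the label key/value and node names are made up for illustration:

# Label the "main" node group (label key/value are just an example).
kubectl label nodes main-node-1 main-node-2 nodegroup=main

# Install/upgrade ingress-nginx so the controller only schedules onto
# nodes carrying that label.
helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace \
  --set controller.nodeSelector.nodegroup=main

If you are on AWS, the ALB ingress controller similarly lets you restrict which nodes get registered as targets, which also helps with example (1) above.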

Azure LB Dropping Traffic Mysteriously – HAProxy / NGINX / Apache / etc.

Failure Overview

I lost a good portion of last week fighting dropped traffic / intermittent connection issues behind a Basic-tier Azure load balancer.  The project it was serving had been up and running for 6 months without configuration changes and had not been restarted in 100 days.  Restarting it did not help, so clearly something had changed about the environment.  It also started happening in multiple deployments in different Azure subscriptions, implying that it was not an isolated issue or specific to one server.

Solution

After running a crazy number of tests and eventually escalating to Azure support, who reviewed the problem for over 12 hours, they pointed me to this:

https://docs.microsoft.com/en-us/azure/load-balancer/load-balancer-custom-probe-overview#types

“Do not translate or proxy a health probe through the instance that receives the health probe to another instance in your VNet as this configuration can lead to cascading failures in your scenario. Consider the following scenario: a set of third-party appliances is deployed in the backend pool of a Load Balancer resource to provide scale and redundancy for the appliances and the health probe is configured to probe a port that the third-party appliance proxies or translates to other virtual machines behind the appliance. If you probe the same port you are using to translate or proxy requests to the other virtual machines behind the appliance, any probe response from a single virtual machine behind the appliance will mark the appliance itself dead. This configuration can lead to a cascading failure of the entire application scenario as a result of a single backend instance behind the appliance. The trigger can be an intermittent probe failure that will cause Load Balancer to mark down the original destination (the appliance instance) and in turn can disable your entire application scenario. Probe the health of the appliance itself instead.”

I was using a load balancer over a scale set, and the load balancer pointed at HAProxy, which was configured to route traffic to the "primary" server.  So, I wanted Azure's load balancer to consider every server up as long as it could route to the "primary" server, even if other things on that particular server were down.

But having the health probe check HAProxy meant the probe itself was proxied through to the "primary" server, which triggered exactly the failure mode described above.

This seems like an Azure quirk to me… but they do have it documented.  Once I switched the health probe to target something not routed by HAProxy, the LB stabilized and everything was OK.
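
For reference, repointing an existing probe is a one-liner in the Azure CLI. This is only a sketch: the resource group, LB, and probe names are placeholders, and it assumes HAProxy exposes a local health endpoint on port 8404 that it answers itself rather than proxying onward:

# Point the existing probe at a port/path that HAProxy answers locally,
# not one that it forwards to the "primary" server.
az network lb probe update \
  --resource-group my-rg \
  --lb-name my-lb \
  --name haproxy-probe \
  --protocol Http \
  --port 8404 \
  --path /healthz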

 

Azure VM Unresponsive, Can’t SSH

My VM Was Non-Responsive

Today I had an Azure virtual machine go down very unexpectedly.

I received error reports from users and tried to go to the related service endpoint myself… and sure enough, it didn’t come up.  Then, I tried to ssh onto the VM and I couldn’t.

I hopped into the Azure portal, went to the VM, and things actually looked alright… it wasn’t stopped, or de-allocated, or anything.

Why?

After several minutes of digging around the Azure portal for more information, the "Activity Log" suddenly popped up a new entry.  This was fairly disconcerting, as the issue had been reported over half an hour earlier and I had already been on the portal for several minutes.

The activity log said I had a “health event” which was “updated”.  Upon expanding it, I could see more events that had been “in progress”.  When you click the “in progress” event, you can get JSON for it and look into the details.  In my case, the bottom of the details said this:

    "properties": {
        "title": "We're sorry, your virtual machine isn't available because an unexpected failure on the host server",
        "details": null,
        "currentHealthStatus": "Unavailable",
        "previousHealthStatus": "Unknown",
        "type": "Downtime",
        "cause": "PlatformInitiated"
    }

So, the physical host that was running my VM in Azure died. Azure automatically noticed this and moved the VM to a new physical host, though much more slowly than I would have liked.

The VM came up after a few more minutes and all was right with the world. So… the moral of the story is that if your VM is unresponsive, it may be because the host died, and you may have to wait quite a while before any information about it shows up in the activity log. But it does apparently auto-resolve, which is nice.
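
If you would rather poll for these events from a terminal than refresh the portal, something like this should work. It is only a sketch: the resource group name is a placeholder, and the JMESPath filter assumes the downtime events appear in the activity log under the ResourceHealth category with a properties.title field like the JSON above:

# List recent ResourceHealth events for the resource group, showing when
# they happened, their status, and the human-readable title.
az monitor activity-log list \
  --resource-group my-rg \
  --offset 2h \
  --query "[?category.value=='ResourceHealth'].{time:eventTimestamp, status:status.value, title:properties.title}" \
  --output table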

Azure: Tagging All Resources in a Resource Group With Its Tags

Recently, I had to go back and correctly tag a whole bunch of items in a new resource group, none of which had been given tags.

This kind of task can be daunting in the Azure portal… you have to open each resource, click the Tags tab, type every key and value for each tag, and save.  So… tagging 50 resources with 5 tags each ends up being at least 50 * 2 * 5 + 50 = 550 clicks (a key field and a value field for each of 5 tags on each of 50 resources, plus a save per resource), not counting all the typing!  Clearly, this is a task better suited to the CLI.

Using the Azure CLI

Microsoft actually has a very full-featured tutorial on this subject right here.  The more advanced code they provide will find every resource you have in every resource group and give each resource the tags from its group.  It will even optionally retain existing tags for resources that are already tagged.

I wanted something a little simpler, with the login included, so that I could quickly copy it in to fix a resource group here and there without worrying about affecting all the other resource groups.  So, here is the code. It also counts the items so you can see progress, since it can take some time.

Note that I wanted to forcibly replace all the tags on the resources with the resource group's tags, since some of them were incorrect. You can get code that merges with existing tags from the link noted above if you prefer.

tenant="your-tenant-id"
subscription="your-subscription-name"
rg="your-resource-group-name"

# Log in to Azure - if a browser can't be opened, it will give you a
# message and code to log in via a web browser on any device.
az login --tenant "${tenant}"
az account set --subscription "${subscription}"

# Show subscriptions just to confirm we're on the correct one.
echo "Listing subscriptions:"
az account list --output table

# Get the tags from the resource group and convert the JSON into
# space-separated key=value pairs (assumes tag values without spaces).
jsontag=$(az group show -n "$rg" --query tags)
t=$(echo "$jsontag" | tr -d '"{},' | sed 's/: /=/g')

# Get all resources in the target resource group, and loop through
# them applying the tags from the resource group. Count them to show
# progress as this can take time.
i=0
r=$(az resource list -g "$rg" --query "[].id" --output tsv)
for resid in $r
do
  # $t is intentionally unquoted so each key=value pair is a separate argument.
  az resource tag --tags $t --id "$resid"
  let "i+=1"
  echo $i
done

Also note that you can find the total number of resources you are targeting in advance with this command, so the counter is more useful :).

az resource list --resource-group "your-rg-name" --query "[].name" | jq length
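
Once the loop finishes, a quick spot check confirms the tags landed (same placeholder resource group name as above):

# Show each resource's name alongside its tags for a quick visual check.
az resource list --resource-group "your-rg-name" --query "[].{name:name, tags:tags}"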