[Reblogged] Copy Managed Images Between Subscriptions via PowerShell

I recently had to promote a VM image for a scale set between subscriptions.  It turns out this is surprisingly complex, but the blog post below was a life saver, so I highly recommend reading it if you ever need to do this.

This is a reblog, so just click the link below to see the full original post (which I highly recommend).

Michael S. Collier's Blog

Introduction

Azure Managed Disks were made generally available (GA) in February 2017. Managed Disks greatly simplify working with Azure Virtual Machines (VM) and Virtual Machine Scale Sets (VMSS). They effectively eliminate the need for you to have to worry about Azure Storage accounts and related VHD constraints/limits. When using managed disks for VMs or VMSS, you select the type of disk storage (SSD or HDD) and the size of disk needed. The Azure platform takes care of the rest. Besides the simplified management aspect, managed disks bring several additional benefits, but I’ll not reiterate those here, as there is a lot of good info already available (here, here and here).

While managed disks simplify management of Azure VMs, they also simplify working with VM images. Prior to managed disks, an image would need to be copied to the Storage account where the derived VM would be created…


Azure Custom Script Extension – Text File Busy – CentOS 7.5 – VM Stuck on Creating

What’s Wrong

I’ve been building a scale set on Azure and have repeatedly observed around 40% of my VMs getting stuck on “Creating” in the Azure portal.  The scale set uses a custom script VM extension and runs on CentOS 7.5.

Debugging

After looking around online a lot, I came across numerous GitHub issues against the custom script extension or the Azure Linux agent.  They cover varying OSes, but they often involve the VM getting stuck in the “Creating” state; one example was against Ubuntu.

If you look at /var/log/azure/custom-script/handler.log, you can see details about what the custom script extension is doing.  Note that /var/log/waagent.log can be useful as well.

$> vi /var/log/azure/custom-script/handler.log
+ /var/lib/waagent/Microsoft.Azure.Extensions.CustomScript-2.0.6/bin/custom-script-extension install
/var/lib/waagent/Microsoft.Azure.Extensions.CustomScript-2.0.6/bin/custom-script-shim: line 77: /var/lib/waagent/Microsoft.Azure.Extensions.CustomScript-2.0.6/bin/custom-script-extension: Text file busy

In my case, it failed with “Text file busy” for some reason.  Again, there are numerous GitHub issues for this – but no solutions.

Elsewhere online I saw reports that the agent was failing while downloading files.  Note that if the extension’s download step works, you should see the script and related info under /var/lib/waagent/custom-script/download/1/script-name.sh (in my case, it was not there).
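A quick way to check is something like the following (paths are the ones mentioned above; the numbered sub-directory is the extension’s sequence number and may differ on your VM):

# Did the extension manage to download the script at all?
sudo ls -lR /var/lib/waagent/custom-script/download/
# Watch the agent and extension logs while the extension runs
sudo tail -f /var/log/waagent.log /var/log/azure/custom-script/handler.log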

My custom script extension pulls a script out of Azure Blob storage… so I’m going to try bundling that script into the image and just issuing the run command from the custom script extension, to see if that makes the issue go away.
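For reference, this is roughly what that looks like with the Azure CLI; the resource group, scale set name, and baked-in script path are placeholders for my setup, not something to copy verbatim:

# Re-apply the extension with only a command to execute – no fileUris download
az vmss extension set \
  --resource-group my-rg \
  --vmss-name my-scale-set \
  --name CustomScript \
  --publisher Microsoft.Azure.Extensions \
  --version 2.0 \
  --settings '{"commandToExecute": "bash /opt/bootstrap/configure.sh"}'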

Result – Failure

Taking the script out of blob storage, baking it into the VM itself, and just calling it with the custom script extension’s command-to-execute initially seemed to mitigate the issue.  This is unfortunate, as internalizing the script means every tweak requires a new image… but at least the scale set appeared to work properly and be stable :).

Avoiding the file download made the issue less likely to occur… but it did come back.  It is just rarer.

I tried downgrading the Azure Linux Agent (waagent) to a version noted in one of those GitHub issues.  It did not help.  I also tried reverting to CentOS 7.3, which didn’t help either.  I can’t find any way to make this work reliably.

Workaround

My workaround will be:

  • Take all of the customizations I was doing with the agent.
  • Move them into a Packer build (Packer is a HashiCorp tool).
  • Let Packer build the image I need for each environment, fully configured and working (a minimal template sketch follows this list).
  • This way, I just run the image and don’t have to worry about modifying its configuration with the custom script extension.
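To give an idea of the shape of that build, here is a minimal sketch of a Packer template using the azure-arm builder.  The resource group, image name, and provisioning script are placeholders, and the authentication fields (client_id, client_secret, subscription_id, tenant_id) are omitted – don’t treat this as my exact template.

{
  "builders": [
    {
      "type": "azure-arm",
      "managed_image_resource_group_name": "my-images-rg",
      "managed_image_name": "scaleset-base-{{timestamp}}",
      "os_type": "Linux",
      "image_publisher": "OpenLogic",
      "image_offer": "CentOS",
      "image_sku": "7.5",
      "location": "East US",
      "vm_size": "Standard_DS2_v2"
    }
  ],
  "provisioners": [
    {
      "type": "shell",
      "script": "configure.sh"
    }
  ]
}

Running “packer build” against a template like this produces a managed image that the scale set can reference directly, with no custom script extension needed at deploy time.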

This is painful and frustrating, so I will also raise a bug with Microsoft while implementing the workaround.

 

Running Terraform on CentOS 7/RHEL 7 With Docker

Install Docker

Here is a lean version of the Docker site’s install instructions that I tested on CentOS 7.5.  It uses yum to install some prerequisites, adds the stable Docker Community Edition repository to yum, and then installs and starts Docker.

sudo yum install -y yum-utils \
  device-mapper-persistent-data \
  lvm2
sudo yum-config-manager \
  --add-repo \
  https://download.docker.com/linux/centos/docker-ce.repo
sudo yum install docker-ce
sudo systemctl start docker

Now Docker is started – but only the root user can really use it.  So, let’s create the docker group and add our current user to it.  That way we can use docker with our current user and avoid having to use sudo on every command.

These instructions are from here: https://docs.docker.com/install/linux/linux-postinstall/#manage-docker-as-a-non-root-user.

sudo groupadd docker
sudo usermod -aG docker $USER

After this, log back in (e.g. exit your SSH session and reconnect to the server) so that your new group membership applies.

Now Docker is running and we can use it as ourselves.
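A quick sanity check that the group change took effect (hello-world is just Docker’s standard test image):

# Both of these should now work without sudo
docker info
docker run --rm hello-world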

Get Terraform Working in Docker

We will run Terraform as a single command inside a Docker container.  So, let’s start by getting the latest Terraform image from HashiCorp:

docker pull hashicorp/terraform

Create a directory for your Terraform work and give your user ownership of it. Also create a sub-directory to act as the Docker volume in which we will put our Terraform plans.

sudo mkdir /opt/terraform && sudo chown $USER:$USER /opt/terraform
cd /opt/terraform
mkdir tf-vol

Now let’s create a file at /opt/terraform/tf-vol/plan.tf with a sample Terraform plan (just a debug one).

output "test" {
  value = "Hello World!"
}

After this, we can run Terraform and tell Docker to use that tf-vol directory as a volume. Terraform will use it as the working directory, will find our plan, and will display “Hello World!”.

$ docker run -i -t -v /opt/terraform/tf-vol:/tf-vol/ -w /tf-vol/ hashicorp/terraform:light apply

Apply complete! Resources: 0 added, 0 changed, 0 destroyed.

Outputs:

test = Hello World!
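The same docker run pattern works for the other Terraform sub-commands; for a real plan (one with providers and state), you would run init first and review a plan before applying:

# Download providers/modules into the mounted volume
docker run -i -t -v /opt/terraform/tf-vol:/tf-vol/ -w /tf-vol/ hashicorp/terraform:light init

# Preview the changes without applying them
docker run -i -t -v /opt/terraform/tf-vol:/tf-vol/ -w /tf-vol/ hashicorp/terraform:light plan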

So, we now have Docker installed, and Terraform running with it using an external volume to store our plans.

Ansible – Refer to Host in Group by Index

Occasionally it is very useful to refer to a host in a group by an index.  For example, if you are setting up Apache or HAProxy, you may need to push a configuration file out to each host that can redirect to all other hosts.

It is actually quite easy to refer to the hosts in a group by index, but unfortunately it’s not necessarily easy to google.  Here is the syntax for the first three hosts in a group (here, a group named “coordinators”):

{{groups['coordinators'][0]}}
{{groups['coordinators'][1]}}
{{groups['coordinators'][2]}}
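That index syntax is handy for one-off references (e.g. delegating a task to the first host in the group), but if a config file needs every host in the group, a Jinja2 loop in a template is usually cleaner.  Here is a rough sketch for an HAProxy-style backend; the template name, the port, and the use of the ansible_default_ipv4 fact (which requires fact gathering) are assumptions about your setup:

# templates/backends.cfg.j2 (hypothetical) – one server line per host in the group
{% for host in groups['coordinators'] %}
server {{ host }} {{ hostvars[host]['ansible_default_ipv4']['address'] }}:8321 check
{% endfor %}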

Presto Coordinator High Availability (HA)

Quick Recap – What is Presto?

Presto is an open-source distributed SQL query engine from Facebook that runs as its own cluster.  It can point at an existing Hive metastore and run queries against the Hive tables in HDFS/etc. itself, using its own resources.  It is much faster than Hive because it does everything in memory rather than via MapReduce.  It can connect to numerous data sources aside from Hive as well (though I have only used it with Hive over Azure’s ADLS personally).

No High Availability

We started using Presto in an enterprise use case, and I was astounded to find that it doesn’t have any high availability (HA) built into it.  Presto as a product is wonderful – it is fast, easy to set up, provides pretty solid query diagnostics, handles massive queries in a very stable manner, etc.  So, the complete lack of an HA solution seems very strange given the strength of the product.

The critical component in Presto is the Coordinator, and it is a single point of failure.  It is the brains of the operation; it parses queries, breaks them into tasks, controls where work gets scheduled, etc.  Users only talk to the Coordinator node.

Coordinators vs Workers

Despite the importance of a coordinator, the only real differences between a coordinator and  the rest of the worker nodes at a configuration level are that:

  1. Coordinators specify that they are a coordinator.
  2. Coordinators can run an embedded discovery server – all nodes in the cluster report to this discovery server (including the coordinator itself).  This discovery server can actually be run separately from the coordinator as well if desired; I think the provision of an embedded one is relatively new.  The discovery server is how the coordinator knows the full set of nodes it is managing.
  3. Coordinators can choose whether or not they themselves are used to process queries (as opposed to just managing them).

Again, coordinators take client connections (e.g. JDBC, ODBC, etc), and they take queries from those connections, parse them, validate them, break them into tasks, and schedule them across the pool of available workers.

Workers just report to the discovery server and handle the tasks they are allocated.
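For reference, here is a minimal config.properties sketch for each role.  The port matches the example later in this post and the host names are placeholders, so adjust both for your own cluster:

# Coordinator's etc/config.properties
coordinator=true
node-scheduler.include-coordinator=false
http-server.http.port=8321
discovery-server.enabled=true
# the coordinator reports to its own embedded discovery server
discovery.uri=http://localhost:8321

# Worker's etc/config.properties
coordinator=false
http-server.http.port=8321
# in a single-coordinator setup this is the coordinator's address;
# in the HA setup described below, it becomes the load balancer's address
discovery.uri=http://coordinator-host:8321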

HA Options

There is surprisingly little to find online about making Presto HA.  The only two solutions that I’ve seen are:

  • Run multiple clusters behind a load balancer.
  • Run multiple coordinators and some form of proxy service to ensure only one is ever active at a time.

Both of these have challenges and/or drawbacks.

If you are running multiple clusters, you probably want them to be active/active so you don’t use only half of your nodes at a time.  Handling this properly requires that your proxy service issue a redirect to the target cluster’s coordinator, so that the client (e.g. a JDBC connection) can re-send the request to that coordinator and talk to it directly.  This will work, but you have still limited the maximum query size you can handle: splitting your nodes into two or more clusters means they cannot all cooperate on very large queries.

Running multiple coordinators for HA is preferable, as you get to combine all of your nodes into a single, large cluster that can attack large queries.  It is not trivial to do, though: if two coordinators operate at once, they can degrade and even deadlock the cluster.  We’ll dig into how to run with multiple coordinators now.

Using Multiple Coordinators

If you want to set up Presto using multiple coordinators, here is the general approach:

  1. Set up 2 or 3 nodes as coordinators.
  2. Tell them to run their own discovery servers in their config.
  3. Tell them to point at localhost for their own discovery server – this is quite important.
  4. Tell them not to do work themselves (this keeps things more stable, but unfortunately it means your cluster has a little less capacity; you’ll probably have far more workers than coordinators though, so it shouldn’t be an issue).
  5. Install HAProxy on the coordinator nodes.  Register all the coordinator nodes in order and make all but the first one a “backup”.  So, for example, run HAProxy on port 8385 and run Presto on port 8321.  All traffic will go to node #1 unless it’s down, in which case it will go to node #2, and so on (see the HAProxy sketch after this list).
  6. Set up a load balancer in front of the coordinator nodes pointing at the HAProxy port and make sure traffic can get through.
  7. Set up all worker nodes to target the load balancer for the discovery server.  So, all workers target the load balancer, which can land on any coordinator node’s HAProxy, all of which forward to the primary coordinator.  The primary coordinator therefore always has all workers reaching it, courtesy of HAProxy.
  8. As each coordinator itself only reports to its localhost discovery server, coordinators will not end up talking to each other’s discovery servers and will not interfere with each other.  Only one coordinator will ever have workers registered with it at a time.
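Here is a rough sketch of the HAProxy piece of that setup.  The IP addresses are placeholders; the ports follow the example above (HAProxy on 8385 fronting Presto on 8321), and Presto’s /v1/info endpoint is used as the health check.  Each coordinator’s own discovery.uri stays pointed at http://localhost:8321, while the workers’ discovery.uri points at the load balancer in front of port 8385.

# /etc/haproxy/haproxy.cfg (excerpt) – installed on each coordinator node
frontend presto_in
    bind *:8385
    mode http
    default_backend presto_coordinators

backend presto_coordinators
    mode http
    option httpchk GET /v1/info
    # all traffic goes to coord1 unless it is down, then coord2, then coord3
    server coord1 10.0.0.11:8321 check
    server coord2 10.0.0.12:8321 check backup
    server coord3 10.0.0.13:8321 check backup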

Let Coordinators Use the Load Balancer?

If you let the coordinators use the load balancer, then they will all end up at the primary coordinator’s discovery server.  Now… I have seen people online saying that they ran all nodes as coordinators (e.g. in the linked Google Groups conversations below), in which case this must be happening.

When I tried it though, I clearly got this warning from all the coordinators (and probably the workers too, but I didn’t check).  It comes out once a second:

2018-12-29T01:38:01.479Z WARN http-worker-176 com.facebook.presto.execution.SqlTaskManager Switching coordinator affinity from awe4s to 9mdsu
2018-12-29T01:38:01.806Z WARN http-worker-175 com.facebook.presto.execution.SqlTaskManager Switching coordinator affinity from 9mdsu to awe4s

Someone in a GitHub issue (I forget the link) stated that this means the memory management may get muddled up, which sounds scary.  I did provide a link to the warning’s source code below, which somewhat verifies this.

So, maybe it is a disaster, or maybe it’s harmless – but in any case, I didn’t want warning messages that looked this bad coming at me once a second.  So, I opted to have each coordinator talk only to its own discovery server, which makes the backup coordinators 100% idle (not processing anything) unless they become the current primary coordinator.  This waste is unfortunate, but as we’ll have far more workers than coordinators, it’s not the end of the world.

Drawbacks

This will keep your cluster running in the event that a coordinator fails.  Any queries that are active when a coordinator fails will still fail though – we can’t do anything about that unless Presto starts supporting HA internally.  Also, the fail-over period will be very much tied to your HAProxy configuration and your load balancer health checks (mine takes around 30 seconds using an Azure load balancer and HAProxy; I’ll be looking to reduce that).

Useful Links