Azure Custom Script Extension – Text File Busy – CentOS 7.5 – VM Stuck on Creating

What’s Wrong

I’ve been building a scale set on Azure and have repeatedly observed around 40% of my VMs getting stuck on “Creating” in the Azure portal.  The scale set uses a custom script VM extension and runs on CentOS 7.5.

Debugging

After a lot of looking around online, I came across numerous GitHub issues against the custom script extension and the Azure Linux agent.  They cover a variety of OSes, but they often involve the VM getting stuck on creating.  For example, one of them is against Ubuntu.

If you look at the file “/var/log/azure/custom-script/handler.log”, you can see details about what the custom script extension is doing.  “/var/log/waagent.log” can be useful as well.

$> vi /var/log/azure/custom-script/handler.log
+ /var/lib/waagent/Microsoft.Azure.Extensions.CustomScript-2.0.6/bin/custom-script-extension install
/var/lib/waagent/Microsoft.Azure.Extensions.CustomScript-2.0.6/bin/custom-script-shim: line 77: /var/lib/waagent/Microsoft.Azure.Extensions.CustomScript-2.0.6/bin/custom-script-extension: Text file busy

In my case, it failed with “Text file busy” for some reason.  Again, there are numerous GitHub issues for this – but no solutions.

Somewhere else online I saw reports that the agent was failing while downloading files.  Note that if your plugin’s download works, you should see the script and more info at /var/lib/waagent/custom-script/download/1/script-name.sh (in my case, it is not there).

My custom script extension takes a script out of Azure Blob storage… so I’m going to try to bundle that script into the image and just issue the run command from the custom script extension to see if that makes it go away.

Result – Failure

Taking the script out of blob storage, putting it into the VM itself, and just calling it with the custom script extension’s command-to-execute seemed to mitigate this issue, and for a while the scale set looked stable.  This is unfortunate in any case, as internalizing the script means every tweak requires a new image.

Avoiding the file download made the issue less likely to occur… but it did come back.  It is just rarer.

I tried downgrading the Azure Linux Agent (waagent) to a version noted in one of those GitHub issues; it did not help.  I also tried reverting to CentOS 7.3, which didn’t help either.  I can’t find any way to make this work reliably.

Workaround

My workaround will be:

  • Take all of the customizations I was doing with the custom script extension.
  • Move them into a Packer build (from HashiCorp).
  • Packer will build the image I need for each environment, fully configured and working.
  • This way, I just run the image and don’t worry about modifying its config with the custom script extension.

This is painful and frustrating, so I will also raise the bug with Microsoft while doing the workaround.


Azure + Terraform + Linux Custom Script Extension (Scale Set or VM)

Overview

Whether you are creating a virtual machine or a scale set in Azure, you can specify a “Custom Script Extension” to tailor the VM after creation.

Terraform Syntax

I’m not going to go into detail on how to do the entire scale set or VM, but here is the full extension block that should go inside either one of them.

resource "azurerm_virtual_machine_scale_set" "some-name" {
  # ... normal scale set config ...

  extension {
    name                 = "your-extension-name"
    publisher            = "Microsoft.Azure.Extensions"
    type                 = "CustomScript"
    type_handler_version = "2.0"

    settings = <<SETTINGS
    {
    "fileUris": ["https://some-blob-storage.blob.core.windows.net/my-scripts/run_config.sh"],
    "commandToExecute": "bash run_config.sh"
    }
SETTINGS
  }
}

Things to notice include:

  1. The extension settings have to be valid JSON (e.g. no newlines inside strings, proper quoting).
  2. This can get frustrating, so it helps to use a bash-style “heredoc” block, as above, to write the JSON (it avoids most quote escaping).  https://stackoverflow.com/a/2500451/857994
  3. Assuming you have a non-trivial use case, it is very beneficial to maintain your script(s) outside of your VM image.  After all… you don’t want to build a new VM image every time you find a typo in your script.  This is what fileUris is for; it lets you refer to a script in Azure storage or any other reachable web location.
  4. You can easily create a new Azure storage account, create a blob container, upload a file, and mark it public so that you can refer to it without authentication (see the PowerShell sketch after this list).  Don’t put anything sensitive in it in that case, though; if you need to, use a storage key instead.  I prefer to make the script public and pass any “secret” properties to it from the command-to-execute, so that all variables are managed by Terraform at execution time.
  5. The command-to-execute can call the scripts downloaded from the fileUris.  When the extension runs on your VM or scale set VM(s) after deployment, the scripts are downloaded to /var/lib/waagent/custom-script/download/1/script-name.sh and then run with the command-to-execute.  That location serves as the working directory.
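If you want to script that storage setup rather than click through the portal, here is a minimal sketch with the AzureRM PowerShell module.  All of the names (resource group, storage account, container, script file, region) are placeholders for illustration, so adjust them to your own environment.

# Placeholder names – change these to match your environment.
$rgName      = "my-scripts-rg"
$accountName = "myscriptstorage"   # must be globally unique and lowercase

# Create the storage account.
New-AzureRmStorageAccount -ResourceGroupName $rgName -Name $accountName -Location "EastUS2" -SkuName Standard_LRS

# Build a storage context, create a container with public (blob-level) read access,
# and upload the script that fileUris will point at.
$key = (Get-AzureRmStorageAccountKey -ResourceGroupName $rgName -Name $accountName)[0].Value
$ctx = New-AzureStorageContext -StorageAccountName $accountName -StorageAccountKey $key
New-AzureStorageContainer -Name "my-scripts" -Permission Blob -Context $ctx
Set-AzureStorageBlobContent -File ".\run_config.sh" -Container "my-scripts" -Blob "run_config.sh" -Context $ctx

The resulting blob URL is what goes into fileUris in the Terraform settings block above.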

Debugging Failures

Sometimes things can go wrong when running custom scripts, even things outside your control.  For example, on CentOS 7.5, I keep getting around 40% of my VMs stuck on “creating”, and they clearly haven’t run the scripts.

In this case, you can look at the following log file to get more information:

/var/log/azure/custom-script/handler.log

Azure – Linux VM Image Creation – PowerShell – With Service Principal/Account

Overview

I was working on creating generalized VM images for use with scale sets and auto-scaling and I found it rather painful to get the complete set of examples for:

  1. De-provision the user/etc. from the VM.
  2. Use Azure PowerShell with a service principal.
  3. Generalize the VM and create an image.

So, here’s a short mostly-code post on how to do that.

Specific Steps

Fair warning… as far as I know, you can’t use the VM after doing this… but you can create a new copy of it from the image, so that doesn’t matter much.

Before getting to PowerShell, run this in your VM to de-provision the most recently set-up user account (e.g. in my case, the “john” user that was created with the Azure VM and used to install everything).  This will remove that user.

sudo waagent -deprovision+user

Now, just run the commands below after setting your own values for the 5 variables up top.  This will log in to Azure Resource Manager with the credentials you provide in the pop-up, then stop and generalize the VM, and then create an image from it and store that image in the same resource group as the VM.

$vmName = "YOUR_VM_NAME"
$rgName = "YOUR_RG_NAME"
$location = "YOUR_REGION"
$imageName = "YOUR_IMAGE_NAME"
$tenant = "YOUR_TENANT_ID"

$c = Get-Credential # Enter the service principal's client ID as the user name and its secret as the password.
Connect-AzureRmAccount -Credential $c -ServicePrincipal -Tenant $tenant

# Stop and generalize the VM, then capture it as an image in the same resource group.
Stop-AzureRmVM -ResourceGroupName $rgName -Name $vmName -Force
Set-AzureRmVm -ResourceGroupName $rgName -Name $vmName -Generalized
$vm = Get-AzureRmVM -Name $vmName -ResourceGroupName $rgName
$image = New-AzureRmImageConfig -Location $location -SourceVirtualMachineId $vm.Id
New-AzureRmImage -Image $image -ImageName $imageName -ResourceGroupName $rgName

Configuration Trouble?

  • If you’re not sure what a service account / principal is or how to create one, the process is quite involved and I highly recommend following one of the many Microsoft-provided tutorials.
  • You can find your tenant ID by clicking the directory + subscription button at the top of the portal, or by hovering over your name/info at the top right corner (or see the PowerShell sketch after this list).
  • The region strings can be tricky, so check the Microsoft documentation if you’re not sure.  A US East 2 example is “EastUS2”.
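If you’re already logged in with Azure PowerShell (AzureRM module), you can also look up the tenant ID and the valid region strings from the command line.  A quick sketch:

# List your subscriptions; the output includes the TenantId for each one.
Get-AzureRmSubscription

# List the available regions; the Location column has the string to use (e.g. eastus2).
Get-AzureRmLocation | Select-Object Location, DisplayName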

What’s Next?

Your VM image can now be found in that resource group – go to the portal and see.  You can go into the image in the portal and create a new VM from it, or you can use it to boot up a scale set, etc.

Azure Key Vault Usage

If you want to store passwords or certificates securely and have them separated from your application code, then Azure Key Vaults are a wonderful option.

You can even set up key vaults so that you can access them without providing a client ID or secret at all, which makes them that much more secure, since you don’t have to keep credentials in your code or config files.

Creating a Key Vault

To set up a key vault through the portal (a PowerShell sketch follows this list), you just:

  • Go to All Services in the portal.
  • Search for Key Vault.
  • Click create and then provide a name, resource group, and region.
    • Remember, all of your resources in Azure have to go into a resource group so they are logically identified and manageable.
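If you’d rather script the vault creation, here is a minimal sketch with the AzureRM PowerShell module; the resource group, vault name, and region are placeholders.

# Placeholder names – the vault name must be globally unique.
New-AzureRmResourceGroup -Name "my-keyvault-rg" -Location "EastUS2"
New-AzureRmKeyVault -VaultName "my-app-vault" -ResourceGroupName "my-keyvault-rg" -Location "EastUS2"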

Assigning Users

When you’re programmatically accessing resources in Azure, you always need a service principal.  You can get one by creating an Azure App Registration.  This is involved, and if you’re doing this you probably already have one; if not, you can refer to Microsoft’s tutorial on creating a service principal.

Assuming you have the principal ready, go into your vault in the portal and click “Access Policies”.  In there, you can pick the permissions you need from a template, enter your service principal’s name, and create the policy.

Remember, even after the new policy shows up on the summary page, you STILL have to click “Save” at the top.  If you don’t, it isn’t actually applied.  When you’re done, refresh the page with F5 to make sure it’s really there.
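You can also grant the access policy from PowerShell.  A minimal sketch with the AzureRM module, using the placeholder vault name from above and your principal’s client ID:

# Grant a service principal permission to read secrets (placeholder names).
Set-AzureRmKeyVaultAccessPolicy -VaultName "my-app-vault" -ServicePrincipalName "YOUR_CLIENT_ID" -PermissionsToSecrets get,list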

Adding Secrets

Adding secrets/passwords is simple.  Just click “Secrets” and then the (+) sign and type in your name/value.
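The same thing can be scripted; for example, with the AzureRM module (placeholder names and value):

# Secret values have to be passed as a SecureString.
$value = ConvertTo-SecureString "REPLACE_WITH_THE_REAL_PASSWORD" -AsPlainText -Force
Set-AzureKeyVaultSecret -VaultName "my-app-vault" -Name "db-password" -SecretValue $value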

Querying Secrets From an Application

This is very language dependent, but Microsoft has good tutorials for every major language (Python and Java, for example).
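As one concrete illustration, this is roughly what reading a secret back looks like from PowerShell with the AzureRM module, using the placeholder names from earlier:

# Fetch the secret and read its plain-text value.
$secret = Get-AzureKeyVaultSecret -VaultName "my-app-vault" -Name "db-password"
$secret.SecretValueText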

Managed Service Identity

Now, we still have one problem here.  The key vault holds all of our passwords, which is great… but we need a service principal (with a password) to access the vault.  So, if we leave that principal’s credentials in our code or config files, we’re no better off in reality.

The final step is to read up on Managed Service Identities, which let you configure a machine to securely talk to a key vault without providing the principal information.  This way your code and deployment config are 100% free of passwords and other secrets.
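To make that concrete, here is a hedged sketch of what using a Managed Service Identity looks like from inside a VM in PowerShell: the VM asks the local instance metadata endpoint for a Key Vault token and then calls the vault’s REST API directly.  It assumes the VM already has an identity assigned and that the identity has been granted access to the (placeholder) vault and secret below.

# Ask the local instance metadata service (IMDS) for an access token scoped to Key Vault.
$tokenResponse = Invoke-RestMethod -Method GET -Headers @{ Metadata = "true" } -Uri "http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https%3A%2F%2Fvault.azure.net"

# Use the token to read a secret straight from the vault's REST API (placeholder names).
$secret = Invoke-RestMethod -Method GET -Headers @{ Authorization = "Bearer $($tokenResponse.access_token)" } -Uri "https://my-app-vault.vault.azure.net/secrets/db-password?api-version=2016-10-01"

$secret.value   # the plain-text secret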