Azure Custom Script Extension – Text File Busy – Centos7.5 – VM Stuck on Creating

What’s Wrong

I’ve been building a scale set on Azure and have repeatedly observed around 40% of my VMs getting stuck on “Creating” in the Azure portal.  The scale set uses a custom script VM extension and runs on CentOS 7.5.

Debugging

After a lot of searching online, I came across numerous GitHub issues against the custom script extension or the Azure Linux agent.  They are for varying OSes, but they often involve the VM getting stuck on “Creating”.  For example, one of them is against Ubuntu.

If you look at the file /var/log/azure/custom-script/handler.log, you can see details about what the custom script extension is doing.  The agent log at /var/log/waagent.log can be useful as well.

$> vi /var/log/azure/custom-script/handler.log
+ /var/lib/waagent/Microsoft.Azure.Extensions.CustomScript-2.0.6/bin/custom-script-extension install
/var/lib/waagent/Microsoft.Azure.Extensions.CustomScript-2.0.6/bin/custom-script-shim: line 77: /var/lib/waagent/Microsoft.Azure.Extensions.CustomScript-2.0.6/bin/custom-script-extension: Text file busy

In my case, it failed with “Text file busy” for some reason.  Again, there are numerous GitHub issues for this – but no solutions.

Elsewhere online I saw reports that the agent was failing while downloading files.  Note that if the extension’s download works, you should see the script and more info at this location -> /var/lib/waagent/custom-script/download/1/script-name.sh (in my case, it is not there).
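If you want to poke at an affected VM yourself, the relevant checks look roughly like this (the paths are the ones mentioned above; the numbered download sub-directory can vary per extension run):

sudo tail -n 50 /var/log/azure/custom-script/handler.log
sudo tail -n 50 /var/log/waagent.log
sudo ls -lR /var/lib/waagent/custom-script/download/    # did the script download actually happen?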

My custom script extension takes a script out of Azure Blob storage… so I’m going to try to bundle that script into the image and just issue the run command from the custom script extension to see if that makes it go away.

Result – Failure

Taking the script out of blob storage, baking it into the VM image itself, and just calling it with the custom script extension’s commandToExecute seemed to mitigate the issue.  This is unfortunate, as internalizing the script means every tweak requires a new image… but at least the scale set could work properly and be stable :).

Unfortunately, avoiding the file download only made the issue less likely to occur – it did come back, just more rarely.

I tried downgrading the Azure Linux Agent (waagent) to a version noted in one of those GitHub issues.  It did not help.  I also tried reverting to CentOS 7.3, which didn’t help either.  I can’t find any way to make this work reliably.

Workaround

My workaround will be:

  • Take all of the customizations I was doing with the agent.
  • Move them into a Packer build (from HashiCorp) – see the sketch after this list.
  • Packer will build the image I need for each environment, fully configured and working.
  • This way, I just run the image and don’t worry about modifying its config with the custom script extension.
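As a rough sketch of that Packer step (the template name here is a placeholder, not a real file from this post):

packer validate centos75-base.json    # centos75-base.json is a hypothetical template name
packer build centos75-base.json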

This is painful and frustrating, so I will also raise the bug with Microsoft while doing the workaround.

 

Running Terraform on CentOS 7/RHEL 7 With Docker

Install Docker

Here is a lean version of the Docker site instructions that I tested on CentOS 7.5.  It yum-installs some prerequisites, adds the stable Docker Community Edition repository to yum, and then installs and starts Docker.

sudo yum install -y yum-utils \
  device-mapper-persistent-data \
  lvm2
sudo yum-config-manager \
  --add-repo \
  https://download.docker.com/linux/centos/docker-ce.repo
sudo yum install docker-ce
sudo systemctl start docker
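If you also want Docker to come back up after a reboot, it is worth enabling the service too (an optional extra beyond the steps above):

sudo systemctl enable docker
sudo docker version    # quick sanity check that the daemon is responding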

Now Docker is started – but only the root user can really use it.  So, let’s create the docker group and add our current user to it.  That way we can use Docker as our current user and avoid having to prefix every command with sudo.

These instructions are from here: https://docs.docker.com/install/linux/linux-postinstall/#manage-docker-as-a-non-root-user.

sudo groupadd docker
sudo usermod -aG docker $USER

After this, log back in (e.g. exit out of SSH and jump back into your server) so that your new group membership applies.

Now Docker is running and we can use it as ourselves.
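A quick way to confirm the group change took effect is to run Docker’s standard test image without sudo:

docker run --rm hello-world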

Get Terraform Working in Docker

We will run Terraform as a single command inside a Docker container.  So, let’s start by getting the latest Terraform image from HashiCorp:

docker pull hashicorp/terraform

Create a directory for your Terraform work and give ownership to your user. Also create a sub-directory to act as the Docker volume in which we will put your Terraform plans.

sudo mkdir /opt/terraform && sudo chown $USER:$USER /opt/terraform
cd /opt/terraform
mkdir tf-vol

Now let’s create a file at /opt/terraform/tf-vol/plan.tf with a sample Terraform plan (just a debug one).

output "test" {
  value = "Hello World!"
}

After this, we can run Terraform and tell Docker to use that tf-vol directory as a volume.  Terraform will use it as the working directory, find our plan, and display “Hello World!”.

$ docker run -i -t -v /opt/terraform/tf-vol:/tf-vol/ -w /tf-vol/ hashicorp/terraform:light apply

Apply complete! Resources: 0 added, 0 changed, 0 destroyed.

Outputs:

test = Hello World!

So, we now have Docker installed, and Terraform running with it using an external volume to store our plans.
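Since that docker run incantation is long, a small shell alias makes day-to-day use nicer (just a convenience sketch, assuming the same paths as above):

alias tf='docker run -i -t -v /opt/terraform/tf-vol:/tf-vol/ -w /tf-vol/ hashicorp/terraform:light'
tf apply    # behaves like running "terraform apply" inside /tf-vol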

CentOS 7 and RHEL 7 – Increasing Open File Descriptors & Process Limits (AND SystemD / SystemCTL!)

What’s the Problem?

When deploying on RHEL 7 or CentOS 7, it is fairly common to see a warning like the following one (which I just got while installing Presto from Facebook):

WARNING: Current OS file descriptor limit is 4096. Presto recommends at least 8192.

There are many variations of this issue, but the basic problem is that the OS sets limits on certain resources, and sometimes we need to raise those limits depending on what we’re running (especially when running large apps on large servers).

The ulimits in question always end up being extra hard to edit, because you have to change them in multiple places and most blogs/posts don’t cover all of them (having suffered through this multiple times now, I know that).

How Do We View the Limits?

The warning says the OS file descriptor limit is currently 4096.  So, let’s look at the current settings with the “ulimit -a” command:

$> ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 257564
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 4096
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited

In this output, “open files” (-n) is the file descriptor limit the warning is talking about – the soft limit here is 1024, while the hard limit (visible with “ulimit -Hn”) is 4096 by default on CentOS/RHEL 7, which matches the number in the warning.  We can also see that “max user processes” (-u), the nproc limit, is 4096.

So, let’s increase both of them.
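If you want to see the soft vs. hard values directly, the -S and -H flags of ulimit do that:

ulimit -Sn    # soft "open files" limit (1024 by default on CentOS/RHEL 7)
ulimit -Hn    # hard "open files" limit (4096 by default – the number in the Presto warning)
ulimit -Su    # soft "max user processes" limit
ulimit -Hu    # hard "max user processes" limit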

Increasing the Limits

Edit /etc/sysctl.conf and add:

fs.file-max = 65536
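You can apply the sysctl change without rebooting by reloading the file:

sudo sysctl -p          # re-read /etc/sysctl.conf
sysctl fs.file-max      # confirm the new system-wide maximum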

Edit /etc/security/limits.conf  and add:

* soft nproc 65535
* hard nproc 65535
* soft nofile 65535
* hard nofile 65535

For some reason, the nproc limit is also defined in a separate drop-in file located roughly at this path (the number prefix can vary) – so please edit /etc/security/limits.d/20-nproc.conf and make its contents the following:

* soft nproc 65535
* hard nproc 65535
* soft nofile 65535
* hard nofile 65535
root soft nproc unlimited

That limits.d file is the one that most places miss.

Verifying the New Limits

Here’s the last tricky part: if you run “ulimit -a” again now, it won’t really look much better.  Log back in to your shell/server and run it again, and you’ll see the settings are now updated (yay!).

$> ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 257564
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 65535
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 65535
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited

But What About SystemD and SystemCTL?

I felt victorious at this point, but alas, when I ran Presto and HAProxy they both spit out warnings and/or errors again for the same reason.  What is this!?

It turns out I was running both under SystemD, and SystemD has its own way of managing these limits.  So, in that case, the final step is to go to your unit file at /etc/systemd/system/your-app.service and add the following inside the [Service] section (the … just means there may be other content above or below it – just add these two properties to the existing section).

[Service]
...
LimitNPROC=65535
LimitNOFILE=65535
...

After adding that, run “sudo systemctl daemon-reload” and “sudo systemctl restart your-app” to apply the settings.
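To verify the service actually picked the new limits up, you can ask SystemD what it thinks the unit’s limits are and check the running process itself (using presto as the example unit/process name from this post):

systemctl show presto -p LimitNOFILE -p LimitNPROC
cat /proc/$(pgrep -f presto | head -1)/limits | grep -Ei 'open files|processes'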

And finally, everything is right with the world!

CentOS 7 / RHEL 7 Services with SystemD + Systemctl For Dummies – Presto Example

History – SystemV & Init.d

Historically in CentOS and RHEL, you would use System V init to run a service.  Basically, an application (e.g. Spring Boot) would provide an init.d script, and you would either place it in /etc/init.d or place a symbolic link from there to your script.

The scripts would have functions for start/stop/restart/status and they would follow some general conventions.  Then you could use “chkconfig” to turn the services on so they would start with the system when it rebooted.

SystemD and SystemCTL

Things have moved on a bit and now you can use SystemD instead.  It is a very nice alternative.  Basically, you put a “unit” file at /etc/systemd/system/your-app.service.  This unit file has basic information on what type of application you are trying to run and how it works.  You can specify the working directory, etc. as well.

Here is an example unit file for Facebook’s Presto application.  We would place this at /etc/systemd/system/presto.service.

[Unit]
Description=Presto
After=syslog.target network.target

[Service]
User=your-user-here
Type=forking
ExecStart=/opt/presto/current/bin/launcher start
ExecStop=/opt/presto/current/bin/launcher stop
WorkingDirectory=/opt/presto/current/bin/
Restart=always

[Install]
WantedBy=multi-user.target

Here are the important things to note about this:

  1. You specify the user the service will run as – it should have access to the actual program location.
  2. Type can be “forking” or “simple”.  Forking means the start command launches the real service in the background and then exits (i.e. the app kind of manages itself, like Presto’s launcher does).  Simple means the command you give is the long-running process itself – e.g. a bash script or a Java JAR that runs forever – so SystemD just starts it with the command you give and restarts it if it fails.
  3. Restart=always will make sure that, as long as you started the service in the first place, it is restarted whenever it dies.  Try it; just kill -9 your application and it will come back.
  4. The [Install] section is critical if you want the application to start up when the computer reboots.  You cannot enable it for start-on-boot without it.

Useful Commands

  • sudo systemctl status presto (or your app name) -> show the current status.
  • sudo systemctl stop presto
  • sudo systemctl start presto
  • sudo systemctl restart presto
  • sudo systemctl enable presto -> enable starting on reboot of the server.
  • sudo systemctl disable presto -> don’t start on reboot of the server.
  • sudo systemctl is-enabled presto; echo $? -> show whether it is currently enabled for start-on-boot.
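For completeness, here is roughly the full sequence to install and enable the unit file above, assuming you saved it locally as presto.service:

sudo cp presto.service /etc/systemd/system/presto.service
sudo systemctl daemon-reload    # make SystemD pick up the new unit
sudo systemctl enable presto    # start on boot
sudo systemctl start presto
sudo systemctl status presto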

Install Airflow on Windows + Docker + CentOS

Continuing on my journey; setting up Apache Airflow on Windows directly was a disaster for various reasons.

Setting it up in the WSL (Windows Subsystem for Linux) copy of Ubuntu worked great.  But unfortunately, you can’t properly run services and the like in WSL, and I’d like to run Airflow in a state reasonably similar to how we’ll eventually deploy it.

So, my fallback plan is Docker on Windows, which is working great (no surprise there).  It was also much less painful to set up in the end than the other options.  I’m also switching from Ubuntu to CentOS (the community rebuild of RHEL), as I found out that Airflow provides SystemD service files tested on Red Hat based systems here: https://airflow.readthedocs.io/en/stable/howto/run-with-systemd.html.

Assuming you have Docker for Windows set up properly, just do the following to set up Airflow in a new CentOS container.

Get and Run CentOS With Python 3.6 in Docker

docker pull centos/python-36-centos7
docker container run --name airflow-centos -it centos/python-36-centos7:latest /bin/bash

Install Airflow with Pip

pip install --upgrade pip
export SLUGIFY_USES_TEXT_UNIDECODE=yes
pip install apache-airflow

Set up Airflow

First, install vim.  Yes… this is Docker, so the images are hyper-stripped down to contain only the essentials – you have to install anything else you need yourself.  I think I had to connect to the container as root to do this, using this command:

docker exec -it -u root airflow-centos /bin/bash

Then you can install it with yum just fine.  I’m not 100% sure root was needed, so feel free to try it as the normal user first.

yum install vim

I jumped back into the normal user after that (by removing the -u root from the command above).

Then set up Airflow’s home directory and database (the same steps are consolidated as plain commands after this list).

  • Set the Airflow home directory (permanently for the user).
    • vi ~/.bashrc and add this to the bottom of the file.
      • export AIRFLOW_HOME=~/airflow
    • Then re-source the file so you can use it immediately:
      • source ~/.bashrc
  • Initialize the Airflow database (we just did defaults, so it will use a local SQLite DB).
    • airflow initdb
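Here is the same setup as plain commands (same defaults as above – AIRFLOW_HOME under the user’s home directory and a local SQLite metadata DB):

echo 'export AIRFLOW_HOME=~/airflow' >> ~/.bashrc
source ~/.bashrc
airflow initdb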

Then verify the install worked by checking its version:

root@03bae42c5cdb:/# airflow version
[2018-11-07 20:26:44,372] {__init__.py:51} INFO - Using executor SequentialExecutor
[Airflow ASCII-art banner]
v1.10.0

Run Airflow Services

The actual Airflow hello world page here: https://airflow.apache.org/start.html just says to run Airflow like this:

  • airflow webserver -p 8080
  • airflow scheduler

You probably want to run these in the background and tell the logs to go to a file, etc.
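For example, something like this keeps both processes running after you close the shell and sends their output to files (plain nohup, nothing Airflow-specific; the log file names are arbitrary):

nohup airflow webserver -p 8080 > ~/airflow/webserver.out 2>&1 &
nohup airflow scheduler > ~/airflow/scheduler.out 2>&1 &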

It’s more professional to run it as a service on CentOS/RHEL (which is why I switched to CentOS from Ubuntu in the first place).  But it turns out that running it as a service inside Docker is tricky.

Even if you get everything set up properly, Docker by default disables some features for security reasons, and those make systemctl not work (so you can’t start the service).  It sounds like getting this working requires a fair amount of rework (read here: https://serverfault.com/questions/824975/failed-to-get-d-bus-connection-operation-not-permitted).

Also, I realize my idea may have been flawed in the first place (running it as a service in a container).  Containers are really intended to hold micro-services, so it would probably make more sense to launch the web server and the scheduler as their own containers and let them communicate with each other (I’m still figuring this out).  This thread nudged me into realizing that: https://forums.docker.com/t/systemctl-status-is-not-working-in-my-docker-container/9075.

It says:

Normally when you run a container you aren’t running an init system. systemctl is a process that communicates with systemd over dbus. If you aren’t running dbus or systemd, I would expect systemctl to fail.

What is the pid1 of your docker container? It should reflect the entrypoint and command that were used to launch the container.

For example, if I do the following, my pid1 would be bash:

$ docker run --rm -it centos:7 bash
[root@180c9f6866f1 /]# ps faux
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root         1  0.7  0.1  11756  2856 ?        Ss   03:01   0:00 bash
root        15  0.0  0.1  47424  3300 ?        R+   03:02   0:00 ps faux

Since only bash and ps faux are running in the container, there would be nothing for systemctl to communicate with.


So, the steps below would probably get this working if you set the container up correctly in the first place (as a privileged container), but it isn’t working for me for now.  Feel free to stop reading here and use Airflow – it just won’t be running as a service.

I might come back and update this post and/or write a future one on how to run Airflow in multiple containers.  I’m also aware that there is an awesome image here that gets everything off the ground instantly, but I was really trying to get it working myself to understand it better: https://hub.docker.com/r/puckel/docker-airflow/.

—- service setup (not complete yet)

I found information on the Airflow website here: https://airflow.readthedocs.io/en/stable/howto/run-with-systemd.html stating:

Airflow can integrate with systemd based systems. This makes watching your daemons easy as systemd can take care of restarting a daemon on failure. In the scripts/systemd directory you can find unit files that have been tested on Redhat based systems. You can copy those to /usr/lib/systemd/system. It is assumed that Airflow will run under airflow:airflow. If not (or if you are running on a non Redhat based system) you probably need to adjust the unit files.

Environment configuration is picked up from /etc/sysconfig/airflow. An example file is supplied. Make sure to specify the SCHEDULER_RUNS variable in this file when you run the scheduler. You can also define here, for example, AIRFLOW_HOME or AIRFLOW_CONFIG.

I didn’t see these scripts in my installation, so I found them on GitHub for the 1.10 version that we are running (based on the version output earlier):

https://github.com/apache/incubator-airflow/tree/v1-10-stable/scripts/systemd
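For reference, the copy step the docs describe would presumably look something like this (the exact file names are my assumption based on the description above – check the scripts/systemd directory for what is actually there, and note the units assume an airflow user/group):

sudo cp scripts/systemd/airflow-webserver.service /usr/lib/systemd/system/
sudo cp scripts/systemd/airflow-scheduler.service /usr/lib/systemd/system/
sudo cp scripts/systemd/airflow /etc/sysconfig/airflow    # example environment file
sudo systemctl daemon-reload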

Based on this, I: