Azure PaaS Postgres 10 Database Create + Connect via CentOS psql or DBeaver

Today I started using the “Azure Database for PostgreSQL” PaaS service offering.  It went pretty smoothly, but connecting took a little more effort than I expected (all for good reasons!).

Creating the PostgreSQL Service

You can find the creation screen in the Azure portal by pressing (+), clicking Databases, and scrolling down.

As with most things in Azure, creating the service through the portal was pretty trivial.  You basically just provide the name, region, resource group, and subscription, select the size you want, specify an admin user + password, and you’re done!  It takes around a minute to complete with a smallish database size.

[Image: postgres-create]
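If you prefer to script it, the same thing can be done with the Azure CLI; a minimal sketch, assuming made-up resource names and the smallest general-purpose SKU:

az postgres server create \
  --resource-group my-resource-group \
  --name my-postgres-server \
  --location eastus \
  --admin-user myadmin \
  --admin-password 'SomeStrongPassword1!' \
  --sku-name GP_Gen5_2 \
  --version 10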

Connecting to the Database

We’re going to connect with DBeaver (it’s like SQuirreL or DbVisualizer, if you haven’t heard of it).  Then we will also connect with the “psql” command-line utility from Linux.  This should be pretty quick, but there are two wrenches in the works:

  1. SSL is enabled.
  2. Azure has blocked all inbound IPs by default – nothing can connect in.

Connecting with DBeaver

  • Go to your Postgres instance in the portal and view the “Overview” screen.
  • Open DBeaver, create a new Postgres connection.
  • Copy the server name from the portal into the host section of DBeaver.
  • Copy the Server Admin Login name from the portal into the user name section of DBeaver.
  • Type in your password for that Admin user.
  • Set the database to “postgres” in DBeaver (the default database name).
  • You can leave the port as the default 5432.
  • Now, go to driver properties on the left of DBeaver and set:
    • ssl to true
    • sslmode to require

This is shown here:

[Image: dbeaver-postgres]
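For reference, all of those settings collapse into a single JDBC URL; a sketch with a placeholder host (the user name, youruser@yourhost, is entered separately):

jdbc:postgresql://yourhost.postgres.database.azure.com:5432/postgres?ssl=true&sslmode=require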

At this point, you’ve got all the connection details in DBeaver set up properly, but you still can’t connect.  You’ll have to go into the Azure portal, click “Connection Security”, and create a firewall rule that allows your IP in.  Alternatively, you can add a pre-defined subnet you have for yourself, your company, etc., at which point everything on that subnet will be able to connect.
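If you’d rather script the firewall rule than click through the portal, the Azure CLI can do it as well; a minimal sketch, with placeholder names and a placeholder IP:

az postgres server firewall-rule create \
  --resource-group my-resource-group \
  --server-name my-postgres-server \
  --name allow-my-ip \
  --start-ip-address 203.0.113.5 \
  --end-ip-address 203.0.113.5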

After this, you should be able to “Test Connection” successfully.

Connecting with psql from CentOS 7

Assuming you opened up the firewall or subnet as noted at the end of the DBeaver example, you can then just:

Install the psql command-line client:
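A minimal sketch, assuming the stock CentOS 7 repositories:

sudo yum install -y postgresql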

And connect with the psql utility:

  • psql "sslmode=require host=yourhost.postgres.database.azure.com dbname=postgres user=youruser@yourhost"

Azure + Packer – Create Image With Only Access to Resource Group (Not Subscription)

What Was the Problem?

I recently had to create a VM image for an Azure scale set using Packer.  Overall, the experience was great… but getting off the ground took me about an hour.  This was because most tutorials/examples assume you have contributor access to the whole subscription, whereas I wanted to do it with a service principal that just had access to a specific resource group.

Working Configuration

Basically, you just need the right combination (or lack thereof) of fields.

The tricky part was getting the right combination of build_resource_group_name, managed_image_resource_group_name, and managed_image_name while leaving out location.

There was a GitHub issue thread on this (https://github.com/hashicorp/packer/issues/5873) that went on for a very long time before someone finally worked out that you have to leave out location when you want to do this without subscription-level contributor access.

Here is a reference config file that works if you populate your details:

{
  "builders": [
    {
      "type": "azure-arm",
      "client_id": "your-client-id",
      "client_secret": "your-client-secret",
      "tenant_id": "your-tenant-id",
      "subscription_id": "your-subscription",
      "build_resource_group_name": "your-existing-rg",
      "managed_image_resource_group_name": "your-existing-rg",
      "managed_image_name": "your-result-output-image-name",
      "os_type": "Linux",
      "image_publisher": "OpenLogic",
      "image_offer": "CentOS",
      "image_sku": "7.5",
      "azure_tags": {
        "ApplicationName": "Some Sample App"
      },
      "vm_size": "Standard_D2s_v3"
    }
  ],
  "provisioners": [
    {
      "execute_command": "chmod +x {{ .Path }}; {{ .Vars }} sudo -E sh '{{ .Path }}'",
      "inline": [
        "yum -y install haproxy-1.5.18-8.el7",
        "/usr/sbin/waagent -force -deprovision+user && export HISTSIZE=0 && sync"
      ],
      "inline_shebang": "/bin/sh -x",
      "type": "shell"
    }
  ]
}
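Assuming you save that config as centos-image.json (a made-up name), the build is then just:

packer validate centos-image.json
packer build centos-image.json

The validate step is optional, but handy for catching JSON mistakes before a long build.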

[Reblogged] Copy Managed Images Between Subscriptions via Powershell

I recently had to promote a VM image for a scale set between subscriptions.  It turns out this was very complex, but this blog post was a lifesaver.  So, I highly recommend reading it if you need to do this.

This is re-blogged; just click the link below to see the full original post.

Michael S. Collier's Blog

Introduction

Azure Managed Disks were made generally available (GA) in February 2017. Managed Disks greatly simplify working with Azure Virtual Machines (VM) and Virtual Machine Scale Sets (VMSS). They effectively eliminate the need for you to have to worry about Azure Storage accounts and related VHD constraints/limits. When using managed disks for VMs or VMSS, you select the type of disk storage (SSD or HDD) and the size of disk needed. The Azure platform takes care of the rest. Besides the simplified management aspect, managed disks bring several additional benefits, but I’ll not reiterate those here, as there is a lot of good info already available (here, here and here).

While managed disks simplify management of Azure VMs, they also simplify working with VM images. Prior to managed disks, an image would need to be copied to the Storage account where the derived VM would be created…

View original post

Azure Custom Script Extension – Text File Busy – CentOS 7.5 – VM Stuck on Creating

What’s Wrong

I’ve been building a scale set on Azure and have repeatedly observed around 40% of my VMs getting stuck on “Creating” in the Azure portal.  The scale set uses a custom script VM extension and runs on CentOS 7.5.

Debugging

After looking around online a lot, I came across numerous GitHub issues against the custom script extension and the Azure Linux agent.  They span various OSes, but they often involve the VM getting stuck on “Creating”.  One of them, for example, was against Ubuntu.

If you go to this file “/var/log/azure/custom-script/handler.log”, you can see details about what the custom script extension is doing.  Also note that “/var/log/waagent.log” can be useful as well.

$> vi /var/log/azure/custom-script/handler.log
+ /var/lib/waagent/Microsoft.Azure.Extensions.CustomScript-2.0.6/bin/custom-script-extension install
/var/lib/waagent/Microsoft.Azure.Extensions.CustomScript-2.0.6/bin/custom-script-shim: line 77: /var/lib/waagent/Microsoft.Azure.Extensions.CustomScript-2.0.6/bin/custom-script-extension: Text file busy

In my case, it failed with “Text file busy” for some reason.  Again, there are numerous GitHub entries for this, but no solutions.

Elsewhere online, I saw reports that the agent was failing while downloading files.  Note that if the plugin download works, you should see the script and more info at /var/lib/waagent/custom-script/download/1/script-name.sh (in my case, it was not there).

My custom script extension pulls its script from Azure Blob storage… so I’m going to try bundling that script into the image and just issuing the run command from the custom script extension, to see if that makes the problem go away.

Result – Failure

Taking the script out of blob storage, putting it into the VM image itself, and just calling it with the custom script extension’s command-to-execute mitigated this issue.  This is unfortunate, as internalizing the script means every tweak requires a new image… but at least the scale set seemed to work properly and be stable for a while :).

Avoiding downloading files made the issue less likely to occur… but it did come back.  It is just rarer.

I tried downgrading the Azure Linux Agent (waagent) to a version noted in one of those GitHub issues.  It did not help.  I also tried reverting to CentOS 7.3, which didn’t help either.  I can’t find any way to make this work reliably.

Workaround

My workaround will be:

  • Take all the customizations I was doing with the custom script extension.
  • Move them into a Packer build (from HashiCorp).
  • Have Packer build the image I need for each environment, fully configured and working.
  • This way, I just run the image and don’t worry about modifying its config with the custom script extension.

This is painful and frustrating, so I will also raise the bug with Microsoft while doing the workaround.


Azure + Terraform + Linux Custom Script Extension (Scale Set or VM)

Overview

Whether you are creating a virtual machine or a scale set in Azure, you can specify a “Custom Script Extension” to tailor the VM after creation.

Terraform Syntax

I’m not going to go into detail on how to define the entire scale set or VM, but here is the full extension block that should go inside it (a standalone-VM variant is sketched after the block).

resource "azurerm_virtual_machine_scale_set" "some-name" {
  # ... normal scale set config ...

  extension {
    name                 = "your-extension-name"
    publisher            = "Microsoft.Azure.Extensions"
    type                 = "CustomScript"
    type_handler_version = "2.0"

    settings = <<SETTINGS
    {
    "fileUris": ["https://some-blob-storage.blob.core.windows.net/my-scripts/run_config.sh"],
    "commandToExecute": "bash run_config.sh"
    }
SETTINGS
  }
}
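Note that for a standalone VM, the provider exposes this as a separate azurerm_virtual_machine_extension resource rather than an inline block; a minimal sketch, assuming hypothetical azurerm_resource_group and azurerm_virtual_machine resources named “example” elsewhere in your config:

resource "azurerm_virtual_machine_extension" "some-name" {
  name                 = "your-extension-name"
  location             = "${azurerm_resource_group.example.location}"
  resource_group_name  = "${azurerm_resource_group.example.name}"
  virtual_machine_name = "${azurerm_virtual_machine.example.name}"
  publisher            = "Microsoft.Azure.Extensions"
  type                 = "CustomScript"
  type_handler_version = "2.0"

  settings = <<SETTINGS
    {
    "fileUris": ["https://some-blob-storage.blob.core.windows.net/my-scripts/run_config.sh"],
    "commandToExecute": "bash run_config.sh"
    }
SETTINGS
}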

Things to notice include:

  1. The extension settings have to be valid JSON (e.g. no new-lines inside strings, proper quoting).
  2. This can get frustrating, so it helps to use a “heredoc” style block to write the JSON (to avoid quote escaping, etc.): https://stackoverflow.com/a/2500451/857994
  3. Assuming you have a non-trivial use case, it is very beneficial to maintain your script(s) outside of your VM image.  After all… you don’t want to make a new VM image every time you find a typo in a script.  This is what fileUris does; it lets you refer to a script in Azure storage or in any reachable web location.
  4. You can easily create new Azure storage, create a blob container, and upload a file marked as public so that you can refer to it without authentication (see the CLI sketch after this list).  Don’t put anything sensitive in it in that case, though; if you need to, use a storage key instead.  I prefer to make the script public and then pass any “secret” properties to it from the command-to-execute, so that all variables are managed by Terraform at execution time.
  5. The command-to-execute can call the scripts downloaded from the fileUris.  When the extension runs on your VM or scale-set VM(s) after deployment, the scripts are downloaded to /var/lib/waagent/custom-script/download/1/script-name.sh and then run with the command-to-execute.  This location serves as the working directory.
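As an illustration of point 4, the storage setup can be scripted with the Azure CLI; a minimal sketch with made-up names, assuming you are already logged in with az login:

az storage account create --name somescriptstorage --resource-group my-resource-group --location eastus --sku Standard_LRS
az storage container create --name my-scripts --account-name somescriptstorage --public-access blob
az storage blob upload --account-name somescriptstorage --container-name my-scripts --name run_config.sh --file ./run_config.sh

The --public-access blob flag is what lets the extension download the script without credentials.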

Debugging Failures

Sometimes things can go wrong when running custom scripts, even for reasons outside your control.  For example, on CentOS 7.5, I keep getting around 40% of my VMs stuck on “Creating”, and they clearly haven’t run the scripts.

In this case, you can look at the following log file to get more information:

/var/log/azure/custom-script/handler.log
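For example, to follow it live while the VM is provisioning (assuming you can SSH to the instance):

tail -f /var/log/azure/custom-script/handler.log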