Azure PaaS Postgres 10 Database Create + Connect (CentOS psql or DBeaver)

Today I started using the “Azure Database for PostgreSQL” PaaS service offering.  It went pretty smoothly, but connecting took a little more effort than I expected (all for good reasons!).

Creating the PostgreSQL Service

You can find the creation screen in the Azure portal by pressing (+), clicking Databases, and scrolling down.

As with most things in Azure, creating the service through the portal was pretty trivial.  You basically just provide the name, region, resource group, subscription, select the size you want, specify a user + password, and you’re done!  It takes around a minute to complete with a smallish database size.

[Image: postgres-create]

Connecting to the Database

We’re going to connect with DBeaver (it’s like SQuirreL and DbVisualizer, if you haven’t heard of it).  Then we will also connect with the “psql” command line utility from Linux.  This should be pretty quick – but there are two wrenches in the works:

  1. SSL is enabled.
  2. Azure has blocked all inbound IPs by default – nothing can connect in.

Connecting with DBeaver

  • Go to your Postgres instance in the portal and view the “Overview” screen.
  • Open DBeaver, create a new Postgres connection.
  • Copy the server name from the portal into the host section of DBeaver.
  • Copy the Server Admin Login name from the portal into the user name section of DBeaver.
  • Type in your password for that Admin user.
  • Set the database to “postgres” in DBeaver.
  • You can leave the port as the default 5432.
  • Now, go to driver properties on the left of DBeaver and set:
    • ssl to true
    • sslmode to require

This is shown here:

[Image: dbeaver-postgres]

At this point, you’ve got all the connection details in DBeaver set up properly; but you still can’t connect.  You’ll have to go into the Azure portal, click “Connection Security”, and then create a firewall rule that allows your IP in.  Alternatively, you can add a pre-defined subnet you have for yourself, your company, etc., at which point everything on that subnet will be able to connect properly.
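
If you prefer the CLI, the same kind of rule can be created with the az tool; the resource group, server name, and IP values below are placeholders:

az postgres server firewall-rule create \
  --resource-group your-rg \
  --server-name yourhost \
  --name allow-my-ip \
  --start-ip-address 203.0.113.10 \
  --end-ip-address 203.0.113.10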

After this, you should be able to “Test Connection” successfully.

Connecting with psql from CentOS 7

Assuming you opened up the firewall or subnet as noted at the end of the previous example with DBeaver, you can then just:

Install the psql client:
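
The CentOS 7 base repo version is an older 9.x client, but it should connect to a Postgres 10 server fine for basic use:

sudo yum install -y postgresql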

And connect with the psql utility:

psql "sslmode=require host=yourhost.postgres.database.azure.com dbname=postgres user=youruser@yourhost"

Azure + Packer – Create Image With Only Access to Resource Group (Not Subscription)

What Was the Problem?

I recently had to create a VM image for an Azure scale set using Packer.  Overall, the experience was great… but getting off the ground took me about an hour.  This was because most tutorials/examples assume you have contributor access to the whole subscription, but I wanted to do it with a service principal that just had access to a specific resource group.
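
For reference, scoping a service principal like that looks roughly like this with the az CLI (the IDs and names are placeholders):

az role assignment create \
  --assignee your-client-id \
  --role Contributor \
  --scope /subscriptions/your-subscription/resourceGroups/your-existing-rg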

Working Configuration

Basically, you just need the right combination (or lack thereof) of fields.

The tricky ones to get right were the combination of build_resource_group_name, managed_image_resource_group_name, and managed_image_name while leaving out location.

There was a GitHub issue chain on this (https://github.com/hashicorp/packer/issues/5873) that went on for a very long time before someone finally worked out that you have to leave out location when you want to do this without subscription-level contributor access.

Here is a reference config file that works if you populate your details:

{
  "builders": [
    {
      "type": "azure-arm",
      "client_id": "your-client-id",
      "client_secret": "your-client-secret",
      "tenant_id": "your-tenant-id",
      "subscription_id": "your-subscription",
      "build_resource_group_name": "your-existing-rg",
      "managed_image_resource_group_name": "your-existing-rg",
      "managed_image_name": "your-result-output-image-name",
      "os_type": "Linux",
      "image_publisher": "OpenLogic",
      "image_offer": "CentOS",
      "image_sku": "7.5",
      "azure_tags": {
        "ApplicationName": "Some Sample App"
      },
      "vm_size": "Standard_D2s_v3"
    }
  ],
  "provisioners": [
    {
      "execute_command": "chmod +x {{ .Path }}; {{ .Vars }} sudo -E sh '{{ .Path }}'",
      "inline": [
        "yum -y install haproxy-1.5.18-8.el7",
        "/usr/sbin/waagent -force -deprovision+user && export HISTSIZE=0 && sync"
      ],
      "inline_shebang": "/bin/sh -x",
      "type": "shell"
    }
  ]
}
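
Assuming you save that config as, say, image.json (the name is arbitrary), you validate and build it with:

packer validate image.json
packer build image.json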

[Reblogged] Copy Managed Images Between Subscriptions via PowerShell

I recently had to promote a VM image for a scale set between subscriptions.  It turns out this was very complex, but this blog post was a lifesaver.  So, I highly recommend reading it if you need to do this.

This is reblogged, so just click the link below to see the full original post.

Michael S. Collier's Blog

Introduction

Azure Managed Disks were made generally available (GA) in February 2017. Managed Disks greatly simplify working with Azure Virtual Machines (VM) and Virtual Machine Scale Sets (VMSS). They effectively eliminate the need for you to have to worry about Azure Storage accounts and related VHD constraints/limits. When using managed disks for VMs or VMSS, you select the type of disk storage (SSD or HDD) and the size of disk needed. The Azure platform takes care of the rest. Besides the simplified management aspect, managed disks bring several additional benefits, but I’ll not reiterate those here, as there is a lot of good info already available (here, here and here).

While managed disks simplify management of Azure VMs, they also simplify working with VM images. Prior to managed disks, an image would need to be copied to the Storage account where the derived VM would be created…

View original post

Azure Custom Script Extension – Text File Busy – CentOS 7.5 – VM Stuck on Creating

What’s Wrong

I’ve been building a scale set on Azure and have repeatedly observed around 40% of my VMs getting stuck on “Creating” in the Azure portal.  The scale set uses a custom script VM extension and runs on the CentOS 7.5 OS.

Debugging

After looking around online a lot, I came across numerous GitHub issues against the custom script extension or the Azure Linux agent.  They are for varying OSes, but they often involve the VM getting stuck on “Creating”; one, for example, was filed against Ubuntu.

If you go to this file “/var/log/azure/custom-script/handler.log”, you can see details about what the custom script extension is doing.  Also note that “/var/log/waagent.log” can be useful as well.

$> vi /var/log/azure/custom-script/handler.log
+ /var/lib/waagent/Microsoft.Azure.Extensions.CustomScript-2.0.6/bin/custom-script-extension install
/var/lib/waagent/Microsoft.Azure.Extensions.CustomScript-2.0.6/bin/custom-script-shim: line 77: /var/lib/waagent/Microsoft.Azure.Extensions.CustomScript-2.0.6/bin/custom-script-extension: Text file busy

In my case, it failed with “Text file busy” for some reason.  Again, there are numerous GitHub entries for this – but no solutions.

Somewhere else online I saw reports that the agent was failing while downloading files.  Note that if your extension’s download works, you should see the script and more info in this location -> /var/lib/waagent/custom-script/download/1/script-name.sh (in my case, it is not there).
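
A quick way to check is to just list that download directory:

ls -l /var/lib/waagent/custom-script/download/1/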

My custom script extension takes a script out of Azure Blob storage… so I’m going to try to bundle that script into the image and just issue the run command from the custom script extension to see if that makes it go away.

Result – Failure

Taking the script out of blob storage, putting it into the VM image itself, and just calling it with the custom script extension’s command-to-execute mitigated this issue.  This is unfortunate, as internalizing the script means every tweak requires a new image… but at least the scale set can work properly now and be stable :).

Avoiding downloading files made the issue less likely to occur… but it did come back.  It is just rarer.

I tried downgrading the Azure Linux agent (waagent) to a version noted in one of those GitHub issues.  It did not help.  I also tried reverting to CentOS 7.3, which didn’t help either.  I can’t find any way to make this work reliably.

Workaround

My workaround will be:

  • Take all customizations I was doing with the agent.
  • Move them into a Packer build (from HashiCorp).
  • Packer will build the image I need for each environment, fully configured and working.
  • This way, I just run the image and don’t worry about modifying its config with the custom script extension.

This is painful and frustrating, so I will also raise the bug with Microsoft while doing the workaround.


Azure + Terraform + Linux Custom Script Extension (Scale Set or VM)

Overview

Whether you are creating a virtual machine or a scale set in Azure, you can specify a “Custom Script Extension” to tailor the VM after creation.

Terraform Syntax

I’m not going to go into detail on how to create the entire scale set or VM, but here is the full extension block that should go inside either one of them.

resource "azurerm_virtual_machine_scale_set" "some-name" {
  # ... normal scale set config ...

  extension {
    name                 = "your-extension-name"
    publisher            = "Microsoft.Azure.Extensions"
    type                 = "CustomScript"
    type_handler_version = "2.0"

    settings = <<SETTINGS
    {
    "fileUris": ["https://some-blob-storage.blob.core.windows.net/my-scripts/run_config.sh"],
    "commandToExecute": "bash run_config.sh"
    }
SETTINGS
  }
}

Things to notice include:

  1. The extension settings have to be valid JSON (e.g. no new-lines in strings, proper quoting).
  2. This can get frustrating, so it helps to use a “heredoc” style block to write the JSON (to help avoid quote escaping, etc.): https://stackoverflow.com/a/2500451/857994
  3. Assuming you have a non-trivial use case, it is very beneficial to maintain your script(s) outside of your VM image.  After all… you don’t want to go make a new VM image every time you find a typo in your script.  This is what fileUris does; it lets you refer to a script in Azure storage or in any reachable web location.
  4. You can easily create new Azure storage, create a blob container, and upload a file marked as public so that you can refer to it without authentication (see the CLI sketch after this list).  Don’t put anything sensitive in it in this case, though; if you do, use a storage key instead.  I prefer to make it public but then pass any “secret” properties to it from the command-to-execute; that way all variables are managed by Terraform at execution time.
  5. The command-to-execute can call the scripts downloaded from the fileUris.  When the extension runs on your VM or scale set VM(s) after deployment, the scripts are downloaded to /var/lib/waagent/custom-script/download/1/script-name.sh and then run with the command-to-execute.  This location serves as the working directory.
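
Here is a sketch of that public-blob setup from item 4 using the az CLI (the account/container names are placeholders, and depending on your defaults you may need to add --location or auth arguments):

az storage account create --name somestore --resource-group some-rg --sku Standard_LRS
az storage container create --account-name somestore --name my-scripts --public-access blob
az storage blob upload --account-name somestore --container-name my-scripts \
  --name run_config.sh --file ./run_config.sh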

Debugging Failures

Sometimes things can go wrong when running custom scripts, even things outside your control.  For example, on CentOS 7.5, around 40% of my VMs keep getting stuck on “Creating”, and they clearly haven’t run the scripts.

In this case, you can look at the following log file to get more information:

/var/log/azure/custom-script/handler.log

Azure – Linux VM Image Creation – PowerShell – With Service Principal/Account

Overview

I was working on creating generalized VM images for use with scale sets and auto-scaling and I found it rather painful to get the complete set of examples for:

  1. De-provision user/etc from VM.
  2. Use Azure PowerShell with a service principal.
  3. Generalize the VM and create an image.

So, here’s a short mostly-code post on how to do that.

Specific Steps

Fair warning… as far as I know, you can’t use the VM after doing this… but you can create a new copy of it from the image, so that doesn’t matter much.

Before getting to PowerShell, run this in your VM to de-provision the most recently set up user account (e.g. if I installed everything as user “john”, created with the Azure VM, this will remove that user).

sudo waagent -deprovision+user

Now, just run the below commands after setting your own values for the 5 variables up top.  This will log in to Azure Resource Manager with the credentials you provide in the pop-up, then stop and generalize the VM, and then create an image from it and store that image in the same resource group as the VM.

$vmName = "YOUR_VM_NAME"
$rgName = "YOUR_RG_NAME"
$location = "YOUR_REGION"
$imageName = "YOUR_IMAGE_NAME"
$tenant = "YOUR_TENANT_ID"

$c = Get-Credential # Input your service principal client-id/secret.
Connect-AzureRmAccount -Credential $c -ServicePrincipal -Tenant $tenant

# Stop the VM and mark it as generalized.
Stop-AzureRmVM -ResourceGroupName $rgName -Name $vmName -Force
Set-AzureRmVm -ResourceGroupName $rgName -Name $vmName -Generalized

# Create an image from the generalized VM, in the same resource group.
$vm = Get-AzureRmVM -Name $vmName -ResourceGroupName $rgName
$image = New-AzureRmImageConfig -Location $location -SourceVirtualMachineId $vm.Id
New-AzureRmImage -Image $image -ImageName $imageName -ResourceGroupName $rgName

Configuration Trouble?

  • If you’re not sure what a service account / principal is or how to create one, the process is quite involved and I highly recommend following one of the many Microsoft-provided tutorials.
  • You can find your tenant ID by clicking the directory + subscription button at the top of the portal OR by hovering over your name/info at the top right corner.
  • The region strings can be tricky, but just Google the Microsoft site if you’re not sure.  A US East 2 example is “EastUS2”.

What’s Next?

Your VM image can now be found in that resource group – go to the portal and see.  You can go into the image in the portal and create a new VM from it, or you can use it to boot up a scale set, etc.
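
For example, booting a test VM from the image can be done with the az CLI along these lines (the names are placeholders, and the image must live in that same resource group):

az vm create \
  --resource-group YOUR_RG_NAME \
  --name test-vm-from-image \
  --image YOUR_IMAGE_NAME \
  --admin-username azureuser \
  --generate-ssh-keys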

Azure Key Vault Usage

If you want to store passwords or certificates securely and have them separated from your application code, then Azure Key Vaults are a wonderful option.

You can even set up key vaults so that you can access them without providing a client ID, etc., which makes them ultra-secure, as you don’t have to put your credentials in your code or config files.

Creating a Key Vault

To set up a key vault, you just:

  • Go to All Services in the portal.
  • Search for Key Vault.
  • Click create and then provide a name, resource group, and region (or use the CLI equivalent shown after this list).
    • Remember, all of your resources in Azure have to go into a resource group so they are logically identified and manageable.
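
If you prefer the CLI, the equivalent of those portal steps is roughly (the names and region are placeholders):

az keyvault create --name my-vault --resource-group my-rg --location eastus2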

Assigning Users

When you’re programmatically accessing resources in Azure, you always need a service principal.  You can get this by creating an Azure App Registration.  This is involved, and if you’re doing this you probably already have one.  If not, you can refer to Microsoft’s tutorials on creating a service principal.

Assuming you have the principal ready, go into your vault in the portal and click “Access Policies”.  In here, you can pick which things you need to manage from a template, then give your service principal name and create.

Remember, after you do this and the new policy shows up on the summary page, you STILL have to click “Save” at the top.  If you don’t, it’s not really there.  When you’re done, refresh the web page with F5 to make sure it’s really there.
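
The same policy can also be set from the CLI; for example, granting a principal get/list access on secrets (the vault name and app ID are placeholders):

az keyvault set-policy --name my-vault --spn your-client-id --secret-permissions get list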

Adding Secrets

Adding secrets/passwords is simple.  Just click “Secrets” and then the (+) sign and type in your name/value.
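
Or, from the CLI (placeholder names again):

az keyvault secret set --vault-name my-vault --name db-password --value "some-secret-value"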

Querying Secrets From an Application

This is very language-dependent, but Microsoft has great tutorials for every major language (Python and Java, for example).
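
For a quick sanity check outside of application code, the CLI can also read a secret back (same placeholder names as above):

az keyvault secret show --vault-name my-vault --name db-password --query value -o tsv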

Managed Service Identity

Now, we still have one problem here.  The key vault holds all of our passwords which is great… but we need a service principal (with a password) to access the vault.  So, if we leave that in our code or config files, we’re no better off in reality.

The final step is to read up on Managed Service Identities which let you configure a machine to securely talk to a key vault without providing the principal information.  This way your code and deployment config is 100% free of any passwords/etc.
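
As a rough sketch of how that works under the hood: on a VM with a managed identity enabled, your code asks the local instance metadata endpoint for a token with no credentials at all, then uses that token against the vault (the vault/secret names are placeholders):

# Request a Key Vault access token from the instance metadata service (no credentials needed).
curl -s -H Metadata:true \
  "http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https%3A%2F%2Fvault.azure.net"

# Then call the vault's REST API with the returned access_token as a Bearer token, e.g.:
# curl -H "Authorization: Bearer $TOKEN" \
#   "https://my-vault.vault.azure.net/secrets/db-password?api-version=7.0"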