Azure PaaS Postgres 10 Database Create + Connect (CentOS PSQL or DBeaver)

Today I started using the “Azure Database for PostgreSQL” PaaS service offering. It went pretty smoothly, but connecting took a little more effort than I expected (all for good reasons!).

Creating the PostgreSQL Service

You can find the creation screen in the Azure portal by pressing (+), clicking Databases, and scrolling down.

As with most things in Azure, creating the service through the portal was pretty trivial.  You basically just provide the name, region, resource group, subscription, select the size you want, specify a user + password, and you’re done!  It takes around a minute to complete with a smallish database size.

[Image: postgres-create – the Azure portal creation screen]
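If you'd rather script the creation, something roughly equivalent can be done with the Azure CLI (the resource group, server name, region, credentials, and SKU below are just placeholders):

# Create an Azure Database for PostgreSQL (single server) instance
az postgres server create \
  --resource-group my-resource-group \
  --name my-postgres-server \
  --location eastus \
  --admin-user myadmin \
  --admin-password 'A-Strong-Passw0rd!' \
  --sku-name B_Gen5_1 \
  --version 10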

Connecting to the Database

We’re going to connect with DBeaver (it’s like SQuirreL and DbVisualizer, if you haven’t heard of it). Then we will also connect with the “psql” command-line utility from Linux. This should be pretty quick – but there are two wrenches in the works:

  1. SSL is enabled.
  2. Azure has blocked all inbound IPs by default – nothing can connect in.

Connecting with DBeaver

  • Go to your Postgres instance in the portal and view the “Overview” screen.
  • Open DBeaver, create a new Postgres connection.
  • Copy the server name from the portal into the host section of DBeaver.
  • Copy the Server Admin Login name from the portal into the user name section of DBeaver.
  • Type in your password for that Admin user.
  • Set the database to “postgres” in DBeaver.
  • You can leave the port as the default 5432.
  • Now, go to driver properties on the left of DBeaver and set:
    • ssl to true
    • sslmode to require

This is shown here:

[Image: dbeaver-postgres – DBeaver connection settings with the ssl driver properties set]
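Under the hood DBeaver is just building a PostgreSQL JDBC URL, so the same settings can also be expressed directly as a URL (the host name below is a placeholder):

jdbc:postgresql://yourhost.postgres.database.azure.com:5432/postgres?ssl=true&sslmode=require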

At this point, you’ve got all the connection details in DBeaver set up properly, but you still can’t connect. You’ll have to go into the Azure portal, click “Connection Security”, and create a firewall rule that allows your IP in. Alternatively, you can add a pre-defined subnet you have for yourself, your company, etc.; at that point, everything on that subnet will be able to connect properly.
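If you prefer to script the firewall rule, the Azure CLI equivalent looks roughly like this (resource group, server name, and IP are placeholders):

# Allow a single client IP through the server-level firewall
az postgres server firewall-rule create \
  --resource-group my-resource-group \
  --server-name my-postgres-server \
  --name AllowMyIp \
  --start-ip-address 203.0.113.50 \
  --end-ip-address 203.0.113.50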

After this, you should be able to “Test Connection” successfully.

Connecting with PSQL from CentOS 7

Assuming you opened up the firewall or subnet as noted at the end of the previous example with DBeaver, you can then just:

Install the PSQL client library:
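On CentOS 7, the stock “postgresql” package from the base repositories includes the psql client, so something like this should be all you need:

sudo yum install -y postgresql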

And connect with the psql utility:

  • psql “sslmode=require host=yourhost.postgres.database.azure.com dbname=postgres user=youruser@yourhost”
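psql will prompt for the password interactively; if you need it non-interactive (in a script, say), the standard PGPASSWORD environment variable works too (all values below are placeholders):

PGPASSWORD='yourpassword' psql "sslmode=require host=yourhost.postgres.database.azure.com dbname=postgres user=youruser@yourhost"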

HAProxy + CentOS 7 (or RHEL 7) – Won’t Bind to Any Ports – SystemD!

What are the Symptoms?

This has bitten me badly twice now. I was deploying CentOS 7.5 servers and trying to run HAProxy on them through SystemD (I’m not sure if it is an issue otherwise).

Basically, no matter what port I use I get this message:

Starting frontend main: cannot bind socket [0.0.0.0:80]

Note that as I was too lazy to set up separate logging for the HAProxy config, I found this message in /var/log/messages with the other system messages.

Of course, seeing this, your first thought is “something else must be running on that port!”… but nope. The permissions are also set up properly, etc.

What is the Problem?

The problem here is actually SELinux. I haven’t quite dug into why, but when running under SystemD, SELinux will deny access to all ports for HAProxy unless you go out of your way to allow it to access them.
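If you want to confirm it yourself, you can check the relevant SELinux boolean and look for AVC denials in the audit log (haproxy_connect_any is the boolean name from the stock RHEL/CentOS 7 policy):

# Should show "off" before the fix
getsebool haproxy_connect_any

# Look for recent SELinux denials involving haproxy
sudo ausearch -m avc -ts recent | grep -i haproxy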

How Do We Fix It?

Thankfully, the fix is very simple; just set this SELinux boolean as a root/sudo user:

sudo setsebool -P haproxy_connect_any 1

…and voilà! If you restart HAProxy, it will bind fine. I spent a lot of time on this before I found decent documentation and forum references. I hope this helps you fix it faster! I also found a Stack Overflow question eventually… but the good answer is about 10 down, so I missed it the first several times.

Azure + Packer – Create Image With Only Access to Resource Group (Not Subscription)

What Was the Problem?

I recently had to create a VM image for an Azure scale set using Packer. Overall, the experience was great… but getting off the ground took me about an hour. This was because most tutorials/examples assume you have contributor access to the whole subscription, but I wanted to do it with a service principal that just had access to a specific resource group.

Working Configuration

Basically, you just need the right combination (or lack thereof) of fields.

The tricky ones to get right were the combination of build_resource_group_name, managed_image_resource_group_name, and managed_image_name while leaving out location.

There was a GitHub issue chain on this (https://github.com/hashicorp/packer/issues/5873) that went on for a very long time before someone finally worked out that you had to leave out location when you wanted to do this without subscription-level contributor access.

Here is a reference config file that works if you populate your details:

{
  "builders": [
    {
      "type": "azure-arm",
      "client_id": "your-client-id",
      "client_secret": "your-client-secret",
      "tenant_id": "your-tenant-id",
      "subscription_id": "your-subscription",
      "build_resource_group_name": "your-existing-rg",
      "managed_image_resource_group_name": "your-existing-rg",
      "managed_image_name": "your-result-output-image-name",
      "os_type": "Linux",
      "image_publisher": "OpenLogic",
      "image_offer": "CentOS",
      "image_sku": "7.5",
      "azure_tags": {
        "ApplicationName": "Some Sample App"
      },
      "vm_size": "Standard_D2s_v3"
    }
  ],
  "provisioners": [
    {
      "execute_command": "chmod +x {{ .Path }}; {{ .Vars }} sudo -E sh '{{ .Path }}'",
      "inline": [
        "yum -y install haproxy-1.5.18-8.el7",
        "/usr/sbin/waagent -force -deprovision+user && export HISTSIZE=0 && sync"
      ],
      "inline_shebang": "/bin/sh -x",
      "type": "shell"
    }
  ]
}
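From there it’s the normal Packer workflow; assuming you saved the template as azure-image.json (the filename is just an example):

packer validate azure-image.json
packer build azure-image.json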