Installing Jenkins on Centos 7.x for Docker Image Builds

First, get a root shell so we can drop the sudo prefix from every command (e.g. type sudo bash).  This isn’t best practice, so just make sure you exit out of it when you are done :).

Centos 7.x Complete Installation Steps

Note that this installs wget, OpenJDK Java 1.8, and Jenkins, enables Jenkins to start with the system, and then installs Docker.  After that, it adds the jenkins user to the docker group so that it can run Docker commands (which effectively grants it elevated privileges).  Lastly, it restarts Jenkins so that the running process picks up the new docker group membership.

yum update -y
yum install wget -y
yum install java-1.8.0-openjdk-devel -y
wget -O /etc/yum.repos.d/jenkins.repo http://pkg.jenkins-ci.org/redhat-stable/jenkins.repo
rpm --import https://jenkins-ci.org/redhat/jenkins-ci.org.key
yum install jenkins -y
service jenkins start
chkconfig jenkins on
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum install -y docker-ce docker-ce-cli containerd.io
systemctl start docker
usermod -a -G docker jenkins
service jenkins restart

Validation

Assuming this all worked, you should see Jenkins running on port 8080 of that server (e.g. http://localhost:8080), and you can follow its on-screen setup instructions.
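If the setup wizard asks for the initial admin password, it is typically written to a file like the one below for an RPM-based install (the exact path can vary by version), and you can print it from your root shell:

cat /var/lib/jenkins/secrets/initialAdminPassword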

For docker, you can run the hello world container to see if it is properly set up (docker run hello-world).
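It is also worth confirming that the jenkins user itself can talk to the Docker daemon, since that is what the usermod/group change above was for.  A minimal check from the root shell (user and group names as used in the script above):

# Confirm jenkins is now in the docker group
id jenkins
# Run a Docker command as the jenkins user; it should succeed without errors
sudo -u jenkins docker ps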

Resources

I got this information from the Jenkins wiki, the Docker documentation, and one Stack Overflow entry.

Install Docker CE on Linux Centos 7.x

This is just a short post paraphrasing the very good (and verbose!) instructions on the Docker site here: https://docs.docker.com/install/linux/docker-ce/centos/.

Basically, to install Docker CE on a fresh Centos 7.x server, you have to:

  • Install the YUM config manager.
  • Install device-mapper-persistent-data and lvm2 (for the storage driver).
  • Use the YUM config manager to add the stable Docker YUM repository.
  • Install docker.
  • Start docker.
  • Test that it worked.

This script does all of that and basically just saves you from skimming through the linked page repeatedly to find the few commands you need.

sudo yum install -y yum-utils \
  device-mapper-persistent-data \
  lvm2
sudo yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo
sudo yum install docker-ce docker-ce-cli containerd.io
sudo systemctl start docker
sudo docker run hello-world

Assuming it works, you should see “Hello from Docker!” along with some other output on your screen.
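One optional follow-up that is not part of the steps above: if you want Docker to come back automatically after a reboot, you can also enable the service:

sudo systemctl enable docker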

Bash – Grep (or Run Another Command) Only On Files Modified This Week or Today

Use Case

I just ran into a simple problem where I had to grep files on a server, but the directory had TONS and TONS of files in it.  I only wanted to target files modified within the last week or so.

Working Command

It turns out the find command below is very handy for this occasion.  It was taken and lightly modified from a Unix Stack Exchange post after a fair bit of searching.

find . -mtime -7 -exec grep "my_search_string" {} \;

Basically, it finds everything in “.” (the current directory) that was modified in the last 7 days (-mtime counts 24-hour periods, not calendar days), and it executes the grep expression on each match.

You can modify the timing however you want with mtime as well as change the target directory or command to execute, and of course you can pipe the output to whatever you want :).
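A few variations on the same idea that I find useful (the search string and paths are placeholders): -type f restricts the match to regular files so grep doesn’t complain about directories, -H prints the file name with each match, and -mtime -1 narrows it to the last 24 hours.

# Only regular files, last 7 days, show the file name with each match
find . -type f -mtime -7 -exec grep -H "my_search_string" {} \;
# Same thing, but only the last 24 hours
find . -type f -mtime -1 -exec grep -H "my_search_string" {} \;
# Or collect the file names first and hand them to grep via xargs
find . -type f -mtime -7 -print0 | xargs -0 grep -l "my_search_string"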

Extending an LVM Volume (e.g. /opt) in Centos 7

What Does LVM Mean?

Taking a description from The Geek Diary:

The Logical Volume Manager (LVM) introduces an extra layer between the physical disks and the file system allowing file systems to:

  • Be resized and moved easily and online without requiring a system-wide outage.
  • Use discontinuous space on disk.
  • Have meaningful names to volumes, rather than the usual cryptic device names.
  • Span multiple physical disks.

Extending an LVM Volume (e.g. /opt):

Run “vgs” to display information on the available volume groups. This will tell you if you have “free” space that you can allocate to one of the existing logical volumes. In our case, we have 30 GB free.

$> vgs
  VG     #PV #LV #SN Attr   VSize   VFree
  rootvg   1   7   0 wz--n- <63.00g <30.00g

Run “lvs” to display the logical volumes on your system and their sizes. Find the one you want to extend.

$> lvs
  LV     VG     Attr       LSize
  homelv rootvg -wi-ao----  1.00g
  optlv  rootvg -wi-ao----  2.00g
  rootlv rootvg -wi-ao----  8.00g
  swaplv rootvg -wi-ao----  2.00g
  tmplv  rootvg -wi-ao----  2.00g
  usrlv  rootvg -wi-ao---- 10.00g
  varlv  rootvg -wi-ao----  8.00g

Extend the logical volume using “lvextend”. In our case, I’m growing /opt from 2g to 5g.

$> lvextend -L 5g rootvg/optlv

Display the logical volumes again if you like; “lvs” will now show 5.00g for optlv, but “df” will still report the old 2.00g size because the file system itself has not been grown yet.

Use df -hT to show what kind of file system you are using for the volume you resized.  This determines the command you have to run next.

$> df -hT
Filesystem                Type      ...
/dev/mapper/rootvg-rootlv ext4      ...
devtmpfs                  devtmpfs  ...
tmpfs                     tmpfs     ...
tmpfs                     tmpfs     ...
tmpfs                     tmpfs     ...
/dev/mapper/rootvg-usrlv  ext4      ...
/dev/sda1                 ext4      ...
/dev/mapper/rootvg-optlv  ext4      ...
/dev/mapper/rootvg-tmplv  ext4      ...
/dev/mapper/rootvg-varlv  ext4      ...
/dev/mapper/rootvg-homelv ext4      ...
/dev/sdb1                 ext4      ...
tmpfs                     tmpfs     ...

If it is ext4, you can use the following command to grow the file system into the newly extended volume.  If it is not, you will have to find the appropriate command for that file system (see the XFS note below).

$> resize2fs /dev/mapper/rootvg-optlv
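For example, if df -hT had shown xfs (the default on many Centos 7 installs), the equivalent step would be xfs_growfs pointed at the mount point instead of resize2fs.  A sketch, assuming the same /opt mount as above:

$> xfs_growfs /opt

Alternatively, lvextend supports an -r/--resizefs flag that grows the file system in the same step as the volume, which skips this extra command entirely.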

Now you should see the extended volume size in “lvs” or “df -h”; and you’re done!

$> df -h
Filesystem                 Size  Used Avail Use% Mounted on
/dev/mapper/rootvg-rootlv  7.8G   76M  7.3G   2% /
devtmpfs                   3.9G     0  3.9G   0% /dev
tmpfs                      3.9G  4.0K  3.9G   1% /dev/shm
tmpfs                      3.9G  130M  3.8G   4% /run
tmpfs                      3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/mapper/rootvg-usrlv   9.8G  2.6G  6.7G  29% /usr
/dev/sda1                  976M  119M  790M  14% /boot
/dev/mapper/rootvg-optlv   4.9G  1.9G  2.9G  40% /opt
/dev/mapper/rootvg-tmplv   2.0G   11M  1.8G   1% /tmp
/dev/mapper/rootvg-varlv   7.8G  3.2G  4.2G  44% /var
/dev/mapper/rootvg-homelv  976M   49M  861M   6% /home
/dev/sdb1                   16G   45M   15G   1% /mnt/resource
tmpfs                      797M     0  797M   0% /run/user/1000

SystemD/SystemCTL Tail or View Service Log (Centos 7)

By default, systemd services log their output to the systemd journal (which on Centos 7 is also forwarded to /var/log/messages), and you can view these messages with journalctl commands.

To tail the logs for a specific service you are running, you can simply do the following.

journalctl -u service-name -f

Just remove the -f if you want to view the log in general.

You can also just lazily use -f without the -u and service name if you want to tail recent system logs, which is often useful.
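A few other journalctl variations I lean on regularly (the flags below should all be available on Centos 7):

# Last 100 lines for a service
journalctl -u service-name -n 100
# Entries from the last hour only
journalctl -u service-name --since "1 hour ago"
# Entries from the current boot only
journalctl -u service-name -b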

HA Proxy + Centos 7 (or RHEL 7) – Won’t bind to any ports – SystemD!

What are the Symptoms?

This has bitten me badly twice now. I was deploying Centos 7.5 servers and trying to run HA Proxy on them through SystemD (I’m not sure if it is an issue otherwise).

Basically, no matter what port I use I get this message:

Starting frontend main: cannot bind socket [0.0.0.0:80]

Note that since I was too lazy to set up separate logging in the HAProxy config, I found this message in /var/log/messages along with the other system messages.

Of course, seeing this, your first thought is “another process must already be using that port!”… but nope.  The permissions are set up properly as well, etc.

What is the Problem?

The problem here is actually SELinux.  I haven’t quite dug into why, but when HAProxy runs under SystemD, SELinux will deny it access to all ports unless you go out of your way to allow it.
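If you want to confirm that SELinux is really the culprit on your box rather than take my word for it, the denial shows up in the audit log.  A quick check as a root/sudo user (assuming the audit tools are installed):

# Search recent AVC (access denial) events for the haproxy process
sudo ausearch -m avc -c haproxy
# Or grep the raw audit log directly
sudo grep -i 'avc.*haproxy' /var/log/audit/audit.log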

How Do We Fix It?

Thankfully the fix is very simple: just set this SELinux boolean as a root/sudo user:

sudo setsebool -P haproxy_connect_any 1

…and voilà! If you restart HAProxy, it will bind just fine.  I spent a lot of time on this before I found decent documentation and forum references.  I hope this helps you fix it faster!  I also found a Stack Overflow question eventually… but the accepted/good answer is about 10 answers down, so I missed it the first several times.
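If you want to double-check the boolean’s current value (before or after setting it), getsebool will show it:

sudo getsebool haproxy_connect_any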

Azure Custom Script Extension – Text File Busy – Centos7.5 – VM Stuck on Creating

What’s Wrong

I’ve been building a scale set on Azure and have repeatedly observed around 40% of my VMs getting stuck on “Creating” in the Azure portal.  The scale set uses a custom script VM extension and runs on the Centos 7.5 OS.

Debugging

After looking around online a lot, I came across numerous GitHub issues against the custom script extension and the Azure Linux agent.  They cover various OSes, but they often involve the VM getting stuck in “Creating” – for example, one of them was against Ubuntu.

If you go to this file “/var/log/azure/custom-script/handler.log”, you can see details about what the custom script extension is doing.  Also note that “/var/log/waagent.log” can be useful as well.
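If you are debugging the same thing, a simple way to watch both of those logs live while the extension runs is to tail them together:

tail -f /var/log/azure/custom-script/handler.log /var/log/waagent.log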

$> vi /var/log/azure/custom-script/handler.log
+ /var/lib/waagent/Microsoft.Azure.Extensions.CustomScript-2.0.6/bin/custom-script-extension install
/var/lib/waagent/Microsoft.Azure.Extensions.CustomScript-2.0.6/bin/custom-script-shim: line 77: /var/lib/waagent/Microsoft.Azure.Extensions.CustomScript-2.0.6/bin/custom-script-extension: Text file busy

In my case, it failed with “Text file busy” for some reason.  Again, there are numerous GitHub entries for this – but no solutions.

Somewhere else online I saw reports that the Agent was failing while downloading files.  Note that if your plugin download works, you should see the script and more info in this location -> /var/lib/waagent/custom-script/download/1/script-name.sh (in my case, it is not there).

My custom script extension takes a script out of Azure Blob storage… so I’m going to try to bundle that script into the image and just issue the run command from the custom script extension to see if that makes it go away.

Result – Failure

Taking the script out of blob storage, baking it into the VM itself, and just calling it with the custom script extension’s command-to-execute initially appeared to mitigate this issue.  This is unfortunate in its own right, as internalizing the script means every tweak requires a new image… but it seemed like the scale set could at least work properly and be stable :).

Avoiding downloading files made the issue less likely to occur… but it did come back.  It is just rarer.

I tried downgrading the Azure Linux agent (waagent) to a version noted in one of those GitHub issues; it did not help.  I also tried reverting to Centos 7.3, which didn’t help either.  I can’t find any way to make this work reliably.

Workaround

My workaround will be:

  • Take all of the customizations I was doing with the custom script extension.
  • Move them into a Packer build (from HashiCorp).
  • Packer will build the image I need for each environment, fully configured and working.
  • This way, I just run the image and don’t worry about modifying its config with the custom script extension.

This is painful and frustrating, so I will also raise the bug with Microsoft while doing the workaround.