AWS CentOS Extend Root Volume with No Downtime

In AWS, you can generally extend the root (or other) volume of any of your EC2 instances without downtime. The steps vary slightly by OS, file system type, and so on, though.

On a fairly default-configured AWS instance running the main marketplace CentOS 7 image, I had to run the following commands.

  1. Find and modify the volume on the AWS console’s “Volumes” page under the EC2 service.
  2. Wait for it to get into the “Optimizing” state (visible in the volume listing).
  3. Run: sudo file -s /dev/xvd*
    • If you’re in my situation, this will output a couple lines like this.
      • /dev/xvda: x86 boot sector; partition 1: ID=0x83, active, starthead 32, startsector 2048, 134215647 sectors, code offset 0x63
      • /dev/xvda1: SGI XFS filesystem data (blksz 4096, inosz 512, v2 dirs)
    • The important part is the XFS; that is the file system type.
  4. Run: lsblk
    • Again, in my situation the output looked like this:
      • NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
      • xvda 202:0 0 64G 0 disk
      • └─xvda1 202:1 0 64G 0 part /
    • This basically says that the data is in one partition under xvda.  Note: mine said 32G to start.  I increased it to 64G and am just going back through the process to document it.
  5. Run: sudo growpart /dev/xvda 1
    • This grows partition #1 of /dev/xvda to take up remaining space.
  6. Run: sudo xfs_growfs -d /
    • This tells the root file system to grow into the available space in the partition.
  7. After this, you can just do a “df -h” to see the increased partition size.
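
Condensed, the command-line half of those steps looks like this (a sketch assuming your device is /dev/xvda with an XFS root partition, as in my case):

```shell
sudo file -s /dev/xvd*     # confirm the file system type (XFS here)
lsblk                      # confirm the partition layout and sizes
sudo growpart /dev/xvda 1  # grow partition 1 to fill the device
sudo xfs_growfs -d /       # grow the XFS file system into the partition
df -h                      # verify the new size
```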

Note, your volume may take hours to get out of the “optimizing” stage, but it can still be used immediately.

You can view the raw AWS instructions here in case any of this doesn’t line up for you when you go to modify your instance: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-modify-volume.html.


Installing Jenkins on CentOS 7.x for Docker Image Builds

First, just start by getting a root shell so we can drop the sudo command from everything (e.g. type sudo bash).  This isn’t best practice, but just make sure you exit out of it when you are done :).

CentOS 7.x Complete Installation Steps

Note that this installs wget, OpenJDK Java 1.8, and Jenkins, enables Jenkins to start with the system, and then installs Docker and adds the Jenkins user to the docker group so that it can run Docker commands (which require elevated privileges).  Lastly, it restarts Jenkins so that the running process picks up the new docker group permissions.

yum update -y
yum install wget -y
yum install java-1.8.0-openjdk-devel -y
wget -O /etc/yum.repos.d/jenkins.repo http://pkg.jenkins-ci.org/redhat-stable/jenkins.repo
rpm --import https://jenkins-ci.org/redhat/jenkins-ci.org.key
yum install jenkins -y
service jenkins start
chkconfig jenkins on
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum install -y docker-ce docker-ce-cli containerd.io
systemctl start docker
usermod -a -G docker jenkins
service jenkins restart

Validation

Assuming this all worked, you should see Jenkins running on port 8080 of that server, and you can follow its on-screen instructions.

For docker, you can run the hello world container to see if it is properly set up (docker run hello-world).

Resources

I got this information from the Jenkins wiki, the Docker site itself, and a Stack Overflow entry.

Install Docker CE on Linux CentOS 7.x

This is just a short post paraphrasing the very good (and verbose!) instructions on the Docker site here: https://docs.docker.com/install/linux/docker-ce/centos/.

Basically, to install Docker CE on a fresh CentOS 7.x server, you have to:

  • Install the YUM config manager.
  • Install device-mapper-persistent-data and LVM (for the storage driver).
  • Use the YUM config manager to add the stable Docker YUM repository.
  • Install docker.
  • Start docker.
  • Test that it worked.

This script does all of that and basically just saves you from skimming through the linked page repeatedly to find the few commands you need.

sudo yum install -y yum-utils \
  device-mapper-persistent-data \
  lvm2
sudo yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo
sudo yum install -y docker-ce docker-ce-cli containerd.io
sudo systemctl start docker
sudo docker run hello-world

Assuming it works, you should see “Hello from Docker!” among various other output on your screen.

Bash – Grep (or Run Another Command) Only on Files Modified This Week or Day

Use Case

I just ran into a simple problem where I had to grep files on a server, but the directory had TONS and TONS of files in it.  I just wanted to target files modified within the last week or so.

Working Command

It turns out this find command is very handy for this occasion.  It was taken and lightly modified from a Unix Stack Exchange post after a fair bit of searching.

find . -type f -mtime -7 -exec grep "my_search_string" {} \;

Basically, it finds every file under “.” (the current directory) that was modified in the last 7 days (as in 24-hour days, not from-this-morning days) and executes the grep expression on each one.  The -type f keeps find from handing directories to grep, which would just produce errors.

You can modify the timing however you want with mtime as well as change the target directory or command to execute, and of course you can pipe the output to whatever you want :).
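
To see the time filter in action without touching a real server, here is a self-contained sketch (the file names and search string are made up):

```shell
#!/bin/sh
# Create a scratch directory with one recent file and one old file.
tmpdir=$(mktemp -d)
echo "my_search_string appears here" > "$tmpdir/recent.log"
echo "my_search_string also here"    > "$tmpdir/old.log"
# Push old.log's mtime outside the 7-day window (GNU touch supports -d).
touch -d '10 days ago' "$tmpdir/old.log"

# -type f skips directories; -mtime -7 matches files modified within the last 7 days.
# grep -l prints just the names of matching files.
find "$tmpdir" -type f -mtime -7 -exec grep -l "my_search_string" {} \;
# Only recent.log is printed; old.log is filtered out by -mtime.
```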

Extending an LVM Volume (e.g. /opt) in CentOS 7

What Does LVM Mean?

Taking a description from The Geek Diary:

The Logical Volume Manager (LVM) introduces an extra layer between the physical disks and the file system allowing file systems to:

  • Be resized and moved easily and online without requiring a system-wide outage.
  • Use discontinuous space on disk.
  • Have meaningful names to volumes, rather than the usual cryptic device names.
  • Span multiple physical disks.

Extending an LVM Volume (e.g. /opt):

Run “vgs” to display information on the available volume groups. This will tell you if you have “free” space that you can allocate to one of the existing logical volumes. In our case, we have 30 GB free.

$> vgs
  VG     #PV #LV #SN Attr   VSize   VFree
  rootvg   1   7   0 wz--n- <63.00g <30.00g

Run “lvs” to display the logical volumes on your system and their sizes. Find the one you want to extend.

$> lvs
  LV     VG     Attr       LSize
  homelv rootvg -wi-ao----  1.00g
  optlv  rootvg -wi-ao----  2.00g
  rootlv rootvg -wi-ao----  8.00g
  swaplv rootvg -wi-ao----  2.00g
  tmplv  rootvg -wi-ao----  2.00g
  usrlv  rootvg -wi-ao---- 10.00g
  varlv  rootvg -wi-ao----  8.00g

Extend the logical volume using “lvextend”. In our case, I’m growing /opt from 2 GB to 5 GB.

$> lvextend -L 5g rootvg/optlv

Display the logical volumes again if you like; “lvs” will now show 5.00g, but “df” will still report the old 2.0G size until you grow the file system in the next steps.

Use “df -hT” to show what kind of file system you are using for the volume you resized. This determines which command you need to run next.

$> df -hT
Filesystem                Type      ...
/dev/mapper/rootvg-rootlv ext4      ...
devtmpfs                  devtmpfs  ...
tmpfs                     tmpfs     ...
tmpfs                     tmpfs     ...
tmpfs                     tmpfs     ...
/dev/mapper/rootvg-usrlv  ext4      ...
/dev/sda1                 ext4      ...
/dev/mapper/rootvg-optlv  ext4      ...
/dev/mapper/rootvg-tmplv  ext4      ...
/dev/mapper/rootvg-varlv  ext4      ...
/dev/mapper/rootvg-homelv ext4      ...
/dev/sdb1                 ext4      ...
tmpfs                     tmpfs     ...

If it is ext4, you can use the following command to tell the file system to grow into the extended volume. If it is not, you will have to find the appropriate command for the given file system (for XFS, for example, it would be “xfs_growfs /opt”).

$> resize2fs /dev/mapper/rootvg-optlv

Now you should see the extended volume size in “lvs” or “df -h”; and you’re done!

$> df -h
Filesystem                 Size  Used Avail Use% Mounted on
/dev/mapper/rootvg-rootlv  7.8G   76M  7.3G   2% /
devtmpfs                   3.9G     0  3.9G   0% /dev
tmpfs                      3.9G  4.0K  3.9G   1% /dev/shm
tmpfs                      3.9G  130M  3.8G   4% /run
tmpfs                      3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/mapper/rootvg-usrlv   9.8G  2.6G  6.7G  29% /usr
/dev/sda1                  976M  119M  790M  14% /boot
/dev/mapper/rootvg-optlv   4.9G  1.9G  2.9G  40% /opt
/dev/mapper/rootvg-tmplv   2.0G   11M  1.8G   1% /tmp
/dev/mapper/rootvg-varlv   7.8G  3.2G  4.2G  44% /var
/dev/mapper/rootvg-homelv  976M   49M  861M   6% /home
/dev/sdb1                   16G   45M   15G   1% /mnt/resource
tmpfs                      797M     0  797M   0% /run/user/1000

Using SystemD Timers (Like Cron) – CentOS

The more I use it, the more I like using SystemD to manage services.  It is sometimes verbose, but in general, it is very easy to use and very powerful.  It can restart failed services, start services automatically, handle logging and log rotation for simple services, and much more.

SystemD Instead of Cron

Recently, I had to take a SystemD service and make it run on a regular timer.  So, of course, my first thought was using cron… but I’m not a particularly big fan of cron for reasons I won’t go into here.

I found out that SystemD can do this as well!

Creating a SystemD Timer

Let’s say you have a (very) simple service defined like this in SystemD (in /etc/systemd/system/your-service.service):

[Unit]
Description=Do something useful.

[Service]
Type=simple
ExecStart=/opt/your-service/do-something-useful.sh
User=someuser
Group=someuser

Then you can create a timer for it by creating another SystemD file in the same location with the same base name, but ending in “.timer” instead of “.service”. Timers can handle basically any calendar interval and can even schedule tasks relative to boot time.

[Unit]
Description=Run service once daily.

[Timer]
OnCalendar=*-*-* 00:30:00
Unit=your-service.service

[Install]
WantedBy=multi-user.target

Once the timer file is made, you can enable it like so:

sudo systemctl daemon-reload
sudo systemctl enable your-service.timer
sudo systemctl start your-service.timer

After this, you can verify the timer is working and check its next (and later on, previous) execution time with this command:

$> systemctl list-timers --all
NEXT                         LEFT          LAST                         PASSED  UNIT                         ACTIVATES
Tue 2019-03-05 20:00:01 UTC  14min left    Mon 2019-03-04 20:00:01 UTC  23h ago systemd-tmpfiles-clean.timer systemd-tmpfiles-clean.service
Wed 2019-03-06 00:30:00 UTC  4h 44min left n/a                          n/a     your-service.timer     your-service.service
n/a                          n/a           n/a                          n/a     systemd-readahead-done.timer systemd-readahead-done.service
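
As an aside, OnCalendar is not the only trigger; the [Timer] section supports several other directives. Here is a sketch using standard systemd timer options (the values are made up):

```ini
[Timer]
# Monotonic triggers: run 15 minutes after boot,
# then every hour after each activation while the system stays up.
OnBootSec=15min
OnUnitActiveSec=1h
```

For OnCalendar-based timers, you can also set Persistent=true so that a run missed while the machine was off fires once at the next boot.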

HAProxy + CentOS 7 (or RHEL 7) – Won’t Bind to Any Ports – SystemD!

What are the Symptoms?

This has bitten me badly twice now. I was deploying CentOS 7.5 servers and trying to run HAProxy on them through SystemD (I’m not sure if it is an issue otherwise).

Basically, no matter what port I use I get this message:

Starting frontend main: cannot bind socket [0.0.0.0:80]

Note that as I was too lazy to set up separate logging for the HAProxy config, I found this message in /var/log/messages with the other system messages.

Of course, seeing this your first thought is “he’s running another process on that port!”… but nope.  Also, the permissions are set up properly, etc.

What is the Problem?

The problem here is actually SELinux.  I haven’t quite dug into why, but when running under SystemD, SELinux will deny HAProxy access to all ports unless you go out of your way to allow it.

How Do We Fix It?

The fix is thankfully very simple; just set this SELinux boolean as a root/sudo user:

sudo setsebool -P haproxy_connect_any 1

…and voilà! If you restart HAProxy, it will bind fine.  I spent a lot of time on this before I found decent documentation and forum references.  I hope this helps you fix it faster!  I also found a Stack Overflow post eventually… but the accepted/good answer is about 10 answers down, so I missed it the first several times.
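
If you want to confirm SELinux is the culprit before (or after) flipping the boolean, these commands help (a sketch; it assumes the standard SELinux and audit utilities are installed on the host):

```shell
# Show the current value of the boolean (prints "haproxy_connect_any --> off" before the fix).
getsebool haproxy_connect_any

# List recent SELinux denials from the audit log; the blocked port bind should show up here.
sudo ausearch -m avc -ts recent
```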