There is probably a much cleaner way of doing this using off-the-shelf automations. But I was just following along with the AWS installation instructions and got this working.
- name: Download AWS CLI bundle.
  shell: "cd /tmp && rm -rf /tmp/awscli* && curl 'https://s3.amazonaws.com/aws-cli/awscli-bundle.zip' -o 'awscli-bundle.zip'"

- name: Update repositories cache and install "unzip" package
  apt:
    name: unzip
    update_cache: yes

- name: Unzip AWS CLI bundle.
  shell: "cd /tmp && unzip awscli-bundle.zip"

- name: Run AWS CLI installer.
  shell: "/tmp/awscli-bundle/install -i /usr/local/aws -b /usr/local/bin/aws"

- name: Log into aws ecr docker registry
  when: jupyterhub__notebook_registry != ''
  shell: "$(/usr/local/bin/aws ecr get-login --no-include-email --region us-east-1)"
In order to do the actual login, you need to ensure your EC2 instance has an IAM role assigned to it that has ECR reader privileges. Then you should be all good!
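As an aside, `get-login` was removed in AWS CLI v2 in favor of `get-login-password`. If you end up on the newer CLI, a rough sketch of the equivalent task would look something like this (I'm assuming here that the `jupyterhub__notebook_registry` variable from above holds the registry hostname to log into):

```yaml
- name: Log into aws ecr docker registry (AWS CLI v2 style)
  when: jupyterhub__notebook_registry != ''
  shell: "/usr/local/bin/aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin {{ jupyterhub__notebook_registry }}"
```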
I was trying to run Ansible on a Debian slim Docker image and got this error, which was rather cryptic. There wasn't any particularly good information on Google for fixing it either.
ContextualVersionConflict: (cryptography 1.7.1 (/usr/lib/python2.7/dist-packages), Requirement.parse('cryptography>=2.5'), set(['paramiko']))
ERROR! Unexpected Exception, this is probably a bug: (cryptography 1.7.1 (/usr/lib/python2.7/dist-packages), Requirement.parse('cryptography>=2.5'), set(['paramiko']))
the full traceback was:
Traceback (most recent call last):
File "/usr/local/bin/ansible-playbook", line 97, in <module>
mycli = getattr(__import__("ansible.cli.%s" % sub, fromlist=[myclass])…
In my case, running this before my Ansible install resolved the issue. It just forces an upgrade to a newer version of cryptography, which Ansible is happy with. Of course, if something else depends on the older version, this may not be an option.
pip install --force-reinstall cryptography==2.7
When you’re deploying code, you often want the files you have checked out of git to be executable. For example, if you have a script for automation, you want to be able to do ./script rather than “bash script” or “python script”. That way the #! line takes care of doing the right thing.
You can actually make a file “executable” in git by doing this command before your push:
git update-index --chmod=+x somefile.sh
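If you want to see it in action, here is a self-contained sketch in a throwaway repo (the file name is just for illustration); the staged mode flips to 100755 without touching the file's permissions on disk, and fresh clones after a push will check the file out with the executable bit set:

```shell
# Throwaway repo to demonstrate the index-mode flip.
repo=$(mktemp -d)
cd "$repo"
git init -q .
echo '#!/bin/sh' > somefile.sh
git add somefile.sh               # staged as 100644 (not executable)
git update-index --chmod=+x somefile.sh
git ls-files --stage somefile.sh  # mode column now reads 100755
```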
Today I was setting up a new Jenkins server to run docker image builds and pushes to Amazon ECR.
Jenkins installed fine, as did the AWS CLI, and docker. Unfortunately, when I went to use docker against AWS from Jenkins, I had some integration issues at first (which is less than surprising).
Anyway! This required me to become the "jenkins" user, which Jenkins runs as by default when installed with its normal installers. Unfortunately, when you try to "su - jenkins", you will find that not much happens.
I found in this Stack Overflow post that this is because the jenkins user is a service account, not made for interactive terminals. Here is the quote:
Jenkins is a service account, it doesn’t have a shell by design. It is generally accepted that service accounts shouldn’t be able to log in interactively
If for some reason you want to login as jenkins, you can do so with:
sudo su -s /bin/bash jenkins
So, just do the following and you’ll be fine.
sudo su -s /bin/bash jenkins
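You can confirm why the plain "su - jenkins" goes nowhere by looking at the account's login shell (the seventh field of its passwd entry), which for service accounts is usually /bin/false or /usr/sbin/nologin. The else branch below just keeps this runnable on machines that don't have a jenkins user yet:

```shell
# Print the jenkins account's login shell, if the account exists.
if getent passwd jenkins > /dev/null; then
  getent passwd jenkins | cut -d: -f7
else
  echo "no jenkins user on this machine"
fi
```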
If you need to download a secured Artifactory object from an automation server and you don't have a service account you can use, you can log into Artifactory as your user, generate an API token, and use that instead.
Then you can simply do this:
wget --header='X-JFrog-Art-Api: your-very-long-token-from-artifactory' https://company.com/artifactory/local-pypi-repo/some_repo/some_project/artifact_name-3.1.0-py3-none-any.whl
wget will send the Artifactory token in its request header, and Artifactory will let you download the artifact as if you had logged in yourself.
Word of caution: while you haven't revealed your username and password, this token can effectively be used against any Artifactory API as if it's you. So, be careful about who else can see it :).
First, just start by getting a root shell so we can drop the sudo command from everything (e.g. type sudo bash). This isn’t best practice, but just make sure you exit out of it when you are done :).
Centos 7.x Complete Installation Steps
Note that this installs wget, OpenJDK Java 1.8, and Jenkins; enables Jenkins on system start; installs docker; and adds the jenkins user to the docker group so that it can run docker commands (which require elevated privileges). Lastly, it restarts Jenkins so that the running process picks up the new docker group membership.
yum update -y
yum install wget -y
yum install java-1.8.0-openjdk-devel -y
wget -O /etc/yum.repos.d/jenkins.repo http://pkg.jenkins-ci.org/redhat-stable/jenkins.repo
rpm --import https://jenkins-ci.org/redhat/jenkins-ci.org.key
yum install jenkins -y
service jenkins start
chkconfig jenkins on
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum install -y docker-ce docker-ce-cli containerd.io
systemctl start docker
usermod -a -G docker jenkins
service jenkins restart
Assuming this all worked, you should see Jenkins running on port 8080 of that server, and you can follow its on-screen instructions.
For docker, you can run the hello world container to see if it is properly set up (docker run hello-world).
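Here is a quick sketch of post-install sanity checks based on the steps above; the guards make each check a graceful no-op on machines where the tool or user isn't present, so it's safe to run anywhere:

```shell
# Check the docker daemon and the jenkins user's group membership.
if command -v docker >/dev/null 2>&1; then
  docker info >/dev/null 2>&1 && echo "docker daemon reachable"
fi
if id jenkins >/dev/null 2>&1; then
  id -nG jenkins | tr ' ' '\n' | grep -qx docker && echo "jenkins is in the docker group"
fi
echo "checks done"
```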
I pulled this information together from the Jenkins wiki, the docker site itself, and a Stack Overflow entry.
I was reading a docker script someone else created and came across an interesting blog explaining a parameter it used (--add-host) right here. I recommend reading that longer blog, but I'm recording the short notes and link here for myself as I'm sure I'll be using this a lot.
The "--add-host" Parameter
Long story short, you can just add "--add-host=some_dns_name:some_ip_address" to your docker command in order to make your container have any DNS name resolve to any IP address.
This works by having the container put an entry for this DNS/IP pair into the /etc/hosts file.
Use Outside of Docker
I haven’t used the /etc/hosts file in a while. But this reminded me about it. The article points out that you can either add a DNS mapping or even override an existing DNS mapping using this file.
So, for example, I could make google.com point at this website from within the given OS instance, if I updated that file properly.
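As a tiny illustration of the mechanism, the file is just whitespace-separated IP/name pairs, and a lookup takes the first matching line. This sketch uses a temp file rather than the real /etc/hosts, and a documentation-range IP standing in for "this website":

```shell
# Build a throwaway hosts-style file and resolve a name from it.
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n203.0.113.7\tgoogle.com\n' > "$hosts"
awk -v name=google.com '$2 == name { print $1; exit }' "$hosts"  # prints 203.0.113.7
rm -f "$hosts"
```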
Pretty cool and useful :).