How can I configure CloudWatch Agent using Packer?

I'm trying to configure the CloudWatch Agent using Packer. I was able to install the agent with Packer, but when I try to start the agent, the image build fails.
I know that the temporary EC2 instance Packer creates during the image build does not have the IAM role attached that would allow it to push metrics/logs to CloudWatch, and that this is why the agent fails to start.
# Install CloudWatch Agent
wget https://s3.amazonaws.com/amazoncloudwatch-agent/ubuntu/amd64/latest/amazon-cloudwatch-agent.deb
sudo dpkg -i -E ./amazon-cloudwatch-agent.deb
rm amazon-cloudwatch-agent.deb
But when I add the line below to start the agent, the image build fails:
sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -a fetch-config -m ec2 -c ssm:AmazonCloudWatch-linux -s
Is there any way I can achieve this? I need to be able to start the agent while setting up the image.
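One workaround (a sketch under assumptions, not a confirmed fix): skip the SSM fetch during the build, configure the agent from a config file you bake into the image, and only enable the service so it starts on first boot of instances launched from the AMI. The path /tmp/cloudwatch-agent.json is a placeholder:
# Translate config from a local file instead of SSM (needs no CloudWatch/SSM permissions);
# omitting -s means nothing tries to start during the build
sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -a fetch-config -m ec2 -c file:/tmp/cloudwatch-agent.json
# Enable, but don't start, the service so it comes up on instances launched from the AMI
sudo systemctl enable amazon-cloudwatch-agent
Alternatively, Packer's amazon-ebs builder accepts an iam_instance_profile option; giving the build instance a profile with the CloudWatchAgentServerPolicy managed policy should let the original ssm: fetch-config command succeed as-is.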

Related

How do I resolve the "Invalid ssh key entry" error when starting an app on GCE?

I'm trying to launch my app on Google Compute Engine, and I get the following error:
Sep 26 22:46:09 debian google_guest_agent[411]: ERROR non_windows_accounts.go:199 Invalid ssh key entry - unrecognized format: ssh-rsa AAAAB...
I'm having a hard time interpreting it. I have the following startup script:
# Talk to the metadata server to get the project id
PROJECTID=$(curl -s "http://metadata.google.internal/computeMetadata/v1/project/project-id" -H "Metadata-Flavor: Google")
REPOSITORY="github_sleepywakes_thunderroost"
# Install logging monitor. The monitor will automatically pick up logs sent to
# syslog.
curl -s "https://storage.googleapis.com/signals-agents/logging/google-fluentd-install.sh" | bash
service google-fluentd restart &
# Install dependencies from apt
apt-get update
apt-get install -yq ca-certificates git build-essential supervisor
# Install nodejs
mkdir /opt/nodejs
curl https://nodejs.org/dist/v16.15.0/node-v16.15.0-linux-x64.tar.gz | tar xvzf - -C /opt/nodejs --strip-components=1
ln -s /opt/nodejs/bin/node /usr/bin/node
ln -s /opt/nodejs/bin/npm /usr/bin/npm
# Get the application source code from the Google Cloud Repository.
# git requires $HOME and it's not set during the startup script.
export HOME=/root
git config --global credential.helper gcloud.sh
git clone https://source.developers.google.com/p/${PROJECTID}/r/${REPOSITORY} /opt/app/github_sleepywakes_thunderroost
# Install app dependencies
cd /opt/app/github_sleepywakes_thunderroost
npm install
# Create a nodeapp user. The application will run as this user.
useradd -m -d /home/nodeapp nodeapp
chown -R nodeapp:nodeapp /opt/app
# Configure supervisor to run the node app.
cat >/etc/supervisor/conf.d/node-app.conf << EOF
[program:nodeapp]
directory=/opt/app/github_sleepywakes_thunderroost
command=npm start
autostart=true
autorestart=true
user=nodeapp
environment=HOME="/home/nodeapp",USER="nodeapp",NODE_ENV="production"
stdout_logfile=syslog
stderr_logfile=syslog
EOF
supervisorctl reread
supervisorctl update
# Application should now be running under supervisor
My instance shows I have 2 public SSH keys. The second begins like the one in the error, but after about 12 characters it differs.
Any idea why this might be occurring?
Thanks in advance.
Once you deploy your VM instance, by default no SSH key is configured yet, but you can configure the SSH key when deploying the VM instance.
To elaborate on the answer of @JohnHanley, I tried to test this in my environment.
I created a VM instance and verified the SSH configuration. As a default configuration there is no SSH key configured; as I said earlier, you can configure an SSH key when deploying the VM.
I created an SSH key pair via the CLI; you can use this link for instruction details.
Navigate to your VM instance, then Turn off > EDIT > Security > Add Item > SSH key 1 - copy and paste the generated public key > Save > Power on the VM instance.
Then test whether the VM instance is accessible.
Documentation link: How to add SSH keys to project metadata.
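For reference, a sketch of the CLI route (the key file name, my-user, and my-vm are placeholders). Note that GCE expects each metadata entry in the form USERNAME:KEY; an entry in any other format produces exactly the "Invalid ssh key entry - unrecognized format" error from the question:
# Generate a key pair
ssh-keygen -t rsa -f ~/.ssh/gce-key -C my-user
# Build a metadata entry in the USERNAME:KEY format GCE expects
echo "my-user:$(cat ~/.ssh/gce-key.pub)" > keys.txt
# Attach it to the instance
gcloud compute instances add-metadata my-vm --metadata-from-file ssh-keys=keys.txt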

TF Serving - Docker from source or build from git?

Struggling to understand the workflow here for tf serving.
Official docs say to “docker pull tensorflow/serving”. But they also say to “git clone https://github.com/tensorflow/serving.git”
Which one should I use? I assume the git version is so I can build my own custom serving image?
When I pull the official image from docker and run the container, why can’t I access the root? Is it because I haven’t “built it” properly yet?
If you have added some custom code, then clone first and then build the image.
If you want to deploy the image directly, pull the image and run it.
BTW, what do you mean by "access the root"? AFAIK, root is the default user in a container.
I think that is a good observation.
The only place where I feel cloning the GitHub repository via "https://github.com/tensorflow/serving.git" is required is if you want to run the examples like 'half_plus_two' or 'half_plus_three', or the examples mentioned in this link:
https://github.com/tensorflow/serving/tree/master/tensorflow_serving/example.
Other than that, as far as I know, pulling the Docker image should do everything needed.
Even building a custom Docker image with our custom model doesn't require cloning the GitHub repo.
Code for building a custom Docker image is shown below:
# Build a custom image containing the model:
sudo docker run -d --name sb tensorflow/serving
sudo docker cp /usr/local/google/home/abc/Jupyter_Notebooks/Premade_Estimator_Export sb:/models/Premade_Estimator_Export
sudo docker commit --change "ENV MODEL_NAME Premade_Estimator_Export" sb iris_container
sudo docker kill sb
# Or pull the stock image and serve the model from the host with a bind mount:
sudo docker pull tensorflow/serving
sudo docker run -p 8501:8501 --mount type=bind,source=/usr/local/google/home/abc/Jupyter_Notebooks/TF_Serving/Premade_Estimator_Export,target=/models/Premade_Estimator_Export -e MODEL_NAME=Premade_Estimator_Export -t tensorflow/serving &
# Inspect the SavedModel and check the model's status:
saved_model_cli show --dir /usr/local/google/home/abc/Jupyter_Notebooks/Premade_Estimator_Export/1556272508 --all
curl http://localhost:8501/v1/models/Premade_Estimator_Export # get the status of the model
Regarding access to root: if I understand correctly, you don't want to prefix every docker command with sudo. Follow the steps below to run Docker commands without sudo.
i. Add the docker group if it does not already exist.
ii. Add the current user, $USER, to the docker group. Below are the commands to run in the terminal:
sudo groupadd docker
sudo usermod -aG docker $USER
iii. Reboot your PC and you should be able to execute Docker commands without sudo.
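If you'd rather not reboot, a hedged alternative: newgrp re-evaluates group membership for the current shell, after which a test container should run without sudo:
newgrp docker
docker run hello-world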

How to securely pass SSH keys to Docker build?

I want to create a Docker image for devs that reproduces our production servers. Those servers are configured with Ansible.
My idea is to run ansible-pull to apply all the configuration inside the container. The problem is that I need the SSH key to pull the playbook, but I don't want to ship the SSH key in the Docker image.
So, is there a way to have the SSH keys at build time without having them at run time?
Nice question. The simple way to do it is by removing the SSH keys after the Ansible stuff in the build - but because Docker stores images as layers, someone could still find the old layer with the keys in it.
If you build this Dockerfile:
FROM ubuntu
COPY ansible-ssh-key.rsa /key.rsa
RUN [ansible stuff]
RUN rm /key.rsa
The final image will have all your Ansible state and the SSH key will be gone, but someone could easily run docker history to look at all the image layers, start a container from an intermediate layer created before the key was deleted, and grab the key.
The trick would be to do something like this and then use Jason Wilder's docker-squash tool to squash the final image. In the squashed image the intermediate layer is gone and there's no way to get at the deleted key.
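For what it's worth, a sketch of a newer alternative, assuming Docker 18.09+ with BuildKit: a secret mount exposes the key only while a single RUN step executes, so it never lands in any layer and there is nothing to squash. The secret id ansible_key is a placeholder:
# syntax=docker/dockerfile:1
FROM ubuntu
# The key exists only for the duration of this RUN step; BuildKit mounts it
# at /run/secrets/<id> and never writes it to an image layer
RUN --mount=type=secret,id=ansible_key \
    cp /run/secrets/ansible_key /key.rsa && \
    [ansible stuff] && \
    rm /key.rsa
Build it with DOCKER_BUILDKIT=1 docker build --secret id=ansible_key,src=ansible-ssh-key.rsa .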
I'd set up some local file-serving facility available only in your build environment.
E.g. start lighttpd on your build host to serve your pem files only to local clients.
And in your Dockerfile, do the add/pull/cleanup in a single RUN:
RUN curl -sO http://build-host:8888/key.pem && ansible-pull -U myrepo && rm -rf key.pem
In this case it should be done in a single layer, so there should be no trace of key.pem left after layer commit.
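If you don't want to configure lighttpd just to experiment, Python's built-in HTTP server can play the same role (my substitution, not part of the original answer; 10.0.0.5 stands in for the build host's internal address):
# Serve the directory containing key.pem, bound to the build network only
python3 -m http.server 8888 --bind 10.0.0.5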
This is another solution using this repo, dockito/vault, a secret store to be used when building Docker images.
I created the dockito/vault service and an Ubuntu image, attached my private key to the volume, and ran it as a process using:
docker run -it -v ~/.ssh:/vault/.ssh ubuntu /bin/bash -c "echo mysupersecret > /vault/.ssh/key"
docker run -d -p 14242:3000 -v ~/.ssh:/vault/.ssh dockito/vault
And, here is my Dockerfile
FROM ubuntu:14.04
RUN apt-get update -y && \
    apt-get install -y curl && \
    curl -L $(ip route|awk '/default/{print $3}'):14242/ONVAULT > /usr/local/bin/ONVAULT && \
    chmod +x /usr/local/bin/ONVAULT
ENV REV_BREAK_CACHE=1
RUN ONVAULT echo ENV: && env && echo TOKEN ENV && echo $TOKEN
RUN ONVAULT ls -lha ~/.ssh/
RUN ONVAULT cat ~/.ssh/key
You can use Alpine Linux to reduce the final build size. Build the image as:
docker build -f Dockerfile -t mohan08p/VaultTest .
And you are done. You can inspect the image; secrets are not stored inside it, as it's empty:
docker run -it mohan08p/VaultTest ls /root/.ssh
This is a good technique for passing the .ssh keys at build time. The only disadvantage is that I need to keep the additional vault service running.
You could mount the SSH keys into the container at runtime instead.
docker run -v /path/to/ssh/key:/path/to/key/in/container image command

Reflecting code changes in docker containers

I have a basic hello world Node application written in Express. I have just dockerised this application by creating a basic Dockerfile in the application's root directory. I created a Docker image and then ran it in a container.
# Dockerfile
FROM node:0.10-onbuild
RUN npm install
EXPOSE 3000
CMD ["node", "./bin/www"]
sudo docker build -t docker-express .
sudo docker run --name test-container -d -p 80:3000 docker-express
I can access the web application. My question is: when I make code changes to my application, e.g. change 'hello world' to 'hello bob', the changes are not reflected in the running container.
What is a good development workflow for getting changes into the container? Surely I shouldn't have to delete and rebuild the image after each change?
Thank you :)
Check out the section on Sharing Volumes. You should be able to share your host volume with the docker container and then any time you need a change you can just restart the server (or have something restart it for you!).
Your command would look something like: sudo docker run -v /src/webapp:/webapp --name test-container -d -p 80:3000 docker-express
Which mounts /src/webapp (on the host) to /webapp (in the container).
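If you want something to restart the server for you, one sketch is nodemon (my suggestion, not part of the original answer; paths mirror the command above):
sudo docker run -v /src/webapp:/webapp -w /webapp --name test-container -d -p 80:3000 node:0.10 sh -c "npm install -g nodemon && nodemon ./bin/www"
nodemon watches the mounted source tree and restarts the Node process whenever a file changes.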

Amazon EC2: How to install GlassFish on EC2?

I'm trying to deploy my JSF site to EC2 instances; I'm new to cloud computing.
How do I install the open-source GlassFish 3 on my EC2 instance?
Update:
To download, use the curl command:
curl http://www.java.net/download/jdk6/6u27/promoted/b03/binaries/jdk-6u27-ea-bin-b03-linux-i586-27_may_2011-rpm.bin > java-rpm.bin
or using wget:
wget http://www.java.net/download/jdk6/6u27/promoted/b03/binaries/jdk-6u27-ea-bin-b03-linux-i586-27_may_2011-rpm.bin
Here is what you need to do:
Get an AMI instance launched. Follow this tutorial to install. (Unfortunately, GlassFish installation tutorials are given as YouTube videos on their official website!) The simplest option is to start with an existing EBS-backed instance; this is how I started.
Now, if you want to kill the instance, it's the same as throwing a machine out of the window. If you want to reuse it later, or want to make a blueprint for many instances you will launch in the future, you need to bundle it up and register it as an image.
If you have an EBS-backed instance, creating an image out of it is easier than sending an email. All you need to do is log in to your AWS web console, select the instance you want to create an AMI of, and choose Instance Actions > Create Image from the menu. Done!
If you have an instance-store-backed AMI, you need to bundle it up, store it in your S3 bucket, and register the AMI using ec2-api-tools and ec2-ami-tools. So have them installed in your instance and create the image as very neatly explained here.
Now, as far as cost is concerned, refer to this. As far as I understand (my clients pay, so I don't really know how much), your running instance is going to cost you money even if there is no activity. However, if you make an AMI and store it in S3 or an EBS volume, you will be paying for storage.
Hope this explains what you wanted.
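For what it's worth, the console step above also has a CLI equivalent today; a sketch assuming the modern aws CLI and a placeholder instance ID:
# Create an AMI from a running EBS-backed instance
aws ec2 create-image --instance-id i-0123456789abcdef0 --name "glassfish-base" --description "GlassFish base image"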
First you need to install the JDK and then set the environment variable JAVA_HOME.
Then follow the commands below (applicable on an Amazon Linux EC2 instance).
The directory used here is /usr/server:
# Download and unpack GlassFish 4.1.2
wget http://download.oracle.com/glassfish/4.1.2/release/glassfish-4.1.2.zip
unzip glassfish-4.1.2.zip
mv glassfish4 ../server/
# Create a dedicated group and user to run GlassFish
groupadd glassfish-group
useradd -s /bin/bash -g glassfish-group glassfish-user
cd /usr/server
chown -Rf glassfish-user:glassfish-group glassfish4
ls -l | grep glassfish
cd glassfish4
cd glassfish/domains # look at the default domains
cd ../bin # then the asadmin tooling
pwd
# Install an init script so GlassFish can be managed as a service
cd /etc/init.d/
wget https://geekstarts.info/scripts/glassfish.sh
mv glassfish.sh glassfish
chmod 755 glassfish
ls -l | grep glassfish
# Switch to the dedicated user created above before running asadmin
cd /usr/server/
su glassfish-user
whoami
pwd
cd glassfish4/bin
ls -l
whoami
./asadmin
change-master-password --savemasterpassword # default is changeit
change-admin-password # default is blank
start-domain
enable-secure-admin
restart-domain
stop-domain
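Once the domain is up, a quick sanity check against GlassFish's default ports (8080 for HTTP, 4848 for the admin console; -k because enable-secure-admin serves a self-signed certificate):
curl -I http://localhost:8080
curl -Ik https://localhost:4848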