I'm trying to deploy an app version to elastic beanstalk but my config file formatting is incorrect.
There are a lot of escaped quotes, so I don't think this is correct, but I'm not sure how to resolve it.
This is the line that's causing issues:
command: "sudo bash -c 'echo \"<img src=\'http://www.foo.com/img/custom_500.png\' alt=\'500\' style=\'left:50%;top:50%;position:fixed;margin-top:-235px;margin-left:-200px\'/>\"' > custom_50x.html"
Try without the opening and closing quotes, like this:
command: sudo bash -c 'echo \"<img src=\'http://www.foo.com/img/custom_500.png\' alt=\'500\' style=\'left:50%;top:50%;position:fixed;margin-top:-235px;margin-left:-200px\'/>\"' > custom_50x.html
A useful tool for quickly determining whether something is wrong is to run the file through an online YAML parser.
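An alternative sketch, if you want to keep writing the file from a single command: a YAML block scalar removes the YAML-level quoting entirely, so only the shell quoting is left. (Note that a backslash cannot escape a single quote inside a single-quoted bash string, which is why the sketch below puts the echo argument in double quotes instead.)
command: |
  sudo bash -c "echo \"<img src='http://www.foo.com/img/custom_500.png' alt='500' style='left:50%;top:50%;position:fixed;margin-top:-235px;margin-left:-200px'/>\" > custom_50x.html"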
So basically, I have this command that runs in Gitlab CI to update a field in a YAML configuration file before packaging and pushing a Helm chart.
yq -i -y ".pod.image.imageTag="${CI_COMMIT_SHORT_SHA}"" deployment/values.yaml
values.yaml
pod:
  image:
    repository: my.private.repo/my-project
    imageTag: 'latest'
  nodegroupName: "nessie-nodegroup"
But I keep getting this error.
jq: error: syntax error, unexpected IDENT, expecting $end (Unix shell quoting issues?)
.pod.image.imageTag=4c0118bf
The variable is actually read but it looks like I'm doing something wrong in the yq command.
Any ideas where that error is coming from? Using only single quotes obviously doesn't expand the environment variable; I already tried it.
Update:
Trying with:
yq -i -y '.pod.image.imageTag="${CI_COMMIT_SHORT_SHA}"' deployment/values.yaml
and
yq -i -y .pod.image.imageTag="${CI_COMMIT_SHORT_SHA}" deployment/values.yaml
didn't work either.
With the -y option I assume you are using the kislyuk/yq implementation.
Use jq's --arg option to introduce values from shell:
yq -i -y --arg tag "${CI_COMMIT_SHORT_SHA}" '.pod.image.imageTag=$tag' deployment/values.yaml
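Since kislyuk/yq hands the filter straight to jq, another option is jq's env object, which reads exported environment variables directly (CI_COMMIT_SHORT_SHA is exported in GitLab CI jobs), so no --arg is needed:
yq -i -y '.pod.image.imageTag = env.CI_COMMIT_SHORT_SHA' deployment/values.yaml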
Since the Q has been tagged jq, it might be worth mentioning that the Go implementation of jq supports YAML, so e.g.:
CI_COMMIT_SHORT_SHA=foo
gojq --yaml-input --yaml-output --arg tag "${CI_COMMIT_SHORT_SHA}" '
.pod.image.imageTag=$tag
' values.yaml
produces
pod:
  image:
    imageTag: foo
    repository: my.private.repo/my-project
  nodegroupName: nessie-nodegroup
Notice, though, that gojq sorts the keys.
I have a docker-compose.yml file which builds one of the nodes from a Dockerfile. That Dockerfile has a wget command in a RUN instruction which needs authentication. The problem is that the authentication step doesn't succeed. I used echo to check the command before execution and it was correct; it works fine in the shell, but I still get Username/Password Authentication Failed.
This is the command:
wget --user $USER --password $PASS $URL
Any idea what generates this?
EDIT 1:
I have no problem executing the above command with docker build -t myimagename:myimagetag .
I've solved it. The issue was the special characters in $PASS. I had to escape those characters with a \ when passing them to docker build through the shell. I had set $PASS in my .env file with the same escaping, but it turns out the values in the .env file are passed raw (unescaped) to the Dockerfile referenced in docker-compose.yml.
Removing the \ escaping in the .env file solved everything.
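To illustrate with a hypothetical value: the .env file that docker-compose reads takes the value literally, while a shell (for example when exporting the variable before a manual docker build) needs the quoting or escaping itself:
# .env read by docker-compose: store the value raw, no backslash escaping
PASS=p@ss!word
# shell, e.g. before a manual docker build: the shell handles the quoting/escaping
export PASS='p@ss!word'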
I am not sure if this is possible without creating my own base image, but I use environment variables in /etc/environment on our servers and typically make them accessible to apache by doing the following:
$ printf "HTTP_VAR1=var1-value\n\
HTTP_VAR2=var2-value"\
>> /etc/environment
$ mkdir /usr/lib/systemd/system/httpd.service.d
$ printf "[Service]\n\
EnvironmentFile=/etc/environment"\
> /usr/lib/systemd/system/httpd.service.d/environment.conf
$ systemctl daemon-reload
$ systemctl restart httpd
$ reboot
The variables are then available to PHP via calls like getenv('HTTP_VAR1'), etc. However, when running this from a Dockerfile I get dbus errors on the systemctl commands. Without the systemctl commands, the variables are not available to Apache, as the new EnvironmentFile directive doesn't take effect. My Dockerfile snippet:
FROM centos/httpd:latest
RUN printf "HTTP_VAR1=var1-value\n\
HTTP_VAR2=var2-value"\
>> /etc/environment
RUN mkdir /usr/lib/systemd/system/httpd.service.d &&\
printf "[Service]\n\
EnvironmentFile=/etc/environment"\
> /usr/lib/systemd/system/httpd.service.d/environment.conf
RUN systemctl daemon-reload &&\
systemctl restart httpd
COPY entrypoint.sh /entrypoint.sh
So I happened upon the answer to the issue today. It seems that systemd drops backslashes inside single quotes, but from what I saw in testing it may affect double quotes too. I found the systemd development mailing-list thread from April 2014 where patching the issue was being discussed. It seems the fix never made it in, so we have to work around it.
While attempting to work around it I noticed some issues with reading the variables at all. Sometimes either Apache or php-cli would get the correct variables, and sometimes not at all; it took a bit of sleuthing to figure out what was going on. Then I read up on systemd's EnvironmentFile directive to see if there was more to gain from the docs. It turns out it does not evaluate bash, so export won't work; it expects a plain text file with variable assignments, and herein lies one of the main issues that might keep this from being resolved.
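To make the problem concrete, using the same kind of value as in the files below: for httpd to end up seeing a single backslash, the EnvironmentFile has to contain a doubled one.
# value we actually want the service to see
HTTP_VAR1=var1-value-with-a-back\slash
# what must be written in the EnvironmentFile so systemd does not swallow the backslash
HTTP_VAR1=var1-value-with-a-back\\slash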
I then devised a workable solution. Utilizing systemd's ExecStartPre directive I am able to run a script on startup of the httpd service. I then read in the environment file and write a new plain text one that will then be used by httpd's systemd unit. Here is the code:
Firstly, I moved my variables to the /etc/profile.d/ directory rather than the /etc/environment file.
file: /etc/profile.d/environment.sh
This is where we store all our environment variables; it gets sourced on all interactive shell logins. In the rarer cases where we need these variables available non-interactively, we can either pass the --login flag to /bin/bash or source the file manually.
export HTTP_VAR1=var1-value-with-a-back\slash
export HTTP_VAR2=var2-value
file: /usr/lib/systemd/system/httpd.service.d/environment.conf
Our drop-in unit file to extend how the httpd service works. I add a script that runs before httpd starts up. This gets run on every httpd start and restart. The script generates a plain-text file at /etc/profile.d/environment.env, which we then tell systemd to load as an EnvironmentFile.
[Service]
ExecStartPre=/usr/bin/bash -c "/usr/local/bin/generate-plain-environment-file"
EnvironmentFile=/etc/profile.d/environment.env
file: /usr/local/bin/generate-plain-environment-file
Here is the script I am using. I whipped this together quickly, so it isn't very robust and could be better. It just removes the export from the beginning of each line and then escapes any backslashes, since systemd drops single backslashes. A more proper solution might be to use bash to evaluate each line and obtain the variable's value, in case the values themselves use other variables or bash syntax, and then output them as plain name=value assignments; however, that is not part of my use-case, so I didn't bother.
#!/bin/bash
cd /etc/profile.d/
rm -rf "./environment.env"
while IFS='' read -r line || [[ -n "$line" ]]; do
  echo $(echo "${line}" | sed 's/^export //' | sed 's/\\/\\\\/g') >> "./environment.env";
done < "./environment.sh"
file: /etc/profile.d/environment.env
This is the resulting file generated by the described script.
HTTP_VAR1=var1-value-with-a-back\\slash
HTTP_VAR2=var2-value
The conclusion is that I now have two files with the same thing in them, but I only need to maintain one; the other is generated each time we restart httpd. Also, we fix the backslash issue in the process. Hurray!
I am learning the shell language. I have created a shell script whose function is to log into the DB and run a .sql file. The following are the contents of the script:
#!/bin/bash
set -x
echo "Login to postgres user for autoqa_rpt_production"
$DB_PATH -U $POSTGRESS_USER $Auto_rpt_production$TARGET_DB -p $TARGET_PORT
echo "Running SQL Dump - auto_qa_db_sync"
\\i auto_qa_db_sync.sql
After running the above script, I get the following error
./autoqa_script.sh: 39: ./autoqa_script.sh: /i: not found
Following one article, I tried reversing the slash, but it didn't work.
I don't understand why this is happening, because when I manually run the sql file, it works properly. Can anyone help?
#!/bin/bash
set -x
echo "Login to postgres user for autoqa_rpt_production and run script"
$DB_PATH -U $POSTGRESS_USER $Auto_rpt_production$TARGET_DB -p $TARGET_PORT -f auto_qa_db_sync.sql
The lines you put in a shell script are (more or less, let's say so for now) equivalent to what you would type at the Bash prompt (the one ending with '$', or '#' if you're root). When you execute a script (a list of commands), each command runs after the previous one terminates.
What you wanted to do is run the client and issue a "\i auto_qa_db_sync.sql" command inside it.
What you did was to run the client, and after the client terminated, issue that command in Bash.
You should read about Bash pipelines - these are the way to run programs and feed text into them. Following your original idea for solving the problem, you'd write something like:
echo '\i auto_qa_db_sync.sql' | $DB_PATH -U $POSTGRESS_USER $Auto_rpt_production$TARGET_DB -p $TARGET_PORT
Hope that helps you understand.
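If you ever need to feed more than one command to the client this way, a here-document does the same job as the echo pipe and is easier to extend (a sketch using the same variables):
$DB_PATH -U $POSTGRESS_USER $Auto_rpt_production$TARGET_DB -p $TARGET_PORT <<'EOF'
\i auto_qa_db_sync.sql
EOF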
First steps in FreeBSD: trying to run my installation script. Fast help needed:
# ls
configure
# file configure
configure: Bourne-Again shell script text executable
# ./configure
./configure: Command not found
# configure
configure: Command not found
What is wrong, how can I execute this script?
Do you have bash installed? Use where bash to find out. If not, use FreeBSD Ports to install it.
Use the force Luke :)
# pkg_add -r bash
It may be that your configure script doesn't have the appropriate execution rights. Try:
chmod 777 configure
If it works, tighten it to
chmod 764 configure
configure scripts are ultra portable shell scripts. There is no need for bash here. The problem is somewhere else.
What's the first line in the configure script? Maybe a CR/LF snuck in, which is a common cause for a totally misleading error message saying that the script was not found, when it was the interpreter that was not found.
Please try /bin/sh ./configure
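A quick way to check for and strip CR/LF line endings, in case that turns out to be the cause (standard tools, nothing FreeBSD-specific):
head -1 configure | od -c    # a \r right before the \n means DOS/Windows line endings
tr -d '\r' < configure > configure.fixed && mv configure.fixed configure && chmod +x configure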
Install the bash package using
pkg install bash
or
make -C /usr/ports/shells/bash install clean
By default, FreeBSD comes with tcsh and a POSIX-compatible FreeBSD sh.
On older FreeBSD systems you will need to do
rehash
before you can run it.
The first line of this script (#!/usr/bin/bash, I suppose) should be changed to #!/usr/local/bin/bash.
And of course, you should have the shells/bash port installed.