Entrypoint script is not running from docker-compose file - sql

My goal is to create a SQL login for my apps before the other images run. Since my container uses Linux, the scripts are saved with LF line endings. The Docker output console shows no errors related to the script, only errors from my apps: they can't connect to the server because no such login exists.
The problem is that the shell script is not running and no login is being created. Thanks for your help in advance.
I was looking for examples on the web, and here is what I came up with:
docker-compose.yml
version: '3.4'

services:
  mssql:
    image: mcr.microsoft.com/mssql/server:2019-latest
    environment:
      SA_PASSWORD: "3qqimIuTQEGqVCD!"
      ACCEPT_EULA: "Y"
      LOGIN: "MyLogin"
      PASSWORD: "3qqimIuTQEGqVCD!"
    ports:
      - "1433:1433"
    volumes:
      - ./DockerScripts/SQL/CreateLogin.sql:/CreateLogin.sql
      - ./DockerScripts/Shell/Entrypoint.sh:/Entrypoint.sh
    entrypoint:
      - ./Entrypoint.sh
  webapi:
    image: ${DOCKER_REGISTRY-}webapi
    build:
      context: .
      dockerfile: Source/Code/Web/WebApi/Dockerfile
    depends_on:
      - mssql
  maintenance:
    image: ${DOCKER_REGISTRY-}maintenance
    build:
      context: .
      dockerfile: Source/Code/Web/Maintenance/Dockerfile
    depends_on:
      - mssql
DockerScripts\Shell\Entrypoint.sh
#!/bin/bash

# Start SQL server
/opt/mssql/bin/sqlservr

# Wait for the MSSQL server to start (up to 30 seconds)
export STATUS=1
i=0

while [[ $STATUS -ne 0 ]] && [[ $i -lt 30 ]]; do
  i=$((i + 1))
  sleep 1
  /opt/mssql-tools/bin/sqlcmd -t 1 -U sa -P "$SA_PASSWORD" -Q "select 1" > /dev/null
  STATUS=$?
done

if [ $STATUS -ne 0 ]; then
  echo "Error: MS SQL Server took more than 30 seconds to start up."
  exit 1
fi

echo "MS SQL Server started successfully."
echo "Setting up server login."
/opt/mssql-tools/bin/sqlcmd -U sa -P "$SA_PASSWORD" -S localhost -i CreateLogin.sql
DockerScripts\SQL\CreateLogin.sql
USE [master];
GO
CREATE LOGIN [$(LOGIN)] WITH PASSWORD=N'$(PASSWORD)', DEFAULT_DATABASE=[master], CHECK_EXPIRATION=OFF, CHECK_POLICY=OFF;
GO
ALTER SERVER ROLE [dbcreator] ADD MEMBER [$(LOGIN)];
GO
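For reference, sqlcmd fills in the $(LOGIN) and $(PASSWORD) placeholders from its scripting variables - either from matching environment variables (which is what I rely on here, since the compose file sets LOGIN and PASSWORD) or from explicit -v arguments. A hypothetical standalone invocation would look like:
/opt/mssql-tools/bin/sqlcmd -U sa -P "$SA_PASSWORD" -S localhost -i CreateLogin.sql -v LOGIN="MyLogin" PASSWORD="MyPassword"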
UPDATE
I removed a lot of stuff since it doesn't relate to the issue.
So for now the main problem persists: Entrypoint.sh is simply not being called on compose startup.

Okay, so finally I was able to solve this issue.
The problem was not that Entrypoint.sh was not called, but that all the commands after
# Start SQL server
/opt/mssql/bin/sqlservr
were just skipped. In hindsight the reason is simple: sqlservr runs in the foreground and never returns, so the script never gets past that line.
I kept searching the web more and more, and in the end I came up with this solution.
First of all, I separated the login creation logic into its own script file, which slightly improves readability:
DockerScripts/Shell/CreateLogin.sh
#!/bin/bash

echo "Creating MS SQL Login."

for i in {1..50}; do
  /opt/mssql-tools/bin/sqlcmd -U sa -P "$SA_PASSWORD" -S localhost -i CreateLogin.sql
  if [ $? -eq 0 ]; then
    echo "MS SQL Login created."
    break
  else
    echo "..."
    sleep 1
  fi
done
Secondly, I simplified my Entrypoint.sh file to just a few rows:
#!/bin/bash
/opt/mssql/bin/sqlservr | /opt/mssql/bin/permissions_check.sh | /Scripts/CreateLogin.sh
Let me explain what the commands above mean:
/opt/mssql/bin/sqlservr - this starts the server itself.
/opt/mssql/bin/permissions_check.sh - this is just the image's default entrypoint file.
/Scripts/CreateLogin.sh - and this runs the server login creation script.
Because the three commands are connected in a pipeline, they all start at the same time, so CreateLogin.sh can poll the server while sqlservr keeps running in the foreground.
And of course, I mounted the new shell script to a volume.
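For what it's worth, a common alternative to the pipeline trick (just a sketch, not the setup I actually tested) is to background the server process and wait on it:
#!/bin/bash
# Sketch: start the server in the background, run the login script,
# then wait on sqlservr so the container stays alive.
/opt/mssql/bin/sqlservr &
/Scripts/CreateLogin.sh
wait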
That's it - literally months of struggling with this issue, and it turned out to be very simple to solve.
Hope this helps somebody else. Thanks!

Related

Simplify running multiple SSH commands from a Gitlab CI runner to an external server

Currently I'm using the following (simplified) code in .gitlab-ci.yml to run multiple SSH commands:
stage: deploy
script:
  - ssh 1.2.3.4 "docker login -u foo -p bar example.com"
  - ssh 1.2.3.4 "docker pull my_image"
  - ssh 1.2.3.4 "docker run -d -p 80:80 my_image"
  - ssh 1.2.3.4 "and so on ..."
  - ssh 1.2.3.4 "exit"
It works, but is there a simpler way to do this, e.g. without specifying ssh 1.2.3.4 on every line?
You can use environment variables and mark them as protected in the GitLab project-level configuration.
In order to minimize duplication, use !reference like this:
.ssh-access:
  script:
    - ssh 1.2.3.4

script:
  - !reference [.ssh-access, script]
  - docker login
  - ...
But you can't add this on the very same line - or maybe you can; I haven't checked.
You can try multiline commands as well:
script:
  - |
    !reference [.ssh-access, script]
    docker login
    ...
For those who care, I figured something out which works for me. A simplified version of the above code is:
stage: deploy
script:
  - >
    ssh 1.2.3.4
    "
    docker login -u foo -p bar example.com;
    docker pull my_image;
    docker run -d -p 80:80 my_image;
    and so on ...;
    exit;
    "

Show remote command output in CI job results

I have a CI pipeline with stages like the ones below. As they show, most of the work here is done on a remote machine, which is working fine.
The only issue is that I am unable to see the command output. For example, scp is used with -v, which, if run manually on a machine, shows a lot of verbose information useful for debugging; the same goes for cp -v, but the job results show no such information.
So is there a way I can re-route the command output from the remote machine to the local GitLab job output?
my job 1/6:
  rules:
    - changes:
        - ${LOCA_FILE_PATH}
  stage: prepare
  allow_failure: true
  script: |
    ssh ${USER}@${HOST} '([ -f "${PATH}/test_conf_1.txt" ] && cp -v "${PATH}/test_conf_1.txt" ${PATH}/test_yaml_$CI_COMMIT_TIMESTAMP.txt)'
my job 2/6:
  rules:
    - changes:
        - ${LOCA_FILE_PATH}
  stage: scp
  script:
    - scp -v ${TF_ROOT}${LOCA_FILE_PATH} ${USER}@${HOST}:${PATH}/
Perhaps you can try something like this:
ssh user@host command 2>&1 | tee ssh-session.log
cat ssh-session.log
In the script part you can capture the result of your command in a variable and print it out. Note that scp -v writes its verbose messages to stderr, so merge them with 2>&1 or the command substitution won't capture them:
script:
  - RESULT=$(scp -v ${TF_ROOT}${LOCA_FILE_PATH} ${USER}@${HOST}:${PATH}/ 2>&1)
  - echo "$RESULT"
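A simpler variant (a sketch; the same variables are assumed to be defined elsewhere): skip the intermediate variable and merge stderr directly in the script line, optionally teeing it to a log file you can keep as an artifact:
scp -v ${TF_ROOT}${LOCA_FILE_PATH} ${USER}@${HOST}:${PATH}/ 2>&1 | tee scp-session.log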

Why does MSSQL in Docker return "The last operation was terminated because the user pressed CTRL+C" on sql queries?

I'm on Archlinux 64x (4.17.4-1-ARCH) with Docker (version 18.06.0-ce, build 0ffa8257ec). I'm using Microsoft's MSSQL Docker container CU7. Each time I try to enter a query or run a SQL file I get this warning message:
Sqlcmd: Warning: The last operation was terminated because the user pressed CTRL+C.
Then when I check the database with DataGrip, the query hasn't been executed! Here are my commands:
docker pull microsoft/mssql-server-linux:2017-CU7
docker run -e "ACCEPT_EULA=Y" -e "MSSQL_SA_PASSWORD=GitGood*0987654321" -e "MSSQL_PID=Developer" -p 1433:1433 --name beep_boop_boop -d microsoft/mssql-server-linux:2017-CU7
# THIS
sudo echo "CREATE DATABASE test;" > /test.sql
docker exec beep_boop_boop /opt/mssql-tools/bin/sqlcmd -U SA -P GitGood*0987654321 < test.sql
# OR
docker exec beep_boop_boop /opt/mssql-tools/bin/sqlcmd -U SA -P GitGood*0987654321 -Q "CREATE DATABASE test;"
My question is: how do I avoid the "operation was terminated by user" warning on MSSQL queries?
You should use docker-compose; I'm sure it will make your life easier. My guess is you're getting an error without actually knowing it. The first time I tried, I used an unsafe password which didn't meet the security requirements, and I got this error:
ERROR: Unable to set system administrator password: Password validation failed. The password does not meet SQL Server password policy requirements because it is not complex enough. The password must be at least 8 characters long and contain characters from three of the following four sets: Uppercase letters, Lowercase letters, Base 10 digits, and Symbols..
I see your password is strong, but note that you have a * in your password, which may be expanded by the shell if not correctly escaped.
Or the server may simply not be started yet when you run your commands. For example:
# example of a failing attempt
docker run -it --rm -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=GitGood*0987654321' -p 1433:1433 microsoft/mssql-server-linux:2017-CU7 bash
# wait until you're inside the container, then check if server is running
apt-get update && apt-get install -y nmap
nmap -Pn localhost -p 1433
If it's not running, you'll see something like this:
Starting Nmap 7.01 ( https://nmap.org ) at 2018-08-27 06:12 UTC
Nmap scan report for localhost (127.0.0.1)
Host is up (0.000083s latency).
Other addresses for localhost (not scanned): ::1
PORT STATE SERVICE
1433/tcp closed ms-sql-s
Nmap done: 1 IP address (1 host up) scanned in 0.38 seconds
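A quicker check from the host (a sketch using the container name from above): the SQL Server startup log normally prints a "ready for client connections" line once the server is accepting connections, so something like this should tell you whether it came up:
docker logs beep_boop_boop 2>&1 | grep -i "ready for client connections"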
Enough with the intro, here's a working solution:
docker-compose.yml
version: '2'

services:
  db:
    image: microsoft/mssql-server-linux:2017-CU7
    container_name: beep-boop-boop
    ports:
      - "1433:1433"
    environment:
      ACCEPT_EULA: "Y"
      SA_PASSWORD: "GitGood*0987654321"
Then run the following commands and wait until the image is ready:
docker-compose up -d
docker-compose logs -f &
up -d to daemonize the container so it keeps running in the background.
logs -f will read logs and follow (similar to what tail -f does).
& runs the command in the background so we don't need a new shell.
Now get a bash running inside that container like this:
docker-compose exec db bash
Once inside the container, you can run your commands:
/opt/mssql-tools/bin/sqlcmd -U SA -P $SA_PASSWORD -Q "CREATE DATABASE test;"
/opt/mssql-tools/bin/sqlcmd -U SA -P $SA_PASSWORD -Q "SELECT name FROM master.sys.databases"
Note how I reused the SA_PASSWORD environment variable here so I didn't need to retype the password.
Now enjoy the result
name
--------------------------------------------------------------------------------------------------------------------------------
master
tempdb
model
msdb
test
(5 rows affected)
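As a side note (a sketch, using the same db service name as above), you can also run the queries in one shot from the host without opening an interactive shell; the sh -c wrapper makes $SA_PASSWORD expand inside the container rather than on your host:
docker-compose exec db sh -c '/opt/mssql-tools/bin/sqlcmd -U SA -P "$SA_PASSWORD" -Q "SELECT name FROM master.sys.databases"'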
For a proper setup, I recommend replacing the environment key with the following lines in docker-compose.yml:
env_file:
  - .env
This way, you can store your secrets outside of your docker-compose.yml and also make sure you don't track .env in your version control (you should add .env to your .gitignore and provide a .env.example in your repository with proper documentation).
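A hypothetical .env matching the compose file above (values shown only for illustration):
# .env - keep this file out of version control
ACCEPT_EULA=Y
SA_PASSWORD=GitGood*0987654321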
Here's an example project which confirms it works in Travis-CI:
https://github.com/GabLeRoux/mssql-docker-compose-example
Other improvements
There are probably other ways to accomplish this with one-liners, but for readability it's often better to use small scripts. In the repo I took a few shortcuts, such as sleep 10 in run.sh. This could be improved by properly waiting until the db is up (see the sketch below). The initialization script could also be part of an entrypoint.sh, etc. Hope it gets you started 🍻
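For example, a minimal sketch of such a wait (assuming the db service and the tools path used above) could poll sqlcmd until it answers:
#!/bin/bash
# Poll until SQL Server answers a trivial query instead of sleeping blindly.
for i in {1..30}; do
  if docker-compose exec -T db sh -c '/opt/mssql-tools/bin/sqlcmd -U SA -P "$SA_PASSWORD" -Q "SELECT 1"' > /dev/null 2>&1; then
    echo "SQL Server is up."
    break
  fi
  echo "Still waiting for SQL Server..."
  sleep 1
done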

glassfish start script fails through crontab

I have created a script to check whether my glassfish server is running (installed on a FreeBSD system). If it isn't, the script attempts to kill the java process to ensure it's not hung, and then issues the asadmin start-domain command.
If this script runs from the command line, it is successful 100% of the time. When it is run from the crontab, every line runs except the asadmin start-domain line - it does not seem to execute, or at least does not complete, i.e. the server is not running after this script runs.
For anyone not familiar with glassfish or the asadmin utility used to start the server, it is my understanding that a forked process is used. Could this be causing a problem via cron?
Again, in all my tests today, the script runs to completion when run from the command line. Once it's executed through cron, it does not complete. What would be different when running this from the crontab?
Thanks in advance for any help - I'm pulling my hair out trying to make this work!
#!/bin/bash

JAVA_HOME=/usr/local/diablo-jdk1.6.0/; export JAVA_HOME
timevar=`date +%d-%m-%Y_%H.%M.%S`
process_name='java'
get_contents=`cat urls.txt`

for i in $get_contents
do
  echo "checking $i"
  statuscode=$(curl --connect-timeout 10 --write-out %{http_code} --silent --output /dev/null $i)
  case $statuscode in
    200)
      echo "$timevar $i $statuscode okay" >> /usr/home/user1/logfile.txt
      ;;
    *)
      echo "$timevar $i $statuscode bad" >> /usr/home/user1/logfile.txt
      echo "Status $statuscode found" | mail -s "Check of $i failed" some.address@gmail.com
      process_id=`ps acx | grep -i $process_name | awk '{print $1}'`
      if [ -z "$process_id" ]
      then
        echo "java wasn't found in the process list"
      else
        echo "Killing java, currently process $process_id"
        kill -9 $process_id
      fi
      /usr/home/user1/glassfish3/bin/asadmin start-domain domain1
      ;;
  esac
done
Also, just for completeness, here is the entry in the cron tab:
*/2 * * * * /usr/home/user1/server.check.sh >> /usr/home/user1/cron.log
Ok... found the answer to this on another site, but I thought I'd add it here for future reference.
The problem was the PATH!! Even though JAVA_HOME was set, java itself wasn't in the path for the cron daemon.
For a quick test to see what environment is available to your cron jobs, add this line:
*/2 * * * * env > /usr/home/user1/env.output
From what I can gather, the PATH initially available to cron is pretty minimal. Since java was in /usr/local/bin, I added that to the path right in the crontab and kaboom! It worked!
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
*/2 * * * * /usr/home/user1/server.check.sh >> /usr/home/user1/cron.log
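An alternative sketch (same paths as above): export PATH at the top of the script itself, so it no longer depends on the cron environment at all:
#!/bin/bash
# cron's default PATH is minimal, so make the script self-contained.
export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
export JAVA_HOME=/usr/local/diablo-jdk1.6.0/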

Bash Script to install PostgreSQL - Failing

I built this deployment script, which runs when my Debian 6.0 server is deployed. I have shown it here before (this is a Linode StackScript, in case anyone else is wondering):
#!/bin/bash
#
# Install PostgreSQL
#
# Copyright (c) 2010 Filip Wasilewski <en@ig.ma>.
#
# My ref: http://www.linode.com/?r=aadfce9845055011e00f0c6c9a5c01158c452deb

function postgresql_install {
    aptitude -y install postgresql postgresql-contrib postgresql-dev libpq-dev
}

function postgresql_create_user {
    # postgresql_create_user(username, password)
    if [ -z "$1" ]; then
        echo "postgresql_create_user() requires username as the first argument"
        return 1;
    fi
    if [ -z "$2" ]; then
        echo "postgresql_create_user() requires a password as the second argument"
        return 1;
    fi
    echo "CREATE ROLE $1 WITH LOGIN ENCRYPTED PASSWORD '$2';" | sudo -i -u postgres psql
}

function postgresql_create_database {
    # postgresql_create_database(dbname, owner)
    if [ -z "$1" ]; then
        echo "postgresql_create_database() requires database name as the first argument"
        return 1;
    fi
    if [ -z "$2" ]; then
        echo "postgresql_create_database() requires an owner username as the second argument"
        return 1;
    fi
    sudo -i -u postgres createdb --owner=$2 $1
}

postgresql_install
postgresql_create_user(username, password)
postgresql_create_database(dbname, username)
I deployed my server with this script, which was built on top of Filip's version, but when I try to see whether postgresql is running by typing pg_ctl, it says command not found.
Where have I gone wrong with this? Since it deploys when the server is provisioned, I am not able to see where it is going wrong.
As people say, it looks like you don't have the PostgreSQL bin directory on your PATH. In my experience on Ubuntu with PostgreSQL 9.1/9.2 installed from apt, the postgres user is created but your environment isn't properly set up, so pg_ctl, initdb, etc. are not on your PATH.
I can't see what PostgreSQL version you're using, but my 9.1 & 9.2 instances store the binaries in /usr/lib/postgresql/9.2/bin
Check that directory to see if it contains pg_ctl and the other binaries. If it does, try running:
export PATH=$PATH:/usr/lib/postgresql/9.2/bin
and see if that allows you to run pg_ctl. If so, you'll need to execute that at login from .bashrc or similar.
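For example (a sketch; the path assumes PostgreSQL 9.2 as above), appending it to the shell profile persists the change across logins:
echo 'export PATH=$PATH:/usr/lib/postgresql/9.2/bin' >> ~/.bashrc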