Why does MSSQL in Docker return "The last operation was terminated because the user pressed CTRL+C" on SQL queries?

I'm on Archlinux 64x (4.17.4-1-ARCH) with Docker (version 18.06.0-ce, build 0ffa8257ec). I'm using Microsoft's MSSQL docker container CU7. Each time I try to run a query or a SQL file I get this warning message:
Sqlcmd: Warning: The last operation was terminated because the user pressed CTRL+C.
Then when I check in the database with Datagrip, the query hasn't been executed! Here are my commands:
docker pull microsoft/mssql-server-linux:2017-CU7
docker run -e "ACCEPT_EULA=Y" -e "MSSQL_SA_PASSWORD=GitGood*0987654321" -e "MSSQL_PID=Developer" -p 1433:1433 --name beep_boop_boop -d microsoft/mssql-server-linux:2017-CU7
# THIS
sudo echo "CREATE DATABASE test;" > /test.sql
docker exec beep_boop_boop /opt/mssql-tools/bin/sqlcmd -U SA -P GitGood*0987654321 < test.sql
# OR
docker exec beep_boop_boop /opt/mssql-tools/bin/sqlcmd -U SA -P GitGood*0987654321 -Q "CREATE DATABASE test;"
My question is: how do I avoid the "operation was terminated because the user pressed CTRL+C" warning on MSSQL queries?

You should use docker-compose; I'm sure it will make your life easier. My guess is you're getting an error without actually knowing it. The first time I tried, I used an unsafe password which didn't meet security requirements, and I got this error:
ERROR: Unable to set system administrator password: Password validation failed. The password does not meet SQL Server password policy requirements because it is not complex enough. The password must be at least 8 characters long and contain characters from three of the following four sets: Uppercase letters, Lowercase letters, Base 10 digits, and Symbols..
I see your password is strong, but note that it contains a *, which may be expanded by the shell if not correctly quoted.
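For example, single-quoting the password in the sqlcmd call from the question (a minimal sketch of the same command) keeps the shell from touching the *:
docker exec beep_boop_boop /opt/mssql-tools/bin/sqlcmd -U SA -P 'GitGood*0987654321' -Q "CREATE DATABASE test;"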
Or the server is just not started yet when you run your command, for example:
# example of a failing attempt
docker run -it --rm -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=GitGood*0987654321' -p 1433:1433 microsoft/mssql-server-linux:2017-CU7 bash
# wait until you're inside the container, then check if server is running
apt-get update && apt-get install -y nmap
nmap -Pn localhost -p 1433
If it's not running, you'll see something like this:
Starting Nmap 7.01 ( https://nmap.org ) at 2018-08-27 06:12 UTC
Nmap scan report for localhost (127.0.0.1)
Host is up (0.000083s latency).
Other addresses for localhost (not scanned): ::1
PORT STATE SERVICE
1433/tcp closed ms-sql-s
Nmap done: 1 IP address (1 host up) scanned in 0.38 seconds
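If the server is running, the same scan should instead report the port as open, roughly like this:
PORT     STATE SERVICE
1433/tcp open  ms-sql-s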
Enough with the intro, here's a working solution:
docker-compose.yml
version: '2'
services:
  db:
    image: microsoft/mssql-server-linux:2017-CU7
    container_name: beep-boop-boop
    ports:
      - "1433:1433"
    environment:
      ACCEPT_EULA: "Y"
      SA_PASSWORD: GitGood*0987654321
Then run the following commands and wait until the server is ready:
docker-compose up -d
docker-compose logs -f &
up -d daemonizes the container so it keeps running in the background.
logs -f will read logs and follow (similar to what tail -f does)
& to run the command in the background so we don't need to use a new shell
Now get a bash running inside that container like this:
docker-compose exec db bash
Once inside the container, you can run your commands:
/opt/mssql-tools/bin/sqlcmd -U SA -P $SA_PASSWORD -Q "CREATE DATABASE test;"
/opt/mssql-tools/bin/sqlcmd -U SA -P $SA_PASSWORD -Q "SELECT name FROM master.sys.databases"
Note how I reused the SA_PASSWORD environment variable here so I didn't need to retype the password.
Now enjoy the result
name
--------------------------------------------------------------------------------------------------------------------------------
master
tempdb
model
msdb
test
(5 rows affected)
For a proper setup, I recommend replacing the environment key with the following lines in docker-compose.yml:
env_file:
  - .env
This way, you can store your secrets outside of your docker-compose.yml and also make sure you don't track .env in your version control (you should add .env to your .gitignore and provide a .env.example in your repository with proper documentation).
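As a minimal sketch, the .env file would just contain the same variables as the environment key above (treat the values as placeholders):
ACCEPT_EULA=Y
SA_PASSWORD=GitGood*0987654321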
Here's an example project which confirms it works in Travis-CI:
https://github.com/GabLeRoux/mssql-docker-compose-example
Other improvements
There are probably other ways to accomplish this with one-liners, but for readability, it's often better to just use some scripts. In the repo, I took a few shortcuts such as sleep 10 in run.sh. This could be improved by properly waiting until the db is up instead of sleeping a fixed time. The initialization script could also be part of an entrypoint.sh, etc. Hope it gets you started 🍻
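For instance, a rough sketch of such a wait loop (assuming sqlcmd is at the usual path and SA_PASSWORD is set, as in the container above):
# poll until SQL Server answers a trivial query, up to ~60 seconds
for i in $(seq 1 30); do
  if /opt/mssql-tools/bin/sqlcmd -U SA -P "$SA_PASSWORD" -Q "SELECT 1" >/dev/null 2>&1; then
    echo "SQL Server is up"
    break
  fi
  sleep 2
done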

Related

running postgresql image with podman failed

When running postgresql alpine image with podman :
podman run --name postgres -p 5432:5432 -e POSTGRES_PASSWORD=test -e POSTGRES_USER=test -d postgres:11-alpine
the result is :
Error: /usr/bin/slirp4netns failed: "open(\"/dev/net/tun\"): No such device\nWARNING: Support for sandboxing is experimental\nchild failed(1)\nWARNING: Support for sandboxing is experimental\n"
The running system is archlinux. Is there a way to fix this error, or a workaround?
Thanks
Is slirp4netns correctly installed? Check the project site for more information.
Sometimes the flag order matters. Try -d first and -p last (directly in front of the image name), like this:
podman run -d --name postgres -e POSTGRES_PASSWORD=test -e POSTGRES_USER=test -p 5432:5432 postgres:11-alpine
Try setting only the necessary password, then log into your container and create the rest manually (this always worked for me):
podman run -d --name postgres -e POSTGRES_PASSWORD=test -p 5432:5432 postgres:11-alpine
podman exec -it postgres bash
Switch to the default user postgres:
su - postgres
Start psql:
psql
Create your user and database:
CREATE USER testuser WITH PASSWORD 'testpassword';
CREATE DATABASE testdata WITH OWNER testuser;
Check if it worked
\l+
Connect to your database via IP and port.
I assume you upgraded Arch packages recently. Most likely your system needs a restart.
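If you want to confirm that before rebooting, a quick check (assuming the error really means the tun device/module is currently unavailable on the running kernel):
ls -l /dev/net/tun
lsmod | grep tun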

How can I import a SQL Server RDS backup into a SQL Server Linux Docker instance?

I've followed the directions from the AWS documentation on importing / exporting a database from RDS using their stored procedures.
The command was similar to:
exec msdb.dbo.rds_backup_database
    @source_db_name='MyDatabase',
    @s3_arn_to_backup_to='my-bucket/myBackup.bak'
This part works fine, and I've done it plenty of times in the past.
However, what I want to achieve now is restoring this database to a local SQL Server instance, and I'm struggling at this point. I'm assuming this isn't a "normal" SQL Server dump, but I'm unsure what the difference is.
I've spun up a new SQL Server for Linux Docker instance, which seems all set. I have made a few changes so that the sqlcmd tool is installed; so technically the image I'm running is built from this Dockerfile, not much different.
FROM microsoft/mssql-server-linux:2017-latest
RUN apt-get update && \
    apt-get install -y curl && \
    curl https://packages.microsoft.com/keys/microsoft.asc | apt-key add - && \
    apt-get update && \
    apt-get install -y mssql-tools unixodbc-dev
This image works fine; I'm building it via docker build -t sql . and running it via docker run -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=myPassword1!' -p 1433:1433 -v $(pwd):/backups sql
Within my local folder, I have my backup from RDS downloaded, so this file is now in /backups/myBackup.bak
I now try to run sqlcmd to import the data with the following command, and I'm running into an issue which makes me assume this isn't a traditional SQL dump. I'm not sure what a traditional SQL dump looks like, but the majority of the file looks garbled with ^#^#^#^# and other things.
/opt/mssql-tools/bin/sqlcmd -S localhost -i /backups/myBackup.bak -U sa -P myPassword1! -x
And finally; I get this error:
Sqlcmd: Error: Syntax error at line 56048 near command 'GO' in file '/backups/myBackup.bak'.
Final Answer
My final solution for this mainly came from using -Q and running a RESTORE query rather than importing the file, but I also needed to include some MOVE options since they were pointing at Windows file paths.
/opt/mssql-tools/bin/sqlcmd -U SA -P myPassword -Q "RESTORE DATABASE MyDatabase FROM DISK = N'/path/to/my/file.bak' WITH MOVE 'mydatabase' TO '/var/opt/mssql/mydatabase.mdf', MOVE 'mydatabase_log' TO '/var/opt/mssql/mydatabase.ldf', REPLACE"
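If you don't know the logical file names to put in the MOVE clauses, you can list them from the backup first with standard T-SQL (the path below is the example one from above):
/opt/mssql-tools/bin/sqlcmd -U SA -P myPassword -Q "RESTORE FILELISTONLY FROM DISK = N'/path/to/my/file.bak'"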
You should use the RESTORE DATABASE command to interact with your backup file instead of specifying it as an input file of commands to the database:
/opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P myPassword1! -Q "RESTORE DATABASE MyDatabase FROM DISK='/backups/myBackup.bak'"
According to the sqlcmd Docs, the -i flag you used specifies:
The file that contains a batch of SQL statements or stored procedures.
That flag likely won't work properly if given a database backup file as an argument.

Nagios - NRPE: Command '...' not defined

In /usr/local/nagios/etc/nrpe.cfg I added a new command check_this_process to the already pre-defined ones:
command[check_users]=/usr/local/nagios/libexec/check_users -w 5 -c 10
command[check_load]=/usr/local/nagios/libexec/check_load -w 15,10,5 -c 30,25,20
command[check_hda1]=/usr/local/nagios/libexec/check_disk -w 20% -c 10% -p /dev/$
command[check_zombie_procs]=/usr/local/nagios/libexec/check_procs -w 5 -c 10 -s$
command[check_total_procs]=/usr/local/nagios/libexec/check_procs -w 150 -c 200
command[check_this_process]=/usr/local/nagios/libexec/check_procs -w 15 -c 20 -C name
This works:
define service{
    use                 generic-service
    host_name           my_host
    service_description CPU Load
    check_command       check_nrpe!check_load
}
This doesn't:
define service{
    use                 local-service
    host_name           my_host
    service_description cron
    check_command       check_nrpe!check_this_process
}
and returns: NRPE: Command 'check_this_process' not defined
The terminology used in the supplied docs is a little confusing, but I'll put it like this:
As written in Page 10 of https://assets.nagios.com/downloads/nagioscore/docs/nrpe/NRPE.pdf, you need to modify /usr/local/nagios/etc/commands.cfg on your Nagios server and add the following to define the check_nrpe command:
define command{
    command_name    check_nrpe
    command_line    $USER1$/check_nrpe -H $HOSTADDRESS$ -c $ARG1$
}
On your Nagios server, define your service definition as you've already done:
define service{
    use                 local-service
    host_name           my_host
    service_description cron
    check_command       check_nrpe!check_this_process
}
On your remote host to be monitored, the following is going to be different depending on whether you installed NRPE:
using the tarball and xinetd as in
https://assets.nagios.com/downloads/nagioscore/docs/nrpe/NRPE.pdf
or using a package manager like yum as in
http://sharadchhetri.com/2013/03/02/how-to-install-and-configure-nagios-nrpe-in-centos-and-red-hat/
If you used the tarball / xinetd method, your NRPE configuration file will likely be located at /usr/local/nagios/etc/nrpe.cfg on your remote-host-to-be-monitored. (To avoid typing that all the time, I'll just call it "my_host").
So, on my_host, modify /usr/local/nagios/etc/nrpe.cfg.
Add
command[check_this_process]=/usr/local/nagios/libexec/check_procs -w 15 -c 20 -C name
So that it looks like:
command[check_users]=/usr/local/nagios/libexec/check_users -w 5 -c 10
command[check_load]=/usr/local/nagios/libexec/check_load -w 15,10,5 -c 30,25,20
command[check_hda1]=/usr/local/nagios/libexec/check_disk -w 20% -c 10% -p /dev/$
command[check_zombie_procs]=/usr/local/nagios/libexec/check_procs -w 5 -c 10 -s$
command[check_total_procs]=/usr/local/nagios/libexec/check_procs -w 150 -c 200
command[check_this_process]=/usr/local/nagios/libexec/check_procs -w 15 -c 20 -C name
(Note: the above is assuming you have a process called name. If not, replace name with your real process name, e.g. crond.)
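For example, to watch crond the entry would look like this:
command[check_this_process]=/usr/local/nagios/libexec/check_procs -w 15 -c 20 -C crond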
Restart xinetd:
service xinetd restart
(NOTE: restarting xinetd might not be necessary, but I don't use it so I'm a little fuzzy on this one.)
However, if you installed NRPE on my_host using a package manager like yum, your NRPE configuration file will likely be located at /etc/nagios/nrpe.cfg.
So, on my_host, modify /etc/nagios/nrpe.cfg.
Add
command[check_this_process]=/usr/local/nagios/libexec/check_procs -w 15 -c 20 -C name
So that it looks like:
command[check_users]=/usr/local/nagios/libexec/check_users -w 5 -c 10
command[check_load]=/usr/local/nagios/libexec/check_load -w 15,10,5 -c 30,25,20
command[check_hda1]=/usr/local/nagios/libexec/check_disk -w 20% -c 10% -p /dev/$
command[check_zombie_procs]=/usr/local/nagios/libexec/check_procs -w 5 -c 10 -s$
command[check_total_procs]=/usr/local/nagios/libexec/check_procs -w 150 -c 200
command[check_this_process]=/usr/local/nagios/libexec/check_procs -w 15 -c 20 -C name
Restart the nrpe service:
service nrpe restart
Back on your Nagios server, run a verification of your Nagios configuration settings:
nagios -v /usr/local/nagios/etc/nagios.cfg
Check the output for errors.
If there are no errors, restart Nagios:
service nagios restart
On your Nagios server you should have a check_nrpe utility installed somewhere as a result of installing the "check_nrpe plugin" on your Nagios server.
See pages 9 and 10 of: https://assets.nagios.com/downloads/nagioscore/docs/nrpe/NRPE.pdf
This check_nrpe utility will most likely be located at: /usr/local/nagios/libexec/check_nrpe
Using the host information for my_host, manually test your NRPE connection from the Nagios server.
Execute the following:
/usr/local/nagios/libexec/check_nrpe -H <IP Address of my_host> -c check_this_process
If everything is setup correctly, you should get some output on the command line.
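The output should look roughly like the following (exact counts depend on your system); anything other than an NRPE error means the plumbing works:
PROCS OK: 1 process with command name 'name'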
My troubleshooting guide for 'NRPE: Command ... not defined', ordered from most common to least common in my environment.
Was the NRPE daemon restarted AFTER adding the new command? If it is a new command, then NRPE MUST be restarted.
Typos/spelling errors. Does the configured command name on the Nagios side, match that the one in the NRPE config?
Permissions issues. Does the USER that NRPE runs as have READ and EXECUTE access to the actual command being run? Did you test-run the command as the NRPE user, on that same system (see the example after this list)? TIP: Use the dash (-) when changing to the NRPE user on Linux (su - ...) so you import that user's environment as well.
Path issues. Was the FULL PATH to the actual command put into the NRPE config file? Doing this will normally eliminate issues with PATHs, so don't do it any other way.
Bad commands. Does the actual command really execute? Or is it simply throwing an error and exiting? Do you have the correct version of (INSERT SOMETHING HERE) to run the command, installed on the remote system? You should be able to run any command defined in the nrpe.cfg from the command line, and all new commands should be checked BEFORE being added to the nrpe.cfg.
IF ALL THE ABOVE FAILS: Enable DEBUGGING in NRPE and check the log files (on the remote host). This is a bit of a drawn out process - described in the documentation - read it. It is important to disable DEBUGGING as soon as you get output that looks like it would be useful.
This checklist ASSUMES that you've done the necessary things to the various Nagios and NRPE configs to get it working in the first place. Hopefully others will read this before posting yet another question as to why they are seeing this error.
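As a quick illustration of the permissions check above (assuming NRPE runs as the user nagios; adjust to your setup):
su - nagios -c "/usr/local/nagios/libexec/check_procs -w 15 -c 20 -C name"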

ssh client (dropbear on a router) does no output when put in background

I'm trying to automate some things on remote Linux machines with bash scripting on a Linux machine and have a working command (the braces are a relic from command concatenations):
(ssh -i /path/to/private_key user@remoteHost 'sh -c "echo 1; echo 2; echo 3; uname -a"')
But if an ampersand is concatenated to execute it in background, it seems to execute, but no output is printed, neither on stdout, nor on stderr, and even a redirection to a file (inside the braces) does not work...:
(ssh -i /path/to/private_key user@remoteHost 'sh -c "echo 1; echo 2; echo 3; uname -a"') &
By the way, I'm running the ssh client dropbear v0.52 in BusyBox v1.17.4 on Linux 2.4.37.10 (TomatoUSB build on a WRT54G).
Is there a way to get the output either? What's the reason for this behaviour?
EDIT:
For convenience, here's the plain ssh help output (on my TomatoUSB):
Dropbear client v0.52
Usage: ssh [options] [user@]host[/port][,[user@]host/port],...] [command]
Options are:
-p <remoteport>
-l <username>
-t Allocate a pty
-T Don't allocate a pty
-N Don't run a remote command
-f Run in background after auth
-y Always accept remote host key if unknown
-s Request a subsystem (use for sftp)
-i <identityfile> (multiple allowed)
-L <listenport:remotehost:remoteport> Local port forwarding
-g Allow remote hosts to connect to forwarded ports
-R <listenport:remotehost:remoteport> Remote port forwarding
-W <receive_window_buffer> (default 12288, larger may be faster, max 1MB)
-K <keepalive> (0 is never, default 0)
-I <idle_timeout> (0 is never, default 0)
-B <endhost:endport> Netcat-alike forwarding
-J <proxy_program> Use program pipe rather than TCP connection
Amendment after 1 day:
The braces do not hurt; with and without them it's the same result. I wanted to put the ssh authentication into the background, so the -f option is not a solution. Interesting side note: if an unexpected option is specified (like -v), the error message WARNING: Ignoring unknown argument '-v' is displayed - even when put in background, so getting output from background processes generally works in my environment.
I tried the regular ssh client on x86 Ubuntu: it works. I also tried dbclient on x86 Ubuntu: works, too. So this problem seems to be specific to the TomatoUSB build - or some unnoticed fix made it into the "dropbear v0.52" Ubuntu provides that isn't in the TomatoUSB build (the only difference in the help output is the doubled default receive window buffer on Ubuntu)... how can a process know whether it was put in the background? Is there a solution to the problem?
I had a similar problem on my OpenWRT router. The Dropbear SSH client does not write anything to output if there is no stdin, e.g. when run by cron. I presume that & has the same effect on the process's stdin (no input).
I found a workaround on the author's bugtracker. Try to redirect input from /dev/zero.
Like:
ssh -i yourkey user@remotehost "echo 123" </dev/zero &
It worked for me as I tried to describe at my blog page.
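If the missing stdin really is the culprit, the file redirection from the question should also start working once stdin is redirected, for example (a sketch using the question's own command):
(ssh -i /path/to/private_key user@remoteHost 'sh -c "echo 1; echo 2; echo 3; uname -a"' </dev/zero >/tmp/ssh_output.log 2>&1) &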

How to shorten an inittab process entry, a.k.a., where to put environment variables that will be seen by init?

I am setting up a Debian Etch server to host ruby and php applications with nginx. I have successfully configured inittab to start the php-cgi process on boot with the respawn action. After serving 1000 requests, the php-cgi worker processes die and are respawned by init. The inittab record looks like this:
50:23:respawn:/usr/local/bin/spawn-fcgi -n -a 127.0.0.1 -p 8000 -C 3 -u someuser -- /usr/bin/php-cgi
I initially wrote the process entry (everything after the 3rd colon) in a separate script (simply because it was long) and put that script name in the inittab record, but because the script would run its single line and die, the syslog was filled with errors like this:
May 7 20:20:50 sb init: Id "50" respawning too fast: disabled for 5 minutes
Thus, I got rid of the script file and just put the whole line in the inittab. Henceforth, no errors show up in the syslog.
Now I'm attempting the same with thin to serve a rails application. I can successfully start the thin server by running this command:
sudo thin -a 127.0.0.1 -e production -l /var/log/thin/thin.log -P /var/run/thin/thin.pid -c /path/to/rails/app -p 8010 -u someuser -g somegroup -s 2 -d start
It works apparently exactly the same whether I use the -d (daemonize) flag or not. Command line control comes immediately back (the processes have been daemonized) either way. If I put that whole command (minus the sudo and with absolute paths) into inittab, init complains (in syslog) that the process entry is too long, so I put the options into an exported environment variable in /etc/profile. Now I can successfully start the server with:
sudo thin $THIN_OPTIONS start
But when I put this in an inittab record with the respawn action
51:23:respawn:/usr/local/bin/thin $THIN_OPTIONS start
the logs clearly indicate that the environment variable is not visible to init; it's as though the command were simply "thin start."
How can I shorten the inittab process entry? Is there another file than /etc/profile where I could set the THIN_OPTIONS environment variable? My earlier experience with php-cgi tells me I can't just put the whole command in a separate script.
Why don't you call a wrapper script that starts thin with your options?
start_thin.sh:
#!/bin/bash
/usr/local/bin/thin -a 127.0.0.1 -e production -l /var/log/thin/thin.log -P /var/run/thin/thin.pid -c /path/to/rails/app -p 8010 -u someuser -g somegroup -s 2 -d start
and then:
51:23:respawn:/usr/local/bin/start_thin.sh
init.d script
Use a script in /etc/rc.d/init.d and set the runlevel.
Here are some examples with thin, ruby, apache
http://articles.slicehost.com/2009/4/17/centos-apache-rails-and-thin
http://blog.fiveruns.com/2008/9/24/rails-automation-at-slicehost
http://elwoodicious.com/2008/07/15/nginx-haproxy-thin-fastcgi-php5-load-balanced-rails-with-php-support/
These provide example initscripts to use.
edit:
Asker pointed out this will not allow respawning. I suggested forking in the init script and disowning the process so init doesn't hang (it might fork() the script itself, will check). And then creating an infinite loop that waits on the server process to die and restarts it.
edit2:
It seems init will fork the script. Just a loop should do it.
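A rough sketch of what such a looping wrapper could look like (options copied from the question; dropping -d so thin stays in the foreground is my assumption about how the loop is meant to work):
#!/bin/bash
# restart thin whenever it exits; keep it in the foreground so the loop blocks on it
while true; do
    /usr/local/bin/thin -a 127.0.0.1 -e production -l /var/log/thin/thin.log -P /var/run/thin/thin.pid -c /path/to/rails/app -p 8010 -u someuser -g somegroup -s 2 start
    sleep 1
done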