running postgresql image with podman failed - archlinux

When running postgresql alpine image with podman :
podman run --name postgres -p 5432:5432 -e POSTGRES_PASSWORD=test -e POSTGRES_USER=test -d postgres:11-alpine
the result is :
Error: /usr/bin/slirp4netns failed: "open(\"/dev/net/tun\"): No such device\nWARNING: Support for sandboxing is experimental\nchild failed(1)\nWARNING: Support for sandboxing is experimental\n"
The running system is Arch Linux. Is there a way to fix this error, or a workaround?
Thanks

Is slirp4netns correctly installed? Check the project site for information.
Sometimes the flag order matters: try -d first and -p last (directly in front of the image name), like this:
podman run -d --name postgres -e POSTGRES_PASSWORD=test -e POSTGRES_USER=test -p 5432:5432 postgres:11-alpine
Try creating the container with only the necessary password, then log into it and create the user and database manually (this has always worked for me):
podman run -d --name postgres -e POSTGRES_PASSWORD=test -p 5432:5432 postgres:11-alpine
podman exec -it postgres bash
Switch to the default postgres user:
su - postgres
Start psql:
psql
Create your databases and users:
CREATE USER testuser WITH PASSWORD 'testpassword';
CREATE DATABASE testdata WITH OWNER testuser;
Check that it worked:
\l+
Connect to your database via IP and port.
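For the last step, a connection from the host might look like this (a sketch, assuming the published port 5432 and the testuser/testdata names created above):

```
psql -h 127.0.0.1 -p 5432 -U testuser -d testdata   # prompts for 'testpassword'
```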

I assume you upgraded Arch packages recently. After a kernel upgrade on Arch, the module files for the still-running kernel (including tun) are no longer on disk, so most likely your system needs a restart.
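A quick way to check whether the missing device is the actual problem (a sketch; modprobe requires root, and after a kernel upgrade it may fail until you reboot, because the module files for the running kernel are gone):

```shell
# Check for the tun device node that slirp4netns tries to open.
if [ -c /dev/net/tun ]; then
  echo "tun device present"
else
  echo "tun device missing"
  # Try loading the module without rebooting (may fail after a kernel upgrade):
  # sudo modprobe tun
fi
```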

Related

Why does MSSQL in Docker return "The last operation was terminated because the user pressed CTRL+C" on sql queries?

I'm on Arch Linux x86_64 (4.17.4-1-ARCH) with Docker (version 18.06.0-ce, build 0ffa8257ec), using Microsoft's MSSQL docker container CU7. Each time I try to enter a query or run a SQL file I get this warning message:
Sqlcmd: Warning: The last operation was terminated because the user pressed CTRL+C.
Then when I check the database with DataGrip, the query hasn't been executed. Here are my commands:
docker pull microsoft/mssql-server-linux:2017-CU7
docker run -e "ACCEPT_EULA=Y" -e "MSSQL_SA_PASSWORD=GitGood*0987654321" -e "MSSQL_PID=Developer" -p 1433:1433 --name beep_boop_boop -d microsoft/mssql-server-linux:2017-CU7
# THIS
sudo echo "CREATE DATABASE test;" > /test.sql
docker exec beep_boop_boop /opt/mssql-tools/bin/sqlcmd -U SA -P GitGood*0987654321 < test.sql
# OR
docker exec beep_boop_boop /opt/mssql-tools/bin/sqlcmd -U SA -P GitGood*0987654321 -Q "CREATE DATABASE test;"
My question is: how can I avoid the "operation was terminated because the user pressed CTRL+C" warning on MSSQL queries?
You should use docker-compose; I'm sure it will make your life easier. My guess is you're getting an error without actually knowing it. The first time I tried, I used an unsafe password which didn't meet the security requirements, and I got this error:
ERROR: Unable to set system administrator password: Password validation failed. The password does not meet SQL Server password policy requirements because it is not complex enough. The password must be at least 8 characters long and contain characters from three of the following four sets: Uppercase letters, Lowercase letters, Base 10 digits, and Symbols..
I see your password is strong, but note that it contains a *, which may be interpreted by the shell if not correctly quoted.
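For example, here is the difference quoting makes in the shell (a generic illustration, not specific to sqlcmd):

```shell
# Single quotes keep the * literal; without quotes the shell may glob-expand it.
PASS='GitGood*0987654321'
# Quote the variable on expansion as well, for the same reason.
echo "$PASS"
```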
Or the server is simply not started yet when you run your command. For example:
# example of a failing attempt
docker run -it --rm -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=GitGood*0987654321' -p 1433:1433 microsoft/mssql-server-linux:2017-CU7 bash
# wait until you're inside the container, then check if server is running
apt-get update && apt-get install -y nmap
nmap -Pn localhost -p 1433
If it's not running, you'll see something like that:
Starting Nmap 7.01 ( https://nmap.org ) at 2018-08-27 06:12 UTC
Nmap scan report for localhost (127.0.0.1)
Host is up (0.000083s latency).
Other addresses for localhost (not scanned): ::1
PORT STATE SERVICE
1433/tcp closed ms-sql-s
Nmap done: 1 IP address (1 host up) scanned in 0.38 seconds
Enough with the intro, here's a working solution:
docker-compose.yml
version: '2'
services:
  db:
    image: microsoft/mssql-server-linux:2017-CU7
    container_name: beep-boop-boop
    ports:
      - 1433:1433
    environment:
      ACCEPT_EULA: "Y"
      SA_PASSWORD: GitGood*0987654321
Then run the following commands and wait until the image is ready:
docker-compose up -d
docker-compose logs -f &
up -d daemonizes the container so it keeps running in the background.
logs -f reads the logs and follows them (similar to what tail -f does).
& runs the command in the background so we don't need a new shell.
Now get a bash running inside that container like this:
docker-compose exec db bash
Once inside the image, you can run your commands
/opt/mssql-tools/bin/sqlcmd -U SA -P $SA_PASSWORD -Q "CREATE DATABASE test;"
/opt/mssql-tools/bin/sqlcmd -U SA -P $SA_PASSWORD -Q "SELECT name FROM master.sys.databases"
Note how I reused the SA_PASSWORD environment variable here so I didn't need to retype the password.
Now enjoy the result
name
--------------------------------------------------------------------------------------------------------------------------------
master
tempdb
model
msdb
test
(5 rows affected)
For a proper setup, I recommend replacing the environment key with the following lines in docker-compose.yml:
env_file:
  - .env
This way, you can store your secrets outside of your docker-compose.yml and also make sure you don't track .env in your version control (add .env to your .gitignore and provide a documented .env.example in your repository).
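A minimal .env for this compose file might look like the following (same values as the example above; replace them with real secrets and keep the file out of version control):

```
ACCEPT_EULA=Y
SA_PASSWORD=GitGood*0987654321
```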
Here's an example project which confirms it works in Travis-CI:
https://github.com/GabLeRoux/mssql-docker-compose-example
Other improvements
There are probably ways to accomplish this with one-liners, but for readability it's often better to just use some scripts. In the repo, I took a few shortcuts, such as sleep 10 in run.sh. This could be improved by actually waiting until the db is up in a proper way. The initialization script could also be part of an entrypoint.sh, etc. Hope this gets you started 🍻
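As a sketch of that "proper way", the sleep 10 could be replaced by a small retry loop that polls until a readiness probe succeeds (the sqlcmd probe in the comment is an assumption based on the tools used earlier in this answer):

```shell
# Retry a command up to N times, one second apart; succeed as soon as it does.
wait_for() {
  tries=$1; shift
  until "$@"; do
    tries=$((tries - 1))
    [ "$tries" -le 0 ] && return 1
    sleep 1
  done
}

# Example probe (run inside the container):
# wait_for 30 /opt/mssql-tools/bin/sqlcmd -U SA -P "$SA_PASSWORD" -Q "SELECT 1"
```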

Postgres container does not see user

I have a problem: I'm trying to run a Postgres instance inside a docker container, to be used by a Java application. I run the following command:
docker run --name postgres -e POSTGRES_USER=root -e POSTGRES_PASSWORD=postgres -v postgres:/var/lib/postgresql/data -P -d postgres
The container seems to be created successfully. But when I try to access it to create a DB or table, I do:
docker exec -it postgres /bin/bash
If I run the following:
psql -u postgres -p
The following response is returned:
/usr/lib/postgresql/10/bin/psql: invalid option -- 'u'
And that's no good for my application. I have read on Docker Hub to use -e P_USER and P_Password to set it, but it doesn't work.
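For what it's worth, the error message in the question points at the flag itself: psql takes an uppercase -U for the user (lowercase -u is not a psql option, and -p is the port; the password-prompt flag is -W). So inside the container the invocation would likely be:

```
psql -U postgres -W   # -U selects the role, -W forces a password prompt
```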

Error in creation of postgresql from bash - Install & Import

I am trying to automate a Debian install with PostgreSQL, but I'm running into issues with my script. The import of schema.sql into db1 doesn't seem to work, and I'm not sure I even created the database correctly.
This is the code I am using:
# POSTGRES
apt-get install -y postgresql
echo "CREATE ROLE deploy LOGIN ENCRYPTED PASSWORD '$APP_DB_PASS';" | sudo -u postgres psql
su postgres -c "createdb db1 --owner deploy"
su postgres -c "createdb db2 --owner deploy"
service postgresql reload
# IMPORT SQL
psql --username=postgres spider < /etc/schema.sql
When I try to see if the database was created, I get the following errors, and the SQL import didn't seem to work:
root@li624-168:/etc/app# psql -U root spider
psql: FATAL: role "root" does not exist
root@li624-168:/etc/app# psql -U deploy spider
psql: FATAL: Peer authentication failed for user "deploy"
Can anyone tell me please where I have gone wrong?
Firstly, make sure you check result codes when executing commands. You can make your bash script abort by adding set -e at the top; if any single command fails, the script stops immediately.
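To illustrate, here is set -e stopping a script at the first failing command (a generic sketch, run in a subshell so it can be observed):

```shell
# Run a small script under `set -e` and capture what it manages to print.
out=$(bash -c 'set -e
echo "before failure"
false                 # non-zero exit status: set -e aborts the script here
echo "after failure"  # never reached
' || true)
echo "$out"           # prints only: before failure
```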
Secondly, take another look at the error message:
Peer authentication failed for user "deploy"
You're trying to log in as "deploy", and it recognizes the user name; however, your operating-system user is not called "deploy", so peer authentication fails. It looks like you want to log in using a password, so set up your pg_hba.conf file to allow that.
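For example, pg_hba.conf entries allowing password (md5) login for the deploy role might look like this (a sketch; on Debian/Ubuntu the file typically lives under /etc/postgresql/<version>/main/, and PostgreSQL must be reloaded afterwards):

```
# TYPE  DATABASE  USER    ADDRESS        METHOD
local   all       deploy                 md5
host    all       deploy  127.0.0.1/32   md5
```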
With peer authentication, Postgres maps your Linux user to a database role of the same name. So you need to create a Postgres role that has the same name as your Linux user, and then use that role to create your db. Example:
My linux account is razcor
sudo su postgres -c 'createuser -d -E -R -S razcor'
this creates a postgres user
sudo su razcor -c "createdb db1 --owner razcor"
this creates my db
result:
razcor@ubuntu:~$ psql -U razcor db1
psql (8.4.17)
Type "help" for help.
db1=>
In your case create a user named: root
@Richard Huxton: yes, I agree.

Automating install and creation of postgresql databases in shell

I am trying to create two databases called spider and geo under PostgreSQL via an automated shell script. This is the code so far:
apt-get install -y postgresql
echo "CREATE ROLE deploy LOGIN ENCRYPTED PASSWORD '$APP_DB_PASS';" | sudo -u postgres psql
su postgres -c "createdb spider --owner deploy"
su postgres -c "createdb geo --owner deploy"
/etc/init.d/postgresql reload
Can anyone please take a look and see if I am going about this the right way? Moreover, when I try to check whether it works by running the following command, I get an error:
root:~# psql -l
psql: FATAL: role "root" does not exist
Where have I gone wrong, and is there any way to improve this script?
Judging by the apt-get, your deployment platform is Ubuntu(-ish).
apt-get install -y postgresql
echo "CREATE ROLE deploy LOGIN ENCRYPTED PASSWORD '$APP_DB_PASS';" | sudo -u postgres psql
su postgres -c "createdb spider --owner deploy"
su postgres -c "createdb geo --owner deploy"
service postgresql reload
Then you should be able to log in by specifying a user on the command line:
psql -U root spider
or
psql -U deploy spider
Generally speaking, you're on the right track.

Adding postgresql database import command to creation

This is my shell command for creating a database. It is run as part of a deployment script to automatically create two databases without human intervention.
# POSTGRES
apt-get install -y postgresql
echo "CREATE ROLE deploy LOGIN ENCRYPTED PASSWORD '$APP_DB_PASS';" | sudo -u postgres psql
su postgres -c "createdb db1 --owner deploy"
su postgres -c "createdb db2 --owner deploy"
service postgresql reload
Within this code, could someone please explain how I can integrate importing a SQL file into PostgreSQL at this stage?
I believe it is something like this, but I haven't got that to work:
psql --username=postgres < /etc/schema.sql
On Debian, you probably need to follow Daniel's comment, because PostgreSQL is probably configured to require the OS user to have the same name as the db user.
So you need to
su postgres -c "psql -f /etc/schema.sql"
Alternatively, you could put all these db creation commands in a nice little shell script (without the su postgres -c part) and then run it all at once. Something like:
#!/bin/bash
echo "CREATE ROLE deploy LOGIN ENCRYPTED PASSWORD '$APP_DB_PASS';" | su postgres -c psql
createdb db1 --owner deploy
createdb db2 --owner deploy
psql db1 -f /etc/schema.sql
psql db2 -f /etc/schema.sql
Then you can run that with sudo or su -c and simplify the things that can go wrong, auth-wise.
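For instance, if the script above were saved as setup-db.sh (a hypothetical name), the whole thing could run as the postgres OS user, so peer authentication just works:

```
sudo -u postgres bash setup-db.sh
```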