Postgres container does not see user - sql

I have a problem: I'm trying to run a Postgres instance inside a Docker container so it can be used by a Java application. I run the following command:
docker run --name postgres -e POSTGRES_USER=root -e POSTGRES_PASSWORD=postgres -v postgres:/var/lib/postgresql/data -P -d postgres
The container seems to be created successfully. But when I try to access it to create a DB or table, I do:
docker exec -it postgres /bin/bash
If I run the following:
psql -u postgres -p
The following response is returned:
/usr/lib/postgresql/10/bin/psql: invalid option -- 'u'
And that's no good for my application. I have read on Docker Hub that you can use -e POSTGRES_USER and -e POSTGRES_PASSWORD to set the user, but it doesn't work.
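For reference, psql's user flag is uppercase -U (lowercase -u is a mysql convention, and psql's -p sets the port, not a password; -W forces a password prompt). With the POSTGRES_USER=root set in the run command above, the login inside the container would look like:
psql -U root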

Related

Is it possible to add full-text index search to an existing image of SQL Server

Currently I run the following command to create the SQL Server in Docker; however, this does not include full-text search.
docker run -e "ACCEPT_EULA=Y" -e "MSSQL_SA_PASSWORD=YourStrong#Passw0rd" -p 1433:1433 --name sql1 --hostname sql1 -d mcr.microsoft.com/mssql/server:2022-latest
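One hedged approach is to build a derived image that installs the full-text search package. This is a sketch only; it assumes the 2022 image is Ubuntu-based and that the mssql-server-fts package is reachable from the image's configured package sources (if not, the Microsoft package repo has to be registered first):
# Dockerfile: extend the official image with full-text search
FROM mcr.microsoft.com/mssql/server:2022-latest
USER root
RUN apt-get update && apt-get install -y mssql-server-fts && apt-get clean
USER mssql
You would then build it (e.g. docker build -t sql1-fts .) and reuse the docker run flags above with sql1-fts as the image name.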

Export Query Result as CSV file from Docker PostgreSQL container to local machine

I'm not sure if this is possible or if I'm doing something wrong, since I'm still pretty new to Docker. Basically, I want to export a query result inside a PostgreSQL Docker container as a CSV file to my local machine.
This is where I got so far. Firstly, I run my PostgreSQL docker container with this command:
sudo docker run --rm --name pg-docker -e POSTGRES_PASSWORD=something -d -p 5432:5432 -v $HOME/docker/volumes/postgres:/var/lib/postgresql/data postgres
Then I access the Docker container with docker exec and run the PostgreSQL command that copies the query result to a CSV file at a specified location, like this:
\copy (select id,value from test) to 'test_1.csv' with csv;
I thought that would export the query result as a CSV file named test_1.csv on my local machine, but I couldn't find the file anywhere. I also checked both of these directories: $HOME/docker/volumes/postgres and /var/lib/postgresql/data.
You can export the data to STDOUT and redirect the result to a file on the client machine:
docker exec -it -u database_user_name container_name \
psql -d database_name -c "COPY (SELECT * FROM table) TO STDOUT CSV" > output.csv
-c tells psql to execute the given SQL statement once the connection is established.
So your command should look like this:
docker exec -it -u postgres pg-docker \
psql -d yourdb -c "COPY (SELECT * FROM test) TO STDOUT CSV" > test_1.csv
The /var/lib/postgresql/data directory is where the database server stores its data files. It isn't a directory that users need to manipulate directly, and nothing intended for users can be found there.
Paths like test_1.csv are relative to the working directory. The default directory when you enter the postgres container with docker exec is /, so that's where your file should be. You can also switch to another directory with cd before running psql:
root@b9e5a0572207:/# cd /some/other/path/
root@b9e5a0572207:/some/other/path# psql -U postgres
... or you can provide an absolute path:
\copy (select id,value from test) to '/some/other/path/test_1.csv' with csv;
You can use docker cp to transfer a file from the container to the host:
docker cp pg-docker:/some/other/path/test_1.csv /tmp
... or you can create a volume if this is something you do often.
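For instance, a bind mount (host path illustrative) makes the exported file appear on the host directly:
docker run --rm --name pg-docker -e POSTGRES_PASSWORD=something -d -p 5432:5432 \
  -v $HOME/docker/volumes/postgres:/var/lib/postgresql/data \
  -v $HOME/exports:/exports \
  postgres
# then, inside psql, write straight into the mounted directory:
# \copy (select id,value from test) to '/exports/test_1.csv' with csv;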

running postgresql image with podman failed

When running the postgresql alpine image with podman:
podman run --name postgres -p 5432:5432 -e POSTGRES_PASSWORD=test -e POSTGRES_USER=test -d postgres:11-alpine
the result is:
Error: /usr/bin/slirp4netns failed: "open(\"/dev/net/tun\"): No such device\nWARNING: Support for sandboxing is experimental\nchild failed(1)\nWARNING: Support for sandboxing is experimental\n"
The running system is Arch Linux. Is there a way to fix this error, or a workaround?
Thanks
Is slirp4netns correctly installed? Check the project site for information.
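On Arch it should be available via pacman; a quick check, assuming the standard package name:
pacman -Qi slirp4netns || sudo pacman -S slirp4netns
slirp4netns --version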
Sometimes the flag order matters. Try -d first and -p last (directly in front of the image name), like this:
podman run -d --name postgres -e POSTGRES_PASSWORD=test -e POSTGRES_USER=test -p 5432:5432 postgres:11-alpine
Try creating only the necessary password, then log into your container and create the user manually (this has always worked for me):
podman run -d --name postgres -e POSTGRES_PASSWORD=test -p 5432:5432 postgres:11-alpine
podman exec -it postgres bash
Switch to the default postgres user
su - postgres
Start the psql client
psql
Create your user and database
CREATE USER testuser WITH PASSWORD 'testpassword';
CREATE DATABASE testdata WITH OWNER testuser;
Check if it worked
\l+
Connect to your database via IP and port
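For example, from the host, using the port published with -p 5432:5432 and the credentials created above:
psql -h 127.0.0.1 -p 5432 -U testuser -d testdata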
I assume you upgraded Arch packages recently. Most likely your system needs a restart.
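If you want to confirm before rebooting, you can check for the TUN device the error complains about. Note that after a kernel upgrade, modprobe may fail because the module files for the still-running kernel are gone, in which case only a reboot helps:
ls -l /dev/net/tun   # does the device exist at all?
sudo modprobe tun    # try loading the TUN/TAP driver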

Update all odoo modules in a docker container

I am working on an Odoo Docker container. I tried to find the appropriate command to update all the modules through the command line there, but in vain. What is the appropriate command to do so? I tried docker restart container_name -u all, but also in vain.
Thanks in advance!
If you are using the docker-compose up command to start your servers, then you need to add the following line to your docker-compose.yml file under the odoo service:
command: odoo -u all -d odoo-prod
Where odoo-prod is the name of your database. This overrides the default command (which is just odoo, without the update) and tells Docker to update all modules when the container restarts.
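As a sketch, the relevant fragment of docker-compose.yml would look something like this (image tag and names illustrative):
services:
  odoo:
    image: odoo:10.0
    command: odoo -u all -d odoo-prod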
Alternatively, you can also run a separate container that performs the updates:
docker run -v [your volumes] odoo:10.0 odoo -u all -d odoo-prod
This also overrides the command from the Dockerfile with the command stated here, which includes the update.
You should have an ENTRYPOINT or CMD in your Dockerfile that runs python odoo.py -u all; the -u all option is for Odoo, not for docker restart.
Open your container console
docker exec -it odoo bash
Update your module using another port
/usr/bin/odoo -p 8070 -d mydb -u mymodule
If the database is in another container
/usr/bin/odoo -p 8070 --db_host=172.17.0.2 --db_user=odoo --db_password=odoo -d mydb -u mymodule
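The 172.17.0.2 above is just the database container's address on the Docker bridge; it can be looked up with docker inspect (container name illustrative):
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' db_container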

Import dump/sql file into my postgresql database on Linode

I recently moved my Ruby on Rails 4 app from Heroku to Linode. Everything has been set up correctly, but I need to populate my database with a file; let's call it movies.sql.
I am not very familiar with PostgreSQL commands or VPSes, so I'm having trouble getting this done. I uploaded the file to Dropbox, since I saw in many SO posts that you can use S3/Dropbox.
I saw different commands like these (I'm unsure how to go about it in my situation):
psql -U postgres -d testdb -f /home/you/file.sql
psql -f file.sql dbname
psql -U username -d myDataBase -a -f myInsertFile
So which is the correct one in my situation, and how do I run it when I SSH into Linode? Thanks
You'll need to get the file onto your server, or use a different command from your local terminal.
If you have the file locally, you can restore without SSHing in, using the psql command:
psql -h <ip_address_of_server> -U <database_username> -d <name_of_the_database> -f local/path/to/your/file.sql
Otherwise, the command is:
psql -U <database_username> -d <name_of_the_database> < remote/path/to/your/file.sql
-U sets the db username, -h sets the host, -d sets the name of the database, and -f tells the command you're restoring from a file.
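Putting it together, one end-to-end flow (host, user, and paths illustrative) is to copy the dump up first and then load it on the server:
# from your local machine: copy the dump to the Linode box
scp movies.sql user@your_linode_ip:/tmp/
# then, after SSHing in, load it into the database
psql -U your_db_user -d your_db_name -f /tmp/movies.sql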