How to log in to a PostgreSQL database after killing sessions (to copy a database)

I tried to copy a database within the same PostgreSQL server using the query below:
CREATE DATABASE newdb WITH TEMPLATE originaldb OWNER dbuser;
and got the error below:
ERROR: source database "originaldb" is being accessed by 1 other user
So I executed the command below:
SELECT pg_terminate_backend(pg_stat_activity.pid) FROM pg_stat_activity
WHERE pg_stat_activity.datname = 'originaldb' AND pid <> pg_backend_pid();
Now none of us are able to login/connect back to the database.
When I run the command below:
psql -h 192.xx.xx.x -p 9763 -d originaldb -U postgres
it prompts for a password, and after I enter the password it doesn't return any response.
Why does this happen? How can I connect to the database again? How do I restart the system or otherwise let us log back in?
Can someone help us with this?

It sounds like something is holding an access exclusive lock on a shared catalog, such as pg_database. If that is the case, no one will be able to log in until that lock gets released. I wouldn't think the session-killing code you ran would cause such a situation, though. Maybe it was just a coincidence.
If you can't find an active session, you can try using system tools to figure out what is going on, like ps -efl|fgrep postgre. Or you can just restart the whole database instance, using whatever method you would usually use to do that, like pg_ctl restart -D <data_directory> or sudo service postgresql restart or some GUI method if you are on an OS that does that.
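If you still have one working superuser session open somewhere, a quick way to check for such a lock is to query pg_locks; this is only a sketch using the standard catalog views, not something specific to your setup:
SELECT l.pid, l.mode, l.granted, a.state, a.query
FROM pg_locks l
JOIN pg_stat_activity a ON a.pid = l.pid
WHERE l.locktype = 'relation'
  AND l.relation = 'pg_database'::regclass;
Any row with mode AccessExclusiveLock and granted = true points at the session that is blocking new logins.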

Related

How to take a backup of only roles/users in PostgreSQL?

I am trying to restore a backup of the database, but it's giving me errors about roles. I learned that we first have to take a backup of the roles/users and then take the complete backup, but I don't know which command to use. Can anyone help me with the command?
You can use pg_dumpall for that with the --globals-only option:
pg_dumpall --globals-only --file=all_roles_and_users.sql -U postgres -h ...
The file all_roles_and_users.sql will contain all roles and role memberships currently defined in the instance (aka "cluster") you connect to.
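To load those globals into another instance later (for example right before restoring the database backup that complained about missing roles), you can simply replay the file with psql; a minimal sketch, with the connection options left for you to fill in as above:
psql -U postgres -h ... -f all_roles_and_users.sql -d postgres
Run it before the restore step for the actual database so the roles already exist.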

Postgres: loading a data table in pgAdmin 4

I am new to Postgres and I am trying to learn from an online tutorial. One of the first things is to load the data, as follows:
Finally, run psql -U <username> -f clubdata.sql -d postgres -x -q to
create the 'exercises' database, the Postgres 'pgexercises' user, the
tables, and to load the data in. Note that you may find that the sort
order of your results differs from those shown on the web site:
I am using pgAdmin 4 and opened the SQL shell. However, I wasn't able to load this database. First of all, how can I figure out what my current username is?
Secondly, I have never worked with the command line before and am quite unsure how to do this. Could someone break this down step by step?
You can run "psql -h" for more help. You never have a current username as such, you have to specify it but start with "-U postgres" and ask again if that doesn't work.
Your sql file to load will need the folder path or you could try the cmd prompt and change to the folder where your clubdata file is. Your command line assumes there is already a database named postgres which there probably is. Try again;
psql -U postgres -f clubdata.sql -d postgres -x -q
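As for the "current username" question: once a connection is open, you can ask the server which role you are logged in as. A small sketch (the first line is run in the shell, the rest is typed inside psql):
psql -U postgres -d postgres
-- inside psql:
SELECT current_user;
\conninfo
current_user returns the role you are connected as, and \conninfo prints the user, database, host and port of the current connection.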
The command psql is for the command line client. You need to run this in a terminal.
I wrestled with this myself, despite a little CLI experience with psql. It may help to remove the -q flag at the end to make the output non-quiet; then you can see what's going on.
Lastly, beware that the import creates a schema, so you need to read up on schemas. See this related question for a bit more background: https://dba.stackexchange.com/questions/264398/cant-find-any-tables-after-psql-dump-import-from-pgexercises-com
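Once the import has finished, a quick way to check where the tables ended up is to list the schemas and point search_path at the tutorial schema. A sketch, assuming the pgexercises data landed in a schema named cd with a members table (adjust the names if \dn shows something different):
psql -U postgres -d exercises
-- inside psql:
\dn
SET search_path TO cd, public;
SELECT count(*) FROM members;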

Proper way to migrate a postgres database?

I have a dev version and a production version running in django.
I recently started populating it with a lot of data and found that django's loaddata tries to load everything into memory before adding it to the db, and my files will be too big for that.
What is the proper way to push my data from my dev machine to my production?
I did...
pg_dump -U user -W db > ./filename.sql
and then on the production server I did...
psql dbname < filename.sql
It seems like it worked; all the data is there, but it came up with some errors such as:
relation xxx already exists
constraint xxx for relation xxx already exists
and there were quite a few of them, but like I said everything appears to be there. Is this the right way to do it?
Edit: On the production machine I already have the database with data in it, and I don't want to truncate the tables before importing.
This is the script that I use:
pg_dump -d DATABASE_NAME -U postgres --format plain --inserts > /FILE.sql
Edit: Since you say in the comments that you don't want to truncate the tables before importing, you can't do this type of import into your production database. I suggest emptying your production database before importing the dev database dump.
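If, on the other hand, you are happy to overwrite what is on production, pg_dump can also generate DROP statements ahead of each object so the restore doesn't trip over "already exists" errors. A sketch of that variant (not what you ran originally):
pg_dump -d DATABASE_NAME -U postgres --clean --if-exists --format plain > /FILE.sql
psql -d DATABASE_NAME -U postgres -f /FILE.sql
Note that --clean drops and recreates the objects on the target, so any existing production data in those tables is lost.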

Magento migration from one domain to another

I am transferring my magento site from an old domain to a new domain.
I have exported the database file from the old server and I have done all the necessary changes.
Now I'm trying to import the exported file into the new database, but SQL has been stuck loading for almost an hour.
Please somebody help.
(Screenshot of the loading screen omitted.)
Thank you.
I would suggest making a backup of the whole cPanel account and then reimporting it; this way you won't mess anything up in the database. If you still need to export and reimport the database itself, make sure you disable the foreign key check by adding these statements before and after your database dump:
SET foreign_key_checks = 0;
SET foreign_key_checks = 1;
And to successfully import a large database you may need to increase the memory limits (for example max_allowed_packet) in your MySQL '.ini' configuration file.
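If you'd rather not edit the dump by hand, you can wrap it from the shell; a sketch, where database_dump.sql is just a placeholder name for your export:
# prepend and append the key-check statements around the existing dump
{ echo "SET foreign_key_checks = 0;"; cat database_dump.sql; echo "SET foreign_key_checks = 1;"; } > database_dump_wrapped.sql
Then import database_dump_wrapped.sql instead of the original file.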
I wouldn't do it through a GUI interface. Do you have SSH access? If so, here's how you can run it from the command line, which won't be limited by browser processing.
dump:
mysqldump -u '<<insert user>>' -p --single-transaction --database <<database name>> > data_dump.sql
load:
mysql -p -u '<<insert user>>' <<database name>> < data_dump.sql
It's best to do this as the root user so you don't have any trouble.
On import, if you are getting errors that the definer is not a user, you can either create the definer user or run this sed command, which will replace the definer in your file with a user name of your choice:
sed -E 's/DEFINER=`[^`]+`@`[^`]+`/DEFINER=<<new user name>>/g' data_dump.sql > cleaned_data_dump.sql
As espradley and damek132 said, combine both answers: disable the foreign key checks if that isn't already done (mostly it's handled while exporting the SQL dump), and use the mysql command line through SSH. You should be up and running in half an hour.
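Putting both answers together, the mysql client can also switch the key check off for the import session itself via --init-command, so no editing of the dump is needed; a sketch using the same placeholders as above:
mysql -u '<<insert user>>' -p --init-command="SET foreign_key_checks = 0" <<database name>> < data_dump.sql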

Running DB2 SQL from the shell command line never finishes executing

I am on a Unix server which is set up to connect remotely to another DB2 Unix server.
I was able to connect to DB2 using the following script:
db2 "connect to <server name> user <user name> using <pass>";
Then I ran the following command to save the results of the SQL to a file:
db2 "select * from <tablename>" > /myfile.txt
The script starts executing but never ends. I tried using -x before the select too, but the same thing happens: it never finishes. The table is small and has only one record. When I forcefully end the execution, the header of the table gets saved in the file along with the following error:
SQL0952N Processing was cancelled due to an interrupt. SQLSTATE=57014
Please help, I am stuck on this riddle.
You could monitor the connection and the output file in order to know what is happening.
Before starting the monitoring, get the current application ID:
db2 "values SYSPROC.MON_GET_APPLICATION_ID()"
Open a second terminal and run db2top against your database. Check the current sessions (L) and look for your connection (the application ID from the previous step). If you see a Lock Wait status, it is because another connection holds a lock on that table, and it is not possible to read it concurrently.
db2top -d myDB
Try executing the same query with another isolation level:
db2 "select * from <tablename> WITH UR"
If that is the problem, you should analyze which other processes are running (modifying data) on the database.
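To see who is actually holding the lock, you can also check from the shell; a sketch, assuming a reasonably recent DB2 LUW release (the administrative view name may differ on older versions):
db2pd -db myDB -locks
db2 "SELECT * FROM SYSIBMADM.MON_LOCKWAITS"
The output shows which application is waiting and which one holds the lock, which you can match against the db2top session list.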
Open another terminal, and do a
tail -f /myfile.txt
If you see the file changing, the query is working and the output is just large. Just wait.