I tried connecting to the database server using the command:
psql -h host_ip -d db_name -U user_name --password
It displays the following line and refuses to connect.
psql: FATAL: too many connections for role "user_name".
How can I close the active connections?
I do not have admin rights for the database. I am just an ordinary user.
From inside any DB of the cluster:
Catch 22: you need to be connected to a database first. Maybe you can connect as another user? (By default, some connections are reserved for superusers with the superuser_reserved_connections setting.)
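If you do manage to get in under another role, you can check how many slots exist overall and how many are held back for superusers; a quick sketch using the standard settings:
SHOW max_connections;
SHOW superuser_reserved_connections;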
To get detailed information for each connection by this user:
SELECT *
FROM pg_stat_activity
WHERE usename = 'user_name';
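If you only want to see sessions that are likely safe to close, a variant of the query above that narrows it to idle connections (the state column exists in PostgreSQL 9.2+):
SELECT pid, state, query_start, query
FROM pg_stat_activity
WHERE usename = 'user_name'
AND state = 'idle';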
As the same user or as superuser you can cancel all (other) connections of a user:
SELECT pg_cancel_backend(pid) -- (SIGINT)
-- pg_terminate_backend(pid) -- the less patient alternative (SIGTERM)
FROM pg_stat_activity
WHERE usename = 'user_name'
AND pid <> pg_backend_pid();
Better be sure it's ok to do so. You don't want to terminate important queries (or connections) that way.
See pg_cancel_backend() and pg_terminate_backend() in the manual.
From a Linux shell
Did you start those other connections yourself? Maybe a hanging script of yours? You should be able to kill those (if you are sure it's ok to do so).
You can investigate with ps which processes might be at fault:
ps aux
ps aux | grep psql
If you identify a process to kill (better be sure, you do not want to kill the server):
kill 123457689 # pid of process here.
Or with SIGKILL instead of SIGTERM:
kill -9 123457689
I'm pretty new to pgAdmin and have not used the command line so far. I had the same issue, and the easiest way to resolve it in my case was to simply delete the processes listed under "Database Activity" on the Dashboard.
(just click the X on the left side of the PID)
It's a bit tedious since you must delete each process individually, but doing so should free up your available connections.
Hope this is useful.
You need to connect to your PostgreSQL database and run this command:
ALTER ROLE your_username CONNECTION LIMIT -1;
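To check the role's current limit before (or after) changing it, a quick sketch (-1 means unlimited):
SELECT rolname, rolconnlimit
FROM pg_roles
WHERE rolname = 'your_username';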
Check pool_size: it is probably set too high (or too low) in your local connection-pool settings. Start by checking with pool_size = 10 (the default). That should fix the too_many_connections errors.
Please check how many connections are allowed for that user; you can kill the other connections for that user and log in again, but it's better to simply increase the connection limit for that user.
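For example, a sketch that compares the user's current connection count with the limit set on the role (user_name is the role from the question):
SELECT count(*) AS connections_in_use,
       (SELECT rolconnlimit FROM pg_roles WHERE rolname = 'user_name') AS role_limit
FROM pg_stat_activity
WHERE usename = 'user_name';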
This issue mostly arises with pgAdmin; it seems that even after all these years it still persists.
This will drop existing connections except for yours:
Query pg_stat_activity to get the pid values you want to kill, then issue SELECT pg_terminate_backend(pid) on them.
PostgreSQL 9.2 and above:
SELECT pg_terminate_backend(pg_stat_activity.pid)
FROM pg_stat_activity
WHERE pg_stat_activity.datname = 'TARGET_DB' -- ← change this to your DB
AND pid <> pg_backend_pid();
PostgreSQL 9.1 and below:
SELECT pg_terminate_backend(pg_stat_activity.procpid)
FROM pg_stat_activity
WHERE pg_stat_activity.datname = 'TARGET_DB' -- ← change this to your DB
AND procpid <> pg_backend_pid();
from https://stackoverflow.com/a/5408501/13813241 (duplicate)
I was getting this in a specific situation: Django development. I had a shell open to query Django models, the dev server was running, and I was using ElephantSQL for testing/prototyping, so it threw this error. Once I exited the django manage.py shell, it started working again.
Related
In Oracle SQL Developer, I need to switch the active database connection manually. Is there a command that will connect to a different database programmatically, assuming that the login credentials are already saved? I'm trying to avoid clicking on the drop-down menu at the top right of the window which selects the active connection.
Perhaps I should rather have a single SQL file per database? I could understand that argument. But this is to prepare for migrating some tables from one database to another, so it's nice to have all of the context in one file.
On database1, run a query on table1 which is located in schema1.
-- manually switch to database1 (looking for a command to replace this step)
ALTER SESSION SET CURRENT_SCHEMA = schema1;
SELECT * FROM table1;
On database2, run a query on table2 which is located in schema2.
-- manually switch to database2
ALTER SESSION SET CURRENT_SCHEMA = schema2;
SELECT * FROM table2;
Looks like this is well documented here
Use this command
CONN[ECT] [{<logon>| / |proxy} [AS {SYSOPER | SYSDBA | SYSASM}] [edition=value]]
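For instance, a hypothetical invocation of that command (user, password, and TNS alias are placeholders):
CONNECT scott/tiger@my_tns_alias;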
You need a DDL trigger to run the ALTER SESSION after the event that precedes your SQL. A rough sketch in Oracle syntax (the trigger name and the event are placeholders):
CREATE OR REPLACE TRIGGER sample
AFTER DDL ON SCHEMA   -- replace DDL with the event you need
BEGIN
  EXECUTE IMMEDIATE 'ALTER SESSION SET CURRENT_SCHEMA = schema2';
END;
/
SELECT * FROM table2;
I don't know of a way in which to change your selected connection in SQL Developer, but there is a programmatic method for temporarily changing the connection under which the script commands are run, as @T.S. pointed out. I want to give a few examples, which might be helpful to people (as they would have been for me).
So let's say your script has part A and part B and you want to execute them one after the other but from different connections. Then you can use this:
CONNECT username1/password1@connect_identifier1;
-- Put commands A here to be executed under this connection.
DISCONNECT; -- username1
CONNECT username2/password2@connect_identifier2;
-- Put commands B here to be executed under this connection.
DISCONNECT; -- username2
The connect_identifier part identifies the database where you want to connect. For instance, if you want to connect to a pluggable database on the local machine, you may use something like this:
CONNECT username/password@localhost/pluggable_database_name;
or if you want to connect to a remote database:
CONNECT username/password@IP:port/database_name;
You can omit the password, but then you will have to input it in a prompt each time you run that section. If you want to consult the CONNECT command in more detail, this reference document may be useful.
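For example, a password-less form that will prompt you for the password (same placeholder names as above):
CONNECT username@localhost/pluggable_database_name;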
To execute the commands, select the code you are interested in (including the relevant CONNECT commands) and use Run Script (F5), or use Run Script (F5) without selecting anything, which will execute the entire script file. SQL Developer will execute your commands, put the output into the Script Output tab, and then stop the connection. Note that the output of SELECT commands might be unpleasant to read inside Script Output. This can be mitigated by running the following command first (just once):
SET sqlformat ansiconsole;
There is also Run Statement (Ctrl+Enter), but note that it does not seem to work well with this workflow. It will execute and display each SELECT statement in a separate Query Result tab, which is easier to read, BUT the SELECT query will always be executed in the context of the active connection in SQL Developer (the one in the top right), not the connection of the preceding CONNECT statement in the code. On the other hand, INSERT commands, for instance, DO seem to be executed in the context of the CONNECT statement in the code. This (rather inconsistent) behaviour is probably not what you want, so I recommend using Run Script (F5) as described above.
I am a complete beginner with postgresql. I created a test database called iswdp and now I want to delete it. When I do:
dropdb iswdp
the command returns no output, and when I \list the database iswdp is still there.
dropdb iswdp;
returns:
ERROR: syntax error at or near "dropdb"
LINE 1: dropdb iswdp
I used:
SELECT pg_terminate_backend(pg_stat_activity.pid)
FROM pg_stat_activity
WHERE datname = current_database()
AND pid <> pg_backend_pid();
which I found in another Stack Overflow post, to disconnect all other sessions, then attempted dropdb iswdp again with the same result.
I'm stuck, can someone help me? I'm doing this on linux mint from the bash terminal.
dropdb is a command issued from a shell prompt. From a SQL prompt (like psql), you would instead issue a DROP DATABASE command.
I recommend opening psql and issuing DROP DATABASE iswdp;. That should work.
You may get an error that looks like ERROR: cannot drop the currently open database, which will happen if you connect to iswdp and try to drop it. If that happens, try instead to connect to the postgres database, and issue the same DROP DATABASE command. It should work.
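A minimal sketch of that sequence from inside psql, assuming you are currently connected to iswdp:
\c postgres
DROP DATABASE iswdp;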
dropdb is a command run from the shell, not from the psql program; from psql the command is DROP DATABASE iswdp;
I am trying to kill a Sybase process but no success.
sp_who returns, among others, the line:
fid,spid,status,loginame,origname,hostname,blk_spid,dbname,tempdbname,cmd,block_xloid,threadpool
' 0',' 14','running','sa','sa','server',' 0','DBSOTEST','tempdb','INSERT',' 0','syb_default_pool'
If I try to kill this process (kill 14) I have the error:
Could not execute statement.
You cannot use KILL to kill your own
process. Sybase error code=6104 Severity Level=16, State=1,
Transaction State=1 Line 1
select syb_quit() exits my session, but the process is not stopped.
Observation:
After a restart of the Sybase server the process is still there. Is this normal? I do not have any INSERT command running, or any other program doing inserts.
No INSERT command works in any table of the DB.
Every SELECT command works.
How can I obtain the permission to insert in the tables of my database?
There seem to be two questions combined here: one about killing a process and one about permissions. Please raise separate questions for those.
As for the killing, your own process will always be there the moment you connect to the ASE server. And as the error message says, you cannot kill yourself.
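To confirm which spid belongs to your own session (and therefore cannot be killed from it), a quick sketch you could run in your client:
select @@spid
-- then, from a second session with the right permissions: kill <that spid>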
When there are errors with inserting etc., at least post the error messages. Or talk to your DBA.
I am currently trying to maintain a postgres database of player information, and I've encountered an issue with one of our players. The player came to us saying that he could not load his character into the game world. I did a small SELECT statement to pull up the player information with no problems. However, when they try to use a character, we update a field in that row to say the player character is active, and we noticed that we couldn't run an UPDATE or DELETE statement on that row. Every other row in the table gets modified without any issues.
One of our DB admins thought that a lock was applied on that row, but after further investigation we see nothing that could be locking it.
Any advice or suggestions is greatly appreciated.
It certainly sounds like a lock.
The "big hammer" approach is bouncing (stoping/starting) your database to kill any running queries.
The "scalpel" approach is to run show full processlist; and try to find a query that's running that might have created a lock. Note that the query itself might not lock the row, but an earlier query in the same transaction might have locked it. Any long-running query is a suspect.
EDIT
I have found an issue in postgres that keeps the lock even after a restart. Here's how you find and fix it:
Run this query:
select * from pg_prepared_xacts
Here's the doc for this system view
If there are any rows, it means there are orphaned/dangling transactions, which can hold locks even after restarting postgres.
To kill them, for every row get the gid (a long UUID), and execute this:
ROLLBACK PREPARED '<paste gid here>'
That should clean things up.
If you are on a *nix platform, you can run this single command that will do the lot:
psql -U postgres mydatabase -c "select * from pg_prepared_xacts" | grep -v transaction | grep \| | awk '{print $3}' | xargs -I % psql -U postgres mydatabase -c "ROLLBACK PREPARED '%'"
I need to restart a database because some processes are not working. My plan is to take it offline and back online again.
I am trying to do this in Sql Server Management Studio 2008:
use master;
go
alter database qcvalues
set single_user
with rollback immediate;
alter database qcvalues
set multi_user;
go
I am getting these errors:
Msg 5061, Level 16, State 1, Line 1
ALTER DATABASE failed because a lock could not be placed on database 'qcvalues'. Try again later.
Msg 5069, Level 16, State 1, Line 1
ALTER DATABASE statement failed.
Msg 5061, Level 16, State 1, Line 4
ALTER DATABASE failed because a lock could not be placed on database 'qcvalues'. Try again later.
Msg 5069, Level 16, State 1, Line 4
ALTER DATABASE statement failed.
What am I doing wrong?
After you get the error, run
EXEC sp_who2
Look for the database in the list. It's possible that a connection was not terminated. If you find any connections to the database, run
KILL <SPID>
where <SPID> is the SPID for the sessions that are connected to the database.
Try your script after all connections to the database are removed.
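If the sp_who2 output is busy, a narrower sketch that lists only the sessions connected to the database in question (using the database name from the question):
SELECT spid, loginame, status
FROM sys.sysprocesses
WHERE dbid = DB_ID('qcvalues');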
Unfortunately, I don't have a reason why you're seeing the problem, but here is a link that shows that the problem has occurred elsewhere.
http://www.geakeit.co.uk/2010/12/11/sql-take-offline-fails-alter-database-failed-because-a-lock-could-not-error-5061/
I managed to reproduce this error by doing the following.
Connection 1 (leave running for a couple of minutes)
CREATE DATABASE TESTING123
GO
USE TESTING123;
SELECT NEWID() AS X INTO FOO
FROM sys.objects s1,sys.objects s2,sys.objects s3,sys.objects s4 ,sys.objects s5 ,sys.objects s6
Connections 2 and 3
set lock_timeout 5;
ALTER DATABASE TESTING123 SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
Try this if it is "in transition" ...
http://learnmysql.blogspot.com/2012/05/database-is-in-transition-try-statement.html
USE master
GO
ALTER DATABASE <db_name>
SET OFFLINE WITH ROLLBACK IMMEDIATE
...
...
ALTER DATABASE <db_name> SET ONLINE
Just to add my two cents: I put myself into the same situation while searching for the minimum privileges a db login needs to successfully run the statement:
ALTER DATABASE ... SET SINGLE_USER WITH ROLLBACK IMMEDIATE
It seems that the ALTER statement completes successfully when executed with a sysadmin login, but it requires the connection-cleanup part when executed under a login which has "only" limited permissions like:
ALTER ANY DATABASE
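For reference, a sketch of granting that server-level permission (the login name is a placeholder; run it in master with sufficient rights):
USE master;
GRANT ALTER ANY DATABASE TO [some_login];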
P.S. I've spent hours trying to figure out why the "ALTER DATABASE.." does not work when executed under a login that has dbcreator role + ALTER ANY DATABASE privileges. Here's my MSDN thread!
I will add this here in case someone is as lucky as me.
When reviewing the sp_who2 list of processes, note the processes that run not only against the affected database but also against master. In my case, the issue that was blocking the database was related to a stored procedure that started an xp_cmdshell.
Check if you have any processes in KILL/RollBack state for master database
SELECT *
FROM sys.sysprocesses
WHERE cmd = 'KILLED/ROLLBACK'
If you have the same issue, the KILL command alone will probably not help.
You can restart the SQL Server, or a better way is to find the cmd.exe among the Windows processes on the SQL Server OS and kill it.
In SQL Management Studio, go to Security -> Logins and double click your Login. Choose Server Roles from the left column, and verify that sysadmin is checked.
In my case, I was logged in on an account without that privilege.
HTH!
Killing the process ID worked nicely for me.
When running "EXEC sp_who2" Command over a new query window... and filter the results for the "busy" database , Killing the processes with "KILL " command managed to do the trick. After that all worked again.
I know this is an old post but I recently ran into a very similar problem. Unfortunately I wasn't able to use any of the alter database commands because an exclusive lock couldn't be placed. But I was never able to find an open connection to the db. I eventually had to forcefully delete the health state of the database to force it into a restoring state instead of in recovery.
In rare cases (e.g., after a heavy transaction is committed) a running CHECKPOINT system process holding a FILE lock on the database file prevents transition to MULTI_USER mode.
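One way to check for that situation is to look for FILE-level locks; a sketch against sys.dm_tran_locks:
SELECT request_session_id, resource_type, request_mode, request_status
FROM sys.dm_tran_locks
WHERE resource_type = 'FILE';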
In my scenario, there was no process blocking the database according to sp_who2. However, because this database is much larger than our other databases, we discovered that pending processes were still running, which is why the database in the availability group still displayed as red/offline after we tried to 'resume data' by right-clicking the paused database.
To check if you still have processes running just execute this command:
select percent_complete from sys.dm_exec_requests
where percent_complete > 0;