Debug PostgreSQL SQL query

Is there a way to configure PostgreSQL so that when I run a "delete from table_a;"
it outputs some information about how many entries were cascade-deleted?
I'm running my queries in the CLI application.

I found a solution. It was good enough for me, even though I really wanted estimated statistics on how many rows were affected.
This will output a list of all the constraints triggered by the query, along with timing and the number of calls for each:
EXPLAIN ANALYZE DELETE FROM table_a;
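Note that EXPLAIN ANALYZE actually executes the DELETE while it measures it. If you only want to see the per-constraint trigger statistics without actually losing the rows, you can wrap it in a transaction and roll it back:
BEGIN;
EXPLAIN ANALYZE DELETE FROM table_a;
ROLLBACK;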

You could use a plpgsql trigger function on the tables to increment a sequence on each delete and get an exact count.
You would need to reset the sequence before issuing the delete. You could use a different sequence per table to get per-table statistics.
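A minimal sketch of that approach, assuming table_a has a child table table_b with ON DELETE CASCADE; the sequence, function, and trigger names are made up for illustration:
CREATE SEQUENCE table_b_delete_seq;

CREATE OR REPLACE FUNCTION count_table_b_delete() RETURNS trigger AS $$
BEGIN
    PERFORM nextval('table_b_delete_seq');  -- one tick per cascaded row
    RETURN OLD;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER table_b_delete_counter
    AFTER DELETE ON table_b
    FOR EACH ROW EXECUTE PROCEDURE count_table_b_delete();

-- Reset before the delete, then read the count afterwards:
SELECT setval('table_b_delete_seq', 1, false);
DELETE FROM table_a;
SELECT last_value FROM table_b_delete_seq;  -- rows cascade-deleted from table_b (when at least one was deleted)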

Related

A simple SQL statement is timing out

This is a very simple SQL statement:
Update FosterHomePaymentIDNo WITH (ROWLOCK) Set FosterHomePaymentIDNo=1296
But it's timing out when I execute it from the context of an ASP.NET WebForms application.
Could this have something to do with the rowlock? How can I make sure that this SQL statement runs in a reasonable amount of time without compromising the integrity of the table? I suspect removing the rowlock would help, but then we could run into different users updating the table at the same time.
And yes, this is a "next ID" table that contains only one column and only one row; I don't know why it was set up this way instead of using an identity column or even select max(id) + 1!
If an UPDATE of a one-row table takes a long time, it's probably blocked by another session that updated it in a transaction that hasn't committed yet. This is why you don't want to generate IDs like this.
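If you want to confirm that, a quick check is to look for blocked requests in SQL Server while the UPDATE is hanging; this is just a sketch against the standard DMVs and assumes you have permission to view server state:
-- Sessions that are currently blocked, and which session is blocking them
SELECT r.session_id, r.blocking_session_id, r.wait_type, r.wait_time, r.command
FROM sys.dm_exec_requests AS r
WHERE r.blocking_session_id <> 0;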

delete first X row Ingres ANSI

I have 730,000+ records which I need to delete from an Ingres database that works with ANSI92, and I need to delete them without overloading the DB. A simple delete with a search condition doesn't work; the DB just uses all the memory and throws an error. I'm thinking of running it in a loop and deleting in portions of 10-20K records.
I tried to use TOP and it didn't work:
delete top (10)from TABLE where web_id <0 ;
I also tried to use LIMIT, which didn't work either:
DELETE FROM from TABLE where web_id <0 LIMIT 10;
Any ideas how to do it? Thank you!
You could use a session temporary table to hold the first 10 TIDs (tuple IDs) and then delete based on those:
declare global temporary table session.tenrows as
select first 10 tid the_tid from "table" where web_id<0
on commit preserve rows with norecovery;
delete from "table" where tid in (select the_tid from session.tenrows);
When you say "without overloading the DB", do you mean avoiding hitting the force-abort limit of the transaction log file? If so, what might work for you is:
set session with on_logfull=notify;
delete from table where web_id<0;
This would automatically commit your transaction at the points where force-abort is reached and then carry on, rather than rolling back and reporting an error.
A downside of using this setting is that it can be tricky to unpick what has/hasn't been done if any other error should occur (your work will likely be partially committed), but since this appears to be a straight delete from a table it should be quite obvious which rows remain and which don't.
The "set session" statement must be run at the start of a transaction.
I would advise not running concurrent sessions with "on_logfull=notify" (there have been bugs in this area, whether they're fixed in your installation depends on your version/patch level).

Delete statement suddenly taking more time in Oracle11g

I have a table test that has 25,000 rows in Oracle 11g. I issued a DELETE statement as follows:
delete from test;
This query has been running for more than 5 minutes and still hasn't finished, and I'm not getting any error messages either. Can someone tell me what the problem could be?
try
delete from test nologging;
If you are using delete without a filter, you're better off using truncate, as turbanoff said. Also, in my experience, running a delete from a table within a script of some sort causes the symptoms you are having.
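For reference, a sketch of the truncate alternative; keep in mind that TRUNCATE is DDL in Oracle, so it commits implicitly and cannot be rolled back:
-- Removes all rows without generating undo for every row
TRUNCATE TABLE test;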

MySQL trigger in order to cache results?

I have a query that takes about a second to execute pre-cache. Post-cache is fine. Here's a description of the problem: MySQL: nested set is slow?
If I can't get a solution for my problem, would creating a trigger that loops through and executes all the possible queries against that table (i.e., if there are 100 records in that table, it would execute 100 queries) be a good idea? This way, when my application does execute such a query, I can depend on cached results.
It feels like a bad solution, but I really can't afford a 1 second response time from that query.
Since you are using a MyISAM table, you can try preloading the table indexes into the key cache:
http://dev.mysql.com/doc/refman/5.0/en/cache-index.html
http://dev.mysql.com/doc/refman/5.0/en/load-index.html
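A minimal sketch of what that looks like; the table name (tree) and the dedicated key cache name (hot_cache) are assumptions for illustration, and the buffer size would need tuning to your index size:
-- Size a dedicated key cache, assign the table's indexes to it, and preload them
SET GLOBAL hot_cache.key_buffer_size = 128 * 1024 * 1024;
CACHE INDEX tree IN hot_cache;
LOAD INDEX INTO CACHE tree;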

How to efficiently remove all rows from a table in DB2

I have a table that has something like half a million rows and I'd like to remove all of them.
If I do a simple delete from tbl, the transaction log fills up. I don't care about transactions in this case; I do not want to roll back in any case. I could delete the rows in many transactions, but are there any better ways to do this?
How do I efficiently remove all rows from a table in DB2? Can I disable transactions for this command somehow, or are there special commands to do this (like TRUNCATE in MySQL)?
After I have deleted the rows, I will repopulate the database with a similar amount of new data.
It seems that the following command works in newer versions of DB2:
TRUNCATE TABLE someschema.sometable IMMEDIATE
To truncate a table in DB2, simply write:
alter table schema.table_name activate not logged initially with empty table
From what I was able to read, this will delete the table contents without doing any kind of logging, which will go much easier on your server's I/O.
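A small usage sketch: the NOT LOGGED INITIALLY attribute only lasts for the current unit of work, so this is typically run with autocommit off and committed right away (the schema and table names are the placeholders used above):
alter table schema.table_name activate not logged initially with empty table;
commit;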