Delete statement suddenly taking more time in Oracle 11g - sql

I have a table test with 25000 rows in Oracle 11g. I issued the following DELETE statement:
delete from test;
The query has been running for more than 5 minutes and still hasn't finished, and I'm not getting any error messages either. Can someone tell me what the problem could be?

Try:
delete from test nologging;

If you are deleting without a filter, you're better off using TRUNCATE, as turbanoff said. Also, in my experience, running a DELETE FROM a table inside a script of some sort causes the symptoms you are seeing.
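For example, a minimal sketch (note that in Oracle TRUNCATE is DDL: it auto-commits and cannot be rolled back, so use it only when you really do want every row gone):

TRUNCATE TABLE test;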

Related

Cancelled update statement is preventing drop statement for a table in SQL Server

I had an UPDATE statement running against a table with 200M rows in SQL Server; after it had been running for over 15 hours I cancelled the query. Ever since that moment, the performance of the server has dropped significantly. When I try to drop that table it isn't allowed; I believe the table is locked and the changes from the update statement are being rolled back. I actually don't want that table and need to drop it, so what should my approach be? Any help or guidance is much appreciated, since I could not find a proper solution.
I thought of restarting SQL Server, but I believe I would get stuck in recovery mode and things could get worse.
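One hedged way to confirm that the cancelled statement is still rolling back (the ROLLBACK filter is an assumption; adjust it to your session) is to query the request DMV and watch percent_complete:

SELECT session_id, command, status, percent_complete, estimated_completion_time
FROM sys.dm_exec_requests
WHERE command LIKE '%ROLLBACK%';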

delete first X row Ingres ANSI

I have 730000+ records which I need to delete from an Ingres database running with ANSI92, and I need to delete them without overloading the DB. A simple DELETE with a search condition doesn't work; the DB just uses all the memory and throws an error. I'm thinking of running it in a loop and deleting in portions of 10-20K records.
I tried to use TOP and it didn't work:
delete top (10) from TABLE where web_id < 0;
I also tried LIMIT, which didn't work either:
DELETE FROM TABLE where web_id < 0 LIMIT 10;
Any ideas how to do it? Thank you!
You could use a session temporary table to hold the first 10 tids (tuple id's) and then delete based on those:
declare global temporary table session.tenrows as
select first 10 tid the_tid from "table" where web_id<0
on commit preserve rows with norecovery;
delete from "table" where tid in (select the_tid from session.tenrows);
When you say "without overload db", do you mean avoiding hitting the force-abort limit of the transaction log file? If so, what might work for you is:
set session with on_logfull=notify;
delete from table where web_id<0;
This would automatically commit your transaction at the points where force-abort is reached and then carry on, rather than rolling back and reporting an error.
A downside of using this setting is that it can be tricky to unpick what has/hasn't been done if any other error should occur (your work will likely be partially committed), but since this appears to be a straight delete from a table it should be quite obvious which rows remain and which don't.
The "set session" statement must be run at the start of a transaction.
I would advise not running concurrent sessions with "on_logfull=notify" (there have been bugs in this area, whether they're fixed in your installation depends on your version/patch level).

Delete a broken PostgreSQL table

I have a table with ~20 million rows in PostgreSQL and I want to delete it.
But none of these operations works (each kept running for more than 12 hours without success):
- DELETE
- TRUNCATE
- VACUUM
- ANALYZE
I can't do anything with this table...
A few days ago I tried to regenerate the id (BIGSERIAL) of each row with:
ALTER SEQUENCE "data_id_seq" RESTART WITH 1;
UPDATE data SET id = nextval('data_id_seq');
And I think this operation broke the table...
If someone knows how I can delete this table, thanks for the help!
Try this...
DROP TABLE table_name;
See the doc
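If DROP TABLE also hangs, the table is most likely locked by another session. A hedged diagnostic sketch (assuming the table is named data, as in the question, and a reasonably recent PostgreSQL) is to look for sessions holding or waiting on locks against it:

SELECT a.pid, a.state, l.mode, l.granted, a.query
FROM pg_locks l
JOIN pg_stat_activity a ON a.pid = l.pid
WHERE l.relation = 'data'::regclass;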

Debug PostgreSQL SQL query

Is there a way to configure PostgreSQL so that when I run a "delete from table_a;"
it outputs some information about how many entries were cascade deleted?
I'm running my queries in the CLI application.
I found a solution. It was good enough for me, though I wanted estimated statistics on how many rows were affected.
This will output a list of all constraints triggered by the query:
EXPLAIN ANALYZE DELETE FROM table_a;
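Note that EXPLAIN ANALYZE actually executes the DELETE. If you only want the counts without removing the rows, a minimal sketch is to wrap it in a transaction and roll back:

BEGIN;
EXPLAIN ANALYZE DELETE FROM table_a;
ROLLBACK;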
You could use a plpgsql trigger function on tables to increment a sequence on a delete to get an exact count.
You would need to reset the sequence before issuing the delete. You could use a different sequence per table to get per table statistics.
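A minimal sketch of that idea for one table (all names here are hypothetical, and the trigger adds overhead to every delete):

CREATE SEQUENCE table_a_delete_count;

CREATE OR REPLACE FUNCTION count_table_a_deletes() RETURNS trigger AS $$
BEGIN
  PERFORM nextval('table_a_delete_count');  -- one tick per deleted row
  RETURN OLD;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER table_a_delete_counter
AFTER DELETE ON table_a
FOR EACH ROW EXECUTE PROCEDURE count_table_a_deletes();

-- Reset before the delete, then read the exact count afterwards:
SELECT setval('table_a_delete_count', 1, false);
DELETE FROM table_a;
SELECT currval('table_a_delete_count');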

MySQL trigger in order to cache results?

I have a query that takes about a second to execute pre-cache. Post-cache is fine. Here's a description of the problem: MySQL: nested set is slow?
If I can't get a solution for my problem, would creating a trigger that loops through and executes all the possible queries that might be run against that table (i.e. if there are 100 records in that table, it would execute 100 queries) be a good idea? This way, when my application does execute such a query, I can depend on cached results.
It feels like a bad solution, but I really can't afford a 1 second response time from that query.
Since you are using a MyISAM table, you can try preloading the table indexes into the key cache.
http://dev.mysql.com/doc/refman/5.0/en/cache-index.html
http://dev.mysql.com/doc/refman/5.0/en/load-index.html
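A minimal sketch of what those pages describe (the cache name, its size, and the table name are assumptions; adapt them to your schema):

SET GLOBAL hot_cache.key_buffer_size = 128 * 1024 * 1024;
CACHE INDEX my_table IN hot_cache;
LOAD INDEX INTO CACHE my_table;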