Delete rows using table partition - sql

I have a huge log table which is very busy saving user logs. My task is to schedule a job which keeps the last 3 days of logs or the last 50k rows (whichever is greater) and deletes the rest. Can this be done through TABLE PARTITION? I can't do this through a DELETE statement, which is very time-expensive and blocks rows from being inserted. The table stores log_time as a VARCHAR.
Thanks.

I can suggest this simple solution (a rough sketch follows the list):
every 3 days, create a table named like log-2015-01-28 and write all logs between 2015-01-28 and 2015-01-30 to it
after 2015-01-30, create a new table log-2015-01-31 and write all new rows to it
DROP the table log-2015-01-28 after 2 days
I think this will work very fast.
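A minimal sketch of the rolling-table scheme in generic SQL. The table layout is an assumption (the question only says log_time is stored as a VARCHAR), and underscores replace the hyphens above since hyphenated names would need quoted identifiers:
CREATE TABLE log_2015_01_28 (
    log_time VARCHAR(30),    -- stored as VARCHAR, per the question
    message  VARCHAR(4000)
);
-- ... the application writes here from 2015-01-28 to 2015-01-30 ...

CREATE TABLE log_2015_01_31 (
    log_time VARCHAR(30),
    message  VARCHAR(4000)
);
-- ... new rows now go to log_2015_01_31 ...

-- Two days later, the old table is dropped in one cheap operation,
-- with no long-running DELETE blocking inserts:
DROP TABLE log_2015_01_28;
The point of the design is that DROP TABLE deallocates whole segments at once instead of deleting row by row.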

Alternatively, create a trigger on insert. In the trigger body, check the row count (or space used) of the table; if it exceeds 50k, delete the oldest rows. A sketch is below.
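A hedged sketch of that trigger in SQLite-style SQL (the names user_log and log_id are assumptions; trigger syntax and the LIMIT clause vary by DBMS):
CREATE TRIGGER trim_user_log AFTER INSERT ON user_log
BEGIN
    -- keep only the newest 50,000 rows
    DELETE FROM user_log
    WHERE log_id NOT IN
        (SELECT log_id FROM user_log ORDER BY log_id DESC LIMIT 50000);
END;
Note that firing a delete on every insert adds overhead to an already busy table, so a scheduled cleanup job may still be preferable.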

Related

How to create an error table that only logs last 100 entries

Is there a way to create a simple ERROR table that only keeps the last 100 entries, or do I have to write SQL that, after an insert, deletes any entries beyond the newest 100?
I am using a Derby database in a Java project.
Derby doesn't allow a window function in a DELETE's WHERE clause, so keep the newest 100 rows with a subquery instead (recent Derby versions allow ORDER BY and FETCH FIRST in a subquery):
delete from error_log
where id not in
    (select id from error_log
     order by id desc
     fetch first 100 rows only)
Schedule this to run every 15 minutes or so (Derby has no built-in job scheduler, so use something on the Java side such as a ScheduledExecutorService) and you will always have the newest 100 entries in the table.

Create trigger for SQLite, when new record is inserted, to delete old records

I have the table "log" in an SQLite database, where I write all my log entries.
However, I want to prevent the database from getting too big, and the smartest way of doing this is by using a trigger on the insert command - at least I think so...
When a new record is inserted, a trigger shall be fired that deletes all records older than 10 days.
Or...
When a new record is inserted, a trigger shall be fired that deletes all old records exceeding a specific count (for example 1000).
I need some example code.
Kind regards, and thanks.
This will create an insert trigger that deletes anything with a create date more than 10 days old (note that older rows have a smaller date, so the comparison is <, and the trigger must be on the log table itself):
CREATE TRIGGER [TRIGGER_NAME] AFTER INSERT ON LOG
BEGIN
    DELETE FROM LOG WHERE DATE(CREATE_DATE) < DATE('now', '-10 days');
END;
If you want to do something based on size, like you were saying with 1000 rows, you can do something like this (SQLite uses LIMIT rather than TOP):
CREATE TRIGGER [TRIGGER_NAME] AFTER INSERT ON LOG
BEGIN
    DELETE FROM LOG WHERE ROW_NO NOT IN
        (SELECT ROW_NO FROM LOG ORDER BY CREATE_DATE DESC LIMIT 1000);
END;
This selects the 1000 newest rows and deletes anything that is not returned by that subquery.

Using SQL*Plus with Oracle to DELETE data

I have close to 60M rows to delete from 2 separate tables (38M and 19M). I have never deleted this amount of rows before, and I'm aware that it'll cause problems like running out of rollback/undo space and probably won't complete.
What's the best way to delete this amount of rows?
You can delete some number of rows at a time and do it repeatedly.
delete from *your_table*
where *conditions*
and rownum <= 1000000
The above SQL statement will remove up to 1M rows at a time; execute it repeatedly (about 38 times for the larger table), either by hand or in a PL/SQL block.
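A minimal sketch of the looped variant in PL/SQL (the *your_table* and *conditions* placeholders are the same as above):
BEGIN
  LOOP
    DELETE FROM *your_table*
     WHERE *conditions*
       AND rownum <= 1000000;
    EXIT WHEN SQL%ROWCOUNT = 0;  -- nothing left to delete
    COMMIT;                      -- release undo and locks between batches
  END LOOP;
  COMMIT;
END;
/
Committing between batches keeps the undo requirement at roughly one batch's worth instead of all 38M rows at once.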
The other way I can think of: if a large portion of the data should be removed, you can negate the condition and insert the data that should remain into a new table; after inserting, drop the original table and rename the new one.
create table *new_table* as
select * from *your_table*
where *conditions_of_remaining_data*
After the above, you can drop the old table and rename the new one.
drop table *your_table*;
alter table *new_table* rename to *your_table*;

Database Update Query for Huge Records

We have around 2,080,000 records in the table.
We needed to add a new column to it, and we added that.
Since this new column needs to be the primary key, we want to populate all rows from a sequence.
Here's the query:
BEGIN
  FOR loop_counter IN 1 .. 211 LOOP
    UPDATE user_char
       SET id = USER_CHAR__ID_SEQ.nextval
     WHERE user_char.id IS NULL
       AND rownum < 100000;
    COMMIT;
  END LOOP;
END;
But it's now been almost a full day and the query is still running.
Note: I am not a DB developer/programmer.
Is there anything wrong with this query, or is there another (quicker) way to do the same job?
First, there does not appear to be any reason to use PL/SQL here. It would be more efficient to simply issue a single SQL statement to update every row:
UPDATE user_char
SET id = USER_CHAR__ID_SEQ.nextval
WHERE id IS NULL;
Depending on the situation, it may also be more efficient to create a new table and move the data from the old table to the new table in order to avoid row migration, i.e.
ALTER TABLE user_char
  RENAME TO user_char_old;

CREATE TABLE user_char
AS
SELECT USER_CHAR__ID_SEQ.nextval AS id, <<list of other columns>>
  FROM user_char_old;
<<Build indexes on user_char>>
<<Drop and recreate any foreign key constraints involving user_char>>
If this were a large table, you could use parallelism in the CREATE TABLE statement. It's not obvious that you'd get a lot of benefit from parallelism on a mere 2 million rows, but it might shave a few seconds off the operation.
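A hedged sketch of requesting parallelism in that CTAS (the degree of 4 is an arbitrary example, and sequence values may be handed out non-contiguously across parallel servers):
CREATE TABLE user_char
  PARALLEL 4
AS
SELECT /*+ PARALLEL(user_char_old, 4) */
       USER_CHAR__ID_SEQ.nextval AS id, <<list of other columns>>
  FROM user_char_old;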
Second, if it is taking a day to update a mere 2 million rows, there must be something else going on. A 2 million row table is pretty small these days-- I can populate and update a 2 million row table on my laptop in somewhere between a few seconds and a few minutes. Are there triggers on this table? Are there foreign keys? Are there other sessions updating the rows? What is the query waiting on?
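To see what the session is waiting on, a quick check against the data dictionary (a sketch; fill in the SID of the session running the update):
SELECT sid, event, seconds_in_wait
  FROM v$session
 WHERE sid = <<sid of the session running the update>>;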

script to delete millions of rows from an Oracle table based on either age or date

Is there a possibility to write a script in Oracle which deletes rows from a table based on their age? I have a table with millions of rows in it and I want to keep only the latest 3 months of rows. The relevant column is a DATE named FEED_DT_TM.
I am very new to database stuff. How can I write a script for this?
With this many rows deleted in a single transaction, you should also expect that much undo space will be used. All the rows that you delete will be briefly saved in the undo tablespace to allow you to roll back the transaction and, more importantly, to allow other users to see the rows until you COMMIT your delete. See this AskTom thread for advice.
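A sketch for watching the undo consumed by the running delete (v$transaction columns per the Oracle data dictionary):
SELECT used_ublk,   -- undo blocks consumed
       used_urec    -- undo records consumed
  FROM v$transaction;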
Since FEED_DT_TM is a DATE, there is no need to use TO_DATE to cast it to a DATE. To keep the latest 3 months (roughly 90 days), simply
DELETE FROM your_table_name
WHERE sysdate - feed_dt_tm >= 90
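As a side note (assuming an index on feed_dt_tm may exist, which the question doesn't say): the same predicate written with the column alone on one side lets Oracle use that index:
DELETE FROM your_table_name
WHERE feed_dt_tm <= sysdate - 90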
Also consider the option of keeping the rows you need in a new table and then dropping the old table.
Something like:
create table new_table_3_months
as
select *
from table1
where date_column > (sysdate - 90);

drop table table1;
alter table new_table_3_months rename to table1;
Make sure you also look at constraints, indexes and other objects, if applicable to the initial table. And don't forget to TEST, TEST, TEST.