Rows with a certain column value are automatically deleted within 30 seconds from an Oracle database table. I have searched all scheduler jobs, procedures, and ALL_SOURCE, but nowhere could I find any DELETE or TRUNCATE statement against the table.
There are other records in the table as well, but only the records with that certain column value are getting deleted.
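For reference, a sketch of the kind of data-dictionary search described above, assuming DBA access; MY_TABLE is a placeholder for the affected table:
-- Look for scheduler jobs whose action mentions the table.
SELECT job_name, job_action
  FROM dba_scheduler_jobs
 WHERE UPPER(job_action) LIKE '%MY_TABLE%';
-- Look for stored code that deletes from the table.
SELECT owner, name, type, line, text
  FROM dba_source
 WHERE UPPER(text) LIKE '%DELETE%'
   AND UPPER(text) LIKE '%MY_TABLE%';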
Related
I have a database (a couple hundred tables) in which every table contains a specific column called LastReplicationDate. The column name is always the same in each table, and the value is always the same within each table.
Is it possible to write a query that gets the distinct value of this column for each table in my database without having to select from each table via a union query?
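One way this can be approached (a sketch, assuming SQL Server and that LastReplicationDate exists in the listed tables; the dynamic-SQL pattern below is illustrative, not a confirmed answer) is to build the union query from the system catalog and execute it:
-- Build one UNION ALL query over every table that has a
-- LastReplicationDate column, then execute it.
DECLARE @sql nvarchar(max) = N'';

SELECT @sql = @sql
    + CASE WHEN @sql = N'' THEN N'' ELSE N' UNION ALL ' END
    + N'SELECT ''' + t.name + N''' AS table_name,'
    + N' MAX(LastReplicationDate) AS LastReplicationDate FROM '
    + QUOTENAME(SCHEMA_NAME(t.schema_id)) + N'.' + QUOTENAME(t.name)
FROM sys.tables AS t
WHERE EXISTS (SELECT 1 FROM sys.columns AS c
              WHERE c.object_id = t.object_id
                AND c.name = N'LastReplicationDate');

EXEC sys.sp_executesql @sql;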
On my MS SQL table I have created a trigger (AFTER INSERT, UPDATE), and inside it I check the columns of the inserted and deleted tables.
But I am finding a mismatch in the number of columns when a row is updated: the two pseudo-tables and the main table each report a different number of columns (for example, the main table contains 52 columns, inserted has 49, and deleted has only 47).
Note: the missing columns are not computed columns.
So I wanted to know in which cases a mismatch in the number of columns between these tables can be observed during an update.
As most of the commenters rightly mentioned, this is not possible. When I checked in more detail, the difference in columns was not caused by the inserted or deleted tables but by the way the columns were identified for the check.
In my code I used FOR XML PATH, which was skipping columns of inserted whose value was NULL. When handled with FOR XML PATH, TYPE it gave identical columns.
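For illustration, a minimal repro of the NULL-skipping behaviour (the column names are made up):
SELECT 1 AS id, CAST(NULL AS int) AS qty, 'x' AS code
FOR XML PATH('row');
-- Produces <row><id>1</id><code>x</code></row>: the NULL qty column is
-- omitted entirely, so counting elements in the XML undercounts the columns.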
I have a table in SQL Server which I update on a monthly basis with new data from the client. Unfortunately, there is no timestamp that tells me when rows were added or modified. This becomes a problem when the same data is accidentally appended twice and I have to find and delete the repeated rows manually. Is there a way to write a query like the one below?
DELETE FROM (table)
WHERE (date_time_when_row_is_added) <= (manually_specified_datetime)
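If the table can be altered, one common pattern (a sketch only; dbo.ClientData and DateAdded are placeholder names) is to add a defaulted datetime column so that future loads are stamped and a bad load can be removed by time:
-- Placeholder names throughout; adjust to the real table.
ALTER TABLE dbo.ClientData
    ADD DateAdded datetime2 NOT NULL
        CONSTRAINT DF_ClientData_DateAdded DEFAULT (SYSUTCDATETIME());

-- Later, remove rows loaded before a manually specified cutoff.
DECLARE @cutoff datetime2 = '20240131';  -- illustrative value
DELETE FROM dbo.ClientData
WHERE DateAdded <= @cutoff;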
Is there a way to tell the last date and time any records in a table were modified in SQL Server 2012, without having to add a "last updated" column to the table?
The idea is to avoid querying, say, a 10 million row table when no rows have changed.
Is there a way to tell the last date and time any records in a table were modified in SQL Server 2012, without having to add a "last updated" column to the table?
Yes - you can write an update/insert/delete trigger on the table that records the fact that an update happened along with the date and time.
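A minimal sketch of that idea, assuming a hypothetical log table and target table (dbo.TableChangeLog and dbo.MyTable are placeholder names):
CREATE TABLE dbo.TableChangeLog (
    TableName    sysname   NOT NULL PRIMARY KEY,
    LastChangeAt datetime2 NOT NULL
);
GO
CREATE TRIGGER trg_MyTable_LastChange
ON dbo.MyTable
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;
    -- Upsert a single row recording when this table last changed.
    MERGE dbo.TableChangeLog AS target
    USING (SELECT N'MyTable' AS TableName) AS src
       ON target.TableName = src.TableName
    WHEN MATCHED THEN
        UPDATE SET LastChangeAt = SYSUTCDATETIME()
    WHEN NOT MATCHED THEN
        INSERT (TableName, LastChangeAt)
        VALUES (src.TableName, SYSUTCDATETIME());
END;
GO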
The idea is to avoid querying, say, a 10 million row table when no rows have changed.
You might also want to look at the built-in Change Tracking features.
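For example, a sketch of enabling Change Tracking and asking what changed since a known version (MyDb and dbo.MyTable are placeholders; the table needs a primary key):
ALTER DATABASE MyDb
    SET CHANGE_TRACKING = ON (CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON);

ALTER TABLE dbo.MyTable ENABLE CHANGE_TRACKING;

-- Ask whether anything changed since the version you last saw.
DECLARE @last_sync bigint = 0;  -- persist this between runs
SELECT *
FROM CHANGETABLE(CHANGES dbo.MyTable, @last_sync) AS ct;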
I have around 60M rows to delete across two separate tables (38M and 19M). I have never deleted this many rows before, and I'm aware that a single delete of that size will cause things like rollback errors and probably won't complete.
What's the best way to delete this many rows?
You can delete some number of rows at a time and do it repeatedly.
delete from your_table
where conditions
and rownum <= 1000000;
The above SQL statement removes up to 1M rows per execution (your_table and conditions are placeholders); you can run it 38 times, either by hand or in a PL/SQL block like the sketch below.
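A minimal sketch of the repeated delete in a PL/SQL block (again, your_table and the predicate are placeholders; committing each batch keeps undo usage bounded):
BEGIN
  LOOP
    DELETE FROM your_table
     WHERE conditions            -- replace with the real predicate
       AND ROWNUM <= 1000000;
    EXIT WHEN SQL%ROWCOUNT = 0;  -- stop when nothing is left to delete
    COMMIT;                      -- commit each batch
  END LOOP;
  COMMIT;
END;
/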
The other approach I can think of: if a large portion of the data is to be removed, you can negate the condition and insert the data that should remain into a new table; once that is done, drop the original table and rename the new one.
create table new_table as
select * from your_table
where conditions_of_remaining_data;
After the above, you can drop the old table and rename the new one.
drop table your_table;
alter table new_table rename to your_table;