extracting dates from SCN_TO_TIMESTAMP(ORA_ROWSCN) - sql

I have a problem where I am supposed to extract the row creation date for each row as part of a large report. With SCN_TO_TIMESTAMP(ORA_ROWSCN) I can view record creation dates, but I cannot convert or extract that data and use it somewhere else. I'm getting an error message which says:
"ORA-08181: specified number is not a valid system change number
ORA-06512: at "SYS.SCN_TO_TIMESTAMP", line 1"
The query I wrote was as follows:
insert into MEMBER_CREATION_DATE (NATIONAL_ID, CHECKNO, CREATION_DATE)
select NATIONAL_ID, CHECKNO, trunc(scn_to_timestamp(ora_rowscn)) from MEMBER

Your clue is ORA-08181: specified number is not a valid system change number.
It means that SCN_TO_TIMESTAMP cannot map the row's ORA_ROWSCN to a timestamp, because the database no longer retains the SCN-to-time mapping for that SCN. The SCN is simply too old, so you get the error.
You can check the oldest SCN still available in the database with this query:
select min(SCN) min_scn from sys.smon_scn_time;
As Oracle states:
The association between an SCN and a timestamp when the SCN is generated is remembered by the database for a limited period of time. This period is the maximum of the auto-tuned undo retention period, if the database runs in the Automatic Undo Management mode, and the retention times of all flashback archives in the database, but no less than 120 hours. The time for the association to become obsolete elapses only when the database is open. An error is returned if the SCN specified for the argument to SCN_TO_TIMESTAMP is too old.
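If you still need the insert to run even for rows whose SCN has aged out, one option is to wrap SCN_TO_TIMESTAMP in a small function that traps ORA-08181 and returns NULL. This is a sketch, not a drop-in fix; the function name is made up, and rows outside the retention window will simply get a NULL creation date:

```sql
-- Illustrative wrapper: returns NULL instead of raising ORA-08181
-- when the SCN is too old to be mapped to a timestamp.
CREATE OR REPLACE FUNCTION safe_scn_to_ts(p_scn NUMBER)
RETURN TIMESTAMP
IS
  e_invalid_scn EXCEPTION;
  PRAGMA EXCEPTION_INIT(e_invalid_scn, -8181);
BEGIN
  RETURN SCN_TO_TIMESTAMP(p_scn);
EXCEPTION
  WHEN e_invalid_scn THEN
    RETURN NULL;
END;
/

-- The original insert, now tolerant of aged-out SCNs:
INSERT INTO MEMBER_CREATION_DATE (NATIONAL_ID, CHECKNO, CREATION_DATE)
SELECT NATIONAL_ID, CHECKNO, TRUNC(safe_scn_to_ts(ora_rowscn))
FROM MEMBER;
```

TRUNC of a NULL timestamp is NULL, so the insert succeeds for every row and the missing dates are at least visible as NULLs.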

Related

Oracle : Date time of load

I need to extract some data from an Oracle table that was loaded on a particular day. Is there a way to do that? The rows do not have any datetimestamp entry
Found it - ORA_ROWSCN. Have to figure out how to convert it to a date (SCN_TO_TIMESTAMP is not working)
In general, no. You'd need a date column in the table.
If the load was recent, you could try
select scn_to_timestamp( ora_rowscn ), t.*
from table t
However, there are several problems with this approach:
Oracle only knows how to convert recent SCNs to timestamps (on the order of a few days). You would probably need to create a function that calls scn_to_timestamp and handles the exception when the SCN can't be converted to a timestamp.
The conversion of an SCN to a timestamp is approximate (should be within a minute)
Unless the table was built with rowdependencies (which is not the default), the SCN is stored at the block level not at the row level. So if your load changed one row in the block, all the rows in the block would have the same updated SCN. If you can tolerate picking up some rows that were loaded earlier and/or you know that your load only writes to new blocks, this may be less of an issue.
Beyond that, you'd be looking at things like whether flashback logs were enabled or some other mechanism was in place to track data versioning.
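For future loads, the block-level SCN problem can be avoided by creating the table with ROWDEPENDENCIES, which makes Oracle track the SCN per row. A hedged sketch (the table and columns are made up, and the option cannot be added to an existing table without rebuilding it):

```sql
-- ROWDEPENDENCIES must be set at creation time; it costs a few
-- extra bytes per row but gives row-level SCN tracking.
CREATE TABLE load_target (
  id      NUMBER PRIMARY KEY,
  payload VARCHAR2(100)
) ROWDEPENDENCIES;

-- Each row's ORA_ROWSCN now reflects the transaction that last
-- touched that specific row, not any row sharing its block.
SELECT id, ora_rowscn, SCN_TO_TIMESTAMP(ora_rowscn) AS approx_load_time
FROM load_target;
```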

How to find updated date for hive tables?

How can I find the last DML or DQL update timestamp for a Hive table? I can find the TransientDDLid by using "formatted describe", but it is not helping in getting the modified date. How can I figure out the latest updated date for a Hive table (managed or external)?
Do show table extended like 'table_name';
It will give the number of milliseconds elapsed since epoch.
Copy that number, remove the last 3 digits (converting milliseconds to seconds) and do select from_unixtime(seconds elapsed since epoch),
e.g. select from_unixtime(1532442615);
This will give you the timestamp of that moment in the current system's time zone.
I guess this is what you're looking for...
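Rather than trimming digits by hand, the same conversion can be done directly in the query by dividing the milliseconds value by 1000 (HiveQL sketch):

```sql
-- from_unixtime expects seconds, so divide the epoch-milliseconds
-- value reported by SHOW TABLE EXTENDED by 1000 first.
SELECT from_unixtime(CAST(1532442615733 / 1000 AS BIGINT));
```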

Cdc and how long are logs kept

I started using SQL change data capture tables on Microsoft SQL Server 2016. It looks like a fairly easy-to-use mechanism, but the tutorial I was following had no information about the fact that data is only kept in those tables for a limited time; I think the default is 3 days.
I was trying to find some info about it but with no luck, so my question stands:
Is there a way to increase the time that logs are kept, or even turn the cleanup off?
You are looking for the Retention Period, which is indeed 3 days by default.
You can change it using sys.sp_cdc_change_job
USE [YourDatabase];
EXECUTE sys.sp_cdc_change_job
    @job_type = N'cleanup',
    @retention = 2880;
[ @retention = ] retention
Number of minutes that change rows are to be retained in change tables. retention is bigint with a default of NULL, which indicates no change for this parameter. The maximum value is 52494800 (100 years). If specified, the value must be a positive integer. retention is valid only for cleanup jobs.
Please note, that this affects ALL tables marked to be tracked by CDC in the database, there is no way to configure it per table.
https://msdn.microsoft.com/en-us/library/bb510748(v=sql.105).aspx
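Before changing anything, you can inspect the current retention and other cleanup-job settings for the database with the built-in helper procedure:

```sql
USE [YourDatabase];

-- Lists this database's CDC capture and cleanup jobs, including the
-- currently configured retention (in minutes) for the cleanup job.
EXECUTE sys.sp_cdc_help_jobs;
```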

Streaming to Partitioned Tables BigQuery outside of listed date bounds

I have noticed in the BigQuery documentation that it says that you can
stream to partitions within the last 30 days in the past and 5
days in the future relative to the current date, based on current UTC
time.
However, I found it actually allows you to stream further back - we successfully got it to stream to a partition 6 months in the past.
Trying to stream to a date over a year ago however gives this error message:
BigQuery error in insert operation: The destination table's partition
tmp$20160101 is outside the allowed bounds. You can only stream to
partitions within 366 days in the past and 31 days in the future
relative to the current date.
The error message clearly specifies the bounds as 366 days back and 31 days forward. Is this simply a mistake in the BigQuery documentation?
Google cloud link
Latest update: This is now strictly enforced
This is not a mistake but a transition period to reduce the impact on users. The allowed date range will be shortened over time; eventually what the documentation says will be enforced.
A few points:
You are choosing ingestion-time partitioned method, try using column-partitions instead (https://cloud.google.com/bigquery/docs/creating-column-partitions)
Also make sure you are passing a valid date to BigQuery. I just faced this issue when the date was in an invalid format.
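As a sketch of the first point: a column-partitioned table is declared with PARTITION BY on a DATE or TIMESTAMP column, so streamed rows land in the partition given by that column's value rather than by ingestion time. The dataset, table, and column names below are made up:

```sql
-- BigQuery Standard SQL: partition by a DATE column instead of
-- ingestion time, so historical rows can carry their own date.
CREATE TABLE mydataset.events (
  event_date DATE,
  payload    STRING
)
PARTITION BY event_date;
```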

how to update the previous rows with last_modified date column having null values?

I have a loader table into which the feed updates and inserts records every three hours. A few records show NULL values for the last_modified date even though I have a merge which sets the last_modified date column to sysdate. Going forward, I have set last_modified to sysdate and enabled a NOT NULL constraint. Is there any way to rectify just these records so they have last_modified set to the correct timestamp (the date when the insert/update was actually done)?
Thanks
No, the last modification time is not stored in a row by default. You have to do that yourself like you are doing now, or enable some form of journaling. There is no way to correct any old records where you have not done so.
If your rows were modified "recently enough", you might still map their ora_rowscn to their approximate modification TIMESTAMP using SCN_TO_TIMESTAMP:
UPDATE MY_TABLE
SET Last_Modified = SCN_TO_TIMESTAMP(ora_rowscn)
WHERE Last_Modified IS NULL;
This is not a magic bullet though. To quote the documentation:
The usual precision of the result value is 3 seconds.
The association between an SCN and a timestamp when the SCN is generated is remembered by the database for a limited period of time. This period is the maximum of the auto-tuned undo retention period, if the database runs in the Automatic Undo Management mode, and the retention times of all flashback archives in the database, but no less than 120 hours. The time for the association to become obsolete elapses only when the database is open. An error is returned if the SCN specified for the argument to SCN_TO_TIMESTAMP is too old.
If you try to map ora_rowscn of rows outside the allowed window, you will get the error ORA-08181 "specified number is not a valid system change number".
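If some rows are inside the retention window and some are not, a hedged variant is a PL/SQL block that backfills row by row and skips the SCNs that can no longer be mapped (table and column names taken from the example above):

```sql
-- Backfill Last_Modified where possible, silently skipping rows
-- whose ORA_ROWSCN is too old to convert (ORA-08181).
DECLARE
  e_invalid_scn EXCEPTION;
  PRAGMA EXCEPTION_INIT(e_invalid_scn, -8181);
BEGIN
  FOR r IN (SELECT rowid AS rid, ora_rowscn
            FROM MY_TABLE
            WHERE Last_Modified IS NULL)
  LOOP
    BEGIN
      UPDATE MY_TABLE
      SET Last_Modified = SCN_TO_TIMESTAMP(r.ora_rowscn)
      WHERE rowid = r.rid;
    EXCEPTION
      WHEN e_invalid_scn THEN
        NULL;  -- SCN aged out; leave Last_Modified NULL
    END;
  END LOOP;
  COMMIT;
END;
/
```

Note this converts one row at a time, so rows that fail conversion do not roll back the rows that succeed.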