Oracle : Date time of load - sql

I need to extract some data from an Oracle table that was loaded on a particular day. Is there a way to do that? The rows do not have any date/timestamp column.

Found it - ORA_ROWSCN. Have to figure out how to convert it to a date (SCN_TO_TIMESTAMP is not working)

In general, no. You'd need a date column in the table.
If the load was recent, you could try
select scn_to_timestamp( ora_rowscn ), t.*
from your_table t
However, there are several problems with this
Oracle only knows how to convert recent SCNs to timestamps (on the order of a few days). You would probably need to create a function that calls scn_to_timestamp and handles the exception when the SCN can't be converted to a timestamp (a sketch of such a wrapper is at the end of this answer).
The conversion of an SCN to a timestamp is approximate (should be within a minute)
Unless the table was built with rowdependencies (which is not the default), the SCN is stored at the block level not at the row level. So if your load changed one row in the block, all the rows in the block would have the same updated SCN. If you can tolerate picking up some rows that were loaded earlier and/or you know that your load only writes to new blocks, this may be less of an issue.
Beyond that, you'd be looking at things like whether flashback logs were enabled or some other mechanism was in place to track data versioning.
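For the exception handling mentioned above, a minimal sketch of such a wrapper might look like this (the function name is illustrative; it returns NULL when the SCN is too old to map):
CREATE OR REPLACE FUNCTION safe_scn_to_timestamp (p_scn IN NUMBER)
  RETURN TIMESTAMP
IS
  scn_too_old EXCEPTION;
  PRAGMA EXCEPTION_INIT(scn_too_old, -8181);  -- ORA-08181: not a valid system change number
BEGIN
  RETURN SCN_TO_TIMESTAMP(p_scn);
EXCEPTION
  WHEN scn_too_old THEN
    RETURN NULL;  -- SCN has aged out of the retained mapping window
END;
/
You could then run: select safe_scn_to_timestamp(ora_rowscn), t.* from your_table t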

Related

Postgresql Performance: What Is the Best Way to Use pg_timezone_names?

We use only timestamps without time zone for a global application. However, some things have to be in local time for user convenience, so we have to handle the conversion from local time to UTC, including daylight saving time. We don't need precision finer than a minute.
pg_timezone_names contains everything we need, including the unambiguous long string for the time zone name (e.g., 'US/Eastern'), the interval utc_offset, and the boolean is_dst. (I am assuming the latter two values change as DST boundaries are crossed.)
I am trying to figure out the best performance model, assuming we ultimately have millions of users. Here are the options being considered:
1. TZ name string ('US/Eastern') in the table for the location. Every time a time transformation (from local to UTC or back) is needed, we call pg_timezone_names directly for the utc_offset of that time zone. (This assumes the view is well indexed.) Index on the string in the location table, of course.
2. Local table time_zones replicating pg_timezone_names, but adding id and boolean in_use columns (and dropping the abbreviation). Include tz_id in the location table as a foreign key instead of the string.
3. In the case of a local table, use a procedure that runs at one minute past every hour during the roughly 26 hours over which time zones can change, checks the list of in_use time zones that have just passed 2 AM Sunday (based on the locally stored offset), and calls pg_timezone_names for the updated utc_offset and is_dst values. A trigger on the local table checks whenever a zone comes into use and makes sure it has the correct values.
The question is whether it is faster to evaluate the indexed string in the location table and then pull the offset from pg_timezone_names every time it is needed, or to use a local time_zones table and pull the offset via the FK. I'm thinking the second will be much faster, because it avoids the initial string handling, but it really depends on the speed of the view pg_timezone_names.
After researching this more and discussing it with a colleague, I've realized a flaw in the second option above. That option would indeed be quite a bit faster, but it only works if one wishes to pull the current utc_offset for a time zone. If one needs to do it for a timestamp that is not current, or for a range of timestamps, the built-in Postgres time zone data has to be used, so that each timestamp can be converted with AT TIME ZONE, which applies the appropriate daylight saving conversion for that particular timestamp.
It's slower, but I don't think it can be improved, unless one is only interested in the current timestamp conversion, which is extremely unlikely.
So I am back to the first option, and indexing the time zone string in the local table is no longer necessary, as it would never be searched or sorted on.
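As a sketch of that per-timestamp conversion (assuming a location table with a tz_name column holding the IANA zone string; the literal timestamps are only examples), AT TIME ZONE applies whatever offset, including DST, was in effect at each particular timestamp:
-- local wall-clock time -> UTC instant (timestamptz); the offset depends on the timestamp itself
SELECT l.tz_name,
       TIMESTAMP '2021-07-01 09:30' AT TIME ZONE l.tz_name AS utc_instant_summer,
       TIMESTAMP '2021-01-15 09:30' AT TIME ZONE l.tz_name AS utc_instant_winter
FROM location l;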

Improve performance of subtracting values in the same table in SQL

For a metering project, I use a simple SQL table with the following columns:
ID
Timestamp: dat_Time
Metervalue: int_Counts
Meterpoint: fk_MetPoint
While this works nicely in general, I have not found an efficient solution for one specific problem: one Meterpoint is a submeter of another Meterpoint, and I'm interested in the delta between those two Meterpoints to get the remaining consumption. As the registration of counts is done by one device, I get data points for the various Meterpoints at the same Timestamp.
I found a solution using a subquery, but it does not appear to be very efficient:
SELECT
    A.dat_Time,
    (A.int_Counts - (SELECT B.int_Counts FROM tbl_Metering AS B WHERE B.fk_MetPoint = 2 AND B.dat_Time = A.dat_Time)) AS Delta
FROM tbl_Metering AS A
WHERE fk_MetPoint = 1
How could I improve this query?
Thanks in advance
You can try using a window function instead:
SELECT m.dat_Time,
       (m.int_Counts - m.int_Counts_2) AS Delta
FROM (SELECT m.*,
             MAX(CASE WHEN fk_MetPoint = 2 THEN int_Counts END) OVER (PARTITION BY dat_Time) AS int_Counts_2
      FROM tbl_Metering m
     ) m
WHERE fk_MetPoint = 1
From a query point of view, you should at a minimum change to a set-based approach instead of an inline sub-query for each row (a GROUP BY would do), but this is a good candidate for a windowing query, just as suggested by the "Great" Gordon Linoff.
However if this is a metering project, then we are going to expect a high volume of records, if not now, certainly over time.
I would recommend you look into altering the input process so that the delta is stored as its own first-class column. This moves much of the performance hit to the write process, which presumably occurs only once per record, whereas your select will be executed many times.
This can be done using an INSTEAD OF trigger, or you could write it into the business logic. In a recent IoT project we computed and stored these additional properties with each inserted reading to greatly simplify many types of aggregate and analysis queries (a sketch of such a trigger follows this list):
Id of the Previous sequential reading
Timestamp of the Previous sequential reading
Value Delta
Time Delta
Number of readings between this and the previous reading
The last one sounds close to your scenario; we were deliberately batching multiple sequential readings into a single record.
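A minimal sketch of such an INSTEAD OF trigger (SQL Server syntax assumed; the Prev_ID, Prev_Time, Value_Delta and Time_Delta columns are illustrative additions to tbl_Metering):
CREATE TRIGGER trg_Metering_Insert
ON tbl_Metering
INSTEAD OF INSERT
AS
BEGIN
    -- look up the previous sequential reading for the same meter point
    -- and store its id/timestamp plus the value and time deltas
    INSERT INTO tbl_Metering (dat_Time, int_Counts, fk_MetPoint,
                              Prev_ID, Prev_Time, Value_Delta, Time_Delta)
    SELECT i.dat_Time,
           i.int_Counts,
           i.fk_MetPoint,
           prev.ID,
           prev.dat_Time,
           i.int_Counts - prev.int_Counts,
           DATEDIFF(SECOND, prev.dat_Time, i.dat_Time)
    FROM inserted i
    OUTER APPLY (SELECT TOP (1) m.ID, m.dat_Time, m.int_Counts
                 FROM tbl_Metering m
                 WHERE m.fk_MetPoint = i.fk_MetPoint
                   AND m.dat_Time < i.dat_Time
                 ORDER BY m.dat_Time DESC) prev;
END;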
You could also process the received data into a separate table that includes this level of aggregation information, so as not to pollute the raw feed and to allow you to re-process it on demand.
You could redirect your analysis queries to this second table, which is now effectively a data warehouse of sorts.

extracting dates from SCN_TO_TIMESTAMP(ORA_ROWSCN)

I have a problem where I am supposed to extract the row creation date for each row as part of a large report. With SCN_TO_TIMESTAMP(ORA_ROWSCN) I can view record creation dates, but I cannot convert or extract that data and use it somewhere else. I'm getting an error message which says:
ORA-08181: specified number is not a valid system change number
ORA-06512: at "SYS.SCN_TO_TIMESTAMP", line 1
The query I wrote was as follows:
insert into MEMBER_CREATION_DATE(NATIONAL_ID,CHECKNO,CREATION_DATE)
select NATIONAL_ID,CHECKNO,trunc(scn_to_timestamp(ora_rowscn)) from MEMBER
Your clue is ORA-08181: specified number is not a valid system change number.
What it means is that SCN_TO_TIMESTAMP cannot map that ORA_ROWSCN any more, because the SCN-to-timestamp association has aged out of the undo data. The timestamp associated with that System Change Number is too old, therefore you get the error.
You can check the oldest available SCN number in the database with this query:
select min(SCN) min_scn from sys.smon_scn_time;
As Oracle states:
The association between an SCN and a timestamp when the SCN is generated is remembered by the database for a limited period of time. This period is the maximum of the auto-tuned undo retention period, if the database runs in the Automatic Undo Management mode, and the retention times of all flashback archives in the database, but no less than 120 hours. The time for the association to become obsolete elapses only when the database is open. An error is returned if the SCN specified for the argument to SCN_TO_TIMESTAMP is too old.
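Based on that, a hedged rewrite of the original insert (assuming you can query sys.smon_scn_time, as above) is to convert only SCNs that are still inside the retained mapping window and fall back to NULL for older rows, rather than raising ORA-08181:
INSERT INTO MEMBER_CREATION_DATE (NATIONAL_ID, CHECKNO, CREATION_DATE)
SELECT m.NATIONAL_ID,
       m.CHECKNO,
       CASE
         WHEN m.ora_rowscn >= (SELECT MIN(scn) FROM sys.smon_scn_time)
           THEN TRUNC(SCN_TO_TIMESTAMP(m.ora_rowscn))  -- still mappable
         ELSE NULL                                     -- too old to map
       END
FROM MEMBER m;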

How to refer commit time of the records on BigQuery

Range decorators in BigQuery refer to the time at which records were added:
References table data added between <time1> and <time2>
(from https://cloud.google.com/bigquery/table-decorators)
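For reference, a range decorator in legacy SQL looks roughly like this (project, dataset, table and the millisecond epoch timestamps are placeholders):
SELECT *
FROM [my-project:my_dataset.my_table@1371024000000-1371027600000]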
It also seems to have been called the commit time:
the timestamps are compared to a commit time
(from https://code.google.com/p/google-bigquery/issues/detail?id=160#c12)
Is there any way to know the added time or commit time of the records?
I.e. something like "SELECT ThisRowCommitTime(), * FROM table", or through Tabledata:list, that would expose the timestamp for each row?
No, that's a reasonable thing to look for, but it's not currently available.
You could file a feature request, and it would help to explain how the feature would be useful to you. In particular: would this still be useful to you if it were exposed only for data up to 7 days old, matching the range you can time-travel over with a decorator?

How to update previous rows where the last_modified date column has null values?

I have a loader table into which a feed updates and inserts records every three hours. A few records show NULL values for the last_modified date, even though I have a merge which is supposed to set the last_modified column to sysdate. Going forward, I have set last_modified to sysdate and enabled a NOT NULL constraint. Is there any way to rectify just these existing records so that last_modified holds the correct timestamp (the date when the insert/update was actually done)?
Thanks
No, the last modification time is not stored in a row by default. You have to do that yourself like you are doing now, or enable some form of journaling. There is no way to correct any old records where you have not done so.
If your rows were modified "recently enough", you might still be able to map their ora_rowscn to their approximate modification TIMESTAMP using SCN_TO_TIMESTAMP:
UPDATE MY_TABLE
SET Last_Modified = SCN_TO_TIMESTAMP(ora_rowscn)
WHERE Last_Modified IS NULL;
This is not a magic bullet though. To quote the documentation:
The usual precision of the result value is 3 seconds.
The association between an SCN and a timestamp when the SCN is generated is remembered by the database for a limited period of time. This period is the maximum of the auto-tuned undo retention period, if the database runs in the Automatic Undo Management mode, and the retention times of all flashback archives in the database, but no less than 120 hours. The time for the association to become obsolete elapses only when the database is open. An error is returned if the SCN specified for the argument to SCN_TO_TIMESTAMP is too old.
If you try to map ora_rowscn of rows outside the allowed window, you will get the error ORA-08181 "specified number is not a valid system change number".
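If some rows may already be outside that window, a hedged variant (again assuming you can query sys.smon_scn_time) restricts the update to SCNs that are still mappable, so older rows are simply left NULL instead of raising the error:
UPDATE MY_TABLE
SET Last_Modified = SCN_TO_TIMESTAMP(ora_rowscn)
WHERE Last_Modified IS NULL
AND ora_rowscn >= (SELECT MIN(scn) FROM sys.smon_scn_time);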