Combine results of multiple queries in Oracle SQL - sql

Database: Oracle Database 10g Release 10.2.0.4.0, working with Oracle SQL Developer.
EDIT
Sorry, Query-1 should be:
SELECT TABLE_NAME FROM USER_TABLES;
Earlier it was SELECT OWNER, TABLE_NAME FROM ALL_TABLES;
Output-1: all tables that I own
Query-2:
SELECT COUNT(*) FROM MYTABLE_1;
Output-2: Total number of rows in specific table MYTABLE_1
Query-3: SELECT MAX(ORA_ROWSCN) FROM MYTABLE_1;
Output of Query-3 is a number (493672033308) which is further used in Query-4
Query-4: SELECT SCN_TO_TIMESTAMP(493672033308) FROM DUAL;
Output-4 is last updated time of specific table MYTABLE_1
How can I combine all of this to get a list of all user tables with three columns, headed TABLE_NAME, TOTAL_ROWS and LAST_UPDATE_TIME?
EDIT-2: Final Query:
SELECT t.TABLE_NAME
, m.TIMESTAMP
, t.NUM_ROWS
, ((NVL(t.NUM_ROWS,0) + NVL(m.INSERTS,0)) - NVL(m.DELETES,0)) AS TOT_ROWS
FROM USER_TABLES t
LEFT OUTER JOIN USER_TAB_MODIFICATIONS m
ON t.TABLE_NAME = m.TABLE_NAME
ORDER BY t.TABLE_NAME;
Thanks APC, StevieG, bob dylan :)

You want to use the contents of the data dictionary to drive a query. This can only be done with dynamic SQL, in a procedure.
Some points to bear in mind:
Oracle maintains the SCN-to-timestamp mapping to support Flashback Query, but it only keeps the mapping for the supported UNDO_RETENTION period. So we can use SCN_TO_TIMESTAMP() only on tables with recent activity; staler tables will hurl ORA-08181.
Tables with no rows won't have an associated SCN, and SCN_TO_TIMESTAMP() hurls if we pass it a NULL SCN.
So a robust solution is quite complex. This one uses DBMS_OUTPUT to display the results; other mechanisms are available:
declare
    n             pls_integer;
    max_scn       number;
    x_scn_too_old exception;
    pragma exception_init(x_scn_too_old, -8181);
    txt           varchar2(30);
begin
    for lrec in ( select table_name from user_tables )
    loop
        -- dynamic SQL: the table name comes from the data dictionary
        execute immediate
            'select count(*), max(ora_rowscn) from '
            || lrec.table_name
            into n, max_scn;
        dbms_output.put(lrec.table_name
            || ' count=' || to_char(n));
        begin
            if n > 0 then
                select to_char(scn_to_timestamp(max_scn), 'yyyy-mm-dd hh24:mi:ss.ff3')
                into   txt
                from   dual;
            else
                -- empty table: no SCN to map, so no timestamp
                txt := null;
            end if;
        exception
            -- ORA-08181: the SCN is older than the retained mapping
            when x_scn_too_old then
                txt := 'earlier';
        end;
        dbms_output.put_line(' ts=' || txt);
    end loop;
end;
/
There is a pure SQL alternative, using NUM_ROWS from USER_TABLES and the USER_TAB_MODIFICATIONS view. This view is maintained by Oracle to monitor the staleness of statistics on tables. As you're on 10g this will be happening automatically (in 9i we had to switch on monitoring for specific tables).
USER_TAB_MODIFICATIONS gives us numbers for the DML activity on each table, which is neat: we can add those numbers to NUM_ROWS to get an accurate total, and that is much more efficient than issuing a COUNT().
Again, a couple of points:
Any table which lacks statistics will have a NULL NUM_ROWS. For this reason I use NVL() in the arithmetic column.
USER_TAB_MODIFICATIONS only contains data for tables which have changed since the last time statistics were gathered on them. Once we gather statistics on a table it disappears from that view until more DML is issued. So, use an outer join.
Note that we will only have a timestamp for tables with stale statistics. This is less predictable than the SCN_TO_TIMESTAMP used above, as it depends on your stats gathering strategy.
So here it is:
select t.table_name
     , m.timestamp
     , t.num_rows
     , nvl(t.num_rows,0) + nvl(m.inserts,0) - nvl(m.deletes,0) as tot_rows
from   user_tables t
       left outer join user_tab_modifications m
       on t.table_name = m.table_name
order by t.table_name
/
Perhaps the best solution is a combination, using NUM_ROWS and USER_TAB_MODIFICATIONS to avoid the count, and only checking ORA_ROWSCN for tables with fresh statistics.
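For illustration, here is a hedged sketch of that combination, using the same dictionary views as above (the date formats and the fallback order are my assumptions; treat it as a starting point, not a finished routine):
declare
    max_scn       number;
    txt           varchar2(30);
    x_scn_too_old exception;
    pragma exception_init(x_scn_too_old, -8181);
begin
    for lrec in ( select t.table_name
                       , nvl(t.num_rows,0) + nvl(m.inserts,0) - nvl(m.deletes,0) as tot_rows
                       , m.timestamp as mod_ts
                  from   user_tables t
                         left outer join user_tab_modifications m
                         on t.table_name = m.table_name )
    loop
        if lrec.mod_ts is not null then
            -- stale statistics: USER_TAB_MODIFICATIONS already carries a timestamp
            txt := to_char(lrec.mod_ts, 'yyyy-mm-dd hh24:mi:ss');
        elsif lrec.tot_rows > 0 then
            -- fresh statistics: fall back to ORA_ROWSCN for this table only
            begin
                execute immediate
                    'select max(ora_rowscn) from ' || lrec.table_name
                    into max_scn;
                select to_char(scn_to_timestamp(max_scn), 'yyyy-mm-dd hh24:mi:ss')
                into   txt
                from   dual;
            exception
                when x_scn_too_old then
                    txt := 'earlier';
            end;
        else
            txt := null;  -- empty table, as far as the statistics know
        end if;
        dbms_output.put_line(lrec.table_name || ' rows=' || lrec.tot_rows || ' ts=' || txt);
    end loop;
end;
/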
Note that this is only a concern because you don't have your own journalling or table audit. Many places add metadata columns on their tables to track change data (e.g. CREATED_ON, CREATED_BY, UPDATED_ON, UPDATED_BY).
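For example, a minimal sketch of that pattern (the table, column and trigger names here are illustrative, not from the question):
ALTER TABLE mytable_1 ADD (updated_on DATE, updated_by VARCHAR2(30));

CREATE OR REPLACE TRIGGER trg_mytable_1_audit
BEFORE INSERT OR UPDATE ON mytable_1
FOR EACH ROW
BEGIN
    -- stamp every inserted or changed row
    :new.updated_on := SYSDATE;
    :new.updated_by := USER;
END;
/
With that in place, MAX(updated_on) answers the last-update question directly, with none of the SCN or statistics caveats.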

I'd do it like this:
SELECT a.OWNER, a.TABLE_NAME, a.NUM_ROWS, b.TIMESTAMP
FROM ALL_TABLES a
INNER JOIN DBA_TAB_MODIFICATIONS b ON a.OWNER = b.TABLE_OWNER AND a.TABLE_NAME = b.TABLE_NAME
edit - This is correct, except that NUM_ROWS might not be fully accurate:
http://www.dba-oracle.com/t_count_rows_all_tables_in_schema.htm
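NUM_ROWS is only as fresh as the last statistics gathering, so one mitigation is to regather statistics first; this is a standard DBMS_STATS call, shown here for the current schema:
BEGIN
    -- refresh optimizer statistics so NUM_ROWS is (near) current
    DBMS_STATS.GATHER_SCHEMA_STATS(ownname => USER);
END;
/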

I don't have enough rep to post a comment, but the reason StevieG's answer is still returning an error is that you don't have access to the DBA_TAB_MODIFICATIONS view. Instead, use the USER_/ALL_ equivalents, in line with your permissions:
SELECT a.OWNER, a.TABLE_NAME, a.NUM_ROWS, b.TIMESTAMP
FROM ALL_TABLES a
INNER JOIN ALL_TAB_MODIFICATIONS b ON a.OWNER = b.TABLE_OWNER AND a.TABLE_NAME = b.TABLE_NAME

The solution to your problem is simple:
use subquery factoring. This will help you for sure.
You can find a reference at the link below, and a sketch after it.
http://oracle-base.com/articles/misc/with-clause.php
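For instance, a minimal sketch of subquery factoring applied to this question, reusing the dictionary views from the accepted answer:
WITH mods AS (
    SELECT table_name, timestamp, inserts, deletes
    FROM   user_tab_modifications
)
SELECT t.table_name
     , NVL(t.num_rows, 0) + NVL(m.inserts, 0) - NVL(m.deletes, 0) AS total_rows
     , m.timestamp AS last_update_time
FROM   user_tables t
       LEFT OUTER JOIN mods m
       ON t.table_name = m.table_name
ORDER  BY t.table_name;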

Related

Postgres SQL Statments, getting the right data

I know Postgres has a lot of functions, and I'm not the fittest in SQL anyway, but I need to know whether it's possible with Postgres to somehow get the data in a table with this statement:
SELECT table_name
FROM information_schema.tables
where table_schema='public'
I'm getting the tables I created, which is what I want,
e.g.
table_name
myTable1
myTable2
myTable3
Each of the tables is filled with different data, but every table has a column named version that I want to access.
Joining the tables wouldn't work; at least it didn't turn out the way I wanted. What I want is this:
table_name | version
-----------+--------
myTable1   |      21
myTable2   |      12
With
Select version from mytable1 order by version desc limit 1
I get the latest version, but I would like to combine this somehow. I mean, I can join the 3 tables, but that's not what I want.
So my question is: is it possible to do this, or do I have to work around it? Because I believe that getting the table names happens on a higher layer.
In the end you need dynamic SQL for this. One way to do it is to use a PL/pgSQL function; another way is to use query_to_xml() to run a dynamic query without the use of PL/pgSQL.
with data as (
  select t.table_name,
         query_to_xml(format('select version
                              from %I.%I
                              order by version desc
                              limit 1',
                             t.table_schema, t.table_name),
                      true, true, '') as result
  from information_schema.tables t
  where t.table_schema = 'public'
)
select table_name,
       (xpath('/row/version/text()', result))[1]::text::int as version
from data;
The format() function is used to build a SELECT query the way you need it. The query_to_xml() will then return something like:
<row xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<version>21</version>
</row>
The xpath() function is then used to extract the 21 from that XML. As it returns an array of matches, the [1] is used to extract the first match. This is then converted to an integer.
Note that if there is at least one table that does not contain a column named version this query will fail. You can work around that by extending the WHERE clause to:
where t.table_schema = 'public'
and exists (select *
            from information_schema.columns c
            where c.table_schema = t.table_schema
            and c.table_name = t.table_name
            and c.column_name = 'version')

Converting one to many relation into a json column in PostgreSQL

I'm trying to query two information_schema tables in PostgreSQL - tables and columns - in order to get the following result:
table_name - columns_as_json_array
Sort of converting this one-to-many relation into a JSON array column.
I tried a lot of different methods and came up with something like this:
SELECT t.table_name, c.json_columns
FROM information_schema.TABLES t
LEFT JOIN LATERAL(
SELECT table_name, json_agg(row_to_json(tbc)) AS json_columns
FROM information_schema.COLUMNS tbc
WHERE t.table_name = tbc.table_name
GROUP BY table_name
) as c ON TRUE;
This returns a list of table_names, but json_columns always contains all of the columns available instead of the columns of that particular table.
Any ideas?
I don't really see the point of a lateral join here. As far as I can tell, you can get the expected result by aggregating information_schema.columns:
select table_name, json_agg(row_to_json(c)) json_columns
from information_schema.columns c
group by table_name
order by table_name

How to get several records searching on the whole database

My question is: is it possible to list all the columns from the whole database, not just from specific tables, based on 3 different criteria that are in an "OR" relationship? For example, I have a database called "Bank" and 3 criteria, "Criteria1; Criteria2; Criteria3". If any of them is true (so the relation between them should be OR, not AND), I want back all the columns matching the criteria, and the output should provide the "account_id" or "customer_id" from the same table.
How do I proceed in this case?
It is possible, but you probably don't want to do it. Anyway, you could write a stored procedure that finds all tables that contain the columns you want:
select distinct table_name from user_tab_cols utc
where exists (select * from user_tab_cols where table_name = utc.table_name
and column_name = 'ACCOUNT_ID')
and exists (select * from user_tab_cols where table_name = utc.table_name
and column_name = 'CUSTOMER_ID');
Given those tables, you could run a query where you append the table name and your criteria:
execute immediate 'select account_id, customer_id from agreement where '
|| your_criteria_here;
A bit messy and inefficient, and treat this as pseudo-code (a fuller sketch follows below). However, if you really want to do this for an ad-hoc query, it should point you in the right direction!
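Fleshed out a little (still pseudo-code in spirit: the criteria string is a placeholder and must be valid against every matched table, and DBMS_OUTPUT is just one way to report the hits):
declare
    your_criteria varchar2(4000) := 'criteria1 = 1 or criteria2 = 1 or criteria3 = 1';
    rc   sys_refcursor;
    acc  number;
    cust number;
begin
    for t in ( select distinct table_name
               from   user_tab_cols utc
               where  exists (select * from user_tab_cols
                              where table_name = utc.table_name
                              and   column_name = 'ACCOUNT_ID')
               and    exists (select * from user_tab_cols
                              where table_name = utc.table_name
                              and   column_name = 'CUSTOMER_ID') )
    loop
        -- build and run the per-table query dynamically
        open rc for 'select account_id, customer_id from '
                    || t.table_name || ' where ' || your_criteria;
        loop
            fetch rc into acc, cust;
            exit when rc%notfound;
            dbms_output.put_line(t.table_name || ': ' || acc || ' / ' || cust);
        end loop;
        close rc;
    end loop;
end;
/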

Vertica Dynamic Max Timestamp from all Tables in a Schema

System is HP VERTICA 7.1
I am trying to create a SQL query which will dynamically find, from the system tables, all tables in a specific schema that have a timestamp column named DWH_CREATE_TIMESTAMP. (I have completed this part successfully.)
Then, pass this list of tables to an outer query or some kind of looping statement which will select MAX(DWH_CREATE_TIMESTAMP) and TABLE_NAME from all the tables in the list (200+) and union all the results together into one list.
The expected output is a 2 column table with all said tables with that TS field and the max of each value. Tables are constantly being created and dropped, so the point is to make everything totally dynamic where no TABLE_NAME values are ever hard-coded.
Any ideas for Vertica-specific ways to accomplish this without UDFs would be greatly appreciated.
Inner Query (working):
select distinct(table_name)
from columns
where column_name = 'DWH_CREATE_TIMESTAMP'
and table_name in (select DISTINCT(table_name) from all_tables where schema_name = 'PTG_DWH')
Outer Query (attempted - not working):
SELECT Max(DWH_CREATE_DATE) from
WITH table_name AS (
select distinct(table_name)
from columns
where column_name = 'DWH_CREATE_DATE' and table_name in (select DISTINCT(table_name) from all_tables where schema_name = 'PTG_DWH'))
SELECT MAX(DWH_CREATE_DATE)
FROM table_name
Thanks!!!
There is no way to do that in one SQL statement.
You can use the method below to get the max values of the timestamp columns from the ROS metadata:
select projections.anchor_table_name
     , vs_ros.colname
     , max(max_value)
from   vs_ros
     , vs_ros_min_max_values
     , storage_containers
     , projections
where  vs_ros.colname ilike 'timestamp'
and    vs_ros.salstorageid = storage_containers.sal_storage_id
and    vs_ros_min_max_values.rosid = vs_ros.rosid
and    storage_containers.projection_name = projections.projection_name
group by projections.anchor_table_name, vs_ros.colname
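An alternative two-step approach (my suggestion, not part of the answer above) is to let the catalog generate the UNION ALL query for you and then run the generated text:
-- Step 1: generate one SELECT per matching table from the catalog.
SELECT 'SELECT ''' || table_name || ''' AS table_name, '
       || 'MAX(DWH_CREATE_TIMESTAMP) AS last_create FROM PTG_DWH.'
       || table_name || ' UNION ALL'
FROM   v_catalog.columns
WHERE  column_name = 'DWH_CREATE_TIMESTAMP'
AND    table_schema = 'PTG_DWH';

-- Step 2: paste the generated statements, drop the trailing UNION ALL,
-- and execute the result as a second query.
Since tables are constantly created and dropped, the generation step has to be re-run each time, but no table name is ever hard-coded.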

Find the tables affected by user error & reverse the mistake

I'm working on an Oracle database where a user made an error: a number of person records were moved into a different "round". Each round has "episodes", so the wrong round means all the episode processing has been affected (episodes were skipped over). These users won't receive the mails they were supposed to receive as a result of the missed "episodes".
I have a query put together that identifies all the records that have been mistakenly updated. I need a way to modify the query to help find all tables that have rows wrongly moved into "round 2".
(All the tables I need to identify are ones featuring the "round_no" value.)
EDIT: There are over 70 tables with a "ROUND_NO" column; I need to identify only the ones containing these person records.
I also need to then take this data and return it to round 1, from the incorrect round 2.
Here is the query that identifies persons that have been "skipped" into round 2 in error:
SELECT p.person_id
     , p.name
     , ep2.open_date
     , ( SELECT ep1.open_date
         FROM   Person_ep ep1
         WHERE  ep1.person_id = ep2.person_id
         AND    ep1.round_no = 1
       ) r1epiopen /* Round 1 episode open date */
FROM   person p
JOIN   region r
ON     r.region_code = p.region_code
AND    r.location_id = 50
JOIN   Person_ep ep2
ON     ep2.person_id = p.person_id
AND    ep2.round_no = 2
ORDER  BY p.person_id
Using SQL Developer 3.2.20.09 on an Oracle 11G RDBMS.
Sorry to see this post so late... Hope it's not too late...
I suppose you are using Oracle 10+, and that you know approximately the hour of the crime (!).
I see 2 possibilities:
1) Use the Log Miner to review the executed SQL: http://docs.oracle.com/cd/B19306_01/server.102/b14215/logminer.htm
2) Use a flashback query to review the data of a table in the past. But for this one you need to test it on every suspected table (70+) :( http://docs.oracle.com/cd/E11882_01/appdev.112/e41502/adfns_flashback.htm#ADFNS01001
On a suspected table you could run this kind of SQL to see if updates occurred in the timeframe:
SELECT versions_startscn, versions_starttime,
versions_endscn, versions_endtime,
versions_xid, versions_operation,
description
FROM my_table
VERSIONS BETWEEN TIMESTAMP TO_TIMESTAMP('2014-01-29 14:59:08', 'YYYY-MM-DD HH24:MI:SS')
AND TO_TIMESTAMP('2014-01-29 14:59:36', 'YYYY-MM-DD HH24:MI:SS')
WHERE id = 1;
I have no practical experience using LogMiner, but I think it would be the best solution, especially if you have archive logging activated :D
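For reference, a hedged sketch of a minimal LogMiner session (the time window echoes the flashback example above, the schema name is a placeholder, and CONTINUOUS_MINE assumes the archived logs for that window still exist):
BEGIN
    DBMS_LOGMNR.START_LOGMNR(
        STARTTIME => TO_DATE('2014-01-29 14:00:00', 'YYYY-MM-DD HH24:MI:SS'),
        ENDTIME   => TO_DATE('2014-01-29 15:00:00', 'YYYY-MM-DD HH24:MI:SS'),
        OPTIONS   => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG
                   + DBMS_LOGMNR.CONTINUOUS_MINE);
END;
/

-- Review what was updated, and by which SQL, in that window.
SELECT scn, timestamp, seg_name, sql_redo, sql_undo
FROM   v$logmnr_contents
WHERE  operation = 'UPDATE'
AND    seg_owner = 'APP_OWNER';   -- placeholder schema name

EXEC DBMS_LOGMNR.END_LOGMNR;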
You can access the data values of the affected table before the update (if you know the time of the update) using a query like this one:
SELECT COUNT(*) FROM myTable AS OF TIMESTAMP TO_TIMESTAMP('2014-01-29 13:34:12', 'YYYY-MM-DD HH24:MI:SS');
Of course, the data will be available only if it is still retained in the undo tablespace.
You could then create a temp table with data before the update:
create table tempTableA as SELECT * FROM myTable AS OF TIMESTAMP TO_TIMESTAMP('2014-01-29 13:34:12', 'YYYY-MM-DD HH24:MI:SS');
Then update your table with the values coming from tempTableA.
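The final step might look like this (a sketch assuming, per the question, that ROUND_NO is the column to restore and that PERSON_ID identifies the affected rows):
UPDATE myTable t
SET    t.round_no = (SELECT a.round_no
                     FROM   tempTableA a
                     WHERE  a.person_id = t.person_id)
WHERE  EXISTS (SELECT 1
               FROM   tempTableA a
               WHERE  a.person_id = t.person_id
               AND    a.round_no <> t.round_no);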
If you want to find all tables with a "round_no" column, you should probably use this query (note that unquoted column names are stored in upper case in the data dictionary):
select table_name from all_tab_columns where column_name = 'ROUND_NO';
If you want to get only the tables you can update:
SELECT table_name
FROM user_tab_columns c, user_tables t
WHERE c.column_name = 'ROUND_NO'
AND t.table_name = c.table_name;
should work,
or, for the purists:
SELECT table_name
FROM user_tab_columns c
JOIN user_tables t ON t.table_name = c.table_name
WHERE c.column_name = 'ROUND_NO';