Oracle SQL - Nested sub query not joining to main query

I'm having real trouble getting the subquery to join to the main query in the WHERE clause.
Query that works when explicitly defining the field:
SELECT m.field1, m.field2, m.field3, m.myfield, etc etc,
       (SELECT aa.daysfromprev12
          FROM (SELECT subsubm.myfield,
                       MAX(subsubm.date_to) - (SELECT MAX(add_months(to_date(subsubsubm.date_from), -12))
                                                 FROM maintable subsubsubm
                                                WHERE subsubsubm.myfield = subsubm.myfield) AS daysfromprev12,
                       row_number() OVER (ORDER BY (MAX(subsubm.date_to) - (SELECT MAX(add_months(to_date(subsubsubm.date_from), -12))
                                                                              FROM maintable subsubsubm
                                                                             WHERE subsubsubm.myfield = subsubm.myfield)) DESC) rn
                  FROM maintable subsubm
                 WHERE subsubm.myfield = '123456'
                 GROUP BY subsubm.myfield, subsubm.absence_id) aa
         WHERE aa.myfield = '123456' AND aa.rn = 2) AS dayss
  FROM maintable m
 WHERE m.myfield = '123456'
How can I replace subsubm.myfield = '123456' and aa.myfield = '123456' with references to m.myfield in the main query?

There are way too many calls to the same table in your SQL statement there. If I've managed to unwind your query ok, I think it can be replaced with the following:
SELECT field1,
       field2,
       field3,
       field4,
       myfield,
       MAX(CASE WHEN rn = 2 THEN days END) OVER (PARTITION BY myfield) days
FROM   (SELECT field1,
               field2,
               field3,
               field4,
               myfield,
               daysfromprev12 AS days,
               row_number() OVER (ORDER BY daysfromprev12 DESC) rn
        FROM   (SELECT field1,
                       field2,
                       field3,
                       field4,
                       myfield,
                       MAX(date_to) OVER (PARTITION BY myfield, absence_id) -
                         MAX(add_months(TRUNC(date_from), -12)) OVER (PARTITION BY myfield) daysfromprev12
                FROM   maintable
                WHERE  myfield = '123456'));
N.B. Untested, since you haven't provided any sample data to work with. Also, you were doing to_date(date_from), which I have converted to trunc(date_from) on the assumption that date_from is of DATE datatype and you wanted to get rid of the time part. If it's a string, you'd also need to supply the date format mask in to_date() to avoid an unnecessary implicit conversion taking place.
ETA: If you're going to go with this approach, you would probably find it easier to read/write/maintain if you use subquery factoring (aka common table expressions aka CTE) to separate out your subqueries. E.g. the above query could be rewritten as:
with get_initial_prev12days as
  (SELECT field1,
          field2,
          field3,
          field4,
          myfield,
          MAX(date_to) OVER (PARTITION BY myfield, absence_id) -
            MAX(add_months(TRUNC(date_from), -12)) OVER (PARTITION BY myfield) daysfromprev12
   FROM   maintable
   WHERE  myfield = '123456'),
interim_results as
  (SELECT field1,
          field2,
          field3,
          field4,
          myfield,
          daysfromprev12 AS days,
          row_number() OVER (ORDER BY daysfromprev12 DESC) rn
   FROM   get_initial_prev12days)
select field1,
       field2,
       field3,
       field4,
       myfield,
       MAX(CASE WHEN rn = 2 THEN days end) OVER (PARTITION BY myfield) days
from   interim_results;
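As a sanity check of the CTE + analytic shape above, here is a minimal sketch using Python's stdlib sqlite3 (SQLite supports window functions since 3.25). The Oracle-specific pieces (ADD_MONTHS, DATE arithmetic) are replaced with a plain numeric days column, and the table contents are invented toy data, not the asker's real schema:

```python
import sqlite3

# Toy stand-in for maintable: one numeric value per absence, no date math.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE maintable (myfield TEXT, absence_id INTEGER, days INTEGER);
INSERT INTO maintable VALUES
  ('123456', 1, 30),
  ('123456', 2, 50),
  ('123456', 3, 40);
""")

# Rank the per-absence values in a CTE, then broadcast the rn = 2 value to
# every row with MAX(CASE ...) OVER (PARTITION BY myfield) -- the same shape
# as the Oracle rewrite above.
rows = conn.execute("""
WITH ranked AS (
  SELECT myfield,
         absence_id,
         days,
         ROW_NUMBER() OVER (ORDER BY days DESC) AS rn
    FROM maintable
   WHERE myfield = '123456'
)
SELECT myfield,
       absence_id,
       MAX(CASE WHEN rn = 2 THEN days END) OVER (PARTITION BY myfield) AS second_days
  FROM ranked
""").fetchall()

for r in rows:
    print(r)  # every row carries the second-highest value, 40
```

The point of the pattern is that the ranking happens once, in one pass over the table, and the "second highest" value is then attached to every row without a correlated subquery.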

Related

Performance difference between named window vs explicit over clause in the select list

Is there any performance gain between defining the over clause of an analytic function in the select list vs defining it as a named window?
For example, would there be any performance difference between the two queries down below?
The second query explicitly states that calculated fields 1 and 2 work over the same window.
I am working with BigQuery standard SQL (but also happy to hear about other SQL engines)
Query 1:
select
field1, field2, field3,
max(field4) over (partition by field1, field2) as calculated_field_1,
max(field5) over (partition by field1, field2) as calculated_field_2,
max(field6) over (partition by field1) as calculated_field_3,
from my_table
Query 2:
select
field1, field2, field3,
max(field4) over w as calculated_field_1,
max(field5) over w as calculated_field_2,
max(field6) over (partition by field1) as calculated_field_3,
from my_table
window w as (partition by field1, field2)
I don't think there would be any difference. The scanning is the same, and the memory calculation for the window is the same in both queries; the named window is just a syntactic convenience for the inline OVER clause.
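The equivalence is easy to check on any engine with window support. Here is a minimal sketch using Python's sqlite3 (SQLite >= 3.25 also supports the WINDOW clause), with a toy table standing in for my_table and fewer columns than the BigQuery example:

```python
import sqlite3

# Compare an inline OVER clause against a named window on the same data.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE my_table (field1 INT, field2 INT, field4 INT, field5 INT);
INSERT INTO my_table VALUES (1, 1, 10, 100), (1, 1, 20, 50), (1, 2, 5, 7);
""")

inline = conn.execute("""
SELECT field1, field2,
       MAX(field4) OVER (PARTITION BY field1, field2) AS c1,
       MAX(field5) OVER (PARTITION BY field1, field2) AS c2
  FROM my_table
""").fetchall()

named = conn.execute("""
SELECT field1, field2,
       MAX(field4) OVER w AS c1,
       MAX(field5) OVER w AS c2
  FROM my_table
WINDOW w AS (PARTITION BY field1, field2)
""").fetchall()

print(sorted(inline) == sorted(named))  # True: same rows either way
```

This only demonstrates result equivalence, not performance; for BigQuery specifically you'd want to compare the two query plans, but since the named window is resolved at parse time there is nothing extra to execute.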

Select with a column that is not in the GROUP BY - SQL Server

I want to select a column that is not in the GROUP BY.
My code:
SELECT
dbo.func(field1, field2), field3
FROM
table
WHERE
field4 = 1224
GROUP BY
dbo.func(field1, field2), field3
HAVING
COUNT(id) > 1
And I want to select also the column id like this:
SELECT
id, dbo.func(field1, field2), field3
FROM
table
WHERE
field4 = 1224
GROUP BY
dbo.func(field1, field2), field3
HAVING
COUNT(id) > 1
I suspect that you want to apply a count restriction and then return all matching records from the original table, along with the output of the scalar function. One approach is to use COUNT as an analytic function with a partition corresponding to the columns in your original GROUP BY clause. The difference here is that we don't actually aggregate the original table.
WITH cte AS (
SELECT id, dbo.func(field1, field2) AS out, field3,
COUNT(id) OVER (PARTITION BY dbo.func(field1, field2), field3) cnt
FROM yourTable
WHERE field4 = 1224
)
SELECT id, out, field3
FROM cte
WHERE cnt > 1;
You could join back to the original table to retrieve the matching row(s) with id:
SELECT t.id
, filter.funresult
, t.field3
FROM table t
JOIN (
SELECT dbo.func(field1,field2) as funresult
, field3
FROM table
WHERE field4 = 1224
GROUP BY
dbo.func(field1,field2)
, field3
HAVING COUNT(id) > 1
) filter
ON filter.funresult = dbo.func(t.field1, t.field2)
AND filter.field3 = t.field3
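A quick way to convince yourself the two approaches above agree is to run both on toy data. The sketch below uses Python's sqlite3, with field1 || field2 standing in for dbo.func (SQLite has no T-SQL scalar functions) and invented rows, so treat it as an illustration of the shape, not of the asker's schema:

```python
import sqlite3

# id 1 and 2 form a duplicate group; id 3 is a singleton; id 4 fails the filter.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (id INT, field1 TEXT, field2 TEXT, field3 TEXT, field4 INT);
INSERT INTO t VALUES
  (1, 'a', 'x', 'p', 1224),
  (2, 'a', 'x', 'p', 1224),
  (3, 'b', 'y', 'q', 1224),
  (4, 'c', 'z', 'r', 9999);
""")

# Approach 1: analytic COUNT in a CTE, filter afterwards.
analytic = conn.execute("""
WITH cte AS (
  SELECT id, field1 || field2 AS funresult, field3,
         COUNT(id) OVER (PARTITION BY field1 || field2, field3) AS cnt
    FROM t
   WHERE field4 = 1224
)
SELECT id, funresult, field3 FROM cte WHERE cnt > 1
""").fetchall()

# Approach 2: GROUP BY ... HAVING in a derived table, joined back for id.
join_back = conn.execute("""
SELECT t.id, f.funresult, t.field3
  FROM t
  JOIN (SELECT field1 || field2 AS funresult, field3
          FROM t
         WHERE field4 = 1224
         GROUP BY field1 || field2, field3
        HAVING COUNT(id) > 1) f
    ON f.funresult = t.field1 || t.field2
   AND f.field3 = t.field3
""").fetchall()

print(sorted(analytic) == sorted(join_back))  # True: ids 1 and 2 in both
```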

Speed up count on distinct

My query returns the count of non-null values in each field.
SELECT COUNT(field1) AS field1, COUNT(field2) AS field2, COUNT(field3) AS field3
FROM (
SELECT field1, field2, field3
FROM table1, table2
WHERE table1.id=table2.idt1
ORDER BY table1.id ASC
LIMIT 10000
) AS rq
table1.id is the primary key of table1, and table2.idt1 is the corresponding foreign key in table2.
This query is working perfectly well, but if I need to return the DISTINCT volume of each field, like this
SELECT COUNT(DISTINCT(field1)) AS field1, COUNT(DISTINCT(field2)) AS field2, COUNT(DISTINCT(field3)) AS field3
FROM (
SELECT field1, field2, field3
FROM table1, table2
WHERE table1.id=table2.idt1
ORDER BY table1.id ASC
LIMIT 10000
) AS rq
Problems begin... The query works and does the job, but performance is of course much slower than without the DISTINCT clause.
Each field of table1 and table2 is indexed with a btree:
CREATE INDEX field1_index ON table1 USING btree (field1)
CREATE INDEX field2_index ON table1 USING btree (field2)
CREATE INDEX field3_index ON table2 USING btree (field3)
How can I speed up this DISTINCT count ? Maybe with better indexes ?
Thanks for help
I've tried something similar on a big table (12 million rows).
Without the DISTINCT it takes 10 seconds.
With the DISTINCT, like your code, it takes 19 seconds.
Putting the DISTINCT inside the subquery takes 11 seconds:
SELECT COUNT(field1) AS field1, COUNT(field2) AS field2, COUNT(field3) AS field3
FROM (
SELECT DISTINCT field1, field2, field3
FROM table1, table2
WHERE table1.id=table2.idt1
ORDER BY table1.id ASC
LIMIT 10000
) AS rq
One other thing: if you only want to filter out NULL data, you can do that in the WHERE clause instead of using DISTINCT.
Postgres does not optimize COUNT(DISTINCT) very well. You have multiple such expressions, which makes it a bit harder. I am going to suggest using window functions and conditional aggregation:
SELECT SUM(CASE WHEN seqnum_1 = 1 THEN 1 ELSE 0 END) as field1,
SUM(CASE WHEN seqnum_2 = 1 THEN 1 ELSE 0 END) as field2,
SUM(CASE WHEN seqnum_3 = 1 THEN 1 ELSE 0 END) as field3
FROM (SELECT field1, field2, field3,
ROW_NUMBER() OVER (PARTITION BY field1 ORDER BY field1) as seqnum_1,
ROW_NUMBER() OVER (PARTITION BY field2 ORDER BY field2) as seqnum_2,
ROW_NUMBER() OVER (PARTITION BY field3 ORDER BY field3) as seqnum_3
FROM table1 JOIN
table2
ON table1.id=table2.idt1
ORDER BY table1.id ASC
LIMIT 10000
) rq
EDIT:
It occurs to me that the row_number() might be processed before the limit. Try this version:
SELECT SUM(CASE WHEN seqnum_1 = 1 THEN 1 ELSE 0 END) as field1,
SUM(CASE WHEN seqnum_2 = 1 THEN 1 ELSE 0 END) as field2,
SUM(CASE WHEN seqnum_3 = 1 THEN 1 ELSE 0 END) as field3
FROM (SELECT field1, field2, field3,
ROW_NUMBER() OVER (PARTITION BY field1 ORDER BY field1) as seqnum_1,
ROW_NUMBER() OVER (PARTITION BY field2 ORDER BY field2) as seqnum_2,
ROW_NUMBER() OVER (PARTITION BY field3 ORDER BY field3) as seqnum_3
FROM (SELECT field1, field2, field3
FROM table1 JOIN
table2
ON table1.id = table2.idt1
ORDER BY table1.id ASC
LIMIT 10000
) t
) rq
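The core claim (that counting rows where seqnum = 1 per partition equals COUNT(DISTINCT)) can be verified on any window-capable engine. A minimal sketch with Python's sqlite3 on toy data, stripped down to two fields and no join:

```python
import sqlite3

# field1 has distinct values {1, 2, 3}; field2 has {10, 20, 30}.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (field1 INT, field2 INT);
INSERT INTO t VALUES (1, 10), (1, 20), (2, 20), (2, 20), (3, 30);
""")

# ROW_NUMBER() is 1 exactly once per partition, so summing those 1s
# counts the distinct values of each field in a single pass.
trick = conn.execute("""
SELECT SUM(CASE WHEN seqnum_1 = 1 THEN 1 ELSE 0 END) AS f1,
       SUM(CASE WHEN seqnum_2 = 1 THEN 1 ELSE 0 END) AS f2
  FROM (SELECT ROW_NUMBER() OVER (PARTITION BY field1) AS seqnum_1,
               ROW_NUMBER() OVER (PARTITION BY field2) AS seqnum_2
          FROM t)
""").fetchone()

plain = conn.execute(
    "SELECT COUNT(DISTINCT field1), COUNT(DISTINCT field2) FROM t"
).fetchone()

print(trick, plain)  # (3, 3) both ways
```

Whether the window version is actually faster depends on the planner; on Postgres the win comes from replacing several independent sort/dedup passes with window scans over one sorted input.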

ORACLE Select and group by excluding one field

I have a very simple query (on Oracle 11g) to select 3 fields:
select field1, field2, field3, count(*) from table
where...
group by field1, field2, field3
having count(*) > 10;
Now, what I need, is exclude "field3" from the "group by" since I only need field 1 and 2 to be grouped, but I also need field3 in the output.
As far as I know, all the fields in the select must also appear in the GROUP BY, so how can I handle that?
Thanks
Lucas
select t.field1, t.field2, t.field3, tc.Count
from table t
inner join (
select field1, field2, count(*) as Count
from table
where...
group by field1, field2
having count(*) > 10
) tc on t.field1 = tc.field1 and t.field2 = tc.field2
Use the analytical version of the "count" function:
select * from (
select field1, field2, field3, count(*) over(partition by field1, field2) mycounter
from table )
--simulate the having clause
where mycounter > 10;
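A toy check of this "simulate the HAVING clause" pattern, using Python's sqlite3 with invented data and the threshold lowered to > 1 to suit the small sample. The per-(field1, field2) count is attached to every row, so field3 survives without being grouped:

```python
import sqlite3

# Two rows share (field1, field2) = ('a', 'x') but differ in field3.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (field1 TEXT, field2 TEXT, field3 TEXT);
INSERT INTO t VALUES ('a', 'x', 'p'), ('a', 'x', 'q'), ('b', 'y', 'r');
""")

rows = conn.execute("""
SELECT * FROM (
  SELECT field1, field2, field3,
         COUNT(*) OVER (PARTITION BY field1, field2) AS mycounter
    FROM t)
 WHERE mycounter > 1
""").fetchall()

print(rows)  # both ('a', 'x') rows, each keeping its own field3
```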
If you don't group by field3 anymore, there can suddenly be different field3 per group. You must decide which one to show, e.g. the maximum:
select field1, field2, max(field3), count(*) from table
where...
group by field1, field2
having count(*) > 10;
The only way I know how to handle that is to first isolate the Field1 and Field2 data in a derived table, then link it back to the original table, adding in Field3.
Select Table2.Field1, Table2.Field2, Table1.Field3
From
(Select Field1, max(Field2) as Field2
From Table1
Group By Field1) Table2
Join Table1
On Table2.Field1 = Table1.Field1
And Table2.Field2 = Table1.Field2
Group By
Table2.Field1, Table2.Field2, Table1.Field3

Unable to get the right output from Oracle SQL

I have a table with field1, field2, field3, … and I need to count the number of occurrences of each field1 value, so that I return all records (field1, field2, field3, …) whose field1 value occurs 6 times or less in the table.
My SQL code is:
SELECT field1, field2, field3, count(field1) CNT
FROM myTable
WHERE trunc(date) = to_date('03/22/2011', 'mm/dd/yyyy')
GROUP BY field1
HAVING COUNT(field1) < 7;
The output that I am getting from the above code is that all records are returned from the table, which is not what I expected. Any help would be appreciated!
I think you need to use a subquery:
SELECT field1, field2, field3
FROM myTable
WHERE trunc(date) = to_date('03/22/2011', 'mm/dd/yyyy')
AND field1 in
(SELECT field1
FROM mytable
GROUP BY field1
HAVING COUNT(field1) < 7);
WITH tmp AS
(
SELECT field1, COUNT(1) as CountOfField1
FROM myTable
WHERE trunc(date) = to_date('03/22/2011', 'mm/dd/yyyy')
GROUP BY field1
HAVING COUNT(field1) < 7
)
SELECT mytable.field1, mytable.field2, mytable.field3, tmp.CountOfField1
FROM myTable
INNER JOIN tmp
ON myTable.Field1 = tmp.Field1
Yet another way to do it:
SELECT t.field1, t.field2, t.field3
FROM myTable t
WHERE trunc(t.date) = to_date('03/22/2011', 'mm/dd/yyyy')
AND EXISTS
( SELECT *
FROM mytable t2
WHERE t2.field1 = t.field1
AND trunc(t2.date) = to_date('03/22/2011', 'mm/dd/yyyy')
GROUP BY t2.field1
HAVING COUNT(t2.field1) < 7
)
;
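The IN-subquery and EXISTS rewrites should return the same rows. A toy check with Python's sqlite3, dropping the date filter since it's orthogonal to the counting logic; the data here is invented so that 'a' occurs fewer than 7 times and 'b' exactly 7:

```python
import sqlite3

# 'a' appears 2 times (kept, < 7); 'b' appears 7 times (excluded).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE myTable (field1 TEXT, field2 INT);
INSERT INTO myTable VALUES ('a', 1), ('a', 2), ('b', 1),
                           ('b', 2), ('b', 3), ('b', 4),
                           ('b', 5), ('b', 6), ('b', 7);
""")

in_rows = conn.execute("""
SELECT field1, field2 FROM myTable
 WHERE field1 IN (SELECT field1 FROM myTable
                   GROUP BY field1
                  HAVING COUNT(field1) < 7)
""").fetchall()

exists_rows = conn.execute("""
SELECT t.field1, t.field2 FROM myTable t
 WHERE EXISTS (SELECT 1 FROM myTable t2
                WHERE t2.field1 = t.field1
                GROUP BY t2.field1
               HAVING COUNT(t2.field1) < 7)
""").fetchall()

print(sorted(in_rows) == sorted(exists_rows))  # True: only the 'a' rows qualify
```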