Using MS SQL 2012
I want to do something like
select a, b, c, a+b+c d
However, a, b and c are complex computed columns. Let's take a simple example:
select case when x > 4 then 4 else x end a,
( select count(*) somethingElse) b,
a + b c
order by c
I hope that makes sense.
You can use a nested query or a common table expression (CTE) for that. The CTE syntax is slightly cleaner - here it is:
WITH CTE (a, b)
AS
(
select
case when x > 4 then 4 else x end a,
count(*) b
from my_table
)
SELECT
a, b, (a+b) as c
FROM CTE
ORDER BY c
I would probably do this:
SELECT
sub.a,
sub.b,
(sub.a + sub.b) as c
FROM
(
select
case when x > 4 then 4 else x end a,
(select count(*) somethingElse) b
FROM MyTable
) sub
ORDER BY c
The easiest way is to do this:
select a,b,c,a+b+c d
from (select <whatever your calcs are for a,b,c>) x
order by c
That just creates a derived table consisting of your calculations for a, b, and c, and allows you to easily reference and sum them up!
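Applied to the example in the question, a rough sketch would look like this (assuming the columns come from my_table, with a placeholder standing in for the real computation behind b):
select a, b, a + b as c
from (
    select
        case when x > 4 then 4 else x end as a,
        (select count(*) from my_table) as b -- placeholder for the real subquery
    from my_table
) calc
order by c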
Given the following function:
CREATE
OR REPLACE FUNCTION myfunction(a float, b float, c float)
RETURNS float AS
$$
select sum(1/(1+exp(-(series - c)/4)))
from (
select (a + ((row_number()) over(order by 0))*1) series
from table(generator(rowcount => 10000)) x
qualify series <= b
)
$$;
I get all the expected results when executing the following queries:
select
myfunction(1, 10, 1);
select
myfunction(1, 100, 1);
select
myfunction(1, 10, 1.1);
select
myfunction(0, 1, 89.87);
select
myfunction(0, 1, null);
However when I run the following query:
select
myfunction(a, b, c)
from
(
select
1 as a,
10 as b,
1.1 as c
union
select
0 as a,
1 as b,
null as c
);
I get an error:
"Unsupported subquery type cannot be evaluated".
While this query does work:
select
a, b, myfunction(a, b, c)
from
(
select
1 as a,
10 as b,
1 as c
union
select
1 as a,
100 as b,
1 as c
);
Why can't Snowflake handle null or decimal values in the 'c' column when I pass in multiple rows, while the individual calls weren't a problem?
And how can this function be rewritten to handle these cases?
SQL UDFs are converted to subqueries (for now), and if Snowflake cannot determine the data type returned from these subqueries, you get the "Unsupported subquery" error. The issue is not about decimals or null. The issue is that the A and C arguments (which are used inside the SUM()) contain different values across the input rows. For example, the following ones work:
select
myfunction(a, b, c )
from
(
select
1 as a,
1 as b,
1.1 as c
union
select
1 as a,
100 as b,
1.1 as c
);
select
myfunction(a, b, c )
from
(
select
1 as a,
1 as b,
null as c
union
select
1 as a,
100 as b,
null as c
);
You may hit these kinds of errors when you try to write complex functions with SQL UDFs. Sometimes rewriting them can help, but I don't see a way for this one. As a workaround, you may re-write it in JavaScript because JS UDFs are not converted to subqueries:
CREATE
OR REPLACE FUNCTION myfunction(a float, b float, c float)
RETURNS float
language javascript AS
$$
var res = 0.0;
for (let series = A + 1; series <= B; series++) {
res += (1/(1+Math.exp(-(series - C)/4)));
}
return res;
$$;
According to my tests, the above UDF returns the same result as the SQL version, and it doesn't hit "Unsupported subquery" error.
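For example, a multi-row query with differing a and c values (the pattern that failed with the SQL UDF) is expected to work against the JavaScript version, using the sample values from the question:
select
    myfunction(a, b, c)
from
    (
        select 1 as a, 10 as b, 1.1 as c
        union
        select 0 as a, 1 as b, 89.87 as c
    );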
Weird one. Can you try selecting from the subquery and running it through a cast?
Like this:
select a, b, c
from
(select cast(a as float) as a, cast(b as float) as b, cast(c as float) as c from
(
select
1 as a,
10 as b,
1 as c
union
select
1 as a,
100 as b,
null as c
) as t) as x
In the end, implementing it as a Python function also allowed handling all the edge cases:
CREATE
OR REPLACE FUNCTION myfunction(a float, b float, c float)
returns float
language python
runtime_version=3.8
handler='compute'
as
$$
def compute(a, b, c):
    import math
    if b < a:
        return None
    if c is None:
        return None
    res = []
    step_size = 1
    it = a
    while it < b:
        res.append(it)
        it += step_size
    res = sum([1/(1+math.exp(-1*(i-c)/4)) for i in res])
    return res
$$;
My code takes in a parameter ${ID}$ (a string), and based on what ID evaluates to, I want to choose a different table to use. I guess I can't use a CASE inside a FROM clause. Some example code looks like:
select *
from ${ID}$_charges.transaction_charge
where execution_date = '2011-03-22'
So if ID is 'N' then I want to use the transaction_charge table so the statement resolves to N_charges.transaction_charge
However if ID is 'B' or 'P' then I want to use a different table called conformity_charge and the statement would evaluate to B_charges.conformity_charge or P_charges.conformity_charge
How can I write this statement?
If you have a low number of possible tables to target, the closest you can get, apart from dynamic SQL (sketched at the end of this answer), is:
NOTE: Depending on the capabilities of your database engine and the size of your tables, there might be performance penalties that may or may not matter.
SELECT a, b, c
FROM (
SELECT 'N' as TableName, a, b, c
FROM N_charges.transaction_charge
UNION ALL
SELECT 'P' as TableName, a, b, c
FROM P_charges.transaction_charge
UNION ALL
SELECT 'B' as TableName, a, b, c
FROM B_charges.transaction_charge
) t
WHERE TableName = '${ID}$'
Another variation:
SELECT a, b, c
FROM N_charges.transaction_charge
WHERE 'N' = '${ID}$'
UNION ALL
SELECT a, b, c
FROM P_charges.transaction_charge
WHERE 'P' = '${ID}$'
UNION ALL
SELECT a, b, c
FROM B_charges.transaction_charge
WHERE 'B' = '${ID}$'
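With dynamic SQL itself, a minimal sketch in SQL Server syntax (the question's dialect isn't stated, so treat this as an assumption; it also presumes ${ID}$ expands to a plain string before the statement runs) could be:
DECLARE @id varchar(10) = '${ID}$';
DECLARE @sql nvarchar(max);

SET @sql = N'SELECT * FROM ' + QUOTENAME(@id + '_charges') + N'.'
    + CASE WHEN @id = 'N' THEN N'transaction_charge' ELSE N'conformity_charge' END
    + N' WHERE execution_date = ''2011-03-22''';

EXEC sp_executesql @sql;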
I want to get the below result:
Source table:
Cnt A B
4 ABC YU/FGH
5 ABC YU/DFE
5 ABC KL
2 LKP BN/ER
4 JK RE
Result:
Cnt A B
9 ABC YU
5 ABC KL
2 LKP BN
4 JK RE
Here I want the count grouped by 'B', and I want to display the 'B' value only up to the special character (/).
Basically, you will have to filter out all the characters after the "/" symbol and then apply a SUM and a GROUP BY. You can see this below. The inner query filters out the unwanted string and the outer query does the SUM and the GROUP BY:
SELECT SUM(t.Cnt), t.A, t.B
FROM (
SELECT Cnt,
A,
CASE
WHEN CHARINDEX('/', B) > 0 THEN SUBSTRING(B, 0, CHARINDEX('/', B))
ELSE B
END AS B
FROM #Tab
) t
GROUP BY t.A, t.B
ORDER BY t.A
You can see this working here -> http://rextester.com/IQJ79191
Hope this helps!!!
You can get the string up to the '/' by using SUBSTRING and CHARINDEX:
select
    sum(Cnt),
    A,
    substring(B, 1, charindex('/', B + '/') - 1)
from source_table
group by A, substring(B, 1, charindex('/', B + '/') - 1);
Solution for Oracle: substr(B, 1, instr(B || '/', '/') - 1) B (appending '/' handles values without a slash).
Put this expression in both the SELECT and the GROUP BY.
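As a sketch of the full Oracle query (assuming the source table is called source_table):
select sum(Cnt) as Cnt,
       A,
       substr(B, 1, instr(B || '/', '/') - 1) as B
from source_table
group by A,
         substr(B, 1, instr(B || '/', '/') - 1);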
I can suggest a query like this:
select
sum(Cnt) Cnt,
A,
left(B, charindex('/',B+'/',0)-1) B -- appending '/' handles values without a slash
from
t
group by
A,
left(B, charindex('/',B+'/',0)-1);
Using string and CHARINDEX functions:
;WITH SourceTable(Cnt,A,B) AS
(
SELECT 4,'ABC','YU/FGH' UNION ALL
SELECT 5,'ABC','YU/DFE' UNION ALL
SELECT 5,'ABC','KL' UNION ALL
SELECT 2,'LKP','BN/ER' UNION ALL
SELECT 4,'JK','RE'
)
SELECT SUM(Cnt) AS Cnt,A,CASE WHEN CHARINDEX('/',B) = 0 THEN B
ELSE SUBSTRING(B,0,CHARINDEX('/',B)) END AS [B] FROM SourceTable
GROUP BY A,CASE WHEN CHARINDEX('/',B) = 0 THEN B
ELSE SUBSTRING(B,0,CHARINDEX('/',B)) END
ORDER BY Cnt DESC
Try this query:
SELECT SUM(Cnt) AS [COUNT]
    ,A
    ,CASE
        WHEN CHARINDEX('/', B) > 0
            THEN SUBSTRING(B, 1, (CHARINDEX('/', B) - 1))
        ELSE B
        END AS B
FROM tblSample
GROUP BY A
    ,CASE
        WHEN CHARINDEX('/', B) > 0
            THEN SUBSTRING(B, 1, (CHARINDEX('/', B) - 1))
        ELSE B
        END
ORDER BY A
This question takes my previous problem further: (most recent (max) date for every id)
Suppose I have a table
which has
a = id
b = date
c = NewestDate
d = someValues -- ex 0.3
e = currentValue --this is what i need to create
For every a (id) I have b, c and d.
I want to create 'e' so that, for every 'a', the row whose date 'b' matches 'c' is found and the corresponding value from 'd' is put into 'e'.
The column c was created like this:
SELECT a,
b,
max(b) OVER (PARTITION BY a) AS c
FROM myTable
ORDER BY a,b
example:
a b c d e
1 2009.02.15 2015.03.20 0.432 0.122 --e taken from the row below
1 2015.03.20 2015.03.20 0.122 0.122 --the value of e
1 2014.04.02 2015.03.20 0.98 0.122 --e taken from the row above
2 2010.04.12 2014.07.01 0.467 0.578
2 2014.07.01 2014.07.01 0.578 0.578
.
.
Does anyone have a solution for this?
I tried like this:
select *
into #myTable
from myTable
select
t1.a,
t1.b,
t1.c,
t1.d,
t.d as e
from #myTable t
left join myTable t1 on t.c = t1.b and t.a = t1.a
order by a, b
I think you can just use conditional aggregation with a window function:
SELECT t.*,
MAX(CASE WHEN b = c THEN d END) OVER (PARTITION BY a) as e
FROM (SELECT a, b, max(b) OVER (PARTITION BY a) AS c
FROM myTable
) t
ORDER BY a, b
I have a select query like,
select a, b, c, d, e, f, g, h, i, j from sample_table
I need to have a distinct set of records from this table, so I put:
select distinct a, b, c, d, e, f, g, h, i, j from sample_table
But still, duplicate rows are coming in the result set, as i and j differ only by a minor variation like result, result1, RESULT. I need to ignore this minor difference but still have i and j in the result set.
How do I select distinct combinations of a, b, c, d, e, f, g, h and also have i and j in the result set?
You can do this using the analytic functions:
select a, b, c, d, e, f, g, h, i, j
from (select st.*,
row_number() over (partition by a, b, c, d, e, f, g, h order by a) as seqnum
from sample_table
) st
where seqnum = 1;
This ensures that the values of i and j come from the same row.
SELECT DISTINCT removes duplicate rows.
If you consider certain values to be "the same" either within a column or between two columns, then before rows containing them can be seen as duplicates by the DBMS you have to make them actually the same.
Within a column you can convert each possible variation to one particular variation. This is called converting to a canonical or normal form.
select distinct ...,
    case i when 'result1' then 'result'
           when 'RESULT'  then 'result'
           when 'result'  then 'result'
           when 'dOg'     then 'dog'
           ...
    end as i,
    convert_to_upper_case(j) as j,
    correct_spelling(k) as k
from sample_table
If you want to consider values to be the same across columns then you can convert them in that way and compare the canonical forms. Or you can write an expression that compares them and outputs a single value for both columns. This is called an equivalence relation.
select distinct ...,g,h,i, i as j
from sample_table
where ...
AND my_canonical_form(g) = my_canonical_form(h)
AND equivalent_according_to_me(i,j)
That can be used in generating sample_table if j wasn't really supposed to be different from i there:
select distinct ..., t.i, t.i as j -- no u.j
from t,u where ... and close_enough(t.i,u.j)
The idea is that canonical_form(x) = canonical_form(y) exactly when equivalent(x,y).
You can either keep both i and j columns or drop one if you want.
Maybe you can try:
select distinct a, b, c, d, e, f, g, h, min(upper(i)) i, min(upper(j)) j from sample_table
group by a, b, c, d, e, f, g, h;
You can consider using min or max combined with substring, upper, lower or whichever suits your requirement.
As alex poole has pointed out, you can also consider having a timestamp column, so that the latest or the earliest record can be picked for the result set.
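A minimal sketch of that idea, assuming a hypothetical created_at timestamp column on sample_table:
select a, b, c, d, e, f, g, h, i, j
from (select st.*,
             row_number() over (partition by a, b, c, d, e, f, g, h
                                order by created_at desc) as seqnum
      from sample_table st
     ) st
where seqnum = 1;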
In addition to Nishanthi Grashia's answer: if you have to show all the values of the differing columns, you can use LISTAGG as an aggregate function:
select same1,same2,same3,same4,same5,same6,same7,same8,
listagg(diff1, ',') within group (order by diff1)
, listagg(diff2, ',') within group (order by diff2) from (
select 1 as same1, 2 as same2 ,3 as same3, 4 as same4,5 as same5,6 as same6,7 as same7, 8 as same8,'m' as diff1,'n' as diff2 from dual
union
select 1,2,3,4,5,6,7,8,'n','o' from dual
union
select 2,3,4,5,6,7,8,9,'p','q' from dual
union
select 2,3,4,5,6,7,8,9,'p','x' from dual
) qry1 group by same1,same2,same3,same4,same5,same6,same7,same8