Operating on multiple PostgreSQL CTEs - SQL

I'm very new to SQL, thank you for bearing with me.
I'm using PostgreSQL, trying to produce a simple summary statistic on the percentage of home team or away team victories in a dataset of soccer penalty shootouts. I've tried doing this with a few common table expressions, which I then try to perform some operations on (divide one by another) like so:
WITH home_team_wins (match) AS (
    SELECT match
    FROM penalties
    WHERE neutral_stadium = 0
      AND attacker_home = 1
      AND match_winner = 1
    GROUP BY match
),
away_team_wins (match) AS (
    SELECT match
    FROM penalties
    WHERE neutral_stadium = 0
      AND attacker_home = 0
      AND match_winner = 1
),
num_home_wins (match) AS (
    SELECT COUNT(DISTINCT match)
    FROM home_team_wins
),
num_away_wins (match) AS (
    SELECT COUNT(DISTINCT match)
    FROM away_team_wins
),
total_matches (match_count) AS (
    SELECT COUNT(DISTINCT match)
    FROM penalties
    WHERE neutral_stadium = 0
)
SELECT num_home_wins.match / total_matches.match_count, num_away_wins.match / total_matches.match_count
FROM num_home_wins, num_away_wins, total_matches;
I'm confused by the results, and don't fully understand what's happening beneath the hood. I expect to see a couple decimal values (in this case, 0.485 and 0.515), but instead both results come out to zero:
[screenshot: both results show as 0]
If I don't operate on the CTE variables and simply select them, I see the expected numbers:
SELECT num_home_wins.match, num_away_wins.match, total_matches.match_count
FROM num_home_wins, num_away_wins, total_matches;
[screenshot: the expected match counts]
Obviously, dividing these values shouldn't equal zero. Why does operating on them produce this weird result?
I won't be surprised if there's a more efficient way to do this, which will also be helpful to know, but the main question above still stands. Thank you!

Try something like this to make your query more compact:
SELECT COUNT(DISTINCT match) FILTER (WHERE attacker_home = 1 AND match_winner = 1)::numeric / COUNT(DISTINCT match)
     , COUNT(DISTINCT match) FILTER (WHERE attacker_home = 0 AND match_winner = 1)::numeric / COUNT(DISTINCT match)
FROM penalties
WHERE neutral_stadium = 0;
And as stated by @klin, you must cast to numeric to avoid getting zero from integer division.
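The underlying issue is that / between two integers in PostgreSQL is integer division, so the fraction is truncated to 0 before you ever see it. Here is a minimal sketch of the fix applied to your original query, keeping the CTEs exactly as written and only casting in the final SELECT (the pct aliases are just for illustration):
-- integer division truncates:
SELECT 1 / 2;             -- 0
SELECT 1::numeric / 2;    -- 0.50 (numeric)

-- original final SELECT, with one operand cast per division:
SELECT num_home_wins.match::numeric / total_matches.match_count AS home_win_pct,
       num_away_wins.match::numeric / total_matches.match_count AS away_win_pct
FROM num_home_wins, num_away_wins, total_matches;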

Related

SQL Oracle: Trying to pull a count with AND operators, New and needs experienced eyes

I am new to SQL and have had pretty good luck figuring things out thus far, but I am missing something in this query.
The question is how to return a distinct count from two columns, using another column as the criterion where its value is greater than 0.
I have tried IF and AND operators (my current query returns 0, not an error, and it works when using only one .shp criterion):
select count(distinct ti.TO_ADDRESS)
from ti
where ti.input_id = 'xxx_029_01z_c_zzzzbab_ecrm.shp'
  and ti.input_id = 'xxx_030_01z_c_zzzzbab_ecrm.shp'
  and ti.OPENED > 0;
Thanks so much!!
I think you want two levels of aggregation - a single row can never have two different input_id values at once, which is why your AND version matches nothing and returns 0:
select count(*)
from (select ti.TO_ADDRESS
      from ti
      where ti.input_id in ('xxx_029_01z_c_zzzzbab_ecrm.shp', 'xxx_030_01z_c_zzzzbab_ecrm.shp') and
            ti.OPENED > 0
      group by ti.TO_ADDRESS
      having count(distinct ti.input_id) = 2  -- has both of them
     ) ti;
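If you prefer to keep each file name visible as its own condition, an equivalent sketch using conditional aggregation (same assumed table and column names as the question, Oracle syntax) would be:
select count(*)
from (select ti.TO_ADDRESS
      from ti
      where ti.OPENED > 0
      group by ti.TO_ADDRESS
      having sum(case when ti.input_id = 'xxx_029_01z_c_zzzzbab_ecrm.shp' then 1 else 0 end) > 0
         and sum(case when ti.input_id = 'xxx_030_01z_c_zzzzbab_ecrm.shp' then 1 else 0 end) > 0
     ) addrs;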

SQL Server - Multiplying row values for a given column value [duplicate]

I'm looking for something like SELECT PRODUCT(table.price) FROM table GROUP BY table.sale, similar to how SUM works.
Have I missed something on the documentation, or is there really no PRODUCT function?
If so, why not?
Note: I looked for the function in Postgres, MySQL and MSSQL and found none, so I assumed SQL in general does not support it.
For MSSQL you can use this. It can be adapted for other platforms: it's just maths and aggregates on logarithms.
SELECT
    GrpID,
    CASE
        WHEN MinVal = 0 THEN 0
        WHEN Neg % 2 = 1 THEN -1 * EXP(ABSMult)
        ELSE EXP(ABSMult)
    END
FROM
    (
    SELECT
        GrpID,
        --log of +ve row values
        SUM(LOG(ABS(NULLIF(Value, 0)))) AS ABSMult,
        --count of -ve values. Even = +ve result.
        SUM(SIGN(CASE WHEN Value < 0 THEN 1 ELSE 0 END)) AS Neg,
        --anything * zero = zero
        MIN(ABS(Value)) AS MinVal
    FROM
        Mytable
    GROUP BY
        GrpID
    ) foo
Taken from my answer here: SQL Server Query - groupwise multiplication
I don't know why there isn't one, but (taking more care over negative numbers) you can use logs and exponents to do:
select exp (sum (ln (table.price))) from table ...
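That one-liner falls over on zeros and negative values, though. Here is a sketch of a guarded version in PostgreSQL syntax; the table sales(sale, price) is a hypothetical stand-in for the question's table:
SELECT sale,
       CASE
           WHEN bool_or(price = 0) THEN 0                        -- any zero makes the whole product zero
           WHEN sum((price < 0)::int) % 2 = 1
               THEN -exp(sum(ln(abs(nullif(price, 0)))))         -- odd count of negatives: flip the sign
           ELSE  exp(sum(ln(abs(nullif(price, 0)))))
       END AS product
FROM sales
GROUP BY sale;
Because it goes through ln()/exp(), the result is an approximation, so exact integer products may need rounding.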
There is no PRODUCT set function in the SQL Standard. It would appear to be a worthy candidate, though (unlike, say, a CONCATENATE set function, which is not a good fit for SQL - e.g. the resulting data type would involve multivalues and pose a problem as regards first normal form).
The SQL Standards aim to consolidate functionality across SQL products circa 1990 and to provide 'thought leadership' on future development. In short, they document what SQL does and what SQL should do. The absence of a PRODUCT set function suggests that in 1990 no vendor thought it worthy of inclusion, and there has been no academic interest in introducing it into the Standard since.
Of course, vendors have always sought to add their own functionality, these days usually as extensions to the Standards rather than tangentially. I don't recall seeing a PRODUCT set function (or even demand for one) in any of the SQL products I've used.
In any case, the workaround is fairly simple using the LOG and EXP scalar functions (plus some logic to handle negatives) with the SUM set function; see @gbn's answer for some sample code. I've never needed to do this in a business application, though.
In conclusion, my best guess is that there is no demand from SQL end users for a PRODUCT set function; further, that anyone with an academic interest would probably find the workaround acceptable (i.e. would not value the syntactic sugar a PRODUCT set function would provide).
Out of interest, there is indeed demand in SQL Server Land for new set functions but for those of the window function variety (and Standard SQL, too). For more details, including how to get involved in further driving demand, see Itzik Ben-Gan's blog.
You can perform a product aggregate function, but you have to do the maths yourself, like this...
SELECT
Exp(Sum(IIf(Abs([Num])=0,0,Log(Abs([Num])))))*IIf(Min(Abs([Num]))=0,0,1)*(1-2*(Sum(IIf([Num]>=0,0,1)) Mod 2)) AS P
FROM
Table1
Source: http://productfunctionsql.codeplex.com/
There is a neat trick in T-SQL (not sure if it's ANSI) that allows you to concatenate string values from a set of rows into one variable. It looks like it works for multiplication as well:
declare @Floats as table (value float)
insert into @Floats values (0.9)
insert into @Floats values (0.9)
insert into @Floats values (0.9)

declare @multiplier float = null

select @multiplier = isnull(@multiplier, 1) * value
from @Floats

select @multiplier   -- approximately 0.729
This can potentially be more numerically stable than the log/exp solution.
I think that is because no number type can comfortably accommodate products over many rows. Databases are designed for large numbers of records, so a product of 1000 numbers would be super massive, and in the case of floating-point numbers the propagated error would be huge.
Also note that using logs can be a dangerous solution. Although mathematically log(a*b) = log(a) + log(b), computing exp(log(a) + log(b)) instead of a*b may not give the exact result, because we are not dealing with true real numbers. For example:
SELECT 9999999999*99999999974482, EXP(LOG(9999999999)+LOG(99999999974482))
in SQL Server returns
999999999644820000025518, 9.99999999644812E+23
So my point is: when you compute the product this way, do it carefully and test it heavily.
One way to deal with this problem (if you are working in a scripting language) is to use the group_concat function.
For example, SELECT group_concat(table.price) FROM table GROUP BY table.sale
This will return a string with all prices for the same sale value, separated by a comma.
Then, with a parser, you can split out each price and multiply them. (In PHP you can even use the array_reduce function; in fact, the php.net manual has a suitable example.)
Cheers
Another approach, based on the fact that the cardinality of a Cartesian product is the product of the cardinalities of the individual sets ;-)
⚠ WARNING: This example is just for fun and is rather academic, don't use it in production! (Apart from anything else, it only works for positive and practically small integers.) ⚠
with recursive t(c) as (
    select unnest(array[2,5,7,8])
), p(a) as (
    select array_agg(c) from t
    union all
    select p.a[2:]
    from p
    cross join generate_series(1, p.a[1])
)
select count(*) from p where cardinality(a) = 0;
The problem can be solved using modern SQL features such as window functions and CTEs. Everything is standard SQL and - unlike logarithm-based solutions - does not require switching from the integer world to the floating-point world, nor special handling of nonpositive numbers. Just number the rows and evaluate the product in a recursive query until no rows remain:
with recursive t(c) as (
    select unnest(array[2,5,7,8])
), r(c,n) as (
    select t.c, row_number() over () from t
), p(c,n) as (
    select c, n from r where n = 1
    union all
    select r.c * p.c, r.n from p join r on p.n + 1 = r.n
)
select c from p where n = (select max(n) from p);
As your question involves grouping by the sale column, things get a little more complicated, but it's still solvable:
with recursive t(sale,price) as (
    select 'multiplication', 2 union
    select 'multiplication', 5 union
    select 'multiplication', 7 union
    select 'multiplication', 8 union
    select 'trivial', 1 union
    select 'trivial', 8 union
    select 'negatives work', -2 union
    select 'negatives work', -3 union
    select 'negatives work', -5 union
    select 'look ma, zero works too!', 1 union
    select 'look ma, zero works too!', 0 union
    select 'look ma, zero works too!', 2
), r(sale,price,n,maxn) as (
    select t.sale, t.price, row_number() over (partition by sale), count(1) over (partition by sale)
    from t
), p(sale,price,n,maxn) as (
    select sale, price, n, maxn
    from r where n = 1
    union all
    select p.sale, r.price * p.price, r.n, r.maxn
    from p
    join r on p.sale = r.sale and p.n + 1 = r.n
)
select sale, price
from p
where n = maxn
order by sale;
Result:
sale,price
"look ma, zero works too!",0
multiplication,560
negatives work,-30
trivial,8
Tested on Postgres.
Here is an Oracle solution for anyone who needs it:
with data(id, val) as (
    select 1,  1.0 from dual union all
    select 2, -2.0 from dual union all
    select 3,  1.0 from dual union all
    select 4,  2.0 from dual
),
neg(val, modifier) as (
    -- product of the absolute values of the negatives, plus the sign they contribute
    select nvl(exp(sum(ln(abs(val)))), 1),
           case when mod(count(*), 2) = 0 then 1 else -1 end
    from data
    where val < 0
),
pos(val) as (
    -- product of the non-negative values (note: ln() would fail on an actual zero)
    select nvl(exp(sum(ln(val))), 1)
    from data
    where val >= 0
)
select (select val * modifier from neg) * (select val from pos) as product
from dual   -- returns -4 for the sample data (1 * -2 * 1 * 2)

Doing Math with 2 Subqueries

I have two subqueries, both calculating sums. I would like to do an arithmetic minus (-) with the results of both queries, e.g. Query1: 400, Query2: 300, result should be 100.
Obviously a basic - in the query does not work; the minus is treated as MINUS on sets. How can I solve this? Do you have any ideas?
SELECT CustumersNo FROM Custumers WHERE
(
SELECT SUM(value) FROM roe WHERE roe.credit = Custumers.CustumersNo
-
SELECT SUM(value) FROM roe WHERE roe.debit = Custumers.CustumersNo
)
> 500
Using Informix - sorry, missed that point.
To get the original syntax to work, you would need to surround the sub-selects in parentheses:
SELECT CustumersNo
FROM Custumers
WHERE ((SELECT SUM(value) FROM roe WHERE roe.credit = Custumers.CustumersNo)
-
(SELECT SUM(value) FROM roe WHERE roe.debit = Custumers.CustumersNo)
) > 500
Note that aggregates are defined to ignore nulls in the values they aggregate in standard SQL. However, the SUM of an empty set of rows is NULL, not zero.
You can get inventive and devise ways to always have a value for each customer listed in the roe table, such as:
SELECT CustomersNo
  FROM (SELECT CustomersNo, SUM(value) AS net_credit
          FROM (SELECT credit AS CustomersNo, +value AS value FROM roe
                UNION ALL   -- UNION ALL, so duplicate amounts are not collapsed
                SELECT debit AS CustomersNo, -value AS value FROM roe
               ) AS x
         GROUP BY CustomersNo
       ) AS y
 WHERE net_credit > 500;
You can also do that with an appropriate HAVING clause if you wish. Note that this avoids issues with customers who have credit entries but no debit entries or vice versa; all the entries that are present are treated appropriately.
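For completeness, the HAVING variant mentioned above might look like this (a sketch, reusing the roe columns from the question and the same derived-table trick):
SELECT CustomersNo
  FROM (SELECT credit AS CustomersNo, +value AS value FROM roe
        UNION ALL
        SELECT debit AS CustomersNo, -value AS value FROM roe
       ) AS x
 GROUP BY CustomersNo
HAVING SUM(value) > 500;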
Your misspelling (or unorthodox spelling) of 'customers' is nearly as good as 'costumers'.
Something like what you tried should work. It may be a syntax problem, and it may depend on what type of SQL you are using. However, an approach like this would be more efficient:
Update: I see you were having a problem with nulls, so I updated it to handle nulls properly.
select CustumersNo from (
    select Custumers.CustumersNo,
           sum(coalesce(roecredit.value, 0)) - sum(coalesce(roedebit.value, 0))
               as balance
    from Custumers
    left join roe roecredit on roecredit.credit = Custumers.CustumersNo
    left join roe roedebit  on roedebit.debit   = Custumers.CustumersNo
    -- caveat: a customer with several credit rows and several debit rows
    -- will cross-multiply here, inflating both sums
    group by Custumers.CustumersNo
) as t
where balance > 500
Caveat: I don't have experience with Informix specifically.

SQL query to add or subtract values based on another field

I need to calculate the net total of a column-- sounds simple. The problem is that some of the values should be negative, as are marked in a separate column. For example, the table below would yield a result of (4+3-5+2-2 = 2). I've tried doing this with subqueries in the select clause, but it seems unnecessarily complex and difficult to expand when I start adding in analysis for other parts of my table. Any help is much appreciated!
Sign Value
Pos 4
Pos 3
Neg 5
Pos 2
Neg 2
Using a CASE expression should work in most versions of SQL:
SELECT SUM(CASE
               WHEN t.Sign = 'Pos' THEN t.Value
               ELSE t.Value * -1
           END) AS Total
FROM YourTable AS t
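A quick way to sanity-check this against the sample data (generic SQL; YourTable is just a stand-in name, and multi-row VALUES works on most modern databases):
CREATE TABLE YourTable (Sign varchar(3), Value int);
INSERT INTO YourTable VALUES ('Pos', 4), ('Pos', 3), ('Neg', 5), ('Pos', 2), ('Neg', 2);

SELECT SUM(CASE WHEN t.Sign = 'Pos' THEN t.Value ELSE -t.Value END) AS Total
FROM YourTable AS t;   -- 2, i.e. 4 + 3 - 5 + 2 - 2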
Try this (IF() is MySQL syntax):
SELECT SUM(IF(sign = 'Pos', Value, Value * (-1))) as total FROM table
I am adding up values from a single column based on values in another column of the same table, using Oracle 11g as the database and SQL Developer as the user interface.
This works:
SELECT COUNTRY_ID, SUM(
CASE
WHEN ACCOUNT IN 'PTBI' THEN AMOUNT
WHEN ACCOUNT IN 'MLS_ENT' THEN AMOUNT
WHEN ACCOUNT IN 'VAL_ALLOW' THEN AMOUNT
WHEN ACCOUNT IN 'RSC_DEV' THEN AMOUNT * -1
END) AS TI
FROM SAMP_TAX_F4
GROUP BY COUNTRY_ID;
A generic way to write that is two scalar subqueries (YourTable is a stand-in for your table name):
select (select sum(Value) from YourTable where Sign = 'Pos')
     - (select sum(Value) from YourTable where Sign = 'Neg') as total
This is a bit SQL-agnostic, since you didn't say which DB you are using, but it should be easy to adapt it to any DB out there.

Is there a performance difference between HAVING on alias, vs not using HAVING

Ok, I'm learning, bit by bit, about what HAVING means.
Now, my question is whether these two queries have different performance characteristics:
Without HAVING
SELECT x + y AS z, t.* FROM t
WHERE
x = 1 and
x+y = 2
With HAVING
SELECT x + y AS z, t.* FROM t
WHERE
x = 1
HAVING
z = 2
Yes, it should be different - (1) is expected to be faster.
HAVING means the main query is run first and the HAVING filter is applied afterwards - so it basically works on the dataset returned by the query without the HAVING clause.
The first query should be preferable, since it does not select those records at all.
HAVING is used for queries that contain GROUP BY or return a single row containing the result of aggregate functions. For example, SELECT SUM(scores) FROM t HAVING SUM(scores) > 100 returns either one row or no row at all.
The second query is considered invalid by the SQL Standard and is not accepted by some database systems.
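If you want to filter on the computed alias in a portable way, the usual trick is a derived table (a sketch reusing the question's columns; sub is just an arbitrary alias):
SELECT *
FROM (SELECT x + y AS z, t.* FROM t WHERE x = 1) AS sub
WHERE z = 2;
Most optimizers will push the outer predicate down, so this typically ends up with the same plan as the first query.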