Selecting with two different WHERE statements - SQL

I'm struggling to work out how to run a select query that checks for two different values at the same time and returns them in separate columns.
My table example:
ID | foreignID | value | accepted
---------------------------------
 1 |         1 |     5 | Y
 2 |         1 |     2 | Y
 3 |         1 |     4 | N
 4 |         2 |     8 | Y
And what I'm trying to do is along the lines of this:
SELECT
foreignID,
SUM(value WHERE (accepted='Y')) AS sum1,
SUM(value WHERE (accepted='N')) AS sum2
FROM example
WHERE foreignID='1'
My expected results would be:
foreignID | sum1 | sum2
-----------------------
        1 |    7 |    4
Obviously the above code wouldn't work; it's just half pseudo-code to show what I want. Essentially I only want to check one foreignID but then get results from several SUMs that each require their own condition.
Does anyone know of any way this could be achieved, or something similar? I've tried UNION, which puts it into...
foreignID | sum
1 | 7
1 | 4
... but that's not really what I'm after.
I've also seen multiple select in one sql statement, which seems to be on the right track, but again that uses UNION, which I don't think is ideal for my example.
I could be wrong there, so please do prove me wrong if I am. I might just be overlooking something! Thanks for any help you can provide.

Try this:
SELECT
    foreignID,
    SUM(CASE WHEN accepted = 'Y' THEN value ELSE 0 END) AS sum1,
    SUM(CASE WHEN accepted = 'N' THEN value ELSE 0 END) AS sum2
FROM example
WHERE foreignID = '1'
GROUP BY foreignID

SQL fiddle
create table test(
    id int,
    foreignid int,
    value int,
    accepted char(1)
);
INSERT INTO test values (1,1,5,'Y');
INSERT INTO test values (2,1,2,'Y');
INSERT INTO test values (3,1,4,'N');
INSERT INTO test values (4,2,8,'Y');
select
    foreignid,
    sum(case when accepted='Y' then value else 0 end) as sumY,
    sum(case when accepted='N' then value else 0 end) as sumN
from test
group by foreignid;

This is arguably more readable than the CASE version (PostgreSQL):
select
foreignID,
sum("value" * (accepted = 'Y')::int) sum1,
sum("value" * (accepted = 'N')::int) sum2
from example
where foreignID = '1'
Casting the boolean to integer yields 0 or 1. Many, if not most, languages cast the same way. I tested four:
C#
Console.WriteLine("{0} {1}",
7 * Convert.ToInt32(true) - 2 * Convert.ToInt32(false),
// Or shorter:
7 * (true ? 1 : 0) - 2 * (false ? 1 : 0)
);
Python
>>> 7 * True - 2 * False
7
Javascript
<script type="text/javascript">
document.write(7 * true - 2 * false);
</script>
PHP
<?php
echo 7 * true - 2 * false;
?>
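For reference, the same check can be made directly in Postgres (a minimal sketch of the cast itself):

select true::int as t, false::int as f;
-- t | f
-- 1 | 0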

Related

How to create a table to count with a conditional

I have a database with a lot of columns containing pass, fail, blank indicators.
I want to create a function to count each type of value and build a table from the counts. The structure I have in mind is something like this:
| Value | x                | y                | z                |
|-------|------------------|------------------|------------------|
| pass  | count if x=pass  | count if y=pass  | count if z=pass  |
| fail  | count if x=fail  | count if y=fail  | count if z=fail  |
| blank | count if x=blank | count if y=blank | count if z=blank |
| total | count(x)         | count(y)         | count(z)         |
where x, y, z are columns from another table.
I don't know what the best approach for this would be; thank you all in advance.
I tried this structure, but it shows a syntax error:
CREATE FUNCTION Countif (columnx nvarchar(20),value_compare nvarchar(10))
RETURNS Count_column_x AS
BEGIN
IF columnx=value_compare
count(columnx)
END
RETURN
END
Also, I don't know how to add each count to the actual table I am trying to create
Conditional counting (or any conditional aggregation) can often be done inline by placing a CASE expression inside the aggregate function that conditionally returns the value to be aggregated or a NULL to skip.
An example would be COUNT(CASE WHEN SelectMe = 1 THEN 1 END). Here the aggregated value is 1 (which could be any non-null value for COUNT()). For other aggregate functions, a more meaningful value would be provided. The implicit ELSE returns a NULL, which is not counted.
For your problem, I believe the first thing to do is to UNPIVOT your data, placing the column names and values side-by-side. You can then group by value and use conditional aggregation as described above to calculate your results. A few more details finish the job: (1) a totals row using WITH ROLLUP, (2) a CASE expression to adjust the labels for the blank and total rows, and (3) some ORDER BY tricks to get the rows in the right order.
The resulting query may look something like this:
SELECT
    CASE
        WHEN GROUPING(U.Value) = 1 THEN 'Total'
        WHEN U.Value = '' THEN 'Blank'
        ELSE U.Value
    END AS Value,
    COUNT(CASE WHEN U.Col = 'x' THEN 1 END) AS x,
    COUNT(CASE WHEN U.Col = 'y' THEN 1 END) AS y
FROM #Data D
UNPIVOT (
    Value
    FOR Col IN (x, y)
) AS U
GROUP BY U.Value WITH ROLLUP
ORDER BY
    GROUPING(U.Value),
    CASE U.Value WHEN 'Pass' THEN 1 WHEN 'Fail' THEN 2 WHEN '' THEN 3 ELSE 4 END,
    U.Value
Sample data:

| x    | y    |
|------|------|
| Pass | Pass |
| Pass |      |
| Fail |      |
| Pass | Fail |
Sample results:

| Value | x | y |
|-------|---|---|
| Pass  | 3 | 1 |
| Fail  | 1 | 1 |
| Blank | 0 | 2 |
| Total | 4 | 4 |
See this db<>fiddle for a working example.
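If you prefer to load the sample data locally instead of using the fiddle, a minimal setup sketch would be the following (the column types are an assumption; blanks are stored as empty strings so UNPIVOT keeps them):

CREATE TABLE #Data (x nvarchar(20), y nvarchar(20));

INSERT INTO #Data (x, y) VALUES
    ('Pass', 'Pass'),
    ('Pass', ''),
    ('Fail', ''),
    ('Pass', 'Fail');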
I don't think you need a generic solution like a function with the value as a parameter.
Perhaps you could create a view grouping your data and then query that view, filtering by your value.
Your view body would be something like this:
select value, count(*) as Total
from table_name
group by value
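A hedged sketch of the full view plus the filtering call (the view name value_counts is just a placeholder; table and column names come from the snippet above):

create view value_counts as
select value, count(*) as Total
from table_name
group by value;

select Total
from value_counts
where value = 'pass';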
Feel free to explain your situation better so I can help you further.
You can do this by grouping by the status column.
select status, count(*) as total
from some_table
group by status
Rather than making a whole new table, consider using a view. This is a query that looks like a table.
create view status_counts as
select status, count(*) as total
from some_table
group by status
You can then select total from status_counts where status = 'pass' or the like and it will run the query.
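For example, to read the pass count from the view:

select total
from status_counts
where status = 'pass';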
You can also create a "materialized view". This is like a view, but the results are written to a real table. SQL Server is special in that it will keep this table up to date for you.
create materialized view status_counts with distribution(hash(status)) as
select status, count(*) as total
from some_table
group by status
You'd do this for performance reasons on a large table which does not update very often.

How to sum the values of the column in postgresql on a certain condition?

I have a table that looks something like this:
Name     | Val_Num
------------------
Joey     | 1
Joey     | 2
Chandler | 2
Monica   | 3
Monica   | 2
What I need is a select that removes the duplicate names and sums up the values in Val_Num, but the sum should use a simple formula: if the value is 1, add 1; if the value is not 1, add 0.5.
The result of the query should look like this:
Name     | Val_Num
------------------
Joey     | 1.5
Chandler | 0.5
Monica   | 1
Looking forward to your help, thanks.
Use a CASE expression in order to decide when to add 1 and when 0.5.
SELECT Name, SUM(CASE WHEN Val_Num = 1 THEN Val_Num ELSE 0.5 END)
FROM test_table
GROUP BY Name
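If you want to try it, a quick setup matching the sample data would be something like this (the table name follows the answer above; column types are assumptions):

create table test_table (Name varchar(20), Val_Num int);

insert into test_table (Name, Val_Num) values
    ('Joey', 1),
    ('Joey', 2),
    ('Chandler', 2),
    ('Monica', 3),
    ('Monica', 2);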
You can use a CASE WHEN expression to replace all values that are not 1 with 0.5, and then do a simple GROUP BY query:
select name,
       sum(case when val_Num != 1 then 0.5 else val_Num end)
from table_1
group by name
Demo in DBfiddle

MS-Access Query to PostgreSQL View

I am converting a Microsoft Access query into a PostgreSQL view. The query has obvious components that I have found reasonable answers for. However, I am still stuck on getting the final result:
SELECT All_Claim_Data.Sec_ID,
Sum(IIf([Type]="LODE",IIf([Status]="Active",1,0),0)) AS LD_Actv,
Sum(IIf([Type]="LODE",IIf([Loc_Date]>#8/31/2017#,IIf([Loc_Date]<#9/1/2018#,1,0),0),0)) AS LD_stkd_17_18,
Sum(IIf([Type]="LODE",IIf([Loc_Date]>#8/31/2016#,IIf([Loc_Date]<#9/1/2017#,1,0),0),0)) AS LD_stkd_16_17,
Sum(IIf([Type]="LODE",IIf([Loc_Date]<#1/1/1910#,IIf(IsNull([Clsd_Date]),1,(IIf([Clsd_Date]>#1/1/1900#,1,0))),0),0)) AS Actv_1900s,
Sum(IIf([Type]="LODE",IIf([Loc_Date]<#1/1/1920#,IIf(IsNull([Clsd_Date]),1,(IIf([Clsd_Date]>#1/1/1910#,1,0))),0),0)) AS Actv_1910s,
FROM All_Claim_Data.Sec_ID,
GROUP BY All_Claim_Data.Sec_ID,
HAVING (((Sum(IIf([casetype_txt]="LODE",1,0)))>0));
Realizing I need to use CASE SUM WHEN, here is what I have worked out so far:
CREATE OR REPLACE VIEW hgeditor.vw_test AS
SELECT All_Claim_Data.Sec_ID,
SUM (CASE WHEN(Type='LODE' AND WHEN(Status='Active',1,0),0)) AS LD_Actv,
SUM (CASE WHEN(Type='LODE' AND WHEN(Loc_Date>'8/31/2017' AND Loc_Date<'9/1/2018',1,0),0),0)) AS LD_stkd_17_18,
SUM (CASE WHEN(Type='LODE' AND WHEN(Loc_Date<'1/1/1910' AND (IsNull(Clsd_Date),1,(WHEN([Clsd_Date]>'1/1/1900',1,0))),0),0)) AS Actv_1900s
FROM All_Claim_Data.Sec_ID,
GROUP BY All_Claim_Data.Sec_ID,
HAVING (((SUM(IIf(Type='LODE',1,0)))>0));
The goal is to count the number of instances in which the Sec_ID has the following:
has (Type = LODE and Status = Active) = SUM integer
has (Type = LODE and Loc_Date between 8/31/2017 and 9/1/2018) = SUM Integer
My primary issue is getting a SUM integer to populate in the new columns
CASE expressions are the equivalent of Access's IIf() function, but WHEN isn't a function, so it isn't used by passing it a set of parameters. Think of it as a tiny WHERE clause instead: it evaluates one or more predicates to determine what to do, and the action taken is whatever you specify after THEN.
CREATE OR REPLACE VIEW hgeditor.vw_test AS
SELECT
    All_Claim_Data.Sec_ID
  , SUM( CASE
           WHEN Type = 'LODE' AND
                Status = 'Active' THEN 1
           ELSE 0
         END ) AS LD_Actv
  , SUM( CASE
           WHEN Type = 'LODE' AND
                Loc_Date > to_date('08/31/2017','mm/dd/yyyy') AND
                Loc_Date < to_date('09/01/2018','mm/dd/yyyy') THEN 1
           ELSE 0
         END ) AS LD_stkd_17_18
  , SUM( CASE
           WHEN Type = 'LODE' AND
                Loc_Date < to_date('01/01/1910','mm/dd/yyyy') AND
                -- a NULL Clsd_Date counts as still active, like the IsNull() check in the Access query
                (Clsd_Date IS NULL OR
                 Clsd_Date > to_date('01/01/1900','mm/dd/yyyy')) THEN 1
           ELSE 0
         END ) AS Actv_1900s
FROM All_Claim_Data
GROUP BY
    All_Claim_Data.Sec_ID
HAVING COUNT( CASE
                WHEN Type = 'LODE' THEN 1
              END ) > 0
;
By the way, you should not rely on ambiguous MM/DD/YYYY date strings in Postgres.
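For example, an ISO-format literal or an explicit format string is unambiguous regardless of the server's DateStyle setting (a sketch using the same column):

Loc_Date > date '2017-08-31'
-- or
Loc_Date > to_date('2017-08-31', 'yyyy-mm-dd')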
NB: aggregate functions ignore NULLs. Take this example:
+----+-------+
| id | value |
+----+-------+
|  1 | x     |
|  2 | NULL  |
|  3 | x     |
|  4 | NULL  |
|  5 | x     |
+----+-------+
select
count(*) c_all
, count(value) c_value
from t
+-------+---------+
| c_all | c_value |
+-------+---------+
|     5 |       3 |
+-------+---------+
select
sum(case when value IS NOT NULL then 1 else 0 end) sum_case
, count(case when value IS NOT NULL then 1 end) count_case
from t
+----------+------------+
| sum_case | count_case |
+----------+------------+
|        3 |          3 |
+----------+------------+
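To reproduce these numbers locally, a minimal sketch (column types are assumptions) is:

create table t (id int, value varchar(10));

insert into t (id, value) values
    (1, 'x'),
    (2, NULL),
    (3, 'x'),
    (4, NULL),
    (5, 'x');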

Compare version numbers in a SQL query

I have a table which stores compatibility information for specific versions of a piece of software, for example whether a version of the client is compatible with the backend. There is a lower and an upper bound, each with major, minor and revision version numbers. The upper bound numbers can be null (there is a check constraint which ensures that either all or none of them are null).
I'd like to create a query which returns the rows for various majorVersion, minorVersion and revisionVersion numbers.
Example (clientId left out to make it more simple):
minMajorVersion | minMinorVersion | minRevisionVersion | maxMajorVersion | maxMinorVersion | maxRevisionVersion
----------------+-----------------+--------------------+-----------------+-----------------+-------------------
              1 |               0 |                  0 |            NULL |            NULL |               NULL
              1 |               2 |                  5 |            NULL |            NULL |               NULL
              1 |               3 |                  0 |            NULL |            NULL |               NULL
              2 |               0 |                  1 |               5 |               1 |                  0
Let's say I want to know which client version is compatible with a backend version 1.2.6. For this, the query should return the first two rows, because the min versions are smaller, and the max versions are NULL.
For another backend version 2.0.1 the query should return the last row, and for backend version 5.2.0 the query should return nothing.
What I was able to create is this:
SELECT c.* FROM COMPATIBILITYQUALIFIER q
join client c on (c.id = q.clientid)
WHERE (q.MINBACKENDMAJORVERSION < 2
OR (q.MINBACKENDMAJORVERSION = 2 AND q.MINBACKENDMINORVERSION < 3)
OR (q.MINBACKENDMAJORVERSION = 2 AND q.MINBACKENDMINORVERSION = 3 AND q.MINBACKENDREVISIONVERSION <=6))
AND ((q.MAXBACKENDMAJORVERSION IS NULL)
OR ((q.MAXBACKENDMAJORVERSION > 2)
OR (q.MAXBACKENDMAJORVERSION = 2 AND q.MAXBACKENDMINORVERSION > 3)
OR (q.MAXBACKENDMAJORVERSION = 2 AND q.MAXBACKENDMINORVERSION = 3 AND q.MAXBACKENDREVISIONVERSION >= 6)))
order by c.MAJORVERSION DESC, c.MINORVERSION DESC, c.REVISIONVERSION DESC;
I don't think it would be performant.
An easy way to do this would be to create a stored procedure, but I don't want to put code in the DB right now.
Is there a way to do it with sub-queries? Anything else which is fast?
UPDATED.
Sure, the query is not the prettiest. But just because you have multiple conditional clauses like that doesn't mean that your query will be any slower.
Only as a matter of readability and to avoid repeating hardcoded values, I would rewrite the query to something like this:
select c.*
from compatibilityqualifier q
join (select 2 as major,
3 as minor,
6 as revision
from dual) ver
on 1=1
join client c
on c.id = q.clientid
where ver.major >= q.minBackendMajorVersion
and (ver.major > q.minBackendMajorVersion or ver.minor >= q.minBackendMinorVersion)
and (ver.major > q.minBackendMajorVersion or ver.minor > q.minBackendMinorVersion or ver.revision >= q.minBackendRevisionVersion)
and (q.maxBackendMajorVersion is null
or (ver.major <= q.maxBackendMajorVersion
and (ver.major < q.maxBackendMajorVersion or ver.minor <= q.maxBackendMinorVersion)
and (ver.major < q.maxBackendMajorVersion or ver.minor < q.maxBackendMinorVersion or ver.revision <= q.maxBackendRevisionVersion)
)
)
order by c.majorversion desc,
c.minorversion desc,
c.revisionversion desc
But I expect the performance to be pretty much identical.
For any given version number expressed as a tuple of (Major, Minor, Revision) you can use the following query to retrieve rows from your CompatibilityQualifier table. For example Version 1,2,6 below:
select q.*
from (select 1 major
, 2 minor
, 6 revision from dual) v
join CompatibilityQualifier q
on ( q.minMajorVersion < v.major or
( q.minMajorVersion = v.major and
( q.minMinorVersion < v.minor or
( q.minMinorVersion = v.minor and
q.minRevisionVersion <= v.revision))))
and ( q.maxMajorVersion is null or
q.maxMajorVersion > v.major or
( q.maxMajorVersion = v.major and
( q.MaxMinorVersion is null or
q.MaxMinorVersion > v.minor or
( q.MaxMinorVersion = v.minor and
( maxRevisionVersion is null or
q.maxRevisionVersion >= v.revision)))));
Which yields the following results:
| MINMAJORVERSION | MINMINORVERSION | MINREVISIONVERSION | MAXMAJORVERSION | MAXMINORVERSION | MAXREVISIONVERSION |
|-----------------|-----------------|--------------------|-----------------|-----------------|--------------------|
| 1 | 0 | 0 | (null) | (null) | (null) |
| 1 | 2 | 5 | (null) | (null) | (null) |
With revision 2,0,1 every row from CompatibilityQualifier would be returned since there are no upper bounds on any of the 1,x,x records.
If you really want records with NULL values of maxMajorVersion excluded from the result set when the queried major version number differs from the minMajorVersion then you can use this revised version:
select q.*
from (select 2 major
, 0 minor
, 1 revision from dual) v
join CompatibilityQualifier q
on ( q.minMajorVersion < v.major or
( q.minMajorVersion = v.major and
( q.minMinorVersion < v.minor or
( q.minMinorVersion = v.minor and
q.minRevisionVersion <= v.revision))))
and ( --q.maxMajorVersion is null or
q.maxMajorVersion > v.major or
( coalesce(q.maxMajorVersion -- When Null compare to minMajorVersion
,q.minMajorVersion) = v.major and
( q.MaxMinorVersion is null or
q.MaxMinorVersion > v.minor or
( q.MaxMinorVersion = v.minor and
( maxRevisionVersion is null or
q.maxRevisionVersion >= v.revision)))));
which just returns the one row:
| MINMAJORVERSION | MINMINORVERSION | MINREVISIONVERSION | MAXMAJORVERSION | MAXMINORVERSION | MAXREVISIONVERSION |
|-----------------|-----------------|--------------------|-----------------|-----------------|--------------------|
| 2 | 0 | 1 | 5 | 1 | 0 |
I know I am a bit late to the party, but I found a very easy way to compare versions in Oracle: just compare the versions as varchar.
Check the following simple query; I tested it for different versions and it worked. (Note that this is a plain string comparison, so it only behaves like a true numeric version comparison while the components being compared have the same number of digits.)
select case when '1.0.30' < '1.1.22' then 'true' else 'false' end as isVersionHigher
from dual;
--Result => 'true'
You can even compare versions that only have major and minor digits. Check the query below:
select case when '1.0' < '1.1.22' then 'true' else 'false' end as isVersionHigher
from dual;
--Result => 'true'

How to get first n numbers from float

I have table A with two columns, id (int) and f_value (float). Now I'd like to select all rows where f_value starts with '123'. So for the following table:
id | f_value
------------
 1 | 12
 2 | 123
 3 | 1234
I'd like to get the second and third row. I tried to use LEFT with cast but that was a disaster. For the following query:
select f_value, str(f_value) as_string, LEFT(str(f_value), 2) left_2,
LEFT(floor(f_value), 5) flor_5, LEFT('abcdef', 5) test
from A
I got:
f_value | as_string | left_2 | flor_5 | test
------------------------------------------------
40456510 | 40456510 | | 4.045 | abcde
40454010 | 40454010 | | 4.045 | abcde
404020 | 404020 | | 40402 | abcde
40452080 | 40452080 | | 4.045 | abcde
101020 | 101020 | | 10102 | abcde
404020 | 404020 | | 40402 | abcde
The question is: why does LEFT work fine for 'test' but return such weird results for the others?
EDIT:
I made another test and now I'm even more confused. For the query:
Declare @f as float
set @f = 40456510.
select LEFT(cast(@f as float), LEN(4045.)), LEFT(404565., LEN(4045.))
I got:
|
------------
4.04 | 4045
Is there a default cast which causes this?
Fiddle SQL
It seems like your query is a bit wrong. The LEFT part should go in the WHERE clause, not the SELECT part.
Also, just use LIKE and you should be fine:
SELECT f_value, str(f_value) as_string, LEFT(str(f_value), 2) left_2,
       LEFT(floor(f_value), 5) flor_5
FROM A
WHERE f_value LIKE '123%'
CREATE TABLE #TestTable(ID INT, f_value FLOAT)
INSERT INTO #TestTable
VALUES (1,22),
(2,123),
(3,1234)
SELECT *
FROM #TestTable
WHERE LEFT(f_value,3)='123'
DROP TABLE #TestTable
I hope this will help.
The REPLACE gets rid of the period in the float; multiplying by 1 converts it back to a number, so any leading 0 is removed.
SELECT f_value
FROM your_table
WHERE replace(f_value, '.', '') * 1 like '123%'
I found the solution. The problem was that SQL Server uses exponential (scientific) notation when it converts large floats to strings. To resolve it, first convert the float to BIGINT and then use LEFT on it.
Example:
Select * from A where Left(Cast(float_value as BigInt), 4) = xxxx
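Applied to the original '123' example, a sketch of the same idea (table and column names taken from the question) would be:

select *
from A
where left(cast(f_value as bigint), 3) = '123';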
/*
  returns the significant digits of a float as an integer;
  the negative sign is stripped off
*/
declare @num_digits int = 3; /* needs to be positive; accuracy diminishes with larger values */

with samples(num, f) as (
    select 1, cast(123.45 as float) union
    select 2, 123456700 union
    select 3, -1.234567 union
    select 4, 0.0000001234
)
select num, f,
    case when f = 0 or @num_digits < 1 then 0 else
        floor(
            case sign(log10(abs(f)))
                when -1 then abs(f) * power(10e0, -floor(log10(abs(f))) + @num_digits - 1)
                when 1 then abs(f) / power(10e0, ceiling(log10(abs(f))) - @num_digits)
            end
        )
    end as significant_digits
from samples
order by num;
sqlfiddle
Convert the FLOAT value to DECIMAL, then to VARCHAR using CAST, and use LIKE to select the values starting with 4045.
Query
SELECT * FROM tbl
WHERE CAST(CAST(f_value AS DECIMAL(20,12)) AS VARCHAR(MAX)) LIKE '4045%';
Fiddle demo for reference