I have a table called "where_clauses" which contains a bunch of conditions I would like to use for building dynamic queries. I would like to know all possible queries I could perform using this data. Here is my "where_clauses" data...
INSERT INTO where_clauses (id,col_name,clause) VALUES (1,'x','x < 1');
INSERT INTO where_clauses (id,col_name,clause) VALUES (2,'x','x < 2');
INSERT INTO where_clauses (id,col_name,clause) VALUES (3,'x','x < 3');
INSERT INTO where_clauses (id,col_name,clause) VALUES (4,'y','y < 1');
INSERT INTO where_clauses (id,col_name,clause) VALUES (5,'y','y < 2');
INSERT INTO where_clauses (id,col_name,clause) VALUES (6,'y','y < 3');
INSERT INTO where_clauses (id,col_name,clause) VALUES (7,'z','z < 1');
Ideally I would like the "all possible queries" in the form of an array of ids. For example, the "all possible queries" result would be...
{1}
{1,4}
{1,4,7}
{1,5}
{1,5,7}
{1,6}
{1,6,7}
{2}
{2,4}
{2,4,7}
{2,5}
{2,5,7}
{2,6}
{2,6,7}
{3}
{3,4}
{3,4,7}
{3,5}
{3,5,7}
{3,6}
{3,6,7}
{4}
{4,7}
{5}
{5,7}
{6}
{6,7}
{7}
Note that I'm throwing out combinations that pair clauses on the same column. What is a query that would give all possible where_clause combinations?
This is the sort of problem that the new WITH RECURSIVE is intended to solve. The following generalizes to any number of column names (not just x, y, z).
WITH RECURSIVE subq(a, x) AS (
    VALUES (ARRAY[]::int[], NULL::text)  /* initial: empty id array */
  UNION ALL
    SELECT subq.a || id, col_name
    FROM subq JOIN where_clauses
      ON x IS NULL OR x < col_name
)
SELECT a FROM subq
WHERE x IS NOT NULL; /* discard the initial empty array */
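If you then want to turn each id array into an actual WHERE string, one possible follow-up (a sketch built on the same recursive query, not part of the original answer) is to join the arrays back to where_clauses and aggregate the clause text:

WITH RECURSIVE subq(a, x) AS (
    VALUES (ARRAY[]::int[], NULL::text)
  UNION ALL
    SELECT subq.a || id, col_name
    FROM subq JOIN where_clauses ON x IS NULL OR x < col_name
)
SELECT s.a AS ids,
       'WHERE ' || string_agg(wc.clause, ' AND ' ORDER BY wc.id) AS where_sql
FROM subq s
JOIN where_clauses wc ON wc.id = ANY (s.a)
WHERE s.x IS NOT NULL   -- discard the initial empty array
GROUP BY s.a
ORDER BY s.a;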
-- concat_ws skips NULL ids, so one- and two-clause rows work too
SELECT string_to_array(concat_ws(',', x, y, z), ',')::int[] AS ids
FROM (
    WITH sq AS (
        SELECT a.id x, b.id y, c.id z
        FROM where_clauses a, where_clauses b, where_clauses c
        WHERE a.col_name != b.col_name AND
              a.col_name != c.col_name AND
              b.col_name != c.col_name AND
              a.id < b.id AND
              b.id < c.id
    )
    SELECT x, y, z FROM sq
    UNION ALL
    SELECT DISTINCT x, y, null::int FROM sq
    UNION ALL
    SELECT DISTINCT y, z, null::int FROM sq
    UNION ALL
    SELECT DISTINCT x, null::int, null::int FROM sq
    UNION ALL
    SELECT DISTINCT y, null::int, null::int FROM sq
    UNION ALL
    SELECT DISTINCT z, null::int, null::int FROM sq
) combos
ORDER BY 1;
Does the above query help you out?
Try this code. It selects three id columns; those not used for a clause are left NULL. You could concatenate or otherwise manipulate that result further (see the sketch after the code):
--all possibilities with only one clause
SELECT
id AS ID1, NULL ID2, NULL AS ID3
FROM where_clauses
--all possibilities with two clauses (xy,xz,yz)
UNION
SELECT
WC1.id AS ID1, WC2.id AS ID2, NULL AS ID3
FROM where_clauses WC1
CROSS JOIN where_clauses WC2
WHERE
WC1.col_name != WC2.col_name
AND WC1.id > WC2.id
--all possibilities with an x and a y and a z clause
UNION
SELECT
WC1.id AS ID1, WC2.id AS ID2, WC3.id AS ID3
FROM where_clauses WC1
CROSS JOIN where_clauses WC2
CROSS JOIN where_clauses WC3
WHERE
WC1.col_name != WC2.col_name
AND WC1.id > WC2.id
AND WC1.col_name != WC3.col_name
AND WC1.id > WC3.id
AND WC2.col_name != WC3.col_name
AND WC2.id > WC3.id
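If you are on PostgreSQL (which the {1,4,7}-style output in the question suggests), one way to manipulate the result further is to collapse the three id columns into arrays and drop the NULL placeholders. This is only a sketch layered on top of the query above:

SELECT array_remove(ARRAY[ID3, ID2, ID1], NULL) AS ids  -- ID1 holds the largest id, so listing ID3 first keeps the array ascending
FROM (
    SELECT id AS ID1, NULL::int AS ID2, NULL::int AS ID3
    FROM where_clauses
    -- ... plus the remaining UNION branches from the query above ...
) q
ORDER BY 1;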
Hi, I need the result of this: if an entityID matches a given value, I need the sum of a certain column. I am getting an "expression missing" error. Can someone point me to where the error is?
Thanks.
SELECT
p.jobTitle,
p.department,
p.person,
ufr.meets,
ufr.exceeds,
CASE
WHEN ufr.entityid = 'AHT' THEN (AD.acdcalls + AD.daacdcalls)
WHEN ufr.entityid = 'ACW' THEN (AD.acdcalls + AD.daacdcalls)
WHEN ufr.entityid = 'Adherence' THEN SUM(AA.totalSched)
WHEN ufr.entityid = 'Conformance' THEN SUM(AS.minutes)
ELSE null
END as weight,
(weight * meets) AS weightedMeets,
(weight * exceeds) AS weightedExceeds
FROM M_PERSON p
JOIN A_TMP5408_UNFLTRDRESULTSAG ufr
ON ufr.department = p.department AND ufr.jobTitle = p.jobTitle
LEFT JOIN M_AvayaDAgentChunk AD
ON AD.person = p.person and ufr.split = AD.split
LEFT JOIN M_AgentAdherenceChunk AA
ON AA.person = p.person
LEFT JOIN M_AgentScheduleChunk AS
ON AS.person = p.person
GROUP BY
p.person,
p.department,
p.jobTitle,
ufr.meets,
ufr.exceeds,
weight,
weightedMeets,
weightedExceeds
As well as the issues mentioned by @GordonLinoff (that AS is a keyword) and @DCookie (you need entityid in the group-by):
you also need acdcalls and daacdcalls in the group-by (unless you can aggregate those);
you can't refer to a column alias at the same level of the query, so (weight * meets) AS weightedMeets isn't allowed - you've only just defined what weight is, in the same select list. You need to use an inline view, or a CTE, if you don't want to repeat the case logic.
I think this does what you want:
SELECT
jobTitle,
department,
person,
meets,
exceeds,
weight,
(weight * meets) AS weightedMeets,
(weight * exceeds) AS weightedExceeds
FROM
(
SELECT
MP.jobTitle,
MP.department,
MP.person,
ufr.meets,
ufr.exceeds,
CASE
WHEN ufr.entityid = 'AHT' THEN (MADAC.acdcalls + MADAC.daacdcalls)
WHEN ufr.entityid = 'ACW' THEN (MADAC.acdcalls + MADAC.daacdcalls)
WHEN ufr.entityid = 'Adherence' THEN SUM(MAAC.totalSched)
WHEN ufr.entityid = 'Conformance' THEN SUM(MASC.minutes)
ELSE null
END as weight
FROM M_PERSON MP
JOIN A_TMP5408_UNFLTRDRESULTSAG ufr
ON ufr.department = MP.department AND ufr.jobTitle = MP.jobTitle
LEFT JOIN M_AvayaDAgentChunk MADAC
ON MADAC.person = MP.person and ufr.split = MADAC.split
LEFT JOIN M_AgentAdherenceChunk MAAC
ON MAAC.person = MP.person
LEFT JOIN M_AgentScheduleChunk MASC
ON MASC.person = MP.person
GROUP BY
MP.person,
MP.department,
MP.jobTitle,
ufr.meets,
ufr.exceeds,
ufr.entityid,
MADAC.acdcalls,
MADAC.daacdcalls
);
Your first two CASE branches could be combined since the calculation is the same, but it will work either way.
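For completeness, since both an inline view and a CTE were mentioned as options, here is the same query sketched as a CTE (untested against your schema, and with the first two branches combined):

WITH agg AS (
    SELECT
        MP.jobTitle,
        MP.department,
        MP.person,
        ufr.meets,
        ufr.exceeds,
        CASE
            WHEN ufr.entityid IN ('AHT', 'ACW') THEN (MADAC.acdcalls + MADAC.daacdcalls)
            WHEN ufr.entityid = 'Adherence'     THEN SUM(MAAC.totalSched)
            WHEN ufr.entityid = 'Conformance'   THEN SUM(MASC.minutes)
            ELSE NULL
        END AS weight
    FROM M_PERSON MP
    JOIN A_TMP5408_UNFLTRDRESULTSAG ufr
        ON ufr.department = MP.department AND ufr.jobTitle = MP.jobTitle
    LEFT JOIN M_AvayaDAgentChunk MADAC
        ON MADAC.person = MP.person AND ufr.split = MADAC.split
    LEFT JOIN M_AgentAdherenceChunk MAAC
        ON MAAC.person = MP.person
    LEFT JOIN M_AgentScheduleChunk MASC
        ON MASC.person = MP.person
    GROUP BY
        MP.person, MP.department, MP.jobTitle,
        ufr.meets, ufr.exceeds, ufr.entityid,
        MADAC.acdcalls, MADAC.daacdcalls
)
SELECT
    jobTitle, department, person, meets, exceeds, weight,
    (weight * meets)   AS weightedMeets,
    (weight * exceeds) AS weightedExceeds
FROM agg;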
In addition to the alias issue identified by Gordon, I think you'll find you need to use an aggregate function in all the THEN branches of your CASE expression, and that you need to GROUP BY ufr.entityid as well. Otherwise you'll start getting ORA-00979 errors (not a GROUP BY expression). If you don't want the aggregate function in every branch, then you'll have to group by the expressions you reference without an aggregate as well.
Small illustration:
CREATE TABLE tt (ID varchar2(32), sub_id varchar2(32), x NUMBER, y NUMBER);
INSERT INTO tt VALUES ('ID1', 'A', 1, 6);
INSERT INTO tt VALUES ('ID1', 'B', 1, 7);
INSERT INTO tt VALUES ('ID2', 'A', 2, 6);
INSERT INTO tt VALUES ('ID2', 'B', 2, 7);
INSERT INTO tt VALUES ('ID3', 'A', 3, 6);
INSERT INTO tt VALUES ('ID3', 'B', 3, 7);
INSERT INTO tt VALUES ('ID3', 'C', 3, 8);
SELECT ID, CASE WHEN sub_id = 'A' THEN SUM(y)
WHEN sub_id = 'B' THEN SUM(x)
ELSE (x + y) END tst
FROM tt
GROUP BY ID
ORA-00979: not a GROUP BY expression (points at sub_id in WHEN)
SELECT ID, CASE WHEN sub_id = 'A' THEN SUM(y)
WHEN sub_id = 'B' THEN SUM(x)
ELSE (x + y) END tst
FROM tt
GROUP BY ID, sub_id
ORA-00979: not a GROUP BY expression (points at x in ELSE)
SELECT ID, CASE WHEN sub_id = 'A' THEN SUM(y)
           WHEN sub_id = 'B' THEN SUM(x)
           ELSE SUM(x + y) END tst
FROM tt
GROUP BY ID, sub_id;
ID TST
-------------------------------- ----------
ID1 6
ID3 6
ID3 3
ID1 1
ID2 6
ID2 2
ID3 11
I need to write a SQL query that selects values from a table based on several tuples of selection criteria. It could be done using a where clause like this:
where (a = 1 and b='a') or (a=5 and b='s')
Would the best way be to run:
select a, pk from x where a in (1,5)
select b, pk from x where b in ('a','s')
and join the result of the two queries using the primary key?
Do you mean something like this (a self join):
select x.a, x.pk
from x
join x x2 on x.pk=x2.pk
where x.a in (1,5)
and x2.b in ('a','s')
?
You can join against a table expression built from VALUES, adding as many rows to the VALUES list as you need. This works on MS SQL Server:
DECLARE @x TABLE ( a INT, b CHAR(1) )
INSERT INTO @x
VALUES ( 1, 'a' ),
( 1, 'b' ),
( 1, 'c' ),
( 2, 'd' ),
( 2, 'e' ),
( 5, 'f' ),
( 5, 's' )
SELECT x.*
FROM @x x
JOIN (
VALUES ( 1, 'a'),
( 5, 's')
) AS v( a, b ) ON x.a = v.a AND x.b = v.b
Output:
a b
1 a
5 s
Based on my understanding, you want to write SQL that uses a combination of two filters. Here is a simple solution that will work in any database.
Create a new column, say "COLUMN_NEW", in the same table, or build a temp table or a view with a new column (plus the existing columns from the original table).
Populate "COLUMN_NEW" with the concatenated values of column a and column b. Based on your example, the values in "COLUMN_NEW" will be '1a' and '5s'.
The concatenation syntax may differ between databases; for example, CONCAT(a, b) in SQL Server.
The SQL to select records from the table will then be select * from table where COLUMN_NEW in ('1a', '5s'); a sketch of the whole approach follows below.
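A minimal sketch of that approach on SQL Server 2012 or later (the table name x and the use of a computed column are assumptions made for illustration):

-- Hypothetical table x(a, b): add a computed concatenated key, then filter on it.
ALTER TABLE x ADD COLUMN_NEW AS CONCAT(a, b);

SELECT *
FROM x
WHERE COLUMN_NEW IN ('1a', '5s');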
Suppose I have a list of values, such as 1, 2, 3, 4, 5 and a table where some of those values exist in some column. Here is an example:
id name
1 Alice
3 Cindy
5 Elmore
6 Felix
I want to create a SELECT statement that will include all of the values from my list as well as the information from those rows that match the values, i.e., perform a LEFT OUTER JOIN between my list and the table, so the result would look as follows:
id name
1 Alice
2 (null)
3 Cindy
4 (null)
5 Elmore
How do I do that without creating a temp table or using multiple UNION operators?
If you're on Microsoft SQL Server 2008 or later, you can use a Table Value Constructor:
Select v.valueId, m.name
From (values (1), (2), (3), (4), (5)) v(valueId)
left Join otherTable m
on m.id = v.valueId
Postgres also has this construct, VALUES lists:
SELECT * FROM (VALUES (1, 'one'), (2, 'two'), (3, 'three')) AS t (num,letter)
Also note the Common Table Expression syntax, which can be handy for joins:
WITH my_values(num, str) AS (
VALUES (1, 'one'), (2, 'two'), (3, 'three')
)
SELECT num, str FROM my_values
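Applied to the question, the same VALUES list can be left-joined directly in Postgres (otherTable stands in for your real table, as in the SQL Server example above):

SELECT v.id, m.name
FROM (VALUES (1), (2), (3), (4), (5)) AS v(id)
LEFT JOIN otherTable m ON m.id = v.id
ORDER BY v.id;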
With Oracle it's also possible, though heavier. From Ask Tom:
with id_list as (
select 10 id from dual union all
select 20 id from dual union all
select 25 id from dual union all
select 70 id from dual union all
select 90 id from dual
)
select * from id_list;
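To get the question's output you would then outer-join that CTE to your table; a sketch (the table and column names are illustrative):

with id_list as (
  select 1 id from dual union all
  select 2 id from dual union all
  select 3 id from dual union all
  select 4 id from dual union all
  select 5 id from dual
)
select l.id, t.name
from id_list l
left join mytable t
  on t.id = l.id
order by l.id;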
The following solution for Oracle is adapted from this source. The basic idea is to exploit Oracle's hierarchical queries: you have to specify a maximum length for the list (100 in the sample query below).
select d.lstid
, t.name
from (
select substr(
csv
, instr(csv,',',1,lev) + 1
, instr(csv,',',1,lev+1 )-instr(csv,',',1,lev)-1
) lstid
from (select ','||'1,2,3,4,5'||',' csv from dual)
, (select level lev from dual connect by level <= 100)
where lev <= length(csv)-length(replace(csv,','))-1
) d
left join test t on ( d.lstid = t.id )
;
Bit late on this, but for Oracle you could do something like this to get a table of values:
SELECT rownum + 5 /*start*/ - 1 as myval
FROM dual
CONNECT BY LEVEL <= 100 /*end*/ - 5 /*start*/ + 1
... And then join that to your table:
SELECT *
FROM
(SELECT rownum + 1 /*start*/ - 1 myval
FROM dual
CONNECT BY LEVEL <= 5 /*end*/ - 1 /*start*/ + 1) mypseudotable
left outer join myothertable
on mypseudotable.myval = myothertable.correspondingval
Assuming myTable is the name of your table, the following code should work.
;with x as
(
select top (select max(id) from [myTable]) number from [master]..spt_values
),
y as
(select row_number() over (order by x.number) as id
from x)
select y.id, t.name
from y left join myTable as t
on y.id = t.id;
Caution: this is a SQL Server-specific implementation.
To get the sequential numbers required for part of the output (this method saves you from typing out the values for n numbers):
create table ##table (id int)  -- global temp table to hold the sequence; needed before the loop
declare @site as int
set @site = 1
while @site <= 200
begin
    insert into ##table values (@site)
    set @site = @site + 1
end
Final output (after the above step):
select * from ##table
select v.id,m.name from ##table as v
left outer join [source_table] m
on m.id=v.id
Suppose the table that has the values 1, 2, 3, 4, 5 is named list_of_values, and the table that contains some of those values along with the name column is named some_values. Then you can do:
SELECT B.id,A.name
FROM [list_of_values] AS B
LEFT JOIN [some_values] AS A
ON B.ID = A.ID
I am working on a query that is fairly similar the following:
CREATE TABLE #test (a char(1), b char(1))
INSERT INTO #test(a,b) VALUES
('A',NULL),
('A','B'),
('B',NULL),
('B',NULL)
SELECT DISTINCT a,b FROM #test
DROP TABLE #test
The result is, unsurprisingly,
a b
-------
A NULL
A B
B NULL
The output I would like to see in actuality is:
a b
-------
A B
B NULL
That is, if a column has a value in some records but not in others, I want to throw out the row with NULL for that column. However, if a column has a NULL value for all records, I want to preserve that NULL.
What's the simplest/most elegant way to do this in a single query?
I have a feeling that this would be simple if I weren't exhausted on a Friday afternoon.
Try this:
select distinct * from test
where b is not null or a in (
select a from test
group by a
having max(b) is null)
Note that if you can only have one non-null value of b per a, this can be simplified to:
select a, max(b) from test
group by a
Try this:
create table test(
x char(1),
y char(1)
);
insert into test(x,y) values
('a',null),
('a','b'),
('b', null),
('b', null)
Query:
with has_all_y_null as
(
select x
from test
group by x
having sum(case when y is null then 1 end) = count(x)
)
select distinct x,y from test
where
(
-- if a column has a value in some records but not in others,
x not in (select x from has_all_y_null)
-- I want to throw out the row with NULL
and y is not null
)
or
-- However, if a column has a NULL value for all records,
-- I want to preserve that NULL
(x in (select x from has_all_y_null))
order by x,y
Output:
X Y
A B
B NULL
Live test: http://sqlfiddle.com/#!3/259d6/16
EDIT
Seeing Mosty's answer, I simplified my code:
with has_all_y_null as
(
select x
from test
group by x
-- having sum(case when y is null then 1 end) = count(x)
-- should have thought of this instead of the code above. Mosty's logic is good:
having max(y) is null
)
select distinct x,y from test
where
y is not null
or
(x in (select x from has_all_y_null))
order by x,y
I just prefer the CTE approach; it has more self-documenting logic :-)
You can also add documentation to the non-CTE approach, if you make a point of doing so:
select distinct * from test
where b is not null or a in
( -- has all b null
select a from test
group by a
having max(b) is null)
;WITH CTE
AS
(
    SELECT DISTINCT * FROM #test
)
SELECT a, b
FROM CTE
ORDER BY CASE WHEN b IS NULL THEN 1 ELSE 0 END, b;  -- sort NULLs last without mixing data types
SELECT DISTINCT t.a, t.b
FROM #test t
WHERE b IS NOT NULL
OR NOT EXISTS (SELECT 1 FROM #test u WHERE t.a = u.a AND u.b IS NOT NULL)
ORDER BY t.a, t.b
This is a really odd requirement; I wonder why you need it.
SELECT DISTINCT a, b
FROM test t
WHERE NOT ( b IS NULL
AND EXISTS
( SELECT *
FROM test ta
WHERE ta.a = t.a
AND ta.b IS NOT NULL
)
)
AND NOT ( a IS NULL
AND EXISTS
( SELECT *
FROM test tb
WHERE tb.b = t.b
AND tb.a IS NOT NULL
)
)
Well, I don't particularly like this solution, but it seems the most appropriate to me. Note that your description of what you want sounds exactly like what you get with a LEFT JOIN, so:
SELECT DISTINCT a.a, b.b
FROM #test a
LEFT JOIN #test b ON a.a = b.a
AND b.b IS NOT NULL
SELECT a,b FROM #test t where b is not null
union
SELECT a,b FROM #test t where b is null
and not exists(select 1 from #test where a=t.a and b is not null)
Result:
a b
---- ----
A B
B NULL
I'll just put here a mix of the two answers that solved my issue, because my view was more complex:
--IdCompe int,
--Nome varchar(30),
--IdVanBanco int,
--IdVan int
--FlagAtivo bit,
--FlagPrincipal bit
select IdCompe
, Nome
, max(IdVanBanco)
, max(IdVan)
, CAST(MAX(CAST(FlagAtivo as INT)) AS BIT) FlagAtivo
, CAST(MAX(CAST(FlagPrincipal as INT)) AS BIT) FlagPrincipal
from VwVanBanco
where IdVan = {IdVan} or IdVan is null
group by IdCompe, Nome order by IdCompe asc
Thanks to mosty mostacho and
kenwarner
Say I have a table which I query like so:
select date, value from mytable order by date
and this gives me results:
date value
02/26/2009 14:03:39 1
02/26/2009 14:10:52 2 (a)
02/26/2009 14:27:49 2 (b)
02/26/2009 14:34:33 3
02/26/2009 14:48:29 2 (c)
02/26/2009 14:55:17 3
02/26/2009 14:59:28 4
I'm interested in the rows of this result set where the value is the same as in the previous or next row, like row (b), which has value=2, the same as row (a). I don't care about rows like row (c), which has value=2 but does not come directly after a row with value=2. How can I query the table to give me only rows like (a) and (b)? This is on Oracle, if it matters.
Use the lead and lag analytic functions.
create table t3 (d number, v number);
insert into t3(d, v) values(1, 1);
insert into t3(d, v) values(2, 2);
insert into t3(d, v) values(3, 2);
insert into t3(d, v) values(4, 3);
insert into t3(d, v) values(5, 2);
insert into t3(d, v) values(6, 3);
insert into t3(d, v) values(7, 4);
select d, v, case when v in (prev, next) then '*' end match, prev, next from (
select
d,
v,
lag(v, 1) over (order by d) prev,
lead(v, 1) over (order by d) next
from
t3
)
order by
d
;
Matching neighbours are marked with * in the match column.
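If you only want rows like (a) and (b) returned, rather than just marked, a small variation of the same idea filters on the comparison instead (a sketch against the same t3 table):

select d, v
from (
  select d, v,
         lag(v)  over (order by d) prev,
         lead(v) over (order by d) next
  from t3
)
where v = prev or v = next
order by d;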
This is a simplified version of @Bob Jarvis' answer, the main difference being the use of just one subquery instead of four:
with f as (select row_number() over(order by d) rn, d, v from t3)
select
a.d, a.v,
case when a.v in (prev.v, next.v) then '*' end match
from
f a
left join
f prev
on a.rn = prev.rn + 1
left join
f next
on a.rn = next.rn - 1
order by a.d
;
As @Janek Bogucki has pointed out, LEAD and LAG are probably the easiest way to accomplish this - but just for fun let's try to do it by using only basic join operations:
SELECT mydate, VALUE FROM
(SELECT a.mydate, a.value,
CASE WHEN a.value = b.value THEN '*' ELSE NULL END AS flag1,
CASE WHEN a.value = c.value THEN '*' ELSE NULL END AS flag2
FROM
(SELECT ROWNUM AS outer_rownum, mydate, VALUE
FROM mytable
ORDER BY mydate) a
LEFT OUTER JOIN
(select ROWNUM-1 AS inner_rownum, mydate, VALUE
from mytable
order by myDATE) b
ON b.inner_rownum = a.outer_rownum
LEFT OUTER JOIN
(select ROWNUM+1 AS inner_rownum, mydate, VALUE
from mytable
order by myDATE) c
ON c.inner_rownum = a.outer_rownum
ORDER BY a.mydate)
WHERE flag1 = '*' OR
flag2 = '*';
Share and enjoy.