Optional parameters in cursor where clause - sql

I have the following sample query, which takes its values from procedure parameters. Each parameter can either be passed in or left to default to null.
SELECT * FROM table
WHERE( table_term_code = '201931'
OR (table_term_code = '201931' and table_DETL_CODE ='CA02')
OR ( table_term_code = '201931' and table_ACTIVITY_DATE = sysdate)
OR ( table_term_code = '201931' and table_SEQNO = NULL));
I.e. the user can input a term code and no other parameter, or a term code plus table_DETL_CODE and nothing else.
The same goes for the other two OR conditions.
If a term code is passed and table_DETL_CODE is null, the query should return all rows for that term code, whereas this query returns no rows.
Is there a way to achieve this without CASE or IF conditions in PL/SQL?

If I understood you correctly, this might be what you're looking for:
select *
from your_table
where (table_term_code = :par_term_code or :par_term_code is null)
and (table_detl_code = :par_detl_code or :par_detl_code is null)
and (table_activity_date = :par_activity_date or :par_activity_date is null)
and (table_seqno = :par_seqno or :par_seqno is null)
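The same optional-filter idea is often written with NVL (or COALESCE) instead, substituting the column for the parameter when the parameter is null. A sketch of that variant:
select *
from your_table
where table_term_code = nvl(:par_term_code, table_term_code)
and table_detl_code = nvl(:par_detl_code, table_detl_code)
and table_activity_date = nvl(:par_activity_date, table_activity_date)
and table_seqno = nvl(:par_seqno, table_seqno)
One difference to be aware of: the NVL form drops rows whose column value is itself null even when the parameter is null, while the OR ... IS NULL form keeps them.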

The description seems to say that you require the user to enter table_term_code and then either none or exactly one of the other three. If so, then perhaps:
select *
from your_table
where table_term_code = :par_term_code
and ( (table_detl_code = :par_detl_code and :par_activity_date is null and :par_seqno is null)
or (table_activity_date = :par_activity_date and :par_detl_code is null and :par_seqno is null)
or (table_seqno = :par_seqno and :par_detl_code is null and :par_activity_date is null)
or (:par_seqno is null and :par_detl_code is null and :par_activity_date is null)
);
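For completeness, a minimal sketch of how either predicate could sit inside a PL/SQL procedure with defaulted parameters and a cursor (the procedure and parameter names are made up, not taken from the question):
create or replace procedure get_term_rows (
    p_term_code     in your_table.table_term_code%type,
    p_detl_code     in your_table.table_detl_code%type default null,
    p_activity_date in your_table.table_activity_date%type default null,
    p_seqno         in your_table.table_seqno%type default null
) is
    cursor c_rows is
        select *
        from your_table
        where table_term_code = p_term_code
        and (table_detl_code = p_detl_code or p_detl_code is null)
        and (table_activity_date = p_activity_date or p_activity_date is null)
        and (table_seqno = p_seqno or p_seqno is null);
begin
    for r in c_rows loop
        null; -- process each row here
    end loop;
end;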

Related

Subquery in CASE returns an error on ClickHouse

So I'm trying to make a view; this is my code:
drop table if exists computed_datum_hours_base;
create view computed_datum_hours_base
as select
toStartOfHour(datetime_value) as datetime_desc,
computed_id,
computed_kind,
computed_type,
case
when computed_type = 'intensive' then avg(value)
when computed_type = 'extensive.some' then sum(value)
when computed_type = 'extensive.differential' then
(
select value as value_f from ref_computed_datum
where ref_computed_id = computed_id
and ref_computed_kind = computed_kind
and ref_computed_type = computed_type
and ref_datetime_value = toStartOfHour(addHours(datetime_value, 1))
) - (
select value as value_f from ref_computed_datum
where ref_computed_id = computed_id
and ref_computed_kind = computed_kind
and ref_computed_type = computed_type
and ref_datetime_value = toStartOfHour(datetime_value)
)
end as value,
count(uuid) as nb_value
from computed_datum
join ref_computed_datum
on computed_id = ref_computed_id
and computed_kind = ref_computed_kind
and computed_type = ref_computed_type
where uuid = ref_uuid
group by
computed_id,
computed_kind,
computed_type,
toStartOfHour(datetime_value)
;
My issue is with the CASE branch for extensive.differential.
ClickHouse says it cannot find the column computed_id, as if the subquery were scoped separately and had no access to the columns of the outer query.
So is this another ClickHouse bug?
Or is there a real scope restriction, so that I can't do it like this?
(If so, how can I do it?)
Edit: full error
Code: 47, e.displayText() = DB::Exception: Missing columns: 'datetime_value' 'computed_kind' 'computed_type' 'computed_id' 'value' while processing query: 'SELECT value AS value_f FROM api_client.ref_computed_datum WHERE (ref_computed_id = computed_id) AND (ref_computed_kind = computed_kind) AND (ref_computed_type = computed_type) AND (ref_datetime_value = toStartOfHour(addHours(datetime_value, 1)))', required columns: 'value' 'computed_id' 'ref_computed_id' 'ref_computed_kind' 'computed_type' 'ref_computed_type' 'computed_kind' 'ref_datetime_value' 'datetime_value', source columns: 'ref_flags' 'ref_computed_kind' 'ref_computed_id' 'ref_datetime_value' 'ref_computed_type' 'ref_EventDateTime' 'ref_insert' 'ref_value' 'ref_uuid' (version 20.4.2.9 (official build))
computed_datum follows this structure:
EventDateTime DateTime default now(),
insert String,
uuid String default generateUUIDv4(),
datetime_value DateTime,
computed_id Int32,
computed_kind String,
computed_type String,
value Float64,
flags String
I made a ref view that simply prefixes every column with ref_, as a workaround for the alias bug.
At this time ClickHouse does not support correlated subqueries;
cf. https://github.com/ClickHouse/ClickHouse/issues/6697
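Since a correlated subquery can't be used, one possible direction is to join the ref_ view twice instead, once for the current hour and once for the next hour. This is an untested sketch against the structure shown above; it assumes ref_computed_datum holds one value per id/kind/type/hour, and the alias names are made up:
select
    cd.hour_cur as datetime_desc,
    cd.computed_id,
    cd.computed_kind,
    cd.computed_type,
    case
        when cd.computed_type = 'intensive' then avg(cd.value)
        when cd.computed_type = 'extensive.some' then sum(cd.value)
        when cd.computed_type = 'extensive.differential' then any(nxt.ref_value) - any(cur.ref_value)
    end as value,
    count(cd.uuid) as nb_value
from (
    select
        *,
        toStartOfHour(datetime_value) as hour_cur,
        toStartOfHour(addHours(datetime_value, 1)) as hour_next
    from computed_datum
) as cd
left join ref_computed_datum as cur
    on cur.ref_computed_id = cd.computed_id
    and cur.ref_computed_kind = cd.computed_kind
    and cur.ref_computed_type = cd.computed_type
    and cur.ref_datetime_value = cd.hour_cur
left join ref_computed_datum as nxt
    on nxt.ref_computed_id = cd.computed_id
    and nxt.ref_computed_kind = cd.computed_kind
    and nxt.ref_computed_type = cd.computed_type
    and nxt.ref_datetime_value = cd.hour_next
group by
    cd.computed_id,
    cd.computed_kind,
    cd.computed_type,
    cd.hour_cur;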

Conditional statements in PIG

I have the below input in a text file and need to generate output in another file based on the logic below.
Here is my input file:
customerid|Dateofsubscription|Customercode|CustomerType|CustomerText
1001|2017-05-23|455|CODE|SPRINT56
1001|2017-05-23|455|DESC|Unlimited Plan
1001|2017-05-23|455|DATE|2017-05-05
1002|2017-05-24|455|CODE|SPRINT56
1002|2017-05-24|455|DESC|Unlimited Plan
1002|2017-05-24|455|DATE|2017-05-06
Logic:
If Customercode = 455
if( CustomerType = "CODE" )
Val= CustomerText
if( CustomerType = "DESC" )
Description = CustomerText
if( CustomerType = "DATE" )
Date = CustomerText
Output:
customerid|Val|Description|Date
1001|SPRINT56|Unlimited Plan|2017-05-05
1002|SPRINT56|Unlimited Plan|2017-05-06
Could you please help me with this.
rawData = LOAD 'data'; -- load with a pipe-delimited schema; see the sketch below
filteredData = FILTER rawData BY (Customercode == 455);
-- Extract Val/Description/Date from CustomerText for the matching CustomerType, null otherwise
ExtractedData = FOREACH filteredData GENERATE
    customerid,
    (CustomerType == 'CODE' ? CustomerText : null) AS Val,
    (CustomerType == 'DESC' ? CustomerText : null) AS Description,
    (CustomerType == 'DATE' ? CustomerText : null) AS Date;
groupedData = GROUP ExtractedData BY customerid;
-- While taking MAX, all nulls are ignored
finalData = FOREACH groupedData GENERATE
    group AS CustomerId,
    MAX(ExtractedData.Val) AS Val,
    MAX(ExtractedData.Description) AS Description,
    MAX(ExtractedData.Date) AS Date;
DUMP finalData;
I have specified the core logic; loading, formatting and storage should be straightforward (a minimal load/store sketch follows).
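A minimal load/store sketch, assuming a pipe-delimited file at a hypothetical path and that the header row has already been stripped:
rawData = LOAD 'input/customers.txt' USING PigStorage('|')
    AS (customerid:chararray, Dateofsubscription:chararray,
        Customercode:int, CustomerType:chararray, CustomerText:chararray);
-- the core logic above then turns rawData into finalData
STORE finalData INTO 'output/customers' USING PigStorage('|');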
Filter the input where Customercode == 455, generate the required two columns, then group by customerid, and then use BagToString.
B = FILTER A BY Customercode == 455;
C = FOREACH B GENERATE $0 as CustomerId,$4 as CustomerText;
D = GROUP C BY CustomerId;
E = FOREACH D GENERATE group AS CustomerId, BagToString(C.CustomerText, '|'); -- Note: this generates 1001,SPRINT56|Unlimited Plan|2017-05-05, so you will have to concat the first field with '|' and then concat the result with the second field, which is already delimited by '|'.
F = FOREACH E GENERATE CONCAT(CONCAT($0,'|'),$1);
DUMP F;

How to cascade a CASE expression inside the WHERE condition in SQL

I am searching for a word match for the variable @TextSearchWord.
I have a @SearchCriteria variable which may have 5 values.
According to that value, I choose which field to search my word in.
So I need to cascade the CASE expression inside the WHERE clause only, not inside the SELECT statement like the other samples here.
SELECT COUNT(WordID) AS WordQty
FROM itinfo_QuranArabicWordsAll
WHERE (SiteID = @SiteID)
AND (QuranID = @QuranID)
AND (SuraID BETWEEN @StrtSuraID AND @EndSuraID)
AND (VerseOrder BETWEEN @StrtVerseSortOrder AND @EndVerseSortOrder)
AND (
-- here is my problem :
CASE
WHEN (@SearchCriteria = 'DictNM') THEN (WordDictNM = @TextSearchWord)
ELSE CASE
WHEN (@SearchCriteria = 'DictNMAlif') THEN (WordDictNMAlif = @TextSearchWord)
...
END
END
)
You don't need a CASE statement for this.
SELECT COUNT(WordID) AS WordQty
FROM itinfo_QuranArabicWordsAll
WHERE (SiteID = @SiteID)
AND (QuranID = @QuranID)
AND (SuraID BETWEEN @StrtSuraID AND @EndSuraID)
AND (VerseOrder BETWEEN @StrtVerseSortOrder AND @EndVerseSortOrder)
AND (
(@SearchCriteria = 'DictNM' AND WordDictNM = @TextSearchWord)
OR (@SearchCriteria = 'DictNMAlif' AND WordDictNMAlif = @TextSearchWord)
...
)
SQL WHERE clauses: Avoid CASE, use Boolean logic.
....
AND(
( @SearchCriteria = 'DictNM' AND WordDictNM = @TextSearchWord )
OR ( @SearchCriteria = 'DictNMAlif' AND WordDictNMAlif = @TextSearchWord )
)
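If you do want to keep a CASE expression, remember that in T-SQL CASE returns a scalar value rather than a boolean predicate, so you compare its result to a constant. A sketch of the same filter written that way:
AND 1 = CASE @SearchCriteria
            WHEN 'DictNM' THEN CASE WHEN WordDictNM = @TextSearchWord THEN 1 END
            WHEN 'DictNMAlif' THEN CASE WHEN WordDictNMAlif = @TextSearchWord THEN 1 END
            -- the remaining criteria follow the same pattern
        END
Any branch that does not match yields NULL, and 1 = NULL is not true, so the row is filtered out.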

Update multiple rows in same query using PostgreSQL

I'm looking to update multiple rows in PostgreSQL in one statement. Is there a way to do something like the following?
UPDATE table
SET
column_a = 1 where column_b = '123',
column_a = 2 where column_b = '345'
You can also use the update ... from syntax with a mapping table. If you want to update more than one column, it's much more generalizable:
update test as t set
column_a = c.column_a
from (values
('123', 1),
('345', 2)
) as c(column_b, column_a)
where c.column_b = t.column_b;
You can add as many columns as you like:
update test as t set
column_a = c.column_a,
column_c = c.column_c
from (values
('123', 1, '---'),
('345', 2, '+++')
) as c(column_b, column_a, column_c)
where c.column_b = t.column_b;
sql fiddle demo
Based on the solution of @Roman, you can set multiple values:
update users as u set -- postgres FTW
email = u2.email,
first_name = u2.first_name,
last_name = u2.last_name
from (values
(1, 'hollis@weimann.biz', 'Hollis', 'Connell'),
(2, 'robert@duncan.info', 'Robert', 'Duncan')
) as u2(id, email, first_name, last_name)
where u2.id = u.id;
Yes, you can:
UPDATE foobar SET column_a = CASE
WHEN column_b = '123' THEN 1
WHEN column_b = '345' THEN 2
END
WHERE column_b IN ('123','345')
And working proof: http://sqlfiddle.com/#!2/97c7ea/1
For updating multiple rows in a single query, you can try this
UPDATE table_name
SET
column_1 = CASE WHEN any_column = value and any_column = value THEN column_1_value end,
column_2 = CASE WHEN any_column = value and any_column = value THEN column_2_value end,
column_3 = CASE WHEN any_column = value and any_column = value THEN column_3_value end,
.
.
.
column_n = CASE WHEN any_column = value and any_column = value THEN column_n_value end
If you don't need an additional condition, then remove the AND part of this query.
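One caution with that pattern: a row matched by the WHERE clause but by none of the CASE branches gets NULL assigned to the column. If you want such rows to keep their current value, add an ELSE that reassigns the column to itself; a sketch with made-up placeholder names:
UPDATE table_name
SET
column_1 = CASE WHEN some_column = value_1 THEN new_value_1 ELSE column_1 END,
column_2 = CASE WHEN some_column = value_2 THEN new_value_2 ELSE column_2 END
WHERE some_column IN (value_1, value_2);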
Let's say you have an array of IDs and an equivalent array of statuses. Here is an example of how to do this with static SQL (a query whose text does not change for different values in the arrays):
drop table if exists results_dummy;
create table results_dummy (id int, status text, created_at timestamp default now(), updated_at timestamp default now());
-- populate table with dummy rows
insert into results_dummy
(id, status)
select unnest(array[1,2,3,4,5]::int[]) as id, unnest(array['a','b','c','d','e']::text[]) as status;
select * from results_dummy;
-- THE update of multiple rows with/by different values
update results_dummy as rd
set status=new.status, updated_at=now()
from (select unnest(array[1,2,5]::int[]) as id,unnest(array['a`','b`','e`']::text[]) as status) as new
where rd.id=new.id;
select * from results_dummy;
-- in code, using IDs as the first bind variable and statuses as the second bind variable:
update results_dummy as rd
set status=new.status, updated_at=now()
from (select unnest(:1::int[]) as id,unnest(:2::text[]) as status) as new
where rd.id=new.id;
I came across a similar scenario and the CASE expression was useful to me.
UPDATE reports SET is_default =
case
when report_id = 123 then true
when report_id != 123 then false
end
WHERE account_id = 321;
reports is the table here, and account_id is the same for the report_ids mentioned above. The above query sets one record (the one matching the condition) to true and all the non-matching ones to false.
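Since is_default is a boolean in that example, the same toggle can also be written without CASE; a small sketch of an equivalent form:
UPDATE reports
SET is_default = (report_id = 123)
WHERE account_id = 321;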
The answer provided by @zero323 works great on Postgres 12. In case someone has multiple values for column_b (referred to in the OP's question):
UPDATE conupdate SET orientation_status = CASE
when id in (66934, 39) then 66
when id in (66938, 49) then 77
END
WHERE id IN (66934, 39, 66938, 49)
In the above query, id is analogous to column_b; orientation_status is analogous to column_a of the question.
In addition to the other answers, comments and documentation: the datatype cast can be placed at the point of use. This allows easier copy-pasting:
update test as t set
column_a = c.column_a::numeric
from (values
('123', 1),
('345', 2)
) as c(column_b, column_a)
where t.column_b = c.column_b::text;
@Roman, thank you for the solution. For anyone using Node, I made this utility method to pump out a query string that updates n columns for n records.
Sadly it only handles n records with the same columns, so the recordRows param is pretty strict.
const payload = {
rows: [
{
id: 1,
ext_id: 3
},
{
id: 2,
ext_id: 3
},
{
id: 3,
ext_id: 3
} ,
{
id: 4,
ext_id: 3
}
]
};
var result = updateMultiple('t', payload);
console.log(result);
/*
qstring returned is:
UPDATE t AS t SET id = c.id, ext_id = c.ext_id FROM (VALUES (1,3),(2,3),(3,3),(4,3)) AS c(id,ext_id) WHERE c.id = t.id
*/
function updateMultiple(table, recordRows){
    var valueSets = new Array();   // one "(v1,v2,...)" group per record
    var cSet = new Set();          // column names already seen
    var columns = new Array();     // column names in first-seen order
    for (const [key, value] of Object.entries(recordRows.rows)) {
        var groupArray = new Array();
        for (const [key2, value2] of Object.entries(recordRows.rows[key])){
            if(!cSet.has(key2)){
                cSet.add(`${key2}`);
                columns.push(key2);
            }
            // values are interpolated unquoted, so this only suits numeric values
            groupArray.push(`${value2}`);
        }
        valueSets.push(`(${groupArray.toString()})`);
    }
    var valueSetsString = valueSets.join();
    // build "col1 = c.col1, col2 = c.col2, ..." for the SET clause
    var setMappings = new String();
    for(var i = 0; i < columns.length; i++){
        var fieldSet = columns[i];
        setMappings += `${fieldSet} = c.${fieldSet}`;
        if(i < columns.length - 1){
            setMappings += ', ';
        }
    }
    var qstring = `UPDATE ${table} AS t SET ${setMappings} FROM (VALUES ${valueSetsString}) AS c(${columns}) WHERE c.id = t.id`;
    return qstring;
}
I don't think the accepted answer is entirely correct. It is order dependent. Here is an example that will not work correctly with the approach from that answer.
create table xxx (
id varchar(64),
is_enabled boolean
);
insert into xxx (id, is_enabled) values ('1',true);
insert into xxx (id, is_enabled) values ('2',true);
insert into xxx (id, is_enabled) values ('3',true);
UPDATE public.xxx AS pns
SET is_enabled = u.is_enabled
FROM (
    VALUES ('3', false, '1', true, '2', false)
) AS u(id, is_enabled)
WHERE u.id = pns.id;
select * from xxx;
So the question still stands: is there a way to do it in an order-independent way?
-- After trying a few things, this seems to be order independent:
UPDATE public.xxx AS pns
SET is_enabled = u.is_enabled
FROM (
SELECT '3' as id, false as is_enabled UNION
SELECT '1' as id, true as is_enabled UNION
SELECT '2' as id, false as is_enabled
) as u
WHERE u.id = pns.id;
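For what it's worth, the VALUES example above misbehaves because all six literals sit in a single row constructor, so at best only the first (id, is_enabled) pair acts as the mapping. Written with one parenthesized tuple per row, the derived table joins on id just like the UNION version and is equally order independent; a sketch:
UPDATE public.xxx AS pns
SET is_enabled = u.is_enabled
FROM (VALUES
    ('3', false),
    ('1', true),
    ('2', false)
) AS u(id, is_enabled)
WHERE u.id = pns.id;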

Multiple 'in' statements in a where clause that need to match against each other

I have a very long query that is essentially an extension of the following:
update property.lease_period
set scca_uplift = '110',
scca_notes_code = '21006'
where (suite_id = 'CCBG08' and lease_id = '205059')
or (suite_id = 'CCBG14' and lease_id = '152424')
or (suite_id = 'CCCF048' and lease_id = '150659')
The where clause for this will have ~40 rows when complete. In order to make this task easier I was hoping to do something similar to the following:
update property.lease_period
set scca_uplift = '110',
scca_notes_code = '21006'
where suite_id in('CCBG08', 'CCBG14', 'CCCF048')
and lease_id in('205059', '152424', '150659')
Unfortunately lease_id isn't a unique field and there can be multiple lease_ids for the same suite_id (so the second query is unusable).
Is there a better way to do the first update statement given that this solution won't work?
You can create a table type and pass the values through it, like this:
CREATE TYPE Suite_Lease AS TABLE
(
suite_id varchar(15) NOT NULL,
lease_id varchar(15) NOT NULL
)
GO
CREATE PROC DoUpdate
@Params Suite_Lease READONLY,
@uplift varchar(15),
@code varchar(15)
AS
update property.lease_period set
scca_uplift = @uplift,
scca_notes_code = @code
from property.lease_period tab
JOIN @Params filt
on tab.suite_id=filt.suite_id AND tab.lease_id=filt.lease_id
This will keep your procedure cache dry and clean, instead of using multiple "big" WHERE clauses.
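Calling the procedure from T-SQL would look roughly like this (a sketch using the values from the question):
DECLARE @p Suite_Lease;
INSERT INTO @p (suite_id, lease_id)
VALUES ('CCBG08', '205059'),
       ('CCBG14', '152424'),
       ('CCCF048', '150659');
EXEC DoUpdate @Params = @p, @uplift = '110', @code = '21006';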
How to pass the table parameter into the stored procedure (C#):
DataTable dt = new DataTable();
dt.Columns.Add(new DataColumn("suite_id", typeof (string)) {AllowDBNull = false, MaxLength = 15});
dt.Columns.Add(new DataColumn("lease_id", typeof (string)) {AllowDBNull = false, MaxLength = 15});
dt.Rows.Add("CCBG08", "205059");
// ... add more rows to match
using (var c = new SqlConnection("ConnectionString"))
{
c.Open();
using(var sc = c.CreateCommand())
{
sc.CommandText = "DoUpdate";
sc.CommandType = CommandType.StoredProcedure;
sc.Parameters.AddWithValue("#uplift", "110");
sc.Parameters.AddWithValue("#code", "21006");
sc.Parameters.Add(new SqlParameter("#Params", SqlDbType.Structured) { TypeName = null, Value = dt });
sc.ExecuteNonQuery();
}
}
Using the trick from this article. This looks a bit ugly, but it does the trick:
update property.lease_period
set scca_uplift = @uplift, scca_notes_code = @code
from property.lease_period tab
JOIN (
select 'CCBG08' as suite_id, '205059' as lease_id union all
select 'CCBG14', '152424' union all
select 'CCCF048', '150659'
) xxx
on tab.suite_id=xxx.suite_id AND tab.lease_id=xxx.lease_id
Try this
update property.lease_period
set scca_uplift = '110',
scca_notes_code = '21006'
where (suite_id, lease_id) in
(select suite_id, lease_id from XXX_table where CONDITION)
The last SELECT should give you those 40 combinations.
Derived from a comment by @dasblinkenlight (for Oracle), another possible way to do this would be the following:
select *
from property.lease_period
where (suite_id + ' ' + lease_id)
in (
('CCBG08 205059'),
('CCBG14 152424'),
('CCCF048 150659')
)
This isn't really recommended, as the concatenation defeats indexing (on Microsoft SQL Server); however, I thought it was interesting all the same.
dasblinkenlight's original comment:
@Michael I wish you were asking about Oracle, it's a lot cleaner
there: you do where (lease_period,lease_id) in
(('CCBG08','205059'),('CCBG14','152424'),('CCCF048','150659')), and it
does the trick. Why SQL Server couldn't do it is beyond me. –
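On SQL Server, the closest equivalent to that Oracle tuple syntax is probably to join or EXISTS against a VALUES table constructor (SQL Server 2008+), which is essentially a terser form of the UNION ALL answer above; a sketch:
update lp
set scca_uplift = '110',
    scca_notes_code = '21006'
from property.lease_period lp
where exists (
    select 1
    from (values ('CCBG08', '205059'),
                 ('CCBG14', '152424'),
                 ('CCCF048', '150659')) as v(suite_id, lease_id)
    where v.suite_id = lp.suite_id
      and v.lease_id = lp.lease_id
);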