Start a Case Statement without an outside 'Select' - CTEs - SQL

Running this query (below) returns a 'Too Many Values' error:
select
case
when to_char(sysdate, 'yyyymmdd') = to_char(sysdate, 'yyyymm') || '01'
then (select FirstReportGroups.*, FirstReportDetails.*
from FirstReportGroups, FirstReportDetails)
when to_char(sysdate, 'yyyymmdd') = to_char(sysdate, 'yyyymm') || '16'
then (select SecondReportGroups.*, SecondReportDetails.*
from SecondReportGroups, SecondReportDetails)
end as LetsSee
from cmtmpentered t1 join cmtmpconf t2
on t1.casenumber = t2.casenumber
and t1.enc = t2.enc
;
I'm using CTEs (not included here because they would make this very long) and the query makes logical sense to me, but googling this 'Too Many Values' error gives me no substantial help. Running the CTEs individually works, so that is not the problem.
I think this would all be solved if I could just get rid of the outer 'Select' statement and keep only the selects inside the Case. In case I'm explaining this poorly, here is an example of what I'm looking for:
case
when to_char(sysdate, 'yyyymmdd') = to_char(sysdate, 'yyyymm') || '01'
then (select FirstReportGroups.*, FirstReportDetails.*
from FirstReportGroups, FirstReportDetails)
when to_char(sysdate, 'yyyymmdd') = to_char(sysdate, 'yyyymm') || '16'
then (select SecondReportGroups.*, SecondReportDetails.*
from SecondReportGroups, SecondReportDetails)
end as LetsSee
;
Is this doable in any capacity? This syntax obviously doesn't work, otherwise I wouldn't be asking the question - but is there a related way I can do this?

select FirstReportGroups.*, FirstReportDetails.*
from (select 1 a from dual) FirstReportGroups
cross join (select 2 b from dual) FirstReportDetails
where extract(day from sysdate) = 1
---------
union all
---------
select SecondReportGroups.*, SecondReportDetails.*
from (select 3 a from dual) SecondReportGroups
cross join (select 4 b from dual) SecondReportDetails
where extract(day from sysdate) = 16;
Replaced common table expressions with inline views. CTEs should only be used if they are referenced more than once. They may look a little nicer for small examples and for programmers who are only used to procedural code. Serious SQL requires multiple nested levels of inline views. And debugging is much easier without CTEs: they make it difficult to highlight and run sub-blocks of code.
Replaced the case expression with a predicate to filter by date. A CASE expression can only return a single value. You could probably do something fancy with types, but that would be horribly complicated. My code still assumes the two sets return the same types of values. If that's not true, you'll need to do something different at the application level.
Replaced to_char with extract. Date handling can be confusing in Oracle. The trick is to keep everything in its native type. You should almost never need to use to_char or to_date for anything other than formatting for display. For any other operation there's almost certainly a function that handles it without converting types.
Replaced the comma join with an ANSI-syntax cross join. The ANSI-style syntax has several advantages; one major advantage is that it makes it clear when cross joins are intentional.
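As a tiny illustration (using dual only, nothing to do with your real tables), both of these return the same single row, but the second form makes the cross join impossible to miss:
select a.x, b.y
from (select 1 x from dual) a, (select 2 y from dual) b;

select a.x, b.y
from (select 1 x from dual) a
cross join (select 2 y from dual) b;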

You aren't specifying a join condition for LetsSee. You are effectively cross joining LetsSee with the result of your from clause:
from cmtmpentered t1
join cmtmpconf t2 on t1.casenumber = t2.casenumber and t1.enc = t2.enc
I suggest you join LetsSee with t1 or t2.
Also, you are not specifying a join condition for the pairs:
FirstReportGroups and FirstReportDetails
SecondReportGroups and SecondReportDetails
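For example, assuming the group and detail sets expose a casenumber column (a guess, since the CTE definitions aren't shown), the joins might look something like this:
-- hypothetical join conditions; substitute the columns your CTEs actually expose
select g.*, d.*
from cmtmpentered t1
join cmtmpconf t2 on t1.casenumber = t2.casenumber and t1.enc = t2.enc
join FirstReportGroups g on g.casenumber = t1.casenumber
join FirstReportDetails d on d.casenumber = t1.casenumber;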

Related

SQL SELECT for Cursor using an NVL

I am trying to modify the SELECT statement used in an employee interface file because it returns an error when it passes an employee whose END_DATE is...well, 'dated'.
I am pretty new to this. In order to skip Oracle users that are end dated, I feel that using the NVL expression would be best here? However, after researching it, I am just not grasping the concept too well, and I'm unsure if it is what I should be using.
Here is the statement:
FROM oaa.xxap_employee_interim ei,
hr.per_all_people_f ppf,
hr.per_all_assignments_f paf,
hr.per_addresses pa,
apps.fnd_user u
WHERE new_update = 'U'
AND ppf.person_type_id = 6
AND ei.employee_number = ppf.employee_number
AND ppf.effective_end_date >= SYSDATE
AND ppf.person_id = paf.person_id
AND paf.effective_end_date >= SYSDATE
AND ppf.person_id = pa.person_id
AND u.employee_id = ppf.person_id
--AND NVL(u.end_date, SYSDATE + 5);
A co-worker also suggested that using 'IS NULL' would not be correct in this circumstance. Is that true? If so, why?
Thanks in advance.
I'm going to guess that you want to check whether u.end_date is not null and is a date in the past. In that case I would use the following:
AND COALESCE(u.end_date, SYSDATE) < SYSDATE
The Oracle specific way to do that is
AND NVL(u.end_date, SYSDATE) < SYSDATE
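If it helps to see the NULL handling in isolation, here is a standalone check against dual (not your tables): a NULL end date is swapped for SYSDATE and therefore fails the < SYSDATE test, while a genuinely past date passes it.
select case when nvl(cast(null as date), sysdate) < sysdate then 'flagged' else 'not flagged' end as null_end_date,
       case when nvl(sysdate - 30, sysdate) < sysdate then 'flagged' else 'not flagged' end as past_end_date
from dual;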

Is there a way to define a named constant/parameter in a single-statement SQL query?

Tried to do something like:
WITH
dates as (SELECT '2015-01-01' as start, '2016-01-01' as end)
SELECT * FROM my_table WHERE start_date >= dates.start AND end_date <= dates.end
But got the error message "Relation 'dates' does not exist" (in Vertica). Is there any proper way to define a constant/parameter? In the real example the query contains multiple selects over a defined time range, hence I'd like to maintain the constants/parameters in a single place so they can be reused in the nested subqueries.
If possible, I'd like to refrain from DECLARE/SET-like statements, where a separate statement is required.
I would still do what @GordonLinoff suggested (I doubt it has much impact on the query plan), but in case you really don't want to, or you like the feature I will show below...
You mentioned you were going to use vsql. You can do variables there.
\set start '2015-01-01'
\set end '2016-01-01'
SELECT *
FROM my_table
WHERE start_date >= :start
AND end_date <= :end;
Which is I guess kind of nice because you can also do things like:
\set start ''`date "+%Y-%m-%d %H:%M:%S"`''
Or echo the result of any command into a variable that you can use as a variable in your sql statement. Note that this variable is quite literal and you need to include any punctuation, quoting, etc. It isn't just a value, it's more like a template.
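For example (assuming vsql follows psql's quoting rules here, which is my assumption), you can bake the surrounding quotes into the variable itself by doubling them:
\set start '''2015-01-01'''
\set end '''2016-01-01'''
-- the stored values now include the single quotes, so :start expands to '2015-01-01'
select * from my_table where start_date >= :start and end_date <= :end;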
You need to have dates in the FROM clause if you want it in the query. You can do this as:
WITH dates as (SELECT '2015-01-01' as start, '2016-01-01' as end)
SELECT t.*
FROM my_table t JOIN
dates d
ON t.start_date >= d.start AND t.end_date <= d.end;
Note: You can also do this with a CROSS JOIN. I often write queries as:
WITH params as (
SELECT '2015-01-01' as start, '2016-01-01' as end
)
SELECT t.*
FROM params CROSS JOIN
my_table t
WHERE t.start_date >= params.start AND t.end_date <= params.end;

How to reduce query execution time for table with huge data

I am running this query in production (Oracle) and it is taking more than 3 minutes. Is there any way to reduce the execution time? Both the svc_order and event tables contain almost 1 million records.
select 0 test_section, count(1) count, 'DD' test_section_value
from svc_order so, event e
where so.svc_order_id = e.svc_order_id
and so.entered_date >= to_date('01/01/2012', 'MM/DD/YYYY')
and e.event_type = 230 and e.event_level = 'O'
and e.current_sched_date between
to_date( '09/01/2010 00:00:00', 'MM/DD/YYYY HH24:MI:SS')
and to_date('09/29/2013 23:59:59', 'MM/DD/YYYY HH24:MI:SS')
and (((so.sots_ta = 'N') and (so.action_type = 0))
or ((so.sots_ta is null) and (so.action_type = 0))
or ((so.sots_ta = 'N') and (so.action_type is null)))
and so.company_code = 'LL'
Given what you said about not being able to create indexes, I assume the query is doing a full table scan. Please try a parallel hint:
select /*+ full(so) parallel(so, 4) */ 0 test_section, count(1) count, 'DD' test_section_value
from svc_order so, event e
where so.svc_order_id = e.svc_order_id
and so.entered_date >= to_date('01/01/2012', 'MM/DD/YYYY')
and e.event_type = 230 and e.event_level = 'O'
and e.current_sched_date between
to_date( '09/01/2010 00:00:00', 'MM/DD/YYYY HH24:MI:SS')
and to_date('09/29/2013 23:59:59', 'MM/DD/YYYY HH24:MI:SS')
and (((so.sots_ta = 'N') and (so.action_type = 0))
or ((so.sots_ta is null) and (so.action_type = 0))
or ((so.sots_ta = 'N') and (so.action_type is null)))
and so.company_code = 'LL'
You could at least avoid the triple AND/OR list by using COALESCE() (or its Oracle-specific equivalent NVL()). Note: this does not catch the case where both sots_ta and action_type are NULL.
SELECT 0 test_section, count(1) count, 'DD' test_section_value
FROM svc_order so
JOIN event e ON so.svc_order_id = e.svc_order_id
WHERE e.event_type = 230 and e.event_level = 'O'
AND so.entered_date >= to_date('01/01/2012', 'MM/DD/YYYY')
AND e.current_sched_date >= to_date('09/01/2010 00:00:00', 'MM/DD/YYYY HH24:MI:SS')
AND e.current_sched_date < to_date('10/01/2013 00:00:00', 'MM/DD/YYYY HH24:MI:SS')
AND COALESCE(so.sots_ta, 'N') = 'N'
AND COALESCE(so.action_type, 0) = 0
AND so.company_code = 'LL'
I replaced the BETWEEN with a plain t >= low AND t < high test because I don't like BETWEEN's semantics. I replaced the comma-separated FROM list with a JOIN because I like joins better.
We cannot add more indexes, but the tables must have at least a meaningful primary key, right? So is there one? That should give us at least one index, clustered or not.
Look at it and try to make use of it.
If a table is a heap and we want to deal with it as it is, then we should reduce the number of rows in each table individually by applying the respective WHERE filters, and then combine those result sets.
In your query the only meaningful result column that depends on the base tables is count(1); the other two columns are constants.
Also, a JOIN/Cartesian product will lead the DB engine to look for indexes, so instead use INTERSECT, which I feel should do better in your case.
Some other changes you can make:
Avoid using TO_DATE or any other function on the right-hand side of a WHERE condition. Prepare the value in a local variable and use that variable in the query.
Also check whether there is any real performance gain from using >= rather than BETWEEN.
I have modified the query and also combined one redundant WHERE condition.
Remember that even if these changes work for you right now, that doesn't mean they will always work. As soon as your tables start accumulating more data that qualifies for those WHERE conditions, this will again become a slow query. So this might work in the short term, but for the longer term you have to think about alternate options:
1) for example, indexed views (materialized views in Oracle) on top of these tables
2) create copies of the tables with different names and sync data
between the new and original tables using insert/update/delete triggers.
SELECT COUNT(1) AS "COUNT", 'DD' test_section_value, 0 test_section
FROM
(
SELECT so.svc_order_id
FROM svc_order so
WHERE so.entered_date >= to_date('01/01/2012', 'MM/DD/YYYY')
AND so.company_code = 'LL'
AND (
(( so.sots_ta = 'N' ) AND ( so.action_type IS NULL OR so.action_type = 0))
OR
(( so.sots_ta IS NULL ) AND ( so.action_type = 0 ))
--or ((so.sots_ta = 'N') and (so.action_type is null))
)
INTERSECT
SELECT e.svc_order_id
FROM event e
WHERE e.event_type = 230
AND e.event_level = 'O'
AND e.current_sched_date BETWEEN
to_date('09/01/2010 00:00:00','MM/DD/YYYY HH24:MI:SS')
AND to_date('09/29/2013 23:59:59','MM/DD/YYYY HH24:MI:SS')
) qry1
First, ensure statistics are up-to-date.
begin
dbms_stats.gather_table_stats('[schema]', 'svc_order');
dbms_stats.gather_table_stats('[schema]', 'event');
end;
/
This query is a very simple join between two small tables, but with complex predicates.
You almost certainly do not want to significantly re-write all of your queries in search of some magic syntax that will make everything run fast. Yes, there are some rare cases where BETWEEN does not work well, or moving the predicates into an inline view helps, or replacing the join with an INTERSECT might help. But that sounds like cargo-cult programming to me. Ask yourself, why would those changes make any difference? If those types of changes always improved performance, why wouldn't Oracle just translate the queries internally?
Normally, you should try to provide better information to the optimizer so it can make better decisions. Usually this is as simple as gathering statistics with the default settings. Some predicates are just too complex, and for those you should try dynamic sampling, such as /*+ dynamic_sampling(6) */, or maybe add some histograms, or perhaps add an expression statistic like this:
SELECT
DBMS_STATS.CREATE_EXTENDED_STATS(null, 'SVC_ORDER',
'(((sots_ta = ''N'') and (action_type = 0))
or ((sots_ta is null) and (action_type = 0))
or ((sots_ta = ''N'') and (action_type is null)))'
)
FROM DUAL;
--Don't forget to re-gather statistics after this.
The optimizer is probably under-estimating the number of rows and using a nested loop instead of a hash join. After providing it with more information, ideally it will start using a hash join. But at some point, after you've tried the above methods and possibly many other features, you can just tell it what kind of join to use, which would be @Florin Ghita's suggestion, /*+ use_hash(so e) */.
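For reference, this is a sketch of where that hint would go; it is just the original query with the hint added, nothing else changed:
select /*+ use_hash(so e) */ 0 test_section, count(1) count, 'DD' test_section_value
from svc_order so, event e
where so.svc_order_id = e.svc_order_id
and so.entered_date >= to_date('01/01/2012', 'MM/DD/YYYY')
and e.event_type = 230 and e.event_level = 'O'
and e.current_sched_date between
to_date('09/01/2010 00:00:00', 'MM/DD/YYYY HH24:MI:SS')
and to_date('09/29/2013 23:59:59', 'MM/DD/YYYY HH24:MI:SS')
and (((so.sots_ta = 'N') and (so.action_type = 0))
or ((so.sots_ta is null) and (so.action_type = 0))
or ((so.sots_ta = 'N') and (so.action_type is null)))
and so.company_code = 'LL';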

Select throws an ORA-01858 exception

With the following query, the Oracle exception is thrown.
However, I can't see why. Can anyone shed some light?
select visit_id, to_date(response, 'DD/MM/YYYY') as convertedDate from
(
select *
from dat_results_ext
where item_name = 'CALLBACKDATE'
)
where to_date(response, 'DD/MM/YYYY') > sysdate
I understand the exception to mean that it is trying to convert the 'response' field but is encountering a non-numeric character. The problem is that the row it should bring back has everything in the right format.
The 'response' field is a varchar field, but all the rows coming back through the item_name = 'CALLBACKDATE' clause are of the correct format.
Any ideas?
The optimizer can rewrite your query before trying to find the best execution plan. In your case, since you have no hints that would prevent it from doing this, it will probably merge your subquery into the outer query and rewrite it as:
SELECT *
FROM dat_results_ext
WHERE item_name = 'CALLBACKDATE'
AND to_date(response, 'DD/MM/YYYY') > sysdate
You don't have control over the order of evaluation of the statements in the WHERE clause, so Oracle probably evaluated the to_date function first on a row that is not convertible to a date, hence the error.
I see two options to force Oracle to evaluate the statements in the order you want:
Use rownum. Rownum will materialize the subquery, preventing Oracle from merging it with the outer query:
SELECT visit_id, to_date(response, 'DD/MM/YYYY') AS convertedDate
FROM (SELECT r.*,
rownum /* will materialize the subquery */
FROM dat_results_ext r
WHERE item_name = 'CALLBACKDATE')
WHERE to_date(response, 'DD/MM/YYYY') > sysdate
Use the NO_MERGE hint:
SELECT visit_id, to_date(response, 'DD/MM/YYYY') AS convertedDate
FROM (SELECT /*+ NO_MERGE */ *
FROM dat_results_ext
WHERE item_name = 'CALLBACKDATE')
WHERE to_date(response, 'DD/MM/YYYY') > sysdate
The TO_DATE clause has to be evaluated before the truth of the WHERE clause can be determined. If you have values of response that can't be evaluated in the TO_DATE function, you'll see the error.
To be very precise, this is caused because some values of response do not match the format mask DD/MM/YYYY. For example, if your session is set to a default date format of DD-MON-YY, execute the following and you will receive the error message:
select to_date('17/SEP/2012','DD/MM/YYYY') from dual;
ERROR:
ORA-01858: a non-numeric character was found where a numeric was expected
This is because you passed characters in the month field where Oracle expects a number.
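As a side note, if you happen to be on Oracle 12.2 or later (an assumption, the question doesn't say), TO_DATE accepts a DEFAULT ... ON CONVERSION ERROR clause that turns unparseable values into NULL instead of raising ORA-01858:
-- requires Oracle 12.2+; rows whose response is not a valid DD/MM/YYYY date yield NULL
select visit_id,
       to_date(response default null on conversion error, 'DD/MM/YYYY') as convertedDate
from dat_results_ext
where item_name = 'CALLBACKDATE'
and to_date(response default null on conversion error, 'DD/MM/YYYY') > sysdate;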

Using a variable in an SQL query (Oracle DBMS)?

I am typing SQL queries to get some data from a software tool which is based on an Oracle database. I am using the typical SELECT-statement.
Now, in my SQL query I am using the date "02.05.2012" in several places. Is there a way to define a variable date_string at the beginning and then use this variable at all the relevant places?
This would simplify things a lot. Thanks for all hints and tips!
You might try to rewrite your query to return the literal from an inline view ...
select
my_date,
...
from (
select to_date('02.05.2012','DD.MM.YYYY') my_date from dual),
table2
where
some_column <= my_date
What you are looking for is a bind variable.
select to_date(:date, 'dd.mm.yyyy') date1
, to_date(:date, 'dd.mm.yyyy') + 1 date2
from dual
At runtime you need to pass a value to the bind variable. How you bind the variable depends on your programming language, but there is plenty of documentation for that.
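For instance, in SQL*Plus or SQL Developer the binding could look like this (a sketch; the variable name date_string is just borrowed from the question):
variable date_string varchar2(10)
exec :date_string := '02.05.2012'

select to_date(:date_string, 'dd.mm.yyyy') date1
, to_date(:date_string, 'dd.mm.yyyy') + 1 date2
from dual;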
DEFINE only works if you use sql*plus, and that's usually not the case inside a "software tool" :)
EDIT:
I'm beginning to understand now. It's just a textarea where you can enter a query and it will execute it and return the result. In that case you either write some complicated pl/sql code, or enter all the dates manually, or use a cross join with a select from dual:
with d as (select to_date('02.05.2012', 'dd.mm.yyyy') my_date from dual)
select *
from some_table t
cross join d -- no ON required
If you want to select using the current date you can use sysdate.
Using SQL*Plus you can define your own variables:
SQL> define mydate ="01-05-2012"
SQL> select to_date('&mydate','DD-MM-YYYY') from dual;
01-MAY-12
Try the following:
SELECT *
FROM table1
WHERE to_char(date1,'DD/MM/YYYY') = '&&date'
AND to_char(date2,'DD/MM/YYYY') = '&&date'
AND to_char(date3,'DD/MM/YYYY') = '&&date'
You will get a prompt to enter the value for &&date. If you want to enter different values for each date, you should type &date instead of &&date.
DEFINE is useful for your requirement.
DEFINE NAME="example"
Access it with &NAME.
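A quick sketch of how that could look for the date in the question (table and column names are placeholders):
define date_string = "02.05.2012"
select *
from some_table
where some_date_col >= to_date('&date_string', 'dd.mm.yyyy');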