I have a report on an APEX page with multiple columns, and the result set (row count) ranges anywhere from 10,000 to 1,000,000 (1M) records.
The query has a conditional WHERE clause that uses a page item's value to restrict the results that are shown... it looks something like this:
SELECT
...
FROM ...
WHERE ...
AND (:P2_STARTDATE IS NULL OR TO_DATE(:P2_STARTDATE, 'DD-MON-YYYY HH24:MI:SS') < creation_date)
My understanding is that whenever I enter a value for the P2_STARTDATE page item the date comparison takes place, but when I leave the page item empty it should be NULL, and the boolean expression should simply return TRUE for the P2_STARTDATE IS NULL evaluation...
With this predicate the query takes as much as 45+ seconds when searching 0.5M rows, which is not acceptable. I wrote the following change to test my theory:
SELECT
...
FROM ...
WHERE deleted_flag = 'N'
AND (:P2_STARTDATE IS NULL) -- comment the rest of the evaluation....
When the page item is empty the bind is NULL, so this evaluates to TRUE immediately and returns the same 0.5M+ result set in about 1 second... now, if I do set a value, the result set is empty, obviously.
So the question is: how can I make Oracle APEX evaluate that expression to TRUE quickly? Thanks for any tips, workarounds, or solutions you may offer.
I am not sure it is safe to assume that the SQL engine is using short-circuit evaluation on your OR.
Try this:
AND (:P2_STARTDATE IS NULL
OR
(:P2_STARTDATE IS NOT NULL
AND TO_DATE(:P2_STARTDATE, 'DD-MON-YYYY HH24:MI:SS') < creation_date)
)
I do not believe the second part of your predicate is sargable - it cannot use an index. Also, your simplified test has nothing to do with searching for the actual result set.
One way is to convert (OUTSIDE of this query) the bind variable to the correct datatype, so that the query can use the index on creation_date (there is an index on creation_date, right?)
SELECT
...
FROM ...
WHERE ...
AND (:P2_STARTDATE IS NULL OR creation_date > :newdatevariable);
In any event, get the TO_DATE function out of there and pass in a constant.
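For illustration, a minimal sketch of the idea of binding an already-converted DATE instead of calling TO_DATE inside the predicate (l_start_date, l_cnt and my_table are placeholder names, not from the original page):
DECLARE
  l_start_date DATE := TO_DATE(:P2_STARTDATE, 'DD-MON-YYYY HH24:MI:SS');  -- converted once, outside the query
  l_cnt        PLS_INTEGER;
BEGIN
  SELECT COUNT(*)
    INTO l_cnt
    FROM my_table                     -- placeholder table name
   WHERE deleted_flag = 'N'
     AND (l_start_date IS NULL OR creation_date > l_start_date);
END;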
I am facing this issue:
Is there a way in PostgreSQL to put aggregated timestamp data into an array (for example using the array_agg function) and then match any element against some condition?
I am doing something similar with LIKE on aggregated strings (using string_agg(column, ';')). But how can I do something similar with timestamps?
So if the result were '{10.10.2021,20.12.2021,1.1.1996}' as timestamp_array, I would like to filter rows that have at least one array element that is after some input.
For example, ... WHERE 31.12.2021 > timestamp_array ... would not match the row above because there is no array element after 31.12.2021.
But if I query ... WHERE 31.12.1996 > timestamp_array ..., the row above would be matched (because at least one element of the array is in the given interval).
First, you would use standard date formats. Then you can use:
where '2021-12-31' > any (timestamp_array)
Here is a db<>fiddle to illustrate the idea.
I would like to filter rows that has at least one array element that is after some input?
You can use the ANY construct as has been advised.
WHERE '1996-12-31'::timestamp < ANY ('{2021-10-10, 2021-12-20, 1996-01-01}'::timestamp[])
Has to be <, not >, obviously.
Your "timestamps" look a lot like dates - timestamp input accepts that, too.
But always use the recommended ISO 8601 format (as demonstrated), else your input depends on the DateStyle setting of the current session.
See:
IN vs ANY operator in PostgreSQL
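A small illustration of how the session setting changes the meaning of an ambiguous literal (assumes nothing beyond a psql session; ISO 8601 input is unaffected):
SET datestyle = 'DMY';
SELECT '07.08.2021'::date;   -- 2021-08-07 (7 August)
SET datestyle = 'MDY';
SELECT '07.08.2021'::date;   -- 2021-07-08 (8 July)
SELECT '2021-08-07'::date;   -- ISO 8601: always 7 August, regardless of DateStyle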
But chances are, there is a much more efficient way. You speak of "aggregated timestamp data". Typically it's much more efficient to check before aggregating, not least because that can use indexes, as opposed to your approach. Typically, EXISTS does the job. Something like:
SELECT ...
FROM tbl t
WHERE EXISTS (SELECT FROM tbl t1 WHERE t1.id = t.id AND t1.timestamp_column > '1996-12-31')
GROUP BY ...
Start a new question with details of your query to get a fitting solution.
I have sensor data in a Postgres table measurements with columns id, timestamp, s0, s1, s2, ...
There is also an index on the columns (id, timestamp). I want to allow dynamic math expressions (in the example below: sin(s3)*0.1000/s5) for calculating derived values.
SELECT
timestamp,
trunc((sin(s3) * 0.1000/s5)::numeric, 3) AS "calculated"
FROM measurements
WHERE id = 42
ORDER BY timestamp DESC
LIMIT 10000;
Obviously, this is prone to a "division by zero" error which will make the query fail. Is there a way to catch this error and return e.g. NULL for the calculated value where the error would occur?
Inspired by
Postgres return null values on function error/failure when casting
Store a formula in a table and use the formula in a function
I already tried defining a Postgres function eval_numeric(sensors int[], formula text) that parses the formula and returns NULL on exception. The third line of the SQL statement above now reads
trunc(eval_numeric(ARRAY[s3,s5],'sin(var1)*0.1/var2'), 3) AS "calculated"
This gives the desired behavior but execution time as reported by EXPLAIN ANALYZE increases by a factor of 20 (~20ms -> ~400ms). Any other ideas?
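For reference, a minimal sketch of what such an eval_numeric function might look like (the real implementation is not shown in the question; the signature and the var1/var2 placeholder convention are taken from the call above, everything else is an assumption):
CREATE OR REPLACE FUNCTION eval_numeric(sensors int[], formula text)
RETURNS numeric
LANGUAGE plpgsql AS
$$
DECLARE
  expr   text := formula;
  result numeric;
BEGIN
  -- substitute var1, var2, ... with the actual sensor values
  -- (reverse order so 'var1' does not also match inside 'var10')
  FOR i IN REVERSE coalesce(array_length(sensors, 1), 0) .. 1 LOOP
    expr := replace(expr, 'var' || i, coalesce(sensors[i]::text, 'NULL'));
  END LOOP;
  EXECUTE 'SELECT ' || expr INTO result;
  RETURN result;
EXCEPTION
  WHEN OTHERS THEN
    RETURN NULL;   -- e.g. division by zero ends up here
END;
$$;
The slowdown is plausible with this structure: a PL/pgSQL block with an EXCEPTION clause sets up a subtransaction on every call, and the dynamic EXECUTE is parsed and planned for every row.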
UPDATE
The dynamic expression to be evaluated comes from a web application user, so the formula above is only an example (it might, for instance, require checking for a negative argument to a square root). I'd rather have a generic error-checking mechanism and would prefer not to put guard logic into the math expression itself. That would be easier for the end user, and I could validate the allowed math e.g. with a math parser, thereby also preventing SQL injection.
Can you change the expression to this?
SELECT timestamp,
trunc((sin(s3) * 0.1000/nullif(s5, 0))::numeric, 3) AS "calculated"
FROM measurements
WHERE id = 42
ORDER BY timestamp DESC
LIMIT 10000;
This is the simplest way to accomplish what you want to do.
You could also use a CASE expression:
SELECT
timestamp,
CASE WHEN s5 != 0
     THEN trunc((sin(s3) * 0.1000/s5)::numeric, 3)
     ELSE NULL
END AS "calculated"
FROM measurements
WHERE id = 42
ORDER BY timestamp DESC
LIMIT 10000;
This has the potential benefit that you may replace the value with anything you want, including NULL.
Another option, if you don't care about rows which would have triggered a divide by zero, would be to just add the check on s5 to the WHERE clause and filter out those rows before the division happens.
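A minimal sketch of that variant (it simply drops the rows where s5 = 0 instead of returning NULL for them):
SELECT timestamp,
       trunc((sin(s3) * 0.1000 / s5)::numeric, 3) AS "calculated"
FROM measurements
WHERE id = 42
  AND s5 <> 0                -- rows that would divide by zero are filtered out here
ORDER BY timestamp DESC
LIMIT 10000;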
I am using SQL Server 2008 R2 and I have two databases: one has 11,000 records and the other just 3,000. When I run this query
SELECT Right(rtrim(tbltransac.No_Faktur),6) as NoUrut,
tbltransac.No_Faktur,
tbltransac.No_FakturP,
tbltransac.Kd_Plg,
Tblcust.Nm_Plg,
GRANDTOTAL AS Total_Faktur,
tbltransac.Nm_Pajak,
tbltransac.Tgl_Faktur,
tbltransac.Tgl_FakturP,
tbltransac.Total_Distribusi
FROM Tblcust
INNER JOIN ViewGrandtotal AS tbltransac ON Tblcust.Kd_Plg = tbltransac.Kd_Plg
WHERE tbltransac.Kd_Trn = 'J'
and year(tbltransac.tgl_faktur)=2015
And ISNULL(tbltransac.No_OPJ,'') <> 'SHOP'
Order by Right(rtrim(tbltransac.No_Faktur),6) Desc
It takes 1 minute 30 seconds on the server that has the 3,000 records (I query it using SQL Server Management Studio), but it only takes 3 seconds on my other server, which has the 11,000 records. What's wrong with my database?
I've already tried backing up the 3,000-record database and restoring it on the 11,000-record server; it's faster there - the query takes about 30 seconds - but that's still slow compared to the 11,000-record database. Both servers have the same specs.
How did this happen? What should I check? I've looked at Event Viewer, Resource Monitor, and the SQL Server logs, and I couldn't find any errors or blocked connections. There is no routing problem either.
Please help... This only started happening a week ago; before that it was fine, and I haven't touched the server in more than a month...
As already mentioned before, you have three issues in your query.
Just as an example, change the query to this one:
SELECT Right(rtrim(tbltransac.No_Faktur),6) as NoUrut,
tbltransac.No_Faktur,
tbltransac.No_FakturP,
tbltransac.Kd_Plg,
Tblcust.Nm_Plg,
GRANDTOTAL AS Total_Faktur,
tbltransac.Nm_Pajak,
tbltransac.Tgl_Faktur,
tbltransac.Tgl_FakturP,
tbltransac.Total_Distribusi
FROM Tblcust
INNER JOIN ViewGrandtotal AS tbltransac ON Tblcust.Kd_Plg = tbltransac.Kd_Plg
WHERE tbltransac.Kd_Trn = 'J'
and tbltransac.tgl_faktur BETWEEN '20150101' AND '20151231'
And tbltransac.No_OPJ <> 'SHOP'
Order by NoUrut Desc --Only if you need a sorted output in the datalayer
Another idea, if your ViewGrandtotal is quite large, would be to pre-filter that table before you join it. Sometimes SQL Server doesn't come up with a good plan and needs a gentle push in the right direction.
Maybe this one:
SELECT Right(rtrim(vgt.No_Faktur),6) as NoUrut,
vgt.No_Faktur,
vgt.No_FakturP,
vgt.Kd_Plg,
tc.Nm_Plg,
vgt.Total_Faktur,
vgt.Nm_Pajak,
vgt.Tgl_Faktur,
vgt.Tgl_FakturP,
vgt.Total_Distribusi
FROM (SELECT Kd_Plg, Nm_Plg FROM Tblcust GROUP BY Kd_Plg, Nm_Plg) as tc -- Pre-filter on just the needed columns, made distinct.
INNER JOIN (
-- Pre filter viewGrandTotal
SELECT DISTINCT vgt.No_Faktur, vgt.No_FakturP, vgt.Kd_Plg, vgt.GRANDTOTAL AS Total_Faktur, vgt.Nm_Pajak,
vgt.Tgl_Faktur, vgt.Tgl_FakturP, vgt.Total_Distribusi
FROM ViewGrandtotal AS vgt
WHERE vgt.Kd_Trn = 'J'
and vgt.tgl_faktur BETWEEN '20150101' AND '20151231'
And vgt.No_OPJ <> 'SHOP'
) as vgt
ON tc.Kd_Plg = vgt.Kd_Plg
Order by NoUrut Desc --Only if you need a sorted output in the datalayer
The pre-filtering could help SQL Server generate a better plan.
Another issue could simply be multi-threading. Maybe your query gets a parallel plan on the 11,000-row database because it reaches the cost threshold for parallelism, while the query against the smaller database just gets a serial plan due to its lower row count. You can take a look at the generated plans by including the actual execution plan in your SSMS query.
Maybe you can compare those plans to get a clue. If this doesn't help, you can post them here to get some feedback from me.
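As a hypothetical quick check, you could also run the same query on both servers with a serial-plan hint and see whether the timings converge; OPTION (MAXDOP 1) disables parallelism for just that statement:
SELECT COUNT(*)
FROM Tblcust
INNER JOIN ViewGrandtotal AS tbltransac ON Tblcust.Kd_Plg = tbltransac.Kd_Plg
WHERE tbltransac.Kd_Trn = 'J'
  AND tbltransac.tgl_faktur BETWEEN '20150101' AND '20151231'
  AND tbltransac.No_OPJ <> 'SHOP'
OPTION (MAXDOP 1);   -- forces a non-parallel plan for this statement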
I hope this helps. It's not easy to give good hints without knowing table structures, table sizes, performance counters, etc. :-)
Best regards,
Ionic
Note: first of all, you should avoid applying functions to columns in the WHERE clause, like this one:
year(tbltransac.tgl_faktur)=2015
Here Aaron Bertrand explains how to work with dates in the WHERE clause:
"In order to make best possible use of indexes, and to avoid capturing too few or too many rows, the best possible way to achieve the above query is ":
SELECT COUNT(*)
FROM dbo.SomeLogTable
WHERE DateColumn >= '20091011'
AND DateColumn < '20091012';
And I can't understand your logic in this piece of code, but it is a bad part of your query too:
ISNULL(tbltransac.No_OPJ,'') <> 'SHOP'
Actually, NULL <> 'SHOP' in this case, so why are you replacing it with ''?
Thanks and good luck
Here are some recommendations:
year(tbltransac.tgl_faktur)=2015 replace this with tbltransac.tgl_faktur >= '20150101' and tbltransac.tgl_faktur < '20160101'
ISNULL(tbltransac.No_OPJ,'') <> 'SHOP' replace this with tbltransac.No_OPJ <> 'SHOP' because NULL <> 'SHOP'.
Order by Right(rtrim(tbltransac.No_Faktur),6) Desc remove this, because ordering should be done in the presentation layer rather than in the data layer.
Read about SARGs (search arguments) and predicates:
What makes a SQL statement sargable?
To write an appropriate SARG, you must ensure that a column that has an index on it appears in the predicate alone, not as a function parameter. SARGs must take the form of column inclusive_operator <constant or variable> or <constant or variable> inclusive_operator column. The column name is alone on one side of the expression, and the constant or calculated value appears on the other side. Inclusive operators include the operators =, >, <, >=, <=, BETWEEN, and LIKE. However, the LIKE operator is inclusive only if you do not use a wildcard % or _ at the beginning of the string you are comparing the column to.
I've got a vague, possibly cargo-cult memory from years of working with SQL Server that when you've got a possibly-null column, it's not safe to write "WHERE" clause predicates like:
... WHERE the_column IS NULL OR the_column < 10 ...
It had something to do with the fact that SQL rules don't stipulate short-circuiting (and in fact that's kind-of a bad idea possibly for query optimization reasons), and thus the "<" comparison (or whatever) could be evaluated even if the column value is null. Now, exactly why that'd be a terrible thing, I don't know, but I recall being sternly warned by some documentation to always code that as a "CASE" clause:
... WHERE 1 = CASE WHEN the_column IS NULL THEN 1 WHEN the_column < 10 THEN 1 ELSE 0 END ...
(the goofy "1 = " part is because SQL Server doesn't/didn't have first-class booleans, or at least I thought it didn't.)
So my questions here are:
Is that really true for SQL Server (or perhaps back-rev SQL Server 2000 or 2005) or am I just nuts?
If so, does the same caveat apply to PostgreSQL? (8.4 if it matters)
What exactly is the issue? Does it have to do with how indexes work or something?
My grounding in SQL is pretty weak.
I don't know SQL Server so I can't speak to that.
Given an expression a L b for some logical operator L, there is no guarantee that a will be evaluated before or after b or even that both a and b will be evaluated:
Expression Evaluation Rules
The order of evaluation of subexpressions is not defined. In particular, the inputs of an operator or function are not necessarily evaluated left-to-right or in any other fixed order.
Furthermore, if the result of an expression can be determined by evaluating only some parts of it, then other subexpressions might not be evaluated at all.
[...]
Note that this is not the same as the left-to-right "short-circuiting" of Boolean operators that is found in some programming languages.
As a consequence, it is unwise to use functions with side effects as part of complex expressions. It is particularly dangerous to rely on side effects or evaluation order in WHERE and HAVING clauses, since those clauses are extensively reprocessed as part of developing an execution plan.
As far as an expression of the form:
the_column IS NULL OR the_column < 10
is concerned, there's nothing to worry about since NULL < n is NULL for all n, even NULL < NULL evaluates to NULL; furthermore, NULL isn't true so
null is null or null < 10
is just a complicated way of saying true or null and that's true regardless of which sub-expression is evaluated first.
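You can check this directly (a small illustration, assuming nothing beyond a psql prompt):
SELECT (NULL IS NULL OR NULL < 10) AS whole_predicate,    -- true
       (NULL < 10) IS NULL         AS comparison_is_null; -- true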
The whole "use a CASE" sounds mostly like cargo-cult SQL to me. However, like most cargo-cultism, there is a kernel a truth buried under the cargo; just below my first excerpt from the PostgreSQL manual, you will find this:
When it is essential to force evaluation order, a CASE construct (see Section 9.16) can be used. For example, this is an untrustworthy way of trying to avoid division by zero in a WHERE clause:
SELECT ... WHERE x > 0 AND y/x > 1.5;
But this is safe:
SELECT ... WHERE CASE WHEN x > 0 THEN y/x > 1.5 ELSE false END;
So, if you need to guard against a condition that will raise an exception or have other side effects, then you should use a CASE to control the order of evaluation as a CASE is evaluated in order:
Each condition is an expression that returns a boolean result. If the condition's result is true, the value of the CASE expression is the result that follows the condition, and the remainder of the CASE expression is not processed. If the condition's result is not true, any subsequent WHEN clauses are examined in the same manner.
So given this:
case when A then Ra
when B then Rb
when C then Rc
...
A is guaranteed to be evaluated before B, B before C, etc. and evaluation stops as soon as one of the conditions evaluates to a true value.
In summary, a CASE short-circuits but neither AND nor OR short-circuits, so you only need to use a CASE when you need to protect against side effects.
Instead of
the_column IS NULL OR the_column < 10
I'd do
isnull(the_column,0) < 10
or for the first example
WHERE 1 = CASE WHEN isnull(the_column,0) < 10 THEN 1 ELSE 0 END ...
I've never heard of such a problem, and this bit of SQL Server 2000 documentation uses WHERE advance < $5000 OR advance IS NULL in an example, so it must not have been a very stern rule. My only concern with OR is that it has lower precedence than AND, so you might accidentally write something like WHERE the_column IS NULL OR the_column < 10 AND the_other_column > 20 when that's not what you mean; but the usual solution is parentheses rather than a big CASE expression.
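A quick sketch of the parenthesized form (the_table is a placeholder for whatever table holds the_column and the_other_column):
SELECT *
FROM the_table   -- placeholder name
WHERE (the_column IS NULL OR the_column < 10)
  AND the_other_column > 20;
-- without the parentheses, AND binds tighter, so the condition would be read as:
--   the_column IS NULL OR (the_column < 10 AND the_other_column > 20)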
I think that in most RDBMSes, indices don't include null values, so an index on the_column wouldn't be terribly useful for this query; but even if that weren't the case, I don't see why a big CASE expression would be any more index-friendly.
(Of course, it's hard to prove a negative, and maybe someone else will know what you're referring to?)
Well, I've repeatedly written queries like the first example since about forever (heck, I've written query generators that generate queries like that), and I've never had a problem.
I think you may be remembering some admonishment somebody gave you sometime against writing funky join conditions that use OR. In your first example, the conditions joined by the OR restrict the same one column of the same table, which is OK. If your second condition was a join condition (i.e., it restricted columns from two different tables), then you could get into bad situations where the query planner just has no choice but to use a Cartesian join (bad, bad, bad!!!).
I don't think your CASE function is really doing anything there, except perhaps hamper your query planner's attempts at finding a good execution plan for the query.
But more generally, just write the straightforward query first and see how it performs for realistic data. No need to worry about a problem that might not even exist!
Nulls can be confusing. The "... WHERE 1 = CASE ..." pattern is useful if you are trying to pass either a NULL or a value as a parameter, e.g. "WHERE the_column = #parameter". This post may be helpful: Passing Null using OLEDB.
Another example where CASE is useful is when using date functions on varchar columns. Adding an ISDATE check before using, say, convert(colA, datetime) might not work on its own, and when colA has non-date data the query can error out.
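For example, a minimal sketch of that pattern (some_table is a placeholder; colA is the varchar column from the example):
SELECT CASE WHEN ISDATE(colA) = 1
            THEN CONVERT(datetime, colA)
            ELSE NULL
       END AS colA_as_datetime     -- non-date rows become NULL instead of raising an error
FROM some_table;                   -- placeholder table name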
I have a varchar column in a database, and a requirement has come in that a user can enter a from/to range, e.g. ABC001 to ABC100.
I have the following query but feel it might not be strict enough to work out if any values within that range exist.
SELECT count(*) FROM MyTable where MyColumn between 'ABC001' and 'ABC005'
I have a feeling an ORDER BY should be used, or is there a better way to calculate the existence of values within an alphanumeric range?
No ORDER BY is required. That should be perfect.
If you want to speed up that operation you can create an index on the column.
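For example (the index name is just a placeholder):
CREATE INDEX IX_MyTable_MyColumn ON MyTable (MyColumn);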
The ORDER BY operation is done at the end of query execution, so the data will be retrieved the same way regardless.
OP said:
or is there a better way to calculate the existence of values within an alphanumeric range
The best way would be:
SELECT count(*) FROM MyTable where MyColumn>='ABC001' and MyColumn<='ABC005'
I find most people can't remember if BETWEEN includes or excludes the "end points". By just always using >= and/or > and/or <= and/or < you have more clarity and flexibility.
Any ORDER BY would be applied to the resulting set of rows that meet the WHERE condition, and has nothing to do with the WHERE filtering. You can use it if you want the final result set in a particular order, but it will have no effect on which rows are included in the results.