Can common table expressions (CTEs) be used to avoid having SQL Server perform the following string parsing twice per record? My guess is "no."
SELECT DISTINCT
Client_ID
,RIGHT('0000000' + RIGHT(Client_ID
,PATINDEX('%[^0-9]%'
,REVERSE('?' + Client_ID)) - 1)
,7) AS CorrectedClient
FROM
membob_vw
WHERE
Client_ID <> RIGHT('0000000' + RIGHT(Client_ID
,PATINDEX('%[^0-9]%'
,REVERSE('?' + Client_ID)) - 1)
,7)
ORDER BY
1
,2
Every time I try to format the SQL as a "Code Block" it looks good (displaying on multiple lines) until the page is refreshed, after which point the SQL is displayed, for me at least, all on ONE line, and I can't seem to correct that.
Does it display that way for people who are using a browser newer than IE6? My company imposes this POS browser on me and prevents me from using any other.
No, a CTE will not do anything performance-wise for this query. It may seem strange/inefficient to type the same large string expression twice. However, SQL Server will only evaluate the string expression once per row; it has been optimized for things like that.
EDIT
The CTE will reduce the duplicate code:
;WITH AllRows AS (
SELECT DISTINCT
Client_ID
,RIGHT('0000000' + RIGHT(Client_ID
,PATINDEX('%[^0-9]%'
,REVERSE('?' + Client_ID)) - 1)
,7) AS CorrectedClient
FROM
membob_vw
)
SELECT * FROM AllRows WHERE Client_ID <> CorrectedClient
ORDER BY
1
,2
but it won't perform any better. Use SET SHOWPLAN_ALL ON and I'll bet you see the same query plan for each version.
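For example, you can capture the estimated plans for both versions in one script (a quick sketch; SET SHOWPLAN_ALL must be the only statement in its batch, hence the GO separators):
SET SHOWPLAN_ALL ON;
GO
-- run the original query and the CTE version here; SQL Server returns
-- the estimated plan rows instead of executing the statements
SET SHOWPLAN_ALL OFF;
GO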
BE CAREFUL trying to make queries look pretty and reduce redundant code fragments! Simple-looking SQL changes can have major adverse performance implications! Always check the performance (run it and/or compare the query plan) of any change you make. I have seen trivial changes to queries that ran instantly result in them taking minutes to run. The key with SQL is performance, not pretty code. If the application is slow, who cares if the code looks good?
If you're going to be running this query a lot, and especially if Client_ID is seldom updated, you should consider a computed column or pre-calculating CorrectedClient and storing it separately.
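A minimal sketch of the computed-column approach, assuming membob_vw selects from an underlying base table (hypothetically named membob here). PERSISTED makes SQL Server store and maintain the value on writes, so the string parsing happens once per update instead of on every read:
ALTER TABLE membob ADD CorrectedClient AS
    RIGHT('0000000' + RIGHT(Client_ID,
        PATINDEX('%[^0-9]%', REVERSE('?' + Client_ID)) - 1), 7)
    PERSISTED;
You could then also index CorrectedClient if you filter on it often.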
Related
I have a problem where the fix is to exchange which filter gets applied first, but I'm not sure if that is even possible, and I'm not knowledgeable enough about how it works.
To give an example:
Here is a table
When you filter this using the following query:
select * from pcparts where Parts = 'Monitor' and id = 255322 and Brand = 'Asus'
Logically this should be correct, as the Asus component with a character in its ID will be filtered out, preventing an ORA-01722 error.
But in my experience this is inconsistent.
I tried using the same filter on two different DB connections; the first one didn't get the error (as expected), but the other one got an ORA-01722 error.
Checking the explain plans, the difference between the two DBs is the following:
I was wondering if it's possible to make sure that Parts gets filtered before the ID, but I couldn't find anything when searching. Is this even possible? If not, what is a fix for this issue without relying on TO_CHAR?
I assume you want to (sort of) fix a buggy program without changing the source code.
According to your image, the plan shows "Filter Predicates"; this normally means Oracle isn't using an index (though I don't know what tool displays execution plans this way).
If you have an index on PARTS, Oracle will probably use this index.
create index myindex on mytable (parts);
If Oracle thinks this index is inefficient, it may still use a full table scan. You may try to 'fake' Oracle into thinking this is an efficient index by lying about the number of distinct values (the more distinct values, the more efficient):
exec dbms_stats.set_index_stats(ownname => 'myname', indname => 'myindex', numdist => 100000000)
Note: this WILL impact the performance of other queries using this table.
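If you later need to undo the faked value, regathering real statistics on the index is one option (a sketch):
exec dbms_stats.gather_index_stats(ownname => 'myname', indname => 'myindex')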
"Fix" is rather simple: take control over what you're doing.
It is evident that the ID column's datatype is VARCHAR2. Therefore, don't make Oracle guess; instruct it what to do.
No : select * from pcparts where Parts = 'Monitor' and id = 255322 and Brand = 'Asus'
Yes: select * from pcparts where Parts = 'Monitor' and id = '255322' and Brand = 'Asus'
The VARCHAR2 column's value is enclosed in single quotes.
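What happens under the hood: with the unquoted literal, Oracle rewrites the predicate as TO_NUMBER(id) = 255322, and the optimizer is free to evaluate it before or after the other predicates, which is why the error shows up on one database and not the other. A minimal sketch reproducing this (table and values are hypothetical):
create table pcparts (id varchar2(10), parts varchar2(20), brand varchar2(20));
insert into pcparts values ('255322', 'Monitor', 'Asus');
insert into pcparts values ('AB-01', 'Keyboard', 'Asus');
-- may or may not raise ORA-01722, depending on predicate evaluation order
select * from pcparts where parts = 'Monitor' and id = 255322;
-- never raises it: string compared to string, no implicit TO_NUMBER
select * from pcparts where parts = 'Monitor' and id = '255322';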
I have what I think is a strange issue. Normally, I would expect a query to take less time if I add a restriction (so that fewer rows are processed). But, I don't know why, this is not the case here. Maybe I'm doing something wrong, but I don't get an error; the query just seems to run 'till infinity'.
This is the query
SELECT
A.ENTITYID AS ORG_ID,
A.ID_VALUE AS LEI,
A.MODIFIED_BY,
A.AUDITDATETIME AS LAST_DATE_MOD
FROM (
SELECT
CASE WHEN IFE.NEWVALUE IS NOT NULL
then EXTRACTVALUE(xmltype(IFE.NEWVALUE), '/DocumentElement/ORG_IDENTIFIERS/ID_TYPE')
ELSE NULL
end as ID_TYPE,
case when IFE.NEWVALUE is not null
then EXTRACTVALUE(xmltype(IFE.NEWVALUE), '/DocumentElement/ORG_IDENTIFIERS/ID_VALUE')
ELSE NULL
END AS ID_VALUE,
(select u.username from admin.users u where u.userid = ife.analystuserid) as Modified_by,
ife.*
FROM ife.audittrail ife
WHERE
--IFE.AUDITDATETIME >= '01-JUN-2016' AND
attributeid = 499
AND ROWNUM <= 10000
AND (CASE WHEN IFE.NEWVALUE IS NOT NULL then EXTRACTVALUE(xmltype(IFE.NEWVALUE), '/DocumentElement/ORG_IDENTIFIERS/ID_TYPE') ELSE NULL end) = '38') A
--WHERE A.AUDITDATETIME >= '01-JUN-2016';
So I tried each of the two commented clauses (one at a time, of course).
With both of them the same thing happens: the query runs for so long that I have to abort it.
Do you know why this could be happening? How else could I apply the restriction?
The values of the field AUDITDATETIME are in the format '06-MAY-2017', for example.
Thank you very much in advance
I think you may misunderstand how databases work.
Firstly, read up on EXPLAIN - you can find out exactly what is taking time, and why, by learning to read its output.
Secondly - the performance characteristics of any given query are determined by a whole range of things, but usually the biggest effort goes not into processing rows, but into finding them.
Without an index, the database has to look at every row in the table and compare it to your WHERE clause. It's the equivalent of searching the phone book for a phone number rather than a name (the phone book is indexed on "last name").
You can improve this by creating indexes - for instance, on columns "AUDITDATETIME" and "attributeid".
Unlike the phone book, a database server can support multiple indexes - and if those indexes match your where clause, your query will be (much) faster.
Finally, using an XML string extraction for a comparison in the where clause is likely to be extremely slow unless you've got an index on that XML data.
This is the equivalent of searching the phone book and translating the street address from one language to another - not only do you have to inspect every address, you have to execute an expensive translation step for each item.
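A hedged sketch of what that could look like here (the index name is made up; the column names come from the query). Note that comparing AUDITDATETIME to a plain string like '01-JUN-2016' also relies on the session's date format, so an explicit TO_DATE is safer:
create index ix_audittrail_attr_date on ife.audittrail (attributeid, auditdatetime);
-- and in the WHERE clause:
--   attributeid = 499
--   and auditdatetime >= to_date('01-JUN-2016', 'DD-MON-YYYY')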
You probably need index(es)... We can all make guesses about which indexes you already have and which you need to add, but most DBMSs have built-in query optimizers that can tell you.
If you are using MS SQL Server, you can execute the query with the actual execution plan included; that will tell you which index you need to add to optimize this particular query. It will even let you copy/paste the command to create it.
Here I combine two tables and get the result:
SELECT *
FROM dbo.LabSampleCollection
WHERE CONVERT(nvarchar(20), BillNo) + CONVERT(Nvarchar(20), ServiceCode)
NOT IN (SELECT CONVERT(nvarchar(20), BillNo) + CONVERT(Nvarchar(20), ServiceCode)
FROM dbo.LabResult)
The problem is that it takes a long time to execute. Is there an alternative way to handle this?
SELECT *
FROM dbo.LabSampleCollection sc
WHERE NOT EXISTS ( SELECT BillNo
FROM dbo.LabResult r
WHERE r.BillNo = sc.BillNo
AND r.ServiceCode = sc.Servicecode)
No need to combine the two fields; just check whether both are available in the same record. It would also be better to replace the * with the actual columns that you wish to retrieve. (The column BillNo in the inner SELECT is just a placeholder; EXISTS only checks whether a matching row exists, so the select list doesn't matter.)
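If it is still slow, a composite index on the probed table should help each correlated lookup (a sketch; the index name is made up):
CREATE INDEX IX_LabResult_Bill_Service
    ON dbo.LabResult (BillNo, ServiceCode);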
Are you familiar with query execution plans? If not, I strongly recommend you read up on them. If you are going to be writing queries and troubleshooting or trying to improve performance, they are one of the most useful tools (along with a basic understanding of how the SQL Server optimization engine works).
You can access them from SSMS via the Activity Monitor or by running the query itself (use the Include Actual Execution Plan button or Ctrl+M), and they will tell you exactly which part of the query is the most inefficient and why. There are many very good articles on the web on how to improve performance using this valuable tool, e.g. https://www.simple-talk.com/sql/performance/execution-plan-basics/
I'm writing my own SQLite browser and I have one final problem, which apparently is quite often discussed on the web but doesn't seem to have a good general solution.
So currently I store the SQL which the user entered. Whenever I need to fetch rows, I execute the SQL after appending "LIMIT n, m" to the end of it.
For the normal SQL I mostly use, this seems good enough. However, if I want to use LIMIT myself in the query, this will obviously give an error, because the resulting SQL can look like this:
select * from table limit 30 limit 1,100
which is obviously wrong. Is there some better way to do this?
My idea was that I could scan the SQL, check if a LIMIT clause is already used, and then skip appending mine. Of course it's not as simple as that, because with an SQL statement like this:
select * from a where a.b = ( select x from z limit 1)
it obviously should still apply my limit in such a case, so I could scan the string from the end and look whether there is a LIMIT somewhere. My question now is how feasible this is. As I don't know how the SQL parser works, I'm not sure if LIMIT has to be at the end of the SQL or if there can be other clauses after it.
I tested it with ORDER BY and GROUP BY, and I get SQL errors if LIMIT is not at the end, so my assumption seems to be true.
I have now found a much better solution, which is quite simple and doesn't require me to parse the SQL.
The user can enter an arbitrary SQL query. The result is loaded into a table. Since we don't want to load the whole result at once, as it can contain millions of records, only N records are retrieved. When the user scrolls to the bottom of the table, the next N items are fetched and loaded into the table.
The solution is to wrap the SQL in an outer SQL statement with my page-size limits:
select * from (arbitrary UserSQL) limit PageSize offset CurrentOffset
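For instance, with a hypothetical user query and a page size of 50, the third page would be fetched as:
select * from (
    select name, total from orders order by total desc
) limit 50 offset 100;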
I tested it with the SQL statements I regularly use, and this seems to work quite nicely and is also fast enough for my purpose.
However, I don't know whether SQLite has a mechanism to fetch the new rows faster, or if the SQL has to be rerun every time. In that case it might not be a good solution for really complex queries with a long response time.
I am working on a join condition between 2 tables where one of the columns to match on is a concatenation of values. I need to join columnA from tableA to the first 2 characters of columnB from tableB.
I have developed 2 different statements to handle this and I have tried to analyze the performance of each method.
Method 1:
ON tB.columnB like tA.columnA || '%'
Method 2:
ON substr(tB.columnB,1,2) = tA.columnA
The query execution plan has a lot fewer steps using Method 1 compared to Method 2; however, it looks like Method 2 executes much faster. Also, the execution plan shows a recommended index for Method 2 that could improve its performance.
I am running this on an IBM iSeries, though I would be interested in answers in a general sense to learn more about SQL query optimization.
Does it make sense that Method 2 would execute faster?
This SO question is similar, but it looks like no one provided any concrete answers about the performance difference of these approaches: T-SQL speed comparison between LEFT() vs. LIKE operator.
PS: The table design that requires this type of join is not something that I can get changed at this time. I realize having the fields separated, each holding one type of data, would be preferable.
I ran the following in the SQL Advisor in IBM Data Studio on one of the tables in my DB2 LUW 10.1 database:
SELECT *
FROM PDM.DB30
WHERE DB30_SYSTEM_ID = 'XXX'
AND DB30_VERSION_ID = 'YYY'
AND SUBSTR(DB30_REL_TABLE_NM, 1, 4) = 'ZZZZ'
and
SELECT *
FROM PDM.DB30
WHERE DB30_SYSTEM_ID = 'XXX'
AND DB30_VERSION_ID = 'YYY'
AND DB30_REL_TABLE_NM LIKE 'ZZZZ%'
They both had the exact same access path utilizing the same index, the same estimated IO cost and the same estimated cardinality, the only difference being the estimated total CPU cost for the LIKE was 178,343.75 while the SUBSTR was 197,518.48 (~10% difference).
The cumulative total cost for both were the same though, so this difference is negligible as per the advisor.
Yes, Method 2 would be faster. LIKE is not as efficient a function.
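If your IBM i release supports derived key indexes (DB2 for i 7.1 and later), the index recommended for Method 2 would presumably look something like this sketch (table and index names are hypothetical):
CREATE INDEX tableB_first2 ON tableB (SUBSTR(columnB, 1, 2));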
To compare the performance of various techniques, try using Visual Explain. You will find it buried in System i Navigator: under your system connection, expand Databases, then click on your RDB name. In the lower right pane you can then click on the option to Run an SQL Script. Enter your SELECT statement and choose the menu option for Visual Explain or Run and Explain. Visual Explain will break down the execution plan for your statement and show you the estimated cost for each part, given the indexes available on your tables.
You can actually test this with real examples in your database. LIKE was always better in my runs:
select count(*) from u_log where log_text like 'AUT%';
1 row(s) returned : 90ms taken
select count(*) from u_log where substr(log_text,1,3)='AUT';
1 row(s) returned : 493ms taken
I found this reference in an IBM redbook related to SQL performance. It sounds like the SUBSTR scalar function can be handled in an optimized manner by an iSeries.
If you search for the first character and want to use the SQE instead of the CQE, you can use the scalar function substring on the left side of the equal sign. If you have to search for additional characters in the string, you can additionally use the scalar function POSSTR. By splitting the LIKE predicate into several scalar functions, you can affect the query optimizer to use the SQE.
http://publib-b.boulder.ibm.com/abstracts/sg246654.html?Open
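The redbook's suggestion translates to something like this sketch (the pattern 'AB%CD%' is made up for illustration):
-- LIKE form:
--   WHERE tB.columnB LIKE 'AB%CD%'
-- split into scalar functions so the SQE can be used:
WHERE SUBSTR(tB.columnB, 1, 2) = 'AB'
  AND POSSTR(tB.columnB, 'CD') > 2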