Abysmal performance using DECRYPTBYKEY in SQL Server 2008 R2 - sql

I'm looking at a query that has relatively complex relationship with other tables. Here's what my original query looks like:
SELECT
SE_CUS_DELIVERY.RECEIPT_NAME,
SE_CUS_DELIVERY.ORDER_PHONE,
SE_CUS_DELIVERY.ORDER_ZIP,
SE_CUS_DELIVERY.ORDER_ADDR,
ISNULL(SE_CUS_DELIVERY.DELIV_QTY,SRCHGT.CREQTY),
SE_CUS_DELIVERY.ORDER_HAND
FROM LC_OUT_REQUEST_DETAIL ,
SE_INVENTORY ,
SE_CUSTOMER ,
SRCHGT ,
SE_CUS_DELIVERY
WHERE
LC_OUT_REQUEST_DETAIL.TOTDATE = '20140203'
AND LC_OUT_REQUEST_DETAIL.IO_GB = '021'
AND LC_OUT_REQUEST_DETAIL.LOCCD >= 'A0000'
... A lot of additional joins here
group by SRCHGT.CRDATE + SRCHGT.CRESEQ + SRCHGT.CRESEQ_SEQ + SE_CUS_DELIVERY.DELIV_SEQ ,
SE_CUS_DELIVERY.RECEIPT_NAME ,
SE_CUS_DELIVERY.ORDER_PHONE ,
SE_CUS_DELIVERY.ORDER_ZIP ,
SE_CUS_DELIVERY.ORDER_ADDR ,
ISNULL(SE_CUS_DELIVERY.DELIV_QTY,SRCHGT.CREQTY) ,
... Also a lot of group by's following here
order by LC_OUT_REQUEST_DETAIL.TOTDATE,
LC_OUT_REQUEST_DETAIL.TOT_NO asc,
LC_OUT_REQUEST_DETAIL.TOT_NO_SEQ
To my surprise, it takes only about a second to retrieve more than 10,000 rows.
However, I've encrypted the data in some columns that contain sensitive information, and I modified my select query like so to get the original values:
OPEN SYMMETRIC KEY Sym_Key_TestEnc
DECRYPTION BY CERTIFICATE Cert_Test
WITH PASSWORD = 'somepasswordhere'
GO
SELECT
DECRYPTBYKEY(SE_CUS_DELIVERY.RECEIPT_NAME),
DECRYPTBYKEY(SE_CUS_DELIVERY.ORDER_PHONE),
DECRYPTBYKEY(SE_CUS_DELIVERY.ORDER_ZIP),
DECRYPTBYKEY(SE_CUS_DELIVERY.ORDER_ADDR),
ISNULL(SE_CUS_DELIVERY.DELIV_QTY,SRCHGT.CREQTY),
DECRYPTBYKEY(SE_CUS_DELIVERY.ORDER_HAND)
FROM LC_OUT_REQUEST_DETAIL,
SE_INVENTORY ,
SE_CUSTOMER ,
SRCHGT ,
SE_CUS_DELIVERY
WHERE
LC_OUT_REQUEST_DETAIL.TOTDATE = '20140203'
AND LC_OUT_REQUEST_DETAIL.IO_GB = '021'
AND LC_OUT_REQUEST_DETAIL.LOCCD >= 'A0000'
AND LC_OUT_REQUEST_DETAIL.LOCCD <= 'A9999'
AND LC_OUT_REQUEST_DETAIL.MAT_CD = SE_INVENTORY.MAT_CD
AND LC_OUT_REQUEST_DETAIL.JCOLOR = SE_INVENTORY.JCOLOR
....
group by SRCHGT.CRDATE + SRCHGT.CRESEQ + SRCHGT.CRESEQ_SEQ + SE_CUS_DELIVERY.DELIV_SEQ ,
SE_CUS_DELIVERY.RECEIPT_NAME ,
SE_CUS_DELIVERY.ORDER_PHONE ,
SE_CUS_DELIVERY.ORDER_ZIP ,
SE_CUS_DELIVERY.ORDER_ADDR ,
.......
GO
CLOSE SYMMETRIC KEY Sym_Key_TestEnc
Now the performance is abysmal. I've been running the same query for more than 5 minutes and it still hasn't completed.
According to MSDN, there shouldn't be much of a performance issue:
Symmetric encryption and decryption is relatively fast, and is
suitable for working with large amounts of data.
Which leads me to think that I must be doing something wrong. Or MSDN is lying to me, but that's probably not the case.
Is there a way to optimize the data decryption in this process?
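One pattern worth trying (a sketch, not a confirmed fix; the CONVERT target types are assumptions to be matched to the real schema) is to keep the ciphertext columns through the joins and GROUP BY, and call DECRYPTBYKEY only in an outer query, so decryption runs once per result row rather than once per row the engine examines:

```sql
OPEN SYMMETRIC KEY Sym_Key_TestEnc
DECRYPTION BY CERTIFICATE Cert_Test
WITH PASSWORD = 'somepasswordhere';

SELECT
    -- decrypt only the final ~10,000 rows produced by the inner query
    CONVERT(nvarchar(200), DECRYPTBYKEY(q.RECEIPT_NAME)) AS RECEIPT_NAME,
    CONVERT(nvarchar(50),  DECRYPTBYKEY(q.ORDER_PHONE))  AS ORDER_PHONE,
    CONVERT(nvarchar(20),  DECRYPTBYKEY(q.ORDER_ZIP))    AS ORDER_ZIP,
    CONVERT(nvarchar(400), DECRYPTBYKEY(q.ORDER_ADDR))   AS ORDER_ADDR,
    q.DELIV_QTY
FROM (
    -- the original, fast query: select the raw (encrypted) columns here
    SELECT SE_CUS_DELIVERY.RECEIPT_NAME,
           SE_CUS_DELIVERY.ORDER_PHONE,
           SE_CUS_DELIVERY.ORDER_ZIP,
           SE_CUS_DELIVERY.ORDER_ADDR,
           ISNULL(SE_CUS_DELIVERY.DELIV_QTY, SRCHGT.CREQTY) AS DELIV_QTY
    FROM ...   -- same joins, WHERE and GROUP BY as before
) AS q;

CLOSE SYMMETRIC KEY Sym_Key_TestEnc;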

Related

Most recent transaction date against a Works Order?

Apologies in advance for what will probably be a very stupid question, but I've been using Google to teach myself SQL after making the move from years of using Crystal Reports.
We have Works Orders which can have numerous transactions against them. I want to find the most recent one and have it returned against the Works Order number (which is a unique ID). I attempted to use MAX, but that just returns whatever the Transaction Date for that record is.
I think my struggles may be caused by a lack of understanding of grouping in SQL. In Crystal it was just 'choose what to group by', but for some reason in SQL I seem to be forced to group by all selected fields.
My ultimate goal is to be able to compare the planned end date of the Works Order ("we need to finish this job by then") vs when the last transaction was booked against the Works Order, so that I can create an OTIF KPI.
I've attached an image of what I'm currently seeing in SQL Server 2014 Management Studio and below is my attempt at the query.
SELECT wip.WO.WO_No
, wip.WO.WO_Type
, stock.Stock_Trans_Log.Part_No
, stock.Stock_Trans_Types.Description
, stock.Stock_Trans_Log.Qty_Change
, stock.Stock_Trans_Log.Trans_Date
, wip.WO.End_Date
, wip.WO.Qty - wip.WO.Qty_Stored AS 'Qty remaining'
, MAX(stock.Stock_Trans_Log.Trans_Date) AS 'Last Production Receipt'
FROM stock.Stock_Trans_Log
INNER JOIN production.Part
ON stock.Stock_Trans_Log.Part_No = production.Part.Part_No
INNER JOIN wip.WO
ON stock.Stock_Trans_Log.WO_No = wip.WO.WO_No
INNER JOIN stock.Stock_Trans_Types
ON stock.Stock_Trans_Log.Tran_Type = stock.Stock_Trans_Types.Type
WHERE (stock.Stock_Trans_Types.Type = 10)
AND (stock.Stock_Trans_Log.Store_Code <> 'BI')
GROUP BY wip.WO.WO_No
, wip.WO.WO_Type
, stock.Stock_Trans_Log.Part_No
, stock.Stock_Trans_Types.Description
, stock.Stock_Trans_Log.Qty_Change
, stock.Stock_Trans_Log.Trans_Date
, wip.WO.End_Date
, wip.WO.Qty - wip.WO.Qty_Stored
HAVING (stock.Stock_Trans_Log.Part_No BETWEEN N'2Z' AND N'9A')
Query + results
If my paraphrase is correct, you could use something along the following lines...
WITH
sequenced_filtered_stock_trans_log AS
(
SELECT
*,
ROW_NUMBER() OVER (PARTITION BY WO_No
ORDER BY Trans_Date DESC) AS reversed_sequence_id
FROM
stock.Stock_Trans_Log
WHERE
Tran_Type = 10
AND Store_Code <> 'BI'
AND Part_No BETWEEN N'2Z' AND N'9A'
)
SELECT
<stuff>
FROM
sequenced_filtered_stock_trans_log AS stock_trans_log
INNER JOIN
<your joins>
WHERE
stock_trans_log.reversed_sequence_id = 1
First, this will apply the WHERE clause to filter the log table.
After the WHERE clause is applied, a sequence id is calculated: it restarts from one for each partition (each WO_No), starting from the highest Trans_Date.
Finally, the outer query uses a WHERE clause to keep only the records with sequence id one, i.e. the most recent row per WO_No. The rest of the joins onto that table proceed as normal.
If there is any other filtering that should be done (through joins or any other means), it should all be done before the ROW_NUMBER() is applied.
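Applied to the tables from the question, that sketch might look like the following (column list trimmed; the join columns are taken from the original query, and note the log table's type column is Tran_Type, not Type):

```sql
WITH sequenced_filtered_stock_trans_log AS
(
    SELECT *,
           ROW_NUMBER() OVER (PARTITION BY WO_No
                              ORDER BY Trans_Date DESC) AS reversed_sequence_id
    FROM stock.Stock_Trans_Log
    WHERE Tran_Type = 10
      AND Store_Code <> 'BI'
      AND Part_No BETWEEN N'2Z' AND N'9A'
)
SELECT wo.WO_No,
       wo.WO_Type,
       wo.End_Date,
       wo.Qty - wo.Qty_Stored AS [Qty remaining],
       stl.Trans_Date         AS [Last Production Receipt]
FROM sequenced_filtered_stock_trans_log AS stl
INNER JOIN wip.WO AS wo
        ON stl.WO_No = wo.WO_No
WHERE stl.reversed_sequence_id = 1;  -- one row per works order: the latest
```

Because the latest transaction is picked per WO_No before joining, no GROUP BY over every selected field is needed.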

How to improve query performance in Oracle

The SQL query below is taking too much time to execute. It might be due to the repetitive use of the same table in the FROM clause. I can't work out how to fix this query so that its performance improves.
Can anyone help me out with this?
Thanks in advance!
select --
from t_carrier_location act_end,
t_location end_loc,
t_carrier_location act_start,
t_location start_loc,
t_vm_voyage_activity va,
t_vm_voyage v,
t_location_position lp_start,
t_location_position lp_end
where act_start.carrier_location_id = va.carrier_location_id
and act_start.carrier_id = v.carrier_id
and act_end.carrier_location_id =
decode((select cl.carrier_location_id
from t_carrier_location cl
where cl.carrier_id = act_start.carrier_id
and cl.carrier_location_no =
act_start.carrier_location_no + 1),
null,
(select cl2.carrier_location_id
from t_carrier_location cl2, t_vm_voyage v2
where v2.hire_period_id = v.hire_period_id
and v2.voyage_id =
(select min(v3.voyage_id)
from t_vm_voyage v3
where v3.voyage_id > v.voyage_id
and v3.hire_period_id = v.hire_period_id)
and v2.carrier_id = cl2.carrier_id
and cl2.carrier_location_no = 1),
(select cl.carrier_location_id
from t_carrier_location cl
where cl.carrier_id = act_start.carrier_id
and cl.carrier_location_no =
act_start.carrier_location_no + 1))
and lp_start.location_id = act_start.location_id
and lp_start.from_date <=
nvl(act_start.actual_dep_time, act_start.actual_arr_time)
and (lp_start.to_date is null or
lp_start.to_date >
nvl(act_start.actual_dep_time, act_start.actual_arr_time))
and lp_end.location_position_id = act_end.location_id
and lp_end.from_date <=
nvl(act_end.actual_dep_time, act_end.actual_arr_time)
and (lp_end.to_date is null or
lp_end.to_date >
nvl(act_end.actual_dep_time, act_end.actual_arr_time))
and act_end.location_id = end_loc.location_id
and act_start.location_id = start_loc.location_id;
There is no single straightforward answer for your question and the query you've mentioned.
To get a better response time from any query, you need to keep a few things in mind while writing it. I'll mention a few here which appear to be important for your query:
Use joins instead of subqueries.
Use EXPLAIN PLAN to determine whether queries are behaving appropriately.
In your WHERE clause, use columns that have indexes, or create indexes on those columns. Use your judgment about which columns to index, e.g. foreign key columns and frequently filtered columns such as deleted, orderCreatedAt, or startDate.
Keep the order of the selected columns as they appear in the table instead of selecting them in an arbitrary order.
The above four points are enough for the query you've provided.
To dig deeper into SQL optimization and tuning, refer to https://docs.oracle.com/database/121/TGSQL/tgsql_intro.htm#TGSQL130
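For the second point, Oracle's EXPLAIN PLAN makes this concrete. A sketch, shown here on a simplified fragment of the query (paste the full slow statement after FOR):

```sql
-- Capture the plan for the statement, then display it.
EXPLAIN PLAN FOR
SELECT act_start.carrier_location_id
FROM   t_carrier_location act_start
JOIN   t_vm_voyage_activity va
       ON act_start.carrier_location_id = va.carrier_location_id;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```

The plan output shows whether the correlated DECODE subqueries are being re-executed per row, which is the usual suspect in a query shaped like this one.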

SQL View slow to run

I have the following SQL statement but it takes 20 seconds to run, how can I make it faster ?
SELECT TOP (100) PERCENT
dbo.pod.order_no,
dbo.pod.order_line_no,
dbo.poh.currency,
dbo.pod.warehouse,
dbo.pod.product,
dbo.poh.address1,
dbo.pod.description,
dbo.pod.date_required,
dbo.pod.qty_ordered,
dbo.pod.qty_received,
dbo.pod.qty_invoiced,
dbo.pod.status,
dbo.poh.date_entered,
dbo.stock.analysis_c,
dbo.stock.catalogue_number,
dbo.stock.drawing_number,
dbo.poh.date_required AS OriginalRequiredDate,
dbo.stock.standard_cost,
dbo.poh.supplier_ref,
dbo.stock.reorder_days,
dbo.pod.local_expect_cost,
dbo.poh.supplier,
dbo.pod.qty_ordered - dbo.pod.qty_received AS qty_outstanding,
dbo.stock.warehouse AS warehouse2
FROM dbo.stock
RIGHT OUTER JOIN dbo.pod
ON dbo.stock.product = dbo.pod.product
LEFT OUTER JOIN dbo.poh
ON dbo.pod.order_no = dbo.poh.order_no
WHERE (dbo.pod.status <> 'C')
AND (dbo.poh.status <> '9')
AND (dbo.stock.analysis_c IN ('FB', 'FP', 'RM', '[PK]'))
AND (dbo.pod.qty_ordered - dbo.pod.qty_received > 0)
AND (dbo.stock.warehouse = 'FH')
The execution plan says remote Query taking up 89% - These tables are located through a linked server.
I'd move (dbo.stock.warehouse = 'FH') up to be the first item in the WHERE clause, since stock is your main table. I'd then run the query through the query profiler to see where the lag is; this might help narrow down the area that needs to change.
As mentioned in the comments, there shouldn't be any TOP statement (what keeps adding it automatically?).
I'd rewrite your view like that (for readability):
SELECT P.order_no
, P.order_line_no
, T.currency
, P.warehouse
, P.product
, T.address1
, P.[Description]
, P.date_required
, P.qty_ordered
, P.qty_received
, P.qty_invoiced
, P.[Status]
, T.date_entered
, S.analysis_c
, S.catalogue_number
, S.drawing_number
, T.date_required AS OriginalRequiredDate
, S.standard_cost
, T.supplier_ref
, S.reorder_days
, P.local_expect_cost
, T.supplier
, P.qty_ordered - P.qty_received AS qty_outstanding
, S.warehouse AS warehouse2
FROM dbo.stock AS S
RIGHT JOIN dbo.pod AS P
ON S.product = P.product
LEFT JOIN dbo.poh AS T
ON P.order_no = T.order_no
WHERE P.[Status] <> 'C'
AND T.[Status] <> '9'
AND S.analysis_c IN ('FB', 'FP', 'RM', '[PK]')
AND P.qty_ordered - P.qty_received > 0
AND S.warehouse = 'FH';
Also, I'd create following indexes, which should increase performance (hopefully I didn't miss any columns):
CREATE NONCLUSTERED INDEX idx_Stock_product_warehouse_analysisC_iColumns
ON dbo.Stock (product, warehouse, analysis_c)
INCLUDE (catalogue_number, drawing_number, standard_cost, reorder_days);
CREATE NONCLUSTERED INDEX idx_Pod_product_orderNo_status_qtyOrdered_qtyReceived_iColumns
ON dbo.Pod (product, order_no, [status], qty_ordered, qty_received)
INCLUDE (order_line_no, warehouse, [Description], date_required, qty_invoiced, [status], local_expect_cost);
CREATE NONCLUSTERED INDEX idx_Poh_orderNo_Status_iColumns
ON dbo.Poh (order_no, [Status])
INCLUDE (currency, address1, date_entered, date_required, supplier_ref, supplier);
Since there isn't really much to work on, here are some general guesses about what could help. You have 5 criteria in your SQL that could reduce the number of rows:
pod.status <> 'C'
pod.qty_ordered - pod.qty_received > 0
poh.status <> '9'
stock.analysis_c IN ('FB', 'FP', 'RM', '[PK]')
stock.warehouse = 'FH'
For each of these, the selectivity of the criterion is essential. For example, if 90% of your rows have pod.status = 'C', then you should probably add a filtered index for status <> 'C' (and the same for the poh.status field).
For the stock table's warehouse (and analysis_c): if the given criterion limits the data a lot, adding an index on the field should help.
If pod.qty_ordered is usually less than or equal to pod.qty_received, it might be a good idea to add a persisted computed column, index it, and use it in the WHERE clause.
Since these fields are in different tables, the query should start from the one that limits the data the most, so you might want to index that table only; indexes on the others might not help at all. I also assume you already have indexes on the fields you're joining the tables with; if not, that's the first thing to look at. Of course, every new index has a (small) impact on inserts and updates.
If the query does a lot of key lookups, it might help to add all the other columns from that table as included columns in the index, but that also has an impact on updates and inserts.
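A sketch of the filtered-index and computed-column ideas above (the index names are made up; verify the selectivity on your own data first):

```sql
-- Filtered index for open order lines; a simple <> comparison against a
-- constant is allowed in a filtered index predicate.
CREATE NONCLUSTERED INDEX IX_pod_product_open
ON dbo.pod (product)
WHERE [status] <> 'C';

-- Persist the outstanding quantity so it can be indexed and used in WHERE.
-- (A filtered index predicate cannot reference a computed column, so the
-- index below is unfiltered.)
ALTER TABLE dbo.pod
ADD qty_outstanding AS (qty_ordered - qty_received) PERSISTED;

CREATE NONCLUSTERED INDEX IX_pod_qty_outstanding
ON dbo.pod (qty_outstanding);
```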

Why Multiple OrderBy consumes much time to execute?

I tried querying a table with multiple ORDER BY columns:
SELECT TOP 50
TBL_ContentsPage.NewsId,
TBL_ContentsPage.author,
TBL_ContentsPage.Header,
TBL_ContentsPage.TextContent,
TBL_ContentsPage.PostedDate,
TBL_ContentsPage.status,
TBLTempSettings.templateID
FROM TBL_ContentsPage
INNER JOIN TBLTempSettings
ON TBL_ContentsPage.NewsId = TBLTempSettings.newsId
WHERE TBL_ContentsPage.mode = '1' AND TBLTempSettings.mode = '1' AND (TBLTempSettings.templateID = #templateID OR #templateID = 'all')
ORDER BY 0 + TBLTempSettings.rank DESC
But when I add TBL_ContentsPage.PostedDate DESC, the query takes more than double the time. TBLTempSettings.rank is already indexed.
To sort your query results, SQL Server burns CPU time.
An alternative to the ORDER BY clause is to have your app consume all of the query results as fast as possible into memory, and sort them there. Your application is already designed in a way that lets you scale out multiple app servers to distribute CPU load, whereas your database server is not.
Sort operations, besides using the tempdb system database as a temporary storage area, also add significant I/O to the operation.
Therefore, if you frequently see the Sort operator in your queries and it is an expensive operation, consider removing the ORDER BY clause. On the other hand, if you know you will always order your query by a specific column, consider indexing it.
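If the ordering is always by rank descending, a matching index (a sketch; the index name and the INCLUDE column list are assumptions) lets SQL Server read the rows already in order. Note that the expression 0 + TBLTempSettings.rank in the question wraps the column in an arithmetic expression, which prevents an index on rank from being used to avoid the sort, so order by the bare column if you want the index to help:

```sql
CREATE NONCLUSTERED INDEX IX_TBLTempSettings_rank
ON TBLTempSettings ([rank] DESC)
INCLUDE (newsId, templateID, mode);
```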
Try this one -
SELECT TOP 50 c.newsId
, c.author
, c.Header
, c.TextContent
, c.PostedDate
, c.status
, t.templateID
FROM TBL_ContentsPage c
JOIN (
SELECT *
FROM TBLTempSettings t
WHERE t.mode = '1'
AND (t.templateID = #templateID OR #templateID = 'all')
) t ON c.newsId = CAST(t.newsId AS INT)
WHERE c.mode = '1'
ORDER BY t.rank DESC

After server move a query doesn't work anymore

I need some help with a problem that's driving me crazy!
I've moved an ASP + SQL Server application from an old server to a new one.
The old one was a Windows 2000 server with MSDE, and the new one is Windows 2008 with SQL Server 2008 Express.
Everything is OK, even a little faster, except one damned function whose ASP page times out.
I've tried the query from that page in a Management Studio query window and it never finishes, while on the old server it took about 1 minute to complete.
The query is this one:
SELECT DISTINCT
TBL1.TBL1_ID,
REPLACE(TBL1_TITOLO, CHAR(13) + CHAR(10), ' '),
COALESCE(TBL1_DURATA, 0), TBL1_NUMERO,
FLAG_AUDIO
FROM
SPOT AS TBL1
INNER JOIN
CROSS_SPOT AS CRS ON CRS.TBL1_ID = TBL1.TBL1_ID
INNER JOIN
DESTINATARI_SPOT AS DSP ON DSP.TBL1_ID = TBL1.TBL1_ID
WHERE
DSP.PTD_ID_PUNTO = 1044
AND DSP.DSP_FLAG_OK = 1
AND TBL1.FLAG_AUDIO_TESTO = 1
AND TBL1.FLAG_AUDIO_GRAFICO = 'A'
AND CRS.CRS_STATO > 2
OR TBL1.TBL1_ID IN (SELECT ID
FROM V_VIEW1
WHERE ID IS NOT NULL AND V_VIEW1.ID_MODULO = 403721)
OR TBL1.TBL1_ID IN (SELECT TBL1_ID
FROM V_VIEW2
WHERE V_VIEW2.ID_PUNTO = 1044)
ORDER BY
TBL1_NUMERO
I've tried turning the 2 views in the last lines into tables, and the query works, even if a little slower than before.
I've migrated the db with its backup/restore function. Could it be an index problem?
Any suggestions?
Thanks in advance!
Alessandro
Run:
--Defrag all indexes
sp_msForEachTable 'DBCC DBREINDEX (''?'')'
--Update all statistics
sp_msForEachTable 'UPDATE STATISTICS ? WITH FULLSCAN'
If that doesn't "just fix it", it's going to be some subtle "improvement" in the SQL Server optimizer that made things worse.
Try the index tuning wizard (or whatever its SSMS 2008 equivalent is).
After that, you'll have to start picking the query apart, removing things until it runs fast. Since you have 2 OR clauses, you basically have 3 separate queries:
SELECT ... FROM ...
WHERE DSP.PTD_ID_PUNTO = 1044
AND DSP.DSP_FLAG_OK = 1
AND TBL1.FLAG_AUDIO_TESTO=1
AND TBL1.FLAG_AUDIO_GRAFICO = 'A'
AND CRS.CRS_STATO>2
--UNION
SELECT ... FROM ...
WHERE TBL1.TBL1_ID IN (
SELECT ID
FROM V_VIEW1
WHERE ID IS NOT NULL
AND V_VIEW1.ID_MODULO = 403721
)
--UNION
SELECT ... FROM ...
WHERE TBL1.TBL1_ID IN (
SELECT TBL1_ID
FROM V_VIEW2
WHERE V_VIEW2.ID_PUNTO = 1044
)
See which one of those is the slowest.
P.S. A query taking a minute is pretty bad. My opinion is that queries should return instantly (within the limits of human observation).