SQL timeout on view

We have an Access application in our office that pulls data from several SQL Server 2012 views (order tables joined with other tables from different companies) and loads it into a single Access table for statistics. The problem is that when the view becomes complicated, executing it from the view designer fails with an error after 30 seconds, and Access gives the same error when I open the view, so I cannot transfer the data.
I changed the timeout for queries and designers, so if I execute a query against the view I have no problems, but this timeout has no effect on the view itself.
Here is my View:
SELECT o.DATAconsegna, o.Fornitore,
       o.HOTEL, o.ORDINE,
       o.PRODOTTO, o.NUMEROS,
       o.COSTOUNITARIOS, o.COSTOTOTALES,
       o.CONSEGNATO, o.IDDOCTRASP,
       YEAR(o.DATAconsegna) AS anno, o.napoli,
       o.CONTRNAP, r.NDOC,
       o.ID, o.UTENZA,
       o.COMPPERIODO, o.COMPANNO,
       o.dataov
FROM dbo.ordiniqdopolib AS o
INNER JOIN dbo.REGDOCT AS r
    ON o.IDDOCTRASP = r.ID
WHERE o.Fornitore <> 'Inventario Napoli'
  AND o.ORDINE > 0
  AND (o.CONSEGNATO = -1 OR o.ORDINE < 0)
  AND o.CONTRNAP = 0
Any suggestions on how to avoid the 30-second timeout?

Related

Runtime of stored procedure with temporary tables varies intermittently

We are facing a performance issue while executing a stored procedure. It usually takes 10-15 minutes to run, but sometimes it takes more than 30 minutes.
We captured visualized-plan files for both the normal-run and long-run cases.
Comparing the visualized plans, we found that one particular INSERT block takes the extra time in the long run, and by checking
"EXPLAIN PLAN FOR SQL PLAN CACHE ENTRY <plan_id>"
we found that the join execution order differs in the long run.
This is the block that sometimes takes extra time:
INSERT INTO #TMP_DATI_SALDI_LORDI_BASE (
"COD_SCENARIO","COD_PERIODO","COD_CONTO","COD_DEST1","COD_DEST2","COD_DEST3","COD_DEST4","COD_DEST5"
,"IMPORTO","COD_VALUTA","IMPORTO_VALUTA_ORIGINARIA","COD_VALUTA_ORIGINARIA","NOTE"
)
( SELECT
SCEN_P.SCENARIO
,SCEN_P.PERIOD
,ACCOUT_ADJ.ATTRIBUTO1 AS "COD_CONTO"
,DATAS_rev.COD_DEST1
,DATAS_rev.COD_DEST2
,DATAS_rev.COD_DEST3
,__typed_NString__($1, 50)
,'RPT_NON'
,SUM(
CASE WHEN INFO.INCOT = 'FOB' THEN
CASE ACCOUT_rev.ATTRIBUTO1 WHEN 'CalcInsurance' THEN
0
ELSE
DATAS_rev.IMPORTO
END
ELSE
DATAS_rev.IMPORTO
END
* (DATAS_ADJ.IMPORTO - DATAS.IMPORTO)
)
,DATAS_rev.COD_VALUTA
,SUM(
CASE WHEN INFO.INCOT = 'FOB' THEN
CASE ACCOUT_rev.ATTRIBUTO1 WHEN 'CalcInsurance' THEN
0
ELSE
DATAS_rev.IMPORTO_VALUTA_ORIGINARIA
END
ELSE
DATAS_rev.IMPORTO_VALUTA_ORIGINARIA
END
* (DATAS_ADJ.IMPORTO_VALUTA_ORIGINARIA - DATAS.IMPORTO_VALUTA_ORIGINARIA)
)
,DATAS_rev.COD_VALUTA_ORIGINARIA
,'CPM_SP_CACL_FY_E3 Parts Option ADJ'
FROM #TMP_TAGERT_SCEN_P SCEN_P
INNER JOIN #TMP_DATI_SALDI_LORDI_BASE DATAS_rev
ON DATAS_rev.COD_SCENARIO = SCEN_P.SCENARIO
AND DATAS_rev.COD_PERIODO = SCEN_P.PERIOD
AND LEFT(DATAS_rev.COD_DEST3, 1) = 'O'
INNER JOIN CONTO ACCOUT_rev
ON ACCOUT_rev.COD_CONTO = DATAS_rev.COD_CONTO
AND ACCOUT_rev.ATTRIBUTO1 IN ('CalcFOB','CalcInsurance') --FOB,Insurance(Ocean freight is Nothing by Option)
INNER JOIN #DSL DATAS
ON DATAS.COD_SCENARIO = 'LAUNCH'
AND DATAS.COD_PERIODO = 12
AND DATAS.COD_DEST1 = 'NC'
AND DATAS.COD_DEST2 = 'NC'
AND DATAS.COD_DEST3 = 'F001'
AND DATAS.COD_DEST4 = DATAS_rev.COD_DEST4
AND DATAS.COD_DEST5 = 'INP'
INNER JOIN CONTO ACCOUT
ON ACCOUT.COD_CONTO = DATAS.COD_CONTO
AND ACCOUT.ATTRIBUTO2 = 'E3'
INNER JOIN CONTO ACCOUT_ADJ
ON ACCOUT_ADJ.ATTRIBUTO3 = DATAS.COD_CONTO
AND ACCOUT_ADJ.ATTRIBUTO2 = 'HE3'
INNER JOIN #DSL DATAS_ADJ
ON LEFT(DATAS_ADJ.COD_SCENARIO,4) = LEFT(SCEN_P.SCENARIO,4)
AND DATAS_ADJ.COD_PERIODO = 12
AND DATAS_ADJ.COD_DEST1 = DATAS.COD_DEST1
AND DATAS_ADJ.COD_DEST2 = DATAS.COD_DEST2
AND DATAS_ADJ.COD_DEST3 = DATAS.COD_DEST3
AND DATAS_ADJ.COD_DEST4 = DATAS.COD_DEST4
AND DATAS_ADJ.COD_DEST5 = DATAS.COD_DEST5
AND DATAS_ADJ.COD_CONTO = ACCOUT_ADJ.COD_CONTO
LEFT OUTER JOIN #TMP_KDPWT_INCOTERMS INFO
ON INFO.P_CODE = DATAS.COD_DEST4
GROUP BY
SCEN_P.SCENARIO,SCEN_P.PERIOD,ACCOUT_ADJ.ATTRIBUTO1,DATAS_rev.COD_DEST1,DATAS_rev.COD_DEST2
,DATAS_rev.COD_DEST3, DATAS.COD_DEST4,DATAS_rev.COD_VALUTA,DATAS_rev.COD_VALUTA_ORIGINARIA,INFO.INCOT
)
I can share the execution-order details for the normal and long-run cases as well.
Could someone please help us overcome this issue? We also don't know how to control the join execution order. Is there any way to fix the join order? Please guide us.
Thanks in advance
Vinothkumar
Without a lot more detailed information, there is no way to tell exactly why your INSERT statement shows this alternating runtime behaviour.
Based on my experience, such an analysis can take quite some time, and only a few people are available who are capable of performing it. If you can get someone like that to look at this, make sure to follow along, understand, and learn.
What I can tell from the information shared is this:
- Using temporary tables to structure a multi-stage data flow is the wrong approach on SAP HANA. Use table variables in SQLScript instead.
- If you insist on using temporary tables, at least make them column tables; this avoids the need for some internal data materialisation.
- When joining, make sure the joined columns have the same data type. The explain plan is full of TO_INT(), TO_DECIMAL(), and other conversion functions; these cost time and memory and make it hard for the optimiser(s) to estimate cardinalities.
- Since the statement uses many temporary tables, the differing join orders can easily result from different volumes of data being present when the SQL was parsed, prepared, and optimised. One option to avoid this is to have HANA ignore any cached plan for the statement; the documentation lists the HINTs for that.
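To make the table-variable suggestion concrete, here is a minimal SQLScript sketch; the schema and column names are invented for illustration. Each stage becomes a table variable read back with the :var syntax instead of a #temp table, and the trailing comment shows the kind of statement-level HINT the documentation describes for bypassing a cached plan.

```sql
-- SQLScript sketch (hypothetical names): table variables instead of #temp tables
DO BEGIN
    -- stage 1: base rows land in a table variable, not a materialised temp table
    base_rows = SELECT COD_SCENARIO, COD_PERIODO, IMPORTO
                  FROM MY_SCHEMA.SALDI_LORDI
                 WHERE COD_PERIODO = 12;

    -- stage 2: later stages read the variable via the :var syntax
    adjusted  = SELECT COD_SCENARIO, COD_PERIODO, IMPORTO * 2 AS IMPORTO
                  FROM :base_rows;

    SELECT * FROM :adjusted;
END;

-- and a single statement can be made to bypass cached plans with a hint, e.g.:
-- SELECT ... FROM ... WITH HINT (IGNORE_PLAN_CACHE);
```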
And that is about what I can say about this with the available information.

SQL Server View with many OR statements with very specific Account lookups is that a problem?

I have been working on a SQL Server view that another developer added, which has a lot of OR conditions matching very specific accounts. I feel like this is bad practice. What would be better? Another view to join? My concerns:
It requires a lot more testing to be sure of the results.
The view now seems possibly slower; what is the performance cost of doing this?
It's an eyesore for a generic view, and it has to be maintained and rolled to production every time an account is added (some kind of holding table seems ideal).
Here is the WHERE clause with all these new OR statements:
WHERE
Source = 'DST'
AND bs.Name <> 'Closed'
AND dst.BreakId IS NOT NULL
AND ((dst.Account = 79350523 AND dst.ReconRecord IS NULL) --for DST suspense
OR (dst.Account IN (98620036,98620664)) --for MFR suspense since we will need to include divnet (reconrecord is not null) in calculation
OR (dst.Account IN (3157-6218, 7848-4182, 7935-0411, 7935-8987, 8460-8721)) -- For PPS Suspense
OR (dst.Account IN (79340000, 79350304, 79350410, 79350700, 79358505, 79351733, 79352084))) -- For SPS Suspense
What would be an example of a better way to do this: another view to join, or perhaps some table(s) to join in for "PPS", "SPS", etc.?
Thanks
Why not add one flag column per purpose (MFR, PPS, SPS) to the account table? Then you don't have to update your view each time an account is added.
Update the WHERE clause to something like:
WHERE Source = 'DST' AND bs.Name <> 'Closed'
AND dst.BreakId IS NOT NULL
AND (account.MFR = 1 OR account.PPS = 1 OR account.SPS = 1)
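A variant of the same idea, close to the "holding table" the question mentions: keep the account-to-purpose mapping in a small lookup table, so adding an account becomes a data change rather than a view change. The table and column names below are hypothetical.

```sql
-- Hypothetical lookup table: one row per suspense account and purpose
CREATE TABLE SuspenseAccount (
    Account      INT         NOT NULL,
    SuspenseType VARCHAR(10) NOT NULL,  -- 'DST', 'MFR', 'PPS', 'SPS'
    PRIMARY KEY (Account, SuspenseType)
);

-- The view then joins instead of enumerating accounts in OR branches:
-- INNER JOIN SuspenseAccount sa ON sa.Account = dst.Account
-- WHERE Source = 'DST' AND bs.Name <> 'Closed' AND dst.BreakId IS NOT NULL
```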

Slow Entity Framework query, but fast generated SQL

Please consider this model; it's for a fitness-center management app.
ADHERANT is the members table,
INSCRIPTION is the subscriptions table,
and SEANCE is the individual-sessions table.
The SEANCE table contains very few rows (around 7,000).
Now the query:
var q = from n in ctx.SEANCES
        select new SeanceJournalType()
        {
            ID_ADHERANT = n.INSCRIPTION.INS_ID_ADHERANT,
            ADH_NOM = n.INSCRIPTION.ADHERANT.ADH_NOM,
            ADH_PRENOM = n.INSCRIPTION.ADHERANT.ADH_PRENOM,
            ADH_PHOTO = n.INSCRIPTION.ADHERANT.ADH_PHOTO,
            SEA_DEBUT = n.SEA_DEBUT
        };
var h = q.ToList();
This takes around 3 seconds, which is an eternity;
the same generated SQL query, run directly, is almost instantaneous:
SELECT
1 AS "C1",
"C"."INS_ID_ADHERANT" AS "INS_ID_ADHERANT",
"E"."ADH_NOM" AS "ADH_NOM",
"E"."ADH_PRENOM" AS "ADH_PRENOM",
"E"."ADH_PHOTO" AS "ADH_PHOTO",
"B"."SEA_DEBUT" AS "SEA_DEBUT"
FROM "TMP_SEANCES" AS "B"
LEFT OUTER JOIN "INSCRIPTIONS" AS "C" ON "B"."INS_ID_INSCRIPTION" = "C"."ID_INSCRIPTION"
LEFT OUTER JOIN "ADHERANTS" AS "E" ON "C"."INS_ID_ADHERANT" = "E"."ID_ADHERANT"
Any idea what's going on, or how to fix it?
Thanks
It needs some research to optimize this.
If you neglect the data transfer from the database to the server, then, as Ivan Stoev suggested, calling the ToList method is the expensive part.
As for improving the performance, it depends on your needs:
1. If you need add/delete functionality on the server side, it is probably best to stick with the list.
2. If there is no need for add/delete, consider ICollection, or even better:
3. If you have more conditions that will customize the query further, use IQueryable.
Customizing the query, for example selecting a single record based on a condition:
var q = from n in ctx.SEA.... // your query, without ToList()
var single = q.Where(x => x.Id == 1); // hypothetical condition; evaluated in the database
With this, only the matching record is transferred from the database to the server, while with the ToList conversion all the records are transferred first and the condition is evaluated afterwards.
It is not always best to use IQueryable, though; it depends on your business need.
For more references check this and this.

How concerned should I be about intermittent MS Access errors?

I'm using an MS Access (2010) database for semi-critical data management for my department. It's currently small, ~137 MB, and the queries I'm running are pretty simple, mostly just joins to combine multiple data sources.
I'm running into several types of intermittent errors, where a query function will work fine at first, but on subsequent runs, without my changing anything, it will fail. Sometimes it will start working again.
Most recently, I have a query that runs fine that I export to Excel. During the same Access session, it will work the first time, but then return an "object invalid or no longer set" error when I try to export it a second time. It works again after closing and re-opening the database. This is just one example.
I'm becoming concerned that Access may be a danger to my data, to the point I'm not comfortable continuing to use it. Is this sort of behavior typical of Access, and does it result in data loss or corruption?
Edit to add the query code for the example issue. This is actually set up as a series of Access queries, whose SQL is:
Final query =
SELECT DISTINCT Var1, Var2, ...VarX
FROM Query1
LEFT JOIN Union_query
ON (Query1.DOB = Union_query.DOB)
AND (Query1.FST_NM = Union_query.FST_NM)
AND (Query1.LST_NM = Union_query.LST_NM);
Query 1 =
SELECT *
FROM ROSTER_LATEST
INNER JOIN (SELECT max(UPDATE_DATE) AS LAST_DATE, SUB_ID
FROM ROSTER_LATEST GROUP BY SUB_ID)
AS GRAB_DATE
ON (ROSTER_LATEST.SUB_ID = GRAB_DATE.SUB_ID)
AND (ROSTER_LATEST.UPDATE_DATE = GRAB_DATE.LAST_DATE);
Union Query =
SELECT *
FROM Query2
UNION
SELECT *
from Query3;
Query 2 =
SELECT Var1, Var2, ...VarX
FROM All_FHP
INNER JOIN Query1
ON (All_FHP.Date_of_Birth = Query1.DOB)
AND (All_FHP.Last_Name = Query1.LST_NM)
AND (All_FHP.First_Name = Query1.FST_NM);
Query 3 =
SELECT Var1, Var2, ...VarX
FROM CBP_LIST
INNER JOIN Query1
ON (CBP_LIST.Date_of_Birth = Query1.DOB)
AND (CBP_LIST.Last_Name = Query1.LST_NM)
AND (CBP_LIST.First_Name = Query1.FST_NM);
As a first step, it would be a good idea to save a backup copy and try "Compact & Repair."
With the database open, click File > Info > Compact & Repair.
See if this solves your issues.

After server move a query doesn't work anymore

I need some help with a problem that's driving me crazy!
I've moved an ASP + SQL Server application from an old server to a new one.
The old one was a Windows 2000 server with MSDE; the new one is Windows 2008 with SQL Server 2008 Express.
Everything is fine, even a little faster, except for one damned function whose ASP page times out.
I've tried the query from that page in a Management Studio query window and it never finishes, while on the old server it took about 1 minute to complete.
The query is this one:
SELECT DISTINCT
TBL1.TBL1_ID,
REPLACE(TBL1_TITOLO, CHAR(13) + CHAR(10), ' '),
COALESCE(TBL1_DURATA, 0), TBL1_NUMERO,
FLAG_AUDIO
FROM
SPOT AS TBL1
INNER JOIN
CROSS_SPOT AS CRS ON CRS.TBL1_ID = TBL1.TBL1_ID
INNER JOIN
DESTINATARI_SPOT AS DSP ON DSP.TBL1_ID = TBL1.TBL1_ID
WHERE
DSP.PTD_ID_PUNTO = 1044
AND DSP.DSP_FLAG_OK = 1
AND TBL1.FLAG_AUDIO_TESTO = 1
AND TBL1.FLAG_AUDIO_GRAFICO = 'A'
AND CRS.CRS_STATO > 2
OR TBL1.TBL1_ID IN (SELECT ID
FROM V_VIEW1
WHERE ID IS NOT NULL AND V_VIEW1.ID_MODULO = 403721)
OR TBL1.TBL1_ID IN (SELECT TBL1_ID
FROM V_VIEW2
WHERE V_VIEW2.ID_PUNTO = 1044)
ORDER BY
TBL1_NUMERO
I've tried turning the two views in the last lines into tables, and then the query works, even if a little more slowly than before.
I migrated the database with its backup/restore function. Could it be an index problem?
Any suggestions?
Thanks in advance!
Alessandro
Run:
--Defrag all indexes
EXEC sp_msForEachTable 'DBCC DBREINDEX (''?'')'
--Update all statistics
EXEC sp_msForEachTable 'UPDATE STATISTICS ? WITH FULLSCAN'
If that doesn't "just fix it", the cause is probably some subtle "improvement" in the SQL Server optimizer that made things worse.
Try the Database Engine Tuning Advisor (the SSMS 2008 successor to the Index Tuning Wizard).
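If the advisor suggests nothing useful, supporting indexes on the join and filter columns used above are the usual first candidates to try by hand. The index names and column choices below are only an illustration, not derived from your actual workload:

```sql
-- Hypothetical supporting indexes for the query's filters and joins
CREATE INDEX IX_DSP_Punto ON DESTINATARI_SPOT (PTD_ID_PUNTO, DSP_FLAG_OK);
CREATE INDEX IX_CRS_Spot  ON CROSS_SPOT (TBL1_ID, CRS_STATO);
```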
After that, you'll have to start picking the query apart, removing pieces until it runs fast. Since you have two OR clauses, you effectively have three separate queries:
SELECT ... FROM ...
WHERE DSP.PTD_ID_PUNTO = 1044
AND DSP.DSP_FLAG_OK = 1
AND TBL1.FLAG_AUDIO_TESTO=1
AND TBL1.FLAG_AUDIO_GRAFICO='A'
AND CRS.CRS_STATO>2
--UNION
SELECT ... FROM ...
WHERE TBL1.TBL1_ID IN (
SELECT ID
FROM V_VIEW1
WHERE ID IS NOT NULL
AND V_VIEW1.ID_MODULO = 403721
)
--UNION
SELECT ... FROM ...
WHERE TBL1.TBL1_ID IN (
SELECT TBL1_ID
FROM V_VIEW2
WHERE V_VIEW2.ID_PUNTO = 1044
)
See which one of those is the slowest.
P.S. A query taking a minute is pretty bad. My opinion is that queries should return instantly (within the limits of human observation).