Report on SQL/SSRS 2k5 takes > 10 minutes, query < 3 mins

We have SQL Server and SSRS 2k5 on a Win 2k3 virtual server with 4 GB allocated to the VM. (The physical host running the VM has more than 32 GB.)
When we run our comparison report, it calls a stored proc on database A. The proc pulls data from several tables, and from a view on database B.
If I run Profiler and monitor the calls, I first see activity:

SQL:BatchStarting
SELECT DATABASEPROPERTYEX(DB_NAME(), 'Collation'),
       COLLATIONPROPERTY(CONVERT(char, DATABASEPROPERTYEX(DB_NAME(), 'Collation')), 'LCID')
then a wait of several minutes until the actual call of the proc shows up:
RPC:Completed
exec sp_executesql N'exec [procGetLicenseSales_ALS_Voucher] @CurrentLicenseYear, @CurrentStartDate, @CurrentEndDate, ''Fishing License'', @PreviousLicenseYear, @OpenLicenseAccounts',
    N'@CurrentStartDate datetime, @CurrentEndDate datetime, @CurrentLicenseYear int, @PreviousLicenseYear int, @OpenLicenseAccounts nvarchar(4000)',
    @CurrentStartDate='2010-11-01 00:00:00:000', @CurrentEndDate='2010-11-30 00:00:00:000',
    @CurrentLicenseYear=2010, @PreviousLicenseYear=2009, @OpenLicenseAccounts=NULL
then more time passes, and usually the report times out. It takes about 20 minutes if I let it run in Designer.
This report had been working for months, slowly, but still in less than 10 minutes.
If I drop the query (captured from profiler) into SQL Server Management Studio, it takes 2 minutes, 8 seconds to run.
Database B just had some changes and data replicated to it (we only read from the data, all new data comes from nightly replication).
Something has obviously changed, but what change broke the report? How can I test to find out why the SSRS part is taking forever and timing out, but the query runs in about 2 minutes?
Added: Please note, the stored proc returns 18 rows, every time. (We only have 18 products to track.)
The report takes those 18 rows, and groups them and does some sums. No matrix, only one page, very simple.

M Kenyon II
Database B just had some changes and data replicated to it (we only read from the data, all new data comes from nightly replication).
Ensure that all indexes survived the changes to Database B. If they still exist, check how fragmented they are and reorganize or rebuild as necessary.
Indexes can have a huge impact on performance.
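If you want to look before acting, here is a hedged sketch of that fragmentation check using the SQL 2005 DMVs; run it in the context of Database B, and note that the table name in the maintenance statement is a hypothetical stand-in:

-- List indexes over 10% fragmented in the current database
SELECT OBJECT_NAME(ips.object_id) AS TableName,
       i.name AS IndexName,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
    ON i.object_id = ips.object_id
   AND i.index_id = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 10
ORDER BY ips.avg_fragmentation_in_percent DESC;

-- Common guidance: REORGANIZE for moderate fragmentation, REBUILD above ~30%
ALTER INDEX ALL ON dbo.SomeReplicatedTable REORGANIZE;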
As far as the report taking far longer to run than your query, there can be many reasons for this. Some tricks for getting SSRS to run faster can be found here:
http://www.sqlservercentral.com/Forums/Topic859015-150-1.aspx
Edit:
Here's the relevant information from the link above.
AshMc
I recall some time ago we had the same issue, where we were passing parameters from SSRS into a SQL dataset and it slowed everything down compared to doing it in SSMS (minutes compared to seconds, like your issue). It appeared that when SSRS passed in the parameter it was possibly recalculating the value each time instead of storing it once, and that was it.
What I did was declare a new TSQL parameter first within the dataset and set it to equal the SSRS parameter and then use the new parameter like I would in SSMS.
e.g.:
DECLARE @X int
SET @X = @SSRSParameter
janavarr
Thanks AshMc, this one worked for me. However my issue now is that it will only work with a single parameter and the query won’t run if I want to pass multiple parameter values.
...
AshMc
I was able to find how I did this previously. I created a temp table, placed the values that we wanted to filter on in it, and then did an inner join from the main query to it. We only use the SSRS parameters as a filter on what to put in the temp table.
This saved a lot of report run time.
DECLARE @ParameterList TABLE (ValueA varchar(20))

INSERT INTO @ParameterList (ValueA)
SELECT ValueA
FROM TableA
WHERE ValueA = @ValueB

-- main query (table and column names here stand in for your own)
SELECT m.*
FROM MainTable m
INNER JOIN @ParameterList p
    ON m.ValueC = p.ValueA
Hope this helps,
--Dubs

Could be parameter sniffing. If you've changed some data or some of the tables, then the cached plan that satisfied the sp for the old data model may not be valid any more.
Answered a very similar thing here:
stored procedure performance issue
Quote:
If you are sure that the SQL is exactly the same and that the params are the same, then you could be experiencing a parameter sniffing problem.
It's a pretty uncommon problem. I've only had it happen to me once and since then I've always coded away the problem.
Start here for a quick overview of the problem:
http://blogs.msdn.com/b/queryoptteam/archive/2006/03/31/565991.aspx
http://elegantcode.com/2008/05/17/sql-parameter-sniffing-and-what-to-do-about-it/
Try declaring some local variables inside the sp and assigning the values of the parameters to them. Then use the local variables in place of the params.
It's a feature not a bug but it makes you go #"$#
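A minimal sketch of that local-variable pattern (the proc, table, and parameter names here are hypothetical):

CREATE PROCEDURE dbo.procExample
    @StartDate datetime,
    @EndDate datetime
AS
BEGIN
    -- Copy the parameters into locals so the optimizer cannot "sniff"
    -- the caller's specific values when it compiles the plan.
    DECLARE @LocalStartDate datetime, @LocalEndDate datetime
    SET @LocalStartDate = @StartDate
    SET @LocalEndDate = @EndDate

    SELECT SaleId, Amount
    FROM dbo.Sales
    WHERE SaleDate BETWEEN @LocalStartDate AND @LocalEndDate
END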

Related

SqlException: Data modification failed on system-versioned table because transaction time was earlier than period start time for affected records

I'm getting the above error when running a Web Job in a multi-threaded environment. I'm calling one stored procedure to perform some action; the stored procedure has code which inserts/updates/deletes records in pretty big temporal tables (3-4M records [not sure if that's relevant here]). Each run of the job deals with (inserts/updates) around 40K-80K records, based on a condition. When a single thread is running, everything goes fine. But as soon as the number of parallel jobs is set to 2 or more, I get the error. From initial analysis, the issue seems to be with the auto-generated column values for SysStartTime and SysEndTime in the history table. I have tried one of the solutions from the internet, subtracting 1 second from the date to be saved in those columns, as below
DEFAULT (dateadd(second,(-1),sysutcdatetime()))
But it's not working. I have read a few articles which say temporal tables do not work properly in a multi-threaded environment. Now I'm not sure why the issue is happening or how to resolve it in a multi-threaded environment.
Can someone here please help me understand the reason behind the error and how to fix it?
NOTE: I can't make my code run on a single thread. A minimum of three threads is required; converting to a single thread is not a solution in this case.
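For reference, here is a minimal system-versioned (temporal) table of the kind involved, with hypothetical names. SysStartTime and SysEndTime are maintained by the engine from the transaction's start time, which is why concurrent transactions touching the same rows can collide on them:

CREATE TABLE dbo.JobRecords  -- hypothetical stand-in for the real table
(
    RecordId int IDENTITY(1,1) PRIMARY KEY CLUSTERED,
    Payload nvarchar(100) NOT NULL,
    SysStartTime datetime2 GENERATED ALWAYS AS ROW START NOT NULL,
    SysEndTime datetime2 GENERATED ALWAYS AS ROW END NOT NULL,
    PERIOD FOR SYSTEM_TIME (SysStartTime, SysEndTime)
)
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.JobRecordsHistory));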

Performance degrades when querying linked Oracle DB from MS SQL Server 2012

UPDATE
I have noticed that when I do a simple SELECT against this view, results come back fairly quickly, but as soon as I add a WHERE clause restriction it slows way down.
I'm running a query that runs well, about 1.5 minutes. Yet when I include a single column from a linked Oracle DB the query takes 19 minutes. Is performance degradation like this normal?
I'm also not familiar with where I should start troubleshooting this type of issue, and am new to linked servers.
Thank you,
UPDATE
Below is the code that creates the Oracle Linked Server DB view
SELECT Patient
,Account
,MR#
,Diagnosis
,ICD9
,TransferEMTALAFormsCmpltd
,Disposition
,AdmittingDxTranscribed
,AxisIPrimaryDx
,AgeDOB
,sex
,age
,EDRecordSentToEDM
,TimeRNSignature
,timemdsignature
,RNSgntr
,TriageByNameEntered as 'Triage_End'
,TriageStartTime as 'Triage_Start'
,AddedToAdmissionsTrack
,StatusAdmitConfirmed
,SUBSTRING(statusadmitconfirmed,1,4)+'-'+SUBSTRING(statusadmitconfirmed,7,2)+'-'+SUBSTRING(statusadmitconfirmed,1,4)+' '+SUBSTRING(statusadmitconfirmed,9,2)+':'+SUBSTRING(statusadmitconfirmed,11,2)+':00.000' as 'Admit_Cnrfm_String'
,AdmittingMD
,AreaOfCare
,EDMD
,EDMDID
,Specialty
,AccessProceduresED
,MDSgntr
,Arrival
,ArrivalED
,ChiefComplaint
,TransferringFacility
,ReferMD
,PrivateName
FROM [BMH-EDIS-CL]..[WELLUSER].[Patient_Chart] a
LEFT OUTER JOIN [BMH-EDIS-CL]..[WELLUSER].[Patient_Diagnoses] b
ON a.Master_Rec_Id=b.Master_Rec_Id
AND a.Slave_Rec_Id=b.Slave_Rec_Id
WHERE TriageStartTime > '201299999999'
Just getting a single result out of this view takes some time. When I try to add EDMD to the query I am working on, that is what causes the massive slowdown. I have not yet been able to review materialized views, as mentioned in the comments.
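One commonly suggested test (hedged, since I'm assuming the relevant columns live in WELLUSER.Patient_Chart as the view implies) is to push the filter to Oracle with OPENQUERY, so SQL Server doesn't pull the whole remote table across the link and filter it locally:

SELECT *
FROM OPENQUERY([BMH-EDIS-CL],
    'SELECT Master_Rec_Id, Slave_Rec_Id, EDMD, TriageStartTime
     FROM WELLUSER.Patient_Chart
     WHERE TriageStartTime > ''201299999999''')

If this comes back fast while the four-part-name version with the same WHERE clause stays slow, the linked server is doing the filtering on the SQL Server side.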

How does query execution on SQL Server from .NET differ from Management Studio?

I investigated a problem when running a certain set of searches (from a .NET 3.5 application) against a Full Text Search DB on a SQL Server 2008 R2. Using profiler I extracted the long running query (120 seconds until Command Timeout was reached) and ran it in my SQL Server Management Studio. Duration was "0 Seconds" and depending on which one I tried 0 to 6 rows were returned.
The query looks like follows:
exec sp_executesql
N'SELECT TOP 1000 [DBNAME].[dbo].[FTSTABLE].[ID] AS [Id], [DBNAME].[dbo].[FTSTABLE].[Title], [DBNAME].[dbo].[FTSTABLE].[FirstName], [ABOUT 20 OTHERS]
FROM [DBNAME].[dbo].[FTSTABLE]
WHERE ( (
( Contains(([DBNAME].[dbo].[FTSTABLE].[Title], [DBNAME].[dbo].[FTSTABLE].[FirstName], [ABOUT 10 OTHERS]), @FieldsList1))
AND ( Contains(([DBNAME].[dbo].[FTSTABLE].[Title], [DBNAME].[dbo].[FTSTABLE].[FirstName], [ABOUT 10 OTHERS]), @FieldsList2))
AND ( Contains(([DBNAME].[dbo].[FTSTABLE].[Title], [DBNAME].[dbo].[FTSTABLE].[FirstName], [ABOUT 10 OTHERS]), @FieldsList3))
))'
,N'@FieldsList1 nvarchar(10),@FieldsList2 nvarchar(10),@FieldsList3 nvarchar(16)'
,@FieldsList1=N'"SomeString1*"'
,@FieldsList2=N'"SomeString2*"'
,@FieldsList3=N'"SomeString3*"'
The query looks a little weird as it is generated from an OR Mapper, but right now I don't want to optimize the query, as in SSMS it runs in less than one second, which shows it is not really the query making trouble.
I wrote a small test program:

SqlConnection conn = new SqlConnection("EXACTSAMECONNECTIONSTRING_USING_SAME_USER_ETC");
conn.Open();
SqlCommand command = conn.CreateCommand();
command.CommandText = "EXACTLY SAME STRING, LITERALLY, AS ABOVE IN SSMS - exec sp_executesql.....";
command.CommandTimeout = 120;
var reader = command.ExecuteReader();
while (reader.Read())  // Read() advances row by row; NextResult() would skip to the next result set
{
    Console.WriteLine(reader[0]);
}
From my local PC I also got a SqlException after 120 seconds when the command timeout was exceeded.
The SQL Server was at no moment under load heavier than a few single percent. There were no blocks at that table at any time during my tests.
I solved it after some time: I reduced the TOP 1000 to TOP 200 and suddenly the query from .NET code executed also in less than a second.
The questions I have:
Why in general is there such a huge difference between SSMS and simplest SQLCommand .NET code?
Why did reducing to TOP 200 have any effect, especially considering there were at most 6 rows in the result?
This is tied to how query plans are built. When you run it in SSMS, you probably replace the variables manually, so it's not the same.
You can read a full explanation here : http://www.sommarskog.se/query-plan-mysteries.html
Edit: maybe start with the paragraph "The Default Settings" and look at the results with manual enabling or disabling of ARITHABORT. This is the most common cause.
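A quick, hedged way to test this in SSMS, matching the setting the article singles out (reuse the exact sp_executesql call captured from Profiler):

-- ADO.NET connections default to ARITHABORT OFF, while SSMS defaults to ON,
-- so flipping it usually reproduces the application's (slow) cached plan.
SET ARITHABORT OFF;
exec sp_executesql N'SELECT TOP 1000 ...';  -- the exact call captured above
SET ARITHABORT ON;  -- restore the SSMS default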
So the preliminary answer (not yet fully verified due to its complexity) can be derived from Keorl's answer, or mostly from the link provided therein.
To describe the different symptoms, I'll explain what happens:
SQL Server cached the query against the full-text indexed table, and the cache entry includes the query's execution plan. This means that if the first query to run (the one that puts the plan into the cache) is a very rare query with an atypical execution plan, that plan is cached and used for all subsequent queries, ruining performance for most runs.
One thing I could reproduce in the end: rerunning the FT indexer/gatherer solved the problem (this time). Here too the explanation is simple: an index update throws away precompiled/cached queries. Thus a better query than the previously cached one could run first and store a much better overall plan in the cache.
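If rerunning the indexer is not an option, a hedged alternative is to evict just the offending plan from the cache; the DMV query below uses the table name from the query above, and the handle passed to DBCC FREEPROCCACHE is whatever the first statement returns:

-- Locate the cached plan(s) for the statement:
SELECT cp.plan_handle, st.text
FROM sys.dm_exec_cached_plans cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) st
WHERE st.text LIKE '%FTSTABLE%';

-- Evict a single plan by its handle (SQL Server 2008+):
DBCC FREEPROCCACHE (0x0600...);  -- substitute the plan_handle returned above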
Answer to Q1: Why in general is there such a huge difference between SSMS and simplest SQLCommand .NET code?
So why didn't this happen with SSMS? This too can be extracted from Keorl's answer: SSMS circumvents the problem by setting the ARITHABORT option, which results in its own newly compiled plan, cached separately. Hence the different observations for the same query between SSMS and code.
Answer to Q2: Why did reducing to TOP 200 have any effect, especially considering there were max 6 rows in the result?
For dynamic SQL as used in the example above, the cache is keyed on a hash of the complete query text. As the text differs between TOP 200 and TOP 1000, two different plans get compiled and cached. Parameters are not part of the hash, though, so queries that differ only in parameter values still resolve to the same cache entry.
Concluding this: Thanks Keorl for providing the means to find an answer.

string or binary data truncated after server reboot

After rebooting SQL Server 2005 Standard 9.0.3233, we have been experiencing the above error in some of our stored procedures which try to insert into a table variable from a specific column of a table. The base table has the column defined as varchar(10), but the table variable has the column being inserted into defined only as varchar(3). However, the SELECT statement only returns data with 3 or less characters.
We have not changed the data or the code base in any other way, and this is only happening on our production server. If I run the same query on a test server with the same SQL Server 2005 edition installed, but an older backup, the error does not occur. The same data is returned in both queries if the INSERT is removed, or the table variable column is extended to match the base table.
What I have noticed is that the execution plan is different when the same query is run on the two servers. On the server where the query works, there is a computed scalar operation which takes the column and does an implicit conversion to varchar(3), before it is then outputted to the nested loop join operation.
On the server that returns an error, there is a hash join and table scan of the base table instead. I have already tried to rebuild indices and update statistics on all tables involved, including using fullscan, and with the same stat_stream as in the server that works, but I can't get the same plan back.
For now we have fixed the few stored procedures which were broken by modifying the size of the table variable column, but I would like to know if there is a way to get the statistics and indices back so that they produce the same plans as before, in case there is more code out there which just hasn't executed yet.
This is known behavior and probably has nothing to do with your reboot. Effectively what's happening is that the optimizer is re-ordering the logical elements of your query for performance reasons, but this results in the truncation-error check being done before the WHERE clause's filtering.
The recommended solution is to wrap the column expression that gets assigned to your varchar(3) in a CASE that duplicates the length test from your WHERE clause. I know that sounds illogical, but it usually fixes the problem.
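A sketch of that pattern, with hypothetical table and column names standing in for yours:

DECLARE @t TABLE (ShortCol varchar(3))

INSERT INTO @t (ShortCol)
SELECT CASE WHEN LEN(b.LongCol) <= 3 THEN b.LongCol END  -- duplicates the test below
FROM dbo.BaseTable b
WHERE LEN(b.LongCol) <= 3

The CASE guarantees the value is short enough regardless of where the optimizer places the filter in the plan.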

Using SQL Stored Procedure as data for a Microsoft Dynamics CRM report

We need to have a semi-complex report in CRM that displays some accumulated lead values. The only way I see this report working is writing a stored procedure that creates a couple of temporary tables and calculates/accumulates data using cursors. Then there is the issue of making the data from the stored procedure accessible to the Reporting Services report. Does anyone know if that's possible? If I could have the option of writing a custom SQL statement to generate report data, that would be just excellent.
Any pointers ?
Edit:
To clarify my use of cursors I can explain exactly what I'm doing with them.
The basis for my report (which should be a chart btw) is a table (table1) that has 3 relevant columns:
Start date
Number of months
Value
I create a temp table (temp1) that contains the following columns:
Year
Month number
Month name
Value
First I loop through the rows in the first table and insert a row in the temp table for each month, incrementing the month, while setting the value to the total value divided by the number of months. I.e.:
2009-03-01,4,1000 in table1 yields
2009,03,March,250
2009,04,April,250
2009,05,May,250
2009,06,June,250
in the temp1 table.
A new cursor is then used to sum and create a running total from the values in temp1 and feed that into temp2 which is returned to the caller as data to chart.
example temp1 data:
2009,03,March,250
2009,04,April,200
2009,04,April,250
2009,05,May,250
2009,05,May,100
2009,06,June,250
yields temp2 data:
2009,03,March,250,250
2009,04,April,450,700
2009,05,May,350,1050
2009,06,June,250,1300
Last column is the running totals, which starts at zero for each new year.
Have you considered using views? Use a hierarchy of views if it is very complicated. Each view would represent one of your temporary tables.
EDIT Based on comments
I was thinking of SQL views, basically the same SQL as you would have written in your stored procedures.
I haven't done this - just thinking how I would start. I would make sure when the stored procedures populate the temporary tables they use the Filtered views for pulling data. I would then set the access to execute the SP to have the same security roles as the Filtered views (which should be pretty much to allow members of the PrivReportingGroup).
I would think that would cover allowing you to execute the SP in your report. I imagine if you set up the SP beforehand, the SSRS designer has some means of showing you what data is available and of selecting an SP at design time. But I don't know that for sure.
First, since most cursors are unneeded, what exactly are you doing in them? Perhaps there is a set-based solution, and then you can use a view.
Another possible line of thought, if you are doing something like running totals in the cursor: can you create a view as the source without the running total and have the report itself do that kind of calculation? (A set-based sketch follows below.)
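For what it's worth, the running total itself can be computed set-based on SQL Server 2005 (which lacks SUM() OVER (ORDER BY ...), added in 2012) using a correlated subquery. A sketch against the temp1 shape described above, seeded with the sample data:

DECLARE @temp1 TABLE ([Year] int, MonthNo int, [MonthName] varchar(10), Value money)

INSERT INTO @temp1 VALUES (2009, 3, 'March', 250)
INSERT INTO @temp1 VALUES (2009, 4, 'April', 200)
INSERT INTO @temp1 VALUES (2009, 4, 'April', 250)
INSERT INTO @temp1 VALUES (2009, 5, 'May', 250)
INSERT INTO @temp1 VALUES (2009, 5, 'May', 100)
INSERT INTO @temp1 VALUES (2009, 6, 'June', 250)

-- Monthly totals plus a running total that restarts each year
SELECT t.[Year], t.MonthNo, t.[MonthName],
       SUM(t.Value) AS MonthValue,
       (SELECT SUM(t2.Value)
        FROM @temp1 t2
        WHERE t2.[Year] = t.[Year]
          AND t2.MonthNo <= t.MonthNo) AS RunningTotal
FROM @temp1 t
GROUP BY t.[Year], t.MonthNo, t.[MonthName]
ORDER BY t.[Year], t.MonthNo

This returns the temp2 rows shown above (250/250, 450/700, 350/1050, 250/1300) without a cursor.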
Additionally, SSRS reports can use stored procs as a data source; read about how in Books Online.
I found the solution. I downloaded Report Builder 2.0 from Microsoft. This allows me to write queries and call stored procedures for the report data.
Microsoft SQL Server Report Builder link