Resolving an ADO timeout issue in VB6

I am running into an issue when populating an ADO recordset in VB6. The query (hitting SQL Server 2008) takes only about 1 second when I run it in SSMS. It works fine when the result set is small, but once it gets to a few hundred records it takes a long time: 800+ records require about 5 minutes to return (the query still takes only 1 second in SSMS), and 6000+ take well over 20 minutes. I have "fixed" the timeout exception by increasing the command timeout, but I was wondering whether there is a way to make it faster, since it does not seem to be the actual query that requires so much time. Is there something, such as compressing the results, that would cut the transfer time? The recordset is opened as follows:
myConnection.CommandTimeout = 2000
myConnection.ConnectionString = "Provider=SQLOLEDB;" & _
                                "Initial Catalog=DB_NAME;" & _
                                "Data Source=SERVER_NAME;" & _
                                "Network Library=DBMSSOCN;" & _
                                "User ID=USER_NAME;" & _
                                "Password=PASSWORD;" & _
                                "Use Encryption for Data=True;"
myConnection.Open
myRecordSet.Open STORED_PROC_QUERY_STRING, myConnection, adOpenStatic, adLockReadOnly
Set myRecordSet.ActiveConnection = Nothing   ' disconnect the recordset
myConnection.Close
The query returns 3 columns, which are used to fill a combo box.
UPDATE:
I ran SQL Profiler, and the runs from the client machine do roughly 100 times the reads and take roughly 100 times as long as the same queries run from SSMS. The text of the query is the same for both SSMS and the client machine according to Profiler, so I don't think it should be using a different execution plan. Could the network library or the provider have any impact on this?
Profiler stats:
From the client application: 7,041,720 reads, 59,458 ms duration, 3,900 rows
From SSMS: 30,802 reads, 238 ms duration, 3,900 rows
It seems like it is using a different execution plan, but the query text is exactly the same, and I am not sure how to check which execution plan the client is using if it differs from what is shown in SSMS.

800+ records requiring about 5 minutes = a query problem.
Look at your execution plan:
In SSMS, run:
SET SHOWPLAN_ALL ON
then run your query. It will not produce the expected result set, but an execution plan showing how the database retrieves your data. Most bad queries table scan (look at every row in the table, which is slow), so look for the word "SCAN" in the StmtText column. Try to figure out why the index is not being used on that table (its name will appear next to the word "SCAN"). If you join multiple tables and have multiple SCANs, concentrate on the largest tables first.
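For example (a sketch; the SELECT below is only a stand-in for your real query or stored procedure call):

SET SHOWPLAN_ALL ON;
GO

-- stand-in for your real query; nothing is executed while SHOWPLAN_ALL is ON
SELECT col1, col2, col3 FROM YourTable WHERE SomeKey = 42;
GO

SET SHOWPLAN_ALL OFF;
GO

The rows returned describe the plan steps, and StmtText is the column to search for "SCAN".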
Without more info this is the best "generic" help you can get.
EDIT
From reading your question, I'm not sure if you mean it is always fast from SSMS no matter the row count, but slow from VB as the rows increase. If that is the case, check this: http://www.google.com/search?q=sql+server+fast+from+ssms+slow+from+application&hl=en&num=100&lr=&ft=i&cr=&safe=images
It could be something like parameter sniffing or inconsistent connection settings (ANSI nulls, ARITHABORT, etc.).
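You can also ask the server whether it has cached more than one plan for the same text, and with which SET options each plan was compiled (a sketch; needs SQL Server 2005+ DMVs, and the LIKE filter is a placeholder for a fragment of your query text):

SELECT st.text,
       qs.execution_count,
       pa.value AS set_options   -- differing values = plans compiled under different settings
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
CROSS APPLY sys.dm_exec_plan_attributes(qs.plan_handle) pa
WHERE pa.attribute = 'set_options'
  AND st.text LIKE '%YOUR_QUERY_TEXT%';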
For the connection settings, try running these from SSMS and from VB6 (add them to the result set) and see if there are any differences:
SELECT SESSIONPROPERTY ('ANSI_NULLS') --Specifies whether the SQL-92 compliant behavior of equals (=) and not equal to (<>) against null values is applied.
--1 = ON
--0 = OFF
SELECT SESSIONPROPERTY ('ANSI_PADDING') --Controls the way the column stores values shorter than the defined size of the column, and the way the column stores values that have trailing blanks in character and binary data.
--1 = ON
--0 = OFF
SELECT SESSIONPROPERTY ('ANSI_WARNINGS') --Specifies whether the SQL-92 standard behavior of raising error messages or warnings for certain conditions, including divide-by-zero and arithmetic overflow, is applied.
--1 = ON
--0 = OFF
SELECT SESSIONPROPERTY ('ARITHABORT') -- Determines whether a query is ended when an overflow or a divide-by-zero error occurs during query execution.
--1 = ON
--0 = OFF
SELECT SESSIONPROPERTY ('CONCAT_NULL_YIELDS_NULL') --Controls whether concatenation results are treated as null or empty string values.
--1 = ON
--0 = OFF
SELECT SESSIONPROPERTY ('NUMERIC_ROUNDABORT') --Specifies whether error messages and warnings are generated when rounding in an expression causes a loss of precision.
--1 = ON
--0 = OFF
SELECT SESSIONPROPERTY ('QUOTED_IDENTIFIER') --Specifies whether SQL-92 rules about how to use quotation marks to delimit identifiers and literal strings are to be followed.
--1 = ON
--0 = OFF
Make your query like this (so you can see the connection settings in VB6):
SELECT
col1, col2
,SESSIONPROPERTY ('ARITHABORT') AS ARITHABORT
,SESSIONPROPERTY ('ANSI_WARNINGS') AS ANSI_WARNINGS
FROM ...

Related

Simple queries take very long

When I execute a query for the first time in DBeaver, it can take up to 10-15 seconds to display the result. In SQLDeveloper those queries take only a fraction of that time.
For example:
Simple "select column1 from table1" statement
DBeaver: 2006ms,
SQLDeveloper: 306ms
Example 2 (the other way around, so there's no server-side caching):
Simple "select column1 from table2" statement
SQLDeveloper: 252ms,
DBeaver: 1933ms
DBeaver's status box says:
Fetch resultset
Discover attribute column1
Find attribute column1
Late bind attribute column1
Steps 2, 3 and 4 use most of the query execution time.
I'm using Oracle 11g, SQLDeveloper 4.1.1.19 and DBeaver 3.5.8.
See http://dbeaver.jkiss.org/forum/viewtopic.php?f=2&t=1870
What could be the cause?
DBeaver looks up some metadata related to objects in your query.
On an Oracle DB, it queries catalog tables such as
SYS.ALL_ALL_TABLES / SYS.ALL_OBJECTS - only once after connection, for the first query you execute
SYS.ALL_TAB_COLS / SYS.ALL_INDEXES / SYS.ALL_CONSTRAINTS / ... - I believe each time you query a table not used before.
Version 3.6.10 introduced an option to enable/disable a hint used in those queries. Disabling the hint made a huge difference for me. The option is in the Oracle Properties tab of the connection edit dialog. Have a look at issue 360 on DBeaver's GitHub for more info.
The best way to get insight is to perform a database trace.
Run the query a few times first to eliminate the caching effect.
Then repeat the following steps in both IDEs:
activate the trace
ALTER SESSION SET tracefile_identifier = test_IDE_xxxx;
alter session set events '10046 trace name context forever, level 12'; /* binds + waits */
Provide the xxxx to identify the test. You will see this string as a part of the trace file name.
Use level 12 to see the wait events and bind variables.
run the query
close the connection
This is important so that nothing else gets traced.
Examine the two trace files to see:
what statements were performed
what number of rows was fetched
how much time elapsed in the DB
The client (IDE) is responsible for the rest of the time.
This should give you enough evidence to say whether one IDE behaves differently from the other, or whether the DB statements issued are simply different.
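To find the resulting trace files on the server, something like this should work (a sketch; v$diag_info is available from Oracle 11g on):

SELECT value
FROM v$diag_info
WHERE name = 'Default Trace File';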

How does query execution on SQL Server from .NET differ from Management Studio?

I investigated a problem when running a certain set of searches (from a .NET 3.5 application) against a full-text search DB on a SQL Server 2008 R2. Using Profiler I extracted the long-running query (120 seconds until the command timeout was reached) and ran it in my SQL Server Management Studio. Duration was "0 seconds", and depending on which one I tried, 0 to 6 rows were returned.
The query looks like follows:
exec sp_executesql
N'SELECT TOP 1000 [DBNAME].[dbo].[FTSTABLE].[ID] AS [Id], [DBNAME].[dbo].[FTSTABLE].[Title], [DBNAME].[dbo].[FTSTABLE].[FirstName], [ABOUT 20 OTHERS]
FROM [DBNAME].[dbo].[FTSTABLE]
WHERE ( (
( Contains(([DBNAME].[dbo].[FTSTABLE].[Title], [DBNAME].[dbo].[FTSTABLE].[FirstName], [ABOUT 10 OTHERS]), @FieldsList1))
AND ( Contains(([DBNAME].[dbo].[FTSTABLE].[Title], [DBNAME].[dbo].[FTSTABLE].[FirstName], [ABOUT 10 OTHERS]), @FieldsList2))
AND ( Contains(([DBNAME].[dbo].[FTSTABLE].[Title], [DBNAME].[dbo].[FTSTABLE].[FirstName], [ABOUT 10 OTHERS]), @FieldsList3))
))'
,N'@FieldsList1 nvarchar(10),@FieldsList2 nvarchar(10),@FieldsList3 nvarchar(16)'
,@FieldsList1=N'"SomeString1*"'
,@FieldsList2=N'"SomeString2*"'
,@FieldsList3=N'"SomeString3*"'
The query looks a little weird as it is generated by an OR mapper, but right now I don't want to optimize it; in SSMS it runs in less than one second, which shows it is not really the query causing trouble.
I wrote a small test program:
SqlConnection conn = new SqlConnection("EXACTSAMECONNECTIONSTRING_USING_SAME_USER_ETC");
conn.Open();
SqlCommand command = conn.CreateCommand();
command.CommandText = "EXACTLY SAME STRING, LITERALLY, AS ABOVE IN SSMS - exec sp_executesql ...";
command.CommandTimeout = 120;
var reader = command.ExecuteReader();
while (reader.Read())   // Read(), not NextResult(): we want the rows of the first result set
{
    Console.WriteLine(reader[0]);
}
From my local PC I also got a SqlException after 120 seconds, when the command timeout was exceeded.
The SQL Server was at no moment under a load heavier than a few percent, and there were no blocks on that table at any time during my tests.
I solved it after some time: I reduced the TOP 1000 to TOP 200, and suddenly the query from the .NET code also executed in less than a second.
The questions I have:
Why in general is there such a huge difference between SSMS and simplest SQLCommand .NET code?
Why did reducing to TOP 200 have any effect, especially considering there were at most 6 rows in the result?
This is tied to how query plans are built. When you run it in SSMS, you probably replace the variables manually, so it's not the same.
You can read a full explanation here : http://www.sommarskog.se/query-plan-mysteries.html
edit: maybe start with the paragraph "The Default Settings" and look at the results while manually enabling or disabling ARITHABORT. This is the most common cause.
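A quick way to test that theory from SSMS (a sketch; run the statement captured by Profiler in the same session) is to switch to the application's setting and see whether the slowness reproduces:

SET ARITHABORT OFF;   -- the client-driver default; SSMS normally runs with it ON
GO
-- run the exact sp_executesql call captured by Profiler here, in this same session
GO
SET ARITHABORT ON;    -- back to the SSMS default
GO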
So the preliminary answer (not yet fully verified due to its complexity) can be derived from Keorl's answer, or mostly from the link provided therein.
To describe the different symptoms, I'll explain what happens:
The SQL Server cached the query against the full-text indexed table, and the cache entry includes the execution plan of the query. This means that if the first query to run (which puts the plan into the cache) is a very rare query with an absurd execution plan, that plan is cached and used for all subsequent queries, ruining performance for most runs.
One thing I could reproduce in the end: rerunning the FT indexer/gatherer solved the problem (this time). Here too the explanation is simple: an index update throws away precompiled/cached queries, so a better query than the previously cached one could run first and store a much better overall plan in the cache.
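A more targeted way to get the same effect, without rebuilding the full-text index, might be to invalidate the cached plans for that table (a sketch; the table name is a placeholder):

EXEC sp_recompile N'dbo.FTSTABLE';   -- marks cached plans that reference this table for recompilation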
Answer to Q1: Why in general is there such a huge difference between SSMS and simplest SQLCommand .NET code?
So why didn't this happen with SSMS? This too can be extracted from Keorl's answer: SSMS circumvents this by setting the ARITHABORT option, which results in its own newly compiled query that is then cached. Hence the different observations for the same query between SSMS and code.
Answer to Q2: Why did reducing to TOP 200 have any effect, especially considering there were max 6 rows in the result?
For dynamic SQL as used in the example above, the cache is keyed on a hash of the complete query text. As the text differs between TOP 200 and TOP 1000, two different compiled plans would be cached. Parameters are not part of the hash, though, so queries that differ only in parameter values still use the same cache entry.
Concluding this: Thanks Keorl for providing the means to find an answer.

What SQL query is used when opening a table in SAS Guide via ODBC access

In SAS Enterprise Guide 6.1 I have a libname assigned to a database through ODBC.
If I select a table attached to the libname in the Server List panel and open it with the right-mouse menu, I get the resulting table, which I can browse.
Is it possible to see somehow which SQL query the opening of the table sends to the ODBC interface?
Addition 1:
I would like to compare the performance when running a proc sql query:
proc sql;
select *
from temp.cases (obs=100);
quit;
and when opening the table with the right-mouse menu, with the Tools > Options > Data > Performance > Maximum number of rows ... setting set to the value of 100.
In order to be able to explain the differences in performance, I would need to know which query the opening of the table with the right-mouse menu uses. Does it read the full table and then show 100 lines, or read just 100 lines and then show those 100 lines? There could be a huge difference in performance between these two ways of showing data.
Or is the only way to find out the query used in opening the data to look at the log of the server which processed the ODBC query?
Addition 2:
The problem I had was caused by the string length of some fields, which had become the maximum of 32,767. With 48 string fields that was 48 * 32,767 = about 1.5 MB per row! Apparently no strings had an "end-of-record" mark, which caused the huge data traffic between the SAS server and the SAS client.
After the data was reformatted so that the strings had a maximum length of 255, one row took only 48 * 255 = about 12 kB, which made a tremendous difference in speed when viewing the data by "opening" the table in the SAS Guide viewer! A similar performance loss was not seen when outputting the same data into "SAS Report".
I'm not sure that it is possible to see a SQL version of what is happening. Since it is SAS, it is probably using a DATA step or equivalent to populate the table browser (like using the obs= option on a DATA step SET statement).
However, if you are simply looking for a PROC SQL equivalent, the outobs option may work for you:
proc sql outobs=10;
    create table temp2 as
    select * from temp;
quit;

SQL queries in batch don't execute

My project is in Visual FoxPro and I use MS SQL Server 2008. When I fire SQL queries in a batch, some of the queries don't execute. However, no error is thrown. I haven't used BEGIN TRAN and ROLLBACK yet. What should be done?
That all depends... You haven't posted any sample of your queries to give us an indication of the possible failure. However, one thing that has given me good results going from VFP to SQL is to build the batch into a string (I prefer using TEXT/ENDTEXT for readability), then send that entire value to SQL. If there are any parameter values that come from VFP locally, you can use "?" to indicate that they will come from a variable. Then you can batch it all in a single call versus multiple individual queries...
vfpField = 28
vfpString = 'Smith'   && not used below; just an example of a second parameter

text to lcSqlCmd noshow
    select
        YT.blah,
        YT.blah2
    into
        #tempSqlResult
    from
        yourTable YT
    where
        YT.SomeKey = ?vfpField;

    select
        ost.Xblah,
        t.blah,
        t.blah2
    from
        OtherSQLTable ost
        join #tempSqlResult t
            on ost.Xblah = t.blahKey;

    drop table #tempSqlResult;
endtext

nHandle = sqlconnect( "YourDSN" )   && for a full connection string, use sqlstringconnect()
nAns = sqlexec( nHandle, lcSqlCmd, "LocalVFPCursorName" )
nHandle = sqlconnect( "your connection string" )
nAns = sqlexec( nHandle, lcSqlCmd, "LocalVFPCursorName" )
No, I don't have error trapping in here; this is just to show the principle and readability. I know the sample query could easily have been done via a single join, but if you are working with some pre-aggregations and want to put them into temporary work areas (like localized VFP cursors) to be used in your next step, this works via #tempSqlResult, as "#" indicates a temporary table on SQL Server for whatever the current connection handle is.
If you want to return MULTIPLE RESULT SETs from a single SQL call, you can do that too: just add another query that doesn't have an "into #tmpSQLblah" clause. All of those result cursors will then be brought back down to VFP based on the "LocalVFPCursorName" prefix. If you are returning 3 result sets, VFP will have 3 cursors open, called
LocalVFPCursorName
LocalVFPCursorName1
LocalVFPCursorName2
and they will be named based on the sequence of the queries in the SqlExec() call, as shown in the sketch below. But if you can provide more about what you ARE trying to do, with samples, we can offer more specific help too.
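For instance, a batch like this (a sketch; the tables and columns are placeholders) returns two result sets, which land in LocalVFPCursorName and LocalVFPCursorName1:

select CustomerId, CustomerName from Customers;
select OrderId, OrderTotal from Orders;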

Is there a size limit for the SQL text in a PeopleSoft App Engine SQL Step/Action?

I'm getting the following error: AeSymResolveStatement [775] ... Meta-SQL error at or near position 34338 in statement (108,512). The SQL statement itself is over 40,000 chars long, hence the question.
The DB is oracle. Running on Tools 8.49.24.
I know that there is a limit on the size of the SQL used in an Application Engine (SQL Step). I once received a similar error while trying to use an exceptionally long SQL statement in an Application Engine.
I wouldn't be surprised if that same limit applies to SQL Objects.
To fix the problem, I was able to split the SQL into two statements (it was an update). Hopefully that's possible in your case as well.
There is no such limit.
You can confirm this yourself by creating an SQL like:
select 'x' from PS_INSTALLATION where
1 = 1 and
1 = 1 and
1 = 1 and
1 = 1 and
/* ... copy paste '1 = 1 and' 90000 times or so times more */
1 = 1
Although it makes PSIDE quite slow, it saves and validates just fine.
There are limits within PeopleCode, mostly due to the limits on string length; however, I have never found a limit on stored SQL statements.
Personally I'd look at breaking the statement into pieces in some way.
You could:
use the inbuilt looping mechanism of App Engines;
use a mixture of SQL and PeopleCode; or
use a temporary table and perform intermediate SQLs, storing the results in the temp table (see the sketch below).
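The temp-table route might look something like this (a sketch; PS_MY_TMP is a hypothetical temporary-table record, and the columns are standard PS_JOB fields):

-- step 1: stage a manageable subset of rows in the temp table
INSERT INTO PS_MY_TMP (EMPLID, DEPTID)
SELECT EMPLID, DEPTID
FROM PS_JOB
WHERE EMPL_STATUS = 'A';

-- step 2, in a separate App Engine SQL action: work from the staged rows
UPDATE PS_MY_TMP
SET DEPTID = 'SHARED'
WHERE DEPTID = ' ';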
Apart from sparing your database a heart seizure (not to mention the DBA when he sees the statement in the SQL monitor), you are saving yourself a world of pain if you ever have to look at the statement again.
I think the SQLs in App Engines are stored as LONGs, so the limit would be 4 GB under Oracle, and something similarly huge under DB2.