I have a SQL Server stored procedure that returns multiple result sets to a .NET app. For performance reasons I don't want to wait for all of them to be returned; I want to work on a result set as soon as it arrives, so that processing one result set and retrieving the others happen in parallel.
Is it possible with .NET and SQL Server?
This is not possible. SQL Server cannot start a statement until the previous one finishes, and a statement does not finish until it has produced the entire result set it is going to produce. The result set is a stream that the client must consume.
There are many ways to execute calls in parallel by sending a distinct request for each result set of interest, but that requires that you code your app appropriately (use multiple connections and async calls) and absolutely cannot be done from within a stored procedure.
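A minimal sketch of that client-side approach, assuming System.Data.SqlClient and that the procedure's work can be split into one query per result set (dbo.GetOrders, dbo.GetCustomers and the connection string are hypothetical):

using System.Data;
using System.Data.SqlClient;
using System.Threading.Tasks;

string connectionString = "<your connection string>";

// Each request gets its own connection so the server can work on both queries concurrently.
async Task<DataTable> QueryAsync(string sql)
{
    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand(sql, conn))
    {
        await conn.OpenAsync();
        using (var reader = await cmd.ExecuteReaderAsync())
        {
            var table = new DataTable();
            table.Load(reader);
            return table;
        }
    }
}

// Fire both requests up front, then handle each result as soon as it is available.
var ordersTask    = QueryAsync("EXEC dbo.GetOrders");       // hypothetical procedure names
var customersTask = QueryAsync("EXEC dbo.GetCustomers");
DataTable orders = await ordersTask;        // work on the first result...
DataTable customers = await customersTask;  // ...while the second query has been running in parallel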
Related
In SQL Server Management Studio, when running a query that produces a very large result set, it sometimes appears to display the results as it is loading them, rather than having them all appear at once.
My normal assumption would be that SSMS is simply populating the grid(s) with the results of a query that has already finished.
However, if I run the following query:
SELECT 1
SELECT * FROM EnormousTable
INSERT INTO SomeOtherTable([Column1]) SELECT 'Test3'
That last INSERT does not occur until after the results from the larger resultset have been fully returned.
I have two main questions:
1. What is happening here? Is SSMS breaking down the query into separate batches even without GO statements? Please note that I'm not a DBA, so if there's some fundamental reason for this behaviour that 'any DBA would know', there's a good chance I don't know it.
2. Is there a way to attain similar functionality in .NET? What I mean is, when running a batch that produces multiple result sets, is it possible to have a DataSet populated with the results of each successive query as it finishes (without waiting for all the queries to finish), and without me having to manually break up the batch (unless that's what SSMS is actually doing under the hood)?
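For reference, a minimal sketch of the .NET side of question 2, assuming System.Data.SqlClient: DataTable.Load consumes one result set and then positions the reader on the next, so each table can be handed off as soon as its result set has been read, even though the statements still execute sequentially on the server (ProcessTable and the connection string are placeholders):

using System.Data;
using System.Data.SqlClient;

string connectionString = "<your connection string>";
string batchSql = "SELECT 1; SELECT * FROM EnormousTable; INSERT INTO SomeOtherTable([Column1]) SELECT 'Test3';";

void ProcessTable(DataTable t) { /* hypothetical: work on this result set */ }

using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand(batchSql, conn))
{
    conn.Open();
    using (var reader = cmd.ExecuteReader())
    {
        var dataSet = new DataSet();
        while (!reader.IsClosed)
        {
            var table = new DataTable();
            table.Load(reader);          // reads one result set, then advances to the next
            dataSet.Tables.Add(table);
            ProcessTable(table);         // handle this result set before reading the rest
        }
    }
}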
We need to pass around 80K rows to an SP in SQL Azure. Previously we were able to do so without any glitches, but currently, once we call the SP from C#, it sometimes takes 10-15 minutes to start executing in the DB, and many times the SP is not executed at all.
One thing I have noticed is that once we make the call from C#, some operation starts in the DB, and if I try to alter the SP, that operation blocks it. Information about the blocking session ID is not available in sp_who2 or in sys.dm_exec_requests.
Any help to resolve this issue is highly appreciated.
I was able to fix the issue by setting the MaxLength of the columns of the DataTable that I sent to the DB.
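A minimal sketch of that fix, assuming the rows are streamed to the procedure as a table-valued parameter (the procedure name dbo.ImportRows, the table type dbo.RowType, the column names and the connection string are all placeholders):

using System.Data;
using System.Data.SqlClient;

string connectionString = "<your connection string>";

var rows = new DataTable();
var name = rows.Columns.Add("Name", typeof(string));
name.MaxLength = 100;                         // the fix: give string columns an explicit length
rows.Columns.Add("Amount", typeof(decimal));

// ... fill the ~80K rows here ...

using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand("dbo.ImportRows", conn))    // hypothetical SP taking a TVP
{
    cmd.CommandType = CommandType.StoredProcedure;
    var p = cmd.Parameters.AddWithValue("@Rows", rows);
    p.SqlDbType = SqlDbType.Structured;
    p.TypeName = "dbo.RowType";               // hypothetical user-defined table type on the server
    conn.Open();
    cmd.ExecuteNonQuery();
}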
My colleague created a project in Talend to read data from Oracle database.
I used his project and so I have his Job context with connection parameters to Oracle DB and Talend successfully connects on that computer.
I've created a trivial job which is composed of two components: tOracleInput which should be reading data and tLogRow which should be redirecting output to Talend's terminal.
The problem is that when I start the job, no data is output to the terminal, and instead of the rows-per-second count I just see a Starting... status.
Could it be a connection issue, an inappropriate Java version on my computer, or something else?
The Starting... status means that the query is still being executed. A simple query against the database usually takes only a few seconds, because Oracle starts returning data before it has completed a full table scan. You get this behavior with joins and filters, but not with GROUP BY / ORDER BY.
On the other hand, if you're querying a view, executing a complex query, or simply using DISTINCT, the execution can take a few minutes, because the Oracle database has to generate the whole result set on the database side before returning any records.
I have a utility which:
grabs SQL commands from a flat text file
executes them sequentially against SQL Server located on the same machine
and reports an error if an UPDATE command affects ZERO ROWS (there should never be an update command in this file that doesn't affect a record, hence it being recorded as an error)
The utility also logs any failed commands.
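(For reference, the zero-rows check described above presumably amounts to something like this, assuming the utility uses System.Data.SqlClient; the connection string and the UPDATE statement are placeholders:)

using System;
using System.Data.SqlClient;

string connectionString = "<your connection string>";
string updateSql = "UPDATE SomeTable SET SomeColumn = 'x' WHERE Id = 1";   // hypothetical command from the file

using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand(updateSql, conn))
{
    conn.Open();
    int affected = cmd.ExecuteNonQuery();   // rows-affected count reported back by SQL Server
    if (affected == 0)
        Console.WriteLine("ERROR: update affected zero rows: " + updateSql);
}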
Yet the final data in the database seems to be wrong/stale, even though my utility is reporting no failed updates and no failed commands.
I know the first and most obvious culprit is some kind of logic or runtime error in my programming of the utility itself, but I just need to know if it's THEORETICALLY possible for SQL Server to report that at least one row was affected and yet not apply the change.
If it helps, the utility always seems to execute the same number of commands correctly, and the final stale/wrong data is always the same; i.e. it appears to execute a certain number of commands that are successfully applied to the database, and then fails.
Thanks.
EDIT:
I should also note that this utility is exhibiting this behavior across 4 different production servers, each with its own dedicated local database server, and that these are beefy machines with 8-16 GB RAM each that are managed by a professional sysadmin.
Based on what you say...
It is possible for the "xx rows affected" count to be misleading if you have a trigger firing: you may be reading the count from the trigger. If so, add SET NOCOUNT ON to the trigger.
Alternatively, the data may already be the same, so you are actually doing a dummy update with the same values; add a WHERE clause that tests for differences, for example. Also note that an UPDATE inside a transaction that is later rolled back still reports a row count, as this demonstrates:
BEGIN TRANSACTION
UPDATE MyTable
SET Message = ''
WHERE ID = 2
ROLLBACK TRANSACTION
Messages:
(1 row(s) affected)
The first query run on a large dataset on a Firebird database after starting our application is always very slow. Subsequent calls to the same query (it is a stored procedure) are fine. I assume that this has to do with something being loaded into memory, but I could do with an explanation of what, and whether there is anything that can be done to get around the issue.
If it is a stored procedure, then on the first query the engine compiles the stored procedure; it also fetches the data pages into the buffers and caches the result.
On the second query the procedure is not compiled again (it is already cached) and the results are almost instant (on some operating systems the fetched pages are also held in memory, so there is no need for disk I/O).
One way is to optimize the SP or the tables.
How large are they (the number of records in each table)?
One simple way to work around this is to set up a cron script that runs once per day/hour to pre-fill the caches, so the SP is already fast when users call it.
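A minimal sketch of such a warm-up job, assuming the FirebirdSql.Data.FirebirdClient ADO.NET provider and a selectable procedure (MY_SLOW_SP and the connection string are placeholders); schedule it with cron or the Task Scheduler:

using FirebirdSql.Data.FirebirdClient;

string connectionString = "<your connection string>";

// Run the slow procedure once so it is compiled and its pages are cached before real users hit it.
using (var conn = new FbConnection(connectionString))
using (var cmd = new FbCommand("SELECT * FROM MY_SLOW_SP", conn))   // hypothetical procedure name
{
    conn.Open();
    using (var reader = cmd.ExecuteReader())
    {
        while (reader.Read()) { /* discard rows; we only want the caching side effect */ }
    }
}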
Maybe it's not about the query, but rather that the connection time (delay) is long? There was such a problem with [old] Firebird/InterBase engines.
You didn't explain which Firebird version you are using but, in version 2.50, there is a bug (CORE 3227 - slow compilation of stored procedures) that can be the cause of your problem. More details:
http://www.firebirdnews.org/?p=5282&utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+FirebirdNews+%28Firebird+News%29