What is the DB2 equivalent of SQL Server's SET NOCOUNT ON?

"From the SQL Server documentation:
SET NOCOUNT ON... Stops the message that shows the count of the number of rows affected by a Transact-SQL statement or stored procedure from being returned as part of the result set...
For stored procedures that contain several statements that do not return much actual data, or for procedures that contain Transact-SQL loops, setting SET NOCOUNT to ON can provide a significant performance boost, because network traffic is greatly reduced."
My problem is that when I update a row in a table, a trigger runs that updates another
row in a different table.
In Hibernate I get this error: "Batch update returned unexpected row
count from update; actual row count: 2; expected: 1".
I think DB2 returns 2 instead of 1 because of the trigger, which
is correct. However, is there any way to make DB2 return 1
without removing the trigger, or can I disable the check in Hibernate?
How should I handle this issue?
Can anyone please tell me the "SET NOCOUNT ON" (SQL Server) equivalent in DB2?

There is no equivalent to SET NOCOUNT in DB2 because DB2 does not produce any informational messages after a DML statement has completed successfully. Instead, the DB2 driver stores that type of information in a local, connection-specific data structure called the SQL communications area (SQLCA). It is up to the application (or whatever database framework or API the application is using) to decide which SQLCA variables to examine after executing each statement.
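For example, a routine written in DB2's SQL PL can read the affected-row count itself with GET DIAGNOSTICS. A minimal sketch, assuming illustrative table, column, and variable names:

CREATE OR REPLACE PROCEDURE update_and_count ()
LANGUAGE SQL
BEGIN
  DECLARE v_rows INTEGER DEFAULT 0;
  UPDATE my_table
  SET    status = 'DONE'
  WHERE  status = 'PENDING';
  -- the row count the driver would otherwise expose through the SQLCA
  GET DIAGNOSTICS v_rows = ROW_COUNT;
END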
In your case, your application has delegated its database interaction to Hibernate, which compares the number of affected rows reported by DB2 in the SQLCA with the number of rows Hibernate expected its UPDATE statement to change. Since Hibernate isn't aware of the AFTER UPDATE trigger you created, it expects the update statement to affect only one row, but the SQLCA shows that two rows were updated (one by Hibernate's update statement, and one by the AFTER UPDATE trigger on that table), so Hibernate throws an exception to complain about the discrepancy.
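To sketch the scenario (trigger syntax varies slightly between DB2 versions; all names here are illustrative), the trigger in question would look something like this:

CREATE TRIGGER trg_main_after_upd
  AFTER UPDATE ON main_table
  REFERENCING NEW AS n
  FOR EACH ROW
  -- this second update is what pushes the reported row count from 1 to 2
  UPDATE other_table
  SET    mirrored_value = n.some_value
  WHERE  other_table.main_id = n.id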
This leaves you with two options:
Drop the trigger from that table and instead define an equivalent followup action in Hibernate. This is not an ideal solution if other applications that don't use Hibernate are also updating the table in question, but that's the sort of decision a team gets to make when they inflict Hibernate on a database.
Keep the AFTER UPDATE trigger where it is in DB2, and examine your options for defining Hibernate object mappings to determine if there's a way to at least temporarily disable Hibernate's row count verification logic. One approach that looks particularly encouraging is to specify the ResultCheckStyle.NONE option as part of a custom @SQLUpdate annotation.
For SQL Server and Sybase, there appears to be a third option: Hide the activity of an AFTER UPDATE trigger from Hibernate by activating SET NOCOUNT ON within the trigger. Unfortunately, there is no equivalent in DB2 (or Oracle, for that matter) that allows an application to selectively skip certain activities when tallying the number of affected rows.
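For completeness, that SQL Server workaround looks roughly like the following sketch, with illustrative names:

CREATE TRIGGER trg_main_after_upd ON main_table
AFTER UPDATE
AS
BEGIN
    -- keep this trigger's own "rows affected" count away from the client
    SET NOCOUNT ON;
    UPDATE a
    SET    mirrored_value = i.some_value
    FROM   audit_table a
    JOIN   inserted i ON i.id = a.main_id;
END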

Related

Stored procedures vs standard select update, avoid locks

In order to retrieve an ID, I first do a select and then an update, in two consecutive queries.
The problem is that I am having problems with locked rows. I've read that putting both of these statements, the Select and the Update, in one stored procedure helps with the locks. Is this true?
The queries I run are:
select counter
from dba.counter_list
where table_name = :TableName
update dba.counter_list
set counter = :NewCounter
where table_name = :TableName
The problem is that multiple users may select the same row, and it is also possible that they update the same row.
Assumptions:
you're using Sybase ASE
your select returns a single value for counter
you may want the old counter value for some purpose other than performing the update
Consider the following update statement which should eliminate any race conditions that may occur with multiple users running your select/update logic concurrently:
declare @counter int -- change to the appropriate datatype
update dba.counter_list
set @counter = counter, -- grab current value
counter = :NewCounter -- set to new value
where table_name = :TableName
select @counter -- send previous counter value to client
the update obtains an exclusive lock on the desired row (or page/table depending on table design and locking scheme)
with an exclusive lock in place you're able to retrieve the current value and set the new value with a single statement
Whether you submit the above via a SQL batch or a stored proc call is up to you and your DBA to decide ...
if statement cache is disabled, a SQL batch will need to be compiled each time it's submitted to the dataserver
if statement cache is enabled, and you submit this SQL batch on a regular basis then there's a chance the previous query plan is still in statement/procedure cache thus eliminating the (costly) compilation step
if a copy of a previous stored proc (query) plan is not in procedure cache then you'll incur the (costly) compilation step when loading a (proc) query plan into procedure cache
a stored proc is typically easier to replace in the event of a syntax/logic/performance issue (as opposed to editing, and possibly compiling, a front-end application)
... add your (least) favorite argument for SQL batch vs stored proc (vs prepared statement?) vs ??? ...
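If you go the stored proc route, a minimal sketch (Sybase/T-SQL style; the proc name is illustrative) could look like this:

create procedure dba.get_next_counter
    @TableName  varchar(255),
    @NewCounter int
as
begin
    declare @counter int

    -- grab the old value and write the new one in a single statement,
    -- holding an exclusive lock on the row for the duration
    update dba.counter_list
    set    @counter = counter,
           counter  = @NewCounter
    where  table_name = @TableName

    -- return the previous counter value to the client
    select @counter
end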
Is the table counter_list accessed by multiple clients concurrently?
The best practice for OLTP is to call a stored procedure that performs the update logic in one transaction.
Check that the table dba.counter_list has an index on the column table_name.
Check also that it is row-level locked.
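In Sybase ASE terms those two checks might translate into something like the following sketch (the index name is illustrative):

create index idx_counter_list_table_name
    on dba.counter_list (table_name)

-- switch the table to row-level (datarows) locking
alter table dba.counter_list lock datarows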

Do I have to include "SELECT @@RowCount" if I have more than one SQL statement?

I know that if I execute a single SQL statement that UPDATEs or DELETEs some data, it will return the number of rows affected.
But if I have multiple SQL statements in a sql script, and I want to know the number of rows affected by the last statement executed, will it still return that automatically, or do I need a
SELECT @@RowCount
at the end of the script?
The code in question is not a Stored Procedure. Rather, it is a parameterized SQL script stored in an arbitrary location, executed using the ExecuteStoreCommand function in Entity Framework, as in:
var numberOfRowsAffected = context.ExecuteStoreCommand<int>(mySqlScript, parameters);
It depends on the NOCOUNT setting when executing your quer(y/ies).
If NOCOUNT is ON then no DONE_IN_PROC messages will be returned.
If NOCOUNT is OFF, the default setting, then DONE_IN_PROC messages (e.g. row counts) will be returned.
Both of these situations are different to executing,
SELECT @@ROWCOUNT;
which will return a result set with a single scalar value, different from a DONE_IN_PROC message. This will occur, regardless of the setting of NOCOUNT.
I believe that SELECT @@ROWCOUNT is sometimes used to make Entity Framework "play" with more complex TSQL statements, because EF both
requires a count for post validation
and will accept a scalar number result set as a substitute for a DONE_IN_PROC message.
It's important that SELECT @@ROWCOUNT; is executed immediately after the last query statement, because many statements reset @@ROWCOUNT and would therefore yield an unexpected result.
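For instance, a sketch with hypothetical table, column, and parameter names:

UPDATE dbo.Orders
SET    Status = 'Shipped'
WHERE  BatchId = @BatchId;

-- must follow the statement being counted immediately;
-- most other statements in between would reset @@ROWCOUNT
SELECT @@ROWCOUNT;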
Just to be specific on the answer part: you would need to add SELECT @@RowCount to return the number of rows affected by the last statement.
I think the confusion might be due to the rows returned in the SSMS window while executing a query. By default SSMS shows the number of rows returned for all sql statements, but it returns affected rows as a message, not a dataset.
@@ROWCOUNT will automatically return the number of rows affected by the last statement.
Please find the msdn link here
https://msdn.microsoft.com/en-us/library/ms187316.aspx

Suppress result sets from trigger

I have a sql statement that first updates, then selects:
UPDATE myTable
SET field1=@someValue
WHERE field2=@someValue2
SELECT 1 returnValue
The process that consumes the results of this statement is expecting a single result set, simple enough.
The problem arises because an update trigger was added to the table that produces a result set, i.e. it selects like so:
SELECT t_field1, t_field2, t_field3 FROM t_table
The obvious solution is to split up the statements. Unfortunately, the real-world implementation of this is complex and to be avoided if possible. The trigger is also necessary and cannot be disabled.
Is there a way to suppress the results from the update, returning only the value from the select statement?
The ability to return result sets from triggers is deprecated in SQL Server 2012 and will be removed in a future version (maybe even in SQL Server 2016, but probably in the next version). Change your trigger to return the data in some other way. If it is needed just for debugging, use PRINT instead of SELECT. If it is needed for some other reasons, insert the data into a temporary table and perform the SELECT from the calling procedure (only when needed).
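As a sketch of that last suggestion (the audit table name is hypothetical), the trigger could stash the rows instead of SELECTing them back to the client:

CREATE TRIGGER trg_myTable_upd ON myTable
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- store the rows instead of returning an extra result set
    INSERT INTO dbo.myTable_audit (t_field1, t_field2, t_field3)
    SELECT t_field1, t_field2, t_field3
    FROM   t_table;
END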

Debug Insert and temporal tables in SQL 2012

I'm using SQL Server 2012, and I'm debugging a stored procedure that does some INSERT INTO #temporal table SELECT statements.
Is there any way to view the data selected in the command (the subquery of the INSERT INTO)?
Is there any way to view the data inserted and/or the temporal table where the insert made the changes?
It doesn't matter if it is the total rows, not one by one.
UPDATE:
Requirements from AT Compliance and Company Policy require that any modification go through the test process, and it's probable this will be managed by another team. Is there any way to avoid any change to the script?
The main idea is that the AT user checks the outputs on their work desktop and copies and pastes them, without making any change to the environment or the product.
Thanks and kind regards.
If I understand your question correctly, then take a look at the OUTPUT clause:
Returns information from, or expressions based on, each row affected by an INSERT, UPDATE, DELETE, or MERGE statement. These results can be returned to the processing application for use in such things as confirmation messages, archiving, and other such application requirements.
For instance:
INSERT INTO #temporaltable
OUTPUT inserted.*
SELECT *
FROM ...
This will give you all the rows from the INSERT statement that were inserted into the temporal table, i.e. the rows that were selected from the other table.
Is there any reason you can't just do this: SELECT * FROM #temporal? (And debug it in SQL Server Management Studio, passing in the same parameters your application is passing in).
It's a quick and dirty way of doing it, but one reason you might want to do it this way over the other (cleaner/better) answer, is that you get a bit more control here. And, if you're in a situation where you have multiple inserts to your temp table (hopefully you aren't), you can just do a single select to see all of the inserted rows at once.
I would still probably do it the other way though (now that I know about it).
I know of no way to do this without changing the script. However, for the future, you should never write a complex stored proc or script without a debug parameter that allows you to put in the data tests you will want. Make it the last parameter with a default value of 0 and you won't even have to change your current code that calls the proc.
Then you can add statements like the one below everywhere you want to check intermediate results. Further, in debug mode you might always roll back any transactions so that a bug will not affect the data.
IF @debug = 1
BEGIN
SELECT * FROM #temp
END
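Put together, a sketch of the pattern (procedure, parameter, and table names are illustrative):

CREATE PROCEDURE dbo.usp_LoadSomething
    @SomeParam int,
    @debug     bit = 0   -- last parameter, defaults to off
AS
BEGIN
    SET NOCOUNT ON;

    SELECT SomeColumn
    INTO   #temp
    FROM   dbo.SourceTable
    WHERE  SomeColumn = @SomeParam;

    IF @debug = 1
    BEGIN
        -- intermediate results, shown only when debugging
        SELECT * FROM #temp;
    END

    -- ... the rest of the procedure keeps working with #temp ...
END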

multiple select statements in single ODBCdataAdapter

I am trying to use an OdbcDataAdapter in C# to run a query which needs to select some data into a temporary table as a preliminary step. However, this initial select statement is causing the query to terminate, so that data gets put into the temp table but I can't run the second query to get it out. I have determined that the problem is the presence of two select statements in a single data adapter query. That is to say, the following code only runs the first select:
select 1
select whatever from wherever
When I run my query directly through SQL Server Management Studio it works fine. Has anyone encountered this sort of issue before? I have tried the exact same query previously on similar databases using the same C# code (only the connection string is different) and had no problems.
Before you ask, the temp table is helpful because otherwise I would be running a whole lot of inner select statements which would bog down the database.
Assuming you're executing a Command whose CommandType is CommandText, you need a ; to separate the statements.
select 1;
select whatever from wherever;
You might also want to consider using a Stored Procedure if possible. You should also use the SQL client objects instead of the ODBC client. That way you can take advantage of additional methods that aren't available otherwise. You're supposed to get better performance as well.
If you need to support multiple databases you can just use the DataAdapter class and use a factory to create the concrete types. This gives you the benefits of using the native drivers without being tied to a specific backend. ORMs that support multiple back ends typically do this. The Enterprise Library Data Access Application Block, while not an ORM, does this as well.
Unfortunately I do not have write access to the DB as my organization has been contracted just to extract information to a data warehouse. The program is one generalized for use on multiple systems which is why we went with ODBC. I suppose it would not be terrible to rewrite it using SQL Management Objects.
An ODBC connection expects a single select statement and the retrieval of its result set from SQL Server.
If any such functionality is required, a hack can serve the purpose:
use the statement
SET NOCOUNT ON
at the top of your script.
When SET NOCOUNT is ON, the count (indicating the number of rows affected by a Transact-SQL statement) is not returned.
When SET NOCOUNT is OFF, the count is returned. It is used with any SELECT, INSERT, UPDATE, DELETE statement.
The setting of SET NOCOUNT is set at execute or run time and not at parse time.
SET NOCOUNT ON mainly improves stored procedure (SP) performance.
Syntax:
SET NOCOUNT { ON | OFF }
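Put together, the batch submitted through the data adapter might look roughly like this sketch (table and column names are illustrative):

SET NOCOUNT ON;

-- preliminary step: stage the data in a temp table
SELECT col1, col2
INTO   #staging
FROM   dbo.SourceTable
WHERE  col3 = 1;

-- the single result set the data adapter will consume
SELECT col1, col2
FROM   #staging;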