I am executing a set of dynamically generated update queries on SQL Server using iBatis2. I have written an update element in the sqlMap as below, which executes within the scope of a transaction:
<update id="updateDepartments" parameterClass="Office">
declare @sql nvarchar(400);
<iterate property="departmentList">
<!-- build the update query and store it in @sql -->
exec sp_executesql @sql
</iterate>
</update>
I have a couple of questions related to the way above queries execute.
Do they execute as a batch or individually, i.e. is the number of network calls to the database server equal to the number of update queries generated?
How can the client code know how many rows were actually updated when the queries execute? The return value is always 1, even though multiple rows were updated.
Is there a better way to do this using iBatis2 ?
Example of Dynamic update queries formed are:
update Department set cost1=1000 where department_name='sales'
update Department set cost2=2000 where department_name='finance'
update Department set cost3=3000 where department_name='marketing'
Parameters passed as part of parameterClass are a list of objects containing:
1. Department name
2. Column name to be updated
3. Value to be updated for column in 2.
example,
['sales', 'cost1', 1000]
['finance', 'cost2', 2000]
It may be possible to perform this as a batch execution, but I'm not positive it can be; I haven't used iBatis 2 in a long time now.
What I'm sure will work is executing each SQL statement separately. There's pretty much no overhead in calling it multiple times, unless you are performing thousands of updates at once. Are you?
I think you could call it each time using a parameter class like:
class updateDptParams {
String name;
String column;
String value;
// setters & getters omitted for brevity
}
Then, the mapper could look like:
<update id="updateDepartment" parameterClass="updateDptParams">
update Department set $column$=$value$ where department_name=#name#
</update>
Note that column and value are injected as strings (using $...$), since they are supposed to have variable types. However, name is a standard iBatis JDBC parameter (using #...#), since it's always a VARCHAR. Make sure injected parameters come from a known source, and not from the user interface or another external source; otherwise your code will be vulnerable to SQL injection.
Finally, if you are updating thousands of rows, this solution can still be good. It could be improved by batching the updates, or by performing multiple updates at once using more complex SQL statements. I'm not sure how easy or error-prone that optimization would be, though.
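For illustration, a rough sketch of what a single combined statement could look like, using the example columns and department names from the question; one UPDATE with CASE expressions changes all three rows in a single round trip:

```sql
-- Sketch only: combines the question's three example updates into one statement.
UPDATE Department
SET cost1 = CASE WHEN department_name = 'sales'     THEN 1000 ELSE cost1 END,
    cost2 = CASE WHEN department_name = 'finance'   THEN 2000 ELSE cost2 END,
    cost3 = CASE WHEN department_name = 'marketing' THEN 3000 ELSE cost3 END
WHERE department_name IN ('sales', 'finance', 'marketing');
```

Generating such a statement dynamically is more work than looping, but it reduces the network round trips to one.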
I have a stored procedure (which takes an object as input) that returns either two or three tables (result sets). Now I have to categorize the objects by the number of result sets. How can I do this programmatically?
The procedure is non-editable; otherwise it would have been a small job, done by adding a flag.
Thanks in Advance
Create a DataSet and fill it by calling the stored procedure; then count the tables in the DataSet.
Count the number of successful NextResult() calls to your SqlDataReader.
Update following the discussion in comments.
Result sets become available immediately. That means the first one can become available long before the stored procedure completes, and as long as the procedure is running there is no way (apart from analyzing the source code) to determine how many more result sets will still become available. So you need to run the procedure to the end, retrieve all the result sets, and only then can you count them properly. What you suggest effectively runs the stored procedure to the end on SQL Server and analyzes there how many result sets became available. That can (sort of) be done through EXEC ... WITH RESULT SETS (SQL Server 2012 and later) with error handling, but it is going to be drastically inefficient.
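For reference, a WITH RESULT SETS call looks roughly like this; the procedure name, parameter, and column definitions below are invented for illustration. The call fails at run time if the procedure returns a different number or shape of result sets, which is exactly what makes the error-handling approach so clumsy:

```sql
-- Hypothetical procedure and columns, for illustration only.
EXEC dbo.GetDepartmentData @id = 1
WITH RESULT SETS
(
    (department_name varchar(50), cost int),  -- expected first result set
    (manager_name varchar(50))                -- expected second result set
);
```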
If you can create a new procedure why not re-implement the existing one with an extra return value?
At an earlier stage, our system was provided with tables that hold the last used autonumber (instead of using sequences). We are now redoing the client solution for the system and need to 'reinvent' how to fetch the next record number - by SQL.
The client application is made in FileMaker, the database still resides in Oracle. The challenge is to update last used autonumber AND supply it to the new record initiated in the client - in one operation.
A SELECT statement can retrieve the last used number.
An UPDATE statement can increment the last used number.
A function selecting and returning the number is not allowed to contain update statements.
A procedure may do the update and may retain the new value in an OUT parameter, but it does not return the new value to the client - unless the client can somehow read the OUT parameter from the procedure (I do not think it reads DBMS_OUTPUT).
If the procedure proceeds to do an INSERT on the table where the client is preparing an INSERT, the inserts will not be identical, as far as I can see.
So - is there a syntax that will make the OUT value accessible to the client as the result of an SQL statement including a procedure call (perhaps by making the OUT parameter somehow refer to the client's new record recnr field), or is this altogether a blind alley?
Regarding syntax: you need to wrap your PL/SQL procedure with the OUT parameter in a function (you can use an overloaded function with the same name in the same package) and return the OUT value as the function result.
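A minimal sketch of such a wrapper, assuming a hypothetical existing procedure get_next_number with an OUT parameter (both names are assumptions). The autonomous-transaction pragma is needed if the function is to be called from a SELECT, because a function invoked from SQL may not otherwise perform DML:

```sql
-- Sketch only; get_next_number and next_recnr are hypothetical names.
CREATE OR REPLACE FUNCTION next_recnr (p_table IN VARCHAR2)
RETURN NUMBER
AS
  PRAGMA AUTONOMOUS_TRANSACTION;  -- lets the function do the UPDATE when called from SQL
  v_next NUMBER;
BEGIN
  get_next_number(p_table, v_next);  -- existing procedure with the OUT parameter
  COMMIT;                            -- the autonomous transaction must end before returning
  RETURN v_next;
END;
/
```

The client could then fetch the number with a plain query such as SELECT next_recnr('DEPARTMENT') FROM dual.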
Regarding design: I do not recommend using a home-made mechanism to replace sequences. Sequences are a much better optimized and more reliable solution.
What is the DB2 equivalent of SQL Server's SET NOCOUNT ON?
From the SQL Server documentation:
"SET NOCOUNT ON... Stops the message that shows the count of the number of rows affected by a Transact-SQL statement or stored procedure from being returned as part of the result set...
For stored procedures that contain several statements that do not return much actual data, or for procedures that contain Transact-SQL loops, setting SET NOCOUNT to ON can provide a significant performance boost, because network traffic is greatly reduced."
My problem is that if I update a row in a table, a trigger runs that updates another row in a different table.
In Hibernate I get this error: "Batch update returned unexpected row
count from update; actual row count: 2; expected: 1".
I think that because of the trigger DB2 returns 2 instead of 1, which is correct. However, is there any way to make DB2 return 1 without removing the trigger, or can I disable the check in Hibernate?
How do I handle this issue?
Can anyone tell me the equivalent of SQL Server's "SET NOCOUNT ON" in DB2?
There is no equivalent to SET NOCOUNT in DB2 because DB2 does not produce any informational messages after a DML statement has completed successfully. Instead, the DB2 driver stores that type of information in a local, connection-specific data structure called the SQL communications area (SQLCA). It is up to the application (or whatever database framework or API the application is using) to decide which SQLCA variables to examine after executing each statement.
In your case, your application has delegated its database interaction to Hibernate, which compares the number of affected rows reported by DB2 in the SQLCA with the number of rows Hibernate expected its UPDATE statement to change. Since Hibernate isn't aware of the AFTER UPDATE trigger you created, it expects the update statement to affect only one row, but the SQLCA shows that two rows were updated (one by Hibernate's update statement, and one by the AFTER UPDATE trigger on that table), so Hibernate throws an exception to complain about the discrepancy.
This leaves you with two options:
Drop the trigger from that table and instead define an equivalent followup action in Hibernate. This is not an ideal solution if other applications that don't use Hibernate are also updating the table in question, but that's the sort of decision a team gets to make when they inflict Hibernate on a database.
Keep the AFTER UPDATE trigger where it is in DB2, and examine your options for defining Hibernate object mappings to determine if there's a way to at least temporarily disable Hibernate's row count verification logic. One approach that looks particularly encouraging is to specify the ResultCheckStyle.NONE option as part of a custom @SQLUpdate annotation.
For SQL Server and Sybase, there appears to be a third option: Hide the activity of an AFTER UPDATE trigger from Hibernate by activating SET NOCOUNT ON within the trigger. Unfortunately, there is no equivalent in DB2 (or Oracle, for that matter) that allows an application to selectively skip certain activities when tallying the number of affected rows.
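On SQL Server, that third option might look roughly like this (the table, audit table, and column names are invented for illustration):

```sql
-- Hypothetical trigger; SET NOCOUNT ON hides the trigger's extra row count
-- from the client, so Hibernate would see only its own affected row.
CREATE TRIGGER trg_department_audit
ON Department
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO DepartmentAudit (department_name, changed_at)
    SELECT department_name, GETDATE() FROM inserted;
END
```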
Suppose I fetch a result set (RS) based on certain conditions and start looping through it; then, in certain situations, I update, insert, or delete records, which may have been part of this RS, using separate prepared statements.
How does this affect the result set? My inclination is to think that since the Statement which fetched this RS was executed earlier in the process, the RS will now be blind to the changes made by my prepared statements.
Pseudocode :
Prepare Statement ps1
execute ps1 -> get ResultSet rs1
loop through rs1 {
    update or delete records using other prepared statements
}
Read Consistency
Oracle guarantees that the set of data seen by a statement is consistent with respect to a single point in time and does not change during statement execution (statement-level read consistency).
That is why, If you have a query such as
insert into t
select * from t;
Oracle will simply duplicate all rows without going into an infinite loop or raising an error.
There are other implications because of this.
1) Oracle reads from the rollback segment to provide you with this read-consistent image of your data. So, if your rollback segments are not correctly sized, or you commit across fetches, you'll get the "Snapshot too old" error, since your rollback data is no longer available.
OK, so if that is the case, is it possible to refresh it while making updates? I mean, aside from making the cursor updatable and using the built-in functions of the result set.
2) Each query sees the data as of the point in time it began. If by refresh you mean re-running the query, then the data you see might be different again if you commit in your PL/SQL body or within a PL/SQL loop, or if other transactions are running in your system concurrently.
It doesn't. The result set of a query/cursor is kept by the database, even if you alter or remove the rows that form the basis of this result set. So you are correct: it is blind to changes made after the statement is executed.
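In Oracle this is easy to see from a PL/SQL block: the cursor keeps returning the rows as they were when it was opened, even after they are deleted. A minimal sketch, assuming a table t with a numeric column id:

```sql
-- Sketch: statement-level read consistency in action.
DECLARE
  CURSOR c IS SELECT id FROM t;
  v_id t.id%TYPE;
BEGIN
  OPEN c;            -- the result set is frozen as of this moment
  DELETE FROM t;     -- does not affect what c will fetch
  LOOP
    FETCH c INTO v_id;    -- still returns the original rows
    EXIT WHEN c%NOTFOUND;
  END LOOP;
  CLOSE c;
END;
/
```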
I am trying to use an OdbcDataAdapter in C# to run a query which needs to select some data into a temporary table as a preliminary step. However, this initial select statement causes the query to terminate, so that data gets put into the temp table but I can't run the second query to get it out. I have determined that the problem is the presence of two select statements in a single data adapter query. That is to say, the following code only runs the first select:
select 1
select whatever from wherever
When I run my query directly through SQL Server Management Studio it works fine. Has anyone encountered this sort of issue before? I have tried the exact same query previously on similar databases using the same C# code (only the connection string is different) and had no problems.
Before you ask, the temp table is helpful because otherwise I would be running a whole lot of inner select statements which would bog down the database.
Assuming you're executing a Command whose CommandType is CommandText, you need a ; to separate the statements:
select 1;
select whatever from wherever;
You might also want to consider using a stored procedure if possible. You should also use the SQL client objects instead of the ODBC client; that way you can take advantage of additional methods that aren't available otherwise, and you're supposed to get better performance as well.
If you need to support multiple databases, you can just use the DataAdapter class and use a factory to create the concrete types. This gives you the benefits of using the native drivers without being tied to a specific backend. ORMs that support multiple back ends typically do this; the Enterprise Library Data Access Application Block, while not an ORM, does this as well.
Unfortunately I do not have write access to the DB, as my organization has been contracted just to extract information to a data warehouse. The program is generalized for use on multiple systems, which is why we went with ODBC. I suppose it would not be terrible to rewrite it using SQL Management Objects.
An ODBC connection expects a single select statement whose results are retrieved from SQL Server.
If such functionality is required, a hack can serve the purpose:
use the statement
SET NOCOUNT ON
at the top of your batch, before your select statements.
When SET NOCOUNT is ON, the count (indicating the number of rows affected by a Transact-SQL statement) is not returned.
When SET NOCOUNT is OFF, the count is returned. It applies to any SELECT, INSERT, UPDATE, or DELETE statement.
The setting of SET NOCOUNT takes effect at execute or run time, not at parse time.
SET NOCOUNT ON mainly improves stored procedure (SP) performance.
Syntax:
SET NOCOUNT { ON | OFF }
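A typical use inside a stored procedure; the procedure name and update statement below are illustrative only:

```sql
-- Hypothetical procedure; SET NOCOUNT ON suppresses the
-- "n rows affected" messages for the statements that follow.
CREATE PROCEDURE dbo.UpdateDepartmentCosts
AS
BEGIN
    SET NOCOUNT ON;
    UPDATE Department SET cost1 = cost1 * 1.1 WHERE department_name = 'sales';
END
```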