Is a bulk update faster than single updates in DB2? - sql

I have a table with 10 columns, and that table contains thousands/millions of rows.
In some scenarios I want to update more than 10K records at a time. Currently my code works sequentially, like:
for i in (primary key ids for all records to be updated)
executeupdate(i)
What I thought is, instead of running the same query 10K times, I will put all the ids in one string and run a single update query, like:
executeupdate(all ids)
The actual DB queries would look like this. Suppose I have primary key ids
10001, 10002, 10003, 10004, 10005
In the first case my queries will be:
update tab1 set status = 'xyz' where id = '10001'
update tab1 set status = 'xyz' where id = '10002'
update tab1 set status = 'xyz' where id = '10003'
update tab1 set status = 'xyz' where id = '10004'
update tab1 set status = 'xyz' where id = '10005'
and my bulk update query will be:
update tab1 set status = 'xyz' where id in ('10001', '10002', '10003', '10004', '10005')
So my question is: will I get any performance improvement (execution time) by doing the bulk update,
or will the total execution time be about the same, since an index lookup and an update still has to happen for each record?
Note: I am using DB2 9.5 as the database.
Thanks.

In general, a "bulk" update will be faster, regardless of database. Of course, you can test the performance of the two, and report back.
Each call to update requires a bunch of overhead, in terms of processing the query, setting up locks on tables/pages/rows. Doing a single update consolidates this overhead.
The downside to a single update is that it might be faster overall, but it might lock underlying resources for longer periods of time. For instance, the single updates might take 10 milliseconds each, for an elapsed time of 10 seconds for 1,000 of them. However, no resource is locked for more than 10 milliseconds. The bulk update might take 5 seconds, but the resources would be locked for more of this period.
To speed these updates, be sure that id is indexed.
I should note that this is a general principle; I have not specifically tested single versus multiple update performance on DB2.

You will definitely see a performance improvement, because you will reduce the number of round trips.
However, this approach does not scale very well; thousands of IDs in one statement can get tricky. There is also a limit on the size of your query (it could be 64 KB). You could consider 'paging' through your IDs and updating, say, 100 records per update statement, as in the sketch below.
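For illustration, here is a minimal JDBC sketch of that paging idea, assuming the tab1/status/id names from the question and an open java.sql.Connection; the chunk size of 100 and the method name are arbitrary choices, nothing DB2-specific:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.List;

public class ChunkedUpdate {
    // Update status in chunks of 100 ids per statement instead of one statement per id.
    static void updateInChunks(Connection conn, List<String> ids) throws SQLException {
        final int chunkSize = 100;
        for (int start = 0; start < ids.size(); start += chunkSize) {
            List<String> chunk = ids.subList(start, Math.min(start + chunkSize, ids.size()));
            // Build "update tab1 set status = ? where id in (?, ?, ...)" with one marker per id.
            StringBuilder sql = new StringBuilder("update tab1 set status = ? where id in (");
            for (int i = 0; i < chunk.size(); i++) {
                sql.append(i == 0 ? "?" : ", ?");
            }
            sql.append(")");
            try (PreparedStatement ps = conn.prepareStatement(sql.toString())) {
                ps.setString(1, "xyz");
                for (int i = 0; i < chunk.size(); i++) {
                    ps.setString(i + 2, chunk.get(i));
                }
                ps.executeUpdate();
            }
        }
    }
}

Each statement still has to locate every id, but you pay the statement and round-trip overhead once per 100 rows instead of once per row.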

I came here with the same question a week back. Then I faced a situation where I had to update a table with around 3500 rows in a MySQL database through JDBC.
I updated the same table twice: once through a for loop, iterating over a collection of objects, and once using a bulk update query. Here are my findings:
When I updated the data in the database through iteration, it took around 7.945 seconds to execute completely.
When I came up with a rather gigantic (where 'gigantic' means 183 pages long) update query and executed it, it took around 2.24 seconds to complete the update process.
Clearly, the bulk update wins by a huge margin.
Why the difference?
To answer this, let's see how a query actually gets executed in a DBMS.
Unlike in procedural languages, you tell the DBMS what to do, but not how to do it. The DBMS then does the following:
Parsing (syntax checking), which comprises steps like lexical analysis, syntactic analysis and semantic analysis.
A series of optimizations (although what exactly counts as 'optimization' may vary from product to product).
Execution.
Now, when you update a table row by row, each of the queries you execute goes through parsing, optimization and execution. If instead you write a loop to build one rather long query and then execute it, it is parsed only once. The amount of time you save by using a batch update in place of the iterative approach grows almost linearly with the number of rows you update.
A few tips that might come in handy while updating data in your database:
It is always good practice to use indexed columns as the filter columns in any query.
Try to use integers or other numeric types rather than strings for sorting or searching data. Your server compares two numbers far more cheaply than two strings.
Avoid views and the IN clause where you can; they make your task easier, but they can slow down your database. Use joins instead.

If you are using .NET (and there's probably a similar option in other languages like Java), there is an option you can use on your DB2Connection class called BeginChain, which will greatly improve performance.
Basically, when you have the chain option activated, your DB2 client will keep all of the commands in a queue. When you call EndChain, the queue is sent to the server at once and processed in one go.
The documentation says that this should perform much better than non-chained UPDATE/INSERT/DELETEs (and this is what we've seen in my shop), but there are some differences you might need to be aware of:
No exceptions will be thrown on individual statements. They will all be batched up in one DB2Exception, which will contain multiple errors in the DB2Error property.
ExecuteNonQuery will return -1 when chaining is active.
Additionally, performance can be improved further by using a query with parameter markers instead of separate individual queries (assuming status can change as well; otherwise you might just use a literal):
UPDATE tab1
   SET status = @status
 WHERE id = @id
Edit for comment: I'm not sure if the confusion is about using parameter markers (which are just placeholders for values in a query; see the link for more details), or about the actual usage of chaining. If it is the latter, then here is some example code (I didn't verify that it works, so use at your own risk :)):
//Below is a function that returns an open DB2Connection
//object. It can vary by shop, so substitute whatever you use.
using (var conn = (DB2Connection) GetConnection())
{
    using (var trans = conn.BeginTransaction())
    {
        var sb = new StringBuilder();
        sb.AppendLine("UPDATE tab1 ");
        sb.AppendLine("   SET status = 'HISTORY' ");
        sb.AppendLine(" WHERE id = @id");

        trans.Connection.BeginChain();
        using (var cmd = trans.Connection.CreateCommand())
        {
            cmd.CommandText = sb.ToString();
            cmd.Transaction = trans;

            foreach (var id in ids)
            {
                cmd.Parameters.Clear();
                cmd.Parameters.Add("@id", id);
                cmd.ExecuteNonQuery();
            }
        }
        trans.Connection.EndChain();

        trans.Commit();
    }
}

One other aspect I would like to point out is the commit interval. If a single update statement touches a few hundred thousand rows, the transaction log also grows accordingly, and the update can become slower. I have seen a reduction in total time when using ETL tools like Informatica, which fired sets of per-record update statements followed by a commit, compared to a single conditional update statement that did everything in one go. This was counter-intuitive for me; a sketch of committing in intervals follows below.
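For what it's worth, here is a minimal JDBC sketch of committing in intervals; the tab1/status/id names are reused from the question above, and the interval of 1000 rows is an arbitrary choice to keep the example short:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.List;

public class IntervalCommit {
    // Update one row at a time but commit every 1000 rows to keep the
    // active transaction (and its log usage) small.
    static void updateWithIntervalCommits(Connection conn, List<String> ids) throws SQLException {
        conn.setAutoCommit(false);
        try (PreparedStatement ps =
                 conn.prepareStatement("update tab1 set status = 'xyz' where id = ?")) {
            int count = 0;
            for (String id : ids) {
                ps.setString(1, id);
                ps.executeUpdate();
                if (++count % 1000 == 0) {
                    conn.commit();   // commit interval: every 1000 rows
                }
            }
            conn.commit();           // commit the remainder
        }
    }
}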

Related

sql simultaneous update

I'm trying to add a like and dislike button to my application, and the backend runs on SQL. I've done my research and read somewhere that the database will handle updates to the same table and different rows. But I count the number of likes after the SELECT query and then update the value, so what happens if someone else likes it at the same time, and the value has already gone up by 1 when I go to update it? Will it end up at +2 or will it stay the same?
Thanks.
You can do this in a single query:
UPDATE items SET likes = likes + 1 WHERE item_id = :id;
That statement, assuming you're using a DB that supports transactions, will run as a transaction, and so in isolation.
If you SELECT, modify, and then UPDATE in separate queries, then you run into the concurrent-update problem you described, where one user's changes can overwrite another's with an unfortunate interleaving.
You could do it using locks to ensure no bad interleaving, but that will increase complexity, and open up potential for concurrency bugs.
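To make the difference concrete, here is a hedged JDBC sketch using the items/likes/item_id names from the query above (they are illustrative, not necessarily your actual schema):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class LikeCounter {
    // Atomic increment: the read and the write happen inside one statement,
    // so two concurrent likes both get counted.
    static void like(Connection conn, long itemId) throws SQLException {
        try (PreparedStatement ps =
                 conn.prepareStatement("UPDATE items SET likes = likes + 1 WHERE item_id = ?")) {
            ps.setLong(1, itemId);
            ps.executeUpdate();
        }
    }
    // Race-prone alternative (avoid): SELECT likes, add 1 in application code, then
    // UPDATE with the computed value; a concurrent like that lands between the
    // SELECT and the UPDATE is then overwritten and lost.
}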

Performance consideration between a cursor with 30000 records or creating 30000 update statements

Which is better?
1) A cursor that loops over 30000 records and performs the updates one by one
2) A script that contains 30000 update commands
thanks
Both should take about the same time, mainly subject to how the CURSOR is declared.
Reason? You have 30,000 individual updates, which is usually the main factor.
Note that 30,000 individual UPDATEs in one batch will probably fail because of batch size and compile time anyway...
SQL is a set-based language, and you can most likely do a single UPDATE to change all the rows in one go. If you can't, it is for one of two reasons:
You need "per row" logic: this can usually be achieved with CASE expressions, UDFs, etc.
You don't understand sets and SQL.
With more information (the SQL and logic) we could help you more...
There is a very easy way to tell: Do it and measure the time.
Other than that, having 30000 lines does not make a lot of sense when you can have just 10.
Making updates this way for anything other than data migration or maintenance doesn't sound wise either, and in those cases performance is not the issue - maintainability and legibility always are.
You know, that depends on context.
It helps, though, to learn SQL, for example. You are working at too low a level to see the real optimizations possible here. SQL is a lot more than just UPDATE, INSERT and simple SELECT statements.
1) A cursor that loops over 30000 records and performs the updates one by one
Linear, step-by-step processing. There is no way to parallelize, as SQL itself has no threading mechanism available to the user; optimization happens one statement at a time, i.e. the query optimizer only ever looks at the current statement.
2) A script that contains 30000 update commands
Assuming the script is external, it could split the work and run it concurrently over multiple connections, i.e. run more than one update stream in parallel.
But there is more:
Make a script that calculates the new values.
Bulk import them into a temporary table using the bulk copy API.
Issue ONE update statement that applies the values from the temporary table to the final one.
Or maybe have the script issue a MERGE statement for the multi-row update (a sketch follows below)? There are tons of variations here once you know more of the SQL API than "update, open cursor, simple select".
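As a rough illustration of the staging-table-plus-MERGE idea, here is a hedged JDBC sketch; the staging_status table name is made up, the tab1/status/id columns are borrowed from the first question on this page, and the MERGE syntax shown is the DB2-style flavour:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.Map;

public class StagedMerge {
    static void mergeUpdates(Connection conn, Map<String, String> newStatusById) throws SQLException {
        conn.setAutoCommit(false);
        try (Statement st = conn.createStatement()) {
            // 1) Staging table for the calculated values (name and types are illustrative).
            st.execute("CREATE TABLE staging_status (id VARCHAR(20) NOT NULL, status VARCHAR(20))");
        }
        // 2) Load the new values; a real bulk copy API would be faster still.
        try (PreparedStatement ins =
                 conn.prepareStatement("INSERT INTO staging_status (id, status) VALUES (?, ?)")) {
            for (Map.Entry<String, String> e : newStatusById.entrySet()) {
                ins.setString(1, e.getKey());
                ins.setString(2, e.getValue());
                ins.addBatch();
            }
            ins.executeBatch();
        }
        // 3) ONE statement applies all the changes.
        try (Statement st = conn.createStatement()) {
            st.executeUpdate(
                "MERGE INTO tab1 AS t " +
                "USING staging_status AS s ON t.id = s.id " +
                "WHEN MATCHED THEN UPDATE SET status = s.status");
        }
        conn.commit();
    }
}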
I do that - though with a lot more data (batches of 50,000 rows, sometimes 4-6 of them at the same time). The catch is that SQL bulk copy has some overhead, but I still manage 75,000 inserts per second that way.
A lot depends on the business question and the complexity of the logic. If these are simple updates, then the question is: calculated or externally driven? Multiplying values by 2 = calculated; updating addresses = data driven (i.e. you need the new data from somewhere).

Postgres: How to fire multiple queries at the same time?

I have one procedure which updates record values, and I want to run it against all records in a table (over 30k records). The procedure's execution time is from 2 up to 10 seconds, because it depends on network load.
Right now I'm doing UPDATE table SET field = procedure_name(params); but with that amount of records it takes up to 40 min to process the whole table.
Now I'm using 4 different connections which fork to the background and fire the query with a WHERE clause that partitions the rows by the modulo of the row id ( WHERE id_field % 4 = ... ), and this works well and cuts the time to populate the table down to ~10 mins.
But I want to avoid using cron, shell jobs and multiple connections for this. I know it can be done with libpq, but is there a way to fire a query (4 different non-blocking queries) and not wait for it to finish, within a single connection?
Or can anyone point me to some clues on how to write such a function using Postgres internals, or simply in C, and bind it as a stored procedure?
Cheers Darius
I've got a sure answer for this question - IF you will share with us what your ab workout is!!! I'm getting fat by the minute and I need answers myself...
OK I'll answer anyway.
If you are updating one table, on one database server, in 40 minutes 'single threaded' and in 10 minutes with 4 threads, the bottleneck is not the database server; otherwise, it would get bogged down in I/O. If you are executing a bunch of UPDATEs, one call per record, the network round-trip time is killing you.
I'm pretty sure this is the case, and not an I/O bottleneck on the DB or procedure_name(params) itself taking a long time. (If the procedure really took 2-10 seconds per call, it would take something like 2500 min to do 30K records.) The reason I am sure is that starting 4 concurrent processes cuts the time to 1/4, so in particular it is not an I/O issue on the DB server.
This might be the one excuse for putting business logic in an SP on the server. Optimization unfortunately means breaking the rules. The consequence is difficult maintenance, but, duh!!
However, the best solution would be to get this set up to use 'bulk update' queries. That might mean you have to take several strange and unintuitive steps such as this:
This will require a lot of modification if multiple users can run it concurrently.
Refactor the system so procedure_name(params) can get all the data it needs to process all records via a select statement. You may need to use creative joins. If it's an SP, of course, you are now moving the logic to the client.
Have the program create an XML or other importable flat file with the PK of each record to update and the new field value or values. Write all the updates to this file instead of executing them on the DB.
Have a temp table on the database that matches the layout of this flat file.
Run an import on the database: clear the temp table and import the file.
Do an update of a join of the temp table and the table to be updated, e.g. UPDATE mytbl, mytemp WHERE myPK = mytempPK SET myval = mytempnewval (use the right join syntax for your database, of course; a sketch of this step follows at the end of this answer).
You can try some of these things 'by hand' first before you bother coding, to see if it's worth the speed increase.
If possible, you can still put this all in an SP!
I'm not making any guarantees, especially as I look down at my ever-fattening belly, but, this has the potential to melt your update job down to under a minute.
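To make that last join-update step concrete for Postgres, here is a hedged JDBC sketch of the temp-table-plus-join-update pattern; mytbl, mypk, myval and the map of new values are placeholders rather than your real schema, and a COPY-based import would be faster than the batched INSERT shown:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.Map;

public class TempTableUpdate {
    static void bulkUpdate(Connection conn, Map<Long, String> newValuesByPk) throws SQLException {
        conn.setAutoCommit(false);
        try (Statement st = conn.createStatement()) {
            // Temp table matching the "flat file" layout: PK plus the new value.
            st.execute("CREATE TEMP TABLE mytemp (pk BIGINT PRIMARY KEY, newval TEXT) ON COMMIT DROP");
        }
        // Load all new values into the temp table in one batch.
        try (PreparedStatement ins =
                 conn.prepareStatement("INSERT INTO mytemp (pk, newval) VALUES (?, ?)")) {
            for (Map.Entry<Long, String> e : newValuesByPk.entrySet()) {
                ins.setLong(1, e.getKey());
                ins.setString(2, e.getValue());
                ins.addBatch();
            }
            ins.executeBatch();
        }
        // One join update applies everything (Postgres UPDATE ... FROM syntax).
        try (Statement st = conn.createStatement()) {
            st.executeUpdate("UPDATE mytbl SET myval = t.newval FROM mytemp t WHERE mytbl.mypk = t.pk");
        }
        conn.commit();
    }
}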
It is possible to update multiple rows at once. Below is an example in Postgres:
UPDATE
table_name
SET
column_name = temp.column_name
FROM
(VALUES
(<id1>, <value1>),
(<id2>, <value2>),
(<id3>, <value3>)
) AS temp("id", "column_name")
WHERE
table_name.id = temp.id
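If you are driving this from JDBC, the placeholder values can be bound with parameter markers. A hedged sketch, assuming table_name has a bigint id and a text column_name as in the template above; the explicit ::bigint/::text casts are there so Postgres does not have to guess the parameter types:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.Map;

public class ValuesUpdate {
    static void updateFromValues(Connection conn, Map<Long, String> newValuesById) throws SQLException {
        // Build one (?, ?) pair per row:
        // UPDATE ... FROM (VALUES (?, ?), (?, ?), ...) AS temp(id, column_name) ...
        StringBuilder sql = new StringBuilder(
            "UPDATE table_name SET column_name = temp.column_name FROM (VALUES ");
        for (int i = 0; i < newValuesById.size(); i++) {
            sql.append(i == 0 ? "(?::bigint, ?::text)" : ", (?::bigint, ?::text)");
        }
        sql.append(") AS temp(id, column_name) WHERE table_name.id = temp.id");

        try (PreparedStatement ps = conn.prepareStatement(sql.toString())) {
            int idx = 1;
            for (Map.Entry<Long, String> e : newValuesById.entrySet()) {
                ps.setLong(idx++, e.getKey());     // bound to the ?::bigint marker
                ps.setString(idx++, e.getValue()); // bound to the ?::text marker
            }
            ps.executeUpdate();
        }
    }
}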
PHP has some functions for asynchronous queries:
pg_send_execute()
pg_send_prepare()
pg_send_query()
pg_send_query_params()
No idea about other programming languages; you'll have to dig into the manuals.
I think you can't. A single connection can handle only a single query at a time. This is described in the libpq documentation, chapter "Asynchronous Command Processing":
"After successfully calling PQsendQuery, call PQgetResult one or more times to obtain the results. PQsendQuery cannot be called again (on the same connection) until PQgetResult has returned a null pointer, indicating that the command is done."

SQL, selecting and updating

I am trying to select 100s of rows from a DB that contains 100000s of rows, and update those rows afterwards.
The problem is I don't want to go to the DB twice for this, since the update only marks those rows as "read".
Is there any way I can do this in Java using plain JDBC libraries? (Hopefully without using stored procedures.)
Update: OK, here is some clarification.
There are a few instances of the same application running on different servers. They all need to select 100s of "UNREAD" rows sorted by the creation_date column, read the blob data within them, write it to a file and FTP that file to some server. (I know, prehistoric, but requirements are requirements.)
The read-and-update part is there to ensure each instance gets a different set of data. (The rows must be taken in order, so tricks like odds and evens won't work :/)
We select the data FOR UPDATE, the data transfers over the wire (we wait and wait), then we update the rows as "READ" and release the lock for reading. This entire thing takes too long. By reading and updating at the same time, I would like to reduce the lock time (from the moment we issue the SELECT FOR UPDATE to the actual UPDATE) so that using multiple instances would increase the rows read per second.
Still have ideas?
It seems to me there might be more than one way to interpret the question here.
1) You are selecting the rows for the sole purpose of updating them, and not reading them.
2) You are selecting the rows to show to somebody, and marking them as read either one at a time or all as a group.
3) You want to select the rows and mark them as read at the time you select them.
Let's take Option 1 first, as that seems to be the easiest. You don't need to select the rows in order to update them, just issue an update with a WHERE clause:
update table_x
set read = 'T'
where date > sysdate-1;
Looking at option 2, you want to mark them as read when a user has read them (or a downstream system has received them, or whatever). For this, you'll probably have to do another update. If you query for the primary key, in addition to the other columns you'll need in the first select, you will probably have an easier time of updating, as the DB won't have to do table or index scans to find the rows.
In JDBC (Java) there is a facility to do a batch update, where you execute a set of updates all at once; a sketch follows below. That has worked out well for me when I need to perform a lot of updates of the exact same form.
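A minimal sketch of that JDBC batch facility, reusing the table_x/read columns from the UPDATE above; the pk_id column name is an assumption:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.List;

public class MarkAsRead {
    // One prepared statement, many parameter sets, a single executeBatch() call.
    static void markAsRead(Connection conn, List<Long> primaryKeys) throws SQLException {
        try (PreparedStatement ps =
                 conn.prepareStatement("update table_x set read = 'T' where pk_id = ?")) {
            for (Long pk : primaryKeys) {
                ps.setLong(1, pk);
                ps.addBatch();       // queue this parameter set
            }
            ps.executeBatch();       // send all queued updates to the DB together
        }
    }
}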
Option 3, where you want to select and update all in one shot: I don't find much use for this, personally, but that doesn't mean others don't. I suppose some kind of stored procedure would reduce the round trips. I'm not sure what DB you are working with here and can't really offer specifics.
Going to the DB isn't so bad. If you aren't returning anything 'across the wire', then an update shouldn't do you too much damage, and it's only a few hundred thousand rows. What is your worry?
If you're doing a SELECT in JDBC and iterating over the ResultSet to UPDATE each row, you're doing it wrong. That's an (n+1) query problem that will never perform well.
Just do an UPDATE with a WHERE clause that determines which of those rows needs to be updated. It's a single network round trip that way.
Don't be too code-centric. Let the database do the job it was designed for.
Can't you just use the same connection without closing it?

Oracle SQL technique to avoid filling trans log

Newish to Oracle programming (from Sybase and MS SQL Server). What is the "Oracle way" to avoid filling the trans log with large updates?
In my specific case, I'm doing an update of potentially a very large number of rows. Here's my approach:
UPDATE my_table
SET a_col = null
WHERE my_table_id IN
(SELECT my_table_id FROM my_table WHERE some_col < some_val and rownum < 1000)
...where I execute this inside a loop until the updated row count is zero.
Is this the best approach?
Thanks,
The amount of redo and undo generated will not be reduced at all if you break the UPDATE up into multiple runs of, say, 1000 records. On top of that, the total query time will most likely be higher than running a single large SQL statement.
There's no real way to address the UNDO/REDO log issue in UPDATEs. With INSERTs and CREATE TABLEs you can use a DIRECT aka APPEND option, but I guess this doesn't easily work for you.
It depends on the percentage of rows almost as much as on the number. It also depends on whether the update makes the rows longer than before, e.g. going from null to 200 bytes in every row. This could have an effect on your performance - chained rows.
Either way, you might want to try this:
Build a new table with the column corrected as part of the select instead of an update. You can build that new table via CTAS (Create Table As Select), which can avoid logging.
Drop the original table.
Rename the new table.
Reindex, repoint constraints, rebuild triggers, recompile packages, etc.
You can avoid a lot of logging this way; a sketch of the approach is shown below.
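A hedged JDBC sketch of that CTAS-and-swap approach; the my_table/my_table_id/a_col/some_col names come from the question, some_val stands in for the real predicate value, the column list is abbreviated, and you would still need to recreate indexes, constraints, grants and triggers afterwards:

import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

public class CtasSwap {
    static void rebuildWithNulledColumn(Connection conn) throws SQLException {
        try (Statement st = conn.createStatement()) {
            // Build the corrected copy with minimal logging (NOLOGGING is a request, not a guarantee).
            st.execute(
                "CREATE TABLE my_table_new NOLOGGING AS " +
                "SELECT my_table_id, " +
                "       CASE WHEN some_col < some_val THEN NULL ELSE a_col END AS a_col, " +
                "       some_col " +        // list every remaining column of my_table here
                "  FROM my_table");
            st.execute("DROP TABLE my_table");
            st.execute("ALTER TABLE my_table_new RENAME TO my_table");
            // Recreate indexes, constraints, grants and triggers here.
        }
    }
}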
Any UPDATE is going to generate redo. Realistically, a single UPDATE that updates all the rows is going to generate the smallest total amount of redo and run for the shortest period of time.
Assuming you are updating the vast majority of the rows in the table, if there are any indexes that use A_COL, you may be better off disabling those indexes before the update and then rebuilding them with NOLOGGING specified after the massive UPDATE statement. In addition, if there are any triggers or foreign keys that would need to be fired/validated as a result of the update, getting rid of those temporarily might be helpful.