SQL update command and table locking

I have a SQL table and a VB.NET application.
The application loads the SQL table into a DataTable, then starts updating records by fetching some websites; filling a DataTable row with new data takes an average of 1.4 seconds.
I was wondering whether it's OK to use the SQL UPDATE command to update a single record in the table each time a record changes, which means running the UPDATE command for a single record roughly every 1.4 seconds.
The problem is that other applications use this table at the same time, and one of them writes to the same table, but to other columns. Will the table get locked for the other applications during this process?

SQL Server won't lock the whole table by default (it takes row-level locks, which only escalate to a table lock under pressure), but you should probably lock the table while updating it if those other apps are altering the same data, to prevent corruption. Performance will take a small hit, yes, but that's better than having to rebuild the table because it got messed up. This is a good explanation of locking:
http://www.developerfusion.com/article/84509/managing-database-locks-in-sql-server/
If the other applications are just querying the table while you're updating, there shouldn't be any impact, but they might get odd results if they query it mid-update. Locking is mainly about the risk of two sessions modifying the same record at the same time.
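If you do decide to take an explicit lock, here is a minimal sketch; the table, column and key names are invented for illustration:

-- Exclusive table lock held until COMMIT; other writers will block.
-- For a single-row update by key, the default row lock is usually enough;
-- this is the heavy-handed option the article above discusses.
BEGIN TRANSACTION;
UPDATE mytable WITH (TABLOCKX)
SET some_column = 'new value'
WHERE id = 42;
COMMIT TRANSACTION;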

You need to find out why it takes 1.4 seconds to update a single record. Chances are most of that time is client-side processing in VB.NET (while it's fetching the websites). For example, it could be taking 1.3 seconds to perform the necessary work on the client and only 0.1 seconds to update the record on the server. In that case, you could perform the updates in batches to minimize database access time.
The table will get locked, but only for a short time, so in general you don't need to worry about that.
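As a hedged sketch of that batching idea (table and column names are invented; requires SQL Server 2008 or later for the VALUES table constructor), several fetched results can be pushed to the server in one round trip instead of one UPDATE per record:

-- One statement updates several rows at once.
UPDATE t
SET t.fetched_data = v.fetched_data
FROM mytable AS t
JOIN (VALUES
    (1, N'result for record 1'),
    (2, N'result for record 2'),
    (3, N'result for record 3')
) AS v (id, fetched_data)
ON t.id = v.id;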

Related

What to do if data becomes available faster than it can be inserted?

I am inserting large amounts of data into a table.
For example once every 15 minutes, N records of data become available to be inserted into the table.
My question is: what should I do if inserting the N records takes more than 15 minutes? That is, the next insertion cannot begin because the previous one is still in progress.
Please assume that I've used the most affordable hardware and that even dropping indexes before starting to insert does not make the insert faster than 15 minutes.
My preference is not to drop the indexes, though, because the table is queried at the same time. What's the best practice in such a scenario?
P.S. I don't have any actual code; I'm just thinking through a possible scenario.
If you are receiving/loading a large quantity of data every quarter hour, you have an operational requirement, not an application requirement, so use an operational solution.
Every database has a bulk-load utility; SQL Server is no exception, and it even calls the statement BULK INSERT:
BULK INSERT mytable FROM 'my_data_file.dat'
Such utilities are built for raw speed and will outstrip any application-level alternative.
Write a shell script that receives the data into a file, formats it as required using shell utilities, and invokes BULK INSERT.
Wire the process up to crontab (or the equivalent Windows scheduler, such as AT, if you are running on Windows).
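For example, a slightly fuller sketch than the one-liner above; the file path and the format options are assumptions about your data file:

BULK INSERT mytable
FROM 'C:\data\my_data_file.dat'
WITH (
    FIELDTERMINATOR = ',',    -- match the file's actual delimiter
    ROWTERMINATOR = '\n',
    BATCHSIZE = 50000,        -- commit in chunks so a failure doesn't roll everything back
    TABLOCK                   -- enables a minimally logged, faster load
);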
The first thing is to look for basic insert optimizations.
You can find many posts about it:
What is the fastest way to insert large number of rows
Insert 2 million rows into SQL Server quickly
The second thing is to find out why it takes more than 15 minutes. Many things can explain that: locks, isolation level, etc. So try to challenge those assumptions (for example, can some portion of the queries read uncommitted records?).
The third thing is finding the right batch size for the insert: consider splitting the load into several smaller chunks of data with intermediate commits, as sketched below. Too many inserts in one uncommitted transaction can have a bad effect on the server (log-file and lock wise, since it must be able to roll back the entire transaction).
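A minimal sketch of that chunking idea, assuming the rows land in a staging table first (all names here are hypothetical):

-- Move rows from staging to the target in 10k chunks, committing each chunk
-- so locks are released and the log can clear between batches.
WHILE 1 = 1
BEGIN
    BEGIN TRANSACTION;
    DELETE TOP (10000) FROM staging
    OUTPUT DELETED.* INTO target;
    IF @@ROWCOUNT = 0
    BEGIN
        COMMIT TRANSACTION;
        BREAK;
    END;
    COMMIT TRANSACTION;  -- intermediate commit keeps locks and the log small
END;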

Speed up SQL insert statements

I am facing an issue with an ever-slowing process that runs every hour and inserts around 3-4 million rows daily into a SQL Server 2008 database.
The schema consists of a large table that contains all of the above data, with a clustered index on a datetime field (by day), a unique index on a combination of fields to exclude duplicate inserts, and a couple more indexes on two varchar fields.
The typical behavior as of late is that the insert statements get suspended for a while before they complete. The overall process used to take 4-5 minutes and is now usually well over 40 minutes.
The inserts are executed by a .NET service that parses a series of XML files, performs some data transformations, and then inserts the data into the DB. The service has not changed at all; the inserts just take longer than they used to.
At this point I'm willing to try everything. Please, let me know whether you need any more info and feel free to suggest anything.
Thanks in advance.
It sounds like you have exhausted the buffer pool's ability to cache all the pages needed for the insert process. Append-style inserts (like those against your datetime clustered index) have a very small working set of just a few pages. Random-style inserts have basically the entire index as their working set: if you insert a row at a random location, the existing page that row belongs on must be read first.
That probably means tons of disk seeks for the inserts into the non-sequential indexes.
Make sure to insert all rows in one statement. Use BULK INSERT or table-valued parameters (TVPs). This allows SQL Server to optimize the query plan by sorting the inserts by key value, making the I/O much more efficient.
This helps, but will not fully restore the original performance on its own (I have seen about a 5x improvement in similar situations). To regain the original performance you must bring the working set back into memory: add RAM, purge old data, or partition such that you only need to touch very few partitions.
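A minimal sketch of the TVP route (the type, procedure and column names are invented; the .NET service would pass one @batch parameter per hourly run):

-- One-time setup: a table type shaped like the target table.
CREATE TYPE dbo.RowBatch AS TABLE (
    event_time DATETIME     NOT NULL,
    payload    VARCHAR(100) NOT NULL
);
GO
CREATE PROCEDURE dbo.InsertBatch
    @batch dbo.RowBatch READONLY
AS
    -- Sorting by the clustered key keeps the insert append-style.
    INSERT INTO dbo.big_table (event_time, payload)
    SELECT event_time, payload
    FROM @batch
    ORDER BY event_time;
GO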
Drop the indexes before the insert and recreate them on completion.
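If you go that route, a two-line sketch with placeholder names:

DROP INDEX ix_payload ON dbo.big_table;
-- ... run the bulk insert here ...
CREATE INDEX ix_payload ON dbo.big_table (payload);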

Postgres: How to fire multiple queries at the same time?

I have a procedure that updates record values, and I want to run it against all records in a table (over 30k records). The procedure's execution time ranges from 2 up to 10 seconds, because it depends on network load.
Right now I'm doing UPDATE table SET field = procedure_name(params); but with that number of records it takes up to 40 minutes to process the whole table.
Currently I use 4 different connections, which fork to the background and fire the query with a WHERE clause that iterates over the modulo of the row IDs (WHERE id_field % 4 = ...) to speed things up, and this works well, cutting the time to populate the table down to about 10 minutes.
But I want to avoid using cron, shell jobs and multiple connections for this. I know it can be done with libpq, but is there a way to fire up a query (4 different non-blocking queries) and not wait for it to finish, within a single connection?
Or can anyone point me to some clues on how to write such a function using Postgres internals, or simply in C, and bind it as a stored procedure?
Cheers Darius
If you are updating one table, on one database server, in 40 minutes single-threaded and in 10 minutes with 4 threads, the bottleneck is not the database server; if it were, adding connections would just bog it down in I/O instead of scaling. If you are executing a bunch of UPDATEs, one call per record, the network round-trip time is killing you.
I'm pretty sure this is the case, and not an I/O bottleneck on the DB, nor procedure_name(params) itself taking a long time. (If the procedure really took 2-10 seconds per record, 30k records would take on the order of 2,500 minutes, not 40.) The reason I'm sure is that starting 4 concurrent processes cuts the time to a quarter, so in particular it is not an I/O issue on the DB server.
This might be the one legitimate excuse for putting business logic in an SP on the server. Optimization unfortunately sometimes means breaking the rules; the consequence is more difficult maintenance.
However, the best solution would be to set this up to use bulk update queries. That might mean taking several strange and unintuitive steps, such as the ones below.
(This will require a lot of modification if multiple users can run it concurrently.)
Refactor the system so that procedure_name(params) can get all the data it needs to process all records via a SELECT statement. You may need some creative joins. If it's an SP, of course, you are now moving that logic to the client.
Have the program create an XML or other importable flat-file format containing the PK of each record to update and the new field value or values. Write all the updates to this file instead of executing them against the DB.
Have a temp table on the database that matches the layout of this flat file.
Run an import on the database: clear the temp table and load the file.
Do an update joining the temp table to the table being updated, e.g. UPDATE mytbl SET myval = t.newval FROM mytemp AS t WHERE mytbl.pk = t.pk;
You can try some of these things 'by hand' first, before you bother coding, to see if the speed increase is worth it.
If possible, you can still put all of this in an SP!
I'm not making any guarantees, but this has the potential to melt your update job down to under a minute.
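A sketch of steps 2-5 in Postgres terms; every name here, and the CSV format, is an assumption:

-- Temp table matching the flat file: the PK plus the new value.
CREATE TEMP TABLE updates_staging (id integer, newval text);

-- Bulk-load the file the client wrote out (\copy is a psql meta-command;
-- use server-side COPY if the file lives on the DB host).
\copy updates_staging FROM 'updates.csv' WITH (FORMAT csv)

-- One set-based update instead of 30k individual procedure calls.
UPDATE mytbl
SET myval = s.newval
FROM updates_staging AS s
WHERE mytbl.id = s.id;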
It is possible to update multiple rows at once. Below is an example in Postgres:
UPDATE
table_name
SET
column_name = temp.column_name
FROM
(VALUES
(<id1>, <value1>),
(<id2>, <value2>),
(<id3>, <value3>)
) AS temp("id", "column_name")
WHERE
table_name.id = temp.id
PHP has some functions for asynchronous queries:
pg_send_execute()
pg_send_prepare()
pg_send_query()
pg_send_query_params()
No idea about other programming languages; you'll have to dig into the manuals.
I think you can't. A single connection can handle only a single query at a time. This is described in the libpq documentation, chapter "Asynchronous Command Processing":
"After successfully calling PQsendQuery, call PQgetResult one or more times to obtain the results. PQsendQuery cannot be called again (on the same connection) until PQgetResult has returned a null pointer, indicating that the command is done."

Update thousands of records in a DataSet to SQL Server

I have half a million records in a DataSet, of which 50,000 are updated. Now I need to commit the updated records back to the SQL Server 2005 database.
What is the most efficient way to do this, considering that such updates could be frequent? (Concurrency is not an issue, but performance is.)
I would use a batch update: ADO.NET's SqlDataAdapter will batch its generated UPDATE statements when you set its UpdateBatchSize property.
I agree with David's answer, as that's what I use. However, there is an alternative approach worth considering (all situations are different, after all); it's something I would look at in the future if I had another similar requirement.
You could bulk insert the updated records into a new staging table in the DB (using SqlBulkCopy, which is an extremely fast way of loading data), then run an UPDATE statement on your main table to pull in the updated values from that staging table, which you would drop at the end.
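The T-SQL half of that approach might look like the following; the staging-table and key names are assumptions (the staging table is filled from the client by SqlBulkCopy):

-- Pull the changed values from the staging table into the main table.
UPDATE m
SET m.col1 = s.col1,
    m.col2 = s.col2
FROM dbo.main_table AS m
JOIN dbo.updated_rows_staging AS s
    ON m.id = s.id;

DROP TABLE dbo.updated_rows_staging;  -- throwaway staging table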
The batched-update approach using SqlDataAdapter allows you to deal easily with errors on specific rows (e.g. you can tell it to continue when a specific updated row fails, so the whole process doesn't stop).

SQL Server, Converting NTEXT to NVARCHAR(MAX)

I have a database with a large number of fields that are currently NTEXT.
Having upgraded to SQL Server 2005, we have run some performance tests on converting these to NVARCHAR(MAX).
If you read this article:
http://geekswithblogs.net/johnsPerfBlog/archive/2008/04/16/ntext-vs-nvarcharmax-in-sql-2005.aspx
It explains that a simple ALTER COLUMN does not reorganise the data into rows.
I have experienced this with my data. In some areas we actually get much worse performance if we just run the ALTER COLUMN. However, if I then run UPDATE Table SET Column = Column for all of these fields, we get a huge performance increase.
The problem I have is that the database consists of hundreds of these columns across millions of records. A simple test (on a low-performance virtual machine) on a table with a single NTEXT column containing 7 million records took 5 hours to update.
Can anybody offer any suggestions as to how I can update the data in a more efficient way that minimises downtime and locks?
EDIT: My fallback solution is to just update the data in blocks over time; however, with our data this means worse performance until all the records have been updated, and the shorter that window the better, so I'm still looking for a quicker way to do the update.
If you can't get scheduled downtime....
Create two new columns:
the new NVARCHAR(MAX) column
processedflag INT DEFAULT 0
Create a nonclustered index on the processedflag
You have UPDATE TOP available to you (you want to update the top rows, ordered by the primary key).
Simply set processedflag to 1 during the update, so that the next update will only touch rows where the flag is still 0.
You can use @@ROWCOUNT after the update to see when you can exit the loop.
I suggest using WAITFOR DELAY for a few seconds after each update query, to give other queries a chance to acquire locks on the table and to avoid overloading the disks.
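Putting those pieces together, a hedged sketch (the table and column names are placeholders):

WHILE 1 = 1
BEGIN
    UPDATE TOP (1000) dbo.big_table
    SET new_nvarchar_col = CAST(old_ntext_col AS NVARCHAR(MAX)),
        processedflag = 1
    WHERE processedflag = 0;

    IF @@ROWCOUNT = 0 BREAK;   -- nothing left to convert

    WAITFOR DELAY '00:00:05';  -- give other sessions a turn
END;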
How about running the update in batches: update 1000 rows at a time.
You would use a while loop that increments a counter corresponding to the IDs of the rows to be updated in each iteration of the update query. This may not speed up the total time it takes to update all 7 million records, but it should make it much less likely that users hit errors due to record locking.
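A sketch of that counter idea, assuming an integer primary key named id (all other names are placeholders; DECLARE and SET are kept separate for SQL Server 2005 compatibility):

DECLARE @from INT, @step INT, @max INT;
SET @from = 0;
SET @step = 1000;
SELECT @max = MAX(id) FROM dbo.big_table;

WHILE @from <= @max
BEGIN
    -- Convert one ID range per iteration.
    UPDATE dbo.big_table
    SET new_nvarchar_col = CAST(old_ntext_col AS NVARCHAR(MAX))
    WHERE id > @from AND id <= @from + @step;

    SET @from = @from + @step;
END;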
If you can get scheduled downtime:
Back up the database
Change recovery model to simple
Remove all indexes from the table you are updating
Add a column maintenanceflag INT DEFAULT 0, with a nonclustered index on it
Run the following repeatedly, in a loop with a short delay between iterations, until no rows remain (the column names are placeholders for your actual NTEXT and NVARCHAR(MAX) columns):
UPDATE TOP (1000) tablename
SET new_nvarchar_col = CAST(old_ntext_col AS NVARCHAR(MAX)),
    maintenanceflag = 1
WHERE maintenanceflag = 0;
Once complete, take another backup, then change the recovery model back to its original setting and re-create the old indexes.
Remember that every index or trigger on that table causes extra disk I/O, and that the simple recovery model minimises log-file I/O.
Running a database test on a low-performance virtual machine is not really indicative of production performance; the heavy I/O involved requires a fast disk array, which virtualisation will throttle.
You might also consider testing to see if an SSIS package might do this more efficiently.
Whatever you do, make it an automated process that can be scheduled and run during off hours. The fewer users trying to access the data, the faster everything will go. If at all possible, pick out the three or four most critical columns to change, take the database down for maintenance (during a normally off time), and do them in single-user mode. Once you've got the most critical ones done, the others can be scheduled one or two a night.