SQL Server - Insert too slow into table - sql-server-2012

My table has 47 columns and no data yet.
No indexes, no triggers, no constraints, no foreign keys.
Just a plain table.
But insert speed is ~50 rows/s. Too slow!
I have tried many times, at different times of day.
Please help me figure out how to diagnose and find the real problem in my case.
Thanks!

Thanks all for the support.
I restarted the database and the problem was resolved.

Related

Speeding up deletes that have joins

I am running a stored procedure to delete data from two tables:
delete from TESTING_TestResults
from TESTING_TestResults
inner join TESTING_QuickLabDump
    on TESTING_QuickLabDump.quicklabdumpid = TESTING_TestResults.quicklabdumpid
where TESTING_QuickLabDump.[Specimen ID] = @specimen
delete from TESTING_QuickLabDump
from TESTING_QuickLabDump
where [Specimen ID] = @specimen
One table has 60M rows and the other about 2M rows.
The procedure takes about 3 seconds to run.
Is there any way I can speed this up? Perhaps using EXISTS?
Meaning IF EXISTS ... THEN DELETE, because the delete should not be occurring every single time.
Something like this:
if @specimen exists in TESTING_QuickLabDump, then run the procedure with the two deletes.
Thank you!
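The EXISTS guard the question describes could be sketched roughly like this (table and column names taken from the question; @specimen is assumed to be the procedure's parameter):

```sql
-- Hypothetical sketch: only run the deletes when the specimen is present.
-- Note: the guard saves little by itself; an index on [Specimen ID] is
-- what makes both the check and the deletes fast.
IF EXISTS (SELECT 1
           FROM TESTING_QuickLabDump
           WHERE [Specimen ID] = @specimen)
BEGIN
    DELETE tr
    FROM TESTING_TestResults AS tr
    INNER JOIN TESTING_QuickLabDump AS qd
        ON qd.quicklabdumpid = tr.quicklabdumpid
    WHERE qd.[Specimen ID] = @specimen;

    DELETE FROM TESTING_QuickLabDump
    WHERE [Specimen ID] = @specimen;
END
```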
Rewriting the query probably won't help speed this up. Use the profiler to find out which parts of the query are slow; have it output the execution plan. Then try adding appropriate indexes. Perhaps one or both tables could use an index on [Specimen ID].
For a table with 60 mil rows I would definitely look into partitioning the data horizontally and/or vertically. If it's time-sensitive data then you ought to be able to move old data into a history table. That's usually the first and most obvious thing people do so I would imagine if that were a possibility you would have already done it.
If there are many columns then it would definitely benefit you to denormalize the data into multiple tables. If you did this, I would suggest renaming the tables and creating a view of all the partitioned tables named after the original table. Doing that should ensure existing code isn't broken.
If you 'really' want to fine-tune the speed then you should look into getting a faster hard drive and learn a little about how hard drives work. Whether the data is stored towards the inner or outer section of the disk will affect access speed slightly, for example. And solid-state drives have come a long way, so you might look into getting one of those.
Beside indexing "obvious" fields, also look in your database schema and check if you have any FOREIGN KEYs whose ON DELETE CASCADE or SET NULL might be triggered by your delete (unlike Oracle, MS SQL Server will tend to show these in the execution plan). Fortunately, this is usually fairly easy to fix by indexing the child endpoint of the FOREIGN KEY.
Also check if you have any expensive triggers.
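The supporting indexes the answers above suggest could look something like this (index names are placeholders; table and column names come from the question):

```sql
-- Hypothetical: index the filter column so the WHERE clause can seek.
CREATE NONCLUSTERED INDEX IX_QuickLabDump_SpecimenID
    ON TESTING_QuickLabDump ([Specimen ID]);

-- Index the joining (child) column so the join, or any cascading
-- FOREIGN KEY check, can seek instead of scanning 60M rows.
CREATE NONCLUSTERED INDEX IX_TestResults_QuickLabDumpID
    ON TESTING_TestResults (quicklabdumpid);
```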

Inserting large amount of data into SQL Server 2005 table

I am inserting around 2 million records into a SQL Server 2005 table. The table currently has clustered as well as non-clustered indexes. I want to improve the performance of the insert query on that table. Does anyone have any ideas?
Drop all the indexes (including the primary key if your data for insert is
not pre-sorted on the same key),
insert the data,
then recreate all the dropped indexes.
You can try disabling the indexes on the table before inserting and re-enabling them after. It can be a huge time-saver if you're inserting large amounts of data into a table.
Check out this article for SQL server on how to do such a thing: http://msdn.microsoft.com/en-us/library/ms177406.aspx
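On SQL Server 2005 and later, the disable/rebuild cycle described in that article looks roughly like this (table and index names are placeholders):

```sql
-- Disable each nonclustered index before the bulk load.
-- Do NOT disable the clustered index: that makes the table inaccessible.
ALTER INDEX IX_MyTable_SomeColumn ON dbo.MyTable DISABLE;

-- ... load the data here ...

-- Rebuilding re-enables the disabled indexes in one pass.
ALTER INDEX ALL ON dbo.MyTable REBUILD;
```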
If there is no good reason you aren't using bulk insert, I'd say that your best option is to do this. I.e.: select rows into a format you can then bulk re-insert.
By doing ordinary inserts in this amount, you are putting a huge strain on your transaction logs.
If bulk-insert is not an option, you might win a little bit by splitting up the inserts into chunks - so that you don't go row-by-row, but don't try to insert and update it all in one fell swoop either.
I've experimented a bit with this myself, but haven't had the time to get close to a conclusive answer. (I've started the question Performance for RBAR vs. set-based processing with varying transactional sizes for the same reason.)
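The chunked approach suggested above could be sketched like this (table names, columns, the ascending `id` key, and the batch size are all assumptions to illustrate the pattern):

```sql
-- Hypothetical sketch: copy rows from a staging table in batches of
-- 50,000, tracked by an ascending key, so each batch commits separately
-- and the transaction log stays small.
DECLARE @lastId bigint = 0, @batch int = 50000, @rows int = 1;

WHILE @rows > 0
BEGIN
    INSERT INTO dbo.TargetTable (id, col1, col2)
    SELECT TOP (@batch) id, col1, col2
    FROM dbo.StagingTable
    WHERE id > @lastId
    ORDER BY id;

    SET @rows = @@ROWCOUNT;

    -- Advance the watermark to the last id just copied.
    IF @rows > 0
        SELECT @lastId = MAX(b.id)
        FROM (SELECT TOP (@batch) id
              FROM dbo.StagingTable
              WHERE id > @lastId
              ORDER BY id) AS b;
END
```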
You should drop the indexes, then insert the data, then recreate the indexes.
You can insert up to 1000 rows in one INSERT:
values (a,b,c), (d,f,h)
Sort the data by the primary key on insert.
Use WITH (HOLDLOCK).
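Spelled out, the multi-row VALUES form looks like this; note the row-constructor syntax is available from SQL Server 2008 onward, so on the 2005 instance in the question you would need UNION ALL instead (table and column names are placeholders):

```sql
-- SQL Server 2008+: up to 1000 row constructors per INSERT.
INSERT INTO dbo.MyTable (a, b, c)
VALUES (1, 'x', 10),
       (2, 'y', 20),
       (3, 'z', 30);

-- SQL Server 2005 equivalent:
INSERT INTO dbo.MyTable (a, b, c)
SELECT 1, 'x', 10
UNION ALL
SELECT 2, 'y', 20;
```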

SQL query slow when joining with a sub query and only happened for one big Database

All my complex stored procedures run instantly on most of our DBs,
but they take more than 30 seconds to execute on DB X, maybe more; we haven't measured further.
DB X doesn't have the most data, but our support person deleted some data and re-inserted it recently.
I've rebuilt the table's identity index, but it doesn't help.
Then I found that when a light table does a LEFT JOIN to a subquery that returns the main data, execution becomes slow.
The subquery itself is quick, and if I insert the subquery's results into a temp hash table and left join to that, the query is fast!
Does anyone know what happened to DB X, and what the solution is?
I found it was caused by a missing index, but I don't understand why it matters only now.
I'm also worried about insert speed for large amounts of data if I add the index on the server.
Have statistics been updated and indexes been rebuilt? Or were they disabled?
Especially after a lot of inserts, deletes, etc.
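The checks suggested here, spelled out (the table name is a placeholder):

```sql
-- Rebuild indexes and refresh statistics after heavy delete/insert churn.
ALTER INDEX ALL ON dbo.MyBigTable REBUILD;
UPDATE STATISTICS dbo.MyBigTable WITH FULLSCAN;

-- Check for indexes that were left disabled.
SELECT name, is_disabled
FROM sys.indexes
WHERE object_id = OBJECT_ID('dbo.MyBigTable');
```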
Thanks

Is it quicker to insert sorted data into a Sybase table?

A table in Sybase has a unique varchar(32) column, and a few other columns. It is indexed on this column too.
At regular intervals, I need to truncate it, and repopulate it with fresh data from other tables.
insert into MyTable
select list_of_columns
from OtherTable
where some_simple_conditions
order by MyUniqueId
If we are dealing with a few thousand rows, would it help speed up the insert if we have the order by clause for the select? If so, would this gain in time compensate for the extra time needed to order the select query?
I could try this out, but currently my data set is small and the results don’t say much.
With only a few thousand rows, you're not likely to see much difference even if it is a little faster. If you anticipate approaching 10,000 rows or so, that's when you'll probably start seeing a noticeable difference -- try creating a large test data set and doing a benchmark to see if it helps.
Since you're truncating, though, dropping and recreating the index should be faster than inserting into a table with an existing index. Again, for a relatively small table, it shouldn't matter; if everything can fit comfortably in the amount of RAM you have available, then it's going to be pretty quick.
One other thought -- depending on how Sybase does its indexing, passing a sorted list could slow it down. Try benchmarking against an ORDER BY RANDOM() to see if this is the case.
I don't believe sort order speeds up INSERT, so don't use ORDER BY in a vain attempt to improve performance.
I'd say that it doesn't really matter in which order you execute these functions.
Just use the normal way of inserting INSERT INTO, and do the rest afterwards.
I can't speak for Sybase, but MS SQL Server inserts faster if records are sorted carefully. Sorting can minimize the number of index page splits. As you know, it is better to populate the table and then create the index; sorting data before insertion has a similar effect.
The order in which you insert data will generally not improve performance. The issues that affect insert speed have more to do with your database's mechanisms for data storage than the order of inserts.
One performance problem you may experience when inserting a lot of data into a table is the time it takes to update indexes on the table. However again in this case the order in which you insert data will not help you.
If you have a lot of data and by a lot I mean hundreds of thousands perhaps millions of records you could consider dropping the indexes on the table, inserting the records then recreating the indexes.
Dropping and recreating indexes (at least in SQL server) is by far the best way to do the inserts. At least some of the time ;-) Seriously though, if you aren't noticing any major performance problems, don't mess with it.

SQL Server 2008 Slow Table, Table Partitioning

I have a table that has grown to over 1 million records... today (all valid).
I need to speed it up... would table partitioning be the answer? If so, can I get some help on building the query?
The table has just 4 bigint key values, with an indexed primary key and a descending index on userid; the other values are at most 139 (there are just over 10,000 users now).
Any help or direction would be appreciated :)
You should investigate your indexes and query workload before thinking about partitioning. If you have done a large number of inserts, your clustered index may be fragmented.
Even though you are using SQL Server Express you can still profile using this free tool: Profiler for Microsoft SQL Server 2005/2008 Express Edition
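One way to check for the clustered-index fragmentation mentioned above before reaching for partitioning (works on SQL Server 2005+, including Express; the table name is a placeholder):

```sql
-- Per-index fragmentation for one table in the current database.
SELECT i.name,
       ps.avg_fragmentation_in_percent,
       ps.page_count
FROM sys.dm_db_index_physical_stats(
         DB_ID(), OBJECT_ID('dbo.MyTable'), NULL, NULL, 'LIMITED') AS ps
JOIN sys.indexes AS i
    ON i.object_id = ps.object_id
   AND i.index_id = ps.index_id;
```

As a common rule of thumb, indexes above roughly 30% fragmentation are rebuilt; lighter fragmentation is reorganized.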
You probably just need to tune your queries and/or indexes. 1 million records shouldn't be causing you problems. I have a table with several hundred million records and am able to maintain pretty high performance. I have found the SQL Server profiler to be pretty helpful with this stuff. It's available in SQL Server Management Studio (but not the Express version, unfortunately). You can also do Query > Include Actual Execution Plan to see a diagram of where time is being spent during the query.
I agree with the other comments. With a reasonably small database (largest table 1MM records) it's unlikely that any activity in the database should provide a noticeable load if queries are optimized and the rest of the code isn't abusing the database with redundant queries. It's a good opportunity to get a feeling for the interplay between database queries and the rest of the code.
See my experiments on SQL table partitioning here: http://faiz.kera.la/2009/08/02/does-partitioning-improve-performance-for-sql-tables/. Hope this is helpful for you. And for your case, 1M is not a considerable figure; maybe you need to fine-tune the queries rather than going for partitioning.