SQL Server 2012 slow

I recently moved my .NET development from an older Core2Duo 2.93 GHz PC to a new Core i7-3820 3.6 GHz machine. No changes were made to the project code or database layout. Both machines use the same SQL Server 2012 Express with Advanced Services (not LocalDB).
I observe a significant slowdown of INSERT INTO commands: what used to take 1-2 milliseconds per row on the old machine now takes 8-9 milliseconds on the new one. The only fix I was able to find is to insert multiple rows in one command, which spreads the overhead of the command over many rows. As in SQL Server 2008 R2, the limit in SQL Server 2012 for the number of rows in one such command is 1000:
http://msdn.microsoft.com/en-us/library/dd776382.aspx
However, the multiple row workaround is not applicable to all scenarios; e.g. table adapter updates that go row by row and take a long time to complete.
Has anyone experienced this problem? How would I go about resolving it?
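For reference, the multi-row workaround described above looks roughly like this (table and column names are hypothetical); both SQL Server 2008 R2 and 2012 cap a single VALUES clause at 1000 row value expressions:

INSERT INTO dbo.Widgets (Id, Name)
VALUES (1, N'Alpha'),
       (2, N'Beta'),
       (3, N'Gamma');  -- up to 1000 row value expressions per statement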

The CPU is definitely better, but what about the disk I/O? Inserts obviously rely on that subsystem.
As it happens, I was just reading this article: http://www.altdevblogaday.com/2012/05/16/sql-server-high-performance-inserts/
It seems table-valued parameters (TVPs) can increase performance dramatically.
The comments lead to the following page, which has some impressive numbers:
http://florianreischl.blogspot.de/2012/03/performance-comparison-of-sql-server.html
I've yet to use this myself, but TVPs seem the way to go to pump data from a client to SQL Server if you have 2008 or up.
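A minimal sketch of the TVP approach, assuming a hypothetical Widgets table (the type, procedure, and column names are placeholders):

CREATE TYPE dbo.WidgetRow AS TABLE (Id int, Name nvarchar(50));
GO
CREATE PROCEDURE dbo.InsertWidgets
    @Rows dbo.WidgetRow READONLY
AS
BEGIN
    -- one set-based insert instead of one round trip per row
    INSERT INTO dbo.Widgets (Id, Name)
    SELECT Id, Name FROM @Rows;
END
GO

From .NET you would pass a DataTable (or an IEnumerable of SqlDataRecord) as a parameter of SqlDbType.Structured, so the whole batch arrives in a single round trip.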

To be honest, writing records one at a time is always slow. Consider, if it fits within your architecture, writing the rows to a text file and then BULK INSERTing it; your performance will go up tremendously.
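A rough sketch of that approach (file path, table name, and format options here are hypothetical):

BULK INSERT dbo.Widgets
FROM 'C:\temp\widgets.txt'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', TABLOCK);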

Related

Performance hit on DB2 transactional database after linking to SQL Server 2005

We have an AS400 mainframe running our DB2 transactional database. We also have a SQL Server setup that gets loaded nightly with data from the AS400. The SQL Server setup is for reporting.
I can link the two database servers, but there's concern about how big a performance hit DB2 might suffer from queries coming from SQL Server.
Basically, the fear is that if we start hitting DB2 with queries from SQL Server we'll bog down the transactional system and screw up orders and shipping.
Thanks in advance for any knowledge that can be shared.
Anyone who has a pat answer for a performance question is wrong :-) The appropriate answer is always 'it depends.' Performance tuning is best done via measure, change one variable, repeat.
DB2 for i shouldn't even notice if someone executes a 1,000-row SELECT statement. Take Benny's suggestion and run one while the IBM i side watches. If they want a hint, use WRKACTJOB and sort on the Int column; that represents the interactive response time. I'd guess that the query will complete before they have time to notice that it was active.
If that seems unacceptable to the management, then perhaps offer to test it before or after hours, where it can't possibly impact interactive performance.
As an aside, the RPG guys can create Excel spreadsheets on the fly too. Scott Klement published some RPG wrappers over the Java POI/HSSF classes. Also, Giovanni Perrotti at Easy400.net has some examples of providing an Excel spreadsheet from a web page.
I'd mostly agree with Buck; a 1000-row result set is no big deal...
Unless, of course, the system is looking through billions of rows across hundreds of tables to get the 1000 rows you are interested in.
Assuming a useful index exists, 1000 rows shouldn't be a big deal. If you have IBM i Access for Windows installed, there's a component of System i Navigator called "Run SQL Scripts" that includes "Visual Explain", which provides a visual explanation of the query execution plan. With that view you can ensure that an index is being used.
One key thing: make sure the work is being done on the i. When using a standard linked table, MS SQL Server will attempt to pull back all the rows and then do its own "where":
select * from MYLINK.MYIBMI.MYLIB.MYTABLE where MYKEYFLD = '00335';
Whereas this format sends the statement to the remote server for processing and just gets back the results:
select * from openquery(MYLINK, 'select * from mylib.mytable where MYKEYFLD = ''00335''');
Alternatively, you could ask the i guys to build you a stored procedure that you can call to get back the results you are looking for. Personally, that's my preferred method.
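For example, a pass-through call like the following keeps the work on the i (the procedure name is hypothetical, the linked server needs "RPC Out" enabled, and whether parameter placeholders work depends on the provider):

EXEC ('CALL MYLIB.GETWIDGET(?)', '00335') AT MYLINK;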
Charles

SQL Server queries taking time

I am facing some issues with my stored procedures.
I have one stored procedure for a stacked bar graph, showing one month's data.
Earlier, on my local server, it was taking more than 40 seconds, so I made some changes and now it takes 4 seconds. The same query, when I run it on my live server, takes more than 40 seconds.
The record counts are the same as on my local server.
Can anybody tell me what I should do to make it faster on my live server?
A good start is to run SQL Server Management Studio (SSMS), load up the query, and switch on 'Display Actual Execution Plan'; this will show you exactly what SQL Server is doing with your query. It will also show a relative '% cost' for each step in the query, which helps identify which table/join/aggregate is causing the query to take so long.
I also believe that in the latest version of SSMS it advises which indexes should be added.
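If you want hard numbers to compare the local and live servers, a simple sketch is to run the procedure from a query window with statistics switched on (the procedure name and parameter here are hypothetical):

SET STATISTICS IO ON;
SET STATISTICS TIME ON;
EXEC dbo.GetStackBarData @Month = '2013-01';

Comparing the actual plans and the IO/TIME output from both servers usually shows whether the difference is a missing index, a different plan, or just slower hardware.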
Hope this helps.
Rich.
It's a complicated question; a lot of parameters can change the execution time:
CPU speed,
RAM,
indexes.
Assuming the live server is more powerful in terms of processing power and RAM, indexes are what you would want to look into.
Use indexes on your tables. It may also be because of your hosting server's performance; the server may be experiencing downtime.
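If indexing turns out to be the issue raised in the answers above, a minimal sketch of adding a covering index looks like this (table and column names are hypothetical):

CREATE NONCLUSTERED INDEX IX_Sales_SaleDate
ON dbo.Sales (SaleDate)
INCLUDE (Amount, Region);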

TempDB: one physical data file per CPU core in SQL Server 2000

I know it is recommended for SQL 2005 onward but does it also apply on SQL Server 2000?
Any link for reference will also be appreciated.
Anything you read about SQL Server 2000 is probably out of date because of how technology has moved on since then.
However, one data file per core appears to be best for SQL Server 2000, but not for SQL Server 2005+.
This says SQL Server 2000 is different from SQL Server 2005+ (my bold):
Only one filegroup in TempDB is allowed for data and one filegroup for logs; however, you can configure multiple files. With SQL Server 2000 the recommendation is to have one data file per CPU core; with optimisations in SQL Server 2005/2008 it is now recommended to have 1/2 or 1/4 as many files as CPU cores. This is only a guide, and TempDB should be monitored to see if PAGELATCH waits increase or decrease with each change.
Paul Randal
On SQL Server 2000, the recommendation was one tempdb data file for each processor core. On 2005 and 2008, that recommendation persists, but because of some optimizations (see my blog post) you may not need one-to-one - you may be ok with the number of tempdb data files equal to 1/4 to 1/2 the number of processor cores.
SQL Server Engineers. It's interesting that this is internal to MS and appears at odds with the first two articles.
Now, I'd go by the first two and decide if you need to actually do anything.
As Paul Randal also says (my bold):
One of the biggest confusion points is that the SQL CAT team recommends 1-to-1, but they're coming from a purely scaling perspective, not from an overall perf perspective, and they're dealing with big customers with top-notch servers and IO subsystems. Most people are not.
Have you demonstrated that:
you require this?
you have a bottleneck?
you have separate disk arrays per file?
you understand trace flag 1118?
...
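If you do decide to add tempdb data files, a hedged sketch of one such statement (the file name, path, and sizes are placeholders; repeat per additional file):

ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2, FILENAME = 'D:\SQLData\tempdev2.ndf', SIZE = 512MB, FILEGROWTH = 256MB);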

SQL Server Express 2008 Stored Procedure execution time spikes periodically

I have a big stored procedure on a SQL Server 2008 Express SP2 database that gets run about every 200 ms. Normal execution time is about 50 ms. What I am seeing is large inconsistencies in this run time. It will execute for a while, say 50-100 times at 40-60 ms, which is expected; then, seemingly at random, the same stored procedure will take way longer, say 900 ms or 1.5 seconds, to run. Sometimes more than one call of the same procedure in a row will take longer too.
It appears that something is causing SQL Server to slow down dramatically every minute or so, but I can't figure out what. There is no timing pattern between the occurrences.
I have the same setup on two different computers, one of which is a clean XP Pro load with no virus checking and nothing installed except SQL server.
Also, the recovery options for all the databases are set to "Simple".
I would suggest breaking out applicable sections into their own stored procedures; there is only one query plan cached per batch.
It looks like my problems happen simultaneously with the SQL Server Plan Cache Object Counts hitting 999 and resetting.
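A quick way to watch that counter alongside the slow executions, assuming SQL Server 2005 or later (the exact counter and object names can vary slightly by instance), is:

SELECT [object_name], counter_name, instance_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Cache Object Counts';

Sampling this every few seconds and lining it up with the slow calls should confirm whether the spikes coincide with the plan cache being flushed.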

Should I advocate migrating from Access to (My)SQL?

We have a Windows MFC app that is written against an Access database on a company server. The db is not that big: 19 MB. There are at most 2-3 users accessing it at any one time. It is used in a factory environment, where access speed (or lack thereof) over the intranet becomes noticeable as it is part of the manufacturing time for our widgets.
The scenario is this: as each widget is completed, it gets a record in the db. By the end of the year, the db is larger and searching for a record takes longer and longer. The solution so far has been to manually move older records to an archival table about once a year.
We are reworking other portions of this app right now, and it would be a good time to move to another db if we are going to do it.
It is my understanding that if we were using SQL, the search time would not go up as the table gets bigger, because the entire .mdb does not have to be sent over the network each time. Is this correct? Does anyone have any insight about whether it could be worth the trouble (time and money) of migrating to a new db, or should I just add more functionality to the application we have now, maybe automatically purge the older records from time to time, and add facilities to the app to get at the older records when needed?
Thanks for any wisdom you can share.
Since your database is small with very few users, I could not make a solid case for migration. I would definitely set up a script to archive old records on a more frequent basis (don't archive into the same db; that would somewhat defeat the purpose).
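A hedged sketch of such an archive script in Access/JET SQL (the table, column, and cutoff are hypothetical):

INSERT INTO WidgetsArchive SELECT * FROM Widgets WHERE CompletedDate < DateAdd('yyyy', -1, Date());
DELETE FROM Widgets WHERE CompletedDate < DateAdd('yyyy', -1, Date());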
But also make sure two things are correct:
INDEXES. If your queries start slowing down, make sure you have proper indexes.
http://support.microsoft.com/kb/304272
Your network connection between computers is fast. Maybe upgrade to gigabit cards and a router? Possibly put the db on a SCSI drive (RAID 10 for speed and redundancy).
Throwing advanced technology at simple problems is an expensive way to go and not always the answer!
First of all, the information that the whole table and the whole database is transferred across the network is simply incorrect. If the queries are indexed, then the search times should not go up that much over time.
As others have mentioned, spending the time and money to set up, and then have someone maintain, manage, and support, a database server is certainly a possibility here. However, keep in mind that a JET-based application simply migrated to SQL Server will in many cases run slower; in fact, SQL Server is slower than JET when no network is involved.
So, I would take some time to ascertain why things slow down so much, and also check how indexing is set up.
Just keep in mind that it is pure folklore and myth that whole tables or the whole database get transferred over the network. This idea persists only because most people have had no training in, and don't understand, how the JET data engine works.
I would probably move to either Microsoft SQL Server 2008 R2 Express Edition (free) or MySQL (free) if there is both funding and time to put in a data access layer. Because you will be making requests of a remote server instead of operating on data at the local workstation, this move is very involved from a development standpoint.
However, you should analyze whether or not it's more cost-effective to perform your archival process quarterly or monthly, and just move the archive database to SQL Server 2008 R2 Express Edition. (You can install the Microsoft SQL Server Management Studio client tools on workstations and query the archival database for faster reports on historical data without rewriting your entire production application; similar solutions exist for MySQL or other OSS/free RDBMSs.)
I have clients with 300 MB databases, although they should be upsizing to SQL Server for other reasons. 19 MB is relatively small. If performance is bad enough that archiving speeds things up, then check the indexes on the tables for all your sorting and selection fields. Albert gave you a good URL there to check.
Entire MDB files do not go down the wire, unless you are missing indexes.
Instead of shipping the DB over the network to the client and then performing queries, you could instead write a small wrapper on the server that handles requests, looks up the result in the Access DB (using SQL + the Access ODBC driver), and returns the result. This avoids the overhead of a large migration you might not need and still gets rid of the basic problem the users are experiencing.
Moving to a "proper" database solution is the best long term solution, but if your needs scale linearly and slowly over the next 30 years, it's hard to justify an expensive migration. That said, if you expect to really ramp up, or want to be more "future-proof", migrating now will likely save money/time.
It is my understanding that if we were using sql, the search time would not go up as the table gets bigger because the entire .mdb does not have to be sent over the network each time. Is this correct?
This general idea is true for almost all databases. The idea of a database is to separate your application from the actual data. The data resides in a database server. Your application doesn't.
Does anyone have any insight about whether it could be worth it to go to the trouble (time and money) of migrating to a new db
Yes. I have proposed this many times. It's expensive. It's complicated. Your MS Access database will never get better or faster.
Other database servers will (and can) get faster and more sophisticated. After all, you're not sending .MDB files through a network anymore. The limitations are reduced. You're working with standard SQL through ODBC. Any database will work at the end of ODBC. You can fire vendors to find better, faster, cheaper products. Once you stop using Access you have choices.
Either stop using Access now or plan to suffer with it forever. And remake this decision every year until the end of time.