SSIS and SQL Server CPU Performance - sql

I'm running a 32-core SQL Server box, which also runs an SSIS server where SSIS packages are stored and run.
The load on the database is very low: nightly updates, a handful of tables updated during the daytime, and otherwise just a lot of SELECT statements. It's a typical DW with a frontend that caches data.
My issue is that when I run an SSIS package in the studio, it executes within an hour, but if I run it on the SSIS server it runs for hours. The package basically aggregates data from various tables and places it in one table. Other packages of other types also run very slowly when run from the SSIS server.
CPU usage is never above 7%, and memory is at 29 GB of 32 on the server.
Is there a way I can prioritize CPU away from the SQL Server and over to the SSIS server?
I believe CPU prioritization is the issue, but I might be wrong.

When SSIS runs out of memory it will typically slow down a lot before it fails, as it will start writing and reading temp files for some operations. I suspect this is your problem.
To avoid this I would reduce the maximum memory allocated to the SQL Server Database Engine, e.g. to 8 GB. Obviously you need to consider potential impacts on other Database Engine operations, but it usually does surprisingly well with less memory.
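To illustrate the suggestion above, here is a minimal T-SQL sketch of capping the Database Engine's memory with sp_configure (the 8192 MB value is just the example figure from above, not a recommendation for every server):

    -- Lower the Database Engine's memory cap so SSIS has headroom.
    EXEC sys.sp_configure N'show advanced options', 1;
    RECONFIGURE;
    EXEC sys.sp_configure N'max server memory (MB)', 8192;  -- example value only
    RECONFIGURE;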

Related

Parallel Execution of SSIS (ETL) across multiple SQL databases

I have a multitenant setup (one database per customer). At present we have set the SQL Server memory to 60 percent of RAM and run the ETL against one site at a time.
Is it possible to run ETLs across multiple sites/databases at the same time?
Note: during ETL execution other operations are slow because the ETL uses most of the RAM, hence I wanted to know whether two ETLs can be run at the same time against different databases.
SSIS packages can be executed in parallel. However, as your operational databases are on the same server, and the database you are importing into is also on the same server, I fail to see how that would speed up the ETL, unless the databases are on separate disks, which can help parallelize the reads and writes.
It may also be riskier: if you cannot afford to have the ETL running at a particular point in time (such as when the business opens), you will have to stop all the packages and then restart all of them.
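For what it's worth, if the packages are deployed to the SSIS catalog (project deployment model), two of them can be started in parallel from T-SQL; a rough sketch, with placeholder folder, project and package names:

    -- Start two catalog-deployed packages; start_execution is asynchronous by default.
    DECLARE @exec1 BIGINT, @exec2 BIGINT;

    EXEC SSISDB.catalog.create_execution
         @folder_name = N'ETL', @project_name = N'CustomerLoads',
         @package_name = N'LoadSiteA.dtsx', @execution_id = @exec1 OUTPUT;
    EXEC SSISDB.catalog.start_execution @exec1;

    EXEC SSISDB.catalog.create_execution
         @folder_name = N'ETL', @project_name = N'CustomerLoads',
         @package_name = N'LoadSiteB.dtsx', @execution_id = @exec2 OUTPUT;
    EXEC SSISDB.catalog.start_execution @exec2;

Whether this actually finishes sooner still depends on the shared disks and memory, as noted above.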

Azure SQL Database vs. MS SQL Server on Dedicated Machine

I'm currently running an instance of MS SQL Server 2014 (12.1.4100.1) on a dedicated machine I rent for $270/month with the following specs:
Intel Xeon E5-1660 processor (six physical 3.3 GHz cores + hyperthreading + Turbo to 3.9 GHz)
64 GB registered DDR3 ECC memory
240 GB Intel SSD
45,000 GB of bandwidth transfer
I've been toying around with Azure SQL Database for a bit now, and have been entertaining the idea of switching over to their platform. I fired up an Azure SQL Database using their P2 Premium pricing tier on a V12 server (just to test things out), and loaded a copy of my existing database (from the dedicated machine).
I ran several sets of queries side-by-side, one against the database on the dedicated machine, and one against the P2 Azure SQL Database. The results were sort of shocking: my dedicated machine outperformed (in terms of execution time) the Azure db by a huge margin each time. Typically, the dedicated db instance would finish in under 1/2 to 1/3 of the time that it took the Azure db to execute.
Now, I understand the many benefits of the Azure platform. It's managed vs. my non-managed setup on the dedicated machine, they have point-in-time restore better than what I have, the firewall is easily configured, there's geo-replication, etc., etc. But I have a database with hundreds of tables with tens to hundreds of millions of records in each table, and sometimes need to query across multiple joins, etc., so performance in terms of execution time really matters. I just find it shocking that a ~$930/month service performs that poorly next to a $270/month dedicated machine rental. I'm still pretty new to SQL as a whole, and very new to servers/etc., but does this not add up to anyone else? Does anyone perhaps have some insight into something I'm missing here, or are those other, "managed" features of Azure SQL Database supposed to make up the difference in price?
Bottom line is I'm beginning to outgrow even my dedicated machine's capabilities, and I had really been hoping that Azure's SQL Database would be a nice, next stepping stone, but unless I'm missing something, it's not. I'm too small of a business still to go out and spend hundreds of thousands on some other platform.
Anyone have any advice on if I'm missing something, or is the performance I'm seeing in line with what you would expect? Do I have any other options that can produce better performance than the dedicated machine I'm running currently, but don't cost in the tens of thousand/month? Is there something I can do (configuration/setting) for my Azure SQL Database that would boost execution time? Again, any help is appreciated.
EDIT: Let me revise my question to maybe make it a little more clear: is what I'm seeing in terms of sheer execution time to be expected, where a dedicated server at $270/month well outperforms Microsoft's Azure SQL DB P2 tier at $930/month? Ignore the other "perks" like managed vs. unmanaged, ignore intended use like Azure being meant for production, etc. I just need to know if I'm missing something with Azure SQL DB, or if I really am supposed to get MUCH better performance out of a single dedicated machine.
(Disclaimer: I work for Microsoft, though not on Azure or SQL Server).
"Azure SQL" isn't equivalent to "SQL Server" - and I personally wish that we did offer a kind of "hosted SQL Server" instead of Azure SQL.
On the surface the two are the same: they're both relational database systems with the power of T-SQL to query them (well, they both use the same DBMS under the hood).
Azure SQL is different in that the idea is that you have two databases: a development database using a local SQL Server (ideally 2012 or later) and a production database on Azure SQL. You (should) never modify the Azure SQL database directly, and indeed you'll find that SSMS does not offer design tools (Table Designer, View Designer, etc) for Azure SQL. Instead, you design and work with your local SQL Server database and create "DACPAC" files (or special "change" XML files, which can be generated by SSDT) which then modify your Azure DB such that it copies your dev DB, a kind of "design replication" system.
Otherwise, as you noticed, Azure SQL offers built-in resiliency, backups, simplified administration, etc.
As for performance, is it possible you were missing indexes or other optimizations? You also might notice slightly higher latency with Azure SQL compared to a local SQL Server; I've seen ping times (from an Azure VM to an Azure SQL host) of around 5-10 ms, which means you should design your application to be less chatty or to parallelise data retrieval operations in order to reduce page load times (assuming this is a web application you're building).
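If you want to check the missing-index point, the missing-index DMVs give a rough starting list; a sketch (treat the output as hints to investigate, not indexes to create blindly):

    -- Index suggestions recorded since the last restart, highest rough impact first.
    SELECT TOP (20)
           d.statement AS table_name,
           d.equality_columns,
           d.inequality_columns,
           d.included_columns,
           s.user_seeks,
           s.avg_user_impact
    FROM   sys.dm_db_missing_index_details d
    JOIN   sys.dm_db_missing_index_groups g      ON g.index_handle = d.index_handle
    JOIN   sys.dm_db_missing_index_group_stats s ON s.group_handle = g.index_group_handle
    ORDER BY s.user_seeks * s.avg_user_impact DESC;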
Perf and availability aside, there are several other important factors to consider:
Total cost: your $270 rental cost is only one of many cost factors. Space, power and HVAC are other physical costs. Then there's the cost of administration. Think of the work you have to do each Patch Tuesday and whenever Windows or SQL Server ships a service pack or cumulative update. Even if you don't test them before rolling out, it still takes time and effort. If you do test, then there's a second machine and duplicating the product instance and workload for test.
Security: there is a LOT written about how bad and dangerous and risky it is to store any data you care about in the cloud. Personally, I've seen far worse security implementations and processes on local servers (even in banks and federal agencies) than I've seen with any of the major cloud providers (Microsoft, Amazon, Google). It's a lot of work getting things right and then even more work keeping them right. Also, you can see and audit their security SLAs (see Azure's at http://azure.microsoft.com/en-us/support/trust-center/).
Scalability: not just raw scalability but the cost and effort to scale. Azure SQL DB recently released the huge P11 edition which has 7x the compute capacity of the P2 you tested with. Scaling up and down is not instantaneous but really easy and reasonably quick. Best part is (for me anyway), it can be bumped to some higher edition when I run large queries or reindex operations then back down again for "normal" loads. This is hard to do with a regular SQL Server on bare metal - either rent/buy a really big box that sits idle 90% of the time or take downtime to move. Slightly easier if in a VM; you can increase memory online but still need to bounce the instance to increase CPU; your Azure SQL DB stays online during scale up/down operations.
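The scale-up-before-a-big-job pattern described above can be done in T-SQL from the logical server's master database; a sketch with a placeholder database name:

    -- Bump the database to a bigger tier for the heavy work, then back down.
    ALTER DATABASE [MyDb] MODIFY (EDITION = 'Premium', SERVICE_OBJECTIVE = 'P11');
    -- ... run the large queries or reindex operations ...
    ALTER DATABASE [MyDb] MODIFY (EDITION = 'Premium', SERVICE_OBJECTIVE = 'P2');

The change is not instantaneous, but the database stays online while it completes.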
There is an alternative from Microsoft to Azure SQL DB:
“Provision a SQL Server virtual machine in Azure”
https://azure.microsoft.com/en-us/documentation/articles/virtual-machines-provision-sql-server/
A detailed explanation of the differences between the two offerings: “Understanding Azure SQL Database and SQL Server in Azure VMs”
https://azure.microsoft.com/en-us/documentation/articles/data-management-azure-sql-database-and-sql-server-iaas/
One significant difference between your standalone SQL Server and Azure SQL DB is that with SQL DB you are paying for a high level of availability, which is achieved by running multiple instances on different machines. This would be like renting four of your dedicated machines and running them in an AlwaysOn Availability Group, which would change both your cost and performance. However, as you never mentioned availability, I'm guessing this isn't a concern in your scenario. SQL Server in a VM may better match your needs.
SQL DB has built-in availability (which can impact performance), point-in-time restore capability and DR features. You have the option to scale your DB up or down based on your usage to reduce cost. You can improve your query performance using global queries (sharded data). SQL DB manages automatic upgrades and patching and greatly improves the manageability story; you may need to pay a small premium for that. Application-level caching, evenly distributing the load, downgrading when the database is cold, etc. may help improve your database performance and optimize the cost.

SQL Server Express database timeout is temporary fixed by executing sp_updatestats stored procedure

We have an ASP.NET MVC website running on a Webecs.com virtual dedicated server with 1 Gb CPU and 3 GB of RAM, using a SQL Server Express database on the same server. Now and then, the database gives a timeout error that is fixed temporarily by executing the sp_updatestats stored procedure.
Initially, we thought it was a RAM problem and we raised the RAM on the server to the current 3 GB. Even though the problem is less frequent now, it still happens when traffic on the website increases and more queries are executed. We have been monitoring CPU and RAM usage and they do not seem to be the problem: CPU is around 30% with some peaks up to 90%, and RAM is around 80%.
We have the exact same website running on a different, more powerful server with SQL Server 2008 R2, and it works without problems.
Any ideas what’s happening here?
Edit
The queries are of a normal size, nothing too big.
We have profiled it with Glimpse.
There are no N+1 queries; there is an average of 10 queries per page, and sometimes the timeout happens on the login page where there is only one query.
The database isn't too big either.
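For reference, the temporary fix described in the question is just statistics maintenance; a sketch of the workaround plus a check on whether automatic statistics updates are enabled ('MyAppDb' is a placeholder name):

    USE MyAppDb;
    EXEC sys.sp_updatestats;  -- the temporary workaround described above

    -- Check whether automatic statistics updates are on for the database.
    SELECT name, is_auto_update_stats_on, is_auto_update_stats_async_on
    FROM   sys.databases
    WHERE  name = N'MyAppDb';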

How does the SSIS Data Flow really work?

I have an SSIS package which transfers the data from one database to another.
The SSIS package runs on an application server.
I am thinking of moving one of the two databases to another server. Will there be an impact on performance? How does the data flow in SSIS, i.e. does all the data go through the application server where SSIS runs and then to the destination database?
SSIS is a client-side process, so if it is running on a server other than the machine running the DBMS the traffic will be going over the network. Your question is not very clearly worded, but I think you want to know whether moving a DB will affect performance given that the SSIS package is already running on a separate machine.
If the SSIS job is already running on an application server that is a physically separate machine to the DB server then moving one of the databases will probably not affect the performance unless it has a radically slower network connection than the other.
I recently came across the same situation and we upgraded our source system to a better-configured box. I didn't have to do anything on my part, but the data load times from the source to the SQL box were cut from approximately 40 minutes to under 12 minutes on average. To answer your question, you will only see a performance difference depending on 1) your new system's resources and 2) whether you make changes to the box hosting your SQL Server.

SQL Server Express 2008 Stored Procedure execution time spikes periodically

I have a big stored procedure on a SQL Server 2008 Express SP2 database that is run about every 200 ms. Normal execution time is about 50 ms. What I am seeing is large inconsistency in this run time. It will execute for a while, say 50-100 times at 40-60 ms, which is expected; then, seemingly at random, the same stored procedure will take much longer, say 900 ms or 1.5 seconds, to run. Sometimes more than one call of the same procedure in a row will take longer too.
It appears that something is causing SQL Server to slow down dramatically every minute or so, but I can't figure out what. There is no timing pattern between the occurrences.
I have the same setup on two different computers, one of which is a clean XP Pro load with no virus checking and nothing installed except SQL Server.
Also, the recovery option for all the databases is set to "Simple".
I would suggest breaking out applicable sections into their own stored procedures; there is only one query plan cached per batch.
It looks like my problems happen simultaneously with the SQL Server Plan Cache Object Counts hitting 999 and resetting.
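If it helps, the counter mentioned above can be watched from T-SQL alongside the slow executions; a sketch using the performance-counter DMV:

    -- Plan cache object counts as exposed by sys.dm_os_performance_counters.
    SELECT object_name, counter_name, instance_name, cntr_value
    FROM   sys.dm_os_performance_counters
    WHERE  object_name LIKE '%Plan Cache%'
      AND  counter_name IN ('Cache Object Counts', 'Cache Pages');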