Simple Queries in Premium Azure SQL DB Elastic Pool Hog DTUs - azure-sql-database

Here is the situation I am in:
In Azure, we have an elastic database pool that is a Premium 1000 eDTU pool. We have 6 databases. One of the databases runs merges and aggregations on reporting data at regular intervals and has been chugging along for several weeks with max eDTU usage around 60% and the average around 40%.
Monday this week, everything started going to crap. We rolled NO CHANGES to ANYTHING. Looking at the DB metrics in the Azure portal, as well as the sys.dm_db_resource_stats view, our CPU% is getting hammered at 99-100% for blocks of up to an hour, while the memory in that view is consistently around 10%. We have reports that run off of this database, and some of the queries (when run in Management Studio) will bring the server to its knees. These are not poorly-written or gigantic queries either. For example, we have one query that runs two CTEs and joins the results together for the final output. The first returns 30 records and the second returns 20 (each returning 4-5 columns). When I parameterize the query and simply select the individual result sets, it runs in <1s; however, when I do the simple join, I had to kill the execution because it was at 10 minutes and taking up 99.9% of the eDTUs in the pool. Please take my word for it that this is a simple join on a tiny result set.
I replaced the CTEs with table variables and got the same result. I then replaced the table variables with temp tables (#tables) and voila! The query ran in ~2s...
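To illustrate the shape of that rewrite (table and column names here are invented for illustration, not the real report objects):

-- Original shape: two small CTEs joined for the final output
WITH Totals AS (
    SELECT AccountId, SUM(Amount) AS Total
    FROM dbo.ReportFacts
    GROUP BY AccountId
),
Labels AS (
    SELECT AccountId, MAX(DisplayName) AS DisplayName
    FROM dbo.ReportAccounts
    GROUP BY AccountId
)
SELECT l.DisplayName, t.Total
FROM Totals t
JOIN Labels l ON l.AccountId = t.AccountId;

-- Workaround that ran in ~2s: materialize each result set into a temp table first
SELECT AccountId, SUM(Amount) AS Total
INTO #Totals
FROM dbo.ReportFacts
GROUP BY AccountId;

SELECT AccountId, MAX(DisplayName) AS DisplayName
INTO #Labels
FROM dbo.ReportAccounts
GROUP BY AccountId;

SELECT l.DisplayName, t.Total
FROM #Totals t
JOIN #Labels l ON l.AccountId = t.AccountId;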
Is anybody aware of a cap or throttling? Here is a graph of the database usage stats for the past 8 days or so:
As you can see, early on 6/27 things just started going haywire... Any insight would be greatly appreciated.
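For anyone who wants to reproduce the check against sys.dm_db_resource_stats, a query along these lines will show the CPU and memory percentages mentioned above (a minimal sketch; the view keeps roughly the last hour of data at ~15-second intervals):

SELECT TOP (240)
    end_time,
    avg_cpu_percent,
    avg_data_io_percent,
    avg_log_write_percent,
    avg_memory_usage_percent
FROM sys.dm_db_resource_stats
ORDER BY end_time DESC;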

Related

Running an SQL query in background

I'm trying to update a modest dataset of 60k records with a value which takes a little time to compute. A small trial run of 6k records in the production environment took 4 minutes to complete, so the full execution should take around 40 minutes.
However, this trial run showed that SQL timeouts were occurring on user requests accessing data in related tables (but not necessarily on the actual rows being updated).
My question is: is there a way of running non-urgent queries as a background operation in SQL Server without causing timeouts or table locking for extended periods of time? The new value in the column being updated is not essential to return during this period; if a request happened to come in for one of these rows, returning the old value would be perfectly acceptable rather than locking the set until the update is complete. (I'm not sure of the ins and outs of how this works; obviously I do want to prevent data corruption. There could be a way of queuing any additional changes in the background.)
This is possibly a situation where the NOLOCK hint is appropriate. You can read about SQL Server isolation levels in the documentation. And Googling "SQL Server NOLOCK" will give you plenty of material on why you should not over-use the construct.
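For illustration, the hint goes on the table(s) being read; a minimal sketch with made-up names:

-- Dirty read: may see uncommitted values, but will not block behind the long update
SELECT o.Id, o.ComputedValue
FROM dbo.Orders AS o WITH (NOLOCK)
WHERE o.CustomerId = 42;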
I might also investigate whether you need a SQL query to compute values. A single query that takes 4 minutes on 6k records . . . well, that is a long time. You might want to consider reading the data into an application (say, using Python, R, or whatever) and doing the data manipulation there. It may also be possible to speed up the query processing itself.
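Another way to keep locks short, if you do stay in SQL, is to batch the update so that each transaction touches only a small slice of the 60k rows. A rough sketch (table, column, and function names are hypothetical):

-- Process the rows in small chunks so each transaction commits quickly
DECLARE @rows int = 1;
WHILE @rows > 0
BEGIN
    UPDATE TOP (1000) dbo.Records
    SET ComputedValue = dbo.fn_ComputeValue(RawValue)   -- placeholder for the slow computation
    WHERE ComputedValue IS NULL;                         -- only rows not yet processed
    SET @rows = @@ROWCOUNT;
END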

Loading huge flatfiles to SQL table is too slow via SSIS package

I receive about 8 huge delimited flat files to be loaded into a SQL Server (2012) table once every week. The total number of rows across all the files is about 150 million, and each file has a different number of rows. I have a simple SSIS package which loads data from the flat files (using a foreach container) into a history table. Then a select query runs on this history table to select the current week's data and load it into a staging table.
We ran into problems as the history table grew very large (8 billion rows), so I decided to back up the data in the history table and truncate it. Before truncation, the package execution time had grown from 15 hrs to 63 hrs. We hoped that after truncation it would go back to 15 hrs or less, but to my surprise, even after 20+ hours the package is still running. The worst part is that it is still loading the history table; the latest count is around 120 million. It still has to load the staging data, and that might take just as long.
Neither the history table nor the staging table has any indexes, which is why the select query on the history table used to take most of the execution time. But loading from all the flat files into the history table was always under 3 hrs.
I hope I'm making sense. Can someone help me understand what could be the reason behind this unusual execution time this week? Thanks.
Note: The biggest file (8 GB) was read at the flat file source in 3 minutes, so I don't think the source is the bottleneck here.
There's no good reason, IMHO, why that server should take that long to load that much data. Are you saying that the process which used to take 3 hours now takes 60+? Is it the first (data-load) or the second (history-table) portion that has suddenly become slow? Or both at once?
I think the first thing that I would do is to "trust, but verify" that there are no indexes at play here. The second thing I'd look at is the storage allocation for the underlying filegroup ... is it running out of room, such that the SQL server is having to do a bunch of extra calisthenics to obtain and to maintain storage? How does this process COMMIT? After every row? Can you prove that the package definition has not changed in the slightest, recently?
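"Trust, but verify" could be as simple as this (substitute the real table names):

-- Lists any indexes that exist on the two tables (heap entries excluded)
SELECT OBJECT_NAME(i.object_id) AS table_name, i.name AS index_name, i.type_desc
FROM sys.indexes AS i
WHERE i.object_id IN (OBJECT_ID('dbo.HistoryTable'), OBJECT_ID('dbo.StagingTable'))
  AND i.type_desc <> 'HEAP';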
Obviously, "150 million rows" is not a lot of data, these days; neither is 8GB. If you were "simply" moving those rows into an un-indexed table, "3 hours" would be a generous expectation. Obviously, the only credible root-cause of this kind of behavior is that the disk-I/O load has increased dramatically, and I am healthily suspicious that "excessive COMMITs" might well be part of the cause: re-writing instead of "lazy-writing," re-reading instead of caching.

BigQuery performance: Is this correct?

Folks, I'm using BigQuery as a superfast database for my analytics queries, but I'm very disappointed with its performance.
Let me show you the numbers:
Just one table in the "from" clause
Select about 15 fields with a group by on each, and about 5 fields with SUM() (see the sketch below)
Total table rows: 3.7 million
Total rows returned: 830K
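A query of that shape would look roughly like the following; the field names are invented, and the real query has about 15 grouping fields and 5 sums:

-- Representative shape only, not the actual query
SELECT country, device, channel,          -- ...plus the other grouping fields
       SUM(impressions) AS impressions,
       SUM(clicks) AS clicks,
       SUM(spend) AS spend
FROM analytics.events
GROUP BY country, device, channel;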
When I execute this query in BigQuery's console, it takes about 1 minute to process. Is this OK for you? I was expecting it to return in about 2 seconds... If I execute this query on a columnar database, like Sybase IQ, it takes less than 2 seconds.
BigQuery is a highly scalable database before it is a "super fast" database. It's designed to process HUGE amounts of data by distributing the processing among many different machines, using a technology named Dremel. Because it's designed to use several machines and parallel processing, you should expect super-scalability with good performance.
For example: analyzing all the Wikipedia revisions in 5-10 seconds isn't bad, is it? But even a much smaller table would take about the same time.
Sybase IQ is often installed on a single server and it doesn't use Dremel, so it's going to be faster than BigQuery in many scenarios... as designed.
Cheers!
Since you are returning 830k rows and BigQuery always creates a temporary result table, creating that table takes longer than it would for a small result.
Have you turned on large results?
We are working in a shared environment, and sometimes loads (table creation) take a while.
Certainly the performance differs from a dedicated environment. You can get your own dedicated environment for $20K a month.

Simple Oracle query started running slow when returning more than 251 results

As the title implies, I have a very simple Oracle query that starts taking 5 seconds to return when I go beyond 251 results. I am using SQL Developer and connecting with the built-in connection utility (there is no facility for an ODBC connection in this application).
The following query is fast enough (pa_stu holds roughly 40k rows):
Select * From pa_stu Where rownum < 252;
Oracle returns the data to me in 0.521 seconds, according to SQL Developer.
The following query, and ones that pull larger sets of data, are the culprit:
Select * From pa_stu Where rownum < 253;
Oracle returns the data to me for that last one in 5.327 seconds, according to SQL Developer.
All queries being used for testing have the same explain plan: a filter predicate of ROWNUM<251 (change the 251 to whatever number is being used) and a TABLE ACCESS of FULL.
The results above are consistent; plus, bumping the evaluated number up to about 1000 doubles the result time to roughly 10 seconds (consistently). It is as if some buffering is going on somewhere, and that buffer is too small. Additionally, this is happening on only one of our Oracle servers. The other, more heavily used one (which holds different data) has no problem returning hundreds of thousands of records using similar statements.
The databases are controlled by a DBA, and I have run all of this by her. She does not have a solution. This actually started happening a month or so back and was not the case many months ago, if that is meaningful; it was just not as noticeable as it is now.
Thank you for any help.

What would be the most efficient method for storing/updating Interval based data in SQL?

I have a database table with 700 million plus rows (growing exponentially) of time-based data.
Fields:
PK.ID,
PK.TimeStamp,
Value
I also have 3 other tables grouping this data into days, months, and years; each contains the sum of the value for each ID in that time period. These tables are updated nightly by a SQL job. The situation has arisen whereby the tables will need to be updated on the fly when the data in the base table is updated. This can be up to 2.5 million rows at a time (not very often; more typically around 200-500k rows, up to every 5 minutes). Is this possible without causing massive performance hits, and what would be the best method for achieving it?
N.B
The daily, monthly, and yearly tables can be changed if needed; they are used to speed up queries such as 'Get the monthly totals for these 5 IDs for the last 5 years' (see the sketch after these notes). In the raw data that is about 13 million rows; from the monthly table it's 300 rows.
I do have SSIS available to me.
I can't afford to lock any tables during the process.
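For context, the 'monthly totals for these 5 IDs for the last 5 years' query mentioned in the notes would run against the monthly table along these lines (column names are guesses based on the fields listed above):

-- The monthly table already stores one summed row per ID per month, so this touches ~300 rows
SELECT ID, MonthStart, MonthTotal
FROM dbo.MonthlyTotals
WHERE ID IN (101, 102, 103, 104, 105)
  AND MonthStart >= DATEADD(YEAR, -5, CAST(GETDATE() AS date))
ORDER BY ID, MonthStart;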
700M records in 5 months means 8.4B in 5 years (assuming the data inflow doesn't grow).
Welcome to the world of big data. It's exciting here and we welcome more and more new residents every day :)
I'll describe three incremental steps that you can take. The first two are just temporary - at some point you'll have too much data and will have to move on. However, each one takes more work and/or more money so it makes sense to take it a step at a time.
Step 1: Better Hardware - Scale up
Faster disks, RAID, and much more RAM will take you some of the way. Scaling up, as this is called, breaks down eventually, but if your data is growing linearly rather than exponentially, it'll keep you afloat for a while.
You can also use SQL Server replication to create a copy of your database on another server. Replication works by reading transaction logs and sending them to your replica. Then you can run the scripts that create your aggregate (daily, monthly, annual) tables on a secondary server that won't kill the performance of your primary one.
Step 2: OLAP
Since you have SSIS at your disposal, start looking into multidimensional data. With good design, OLAP cubes will take you a long way. They may even be enough to manage billions of records, and you'll be able to stop there for several years (been there, done that; it carried us for two years or so).
Step 3: Scale Out
Handle more data by distributing the data and its processing over multiple machines. When done right, this allows you to scale almost linearly: when you have more data, add more machines to keep processing time constant.
If you have the $$$, use solutions from Vertica or Greenplum (there may be other options, these are the ones that I'm familiar with).
If you prefer open source / BYO, use Hadoop: log event data to files, use MapReduce to process them, and store the results in HBase or Hypertable. There are many different configurations and solutions here - the whole field is still in its infancy.
Indexed views.
Indexed views allow you to store and index aggregated data. One of the most useful aspects is that you don't even need to reference the view directly in any of your queries: if someone queries an aggregate that's in the view, the query engine will pull the data from the view instead of scanning the underlying table.
You will pay some overhead to update the view as the data changes, but from your scenario it sounds like this would be acceptable.
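A rough sketch of what a daily aggregate as an indexed view could look like, assuming the ID / TimeStamp / Value columns from the question (details such as the GROUP BY expression may need adjusting, for example by moving the date calculation into a persisted computed column):

-- The unique clustered index materializes the view; SQL Server then maintains it as the base table changes
CREATE VIEW dbo.vDailyTotals
WITH SCHEMABINDING
AS
SELECT ID,
       CAST([TimeStamp] AS date) AS DayDate,
       SUM(ISNULL(Value, 0)) AS DayTotal,   -- SUM over a nullable expression is not allowed in an indexed view
       COUNT_BIG(*) AS RowCnt               -- COUNT_BIG(*) is required when the view uses GROUP BY
FROM dbo.BaseValues                         -- hypothetical name for the 700M-row base table
GROUP BY ID, CAST([TimeStamp] AS date);
GO
CREATE UNIQUE CLUSTERED INDEX IX_vDailyTotals ON dbo.vDailyTotals (ID, DayDate);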
Why don't you create monthly tables, just to store the info you need for those months? It would be like simulating multidimensional tables. Or, if you have access to a multidimensional system (Oracle, DB2, or similar), just work with multidimensionality; that works well with time-period problems like yours. At the moment I don't have enough info to give you more, but you can learn a lot about it just by Googling.
Just as an idea.