In our company we have an "office Monday": every office/shop/department (circa 2,000+ distinct users) generates its reports, especially the shops (SSRS with a connection to a Tabular model at the 1500 compatibility level). We face very high resource usage for a window of 3+ hours (CPU at 100% across multiple cores) and the session queue keeps growing and never flushes. A report that takes 2 minutes off-peak can take more than an hour because of the overload. This is an on-premises machine. For the rest of the week the problem does not occur (the workload is 10 times lower and peak CPU usage is below 30%).
Unfortunately, from a business point of view, we cannot spread the load over the remaining days of the week. We also have no influence over how many users run reports at any given time (the load distribution throughout the day).
What we have tried already:
rewrite queries in reports from old MDX to DAX (always checking the performance of each single query with Server Timings in DAX Studio)
rewrite measures to be less expensive
tuning our model (for example, switching to cheaper data types, removing unused columns)
We can't migrate this model to Azure.
We can't make any hardware changes on this machine.
Maybe we can change some server properties? Model properties? Connection properties?
Can we control for which reports/queries Tabular keeps the cache when it runs out of resources? For example, for a group of store reports that we know will generate many similar queries (e.g. only the store number changes).
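For context, the growing session queue we describe can be watched with the standard SSAS DMVs; a minimal sketch, run one statement at a time in an MDX query window connected to the Tabular instance:

```sql
-- SSAS schema-rowset DMVs, not relational tables; run against the Tabular instance.
SELECT * FROM $SYSTEM.DISCOVER_COMMANDS;
-- ...and $SYSTEM.DISCOVER_SESSIONS shows who is connected and what they last ran.
```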
Any advice?
If you are reporting on the previous week, could you automate SSRS to output the reports on Sunday night?
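If the subscription route works for you, one hedged option is to fire an existing standard subscription from a Sunday-night SQL Agent job by raising its timed event yourself; the report path below is a made-up placeholder:

```sql
-- Assumes a standard (non-data-driven) subscription already exists for the report.
-- '/Shops/WeeklyShopReport' is a placeholder path; substitute your own.
DECLARE @SubscriptionID nvarchar(260);

SELECT @SubscriptionID = CONVERT(nvarchar(36), s.SubscriptionID)
FROM ReportServer.dbo.Subscriptions AS s
JOIN ReportServer.dbo.[Catalog] AS c ON c.ItemID = s.Report_OID
WHERE c.[Path] = '/Shops/WeeklyShopReport';

EXEC ReportServer.dbo.AddEvent
     @EventType = N'TimedSubscription',
     @EventData = @SubscriptionID;
```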
I've been hearing conflicting statements on how many records / how much data Tableau can handle.
In the last week two people have told me they have dashboards with 100m and 600m records respectively. They do incremental refreshes.
If I have a dashboard with xxx million records, do clients only receive the data that is in their aggregated view?
So, if I have a source with 200 million records and the dashboard shows the aggregated total per week per product (let's say this is 400 cells, with millions of records underneath), is the client only receiving 400 data points?
If I then add filters down to sub-product or user-level data, would that mean all of that data is pulled in because of the filters? If that is the case, how does this affect speed?
Ultimately, Tableau can handle as much data as your datasource can handle. If Tableau is set up to connect to a datasource directly, only the results of a query are transmitted to the user. I've got billion-row datasources in BigQuery that return aggregated numbers to Tableau reasonably fast.
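To make that concrete, here is a sketch of roughly what a live connection sends for an aggregated weekly-by-product view (written as T-SQL, with invented table and column names): only the grouped rows come back to the client, not the millions of underlying records.

```sql
-- Hypothetical 200M-row table, but only the few hundred grouped rows
-- (weeks x products) travel back to the Tableau client.
SELECT DATEPART(year, sale_date) AS sale_year,
       DATEPART(week, sale_date) AS sale_week,
       product,
       SUM(amount)               AS total_amount
FROM dbo.Sales
GROUP BY DATEPART(year, sale_date), DATEPART(week, sale_date), product;
```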
If your datasource is not fast then this won't give good results in Tableau.
If you are using extracts, where in effect Tableau pulls all the data locally, things will usually be faster, but local drive space and memory will limit the size of the dataset, and each user will need an extract, unless you are using Tableau Server, in which case the extract can sit on the server.
Dashboards built on big datasources sometimes get slow when there are a lot of filters, because populating each filter requires a datasource query (which may be triggered every time you use a filter). There are strategies to speed up dashboards with this problem, such as using partial extracts that hold all the values used for filtering (you can sometimes use parameters for a similar speed gain), or simply designing the filters intelligently. But speed is usually the limiting factor, not the size of the source table.
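To see why lots of filters hurt, each quick filter is populated by something like the query below (hypothetical names); several of these against a very large table is exactly the work that partial extracts or parameters avoid.

```sql
-- The sort of query fired just to list the values shown in one quick filter.
SELECT DISTINCT sub_product
FROM dbo.Sales;
```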
The only real limit on how much Tableau can handle is how many points are displayed, and that depends on RAM. In my experience a 4 GB machine will choke on a chart with a couple of million points (e.g. a map plotting every postcode in the UK), but on a 16 GB machine I have never found a limit other than how fast the points are drawn.
We've been using SQL Server merge replication for a few years to synchronise data between our data centres, but we are now suffering with a big performance issue. This may be because the amount of data we are synchronising has increased a lot this year.
Our publisher is an always-on data centre in the UK. Our subscriber is a mobile data centre that travels around the world and is on for periods of up to a week at a time, approx. 25 times a year. However, it also spends the same amount of time (if not more) switched off whilst on its travels - it is a well travelled data centre!
We have 5 databases that we synchronise on these servers. However, one of them has a high number of data changes between periods of subscriber downtime, and our issue is that it takes days to catch up when the server is powered up; the other databases are fine.
Downloads from publisher to subscriber run at about 1.5 rows a second (which is annoying when we have hundreds of thousands of rows) but strangely uploads from subscriber to publisher run about ten times faster.
Things I have checked / tried:
• all tables have non-clustered primary keys on guid columns that have the rowguid property set
• changing the generation levelling threshold doesn't help
• setting the agent profile to high volume doesn't help
• running a trace at the publisher and subscriber shows the queries all run very fast (generally less than 20 ms, but there are gaps of around 200 ms between some batches of queries)
• analysis on our WAN link shows we have huge amounts of bandwidth spare
• analysis on our servers show we have huge amounts of Ram and CPU spare
Some of the places the subscriber visits do suffer from high latency, but this doesn't seem to have an impact: at 300 ms or 100 ms we still get the same poor performance.
One thing I did wonder about: does replication confirm back to the publisher every time it has successfully processed a row at the subscriber? If we have thousands of rows and there is latency on the line, will confirming each item compound the issue? If this does happen, is there a way to batch up messages between publisher and subscriber?
Any help that you can offer will be gladly received!
Thanks
Mark
We got to the bottom of this in the end.
It was our use of nvarchar(max) columns that was stopping the replication from using batches. What used to take 3 hours now takes 50 seconds just by changing the data type.
Here is the lesson learnt: "nvarchar(max) is a replication killer"
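For anyone hitting the same wall, the change boiled down to something like this (illustrative table and column names, and assuming the real values actually fit in the smaller type and the schema change is allowed to replicate to the subscriber):

```sql
-- Move off nvarchar(max) so the merge agent can batch the changes.
-- Pick a length the real data genuinely fits into before doing this.
ALTER TABLE dbo.Orders
    ALTER COLUMN Notes nvarchar(4000) NULL;
```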
Thanks
I work for a fleet tracking company and this question is specifically about how I plan to do reports. Let me explain our environment. We have 1x Database, 1x Load Distributing process, and 3x Report Processing servers (let's assume these are equal in every way). When a customer requests a report, all the parameters of that report go in the database. I'm currently working on a load distributing app that will take pending reports from the database and delegate them to the 3 report processing servers, which build and email the reports. When a server finishes a report (or an error arises), it notifies the load distributing app. Reports come in all sizes, from 1 day's worth of GPS data for 1 vehicle to 3 months of GPS data for hundreds of vehicles.
I can think of a few ways to do the load balancing but I'm not quite happy with them. I could have each server only do 5 reports at most, but 1 server might get 5 small reports while another gets 5 large reports. I could do a "Round Robin" approach and just hand out the reports sequentially across the servers, but this still doesn't protect against overloading any of the servers.
The best idea I have right now is to keep a count of how much GPS data each report needs (an easy thing to work out) and, as I assign reports to each server, keep a running total for each server. When a server finishes a report (and notifies the load balancer), I subtract that report's amount of GPS data from that server's running total. This way, I can assign the next report to the server with the smallest amount of GPS data to work through. I could also set a maximum so that a server cannot get overworked (the problem that is causing us to refactor our whole reports process to begin with). If more reports arrive when all servers are at their maximum, they can just be queued up and attempted later, when the servers finish a few of their reports.
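Expressed as a rough T-SQL sketch against a hypothetical tracking table (the real logic would live in the load distributing app), the assignment rule I have in mind looks something like this:

```sql
-- dbo.ReportServers is an invented table holding each server's outstanding workload.
DECLARE @NextReportUnits   bigint = 120000;    -- GPS rows needed by the next pending report
DECLARE @MaxUnitsPerServer bigint = 5000000;   -- cap so no single server gets overworked

-- Pick the least-loaded server that can still take this report;
-- if none qualifies, the report stays queued until a server frees up.
SELECT TOP (1) s.ServerId
FROM dbo.ReportServers AS s
WHERE s.OutstandingGpsUnits + @NextReportUnits <= @MaxUnitsPerServer
ORDER BY s.OutstandingGpsUnits ASC;
```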
I'm not convinced it's the best approach for finishing reports as quickly as possible. These are just the best I have come up with so far.
How can I optimize my approach to load balancing reports of different sizes across multiple servers?
Assuming that you have only one major table which you select data from, I would configure one server to do all the big reports first and leave the other two to work from smallest to largest; otherwise the big reports might never get done.
For the smaller reports, in the absence of anything better, you want the servers to run 'similar' reports, meaning those that cluster around similar values in the index that is mainly used. For example, if a server has just completed a report for June 2011, then the next best report to run covers the same period, not a jump to November 2012. This depends on the actual table, but I am presuming you have lots of date-ordered data making up the bulk of the selection. All you are really trying to do is group reports that are likely to reuse cached indexes etc., as this should give the best throughput.
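As a hedged sketch of what "run the most similar report next" could look like (queue table and columns invented), picking the pending report whose period sits closest to the one just finished keeps the relevant index pages warm:

```sql
DECLARE @LastPeriodStart date = '2011-06-01';  -- period of the report this server just completed

-- Nearest-period-first: the closer the date range, the more likely the
-- date-ordered index pages are still cached from the previous report.
SELECT TOP (1) ReportId, PeriodStart, PeriodEnd
FROM dbo.PendingReports
WHERE Status = 'Pending'
ORDER BY ABS(DATEDIFF(DAY, PeriodStart, @LastPeriodStart));
```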
I have a similar scheduling problem, and any queries that are directed at the major tables go to one server (the slow queue) and anything else goes to another (the fast queue), with some exceptions for special cases.
I have a query that runs on a data warehouse. I ran the report last month and it returned results in, say, x minutes. The same report, run now against the same database without any modifications, returns the same results but takes y minutes.
y > x, and the difference between the times is large.
The amount of data and the indexes are also the same; there is no difference in them.
Now the clients ask me for a reason for this. What are the possible reasons?
You leave a lot of questions open:
Is the database running on a dedicated server?
Do you run the reports from clients or directly on the server?
Have there been changes to the physical network? Have some settings been changed?
Did they (by accident) change the protocol used to communicate with the server (TCP, named pipes, ...)?
Have you tried defragmenting?
Have you rebooted the server?
Do you have an execution plan from before and after? (See the sketch below.)
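On that last point, and assuming a SQL Server warehouse (as the protocol question above suggests), one way to pull the current plan and average timings for comparison while the plan is still in cache is via the plan-cache DMVs; the text filter is a placeholder.

```sql
-- Cached plans and average timings for statements matching the report's query text.
SELECT TOP (20)
       qs.execution_count,
       qs.total_elapsed_time / qs.execution_count AS avg_elapsed_microseconds,
       st.[text]     AS query_text,
       qp.query_plan
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle)    AS st
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
WHERE st.[text] LIKE '%YourReportQuery%'           -- placeholder: match your report's SQL
ORDER BY qs.total_elapsed_time DESC;
```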
Most likely the query plan has changed. Some minor difference in the data has pushed the query optimiser's calculations onto a new, less optimal plan.
Here are a few:
The amount of data in the warehouse has changed.
Indexes might have been modified.
Your warehouse is split across different servers and there is connectivity lag between them.
Your database server is processing something else as well, which leaves it less memory and CPU for your reports to run.
We have an Analysis Services cube that needs to be as real-time as possible. It's a relatively small cube that currently takes a couple of seconds to process.
Are there any guidelines for this? I'm curious what other folks are doing.
Also, what would be the impact of processing the cube too frequently? Would the main concern be the load on the SSAS server and the source DB? In our case it would be fairly nominal. How would SSAS clients be affected? Current SSAS consumers are Excel, PerformancePoint, and SharePoint/Excel Services.
I would say the first issue you'd have to consider is how much this cube is going to grow over time. If it is constantly updated and processed, that couple of seconds could quickly turn into 20 minutes.
For example, we currently have a cube with 20 million rows (probably more by now) of financial data related to hospital billing and charges; it takes about 20 minutes to process and we do it once a day in the morning. Depending on the time of year we sometimes process again during the day, but there have been no complaints as long as we notify people that we are doing this.
Have you considered a real-time (ROLAP) partition to store the current day's data? This way, you get the performance of MOLAP for all your data prior to the current day, which you can process nightly, but have ROLAP's low latency for the data collected since the last cube process.
If your cube is small enough, you could even stretch that to be the current week's data, or more.
As for the disadvantages of processing frequently, check out the article below, which says: "If the processing job succeeds, an exclusive lock is put on the object when changes are being committed, which means the object is temporarily unavailable for query or processing. During the commit phase of the transaction, queries can still be sent to the object, but they will be queued until the commit is completed."
http://technet.microsoft.com/en-us/library/ms174860.aspx
So your users will see an impact in query performance.
It may be that you have to 'put it out there' and track how it performs.
Once you can see how people are using the cube, you can determine if constant reprocessing is really necessary and if it is, you may have to optimise how this occurs.
Specifically, using "usage-based optimisation" as described here:
http://www.databasejournal.com/features/mssql/article.php/3575751/Usage-Based-Optimization-in-Analysis-Services-2005.htm