I have a view that returns 28,000 rows of data within 3 seconds. However, when I use this view to build an SSRS matrix (pivot) report, it takes almost 2 minutes to run.
More detail about the view:
Gets data from a linked server
Only about 10 columns, including a date field and amount fields (the date field is what I pivot on in SSRS to total the amounts)
What I have tried so far:
Dumped the view into a temp table (roughly as in the sketch below)
Added OPTION (RECOMPILE)
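Something along these lines (a sketch only; the view and temp table names are placeholders, since the real ones aren't shown):

SELECT *
INTO #ReportData
FROM dbo.MyView
OPTION (RECOMPILE);

SELECT * FROM #ReportData;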
The report is very simple, with no parameters. It's one of those reports that users run to do a data dump into Excel before importing it into another system.
Any suggestions?
I would look into doing as much of the aggregation as you can on the server, if that's what's taking the time, especially as it sounds like a relatively static report. Give the data to SSRS in a state where it has to do as little work as possible.
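For example (a rough sketch with placeholder names, since the actual view and columns aren't shown), you could total the amounts per date on SQL Server and hand SSRS one pre-aggregated row per date instead of 28,000 detail rows:

SELECT [Date], SUM(Amount) AS TotalAmount
FROM dbo.MyView
GROUP BY [Date];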
If your query then takes up to two minutes to run on SQL Server, you could look into performance tuning, indexing, etc.
Example of C1FlexPivotPage
I have a very large dataset (4 million rows) that is extracted through a SQL query. I need to show the data in a grid that is part of a C1FlexPivotPage, but it takes a long time to calculate and display the results. The data needs custom formatting, like pivot tables, filters, and aggregate functions.
So my question is: what is the best practice for speeding up these custom reports, something like the Excel pivot-table experience where the grid view updates immediately?
Thanks for any advice.
OK, so I have this report I have to write in SSRS with a very specific format. It looks like the screenshot below at the bottom (ignore the arrows and colors). It pulls from an Oracle database. Each numeric cell in this table/matrix has a different SQL query behind it because the values come from different tables, etc.
The top half of the numbers in the table each come from a query; the bottom half of the table is all calculations on the numbers in the top half. I already have the queries for the top half and was trying to figure out how to build this table in SSRS from just those, creating the calculations for the bottom half in the report itself. I can't use a table or a matrix directly because each query is a different dataset, and you can only have one dataset per tablix.
I was thinking of maybe using text boxes and drawing the grid manually, which would be a huge pain. I get errors about not having an aggregate and being out of scope or something, and haven't figured out the reason yet, since it isn't my ideal solution anyway.
My current solution, which will eventually get me there, is unioning every single query, creating columns with static values for the grid's rows and columns, and turning it into a matrix. The problem is that it keeps growing in complexity with each calculation further down the table; the code gets larger and larger and takes a long time to write, and I have to do about 6 reports similar in format to this one. It will probably end up being a thousand lines of SQL and force me to use a stored proc because of the SSRS character limitation.
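To give a feel for that approach, each cell ends up as its own branch of a UNION ALL with hard-coded row/column labels, something like this sketch (the labels, tables, and columns are placeholders):

SELECT 'BAC Labor' AS row_label, SUM(labor_amount) AS cell_value FROM labor_costs
UNION ALL
SELECT 'BAC Overhead', SUM(overhead_amount) FROM overhead_costs
-- plus another UNION ALL branch for every remaining cell, and more again for each calculated row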
So my question, put more simply, is: how can I take multiple SQL queries that each return a single static value, show them as single non-repeating values in a tablix, and then create more blank rows in that tablix that are calculations on other cells' values, i.e. Textbox1 - Textbox2, Textbox3 / Textbox4?
I got it figured out using expressions with multiple datasets. The answer seemed too easy once I found it. Basically, I created a table tablix using my first dataset, added more detail rows with Insert Row > Inside Group - Below, then went to the expression builder for each cell, found the other dataset, and double-clicked it to get an expression that pulls from that dataset. For example, the value from the BAC_Labor dataset looks like this: =Sum(Fields!BAC_LABOR.Value, "BAC_Labor")
For the calculations you can either do the same thing, like =Sum(Fields!BAC_LABOR.Value, "BAC_Labor") + Sum(Fields!BAC_LABOR_OVERHEAD.Value, "BAC_Labor"), or reference a cell value with something like =ReportItems!Textbox2.Value - ReportItems!Textbox1.Value. This saves a ton of time, development effort, and calculation code compared to adding together 500-character SELECT statements. There's also no need for stored procs, or to union or join every SELECT statement together, with this method.
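To make the structure concrete, here is a sketch of how the detail rows might be set up (the dataset, field, and textbox names are only illustrative):

Row 1 (pulled from dataset BAC_Labor): =Sum(Fields!BAC_LABOR.Value, "BAC_Labor")
Row 2 (pulled from dataset BAC_Material): =Sum(Fields!BAC_MATERIAL.Value, "BAC_Material")
Row 3 (calculated from the two cells above): =ReportItems!Textbox1.Value + ReportItems!Textbox2.Value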
I am working with Office 2016 Excel and connecting to an Oracle DB.
I am creating a file that fetches orders, the part number, the desired delivery dates, the actual delivery dates, the average lead time, the average consumption, and finally the current stock level.
I have created an SQL query to fetch the orders, the dates, and the lead time. However, in the database the current stock level is obtained through a procedure that takes the part number and the location as input ('%' can be used for the overall stock level). I could build a query that takes all orders and all part numbers, runs the procedure for every single part number, and then build my Power Query on top of that, but that seems like a hideous waste of processing power.
The question is: is there a way to append or merge into the Power Query by calling the procedure after I have filtered the initial source results, so that the procedure only runs for orders placed in the last month, or only for the filtered part numbers?
I have tried looking in the usual places (support.office, Google, and here), but I overwhelmingly get results about how to append or merge queries, which is trivial and is basically a version of the unwanted situation.
Stian,
If your procedure is not a stored procedure but a function, then you can create a separate query for this function and then add a new column to the filtered table. That column gets its value by executing the function with parameters taken from other columns.
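As a rough sketch in M (GetStockLevel stands for whatever you name the function query, FilteredOrders for your filtered table, and PartNumber for its column), the added column step could look like:

= Table.AddColumn(FilteredOrders, "CurrentStock", each GetStockLevel([PartNumber], "%"))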
The other way of doing this is to create a view that has all the columns you'd like, including the one generated by the procedure. You then query this view in Power Query and apply your filters to the result. This should trigger query folding, which passes your filters to the server so it can optimize the query and avoid fetching unneeded rows.
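Something like this on the Oracle side (a sketch that assumes the stock-level logic is, or can be wrapped as, a function; all names are placeholders):

CREATE OR REPLACE VIEW orders_with_stock AS
SELECT o.order_no,
       o.part_number,
       o.desired_delivery_date,
       o.actual_delivery_date,
       get_stock_level(o.part_number, '%') AS current_stock
FROM orders o;

Filtering that view on date or part number in Power Query should then fold back to the server.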
As far as I know, native queries (where you directly write an SQL query to execute) are not subject to query folding. They are also fairly insecure and generally an ad-hoc solution. Keep this in mind.
Google for "Query folding in Power Query" if you'd like more info.
Still, for the optimal scenario I would consider using a function instead of a procedure (if that's possible, of course), and combining it with a view to get the data you want.
Hopefully the title says it all, but just in case: I am in the midst of reworking some custom-built in-house software, written in VB.NET, that queries the database, gathers the returned data, processes it in VB.NET, and then outputs the results to Crystal Reports for display.
We have found this to be incredibly wasteful of resources, and cruelly slow. As I mentioned, I am reworking the program so that the Crystal Report itself gathers the information and displays it. This morning I am reworking a seemingly simple portion: a monthly report that just counts the total records, sums some things, and does other small tasks.
So my question is this: what is the Crystal syntax to count the number of records returned that fall between two dates (parameters passed in from the VB side)?
I want to write this:
COUNT({Table.Column} in {?FromDate} to {?ToDate});
Crystal Reports then highlights everything inside the count and tells me a field is required here.
I also tried to create a running total, but it only tallied 1. So if that is the correct course of action please explain.
Thanks!!
Crystal Reports formula syntax does not allow a conditional count in this way. I usually handle this sort of thing with two separate formulas: the first returns 1 or 0 for the logic test, and the second sums the first, e.g.
{@InRange}:
if {Table.Column} in {?FromDate} to {?ToDate} then 1 else 0
{@SumInRange}:
Sum({@InRange})
I'm pretty sure I've achieved similar results in a single formula, but it makes the code less readable and harder to maintain, sometimes requiring WhilePrintingRecords evaluation that makes the formula results inaccessible for use in other areas of the report. Using two formulas also allows easy filtering on the first one.
I have noticed that when I run a report that contains nothing but an execution-time field, the report spends a long time loading whenever it has a large embedded SQL dataset. Why is this? The report isn't even looking for any data from the dataset, so what is it doing with it? The load time is clearly proportional to the amount of data in the dataset(s). Is there a better way to create datasets? SQL Report Builder seems pretty unusable as is, since some of my datasets contain millions of records.
Report generation involves three stages:
data retrieval
processing
rendering
These stages run sequentially: during the data retrieval stage, Reporting Services does not yet know how the data will be used in the report, so it executes the queries for all datasets in the report. The processing stage then takes the dataset results and applies the report structure, such as tables and grouping, to the data.
This is why your report executes the dataset query even though the report only contains an execution-time field, and it's also why it's important to ensure that datasets return only the data required for the report. Minimising the amount of data retrieved is important for good report performance.
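For example, if nothing in the report uses a dataset, delete it; otherwise push the filtering and aggregation into the dataset query so only the rows the report actually renders come back, along the lines of this sketch (the table, columns, and parameters are placeholders):

SELECT OrderDate, SUM(Amount) AS TotalAmount
FROM dbo.SalesHistory
WHERE OrderDate >= @StartDate AND OrderDate < @EndDate
GROUP BY OrderDate;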
Are you sure you don't have any tablix on your report? A tablix has a DataSetName property, and if you run the report with that property set to Dataset1, for example, SSRS will execute the query for Dataset1.