Counting SSRS Report Executions (omitting Auto-Refresh counts)

I am performing an analysis on the frequency of SSRS Report executions.
However, I need a method of separating reports run 'manually' (by user interaction) from those that run due to an 'Auto Refresh' (the Auto Refresh property on an SSRS report).
Is there any method of separating these out when querying the ReportServer Database, or at least ignoring any executions which were due to an Auto-Refresh event?
Thanks in advance.

You can query the ExecutionLog views in the Report Server database. The fields you'll likely need to pay attention to are ItemAction and Source, but you'll need to determine which combinations you consider to be an execution. I'd start with ItemAction = 'Render' and Source = 'Live', possibly also looking at Format (Web vs. PDF, etc.).
Best thing to do is play with a report and see what data it generates in the log table, and then determine which ones you are interested in capturing.
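For example, a starting point for the counts might look something like this (assuming SSRS 2008 R2 or later, where the ExecutionLog3 view exposes ItemAction and Source; there is no dedicated auto-refresh flag, so you would still experiment to see which rows an auto-refresh produces in your environment):

-- Count "live" renders per report per day from the ReportServer catalog.
SELECT ItemPath,
       CAST(TimeStart AS date) AS RunDate,
       COUNT(*) AS ExecutionCount
FROM ReportServer.dbo.ExecutionLog3
WHERE ItemAction = 'Render'
  AND Source = 'Live'
GROUP BY ItemPath, CAST(TimeStart AS date)
ORDER BY ItemPath, RunDate;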

Related

SSRS 2008 Drillthrough Delay

I have two reports in SSRS 2008, Dashboard and Drillthrough.
Dashboard contains many datasets (all stored procedures), and takes about 4-5 seconds to run.
Clicking an aggregated value in one of the Dashboard tables takes the user to Drillthrough, which has a single dataset: a stored procedure accepting two parameters (an int and a char(1), both passed from Dashboard) that runs very quickly in SSMS.
The Drillthrough dataset is large, averaging around 10,000 rows, which are displayed in a table. The report is configured to have 200 rows per page, and so can have a lot of pages.
The problem:
When I click a link in Dashboard, nothing happens for about a minute. There are several issues I have with this:
The fact that the screen does not immediately switch to the 'Report is being generated' screen means confusion for the user, who sees no response (in cases where the report is embedded in a web page). Is this normal behaviour?
The Drillthrough query itself runs very quickly in SSMS, so why is it taking so long on the Report Server? Where is the hold-up likely to be? (I read up on 'parameter sniffing' in relation to this, but since the query runs quickly in SSMS, it seems my problem wouldn't be due to that.)

SSRS Caching and/or Snapshot

I am fairly new to SSRS reports, so I am looking for guidance. I have an SSRS report with 3 visible parameters: Manager, Director, and VP. The report displays data based on the parameters selected. Initially, the report was taking a very long time to load, and my research led me to create a snapshot of the report.
The initial load of the report is really quick (~5 secs) but the parameters are set to "Select All" in all sections. When the report is later filtered to say, only 1 VP, the load time can vary anywhere between 20 to 90 seconds. Because this report will be used by all aspects of management within the organization, load time is critical.
Is it possible to load the filtered data quicker? Is there anything I can do?
Any help will be much appreciated.
Thank you!
This is a pretty broad efficiency issue. One of the big questions is whether the query takes a long time to run in the database or just in SSRS. Ideally you would start with optimizing the query and indexing, but that's not always enough. The work has to be done somewhere; all you can do is shift it so that it happens before the report is run. Here are a couple of options:
Caching
Turn on caching for the report.
Schedule a subscription to run with each possible value for the parameter. This will cause the report to still load quickly once an individual is specified.
Intermediate Table
Schedule a SQL stored procedure to aggregate and index the data in a new table in your database.
Point the report to run from this data for quick reads (a rough sketch of the procedure follows after this list).
Each option has its pros and cons because you have to balance where the data preparation work is done. Sometimes you have to try a few options to see what works best for your situation.
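For the Intermediate Table option, the idea is roughly the following (a minimal sketch only; the table, procedure, and column names are invented for illustration):

-- Hypothetical nightly aggregation into a pre-summarized table.
CREATE PROCEDURE dbo.RefreshReportSummary
AS
BEGIN
    SET NOCOUNT ON;

    TRUNCATE TABLE dbo.ReportSummary;

    INSERT INTO dbo.ReportSummary (VP, Director, Manager, TotalAmount)
    SELECT VP, Director, Manager, SUM(Amount)
    FROM dbo.SalesDetail
    GROUP BY VP, Director, Manager;
END;

The report's dataset then selects from dbo.ReportSummary (indexed on the VP/Director/Manager columns), and a SQL Agent job runs the procedure overnight.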

Gain a Customized report

Goal:
Display the result based on the picture below in Reporting Services 2008 R2.
Problem:
How should I do it?
You also have to remember that in reality the list contains lots of data, maybe millions of rows.
In terms of the report itself, this should be a fairly standard implementation.
You'll need to create one Tablix with one group for Customer (one row), one group for Artist (two rows: one for the headers and one for the Artist name), and then a detail row for the Title.
It looks like you need more formatting options for the Customers Textbox - you could merge the cells in the Customer header row, then insert a Rectangle, which will give you more options to move objects around in the row.
For large reports you've got a few options:
Processing large reports: http://msdn.microsoft.com/en-us/library/ms159638(v=sql.105).aspx
Report Snapshots: http://msdn.microsoft.com/en-us/library/ms156325(v=sql.105).aspx
Report Caching: http://msdn.microsoft.com/en-us/library/ms155927(v=sql.105).aspx
I would recommend scheduling a Snapshot overnight to offload the processing to a quiet time, then making sure the report has sensible pagination set up so that not too much data has to be handled at once (i.e. not trying to view thousands of rows at one time in Report Manager).
Another option would be to set up an overnight Subscription that could save the report to a fileshare or send it as an email.
Basically you're thinking about reducing the amount of processing that needs to be done at peak times and processing the report once for future use to reduce overall resource usage.
I would use a List with text boxes inside it for that kind of display.
In addition, you may consider adding a page break after each customer.
Personally, I experienced lots of performance issues when dealing with thousands of rows, not to mention millions.
My advice is to reconsider the report's main purpose: if the report is for exporting, then don't use SSRS for that.
If the report is for viewing, then perhaps it is possible to narrow down the data using parameters based on the user's choice.
Last thing: I wish you good luck :)

Optimize Reporting in Reporting Services

I have 10 reports and 250 customers. All the reports are run by my customers, and each report takes parameters. Depending on the parameters, the same report connects to a different database and gets its results. I know that with different parameters caching is not an option. But I don't want to run these reports against live data during the daytime. Is there anything I can do (snapshot, subscription) that can run overnight and either send these reports or save a snapshot that could be used for the next 24 hours?
Thanks in advance.
As M Fredrickson suggests, subscriptions might work here depending on the number of different reports to be sent.
Another approach is to consolidate your data query into a single shared dataset. Shared datasets can have caching enabled, and there are several options for refreshing that cache, such as on first access or on a timed schedule. See MSDN for more details.
The challenge with a cached dataset is to figure out how to remove all parameters from the actual data query by moving them elsewhere, usually into the dataset filter in the report, or into the filters of the individual data elements, such as your tablixes.
I use this approach to refresh a 10 minute query overnight, and then return the report all day long in less than 30 seconds, with many different possible parameters filtering the dataset.
You can also mix this approach with others by using multiple datasets in your report, some cached and some not.
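To illustrate moving the parameters out of the query (table and column names here are invented), the cached dataset returns the unfiltered result and the report applies the parameters as a filter:

-- Parameterized version: each parameter combination is a separate query,
-- so the result can't be shared from the cache.
-- SELECT CustomerId, ReportDate, Amount FROM dbo.SalesFact WHERE CustomerId = @CustomerId;

-- Cache-friendly version: the shared dataset returns everything once...
SELECT CustomerId, ReportDate, Amount
FROM dbo.SalesFact;

-- ...and @CustomerId is applied as a dataset (or tablix) filter in the report,
-- e.g. Expression: [CustomerId]   Operator: =   Value: [@CustomerId]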
I would suggest going the route of subscriptions. While you could do some fancy hack to get multiple snapshots of a single report, it would be cleaner to use subscriptions.
However, since you've got 250 customers and 10 different reports, I doubt that you'll want to configure and manage 2,500 different subscriptions within Report Manager... so I would suggest that you create a data-driven subscription for each of the reports.
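If you do go the data-driven route, the subscription is driven by a query that returns one row per delivery, supplying the recipient and the report parameter values that the subscription wizard maps to the report (a sketch with invented table and column names):

-- Hypothetical data-driven subscription query: one row per customer.
SELECT c.CustomerName  AS CustomerParameter,  -- mapped to the report parameter
       c.ContactEmail  AS ToAddress,          -- mapped to the e-mail "To" field
       'PDF'           AS RenderFormat        -- mapped to the render format
FROM dbo.Customer AS c
WHERE c.IsActive = 1;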

Automating problem query identification in Oracle 11g

In our test bed, a number of test suites will be run in a row (unattended), producing reports for later consumption. I want to include in those reports queries which are candidates for further investigation, along with the data that justifies their inclusion in that list. We should be able to associate any query identified this way with the test suite that exposed it as a concern.
When we use SQL Server, this is relatively straightforward: a call to DBCC FREEPROCCACHE clears all of the counters before a suite begins; then, at test end, we run a query against sys.dm_exec_query_stats, which gives us access to the execution counts and min/max/total time(s) of each cached query plan, with hooks available to retrieve the parameterized SQL statement (we use FORCED parameterization in our mssql instances) and the query plan.
Ref: http://msdn.microsoft.com/en-us/library/ms189741%28SQL.90%29.aspx
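For reference, the SQL Server side of this looks roughly like the following (the TOP 50 cut-off and the columns selected are arbitrary choices):

-- Clear the plan cache before a suite (test environments only).
DBCC FREEPROCCACHE;

-- After the suite: per-plan execution counts and timings, with the
-- parameterized statement text.
SELECT TOP (50)
       qs.execution_count,
       qs.total_elapsed_time,
       qs.min_elapsed_time,
       qs.max_elapsed_time,
       SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
                 ((CASE qs.statement_end_offset
                        WHEN -1 THEN DATALENGTH(st.text)
                        ELSE qs.statement_end_offset
                   END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_elapsed_time DESC;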
My question: how do I implement an approximation for this when my target app has been connected to Oracle 11g? My reading thus far suggests that everything I'm after is available via the AWR, and that it should be possible to access the supporting views directly, but I haven't been able to close the circle on my own.
Why do you need to access the supporting views directly? It would seem to me that the simplest solution would be:
Each test suite starts and ends by explicitly generating an AWR snapshot so it knows the starting and ending snapshot ID and so that you can generate AWR reports for each suite individually.
You run AWR reports for each test suite
You review the AWR reports looking in particular at the various Top SQL sections
It's absolutely possible to get all the information from the underlying views directly, but if you don't need to do so, that's obviously easier.
Just for sanity, I should point out that I am assuming you are licensed to use AWR. Technically, even querying the AWR views requires that you have licensed the Performance and Tuning Pack. If you want to hit the views directly rather than generating full AWR reports because of licensing concerns, you're not saving yourself any license headaches by hitting the views.
The Oracle equivalent of DBCC FREEPROCCACHE is
SQL> alter system flush shared_pool;
The closest to the SQL Server counters are V$SYSSTAT and V$SYSTEM_EVENT.
However, Oracle actually tracks these at the session level too, in V$SESSION_WAIT, V$SESSION_WAIT_CLASS and V$SESSION_EVENT, so you don't need to reset them at the system level.
And you don't need the Diagnostic/Tuning pack licenses to access them.
They don't go down to the SQL level. That is available in V$SQL, though it would not be specific to that session. You can use session-level tracing to track down individual SQL statements that may be causing problems.
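As a starting point at the statement level, something like this lists the heaviest statements since they entered the shared pool (the columns and the top-50 cut-off are just illustrative):

-- Heaviest statements currently in the shared pool, by elapsed time.
SELECT *
FROM (SELECT sql_id,
             executions,
             elapsed_time,
             cpu_time,
             buffer_gets,
             sql_text
      FROM   v$sql
      ORDER  BY elapsed_time DESC)
WHERE ROWNUM <= 50;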
Justin's answer had the correct outline, but I needed more details about the implementation.
Each test suite starts and ends by explicitly generating an AWR snapshot so it knows the starting and ending snapshot ID and so that you can generate AWR reports for each suite individually.
You run AWR reports for each test suite
You review the AWR reports looking in particular at the various Top SQL sections
I explicitly generate the snapshots by calling dbms_workload_repository.create_snapshot; the result gets saved off for later.
select dbms_workload_repository.create_snapshot() as snap_id from dual
In order to get the report, I need the database id and the instance number. Those are easily obtained from v$database and v$instance.
select d.DBID, i.instance_number as inst_num from v$database d, v$instance i
The report is available as text (dbms_workload_repository.awr_report_text) or HTML (dbms_workload_repository.awr_report_html). The arguments are the same in both cases, including an options flag which will include information from the Automatic Database Diagnostic Monitor (ADDM). It wasn't immediately obvious to me that I could leverage the ADDM results, so I turn that off. The return value is a column of varchar, so the function call gets wrapped in a TABLE() expression:
select output from table(dbms_workload_repository.awr_report_html(1043611354,1,5539,5544,0))
This result is easily written to a file, which is assembled with the other artifacts of the test.
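If the suite driver runs under SQL*Plus, spooling the HTML straight to a file is one way to do that (the file name and the substitution variables are placeholders):

-- SQL*Plus sketch: write the AWR HTML report for one suite to a file.
SET PAGESIZE 0 LINESIZE 8000 TRIMSPOOL ON HEADING OFF FEEDBACK OFF
SPOOL suite_awr_report.html
SELECT output
FROM   TABLE(dbms_workload_repository.awr_report_html(&dbid, &inst_num, &begin_snap, &end_snap, 0));
SPOOL OFF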
Documentation of these methods is available online.