I am trying to build a report to display cube usage: the executed query, user name, query start time, query duration, and some other information. This report will be helpful for performance optimization.
I followed the process mentioned in this link,
https://technet.microsoft.com/en-us/library/cc917676.aspx
The table 'OLAPQueryLog' was created successfully, but no rows are getting added to it. Rows should be populated whenever the cube is accessed by any user (i.e. when a query is executed on the cube).
I would really appreciate any help. Please reach out to me if you have any questions.
Most probably the problem is that SSAS does not have permission to write to the log table. Make sure that the account the SSAS service runs under has INSERT permission on the SQL Server database that hosts the QueryLog table. You can also check the Event Viewer on the machine that runs SSAS to see the error message SSAS is producing.
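As a minimal sketch of that grant, assuming the log table lives in a database named OlapLog and SSAS runs under its default virtual account (substitute your actual database and service account):

USE OlapLog;  -- assumption: the database hosting the query log table
GRANT INSERT ON dbo.OLAPQueryLog TO [NT SERVICE\MSSQLServerOLAPService];  -- default SSAS service account; yours may differ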
I am working in Microsoft Visual Studio to create my RDLs. I have a subreport which refuses to run when we put it up on SSRS. The report runs fine inside Visual Studio in preview mode, even when pointed at a copy of the prod database (it still takes 30 minutes, but it completes). The report returns only one row, containing summary counts over a large amount of data.
The full error text is:
An error has occurred during report processing. (rsProcessingAborted)
Cannot read the next data row for the dataset DataSet1. (rsErrorReadingNextDataRow)
A severe error occurred on the current command. The results, if any, should be discarded. Operation cancelled by user.
This report used to work, but the query was not pulling the information correctly, so I had to change the query and expand what it pulls from the database. Is there any way this could be caused by not enough memory being given to SSRS? We are using SSRS 2008 R2.
Turns out my problem was solved by putting in the recommended indexes provided by SQL Query Analyzer. After the indexes were created, the query ran in ~4 minutes with no problems.
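For anyone hitting the same wall: a recommended index generally arrives as a plain CREATE INDEX statement. A purely hypothetical sketch, with invented table and column names:

CREATE NONCLUSTERED INDEX IX_FactSummary_ReportDate  -- hypothetical index name
ON dbo.FactSummary (ReportDate)                      -- hypothetical table and key column
INCLUDE (RegionID, TotalCount);                      -- covering columns, as tuning tools often suggest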
@GregGalloway was able to answer the question I should have asked. I am adding a more concise question here, while maintaining the original lengthy text.
How do I use a table-valued function as the query for a partition, when the function is in a separate database from my fact and referenced dimensions?
Overview: I am building an SSAS multidimensional cube off of a single fact table in our application's data warehouse, and I want to use the result set from a table-valued function as my fact table's partition query. We are using SQL Server (and SSAS) 2014.
Condition: For each environment (Dev, Tst, Prd) there are two separate databases on the same server: one for the application data warehouse, [DW_App], and one for custom objects, [DW_Custom]. I cannot create any objects in [DW_App], but I have a lot of freedom in [DW_Custom].
Background info: I have not been able to find much information on using a TVF with partitions in this way. My thinking is that it will help streamline future development by giving me a single place to update the SQL if/when I modify the fact table.
So in testing out my crazy idea of using a TVF as the query for my partitions, I have run into a bit of a conundrum. I am able to use my TVF when I explicitly state the database in my FROM clause:
SELECT * FROM [DW_Custom].[dbo].[CubePartition](@StartDate, @EndDate)
However, that will not work, because the cube will be deployed in multiple environments before production, and it needs to point to a different database in each. So I tried adding a new data source, setting my partition query to point to the new data source, and removing the database name, i.e.:
SELECT * FROM [dbo].[CubePartition](@StartDate, @EndDate)
I get an error that says:
The SQL syntax is not valid. The relational database returned the following error message: Deferred prepare could not be completed. Invalid object name 'dbo.CubePartition'
If I click through this error, and the subsequent warnings saying the cube will not be able to process if I continue, I am able to build and deploy the cube. However, I cannot process it, because I get an error that one of my dimensions does not exist.
Looking into the query that was generated, it is clear that it queries my dimensions as well as the fact table, and those do not exist inside [DW_Custom], which explains that error perfectly well.
So I guess 2 questions:
Is it possible to query another DB (on the same server) from inside of an SSAS partition query?
If not, is there any way I can use a variable as the database name in the query, and update that variable based on the project configuration (Dev, Tst, Prd)?
Bonus question: Is the reason I cannot find much about doing it this way that it is an obviously bad idea I am overlooking, and if so, why?
How about creating a second SSAS data source pointing to the DW_Custom database (or whatever it's called in the particular environment you're deploying to)? Then, when you deploy from Dev to Prod, you need only change that connection string. When you create your partitions, specify the DW_Custom data source and write the query without the database name:
SELECT * FROM [dbo].[CubePartition](@StartDate, @EndDate)
As long as the query plan for that table-valued function is efficient compared to a plain SELECT statement, then I don't see a problem with that.
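For completeness, here is a sketch of what such a TVF might look like in [DW_Custom]; the fact table and column names are assumptions. The point of the design is that the cross-database reference to [DW_App] lives inside the function, so the partition query itself never needs a database name:

USE [DW_Custom];
GO
CREATE FUNCTION dbo.CubePartition (@StartDate DATE, @EndDate DATE)
RETURNS TABLE
AS
RETURN
(
    SELECT f.*                          -- the fact columns the measure group expects
    FROM [DW_App].dbo.FactSales AS f    -- assumed fact table name in the app warehouse
    WHERE f.TransactionDate >= @StartDate
      AND f.TransactionDate <  @EndDate
);
GO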
I have created an SSRS report. It works fine when I view it on the report server.
Now I am trying to use the Subscriptions feature of Report Server to export reports in PDF format on a schedule. For that:
I created new subscriptions for some of the reports and scheduled them using a shared schedule. The scheduling is working fine.
The issue is with the status of the report subscriptions under the Subscriptions tab; it shows:
"Failure writing file \\viaserver\Shared\Test.pdf: An impersonation error occurred using the security context of the current user"
Have you checked the data source settings of the report, and whether the RS execution account has permission to write files to "\\viaserver\Shared\"?
Hi Raju
The problem could be a bit complicated, as you do not have the log for the report. So here are some general suggestions; I hope they help you:
To check whether the report ran successfully within the subscription, you can query ExecutionLog2 in the report server database. By default the query should be like:
select * from ReportServer..ExecutionLog2 where RequestType = 'Subscription'
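If you only want the timing and outcome rather than every column, a narrower variant (column names as exposed by the standard ExecutionLog2 view) would be:

SELECT ReportPath, UserName, TimeStart, TimeEnd, Status
FROM ReportServer..ExecutionLog2
WHERE RequestType = 'Subscription'
ORDER BY TimeStart DESC;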
Instead of generating the PDF subscription, can you run the report directly? And is this report very large? When generating a report in PDF format, pagination can consume a lot of resources. If you reduce the report size, can the subscription run successfully?
RS subscriptions rely on SQL Server Agent; can you find out whether the SQL Agent job ran successfully? To find the job related to the subscription, you can use the query below, where the ScheduleID column is what you need to locate the job:
USE ReportServer
GO
SELECT S.ScheduleID AS SQLAgent_Job_Name
, SUB.Description AS Sub_Desc
, SUB.DeliveryExtension AS Sub_Del_Extension
, C.Name AS ReportName
, C.Path AS ReportPath
FROM ReportSchedule RS JOIN Schedule S ON RS.ScheduleID = S.ScheduleID
JOIN Subscriptions SUB ON RS.SubscriptionID = SUB.SubscriptionID
JOIN Catalog C ON RS.ReportID = C.ItemID AND SUB.Report_OID = C.ItemID
WHERE C.Name LIKE '%%' -- put your report name between the wildcards to filter
Here are some additional points to help you debug the issue; I ran into many of these issues today, using SQL 2016 on Server 2012 R2. I'm writing this up mainly for my own reference, but in hopes that (a) it will help someone else, and (b) someone can give additional info as to what is happening.
We have a domain set up, and I was trying to use a domain account as the account to impersonate. Use a local account instead; the account should have logon rights.
Set up a share. Only UNC paths are allowed, so be sure that the user set up in the previous bullet point has (a) NTFS folder permissions and (b) share permissions. These are 2 separate things. Do this even if the share is on the local machine.
I like to stop reporting services, rename the log file in <drive>\Program Files\Microsoft SQL Server\MSRS(SSRS version).MSSQLSERVER\Reporting Services\LogFiles\, then start it again so there's a fresh file and you're not scrolling down to the bottom of a 5MB file.
Sacrifice something - an animal, your firstborn - to the SQL Server Reporting Services gods. We're still trying to determine what happened, but at this stage the PDFs generate to a local folder, yet each one is still reported as having an error: the same errors we were getting when it wasn't writing the PDFs at all. I am thoroughly confused, and so is SSRS, apparently. I'll update this with additional info as we figure it out, but it's definitely not the easiest MS product we've had the pleasure of dealing with.
Is there a script which finds the current activity at the application -> login -> database -> table -> column level?
I have used
sp_who2, sp_who2 'Active', sysprocesses
Activity Monitor
Audit
Profiler
Trigger
Extended Events
and couldn't get column-level data for connections. I was able to get the SQL statements, table name, database, instance, application, and login name, but I couldn't get column names.
The reason I am trying to find this is to track all usage and re-architect the database.
Any help is appreciated.
sp_who2 and sp_who are the ones I have always used to get this kind of information. You can also check sys.sysprocesses to learn about the processes that are running on an instance of SQL Server.
If you want the columns involved in the queries, then consider using SQL Server tracing (Profiler or Extended Events).
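As a starting point, a minimal sketch that joins the activity DMVs gives you the application, login, database, and statement text in one result; column names would still have to be parsed out of the SQL text, since no DMV exposes them directly:

SELECT s.session_id,
       s.login_name,
       s.program_name AS application_name,
       DB_NAME(r.database_id) AS database_name,
       t.text AS sql_text
FROM sys.dm_exec_sessions AS s
JOIN sys.dm_exec_requests AS r
    ON r.session_id = s.session_id
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE s.is_user_process = 1;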
I need to find out which of our Crystal Reports our users are actually running, so we can get rid of those that are no longer being used. I can do some kind of query of the latest jobs run on the SQL server, as described here:
How to get the last run job details in SQL
However, I'm not sure how I could tie that back to the actual reports. I tried opening a Crystal Report in Crystal Reports 2008 while running a trace on the SQL server using SQL Profiler, but I don't see any database calls in the trace that would allow me to determine the name of the report being run.
How could I find out which Crystal Reports are actually in use?
I usually embed a unique identifier (the report's name or ID) in either a SQL-expression field or the command's query.
If the DBA finds an issue with a given query (e.g. non-performant), this approach easily identifies the source.
SQL-expression:
//{%report_name}
(
'Daily Obstetrics Review [OB003]'
)
In the command:
SELECT 'Daily Obstetrics Review [OB003]' AS REPORT_NAME
...
FROM ...
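Once the identifier is embedded, trace output becomes searchable. A hypothetical example, assuming a Profiler trace has been saved to a table named dbo.TraceData:

SELECT LoginName, StartTime, CAST(TextData AS NVARCHAR(MAX)) AS QueryText
FROM dbo.TraceData                                    -- assumed name of the saved trace table
WHERE CAST(TextData AS NVARCHAR(MAX)) LIKE '%OB003%'; -- the report's embedded identifier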