Can you refresh an SSIS data cube from an SSMS database job? - sql

I have data cubes on a server that are generated by SSIS. I would like to refresh them without creating SSIS packages, using a database job or plain SQL instead.
Is this possible?

Sure: create a job step, set its "Type" to "SQL Server Analysis Services Command", and enter your refresh command.
FYI, this is an example:
<Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
<Parallel>
<Process xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:ddl2="http://schemas.microsoft.com/analysisservices/2003/engine/2" xmlns:ddl2_2="http://schemas.microsoft.com/analysisservices/2003/engine/2/2">
<Object>
<DatabaseID>DATABASE_NAME</DatabaseID>
<CubeID>CUBE</CubeID>
</Object>
<Type>ProcessFull</Type>
<WriteBackTableCreation>UseExisting</WriteBackTableCreation>
</Process>
</Parallel>
</Batch>
An even simpler way, if you only need one refresh every 24 hours, is to configure proactive caching as "Scheduled MOLAP".

Related

OLAP Cube Parallelism not working as expected

We have an SSAS OLAP cube set up, and cube processing is triggered from a SQL Server Agent job (on SQL Server 2014). The SQL Server Agent job step is as below:
<Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
<Parallel MaxParallel="3">
<Process xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<Object>
<DatabaseID>{DatabaseID}</DatabaseID>
<CubeID>{CubeName}</CubeID>
</Object>
<Type>ProcessFull</Type>
<WriteBackTableCreation>UseExisting</WriteBackTableCreation>
</Process>
</Parallel>
</Batch>
This step is supposed to trigger three parallel measure-group pulls from the SQL Server database, but what we can see is that it triggers 10 parallel SELECT queries against the database. A few other cubes have similar parallel settings and fire only 3 queries at a time. The 10 calls happen only for this cube, and only in one specific environment. Is there any setting at the SSAS cube level that overrides the parallelism set by the SQL Agent job?
I think the setting you want is: Maximum Number of Connections on the Data Source.
https://learn.microsoft.com/en-us/analysis-services/multidimensional-models/set-data-source-properties-ssas-multidimensional?view=asallproducts-allversions
Specifies the maximum number of connections allowed by Analysis Services to connect to the data source. If more connections are needed, Analysis Services will wait until a connection becomes available. The default is 10. Constraining the number of connections ensures that the external data source is not overloaded with Analysis Services requests.
The MaxParallel attribute of the Parallel element controls the number of processing threads.
https://learn.microsoft.com/en-us/analysis-services/xmla/xml-elements-properties/parallel-element-xmla?view=asallproducts-allversions
Indicates the maximum number of threads on which to run commands in parallel. If not specified or set to 0, the instance of Analysis Services determines an optimal number of threads based on the number of processors available on the computer.
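If you do need to lower that connection cap, it can be changed in the data source's properties in SSMS, or scripted. Below is a rough ASSL sketch (not from the original answer); the `{...}` IDs and name are placeholders, and in practice you would script the existing data source from SSMS and edit only MaxActiveConnections, since an Alter requires the complete object definition:

```xml
<Alter ObjectExpansion="ObjectProperties"
       xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Object>
    <DatabaseID>{DatabaseID}</DatabaseID>
    <DataSourceID>{DataSourceID}</DataSourceID>
  </Object>
  <ObjectDefinition>
    <DataSource xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                xsi:type="RelationalDataSource">
      <ID>{DataSourceID}</ID>
      <Name>{DataSourceName}</Name>
      <ConnectionString>...</ConnectionString>
      <!-- Cap concurrent connections to the relational source at 3,
           matching the MaxParallel="3" used in the processing batch -->
      <MaxActiveConnections>3</MaxActiveConnections>
    </DataSource>
  </ObjectDefinition>
</Alter>
```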

SQL script to change all the cubes to process partially

I have a requirement to "write a script to change all the cubes to process partially". This script needs to be something that can be scheduled with a SQL Agent job.
I'm comfortable with SQL Agent but I don't know much about SSAS, so I'm struggling a bit. If, when using SSMS to process a cube, you select the "Script" option, you get something like the following:
<Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
<Parallel>
<Process xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:ddl2="http://schemas.microsoft.com/analysisservices/2003/engine/2" xmlns:ddl2_2="http://schemas.microsoft.com/analysisservices/2003/engine/2/2" xmlns:ddl100_100="http://schemas.microsoft.com/analysisservices/2008/engine/100/100" xmlns:ddl200="http://schemas.microsoft.com/analysisservices/2010/engine/200" xmlns:ddl200_200="http://schemas.microsoft.com/analysisservices/2010/engine/200/200" xmlns:ddl300="http://schemas.microsoft.com/analysisservices/2011/engine/300" xmlns:ddl300_300="http://schemas.microsoft.com/analysisservices/2011/engine/300/300" xmlns:ddl400="http://schemas.microsoft.com/analysisservices/2012/engine/400" xmlns:ddl400_400="http://schemas.microsoft.com/analysisservices/2012/engine/400/400" xmlns:ddl500="http://schemas.microsoft.com/analysisservices/2013/engine/500" xmlns:ddl500_500="http://schemas.microsoft.com/analysisservices/2013/engine/500/500">
<Object>
<DatabaseID>MyCube</DatabaseID>
</Object>
<Type>ProcessFull</Type>
<WriteBackTableCreation>UseExisting</WriteBackTableCreation>
</Process>
</Parallel>
</Batch>
Does this look like it will satisfy the requirement? If so, how do I alter it to process only partially?
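For reference (not from the original thread): the Type element controls the processing mode, and replacing ProcessFull with ProcessDefault is the closest built-in option to "partial" processing, since it brings only unprocessed or partially processed objects up to date. A sketch based on the script above:

```xml
<Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Parallel>
    <Process xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
      <Object>
        <DatabaseID>MyCube</DatabaseID>
      </Object>
      <!-- ProcessDefault processes only objects that are not already
           in a processed state, instead of rebuilding everything -->
      <Type>ProcessDefault</Type>
      <WriteBackTableCreation>UseExisting</WriteBackTableCreation>
    </Process>
  </Parallel>
</Batch>
```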

Process dimension and cube via XMLA script ignoring Dimension Key errors

In SSAS, there is an option to ignore dimension key errors when manually processing a dimension from Visual Studio, but I did not find an equivalent in an XMLA script despite lots of Binging and Googling. If it is possible, kindly help.
The XMLA script only specifies the dimension/fact/database you want to process, along with processing options. All other cube settings (e.g. ignore duplicate keys) are inherited from the cube itself, so if you have set those properties on the SSAS cube, they will be honoured.
However, you can process each dimension separately via XMLA to avoid other key-related issues. It isn't straightforward: you have to get the XMLA script for each dimension. For example:
<Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
<Parallel>
<Process xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:ddl2="http://schemas.microsoft.com/analysisservices/2003/engine/2" xmlns:ddl2_2="http://schemas.microsoft.com/analysisservices/2003/engine/2/2" xmlns:ddl100_100="http://schemas.microsoft.com/analysisservices/2008/engine/100/100" xmlns:ddl200="http://schemas.microsoft.com/analysisservices/2010/engine/200" xmlns:ddl200_200="http://schemas.microsoft.com/analysisservices/2010/engine/200/200" xmlns:ddl300="http://schemas.microsoft.com/analysisservices/2011/engine/300" xmlns:ddl300_300="http://schemas.microsoft.com/analysisservices/2011/engine/300/300">
<Object>
<DatabaseID>Database_Name</DatabaseID>
<DimensionID>Dimension_Name</DimensionID>
</Object>
<Type>ProcessFull</Type>
<WriteBackTableCreation>UseExisting</WriteBackTableCreation>
</Process>
</Parallel>
</Batch>
Basically, you can avoid dimension key errors from the SSAS cube itself. For example, you will get a duplicate-key error when the table contains both NULL and blank values.
Update
You can change the dimension settings by going to Database > Process > Change Settings.
Then click on the "Dimension key errors" tab and set your desired values.
Once you are done, click OK, then click Script to generate the corresponding XMLA.
You will notice that your XMLA now has an ErrorConfiguration node with the values you selected.
XMLA - ReportAndStop
<Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
<ErrorConfiguration xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:ddl2="http://schemas.microsoft.com/analysisservices/2003/engine/2" xmlns:ddl2_2="http://schemas.microsoft.com/analysisservices/2003/engine/2/2" xmlns:ddl100_100="http://schemas.microsoft.com/analysisservices/2008/engine/100/100" xmlns:ddl200="http://schemas.microsoft.com/analysisservices/2010/engine/200" xmlns:ddl200_200="http://schemas.microsoft.com/analysisservices/2010/engine/200/200" xmlns:ddl300="http://schemas.microsoft.com/analysisservices/2011/engine/300" xmlns:ddl300_300="http://schemas.microsoft.com/analysisservices/2011/engine/300/300">
<KeyErrorLimit>2</KeyErrorLimit>
<KeyErrorLimitAction>StopLogging</KeyErrorLimitAction>
<KeyNotFound>ReportAndStop</KeyNotFound>
<KeyDuplicate>ReportAndStop</KeyDuplicate>
<NullKeyConvertedToUnknown>ReportAndStop</NullKeyConvertedToUnknown>
<NullKeyNotAllowed>ReportAndStop</NullKeyNotAllowed>
</ErrorConfiguration>
<Parallel>
<Process xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:ddl2="http://schemas.microsoft.com/analysisservices/2003/engine/2" xmlns:ddl2_2="http://schemas.microsoft.com/analysisservices/2003/engine/2/2" xmlns:ddl100_100="http://schemas.microsoft.com/analysisservices/2008/engine/100/100" xmlns:ddl200="http://schemas.microsoft.com/analysisservices/2010/engine/200" xmlns:ddl200_200="http://schemas.microsoft.com/analysisservices/2010/engine/200/200" xmlns:ddl300="http://schemas.microsoft.com/analysisservices/2011/engine/300" xmlns:ddl300_300="http://schemas.microsoft.com/analysisservices/2011/engine/300/300">
<Object>
<DatabaseID>Database_Name</DatabaseID>
</Object>
<Type>ProcessFull</Type>
<WriteBackTableCreation>UseExisting</WriteBackTableCreation>
</Process>
</Parallel>
</Batch>
You can also generate the same script by changing every default to some other value, and then editing the generated XMLA to the values you actually want.
Two easy approaches here.
A. From Visual Studio
B. From SQL Server Management Studio
The generated XMLA script does not show the ErrorConfiguration element, but it automatically takes care of the "ignore errors" settings you configured. You can use these XMLA scripts anywhere in SQL Server Agent or other services to process cubes/dimensions automatically.

ssas 2005 cube marked as unprocessed after being processed by sql server job SQL Server Analysis Services Command

I have an SSAS 2005 cube deployed, and a SQL Server 2005 job that runs nightly using a SQL Server Analysis Services Command step like the one below. The job runs without problems, but after each run the cube's properties in SSMS show it as 'unprocessed'. However, 'Last processed' shows the job's finish date and time, which means the job did process the cube. A deploy from BIDS brings the cube back to processed status, but when the job finishes, the cube becomes 'unprocessed' again. Any idea why the cube is marked as unprocessed by the job?
<Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
<Parallel>
<!--################# Dimensions ################-->
<!--Branch-->
<Process xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:ddl2="http://schemas.microsoft.com/analysisservices/2003/engine/2" xmlns:ddl2_2="http://schemas.microsoft.com/analysisservices/2003/engine/2/2">
<Object>
<DatabaseID>RMIS_cube</DatabaseID>
<DimensionID>Vw Dim Retail Branch</DimensionID>
</Object>
<Type>ProcessFull</Type>
<WriteBackTableCreation>UseExisting</WriteBackTableCreation>
</Process>
<!--Products-->
<Process xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:ddl2="http://schemas.microsoft.com/analysisservices/2003/engine/2" xmlns:ddl2_2="http://schemas.microsoft.com/analysisservices/2003/engine/2/2">
<Object>
<DatabaseID>RMIS_cube</DatabaseID>
<DimensionID>Vw Dim Products</DimensionID>
</Object>
<Type>ProcessFull</Type>
<WriteBackTableCreation>UseExisting</WriteBackTableCreation>
</Process>
<!--Time-->
<Process xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:ddl2="http://schemas.microsoft.com/analysisservices/2003/engine/2" xmlns:ddl2_2="http://schemas.microsoft.com/analysisservices/2003/engine/2/2">
<Object>
<DatabaseID>RMIS_cube</DatabaseID>
<DimensionID>Vw Dim Time</DimensionID>
</Object>
<Type>ProcessFull</Type>
<WriteBackTableCreation>UseExisting</WriteBackTableCreation>
</Process>
<!--################# Facts ################-->
<!--Sales-->
<Process xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:ddl2="http://schemas.microsoft.com/analysisservices/2003/engine/2" xmlns:ddl2_2="http://schemas.microsoft.com/analysisservices/2003/engine/2/2">
<Object>
<DatabaseID>RMIS_cube</DatabaseID>
<CubeID>Rmis</CubeID>
<MeasureGroupID>Vw Fact Sales</MeasureGroupID>
<PartitionID>Vw Fact Sales</PartitionID>
</Object>
<Type>ProcessFull</Type>
<WriteBackTableCreation>UseExisting</WriteBackTableCreation>
</Process>
</Parallel>
</Batch>
I would assume that the script does not process all partitions contained in the cube. One way to find out which parts of the cube are left unprocessed is as follows:
In BIDS or Management Studio, right-click the cube, select "Process"
Set "Process Options" to "Process Default"
Click on the button "Impact Analysis"
A dialog pops up which lists, at the bottom, the objects that are unprocessed.
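If the impact analysis confirms that some partitions are left unprocessed, one catch-all option (a sketch, not from the original answer) is to append a ProcessDefault step on the whole database after the per-object steps, which processes only whatever is still unprocessed:

```xml
<Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Process xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
    <Object>
      <DatabaseID>RMIS_cube</DatabaseID>
    </Object>
    <!-- ProcessDefault brings any unprocessed objects (e.g. partitions
         missed by the explicit steps) up to a processed state -->
    <Type>ProcessDefault</Type>
  </Process>
</Batch>
```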

NHibernate generates plain SQL query instead of execution statement

Using SQL Profiler, I was able to see that the query generated by NHibernate was executed in the
EXEC sp_executesql N'select ...'
fashion.
I am wondering if there is any way to force Nhibernate to generate the plain
Select ...
statement instead.
The reason I want this is that SQL Server apparently generates different execution plans for the two forms, and in my scenario the plain "select ..." runs MUCH faster.
Update (Nov 30, 2012)
I just found this link: Why does sp_executesql run slower when parameters are passed as arguments
I believe the popular answer (with 4 upvotes as of now) explains the reason well.
So now the question is:
Can I generate a straight query instead of a parameterized one using NHibernate?
No, NHibernate issues commands to SQL Server using sp_executesql and this cannot be changed. However, you should be able to troubleshoot any slow queries and resolve the performance issues. The first thing I would look at is whether the parameters supplied with the sp_executesql call have the same data types as the columns they reference; a mismatch forces an implicit conversion that can rule out an index seek.
In your session factory configuration you can enable ShowSql. This will output the generated SQL queries to the output window while you are debugging. You'll need to set your BatchSize to 0 to see all the queries; if batching is enabled, you won't see the queries it groups together (to optimize performance).
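For example, with XML-based configuration (property names per NHibernate's standard configuration schema; setting adonet.batch_size to 0 disables batching so every statement is logged):

```xml
<hibernate-configuration xmlns="urn:nhibernate-configuration-2.2">
  <session-factory>
    <!-- Log the generated SQL to the console / debug output -->
    <property name="show_sql">true</property>
    <!-- Disable batching so no statements are grouped out of the log -->
    <property name="adonet.batch_size">0</property>
  </session-factory>
</hibernate-configuration>
```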
NHibernate Profiler is also an invaluable (but commercial) tool for debugging your code: http://www.hibernatingrhinos.com/products/NHProf
You should clear the execution plans on your server and then try again:
DBCC FREEPROCCACHE
And/or you can force a recompilation of the execution plan by injecting OPTION (RECOMPILE) into your query.
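As an illustration (the table and column names here are made up; only the hint matters), the same parameterized call with OPTION (RECOMPILE) appended makes SQL Server compile a fresh plan for the actual parameter values, much like the plain SELECT does:

```sql
-- Hypothetical table/column; OPTION (RECOMPILE) forces a plan
-- compiled for the sniffed value of @p0 on every execution
EXEC sp_executesql
    N'select * from Orders where CustomerId = @p0 option (recompile)',
    N'@p0 int',
    @p0 = 42;
```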
NHibernate uses log4net, so if you are using log4net you just need to add an appender, as described here:
https://devio.wordpress.com/2010/04/14/logging-sql-statements-generated-by-nhibernate/
For example:
<appender name="DebugSQL" type="log4net.Appender.FileAppender">
<param name="File" value=".\nhdb.log"/>
<param name="AppendToFile" value="true" />
<layout type="log4net.Layout.PatternLayout">
<conversionPattern
value="%date [%thread] %-5level %logger [%property{NDC}] - %message%newline" />
</layout>
</appender>
<logger name="NHibernate.SQL" additivity="false">
<level value="DEBUG" />
<appender-ref ref="DebugSQL" />
</logger>