Process cube without access to the DataSource - ssas

I am processing an AX cube on an SSAS server.
I don't have access to the AX cube's data source, and the processing fails.
I am using the Inherit authentication option while processing the cube. Please let me know the solution to this problem.
I need to process the cube in order to bring data into it, as it is showing no data after being backed up and restored.

In order to process a cube, you have to have access to the database it takes its data from.
In other words, to process a cube, SSAS needs to read data from the data source. You might need to ask the DBA for a user account that can access the source database, and provide those credentials in the data source settings.
If you inherit the authentication settings, SSAS will try to access the source database with the credentials you used to connect to the cube (if you process it manually), or with the credentials of the task that runs the cube processing.
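As a minimal sketch of what "provide this information in the data source settings" can look like when scripted with AMO (the Microsoft.AnalysisServices library), assuming placeholder server, database, and account names that the DBA has granted read access to the source:

    // AMO sketch: set explicit credentials on the cube's data source, then process.
    // Server, database, and account names below are placeholders.
    using Microsoft.AnalysisServices;

    var server = new Server();
    server.Connect("Data Source=MySsasServer");              // hypothetical SSAS instance

    Database db = server.Databases.FindByName("AxCubeDb");   // hypothetical SSAS database
    DataSource ds = db.DataSources[0];                       // the AX source the cube reads from

    // Instead of inheriting, impersonate a specific Windows account that can read the source DB.
    ds.ImpersonationInfo = new ImpersonationInfo
    {
        ImpersonationMode = ImpersonationMode.ImpersonateAccount,
        Account = @"DOMAIN\SsasReader",                       // placeholder account
        Password = "********"
    };
    ds.Update();                                              // push the setting to the server

    db.Process(ProcessType.ProcessFull);                      // processing can now read the source
    server.Disconnect();

The same impersonation setting can be made interactively in the data source properties in SSMS or the project designer; the script is just one way to apply it.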

Related

ASP.NET Core: get data from SQL Server or from SSAS OLAP Cube?

I have a problem with an application that encompasses an SSAS project with an OLAP cube, a client project using ASP.NET Core and Blazor WebAssembly, and an SSRS project.
The ASP.NET Core app retrieves reports from the SSRS server, but the report parameters are written in C# and Blazor, and my problem is how to get the available values for these parameters.
For example, if a filter is about anesthetists, I want to display all the anesthetists' names in a combobox, but where do I get this information from?
I have two choices: either from the OLAP cube, using the AdoMdClientNetCore Visual Studio extension, or from the source database in SQL Server.
I would like to know whether there are any good practices on this subject; I googled here and there but found nothing relevant.
I would recommend getting the data from SSAS, for the following reasons:
Working structure of your project - Client project <-> SSRS <-> SSAS <-> Some DB. The Some DB data source is beyond the scope of the project. SSAS acts as a single point of contact with Some DB; if the client app accesses the DB directly, that creates another contact point which has to be configured, maintained, etc.
SSAS refreshes its data from its data sources in timed batches, during so-called "processing" jobs, unless you use the special ROLAP mode. This means some delay in information passing from the DB to SSAS. The report gets its data from SSAS, so reading directly from the DB could introduce inconsistencies in some rare cases.
Separation of concerns. SSAS accesses the DB with its own queries. If the client app accesses the DB as well, modifications made for SSAS have to be carried over to the client app, complicating development and support of the solution.
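For the parameter lookup itself, a small ADOMD.NET sketch could pull the available anesthetist names straight from the cube. The cube, dimension, and attribute names below are hypothetical and would need to match your model:

    // ADOMD.NET sketch: read the members of a (hypothetical) Anesthetist attribute
    // to populate a report-parameter combobox.
    using System.Collections.Generic;
    using Microsoft.AnalysisServices.AdomdClient;

    var names = new List<string>();
    using (var conn = new AdomdConnection("Data Source=MySsasServer;Catalog=HospitalCube"))
    {
        conn.Open();

        // Hypothetical dimension/attribute/cube names - adjust to your model.
        var mdx = @"SELECT {} ON COLUMNS,
                           [Anesthetist].[Name].[Name].MEMBERS ON ROWS
                    FROM [Hospital]";

        using (var cmd = new AdomdCommand(mdx, conn))
        using (var reader = cmd.ExecuteReader())
        {
            while (reader.Read())
                names.Add(reader.GetString(0));   // first column holds the member caption
        }
    }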

Processing OLAP cube using SQL agent and impersonation information

Yesterday I made some changes to an existing cube in SSAS. I added a new fact table to the Data Source View, which I linked to the appropriate dimension tables.
I then proceeded by opening up the mycube.cube [Design] tab and the section Cube Structure. From there I added a new measure group to the cube corresponding to the newly added fact table. I verified that the new fact table was implemented in the cube designer "scheme" and that the keys were correctly set.
I then saved the cube, expecting the SQL Agent to process it during the night (it is a rather extensive cube, so I avoid processing it during office hours).
This morning I see that the OLAP cube has been processed and that the SQL Agent's View History does not show any errors or warnings.
However, the cube does not have the newly added measure group. I performed the same steps on a test server earlier, and that worked without any trouble.
The only difference I can see is the impersonation information in the Data Source. My questions are therefore:
If I make changes as a user in SSAS without deploying the cube from within SSAS, and I am not the user specified under "Use a specific Windows user name and password" on the Impersonation Information tab of the Data Source, will the changes not be picked up by the SQL Agent?
Do I need to be the user which is stated as the Owner of the SQL agent task?
Regards,
Cenderze
First you have to make sure the edits you made to your cube have made it into the job steps. To do this, edit the job and check that the steps reflect your recent changes.
Then, to make sure the job executes as a particular user, you may have to set up a proxy account for the job. This proxy account needs the rights to read the sources and to write to the Analysis Services target.
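One way to confirm that the definition the Agent job processes actually contains the new measure group is a small AMO check against the deployed database; the database, cube, and measure group names here are placeholders:

    // AMO sketch: confirm the new measure group exists on the server the SQL Agent
    // job processes, and process it explicitly. All names are placeholders.
    using System;
    using Microsoft.AnalysisServices;

    var server = new Server();
    server.Connect("Data Source=ProdSsasServer");

    Database db = server.Databases.FindByName("MyOlapDb");
    Cube cube = db.Cubes.FindByName("mycube");
    MeasureGroup mg = cube.MeasureGroups.FindByName("New Fact");   // the newly added group

    if (mg == null)
    {
        // The change was never deployed, so the nightly job has nothing new to process.
        Console.WriteLine("Measure group not found - deploy the project first.");
    }
    else
    {
        mg.Process(ProcessType.ProcessFull);
    }

    server.Disconnect();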

Row Level Security (RLS) for an SSAS Tabular Model

I am new to SSAS technologies for developing analytical models. I have to build several tabular models for a huge application in which security is quite relevant. What I would like to do is re-use the row-level security that already exists in the cube's sources and apply it to the cube itself.
For example, if I build a tabular model from two tables of a schema, and these two tables have RLS enabled, I would like the cube to take this security into account, so that when I access reports and log in as a user, I only see aggregated data according to the permissions I have.
Searching through the web I found ways of implementing RLS within the cube, but none about inheriting it from the sources. But again, I am new to the technology, so I preferred to ask here.
Thanks
The most obvious solution to your request is running SSAS Tabular in thin mode (called DirectQuery mode). As long as you set ImpersonateCurrentUser in the Existing Connections dialog in Visual Studio, when a user queries the SSAS model, SSAS will in turn send one or more SQL queries to the database under the end user's credentials, and RLS in the SQL database will come into play.
One caveat is that I would only recommend DirectQuery in SSAS 2016, not prior versions. Another caveat is that performance will be slower than with a cached model in SSAS, so if performance isn't acceptable, turn off DirectQuery and reimplement RLS inside SSAS. Also, DirectQuery currently does no caching of results, so the load against SQL will not be offloaded to SSAS at all. Finally, if you use DirectQuery and ImpersonateCurrentUser, you may have to set up Kerberos if your SQL Server isn't on the same server as SSAS, so that user credentials can make the double hop.
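If you do end up reimplementing RLS inside SSAS, the Tabular Object Model lets you script a role with a DAX row filter. A hedged sketch, where the role, table, column, and group names are all placeholders for your own model:

    // TOM sketch: define a role whose table permission filters rows by the connected user.
    // Role, table, column, and member names are placeholders.
    using Microsoft.AnalysisServices.Tabular;

    var server = new Server();
    server.Connect("Data Source=MySsasServer");

    Model model = server.Databases.FindByName("MyTabularDb").Model;

    var role = new ModelRole { Name = "PerUserAccess", ModelPermission = ModelPermission.Read };
    role.TablePermissions.Add(new TablePermission
    {
        Table = model.Tables.Find("Patient"),
        // DAX filter: only rows whose AllowedUser column matches the current login.
        FilterExpression = "[AllowedUser] = USERNAME()"
    });
    role.Members.Add(new WindowsModelRoleMember { MemberName = @"DOMAIN\ReportUsers" });

    model.Roles.Add(role);
    model.SaveChanges();   // push the new role to the server
    server.Disconnect();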

Logging user access in SSAS and Power Pivot

Does anyone have an idea of how I can log user access in SSAS and Power Pivot?
The problem we face is that we need to keep track of who is accessing what. It is not enough to save the query, because the result set changes over time (SELECT * ... does not return the same result today as it did yesterday), so the whole result set, or parts of it, needs to be logged. I can imagine that this is possible to solve for reports created in SSAS, since it lives in a SQL Server instance. For Power Pivot and self-service BI, I am less sure how it should be done.
If your Power Pivot workbooks are stored in a BI-enabled SharePoint site, then those workbooks are effectively hosted on a regular SSAS Tabular server as well, so any logging you can put in place for an SSAS Tabular database you could probably adapt for Power Pivot workbooks. In the SharePoint SQL Server instance you will also find a database called DefaultPowerPivotServiceApplication[GUID]. In this database there is a table called [Usage].[Requests] that records who accessed what file and when.
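A quick sketch for inspecting that table from code; the server name and GUID suffix are placeholders, and the column set is left generic because it can vary by SharePoint version:

    // Sketch: dump recent rows from the Power Pivot usage table mentioned above.
    // Server/database names are placeholders; columns are printed generically on purpose.
    using System;
    using System.Data.SqlClient;

    var connStr = "Server=SharePointSql;Database=DefaultPowerPivotServiceApplication<GUID>;Integrated Security=true";
    using (var conn = new SqlConnection(connStr))
    using (var cmd = new SqlCommand("SELECT TOP (100) * FROM [Usage].[Requests]", conn))
    {
        conn.Open();
        using (var reader = cmd.ExecuteReader())
        {
            while (reader.Read())
            {
                // Print every column, since the schema is version-dependent.
                for (int i = 0; i < reader.FieldCount; i++)
                    Console.Write($"{reader.GetName(i)}={reader.GetValue(i)}  ");
                Console.WriteLine();
            }
        }
    }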

SSAS: Can I define a Cube In BIDS, but create/process the partitions in SSMS/SSIS?

I am new to SSAS and exploring partitioning. My data source is very large (web logs): a few hundred million records.
I would like to define my cube in BIDS and create an SSIS package to create the partitions. For now, I am generating the XMLA to create and process the partitions and executing it in SSMS.
I find that this is faster and less error-prone than manually creating the partitions using the BIDS UI.
I am trying to understand the expected workflow, because every time I go back to BIDS, make changes to measures, and process the cube, the partitions I created in SSMS are deleted and the old partitions I created in BIDS are re-deployed.
What I would like to be able to do is refresh my partitions from the server back into BIDS. Is this possible? If not, what type of workflow is expected in this case?
Thanks!
Partitions are part of the cube structure, and if you change them outside of BIDS, you change the structure of the deployed Analysis Services database. BIDS keeps the definition of the structure locally as a set of XML files; the partition definitions are contained in a file with the extension .partitions. If you deploy an Analysis Services project from BIDS, it updates the deployed Analysis Services database to match the structure of the local XML files, thus overwriting whatever you changed outside of BIDS.
You can get a deployed database back to an Analysis Services project as follows: Open BIDS, click File/New/Project, and then select "Import Analysis Services Database" from the "Business Intelligence Projects" project type, and select the directory where you want to save the project files locally in the bottom part of the dialog. As soon as you click OK, a wizard opens that allows you to select the server and Analysis Services database to get the definition from, and when you click Next, starts writing it to the directory that you selected.
Partitions are a part of your cube structure. You should create the partitions in BIDS and then process them based on your requirement using SSIS or SSMS.
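If you go the scripting/SSIS route, a hedged AMO sketch of creating and processing one partition against the deployed cube could look like the following; the database, cube, measure group, and source query are placeholders:

    // AMO sketch: add a partition to a deployed measure group and process it.
    // Database, cube, measure group, data source, and source query are placeholders.
    using Microsoft.AnalysisServices;

    var server = new Server();
    server.Connect("Data Source=MySsasServer");

    Database db = server.Databases.FindByName("WebLogsDb");
    Cube cube = db.Cubes.FindByName("WebLogs");
    MeasureGroup mg = cube.MeasureGroups.FindByName("Page Hits");

    Partition p = mg.Partitions.Add("Page Hits 2012-01");
    p.StorageMode = StorageMode.Molap;
    p.Source = new QueryBinding(
        db.DataSources[0].ID,
        "SELECT * FROM dbo.FactPageHits WHERE HitDate >= '20120101' AND HitDate < '20120201'");

    p.Update(UpdateOptions.ExpandFull);       // creates the partition on the server
    p.Process(ProcessType.ProcessFull);       // loads just this slice of the fact table

    server.Disconnect();

Bear in mind the caveat from the previous answer: partitions created this way live only on the server, so redeploying the project from BIDS will overwrite them unless you re-import the database definition first.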