SSAS cube Deployment

I was trying my hand at building cubes using the AdventureWorksOlap database. I successfully built what I was trying to do. Now my concern is that I want to deploy the cube to a server so that the rest of the team can use this cube as a data source when generating their SSRS reports (or possibly with other tools).
I have heard that SSAS does not allow SQL authentication. So,
1) how will the members access the cube?
2) What authentication changes do I need to incorporate?
3) How can another developer, using SSMS on his own computer, access the cube and make changes to it (just like we can do with the OLTP database)?
4) I need to prepare a dashboard using this cube. Any suggestions on this one?
Thanks in advance.

1) Windows Authentication
2) None.
3) Once the cube is deployed you can't change everything. You can change some things, like partitions and roles, but you can't add a dimension, for example. For that you need to change the project in BIDS and redeploy it. A small example of the kind of change that is allowed post-deployment is sketched below.
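To illustrate point 3, here is a minimal AMO sketch (not from the original answer) of a change that is allowed against an already-deployed multidimensional database, such as adding a member to an existing role; the server, database, and role names are placeholders.

    // Hypothetical sketch: add a Windows group to an existing role of a deployed
    // multidimensional database via AMO. All names below are placeholders.
    using Microsoft.AnalysisServices;

    class GrantCubeAccess
    {
        static void Main()
        {
            var server = new Server();
            server.Connect("Data Source=localhost");                  // SSAS instance

            Database db = server.Databases.FindByName("AdventureWorksOlap");
            Role readers = db.Roles.FindByName("Readers");            // role defined in the BIDS project

            readers.Members.Add(new RoleMember(@"CONTOSO\ReportUsers"));
            readers.Update();                                         // push the change to the server

            server.Disconnect();
        }
    }

Structural changes such as new dimensions still go through the BIDS project and a full redeploy, as described above.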

I would recommend starting with Excel Pivot Tables to learn what type of dashboard you will want to create. By working with the end-users, you can understand what information they want/need to see.
Regarding security, as mentioned, by design cubes use Windows auth only. Here's a blog that talks about circumventing standard security.
Also, I have posted a series of videos on how to create OLAP cubes SSAS in SQL Server 2008 using BIDS. You may find this series helpful.

Related

SSAS: DirectQuery to SQL database using the current user

I want to define all the access rights in my SQL Server database so that it is nice and centralized. I am implementing basic things like GRANT SELECT on schemas and tables, encryption of columns, and RLS.
On top of the database I build a Tabular model in SSAS with a DirectQuery connection.
On top of the Tabular model I want to build a report with a DirectQuery connection.
The DirectQuery documentation states:
"Security can be enforced by the back-end source database by using row-level security features from the database."
The documentation on impersonation in Analysis Services Tabular states:
"Impersonate Current User: Specifies data should be accessed from the data source using the identity of the user who sent the request. This setting applies only to DirectQuery mode."
Issue:
I cannot choose "Impersonate Current User" as the impersonation mode in my SSAS Tabular model; it fails with "The datasource contains an ImpersonationMode that is not supported for processing operations".
Changing the impersonation mode in SSMS yields this error; VS 2019 shows a similar message with the same content.
I can deploy it as a specific user, but that means everybody uses the access rights of that specified user.
My tabular model uses compatibility level 1400. It is deployed to a Microsoft Analysis Server 15.0.32.50, Tabular Mode. (The model cannot use DirectQuery when in compatibility 1500 for some arcane reason. Please don't make this your topic unless you absolutely have to.)
SQL Server Version is 2019, 15.0.2000.5
The on-premise Report Server must be used.
SSAS, database and report server run on the same SQL Server.
Is it possible to implement this solution with the database, SSAS, and Report Server on the same machine? If so, how?
Alright, after messing around with this for far too long, I narrowed it down to the SQL Server setup; something in that configuration was causing a bunch of issues.
Using DirectQuery to pass down user information in the way described above is perfectly valid.
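For reference, here is a minimal sketch (not from the original answer) of switching a deployed 1400-compatibility model's data source to "Impersonate Current User" through the Tabular Object Model; the instance, database, and data source names are placeholders, and it assumes the underlying SQL Server/Kerberos configuration is healthy, which was the real culprit above.

    // Hypothetical sketch: set a deployed Tabular model's data source to
    // ImpersonateCurrentUser using the Tabular Object Model (TOM).
    // Instance, database and data source names are placeholders.
    using Microsoft.AnalysisServices.Tabular;

    class SetImpersonation
    {
        static void Main()
        {
            var server = new Server();
            server.Connect("Data Source=localhost");                   // SSAS Tabular instance

            Model model = server.Databases["MyTabularModel"].Model;
            var source = (ProviderDataSource)model.DataSources["SqlServer MyDb"];

            source.ImpersonationMode = ImpersonationMode.ImpersonateCurrentUser;
            model.SaveChanges();                                        // commit the metadata change

            server.Disconnect();
        }
    }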

ASP.NET Core: get data from SQL Server or from SSAS OLAP cube?

I have an application that encompasses an SSAS project (with an OLAP cube), a client project using ASP.NET Core and Blazor WebAssembly, and an SSRS project.
The ASP.NET Core app retrieves reports from the SSRS server, but the report parameters are handled in C# and Blazor, and my problem is how to get the available values for these parameters.
For example, if a filter is about anesthesists, I want to display all the anesthesists' names in a combobox, but where do I get this information from?
I have 2 choices: either from the OLAP cube, using the AdoMdClientNetCore Visual Studio extension, or from the source database in SQL Server.
I would like to know if there are any good practices concerning this subject; I googled here and there but without finding relevant results.
I would recommend getting the data from SSAS (a small ADOMD.NET sketch follows these points), for these reasons:
Working structure of your project: client project <-> SSRS <-> SSAS <-> some DB, where the DB data source is beyond the scope of the project. SSAS acts as the single point of contact with the DB; if the client app accesses the DB as well, that creates another contact point which has to be configured, maintained, etc.
SSAS updates its data by reading from its data sources in timed batches, during so-called "processing" jobs, unless you use the special ROLAP mode. This means some delay in information passing from the DB to SSAS. The report gets its data from SSAS, so reading directly from the DB could introduce inconsistency in some rare cases.
Separation of concerns. SSAS accesses the DB with certain queries. If the client app accesses the DB as well, modifications made to SSAS would have to be mirrored in the client app, complicating development and support of the solution.
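For illustration only, this is one way the client could read those parameter values from the cube with ADOMD.NET; the server, catalog, cube, and dimension names are invented for the example.

    // Hypothetical sketch: fill a report-parameter combobox from the cube via ADOMD.NET.
    // Server, catalog, cube and dimension/attribute names are placeholders.
    using System.Collections.Generic;
    using Microsoft.AnalysisServices.AdomdClient;

    static class AnesthesistLookup
    {
        public static List<string> GetAnesthesists()
        {
            var names = new List<string>();

            using (var connection = new AdomdConnection("Data Source=localhost;Catalog=HospitalOlap"))
            {
                connection.Open();

                // Empty set on columns, one attribute hierarchy on rows:
                // the flattened rowset returns one row per member.
                var command = new AdomdCommand(
                    "SELECT {} ON COLUMNS, [Anesthesist].[Name].[Name].MEMBERS ON ROWS FROM [HospitalCube]",
                    connection);

                using (AdomdDataReader reader = command.ExecuteReader())
                {
                    while (reader.Read())
                        names.Add(reader.GetString(0));   // member caption
                }
            }

            return names;
        }
    }

The same list could of course be selected from the SQL source, but then the client app becomes a second consumer of the DB, as described above.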

Row Level Security (RLS) for a SSAS Tabular Model

I am new to SSAS technologies for developing analytical models. I have to build several tabular models for a huge application in which security is quite relevant. What I would like to do is to re-use the row level security existing in the sources of the cube and apply it to the cube itself.
For example, if I build a tabular model from two tables of a schema, and these two tables have RLS enabled, I would like the cube to take this security into account, so that when I access reports and log in as a user, I only see aggregated data according to the permissions I have.
Searching the web I found ways of implementing RLS within the cube, but nothing about inheriting it from the sources. But again, I am new to the technology, so I preferred to ask here.
Thanks
The most obvious solution to your request is running SSAS Tabular in thin mode (called DirectQuery mode). As long as in the Existing Connections dialog in Visual Studio you set ImpersonateCurrentUser, when a user queries the SSAS model, SSAS will in turn send one or more SQL queries to the database under the end user's credentials. RLS in the SQL database will come into play here.
One caveat is that I would only recommend DirectQuery in SSAS 2016, not in prior versions. Another caveat is that performance will be slow compared to a cached model in SSAS, so if performance isn't acceptable, turn off DirectQuery and reimplement RLS inside SSAS (sketched below). Also, DirectQuery currently does zero caching of results, so the load against SQL will not be offloaded to SSAS at all. Finally, if you use DirectQuery and ImpersonateCurrentUser, you may have to set up Kerberos if your SQL Server isn't on the same server as SSAS, so that user credentials can double-hop.
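If you do end up reimplementing RLS inside SSAS rather than relying on the database, a hedged sketch of what that could look like through the Tabular Object Model is below; the role, table, column, and user-mapping table names are invented for the example, and the same role and DAX filter can equally be authored directly in the Visual Studio project.

    // Hypothetical sketch: define row-level security inside the Tabular model itself,
    // using a role whose table permission filters rows by the connected Windows user.
    // Role, table, column and mapping-table names are placeholders; assumes a
    // UserRegion table mapping user names to the regions they may see.
    using Microsoft.AnalysisServices.Tabular;

    class AddRlsRole
    {
        static void Main()
        {
            var server = new Server();
            server.Connect("Data Source=localhost");
            Model model = server.Databases["SalesModel"].Model;

            var role = new ModelRole { Name = "RegionalReaders", ModelPermission = ModelPermission.Read };
            role.Members.Add(new WindowsModelRoleMember { MemberName = @"CONTOSO\ReportUsers" });

            // DAX row filter: each user only sees rows for the regions mapped to them.
            role.TablePermissions.Add(new TablePermission
            {
                Table = model.Tables["Sales"],
                FilterExpression =
                    "Sales[Region] IN CALCULATETABLE(VALUES(UserRegion[Region]), UserRegion[UserName] = USERPRINCIPALNAME())"
            });

            model.Roles.Add(role);
            model.SaveChanges();
            server.Disconnect();
        }
    }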

Update Dimensions/Levels/Measures programmatically

Summary: I'm involved in a project that requires us to update/upgrade an existing cube programmatically. Is this even possible (apart from using AMO)?
Details: We have a cube that is deployed to all client environments via an installer. As we continue development, we make changes to the cube, like changes to calculated measures, adding a new level to a dimension, or editing an existing level/measure. We need to deploy these changes to client environments in the form of updates.
These environments are not directly accessible to us, nor do they have BIDS installed, meaning we can't use BIDS to make the changes and deploy them to the production environment. Hence the requirement for a script (or scripts) to accomplish this.
Is there an approach that enables us to release these updates to the cube programmatically (not via AMO)? For example, a reprocess of a cube can be triggered in the form of an XMLA statement.
We also need to be considerate of any customizations the client may have made (like adding measures or levels to a given dimension) and preserve them.
Please let me know if I have explained the issue clearly.
Thanks
Srikanth
Instead of AMO, you can also directly issue XMLA ALTER statements. Actually, AMO converts everything to low-level XMLA as well, which is then sent to the Analysis Services server. However, the official documentation of the XMLA ALTER statement at http://msdn.microsoft.com/en-us/library/ms186630.aspx is difficult to read. It is easier to capture the XMLA statements resulting from the AMO calls that BIDS issues when you click Deploy. You can do this via SQL Server Profiler, as documented here: http://technet.microsoft.com/en-us/library/ms174946.aspx. A small sketch of submitting such a captured script is below.
And as soon as you have more than a few trivial changes, it may be much easier to re-deploy the complete Analysis Services database instead of capturing just the changes and trying to create ALTER statements.
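For illustration, one way to push a captured or hand-written XMLA script to a client instance from code is sketched below. The asker wanted to avoid AMO; here AMO is used purely as a transport for the XMLA text (the instance name and file path are placeholders), and the same script could also be run from SSMS or the ascmd.exe command-line utility.

    // Hypothetical sketch: submit a captured XMLA Alter/Process script to an SSAS instance.
    // The instance name and file path are placeholders; AMO's Server.Execute is only
    // used as a transport for the XMLA text.
    using System;
    using System.IO;
    using Microsoft.AnalysisServices;

    class DeployCubeChanges
    {
        static void Main()
        {
            string xmlaScript = File.ReadAllText(@"C:\updates\AlterCube.xmla");

            var server = new Server();
            server.Connect("Data Source=CLIENTSSAS01");

            XmlaResultCollection results = server.Execute(xmlaScript);

            // Surface any warnings or errors returned by the server.
            foreach (XmlaResult result in results)
                foreach (XmlaMessage message in result.Messages)
                    Console.WriteLine(message.Description);

            server.Disconnect();
        }
    }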

The Pentaho BI Platform Workflow Issue

I have been working with Pentaho for the last few days. I have been able to set up Pentaho Report Designer to generate a sample report by following the documentation. Then I followed this article http://www.robertomarchetto.com/www/how_to_use_pentaho_report_designer_tutorial and managed to export the report to the Pentaho BI Server.
What I don't understand is the Pentaho workflow. What process should I follow? In other words, what is the purpose of exporting the report to the Pentaho BI Server? Why is there a Data Integration tool? Why is there a BI Server when I can export the report from the Designer tool?
Requirement
All I want to do is retrieve the data from the MySQL DB, put it into a data mart, and then generate a report from the data mart. (According to what I have read, creating a data mart is the efficient way.)
How can I get it done?
Pentaho Data Integration can be used to automate this report generation.
In Report Designer you pass a parameter or set of parameters to generate a single report output.
With Data Integration you can generate the reports for different sets of parameters. For example, if reports are generated on a daily basis, you can automate them for the whole month, so that there is no need to generate reports daily and manually.
And using the Pentaho Business Intelligence Server, all of these operations can be scheduled.
To generate data/tables (fact tables/dimension tables) in the MySQL DB from different sources like files or other DBs - the Data Integration tool comes into the picture.
To create a schema on top of the fact tables - the Mondrian tool.
To handle users/roles on top of the created cubes - Metadata Editor.
To create simple reports on top of small tables - Report Designer.
For sequential execution (in one go) of DI jobs/transformations, reports, and JavaScript - Design Studio.
thanks to user surya.thanuri # forums.pentaho.com
The Data Integration tool is mostly for ETL; it's a separate tool, and you can ignore it unless you are doing complex analysis of data from multiple dissimilar data sources. You don't need to 'export' reports to the Pentaho server; you can write them directly to a directory and then refresh the repository from inside the Pentaho web application. Exporting them is just one workflow technique.
You're going to find that there are about a dozen ways to do any one thing with Pentaho. For instance, I use CDA data sources with my reports instead of placing the SQL code inside the report. Alternatively, you can link up to a Data Integration server to execute Data Integration scripts and view a result set.
Just to answer your data mart question: in general, a data mart should probably be supported by either the Data Integration tool (depending on your situation, I don't exactly recommend this) or database functions/replication streams (recommended).
Just to hazard a guess, it sounds like someone tossed you a project saying: We need a BI system, here's the database where the data is stored, here are the reports we're already getting. X looked at Pentaho and liked it. You should use that.
First thing you need to do is understand the shape of the data: volume, tables, interrelations. Figure out what the real questions they want to answer are. Determine whether they need real-time reporting, etc. Just getting the data mart together, if you even need one, can take quite a while. I think you may have jumped the gun on Pentaho itself.
thanks to user flamierd # forums.pentaho.com