How to restrict cube access on Pentaho BI Server 6

I have been desperately trying to find out how to restrict Mondrian cube access on Pentaho BI Server 6, and I have been looking for answers everywhere. Note that my schema file was developed manually (without using the Workbench tool).

To the best of my knowledge, you have to use the Schema Workbench tool or edit the Mondrian .xml file manually to restrict cube access to certain pre-existing work groups.
I used this resource from Pentaho to figure out how to create roles to restrict cube access: Pentaho Mondrian Access Control
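For reference, here is a minimal sketch of what role-based access control can look like inside a hand-written Mondrian schema file. The schema, cube, and role names below are hypothetical; the Role elements go after the cube definitions in the same .xml file:

    <Schema name="SalesSchema">
      <!-- Cube, Dimension and Measure definitions go here, as in your existing schema -->
      <Cube name="Sales"> ... </Cube>

      <!-- Deny everything by default, then grant access to a single cube -->
      <Role name="SalesViewer">
        <SchemaGrant access="none">
          <CubeGrant cube="Sales" access="all"/>
        </SchemaGrant>
      </Role>

      <!-- A role with unrestricted access -->
      <Role name="Admin">
        <SchemaGrant access="all"/>
      </Role>
    </Schema>

On the BI server, each Mondrian role then has to be mapped to a platform user group, typically through one of the Mondrian user/role mappers configured in the server's security settings, so members of a group only see the cubes their role grants.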

Related

Best way to give Azure SQL permissions to Power BI?

I'd like to allow a Power BI report to access a single Azure SQL database in a way that allows for cleaner deployment/replication across multiple products. As of now, I manually provide the reports with a read-only SQL login, but having to do this each time a new report is created would be sub-optimal.
Is there any way to integrate Power BI with Azure's MSI, or anything of the sort, to allow for smoother deployment?
You can connect to the Azure SQL database through the Power BI online service and then publish this as a 'content pack'.
Then you can use the Power BI Service Connector to access the dataset without credentials.

I cannot find my published cube in Pentaho User Console

I have a problem finding my cubes. I created my data warehouse with Pentaho in a PostgreSQL database, then built my cubes with Schema Workbench just fine and published them successfully, or at least that is what the tool reports when I publish. However, when I go to my User Console, refresh the Mondrian cache, and open Saiku, I just cannot find my cubes. This should not happen, since I followed the exact steps of some tutorials, yet I cannot find my cubes.
I believe that this problem is linked to the versions of the technologies that I am using, so here are the versions of each:
Pentaho BI server: 6
Schema Workbench: 3.12.0.1-196
Please advise whether or not they are compatible.

The Pentaho BI Platform Workflow Issue

I have been working with Pentaho for the last few days. I have been able to set up the Pentaho Report Designer to generate a sample report by following their documentation. Then I followed this article http://www.robertomarchetto.com/www/how_to_use_pentaho_report_designer_tutorial and managed to export the report to the Pentaho BI server.
What I don't understand is the Pentaho workflow. What process should I follow, i.e. what is the purpose of exporting the report to the Pentaho BI server? Why is there a Data Integration tool? Why is there a BI server when I can export the report from the Designer tool?
Requirement
All I want to do is retrieve the data from the MySQL DB, put it into a data mart, and then generate a report from the data mart. (According to what I have read, creating a data mart is the efficient way.)
How can I get it done?
Pentaho Data Integration can be used to automate this report generation.
In Report Designer you pass a parameter or set of parameters to generate a single report output.
With Data Integration you can generate the reports for different sets of parameters. For example, if reports are generated on a daily basis, you can automate them for the whole month, so there is no need to generate reports daily and manually.
And using the Pentaho Business Intelligence server, all of these operations can be scheduled.
To generate data/tables (fact tables/dimension tables) in the MySQL DB from different sources such as files or other databases - the Data Integration tool comes into the picture.
To create a schema on top of the fact tables - the Mondrian tool.
To handle users/roles on top of the created cubes - the Metadata Editor.
To create simple reports on top of small tables - the Report Designer.
For sequential execution (in one go) of DI jobs/transformations, reports, and JavaScript - Design Studio.
thanks to user surya.thanuri # forums.pentaho.com
The Data Integration tool is mostly for ETL; it's a separate tool and you can ignore it unless you are doing complex analysis of data from multiple dissimilar data sources. You don't need to 'export' reports to the Pentaho server; you can write them directly to a directory and then refresh the repository from inside the Pentaho web application. Exporting them is just one workflow technique.
You're going to find that there are about a dozen ways to do any one thing with Pentaho. For instance, I use CDA datasources with my reports instead of placing the SQL code inside my report. Alternatively, you can link up to a Data Integration server to execute Data Integration scripts and view a result set.
Just to answer your data mart question: in general, a data mart should probably be supported by either the Data Integration tool (depending on your situation, I don't exactly recommend this) or database functions/replication streams (recommended).
Just to hazard a guess, it sounds like someone tossed you a project saying: We need a BI system, here's the database where the data is stored, here are the reports we're already getting. X looked at Pentaho and liked it. You should use that.
First thing you need to do is understand the shape of the data: volume, tables, interrelations. Figure out what the real questions they want to answer are. Determine whether they need real-time reporting, and so on. Just getting the data mart together itself, if you even need one, can take quite a while. I think you may have jumped the gun on Pentaho itself.
thanks to user flamierd # forums.pentaho.com

SSAS cube Deployment

I was trying my hand at building cubes using the AdventureWorksOlap database. I successfully built what I was trying to do. Now my concern is that I want to deploy the cube to a server so that the rest of the team members can use this cube as a data source while generating their SSRS reports (or possibly with some other tools).
I have heard that SSAS does not allow SQL authentication. So,
1) how will the members access the cube?
2) What authentication changes do I need to incorporate?
3) How can the other developer using his computer's SSMS access the cube and make changes to it (just like we can do it in the OLTP database)?
4) I need to prepare a dashboard using this cube. Any suggestions on this one?
Thanks in advance.
1) Windows Authentication
2) none.
3) Once the cube is deployed you can't change it. Actually, you can change some things like partitions and roles, but you can't add a dimension, for example. You need to change the project in BIDS and redeploy it.
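To make 1) concrete: clients reach the deployed cube with Windows authentication, usually with nothing more than a server name and catalog in the connection string. As a rough, hypothetical sketch (server and database names are made up), an SSRS shared data source (.rds) pointing at the deployed cube could look roughly like this:

    <?xml version="1.0" encoding="utf-8"?>
    <RptDataSource Name="AdventureWorksOlapCube">
      <ConnectionProperties>
        <!-- Analysis Services provider; no user/password because Windows auth is used -->
        <Extension>OLAP</Extension>
        <ConnectString>Data Source=MyOlapServer;Initial Catalog=AdventureWorksOlap</ConnectString>
        <IntegratedSecurity>true</IntegratedSecurity>
      </ConnectionProperties>
    </RptDataSource>

Access is then controlled by the roles you define in the SSAS project: add the team members' Windows accounts (or an AD group) to a role with read access and redeploy.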
I would recommend starting with Excel Pivot Tables to learn what type of dashboard you will want to create. By working with the end-users, you can understand what information they want/need to see.
Regarding security, as mentioned, by design cubes use Windows auth only. Here's a blog that talks about circumventing standard security.
Also, I have posted a series of videos on how to create OLAP cubes SSAS in SQL Server 2008 using BIDS. You may find this series helpful.

Pentaho Mondrian: Mondrian schema .xml vs Pentaho metadata domain .xmi vs CDA .cda files

I have been exploring the Pentaho ecosystem. Please forgive any naive things in the question.
There are a couple of things about these config files (containing domain names and their mappings, etc.) that I can't seem to put my finger on.
So, if you use Mondrian directly, you set up these XML config files.
Now, suppose I use the Pentaho BI server instead of just plain Mondrian; then there are these metadata domain .xmi files in the solution repository.
Q1) Do these Pentaho metadata domain .xmi files obviate the need for the Mondrian schema .xml files?
Now, CDA (Community Data Access) also looks interesting. If I install this plugin there would be .cda config files in the solution repository. The .cda files contain both connection and domain mapping details.
Q2) Do these .cda files obviate the need for the two config files discussed in Q1?
Q3) Suppose I want to use olap4j to write an MDX query to the Pentaho BI server referencing a .cda file. Does that question make sense?
thanks
XMI files are purely for the ad hoc reporting wizard - nothing to do with analysis/OLAP or Mondrian.
The mondrian.xml files are the Mondrian schema files that allow you to use the OLAP engine. Whether you use OLAP in the BI server or not, you'll need a schema file to use Mondrian.
CDA files are a buffer between the underlying data source and the dashboard front end. Again if you want to use mondrian/olap underneath (which you will want to with a dashboard) then you need a mondrian schema first. CDA can cache too which is neat. CDA can access virtually any data source because it can also use Kettle/PDI as a data source - and that can read anything.
You can put an MDX query into CDA, so there's no need to use olap4j. CDA actually uses the PRD libraries to talk to Mondrian - it's all nicely incestuous! :) If you want to use the results of a query in your own app/front end, then CDA returns a JSON dataset which you can play with.
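As a rough sketch (the catalog, JNDI name, cube, and query below are placeholders based on the SteelWheels sample), a .cda file with a Mondrian connection and an MDX data access could look something like this:

    <?xml version="1.0" encoding="UTF-8"?>
    <CDADescriptor>
      <DataSources>
        <!-- Mondrian catalog reached through a JNDI data source defined on the BI server -->
        <Connection id="1" type="mondrian.jndi">
          <Catalog>mondrian:/SteelWheels</Catalog>
          <Jndi>SampleData</Jndi>
        </Connection>
      </DataSources>
      <!-- The MDX query CDA will execute and expose (for example as JSON) -->
      <DataAccess id="salesByLine" connection="1" type="mdx" access="public">
        <Query>
          SELECT [Measures].[Sales] ON COLUMNS,
                 [Product].[Line].Members ON ROWS
          FROM [SteelWheelsSales]
        </Query>
      </DataAccess>
    </CDADescriptor>

Calling the CDA plugin's doQuery endpoint with this file and the data access id then returns the result set (JSON or other output formats) for your own front end.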
Alternatively, look at Saiku - that's geared up for providing an easy way to access data from Mondrian for user interface developers.
Finally, you won't get many Pentaho answers on here - the forum or the IRC channel is a much better place to go with questions like these!