I have a data catalog where people can browse the DWH tables. People can select tables and send a request to the IT team, in the form of a table listing all the table names and column names that a person wants.
In the current situation, my team has to manually create an SSAS Tabular Model with the requested tables in Visual Studio: create a new model, connect to the DWH, select the requested tables and columns, assign user access, and deploy the model to the Analysis Services instance.
My question: does somebody know a way to automate this process? Is it possible to create Tabular Models automatically with scripts? I've come across the Tabular Model Scripting Language (TMSL), but it's unclear from the documentation whether it can create NEW tabular models; it seems to only script and make changes to already existing models.
Any suggestion or guidance will be appreciated; thanks in advance.
Michael Kovalsky has a great solution for this on GitHub. See https://github.com/m-kovalsky/ModelAutoBuild. You start with an Excel template and then use scripting in the Tabular Editor tool to create the model. It may not have all the elements you need in your models, but it is a great starting point.
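For a fully scripted pipeline, it is also worth knowing that the Tabular Object Model (TOM, the Microsoft.AnalysisServices.Tabular library that Tabular Editor itself wraps) can create a brand-new database from code, not just modify existing ones. A minimal sketch, where the instance name, DWH connection string, table, column, and role member names are all placeholder assumptions:

using Microsoft.AnalysisServices.Tabular;

class CreateRequestedModel
{
    static void Main()
    {
        var server = new Server();
        server.Connect(@"Data Source=localhost\TABULAR"); // placeholder instance

        var database = new Database("CatalogRequest")
        {
            CompatibilityLevel = 1200,
            StorageEngineUsed = Microsoft.AnalysisServices.StorageEngineUsed.TabularMetadata,
            Model = new Model { Name = "CatalogRequest" }
        };

        // Connect the model to the DWH (legacy provider data source for brevity)
        database.Model.DataSources.Add(new ProviderDataSource
        {
            Name = "DWH",
            ConnectionString = "Provider=SQLNCLI11;Data Source=dwh;Initial Catalog=DWH;Integrated Security=SSPI",
            ImpersonationMode = ImpersonationMode.ImpersonateServiceAccount
        });

        // One table per row of the request; drive this loop from the catalog's output
        var table = new Table { Name = "DimCustomer" };
        table.Partitions.Add(new Partition
        {
            Name = "DimCustomer",
            Source = new QueryPartitionSource
            {
                DataSource = database.Model.DataSources["DWH"],
                Query = "SELECT CustomerKey, CustomerName FROM dbo.DimCustomer"
            }
        });
        table.Columns.Add(new DataColumn { Name = "CustomerKey", DataType = DataType.Int64, SourceColumn = "CustomerKey" });
        table.Columns.Add(new DataColumn { Name = "CustomerName", DataType = DataType.String, SourceColumn = "CustomerName" });
        database.Model.Tables.Add(table);

        // Assign user access through a read role
        var role = new ModelRole { Name = "Readers", ModelPermission = ModelPermission.Read };
        role.Members.Add(new WindowsModelRoleMember { MemberName = @"DOMAIN\jdoe" });
        database.Model.Roles.Add(role);

        // Deploy the metadata to the server, then process the data
        server.Databases.Add(database);
        database.Update(Microsoft.AnalysisServices.UpdateOptions.ExpandFull);
        database.Model.RequestRefresh(RefreshType.Full);
        database.Model.SaveChanges();
    }
}

These are the same objects that TMSL's createOrReplace command serializes, so generating a TMSL script from the request table and running it through SSMS or Invoke-ASCmd is an equivalent route; createOrReplace can create a brand-new database, which answers the doubt in the question.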
I have a tabular model built in Analysis Services, and I would like to know whether it is possible to export documentation covering my data sources, tables, roles, relationships, and measures.
Also, I would like to know the best way to document an SSAS cube. If anyone could send me sample documentation, I would use it as a base to build my own.
Thanks!
I would start with DAX Studio (https://daxstudio.org/), which lets you run queries against the metadata of the cube (the DMVs) and pull out measures and other information. Another good tool to use is Tabular Editor (https://tabulareditor.com/).
In DAX Studio, you would run a query such as

select * from $system.MDSCHEMA_MEASURES

which will list the measures, their definitions, and other metadata about them. You can then query other DMVs to get the relationships, table definitions, etc.
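For a tabular model, the newer TMSCHEMA DMVs (compatibility level 1200 and up) expose the rest of the design the same way, for example:

select * from $system.TMSCHEMA_TABLES
select * from $system.TMSCHEMA_RELATIONSHIPS
select * from $system.TMSCHEMA_ROLES

Dumping the results of a handful of these into a spreadsheet gets you most of the documentation the question asks for.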
I've got a nice SSAS tabular model with users processing away. Certain users need access to certain information, such as confidential info (e.g., SS numbers), that should not be visible to everyone. How should I handle this?
From what I can tell, there is no way to use roles to remove columns, only rows. Is my only option to make a copy of the model and maintain both? This can't be such an edge case...
I guess I could jury-rig something with an SCM fork and code generation, but I'd rather not go down that road.
Alternatively, is there any way to hide the columns (per user/role), so that at least they don't show up in client tools?
One approach that requires very little additional development is described in the following blog post: http://blog.westmonroepartners.com/a-workaround-for-column-security-in-the-sql-server-analysis-services-bism-tabular-model/
The blog contains a link to an SSIS package which replicates an existing cube, minus the sensitive data columns. Users who should not see the sensitive data can then be given access to the second cube.
One way to achieve this is to create perspectives. You can create different perspectives for different groups of users, and end users can connect to their specific perspective. Note that perspectives only hide columns; they do not secure them, so this addresses the "don't show up in client tools" part of the question rather than true column security.
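If you go the perspective route, it can be scripted too. A small sketch as a Tabular Editor advanced script (C# against its wrapper object model); the perspective name and the SSN column are made-up examples:

// Create a perspective and include every column in it except the confidential one
var perspective = Model.AddPerspective("NoSensitiveData");
foreach (var c in Model.AllColumns)
{
    c.InPerspective["NoSensitiveData"] = c.Name != "SSN";
}

Again, a determined user can still query a hidden column, so pair this with one of the approaches above if you need real security.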
I have been working with Pentaho for the last few days. I have been able to set up Pentaho Report Designer and generate a sample report by following the documentation. Then I followed this article http://www.robertomarchetto.com/www/how_to_use_pentaho_report_designer_tutorial and managed to publish the report to the Pentaho BI server.
What I don't understand is the Pentaho workflow. What process should I follow, i.e., what is the purpose of publishing the report to the Pentaho BI server? Why is there a Data Integration tool? Why is there a BI server when I can produce the report from the Designer tool?
Requirement
All I want to do is retrieve the data from the MySQL DB, put it into a data mart, and then generate a report from the data mart. (According to what I have read, creating a data mart is the efficient way.)
How can I get it done?
Pentaho Data Integration can be used to automate this report generation.
In Report Designer, you pass a parameter or set of parameters to generate a single report output.
With Data Integration, you can generate the reports for different sets of parameters. For example, if reports are generated on a daily basis, you can automate them for the whole month, so there is no need to generate each day's report manually.
And using the Pentaho Business Intelligence server, all of these operations can be scheduled.
To generate data/tables (fact tables/dimension tables) in the MySQL DB from different sources such as files or other DBs: the Data Integration tool.
To create a schema on top of the fact tables: the Mondrian tool.
To handle users/roles on top of the created cubes: the Metadata Editor.
To create simple reports on top of small tables: Report Designer.
For sequential execution (in one go) of DI jobs/transformations, reports, and JavaScript: Design Studio.
thanks to user surya.thanuri # forums.pentaho.com
The Data Integration tool is mostly for ETL; it's a separate tool, and you can ignore it unless you are doing complex analysis of data from multiple dissimilar data sources. You don't need to 'export' reports to the Pentaho server; you can write them directly to a directory and then refresh the repository from inside the Pentaho web application. Exporting them is just one workflow technique.
You're going to find that there are about a dozen ways to do any one thing with Pentaho. For instance, I use CDA data sources with my reports instead of placing the SQL code inside the report. Alternatively, you can link up to a Data Integration server and execute Data Integration scripts to produce a result set.
Just to answer your datamart question: in general, a datamart should be fed either by the Data Integration tool (depending on your situation, I don't exactly recommend this) or by database functions/replication streams (recommended).
Just to hazard a guess, it sounds like someone tossed you a project saying: We need a BI system, here's the database where the data is stored, here are the reports we're already getting. X looked at Pentaho and liked it. You should use that.
The first thing you need to do is understand the shape of the data: volume, tables, interrelations. Figure out what the real questions they want answered are. Determine whether they need real-time reporting, etc. Just getting the datamart together, if you even need one, can take quite a while. I think you may have jumped the gun on Pentaho itself.
thanks to user flamierd # forums.pentaho.com
I'm trying out Microsoft Master Data Services and I would like to add data to the database programmatically. I'm starting to understand the model/entity/member structure, but I'm not sure I have it right yet. If you have a good explanation of this structure, please share.
Say somebody added a new employee in an ERP system and I would like to send that to MDS. How would I do that? Is the data that I want to add a new member? Because if I look at the following information (http://technet.microsoft.com/en-us/library/hh230995), it seems the only way to import data is through entities?
Thanks in advance for any useful information about this!
Let's start with the basics.
Entities in Master Data Services (MDS) are roughly analogous to tables in a regular database.
Every entity must live in a model.
A model can contain any number of entities.
The Metadata* methods you see on that page can be used to create, read and update models and entities. Once you have modeled your ERP tables as an MDS model, you can use the EntityMembersCreate API (with the relevant model/entity information) to create a member (roughly analogous to a row in a table). You can use EntityMembersUpdate to update members and EntityMembersDelete to delete them.
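For the new-employee scenario from the question, the call ends up looking roughly like this. A sketch assuming a WCF service reference generated from the MDS service endpoint; the model, entity, and member values are invented:

using System.Collections.ObjectModel;

// clientProxy is a ServiceClient built from the MDS WCF endpoint
var createRequest = new EntityMembersCreateRequest();
createRequest.Members = new EntityMembers
{
    ModelId = new Identifier { Name = "HR" },          // the model
    VersionId = new Identifier { Name = "VERSION_1" }, // the model version
    EntityId = new Identifier { Name = "Employee" },   // the entity
    MemberType = MemberType.Leaf,
    Members = new Collection<Member>()
};

// The new employee becomes a leaf member (a "row") of the Employee entity
createRequest.Members.Members.Add(new Member
{
    MemberId = new MemberIdentifier
    {
        Code = "EMP001",   // business key coming from the ERP
        Name = "Jane Doe",
        MemberType = MemberType.Leaf
    }
});

EntityMembersCreateResponse createResponse = clientProxy.EntityMembersCreate(createRequest);
// Inspect createResponse.OperationResult for validation errors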
Another way to get large amounts of data into MDS is Entity Based Staging, which allows you to use tools like SSIS to load bulk data. A good primer is available here: http://msdn.microsoft.com/en-us/sqlserver/hh802433.aspx.
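If you want to see what Entity Based Staging amounts to without SSIS: you insert rows into the entity's staging table, then start the staging process with the generated stored procedure. A sketch in plain ADO.NET, where the entity name "Employee", the batch tag, and the connection string are placeholders:

using System.Data.SqlClient;

using (var conn = new SqlConnection("Server=mdshost;Database=MDS;Integrated Security=SSPI"))
{
    conn.Open();

    // 1. Stage the member; ImportType 0 = create/update, ImportStatus_ID 0 = ready to process
    var stage = new SqlCommand(
        @"INSERT INTO stg.Employee_Leaf (ImportType, ImportStatus_ID, BatchTag, Code, Name)
          VALUES (0, 0, @batch, @code, @name)", conn);
    stage.Parameters.AddWithValue("@batch", "erp-sync-001");
    stage.Parameters.AddWithValue("@code", "EMP001");
    stage.Parameters.AddWithValue("@name", "Jane Doe");
    stage.ExecuteNonQuery();

    // 2. Kick off the staging process for that batch
    var process = new SqlCommand(
        "EXEC stg.udp_Employee_Leaf @VersionName = N'VERSION_1', @LogFlag = 1, @BatchTag = N'erp-sync-001'",
        conn);
    process.ExecuteNonQuery();
}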
I hope this helps. Feel free to ask more questions.
I like using a generic data access object that classes in my model inherit from. Each class has a one-to-one relationship with a table in the database.
We're using SSIS to replicate data from our CRM (as well as other data sources) into our MDS instance (for the time being). If you're not familiar with the tool, I'd recommend it for moving data around; it's relatively easy to pick up the basics. If you go this route, here's a great resource I followed to push data into our MDS system:
http://www.sqlchick.com/entries/2013/2/16/importing-data-into-master-data-services-2012-part-2.html
I'm looking for an easy way to transfer a database schema I have developed inside Visual Studio as a strongly typed dataset (an .xsd file) into a corresponding SQL Server database. Silly me, I assumed the process would be straightforward, but I can't find out how to do it. I assume I could duplicate the tables column by column, but that seems error prone. Does anyone know of a way to perform a schema transfer like this? Maybe a tool to translate the .xsd file into a corresponding SQL Server DDL file?
Final thought: once I have the schema transferred, moving data around between the two data stores will be straightforward; it's just getting the schemas synced that has me stumped...
Thanks,
Keith
Why didn't you implement your data model directly in SQL Server? That is the more common, well-established approach, and I think this is why Microsoft has not provided any wizard or tool for this case. You can also keep your data model as scripts or .sql files, manage them via SVN, and run them whenever you need to implement the model.
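That said, if you do want to generate the DDL from the .xsd, one low-tech approach is to load the schema into an ADO.NET DataSet and walk its tables and columns. A rough sketch, where the type mapping is deliberately minimal and the file path is a placeholder; treat the output as a starting point to review, not finished DDL:

using System;
using System.Collections.Generic;
using System.Data;
using System.Text;

class XsdToDdl
{
    static void Main()
    {
        var ds = new DataSet();
        ds.ReadXmlSchema("MyTypedDataSet.xsd"); // the strongly typed dataset's schema file

        var sql = new StringBuilder();
        foreach (DataTable table in ds.Tables)
        {
            var columns = new List<string>();
            foreach (DataColumn col in table.Columns)
                columns.Add($"    [{col.ColumnName}] {SqlType(col)}" +
                            (col.AllowDBNull ? " NULL" : " NOT NULL"));

            sql.AppendLine($"CREATE TABLE [{table.TableName}] (");
            sql.AppendLine(string.Join(",\n", columns));
            sql.AppendLine(");");
        }
        Console.WriteLine(sql); // pipe into a .sql file and keep it under version control
    }

    // Map .NET column types to reasonable SQL Server types; extend as needed
    static string SqlType(DataColumn col)
    {
        if (col.DataType == typeof(int)) return "int";
        if (col.DataType == typeof(long)) return "bigint";
        if (col.DataType == typeof(bool)) return "bit";
        if (col.DataType == typeof(DateTime)) return "datetime2";
        if (col.DataType == typeof(decimal)) return "decimal(18, 4)";
        int len = col.MaxLength > 0 ? col.MaxLength : 255;
        return $"nvarchar({len})";
    }
}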