Can any dimension be added to an SSAS cube dynamically?

We built an SSAS ROLAP cube whose data source is memSQL. The cube is built using Visual Studio 2019, and the driver used to connect to the memSQL data source is "MySQL .NET Provider 8.0.19". The cube builds and processes successfully. Because it is a ROLAP cube, one of the requirements we have is to add new dimensions/measures dynamically, without developer intervention. I am now looking for expert advice on how we can dynamically add a dimension or a measure (perhaps through an Autosys job scheduled to run every hour and check for new dimensions or measures).
Is it possible to do this through back-end C# code that updates the XMLA whenever we want to add a new dimension or measure?

I found good example code at Analysis Services Cube Programatically/Automatically Generated via c# and AMO.
This helped me build code for a dynamic cube using memSQL as the data source.
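For reference, here is a minimal AMO (Microsoft.AnalysisServices) sketch of what such a scheduled job could do. All server, database, cube, table, and column names below are hypothetical placeholders, and the bindings assume the new table already exists in the data source view:

    using System.Data.OleDb;               // OleDbType for the column bindings
    using Microsoft.AnalysisServices;      // AMO

    // Hypothetical names throughout; adapt to your own database and DSV.
    using (var server = new Server())
    {
        server.Connect("Data Source=localhost");
        Database db = server.Databases.FindByName("SalesRolapDb");

        // Define a new ROLAP dimension bound to a table in the existing DSV.
        Dimension dim = db.Dimensions.Add("Product");
        dim.Source = new DataSourceViewBinding(db.DataSourceViews[0].ID);
        dim.StorageMode = DimensionStorageMode.Rolap;

        DimensionAttribute key = dim.Attributes.Add("Product Key");
        key.Usage = AttributeUsage.Key;
        key.KeyColumns.Add(new DataItem("dbo_Product", "ProductKey", OleDbType.Integer));
        key.NameColumn = new DataItem("dbo_Product", "ProductName", OleDbType.WChar);

        dim.Update();                          // send the new dimension definition to the server
        dim.Process(ProcessType.ProcessFull);  // ROLAP dimensions still need a structural process

        // Attach the dimension to the cube so it becomes queryable.
        Cube cube = db.Cubes.FindByName("Sales");
        cube.Dimensions.Add(dim.ID);
        cube.Update(UpdateOptions.ExpandFull);
    }

An hourly Autosys job could wrap this in logic that compares the DSV against the cube's current dimension list and only acts when it finds something new.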


SSAS Tabular partitions created in SSMS are not reflected in the SSDT solution

I'm using SSDT to create my Tabular model, and I'm creating a table that I'm partitioning into two-week chunks of data, i.e. 24 partitions per year.
Usually I prepare two years of data partitioned (meaning 48 partitions).
When I deploy the model to Analysis Services, I can access it from SSMS by connecting to my Analysis Services instance.
My question is:
I've managed to create an automated script that generates the XMLA for creating the partitions in SSMS. I execute it and can see the partitions being created. However, when I return to SSDT and open the solution, these partitions are not reflected there. Is there a way to "force" SSDT to read the metadata from the Analysis Services instance upon opening the solution again?
Additionally, if I continue developing the model in SSDT, all the changes I made via SSMS will be overridden once I deploy it again. Is there a way to avoid that?
Creating partitions manually in SSDT can be very painful...
I've managed to create a script that automates it, but not in SSDT.
Any suggestions?
As userfl89 already pointed out, any partitions that you create in SSMS need to be "backported" into your SSDT project, for example by using the "Import From Server (Tabular)" option when creating a new project. Otherwise, you risk losing the partitions (and the data contained in them) when deploying from SSDT.
Alternatively, you can use BISM Normalizer - a plugin for Visual Studio - to merge changes (such as partitions) back and forth between SSDT and the deployed database.
There's also the Analysis Services Deployment Wizard, which takes the contents of your project's \bin\ folder and lets you deploy to a database, specifying that you don't want to overwrite existing partitions.
Lastly, if you haven't already, I would recommend taking a look at Tabular Editor. It's an alternative to SSDT for developing the model, so there will be some learning involved of course, but the good news is that you can do partial deployments, in order to avoid affecting the partitions on the already deployed database.
The database that you're accessing in SSDT is your workspace database. The workspace database is essentially a local copy of the tabular model. The partitions you added to the model in SSMS were created; the workspace database is just out of sync. You can overwrite your workspace database with the current version of the model by deleting/moving the files used in your local SSAS project, then creating a new Analysis Services project in SSDT using the "Import From Server (Tabular)" option and selecting the current version of the tabular model. This will create a new workspace database from the current version of the model. When doing this, make sure the files you delete or move belong to your local project, not the actual model. If you need to verify the location of the files used by the model, the DataDir property of the SSAS instance in SSMS shows this file path.
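If you are scripting partitions outside SSDT anyway, the Tabular Object Model (TOM) can be a cleaner alternative to hand-built XMLA. A minimal sketch, assuming a compatibility level 1200+ model and hypothetical server, model, table, and data source names:

    using Microsoft.AnalysisServices.Tabular;  // TOM

    // Hypothetical names; one partition per two-week window.
    using (var server = new Server())
    {
        server.Connect("Data Source=localhost\\TABULAR");
        Database db = server.Databases.GetByName("SalesModel");
        Table sales = db.Model.Tables["Sales"];

        sales.Partitions.Add(new Partition
        {
            Name = "Sales 2020-W01-W02",
            Source = new QueryPartitionSource
            {
                DataSource = db.Model.DataSources["SalesDw"],
                Query = "SELECT * FROM dbo.Sales " +
                        "WHERE OrderDate >= '2020-01-01' AND OrderDate < '2020-01-15'"
            }
        });

        db.Model.SaveChanges();  // sends the equivalent TMSL to the server
    }

Run in a loop over your two-week date windows, this generates all 48 partitions in one pass; the same caveat applies, though - partitions created this way still need to be imported back into SSDT.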

Processing an MDX cube in Pentaho

I'm using Pentaho BI Server (biserver-ce-5.0.1-stable).
Once I create the data source for reporting and analysis purposes (an OLAP cube), it works fine with the data available at that time, but I need to know how to process it on a schedule (the cube data needs to be refreshed after midnight).
Please share your ideas.
Download Pentaho Data Integration from here.
Write down, step by step, what you did when you created the data source.
Then build a Job (or a Transformation if it's simple) that reproduces those steps, and attach a scheduler to it.
Here you will find info about PDI.
Good luck!

Generating a Cognos report using Cognos Report Studio for a SQL query

I am a newbie in Cognos. I am trying to create a report using Report Studio; I have the required package, and I want to build the report from a complex SQL query that joins three tables. Can anyone give a suggestion on how to start building the report? Thanks!
How to write SQL for Cognos reports
FM (Framework Manager) is for creating complex models. If you need a quick-and-dirty report, you can specify custom SQL, as the tutorial shows.
It would depend on how the data you see in Report Studio is modelled. Typically a tool called Framework Manager is used to connect to and model the source tables and views. Framework Manager creates the packages that you see in Report Studio, and it can define relationships between entities like tables (if they're not already defined in the source database).
I'm not sure how to tell from Report Studio whether a relationship has been defined properly, other than trying to pull fields from each table into the same Report Studio query. It is best to talk to whoever designed the Framework Manager model, or to look at the model itself.

Analysis Services - cube on a different server to the source data

I'm trying to create an Analysis Services cube.
The source data for the dimensions etc. comes from tables on a SQL Server 2000 box.
I want to create the cube itself on a SQL Server 2008 R2 box.
How do I do this? BIDS seems to want to put the cube in the same place as the data feed. Does the source data have to be on the same server as the cube?
The source data can be located anywhere. The connection string in your data source defines where the data is read from. Set the deployment server in your project properties to tell BIDS where to deploy the cube to.
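To make the separation concrete, here is a hedged AMO sketch (all server and database names are hypothetical) in which the database is deployed to one server while its data source reads from another:

    using Microsoft.AnalysisServices;  // AMO

    // The cube lives on one instance; its relational source on another.
    using (var server = new Server())
    {
        server.Connect("Data Source=SQL2008R2BOX");  // where the cube is deployed
        Database db = server.Databases.Add("SalesCube");

        DataSource ds = db.DataSources.Add("LegacySource");
        ds.ConnectionString =
            "Provider=SQLOLEDB;Data Source=SQL2000BOX;" +  // where the data is read from
            "Initial Catalog=SalesDW;Integrated Security=SSPI;";

        db.Update(UpdateOptions.ExpandFull);
    }

In BIDS the same split is just two settings: the connection string in the data source object, and the Server property under Project Properties > Deployment.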

Deploy only cube schema, without processing

Is there a way to deploy only the cube schema, without processing the cube? It seems that in Visual Studio, when you deploy a cube, the default is "Deploy and Process".
The problem is that processing takes a lot of time, and my main purpose is just to write some MDX script and see whether it works against the cube structure. Processing the whole cube seems like overkill, hence the question.
You can also set this as a deployment option under the project properties. The value should be set to "Do not process".
Yes, you can programmatically deploy just the MDX script. There is also a download on Microsoft's web site called BIDS Helper that has a facility to do this. It's a plugin for Visual Studio that provides various tools, including a facility to deploy an MDX script to a cube.
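If you want the programmatic route, here is a minimal AMO sketch (hypothetical server, database, and cube names) that replaces only the calculation script, leaving cube data untouched:

    using Microsoft.AnalysisServices;  // AMO

    // Hypothetical names; only the MDX script is updated, nothing is reprocessed.
    using (var server = new Server())
    {
        server.Connect("Data Source=localhost");
        Cube cube = server.Databases.FindByName("SalesDb").Cubes.FindByName("Sales");

        MdxScript script = cube.DefaultMdxScript;
        script.Commands.Clear();
        script.Commands.Add(new Command(
            "CALCULATE; CREATE MEMBER CURRENTCUBE.[Measures].[Test] AS 1;"));

        script.Update();  // deploys just the script
    }

This is essentially what BIDS Helper's "Deploy MDX Script" feature does for you.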