I have performed an ETL operation, created a data warehouse, and loaded the data into it. So far so good: the ETL seems to work all right, since the data warehouse contains all the data I need. I then created an SSAS project from my data warehouse, following the AdventureWorks DW example, deployed the cube, and processed it. But when I try to browse the cube, there is a problem: the dimension members exist, but the measures are empty. The dimensions contain data, yet as soon as I drag a measure in, all I get is empty cells. What causes this?
In the cube designer, check the "Dimension Usage" tab. Make sure that the intersection between a dimension and a measure group has something there (if there is meant to be a relationship).
What happens if you drag & drop the measure to the browser, without any dimensions?
Also, what version of SSAS are you on? Is it 2005? That had IgnoreUnrelatedDimensions set differently from 2008, I think.
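As a quick test outside the browser, you can also query a measure on its own in MDX. A sketch, with placeholder server, database, cube, and measure names, assuming the Analysis Services PowerShell cmdlets are installed:

```powershell
# Query the measure with no dimensions at all (names are placeholders).
# If even this comes back empty, the measure group itself contains no data.
Invoke-ASCmd -Server "localhost" -Database "MyOlapDb" `
    -Query "SELECT { [Measures].[Sales Amount] } ON COLUMNS FROM [MyCube]"
```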
Make sure you've linked your dimensions with measure groups (i.e. that your fact table references your dimension tables).
Also make sure you have foreign keys defined in your data warehouse, since the wizards in Visual Studio use them when proposing the cube structure.
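For example, a sketch with hypothetical table and column names, run from PowerShell with the SQL Server cmdlets:

```powershell
# Declare the foreign key in the warehouse so the Visual Studio wizards
# can detect the fact-to-dimension relationship (all names are placeholders).
Invoke-Sqlcmd -ServerInstance "MyDwhServer" -Database "MyDwh" -Query @"
ALTER TABLE dbo.fact_sales
    ADD CONSTRAINT FK_fact_sales_dim_date
    FOREIGN KEY (date_key) REFERENCES dbo.dim_date (date_key);
"@
```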
In order to deploy to a different data warehouse server:
Right-click on the cube's project name.
Go to Properties.
Under Configuration Properties, go to Deployment.
For Processing Option, change Default to Do Not Process. That way, even if the database is very large and holds a huge amount of data, it will still get deployed, and you can process it later.
For Target, set the name of the server.
Click OK, deploy, and process later. (A command-line sketch of the same deployment follows below.)
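If you would rather script this, the Analysis Services Deployment Utility can run the same deployment from the command line. A hedged sketch, with placeholder paths for a SQL Server 2008 installation; edit the project's .deploymenttargets file first so the server and database point at the new target:

```powershell
# Build the project first, then point the deployment utility at the generated
# .asdatabase file; /s runs it silently and writes a log instead of showing the wizard.
& "C:\Program Files (x86)\Microsoft SQL Server\100\Tools\Binn\VSShell\Common7\IDE\Microsoft.AnalysisServices.Deployment.exe" `
    "C:\Projects\MyCube\bin\MyCube.asdatabase" `
    /s:"C:\Temp\deploy.log"
```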
Yesterday I made some changes to an existing cube in SSAS. I added a new fact table to the Data Source View, which I linked to the appropriate dimension tables.
I then proceeded by opening up the mycube.cube [Design] tab and the section Cube Structure. From there I added a new measure group to the cube corresponding to the newly added fact table. I verified that the new fact table was implemented in the cube designer "scheme" and that the keys were correctly set.
I then saved the cube and waited for the SQL Agent to process it during the night (it is a rather extensive cube, so I avoid processing it during office hours).
This morning I see that the OLAP cube has been processed, and that the SQL Agent's View History does not contain any errors or warnings.
However, the cube does not have the newly added measure group. I performed the same steps on a test server earlier, and that worked without any trouble.
The only difference I can see is the impersonation information in the Data Source. My questions are therefore:
If I make changes as a user in SSAS without deploying the cube from within SSAS, and I am not the user specified under Use a specific Windows user name and password under the Impersonation Information tab in the Data Source, will the changes not be made by the SQL agent?
Do I need to be the user which is stated as the Owner of the SQL agent task?
First, you have to make sure the edits you made to your cube are reflected in the job steps. To do this, edit the job and check that the steps take your recent changes into account.
Then, to make sure the job is executed as a particular user, you may have to set up a proxy account in the job. This proxy account needs the rights to read the sources and to write to the Analysis Services target.
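As a sketch of that setup (all names and the password are placeholders), the credential and proxy can be created in T-SQL, for example from PowerShell:

```powershell
# Create a credential for the processing account and an Agent proxy that
# may run Analysis Services command job steps (names are placeholders).
Invoke-Sqlcmd -ServerInstance "MyServer" -Query @"
USE master;
CREATE CREDENTIAL SsasProcessingCred
    WITH IDENTITY = N'DOMAIN\CubeProcessingUser', SECRET = N'<password>';

USE msdb;
EXEC dbo.sp_add_proxy
    @proxy_name = N'SsasProcessingProxy',
    @credential_name = N'SsasProcessingCred';

-- Let the proxy run 'SQL Server Analysis Services Command' job steps
EXEC dbo.sp_grant_proxy_to_subsystem
    @proxy_name = N'SsasProcessingProxy',
    @subsystem_name = N'ANALYSISCOMMAND';
"@
```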
I am new to SSAS and am modifying an existing SSAS cube to add new columns and to rename and delete existing columns.
I made the required changes in the data source.
I refreshed the Data Source View and was able to see the new changes.
I added the new columns to the dimension by dragging and dropping them from the DSV dimension table, and deleted the unnecessary/erroneous columns. I created the required measures in the fact table by selecting the required columns from the DSV fact table.
When I browse the cube from BIDS, I can only see the attributes and measures that were available before I made the changes, not the latest ones. I did not deploy the cube to the server.
Where do I need to make changes in order to see the new ones?
As suggested by sdlaursen, your changes will not be reflected in the cube until you deploy and then process it.
No structural change to the cube shows up until you deploy it; processing a cube only refreshes the data, not the structure.
You can only observe your changes when browsing the dimensions etc. in BIDS if you have in fact deployed and processed your cube/SSAS database. There is no other way. If you do not want to overwrite your production cube, you can deploy to a different TEST database by temporarily changing the deployment server/database in your BIDS project.
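If you prefer to script the processing step after deployment, a minimal sketch (server and database names are placeholders, assuming the Analysis Services PowerShell cmdlets) could look like this:

```powershell
# Deploying pushes the new structure; this XMLA command then runs the
# full process that loads the data into the changed structure.
Invoke-ASCmd -Server "localhost" -Query @"
<Process xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Object>
    <DatabaseID>MyOlapDb</DatabaseID>
  </Object>
  <Type>ProcessFull</Type>
</Process>
"@
```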
I have 3 databases with the same structure, but different data, since they are from different clients.
Now, I have an existing SSAS project. Its Data Source Views, Cubes and Dimensions can only use or access one DB.
What I want is to be able to use multiple databases with the same structure, and create a cube using them.
Each client must also be able to use the cube, but they can only see their own data.
Is this possible? Can you please provide insights and some useful references?
Easy Solution
The easiest way to solve this would be to just have three Analysis Services databases. Setup would be easy: you would have three structurally identical databases and no need to manage security within the cubes, only access to the cubes themselves. It is easy to manage, and it is difficult to make errors that allow users to access data they should not see. And as nobody should be allowed to access data from other companies, there is no need for one common cube.
Just deploy your project three times using a different Analysis Services database name.
Then change the data source object of the deployed databases to point to the different relational databases.
For the first step, in Business Intelligence Development Studio, right click on the project node in Solution Explorer, select the bottom entry ("Properties"), and then select "Deployment". Here, you can enter the server to deploy the solution to, as well as the database name. After closing the dialog, right click on the project node again, and select Deploy. Repeat this step, using three different database names.
Then, connect to your Analysis Services server in SQL Server Management Studio, open each database, and edit the data source object of each database to point to its relational database.
After that, re-process the Analysis Services database.
Alternatively, you can also do everything in BIDS, i.e. change the data source between changing the target database for deployment and deploying, and then, after deployment, re-process the Analysis Services database if necessary.
If you assume you will need to change and deploy the cube definition several times, you probably could make use of configurations which you can edit in the project properties dialog using the "Configuration Manager" button. You would have three configurations, one for each target Analysis Services database. You could select one of the configurations in the dropdown list in the toolbar for each deployment without the need to edit properties again and again.
If you need to do this often, I think it would not be difficult to automate the steps of changing the database and reprocessing the cube, either via XMLA, or via AMO, or in PowerShell; see the sketch below. But implementing this would be another question.
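For illustration, a rough PowerShell/AMO sketch of that automation might look like the following (server, database, and connection-string values are placeholders, assuming AMO is installed alongside Analysis Services):

```powershell
# Repoint a deployed Analysis Services database at another relational
# database and reprocess it, using AMO (all names are placeholders).
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.AnalysisServices") | Out-Null

$server = New-Object Microsoft.AnalysisServices.Server
$server.Connect("localhost")

$db = $server.Databases.FindByName("SalesCube_ClientA")
$ds = $db.DataSources[0]
$ds.ConnectionString = "Provider=SQLNCLI10;Data Source=MyDwhServer;" +
                       "Initial Catalog=Dwh_ClientA;Integrated Security=SSPI"
$ds.Update()      # push the changed data source definition back to the server

# Reprocess so the cube now reads from the new relational database
$db.Process([Microsoft.AnalysisServices.ProcessType]::ProcessFull)
$server.Disconnect()
```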
More Complex Solution
If you really want to have everything in one cube, then you will have to have a union of the tables from the different sources in the data source view. If all three relational databases are on the same SQL Server instance, you can define this either as a named query in the data source view, or as a view in one of the databases, maybe even better as a view or table in a separate relational database. You can access a table or view from another database running in the same instance of SQL Server in the form NameOfDB.Schema.Tablename.
In case these databases are on different instances, you could use linked servers.
And of course, you will have to manage the keys in these different databases so that the same dimension entry has the same key, and different dimension entries have different keys. And you will have to set up security in the cube so that no user can see data that is not meant to be seen.
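To make the union and the key management concrete, here is a sketch with hypothetical database, table, and column names. The offsets keep the surrogate keys distinct per client, and the client column can later drive role-based security in the cube:

```powershell
# A view unioning the three client databases (all on the same instance),
# offsetting the keys so the same key never identifies two different members.
Invoke-Sqlcmd -ServerInstance "MyDwhServer" -Database "ConsolidatedDwh" -Query @"
CREATE VIEW dbo.dim_customer_all AS
SELECT customer_key + 1000000 AS customer_key, customer_name, 'ClientA' AS client
FROM ClientA_DB.dbo.dim_customer
UNION ALL
SELECT customer_key + 2000000, customer_name, 'ClientB'
FROM ClientB_DB.dbo.dim_customer
UNION ALL
SELECT customer_key + 3000000, customer_name, 'ClientC'
FROM ClientC_DB.dbo.dim_customer;
"@
```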
While you could use different data source objects in Analysis Services for different tables or named queries, each individual table or named query uses exactly one of them, since in the end a single SQL statement is sent to that source. And each dimension needs to be based on one data source view object, i.e. one named query, view, or table. For fact tables you could get around this using partitions, but not for dimensions.
A while back I created a cube in BIDS 2008 (not R2) with a single fact table and around 28 dimensions. This got deployed to SQL Server 2008 and auto-updates itself with data from the ERP system (using a data warehouse and SSIS and all that).
The customer liked it and wanted me to make another one.
The next one, however, has around 100 dimension views in SQL.
I created the data source view (it looks enormous), but is there some way to automate creating the dimensions based on the data source view tables?
My sanity is at stake here :-).
EDIT:
I did it manually for the moment, but I'd still like a method for possible future cubes.
Within BIDS, the only automation would be this: if you create a cube using the wizard, all the dimension objects that you configure will be created for you, as long as the dimension does not yet exist. However, these automatically created dimensions have just the key attribute; you will have to add all other attributes for each dimension manually. Outside BIDS, you could script dimension creation with AMO, as sketched below.
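Purely for illustration, a rough PowerShell/AMO sketch. Server, database, and column names are placeholders, and the assumption that every DSV table has an "id" key column is hypothetical; a real DSV will need per-table key logic:

```powershell
# Create one bare (key-attribute-only) dimension per DSV table,
# similar to what the cube wizard does.
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.AnalysisServices") | Out-Null

$server = New-Object Microsoft.AnalysisServices.Server
$server.Connect("localhost")
$db  = $server.Databases.FindByName("MyOlapDb")
$dsv = $db.DataSourceViews[0]

foreach ($table in $dsv.Schema.Tables) {
    $dimName = $table.TableName                # assume DSV table name = dimension name
    if ($db.Dimensions.FindByName($dimName)) { continue }

    $dim = $db.Dimensions.Add($dimName)
    $dim.Source = New-Object Microsoft.AnalysisServices.DataSourceViewBinding($dsv.ID)

    # Key attribute only, like the wizard; all other attributes stay manual
    $attr = $dim.Attributes.Add("$dimName Key")
    $attr.Usage = [Microsoft.AnalysisServices.AttributeUsage]::Key
    $attr.KeyColumns.Add((New-Object Microsoft.AnalysisServices.DataItem($table.TableName, "id")))
    $dim.Update()
}
$server.Disconnect()
```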
I'm trying to configure storage mode ROLAP for a partition in an existing SSAS cube. The cube is a little messy in that the measure group is defined by a named query (as opposed to a table) and the dimensions are defined in several different data source views (DSV).
This is the error message I get when querying the cube with MDX:
Executing the query ...
Server: The operation has been cancelled.
Errors in the high-level relational engine. The 'dbo_dim_account' table that is required for a join cannot be reached based on the relationships in the data source view.
Execution complete
Note that MOLAP storage mode with proactive caching works fine. This problem occurs only when storage mode is ROLAP or HOLAP.
Also, I have tried to add the tables of all dimensions to the DSV of the cube in question but that doesn't seem to help.
Any ideas?
Not an expert here, but you could try importing the AS database into Visual Studio: "Import Analysis Services Database" in the New Project... dialog.
Once in there, you can see the table schemas for the Data Source View (which is where the relational tables are defined that the cubes are extracted from). Next, look to make sure the "dbo_dim_account" table is there and that your fact table is related to it.
It may be that a dimension and fact have to be in the same DSV for the relation to work?
Also, maybe the SSAS flight recorder or the Application event log would have more detail on the issue?