I have the following error:
"Errors in the OLAP storage engine: Rigid relationships between attributes cannot be changed during incremental processing of a dimension. The error occurred when processing attribute 'Month'. Source attribute: 'Date'. Key column value(s) of the source attribute: '112234'."
What happened:
Yesterday I inserted record '112234' into the source and then processed the dimension, but I realized I had made a mistake. I updated the source, changing '112234' to '112233', and tried to process the dimension again, but received the error above.
I thought this was an incremental-processing issue, so I waited until today and inserted today's key, '112235'. But dimension processing failed with the same error, even though the new value is higher than yesterday's.
How can I get rid of '112234' and get the dimension to pick up the latest value, '112235', and continue from there? I don't want to change the attribute relationships.
Related
We are running our ETL incrementally on the ga_sessions table, partitioned by date, and for one particular day the ETL fails with this error: **Cannot read non-required field as required STRUCT Field: trafficSource.adwordsClickInfo.targetingCriteria**.
The failure occurs for that particular day alone; the query works for all other days. I am not sure what this error is about. Is there a way to ignore those records, or to find them?
Attaching the schema information along with a sample query.
The query below works for other days; the failure occurs when 16th November is included.
Suppose I have a dimension DIM_Users with two attributes: UserId [bigint] and Reputation [int]. In this case I can process the dimension successfully.
But after I add a DisplayName [nvarchar(255)] attribute to the dimension, processing fails with the following message:
Errors in the OLAP storage engine: The attribute key cannot be found
when processing: Table: 'cube_DIM_Users', Column: 'DisplayName',
Value: 'Justin ᚅᚔᚈᚄᚒᚔ'. The attribute is 'Display Name'.
Comparing the screenshots, I noticed that the first time 5,987,286 UserIds were processed (which is the correct value), but the second time only 70,000 were.
I also see that the value "Justin ᚅᚔᚈᚄᚒᚔ" looks strange, but I can't figure out how it could affect processing of the attribute key.
Any ideas about what's wrong with my dimension?
I've found this article but it doesn't help.
It seems this problem is caused by a collation mismatch between your data source and SSAS. You will get a better understanding of possible collation issues if you run a SQL select such as SELECT DISTINCT DisplayName FROM yourTable WHERE DisplayName LIKE 'Justin%'. There will probably be more than one entry, and values that differ only in characters the collation ignores can collide during processing.
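For illustration, here is a minimal sketch of how such a collision can look, assuming a common older collation like SQL_Latin1_General_CP1_CI_AS (the collation name and the comparison are illustrative assumptions, not taken from the original post):

```sql
-- Sketch: many collations assign no sort weight to code points their
-- sorting rules predate (such as the Ogham characters above), so two
-- different strings can compare as equal.
SELECT CASE
           WHEN N'Justin' = N'Justin ᚅᚔᚈᚄᚒᚔ' COLLATE SQL_Latin1_General_CP1_CI_AS
               THEN 'values collide under this collation'
           ELSE 'values are distinct under this collation'
       END AS collation_check;
```

If the two values collide, SSAS may treat them as a single member while the relational source treats them as two, which can surface as duplicate-key or key-not-found errors during processing.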
If your "User Id" attribute is unique, please try the following workaround: add an artificial unique key for each UserId row in your dimension table, e.g. an incrementing integer. Assign this new key to the KeyColumn of your attribute and assign "UserId" to the NameColumn.
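A minimal sketch of that workaround on the relational side, assuming a dimension table named dbo.DIM_Users (the table and column names are placeholders, not from the original post):

```sql
-- Sketch: add an incrementing integer as an artificial unique key;
-- table and column names are assumptions.
ALTER TABLE dbo.DIM_Users
    ADD UserKey int IDENTITY(1, 1) NOT NULL;
```

In the dimension editor you would then bind the attribute's KeyColumn to UserKey and its NameColumn to the value you want displayed; since the key is now an integer, string collation no longer participates in key matching.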
Hint: if you expand the key column properties of an attribute in an SSAS dimension, you can also change the collation SSAS uses for processing. I have tried this in the past, but it did not always resolve collation-based issues for me.
I'm trying to add a new column to my SSAS cube. The column is a date field that links to my DimDate table (a date dimension); it represents the project completion date.
However, not all projects have a completion date, because old projects were never assigned this value. This is expected; we don't want to put bogus dates into the field just to get SSAS to work.
Processing the cube fails with:
Errors in the OLAP storage engine: The attribute key cannot be found when
processing: Table: 'dbo_FactMyTable', Column: 'MyDate_id', Value: '0'.
The attribute is 'Date Id'.
I can't disable "missing values" for the entire project because in most cases, this really is an error. How can I disable missing values for this dimension?
Or is there a better way to handle missing dates/values like this?
A small correction: based on your question, you need to change processing error handling for the specific measure group, not the dimension. You can do this for all dimensions linked to a given measure group, but not for one specific dimension.
You can first process the individual measure group for _Table: 'dbo_FactMyTable'_ with the necessary missing-value settings, and then process the rest of your cube with default settings.
The main problem here is how to process the rest of the cube. You might have a sophisticated system that builds processing XMLA scripts dynamically based on knowledge of data updates (I do this with SSIS); in that case you would not be asking this question. Suppose your environment is simpler: you update the cube and want to process it as a whole. In that scenario I would suggest the following workflow:
1. Process Default on all dimensions (performs initial processing, or processing after structural changes)
2. Process Update on all dimensions
3. Process the cube with Unprocess, invalidating it
4. Process your special measure group
5. Process the cube with Process Default
This first updates the dimensions, then clears the processed flag from all measure groups in the cube. After that, you process your special measure group with the special error settings, which marks that measure group as processed. Then, during Process Default on the cube, only unprocessed measure groups are covered, which excludes your special measure group from the processing scope.
The answer is a bit complicated, but this article does a great job of explaining it, including screenshots for the SSAS-challenged like me.
http://msbusinessintelligence.blogspot.com/2015/06/handling-null-dates-in-sql-server.html?m=1
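For reference, the usual pattern such articles describe is to seed the date dimension with an explicit "unknown" row and point dateless facts at it during ETL, so that every fact row joins to a real dimension member. A minimal sketch, with all table and column names assumed rather than taken from the question:

```sql
-- Sketch: seed an explicit "unknown" member in the date dimension.
INSERT INTO dbo.DimDate (Date_id, FullDate, DateLabel)
VALUES (-1, NULL, 'Unknown');

-- Map missing completion dates to the unknown member when loading
-- the fact table (the error above showed the orphan value 0).
UPDATE f
SET    f.MyDate_id = -1
FROM   dbo.FactMyTable AS f
WHERE  f.MyDate_id IS NULL OR f.MyDate_id = 0;
```

This keeps NULL handling in the ETL layer instead of relaxing SSAS error handling for the whole cube.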
I have a dimension with attribute AGE.
I have applied discretization to that attribute with a bucket count of 20.
Everything works fine when we have enough values in the AGE column in the underlying database.
But recently we updated the table, and now only one row has a value in the AGE column.
Now I am getting a processing error saying there are not enough values to create the buckets.
Can I bypass this error and still process the cube? I want processing to succeed even when the underlying table does not have enough data to create the buckets.
Unfortunately, no. The only way is to manually set the DiscretizationMethod property back to None.
I also tried changing it directly in the XML, from Automatic to None, but that failed as expected: no changes were applied.
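As a side note, the root cause is easy to confirm from the relational side: discretization needs enough distinct, non-null values to form the requested number of buckets. A quick hedged check, with table and column names assumed:

```sql
-- Sketch: count the distinct non-null values available for bucketing;
-- with a bucket count of 20, a result of 1 explains the failure.
SELECT COUNT(DISTINCT AGE) AS distinct_age_values
FROM   dbo.DimCustomer
WHERE  AGE IS NOT NULL;
```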
I have created the dimensions Vendors, Distributors, and Time.
I also have the fact tables Purchase and Total Paid Amount.
I have set the following properties on the dimension:
UnknownMember: Hidden
and, for the key column property:
NullProcessing: Error
I have also set up the attribute relationships on the cube properly, so it gives me the correct result, as shown below.
It shows values without unknown members.
Now, on my report I have added a slicer for Vendors.
It still shows Unknown as a member, even though I don't have any null keys in any table.
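To double-check that, a query along these lines (table and column names are placeholders) would surface any fact keys that have no matching dimension row, which is the other common reason an Unknown member appears:

```sql
-- Sketch: find fact rows whose vendor key is missing from the
-- vendor dimension; names are assumptions, not from the question.
SELECT    f.VendorId, COUNT(*) AS orphan_rows
FROM      dbo.FactPurchase AS f
LEFT JOIN dbo.DimVendors AS d
       ON d.VendorId = f.VendorId
WHERE     d.VendorId IS NULL
GROUP BY  f.VendorId;
```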
Does anyone have an idea how to get rid of this unknown member, following best practices?
I am using SQL Server 2012 with the latest updates applied, so that I can create Power View reports using a connection on a SharePoint site.
Thank you,