I've been looking around for quite a while to see if anyone could provide directions and/or tests to fix this issue, so far without success.
I'm working on a client's multidimensional cube (they have several in the same warehouse), and have created my own development copy from that exact cube so I don't break anything in production while developing.
The issue is that whenever I edit my cube and then deploy, the deployment removes the data from the cube, and in some programs the cube disappears altogether. The cube itself is still visible in SSMS but contains no data.
I then have to do a full process of the entire database to get the data back, which is rather annoying given that it takes around 30-40 minutes during which I cannot work on it, and the change I've made is minor (such as changing the Order property of a dimension from Name to Key, or creating a measure group).
Some settings/extra info:
When I deploy, I have set the cube's processing option to Do Not Process, due to some prior processing issues when processing from BIDS.
I have a delta process to keep the data up to date, which runs continuously and doesn't fail. It moves no data into the failed cube, however, while the other cubes present work just fine.
In script view, the first MDX statement under Calculations is a CALCULATE statement, as some sources suggested that its absence could cause an issue like this.
It is deployed from VS 2008 (the client's version).
Deploying to Localhost
The views upon which some dimensions are built contain UNION statements, but only a few records.
Scenarios where it fails:
Refresh data source view
Create new dimension
Change dimension properties
Create measure groups
Updating dimensions
Probably more that I either haven't tested or can't remember.
Does anyone have any idea what the issue is and how to fix it? I'd really appreciate it if someone could point me in the right direction; I haven't found a solution yet.
Well, this is expected behaviour. SSAS creates aggregations during processing; if the structure of the cube or a dimension is changed, the existing aggregations become invalid and the entire cube goes into the "Unprocessed" state. As you have found out yourself, you then need to do a full process before you can browse the cube.
Here's a blog post with the list of actions and their effect on the state of the cube: http://bimic.blogspot.com/2011/08/ssas-which-change-makes-cubedimension.html
I suggest you create a small data set for development purposes and test the cube on that data before moving to production. You can also limit the data loaded into the cube by switching to a query (instead of the table) in the partition designer; in the query you can then use a WHERE condition to limit the records loaded into the cube and make processing faster.
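For example, a query-bound partition for development could look something like this (the fact table and date-key column names here are hypothetical, not taken from your project):

```sql
-- Hypothetical partition query binding: instead of binding the
-- partition to the whole fact table, bind it to a query that
-- filters to a narrow date range, so development processing
-- only loads a small slice of the data.
SELECT *
FROM dbo.FactSales
WHERE DateKey >= 20230101
  AND DateKey <  20230201
```

The cube structure stays identical to production; only the partition's source query differs, so the change can be reverted before deploying the real thing.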
We are seeing some odd behaviour on our SSAS instances. We process our cubes as part of an overnight job on different environments; on our prod environment we process the cube on a separate server and then sync it out to a set of user-facing servers. However, we are seeing this behaviour even on environments where we process and query on a single instance.
The first user that hits any environment with fresh data seems to trigger a reload of the cube data from disk. Given that we have two cubes that run to some 20 GB, this takes a while. During this we see low CPU utilisation, but we can see the memory footprint of the SSAS instance spooling up; this is very visible if the instance has just been started, as it seems to start off using a couple of hundred MB and then spools up to 22 GB, at which point it becomes responsive for end users. During the spool-up, DAX Studio/Excel/SSMS all seem to hang as far as the end user is concerned. Profiler isn't showing anything useful other than very slow responses to metadata discover requests.
Is there a setting somewhere that can change this? Or do I have to run some DAX against the cube to "prewarm" it?
Is this something I've missed in the past because all my models were pretty small (sub-1 GB)?
This is SQL Server 2016 SP2 running Tabular models at compatibility level 1200.
Many thanks
Steve
I see that you are suffering from an acute OLAP cube cold. :)
You need to get it warmer (as you've guessed it, you need to issue a command against it, after (re)starting the service).
What you want to do is issue a discover command - a query like this one should be enough:
SELECT * FROM $System.DBSCHEMA_CATALOGS
If you want the full story, and a detailed explanation on how to automate this warming, you can find my post here: https://fundatament.com/2018/11/07/moments-before-disaster-ssas-tabular-is-not-responding-after-a-server-restart/
Hope it helps.
Have fun. :)
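One way to automate the warm-up - for example, as a job step scheduled to run after the service starts - is a short PowerShell step using the Invoke-ASCmd cmdlet from the SqlServer module. The server and database names below are placeholders; adjust them to your instance:

```powershell
# Hypothetical warm-up step: run a cheap discover query so the
# instance loads the model into memory before users connect.
Import-Module SqlServer
Invoke-ASCmd -Server "localhost" -Database "MyTabularDb" `
    -Query "SELECT * FROM `$System.DBSCHEMA_CATALOGS"
```

Any query that forces the model to load will do; the discover query above is just a cheap one that touches the database without scanning user data.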
I have been using UDFs for a few months now with a lot of success. Recently, I set up separate projects for development and stream a sample of 1/10 of our web tracking data into these projects.
What I'm finding is that the UDFs I use in production, which operate on the full dataset, are working, while the exact same query in our development project consistently fails, despite querying 1/10 of the data. The error message is:
Query Failed
Error: Resources exceeded during query execution: UDF out of memory.
Error Location: User-defined function
I've looked through our Quotas and haven't found anything that would be limiting the development project.
Does anybody have any ideas?
If anybody can look into it, here are the job ids:
Successful query in production: bquijob_4af38ac9_155dc1160d9
Failed query in development: bquijob_536a2d2e_155dc153ed6
Jan-Karl, apologies for the late response; I've been out of the country to speak at some events in Japan and then have been dealing with oncall issues with production.
I finally got a chance to look into this for you. The two job ids you sent me are running very different queries. The queries look the same, but they're actually running over views, which have different definitions. The query that succeeded is a straight SELECT * from the table, whereas the one that hits the JS OOM is using a UDF.
We're in the midst of rolling out a fix for the JS OOM issue, by allowing the JavaScript engine to use more RAM.
...
...and now for some information that's not really relevant to this case, but that might be of future value...
...
In theory, it could be possible for a query to succeed in one project and fail in another, even if they're running over exactly the same dataset. This would be unusual, but possible.
Background: BigQuery operates and maintains copies of customer data in multiple datacentres for redundancy. Different projects are biased to run in different datacentres to help with load spreading and utilisation.
A query will run in the default datacentre for its project if the data is fresh enough. We have a process that replicates the data between datacentres, and we avoid running in a datacentre that has a stale copy of the data. However, we run maintenance jobs to ensure that the files that comprise your data are of "optimal" size. These jobs are scheduled separately per datacentre, so it's possible that your underlying data files for the same exact table would have a different physical structure in cell A and cell B. It would be possible for this to affect aspects of a query's performance, and in extreme cases, a query may succeed in cell A but not B.
In order to test part of my SSIS process, I want to simulate part of the SSAS process failing.
The Package runs several processing steps in OLAP and we want to be sure that it will run even in the case of a partial failure.
How can I simulate this?
Since I'm assuming you aren't doing this testing in your production environment, you could temporarily drop one of the tables/views that your cube depends on.
Depending on how you trap errors, you could remove some dimension keys referenced by the fact table.
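For example, to provoke a key-not-found processing error, you could delete a handful of dimension rows that the fact table still references (the table and column names here are hypothetical):

```sql
-- Hypothetical: remove a few dimension members so the fact table
-- now contains foreign keys with no matching dimension row.
-- With the default processing error configuration, processing the
-- measure group should then fail with an "attribute key cannot be
-- found" error.
DELETE FROM dbo.DimProduct
WHERE ProductKey IN (1, 2, 3);
```

Run it inside a transaction (or against a restorable copy) so the rows can be put back once the failure-handling path has been exercised.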
I created some views to use as dimensions and a fact. I set up proactive caching on the measures, hoping to see updates to the measures automatically. In the storage settings, I chose Automatic MOLAP for the partition (just one partition for the measure group), and in the Options, set the Silence interval and Silence override interval to 10 seconds and 10 minutes respectively. I also checked Bring online immediately and Enable ROLAP aggregations. On the Notifications tab, I specified the fact view as the tracking table (SQL Server).
I deployed the project, but when I manually add or delete a row in the underlying table of the fact view, there is no update in the cube browser after refreshing.
To prove this, I created another project with all the same dimensions and facts, except that instead of views I used the actual tables. With all the same proactive caching settings, this time I am able to see the changes (rows added to/deleted from the fact table) after refreshing in the cube browser.
Can anyone explain this? Thanks.
You should configure SSAS to track the underlying tables instead of the views. The SQL Server notifications that drive proactive caching fire on changes to the tracked object itself; since rows are never inserted into or deleted from a view, tracking the view means no notification is ever raised.
I am having a strange problem when building a cube on SSAS. I have a fact table, let's say FactActivity. Then I have a dimension DimActivity, which has a 1 to 1 relationship with this fact, and all the foreign keys are bound to the dimension. So date dimensions, product dimensions and so on, are all bound to the DimActivity.
When I build the whole cube, it seems it builds the fact before the dimension, and therefore gives me errors. If, however, I manually process the dimension before the fact, it works.
Is there anywhere in SSAS where I can configure the build order, other than doing this from SSIS with the Analysis Services Processing Task?
Many thanks!
Processing a cube will not process the dimensions it relates to, because they are constructed as separate entities in SSAS. In practice, this means that a dimension can exist, be processed and be accessed without a relationship to a cube.
There is no such thing as a "general build order" to configure. It is up to you to decide how AS objects should be processed. There are many tools that facilitate this, and they all do the same thing: construct XMLA scripts to run on the AS server.
SSIS: Analysis Services Processing task
Configure a SQL agent job.
Perform a manual process using SSMS.
Program your processing activities using AMO
...
The important thing is that you process your dimensions before you process your cube. A simple solution is to process the entire SSAS database (containing your cubes and dimensions); this way, SSAS will automatically process the dimensions before processing the cubes.
Documentation on processing Analysis Services objects
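If you script it yourself, what all of those tools ultimately send to the server is an XMLA batch that processes the dimension objects first and the cube last - something like the sketch below (the database, dimension and cube IDs are placeholders):

```xml
<Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <!-- Process the dimension first... -->
  <Process>
    <Object>
      <DatabaseID>MyOlapDb</DatabaseID>
      <DimensionID>DimActivity</DimensionID>
    </Object>
    <Type>ProcessFull</Type>
  </Process>
  <!-- ...then the cube that depends on it. -->
  <Process>
    <Object>
      <DatabaseID>MyOlapDb</DatabaseID>
      <CubeID>Activity</CubeID>
    </Object>
    <Type>ProcessFull</Type>
  </Process>
</Batch>
```

You can generate a script like this from SSMS by starting a manual process and clicking the Script button instead of Run.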
When processing a dimension or the whole cube, click the 'Change Settings...' button before you click 'Run'. There you can change the way it processes. This link describes the effect of the available options:
http://technet.microsoft.com/en-us/library/ms174774.aspx
HTH
For others who are encountering similar problems....
The reason I was occasionally getting cube processing errors is that refreshes were happening at the same time as the scheduled hourly imports.
I am now using a log table to see which SSIS package is running. When an import starts, I insert a record into this table with a "Running" status.
Before processing the cube, I have a semaphore that checks whether any records in this table are data imports with a "Running" status. I only allow the refresh of the cube to happen if no imports are currently running. When the cube is processing, the imports have a corresponding semaphore and will not start unless no cube processing is currently "Running".
After implementing this logic, I've never gotten any errors when processing the cubes.
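As a rough sketch of the check (here against an in-memory SQLite table standing in for the real logging table; the table, column and status names are made up for illustration):

```python
import sqlite3

def can_start(conn, job_type):
    """Return True when no job of the *other* type is currently running.

    job_type is 'import' or 'cube_process'; each one blocks the other,
    mirroring the two semaphores described above.
    """
    other = 'cube_process' if job_type == 'import' else 'import'
    cur = conn.execute(
        "SELECT COUNT(*) FROM job_log WHERE job_type = ? AND status = 'Running'",
        (other,),
    )
    return cur.fetchone()[0] == 0

# Demo with an in-memory database standing in for the real log table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE job_log (job_type TEXT, status TEXT)")
conn.execute("INSERT INTO job_log VALUES ('import', 'Running')")

print(can_start(conn, 'cube_process'))  # False: an import is running
conn.execute("UPDATE job_log SET status = 'Done' WHERE job_type = 'import'")
print(can_start(conn, 'cube_process'))  # True: the import has finished
```

In the real package the same check is just a SQL query in an Execute SQL task that gates the Analysis Services Processing task.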