Difference between 'Process Default' and 'Process Full'? - SSMS

What is the difference between 'Process Default' and 'Process Full'?
When I do 'Process Default' it works fine, but if I use 'Process Full' I get this error. Why is this?

See doc at https://learn.microsoft.com/en-us/analysis-services/tabular-models/process-database-table-or-partition-analysis-services?view=asallproducts-allversions#bkmk_process_db
Process Default won't load data unless it has to. If you process a newly created object, or a database that has unprocessed objects, it will load that data; but if the data is already loaded, SSAS won't touch it.
Process Full loads everything and recalculates everything, every time.
Process Full is failing for you because of a login issue, as the error message states: you need to update the credentials the data source is using.

When only the metadata changes, choose Process Default.
When the data changes, choose Process Full.
Pro tip: always use the "Script" button and execute the generated code; you'll get more debug information when it fails. (A sketch of what that script looks like for a tabular model follows below.)
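For reference, here is a minimal sketch of the kind of TMSL refresh command the Script button generates for a tabular model, written in Python only to build the JSON; the database name is a placeholder, not from the original question. The only difference between the two options is the refresh type.

import json

# Sketch only: "MyTabularDB" is a placeholder database name.
# Process Full scripts as refresh type "full" (reload and recalculate everything);
# Process Default scripts as type "automatic" (only bring unprocessed objects up to date).
def refresh_command(refresh_type, database="MyTabularDB"):
    """Build a TMSL refresh command to paste into an XMLA query window in SSMS."""
    return json.dumps(
        {"refresh": {"type": refresh_type, "objects": [{"database": database}]}},
        indent=2,
    )

print(refresh_command("full"))       # what "Process Full" scripts to
print(refresh_command("automatic"))  # what "Process Default" scripts to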

Related

How do I fix Qlik Sense/QlikView "The requested resource cannot be found." when trying to export data?

I'm new to Qlik, having been handed over a project from someone else.
Qlik has been used primarily as a report generator. I've got a sheet that I right-click on to export data, and it has been working fine. But now that it's over 1 million rows, the export takes a really long time, and when it finally says "here's a link to download your exported data", clicking that link - whether immediately or after waiting a bit - always just says "The requested resource cannot be found." Every time.
This sheet was working fine with about 800,000 rows, and other sheets in the same app still work fine, so it's definitely just the volume. It feels like I'm hitting some "report will be available for this long" time limit and Qlik is deleting the export immediately upon creation.
I have no idea how to even begin troubleshooting or fixing this - any suggestions?
My guess here is that you are hitting some timeout and/or limit - memory or file size.
A few things off the top of my head:
NPrinting is built for this type of task. The downside is that there is a cost involved (you'll have to purchase a license).
Depending on your data, you can export/store the data as CSV during the reload process (STORE MyTable INTO ..\Output\MyCSVFile.csv (txt);).
Write a script that connects to the Qlik Sense app via the Engine JSON-RPC API, traverses the required table(s), and stores the response in whatever format you want - a sketch follows below.
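Here is a rough sketch of that third option, pulling a table object's data from the Engine JSON-RPC API over a websocket and writing it to CSV. It assumes the Python websocket-client package and that authentication (certificates, session cookie, headers, etc.) is already handled; the server URL, app id and object id are placeholders.

import csv
import json
from itertools import count
from websocket import create_connection

ENGINE_URL = "wss://your-qlik-server/app/your-app-id"  # placeholder
APP_ID = "your-app-id"                                 # placeholder
OBJECT_ID = "your-table-object-id"                     # placeholder
_ids = count(1)

def rpc(ws, handle, method, params):
    """Send one JSON-RPC request and wait for the response with the matching id."""
    request_id = next(_ids)
    ws.send(json.dumps({"jsonrpc": "2.0", "id": request_id,
                        "handle": handle, "method": method, "params": params}))
    while True:
        msg = json.loads(ws.recv())
        if msg.get("id") == request_id:
            return msg["result"]

ws = create_connection(ENGINE_URL)

app_handle = rpc(ws, -1, "OpenDoc", [APP_ID])["qReturn"]["qHandle"]
obj_handle = rpc(ws, app_handle, "GetObject", [OBJECT_ID])["qReturn"]["qHandle"]

# Page through the object's hypercube rather than requesting everything at once;
# a single data page is capped at 10,000 cells by the engine.
with open("export.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    top, height, width = 0, 1000, 10  # set width to your actual column count
    while True:
        result = rpc(ws, obj_handle, "GetHyperCubeData",
                     ["/qHyperCubeDef",
                      [{"qTop": top, "qLeft": 0, "qHeight": height, "qWidth": width}]])
        matrix = result["qDataPages"][0]["qMatrix"]
        if not matrix:
            break
        for row in matrix:
            writer.writerow([cell.get("qText", "") for cell in row])
        top += height

ws.close()

Paging the data out like this also sidesteps the front-end export timeout entirely, since the data never goes through the hub's temporary download link.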

Is it possible to make an MS Access 2007 code patch file?

I've been asked to make some changes to an Access app, which looks easy enough to do. The data and the code reside in one file. The trouble is that while I'm working on a copy of the file from some point in time, the production file keeps changing - the plant operates 24/7. I'll need a patch I can apply to the production file quickly, so they can do an immediate switchover without shutting things down.
Is it possible to make a code patch this way in Access, so I don't have to type my changes in all over again?
Otherwise, how can I best split the code and the data schema from the data, so I can work on as much code as possible independently of the data growth?
Thanks.
I found the wizard that splits a single Access file into a front-end and a back-end file. Now I can update the front-end file whenever I want without losing the latest data.
In my case, I had to hold the Shift key when opening the database to bypass the macro that hides the menus from the production users.

SQL Server Reporting Services (SSRS) report showing "ERROR#" or invalid data type error

I struggled with this issue for too long before finally tracking down how to avoid/fix it. It seems like something that should be on StackOverflow for the benefit of others.
I had an SSRS report where the query worked fine and displayed the string results I expected. However, when I tried to add that field to the report, it kept showing "ERROR#". I was eventually able to find a little bit more info:
The Value expression used in [textbox] returned a data type that is not valid.
But, I knew my data was valid.
Found the answer here.
Basically, it's a caching problem: you need to delete the ".data" file that is created in the same directory as your report. Some also suggested copying the query/report to a new report, but that appears to be the hard way to achieve the same thing. I deleted the .data file for the report I was having trouble with and it immediately started working as expected.
After you preview the report, click the refresh button on the report and it will pull the data again creating an updated rdl.data file.
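If this bites you regularly, the cleanup can be scripted; here is a small sketch, assuming your report project lives at the placeholder path below (the preview cache is stored next to each report as "<ReportName>.rdl.data").

from pathlib import Path

# Placeholder path: point this at your actual report project folder.
project_dir = Path(r"C:\Projects\MyReportProject")

# Remove the cached preview data for every report in the project.
for cache_file in project_dir.glob("*.rdl.data"):
    print(f"Deleting {cache_file}")
    cache_file.unlink()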
Another solution to this issue is to click Refresh Fields in the Dataset Properties menu.
This will update the list of fields, and force SSRS to get new data, rather than relying on a cached version.

SSIS Package Not Populating Any Results

I'm trying to load data from my database into an Excel file based on a standard template. The package is ready and it runs, throwing a couple of validation warnings stating that truncation may occur because my template has fields of a slightly smaller size than the DB columns I've matched them to.
However, no data is getting populated into my Excel sheet.
No errors are reported, and when I click Preview on my OLE DB source, it shows me rows of results. None of these make it into my Excel sheet, though.
You should first make sure that you have data coming through the pipeline. Double-click the arrow connecting your source to your destination (I'm assuming you don't have any steps in between) to open the Data Flow Path Editor, click Data Viewer, then Add, and click OK. That will allow you to see what is moving through the pipeline.
Something to consider with Excel is that it prefers Unicode data types to non-Unicode. Chances are you have a database collation that is non-Unicode, so you might have to convert the values in a Data Conversion transformation.
ALSO, you may need to force the package to execute in the 32-bit runtime. Visual Studio is a 32-bit environment, so the drivers you have visibility to are 32-bit; if there is no 64-bit equivalent, the package will break when you try to run it. Right-click your project, click Properties, and under the Debug menu change the Run64BitRuntime setting to False.
You don't provide much information. Add a Data Viewer between your source and your Excel destination to see if data is passing through. To do it, just double-click the data flow path, select Data Viewer and then add a grid.
Run your package. If you see data, provide more details so we can help you.
A couple of questions that may lead to an answer:
Have you checked that data is actually passing through the SSIS package at run time?
Have you double-checked your mapping?
Try converting within the package so you don't have the truncation issue.
If you add some more details about what you're running, I may be able to give a better answer.
EDIT: Considering what you wrote in your comment, I'd definitely try the third option. Let us know if this doesn't solve the problem.
Just as an assist for anyone else running into this - I had a similar issue and beat my head against the wall for a long time before I found out what was going on. My export WAS writing data to the file, but because I was using a template file as the destination, and that template file had previous data that had since been deleted, the process was appending the data BELOW the previously used rows. So I was writing out three lines of data, for example, but the data did not start until row 344!
The solution was to select the entire spreadsheet in my template file and delete every bit of it, so that I had a completely clean sheet to begin with. I then added my header lines to the clean sheet and saved it. Then I ran the data flow task and... ta-da! A perfect export.
Hopefully this will help some poor soul who runs into this same issue in the future.
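If you'd rather script that template cleanup than repeat it by hand in Excel, here is a rough sketch using the openpyxl package (my assumption, not part of the original answer); the file name is a placeholder. It keeps the header row and clears everything below it, which gives the Excel destination the same clean starting point.

from openpyxl import load_workbook

wb = load_workbook("template.xlsx")  # placeholder template file name
ws = wb.active

# Keep the header row and drop everything below it, so the sheet's used range
# shrinks back to row 1 and the SSIS Excel destination starts writing at row 2.
if ws.max_row > 1:
    ws.delete_rows(2, ws.max_row - 1)

wb.save("template.xlsx")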

Is it possible to force an error in an Integration Services data flow to demonstrate its rollback?

I have been tasked with demoing how Integration Services handles an error during a data flow to show that no data makes it into the destination. This is an existing package and I want to limit the code changes to the package as much as possible (since this is most likely a one time deal).
The scenario to be demonstrated is a "systemic" failure - the source file disappears midstream, the file server loses power, etc.
I know I can make this happen by setting the Error Output of the source to Failure and introducing bad data, but I would like to do something lighter than that.
I suppose I could add a Script transformation that looks for a certain value and throws an error, but I was hoping someone has come up with something easier / more elegant.
Thanks,
Matt
Mess up the file that you are trying to import by pasting in some bad data, or save it in another format such as UTF-8.
We always have a task at the end that closes out the data flow in our metadata tables. To test errors, I simply remove the ? that is the parameter placeholder for the stored proc it runs. It's easy to do, easy to put back the way it was, and it doesn't mess up anything data-wise, since our error trapping then closes the data flow with an error. You could do something similar by adding a task that calls a stored proc with an input variable but assigning no parameters to it, so it will fail. Once the test is done, simply disable that task.
Data will make it to the destination if the data flow is not running as a transaction. If you want to prevent partial data from being loaded, you have to use transactions. There is an option to force the end result of a control flow item to "failed" irrespective of the actual result (ForceExecutionResult), but this is not available on data flow components. You will have to either produce an actual error in the data or engineer a situation that will create an error. There is no other way...
Could we try the transaction-level (TransactionOption) property of the package?
On failure of the data flow it will roll back all the data written to the target.
Only on a successful data flow will it commit the data to the target; otherwise it rolls the data back.