SSAS corrupt string store data file for one of the table columns error

I'm getting an error in SSAS when I redeploy the project. The error is:
The JSON DDL request failed with the following error:
Error happened while loading table data. Possible cause is: corrupt string store data file for one of the table columns.
Error happened while loading table data. A duplicate value has been detected in the Unique Value store associated with the dictionary.
Database consistency checks (DBCC) failed while checking the data segments.
Error happened while loading table '', file '1245.H$Countries (437294994)$Country (437295007).POS_TO_ID.0.idf'.
Database consistency checks (DBCC) failed while checking the data segments.
Error happened while loading table '', file '1245.H$Countries (437294994)$City ....
I checked the Countries table, but there is no duplicated data.
Is there anybody who can help please?

As the error implies, the model has some corrupted data (not to be confused with duplicated data).
Microsoft has some resolutions for these kinds of errors here: https://learn.microsoft.com/en-us/analysis-services/instances/database-consistency-checker-dbcc-for-analysis-services?view=asallproducts-allversions#common-resolutions-for-error-conditions
TL;DR:
Depending on the error, the recommended resolution is to either reprocess an object, delete and redeploy a solution, or restore the database.
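If reprocessing is the right fix in your case, one way to trigger it is a TMSL refresh command against the affected table, for example from SSMS. A minimal sketch; "YourDatabase" is a placeholder, and "Countries" is the table from the question:

{
  "refresh": {
    "type": "full",
    "objects": [
      {
        "database": "YourDatabase",
        "table": "Countries"
      }
    ]
  }
}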

Related

Criteria: Between #date# And #otherDate# causes Microsoft Access database to become corrupted, and then I get this error message: Unrecognized database format

In Microsoft Access, I made a simple query with criteria to list all entries in a table between two dates. It was working well for more than a year, but now I think I am facing a bug...
Between #2022-11-29# And #2023-01-26# causes a bug that corrupts the database and then I get the error message: Unrecognized database format
Between #2022-11-29# And #2023-01-25# causes no bug and no error message and I get the data
<#2022-11-29# causes no bug and no error message and I get the data
Any idea how we can find what is causing the bug and prevent the database from getting corrupted?
Thank you!
I removed the data from the shared folder and recreated the problem locally on one computer. Same issue...
*** UPDATE ***
I found a problematic entry in the result when I query with: <#2022-11-29#
I am trying to remove the entry, but I always get the "Invalid bookmark" popup. Any idea how to remove this line?
Here is how I fixed this:
1) I exported all the data I could from the corrupted database to Excel, then removed the problematic data in Excel manually.
2) I exported the same data from the most recent backup to Excel too.
3) I merged all the data in Excel and verified it manually and with formulas comparing differences between sheets (a scripted comparison like the sketch below can also help).
4) I then deleted all the entries in the Access tables.
5) I imported the data from the Excel sheets into the Access tables.
The problem seems to be gone.
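For the verification step, a minimal pandas sketch, assuming the two exports are saved as corrupted.xlsx and backup.xlsx (both file names and the sheet layout are hypothetical):

import pandas as pd

# Load the export from the corrupted database and the export from the backup
corrupted = pd.read_excel("corrupted.xlsx")
backup = pd.read_excel("backup.xlsx")

# An outer merge with indicator=True flags rows that exist in only one export
merged = corrupted.merge(backup, how="outer", indicator=True)
print(merged[merged["_merge"] != "both"])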

Why is WiX Torch giving me error code 0279?

I have a problem when building a .wix MST difference (transform) file. I get the following error:
"The table definition of target database does not match the table definition updated database. A transform requires that the target database schema match the update database schema".
I tried finding a solution on the internet for almost 2 hours, but no luck. I know it is probably caused by a difference in the MSI table schemas, but I have no idea how to fix it.
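For context, the transform in this scenario is typically built with a torch invocation along these lines (the file names here are placeholders); the error means the two MSIs were built with differing table schemas, which would need to be aligned before torch can diff them:

torch.exe target.msi updated.msi -out difference.mst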

Databricks error IllegalStateException: The transaction log has failed integrity checks

I have a table that I need to drop, delete the transaction log for, and recreate, but while I am trying to drop it I get the following error.
I have run a REPAIR TABLE statement on this one, which could be responsible for the error, but I am not sure.
IllegalStateException: The transaction log has failed integrity checks. We recommend you contact Databricks support for assistance. To disable this check, set spark.databricks.delta.state.corruptionIsFatal to false. Failed verification of:
Table size (bytes) - Expected: 0 Computed: 63233
Number of files - Expected: 0 Computed: 1
We think this may just be related to S3 eventual consistency. Please try waiting a few extra minutes after deleting the Delta directory before writing new data to it. Also, a normal MSCK REPAIR TABLE doesn't do anything for Delta, as Delta doesn't use the Hive Metastore to store the partitions. There is an FSCK REPAIR TABLE, but that is for removing file entries from the transaction log of a Databricks Delta table that can no longer be found in the underlying file system.
We don't recommend overwriting a Delta table in place, like you might with a normal Spark table. Delta is not like a normal table - it's a table, plus a transaction log, and many versions of your data (unless fully vacuumed). If you want to overwrite parts of the table, or even the whole table, you should use Delta's delete functionality. If you want to completely change the table, consider writing to an entirely new directory, such as /table/v2/... and separately deleting the other table.
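For illustration, a minimal PySpark sketch of the "write to a new directory" approach; all paths are hypothetical, and df stands in for the reworked data (spark and dbutils are the ambient notebook objects):

# df stands in for the reworked data; in a real notebook it would be your DataFrame
df = spark.range(10)

# Write to a fresh Delta directory instead of overwriting the old table in place
df.write.format("delta").mode("overwrite").save("/table/v2/")

# Separately remove the old directory once nothing reads from it anymore
dbutils.fs.rm("/table/v1/", True)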
To stop the check from blocking the operation, you can disable it with the command below (PySpark notebook):
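# Disables the Delta state integrity check; use with caution, since the underlying corruption remains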
spark.conf.set("spark.databricks.delta.state.corruptionIsFatal", False)

Is there a size limit on appending ORC data files to Vora tables?

I created a Vora table in Vora 1.3 and tried to append data to that table from ORC files that I got from an SAP BW archiving process (NLS on Hadoop). I had 20 files, containing approximately 50 million records in total.
When I tried to use the "files" setting in the APPEND statement as "/path/*", after approximately 1 hour Vora returned this error message:
com.sap.spark.vora.client.VoraClientException: Could not load table F002_5F: [Vora [eba156.extendtec.com.au:42681.1640438]] java.lang.RuntimeException: Wrong magic number in response, expected: 0x56320170, actual: 0x00000000. An unsuccessful attempt to load a table might lead to an inconsistent table state. Please drop the table and re-create it if necessary. with error code 0, status ERROR_STATUS
The next thing I tried was appending data from each file using separate APPEND statements. On the 15th append (of 20), I got the same error message.
The error indicates that the Vora engine on node eba156.extendtec.com.au is not available. I suspect it either crashed or ran into an out-of-memory situation.
You can check the log directory for a crash dump. If you find one, please open a customer message for further investigation.
If you do not find a crash dump, it is likely an out-of-memory situation. You should find confirmation in either the engine log file or in /var/log/messages (if the OOM killer ended the process). In that case, the available memory is not sufficient to load the data.
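As a hedged sketch (log paths vary by distribution), checking for OOM killer activity on that node could look like this:

# Search the system log for OOM killer activity around the time of the failure
grep -iE "oom-killer|out of memory" /var/log/messages

# The kernel ring buffer also records which process was killed
dmesg | grep -i "killed process"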

BIDS package errors on truncation while exporting to flat file

I have a BIDS package. The final "Data Flow Task" exports a SQL table to Flat File. I receive a truncation error on this process. What would cause a truncation error while exporting to flat file? The error was occurring within the "OLE DB" element under the Data Flow tab for the "Data Flow Task".
I have set the column to ignore truncation errors and the export works fine.
I understand truncation errors. I understand why they would happen when you are importing data into a table. I do not understand why this would happen when outputting to a flat file.
This might be occurring for many reasons. Please check the steps listed below:
1) Check that the source data types match the destination data types. If they differ, that can throw a truncation error (see the example below).
2) Check if there are blocks: you can check this by creating a Data Viewer before the destination and watching the data come through.
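As a minimal illustration of point 1 (the table and column names are hypothetical), the source column can be cast in the source query to the exact width the flat file column is defined with:

-- Cast the source column to match the flat file column's defined width
SELECT CAST(CustomerName AS VARCHAR(50)) AS CustomerName
FROM dbo.Customers;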