I'm getting an error on SSAS when redeploying the project. The error is:
The JSON DDL request failed with the following error: Error happened while loading table data. Possible cause is: corrupt string store data file for one of the table columns. Error happened while loading table data. A duplicate value has been detected in the Unique Value store associated with the dictionary. Database consistency checks (DBCC) failed while checking the data segments. Error happened while loading table '', file '1245.H$Countries (437294994)$Country (437295007).POS_TO_ID.0.idf'. Database consistency checks (DBCC) failed while checking the data segments. Error happened while loading table '', file '1245.H$Countries (437294994)$City ....
I checked the Countries table, but there is no duplicate data.
Can anybody help, please?
As the error implies, the model has some corrupted data (not to be confused with duplicated data).
Microsoft has some resolutions for these kinds of errors here: https://learn.microsoft.com/en-us/analysis-services/instances/database-consistency-checker-dbcc-for-analysis-services?view=asallproducts-allversions#common-resolutions-for-error-conditions
TL;DR:
Depending on the error, the recommended resolution is to either
reprocess an object, delete and redeploy a solution, or restore the
database.
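If reprocessing the affected object is the route you take, here is a minimal sketch using the Tabular Object Model (TOM) from C#; the server address, database name, and table name are placeholders for your own environment:

using Microsoft.AnalysisServices.Tabular;

class ReprocessTable
{
    static void Main()
    {
        // Connect to the tabular instance (placeholder address).
        var server = new Server();
        server.Connect("Data Source=localhost");

        // Look up the deployed database (placeholder name).
        Database database = server.Databases.GetByName("MyTabularDb");

        // Request a full refresh of the suspect table; this rebuilds its
        // dictionaries and data segments from the source.
        Table table = database.Model.Tables["Countries"];
        table.RequestRefresh(RefreshType.Full);

        // Commit the refresh request to the server.
        database.Model.SaveChanges();

        server.Disconnect();
    }
}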
I get the following error in Replication Monitor:
The row was not found at the Subscriber when applying the replicated UPDATE command for Table '[dgv].[POSCustomer]' with Primary Key =
The error is actually not about the missing row, but about the fact that the command references the dgv schema at all.
The publication that generated the error is supposed to replicate only to [ppv].[POSCustomer], and should not even be aware of [dgv].[POSCustomer]. And only rows created AFTER the initial snapshot is delivered are affected.
The background:
I'm setting up transactional replication from 3 on-premises databases, PPV, DGV, and PAC, to a single Azure SQL database.
The three databases belong to different legal entities, live on two separate servers (PPV on one, DGV and PAC on the other), and have identical schemas.
Tables with the same names from each database are set up to be replicated.
To differentiate them in the target database, I put them in three different schemas named after their source databases, i.e. ppv.POSCustomer, dgv.POSCustomer, pac.POSCustomer.
This is done by changing the setting in Publication properties -> Articles -> Article properties -> Destination object owner.
The initial snapshots are delivered without problems; however, after some time, the 'row was not found' errors started showing up in Replication Monitor.
I tried re-initializing the subscriptions several times, but the error keeps showing up after the snapshot is delivered.
All rows created after the snapshots are delivered are affected.
The databases are totally isolated from each other: there are no cross-database queries, no stored procedures, and no triggers that say a record from PPV.dbo.POSCustomer should be updated in DGV.dbo.POSCustomer, so I'm at a loss as to why this error happened.
I used sp_browsereplcmd to trace the command that generated the error, which leads me to:
{CALL [sp_MSupd_dboPOSCustomer] (,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,2019-05-14 00:00:00.000,,27280000.0000,10,,,,,,,,,,,,2019-05-14 18:30:04.000,,,,,,,,,,,,,,,,,,,,N'vinhn4-00001395',0x00000000d000080000)}
which I don't understand; this stored procedure is not part of our POS app.
How can I make this error go away? Manually inserting missing rows will not work, as all new rows are affected. Turning on -SkipErrors is not an option. Replicating to different target databases has been done successfully before, but setting up cross-database queries is such a pain with Azure SQL that I'd prefer to avoid it if possible.
The issue I am facing in my Node.js application is identical to this user's question: Cannot insert new value to BigQuery table after updating with new column using streaming API.
To my understanding, changes such as widening a table's schema may require some period of time before streamed inserts can reference the new columns; otherwise a 'no such field' error is returned. For me this error is not consistent, as sometimes I am able to insert successfully.
However, I specifically wanted to know whether you could use a load job instead of streaming. If so, what drawbacks does it have? I am not sure of the difference, even having read the documentation.
Alternatively, if I do use streaming but with the ignoreUnknownValues option, does that mean that all of the data is eventually inserted, including data referencing new columns, and just that the new columns are not queryable until the table schema has finished updating?
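For concreteness, here is how I understand the two approaches. The sketch below uses the .NET client (Google.Cloud.BigQuery.V2) purely for illustration, with made-up project, dataset, table, and column names; the same concepts exist in the Node.js client.

using Google.Cloud.BigQuery.V2;

class StreamingVsLoadJob
{
    static void Main()
    {
        // Placeholder project/dataset/table names.
        BigQueryClient client = BigQueryClient.Create("my-project");

        // Streaming insert: rows become queryable almost immediately, but the
        // insert path can lag behind a recent schema change. AllowUnknownFields
        // corresponds to ignoreUnknownValues: fields the streaming backend does
        // not recognize are dropped from the row instead of failing the insert.
        var row = new BigQueryInsertRow { { "existing_col", "value" }, { "new_col", 42 } };
        client.InsertRow("my_dataset", "my_table", row,
            new InsertOptions { AllowUnknownFields = true });

        // Load job: batch semantics. The job is validated against the table's
        // schema when it runs, so it does not hit the streaming metadata lag;
        // the trade-offs are latency and load-job quotas.
        BigQueryJob job = client.UploadJson(
            "my_dataset", "my_table",
            schema: null, // null = use the table's current schema
            rows: new[] { "{\"existing_col\": \"value\", \"new_col\": 42}" });
        job.PollUntilCompleted().ThrowOnAnyError();
    }
}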
I am building an application with ASP.NET Core MVC 6 and Entity Framework Core code-first with a built-in DB context; the SQL database has already been populated with data records. I recently made some small changes to the data models and recreated the migration, with commands ("dotnet ef migrations add Stage3", "dotnet ef database update") in the VS 2015 Package Manager Console, but it ran into this error:
dotnet.exe : System.Data.SqlClient.SqlException (0x80131904): There is already an object named 'Company' in the database.
The Company table is at the top of the table relationships; it seems that because the Company table is already there, EF cannot update it to the new table structure. If I change the DB name in the connection string, it creates a new database with the new table structure without any issues. I am not sure how to address this issue. After the application goes live in the near future I will probably make more changes to the models and will have the same issue again, and I cannot delete a database with live data to recreate a new table structure. Maybe I should configure it in the Startup.cs file, but I haven't found any useful resources yet. Please give me some advice.
I have attempted to change the DB initializer (see the attached screenshot), but I am not sure how to do it.
I checked the project code again: the migration has not been applied to the __EFMigrationsHistory table, and the migration code actually contains the code to create the whole database structure, as in the sample below:
migrationBuilder.CreateTable(
    name: "Company",
    columns: table => new
    {
        CompanyId = table.Column<int>(nullable: false)
            .Annotation("SqlServer:ValueGenerationStrategy", SqlServerValueGenerationStrategy.IdentityColumn),
        CompanyName = table.Column<string>(maxLength: 100, nullable: false),
        IsAdmin = table.Column<bool>(nullable: false)
    },
    constraints: table =>
    {
        table.PrimaryKey("PK_Company", x => x.CompanyId);
    });
And I haven't changed the project namespace. Recently I just made some changes to a few table relationships, such as the site user permission table (a company has many sites). I added a permission table, so now the site user permission table can have multiple permission types instead of a single permission type.
I am not sure how to set up automatic migrations in Entity Framework Core.
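Should I, for example, apply pending migrations at startup with something like the following sketch? (ApplicationDbContext stands in for my own context class.)

using Microsoft.AspNetCore.Builder;
using Microsoft.EntityFrameworkCore;

public class Startup
{
    // ApplicationDbContext is a placeholder for your own DbContext type.
    public void Configure(IApplicationBuilder app, ApplicationDbContext db)
    {
        // Applies only the migrations not yet recorded in the history table;
        // existing data and up-to-date tables are left alone.
        db.Database.Migrate();

        // ... rest of the pipeline configuration ...
    }
}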
In the Entity Framework code-first approach, there are four different database initialization strategies:
CreateDatabaseIfNotExists: This is the default initializer. As the name suggests, it will create the database if none exists, as per the configuration. However, if you change the model classes and then run the application with this initializer, it will throw an exception.
DropCreateDatabaseIfModelChanges: This initializer drops an existing database and creates a new one if your model classes (entity classes) have been changed. So you don't have to worry about maintaining your database schema when your model classes change.
DropCreateDatabaseAlways: As the name suggests, this initializer drops an existing database every time you run the application, irrespective of whether your model classes have changed or not. This is useful when you want a fresh database every time you run the application, for example while you are developing it.
Custom DB Initializer: You can also create your own custom initializer, if none of the above satisfies your requirements or you want some other process to initialize the database using the above initializers.
So if you are using DropCreateDatabaseIfModelChanges or DropCreateDatabaseAlways, replace it with CreateDatabaseIfNotExists, as in the sketch below.
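For reference, the initializer is swapped like this in classic Entity Framework (a minimal sketch; AppDbContext is a made-up context name, and these initializers live in System.Data.Entity, not in EF Core):

using System.Data.Entity; // classic Entity Framework (EF 6)

// Made-up context name for illustration.
public class AppDbContext : DbContext { }

public static class DbConfig
{
    public static void ConfigureInitializer()
    {
        // Replace DropCreateDatabaseIfModelChanges / DropCreateDatabaseAlways
        // with this one, so an existing database (and its data) is preserved.
        Database.SetInitializer(new CreateDatabaseIfNotExists<AppDbContext>());
    }
}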
Please try this out.
I'm using Fluent NHibernate (and I'm a newbie). I have mapped a read-only table that already exists in the database (it's actually a view in the db). In addition, I have mapped new classes for which I want to create tables using SchemaExport.Create().
In my fluent mapping, I have specified ReadOnly() to mark the view as immutable. However, when I execute SchemaExport.Create(), it still tries to create the table, so I get the error "There is already an object named 'vw_Existing'".
Is there a way to prevent NHibernate from trying to create that specific table?
I suppose I could export and modify the SQL (SetOutputFile), but it would be nice to use SchemaExport.Create().
Thanks.
You're looking for
SchemaAction.None();
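It goes inside the ClassMap for the view. A minimal sketch, with made-up entity and column names:

using FluentNHibernate.Mapping;

// Made-up entity backed by the database view.
public class VwExisting
{
    public virtual int Id { get; set; }
    public virtual string Name { get; set; }
}

public class VwExistingMap : ClassMap<VwExisting>
{
    public VwExistingMap()
    {
        Table("vw_Existing");
        ReadOnly();          // the view is immutable
        SchemaAction.None(); // SchemaExport will skip this mapping entirely
        Id(x => x.Id);
        Map(x => x.Name);
    }
}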