SQL Server Merge Replication with a single GUID column - sql

I have a table that contains a single column, which is the primary key: uniqueidentifier, with rowguid = true. When creating a merge replication, the agent fails to start with:
"The article includes only the rowguidcol column. You must publish at least one other column."
Is there any way to publish this table without removing the rowguid and adding a second column?
Thanks

What is happening is that the Snapshot Agent is identifying your rowguid column as the ROWGUIDCOL required for each table published in Merge Replication, which leaves no other column in the article to publish.
See the section Snapshot Considerations in Enhance Merge Replication Performance for more information on the column Merge Replication creates and uses to uniquely identify each row in a published article.
Due to the nature of your existing column, you will most likely need to add a second dummy column to get this working for Merge Replication.
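As a minimal sketch, assuming a hypothetical table named dbo.GuidOnly whose only column is the uniqueidentifier primary key, the dummy column can be as small as a nullable bit:
-- Add a nullable dummy column so the article contains at least one
-- column besides the ROWGUIDCOL; it never needs to be populated.
ALTER TABLE dbo.GuidOnly ADD DummyCol bit NULL;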

Related

SQL Server constraint enforcement being violated for split seconds

I have an on-premise table of about 21 million rows with a primary key constraint, and when I search that table, there are no duplicates. This table is in an OLTP application database that is constantly moving.
I have the exact same table in Azure which has the same primary key constraint. This table is not an application table, it's just a copy of the one that is on-premise (the goal is to use this one for ad hoc queries, as a source for other systems, etc.).
When I use Azure Data Factory to copy all columns from the on-premise table to the table in Azure, it returns a violation of the primary key constraint. No matter how many times I run this Data Factory pipeline, it comes back with a primary key violation for duplicate keys (though the keys are always changing).
So I dropped the primary key constraint in Azure and ran the pipeline again, and sure enough, duplication exists.
Upon investigation, it appears that the on-premise database inserts a new record and then updates the old record to inactivate it. So for a fraction of a second there are two active rows, which ADF grabs and then tries to insert into the table in Azure, and that of course fails because of the duplicate primary keys.
Now to the best of my knowledge, this shouldn't be possible. You can't insert a new row that violates the primary key constraint. But ADF seems to be grabbing all the data and some of those rows are mid-flight where the insert has happened and the update to inactivate the old row hasn't happened yet.
For those who are curious, the insert and the update of the old row happen less than a second apart... typically 10-20 microseconds. I don't know how this is possible, and I don't know how to fix it (because I can't modify the application code). The on-premise database is SQL Server 2000 and the cloud database is an Azure SQL Database.
Try the READPAST hint; it skips rows that are currently locked rather than selecting them.
SELECT * FROM yourtable WITH (READPAST);
Since you have created_date and updated_date columns, you can also select only rows older than 5 seconds to avoid the duplication:
SELECT * FROM yourtable
WHERE created_date <= DATEADD(second, -5, GETDATE())
  AND updated_date <= DATEADD(second, -5, GETDATE());
Alternatively, you can enable fault tolerance in the Azure Data Factory pipeline.
Copy data from a Source SQL to a Sink SQL database. A primary key is defined in the sink SQL database, but no such primary key is defined in the source SQL server. The duplicated rows that exist in the source cannot be copied to the sink. Copy activity copies only the first row of the source data into the sink. The subsequent source rows that contain the duplicated primary key value are detected as incompatible and are skipped.
To skip the incompatible rows in the copy activity, set "enableSkipIncompatibleRow": true in its JSON definition.
Please refer to: https://learn.microsoft.com/en-us/azure/data-factory/copy-activity-fault-tolerance
If it is possible to modify your application, check for the primary key before each insert or update using the EXISTS() function.
Example:
IF EXISTS (SELECT 1 FROM Table_Name WHERE Id = @Id)
BEGIN
    UPDATE Table_Name
    SET Col_Name = @Value
    WHERE Id = @Id;
END
ELSE
BEGIN
    INSERT INTO Table_Name (Id, Col_Name)
    VALUES (@Id, @Value);
END
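Under concurrency this check-then-act pattern can race between the EXISTS check and the INSERT. A sketch of one common way to close that window, reusing the same hypothetical names:
SET XACT_ABORT ON;
BEGIN TRANSACTION;
-- UPDLOCK/HOLDLOCK keeps the key range locked until commit, so a
-- concurrent session cannot insert the same key in between.
IF EXISTS (SELECT 1 FROM Table_Name WITH (UPDLOCK, HOLDLOCK) WHERE Id = @Id)
    UPDATE Table_Name SET Col_Name = @Value WHERE Id = @Id;
ELSE
    INSERT INTO Table_Name (Id, Col_Name) VALUES (@Id, @Value);
COMMIT TRANSACTION;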

Repeatable write to SQL Sink using Azure Data Factory is failing

Scenario: I'm copying data from Azure Table Storage to an Azure SQL DB using an upsert stored procedure like this:
CREATE PROCEDURE [dbo].[upsertCustomer]
    @customerTransaction dbo.CustomerTransaction READONLY
AS
BEGIN
    MERGE customerTransactionstable WITH (HOLDLOCK) AS target_sqldb
    USING @customerTransaction AS source_tblstg
    ON (target_sqldb.customerReferenceId = source_tblstg.customerReferenceId AND
        target_sqldb.Timestamp = source_tblstg.Timestamp)
    WHEN MATCHED THEN
        UPDATE SET
            AccountId = source_tblstg.AccountId,
            TransactionId = source_tblstg.TransactionId,
            CustomerName = source_tblstg.CustomerName
    WHEN NOT MATCHED THEN
        INSERT (
            AccountId,
            TransactionId,
            CustomerName,
            CustomerReferenceId,
            Timestamp
        )
        VALUES (
            source_tblstg.AccountId,
            source_tblstg.TransactionId,
            source_tblstg.CustomerName,
            source_tblstg.CustomerReferenceId,
            source_tblstg.Timestamp
        );
END
GO
where customerReferenceId & Timestamp constitute the composite key for the CustomerTransactionstable
However, when I update the rows in my source (Azure Table) and rerun the Azure Data Factory pipeline, I see this error:
"ErrorCode=FailedDbOperation,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=A
database operation failed with the following error: &apos;Violation of
PRIMARY KEY constraint &apos;PK_CustomerTransactionstable&apos;.
Cannot insert duplicate key in object
&apos;dbo.CustomerTransactionstable&apos;. The duplicate key value is
(Dec 31 1990 12:49AM, ABCDEFGHIGK).\r\nThe statement has been
terminated.&apos;,Source=.Net SqlClient Data
Provider,SqlErrorNumber=2627,Class=14,ErrorCode=-2146232060,State=1,Errors=[{Class=14,Number=2627,State=1,Message=Violation
of PRIMARY KEY constraint &apos;PK_CustomerTransactionstable&apos;"
Now, I have verified that there's only one row in both the source and sink with a matching primary key; the only difference is that some columns in my source row have been updated.
This link in the Azure documentation speaks about repeatable copying; however, I don't want to delete rows for a time range from my destination before inserting any data, nor do I have the ability to add a new sliceIdentifierColumn to my existing table or make any schema change.
Questions:
Is there something wrong with my upsert logic? If so, is there a better way to do upserts to Azure SQL DBs?
If I choose to use a SQL cleanup script, is there a way to delete only those rows from my Sink that match my primary key?
Edit:
This has now been resolved.
Solution:
A primary key violation will only occur if the procedure tries to insert a record whose primary key already exists. In my case, although there was just one record in the sink, the condition on which the merge was being done wasn't getting satisfied due to a mismatch between datetime and datetimeoffset fields.
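In other words, the comparison in the MERGE's ON clause never matched, so every run fell through to the INSERT branch. A minimal sketch of a guard for the procedure above, assuming the sink's Timestamp column is datetime while the incoming value is datetimeoffset (the exact types are an assumption; aligning the actual column types is the cleaner fix):
-- Cast both sides of the comparison to one type so equal instants
-- actually compare equal instead of always failing the match.
ON (target_sqldb.customerReferenceId = source_tblstg.customerReferenceId AND
    CAST(target_sqldb.Timestamp AS datetimeoffset) = CAST(source_tblstg.Timestamp AS datetimeoffset))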
Have you tried using ADF Mapping Data Flows instead of coding it through a stored procedure? It may be much easier to do SQL upserts that way.
With an Alter Row transformation, you can perform Upsert, Update, Delete, Insert via UI settings and picking a PK: https://learn.microsoft.com/en-us/azure/data-factory/data-flow-alter-row
You would still just need a Copy Activity prior to your Data Flow activity to copy the data from Table Storage. Put it in a Blob folder and then Data Flow can read the Source from there.

How to transfer data using SSIS

I am new to SSIS packages and would like assistance on how to transfer data from one data source into my own database.
Below is my data flow:
Now I have an ODBC Source (Http_Requests Source) where I take data from a PostgreSQL database table (see the screenshot below for the table columns and data):
Below is the OLE DB Destination with the table I want to transfer the data to (this table is currently blank):
Now I tried to start debugging to extract the data, but I get a few errors (displayed below):
I am a complete novice, so I would like some guidance on what I need to include in order to get this SSIS package to transfer the data across. Would I need to include a MERGE statement, and how would I apply it? I have heard you can write a MERGE as a stored procedure and call the proc as a SQL command. Does that mean I will need to write a proc in SSMS and then call it within the OLE DB Destination?
If somebody can provide an example and screenshot then that would be very helpful as I am really new to SSIS.
Thank you,
Check the constraints on the destination table, or disable them before running the package.
Below are the queries you can use.
-- Disable all table constraints
ALTER TABLE YourTableName NOCHECK CONSTRAINT ALL
-- Enable all table constraints
ALTER TABLE YourTableName CHECK CONSTRAINT ALL
Tick the 'Keep identity' box, or drop the primary key on the table. After you apply the changes, do not forget to refresh the metadata by reopening the mappings in SSIS.
The error means that PerformanceId is an IDENTITY column on your destination table. IDENTITY columns are read-only unless you tell SQL Server otherwise. In T-SQL you would turn on IDENTITY_INSERT to insert explicit values into an IDENTITY column; because you are in SSIS, you accomplish the same thing by checking the 'Keep identity' box.
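For reference, the T-SQL equivalent looks like this (the table and column names are hypothetical):
-- Allow explicit values in the IDENTITY column for this session
SET IDENTITY_INSERT dbo.Performance ON;
INSERT INTO dbo.Performance (PerformanceId, MetricName)
VALUES (42, 'PageLoad');
-- Turn it back off; only one table per session may have it ON
SET IDENTITY_INSERT dbo.Performance OFF;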
HOWEVER, whenever you get an error like this, it is usually a sign that you should NOT be mapping ID to PerformanceId. The question you have to ask is: is the identity from your source supposed to be the identity of the destination table? Usually not; most of the time it would be another column acting as a surrogate key. You also have to consider whether it is even possible: if there is a unique constraint or primary key, the identity cannot repeat, which means you have to know that your source's ID column will not cause a duplicate primary key violation.
More than likely the actual fix is for you to uncheck ID from the source and ignore the value.
The column PerformanceID (in the target) is almost certainly an identity column, and that is why it is not working. You may not want to transfer it (and instead have SQL Server generate values for PerformanceID), or you can check 'Keep identity.'

SSIS Data Migration Primary Key Identity Conflicts

We have developed a large data migration from one DB schema to the other. We had built it based on the idea that the destination DB would be empty; however, months ago we started putting clients on the new application, which means their data is being housed in the new schema (the destination DB).
Now we're in a situation where the primary keys could overlap from the source to the destination DB, and we're struggling to come up with a solution. The only solution I can think of is to check whether the ID exists in the destination, update the ID in the source to be 1 more than the greatest ID in the destination, and then migrate the record. This seems really cumbersome to have to do for hundreds of tables. Any ideas?
Sorry, I don't know anything about SSIS, but the following are a few ways to solve the problem using SQL.
When inserting into the destination tables, do not insert the identities. As rows are inserted, capture the newly generated identities and the old identities in a mapping table; see MERGE + OUTPUT INTO, sketched after the next paragraph. Then use the mapping table to update the tables that haven't been inserted yet, substituting the old identities with the new ones.
Of course for this to work, insertion into tables has to be done in an order that won't cause foreign key or constraint violations.
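A minimal sketch of the MERGE + OUTPUT INTO pattern, using hypothetical source and destination tables:
-- Migrate dbo.SourceCustomer into dbo.DestCustomer, letting the
-- destination generate fresh identity values, and record the
-- old-to-new mapping for later foreign key fix-ups.
DECLARE @IdMap TABLE (OldId int NOT NULL, NewId int NOT NULL);
MERGE dbo.DestCustomer AS t
USING dbo.SourceCustomer AS s
ON 1 = 0   -- never matches, so every source row takes the INSERT branch
WHEN NOT MATCHED THEN
    INSERT (CustomerName)
    VALUES (s.CustomerName)
OUTPUT s.CustomerId, inserted.CustomerId
INTO @IdMap (OldId, NewId);
MERGE is used here rather than a plain INSERT because only MERGE's OUTPUT clause may reference source columns, which is what lets the old identity be captured next to the new one.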
If you're not into doing all that, and you can lock users out of the tables for short periods of time, DBCC CHECKIDENT could be used to 'reserve' identity ranges. These new identities can then be written into the old data, which is then inserted with SET IDENTITY_INSERT ON.
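A sketch of that reservation approach, again with hypothetical names and an assumed range size of 100,000:
-- Reseed well past the current maximum to reserve a block of identity
-- values; new application rows will start after the reserved range.
DBCC CHECKIDENT ('dbo.DestCustomer', RESEED, 200000);
-- Renumber the source rows into the reserved range and insert them
-- with their new, guaranteed-unique keys.
SET IDENTITY_INSERT dbo.DestCustomer ON;
INSERT INTO dbo.DestCustomer (CustomerId, CustomerName)
SELECT CustomerId + 100000, CustomerName
FROM dbo.SourceCustomer;
SET IDENTITY_INSERT dbo.DestCustomer OFF;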

Replication - Explicit value must be specified for identity column in table

I'm using Merge Replication. The Identity range management is AUTOMATIC
I have a trigger on the Companies table which inserts rows into the SerialNumberScheme table, which has documentID as an identity column.
While synchronizing, I'm getting the error below:
A row insert at 'SERVER\MUMBAI.PROD_SUB' could not be propagated to 'SERVER\NEWYORK.PROD'. This failure can be caused by a constraint violation. Explicit value must be specified for identity column in table 'SerialNumberScheme' either when IDENTITY_INSERT is set to ON or when a replication user is inserting into a NOT FOR REPLICATION identity column.
The data is inserted properly at the subscriber but is not replicated to the publisher.
Any solution/suggestion?
Sounds like your trigger gets fired when the replication agent applies the updates. Normally the trigger should run only at the publisher (or, more precisely, at the site which inserts the original data); replication will then replicate the effects of the trigger. I think all you need is to mark the trigger as NOT FOR REPLICATION, as sketched below.
See Controlling Constraints, Identities, and Triggers with NOT FOR REPLICATION.
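A minimal sketch of the change, with a hypothetical trigger name and body (your actual logic will differ):
-- Recreate the trigger with NOT FOR REPLICATION so it does not fire
-- when the merge agent applies rows replicated from the other node.
CREATE TRIGGER dbo.trgCompaniesInsert
ON dbo.Companies
AFTER INSERT
NOT FOR REPLICATION
AS
BEGIN
    SET NOCOUNT ON;
    -- Hypothetical body: create a SerialNumberScheme row per company
    INSERT INTO dbo.SerialNumberScheme (CompanyId)
    SELECT CompanyId FROM inserted;
END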