Duplicate message detection on failover - ActiveMQ

We have ActiveMQ 5.15.2 in the following configuration:
PostgreSQL for persistence
two nodes, one in standby
JDBC master/slave with a shared database
static cluster discovery
Everything seems to be fine and failover works as expected, but sometimes during failover (or a restart of the whole cluster) we observe the following exception:
WARN [ActiveMQ NIO Worker 6] org.apache.activemq.transaction.LocalTransaction - Store COMMIT FAILED:java.io.IOException: Batch entry 2 INSERT INTO ACTIVEMQ_MSGS(ID, MSGID_PROD, MSGID_SEQ, CONTAINER, EXPIRATION, PRIORITY, MSG, XID) VALUES (...) was aborted: Unique-Constraint activemq_msgs_pkey Detail: key(id)=(7095330) already exists
ActiveMQ propagates this exception directly to the client.
I thought that ActiveMQ would be able to recognize a duplicated message, but something goes wrong here.
The client tries to deliver a message with an already existing ID. Shouldn't ActiveMQ compare this message to the one already in storage (if possible, depending on the DB) and, if both messages are the same, simply ignore the second one?
Or maybe ActiveMQ assumes that duplicated messages are allowed to be persisted, and our DB structure (the constraint on id) is not correct?
CREATE TABLE activemq_msgs
(
    id bigint NOT NULL,
    container varchar(250),
    msgid_prod varchar(250),
    msgid_seq bigint,
    expiration bigint,
    msg bytea,
    priority bigint,
    xid varchar(250)
);
ALTER TABLE activemq_msgs
    ADD CONSTRAINT activemq_msgs_pkey
    PRIMARY KEY (id);
Should we drop activemq_msgs_pkey?

Our JDBC configuration was incorrect: autocommit was set to false, and as a result messages were propagated to the DB with a delay.
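If the constraint violation shows up again, one way to check whether the colliding rows really are duplicates is to look up the existing row by the ID from the WARN message and compare its producer message ID and sequence with the incoming message. A minimal sketch against the schema above (7095330 is the value from the log):
-- Inspect the row that already holds the conflicting ID and compare
-- msgid_prod/msgid_seq with the message the broker is trying to insert.
SELECT id, container, msgid_prod, msgid_seq, expiration, priority
FROM activemq_msgs
WHERE id = 7095330;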

Related

Subscription has received a request for an abort shutdown

I have a subscription with DB2 for i (AS/400) as the source and Kafka as the target (11.4.0.2). The replication method is REFRESH.
There was a table in the subscription that was running well.
But after I deleted it and added it to another new subscription (with the same source and target, also REFRESH), it cannot be replicated. The replication ends with the following message:
Subscription XXX has received a request for an abort shutdown. There are no other error messages. The table mapping (already flagged for refresh) seems to be ignored by CDC.
From the log, it returns:
Received normal shutdown request number 9 with reason OTHER_ENGINE_COMPLETE
I have no idea what is happening, as there are no obvious hints in the logs.
I tried (1) recreating the table mapping and (2) updating the table definitions, but neither worked.
Other tables in the subscription can be replicated.
After you delete the table mapping from the earlier subscription, please check whether it still shows up under 'Replication Tables' in Management Console (select the datastore -> Replication Tables). If it does, select and remove it there, then remap it in the new subscription.
Thanks
Sudarshan K

In SQL, how to always insert data in a table with concurrency?

In SQL, how do you always insert data into a table under concurrency? We must ensure that the data is received. For example, with a table "Bet", all the app clients of that database server must be sure that their bet is placed.
On an INSERT statement, the only way concurrency can make it fail is by violating some kind of constraint. As long as you have declared the constraints (primary key, not null, foreign key, etc.), the database will throw an error on any violation.
I'm not sure what API you are using to talk to the database, but it will certainly signal in some way that a database error has occurred. You then need to handle that case appropriately by informing the application so it can invalidate the data.
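As a minimal sketch of that idea, with a hypothetical Bet table and column names (none of these identifiers come from the question): the declared constraints do the concurrency work, and an insert that would violate them fails cleanly, so the client knows its bet was not placed.
-- Hypothetical table; the primary key and CHECK constraint are what the
-- database enforces even under concurrent inserts.
CREATE TABLE Bet (
    bet_id    bigint        NOT NULL,
    client_id bigint        NOT NULL,
    amount    decimal(10,2) NOT NULL CHECK (amount > 0),
    CONSTRAINT pk_bet PRIMARY KEY (bet_id)
);
-- Each client simply inserts its bet. If two clients happen to use the same
-- bet_id, the second INSERT fails with a primary key violation and the
-- application must report that the bet was not placed (or retry with a new id).
INSERT INTO Bet (bet_id, client_id, amount)
VALUES (1001, 42, 25.00);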

Replication - Explicit value must be specified for identity column in table

I'm using Merge Replication. The identity range management is AUTOMATIC.
I have a trigger on the Companies table which inserts rows into the SerialNumberScheme table, which has DocumentID as an identity column.
While synchronizing I'm getting the error below:
A row insert at 'SERVER\MUMBAI.PROD_SUB' could not be propagated to 'SERVER\NEWYORK.PROD'. This failure can be caused by a constraint violation. Explicit value must be specified for identity column in table 'SerialNumberScheme' either when IDENTITY_INSERT is set to ON or when a replication user is inserting into a NOT FOR REPLICATION identity column.
The data is inserted properly at the subscriber but is not replicated to the publisher.
Any solution/suggestion?
Sounds like your trigger gets fired when the replication agent applies the updates. Normally the trigger should run only at the publisher (or more precisely, at the site which inserts the original data). Then replication will replicate the effect of the trigger. I think that all you need is to mark the trigger as NOT FOR REPLICATION.
See Controlling Constraints, Identities, and Triggers with NOT FOR REPLICATION.
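As a minimal sketch of what that looks like, using the table names from the question; the trigger name and the column being copied are assumptions, since the real definitions are not shown:
-- Hypothetical trigger; NOT FOR REPLICATION stops it from firing when the
-- merge agent applies replicated rows, so the SerialNumberScheme row is only
-- created at the site where the original Companies insert happens.
CREATE TRIGGER trg_Companies_Insert
ON dbo.Companies
AFTER INSERT
NOT FOR REPLICATION
AS
BEGIN
    SET NOCOUNT ON;
    -- DocumentID is the identity column and is generated automatically here.
    INSERT INTO dbo.SerialNumberScheme (CompanyId)
    SELECT i.CompanyId
    FROM inserted AS i;
END;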

Adding new article to transactional replication gives error at subscriber

I have an updatable transactional replication set up with SQL Server 2008. Everything is working fine. I added a new table to the existing publication through sp_addarticle followed by sp_addsubscription. After that I ran the Snapshot Agent; a snapshot was generated only for the newly added table, so the new table was successfully replicated to the subscriber. I was even able to replicate a newly inserted record in the new table to the subscriber. But the other direction is not possible: when I insert a record into the new table in the subscriber database, I get an error:
Msg 515: 'Cannot insert the value NULL into column 'msrepl_tran_version', table 'Servername.dbo.Tablename'; column does not allow nulls. INSERT fails.'
Please help me to resolve this issue.
Many Thanks in advance. Geeta
Is it a reproducible error? Is the subscriber configured as Immediate Updating? With an Immediate Updating subscriber, the transaction fails whenever the publisher (or the network) is unavailable.
Check your table and, if it does not already have one, add a default so that msrepl_tran_version defaults to a GUID:
ALTER TABLE [dbo].[TableName]
    ADD CONSTRAINT [DFLT_GUID_msrepl]
    DEFAULT (newid()) FOR [msrepl_tran_version];
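To check whether such a default already exists before adding one, you can query the SQL Server catalog views; a minimal sketch, with dbo.TableName standing in for the actual table:
-- Lists any default constraint defined on the msrepl_tran_version column.
SELECT dc.name AS constraint_name, dc.definition
FROM sys.default_constraints AS dc
JOIN sys.columns AS c
  ON c.object_id = dc.parent_object_id
 AND c.column_id = dc.parent_column_id
WHERE dc.parent_object_id = OBJECT_ID('dbo.TableName')
  AND c.name = 'msrepl_tran_version';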

Replication related issue

This is a replication-related issue. Let me explain my architecture.
I have set up a transactional replication process with:
2 publishers on the table vendors (script given below)
1 distributor
2 subscribers
The data replication setup is:
Table VENDORS gets replicated from the 2 publishers to the 2 subscribers via the distributor.
During replication, an error is raised in the distributor database (shown below).
What should happen is:
Pub1 (creates the pubs table vendors) -> inserts vendors data to the distributor -> pulled by the subscribers
What is happening now for me is:
Pub1 (creates the pubs table vendors - done) -> throws an error at the distributor database:
Replication-Replication Distribution Subsystem: agent abc-serv1\PRD01-star-star Billing-PROD-VREPL1\REPL01-25 failed.
Violation of PRIMARY KEY constraint 'PK_vendors'. Cannot insert duplicate key in object 'dbo.vendors'.
The error is issued during the operation between the publishers and the distributor.
The Primary Key at the Publisher has to be maintained at the Subscriber when using Transactional Replication. It sounds as though a record with the given key value already exists at the Subscriber.
From your topology description you have two separate Publications.
So:
Subscriber 1 receives Publication 1
Subscriber 2 receives Publication 2
Is there any crossover, i.e. can Subscriber 2 also receive Publication 1? If so, you will encounter primary key conflicts unless you manage the key ranges on both publishers or use an alternative replication technology, such as Merge Replication.
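One way to keep the key ranges disjoint, assuming the vendors key is an IDENTITY column, is to give each publisher a different seed with the same increment so the generated keys can never collide. A minimal sketch (the column names are hypothetical and stand in for the real script):
-- On Publisher 1: odd IDs only
CREATE TABLE dbo.vendors (
    vendor_id   int IDENTITY(1, 2) NOT NULL CONSTRAINT PK_vendors PRIMARY KEY,
    vendor_name varchar(100)       NOT NULL
);
-- On Publisher 2: even IDs only
CREATE TABLE dbo.vendors (
    vendor_id   int IDENTITY(2, 2) NOT NULL CONSTRAINT PK_vendors PRIMARY KEY,
    vendor_name varchar(100)       NOT NULL
);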