Is the clientID of a durable subscription unique between brokers connected by a network connector? - activemq

BrokerA and BrokerB are connected by a network connector.
BrokerA and BrokerB each have a topic with the same name, "testTopic".
DurableSubscriber1 connects to testTopic on BrokerA.
DurableSubscriber2 also connects to testTopic on BrokerA.
DurableSubscriber3 connects to testTopic on BrokerB.
All three DurableSubscribers use the same ClientID, "testID".
First, I create DurableSubscriber1. Next, I try to create DurableSubscriber2, but it fails; I assume this is because two durable subscribers cannot be created with the same ClientID.
However, after creating DurableSubscriber1, I can still create DurableSubscriber3.
Are durable subscriptions with the same ClientID that connect to the same topic on different brokers treated as distinct subscriptions?

Durable subscriptions are distinct among ActiveMQ brokers, so a client connected to BrokerA can create a durable subscription that stores different messages than a client on BrokerB using the same client ID. This is one reason durable subscriptions are not always a good choice in a network of brokers: if a subscriber disconnects and then reconnects to a different broker, the messages held by its older subscription stay stranded until it returns to the original broker.
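A minimal JMS sketch of the scenario (the broker URLs tcp://brokerA:61616 and tcp://brokerB:61616 and the subscription names are assumptions):

import javax.jms.Connection;
import javax.jms.JMSException;
import javax.jms.Session;
import javax.jms.Topic;
import org.apache.activemq.ActiveMQConnectionFactory;

public class DurableSubDemo {
    public static void main(String[] args) throws JMSException {
        ActiveMQConnectionFactory factoryA =
                new ActiveMQConnectionFactory("tcp://brokerA:61616"); // placeholder URL
        ActiveMQConnectionFactory factoryB =
                new ActiveMQConnectionFactory("tcp://brokerB:61616"); // placeholder URL

        // DurableSubscriber1: clientID "testID" on BrokerA -- succeeds.
        Connection conA1 = factoryA.createConnection();
        conA1.setClientID("testID");
        conA1.start();
        Session sesA1 = conA1.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Topic topicA = sesA1.createTopic("testTopic");
        sesA1.createDurableSubscriber(topicA, "sub1");

        // DurableSubscriber2: the same clientID on the same broker -- rejected
        // when the connection registers, because a broker enforces clientID
        // uniqueness only among its OWN connections.
        Connection conA2 = factoryA.createConnection();
        try {
            conA2.setClientID("testID");
            conA2.start();
        } catch (JMSException expected) {
            System.out.println("BrokerA rejects the duplicate clientID: " + expected);
        }

        // DurableSubscriber3: the same clientID on BrokerB -- succeeds, and
        // its durable subscription is fully independent of the one on BrokerA.
        Connection conB = factoryB.createConnection();
        conB.setClientID("testID");
        conB.start();
        Session sesB = conB.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Topic topicB = sesB.createTopic("testTopic");
        sesB.createDurableSubscriber(topicB, "sub3");
    }
}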


Subscription has received a request for an abort shutdown

I have a subscription with DB2 for i (AS/400) as source and Kafka as target (11.4.0.2). The replication method is REFRESH.
There was a table in the subscription that was replicating well.
But after I deleted it and added it to another new subscription (with the same source and target, also REFRESH), it can no longer be replicated. Replication ends with the following message:
Subscription XXX has received a request for an abort shutdown. There are no other error messages. The table mapping (already flagged for refresh) seems to be ignored by CDC.
From the log, it returns:
Received normal shutdown request number 9 with reason OTHER_ENGINE_COMPLETE
I have no idea what is happening, as there are no obvious hints in the logs.
I tried (1) recreating the table mapping and (2) updating the table definitions, but neither worked.
Other tables in the subscription can be replicated.
Please check whether, after you delete the table mapping from the earlier subscription, it is still showing under 'Replication Tables' in the Management Console (select the datastore -> Replication Tables). If it is, select and remove it there, then remap the table in the new subscription.
Thanks
Sudarshan K

Asynchronous Creation of Temp Tables in SQL Server

I'm currently trying to determine possible issues with creating temporary tables from a web application and how SQL Server naturally determines separate sessions.
SELECT blabla, lala INTO #TempTable FROM [table]
SELECT blabla FROM #TempTable
DROP TABLE #TempTable
In the above, if one user of the web application is waiting for the second line to execute and another user fires off the same three lines, what determines whether the second user gets an "Object already exists" error or gets a new #TempTable of their own?
If each user were on a separate computer on the same network, would SQL Server treat these as separate sessions and thus create separate temporary tables?
What about when it is run on the same computer over two different networks?
Each user connection to the database is its own session. These sessions are unique even if you're using connection pooling within SQL Server. Behind the scenes, SQL Server appends a session reference number to each #tempTable name, so they technically aren't even named the same thing during execution.
If the root of your problem is an "object already exists" error while you are debugging, try adding the code snippet below before you create the temp table:
IF OBJECT_ID('[tempdb]..[#tempTable]') IS NOT NULL
BEGIN
DROP TABLE #tempTable
END
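To make the session isolation concrete, here is a minimal JDBC sketch (the connection string is a placeholder for your own SQL Server instance). Each physical connection is a separate session, so the two identically named temp tables never collide:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class TempTableSessions {
    // Placeholder connection string -- point it at your own instance.
    private static final String URL =
            "jdbc:sqlserver://localhost;databaseName=master;user=sa;password=...";

    public static void main(String[] args) throws Exception {
        // Two physical connections = two SQL Server sessions.
        try (Connection c1 = DriverManager.getConnection(URL);
             Connection c2 = DriverManager.getConnection(URL);
             Statement s1 = c1.createStatement();
             Statement s2 = c2.createStatement()) {

            // Each session gets its own #TempTable; no name collision occurs
            // because SQL Server suffixes the stored name per session.
            s1.execute("SELECT 1 AS n INTO #TempTable");
            s2.execute("SELECT 2 AS n INTO #TempTable"); // no "already exists" error

            // Each session sees only its own table.
            try (ResultSet rs = s2.executeQuery("SELECT n FROM #TempTable")) {
                rs.next();
                System.out.println(rs.getInt("n")); // prints 2, not 1
            }
        }
    }
}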
SQL Server does not determine separate sessions.
It is the client application that creates sessions. You can write an application where all traffic to the database uses a single connection (not so easy), or one where a separate connection is created for each page (a common mistake). Both of those approaches are pretty bad.
In a proper design you should use connection pooling, and your code should reserve connections from the pool as needed.
Even if you are using connection pooling, it is possible that each command is executed on a different connection from the pool.
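This is why, with a pool, the whole temp-table sequence should run on one reserved connection. A sketch (the DataSource comes from whatever pooling library you use; table and column names are the question's placeholders):

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;
import javax.sql.DataSource;

public class TempTableWithPool {
    static void runReport(DataSource pool) throws Exception {
        // Hold ONE connection for the whole sequence so every statement runs
        // in the same SQL Server session; the temp table would be invisible
        // to a later statement that ran on a different pooled connection.
        try (Connection con = pool.getConnection();
             Statement st = con.createStatement()) {
            st.execute("SELECT blabla, lala INTO #TempTable FROM [table]");
            try (ResultSet rs = st.executeQuery("SELECT blabla FROM #TempTable")) {
                while (rs.next()) {
                    System.out.println(rs.getString("blabla"));
                }
            }
            // The table is dropped automatically when the session ends, but
            // dropping it explicitly matters with pooled connections, which
            // stay open after being returned to the pool.
            st.execute("DROP TABLE #TempTable");
        }
    }
}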

Duplicate message detection on failover

We have ActiveMQ 5.15.2 in the following configuration:
PostgreSQL for persistence
two nodes, one in standby
JDBC master slave with shared database
static cluster discovery
Everything seems to be fine and failover works as expected, but sometimes during failover (or a restart of the whole cluster) we observe the following exception:
WARN [ActiveMQ NIO Worker 6] org.apache.activemq.transaction.LocalTransaction - Store COMMIT FAILED: java.io.IOException: Batch entry 2 INSERT INTO ACTIVEMQ_MSGS(ID, MSGID_PROD, MSGID_SEQ, CONTAINER, EXPIRATION, PRIORITY, MSG, XID) VALUES (...) was aborted: Unique-Constraint activemq_msgs_pkey Detail: key(id)=(7095330) already exists
ActiveMQ propagates this exception directly to the client.
I thought that ActiveMQ would be able to recognise a duplicated message, but something goes wrong here.
The client tries to deliver a message with an already-existing ID. Shouldn't ActiveMQ compare this message with the one already in storage (if possible, depending on the DB) and, if both messages are the same, simply ignore the second one?
Or maybe ActiveMQ assumes that duplicated messages are allowed to be persisted, and our DB structure (the constraint on id) is incorrect?
CREATE TABLE activemq_msgs
(
    id         bigint NOT NULL,
    container  varchar(250),
    msgid_prod varchar(250),
    msgid_seq  bigint,
    expiration bigint,
    msg        bytea,
    priority   bigint,
    xid        varchar(250)
);
ALTER TABLE activemq_msgs
    ADD CONSTRAINT activemq_msgs_pkey
    PRIMARY KEY (id);
Should we drop activemq_msgs_pkey?
Our JDBC configuration was incorrect: autocommit was set to false, and as a result messages were written to the DB with a delay.
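For reference, a hedged sketch of a broker whose JDBC store uses a pooled PostgreSQL DataSource with autocommit left enabled (connection details are placeholders; most deployments configure the same thing in activemq.xml rather than in code):

import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.store.jdbc.JDBCPersistenceAdapter;
import org.apache.commons.dbcp2.BasicDataSource;

public class BrokerWithJdbcStore {
    public static void main(String[] args) throws Exception {
        BasicDataSource ds = new BasicDataSource();
        ds.setDriverClassName("org.postgresql.Driver");
        ds.setUrl("jdbc:postgresql://localhost/activemq"); // placeholder
        ds.setUsername("activemq");                        // placeholder
        ds.setPassword("...");                             // placeholder
        // The crucial part for the problem above: leave autocommit enabled
        // so the store's writes are not held back inside an open transaction.
        ds.setDefaultAutoCommit(true);

        JDBCPersistenceAdapter jdbc = new JDBCPersistenceAdapter();
        jdbc.setDataSource(ds);

        BrokerService broker = new BrokerService();
        broker.setPersistenceAdapter(jdbc);
        broker.start();
    }
}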

Merge replication error: the schema script could not be propagated to the subscriber (SQL Server 2008)

I am working with replication; after trying for a month, I was able to initialize the publisher on a remote system and the subscriber on my local system. When I run the job at the subscriber end I get an error:
the schema script AccountNotic1234.sch could not be propagated to the subscriber
Somewhere I read that this error happens when tables are connected by foreign keys and you are missing the primary table, but I am synchronizing all tables, so that cannot be the issue.
When I run the subscription I get the error for three of the tables, in order, but not for the first and second; I don't know why.
What can be the possible reasons for this?

Replication related issue

Let me explain my architecture.
I have created a transactional replication process with:
2 Publishers, on the table vendors (script I have given below)
A Distributor
2 Subscribers
The data replication setup is as follows:
The table VENDORS gets replicated from the 2 Publishers to the 2 Subscribers via the Distributor.
During replication, an error is raised in the Distributor database.
What should happen is:
Pub1 (creates the pubs table vendors) -> inserts (vendors) data to the Distributor -> pulled by the Subscribers
What is happening for me instead is:
Pub1 (creates the pubs table vendors - done) -> throws an error at the Distributor database:
Replication-Replication Distribution Subsystem: agent abc-serv1\PRD01-** Billing-PROD-VREPL1\REPL01-25 failed.
Violation of PRIMARY KEY constraint 'PK_vendors'. Cannot insert duplicate key in object 'dbo.vendors'.
The error is raised during the operation between the Publishers and the Distributor.
The Primary Key at the Publisher has to be maintained at the Subscriber when using Transactional Replication. It sounds as though a record with the given key value already exists at the Subscriber.
From your topology description you have two separate Publications.
So:
Subscriber 1 receives Publication 1
Subscriber 2 receives Publication 2
Is there any crossover, i.e. can Subscriber 2 also receive Publication 1? If so, you will encounter primary key conflicts unless you manage the key ranges on both Publishers or use an alternative replication technology, such as Merge Replication.
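One common way to manage the key ranges is to give each Publisher a disjoint identity range for the vendors table. A sketch over JDBC (server names, database name, and the column list are placeholders, since the original vendors script is not shown):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class SplitIdentityRanges {
    public static void main(String[] args) throws Exception {
        // Placeholder connection strings for the two Publisher instances.
        try (Connection pub1 = DriverManager.getConnection("jdbc:sqlserver://pub1;databaseName=Billing");
             Connection pub2 = DriverManager.getConnection("jdbc:sqlserver://pub2;databaseName=Billing");
             Statement s1 = pub1.createStatement();
             Statement s2 = pub2.createStatement()) {
            // Publisher 1 hands out odd keys and Publisher 2 even keys, so
            // rows originating on different Publishers can never collide at a
            // Subscriber that receives both Publications.
            s1.execute("CREATE TABLE dbo.vendors (id INT IDENTITY(1,2) PRIMARY KEY, name VARCHAR(100) NOT NULL)");
            s2.execute("CREATE TABLE dbo.vendors (id INT IDENTITY(2,2) PRIMARY KEY, name VARCHAR(100) NOT NULL)");
        }
    }
}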