I created a publication (snapshot or transactional) on Server-A. I'm trying to set up pull replication on Server-B.
I'm able to use replication properly, but my snapshot is very big and the complete delivery takes around 1 hour to finish.
When I check my subscription status on the subscriber, it says the agent job is already started and running. On the publisher server the status is "No replication transaction", even though I know replication is working in the background on the subscriber.
I end up starting SQL Profiler on the subscriber server to watch for when replication has finished. Is there any other way to watch this?
I'm using SQL Server 2008 R2.
Based on my understanding of your question, what you are looking for is not really possible with snapshot replication, and here is why: the publisher has a job that creates a snapshot of the database and saves it to your chosen folder. On the secondary (subscriber) there is a job that goes out to the publisher's folder and processes that snapshot. For instance, you can have the publisher run its job at 6 am and have your subscriber process the file later, at 8 am. The publisher's only purpose is to save the snapshot file; it doesn't care when the subscriber processes it.
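That said, if the goal is just to see when the snapshot was last generated and when the subscriber finished applying it, without leaving Profiler running, one option (a sketch, assuming the distribution database uses the default name 'distribution') is to query the replication agent history tables on the distributor:

    -- Run on the distributor, against the distribution database.
    -- runstatus: 1 = Start, 2 = Succeed, 3 = In progress, 4 = Idle, 5 = Retry, 6 = Fail.
    USE distribution;

    -- Most recent Snapshot Agent activity.
    SELECT TOP (10) agent_id, runstatus, start_time, [time] AS logged_time, comments
    FROM dbo.MSsnapshot_history
    ORDER BY [time] DESC;

    -- Most recent Distribution Agent activity (delivery to the subscriber).
    SELECT TOP (10) agent_id, runstatus, start_time, [time] AS logged_time,
           delivered_transactions, delivered_commands, comments
    FROM dbo.MSdistribution_history
    ORDER BY [time] DESC;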
Transactional replication, however, is different from snapshot in that both the publisher and subscriber can be monitored for latency (which I believe is what you are expecting from snapshot). The reason: the publisher has a log reader job that is continuously sending changes to the distribution database, while the subscriber has a job that continuously applies those changes from the distributor.
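If you do move to (or already use) transactional replication, tracer tokens are a lightweight way to measure that latency from T-SQL. A minimal sketch, where the publication, publisher server, and database names are placeholders:

    -- Step 1: at the publisher, in the publication database, post a tracer token.
    EXEC sys.sp_posttracertoken @publication = N'MyPublication';

    -- Step 2: at the distributor, in the distribution database, list the tokens
    -- posted for that publication...
    EXEC sys.sp_helptracertokens
        @publication  = N'MyPublication',
        @publisher    = N'ServerA',
        @publisher_db = N'MyPubDB';

    -- ...then check a specific token's distributor, subscriber, and overall latency.
    EXEC sys.sp_helptracertokenhistory
        @publication  = N'MyPublication',
        @publisher    = N'ServerA',
        @publisher_db = N'MyPubDB',
        @tracer_id    = 1;   -- replace with a tracer_id returned by sp_helptracertokens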
Here is a link to Microsoft TechNet explaining the various flavors of replication.
Hope this helps!
Related
I have two SQL Server DB instances on the same machine, with one hosting a replica of the other. In addition, I have transactional replication set up for all tables from the main instance to the replica. I have verified that the log reader and associated agents are all running. However, one of the log reader agents says "Replicated transactions are waiting for next Log backup or for mirroring partner to catch up". I have taken a transaction log backup to trigger it, but to no avail; I also tried deleting the subscriber and publisher and recreating them, but I still end up with this message.
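For what it's worth, that particular Log Reader message is usually associated with the 'sync with backup' option on the publication database (or with a database mirroring partner that has fallen behind). A hedged check, with 'MyPubDB' standing in for the publication database name:

    -- 1 means 'sync with backup' is enabled on the publication database.
    SELECT DATABASEPROPERTYEX(N'MyPubDB', 'IsSyncWithBackup') AS is_sync_with_backup;

    -- If it is enabled and the behavior is not needed, the option can be turned
    -- off so the Log Reader Agent stops waiting for log backups (weigh the
    -- recoverability trade-off first).
    EXEC sys.sp_replicationdboption
        @dbname  = N'MyPubDB',
        @optname = N'sync with backup',
        @value   = N'false';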
I have a SQL Server 2008 instance (distributor and publisher) that is replicating, using both snapshot and transactional replication, to a couple of subscribers. There is plenty of information here https://learn.microsoft.com/en-us/sql/relational-databases/replication/disable-publishing-and-distribution on how to permanently disable replication.
I don't want to permanently disable replication, just temporarily for a network outage that is scheduled for later this week.
I have learned my lesson that when things go amok it's a complete disable, remove, and re-setup to get everything working again, and there are too many publications for that to be an option just for a temporary pause.
It depends on whether the network split will be between the publisher and distributor or between the distributor and subscriber. Both of the scenarios below deal with transactional replication.
publisher and distributor - the log reader agent will not be able to mark records as delivered to the distribution database, so those records will stay in the publisher's transaction log longer than normal. This may cause log growth, depending on how much free space is currently in your log file (a quick check for this is sketched after these two scenarios).
distributor and subscriber - assuming that the network outage is shorter than the minimum retention period for the distribution database, you should be able to just suspend the distribution jobs and everything should pick back up once the network is back online. Depending on the size of the backlog, it may be easier to re-initialize some (or all!) of your articles.
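For the first scenario, a quick way to confirm that replication is what is preventing log truncation during the outage (the database name is a placeholder):

    -- Run on the publisher. REPLICATION here means the log cannot truncate past
    -- records the Log Reader Agent has not yet delivered to the distributor.
    SELECT name, log_reuse_wait_desc, recovery_model_desc
    FROM sys.databases
    WHERE name = N'MyPubDB';

    -- Current log size and percentage used for every database on the instance.
    DBCC SQLPERF (LOGSPACE);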
For snapshot replication, you shouldn't need to do much, since the only time there's activity is when a snapshot is being created and delivered to the subscriber. You can just disable those agent jobs for the duration of your event and re-enable them when you're done (a job-toggling sketch follows).
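Disabling and re-enabling the agent jobs can be done from SQL Server Agent or with a bit of T-SQL; a minimal sketch, assuming the jobs live on the distributor and using a placeholder job name you would look up first:

    -- Run on the server hosting the replication agent jobs (typically the distributor).
    -- List the replication agent jobs; categories include REPL-Distribution,
    -- REPL-LogReader, and REPL-Snapshot.
    SELECT j.name, c.name AS category, j.enabled
    FROM msdb.dbo.sysjobs AS j
    JOIN msdb.dbo.syscategories AS c ON c.category_id = j.category_id
    WHERE c.name LIKE N'REPL-%';

    -- Disable a specific agent job before the outage...
    EXEC msdb.dbo.sp_update_job @job_name = N'<agent job name here>', @enabled = 0;

    -- ...and re-enable it once the network is back.
    EXEC msdb.dbo.sp_update_job @job_name = N'<agent job name here>', @enabled = 1;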
I have two servers with SQL Server 2012 and merge replication configured between them. They were working correctly, but because of a network problem the Subscriber was offline for a few days, and when the connection came back the Publisher started deleting the data that had been saved on the Subscriber during the offline period. I tried deleting and reconfiguring the replication, but that did not work.
Can someone help me?
Thanks!
The publisher might be marking records for deletion on the subscriber because they no longer exist on the publisher. If that is not the case, reinitialize the subscription to the publisher on the client machine and see if it starts replicating successfully.
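If the goal is to keep the rows that were entered at the Subscriber while it was offline, one option worth trying before a full teardown (a sketch; the publication, subscriber, and database names are placeholders) is to reinitialize the subscription with the subscriber's pending changes uploaded first:

    -- Run at the publisher, in the publication database.
    -- @upload_first = 'true' asks merge replication to upload pending subscriber
    -- changes before the new snapshot is applied, instead of discarding them.
    EXEC sys.sp_reinitmergesubscription
        @publication   = N'MyMergePublication',   -- placeholder
        @subscriber    = N'SubscriberServer',     -- placeholder
        @subscriber_db = N'MySubDB',              -- placeholder
        @upload_first  = N'true';

Note that if the subscription has already expired past the publication's retention period, the subscriber's changes may no longer be recoverable this way and a reinitialization with a new snapshot is usually required.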
I have a couple questions regarding ServiceInsight that I was hoping someone could shed some light on.
Can I monitor multiple error queues and audit queues? If so, how do I configure it to monitor those queues?
I understand that messages processed in the error queue are moved to the error.log queue. What happens to the messages processed in the audit queue, i.e. where do they go after the management service processes them?
Where are the messages ultimately stored by the management process, i.e. are they stored in RavenDB, and if so, under what database?
In addition, how do I remove or delete message conversations in the endpoint explorer? For example, let's say I just want to clear everything out.
Any additional insight (no pun intended) you can provide regarding the management and use of ServiceInsight would be greatly appreciated.
Question: Can I monitor multiple error queues and audit queues? If so, how do I configure it to monitor those queues?
Answer: ServiceInsight receives its data from a management service (AKA "ServiceControl") that collects data from audit (and error) queues. A single instance of ServiceControl can connect to a single audit queue and a single error queue (in a single transport type). If you install multiple ServiceControl instances that collect auditing and error data from multiple queues, you can use ServiceInsight to connect to each of those ServiceControl instances. Currently (in beta) ServiceInsight supports one connection at a time, but you can easily switch between connections or open multiple instances of ServiceInsight, each connecting to a different ServiceControl instance.
Question: I understand that messages processed in the error queue are moved to the error.log queue. What happens to the messages processed in the audit queue, i.e. where do they go after the management service processes them?
Answer: Audit messages are consumed, processed, and stored in the ServiceControl instance's auditing database (RavenDB).
Question: Where are the messages ultimately stored by the management process, i.e. are they stored in RavenDB, and if so, under what database?
Answer: Yes, they are stored (by default) in the embedded RavenDB database used by the management service (AKA "ServiceControl"). You can locate it under "C:\ProgramData\Particular\ServiceBus.Management".
Question: In addition, how do I remove or delete message conversations in the endpoint explorer? For example, let's say I just want to clear everything out.
Answer: We will be adding full purge/delete support for this purpose in an upcoming beta update. For immediate purging of old messages, you can use the RavenDB Studio at the path specified above.
Please let me know if these answers address your questions, and do not hesitate to raise any other questions you may have!
Best regards,
Danny Cohen
Particular Software (NServiceBus Ltd.)
I have a logical publication which is basically a bunch of MT servers that all access a DB subscription storage. These MTs are typically upgraded by taking half of them out of rotation, installing the new MT version, bringing them back online, and then repeating for the other half.
I am confused about how a subscriber would subscribe to such a publication. In all of the examples I have seen, a subscriber needs to have the publisher's InputQueue specified in configuration in order for the subscription request to be received. But what InputQueue would I specify in this situation? I don't want subscription to fail if some of my publisher MTs happen to be down. Would I just subscribe manually by adding a record to the DB subscription storage?
Publishers usually publish as a result of processing some command from a client, and as such, you usually use a distributor to scale them out, as well as using the DB subscription storage. Subscribers are another kind of client so you would configure them to point to the distributor as well.