How can I force Akka.net Postgres persistence to reconnect - akka.net

I am having some issues with persistent actors that use the Postgres plugin: the actors never seem to manage to reconnect to the database after a database outage.
The persistent actors are stopped after 1 minute of inactivity so I am getting new actors all the time, but they never seem to be able to reconnect.
Restarting the pod the actor system is running on fixes the problem.
I can kind of replicate this locally by:
Stopping the database
Starting the actor system
Sending a message which should force recovery
Watching recovery fail because there is no database connection
I then start the database without restarting the actor system and send a new message, which spawns a new persistent actor that fails with the same database error.
Is there some way of forcing Akka.Persistence to reconnect?

There are two ways you can go about solving this problem:
Recreate the entity actors using a BackoffSupervisor, which will recreate the actor according to an exponential backoff schedule (a minimal sketch follows this list).
Or recreate the entity actors using the third-party Akka.Persistence.Extras NuGet package, which works similarly to the BackoffSupervisor but can also save messages that weren't successfully persisted, provided that you implement the messaging protocol it expects. The documentation for that feature is here: https://devops.petabridge.com/articles/state-management/akkadotnet-persistence-failure-handling.html
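
To illustrate the BackoffSupervisor option, here is a minimal sketch; the OrderActor entity actor, its name, and the backoff timings are hypothetical and would need to be adapted to your own persistent actors:

using System;
using Akka.Actor;
using Akka.Pattern;
using Akka.Persistence;

// Hypothetical entity actor - stands in for your real persistent actor.
public class OrderActor : ReceivePersistentActor
{
    public override string PersistenceId { get; }

    public OrderActor(string orderId)
    {
        PersistenceId = "order-" + orderId;
        Command<string>(cmd => Persist(cmd, _ => { /* update state */ }));
    }
}

public static class Program
{
    public static void Main()
    {
        var system = ActorSystem.Create("example");

        var childProps = Props.Create(() => new OrderActor("1"));

        // Recreate the child on stop (a persistent actor stops itself when recovery
        // fails), on an exponential backoff schedule so the database gets time to
        // come back between attempts.
        var supervisorProps = BackoffSupervisor.Props(
            Backoff.OnStop(
                childProps,
                "order-1",                 // child name
                TimeSpan.FromSeconds(3),   // minimum backoff
                TimeSpan.FromSeconds(30),  // maximum backoff
                0.2));                     // random jitter factor

        var supervisor = system.ActorOf(supervisorProps, "order-1-supervisor");

        // While the child is alive, messages sent to the supervisor are forwarded to it.
        supervisor.Tell("some command");

        Console.ReadLine();
        system.Terminate().Wait();
    }
}

The OnStop variant is the relevant one here because a persistent actor stops itself when recovery fails; the supervisor waits out the backoff and then recreates it, at which point recovery is retried against the (hopefully restored) database.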

Related

How to get user specific data in a queue from ActiveMQ

As an admin, I want to know, for a particular queue A, how many calls were initiated by which person, how many were dequeued, and how many are still in the queue at any time.
I just want to develop a UI in my application to show those user-specific records from ActiveMQ.
There is no built-in functionality in the broker that does this sort of thing. You could develop your own broker plugin that tracks these things, but you'd need to build some sort of DB or other storage, as you would lose any in-memory stats when the broker is restarted. You should use caution when trying to push all system-management requirements into the message broker: that is not its purpose, and doing so will likely result in other issues.

Can someone please tell me how to link a SQL Server 2012 database to ActiveMQ?

Please tell me the best practices for linking a SQL Server 2012 database with ActiveMQ.
What I want to achieve is that whenever something changes in the database, ActiveMQ should receive a message describing the change that happened in the DB.
1. How should I code this in my DB (a trigger?)
2. How should the data be sent to ActiveMQ? Do we need an intermediary, such as a Java app?
Don't do it in the database. Do it in your app.
While SQL Server can host CLR procedures that can connect to your ActiveMQ server and submit messages, doing so is a bad idea:
Writing SQLCLR code that connects to external resources is tricky (you need to understand SQL scheduling and the impact of preemption; read up on the CLR Hosted Environment).
You add coupling (your trigger will now fail if the ActiveMQ server is not available).
In order to achieve transactional correctness you must enroll the ActiveMQ operation into your current transaction, so that if the current transaction rolls back after the trigger executed, the ActiveMQ operation is also rolled back. DTC using XA and SQL Server is not for the faint of heart.
A much better solution is to queue locally, into your own DB. Either use a table as a queue or use Service Broker queues (which, by the way, do support Activation). Then process this local queue, post-commit, from a separate listener process. More availability, less coupling, higher throughput. If ActiveMQ is really needed, at least a local intermediate queue will decouple the availability concerns. Having each DB operation wait for an external, XA-enrolled ActiveMQ operation will make your throughput plummet (DTC is slow), and you have to deal with the availability problem (no ActiveMQ and XA available, no updates in the DB possible).
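
To make the "queue locally, then process post-commit from a separate listener" suggestion concrete, here is a rough sketch; the OutboxMessages table, connection strings, queue name and the Apache.NMS usage are all assumptions, not part of the original answer:

using System;
using System.Collections.Generic;
using System.Data.SqlClient;
using System.Threading;
using Apache.NMS;
using Apache.NMS.ActiveMQ;

// Hypothetical outbox table: OutboxMessages(Id INT IDENTITY PRIMARY KEY, Body NVARCHAR(MAX)).
// The trigger (or, better, the application) inserts into this table inside its own
// local transaction; this separate listener process forwards the rows afterwards.
class OutboxForwarder
{
    static void Main()
    {
        var factory = new ConnectionFactory("activemq:tcp://localhost:61616");
        using var amqConnection = factory.CreateConnection();
        amqConnection.Start();
        using var session = amqConnection.CreateSession();
        using var producer = session.CreateProducer(session.GetQueue("db.changes"));

        using var sql = new SqlConnection("Server=.;Database=AppDb;Integrated Security=true");
        sql.Open();

        while (true)
        {
            // Read a batch of pending rows from the local outbox.
            var select = new SqlCommand("SELECT TOP 50 Id, Body FROM OutboxMessages ORDER BY Id", sql);
            var pending = new List<(int Id, string Body)>();
            using (var reader = select.ExecuteReader())
                while (reader.Read())
                    pending.Add((reader.GetInt32(0), reader.GetString(1)));

            foreach (var (id, body) in pending)
            {
                // At-least-once delivery: a crash between Send and DELETE can duplicate a message.
                producer.Send(session.CreateTextMessage(body));

                var delete = new SqlCommand("DELETE FROM OutboxMessages WHERE Id = @id", sql);
                delete.Parameters.AddWithValue("@id", id);
                delete.ExecuteNonQuery();
            }

            if (pending.Count == 0) Thread.Sleep(500); // idle poll interval
        }
    }
}

Note that this gives at-least-once delivery into ActiveMQ, which is usually acceptable and far cheaper than dragging DTC into every database write.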

nServiceBus: How do I make a non-transactional call to a database from within the context of a transactional operation

Quick overview of our topology:
Web sites send commands to an NServiceBus server, which accepts the commands and then publishes the correct pub/sub events. This service also has message handlers that can do some processing against the DB in response to the command, for instance:
1. A user registers on the web site.
2. The web site sends an NServiceBus command to the NServiceBus service on another server.
3. The NServiceBus server has a handler for that specific type of command, which logs something to the database and sends a welcome email.
Since instituting this architecture we started to get deadlocks on the DB. I have traced it down to MSDTC on the database server. If I turn that service OFF on the database server, NServiceBus starts throwing errors, which to me shows that NServiceBus has been enlisting the DB update in the transaction.
I don't want this to happen; I want to handle DB failures myself. I only want the transaction to ensure the message is delivered to my NServiceBus proxy service, not a transaction that spans from the web all the way through two servers to the DB and back.
Any suggestions?
EDIT: this post provides some clues, though I'm not entirely sure it's the proper way to proceed: NServiceBus - Problem with using TransactionScopeOption.Suppress in message handler
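For context, the approach hinted at in that linked post is to wrap the database work in a suppressed TransactionScope inside the handler so it does not enlist in the ambient MSDTC transaction. A rough sketch follows (the connection string, table and handler shape are hypothetical, and the answer below advises against going down this road):

using System.Data.SqlClient;
using System.Transactions;

public class RegistrationLogger
{
    // Hypothetical connection string and table.
    const string ConnStr = "Server=.;Database=AppDb;Integrated Security=true";

    public void LogRegistration(string userName)
    {
        // Suppress the ambient transaction so this DB call does not enlist in MSDTC;
        // if it fails, you handle the failure here rather than rolling back the message.
        using (var scope = new TransactionScope(TransactionScopeOption.Suppress))
        using (var connection = new SqlConnection(ConnStr))
        {
            connection.Open();
            var command = new SqlCommand(
                "INSERT INTO RegistrationLog (UserName) VALUES (@userName)", connection);
            command.Parameters.AddWithValue("@userName", userName);
            command.ExecuteNonQuery();
            scope.Complete();
        }
    }
}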
EDIT2: The reason we want the DB work outside the scope of the transaction is that the intent is to process these commands "asynchronously" on another server, so as not to slow down the web site and/or make users wait for these long-running aggregation commands. If the DB is within the scope of the transaction, does that block execution on the web site at the point where the original command is fired to the distributor? Is there a better NServiceBus architecture for this scenario? We want the command to fire quickly and return control to the web site so the user can proceed without waiting for our long-running DB command, which is updating aggregate counts, sending emails, etc.
I wouldn't recommend having the DB work outside the context of the NServiceBus transaction. Instead, try reducing the isolation level of the transactions. This can be done by calling:
.IsolationLevel(System.Transactions.IsolationLevel.ReadCommitted)
in the fluent configuration. You'll have to put this after .MsmqTransport() in v2.6. In v3.0 you can put this call almost anywhere.
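For example, a v2.6-style fluent configuration might look roughly like this; the builder, serializer and bus calls shown here are just a typical shape and are assumptions, not prescriptive:

using System.Transactions;
using NServiceBus;

public class EndpointBootstrap
{
    public static IBus Start()
    {
        // Illustrative v2.6-style configuration; the key call is IsolationLevel
        // placed right after MsmqTransport.
        return Configure.With()
            .DefaultBuilder()
            .XmlSerializer()
            .MsmqTransport()
                .IsTransactional(true)
                .IsolationLevel(IsolationLevel.ReadCommitted) // instead of the Serializable default
            .UnicastBus()
            .CreateBus()
            .Start();
    }
}

ReadCommitted is usually enough here; the point is to get away from the Serializable default of TransactionScope, which is often the source of this kind of deadlock.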
RESPONSE TO EDIT2:
Just using NServiceBus will achieve your objective of not slowing down the website, regardless of the level of the transactions run on the other server. The use of transactions is to provide a guarantee that messages won't be lost in case of failure and also that you won't have to write your own deduplication logic.

NService Bus: Nitty-Gritty Deployment Issues

Please consider the following questions in the context of multiple publications from a scaled-out publisher (using DB subscription storage) and multiple subscriptions with scaled-out subscribers (using distributors), where installs and uninstalls happen regularly for initial deployments, upgrades, etc. using automated MSIs.
Using DB subscription storage, what happens if the DB goes down? If access to the Subscription DB is required in order to Publish a message, how will it be delivered? Will it get lost? Will the call to Bus.Publish throw an exception?
Assuming you need to have no down-time deployments: What if you want to move your subscription DB for a particular publication to a different server? How do you manage a transition like this?
Same question goes for a distributor on the subscriber side: What if you want to move your distributor endpoint? One scenario I can think of: if you have multiple subscriptions utilizing a single distributor machine, it might be hard to move some of them to another distributor server to reduce load.
What would the install/uninstall scenarios look like for a setup like this (both initially, and for continuous upgrades)? It seems like you would want some special install/uninstall scripts for deployment of the "logical publication" and subscription DB, as well as for the "logical subscriptions" and the distributors. The publisher instances wouldn't need any special install/uninstall logic (since they just start publishing messages using the configured subscription DB, and then stop when they are uninstalled). The subscriber worker nodes wouldn't need anything special on install other than the correct configuration of the distributor endpoint, but would need uninstall logic to make sure they are removed from the distributor's list of worker nodes.
Eventually the publisher will fail and the messages will build up in its internal queue. You will have to plan the disk size you need to handle this based on the message size and on how long you want to wait for the DB to come back up. Beyond that, it depends on how much downtime you can handle. You can use DB mirroring or clustering to give the DB less downtime.
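To make the disk-size planning concrete with purely hypothetical numbers: 5 KB messages arriving at 100 per second, with a tolerance for four hours of database downtime, works out to roughly 5 KB x 100/s x 14,400 s, or about 7 GB of queue storage, plus headroom.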
Mirroring and clustering technologies can also help with this. It depends on whether you want manual or automatic failover and where you're doing it (remote sites?).
Clustering MSMQ could help you here. If you want to drop a distributor and move it within a cluster you'd be ok. Another possibility is to expose your distributors via HTTP and load balance them behind either a software or hardware load balancing solution. Behind the load balancer you'd be more free to move things around.
Sounds like you have a good grasp on this one already :)
To your first question, about the high availability of the subscription DB: you can use a cluster for failover. If the DB is down, then Bus.Publish will throw an exception, yes. It is recommended to keep the subscription DB separate from your application DB to avoid having to bring it down when upgrading your app. This doesn't have to be a separate DB server; a separate DB on the same DB server is fine.
About moving servers, this is usually managed at a DNS level where for a certain period of time you'll have both running, until communication moves over.
On your third question about distributors - don't share a distributor between different publishers or subscribers.
As a rule of thumb, it is recommended not to add or remove subscribers while doing these kinds of maintenance activities. This usually simplifies things quite a bit.

Failover scenarios for the Service Bus with NServiceBus or MassTransit

I need to build an identity server like Microsoft's http://login.live.com.
To handle failover I will have multiple web server nodes. The plan is that all database write operations are done by sending messages to the database server. The database will be mirrored or replicated. The idea is that the database subscribes to the write operations, but the other nodes subscribe as well. That way the other nodes do not need to read from the database and can update their caches.
I am just starting to learn the service bus architecture and what is not clear to me is how to handle failover scenario for the service bus.
Question:
If the database server is not available, what will happen to the published messages?
Will they be stored somewhere, and if so, where?
Do I need an additional machine or a cluster to handle failover of the service bus?
I read that SQL Server can be used as a message store, but can I use durable MSMQ instead? I am queuing messages in order to write them to the database, so why would I store them in the DB first just to take them out and write them again? Or am I getting this wrong, and the DB is only used for the list of subscriptions and not for the messages?
When implementing this kind of architecture, you should look at applying the principles of CQRS: queries (is this user/pwd combo valid?) should not be done via the bus; commands (change pwd, forgot pwd) are sent via the bus, not published as events. While internally you will likely use events to keep the command and query sides in sync, this doesn't involve the client.
Queries can be done using simple ADO.NET against the replicated read slaves of your DB - what's known as the persistent view model in CQRS. If you like, you can put some simple WCF in front of that too.
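As a sketch of what the query side might look like with plain ADO.NET against a read replica (the connection string, Users table and column names are hypothetical):

using System.Data.SqlClient;

public class CredentialQueries
{
    // Hypothetical read-replica connection string and Users(UserName, PasswordHash) table.
    const string ReadReplica = "Server=read-replica;Database=Identity;Integrated Security=true";

    public string GetPasswordHash(string userName)
    {
        using var connection = new SqlConnection(ReadReplica);
        connection.Open();
        var command = new SqlCommand(
            "SELECT PasswordHash FROM Users WHERE UserName = @userName", connection);
        command.Parameters.AddWithValue("@userName", userName);
        return (string)command.ExecuteScalar();   // null if the user does not exist
    }
}

The caller would hash the supplied password and compare it to the stored hash; nothing on this path touches the bus.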
When using MSMQ, all messages are delivered via store-and-forward. That means that they're first stored on the client before being delivered to the server, so if the server is down, the messages sit on the client waiting. For fault-tolerance, you will want your messages to be recoverable (written to disk) - this is the default in NServiceBus but not the default of standard MSMQ (don't know about MassTransit). You don't need the database for this.
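If you end up talking to MSMQ directly (outside NServiceBus), marking an individual message recoverable looks like this; the queue path and message body are hypothetical:

using System.Messaging;

class RecoverableSendExample
{
    static void Main()
    {
        // Hypothetical local private queue.
        using (var queue = new MessageQueue(@".\private$\identity.commands"))
        {
            var message = new Message("change-password command body")
            {
                // Written to disk on the sending machine, so it survives a reboot
                // while waiting for the remote server to come back.
                Recoverable = true
            };
            queue.Send(message);
        }
    }
}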
In NServiceBus, the bus is not installed on a separate machine, so you don't need to deal with its availability independently of the rest of the system. It's only when you look at scaling out your command processing to more nodes that you might consider using the message-based load balancer in NServiceBus (called the distributor), which, for high availability, should be installed on a cluster or fault-tolerant hardware.
This will depend on how it is set up, but in MassTransit you can leave the subscription active so the messages will still be delivered to the queue for the DB. When the DB is active again, you can read the messages from the queue.
Each service connected to a service bus, in MassTransit, has an active queue for itself. The messages will be stored there.
I think this is an "it depends"... MassTransit has support for MQs other than MSMQ but is really built around MSMQ. We have not experienced great support for things such as failover from MSMQ. However, everything will continue to run without fault if the subscription service (i.e. the bus) fails - the services already know who to talk to. It's only when a consumer changes (subscribes or unsubscribes) that this becomes a problem. For me, that's an event that happens almost never.
With MassTransit, we use the DB to store the subscription states but all the messages are stored in MSMQ.
If you'd like more details in one of these responses or have additional questions about MT, you can join us on the mailing list: http://groups.google.com/group/masstransit-discuss.