Notify listener inside or outside internal synchronization - API

I am struggling with a decision. I am writing a thread-safe library/API. Listeners can be registered, so the client is notified when something interesting happens. Which of the two implementations is most common?
class MyModule {
    protected Listener listener;

    protected void somethingHappens() {
        synchronized (this) {
            // ... do useful stuff ...
            listener.notify();
        }
    }
}
or
class MyModule {
    protected Listener listener;

    protected void somethingHappens() {
        Listener l = null;
        synchronized (this) {
            // ... do useful stuff ...
            l = listener;
        }
        l.notify();
    }
}
In the first implementation, the listener is notified inside the synchronization. In the second implementation, this is done outside the synchronization.
I feel that the second one is preferable, as it leaves less room for potential deadlocks, but I am having trouble convincing myself.
A downside of the second implementation is that the client might receive 'incorrect' notifications, which happens if it accessed and changed the module just before the l.notify() statement. For example, if it asked the module to stop sending notifications, this notification is sent anyway. This is not the case in the first implementation.
thanks a lot

It depends on where you are getting the listener in your method, how many listeners you have, and how the listener subscribes/unsubscribes.
Assuming from your example that you have only one listener, you might be better off using critical sections (or monitors) for different parts of the class rather than locking the entire object.
You could have one lock for performing the tasks within the method that are specific to the object/task at hand, and one for the listener subscribe/unsubscribe/notify (to ensure that the listener is not changed during a notification).
I would also use a ReadWriteLock to protect your listener references (whether a single listener or a list of them).
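A minimal sketch of that ReadWriteLock idea, assuming a setListener method; the field and method names here are illustrative, not from the question:

import java.util.concurrent.locks.ReentrantReadWriteLock;

class MyModule {
    private final ReentrantReadWriteLock listenerLock = new ReentrantReadWriteLock();
    private Listener listener; // guarded by listenerLock

    public void setListener(Listener l) {
        listenerLock.writeLock().lock(); // writers are exclusive
        try {
            listener = l;
        } finally {
            listenerLock.writeLock().unlock();
        }
    }

    protected void somethingHappens() {
        // ... do useful stuff, guarded by whatever lock protects the module's state ...
        Listener l;
        listenerLock.readLock().lock(); // several notifying threads may read concurrently
        try {
            l = listener;
        } finally {
            listenerLock.readLock().unlock();
        }
        if (l != null) {
            l.notify(); // callback runs outside any lock
        }
    }
}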
Answering your comment:
I think you should notify the listener after you have unlocked the class, because the notification could result in a different thread trying to gain access to the class, which it may not be able to do under certain circumstances, leading to deadlock.
Notifying a listener (if protected like I have described) should not hold up any other thread that requires the facilities of the class. The best strategy is to create locks that are specific to the state of the class and locks that are specific to safe notification.
If you take your example of suspending notifications, this could be covered by the lock that governs notifications: if a different thread 'suspends' notifications, either the suspension is processed first or the current notification completes. If the other thread suspends notifications between the task being processed and the notification happening, the l.notify() will not happen.
Listener l = null;
synchronized (processLock_) {
    // ... do stuff ...
    synchronized (notifyLock_) {
        l = listener;
    }
}
//
// current thread preempted by another thread that suspends notification here.
//
synchronized (notifyLock_) { // ideally use a ReadWriteLock here...
    l = allowNotify_ ? l : null;
}
if (l != null)
    l.notify();
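For completeness, the suspending side might look something like this (a sketch that reuses the notifyLock_ and allowNotify_ names from the snippet above; the method name is made up):

public void suspendNotifications() {
    synchronized (notifyLock_) {
        // a notification that has already been prepared but not yet
        // checked against allowNotify_ will now be dropped
        allowNotify_ = false;
    }
}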

Related

Should I emit from a coroutine when collecting from a different flow?

I have a use case where I need to trigger on a specific event collected from a flow and restart it when it closes. I also need to emit all of the events to a different flow. My current implementation looks like this:
scope.launch {
    val flowToReturn = MutableSharedFlow<Event>()
    while (true) {
        client
            .connect() // returns Flow<Event>
            .catch { ... } // ignore errors
            .onEach { launch { flowToReturn.emit(it) } } // problem here
            .filterIsInstance<Event.Some>()
            .collect { someEvent ->
                doStuff(someEvent)
            }
    }
}.start()
The idea is to always reconnect when the client disconnects (collect then returns and a new iteration begins) while having the outer flow lifecycle separate from the inner (connection) one. It being a shared flow with potentially multiple subscribers is a secondary concern.
As the emit documentation states, it is not thread-safe. Should I call it from a new coroutine then? My concern is that emit will suspend if there are no subscribers to the outer flow, and I need to run the downstream pipeline regardless.
The MutableSharedFlow.emit() documentation says that it is thread-safe. Maybe you were accidentally looking at FlowCollector.emit(), which is not thread-safe. MutableSharedFlow is a subtype of FlowCollector but promotes emit() to being thread-safe since it's not intended to be used as a Flow builder receiver like a plain FlowCollector. There's no reason to launch a coroutine just to emit to your shared flow.
There's no reason to call start() on a coroutine Job that was created with launch because launch both creates the Job and starts it.
You will need to declare flowToReturn before your launch call to be able to have it in scope to return from this outer function.

Reactive programming - running jobs in a cluster

I need to run some jobs in a cluster, only one at a time.
Because my team uses Hazelcast, I ended up with a solution based on the Hazelcast ILock implementation. For the purpose of this question, I am going to generalise it. Let's suppose we have the following interfaces (which could easily be implemented e.g. by Hazelcast or Redisson (Redis)):
public interface MyDistributedLock {
    boolean lock();
    void unlock();
    boolean isLockedByCurrentThread();
}

public interface MyLockDistributedFactory {
    MyDistributedLock getLock(String name);
}
And a lock method that waits if the lock cannot be acquired:
private Mono<Void> lock(String name, Publisher<?> publisher, MyLockDistributedFactory myLockFactory) {
    // important to release the lock on the same thread as
    // it was acquired
    Scheduler scheduler = Schedulers.newSingle(name.toLowerCase());
    return Mono.defer(() -> Mono.just(myLockFactory.getLock(name)))
        .publishOn(scheduler)
        .doOnNext(MyDistributedLock::lock)
        .doOnNext(lock -> LOGGER.info("Process acquired lock for resource {}", name))
        .flatMapMany(lock -> Flux.from(publisher))
        .publishOn(scheduler)
        .doFinally(signalType -> {
            MyDistributedLock lock = myLockFactory.getLock(name);
            if (signalType == SignalType.CANCEL) {
                // cancel ignores publishOn
                scheduler.schedule(() -> {
                    lock.unlock();
                    LOGGER.info("Process released lock for resource {} due to signal type {}", name, signalType);
                });
            } else if (lock.isLockedByCurrentThread()) {
                lock.unlock();
                LOGGER.info("Process released lock for resource {} due to signal type {}", name, signalType);
            }
        })
        .then();
}
And an example of a job:
private Mono<Void> someJobRunEveryOneHourOnEveryNodeInCluster() {
    MyLockDistributedFactory hazelcast = ...;
    return lock("some-job", Flux.just(1, 2), hazelcast)
        .repeatWhen(afterOneHour());
}
I wonder whether this is a good approach to using Project Reactor (and a correct implementation) or whether it should be done in a different way. Please advise.
It is a correct approach when using Reactor, because you took care of offsetting the blocking portion onto a dedicated Scheduler/Thread.
But I'd say mutually exclusive code like this is not a very good fit for reactive programming in general: you lose one of the key benefits of doing more with fewer threads, you risk blocking other parts of the application should you forget to publishOn a dedicated thread, etc.
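To make that warning concrete, here is a minimal sketch of the underlying pattern of keeping the blocking lock() call on its own scheduler; the acquire name and lockScheduler parameter are illustrative, not taken from the question's code:

// Keep the blocking lock acquisition off the main reactive threads.
// MyDistributedLock / MyLockDistributedFactory are the interfaces from the question.
private Mono<MyDistributedLock> acquire(String name, MyLockDistributedFactory factory, Scheduler lockScheduler) {
    return Mono.fromCallable(() -> {
            MyDistributedLock lock = factory.getLock(name);
            lock.lock(); // blocking call, runs only on lockScheduler
            return lock;
        })
        .subscribeOn(lockScheduler); // forgetting this would block the caller's thread instead
}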

Is it better to use the Bus Start method or a class constructor to instantiate objects used by a service

I'm using nServiceBus 5 and have created a number of host endpoints, two of which listen for database changes. (The specifics of how to do this can be found here). The intention is to have a service running in the background which publishes an event message using the Bus when notified to do so by the database listener.
The code which creates the database listener object and handles events is in the Start method, implemented as part of IWantToRunWhenBusStartsAndStops.
So - Is putting the code here likely to cause problems later on, for example if an exception is thrown (yes, I do have try/catch blocks, but I removed them from the sample code for clarity)? What happens when the Start method finishes executing?
Would I be better off with a constructor on my RequestNewQuoteSender class to instantiate the database listener as a class property and not use the Start method at all?
namespace MySample.QuoteRequest
{
    public partial class RequestNewQuoteSender : IWantToRunWhenBusStartsAndStops
    {
        public void Start()
        {
            var changeListener = new DatabaseChangeListener(_ConnectionString);
            // Assign the code within the braces to the DBListener's OnChange event
            changeListener.OnChange += () =>
            {
                // code to handle the database change event
                changeListener.Start(_SQLStatement);
            };
            // Now everything has been set up... start it running.
            changeListener.Start(_SQLStatement);
        }

        public void Stop() { LogInfo("Service Bus has stopped"); }
    }
}
Your code seems fine to me.
Just a few small things:
Make changeListener a class field, so that it won't be garbage collected (not 100% sure whether it would be, but just to make sure);
Unsubscribe from OnChange in the Stop() method;
You may also want to have a lock around changeListener.Start(_SQLStatement) and Stop() so that there are no race conditions (I leave it up to you to figure out whether you need it).
Does this make sense?

WCF Proxy Client taking time to create, any cache or singleton solution for it

We have more than a dozen WCF services being called using TCP binding. There are a lot of calls to the same WCF service at various places in the code.
AdminServiceClient client = FactoryS.AdminServiceClient(); // this takes significant time
client.GetSomeThing(param1);
client.Close();
I want to cache the client or produce it from a singleton so that I can save some time. Is that possible?
Thanks
Yes, this is possible. You can make the proxy object visible to the entire application, or wrap it in a singleton class for neatness (my preferred option). However, if you are going to reuse a proxy for a service, you will have to handle channel faults.
First create your singleton class / cache / global variable that holds an instance of the proxy (or proxies) that you want to reuse.
When you create the proxy, you need to subscribe to the Faulted event on the inner channel
proxyInstance.InnerChannel.Faulted += new EventHandler(ProxyFaulted);
and then put some reconnect code inside the ProxyFaulted event handler. The Faulted event will fire if the service drops, or the connection times out because it was idle. The faulted event will only fire if you have reliableSession enabled on your binding in the config file (if unspecified this defaults to enabled on the netTcpBinding).
Edit: If you don't want to keep your proxy channel open all the time, you will have to test the state of the channel before every time you use it, and recreate the proxy if it is faulted. Once the channel has faulted there is no option but to create a new one.
Edit2: The only real difference in load between keeping the channel open and closing it every time is a keep-alive packet being sent to the service and acknowledged every so often (which is what is behind your channel fault event). With 100 users I don't think this will be a problem.
The other option is to put your proxy creation inside a using block where it will be closed / disposed at the end of the block (which is considered bad practice). Closing the channel after a call may result in your application hanging because the service is not yet finished processing. In fact, even if your call to the service was async or the service contract for the method was one-way, the channel close code will block until the service is finished.
Here is a simple singleton class that should have the bare bones of what you need:
public static class SingletonProxy
{
    // dedicated lock object - locking on proxyInstance itself would fail while it is null
    private static readonly object syncRoot = new object();
    private static CupidClientServiceClient proxyInstance = null;

    public static CupidClientServiceClient ProxyInstance
    {
        get
        {
            if (proxyInstance == null)
            {
                AttemptToConnect();
            }
            return proxyInstance;
        }
    }

    private static void ProxyChannelFaulted(object sender, EventArgs e)
    {
        bool connected = false;
        while (!connected)
        {
            // you may want to put timer code around this, or
            // other code to limit the number of retries if
            // the connection keeps failing
            connected = AttemptToConnect();
        }
    }

    public static bool AttemptToConnect()
    {
        // this whole process needs to be thread safe
        lock (syncRoot)
        {
            try
            {
                if (proxyInstance != null)
                {
                    // deregister the event handler from the old instance
                    proxyInstance.InnerChannel.Faulted -= new EventHandler(ProxyChannelFaulted);
                }
                // (re)create the instance
                proxyInstance = new CupidClientServiceClient();
                // always open the connection
                proxyInstance.Open();
                // add the event handler for the new instance;
                // it is registered after Open() so that the instance does not keep
                // raising the Faulted event as soon as Open is called
                proxyInstance.InnerChannel.Faulted += new EventHandler(ProxyChannelFaulted);
                return true;
            }
            catch (EndpointNotFoundException)
            {
                // do something here (log, show user message etc.)
                return false;
            }
            catch (TimeoutException)
            {
                // do something here (log, show user message etc.)
                return false;
            }
        }
    }
}
I hope that helps :)
In my experience, creating/closing the channel on a per call basis adds very little overhead. Take a look at this Stackoverflow question. It's not a Singleton question per se, but related to your issue. Typically you don't want to leave the channel open once you're finished with it.
I would encourage you to use a reusable ChannelFactory implementation if you're not already and see if you still are having performance problems.

WCF events in server-side

I'm working on an application in WCF and want to receive events on the server side.
I have a web page that, upon request, needs to register a fingerprint.
The web page requests the connection of the device and then, every second for 15 seconds, requests the answer.
The server-side code is apparently "simple" but doesn't work.
Here it is:
[ServiceContract]
interface IEEtest
{
    [OperationContract]
    void EEDirectConnect();
}

class EETest : IEEtest
{
    public void EEDirectConnect()
    {
        CZ ee = new CZ(); // initialises the device DLL
        ee.Connect_Net("192.168.1.200", 4011);
        ee.OnFinger += new _IEEEvents_OnFingerEventHandler(ee_OnFinger);
    }

    public void ee_OnFinger()
    {
        // here I have a breakpoint
    }
}
Every time I put my finger on the reader, it should fire the event. In fact, if I run
static void Main()
{
    EETest pp = new EETest();
    pp.EEDirectConnect();
}
it works fine. But from my proxy it doesn't fire the event.
Do you have any tips or recommendations, or can you see the error?
Thanks everyone.
I can see two issues with your code, which may or may not be the problem.
A) Event Registration Race Condition
You call CZ.Connect_Net() and THEN you register the event handler. So if your event fires between calling Connect_Net() and registering a method to handle the event, you'll not see it. Register the event handler first and then call Connect_Net().
B) EEtest lifetime.
The lifetime of your EETest class depends on the instancing mode you use in WCF, see http://mkdot.net/blogs/dejan/archive/2008/04/29/wcf-service-behaviors-instance-and-concurrency-management.aspx. Generally the default is Per-Call, which means a new instance of EETest is created just to service the call to EEDirectConnect. So when you invoke the method EEDirectConnect you get this:
EEDirectConnect invocation started.
WCF creates a new EETest instance.
WCF invokes the method on EETest.
The method news up a CZ and invokes the Connect_Net method.
The event handler is attached.
The method EEDirectConnect completes.
EETest is now "unrooted" by WCF - it's eligible for GC, and hence CZ is eligible for GC.
So perhaps it takes a very short time (or it's synchronous) and the problem is A, or it's asynchronous and it takes a little bit longer and it's B.
BTW: To fix B you could use some sort of synchronisation mechanism (e.g. an Event) to block until ee_OnFinger fires.