Terminate hook processing in an ejabberd module

I'm writing an ejabberd module that saves some messages into a queue. It actually works very well; there is only one thing I can't find in any documentation: I need to stop hook processing if I find a message coming from a particular user.
I.e. a message is sent to ejabberd from user A to user B, my module (hooked to the user_send_packet hook) processes this message and, if it finds that user A is the specified user, it must not deliver it. From what I understood, you can achieve this by stopping hook processing. How do you stop hook processing?

If what you want is to drop messages from A -> B, you can do it by subscribing to the filter_packet hook and returning drop from your handler for the packets you don't want to allow.
From what I understood you can achieve this by stopping hook processing
No, stopping hook processing will only prevent other handlers registered on that hook (if any) from being activated, but nothing else; the packet will continue as usual.
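For reference, here is a rough sketch of such a module, assuming an older-generation ejabberd (e.g. the 16.x line) where the filter_packet accumulator is {From, To, Packet} and the #jid{} record comes from jlib.hrl; hook signatures and helper modules differ between ejabberd releases, so check the docs for your version. The blocked user usera@example.com is only an illustration.
-module(mod_drop_from_user).
-behaviour(gen_mod).

-include("jlib.hrl").  %% defines the #jid{} record in this ejabberd generation

-export([start/2, stop/1, filter_packet/1]).

start(_Host, _Opts) ->
    %% filter_packet is a global hook that sees every packet being routed
    ejabberd_hooks:add(filter_packet, global, ?MODULE, filter_packet, 50),
    ok.

stop(_Host) ->
    ejabberd_hooks:delete(filter_packet, global, ?MODULE, filter_packet, 50),
    ok.

%% Drop everything sent by the blocked user; pass everything else through untouched.
filter_packet({From, _To, _Packet} = Acc) ->
    case {From#jid.luser, From#jid.lserver} of
        {<<"usera">>, <<"example.com">>} -> drop;
        _ -> Acc
    end;
filter_packet(Acc) ->
    %% a previous handler may already have turned the accumulator into drop
    Acc.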

Related

NServiceBus: Custom behavior after every handler

We want to log every occurrence of a handler running to completion and we're wondering what's the cleanest way to do it.
More specifically, when a handler completes, we want to write some basic information, like the type of the message that was processed, to a DB.
One way to do it is by creating and sending a new message (publishing an event) at the end of each handler.
But we're wondering if there is another way to do this without "polluting" the message handlers with those extra lines of code :) For example, after a handler runs to completion, another method defined elsewhere would pick up execution and handle the logic of writing to the database.
Hope I made myself clear enough. Thanks
You could use the auditing pipeline to forward a copy of every processed message to your audit queue and handle those copies there...
Here is some more info: https://docs.particular.net/nservicebus/operations/auditing?version=core_7.2
Does that make sense?
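A minimal sketch of what that looks like, assuming NServiceBus 7-style configuration; the endpoint name, the "audit" queue name and the PlaceOrder message type are only illustrations.
// In the processing endpoint: copy every successfully processed message to the audit queue.
var endpointConfiguration = new EndpointConfiguration("Sales");
endpointConfiguration.AuditProcessedMessagesTo("audit");

// In a second endpoint whose input queue is "audit", ordinary handlers receive the copies
// and can write the logging information to the database:
public class AuditWriter : IHandleMessages<PlaceOrder> // PlaceOrder is a hypothetical message type
{
    public Task Handle(PlaceOrder message, IMessageHandlerContext context)
    {
        // write the message type, id, timestamps, etc. to the DB here
        return Task.CompletedTask;
    }
}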

Synchronizing dependent asynchronous functions in Objective-C

So I am running into a race condition and have a few ideas on how to fix the issue. I am new to threading, so obviously my opinion and research are limited. I have a large number of asynchronous calls that can happen if a user receives certain messages from the server. Thus, my design is poor due to the dependent nature of my objects.
Let's say I have methods like
- (void)addUser:(NSString *)name {
    // does some asynchronous work
}

- (void)messageUser:(NSString *)name {
    // does some more asynchronous work
}
If a user were to receive a message telling it to addUser "Ryan", it would then create a thread and proceed with looking up Ryan and storing him. However, if the user has the application in suspended mode, and the buffer of messages waiting to be received contains both an addUser request and a messageUser request, a race condition occurs because it takes longer to complete addUser than it does to complete messageUser. Thus, if messageUser is called and (in our example) "Ryan" has not been fully added yet, it throws an error.
What would be a possible solution to this issue? I looked into locks and semaphores, and what I am trying to do is: when messageUser receives a call, check to make sure there is no thread currently processing addUser. If there is none, proceed; else wait, then proceed after it has finished.
Well it depends on how the messages are being issued in the first place and what the async response events are.
If the operations have dependencies (ordering requirements) then perhaps a background serial queue would be appropriate? That is a simple way to ensure the messages are processed in order.
If the async operations take completion blocks, then you could have the completion block issue the request for the next operation to be performed, though you may not know about that ahead of time.
If you need to solve this in a more general way then you need some kind of system for tracking prerequisites so you can skip work items that don't have their prerequisites met yet. That probably means your own background thread that monitors a list of waiting tasks and receives notification of all task completions so it can scan for items waiting on that completion and issue them.
It seems really complicated though... I suspect you don't really have such strong async parallel processing requirements and a much simpler design would be just as effective. Given your situation where you are receiving messages from a server, I think a serial queue would be the best option. Then you can process messages in the order the server sent them and keep things simple.
// do this once at app startup; a serial queue runs its blocks one at a time, in FIFO order
dispatch_queue_t queue = dispatch_queue_create("com.example.myapp", DISPATCH_QUEUE_SERIAL);

// for each server response, enqueue its handling
dispatch_async(queue, ^{
    // handle the server message here, one at a time
});
In reality, depending on how you connect to your server, you might be able to move the entire connection handling onto the background queue and communicate with it via messages from the UI, updating the UI by dispatching back to dispatch_get_main_queue(), which is the UI thread.
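A small sketch of that hand-off, reusing the serial queue from above; processServerMessage: and updateUIWithResult: are hypothetical methods standing in for your own parsing and UI code.
// still on the background serial queue; `message` is whatever your connection layer handed you
dispatch_async(queue, ^{
    NSDictionary *result = [self processServerMessage:message]; // hypothetical processing step
    // hop back to the main queue for anything that touches the UI
    dispatch_async(dispatch_get_main_queue(), ^{
        [self updateUIWithResult:result]; // hypothetical UI update
    });
});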

NServiceBus: possible to publish an event when a message gets moved to the error queue?

I have a saga that does a bulk import by creating a bunch of commands (it keeps track of the number of commands sent) and then listens for an event indicating the task succeeded. I would also like to be notified when a command fails (moves into the error queue).
I want to take advantage of NServiceBus's retry functionality, so I don't want to simply wrap it in a try/catch; I really only want to publish this event when the message is moving to the error queue.
Is it possible to create another endpoint that handles the generated commands but listens to the error queue? Or is there a better way to accomplish this?
You can take control over how the exceptions are handled by using a custom fault handler.
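In older NServiceBus versions that meant implementing a fault-handling class; newer versions expose a notification on the recoverability settings instead. The sketch below assumes NServiceBus 7 and its OnMessageSentToErrorQueue notification; BulkImportItemFailed and the session capture are hypothetical illustrations of one way to publish from the callback.
// inside an async startup method
IMessageSession session = null;

var endpointConfiguration = new EndpointConfiguration("Importer");
var recoverability = endpointConfiguration.Recoverability();
recoverability.Failed(failed =>
{
    failed.OnMessageSentToErrorQueue(async failedMessage =>
    {
        // failedMessage carries the id, headers, body and exception of the failed message
        if (session != null)
        {
            await session.Publish(new BulkImportItemFailed // hypothetical event type
            {
                FailedMessageId = failedMessage.MessageId,
                Reason = failedMessage.Exception.Message
            });
        }
    });
});

var endpointInstance = await Endpoint.Start(endpointConfiguration);
session = endpointInstance; // IEndpointInstance is also an IMessageSession
The saga can then subscribe to that event the same way it handles the success event, keeping its count of outstanding commands accurate.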

Chaining events/commands?

I have a feature I'm attempting to implement using NServiceBus, but I'm not sure which pattern to use here. (I'm fairly new to NServiceBus.)
I'll try to explain where my uncertainty comes from:
A user interaction triggers the MVC controller to send a command to perform a domain operation. Handling this command raises an event to notify others that this occurred.
A handler that subscribes to this event determines whether or not another domain operation should occur.
This is where I'm unclear as to the proper pattern to follow. At this point should the event handler:
just make the changes required?
send a new command to do it? If so, send it back to the originating service/process?
another option?
Part of me is wondering if I should be using an in-proc domain event to handle this, but I don't think the first command should have to wait on the second one before it returns. In fact it could happen much later. That is why I went the route of using the bus to handle it async. Also, an email will need to be generated once the second operation finishes. Should that be triggered from yet another event/command?
Any and all guidance appreciated.
If there is no need to wait for the second action, then yes, it should be done asynchronously, so the processing of the first command should publish an NServiceBus event. The handler for that event would (likely) be hosted in a separate endpoint, which would then just do the work - no need to send another command there.
To add to Udi's answer, I would only turn around and send a command back to the originating service if the service at the originating endpoint is really the one that should be responsible for the behavior of that command. Otherwise, the service (endpoint) receiving the event should just do what it needs to do in response to the event (which sounds like your case).
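A minimal sketch of that shape, assuming NServiceBus 6+ handler APIs; DomainOperationOccurred, SecondOperationCompleted and ISecondOperationService are hypothetical names standing in for your own types.
// Hosted in the endpoint that owns the follow-up behavior.
public class DomainOperationOccurredHandler : IHandleMessages<DomainOperationOccurred>
{
    readonly ISecondOperationService secondOperation; // hypothetical domain service

    public DomainOperationOccurredHandler(ISecondOperationService secondOperation)
    {
        this.secondOperation = secondOperation;
    }

    public async Task Handle(DomainOperationOccurred message, IMessageHandlerContext context)
    {
        // Decide whether the second operation applies, then just do the work here.
        await secondOperation.ExecuteAsync(message.EntityId);

        // Publish another event so the email endpoint can react once this operation finishes.
        await context.Publish(new SecondOperationCompleted { EntityId = message.EntityId });
    }
}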

Grails test JMS messaging

I've got a JMS messaging system implemented with two queues. One is used as a standard queue; the second is an error queue.
This system was implemented to handle database concurrency in my application. Basically, there are users, and users have assets. One user can interact with another user, and as a result of this interaction their assets can change. A user can interact with only a single user at a time, so they cannot start another interaction before the first one finishes. However, one user can be part of multiple interactions with other users [as long as those other users started the interactions].
What I did was: I created an "interaction registry" in Redis, where I store the IDs of users who begin an interaction. During the interaction I gather all changes that should be made to the second user's assets, and after the interaction is finished I send those changes to the queue [the user who started the interaction is saved within the original transaction]. After the interaction is finished I clear the ID from the registry in Redis.
The listener on my queue receives a message with information about the changes that need to be made to the user. The listener fetches all objects that require a change from the database and updates them. Before each update, the listener checks whether there is an interaction started by the user being updated. If there is, the listener rolls back the transaction and puts the message back on the queue. However, if something else is wrong, the message is put on the error queue and retried several times before it is logged and marked as failed. Phew.
Now I'm at the point where I need to create a proper integration test, so that I make sure no future changes will screw this up.
Positive testing is easy; unfortunately I also have to test scenarios where, during updates, there's an OptimisticLockFailureException, my own UserInteractingException, and some other exceptions [catch (Exception e), that is].
I can simulate my UserInteractingException by creating a payload with hundreds of objects to be updated by the listener and changing one of them in the test. Same thing with OptimisticLockFailureException. But I have no idea how to simulate anything else [I can't even think of what it could be].
Also, a testing scenario based on a fluke [well, the chance that the presented scenario will not trigger an error is very low] is not something I like. I would like something more concrete.
Is there any other good way to test these scenarios?
Thanks
I did as I described in the original question and it seems to work fine.
Any delays I can test with Camel.
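One technique not mentioned in the original answer that can make the failure branches deterministic is to override the collaborating service's method via Groovy's metaClass inside the integration test so that it throws on demand. A rough sketch, where assetUpdateService, applyChanges and the queue helpers are hypothetical names standing in for your own code:
def "message ends up on the error queue when an unexpected exception is thrown"() {
    given: "the update step is forced to fail"
    // per-instance metaClass override: only this Spring bean instance is affected
    assetUpdateService.metaClass.applyChanges = { changes ->
        throw new IllegalStateException("simulated unexpected failure")
    }

    when: "a change message is sent to the standard queue"
    sendChangesMessage(buildChangesPayload())   // hypothetical test helpers

    then: "the message is eventually moved to the error queue"
    waitForMessageOnErrorQueue()                // hypothetical polling/assertion helper

    cleanup: "restore the default metaClass"
    assetUpdateService.metaClass = null
}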