I'm trying to build an OS X mail client using MailCore2, and I need to know which operations are currently running, and in what state they are — think Mail.app's activity monitor window.
I've found some things I could use in the API: the MCOIMAPSession object has an operationQueueRunningChangeBlock property, but it only tells me when the session changes state (running => not running), which is insufficient.
Right now I think I'll have to subclass/wrap the operation classes to do what I want.
MailCore does not provide an API to track running operations, nor should we, because that is your job. A typical pattern to implement this would be either to subclass the operation classes to tag each one with some kind of activity object, or to aggregate activities in a separate queue, pushing and popping as operations are enqueued and dequeued respectively. The completion blocks of each request in the Objective-C interface should provide enough of the state of each operation for you, and some specific kinds of operations even include progress blocks/hooks.
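The pattern itself is language-agnostic. Here's a minimal sketch of the tag-and-track idea in C# (the Activity and ActivityMonitor names are hypothetical, not part of MailCore2): push an activity when an operation starts, and pop it from the operation's completion block.

using System;
using System.Collections.Generic;

public sealed class Activity
{
    public string Description { get; init; } = "";
    public DateTime StartedAt { get; } = DateTime.UtcNow;
}

public sealed class ActivityMonitor
{
    private readonly object _lock = new();
    private readonly List<Activity> _running = new();

    // Call when an operation is enqueued/started.
    public Activity Push(string description)
    {
        var activity = new Activity { Description = description };
        lock (_lock) _running.Add(activity);
        return activity;
    }

    // Call from the operation's completion block.
    public void Pop(Activity activity)
    {
        lock (_lock) _running.Remove(activity);
    }

    // Feed this to your activity-monitor window.
    public IReadOnlyList<Activity> Snapshot()
    {
        lock (_lock) return _running.ToArray();
    }
}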
So I'm thinking of using RabbitMQ to send messages between all the varied apps in our organization. The attached image is essentially the picture in my mind of how things would work.
So the message goes into the exchange, and splits out into three queues.
Payloads are always JSON text.
The consumers are long-running Windows services whose only job is to sit and listen for messages destined for their particular application. When a message comes in, they look at the header to determine how the JSON payload should be interpreted, and which REST endpoint it should be sent to. E.g., "When I see a 'WORK_ORDER_COMPLETE' header, I am going to parse the payload as a WorkOrderCompleteDto and send it as a POST to the CompletedWorkOrder WebAPI method at timelabor-api.mycompany.com. If the API returns anything other than 200, I reject the message and let Rabbit handle it. If I get a 200 back from the API, I ack the message to Rabbit."
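Here's roughly what I have in mind for such a consumer — a sketch assuming the RabbitMQ.Client 6.x .NET API; the queue name, the MessageType header key, and the exact endpoint path are placeholders:

using System;
using System.Net.Http;
using System.Text;
using RabbitMQ.Client;
using RabbitMQ.Client.Events;

class TimeLaborConsumer
{
    static void Main()
    {
        var factory = new ConnectionFactory { HostName = "rabbit.mycompany.com" }; // placeholder host
        using var connection = factory.CreateConnection();
        using var channel = connection.CreateModel();
        var http = new HttpClient();

        var consumer = new EventingBasicConsumer(channel);
        consumer.Received += async (sender, ea) =>
        {
            // String header values arrive as byte[] in the .NET client.
            var type = Encoding.UTF8.GetString((byte[])ea.BasicProperties.Headers["MessageType"]);
            var json = Encoding.UTF8.GetString(ea.Body.ToArray());

            if (type == "WORK_ORDER_COMPLETE")
            {
                var response = await http.PostAsync(
                    "https://timelabor-api.mycompany.com/CompletedWorkOrder", // placeholder path
                    new StringContent(json, Encoding.UTF8, "application/json"));

                if (response.IsSuccessStatusCode)
                    channel.BasicAck(ea.DeliveryTag, multiple: false);
                else
                    channel.BasicNack(ea.DeliveryTag, multiple: false, requeue: false); // let Rabbit dead-letter it
            }
        };

        channel.BasicConsume(queue: "timelabor-inbound", autoAck: false, consumer: consumer);
        Console.ReadLine(); // keep the process alive; a real Windows service would block differently
    }
}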
The end applications are simply our internal line-of-business apps that we use for inventory, billing, etc. Those applications are then responsible for performing their respective functions (decrementing inventory, creating a billing record, yadda yadda).
Does this seem like a sensible understanding of a proper way to use Rabbit?
Conceptually, I believe you may be relying on RabbitMQ to do things that your application needs to do.
The assumption of the architecture seems to be that each message is processed by each of your consuming applications totally in a vacuum. What this means is that you don't care that a message processed successfully by Billing_App ultimately failed in Inventory_App. Maybe this is true, but in my experience, it isn't.
If the end goal is to achieve some consistent state in the overall data, you're going to need some supervisory component orchestrating and monitoring the various operations to ensure that the state is consistent. This means, in effect, that your statement about rejecting a message back to RabbitMQ means you have a bit more thought to put into what happens when something fails.
I would focus on drawing some UML activity diagrams that describe your workflow and how it achieves the end state, and use those as a guide to determine how the orchestration in your application needs to be designed.
In some exceptional situations I need some way to tell the consumer on the receiving end that certain messages shouldn't be processed. Otherwise the two systems will get out of sync (we deal with some outdated external systems, and if, for example, a connection is dropped, we have to discard all queued operations in the scope of that connection).
Should I take the risk and resolve problem messages manually? Use compensating actions (which could be tough to support in my case)? Anything else?
There are a few ways:
You can set a time-to-live when sending a message: await endpoint.Send(myMessage, c => c.TimeToLive = TimeSpan.FromHours(1)); but this will apply to all messages that are sent (or published) this way. I would consider this after looking at your requirements. It is technical, but it is a proper messaging pattern.
Make TTL and generation timestamp properties of your message itself and let the consumer decide if the message is still worth processing. This is more business-oriented and, probably, the most correct way (see the sketch after this list).
Combine tech and business - keep the timestamp and TTL in message headers so they don't pollute your message contracts, and filter them out using custom middleware. In this case, you need to be careful to log such drops so you won't be left wondering why messages disappear now and then.
Almost any unreliable integration can be monitored using sagas, with timeouts. For example, we use a saga to integrate with Twilio. Since we have no ability to open a webhook for them, we poll after some interval to check the message status. You can start a saga when you get a message and schedule a message to check if the processing is still waiting. As discussed in comments, you can either use the "human intervention required" way to fix the issue or let the saga decide to drop the message.
A similar way could be to use a lookup table where you put the list of messages that aren't relevant for processing anymore. Such a table would be similar to the list of sagas. It seems this way would also require scheduling. Both here and for the saga, I'd recommend using a separate receive endpoint (a queue) for the DropIt message, with only one consumer. It would prevent DropIt messages from getting stuck behind the integration messages that are still waiting to be processed (some of which should already have been dropped).
Use the RMQ management API to remove messages from the queue. This is the worst method; I don't recommend it.
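To illustrate option 2 above (TTL and generation timestamp carried in the message itself), here's a minimal sketch of a MassTransit consumer that drops stale messages; the message and consumer names are hypothetical:

using System;
using System.Threading.Tasks;
using MassTransit;

public record SyncCommand
{
    public DateTime GeneratedAt { get; init; }
    public TimeSpan Ttl { get; init; }
    // ... business payload ...
}

public class SyncCommandConsumer : IConsumer<SyncCommand>
{
    public Task Consume(ConsumeContext<SyncCommand> context)
    {
        var message = context.Message;
        if (DateTime.UtcNow - message.GeneratedAt > message.Ttl)
        {
            // Log the drop so messages don't disappear silently.
            Console.WriteLine($"Dropping stale message generated at {message.GeneratedAt:O}");
            return Task.CompletedTask; // completing without side effects acks the message
        }
        // ... normal processing against the external system ...
        return Task.CompletedTask;
    }
}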
From what I understand, you're building a system that sends messages to 3rd-party systems — in other words, systems you don't control. They have an API, but compensating actions aren't always possible, either because the API doesn't provide them or because actions performed inside the 3rd-party system can't be compensated or rolled back?
If possible, try to solve this via sagas. Make sure the saga executes the different steps (the sending of messages) in the right order, so that messages that cannot be compensated are sent last. That way, messages that can be compensated will be compensated by the saga if they fail, and the ones that cannot be compensated are only sent once you're as sure as possible that they won't have to be, because that last message is the final step in synchronizing all the systems.
All in all, this is one of the problems with distributed systems: keeping everything in sync. Compensating actions are the way to deal with this. If compensating actions aren't possible, you're in a very difficult situation. Try to see if the business can help by becoming more flexible and accepting that you need to compensate things where they'd otherwise tell you it's not possible.
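To make the ordering idea concrete, here's a language-neutral sketch in C# (all names hypothetical): compensable steps run first and record how to undo themselves, and the non-compensable message is only sent last, once everything else has succeeded.

using System;
using System.Collections.Generic;

static class SyncOrchestrator
{
    // Runs compensable steps first (recording how to undo each one),
    // then the non-compensable step last. If anything throws, the
    // recorded compensations run in reverse order.
    public static void Run(
        IEnumerable<(Action Execute, Action Compensate)> compensableSteps,
        Action nonCompensableLastStep)
    {
        var undo = new Stack<Action>();
        try
        {
            foreach (var step in compensableSteps)
            {
                step.Execute();
                undo.Push(step.Compensate);
            }
            nonCompensableLastStep(); // only sent once everything else succeeded
        }
        catch
        {
            while (undo.Count > 0)
                undo.Pop().Invoke();
            throw;
        }
    }
}

If the last step throws, there is nothing left to compensate — which is exactly why it has to go last.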
In some exceptional situations I need some way to tell the consumer on the receiving end that certain messages shouldn't be processed.
Can't you invert this into:
Tell the consumer that an earlier message can be processed.
This way you can easily turn this into a state machine (like a saga) that acts on two messages. If the 2nd message never arrives, you can discard the 1st after a while, or do something else; see the sketch below.
The strategy here is to halt/wait until you're certain that no actions need to be reverted.
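A minimal sketch of that two-message idea (names hypothetical): the first message is parked as pending, the second releases it for processing, and a periodic sweep discards anything whose confirmation never arrived.

using System;
using System.Collections.Concurrent;

public class PendingMessageStore<T>
{
    private readonly ConcurrentDictionary<Guid, (T Message, DateTime ReceivedAt)> _pending = new();
    private readonly TimeSpan _timeout;

    public PendingMessageStore(TimeSpan timeout) => _timeout = timeout;

    // First message arrives: park it.
    public void Hold(Guid correlationId, T message) =>
        _pending[correlationId] = (message, DateTime.UtcNow);

    // Second message arrives: release the first one for processing.
    public bool TryRelease(Guid correlationId, out T message)
    {
        if (_pending.TryRemove(correlationId, out var entry))
        {
            message = entry.Message;
            return true;
        }
        message = default!;
        return false;
    }

    // Run periodically: discard anything the confirmation never came for.
    public void DiscardExpired()
    {
        foreach (var pair in _pending)
            if (DateTime.UtcNow - pair.Value.ReceivedAt > _timeout)
                _pending.TryRemove(pair.Key, out _);
    }
}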
I'm struggling to understand how to implement eventual consistency with the BacklogItem and Task example from Vaughn Vernon. What I've understood so far is this (considering the case where he splits BacklogItem and Task into separate aggregate roots):
A BacklogItem can contain one or more tasks. When the remaining hours of all the tasks of a BacklogItem are 0, the status of the BacklogItem should change to "DONE".
I'm aware of the rule that says you should not update two aggregate roots in the same transaction, and that you should accomplish this with eventual consistency.
Once a Domain Service updates the number of hours of a Task, a TaskRemainingHoursUpdated event should be published to a DomainEventPublisher, which lives in the same thread as the executing code. And here is where I'm at a loss, with the following questions:
I suppose there should be a subscriber (also living in the same thread, I guess) that reacts to TaskRemainingHoursUpdated events. At which point in your desktop/web application do you perform this subscription to the bus? At the very initialization of your app? In the application code? Is there any reasoning for placing domain subscribers in a specific place?
Should that subscriber (in the same thread) call a BacklogItem repository and perform the update? (But wouldn't that be a violation of the rule of not updating two aggregates in the same transaction, since this would happen synchronously?)
If I want to achieve eventual consistency to fulfil the previously mentioned rule, do I really need a message broker like RabbitMQ, even though both BacklogItem and Task live inside the same Bounded Context?
If I use this message broker, should I have a background thread or something that just consumes events from a RabbitMQ queue and then dispatches the event to update the product?
I'd appreciate it if someone could shed some clear light on this, since it is quite complex to picture in its entirety.
So to start with, you need to recognize that, if the BacklogItem is the authority for whether or not it is "Done", then it needs to have all of the information to compute that for itself.
So somewhere within the BacklogItem is data that is tracking which Tasks it knows about, and the known state of those tasks. In other words, the BacklogItem has a stale copy of information about the task.
That's the "eventually consistent" bit; we're trying to arrange the system so that the cached copy of the data in the BacklogItem boundary includes the new changes to the task state.
That in turn means we need to send a command to the BacklogItem advising it of the changes to the task.
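To illustrate, here's a minimal sketch (hypothetical names, not Vernon's code) of a BacklogItem that keeps its own, possibly stale, copy of each task's remaining hours and decides "DONE" for itself:

using System;
using System.Collections.Generic;
using System.Linq;

public class BacklogItem
{
    private readonly Dictionary<Guid, int> _remainingHoursByTask = new();
    public string Status { get; private set; } = "COMMITTED";

    // Handles the command carrying the task change (e.g. raised in
    // response to a TaskRemainingHoursUpdated event).
    public void RecordTaskProgress(Guid taskId, int remainingHours)
    {
        _remainingHoursByTask[taskId] = remainingHours;
        if (_remainingHoursByTask.Values.All(h => h == 0))
            Status = "DONE";
    }
}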
From the point of view of the backlog item, we don't really care where the command comes from. We could, for example, make it a manual process "After you complete the task, click this button here to inform the backlog item".
But for the sanity of our users, we're more likely to arrange for an event handler to be running: when you see the output from the task, forward it to the corresponding backlog item.
At which point in your Desktop/Web application you perform this subscription to the Bus? At the very initialization of your app?
That seems pretty reasonable.
Should that subscriber (in the same thread) call a BacklogItem repository and perform the update? (But wouldn't that be a violation of the rule of not updating two aggregates in the same transaction, since this would happen synchronously?)
Same thread and same transaction are not necessarily coincident. It can all be coordinated in the same thread; but it probably makes more sense to let the consequences happen in the background. At their core, events and commands are just messages - write the message, put it into an inbox, and let the next thread worry about processing.
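For example, a minimal sketch of such an inbox (hypothetical names, no broker involved): the publisher only enqueues, and a background thread processes each event in its own transaction.

using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

public record TaskRemainingHoursUpdated(Guid BacklogItemId, Guid TaskId, int RemainingHours);

public class DomainEventInbox
{
    private readonly BlockingCollection<TaskRemainingHoursUpdated> _inbox = new();

    // Called synchronously by the publisher, in the same thread and
    // transaction as the Task update; it only enqueues.
    public void Deliver(TaskRemainingHoursUpdated evt) => _inbox.Add(evt);

    // Runs on a background thread; each event gets its own transaction.
    public void StartProcessing(Action<TaskRemainingHoursUpdated> handle)
    {
        Task.Run(() =>
        {
            foreach (var evt in _inbox.GetConsumingEnumerable())
                handle(evt); // e.g. load the BacklogItem, call RecordTaskProgress, save
        });
    }
}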
If I want to achieve eventual consistency to fulfil the previously mentioned rule, do I really need a message broker like RabbitMQ, even though both BacklogItem and Task live inside the same Bounded Context?
No; the mechanics of the plumbing matter not at all.
I have implemented a client/server program using the boost::asio library.
In my implementation there are times when io_service.run() blocks indefinitely. If I post another request to the io_service, the blocked call resumes executing normally.
Is there any way to see what the pending requests inside the io_service queue are?
I am not using a work object to keep the run call from returning!
There is no official way to query the io_service for all pending requests. However, there are a few techniques to debug the problem:
Boost 1.47 introduced handler tracking. Simply define BOOST_ASIO_ENABLE_HANDLER_TRACKING and Boost.Asio will write debug output, including timestamps, an identifier, and the operation type, to the standard error stream.
Attach a debugger and dig through the layers to find and examine the operation queues. This answer covers both understanding handler tracking and using a debugger to examine an operation queue for the epoll_reactor.
Finally, if you believe it is a bug, then it may be worth updating to the latest version or checking the revision history for relevant changes. Regardless, describing the problem in more detail may allow others to help identify the source of the problem and potential solutions.
Now, I spent a few hours reading and experimenting (I need more boost::asio functionality for work as well), and it turns out: kind of.
But it is not as straightforward or readable as one might hope.
Under the hood (well, under the outermost hood) io_service has a bunch of other services registered, which do the work that the async_ operations of their respective areas require.
These are the "Services" described in the reference.
Now sadly, the services stay registered whether there is work to do or not. For example, if your io_service has a UDP socket, it will still have all the corresponding services, even if the socket itself is inactive.
But you can ask your io_service which services it has. Let's say you want to know whether your io_service, called m_io_service, has a UDP datagram_socket_service. Then you can call something like:
// Ask the io_service whether a datagram_socket_service for UDP has been
// registered with it (it is registered the first time a UDP socket is
// created, and stays registered afterwards).
if (boost::asio::has_service<boost::asio::datagram_socket_service<boost::asio::ip::udp> >(m_io_service))
{
    // Whatever
}
That does not help a lot, because it will be true no matter whether the socket is active or not. But once you know that you have that service, you can get a reference to it using use_service instead of has_service, with the same elegant amount of <>.
And now you can inspect the service to see what it is up to. Sadly, it will not tell you what the outstanding handlers' names are (probably partly because it does not know them), but if it is a socket service, you can get its implementation_type and with that check whether the socket currently is_open, or find its local_endpoint as well as its remote_endpoint.
In the case of a deadline_timer_service you can, among other things, find out when it expires_at.
See the reference for more information on what each service is and is not willing to tell you.
http://www.boost.org/doc/libs/1_54_0/doc/html/boost_asio/reference.html
This information should then hopefully allow you to determine which async_ operation did not return.
And if not, at the very least you can cancel any unexpectedly active services.
I am building an application on Mac OS X (10.6). In this application, I have one screen where the user provides input, which is saved as a plist in a local folder. This plist file then needs to be transferred to a server via an HTTP POST. There should be a check for server connectivity, and if the connection fails, the files should be kept in the local folder. After a certain time interval, the server connection should be checked again, and if it is available, all the files stored in the local folder should be sent one by one.
Basically, the GUI application will run continuously to collect input from the user, while another thread checks for server connectivity and sends the files.
So my question is: what might be a good approach to solving this problem? If anyone can share some sample code, that would be great.
Thanks,
Barun
There are several approaches to threading in Objective-C! The easiest strategy is NSOperationQueue. Subclass NSOperation to handle your HTTP request, optionally set a completion block if you need to be notified when it's done, add an instance of it to an NSOperationQueue object, and you're good to go. Set up an NSTimer to reschedule the upload if it fails the first time. You can use NSURLConnection to handle the web requests. Note that NSURLConnection can make connections asynchronously or synchronously (blocking). Since your NSOperation subclass already runs on a separate thread, you probably want to use the blocking method (if you don't, you have to create a concurrent NSOperation subclass, which is a lot more work).
You can also use Grand Central Dispatch's API, detach a new thread to methods you specify, or use plain old C threads (I wouldn't recommend the last two, but it's good to mention them). As a bonus, NSOperationQueue and Grand Central Dispatch both know "what's right" when you have multiple operations running at once, and will scale the number of threads to fit the number of cores in the user's machine to obtain the best performance.
Check the docs for NSOperationQueue, NSOperation, and NSURLConnection. The guides and example projects will have all the source code you need to get you started in the right direction.