Client-server Command design pattern with variable delays - oop

I am writing a client program to control a server which is in turn controlling some large hardware. The server needs to receive commands to initialize, start, stop and control the hardware.
The connection from the client to the server is via a TCP or UDP socket. Each command is encapsulated in an appropriate message using a SCADA protocol (e.g. Modbus or DNP3).
Part of the initialization phase involves sending a sequence of commands from the client to the server. In some cases there must be a delay in seconds between the commands to prevent multiple sub-systems being initialized at the same time. The value of the delay depends on the type of command.
I'm thinking that the Command design pattern is a good approach here. The client instantiates ConcreteCommands and the Invoker places them in a queue. I'm not sure how to incorporate the variable delay, or whether there's a better pattern involving a timer and a queue for sending messages with variable delays.
I'm using C# but this is probably irrelevant since it's more of a design pattern question.

It sounds like you need to store a mapping from command type to delay. When your server starts, you could cache those delay times, then call a method that processes each command after the specified delay.
When the server starts:
Dictionary<Type, int> typeToDelayMapping = GetTypeToDelayMapping();
When a command reaches the server, the server can call a method with a signature like:
void InvokeCommand(ICommand command, int delayTimeInMilliseconds)
Like so:
InvokeCommand(command, typeToDelayMapping[command.GetType()]);
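Putting that together with the Command pattern from the question, a minimal sketch might look like the following (DelayedCommandInvoker and the concrete command classes are illustrative names, not from any framework; a blocking Thread.Sleep is used for simplicity):

    using System;
    using System.Collections.Generic;
    using System.Threading;

    public interface ICommand
    {
        void Execute(); // e.g. build and send the SCADA message
    }

    // Hypothetical concrete commands
    public class InitializeCommand : ICommand { public void Execute() { /* send init message */ } }
    public class StartCommand : ICommand { public void Execute() { /* send start message */ } }

    // Invoker: drains a FIFO queue, pausing after each command as required by its type
    public class DelayedCommandInvoker
    {
        private readonly Queue<ICommand> _queue = new Queue<ICommand>();
        private readonly Dictionary<Type, int> _delayMs;

        public DelayedCommandInvoker(Dictionary<Type, int> typeToDelayMapping)
        {
            _delayMs = typeToDelayMapping;
        }

        public void Enqueue(ICommand command)
        {
            _queue.Enqueue(command);
        }

        public void ProcessAll()
        {
            while (_queue.Count > 0)
            {
                ICommand command = _queue.Dequeue();
                command.Execute();

                // Wait before the next command if this command type requires it
                int delay;
                if (_delayMs.TryGetValue(command.GetType(), out delay))
                    Thread.Sleep(delay);
            }
        }
    }

The client would then call invoker.Enqueue(new InitializeCommand()) for each command and run invoker.ProcessAll() on a worker thread so the delays don't block the UI; a System.Threading.Timer could replace Thread.Sleep if the invoker itself needs to stay responsive.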

Related

Suspend operation of lwIP Raw API

I am working on a project using a Zynq (Picozed devboard). The application is run bare-metal, uses lwIP TCP in RAW mode and basically behaves like this:
Receive a batch of data via Ethernet, which is stored in RAM.
Process the batch of data.
Send back the processed data via Ethernet.
The problem is, I need to measure the execution time of the processing part. However, running lwIP in RAW mode forces me to call tcp_fasttmr() and tcp_slowtmr() every 250/500 ms, which makes accurate measurement pretty hard. Whenever I'm not calling the tcp_tmr() functions for some time, I start repeatedly receiving error messages via UART ("unable to alloc pbuf in recv_handler"). It seems this is called from some ISR related to error handling, but I cannot really find the exact location.
My question is, how do I suspend the network functionality so I don't need to call tcp_tmr() periodically? I tried closing the connection and disabling the interface (netif_set_down()) and disabling the timer interrupt, but it still seems to have no effect on my problem.
I don't know anything about that devboard or the microcontroller on it, but you should have an ethernetif.c file (the lwIP port) that contains the processing of an Ethernet receive interrupt or similar. It should call the lwIP function netif->input with a packet to process.
Disabling the interface won't stop this behaviour; it will just stop the higher-level processing of the packet. If you are only measuring execution time for debugging, you could try disabling the Ethernet receive interrupt and not calling tcp_tmr() until you have processed the packets.

Does COM provide methods to delay shutdown until all RPCs are done?

I have two processes: a Client and a Server. The Client makes a call that the Server starts processing, but the Server can begin shutting down before the call has finished. This can cause objects required by the call to be destroyed mid-call, leading to a crash.
The Client and Server communicate through COM. Something that reports the number of currently active RPCs from and to a given Server process would be extremely helpful here.
Does COM, as the layer of communication between these two processes, provide any aid in delaying shutdown while there is active interaction between them?
I don't know which language was used to implement your COM client/server.
But as far as I understand, it looks like you are facing a COM multithreading issue. What is the threading model of your COM server? (I suppose it is multithreaded.)
If that's the case, you should synchronize your threads.
The other way would be to change the threading model of your COM server to single-threaded. In that case, the server's shutdown call would execute only after the previous client call finishes.
I suspect you really want CoAddRefServerProcess inside your C++ object's constructor (and CoReleaseServerProcess in the destructor).
This will keep your server alive until the C++ objects go away.
However, this won't prevent the client from requesting new instances, so you may also want:
CoRevokeClassObject to prevent clients from creating new instances.
If you're feeling really nasty, CoDisconnectObject will forcibly disconnect the proxy from the server.

Is it possible to have asynchronous processing

I have a requirement to send continuous updates to my clients. The client is a browser in this case. We have some data that updates every second, so once a client connects to our server, we maintain a persistent connection and keep pushing data to the client.
I am looking for suggestions for implementing this at the server end. Basically what I need is this:
1. A client connects to the server. I maintain the socket and metadata about the socket; the metadata describes which updates need to be sent to this client.
2. The server process now waits for new client connections.
3. One other process has the list of all the open sockets, goes through each of them, and sends updates if required.
Can we do something like this in an Apache module:
1. An Apache process gets the new connection and maintains state for it. It keeps the state in some global memory and returns to the root process to signify that it is done, so that the root can accept new connections.
2. Although the Apache process has returned its status to the root process, it also keeps executing in parallel, going through its global store and sending updates to the client, if any.
So can an Apache process do these things:
1. Have more than one connection associated with it?
2. Asynchronously wait for new connections while processing the previous ones?
This is a complicated and inefficient model of updating. Your server will try to update clients that have closed down, and the server has to maintain all that client data and metadata (last update time, etc.).
Usually, continuous updates are done with Ajax in a polling model. The client has a JavaScript timer that, when it fires, hits a service that provides updated data. The client keeps getting updates at regular intervals without your having to write an Apache module.
Would this model work for your scenario?
More reasons to opt for polling instead of push: Periodic_Refresh
With a little patch to resume a SUSPENDED mpm_event connection, I've got an asynchronous Apache module working. With this you can do improved polling:
JavaScript connects to Apache and asks for an update;
if there's no updates, then instead of answering immediately the module uses SUSPENDED;
some time later, after an update or a timeout happens, callback fires somewhere;
callback gives an update (or a "no updates" message) to the client and resumes the connection;
client goes to step 1, repeating the poll which with Keep-Alive will use the same connection.
That way the number of round trips between the client and the server is decreased, and the client receives the update immediately. (This is known as Comet or Reverse Ajax, AFAIK.)

WCF Server Push connectivity test. Ping()?

Using techniques as hinted at in:
http://msdn.microsoft.com/en-us/library/system.servicemodel.servicecontractattribute.callbackcontract.aspx
I am implementing a ServerPush setup for my API to get realtime notifications from a server of events (no polling). Basically, the Server has a RegisterMe() and UnregisterMe() method and the client has a callback method called Announcement(string message) that, through the CallbackContract mechanisms in WCF, the server can call. This seems to work well.
Unfortunately, in this setup, if the Server were to crash or is otherwise unavailable, the Client won't know since it is only listening for messages. Silence on the line could mean no Announcements or it could mean that the server is not available.
Since my goal is to reduce polling rather than achieve immediacy, I don't mind adding a void Ping() method on the Server alongside RegisterMe() and UnregisterMe() that merely exists to test connectivity to the server. Periodically calling this method would, I believe, ensure that we're still connected (and also that no Announcements have been dropped by the transport, since this is TCP).
But is the Ping() method necessary, or is this connectivity test otherwise available as part of WCF by default - like serverProxy.IsStillConnected() or something? As I understand it, the channel's State would only return Faulted or Closed AFTER a failed Ping(), but not instead of it.
2) From a broader perspective, is this callback approach solid? This is not for HTTP or Ajax - the number of connected clients will be few (tens of clients, max). Are there serious problems with this approach? As this seems to be a mild risk, how can I prevent a slow or malicious client from blocking the server by not processing its callback queue fast enough? Is there a kind of timeout specific to the callback that I can set without affecting other operations?
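For reference, the contract shape described here might look roughly like this minimal sketch (the attributes are standard WCF; the interface names are illustrative):

    using System.ServiceModel;

    // Callback contract the server uses to push events to the client
    public interface IAnnouncementCallback
    {
        [OperationContract(IsOneWay = true)]
        void Announcement(string message);
    }

    [ServiceContract(CallbackContract = typeof(IAnnouncementCallback))]
    public interface IAnnouncementService
    {
        [OperationContract]
        void RegisterMe();

        [OperationContract]
        void UnregisterMe();

        // Connectivity probe: either returns within the binding's timeouts
        // or throws, faulting the channel
        [OperationContract]
        void Ping();
    }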
Your approach sounds reasonable, here are some links that may or may not help (they are not quite exactly related):
Detecting Client Death in WCF Duplex Contracts
http://tomasz.janczuk.org/2009/08/performance-of-http-polling-duplex.html
Having some health check built into your application protocol makes sense.
If you are worried about malicious clients, then add authorization.
The second link I shared above has a sample pub/sub server; you might be able to use this code. A couple of things to watch out for: consider pushing notifications via async calls or on a separate thread, and set the sendTimeout on the TCP binding.
HTH
I wrote a WCF application and encountered a similar problem. My server checked that clients had not 'pulled the plug' by periodically sending them a ping. The actual send method (asynchronous, since this was the server) had a timeout of 30 seconds. The client simply checked that it received data every 30 seconds, while the server would catch an exception if the timeout was reached.
Authorisation was required to connect to the server (using the built-in WCF feature that forces the connecting party to call a particular method first), so from a malicious-client perspective you could easily add code to check for and ban an account that does something suspicious, while disconnecting users who do not authenticate.
As the server I wrote was asynchronous, there wasn't any way to really block it. I guess that addresses your last point: the asynchronous send method fires off the ping (and any other data) and returns immediately. In the SendEnd method it would catch the timeout exception (sometimes several for the same client) and disconnect them, without any blocking or freezing of the server.
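A sketch of that idea, assuming the server keeps a list of callback proxies collected in RegisterMe() (the helper class and method names below are illustrative, not from the original answer; the callback interface is the one sketched earlier):

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.ServiceModel;

    static class ClientSweeper
    {
        // Ping every registered callback channel; a timeout or fault means the
        // client has "pulled the plug", so it is dropped from the list.
        public static void SweepClients(List<IAnnouncementCallback> registeredCallbacks)
        {
            foreach (var client in registeredCallbacks.ToList())
            {
                try
                {
                    // The binding's sendTimeout (e.g. 30 seconds) bounds this call
                    client.Announcement("ping");
                }
                catch (TimeoutException) { registeredCallbacks.Remove(client); }
                catch (CommunicationException) { registeredCallbacks.Remove(client); }
            }
        }
    }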
Hope that helps.
You could use a publisher / subscriber service similar to the one suggested by Juval:
http://msdn.microsoft.com/en-us/magazine/cc163537.aspx
This would allow you to persist the subscribers if losing the server is a typical scenario. The publish method in this example also calls each subscriber on a separate thread, so a few dead subscribers will not block the others...
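A sketch of that dispatch pattern, reusing the callback interface sketched earlier (the Publisher class and Publish method are illustrative names, not from Juval's article):

    using System;
    using System.Collections.Generic;
    using System.ServiceModel;
    using System.Threading;

    static class Publisher
    {
        // Fire each callback on a thread-pool thread so one slow or dead
        // subscriber cannot hold up the others.
        public static void Publish(IEnumerable<IAnnouncementCallback> subscribers, string message)
        {
            foreach (var subscriber in subscribers)
            {
                var s = subscriber; // capture a fresh variable for the closure
                ThreadPool.QueueUserWorkItem(_ =>
                {
                    try { s.Announcement(message); }
                    catch (TimeoutException) { /* slow subscriber: consider removing */ }
                    catch (CommunicationException) { /* dead subscriber: consider removing */ }
                });
            }
        }
    }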

WCF - AsyncPattern=true or IsOneWay=true

A few methods in my WCF service are quite time-consuming - generating reports and sending e-mails.
Per the current requirement, the client application should just submit the request and not wait for the whole process to complete. This lets the user continue doing other operations in the client application instead of waiting for the whole process to finish.
I am in a doubt over which way to go:
AsyncPattern = true OR
IsOneWay=true
Please guide.
It can be both.
Generally I see no reason for a WCF operation not to be asynchronous, other than the developer being lazy.
You should not compare them, because they are not comparable.
In short, AsyncPattern=True performs asynchronous invocation, regardless of whether you're returning a value or not.
OneWay works only with void methods; the client does not wait for the operation to execute, although the call can still block until the transport has accepted the message.
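For illustration, a one-way operation contract looks like this (IReportService and SubmitReportRequest are made-up names):

    using System.ServiceModel;

    [ServiceContract]
    public interface IReportService
    {
        // One-way operations must return void and cannot use out/ref
        // parameters; the client resumes once the transport has accepted
        // the message, not when the work is finished.
        [OperationContract(IsOneWay = true)]
        void SubmitReportRequest(string reportId);
    }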
I know this is an old post, but IMO in your scenario you should be using IsOneWay, on the basis that you don't care what the server result is. Depending on whether you eventually need to notify the client (e.g. of completion or failure of the server job), you might also need to look at changing the interface to use SessionMode=Required and then using a duplex binding.
Even if you did want to use asynchronous 2-way communication because your client DID care about the result, there are different concepts:
AsyncPattern=true on the Server - you would do this in order to free up server resources, e.g. if the underlying resource (SSRS for reporting, a mail API, etc.) supports asynchronous operations. But this would benefit the server, not the client.
On the client, you can always generate your service reference proxy with "Generate Asynchronous Operations" ticked - in which case your client won't block and the callback will be used when the operation is complete.
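For contrast, a sketch of a server-side AsyncPattern contract (same made-up service name as above; WCF pairs the Begin/End methods into one logical operation by naming convention):

    using System;
    using System.ServiceModel;

    [ServiceContract]
    public interface IReportService
    {
        // The Begin/End pair is exposed to clients as a single GenerateReport
        // operation; the server thread is freed between Begin and End.
        [OperationContract(AsyncPattern = true)]
        IAsyncResult BeginGenerateReport(string reportId, AsyncCallback callback, object state);

        string EndGenerateReport(IAsyncResult result);
    }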