Is it safe to send an Arc&lt;Mutex&lt;UnixDatagram&gt;&gt; over oneshot::channel between two threads? - rust-tokio

I am trying to send a socket reference between two threads:
1. The first thread holds an Arc<Mutex<UnixDatagram>> reference.
2. The second thread gets this reference, locks it, and sends some data to a remote destination.
I create a oneshot::channel and send a request to the first thread, which returns a clone of the Arc<Mutex> over the channel.
This is repeated for each request. The problem is that after a certain amount of time I am no longer able to get the reference. Does the channel lock resources while sending them?
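For reference, here is a minimal sketch of the setup described above, assuming tokio's UnixDatagram (a socket pair keeps the example self-contained); the channel shapes and task names are illustrative, not the asker's actual code:

```rust
use std::sync::Arc;

use tokio::net::UnixDatagram;
use tokio::sync::{mpsc, oneshot, Mutex};

#[tokio::main]
async fn main() -> std::io::Result<()> {
    // A connected pair so the example needs no filesystem paths.
    let (sock, peer) = UnixDatagram::pair()?;
    let shared = Arc::new(Mutex::new(sock));

    // Requests to the holder task carry a oneshot sender for the reply.
    let (req_tx, mut req_rx) =
        mpsc::channel::<oneshot::Sender<Arc<Mutex<UnixDatagram>>>>(16);

    // First task: holds the Arc and hands out clones on request.
    let holder = shared.clone();
    tokio::spawn(async move {
        while let Some(reply) = req_rx.recv().await {
            // Cloning the Arc only bumps a refcount; it never locks the Mutex.
            let _ = reply.send(holder.clone());
        }
    });

    // Second task: asks for the reference, locks it, and sends data.
    let sender = tokio::spawn(async move {
        let (tx, rx) = oneshot::channel();
        req_tx.send(tx).await.expect("holder task gone");
        let sock = rx.await.expect("holder dropped the reply sender");
        let guard = sock.lock().await; // lock released when `guard` drops
        guard.send(b"hello").await.expect("datagram send failed");
    });

    sender.await.unwrap();
    let mut buf = [0u8; 16];
    let n = peer.recv(&mut buf).await?;
    println!("received {n} bytes");
    Ok(())
}
```

Note that tokio's oneshot::Sender::send does not block, and cloning an Arc does not take the Mutex, so the channel itself does not lock the resource. If the reference stops arriving after a while, a likely suspect is the holder task having exited (its request channel closed) or a MutexGuard being held across an await somewhere else.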

Related

Delete message from Azure Storage Queue using REST API

I am trying to simply get a message off an Azure Storage Queue and then delete it using the REST API.
I can retrieve the message and get a pop receipt, but when I try to use this to delete the message, I keep getting, "The specified message does not exist."
In the documentation (https://learn.microsoft.com/en-us/rest/api/storageservices/delete-message2) it looks like you only have to supply the pop receipt; however, further down the page it says:
After a client retrieves a message with the Get Messages operation, the client is expected to process and delete the message. To delete the message, you must have two items of data returned in the response body of the Get Messages operation:
The message ID, an opaque GUID value that identifies the message in the queue.
A valid pop receipt, an opaque value that indicates that the message has been retrieved.
So this implies you do need to send the message ID as well, but nothing in the docs specifies where to place it.
The URI in the docs says to send DELETE to
https://myaccount.queue.core.windows.net/myqueue/messages/messageid?popreceipt=string-value and I have tried replacing messageid with the actual message ID from the GET, but this does not seem to be correct.
Has anyone used this and can explain why I always get "The specified message does not exist" when trying to DELETE the message off the queue or am I missing something?
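Per the URI above, the message ID is a path segment and the pop receipt is a query parameter. A minimal sketch of assembling that URL (Rust purely for illustration; the urlencoding crate and the sas placeholder are assumptions, and real requests also need authentication headers):

```rust
// Hypothetical helper: where the message ID and pop receipt belong in the
// Delete Message URL. `sas` stands in for whatever auth token you append.
fn delete_message_url(
    account: &str,
    queue: &str,
    message_id: &str,
    pop_receipt: &str,
    sas: &str,
) -> String {
    // Pop receipts can contain '+' and '/', so the value must be URL-encoded.
    format!(
        "https://{account}.queue.core.windows.net/{queue}/messages/{message_id}?popreceipt={}&{sas}",
        urlencoding::encode(pop_receipt)
    )
}

fn main() {
    // Placeholder values only; substitute what Get Messages returned.
    println!(
        "{}",
        delete_message_url("myaccount", "myqueue", "8d8f3e8a-...", "AgAAAA...", "sv=...")
    );
}
```

An unencoded pop receipt, or a pop receipt that has since been reissued by a later Get Messages call (see the answer below), are both common causes of the 404 "The specified message does not exist" response.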
When a Get Messages request dequeues a message, the message becomes invisible for a certain amount of time. In the meantime, if it is not deleted by the process that dequeued it, it becomes visible again (as said by Gaurav Mantri) and has a chance to be picked up by another process; i.e., when you perform a Get Messages operation after the visibility timeout has been reached, another message object is returned with a new pop receipt.
So check the time when you dequeue the message, and make sure you delete it before the timeout runs out, as sometimes the encoding process may take more time than the message invisibility timeout.
Also note that the maximum timeout interval for Queue service operations is 30 seconds. If the server timeout interval elapses before the service has finished processing the request, the service returns an error.
Once the message is dequeued again, its pop receipt is updated, so using the old one gives error code 404 (Not Found) because no message with a matching pop receipt will be found.
So please check by gradually increasing the invisibility timeout.
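That last suggestion is just a query parameter on the Get Messages call. A minimal sketch of building such a URL (Rust purely for illustration; the sas auth parameter is an assumed placeholder):

```rust
// Hypothetical helper: dequeue with a longer visibility timeout (in seconds),
// so the message stays invisible while your processing runs.
fn get_messages_url(account: &str, queue: &str, visibility_timeout_secs: u32, sas: &str) -> String {
    format!(
        "https://{account}.queue.core.windows.net/{queue}/messages?visibilitytimeout={visibility_timeout_secs}&{sas}"
    )
}

fn main() {
    println!("{}", get_messages_url("myaccount", "myqueue", 120, "sv=..."));
}
```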
References:
Queue getmessage fails - some messages only. (microsoft.com)
Setting timeouts for Queue service operations (REST API) - Azure Storage | Microsoft Docs

OpenSplice DDS subscriber does not receive data all the time

I have the following scenario when using OpenSplice DDS:
A publisher publishes a DDS topic instance continuously. A subscriber subscribes to the data using a NoOpDataReaderListener. My test runs both publisher and subscriber at the same time, and the on_data_available method of NoOpDataReaderListener in the subscriber prints out the data content using std::cout.
My problem is that the data is not printed out every time when the test is run repeatedly, even though the on_subscription_matched method of NoOpDataReaderListener is invoked on every run, indicating that the matching of publisher and subscriber is successful. But on_data_available does not always print the data content.
Even when I replace the data-content printing inside on_data_available with std::cout << "data do come in", that message doesn't appear every time, only in about 7 out of 10 runs (in 7 runs all DDS samples can be seen by the subscriber; in the other 3 runs no DDS samples come out of the subscriber at all). So I think it is a DDS problem rather than stdout not being flushed. Does anyone have an idea how to diagnose DDS communication channel problems?
Some more information: with Wireshark, I have observed the following:
it looks like whether a run is successful (the subscriber successfully getting data from the publisher) depends on whether the user unicast port in the publisher process sends data (RTPS DATA instead of just RTPS Heartbeat) to the user multicast locator (239.255.0.1:multicast_port) in the subscriber process.

Can RabbitMQ (or similar message queuing system) be used to single thread requests per user?

The issue is we have some modern web applications that are integrated with a legacy system that was never designed to support multiple concurrent requests from a single user. Basically there are certain types of requests that the legacy system can only handle one-at-a-time from a single user. It can handle multiple concurrent requests coming from different users, but for technical reasons cannot handle multiple from a single user. In these situations, the user's first request will complete successfully, but any subsequent requests from that same user that come in while the first request is still executing will fail.
Because our apps are Ajax-enabled and multi-tab/multi-browser friendly, and because there are multiple apps, there are certain scenarios where a user could wind up having more than one of these types of requests being sent to the legacy system at the same time.
I'm trying to determine if something like RabbitMQ could be positioned in front of the legacy system and leveraged to single-thread requests per user/IP. The thinking being that the web apps would send all requests to MQ, and they'd stack into per-user queues and pass on to the legacy system one at a time.
I don't know if there would be concerns about the potential number of queues this could create - we have a user-base of approx 4,000.
And I know we could somewhat address this in the web apps individually, but since there are multiple apps it'd be duplicating logic across them, and you'd still have the potential for two different apps to fire off concurrent requests.
Any feedback would be appreciated. Thanks!
I'm not sure a unique queue per user will work, as you would need a backend worker process listening for messages on each queue, and those workers would have to be created dynamically.
Below is one option, but it has a potential performance bottleneck: a single backend process handles all requests sequentially. You could use multiple worker processes, but then you wouldn't know whether one had completed before another, causing a race condition if your app requires a specific sequence of actions.
You could simply put all transactions (from all users) into a single queue and have a backend process pull off that queue and service the requests. If there needs to be a response back to the user once the request has been serviced, the worker process can respond to a separate queue with a correlationId that is used to send the response data back to the correct user, as sketched below.
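A minimal sketch of that single-queue shape, using a Tokio mpsc channel as a stand-in for the RabbitMQ queue (all names here are hypothetical; a real setup would publish to RabbitMQ instead):

```rust
use tokio::sync::mpsc;

// Hypothetical request shape; a real app would publish this to RabbitMQ.
#[derive(Debug)]
struct Request {
    user: String,
    correlation_id: String,
}

#[tokio::main]
async fn main() {
    // Stand-in for the single queue: every user's requests funnel into it.
    let (tx, mut rx) = mpsc::channel::<Request>(1024);

    // The lone backend worker drains requests strictly one at a time,
    // which is what serializes access to the legacy system.
    let worker = tokio::spawn(async move {
        while let Some(req) = rx.recv().await {
            // Call the legacy system here; a reply would go to a response
            // queue keyed by req.correlation_id (see the flow below).
            println!("servicing {} for {}", req.correlation_id, req.user);
        }
    });

    for i in 0..3 {
        tx.send(Request {
            user: "alice".into(),
            correlation_id: format!("c{i}"),
        })
        .await
        .unwrap();
    }
    drop(tx); // closing the queue lets the worker finish
    worker.await.unwrap();
}
```

The trade-off is visible in the sketch: one worker means strict ordering but also a single point of throughput, exactly the bottleneck noted above.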
I've done this before with ExpressJS apps, where the following flow would happen:
1. The user/process/ajax makes a request.
2. Express takes the payload from the request object and sends it to a RabbitMQ queue with a unique correlationId (e.g. a UUID).
3. Express then takes the response object and stores it in a responseStore object, with the key being the correlationId.
4. Meanwhile, a backend worker process pulls the item from the queue, does some work, and then sends a message to a different response queue with the same correlationId.
5. The ExpressJS application has a connection to the response queue; when it receives a message, it takes the correlationId from the response and looks for a response object stored with the same correlationId in the responseStore. If it finds it, it takes the payload from the message and does something like response.send(payload) or response.json(payload).
To do this, you should also have a mechanism that stores the creation time of each response object in the responseStore alongside the object itself. Then have a separate process check the responseStore and clean up old response objects after a certain timeout, in case the backend process fails to complete; a sketch of such a store follows.
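Here is a minimal sketch of that correlation-ID store and cleanup loop (Rust/tokio purely for illustration, since the original ExpressJS code isn't shown; every name here is hypothetical):

```rust
use std::collections::HashMap;
use std::sync::Arc;
use std::time::{Duration, Instant};

use tokio::sync::{oneshot, Mutex};

// Hypothetical responseStore: correlationId -> (creation time, reply handle).
type Store = Arc<Mutex<HashMap<String, (Instant, oneshot::Sender<Vec<u8>>)>>>;

// Step 3: register a pending request; the receiver resolves when the
// response-queue consumer sees a matching correlationId.
async fn register(store: &Store, correlation_id: String) -> oneshot::Receiver<Vec<u8>> {
    let (tx, rx) = oneshot::channel();
    store.lock().await.insert(correlation_id, (Instant::now(), tx));
    rx
}

// Step 5: called by the response-queue consumer.
async fn resolve(store: &Store, correlation_id: &str, payload: Vec<u8>) {
    if let Some((_, tx)) = store.lock().await.remove(correlation_id) {
        let _ = tx.send(payload);
    }
}

// The cleanup process: drop entries older than `timeout`.
async fn sweep(store: Store, timeout: Duration) {
    loop {
        tokio::time::sleep(timeout).await;
        store.lock().await.retain(|_, (created, _)| created.elapsed() < timeout);
    }
}

#[tokio::main]
async fn main() {
    let store: Store = Arc::new(Mutex::new(HashMap::new()));
    tokio::spawn(sweep(store.clone(), Duration::from_secs(30)));

    let rx = register(&store, "abc-123".into()).await;
    resolve(&store, "abc-123", b"pong".to_vec()).await; // the consumer side
    println!("{:?}", rx.await.unwrap());
}
```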
Look here for more info on RPC with RabbitMQ:
https://www.rabbitmq.com/tutorials/tutorial-six-javascript.html
Hope this helps.

Is it possible to have asynchronous processing

I have a requirement where I need to send continuous updates to my clients. The client is a browser in this case. We have some data which updates every second, so once a client connects to our server, we maintain a persistent connection and keep pushing data to the client.
I am looking for suggestions on the server-side implementation. Basically what I need is this:
1. A client connects to the server. I maintain the socket and metadata about the socket. The metadata describes what updates need to be sent to this client.
2. The server process then waits for new client connections.
3. Another process has the list of all open sockets, goes through each of them, and sends the updates if required.
Can we do something like this in an Apache module:
1. An Apache process gets the new connection. It maintains the state for the connection, keeps the state in some global memory, and returns to the root process to signal that it is done, so that the root can accept new connections.
2. Although the Apache process has returned status to the root process, it keeps executing in parallel, going through its global store and sending updates to the clients, if any.
So can an Apache process do these things:
1. Have more than one connection associated with it?
2. Asynchronously wait for new connections while at the same time processing the previous connections?
This is a complicated and inefficient model of updating. Your server will try to update clients that have closed down, and the server has to maintain all that client data and metadata (last update time, etc.).
Usually, for continuous updates, Ajax is used in a polling model. The client has a JavaScript timer; when it fires, it hits a service that provides updated data. The client continues to get updates at regular intervals without you having to write an Apache module.
Would this model work for your scenario?
More reasons to opt for poll instead of push: Periodic_Refresh
With a little patch to resume a SUSPENDED mpm_event connection, I've got an asynchronous Apache module working. With this you can do improved polling:
1. JavaScript connects to Apache and asks for an update;
2. if there are no updates, then instead of answering immediately the module uses SUSPENDED;
3. some time later, after an update or a timeout happens, a callback fires;
4. the callback gives an update (or a "no updates" message) to the client and resumes the connection;
5. the client goes to step 1, repeating the poll, which with Keep-Alive will reuse the same connection.
That way the number of round trips between the client and the server is decreased, and the client receives the update immediately. (This is known as Comet or Reverse Ajax, AFAIK.)
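The waiting half of that pattern, sketched outside Apache (Rust/tokio purely for illustration; all names are hypothetical): each poll request waits for either an update notification or a timeout, then answers.

```rust
use std::sync::Arc;
use std::time::Duration;

use tokio::sync::Notify;

// One waiter per poll: block until an update arrives or the timeout fires,
// mirroring steps 2-4 above (the SUSPENDED connection, later resumed).
async fn wait_for_update(updates: Arc<Notify>, timeout: Duration) -> &'static str {
    match tokio::time::timeout(timeout, updates.notified()).await {
        Ok(_) => "update",      // answer immediately with the update
        Err(_) => "no updates", // timed out; the client re-polls (step 5)
    }
}

#[tokio::main]
async fn main() {
    let updates = Arc::new(Notify::new());

    let n = updates.clone();
    tokio::spawn(async move {
        tokio::time::sleep(Duration::from_millis(100)).await;
        n.notify_one(); // an update happens somewhere in the server
    });

    println!("{}", wait_for_update(updates, Duration::from_secs(5)).await);
}
```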

WCF - starting a new thread asynchronously

Setup
WCF Service running in IIS 6
Caching - Enterprise.Caching
There's a business need to hold on to a message for x amount of time (cache it).
Another process will remove it from the cache. We may receive another message that removes this first message from the cache and prevents it from being processed.
One way that I thought of doing this is:
1. Receive message1 and put it in the cache for (x) minutes.
2. Start a new thread that expires in (x - 1) minutes.
3. Receive a second message that affects the first: remove the first message from the cache.
4. The thread expires.
5. If message1 still exists, forward it to the datastore.
Any suggestion would be greatly appreciated.
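A minimal sketch of that cancellable hold (in Rust/tokio purely for illustration; the original setup is WCF/.NET, and forward_to_datastore plus the TTL values are stand-ins):

```rust
use std::collections::HashMap;
use std::sync::Arc;
use std::time::Duration;

use tokio::sync::{oneshot, Mutex};

// Hypothetical cache: message ID -> cancellation handle for its pending forward.
type PendingMap = Arc<Mutex<HashMap<String, oneshot::Sender<()>>>>;

// Steps 1-2: hold `payload` for `ttl`; step 5 fires unless cancelled first.
async fn hold(pending: PendingMap, id: String, payload: Vec<u8>, ttl: Duration) {
    let (cancel_tx, cancel_rx) = oneshot::channel();
    pending.lock().await.insert(id.clone(), cancel_tx);
    tokio::spawn(async move {
        tokio::select! {
            _ = tokio::time::sleep(ttl) => {
                // Timer expired and nobody cancelled: forward to the datastore.
                if pending.lock().await.remove(&id).is_some() {
                    forward_to_datastore(&payload).await;
                }
            }
            _ = cancel_rx => {
                // Step 3: the second message removed it; drop the payload.
            }
        }
    });
}

// Step 3: called when the follow-up message arrives.
async fn cancel(pending: &PendingMap, id: &str) {
    if let Some(tx) = pending.lock().await.remove(id) {
        let _ = tx.send(());
    }
}

async fn forward_to_datastore(_payload: &[u8]) {
    // Assumption: the real datastore write goes here.
}

#[tokio::main]
async fn main() {
    let pending: PendingMap = Arc::new(Mutex::new(HashMap::new()));
    hold(pending.clone(), "msg-1".into(), b"payload".to_vec(), Duration::from_millis(200)).await;
    cancel(&pending, "msg-1").await; // the follow-up message arrives in time
    tokio::time::sleep(Duration::from_millis(300)).await; // let the timer task settle
}
```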
I'd probably spawn a dedicated thread that checked every N seconds to see if any messages are ready to be moved out of the cache.
One important thing about caching WCF Message objects: make sure you create a buffered copy of each Message before you store it (use Message.CreateBufferedCopy). With some transports, the WCF Message will actually point to the network stream, and the network will be blocked until the message is read.