Does anyone have experience with how TCPCopy affects the performance of network transactions?

When looking at Process Monitor from Microsoft TechNet, I see TCP Send, Receive, and Copy events.
I notice that the Copy appears to be a replication of a transmission, but I don't understand why it happens sometimes and not others.
Screenshot of the TCPCopy issue

I believe the "TCPCopy" event indicates that the received TCP packet is being cloned/mirrored by another program (typically antivirus). From your screenshot, it looks like an antivirus is scanning Outlook (POP/SMTP/IMAP) and Firefox (HTTP/S) packets. Can you try disabling mail and web traffic scanning (or real-time scanning) in your antivirus and check whether you still see TCPCopy events in Process Monitor?

Related

Limit total size of in-flight IoT messages

I am using the IoT Hub device client SDK on an embedded device. The application sends telemetry messages to IoT Hub periodically. The device connects to a wireless router, which connects to the internet via its WAN port.
When the wireless router loses its internet connection, the device is not notified immediately about the disconnection; it takes about 60 s. Until then, the device continues to send telemetry messages with IoTHubDeviceClient_LL_SendEventAsync(), and all those messages get queued in the SDK layer and consume memory. Since this is an embedded device with limited resources, memory gets exhausted and the program is killed by a low-memory killer.
Is there a way to specify the total size of IoT messages that can be queued in the SDK layer, so that IoTHubDeviceClient_LL_SendEventAsync() fails immediately once that quota is exceeded?
Actually, this is needed for the normal scenario too. When the device sends a message, the message seems to be queued in a lower layer and flushed out at certain times, and I don't see any API that controls the flush. That creates another problem: even when there is an internet connection, the application has no control over how many messages are queued or for how long, and therefore no control over how much memory the process uses. On my device, a system monitor kills any process that uses too much memory.
The question is: what do you do if message sending fails because the queue is full? Do you then lose the information for lack of storage capacity? From an IoT perspective, I would recommend considering whether your device is a reliable IoT device that handles these edge cases as well. Knowing the limits of the device, and how long it can be without an internet connection, helps you mitigate these risks from your application rather than from the SDK.
From GitHub, the default sendMessageAsync method throws a timeout exception if your message sending fails, unless you have some kind of retry policy implemented (according to the documentation, the C SDK does not allow custom retry policies:
https://learn.microsoft.com/en-us/azure/iot-hub/iot-hub-reliability-features-in-sdks).
According to the documentation, in the case of a connection failure the SDK will, based on the retry policy (if you have set one), try to re-initiate the connection one way or another and queue the messages created in the meantime:
https://github.com/Azure/azure-iot-sdk-c/blob/master/doc/connection_and_messaging_reliability.md
So the expectation here is that the SDK does not take responsibility for memory limits; that is up to the application to handle. Since your device has limitations, I would recommend implementing your own queuing mechanism (perhaps with a no-retry policy set, to avoid queuing in the SDK). That way you control what happens when there is no internet connection, and you control the memory limits. Maybe your business case accepts that you calculate an average value and store 1 message instead of 50 over time, etc.
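A minimal sketch of such an application-level guard, assuming the C "LL" (single-threaded, so no locking around the counter) API; the quota value, the wrapper name, and the byte-counting scheme are all made up for illustration:

    #include <stdint.h>
    #include <string.h>
    #include "iothub_device_client_ll.h"
    #include "iothub_message.h"

    #define MAX_QUEUED_BYTES (64 * 1024)   /* hypothetical per-process quota */

    static size_t g_queued_bytes = 0;      /* handed to the SDK, not yet confirmed */

    /* The SDK calls this once the message is delivered or given up on. */
    static void on_send_confirmed(IOTHUB_CLIENT_CONFIRMATION_RESULT result, void *ctx)
    {
        (void)result;
        g_queued_bytes -= (size_t)(uintptr_t)ctx;   /* ctx carries the payload size */
    }

    /* Fail fast instead of letting the SDK queue grow without bound. */
    static IOTHUB_CLIENT_RESULT send_with_quota(IOTHUB_DEVICE_CLIENT_LL_HANDLE h,
                                                const char *payload)
    {
        size_t len = strlen(payload);
        if (g_queued_bytes + len > MAX_QUEUED_BYTES)
            return IOTHUB_CLIENT_ERROR;             /* reject: quota exceeded */

        IOTHUB_MESSAGE_HANDLE msg = IoTHubMessage_CreateFromString(payload);
        if (msg == NULL)
            return IOTHUB_CLIENT_ERROR;

        IOTHUB_CLIENT_RESULT r = IoTHubDeviceClient_LL_SendEventAsync(
            h, msg, on_send_confirmed, (void *)(uintptr_t)len);
        IoTHubMessage_Destroy(msg);                 /* the SDK keeps its own clone */
        if (r == IOTHUB_CLIENT_OK)
            g_queued_bytes += len;
        return r;
    }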
If that is not something you like, the documentation also says that you can set a timeout for the queue. It may not be a memory limit, but it is a timeout, so you may want to investigate this a bit deeper:
"There are two timeout controls in this system. An original one in the iothub_client_ll layer - which controls the "waiting to send" queue - and a modern one in the protocol transport layer - that applies to the "in progress" list. However, since IoTHubClient_LL_DoWork causes the Telemetry messages to be immediately* processed, sent and moved to the "in progress" list, the first timeout control is virtually non-applicable.
Both can be fine-tuned by users through IoTHubClient_LL_SetOption, and because of that removing the original control could cause a break for existing customers. For that reason it has been kept as is, but it will be re-designed when we move to the next major version of the product."
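If you want to experiment with that, the quoted timeout is exposed through IoTHubDeviceClient_LL_SetOption under the option name "messageTimeout" (OPTION_MESSAGE_TIMEOUT in iothub_client_options.h). A minimal sketch, assuming the value is given in milliseconds:

    #include "iothub_device_client_ll.h"
    #include "iothub_client_options.h"

    /* Drop any message still in the "waiting to send" queue after 30 s. */
    static void configure_message_timeout(IOTHUB_DEVICE_CLIENT_LL_HANDLE h)
    {
        tickcounter_ms_t timeout_ms = 30 * 1000;
        (void)IoTHubDeviceClient_LL_SetOption(h, OPTION_MESSAGE_TIMEOUT, &timeout_ms);
    }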

Having difficulty sending small lwIP packets immediately using the lwIP API

I am creating a server on an ST Cortex-M3 device. I am using the lwIP API and FreeRTOS. All is working, but the response time is way off. I am currently using lwIP 1.3.2 and FreeRTOS 7.3.
A single client connects to the server and must have some time-critical data sent to it frequently. These packets are on the order of 6 or so bytes; at other times, I am sending upwards of 20 KB.
The problem I am having is that these smaller packets seem to take forever to be sent. I assume this is because lwIP is waiting for more data to be enqueued to make transmissions more efficient. I cannot wait around for 2 or 3 seconds for the data to be sent; the client expects the data nominally within a few microseconds or milliseconds.
I have tried using lwip_send and lwip_write. (I understand that one is the same as the other with a flag passed at the end. I just had to try...) I have tried setting TCP_NODELAY on the socket, to no avail. I tried to set SO_SNDLOWAT to 1, but this always returned -1, so I do not think it is supported.
I do not want to redo all of my code using the TCP raw API. Is there a way to invoke the tcp_output() function outside of TCP raw mode? Is there any way to speed things up, or is this just how slow lwIP TCP is with small packets?
Any and all suggestions are welcome. Thanks.
--EDIT--
I would also like to add that once I am ready to transmit, I make sure that my TX task in FreeRTOS is at the highest priority. There are no other tasks running up to the point at which I call lwip_send/lwip_write.
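For reference, this is roughly how I am setting TCP_NODELAY through the lwIP socket API (a sketch only; error handling trimmed, and the option must be compiled into the lwIP build):

    #include "lwip/sockets.h"

    /* Disable Nagle on the accepted connection so tiny writes go out at once. */
    static void disable_nagle(int client_fd)
    {
        int flag = 1;
        if (lwip_setsockopt(client_fd, IPPROTO_TCP, TCP_NODELAY,
                            &flag, sizeof(flag)) < 0) {
            /* option not compiled into this lwIP build, or the call failed */
        }
    }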
I'm fairly experienced with bare-metal lwIP on Xilinx, and lwIP does not wait to send things out. It will pump packets out as fast as your interrupts are acknowledged, based on the Ethernet hardware. I've been using UDP only. What comes to mind, though, is that your problem might be on the receive end. If you are doing TCP, maybe those small packets are coming out late because you are having receive issues. What you need to do is find the lowest-level point in the code at which Ethernet transmits, and put a general-purpose output toggle there. Then also put a general-purpose output toggle where an Ethernet packet is received. Look at the signals on a scope. If that confirms your hypothesis, then move the output toggles around to narrow down the issue. Wash, rinse, and repeat until you have found where the issue is. It's crude and time-consuming, but this brute-force approach often solves many "impossible" embedded software problems through pure determination. Good luck!
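A sketch of the instrumentation I mean, with placeholder GPIO macros (the real register writes depend on your part; low_level_output is the conventional name for the driver's transmit hook in the ethernetif.c template):

    #include "lwip/netif.h"
    #include "lwip/pbuf.h"

    /* Placeholder macros: replace with the register writes for a spare pin. */
    #define DBG_TX_HIGH()  /* set the TX debug pin   */
    #define DBG_TX_LOW()   /* clear the TX debug pin */

    static err_t low_level_output(struct netif *netif, struct pbuf *p)
    {
        DBG_TX_HIGH();
        /* ... existing driver code that hands the pbuf chain to the MAC ... */
        DBG_TX_LOW();
        return ERR_OK;
    }

Do the same at the receive interrupt, then compare the two signals on a scope.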

Using WinPCap for UDP receiving

I would like to use the WinPCap library for "reliable" UDP receiving in my C++ application. All the examples I have found use this library for capturing and then processing afterwards. Is there any way (an example) to configure PCap for streaming mode, receiving only UDP on a user-defined port, or some other way to solve this? At the moment I have a reliable UDP server able to receive 0.5 Gb/s, but on a slower PC I get packet loss: I can see the packets in Ethereal but not in the application.
thanks
vsm
I assume that you have already tried all of the more standard methods of increasing the number of datagrams that you are able to process? Things like increasing the recv buffer size, speeding up the processing that you do per datagram, using IOCP to bring more threads to bear on the problem, or using RIO if you can target Windows 8?
If so, then using WinPCap might work, but it sounds like a bit of an extreme solution.
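For completeness, the recv-buffer tweak mentioned above is a single setsockopt call (Winsock); the size shown is only a starting point to tune against your datagram rate:

    #include <winsock2.h>

    /* Enlarge the per-socket receive buffer so bursts are absorbed by the
       kernel instead of being dropped. The 4 MB figure is illustrative. */
    void enlarge_recv_buffer(SOCKET s)
    {
        int bytes = 4 * 1024 * 1024;
        setsockopt(s, SOL_SOCKET, SO_RCVBUF, (const char *)&bytes, sizeof(bytes));
    }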
What you need to do is create a filter so that you only capture the datagrams that you are interested in. The docs include examples that use filters.
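A minimal sketch of that, assuming WinPCap's standard libpcap-style API; the device name, port, and snap length are placeholders:

    #include <stdio.h>
    #include <pcap.h>

    /* Open `device` and capture only UDP datagrams destined for `port`. */
    pcap_t *open_udp_capture(const char *device, int port)
    {
        char errbuf[PCAP_ERRBUF_SIZE];
        char filter[64];
        struct bpf_program prog;

        /* 65536-byte snaplen, non-promiscuous, 1 ms read timeout */
        pcap_t *p = pcap_open_live(device, 65536, 0, 1, errbuf);
        if (p == NULL)
            return NULL;

        sprintf(filter, "udp dst port %d", port);
        if (pcap_compile(p, &prog, filter, 1, 0) < 0 ||
            pcap_setfilter(p, &prog) < 0) {
            pcap_close(p);
            return NULL;
        }
        return p;
    }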
I have the server from here: http://www.gamedev.net/topic/533159-article-using-udp-with-iocp/. This code works with IOCP. It works fine on Windows XP; there is no problem receiving 0.5 Gb/s. But now on Win7 it is a little unreliable: sometimes there are packet-position errors. (My device generates UDP packets, and each payload contains a PacketNumber, a number that increases with each packet. When an error occurs I write all the packet numbers to a file, and I can see, for example: 10, 11, 290, 13, 14...) Are there any known differences between WinXP and Win7 for IOCP and multithreading? Or do you know of any free UDP server with IOCP processing?
In the processing loop I only add packets to a buffer and check their numbers.

Is GameKit's communication reliable with GKMatchSendDataReliable?

I'm working with GameKit.framework and I'm trying to create reliable communication between two iPhones.
I'm sending packets with the GKMatchSendDataReliable mode.
The documentation says:
GKMatchSendDataReliable
The data is sent continuously until it is successfully received by the intended recipients or the connection times out.
Reliable transmissions are delivered in the order they were sent. Use this when you need to guarantee delivery.
Available in iOS 4.1 and later. Declared in GKMatch.h.
I have experienced some problems on a bad Wi-Fi connection. GameKit does not declare the connection lost, but some packets never arrive.
Can I count on 100% reliable communication when using GKMatchSendDataReliable, or is Apple just using fancy names for something they didn't implement?
My users also complain that some data may be accidentally lost during the game. I wrote a test app and figured out that GKMatchSendDataReliable is not really reliable. On a weak internet connection (e.g. EDGE), some packets are regularly lost without any error from the Game Center API.
So the only option is to add an extra transport layer for truly reliable delivery.
I wrote a simple lib for this purpose: RoUTP. It saves all sent messages until an acknowledgement is received for each, resends lost messages, and buffers received messages when the sequence is broken.
In my tests the combination "RoUTP + GKMatchSendDataUnreliable" works even better than "RoUTP + GKMatchSendDataReliable" (and of course better than pure GKMatchSendDataReliable, which is not really reliable).
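RoUTP itself is written for iOS, but the bookkeeping behind any such layer is small. A language-neutral sketch in C of the sender side (every name here is hypothetical, not RoUTP's actual API):

    #include <stddef.h>
    #include <stdint.h>

    #define WINDOW 64            /* max unacknowledged messages in flight */

    typedef struct {
        uint32_t seq;            /* sequence number stamped on the message */
        double   last_sent;      /* timestamp of the last (re)send         */
        uint8_t  data[512];
        size_t   len;
        int      in_use;
    } pending_t;

    static pending_t unacked[WINDOW];   /* sent but not yet acknowledged */

    /* Sender: the peer acknowledged `seq`, so stop resending it. */
    void on_ack(uint32_t seq)
    {
        unacked[seq % WINDOW].in_use = 0;
    }

    /* Sender: call periodically; resend anything unacked for too long.
       `send_unreliable` stands in for GKMatchSendDataUnreliable. */
    void resend_stale(double now, double timeout,
                      void (*send_unreliable)(const uint8_t *, size_t))
    {
        for (int i = 0; i < WINDOW; i++) {
            if (unacked[i].in_use && now - unacked[i].last_sent > timeout) {
                send_unreliable(unacked[i].data, unacked[i].len);
                unacked[i].last_sent = now;
            }
        }
    }

The receive side is the mirror image: deliver messages whose sequence number is the next expected one, buffer anything that arrives ahead of it, and send an ACK for every message received.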
It is nearly 100% reliable, but maybe not what you need sometimes… For example, if you drop out of the network, all the stuff you sent via GKMatchSendDataReliable will still be delivered, in the order you sent it.
This is brilliant for turn-based games, for example, but if fast reaction is necessary, a device that drops out of the network will not simply forget the missed packets: it will receive all the now-late packets until it catches up to real time again.
The only case in which GKMatchSendDataReliable does not send the data is a connection timeout.
I think this would also be the case when you close the app.

Receiving SMS over SMPP

I have a project coming up where I need to send and receive messages through a specific mobile operator, which only provides an SMPP interface. The whole project will be a hosted website. I have already read quite a lot, but I do not yet quite understand what is actually needed on my side to use the protocol.
Should my application try to maintain a constant connection to the SMPP server?
Can I simply connect, send a message, and then disconnect?
Is receiving messages based on push or pull?
Thanks for the help.
SMPP is a peer-to-peer protocol. That means the SMS gateway (your side) and the SMSC (your mobile operator) need to have a proper bind/connection established. Even when there are no SMS or DLRs to send or receive, there is a continuous exchange of SMPP PDUs (enquire_link/enquire_link_resp) that ensures the bind is alive.
In detail, if you send an enquire_link PDU and get no response (enquire_link_resp), the bind is broken: your SMS won't be delivered (it will remain enqueued in your gateway store), and you won't receive MOs (incoming SMS) or DLRs (delivery reports). To re-establish the connection, you should re-initiate the bind.
So my answer would be that you need a constant connection to the SMSC.
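For illustration, the keepalive PDU itself is tiny: a bare 16-byte SMPP header. A sketch in C of building it (SMPP 3.4 field layout; the sequence counter is yours to manage):

    #include <stdint.h>

    /* Every SMPP header field is a 32-bit big-endian integer. */
    static void put_u32be(uint8_t *p, uint32_t v)
    {
        p[0] = (uint8_t)(v >> 24); p[1] = (uint8_t)(v >> 16);
        p[2] = (uint8_t)(v >> 8);  p[3] = (uint8_t)v;
    }

    void build_enquire_link(uint8_t pdu[16], uint32_t seq)
    {
        put_u32be(pdu + 0,  16);          /* command_length: header only */
        put_u32be(pdu + 4,  0x00000015);  /* command_id: enquire_link    */
        put_u32be(pdu + 8,  0);           /* command_status: always 0    */
        put_u32be(pdu + 12, seq);         /* sequence_number             */
    }

The SMSC answers with an enquire_link_resp (command_id 0x80000015) carrying the same sequence number; if no response arrives within your timeout, treat the bind as dead and reconnect.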
You state that you want to receive messages, so at least a bind_receiver is needed. Because you don't know when messages are going to come in, you will have to be constantly connected rather than disconnecting after each event.
With regard to your question about push or pull: this depends on how you solve the first problem. If you can build a solution that is constantly connected, the result will be push (the carrier pushes a message to you as soon as they receive it). If for some reason you cannot maintain a constant connection, you'll end up building a pull mechanism: you connect to the carrier every X seconds to see if they have a message waiting for you.
I do need to highlight two pitfalls though:
A number of carriers in the world do not store, or even accept, messages if you are not connected; therefore, depending on which carrier you interact with, you might be forced to use a continuous connection.
Most carriers do not allow you to open and close connections in quick succession: once you disconnect, you cannot reconnect for a time frame of X seconds.
Therefore a constant connection is really the way to go. Alternatively, you can look into a company like Nexmo, which will make an HTTP call to you every time a message arrives.
I'm not sure which language you're developing your application in, but if you use any of the popular languages (Java, PHP, Perl) there are modules out there that handle basic SMPP connectivity for you. A quick Google search for your language and "SMPP client" will give you a list of references.