Why does Volley always send another packet 5 minutes after the last packet is sent? - ssl

I have an Android app that sends a request to the server via Volley.
The communication is fine, but I see unnecessary packets (no. 385 ~ no. 387) as below. These packets are always sent 5 minutes after the last packet is sent. Please let me know why the client sends these packets to the server and how I can avoid sending them.
client : 10.x.x.x
server : 52.x.x.x
Packets 385-387 are relevant here:
[image of my Wireshark capture]
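For context, the client side is an ordinary Volley call along these lines; this is only an illustrative sketch (placeholder URL, context, and listeners), not the asker's actual code:

```java
import android.content.Context;
import android.util.Log;

import com.android.volley.Request;
import com.android.volley.RequestQueue;
import com.android.volley.toolbox.StringRequest;
import com.android.volley.toolbox.Volley;

public class ApiClient {
    // Fires a single GET request; the URL below is a placeholder standing in for
    // the 52.x.x.x server seen in the capture.
    public static void sendRequest(Context context) {
        RequestQueue queue = Volley.newRequestQueue(context);
        StringRequest request = new StringRequest(Request.Method.GET,
                "https://52.x.x.x/api/data",
                response -> Log.d("Volley", "Response: " + response),
                error -> Log.e("Volley", "Request failed", error));
        queue.add(request);
    }
}
```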

Related

NetScaler Monitors

I am trying to understand the differences between the NetScaler monitor types HTTP-ECV and TCP-ECV and their use-case scenarios. I want to understand the rationale behind using these monitors, since they both use a send string and expect a response from the server. When does one need to use TCP-ECV or HTTP-ECV?
Maybe you should begin by identifying your needs before choosing monitor types. The description of these monitors is pretty self-explanatory.
tcp-ecv:
Specific parameters: send [""] - the data that is sent to the service. The maximum permissible length of the string is 512 K bytes. recv [""] - the expected response from the service. The maximum permissible length of the string is 128 K bytes.
Process: The Citrix ADC appliance establishes a 3-way handshake with the monitor destination. When the connection is established, the appliance uses the send parameter to send specific data to the service and expects a specific response through the receive parameter. Different servers send different sizes of segments. However, the pattern must be within 16 TCP segments.
http-ecv:
Specific parameters: send [""] - HTTP data that is sent to the service; recv [""] - the expected HTTP response data from the service.
Process: The Citrix ADC appliance establishes a 3-way handshake with the monitor destination. When the connection is established, the appliance uses the send parameter to send the HTTP data to the service and expects the HTTP response that the receive parameter specifies (the HTTP body, not including HTTP headers). Empty response data matches any response. Expected data may be anywhere in the first 24K bytes of the HTTP body of the response.
As for web service monitoring (if that's what you need), if you want to ensure some HTTP header is present in a response, then use tcp-ecv. For HTML body checks, use http-ecv.
TCP-ECV - Layer 4 check - Use TCP-ECV if you want to determine that a TCP port/socket is open and you are happy with the service being marked as up once the TCP 3-way handshake completes, the TCP send() data is sent, and a TCP recv() response comes back. This is simply a TCP layer 4 check; it has no application awareness.
HTTP-ECV - Layer 5 check - If a simple TCP check is not enough and you want to send an HTTP protocol message over the TCP connection once it is established, then use HTTP-ECV. This will send an HTTP protocol control message over the TCP connection and will wait for an HTTP response message back. Typically you would configure the response check to treat a 200 OK as a success and a 404/503 as a failure.
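To make the distinction concrete, here is a rough sketch in plain Java (not NetScaler code; the host, port, path, timeouts, and expect strings are placeholders) of what the two probe styles boil down to: a raw send/expect over a freshly opened TCP connection versus an actual HTTP request whose response is checked for an expected marker.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class MonitorSketch {

    // TCP-ECV style: complete the 3-way handshake, push raw bytes, and look for an
    // expected byte pattern in the first chunk the service sends back. No HTTP awareness.
    static boolean tcpEcvProbe(String host, int port, String send, String expect) throws Exception {
        try (Socket s = new Socket()) {
            s.connect(new InetSocketAddress(host, port), 2000);
            s.setSoTimeout(2000);
            s.getOutputStream().write(send.getBytes(StandardCharsets.US_ASCII));
            byte[] buf = new byte[4096];
            int n = s.getInputStream().read(buf);
            return n > 0 && new String(buf, 0, n, StandardCharsets.US_ASCII).contains(expect);
        }
    }

    // HTTP-ECV style: same handshake, but the probe is an HTTP request and the expected
    // string is matched against the HTTP response (e.g. a "200 OK" status line).
    static boolean httpEcvProbe(String host, int port, String expect) throws Exception {
        try (Socket s = new Socket()) {
            s.connect(new InetSocketAddress(host, port), 2000);
            s.setSoTimeout(2000);
            OutputStream out = s.getOutputStream();
            out.write(("GET /health HTTP/1.1\r\nHost: " + host + "\r\nConnection: close\r\n\r\n")
                    .getBytes(StandardCharsets.US_ASCII));
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(s.getInputStream(), StandardCharsets.US_ASCII));
            StringBuilder response = new StringBuilder();
            String line;
            while ((line = in.readLine()) != null) {
                response.append(line).append('\n');
            }
            return response.indexOf(expect) >= 0;
        }
    }
}
```

For example, checking for "200 OK" in the HTTP probe's response mirrors the typical HTTP-ECV success criterion described above, while the raw probe only cares that some bytes containing the pattern came back.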

Netty loses UDP packets at the beginning of the communication

I have a strange problem with Netty 4.0.27. I'm building a system which has to receive a large number of incoming UDP packets (each packet is 28 bytes long) and save them to a file. I implemented it, and I also developed a simulator which sends packets to the server.
I found that the Netty server has huge packet loss at the beginning of the communication, but after a few thousand packets, the server receives all packets successfully. Unfortunately, I can't afford this amount of packet loss (for example, it loses 5-6 percent of the first 200k packets).
I think Netty adapts to high traffic, but I found nothing about this in the documentation. I also tried using ChannelOption's SO_RCVBUF option, but I had the same problem.
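No answer is reproduced here, but since the question mentions ChannelOption's SO_RCVBUF, this is roughly where that option (together with a fixed receive-buffer allocator) is usually wired into a Netty 4 UDP bootstrap. The buffer sizes, port, and handler body below are placeholders; this is only a sketch of the configuration, not a fix for the packet loss.

```java
import io.netty.bootstrap.Bootstrap;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelOption;
import io.netty.channel.FixedRecvByteBufAllocator;
import io.netty.channel.SimpleChannelInboundHandler;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.DatagramPacket;
import io.netty.channel.socket.nio.NioDatagramChannel;

public class UdpReceiver {
    public static void main(String[] args) throws Exception {
        NioEventLoopGroup group = new NioEventLoopGroup();
        try {
            Bootstrap b = new Bootstrap()
                    .group(group)
                    .channel(NioDatagramChannel.class)
                    // Ask the OS for a larger socket receive buffer (the size here is a guess;
                    // the effective value is capped by kernel limits such as net.core.rmem_max).
                    .option(ChannelOption.SO_RCVBUF, 4 * 1024 * 1024)
                    // Each datagram is 28 bytes, so a small fixed per-read allocation avoids
                    // relying on Netty's adaptive buffer sizing.
                    .option(ChannelOption.RCVBUF_ALLOCATOR, new FixedRecvByteBufAllocator(64))
                    .handler(new SimpleChannelInboundHandler<DatagramPacket>() {
                        @Override
                        protected void channelRead0(ChannelHandlerContext ctx, DatagramPacket msg) {
                            // Placeholder: hand the 28-byte payload off to the file writer.
                        }
                    });
            b.bind(9999).sync().channel().closeFuture().sync();
        } finally {
            group.shutdownGracefully();
        }
    }
}
```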

UDP packets not displayed in Wireshark

I have an embedded Jetty server (server1) which sends UDP packets as a trigger to receive a message from another server (server2). server1 then waits for a message from server2. This message is then validated and processed further.
While testing server1, I saw that sometimes server2 does not send messages on receiving a trigger from server1. So I tried analyzing it with Wireshark.
When I set the capture filter to UDP, it shows no packets, even though server1 received some packets.
But when I filter on the IP address of the server the request is sent to, it shows some HTTP and TCP packets. What could be the reason?
The Jetty server (server1) uses DatagramPacket for sending requests. Won't this be sent over UDP itself?
Is there any setting in Wireshark that is preventing it from displaying UDP messages?
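For reference, a DatagramPacket sent through a DatagramSocket (which is presumably what server1 does) goes on the wire as a plain UDP datagram, so it should be visible under a udp capture filter on the right interface. The sketch below is illustrative only, with a made-up payload, address, and port.

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;

public class UdpTrigger {
    public static void main(String[] args) throws Exception {
        byte[] payload = "TRIGGER".getBytes(StandardCharsets.US_ASCII); // made-up payload
        try (DatagramSocket socket = new DatagramSocket()) {
            // server2's address and port are placeholders here.
            DatagramPacket packet = new DatagramPacket(
                    payload, payload.length, InetAddress.getByName("192.0.2.10"), 5000);
            socket.send(packet); // leaves the host as a single UDP datagram
        }
    }
}
```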

recv() fails on UDP

I’m writing a simple client-server app which for the time being will be for my own personal use. I’m using Winsock for the net communication. I have not done any networking for the last 10 years, so I am quite rusty. I’d like to use as little external code as possible, so I have written a home-made server discovery mechanism, as follows.
The client broadcasts a message containing the ‘name’ of a client UDP socket bound to an arbitrary port, which I will call the client’s discovery socket. The server recv() the broadcast and then sendto() the client discovery socket the ‘name’ of its listening socket. The client then uses this info to connect to the server (on a different socket). This mechanism should allow the server to bind its listening socket to the first port it can within the dynamic port range (49152-65535), and the clients to discover where the server is and on which port it is listening.
The server part works fine: the server receives the broadcast messages and successfully sends its response.
On the client side the firewall log shows that the server’s response arrives to the machine and that it is addressed to the correct port (to the client’s discovery socket).
But the message never makes it to the client app. I’ve tried doing a recv() in blocking and non-blocking mode, and there is never any data available. ioctlsocket() always shows no data is available, even though I know the packet got to the machine.
The server succeeds in doing a recv() on broadcast data. But the client fails in doing a recv() of the server’s response, which is addressed to its discovery socket.
The question is very vague: what gotchas should I watch for in this scenario? Why would recv() fail to get a packet which has actually arrived at the machine? The sockets are UDP, so the fact that they are not connected is irrelevant. Or is it?
Many thanks in advance.
The client broadcasts a message containing the ‘name’ of a client UDP socket bound to an arbitrary port, which I will call the client’s discovery socket.
The message doesn't need to contain anything. Just broadcast an empty message from the 'discovery socket'. recvfrom() will tell the server where it came from, and it can just reply directly.
The server recv() the broadcast and then sendto() the client discovery socket the ‘name’ of its listening socket.
Fair enough, although actually the server could just broadcast its own TCP listening port every 5 seconds or whatever.
On the client side the firewall log shows that the server’s response arrives to the machine and that it is addressed to the correct port (to the client’s discovery socket). But the message never makes it to the client app
If it got to the host it must get to the application. You must have got the ports mixed up somehow. Simplify it as above and retry.
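The asker's code is Winsock, but the simplified flow suggested above (the client broadcasts an empty datagram from its discovery socket; the server replies straight back to the source address that recvfrom() reports) looks roughly like this in Java. The port, timeout, and reply payload are arbitrary and only for illustration.

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.net.SocketAddress;

public class Discovery {
    static final int DISCOVERY_PORT = 49200; // arbitrary port, for illustration only

    // Client: broadcast an empty datagram from the "discovery socket" and wait on the
    // same socket, so the server's reply comes straight back to this port.
    static SocketAddress findServer() throws Exception {
        try (DatagramSocket socket = new DatagramSocket()) {
            socket.setBroadcast(true);
            socket.setSoTimeout(3000);
            byte[] empty = new byte[0];
            socket.send(new DatagramPacket(empty, 0,
                    InetAddress.getByName("255.255.255.255"), DISCOVERY_PORT));
            byte[] buf = new byte[64];
            DatagramPacket reply = new DatagramPacket(buf, buf.length);
            socket.receive(reply);            // the equivalent of the failing recv()
            return reply.getSocketAddress();  // where the server answered from
        }
    }

    // Server: receive the broadcast and reply directly to whatever address and port the
    // datagram came from (what recvfrom() reports in the Winsock version).
    static void serveOnce() throws Exception {
        try (DatagramSocket socket = new DatagramSocket(DISCOVERY_PORT)) {
            byte[] buf = new byte[64];
            DatagramPacket request = new DatagramPacket(buf, buf.length);
            socket.receive(request);
            byte[] reply = "hello".getBytes(); // could carry the TCP listening port instead
            socket.send(new DatagramPacket(reply, reply.length, request.getSocketAddress()));
        }
    }
}
```

Keeping the client's send and receive on the same socket is what guarantees the reply targets the port the client is actually listening on.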
Well, it was one of those stupid situations: Windows Firewall was active, besides the other firewall, and silently dropping packets. Deactivating it solved the problem.
But I still don’t understand how it works, as it was allowing the server to receive packets sent through broadcasting. And when I was at my wits’ end and set the server to answer back through a broadcast, THOSE packets got dropped.
Two days of frustration. I hope someone profits from my experience.

Where are the datagrams if a client does not listen on a UDP port?

Suppose a client sends a number of datagrams to a server through my application. If my application on the server side stops working and cannot receive any datagrams, but the client still continues to send more datagrams to the server over the UDP protocol, where do those datagrams go? Will they stay in the server's OS buffer (or something)?
I ask this question because I want to know: if a client sends 1000 datagrams (1K each) to a PC over the internet, will those 1000 datagrams travel across the internet (consuming bandwidth) even if no one is listening for them?
If the answer is yes, how should I stop this from happening? I mean, if the server stops functioning, how can I use UDP to learn that and stop any further sending?
Thanks
I ask this question because I want to know: if a client sends 1000 datagrams (1K each) to a PC over the internet, will those 1000 datagrams travel across the internet (consuming bandwidth) even if no one is listening for them?
Yes
If the answer is yes, how should I stop this from happening? I mean, if the server stops functioning, how can I use UDP to learn that and stop any further sending?
You need a protocol-level control loop, i.e. you need to implement a protocol on top of UDP that takes care of this situation. UDP isn't connection-oriented, so it is up to the "application" that uses UDP to account for this failure mode.
UDP itself does not provide facilities to determine whether a message was successfully received by the client or not. You would need TCP to establish a reliable connection, and only after that send data over UDP.
The lowest-overhead solution would be a keep-alive type thing like jdupont suggested. You could also switch to TCP, which provides this facility for you.
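As a rough sketch of the kind of application-level control loop described above (the address, port, batch size, and the "PING" marker are all made up), the client can demand a periodic acknowledgement and stop sending once the server goes quiet:

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.net.SocketTimeoutException;

public class KeepAliveSender {
    public static void main(String[] args) throws Exception {
        InetAddress server = InetAddress.getByName("192.0.2.20"); // placeholder address
        int port = 6000;                                          // placeholder port

        try (DatagramSocket socket = new DatagramSocket()) {
            socket.setSoTimeout(2000); // how long to wait for the server's acknowledgement

            for (int i = 0; i < 1000; i++) {
                byte[] data = new byte[1024]; // the 1K datagram from the question
                socket.send(new DatagramPacket(data, data.length, server, port));

                // Every 100 datagrams, ask the server to prove it is still alive.
                if (i % 100 == 0) {
                    byte[] ping = "PING".getBytes();
                    socket.send(new DatagramPacket(ping, ping.length, server, port));
                    byte[] buf = new byte[16];
                    try {
                        socket.receive(new DatagramPacket(buf, buf.length));
                    } catch (SocketTimeoutException e) {
                        System.err.println("No reply from server, stopping send loop");
                        return; // stop wasting bandwidth on a dead receiver
                    }
                }
            }
        }
    }
}
```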