NetScaler Monitors - load-balancing

I am trying to understand the differences between the NetScaler monitor types HTTP-ECV and TCP-ECV and their use-case scenarios. I want to understand the rationale behind using these monitors, since they both use a send string and expect a response from the server. When does one need to use TCP-ECV, and when HTTP-ECV?

Maybe you should begin by identifying your needs before choosing monitor types. The description of these monitors is fairly self-explanatory.
tcp-ecv:
Specific parameters: send [""] - the data that is sent to the service. The maximum permissible length of the string is 512 KB. recv [""] - the expected response from the service. The maximum permissible length of the string is 128 KB.
Process: The Citrix ADC appliance establishes a 3-way handshake with the monitor destination. When the connection is established, the appliance uses the send parameter to send specific data to the service and expects a specific response through the receive parameter. Different servers send different sizes of segments; however, the pattern must be within 16 TCP segments.
http-ecv:
Specific parameters: send [""] - the HTTP data that is sent to the service. recv [""] - the expected HTTP response data from the service.
Process: The Citrix ADC appliance establishes a 3-way handshake with the monitor destination. When the connection is established, the appliance uses the send parameter to send the HTTP data to the service and expects the HTTP response that the receive parameter specifies (the HTTP body, not including HTTP headers). Empty response data matches any response. The expected data may be anywhere in the first 24 KB of the HTTP body of the response.
As for web service monitoring (if that's what you need): if you want to ensure that certain HTTP headers are present in a response, use tcp-ecv. For HTML body checks, use http-ecv.
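A minimal configuration sketch along those lines on the NetScaler CLI; the monitor names, request strings, and expected strings below are placeholders, and quoting/escape handling of \r\n in the send string may vary by firmware build:

    add lb monitor mon_tcp_hdr TCP-ECV -send "GET /health HTTP/1.1\r\nHost: app.example\r\n\r\n" -recv "X-App-Healthy: true"
    add lb monitor mon_http_body HTTP-ECV -send "GET /health" -recv "status=OK"

The first marks the service up only if the expected header text appears somewhere in the raw response; the second looks for the string in the HTTP body, as described above.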

TCP-ECV - Layer 4 check - If you want to determine that a TCP port/socket is open, and you are happy with the service being marked as up once the TCP 3-way handshake completes and the TCP send() data elicits the expected TCP recv() response, then use TCP-ECV. This is simply a TCP layer 4 check; it has no application awareness.
HTTP-ECV - Layer 5 check - If a simple TCP check is not enough and you want to send an HTTP protocol message over the TCP connection once it is established, then use HTTP-ECV. This will send an HTTP protocol message over the TCP connection and wait for an HTTP response message back. Typically you would configure the check to treat a 200 OK as success and a 404/503 as failure.
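Conceptually (outside the appliance), the difference between the two checks looks roughly like this Python sketch; the host, port, request, and expected strings are hypothetical, and this only illustrates the idea rather than how the ADC implements it:

    import socket

    def tcp_ecv_style_check(host, port, send_bytes, expect_bytes, timeout=2.0):
        # Layer 4: complete the 3-way handshake, send raw bytes, look for a raw pattern.
        with socket.create_connection((host, port), timeout=timeout) as s:
            s.sendall(send_bytes)
            return expect_bytes in s.recv(4096)

    def http_ecv_style_check(host, port, request, expect=b"200 OK", timeout=2.0):
        # Layer 5: send an HTTP request and judge health from the HTTP response.
        with socket.create_connection((host, port), timeout=timeout) as s:
            s.sendall(request)
            status_line = s.recv(4096).split(b"\r\n", 1)[0]   # e.g. b"HTTP/1.1 200 OK"
            return expect in status_line

    # Hypothetical usage:
    # http_ecv_style_check("10.0.0.10", 80,
    #     b"GET /health HTTP/1.1\r\nHost: app.example\r\nConnection: close\r\n\r\n")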

Related

Why does Volley always send another packet 5 minutes after the last packet is sent?

I have an Android app that sends a request to the server via Volley.
The communication is fine, but I see unnecessary packets (no. 385-387) as below. These packets are always sent 5 minutes after the last packet is sent. Please let me know why the client sends these packets to the server and how I can avoid sending them.
client : 10.x.x.x
server : 52.x.x.x
Packets 385-387 are relevant here:
[image: Wireshark capture]

How can I use RabbitMQ between two applications when I can't change one of them?

I have an existing system consisting of two nodes, a client/server model.
I want to exchange messages between them using RabbitMQ. That is, the client would send all its requests to RabbitMQ and the server would listen to the queue indefinitely, consume any message that arrives, and then act upon it.
I can change the server as needed, but my problem is that I cannot change the client's behaviour. How can I send the response back to the client?
The client node understands HTTP request/response; what should I do after configuring the other application server to use RabbitMQ instead of talking to my app directly?
You can use the RPC model, or some internal convention such as storing the result in a database (or cache) under a known id and polling that storage for the result in a loop.
You will have to use a proxy server in between that appears to node 1 (the client you cannot change) to be the actual server, while it simply injects requests into the queuing server. You will also have to use two queues.
For clarity, let's enumerate the players in the system:
The client
The proxy server, a server that offers the same API as the actual server (but it doesn't do any work)
The actual server, the server that does the actual work
The input queue, the queue that client requests go into (the proxy server puts them there)
The output queue, the queue that server responses go into (the actual server puts them there)
A possible working scenario:
A client sends a request to the proxy server
The proxy server puts the request in input queue
The actual server (listening to the input queue) will fetch the request
The actual server processes the message
The actual server sends the response to the output queue
The proxy server (listening to the output queue) will fetch the response
The proxy server returns the response to the client
This might work, but a few problems could arise: because the proxy server doesn't know when the actual server will respond, and cannot be sure of the order of responses in the output queue, it may have to re-inject the messages it finds irrelevant back into the output queue until it finds the correct one.
Or, the proxy server might need to feed the response to the client later via an HTTP request to the client. That is, rather than receiving a response to its request, the client would expect no response for the request it sent, knowing that it will get the answer later via a request from the proxy server.
I'm not aware of the situation at your end, but this might work!
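As a minimal sketch of the RPC-style approach mentioned above, here is roughly what the proxy's publish-and-wait side could look like with the Python pika client. The queue names, the correlation id used to match responses (which also sidesteps the re-injection problem), and the polling loop are all assumptions for illustration:

    import time
    import uuid
    import pika  # assumed RabbitMQ client library

    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = connection.channel()
    channel.queue_declare(queue="input")   # requests the proxy injects
    channel.queue_declare(queue="output")  # responses from the actual server

    def forward_request(http_body: bytes) -> bytes:
        """Publish one client request and wait for its matching response."""
        corr_id = str(uuid.uuid4())
        channel.basic_publish(
            exchange="",
            routing_key="input",
            properties=pika.BasicProperties(correlation_id=corr_id),
            body=http_body,
        )
        while True:
            method, props, body = channel.basic_get(queue="output")
            if method is None:          # output queue is empty right now
                time.sleep(0.05)
                continue
            if props.correlation_id == corr_id:
                channel.basic_ack(method.delivery_tag)
                return body             # hand this back as the HTTP response
            # Someone else's response: put it back for them.
            channel.basic_nack(method.delivery_tag, requeue=True)

The actual server would consume from "input" and publish its result to "output", copying the correlation_id from the request properties.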

UDP packets not displayed in Wireshark

I have an embedded Jetty server (server1) which sends UDP packets as a trigger to receive a message from another server (server2). server1 then waits for a message from server2. This message is then validated and processed further.
While testing server1, I saw that sometimes server2 does not send messages on receiving a trigger from server1. So I tried analysing it with Wireshark.
When I set the capture filter to udp, it showed no packets, even though server1 received some packets.
But when I filtered on the IP address of the server the request is sent to, it showed some HTTP and TCP packets. What could be the reason?
The Jetty server (server1) is using DatagramPacket to send requests. Won't this be sent over UDP itself?
Is there any setting in Wireshark which is preventing it from displaying UDP messages?

recv() fails on UDP

I’m writing a simple client-server app which for the time being will be for my own personal use. I’m using Winsock for the net communication. I have not done any networking for the last 10 years, so I am quite rusty. I’d like to use as little external code as possible, so I have written a home-made server discovery mechanism, as follows.
The client broadcasts a message containing the ‘name’ of a client UDP socket bound to an arbitrary port, which I will call the client’s discovery socket. The server recv() the broadcast and then sendto() the client discovery socket the ‘name’ of its listening socket. The client then uses this info to connect to the server (on a different socket). This mechanism should allow the server to bind its listening socket to the first port it can within the dynamic port range (49152-65535) and the clients to discover where the server is and on which port it is listening.
The server part works fine: the server receives the broadcast messages and successfully sends its response.
On the client side the firewall log shows that the server’s response arrives at the machine and that it is addressed to the correct port (the client’s discovery socket).
But the message never makes it to the client app. I’ve tried doing a recv() in blocking and non-blocking mode, and there is never any data available. ioctlsocket() always shows no data available, even though I know the packet got to the machine.
The server succeeds in doing a recv() on the broadcast data, but the client fails to recv() the server’s response addressed to its discovery socket.
The question is, admittedly, very vague: what gotchas should I watch for in this scenario? Why would recv() fail to get a packet which has actually arrived at the machine? The sockets are UDP, so the fact that they are not connected is irrelevant. Or is it?
Many thanks in advance.
The client broadcasts a message containing the ‘name’ of a client UDP socket bound to an arbitrary port, which I will call the client’s discovery socket.
The message doesn't need to contain anything. Just broadcast an empty message from the 'discovery socket'. recvfrom() will tell the server where it came from, and it can just reply directly.
The server recv() the broadcast and then sendto() the client discovery socket the ‘name’ of its listening socket.
Fair enough, although actually the server could just broadcast its own TCP listening port every 5 seconds or whatever.
On the client side the firewall log shows that the server’s response arrives at the machine and that it is addressed to the correct port (the client’s discovery socket). But the message never makes it to the client app
If it got to the host it must get to the application. You must have got the ports mixed up somehow. Simplify it as above and retry.
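A minimal sketch of that simplified exchange, in Python sockets for brevity (the discovery port, payload format, and function names are arbitrary choices; the same sendto()/recvfrom() pattern applies to Winsock):

    import socket

    DISCOVERY_PORT = 50000  # assumed well-known discovery port

    def discover_server(timeout=2.0):
        # Client side: broadcast an empty datagram, then wait for the reply.
        # recvfrom() gives the server's address; the payload carries its TCP port.
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
            s.settimeout(timeout)
            s.sendto(b"", ("255.255.255.255", DISCOVERY_PORT))
            payload, (server_ip, _) = s.recvfrom(64)
            return server_ip, int(payload)    # connect the TCP socket here

    def answer_discovery(tcp_listening_port):
        # Server side: recvfrom() tells us where the broadcast came from,
        # so we can reply straight back to the client's discovery socket.
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.bind(("", DISCOVERY_PORT))
            _, client_addr = s.recvfrom(64)   # an empty message is fine
            s.sendto(str(tcp_listening_port).encode(), client_addr)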
Well, it was one of those stupid situations: Windows Firewall was active in addition to the other firewall, and was silently dropping packets. Deactivating it solved the problem.
But I still don’t understand how it works, as it was allowing the server to receive packets sent through broadcasting. And when I was at my wits’ end and set the server to answer back through a broadcast, THOSE packets got dropped.
Two days of frustration. I hope someone profits from my experience.

FIN pkt with HTTP connection after some time

I opened 2 TCP connections:
1. A normal connection (while implementing an echo server and client), and
2. An HTTP connection.
I opened the HTTP connection with a (modified) curl utility while running Apache as the server, where curl does not send the GET request for some time after connection establishment.
For the normal connection, after connection establishment the server waits for a request from the client.
But strangely, on the HTTP connection, if the GET request does not come from the client for some time after connection establishment, the server sends a FIN packet to the client and closes its side of the connection.
Is it mandatory for an HTTP client to send the GET request immediately after the initial connection?
Apache has a directive called Timeout.
Its manual page (Apache Core - Timeout Directive) states:
The TimeOut directive defines the length of time Apache will wait for I/O in various circumstances:
When reading data from the client, the length of time to wait for a TCP packet to arrive if the read buffer is empty.
When writing data to the client, the length of time to wait for an acknowledgement of a packet if the send buffer is full.
In mod_cgi, the length of time to wait for output from a CGI script.
In mod_ext_filter, the length of time to wait for output from a filtering process.
In mod_proxy, the default timeout value if ProxyTimeout is not configured.
I think you fell into case number one.
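If so, the directive to adjust lives in the main server configuration; the value below is only an example, not a recommendation or the default:

    # httpd.conf - seconds Apache will wait for the request (and other I/O)
    # on an otherwise idle connection
    Timeout 120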
EDIT
I was looking through the W3C HTTP document and found no reference to timeouts,
but in chapter 8 (Connections) I found:
8.1.4 Practical Considerations
Servers will usually have some time-out value beyond which they will no longer maintain an inactive connection. (...) The use of persistent connections places no requirements on the length (or existence) of this time-out for either the client or the server.
That sounds to me like "every server or client is free to choose its own behaviour regarding inactive-connection timeouts".