Simulate high load for an HTTP API client

I am writing a client for an HTTP API which is not yet publicly available.
Based on the specs I got, I have mocked up a server that simulates the API, to test how my client reacts.
This server is a very simple Rack application, which currently runs on WEBrick.
The API client interacts with this fake API and performs correctly in the different test cases.
Hopefully, I will just have to change the hostname in the config file when the API goes live.
However, I know for a fact that the API will be put under heavy load when it goes live. My client will thus most likely have to face:
HTTP timeouts
Jitter
Dropped TCP connections
503 Responses
...
I know that my client performs well in an ideal scenario, but how can I randomly (or not randomly) introduce these behaviors into my test cases, to verify that the client handles these errors correctly?
Is there some kind of reverse proxy that can be configured to simulate these errors while serving data from a stable server on a stable network (in my case: a local server on localhost)?

You can try NetLimiter (http://www.netlimiter.com/) to shape bandwidth.
On the other hand, to run more accurate simulations you need to control both the server and the client side.
For instance, to simulate a timeout condition your mock server can receive the request from the HTTP API client and then simply stop responding, triggering a timeout on the client side.
Another benefit of your own mock/test server is that you can emulate dropped TCP connections (just close a newly accepted client connection), 50X responses, invalid responses, protocol-breaking responses, and much more.
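For illustration, here is a minimal sketch of such a fault-injecting mock server in Python (the port, probabilities, and stall duration are arbitrary assumptions; the same idea carries over to a Rack app):

    import random
    import socket
    import time
    from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

    class FlakyHandler(BaseHTTPRequestHandler):
        """Hypothetical mock API: each request randomly hits a failure mode."""

        def do_GET(self):
            roll = random.random()
            if roll < 0.1:
                # Dropped TCP connection: shut the socket down without replying.
                self.connection.shutdown(socket.SHUT_RDWR)
                self.close_connection = True
            elif roll < 0.2:
                # Server under load.
                self.send_error(503, "Service Unavailable")
            elif roll < 0.3:
                # Jitter / timeout: stall longer than the client's timeout.
                time.sleep(35)
                self.send_response(200)
                self.end_headers()
            else:
                # Happy path.
                self.send_response(200)
                self.send_header("Content-Type", "application/json")
                self.end_headers()
                self.wfile.write(b'{"status": "ok"}')

    if __name__ == "__main__":
        ThreadingHTTPServer(("localhost", 8080), FlakyHandler).serve_forever()

Pointing the client at this server exercises its timeout, retry, and error-handling paths, while the stable happy-path tests stay untouched.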

Related

Is it normal for TCP requests to randomly "get lost in the internet"?

I created and manage a SOAP API built in ASP.NET ASMX. The API processes about 10,000 requests per day. Most days, about 3 requests sent by the client (we only have one client) do not reach the web server (IIS). There is no discernible pattern.
We are actually using 2 web servers that sit behind a load balancer. From the IIS logs, I am 100% confident that the requests are not reaching either web server.
The team that manages the network and the load balancer has not been able to "confirm or deny" whether the problem is occurring at the load balancer. They suggested it's normal for requests to sometimes "get lost in the internet", and said that we should add retry logic to the API.
The requests are using TCP (and TLS). The client has confirmed that there is no problem occurring on their end.
My question is: is it normal for TCP requests to "get lost in the internet" at the frequency we are seeing (about 3 out of 10,000 per day)?
BTW, both the web server and the client are located in the same country. For what it's worth, the country in question is an anglosphere country, so it's not the case that our internet infrastructure is shoddy.
There is no such thing as a TCP request getting lost, since there is no such thing as a TCP request in the first place. There is a TCP connection, within it a TLS tunnel, and within that the HTTP protocol is spoken - and only at this HTTP level is there a concept of request and response, which is then visible in the server logs.
Problems can occur in many places: failing to establish the TCP connection in the first place due to no route (i.e. no internet) or too much packet loss; random problems at the TLS level caused by bit flips, which cause integrity errors and thus a closed connection; problems at the HTTP level, for example when using HTTP keep-alive and the server closes an idle connection at the same time the client is trying to send another request. And probably more places.
The client has confirmed that there is no problem occurring on their end.
I have no idea what exactly this means. "No problem" would mean the client sends the request and gets a response. But this is obviously not the case here, so the client is either failing to establish the TCP connection, failing at the TLS level, failing while sending the request, failing while reading the response, getting timeouts ... - or maybe the client is simply ignoring some errors, so no problem is visible at the client's end.
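If retry logic does get added, as the network team suggested, it should retry only on transient failures and back off between attempts. A minimal Python sketch (endpoint, attempt count, and delays are placeholder assumptions; note that blindly retrying non-idempotent POSTs can create duplicates):

    import time
    import urllib.request
    import urllib.error

    def request_with_retries(url, data, attempts=4, base_delay=0.5):
        """POST with exponential backoff on transient network errors."""
        for attempt in range(attempts):
            try:
                req = urllib.request.Request(url, data=data, method="POST")
                with urllib.request.urlopen(req, timeout=30) as resp:
                    return resp.read()
            except (urllib.error.URLError, TimeoutError):
                if attempt == attempts - 1:
                    raise  # Out of retries: surface the error to the caller.
                # Back off 0.5s, 1s, 2s, ... before the next attempt.
                time.sleep(base_delay * (2 ** attempt))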

What is the purpose of decrypting data at both the load balancer and then the web server?

I heard that to relieve the web server of the burden of performing SSL termination, it is moved to the load balancer, and an HTTP connection is then made from the LB to the web server. However, to ensure security, an accepted practice is to re-encrypt the data on the LB before transmitting it to the web server. If we are eventually sending encrypted data to the web servers anyway, what is the purpose of having the LB terminate SSL in the first place?
A load balancer spreads the load over multiple backend servers so that each backend server takes only a part of the load. This balancing can be done in a variety of ways, depending on the requirements of the web application:
If the application is fully stateless (like only serving static content), each TCP connection can be sent to an arbitrary server. In this case no SSL inspection is needed, since the decision does not depend on the content of the traffic.
If the application is instead stateful, the decision which backend to use might be based on the session cookie, so that requests end up at the same server as previous requests for that session. Since the session cookie is part of the encrypted content, SSL inspection is needed. Note that in this case a simpler approach can often be used, like basing the decision on the client's source IP address, thus avoiding the costly SSL inspection.
Sometimes load balancers also do more than just balance the load. They might incorporate security features, like a Web Application Firewall, or sanitize the traffic, or similar. These features work on the content, so SSL inspection is needed.
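As a sketch, a hypothetical HAProxy configuration that terminates TLS (so it can read the session cookie for persistence) and then re-encrypts traffic to the backends might look like this (certificate path, names, and addresses are placeholders):

    frontend https-in
        bind *:443 ssl crt /etc/haproxy/site.pem   # TLS terminated here
        default_backend webservers

    backend webservers
        balance roundrobin
        cookie SRV insert indirect nocache          # cookie-based persistence
        server web1 10.0.0.11:443 ssl verify none check cookie web1
        server web2 10.0.0.12:443 ssl verify none check cookie web2

The "ssl" keyword on the server lines is the re-encryption step; "verify none" is only for brevity here, and a production setup would verify the backend certificates.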

How can I broadcast UDP packets to the browser?

I am a beginner.
I am trying to broadcast data to browsers on the local network (same router, by sending to the x.x.x.255 broadcast address).
I need to implement a real-time streaming service for browsers on the local network.
But traffic gets high as the number of client browsers increases.
Broadcasting data seems to require the UDP protocol.
But web browsers are based on TCP.
So I investigated WebRTC, which is based on UDP.
But I don't really know how to use it.
Is it possible to broadcast data to a web browser like Chrome on the local network?
If not, why is it impossible to implement? Is it just because of the hazard of DDoS? How can I solve this high-traffic problem?
(Traffic really does get high when each client responds to every piece of data from the server (TCP), or when the server sends the same data to every client, once per client, instead of broadcasting.
I just want the server to send one broadcast datagram packet to the local network, and have each local client receive that same data from the server without responding to it.)
From a web app (not a modified web browser itself), you cannot create or manipulate raw (UDP/TCP) sockets. The sandboxing and other security mechanisms won't let you.
With WebRTC, you will need to perform a handshake and use ICE.
=> You cannot push to a peer knowing only their IP/port.
=> You have to have the receiver accept and acknowledge the transfer.
You might have more luck with WebSockets, but that requires additional mechanisms as well, and not all parties will be able to support WebSockets (or accept the upgrade from HTTP to WS).
For illustration purposes, you can look at Feross's work on web-based BitTorrent; he has exactly the same problems. https://github.com/feross/webtorrent
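To make the WebSocket route concrete, here is a minimal fan-out sketch in Python using the third-party websockets package, version 10 or later (an assumption; the port and message are placeholders). It is not true UDP broadcast, but the server sends each frame once per connected browser, with no reply expected:

    import asyncio
    import websockets  # third-party: pip install websockets

    CLIENTS = set()

    async def handler(ws):
        CLIENTS.add(ws)              # register each connecting browser
        try:
            async for _ in ws:       # ignore incoming messages, keep it open
                pass
        finally:
            CLIENTS.discard(ws)

    async def main():
        async with websockets.serve(handler, "0.0.0.0", 8765):
            while True:
                # One logical "broadcast": fan the same frame out to everyone.
                for ws in set(CLIENTS):
                    try:
                        await ws.send("tick")
                    except websockets.ConnectionClosed:
                        pass
                await asyncio.sleep(1)

    asyncio.run(main())

In the browser, each client simply opens a WebSocket to the server and listens for messages without sending anything back.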

Slow response of POST method of HTTPS requests using the Erlang httpc module

Our application (which uses stock Erlang OTP R15B01 modules) sends HTTPS requests to an external authentication server, gets replies, and seems to work fine in normal cases. But under heavy load some requests fail because they take too long to complete the SSL handshake.
I have observed the following things during the SSL handshake:
The client (our application) takes nearly 80 seconds to send its certificate after the server hello (with the server certificate) is done.
Our server expects the request-response to complete within 30 seconds, and otherwise drops the connection; this results in connection failures and severely affects the performance of the application.
Finally, I would like to know:
Is our application failing to supply the client certificate quickly? I mean, does the httpc module perform file I/O operations to load the certificates, which would result in slow responses under heavy load?
Does Erlang have any limitations in its SSL handshake procedure?

All JMeter requests going to only one server with HAProxy

I'm using JMeter to load test my web application. I have two web servers and we are using HAProxy for load balancing. All my tests are running fine and configured correctly. I have three JMeter remote clients so I can run my tests distributed. The problem I'm facing is that ALL my JMeter requests are being processed by only one of the web servers. For some reason it's not balancing, and I'm getting many timeouts and huge response times. I've looked around a lot for a way to get these requests balanced, but I'm having no luck so far. Does anyone know what could be the cause of this behavior? Please let me know if you need to know anything about my environment and I will provide the answers.
Check your HAProxy configuration:
What is its load-balancing policy? If it is not round-robin, is it based on source IP or some other information that might be common to your 3 remote machines?
Are you sure load balancing is working correctly? Try testing with a browser first; if you can, add some information identifying the web server to the response, for debugging.
Check your test plan:
Are you sure you don't have a hardcoded session ID somewhere in your requests?
How many threads did you configure?
In your JMeter script, the HTTP Request "Use KeepAlive" option is checked by default.
Keep-Alive is a header that maintains a persistent connection between client and server, preventing the connection from breaking intermittently. Also known as HTTP keep-alive, it can be defined as a method that allows the same TCP connection to be reused for HTTP communication instead of opening a new connection for each new request.
This may cause all requests to go to the same server. Just uncheck the option, save, stop your script, and re-run.
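Alternatively, if you want to keep Keep-Alive enabled on the JMeter side, HAProxy can be told to rebalance per request instead of per connection. A hypothetical haproxy.cfg sketch (server names and addresses are placeholders):

    backend webservers
        balance roundrobin           # rotate requests across backends
        option http-server-close     # close the server-side connection after
                                     # each response so every request is
                                     # load-balanced again
        server web1 10.0.0.11:80 check
        server web2 10.0.0.12:80 check

With "option http-server-close" the client-side connection stays persistent while HAProxy still gets to pick a backend for each individual request.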