HTTPClient intermittently locking up server - asp.net-core

I have a .NET Core 2.2 app which has a controller acting as a proxy to my APIs.
JavaScript makes a fetch call to the proxy, and the proxy forwards the call on to the APIs and returns the response.
I am experiencing intermittent lock-ups on the proxy app while it is awaiting the response from the HttpClient. When this happens it locks up the entire server; no more requests will be processed.
According to the logs of the API being proxied to, it is returning fine.
To reproduce this I have to make 100+ requests in a loop on the client through the proxy, then reload the page multiple times while the 100 requests are in flight. It usually takes around 5 reloads before things start slowing down.
The proxy will lock up waiting for an awaited request to resolve. Sometimes it comes back after a 4-5 second delay, other times after a minute. Most of the time I haven't waited longer than 10 minutes before giving up and killing the proxy.
I've distilled the code down to the following block that will reproduce the issue.
I believe I'm following best practices: it's async all the way down, I'm using IHttpClientFactory to enable sharing of HttpClient instances, and I'm applying using blocks where I believe they are required.
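For context, the factory is registered with the stock ASP.NET Core call; a minimal sketch of the assumed Startup wiring:
// Startup.cs (sketch) - registers IHttpClientFactory so it can be
// constructor-injected into the ProxyController below.
public void ConfigureServices(IServiceCollection services)
{
    services.AddHttpClient(); // from Microsoft.Extensions.Http
    services.AddMvc();
}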
The implementation was based on this: https://github.com/aspnet/AspLabs/tree/master/src/Proxy
I'm hoping I'm making a rather obvious mistake that others with more experience can pinpoint!
Any help would be greatly appreciated.
namespace Controllers
{
    [Route("/proxy")]
    public class ProxyController : Controller
    {
        private readonly IHttpClientFactory _factory;

        public ProxyController(IHttpClientFactory factory)
        {
            _factory = factory ?? throw new ArgumentNullException(nameof(factory));
        }

        [HttpGet]
        [Route("api")]
        public async Task ProxyApi(CancellationToken requestAborted)
        {
            // Build API specific URI
            var uri = new Uri("");

            // Get headers from request
            var headers = Request.Headers.ToDictionary(x => x.Key, y => y.Value);
            headers.Add(HeaderNames.Authorization, $"Bearer {await HttpContext.GetTokenAsync("access_token")}");

            // Build proxy request message. This is within a service
            var message = new HttpRequestMessage();
            foreach (var header in headers)
            {
                message.Headers.Add(header.Key, header.Value.ToArray());
            }
            message.RequestUri = uri;
            message.Headers.Host = uri.Authority;
            message.Method = new HttpMethod(Request.Method);

            requestAborted.ThrowIfCancellationRequested();

            // Generate client and issue request
            using (message)
            using (var client = _factory.CreateClient())
            // **Always hangs here when it does hang**
            using (var result = await client.SendAsync(message, requestAborted).ConfigureAwait(false))
            {
                // Apply data from the upstream response onto this response - again this is within a service
                Response.StatusCode = (int)result.StatusCode;
                foreach (var header in result.Headers)
                {
                    Response.Headers[header.Key] = header.Value.ToArray();
                }

                // SendAsync removes chunking from the response. This removes the header so the client doesn't expect a chunked response.
                Response.Headers.Remove("transfer-encoding");

                requestAborted.ThrowIfCancellationRequested();

                // Copy the upstream response body to this response's body
                using (var responseStream = await result.Content.ReadAsStreamAsync())
                {
                    await responseStream.CopyToAsync(Response.Body, 81920);
                }
            }
        }
    }
}
EDIT
So I modified the code to remove the usings and return the proxied response directly as a string instead of streaming, and I am still getting the same issues.
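Roughly, the modified variant looks like this (a minimal sketch of the change described, not the exact code; BuildMessage and the route are hypothetical stand-ins for the message construction shown above):
// Sketch of the non-streaming variant: no usings, and the upstream body
// is buffered as a string instead of being copied to Response.Body.
[HttpGet("api-string")] // hypothetical variant route
public async Task<string> ProxyApiAsString(CancellationToken requestAborted)
{
    var message = BuildMessage(); // assumed helper wrapping the construction above
    var client = _factory.CreateClient();
    var result = await client.SendAsync(message, requestAborted);
    Response.StatusCode = (int)result.StatusCode;
    return await result.Content.ReadAsStringAsync();
}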
When running netstat I do see a lot of entries for the URL of the proxied API.
4 rows mention the IP of the API being proxied to, and probably another 20 rows mention the IP of the proxy site. Those numbers don't seem odd to me, but I don't have much experience with netstat (first time I've ever fired it up).
Also, I have left the proxy running for about 20 minutes and it is technically still alive: responses are coming back, just with a very long delay between the proxied API returning data and the HttpClient resolving. However, it won't service any new requests; they just sit there hanging.

Related

Design Minimal API and use HttpClient to post a file to it

I have a legacy system interfacing issue that my team has elected to solve by standing up a .NET 7 Minimal API which needs to accept a file upload. It should work for small and large files (let's say at least 500 MiB). The API will be called from a legacy system using HttpClient in a .NET Framework 4.7.1 app.
I can't quite seem to figure out how to design the signature of the Minimal API and how to call it with HttpClient in a way that totally works. It's something I've been hacking at on and off for several days, and haven't documented all of my approaches, but suffice it to say there have been varying results involving, among other things:
4XX and 500 errors returned by the HTTP call
An assortment of exceptions on either side
Calls that throw and never hit a breakpoint on the API side
Calls that get through but the Stream on the API end is not what I expect
Errors being different depending on whether the file being uploaded is small or large
Text files being persisted on the server that contain some of the HTTP headers in addition to their original contents
On the Minimal API side, I've tried all sorts of things in the signature (IFormFile, Stream, PipeReader, HttpRequest). On the calling side, I've tried several approaches (messing with headers, using the Flurl library, various content encodings and MIME types, multipart, etc).
This seems like it should be dead simple, so I'm trying to wipe the slate clean here, start with an example of something that partially works, and hope someone might be able to illuminate the path forward for me.
Example of Minimal API:
// IDocumentStorageManager is an injected dependency that takes an int and a Stream and returns a string of the newly uploaded file's URI
app.MapPost(
"DocumentStorage/CreateDocument2/{documentId:int}",
async (PipeReader pipeReader, int documentId, IDocumentStorageManager documentStorageManager) =>
{
using var ms = new MemoryStream();
await pipeReader.CopyToAsync(ms);
ms.Position = 0;
return await documentStorageManager.CreateDocument(documentId, ms);
});
Call the Minimal API using HttpClient:
// filePath is the path on local disk, uri is the Minimal API's URI
private static async Task<string> UploadWithHttpClient2(string filePath, string uri)
{
    using (var fileStream = File.Open(filePath, FileMode.Open))
    using (var httpClient = new HttpClient())
    {
        var content = new StreamContent(fileStream);
        var httpRequestMessage = new HttpRequestMessage(HttpMethod.Post, uri);
        httpRequestMessage.Content = content;
        httpClient.Timeout = TimeSpan.FromMinutes(5);
        var result = await httpClient.SendAsync(httpRequestMessage);
        return await result.Content.ReadAsStringAsync();
    }
}
In the particular example above, a small (6-byte) .txt file is uploaded without issue. However, a large (619 MiB) .tif file runs into problems on the call to httpClient.SendAsync, which results in the following set of nested exceptions:
System.Net.Http.HttpRequestException - "Error while copying content to a stream."
System.IO.IOException - "Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.."
System.Net.Sockets.SocketException - "An existing connection was forcibly closed by the remote host."
What's a decent way of writing a Minimal API and calling it with HttpClient that will work for small and large files?
Kestrel allows uploading 30 MB by default.
To upload larger files via Kestrel you might need to increase the max size limit. This can be done by adding the RequestSizeLimit attribute. For example, for 1 GB:
app.MapPost(
    "DocumentStorage/CreateDocument2/{documentId:int}",
    [RequestSizeLimit(1_000_000_000)] async (PipeReader pipeReader, int documentId) =>
    {
        using var ms = new MemoryStream();
        await pipeReader.CopyToAsync(ms);
        ms.Position = 0;
        return "";
    });
You can also remove the size limit globally by setting
builder.WebHost.UseKestrel(o => o.Limits.MaxRequestBodySize = null);
This answer is good, but the RequestSizeLimit attribute doesn't work for minimal APIs; it's an MVC filter. You can use the IHttpMaxRequestBodySizeFeature to limit the size (assuming you're not running on IIS). Also, I made a change to accept the body as a Stream, which avoids the MemoryStream copy before calling the CreateDocument API:
app.MapPost(
    "DocumentStorage/CreateDocument2/{documentId:int}",
    async (Stream stream, int documentId, IDocumentStorageManager documentStorageManager) =>
    {
        return await documentStorageManager.CreateDocument(documentId, stream);
    })
    .AddEndpointFilter((context, next) =>
    {
        const int MaxBytes = 1024 * 1024 * 1024;
        // The limit can only be raised while it is still writable (i.e. before the body is read)
        var maxRequestBodySizeFeature = context.HttpContext.Features.Get<IHttpMaxRequestBodySizeFeature>();
        if (maxRequestBodySizeFeature is not null and { IsReadOnly: false })
        {
            maxRequestBodySizeFeature.MaxRequestBodySize = MaxBytes;
        }
        return next(context);
    });
If you're running on IIS, see https://learn.microsoft.com/en-us/iis/configuration/system.webserver/security/requestfiltering/requestlimits/#configuration
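That page configures the IIS-side limit via requestFiltering in web.config; a sketch matching the 1 GB example above:
<configuration>
  <system.webServer>
    <security>
      <requestFiltering>
        <!-- maxAllowedContentLength is in bytes; 1073741824 = 1 GiB -->
        <requestLimits maxAllowedContentLength="1073741824" />
      </requestFiltering>
    </security>
  </system.webServer>
</configuration>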

Nginx reverse proxy - 405 POST

I have an ASP.NET Core Web API application which uses an "x-api-key" HTTP header to authorize the person sending a request. I've set up an action filter as follows:
public void OnActionExecuting(ActionExecutingContext context)
{
    // Retrieve record with specified api key
    var api = dbContext.Apis
        .Include(a => a.User)
        .FirstOrDefault(a => a.Key.Equals(context.HttpContext.Request.Headers["x-api-key"]));

    // Check if record exists
    if (api is null)
    {
        context.Result = new UnauthorizedResult(); // short circuit and return 401
    }
}
It works as expected on both GET and POST requests without the nginx proxy. However, as soon as I add nginx, I receive 405 Not Allowed on POST requests if the API key is invalid, but 401 on GET (if the API key is valid, the filter works as expected and passes execution to the controller). Here is my proxy configuration:
server {
    listen 80;

    location / {
        proxy_pass http://ctoxweb:5000;
    }
}
(Both nginx and the Web API are set up using Docker.) What's the problem and how do I fix it?
I managed to fix this problem, though I don't know exactly why it happens. I suppose it's somehow related to nginx not allowing any method except GET on static content. My best guess is that nginx assumes the empty response body (which comes from new UnauthorizedResult()) is static content, even though it's clearly supplied by the backend. The fix is as easy as supplying some object for the response body, for example:
if (api is null)
{
    context.HttpContext.Response.ContentType = "application/json";
    context.Result = new UnauthorizedObjectResult("{\"info\":\"no api key header present\"}");
}

Service Fabric Asp.net Core Kestrel HttpClient hangs with minimal load

I have a barebones Service Fabric application hosting an ASP.NET Core 1.1 Web API, with Azure Application Gateway as a reverse proxy, on a virtual machine scale set of 5 DS3_V2 instances.
The API has 10 HttpClients with different URLs injected via dependency injection.
A simple foreach loop in a method calls the 10 HttpClients in parallel:
var cts = new CancellationTokenSource();
cts.CancelAfter(600);

// Logic for asynchronously calling the Call method below in parallel

public async Task<MyResponse> Call(CancellationTokenSource cts, HttpClient client, string endpoint)
{
    endpoint = "finalPartOfTheEndpoint";
    var jsonRequest = "jsonrequest";
    try
    {
        var content = new StringContent(jsonRequest, Encoding.UTF8, "application/json");
        await content.LoadIntoBufferAsync();

        if (cts.Token.IsCancellationRequested)
        {
            return new MyResponse("Token Canceled");
        }

        var response = await client.PostAsync(endpoint, content, cts.Token);
        if (response.IsSuccessStatusCode && (int)response.StatusCode != 204)
        {
            // do something with response and return
            return new MyResponse("Response Ok");
        }
        return new MyResponse("No response");
    }
    catch (OperationCanceledException)
    {
        return new MyResponse("Timeout");
    }
}
There is a single CancellationToken for all calls.
After 600 ms, the still-pending HTTP calls are canceled and a response is sent back anyway.
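The fan-out elided in the snippet above looks roughly like this (a sketch only; clientsWithEndpoints and the aggregation are assumptions, not the original code):
// Sketch of the parallel fan-out sharing one 600 ms cancellation window.
var cts = new CancellationTokenSource();
cts.CancelAfter(600);

var tasks = new List<Task<MyResponse>>();
foreach (var (client, endpoint) in clientsWithEndpoints) // assumed pairs for the 10 injected clients
{
    tasks.Add(Call(cts, client, endpoint));
}
var responses = await Task.WhenAll(tasks); // every Call returns a MyResponse, even on timeout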
Locally and in production everything works perfectly: all endpoints are called and return in time, and only rarely is one canceled before the timeout.
But when the number of concurrent connections reaches 30+, ALL calls time out no matter what, until I reduce the load.
Does ASP.NET Core have a connection limit?
This is how I create the HttpClients in a custom factory for injection in the main Controller:
public static HttpClient CreateClient(string endpoint)
{
    var client = new HttpClient
    {
        BaseAddress = new Uri(endpoint)
    };
    client.DefaultRequestHeaders.Accept.Clear();
    client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
    return client;
}
All the HttpClients are reused and static.
The same exact code works perfectly in an ASP.NET Web API 2 app hosted on OWIN in Service Fabric. The problem is only with ASP.NET Core 1.1.
I saw suggestions online to create an HttpClientHandler, but there is no parameter for concurrent connections.
What can I do to investigate further?
No exceptions are thrown except the OperationCanceledException, and if I remove the CancellationToken the calls get stuck and the CPU goes to 100%; basically, 30 connections exhaust the power of 5 quad-core servers.
This has something to do with the number of calls going out of Kestrel.
UPDATE
I tried with WebListener and the problem is still present, so it's not Kestrel but ASP.NET Core.
I figured it out.
ASP.NET Core still has some HttpClient limits on connections to the same server, like the old ASP.NET Web API.
It's poorly documented, but the old ServicePointManager option for max connections must now be passed via HttpClientHandler.
I just create the HttpClient like this and the problem vanishes:
var config = new HttpClientHandler()
{
    MaxConnectionsPerServer = int.MaxValue
};

var client = new HttpClient(config)
{
    BaseAddress = new Uri("url here")
};
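For comparison, the pre-Core equivalent referred to above is the global ServicePointManager setting (a sketch; one process-wide knob rather than a per-handler property):
// .NET Framework / old ASP.NET Web API equivalent of MaxConnectionsPerServer
ServicePointManager.DefaultConnectionLimit = int.MaxValue;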
Really, if someone on the team is reading this, it should be the default.

easynetQ delayed respond/request resulting in timeout

I've run into a problem with using the request/respond pattern of EasyNetQ while using it on our server (Windows Server 2008). Not able to reproduce it locally at the moment.
The setup is that we have 2 Windows services (running as console applications for testing) which are connected through the request/respond pattern in EasyNetQ. This had been working as expected until recently; now, on the server, the request side does not "consume" the responses until after the request times out.
I have included 2 links to pastebin which contain the console logging of EasyNetQ which will hopefully make my problem a bit more clear.
RequestSide
RespondSide
Besides that, my request code looks like this:
var request = new foobar();
var response = _bus.Request<foobar, foobar2>(request);
and on the respond side:
var response = new response();
_bus.Respond<foobar, foobar2>(request =>
{
    try
    {
        ....
        return response;
    }
    catch (Exception e)
    {
        ....
        return response;
    }
});
As I've said, the request side sends the request as expected and the respond side consumes/catches it. This works as it should, but when the respond side is done processing and responds (which it does; the messages can be seen in the RabbitMQ management UI), the request side doesn't consume/catch the response until after the request has timed out (the default timeout is 10 s; I tried setting it to 60 s as well, which makes no difference). This is also evident in the logs linked above, as you'll see on the RequestSide, with the 5 or so messages received from the response queue that had previously timed out.
I've tried using RespondAsync in case the processing was taking too long and messing something up; it didn't help. I tried using both RespondAsync and RequestAsync, which just messed everything up even more (I was probably doing something wrong with the request :)).
I might be missing something, but I'm not sure what to try from here.
EDIT: I noticed I messed something up, and have added more context below.
The IBus used for the request/response is created and injected with Ninject:
class FooModule : NinjectModule
{
    public override void Load()
    {
        Bind<IBus>().ToMethod(ctx => RabbitHutch.CreateBus("host=localhost", x => x.Register<IEasyNetQLogger>(_ => logger))).InSingletonScope();
    }
}
And it's all tied together by the service being constructed using Topshelf with Ninject like so:
static void Main(string[] args)
{
    HostFactory.Run(x =>
    {
        x.UseNinject(new FooModule());

        x.Service<FooService>(s =>
        {
            s.ConstructUsingNinject();
            s.WhenStarted((service, control) => service.Start(control));
            s.WhenStopped((service, control) => service.Stop(control));
        });

        x.RunAsLocalSystem();
    });
}
The Topshelf setup has been tested pretty thoroughly and works as intended; it should not really be relevant to the request/respond problem, but I thought I would provide a bit more context.
I had this same issue. My problem was that I set the timeout only on the respond side but not on the request side; after I set the timeout on both sides it worked fine.
My connection string, for example:
host=hostname;timeout=120;virtualHost=myhost;username=myusername;password=mypassword
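In code, that means both processes create their bus from a connection string that includes the timeout (a sketch; host and credentials are placeholders):
// Both the request side and the respond side build their IBus like this,
// so the 120-second timeout applies to each end of the request/respond pair.
var bus = RabbitHutch.CreateBus(
    "host=hostname;timeout=120;virtualHost=myhost;username=myusername;password=mypassword");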

Apache Http Client Put Request Error

I'm trying to upload a file using Apache HttpClient's PUT method. The code is as below:
def putFile(resource: String, file: File): (Int, String) = {
  val httpClient = new DefaultHttpClient(connManager)
  httpClient.getCredentialsProvider.setCredentials(AuthScope.ANY, new UsernamePasswordCredentials(un, pw))
  val url = address + "/" + resource
  val put = new HttpPut(url)
  put.setEntity(new FileEntity(file, "application/xml"))
  executeHttp(httpClient, put) match {
    case Success(answer) => (answer.getStatusLine.getStatusCode, "Successfully uploaded file")
    case Failure(e) => {
      e.printStackTrace()
      (-1, e.getMessage)
    }
  }
}
When I run the method, I see the following error:
org.apache.http.NoHttpResponseException: The target server failed to respond
at org.apache.http.impl.conn.DefaultResponseParser.parseHead(DefaultResponseParser.java:101)
at org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:252)
at org.apache.http.impl.AbstractHttpClientConnection.receiveResponseHeader(AbstractHttpClientConnection.java:281)
at org.apache.http.impl.conn.DefaultClientConnection.receiveResponseHeader(DefaultClientConnection.java:247)
at org.apache.http.impl.conn.AbstractClientConnAdapter.receiveResponseHeader(AbstractClientConnAdapter.java:219)
at org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:298)
at org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:125)
at org.apache.http.impl.client.DefaultRequestDirector.tryExecute(DefaultRequestDirector.java:633)
at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:454)
at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:820)
I do not know what has gone wrong. I'm able to do GET requests, but PUT seems not to work! Any clues as to where I should look?
Look on the server. If GET works but PUT does not, then you have to figure out the receiving end.
Also, you may want to write a simple HTML file that has a form with the PUT method in it to rule out your client code.
As a side note: it's technically possible that something in between stops the request from going through or the response from reaching you. It's best to set up a dummy HTTP server to test against.
Maybe it's also a timeout issue, where the server takes too long to process your PUT.
The connection you are trying to use is a stale connection and therefore the request is failing.
But why are you only seeing an error for the PUT request and not for the GET request?
If you check the DefaultHttpRequestRetryHandler class you will see that by default HttpClient attempts to automatically recover from I/O exceptions. The default auto-recovery mechanism is limited to just a few exceptions that are known to be safe.
HttpClient will make no attempt to recover from any logical or HTTP protocol errors (those derived from the HttpException class).
HttpClient will automatically retry those methods that are assumed to be idempotent: your GET request, but not your PUT request!
HttpClient will automatically retry those methods that fail with a transport exception while the HTTP request is still being transmitted to the target server (i.e. the request has not been fully transmitted to the server).
This is why you don't notice any error with your GET request: the retry mechanism handles it.
You should define a CustomHttpRequestRetryHandler extending the DefaultHttpRequestRetryHandler. Something like this:
public class CustomHttpRequestRetryHandler extends DefaultHttpRequestRetryHandler {
    @Override
    public boolean retryRequest(IOException exception, int executionCount, HttpContext context) {
        if (exception instanceof NoHttpResponseException) {
            return true;
        }
        return super.retryRequest(exception, executionCount, context);
    }
}
Then just assign your CustomHttpRequestRetryHandler and build the client:
final HttpClientBuilder httpClientBuilder = HttpClients.custom();
httpClientBuilder.setRetryHandler(new CustomHttpRequestRetryHandler());
final CloseableHttpClient httpClient = httpClientBuilder.build();
And that's it; now your PUT request is handled by your new retry handler (as the GET was by the default one).