Dotnet site behind Kestrel stops working (requests return 503 from the Apache server) - asp.net-core

I run a .NET Core 3.1 app on a Linux machine with Kestrel behind a reverse proxy (Apache).
I constantly push data to a certain endpoint and then relay that data through a SignalR hub to users:
[HttpPut("{tagName}/live")]
[Authorize(Roles = "Datasource")]
[ProducesResponseType(StatusCodes.Status200OK)]
[ProducesResponseType(StatusCodes.Status400BadRequest)]
[ProducesResponseType(StatusCodes.Status401Unauthorized)]
[ProducesResponseType(StatusCodes.Status403Forbidden)]
[TypeFilter(typeof(TagControllerFilterAttribute))]
public async Task<IActionResult> PutLiveTag(Tag tag, string tagName)
{
bool status = await tagRepository.UpdateLiveData(tagName, tag);
if (status)
{
await tagHubContext.Clients.Group(tagName).SendAsync(tagName, tag);
return Ok();
}
return BadRequest();
}
Users join and leave that SignalR hub simply by being added to or removed from a group:
public class TagDataHub : Hub
{
[HubMethodName("subscribe")]
public async Task JoinTagGroup(string tagName)
{
await Groups.AddToGroupAsync(Context.ConnectionId, tagName);
}
[HubMethodName("unsubscribe")]
public async Task LeaveTagGroup(string tagName)
{
await Groups.RemoveFromGroupAsync(Context.ConnectionId, tagName);
}
}
My app runs in deployment for a few hours, and then when I try to use that app's API endpoints or anything related to the app I get a 503 response originating from Apache. I really don't know why Kestrel stops working...
When I run service dotnet-site status, it reports that the app is still running.
I checked the processes on the server and I see that dotnet uses a lot of resources, but apart from the code provided above there is nothing else running in that app.
Please help; I really don't know how to debug or figure this out on a Linux server, and I would value any suggestion.
I also looked this up and did exactly what this person did, but it didn't help me: apache mpm worker

Related

404 Deploying asp.net core hosted blazor webassembly to Netlify

I am attempting to deploy an ASP.NET Core hosted Blazor WebAssembly app to Netlify. I have published the release version of the Server project to a directory on my desktop and uploaded it to GitHub. I set Netlify's publish directory to wwwroot, and the site renders just fine. However, if I attempt a call to an API controller, it returns a 404. Specifically, here is my code:
//Register.razor in the Client project
if (Model.Password.Length >= 6 && Model.Password == Model.ConfirmPassword)
{
await HttpClient.PostAsJsonAsync<RegisterModel>("api/Register/Post", Model);
NavigationManager.NavigateTo("/");
}
//In my controller
[Route("api/Register")]
public class RegisterController : Controller
{
private UserContext UserContext { get; set; }
private IHasher Hasher = new Pbkdf2Hasher();
public RegisterController (UserContext userContext)
{
UserContext = userContext;
}
[RequireHttps]
[HttpPost]
[Route("Post")]
public async Task Post([FromBody]RegisterModel model)
{
var user = new UserModel
{
Email = model.Email,
Password = Hasher.Hash(model.Password)
};
await UserContext.AddAsync(user);
await UserContext.SaveChangesAsync();
}
}
The URL I request is: https://(NetlifyDefaultDomain)/api/Register/Post. However, I get a 404 response. On localhost it works just fine. I imagine there's a setting I have to change somewhere for the request URL to work, but I've tried looking and have been unable to find guidance. What do I need to change? Thanks
Edit
Here is the Program.cs file of my Client project
public class Program
{
public static async Task Main(string[] args)
{
var builder = WebAssemblyHostBuilder.CreateDefault(args);
builder.RootComponents.Add<App>("app");
builder.Services.AddTransient(sp => new HttpClient {
BaseAddress = new Uri(builder.HostEnvironment.BaseAddress) });
builder.Services.AddBlazoredLocalStorage();
builder.Services.AddAuthorizationCore();
builder.Services.AddScoped<AuthenticationStateProvider,
ApiAuthenticationStateProvider>();
builder.Services.AddScoped<IAuthService, AuthService>();
await builder.Build().RunAsync();
}
}
The target framework is netstandard2.1, and it's Blazor WebAssembly 3.2.0.
Netlify is a static file host. They do not run any server-side applications like .NET Core on their servers.
So you can host your Blazor client-side application on Netlify, but if you want server-side code to run you must host it somewhere else.
If you are looking for free hosting for your API, some cloud providers have a free tier. Azure has a free App Service plan with some limits, Google Cloud has a free micro VPS that can host a small app, and Heroku also has a free offering.
A cheap VPS from Digital Ocean, Vultr or Linode is another alternative.
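Wherever the Server project ends up, the Client's HttpClient would then need to point at that host instead of the Netlify origin. A minimal sketch of the registration in the Client's Program.cs, using a placeholder URL for the externally hosted API:
// Point the Blazor client at the externally hosted API instead of builder.HostEnvironment.BaseAddress.
// "https://your-api.example.com/" is a placeholder for wherever the Server project is hosted.
builder.Services.AddTransient(sp => new HttpClient
{
    BaseAddress = new Uri("https://your-api.example.com/")
});
Since the client and the API would then be on different origins, the API would also need CORS enabled for the Netlify domain.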

Trouble Getting Twilio SMS Response from ASP.NET Core MVC Endpoint

When I set up RequestBin, I get a response that seems totally normal (see screenshot below). When I use the command
ngrok http 5000
and send the request to my local HTTP endpoint, ngrok reports 200 OK if my POST method in the controller has no parameters. But if I add even one parameter ([FromBody] string content), I get a 400 Bad Request in ngrok's console.
I'm pasting below a couple of different POST methods I've tried. I've tried inheriting my controller from ControllerBase and from Controller and get the same behavior.
[HttpPost]
public string JsonStringBody([FromBody] TwilioSmsModel twilioSmsModel)
{
return "";
}
// POST: api/SmsBody
[HttpPost]
// I also tried this signature: public async Task<IActionResult> PostTwilioSmsModel([FromBody] TwilioSmsModel twilioSmsModel)
public async Task<IActionResult> Post()
{
var twilioSmsModel = new TwilioSmsModel();
if (!ModelState.IsValid)
{
return BadRequest(ModelState);
}
_context.TwilioSmsModels.Add(twilioSmsModel);
await _context.SaveChangesAsync();
return CreatedAtAction("GetTwilioSmsModel", new { id = twilioSmsModel.SmsSid }, twilioSmsModel);
}
If there is a GitHub example of SMS notifications working with ASP.NET Core 2.1, that would be a big help.
Twilio developer evangelist here. Most likely the error you're seeing has nothing to do with the way you're building your application, but with the fact that .NET expects certain host headers to be passed in order to process your request. That is also why it works with RequestBin.
You haven't specified any error message in your question, so this is guesswork, but try changing your ngrok command to the following:
ngrok http 5000 -host-header="localhost:5000"
And you should stop seeing the 400 error you're getting and the requests should go through normally.
Hope this helps.

Service Fabric Asp.net Core Kestrel HttpClient hangs with minimal load

I have a bare-bones Service Fabric application hosting an ASP.NET Core 1.1 Web API, with Azure Application Gateway as a reverse proxy, on a virtual machine scale set of 5 DS3_V2 instances.
The API has 10 HttpClients with different URLs injected via dependency injection.
A simple foreach loop in a method calls the 10 HttpClients in parallel:
var cts = new CancellationTokenSource();
cts.CancelAfter(600);
//Logic for asynchronously calling the Call method below in parallel
public async Task<MyResponse> Call(CancellationTokenSource cts, HttpClient client, string endpoint)
{
endpoint = "finalpartOfThendpoint";
var jsonRequest = "jsonrequest";
try
{
var content = new StringContent(jsonRequest, Encoding.UTF8, "application/json");
await content.LoadIntoBufferAsync();
if (cts.Token.IsCancellationRequested)
{
return new MyResponse("Token Canceled");
}
var response = await client.PostAsync(endpoint, content, cts.Token);
if (response.IsSuccessStatusCode && ((int)response.StatusCode != 204))
{
//do something with response and return
return new MyResponse("Response Ok");
}
return new MyResponse("No response");
}
catch (OperationCanceledException e)
{
return new MyResponse("Timeout");
}
}
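The fan-out itself is elided in the snippet above; a minimal sketch of what such a parallel batch might look like, reusing the Call method and a single CancellationTokenSource (the method and parameter names here are hypothetical, not the poster's actual code):
// Sketch: lives alongside the Call method shown above.
public async Task<MyResponse[]> CallAll(IReadOnlyList<HttpClient> clients, IReadOnlyList<string> endpoints)
{
    var cts = new CancellationTokenSource();
    cts.CancelAfter(600); // cancel whatever is still pending after 600 ms

    var tasks = new List<Task<MyResponse>>();
    for (var i = 0; i < clients.Count; i++)
    {
        // Start each call without awaiting so they all run in parallel.
        tasks.Add(Call(cts, clients[i], endpoints[i]));
    }

    // Await the whole batch; canceled calls come back as the "Timeout" response from the catch block.
    return await Task.WhenAll(tasks);
}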
There is a single CancellationToken for all calls.
After 600 ms, any still-pending HTTP calls are canceled and a response is sent back anyway.
Locally and in production everything works perfectly: all endpoints are called and return in time, and only rarely is one canceled before the timeout.
But when the number of concurrent connections reaches 30+, ALL calls time out no matter what, until I reduce the load.
Does ASP.NET Core have a connection limit?
This is how I create the HttpClients in a custom factory for injection in the main Controller:
public static HttpClient CreateClient(string endpoint)
{
var client = new HttpClient
{
BaseAddress = new Uri(endpoint)
};
client.DefaultRequestHeaders.Accept.Clear();
client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
return client;
}
All the HttpClients are reused and static.
The exact same code works perfectly in an ASP.NET Web API 2 service hosted on OWIN in Service Fabric. The problem only occurs with ASP.NET Core 1.1.
I saw online that I should create an HttpClientHandler, but I could not find a parameter for concurrent connections.
What can I do to investigate further?
No exceptions are thrown except the OperationCanceledException, and if I remove the CancellationToken the calls get stuck and the CPU goes to 100%; basically 30 connections exhaust the power of 5 quad-core servers.
This has something to do with the number of calls going out of Kestrel.
UPDATE
I tried with WebListener and the problem is still present, so it's not Kestrel but ASP.NET Core.
I figured it out.
ASP.NET Core still has some HttpClient limits on connections to the same server, like the old ASP.NET Web API.
It's poorly documented, but the old ServicePointManager option for max connections must now be passed via HttpClientHandler.
I just create the HttpClient like this and the problem vanished.
var config = new HttpClientHandler()
{
MaxConnectionsPerServer = int.MaxValue
};
var client = new HttpClient(config)
{
BaseAddress = new Uri("url here")
};
Really, if someone from the team is reading this: it should be the default.
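For context, this is roughly how the same limit used to be raised globally on the full .NET Framework (shown only for comparison; on ASP.NET Core the HttpClientHandler property above is what counts):
// Old ASP.NET / full .NET Framework equivalent (global, process-wide setting):
System.Net.ServicePointManager.DefaultConnectionLimit = int.MaxValue;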

Asp.net Core API with CORS on Service Fabric 100% CPU bottleneck

I have an ASP.NET Core app on .NET Framework 4.5.2 hosted on Service Fabric as a STATELESS service.
The API is a vanilla, empty API:
[Route("Test")]
public class TestController : Controller
{
[HttpGet]
public IActionResult Get()
{
return Ok("Done");
}
}
This is my Startup Code
public class Startup
{
public Startup(IHostingEnvironment env)
{
var builder = new ConfigurationBuilder()
.SetBasePath(env.ContentRootPath)
.AddJsonFile("appsettings.json", optional: true, reloadOnChange: true)
.AddJsonFile($"appsettings.{env.EnvironmentName}.json", true)
.AddEnvironmentVariables();
Configuration = builder.Build();
}
public IConfigurationRoot Configuration { get; }
public void ConfigureServices(IServiceCollection services)
{
services.AddCors(options =>
{
options.AddPolicy("CorsPolicy",
builder => builder.AllowAnyOrigin()
.AllowAnyMethod()
.AllowAnyHeader()
.AllowCredentials());
});
services.AddResponseCompression();
services.AddMvc().AddJsonOptions(opts =>
{
// Force Camel Case to JSON
opts.SerializerSettings.ContractResolver = new CamelCasePropertyNamesContractResolver();
});
}
// This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
{
app.UseCors("CorsPolicy");
app.UseResponseCompression();
app.UseMvc();
}
}
This is the OpenAsync method:
Task<string> ICommunicationListener.OpenAsync(CancellationToken cancellationToken)
{
var endpoint = FabricRuntime.GetActivationContext().GetEndpoint(_endpointName);
string serverUrl = $"{endpoint.Protocol}://{FabricRuntime.GetNodeContext().IPAddressOrFQDN}:{endpoint.Port}";
//.UseWebListener()
_webHost = new WebHostBuilder().UseKestrel()
.UseContentRoot(Directory.GetCurrentDirectory())
.UseStartup<Startup>()
.UseUrls(serverUrl)
.Build();
_webHost.Start();
return Task.FromResult(serverUrl);
}
Everything plain and simple, no customizations.
The CORS call works, everything is perfect.
I did a test using Visual Studio Team Services with a 15K-user load, and it all worked like a charm, at 14K RPS. I think the load test from VS does not exercise the CORS middleware, by the way.
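The ASP.NET Core CORS middleware only evaluates its policy for requests that carry an Origin header, so a load test has to send one to exercise it. A minimal sketch of such a request from a C# test client (the URLs are placeholders):
// Sketch: a request only goes through the CORS policy when it carries an Origin header.
public async Task<HttpStatusCode> SendCorsStyleRequestAsync()
{
    var client = new HttpClient { BaseAddress = new Uri("http://mycluster.example.com:8080/") };
    var request = new HttpRequestMessage(HttpMethod.Get, "Test");
    request.Headers.Add("Origin", "https://some-other-site.example.com"); // placeholder origin
    var response = await client.SendAsync(request);
    return response.StatusCode;
}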
Now the problem is that when I put this exact empty API in production, receiving calls from just around 100 simultaneous users, the CPU jumps to 100% in 3 minutes. Calls are answered until the CPU reaches 100%, and then errors start being sent back.
It seems that with 15,000 users and NO CORS it all WORKS, while with 100 users + CORS it DOES NOT WORK: the CPU goes to 100% and stays like that until I reboot the VM scale set.
If I stop sending calls, the CPU on the 5 nodes stays at 99% without receiving a single call.
How is this possible?
I tried everything; the project is plain and simple, and the VS load test works. It's only when I expose this to real CORS calls from different sites and different IP addresses that this happens.
I did a performance trace on the server before sending traffic to it; again, with a load test from Visual Studio using EXACTLY the CORS headers, everything is blazing fast.
With real-world calls this is what I see in the profiler:
There is nothing in it but the CORS middleware and the usual Kestrel processes.
The stateless service eats up 99% of the CPU and keeps it there even if I STOP the traffic.
This is another 30-second trace with no traffic coming in but the CPU at 90%.
I don't know what else to do. There is something wrong with CORS, I'm sure of it; even though it works, somehow something goes wrong.
This is a CORS call, correctly served.
Is there a bug in the Asp.net Core CORS middleware?
UPDATE:
I tried many combinations to isolate the problem:
New cluster, same ASP.NET Core vanilla service => problem still there
Same cluster, new project, same ASP.NET Core vanilla service => problem still there
Same cluster, Web API OWIN service, same code => problem VANISHED!
The problem occurs with ASP.NET Core on Service Fabric using CORS with more than 50 concurrent requests.
This is the CPU (0.85%) with an ASP.NET stateless service using the Visual Studio template, OWIN and CORS, with around 100 concurrent connections and the same empty Web API as above.
At this point I need help from a Microsoft official source to address the problem.
I'm almost sure this is an ASP.NET Core CORS bug that happens when you host it on Service Fabric as a stateless service and send it some minimal traffic (not just a couple of refreshes in the browser).

SignalR WPF Client can't reach hub deployed on IIS when IIS runs on a different system

I'm just playing around a little bit with SignalR. My application has only one simple hub, which lives in an ASP.NET application, and I wrote a WPF client that interacts with the ASP.NET application via the HubConnection and the generated proxy. Everything works fine on my local PC. I deployed the ASP.NET application to IIS.
Now I am getting to the point...
When I type the following into my browser on my own PC (pcthi-and)
http://pcthi-and:8080/signalr/hubs
I get what I want.
When I type the same URL into a browser on another PC, I get the same response and everything looks fine.
But my application only works on my PC and not on the other one. When I start the HubConnection on the other PC, I don't get a ConnectionId.
I tried changing the URL to my IP address, with no effect.
The browser call to the hub works, but the application doesn't.
The call looks like this:
private bool tryToConnectToCoffeService()
{
try
{
this.hubConnection = new HubConnection(ConfigurationManager.ConnectionStrings["coffeeConnection"].ConnectionString);
this.hubConnection.Credentials = CredentialCache.DefaultNetworkCredentials;
this.coffeeService = this.hubConnection.CreateHubProxy("coffee");
this.hubConnection.Start();
if (string.IsNullOrEmpty(hubConnection.ConnectionId))
{
return false;
}
return true;
}
catch(Exception ex)
{
return false;
}
}
The Global.asax:
public class Global : System.Web.HttpApplication
{
protected void Application_Start(object sender, EventArgs e)
{
RouteTable.Routes.MapHubs();
}
The hub looks like this:
[HubName("coffee")]
public class CoffeeHub : Hub
{
My Hub Connection String is this:
"http://pcthi-and:8080/"
Or:
"http://My-Current-IP-Address:8080/"
I use SignalR 1.0 rc2.
Does anyone have an idea? Thanks for helping.
Cheers
Frank
I think you need to change
hubConnection.Start();
to
hubConnection.Start().Wait();
If you are running .NET 4.5 you could make the tryToConnectToCoffeService method async and then await when you start the hub connection.
await hubConnection.Start();
It likely works today on localhost because the client can finish connecting before if (string.IsNullOrEmpty(hubConnection.ConnectionId)) executes.
It is probably taking longer to connect from the other machine, which exposes the race condition that exists when you don't wait for HubConnection.Start() to complete.
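Put together, an async version of the connect method could look roughly like this (a sketch that keeps the question's fields, connection string name and hub name):
private async Task<bool> TryToConnectToCoffeeServiceAsync()
{
    try
    {
        this.hubConnection = new HubConnection(ConfigurationManager.ConnectionStrings["coffeeConnection"].ConnectionString);
        this.hubConnection.Credentials = CredentialCache.DefaultNetworkCredentials;
        this.coffeeService = this.hubConnection.CreateHubProxy("coffee");

        // Wait for the handshake to complete before reading the ConnectionId.
        await this.hubConnection.Start();

        return !string.IsNullOrEmpty(this.hubConnection.ConnectionId);
    }
    catch (Exception)
    {
        return false;
    }
}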