API download and response time behind Istio Ingress Gateway is slow - apache

We have a web application hosted as a Kubernetes Deployment and fronted by an Istio Ingress Gateway, which acts as the load balancer and routes the traffic.
With this setup we are seeing a lag in the web UI.
We debugged and found that the API response download time is higher in this case.
The same setup is also served through an Apache web server on a different URL, and there the response time is fine.
So the response time through Istio is greater than the response time through Apache, even though the API talks to the same resources in both cases.
I debugged with no luck.
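One way to pin down where the extra time goes is to time the two phases (time to first byte vs. body download) separately against both endpoints. A minimal sketch, assuming placeholder URLs for the Istio- and Apache-fronted APIs:

    import time
    import requests

    # Placeholder endpoints; substitute the real Istio- and Apache-fronted URLs.
    ENDPOINTS = {
        "istio":  "https://istio.example.com/api/resource",
        "apache": "https://apache.example.com/api/resource",
    }

    for name, url in ENDPOINTS.items():
        start = time.monotonic()
        # stream=True so the body is not downloaded until we ask for it
        resp = requests.get(url, stream=True, timeout=30)
        ttfb = time.monotonic() - start          # time until headers arrive
        body = resp.content                      # forces the full body download
        total = time.monotonic() - start
        print(f"{name}: status={resp.status_code} "
              f"ttfb={ttfb:.3f}s download={total - ttfb:.3f}s size={len(body)}")

If the gap shows up in the download phase rather than the time to first byte, comparing response sizes and the Content-Encoding header between the two paths is a reasonable next step, since one side compressing responses and the other not would produce exactly this kind of difference.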

Related

How to make Kubernetes nodes' outgoing requests use the system network

My Spring Boot application needs to call a third-party API to verify user data. The third party has restrictions on their API: they only accept requests that come from specific IPs.
Our network consultant has configured one of my worker nodes so that it can reach this API. I can curl the API from that node and get a proper response, and if I deploy my application on that node outside of Kubernetes using Tomcat, it also gets a proper response.
But when I deploy it in the Kubernetes cluster, it does not work. "Does not work" means the API doesn't accept/process the request and returns something like 503 Service Unavailable.
I then tried to curl the API from inside the pods on that worker node and found that it does not work either.
So I am guessing Kubernetes is not using the network configuration of the host system.
Is there any way I can make my worker node (even just that one worker node) use the system network when calling this or any other third-party API? I can request the API and get a proper response when I call it from the worker node machine itself, just not from inside the cluster.
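One common approach, assuming the whitelisting is done by the node's IP address, is to pin the pod to that configured worker node and run it with hostNetwork: true, so outgoing calls use the node's network stack and source IP. A minimal sketch that emits such a Deployment manifest; the node name, labels and image are placeholders:

    import yaml  # PyYAML

    # Sketch of a Deployment that pins the pod to the whitelisted worker node and
    # uses the node's network namespace, so outgoing calls carry the node's IP.
    deployment = {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": "verifier-api-client"},
        "spec": {
            "replicas": 1,
            "selector": {"matchLabels": {"app": "verifier-api-client"}},
            "template": {
                "metadata": {"labels": {"app": "verifier-api-client"}},
                "spec": {
                    # Schedule onto the node the consultant configured.
                    "nodeName": "worker-node-1",
                    # Use the node's network namespace instead of the pod network,
                    # so the third party sees the node's source IP.
                    "hostNetwork": True,
                    "dnsPolicy": "ClusterFirstWithHostNet",
                    "containers": [{
                        "name": "app",
                        "image": "registry.example.com/spring-boot-app:latest",
                    }],
                },
            },
        },
    }

    print(yaml.safe_dump(deployment, sort_keys=False))

hostNetwork is a blunt tool (container ports are shared with the node), so an egress gateway or a SNAT rule that maps pod traffic to that node's address is the other direction worth exploring.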

HTTP Error 503. The service is unavailable - application pool stops my API service on my AWS remote server

On my AWS remote desktop, I have hosted my web application and API service using IIS (Internet Information Services). The application pool stops my API service every midnight, during the low-usage hours, and every morning I have to restart the API service in IIS before it works again. I need a permanent solution so that the API service always stays up. Please suggest a solution.
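A likely cause for this pattern is the application pool's Idle Time-out (20 minutes by default), which shuts the worker process down during quiet hours; the usual fix is to set the pool's Idle Time-out to 0 and its Start Mode to AlwaysRunning in IIS. As a stopgap, a small pinger can keep the pool warm. A minimal sketch, assuming a placeholder URL for the API:

    import time
    import requests

    # Placeholder for the hosted API's health or root endpoint.
    API_URL = "https://my-remote-server.example.com/api/health"
    INTERVAL_SECONDS = 10 * 60   # ping well inside the 20-minute idle window

    while True:
        try:
            resp = requests.get(API_URL, timeout=30)
            print(f"keep-alive ping: {resp.status_code}")
        except requests.RequestException as exc:
            print(f"keep-alive ping failed: {exc}")
        time.sleep(INTERVAL_SECONDS)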

What is the http_response_code equivalent for an nginx server?

I have built a RESTful API in which I set http_response_code() in some cases. I was using the XAMPP Apache web server while developing the API on localhost.
The problem is that, now that I have deployed the API on our containerized EC2 DEV instance, which runs an nginx web server, none of my http_response_code() calls work and the API always returns a "200 OK" status code. That is not what I want in cases where an error should be returned, for example when a user already exists in the system and a registration with the same email is forbidden.
Therefore, is there an equivalent of http_response_code() for nginx web servers?
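http_response_code() is a PHP function rather than an Apache feature, so it should behave the same behind nginx + PHP-FPM; if 200 always comes back, something in the deployed stack or a later code path is likely overriding the status. A quick client-side check can confirm what the server actually sends. A minimal sketch, assuming a hypothetical registration endpoint and payload:

    import requests

    # Hypothetical registration endpoint and a payload for a user that already exists.
    URL = "https://dev.example.com/api/register"
    payload = {"email": "existing.user@example.com", "password": "secret"}

    resp = requests.post(URL, json=payload)
    print("status:", resp.status_code)        # expected e.g. 409/400, not 200
    print("server:", resp.headers.get("Server"))
    print("body:", resp.text[:200])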

Is a reverse proxy actually needed with ASP.NET Core?

We're wondering whether a reverse proxy is actually required for most use cases and would appreciate additional information.
The Kestrel/Nginx documentation claims:
"Kestrel is great for serving dynamic content from ASP.NET Core. However, the web serving capabilities aren't as feature rich as servers such as IIS, Apache, or Nginx. A reverse proxy server can offload work such as serving static content, caching requests, compressing requests, and HTTPS termination from the HTTP server. A reverse proxy server may reside on a dedicated machine or may be deployed alongside an HTTP server."
https://learn.microsoft.com/en-us/aspnet/core/host-and-deploy/linux-nginx?view=aspnetcore-2.2
Could anyone please share some insights if this is actually relevant nowadays?
In our use case, we use Docker instances with external load balancing (AWS ALB).
Each Docker instance runs both Nginx and our ASP.NET Core application.
We couldn't figure out the exact benefits of using Nginx.
Serving static content
As we're using an external CDN (AWS CloudFront), I assume static caching doesn't really have any actual benefit, does it?
Caching requests
I believe this is the same point as serving static content, since dynamic content isn't cached in most scenarios (in our use case - all scenarios).
Compressing requests
ASP.NET Core has a response compression middleware; however, it claims "The performance of the middleware probably won't match that of the server modules. HTTP.sys server and Kestrel server don't currently offer built-in compression support."
Perhaps some benchmarks could be created to validate this claim.
https://learn.microsoft.com/en-us/aspnet/core/performance/response-compression?view=aspnetcore-2.2
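One quick way to check whether responses are being compressed anywhere in the chain, and how much that saves, is to request the same endpoint with and without gzip accepted and compare the bytes on the wire. A minimal sketch, assuming a placeholder endpoint:

    import requests

    URL = "https://api.example.com/v1/orders"   # placeholder endpoint

    for accept in ("identity", "gzip"):
        resp = requests.get(URL, headers={"Accept-Encoding": accept}, stream=True)
        raw = resp.raw.read(decode_content=False)   # bytes as sent on the wire
        print(f"Accept-Encoding: {accept:8s} -> "
              f"Content-Encoding: {resp.headers.get('Content-Encoding', 'none'):6s} "
              f"Server: {resp.headers.get('Server', '?'):10s} "
              f"{len(raw)} bytes")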
HTTPS termination from the HTTP server
I assume most setups with a load balancer can skip this part, as HTTPS termination can be done on the load balancer if needed.
Thanks!
Effy
This documentation does not tell you that you "should" run ASP.NET Core / Kestrel behind a reverse proxy, just that advanced scenarios can benefit from one, since Kestrel lacks some features that other web servers may have.
If you don't have a use for an additional nginx reverse proxy, then you don't have to use one.
E.g. Kestrel only recently adopted APIs to change certain connection parameters on the fly without requiring a restart - this is helpful for switching certificates acquired via ACME (e.g. the Let's Encrypt service).
It highly depends on the system architecture whether a reverse proxy is needed or not, but you don't have to deploy one if you don't require a specific feature.

Azure Application Gateway with API as a backend pool is not working

I have a .NET Core API inside a web app, and that web app is the backend pool for an Azure Application Gateway. While trying to access the web app, I got the error below:
"502 - Web server received an invalid response while acting as a gateway or proxy server."
On the App Gateway, the health probe for that web app is unhealthy, but when I access the API directly as https://abc.azurewebsites.net/api/values, it works.
When we deploy the API in an App Service web app, apiname.azurewebsites.net on its own does not return anything useful to the Application Gateway's probes, so the backend is treated as unhealthy. The API only responds on a path like xxx.azurewebsites.net/api/values, and the Application Gateway has to know this path. We have to put /api/values in the override backend path of the HTTP settings, and do the same in the health probe.
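A quick way to see what the gateway's probe sees is to request both the site root and the API path directly; the root of an API-only App Service often returns a non-2xx status, which is why the default probe marks the backend unhealthy, while /api/values returns 200. A minimal sketch using the host name from the question:

    import requests

    HOST = "https://abc.azurewebsites.net"   # host name from the question

    for path in ("/", "/api/values"):
        resp = requests.get(HOST + path, timeout=30)
        healthy = 200 <= resp.status_code < 400   # default probe accepts 200-399
        print(f"GET {path:12s} -> {resp.status_code} "
              f"({'healthy' if healthy else 'unhealthy'} for the default probe)")

If only /api/values comes back healthy, putting that path in the HTTP settings' override backend path and in a custom health probe, as described above, should clear the 502.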
Yes, you can first verify whether the backend API can be accessed directly, without the Application Gateway. Then this error may happen due to the following main reasons:
NSG, UDR or custom DNS is blocking access to backend pool members.
Back-end VMs or instances of a virtual machine scale set are not responding to the default health probe.
Invalid or improper configuration of custom health probes.
The Azure Application Gateway's back-end pool is not configured or is empty.
None of the VMs or instances in the virtual machine scale set are healthy.
Request time-outs or connectivity issues with user requests.
Generally, the backend health status and its details will point this out and give some clues. You can also verify each of the above reasons one by one according to this doc.