Per-backend configurable forwarding timeout in Traefik proxy?

I have a question about the ForwardingTimeout configuration in Traefik (https://docs.traefik.io/configuration/commons/#forwarding-timeouts).
Do any backend providers support configuring it per backend instead of globally for the Traefik proxy?
Most other proxies (e.g. HAProxy) allow configuring dial and read timeouts per backend.

This is not possible at the moment.
There's an open discussion for this feature: https://github.com/containous/traefik/issues/3027
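For reference, the only place these timeouts can be set today is the global section of the static configuration, so they apply to every backend alike. A minimal sketch in Traefik 1.x TOML (the timeout values here are illustrative, not defaults):

```toml
# Global forwarding timeouts -- applied to all backends, not per backend.
[forwardingTimeouts]
  # Maximum time to establish a TCP connection to a backend.
  dialTimeout = "30s"
  # Maximum time to wait for a backend's response headers ("0s" = no limit).
  responseHeaderTimeout = "0s"
```

Per-backend overrides of these values are exactly what the linked issue asks for.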

Related

Is a reverse proxy actually needed with ASP.NET Core?

We're wondering if a reverse proxy is actually required for most use cases and would appreciate additional information.
The Kestrel/Nginx documentation claims:
"Kestrel is great for serving dynamic content from ASP.NET Core. However, the web serving capabilities aren't as feature rich as servers such as IIS, Apache, or Nginx. A reverse proxy server can offload work such as serving static content, caching requests, compressing requests, and HTTPS termination from the HTTP server. A reverse proxy server may reside on a dedicated machine or may be deployed alongside an HTTP server."
https://learn.microsoft.com/en-us/aspnet/core/host-and-deploy/linux-nginx?view=aspnetcore-2.2
Could anyone please share some insights if this is actually relevant nowadays?
In our use case, we use Docker instances with external load balancing (AWS ALB).
Each Docker instance runs both Nginx and our ASP.NET Core application.
We couldn't figure out the exact benefits of using Nginx.
Serving static content
As we're using an external CDN (AWS CloudFront), I assume static caching doesn't really have any actual benefits, does it?
Caching requests
I believe this is the same as serving static content, since dynamic content isn't cached in most scenarios (in our use case: all scenarios).
Compressing requests
ASP.NET Core has a response compression middleware, however - it claims: "The performance of the middleware probably won't match that of the server modules. HTTP.sys server and Kestrel server don't currently offer built-in compression support."
Perhaps some benchmarks could be created to validate this claim.
https://learn.microsoft.com/en-us/aspnet/core/performance/response-compression?view=aspnetcore-2.2
HTTPS termination from the HTTP server
I assume most clients having load balancers can skip this part, as HTTPS termination can be done on the load balancer if needed.
Thanks!
Effy
This documentation does not tell you that you "should" run ASP.NET Core / Kestrel behind a reverse proxy, just that advanced scenarios can benefit from one, since Kestrel lacks some features that other web servers may have.
If you don't have a use for an additional nginx reverse proxy, then you don't have to use one.
E.g. Kestrel only recently gained APIs to change certain connection parameters on the fly without requiring a restart - this is helpful for switching certificates acquired via ACME (e.g. the Let's Encrypt service).
Whether a reverse proxy is needed depends highly on the system architecture, but you don't have to deploy one if you don't require a specific feature.

Could Azure service fabric use reverse proxy as edge proxy after load balancer

It is advised that ASP.NET Core should sit behind a strong web server such as WebListener,
or behind a proxy acting as an internet gateway. My question is: is the built-in reverse proxy strong enough for that role? If I use ASP.NET Core + Kestrel in my internal services and all external communication goes through the reverse proxy behind the load balancer, is it secure?
Short answer: no
It's just a proxy with some smart retry logic.
If you want to publish your internal services to the internet with Kestrel, you want to put a WAF or Azure API Management in front of it; otherwise, use WebListener for all your services.
Yes.
"The reverse proxy is built on the same Windows HTTP Server API (http.sys) that WebListener uses which provides the DoS protection that is currently missing from Kestrel." - Vaclav Turecek (github)

How to implement Https on web facing nginx and several microservices behind it

I'm just starting to develop a SPA, with a Java (Dropwizard) REST backend. I'm kinda new to 'web' development, but I did internal web apps before, so security was not a big concern until now.
Right now I'm using nginx as my public-facing web server, and I just discovered a whole slew of complications that arise as we split the actual servers: a static web server serving my SPA's files, and Java microservices behind it.
I'm used to Apache talking to Tomcat with mod_jk, but now I've had to implement CORS in dev because my SPA is deployed on a lite-server serving at a different port than the REST API served by Dropwizard.
Now I've got to my minimum viable product and want to deploy it to prod, but I have no idea how to do it.
Do I still need the CORS headers? Dropwizard will run separately on a different port available only to local processes; I'll then configure nginx to route incoming requests from, e.g., /api/ to that port. Does that count as cross-origin?
I'd like to serve everything over HTTPS. Dropwizard can serve HTTPS, but I don't want to update the SSL cert on multiple microservices. I've read about nginx SSL termination; will this let me use plain HTTP locally and HTTPS on nginx?
Any other caveats to watch out for when deploying with this architecture?
Thank you!
Yes, you can certainly do it!
You can terminate HTTPS with nginx and still have the backend operate over either plain HTTP or even HTTPS. The proxy_pass directive supports both schemes for the upstream content. You can also use the newer TCP stream proxying, if necessary.
There are not that many caveats, really. It usually just works.
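A minimal sketch of the setup described above, assuming nginx serves the SPA and Dropwizard listens locally on port 8080 (the hostnames, paths, and ports are illustrative placeholders):

```nginx
server {
    listen 443 ssl;
    server_name example.com;

    # TLS terminates here; the certificate lives only on nginx.
    ssl_certificate     /etc/nginx/certs/example.com.crt;
    ssl_certificate_key /etc/nginx/certs/example.com.key;

    # SPA static files served directly by nginx.
    location / {
        root /var/www/spa;
        try_files $uri /index.html;
    }

    # /api/ proxied to Dropwizard over plain HTTP on localhost.
    location /api/ {
        proxy_pass http://127.0.0.1:8080/;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```

Because the browser then only ever talks to a single origin (https://example.com), the /api/ requests are same-origin and the CORS headers are no longer needed in production.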

How to make Axis 2 Proxy property aware of HTTPS for forward proxy server

We are using the Axis2 framework to consume an external service, for which we need to route the call through a forward proxy server. I am using the code below to prove it out in the lab, but with the actual proxy server (which is https://.....) I don't see a way to tell Axis2 (ServiceClient), which internally uses CommonsHttpTransportSender, that the hostname being passed should be used with the HTTPS scheme.
Is there an easier way to achieve this with CommonHttpTransportSender?
// Attach proxy settings to the options of the generated service stub
Options o = s._getServiceClient().getOptions();
HttpTransportProperties.ProxyProperties proxyProperties = new HttpTransportProperties.ProxyProperties();
proxyProperties.setProxyName(config.getForwardProxyServer()); // proxy hostname
proxyProperties.setProxyPort(config.getForwardProxyPort());   // proxy port
o.setProperty(HTTPConstants.PROXY, proxyProperties);
After reading the RFC for web proxy tunneling, I realize the requirement itself was wrong: a forward proxy normally listens on plain HTTP and simply facilitates a tunnel between client and server. If the proxy itself had to listen on HTTPS, that would be more of a reverse-proxy scenario, which isn't applicable to the HTTP proxy the question originally described.
CommonsHttpTransportSender internally uses Commons HttpClient 3.1, which uses an HTTP proxy as per the RFC.
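The tunneling behaviour described above can be seen in plain Java too: the client addresses the forward proxy as an HTTP proxy even for HTTPS targets, and the runtime issues a CONNECT to tunnel the TLS traffic through it. A minimal sketch (the proxy host/port and target URL are placeholders):

```java
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.Proxy;
import java.net.URL;

public class ProxyTunnelSketch {
    public static void main(String[] args) throws Exception {
        // The forward proxy is declared as Proxy.Type.HTTP even though the
        // target is HTTPS: the client speaks plain HTTP to the proxy and
        // asks it to open a CONNECT tunnel for the encrypted traffic.
        Proxy proxy = new Proxy(Proxy.Type.HTTP,
                new InetSocketAddress("proxy.example.com", 3128));

        URL target = new URL("https://external-service.example.com/soap");
        // No network I/O happens until the connection is actually used.
        HttpURLConnection conn = (HttpURLConnection) target.openConnection(proxy);
        System.out.println(proxy.type()); // HTTP
    }
}
```

Note there is no `Proxy.Type.HTTPS`: the proxy hop itself is plain HTTP, which matches how Commons HttpClient 3.1 (and therefore Axis2) treats the proxy setting.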

How do you integrate/configure an Apache Http Server with Mule ESB

We currently have Mule ESB running on our network, not in the DMZ.
I'm looking for information on how to configure an Apache HTTP server running in the DMZ to act as a proxy for a web service running on the ESB.
Thanks
The HTTP transport of Mule doesn't have any specific requirements in terms of headers, so there's no particular recommendation for proxying HTTP requests in front of Mule.
So in the case of Apache, use mod_proxy to configure a reverse proxy in front of Mule and you should be good.
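A minimal sketch of such a mod_proxy setup in the DMZ, assuming a Mule HTTP inbound endpoint listening on port 8081 of an internal host (the hostnames, port, and path are illustrative placeholders):

```apache
# Requires mod_proxy and mod_proxy_http to be loaded.
<VirtualHost *:80>
    ServerName ws.example.com

    # Forward /services/ to the Mule HTTP endpoint on the internal network
    # and rewrite redirect/location headers on the way back.
    ProxyPass        /services/ http://mule.internal:8081/services/
    ProxyPassReverse /services/ http://mule.internal:8081/services/

    # Never act as an open forward proxy.
    ProxyRequests Off
</VirtualHost>
```

With this in place, only the Apache host needs to be reachable from outside, and the ESB stays off the DMZ.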