Rewriting content URLs with traefik

We're using Traefik to reverse proxy our microservices environment, which runs on Kubernetes in staging and production and on docker-compose locally. We're trying to proxy requests for specific URLs to specific microservices so that, for example, the "orders" microservice serves the API and UI for that concern in our system.
We're aiming for our services to be agnostic about the URL they're served on. The "orders" microservice, for example, serves on /, but Traefik proxies requests from http://api.me.com/orders/{id} with a PathPrefixStrip of "orders" to e.g. http://orders.svc.kubernetes.local/{id}.
I'm trying to resolve an issue where the service needs to know which URL it is being served on, so that the URLs it writes out include the value specified in PathPrefix, but I don't like having to duplicate the knowledge of the PathPrefix across Traefik and the application. Is this something that just has to be done in this scenario, is Traefik capable of rewriting URLs in the responses, or is there some other technique we can or should apply to achieve this?
Note: I have already read these three similar questions and have not found the answer there.

This question is two years old, so I'm not sure which version of Traefik was used back then.
But starting with Traefik 1.3, the stripped prefix path will be available in the X-Forwarded-Prefix header. Source: https://docs.traefik.io/v1.4/basics/
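For example, the service can pick that prefix up when it writes URLs into its responses. A minimal sketch, assuming a Dropwizard/JAX-RS-style resource (the resource class and the JSON shape are made up for illustration):

import javax.ws.rs.GET;
import javax.ws.rs.HeaderParam;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

// Hypothetical "orders" resource, mounted at "/" inside the service.
@Path("/{id}")
@Produces(MediaType.APPLICATION_JSON)
public class OrderResource {

    @GET
    public String getOrder(@PathParam("id") String id,
                           @HeaderParam("X-Forwarded-Prefix") String prefix) {
        // Traefik (>= 1.3) sets X-Forwarded-Prefix to the path it stripped,
        // e.g. "/orders"; the header is absent when the service is called directly.
        String base = (prefix == null) ? "" : prefix;

        // Any URL written into the response is built from the forwarded prefix,
        // so the PathPrefix is only configured in one place: Traefik.
        return String.format("{\"id\":\"%s\",\"self\":\"%s/%s\"}", id, base, id);
    }
}

The service stays agnostic to its public URL, and the only place the /orders prefix lives is the Traefik frontend rule.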

Related

Any way to set up traefik to feed matomo automatically?

With a self-hosted instance like Matomo and a smart edge router like Traefik, I was hoping to find some automated solution for analytics via Traefik configuration instead of injecting JavaScript into each hosted service on my Docker-based server.
Tracking usage in the backend seems to me like the best approach, instead of relying on 'the goodness of the browser', especially with ad blockers targeting Matomo.
Has anyone tackled this in any fashion?
Yes, it's possible with Log Analytics: https://matomo.org/log-analytics-/
See also: https://github.com/matomo-org/matomo-log-analytics
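For example, a rough sketch of wiring the two together, assuming Traefik 1.4+ with a file-based access log and a standard Matomo install (paths, hostname and site ID are placeholders, and the exact flags can vary between Matomo versions):

# Traefik (traefik.toml): write an access log in Common Log Format.
[accessLog]
  filePath = "/var/log/traefik/access.log"
  format = "common"

# Then feed that log to Matomo, e.g. from a cron job:
python /var/www/matomo/misc/log-analytics/import_logs.py \
  --url=https://matomo.example.com \
  --idsite=1 \
  /var/log/traefik/access.log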

Is a reverse proxy actually needed with ASP.NET Core?

We're wondering if a reverse proxy is actually required for most use cases and would appreciate additional information.
The Kestrel/Nginx documentation claims:
"Kestrel is great for serving dynamic content from ASP.NET Core. However, the web serving capabilities aren't as feature rich as servers such as IIS, Apache, or Nginx. A reverse proxy server can offload work such as serving static content, caching requests, compressing requests, and HTTPS termination from the HTTP server. A reverse proxy server may reside on a dedicated machine or may be deployed alongside an HTTP server."
https://learn.microsoft.com/en-us/aspnet/core/host-and-deploy/linux-nginx?view=aspnetcore-2.2
Could anyone please share some insight into whether this is actually relevant nowadays?
In our use case, we use Docker instances with external load balancing (AWS ALB).
Each docker instance has both Nginx and our ASP.NET Core application running.
We couldn't figure out the exact benefits of using Nginx.
Serving static content
As we're using an external CDN (AWS CloudFront), I assume static caching doesn't really have any actual benefits, does it?
Caching requests
I believe this is the same as serving static content, as dynamic content isn't cached in most scenarios (in our use case, all scenarios).
Compressing requests
ASP.NET Core has response compression middleware; however, the documentation claims "The performance of the middleware probably won't match that of the server modules. HTTP.sys server and Kestrel server don't currently offer built-in compression support."
Perhaps some benchmarks could be created to validate this claim.
https://learn.microsoft.com/en-us/aspnet/core/performance/response-compression?view=aspnetcore-2.2
HTTPS termination from the HTTP server
I assume most clients having load balancers can skip this part, as HTTPS termination can be done on the load balancer if needed.
Thanks!
Effy
This documentation does not tell you that you "should" run ASP.NET Core / Kestrel behind a reverse proxy, just that advanced scenarios can benefit from one, since Kestrel does not have some features that other web servers may have.
If you don't have a use for an additional nginx reverse proxy, then you don't have to use one.
E.g. Kestrel only recently adopted APIs to change certain connection parameters on the fly without requiring a restart - this is helpful for switching certificates acquired via ACME (e.g. the Let's Encrypt service).
It highly depends on the system architecture whether a reverse proxy is needed or not, but you don't have to deploy one if you don't require a specific feature.

Map Web API directly to the website without using an IIS application

I have a lot of web APIs which are developed and deployed independently.
Each API has its own routing, for example:
api/FirstApi
api/SecondApi
These will be deployed under www.myapis.com/.
If I create an application for each of the APIs in IIS, I would access them as follows:
www.myapis.com/FirstApiApp/api/FirstApi
but I want to access them as:
EX: www.myapis.com/api/FirstApi
Or: www.myapis.com/api/SecondApi
I want to remove the application name (FirstApiApp or SecondApiApp) from the URL.
Is it possible to configure this pattern in IIS?
You could have the following structure:
c:\inetpub\wwwroot\api\FirstApi
c:\inetpub\wwwroot\api\SecondApi
And then have a website in IIS mapped to c:\inetpub\wwwroot, and inside the api folder configure FirstApi and SecondApi as two separate IIS applications (see the sketch below for one way to create them).
This assumes that you drop the api/FirstApi routing from your Web APIs and map them to / directly, because the first part of the path is now provided by IIS. If you don't, a request would have to look like www.myapis.com/api/FirstApi/api/FirstApi, which is not the goal here.
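If it helps, one way to create those two applications is from the command line with appcmd; a rough sketch, assuming the site is called "Default Web Site" (adjust the site name and paths to your setup):

%windir%\system32\inetsrv\appcmd.exe add app /site.name:"Default Web Site" /path:"/api/FirstApi" /physicalPath:"c:\inetpub\wwwroot\api\FirstApi"
%windir%\system32\inetsrv\appcmd.exe add app /site.name:"Default Web Site" /path:"/api/SecondApi" /physicalPath:"c:\inetpub\wwwroot\api\SecondApi"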
That being said, I would personally recommend against doing this. A better approach would be to put a reverse proxy such as nginx or HAProxy in front, which routes requests for /api/FirstApi to, for example, backend_node:8080 and requests for /api/SecondApi to backend_node:8181. This would allow you to deploy your Web APIs as two separate websites in IIS listening on two different ports, and keep the routing job in the application layer rather than in the infrastructure.
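As a rough illustration of that reverse-proxy approach, assuming nginx in front and the example ports above (hostnames and ports are placeholders):

server {
    listen 80;
    server_name www.myapis.com;

    # Each Web API keeps its own api/FirstApi routing; nginx only forwards.
    location /api/FirstApi {
        proxy_pass http://backend_node:8080;
    }

    location /api/SecondApi {
        proxy_pass http://backend_node:8181;
    }
}

Because proxy_pass is given without a URI part here, nginx forwards the original /api/... path unchanged to the backend sites.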

How to implement HTTPS on a web-facing nginx with several microservices behind it

I'm just starting to develop a SPA with a Java (Dropwizard) REST backend. I'm kinda new to 'web' development, but I've built internal web apps before, so security hasn't been a big concern until now.
Right now I'm using nginx as my public-facing web server, and I just discovered a whole slew of complications that arise as we split up the actual servers: a static web server serving my SPA's files, and Java microservices behind it.
I'm used to Apache talking to Tomcat with mod_jk, but now I had to implement CORS in dev because my SPA is deployed on a lite-server serving on a different port than the REST API served by Dropwizard.
Now I've got to my minimum viable product and want to deploy it to prod, but I have no idea how to do it.
Do I still need the CORS header? Dropwizard will run separately on a port only available to local processes, and I'll configure nginx to route incoming requests from e.g. /api/ to that port. Does that count as cross-origin?
I'd like to serve full HTTPS. Dropwizard can serve HTTPS, but I don't want to update the SSL cert on multiple microservices. I read about nginx SSL termination; will this let me use plain HTTP locally and HTTPS at nginx?
Any other caveats to watch out for when deploying with this architecture?
Thank you!
Yes, you can certainly do it!
You can terminate HTTPS with nginx and still have the backend operate on either plain HTTP or even HTTPS. The proxy_pass directive supports both schemes for the upstream content. You can also use the newer TCP stream proxying, if necessary.
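For illustration, a minimal sketch of that setup, assuming the SPA files live in /var/www/spa and Dropwizard listens on localhost:8080 (paths, hostnames and ports are placeholders):

server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /etc/nginx/ssl/example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/example.com.key;

    # Static SPA files served directly by nginx.
    location / {
        root /var/www/spa;
        try_files $uri /index.html;
    }

    # API requests proxied over plain HTTP to the local Dropwizard process;
    # the browser sees one origin, so no CORS headers are needed in production.
    location /api/ {
        # The trailing slash strips the /api/ prefix before forwarding;
        # remove it to pass the path through unchanged.
        proxy_pass http://127.0.0.1:8080/;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
    }
}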
There are not that many caveats, really. It usually just works.

Modifying HTML response from a webserver before it reaches the browser using a webserver plugin?

The question is as simple as the title. I have a webapp (I have no clue what technology it was built on or what app server it is running on). However, I do know that this webapp is being served by an Apache server / IIS server / IBM HTTP Server. Now, I would like to have a plugin / module / add-on at the web-server end which would parse/truncate/cut/regex the HTTP response (based on the requested URL's pattern), mask (encrypt/shuffle/substitute) a set of fields in this response based on different parameters (the user's LDAP permissions on the intranet, the user's geo-location if on the internet, etc.), and send the altered response back to the user.
So, is there an easy answer to creating such plugins/modules/add-ons? How feasible is this approach of creating extra software at the webserver when you want to mask sensitive information in a webapp without modifying the web-app code? Are there any tools that help you do this for Apache?
And, finally, is this just a really crazy thing to try?!
Each webserver will have its own way of doing so.
There is no universal plugin architecture for webservers.
In IIS you would write an HTTP Handler or HTTP Module, or possibly an ISAPI Filter. You can also directly interact with the http response using the Response object exposed by the HttpContext.
With Apache, there are different modules that can do what you want (mod_substitute, for example, can rewrite the response body on the fly).
I don't know anything about WebSphere, but I am certain it also has similar mechanisms.
What you are asking for is needed by many web applications, so it would either be built in or very easy to add.
The easiest way is to add a plug-in using the web application container. For example, if it's Tomcat, you can add a filter or valve.
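As a very rough sketch of that Filter approach, assuming a Tomcat-style javax.servlet container (the masking regex and the filter name are made up for illustration, and a production filter would also wrap getOutputStream()):

import java.io.CharArrayWriter;
import java.io.IOException;
import java.io.PrintWriter;

import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.annotation.WebFilter;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpServletResponseWrapper;

// Buffers the HTML produced by the web app, masks sensitive values, then writes it out.
@WebFilter("/*")
public class MaskingFilter implements Filter {

    @Override
    public void doFilter(ServletRequest request, ServletResponse response,
                         FilterChain chain) throws IOException, ServletException {

        final CharArrayWriter buffer = new CharArrayWriter();
        HttpServletResponseWrapper wrapper =
                new HttpServletResponseWrapper((HttpServletResponse) response) {
                    @Override
                    public PrintWriter getWriter() {
                        // Redirect everything the application writes into our buffer.
                        return new PrintWriter(buffer);
                    }
                };

        // Let the application (and any downstream filters) render the response.
        chain.doFilter(request, wrapper);

        // Hypothetical masking rule: blank out anything that looks like a US SSN.
        String masked = buffer.toString().replaceAll("\\d{3}-\\d{2}-\\d{4}", "***-**-****");

        response.getWriter().write(masked);
    }

    @Override
    public void init(FilterConfig filterConfig) { /* no initialisation needed for this sketch */ }

    @Override
    public void destroy() { }
}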
If you want to plug-in to the web server, you'd need to write a custom module using the API of whichever web server is being used.
If all else fails, you could always wrap the entire server in a reverse proxy. All requests would go through your proxy and that would give you the opportunity to modify the requests and the responses.
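If you go the reverse-proxy route, nginx's sub_filter module can do simple, literal substitutions on the proxied response body (it does not support regex or per-user logic on its own); a rough sketch with placeholder hostnames and strings:

server {
    listen 80;
    server_name app.example.com;

    location / {
        proxy_pass http://backend.internal:8080;

        # sub_filter only works on uncompressed upstream responses.
        proxy_set_header Accept-Encoding "";

        # Replace a literal marker in the HTML before it reaches the browser.
        sub_filter "ACCOUNT_NUMBER_PLACEHOLDER" "****";
        sub_filter_once off;
    }
}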