Capture Amazon S3 requests from Tomcat using Fiddler - amazon-s3

My web application, running in Tomcat, reads files from Amazon S3 buckets. Is there a way to capture these requests? I am not sure what protocol it uses (S3?). I would like to capture the traffic using Fiddler.
Any ideas?

As far as I know, S3 communicates over plain HTTP/HTTPS (its REST and SOAP APIs), so the traffic is capturable in principle. Are you using a library to make your S3 calls? The library may not use the default proxy.
The question "Configuring Tomcat to communicate through proxy in Localhost - Fiddler" has general details on how to configure Tomcat to use the Fiddler proxy.
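If the S3 client honors the standard JVM proxy properties, Tomcat can be pointed at Fiddler's default listener (127.0.0.1:8888) with a `bin/setenv.sh` fragment. This is a sketch, not the only way; the AWS SDK for Java, for example, can also be given a proxy explicitly through its client configuration rather than via system properties:

```shell
# bin/setenv.sh - picked up by catalina.sh at startup (create the file if absent).
# Route the JVM's HTTP and HTTPS traffic through Fiddler's default listener.
CATALINA_OPTS="$CATALINA_OPTS \
  -Dhttp.proxyHost=127.0.0.1 -Dhttp.proxyPort=8888 \
  -Dhttps.proxyHost=127.0.0.1 -Dhttps.proxyPort=8888"
export CATALINA_OPTS
```

Note that to see decrypted HTTPS traffic to S3 you additionally need Fiddler's root certificate trusted by the JVM (e.g. imported into its cacerts store), and none of this helps if the S3 library ignores the JVM proxy settings entirely.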

Related

Amazon Application Load Balancer wss to ws forwarding problem

There is a target group in an AWS Fargate cluster that manages Node.js applications inside Docker containers. Every application serves WebSocket connections (plain WebSocket, not socket.io!).
Behind the Application Load Balancer the connection is unencrypted (HTTP / ws); outside it is HTTPS / wss. So when an HTTPS request reaches the Application Load Balancer, it decrypts the request and forwards an HTTP request to a selected container.
The question is: how (and where) can wss→ws forwarding be configured for WebSocket requests (there is a specific URL)?
The HTTPS→HTTP rule performs a wss→HTTP transformation, which is insanely wrong. How can a wss→ws transformation be implemented, and is this possible at all?

How To Convert HTTPS POST to HTTP POST

I have an existing web application hosted in Tomcat which listens for an HTTP POST.
However, for security reasons, requests to the web application have to be transported across the network as HTTPS.
I cannot change the web application.
So I want to receive an HTTPS POST, decrypt it, and pass it on to the web application as an HTTP POST.
I also need to pass response codes etc. back to the sender.
I have been told that I can do this using Apache configured as a "reverse proxy".
But I am not an expert at Apache or Tomcat and before I investigate this option I wanted to be sure I was going down the right path.
i.e. (schematic): Remote Server → HTTPS → Apache reverse proxy → HTTP → Tomcat web application
So to the Remote Server application everything looks like it happens over HTTPS.
And to my local Tomcat web application everything looks like it happens over HTTP.
Is this doable and correct?
Do I need to use Apache, or could I do it all in Tomcat?
Is this what is called "URL rewriting"?
And this is more than just "redirection"?
Thanks,
Brett
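For what it's worth, the setup described above can be sketched as an Apache virtual host, assuming Tomcat's HTTP connector on localhost:8080 and placeholder certificate paths (mod_ssl, mod_proxy, and mod_proxy_http enabled):

```apache
# Hypothetical vhost: terminate HTTPS here, forward plain HTTP to Tomcat.
<VirtualHost *:443>
    ServerName example.com
    SSLEngine on
    SSLCertificateFile    /etc/ssl/certs/example.com.crt
    SSLCertificateKeyFile /etc/ssl/private/example.com.key

    # Reverse proxy: forward every request to the local Tomcat instance.
    ProxyPreserveHost On
    ProxyPass        / http://localhost:8080/
    ProxyPassReverse / http://localhost:8080/
</VirtualHost>
```

This is reverse proxying rather than URL rewriting or redirection: the client only ever talks HTTPS to Apache, Tomcat only ever sees HTTP, and response codes flow back through the proxy automatically.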

Is reverse proxy actually needed on ASP.NET core?

We're wondering if reverse proxy is actually required for most use cases and would appreciate additional information.
The Kestrel/Nginx documentation claims:
"Kestrel is great for serving dynamic content from ASP.NET Core. However, the web serving capabilities aren't as feature rich as servers such as IIS, Apache, or Nginx. A reverse proxy server can offload work such as serving static content, caching requests, compressing requests, and HTTPS termination from the HTTP server. A reverse proxy server may reside on a dedicated machine or may be deployed alongside an HTTP server."
https://learn.microsoft.com/en-us/aspnet/core/host-and-deploy/linux-nginx?view=aspnetcore-2.2
Could anyone please share some insights if this is actually relevant nowadays?
In our use case, we use Docker instances with external load balancing (AWS ALB).
Each Docker instance runs both Nginx and our ASP.NET Core application.
We couldn't figure out the exact benefits of using Nginx.
Serving static content
As we're using an external CDN (AWS CloudFront), I assume static caching doesn't really have any actual benefit, does it?
Caching requests
I believe this is the same as serving static content, as dynamic content isn't cached in most scenarios (in our use case: all scenarios).
Compressing requests
ASP.NET Core has a response compression middleware; however, the documentation states: "The performance of the middleware probably won't match that of the server modules. HTTP.sys server and Kestrel server don't currently offer built-in compression support."
Perhaps some benchmarks could be created to validate this claim.
https://learn.microsoft.com/en-us/aspnet/core/performance/response-compression?view=aspnetcore-2.2
HTTPS termination from the HTTP server
I assume most clients having load balancers can skip this part, as HTTPS termination can be done on the load balancer if needed.
Thanks!
Effy
This documentation does not tell you that you "should" run ASP.NET Core / Kestrel behind a reverse proxy, just that advanced scenarios can benefit from one, since Kestrel lacks some features that other web servers may have.
If you don't have a use for an additional nginx reverse proxy, then you don't have to use one.
E.g. Kestrel only recently gained APIs to change certain connection parameters on the fly without requiring a restart - this is helpful for switching certificates acquired via ACME (e.g. the Let's Encrypt service).
Whether a reverse proxy is needed depends highly on the system architecture, but you don't have to deploy one if you don't require a specific feature.
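To make the trade-off concrete: in the setup described (ALB terminating TLS, CloudFront caching static assets), a sidecar nginx config shrinks to little more than proxying. A hypothetical sketch, assuming Kestrel listens on localhost:5000:

```nginx
# Hypothetical sidecar config: with the ALB doing TLS termination and
# CloudFront doing static caching, nginx adds little beyond gzip and
# header hygiene.
server {
    listen 80;

    gzip on;   # response compression offloaded from Kestrel
    gzip_types application/json text/css application/javascript;

    location / {
        proxy_pass         http://127.0.0.1:5000;   # Kestrel
        proxy_http_version 1.1;
        proxy_set_header   Host $host;
        proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header   X-Forwarded-Proto $http_x_forwarded_proto;
    }
}
```

If this is really all the sidecar does, pointing the ALB target group directly at Kestrel is a defensible simplification, which is consistent with the answer above.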

CoAP on Apache, CoAP Web Service

I am working with the CoAP protocol on IoT, but I also need a web service. I implemented the web service on Apache over HTTP, with a proxy that converts between CoAP and HTTP requests and responses. But I don't want to use the proxy for CoAP-HTTP conversion; I want to implement a CoAP web service directly. Do you have any ideas about that, on Apache or something different? Just any idea?
Since you wrote "on Apache or different things", I will talk here about the second option :). To implement the CoAP server itself, I would recommend either
NodeJS with the CoAP package
Java implementation Californium, from Eclipse.org
More complete list available at http://coap.technology/impls.html#server-side, see Server-side
And then handle the communication with your Apache HTTP server via WebSockets and REST APIs.
coap.me is also great to run tests during development.
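To show what "implementing CoAP directly" actually means on the wire, here is a minimal sketch that hand-encodes a CoAP GET /hello request per the RFC 7252 framing, using only Node's stdlib. In practice a library such as the `coap` npm package or Californium does this encoding and parsing for you; this is illustration, not a replacement:

```javascript
// Hand-encode a CoAP GET /hello request (RFC 7252 message format).
const header = Buffer.from([
  0x40,       // version=1, type=0 (Confirmable), token length=0
  0x01,       // code 0.01 = GET
  0x12, 0x34, // message ID (arbitrary)
]);
// One Uri-Path option: option number 11 (delta=11), value length=5, "hello"
const uriPath = Buffer.concat([Buffer.from([0xb5]), Buffer.from('hello')]);
const message = Buffer.concat([header, uriPath]);

// CoAP's default transport is UDP on port 5683; sending would look like:
// require('dgram').createSocket('udp4').send(message, 5683, 'coap.me');
console.log(message.toString('hex')); // prints 40011234b568656c6c6f
```

A CoAP server is essentially the inverse: a UDP socket that parses this framing and answers with a response code such as 2.05 Content.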

How to implement Https on web facing nginx and several microservices behind it

I'm just starting to develop a SPA with a Java (Dropwizard) REST backend. I'm kinda new to 'web' development, but I've built internal web apps before, so security was not a big concern until now.
Right now I'm using nginx as my public-facing web server, and I just discovered a whole slew of complications that arise as we split the actual servers: a static web server serving my SPA's files, and Java microservices behind it.
I'm used to Apache talking to Tomcat with mod_jk, but now I had to implement CORS in development because my SPA is served by lite-server on a different port than the REST API served by Dropwizard.
Now I've reached my minimum viable product and want to deploy it to production, but I have no idea how.
Do I still need the CORS headers? Dropwizard will run separately on a different port available only to local processes, and I will configure nginx to route incoming requests from, e.g., /api/ to that port. Does that count as cross-origin?
I'd like to serve everything over HTTPS. Dropwizard can serve HTTPS, but I don't want to update the SSL certificate on multiple microservices. I read about nginx SSL termination; will this let me use plain HTTP locally and HTTPS on nginx?
Any other caveats to watch out on deploying with this architecture?
Thank you!
Yes, you can certainly do it!
You can terminate HTTPS with nginx and still have the backend speak either plain HTTP or even HTTPS; the proxy_pass directive supports both schemes for the upstream. You can also use the newer TCP stream proxying if necessary.
There are not that many caveats, really. It usually just works.
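A sketch of the single-origin layout described in the question, with assumed paths and ports (SPA files in /var/www/spa, Dropwizard's application connector bound to localhost:8080, placeholder certificate paths):

```nginx
# Hypothetical setup: nginx terminates TLS, serves the SPA, and proxies
# /api/ to a Dropwizard instance reachable only on localhost.
server {
    listen 443 ssl;
    server_name example.com;
    ssl_certificate     /etc/ssl/certs/example.com.crt;   # one cert, one place
    ssl_certificate_key /etc/ssl/private/example.com.key;

    # Static SPA files
    root /var/www/spa;

    # Same origin as the SPA, so the browser sends no cross-origin requests
    location /api/ {
        proxy_pass http://127.0.0.1:8080;   # Dropwizard over plain HTTP
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```

Because the browser sees a single origin (example.com) for both the SPA and /api/, the CORS headers needed in development can be dropped in this deployment, and only nginx ever holds the certificate.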