Is it possible to disable WebSockets with Puppeteer?

I am using Puppeteer to create screenshots of websites and I want to eliminate all unnecessary traffic. Besides blocking analytics sites and the like, I want to block WebSocket traffic as well.
I was not able to find anything for this in the Puppeteer API. Is there maybe a startup argument for it?

You should be looking for Upgrade headers in HTTP requests. Puppeteer has an API for intercepting requests, but it's not well documented what gets passed into that handler, so you might have to inspect/debug it a bit.
In short, every WebSocket connection starts with an HTTP request carrying an Upgrade header as a handshake of sorts. If you can reject those requests, the subsequent WebSocket traffic should never happen.
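Something along these lines might work as a starting point (a minimal sketch; the target URL is just an example, and whether the WebSocket handshake actually shows up in request interception depends on your Puppeteer/Chromium version, so verify it against your setup):

```typescript
// Block WebSocket handshakes via request interception, then take the screenshot.
import puppeteer from 'puppeteer';

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  await page.setRequestInterception(true);
  page.on('request', (request) => {
    const isWebSocket =
      request.resourceType() === 'websocket' ||
      (request.headers()['upgrade'] || '').toLowerCase() === 'websocket';

    if (isWebSocket) {
      request.abort();      // drop the handshake so the socket never opens
    } else {
      request.continue();
    }
  });

  await page.goto('https://example.com', { waitUntil: 'networkidle2' });
  await page.screenshot({ path: 'screenshot.png' });
  await browser.close();
})();
```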

Related

Proxying HTTPS mobile app requests fails with 403 response when SSL proxy is enabled

When I run an iOS app through a proxy using tools such as Charles, Burp Suite, or Proxyman, I'm not able to see the full request to the final endpoint after logging in (I receive 403 when SSL proxying is enabled), and the app simply stops working (it only works when SSL proxying is disabled). I would like to see what the full request looks like so I can reproduce it using Postman and HttpClient in Java. Is there anything I could do in order to get status 200, as when SSL proxying is disabled?
Any help in trying to bypass it is appreciated.
This is probably due to TLS fingerprinting.
Platforms like Cloudflare offer services to block bots and other non-browser traffic, which analyse low-level details of how a client makes connections, including the TLS fingerprint, and use this to spot unusual traffic. Because proxies like the ones you're using create separate upstream TLS connections when they intercept incoming connections, they can often end up sending HTTPS traffic where the TLS fingerprint the server sees doesn't match the HTTP headers, which is sufficient for the connection to be blocked as 'unusual'.
Defeating this is not easy. I also maintain an HTTPS debugging proxy called HTTP Toolkit, where I've done some work to defeat this, by optimizing individual fields of TLS handshakes to avoid common blocks (some more details: https://httptoolkit.tech/blog/tls-fingerprinting-node-js/). You might have more success using HTTP Toolkit in this case, since it can successfully avoid 90% of these kinds of blocks at the moment.
(Note that HTTP Toolkit doesn't have automated setup for iOS yet, but you can still use it manually like any other debugging proxy - you just need to trust the certificate & set your proxy settings)
That said, this still won't work 100% of the time. This is a cat-and-mouse game between bots/site-scrapers and bot-blocking services, where the criteria are constantly changing and tightening, so there is no perfect solution. Your best bet is to keep trying different tools and see which ones can make themselves look least suspicious while proxying your traffic.

How to authenticate if auth headers are not supported on client-side?

TL;DR: How to authenticate against NGINX if auth headers are not supported on client side?
I am building an IoT-related project using NGINX as a reverse proxy for the server side services and 1NCE as the LTE carrier for the mobile devices. All traffic is authenticated based on HTTPBasicAuth over SSL-encrypted connections and handling "normal" requests works as desired.
As mobile service might be interrupted and the Internet connection might be lost, I want to send SMS for critical status reports and alarm notifications. 1NCE supports mobile-originated SMS (MO SMS), which are handled by 1NCE's internal infrastructure and forwarded to a configurable API endpoint. So MO SMS are not delivered to a specific phone, but forwarded via an API request which I need to process on my side.
According to 1NCE's SMS documentation and in consultation with their customer support, SMS forwarding does not support any authentication headers. SMS forwarding can only be done by specifying an HTTPS URL (including the desired API endpoint) and a port. The incoming SMS is then wrapped in a request to the given URL and sent in the request body.
I want to add authentication to the SMS forwarding endpoint (receiving forwarded SMS on my side) as well and am currently wondering how I could achieve this. NGINX supports authentication via subrequests, which could be used to evaluate incoming requests with an internal service. So my first idea was to add some credentials to each SMS (as I am also responsible for the SMS-sending part of the code on the mobile devices, I could implement whatever is needed) and check those credentials with an internal service called by NGINX's subrequest. However, this does not seem to be doable: according to this SO question, GET requests are used for the internal subrequests, hence the body of the incoming POST request is discarded. Therefore, the credentials inside the forwarded SMS would not be available to my internal auth service either. Extending NGINX's auth capabilities by writing a custom Lua-based plugin was my second idea, but this not only seems infeasible, it is also not supported by the NGINX instance I am using (Lua modules are disabled, and switching to OpenResty seems like a big step).
My last idea would be to forward all incoming requests to a Python web service (written in Flask; the other services I am using are also written in Flask) and parse the forwarded SMS in Python. Based on the result of the credential evaluation, I could return a 401/Unauthorized status code if the credentials provided in the SMS (which is part of the request body) are invalid, and process the request otherwise. However, I think this approach is quite ugly, as all incoming requests need to be passed to Flask and invalid requests are not rejected at the level of my reverse proxy.
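For illustration, the body-based check I have in mind would look roughly like this (sketched with Node/Express here for brevity, although my real services are Flask; the token scheme, endpoint path, and environment variable name are just placeholders):

```typescript
// Hypothetical endpoint for forwarded MO SMS: the mobile device prefixes each
// SMS with a shared token, e.g. "TOKEN123;actual message", and the service
// rejects anything whose token does not match.
import express from 'express';

const app = express();
app.use(express.text({ type: '*/*' })); // the forwarded SMS arrives in the request body

const AUTH_TOKEN = process.env.SMS_AUTH_TOKEN ?? 'change-me';

app.post('/sms', (req, res) => {
  const [token, ...rest] = String(req.body).split(';');
  if (token !== AUTH_TOKEN) {
    return res.status(401).send('Unauthorized'); // invalid or missing credentials
  }
  const message = rest.join(';');
  // ... process the status report / alarm notification ...
  res.sendStatus(200);
});

app.listen(8080);
```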
Do you have any ideas about how to approach this issue? What would be a sensible approach with regard to best practices? Can I extend NGINX in a way that solves this, or should I drop NGINX entirely in favor of a "better" proxy?

Is it safe to redirect non-SSL requests to the SSL version of a site?

There is an API. Previously, all requests were made without an SSL connection (i.e. unencrypted), e.g. http://api.com/dosomething. The logic has changed now, and it is a bit of a problem to change the URL for all clients who are using this API. There is an HTTPS version of the API site. Is it safe to redirect all requests from http://api.com/dosomething to https://api.com/dosomething on the server side (Apache or NGINX)? How does that work?
Your API consumer transmits everything in the clear: all its data, authentication, etc. And on your new server you're redirecting to the "same" URL, just using HTTPS? The HTTPS connection will now be secure, but all of your data and authentication have long since leaked.
As we don't know anything about your API consumer, technically it could be a web browser that honors "secure" cookies, e.g. it might not transmit the authentication in the clear. But still, all of the data will be out already. As you say that you can't update the clients, I'm assuming that you're not in this situation.
So: the answer is no, it's not secure. Retire the old API and keep track of anyone still accessing it. Once they're few enough, notify them that the HTTP service is being discontinued so that they upgrade. Or stay unsafe - choose your poison.

Hide request/response headers for GET requests from Fiddler or other debugging proxy apps

I have a mobile app which heavily depends on API responses. I was using Charles Proxy and Fiddler to see the API calls made by my app, and I noticed that for one GET API call I am able to see the full URL with all request parameters (which is fine) and the request headers (which include secret keys).
Using that information, anyone can call that API outside of the mobile app. My app has millions of users, and if someone runs a script to generate traffic it also increases the load on the server. So is there any way I can secure or hide those keys?
The only way I can think of is encryption on both the app and the API side. Is there a better way of doing it?
You can implement certificate or public-key pinning in your app (for the leaf or the root CA certificate). This makes it harder for an attacker to use a proxy and intercept HTTPS traffic. However, with Xposed and an SSL-unpinning module, interception will still be possible.
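The exact pinning API depends on your platform (on Android, for instance, the Network Security Config or OkHttp's CertificatePinner are the usual routes). Purely as an illustration of the idea, here is what pinning a certificate fingerprint can look like in Node/TypeScript, with a placeholder host, path, and fingerprint:

```typescript
// Reject the TLS connection unless the server certificate matches a known
// SHA-256 fingerprint. Host, path, and fingerprint below are placeholders.
import https from 'https';
import tls from 'tls';

const PINNED_FINGERPRINT = 'AA:BB:CC:...'; // expected sha256 fingerprint

const req = https.request({
  host: 'api.example.com',
  path: '/v1/data',
  checkServerIdentity: (host, cert) => {
    const err = tls.checkServerIdentity(host, cert); // default hostname check first
    if (err) return err;
    if (cert.fingerprint256 !== PINNED_FINGERPRINT) {
      return new Error('Certificate pin mismatch - possible interception');
    }
    return undefined;
  },
}, (res) => {
  res.resume();
});

req.on('error', (e) => console.error(e.message));
req.end();
```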
Also keep in mind that APK files can be decompiled easily, so an attacker doesn't necessarily need to attack the network traffic at all.
Therefore, the next step is to harden your app to make it resistant against manipulation via Xposed or Frida. Note that good hardening frameworks cost a lot of money; usually the protection offered rises with the cost.
See also this related question.

HTTPS tunneling through my proxy

I'm trying to build a complete web caching proxy using Boost.Asio and libcurl. I've already built the server and everything works fine: it receives HTTP requests (GET, POST, uploads using POST, ...) correctly and also sends the responses back to the browser correctly.
Now I want to extend it so it can handle HTTPS requests. I read about this on the libcurl website http://curl.haxx.se/libcurl/c/libcurl-tutorial.html (proxy section); I understood how it works and have a clear idea of how it should be done. But I didn't find good documentation about how proxies handle HTTPS requests, in particular:
What are the possible messages (information, format, length, ...) exchanged between the source application and the proxy?
What things should I consider?
...
Thanks in advance :-)
You will receive the CONNECT command in plain text and respond to it in plain text as well; the communication after that will be encrypted. If your proxy is to be an SSL endpoint, which is highly problematic given that HTTPS requires a certificate matching the target host address, you will then need to enter SSL mode on both connections. More likely, you should just start copying bytes in both directions without attempting to process the contents.
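As a rough sketch of that second approach (shown in Node/TypeScript here rather than Boost.Asio; the listening port and error handling are kept minimal):

```typescript
// Minimal CONNECT tunnel sketch: acknowledge the CONNECT in plain text,
// then blindly relay the encrypted bytes in both directions.
import http from 'http';
import net from 'net';

const proxy = http.createServer(); // plain HTTP requests would be handled here

proxy.on('connect', (req, clientSocket, head) => {
  // For CONNECT requests, req.url is "host:port", e.g. "example.com:443".
  const [host, port] = (req.url ?? '').split(':');

  const serverSocket = net.connect(Number(port) || 443, host, () => {
    clientSocket.write('HTTP/1.1 200 Connection Established\r\n\r\n');
    serverSocket.write(head);          // bytes the client already sent after CONNECT
    serverSocket.pipe(clientSocket);   // encrypted traffic: just copy, don't parse
    clientSocket.pipe(serverSocket);
  });

  serverSocket.on('error', () => clientSocket.end());
  clientSocket.on('error', () => serverSocket.end());
});

proxy.listen(8888);
```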