Charles Proxy vs Developer Console vs Browserstack [closed]

It seems that more developers are using Charles Proxy nowadays, but I can't figure out what the advantages are, or what you can do with Charles that you can't do with alternatives like the Chrome Developer Console or BrowserStack.

Debugging proxies like Charles Proxy, HTTP Toolkit and Fiddler all have a few features that browser developer tools don't:
They can capture traffic from non-browser sources, like mobile devices and desktop applications (see the sketch after this list).
They can capture traffic from multiple tabs or other sources in one place, so you can see the traffic from your web app alongside the backend traffic between your microservices.
They can capture traffic that the browser doesn't show - e.g. the browser's own requests for internal browser services, Safe Browsing checks, or CORS requests that don't appear in dev tools.
They include many options to rewrite or redirect traffic (usually both custom rules to automatically mock responses and breakpoints to manually edit traffic).
They usually offer more advanced traffic filtering and inspection, e.g. more powerful tools for finding certain requests, for inspecting more request and response body formats, for understanding compression and caching behaviour, and for validating headers or even recognized request parameters against specific known APIs.
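For example, a non-browser client can often be pointed at one of these proxies simply by configuring it to use an HTTP(S) proxy. A minimal Python sketch, assuming a Charles-style proxy listening on 127.0.0.1:8888 whose root CA certificate has been exported to charles-ca.pem (the endpoint URL is a placeholder):

import requests

# Route this script's traffic through the local debugging proxy.
proxies = {
    "http": "http://127.0.0.1:8888",
    "https": "http://127.0.0.1:8888",
}

# The request then shows up in the proxy's session list like any browser request.
resp = requests.get(
    "https://api.example.com/health",   # hypothetical endpoint
    proxies=proxies,
    verify="charles-ca.pem",            # trust the proxy's CA so HTTPS interception works
)
print(resp.status_code, resp.headers.get("content-type"))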
That said, there are a few unique benefits of browser tools:
There is zero setup and no separate applications required.
You immediately get the exact traffic from a single page with no extra noise from anywhere else.
They can use metadata from the browser's internal state, e.g. to show which line of JS sent a specific request, or to show requests that failed before any connections were made (due to CORS, mixed content, or many other browser restrictions).

Related

Why is the HSTS header required if an http to https redirect is already present? [closed]

We have a webserver running behind an AWS ALB, with AWS CloudFront in front of the ALB.
We have set up forced http->https redirection in both the CDN and the ALB.
Do we still need to configure the HSTS header?
What are the disadvantages if we don't set up HSTS when forced https redirection is already enabled?
Consider the following attack (ssl stripping).
User enters "example.com" in the browser.
The browser sends request to http://example.com.
That redirects to https://
The browser requests https://example.com and all is well, right?
What if there is a man in the middle between the browser and these sites? HTTPS protects against man in the middle, so they can't do anything right?
User enters "example.com" in the browser.
The browser sends request to http://example.com.
The attacker hijacks this request and responds with arbitrary content (e.g. something that looks like the real page).
The user entered example.com and got something that looks like it, so they are happy - but they are looking at a malicious page over plain http. The attacker can even proxy the real page, replacing all https references with http and serving the appropriate content from the https site; in that case the https connection is between the attacker and the https server, not between the user and the https server.
Of course the user can discover this if they are security-aware and pay attention, and modern browsers now warn about insecure (non-https) pages. Still, the best practice is to make even the very first request over https, so none of this is possible (because an attacker can't forge a valid certificate for https://example.com), and that's exactly what HSTS achieves.
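In practice, HSTS just means sending one extra response header over https. A minimal sketch of adding it at the application layer, using Python's Flask purely as an illustration (the same header can also be attached by the CDN or load balancer; the max-age value here is an arbitrary one-year example):

from flask import Flask

app = Flask(__name__)

@app.after_request
def add_hsts(response):
    # Browsers only honour HSTS when it arrives over https; it is ignored on plain http.
    response.headers["Strict-Transport-Security"] = "max-age=31536000; includeSubDomains"
    return response

@app.route("/")
def index():
    return "ok"

if __name__ == "__main__":
    app.run()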

Are custom (https) headers a safe API auth method? [closed]

Is it safe to put an API key in a custom header like this (Perl script):
my $json = `wget --no-check-certificate --header "keyFile: $hashkey" -q -O - $mediaplayersurl`;
Will the header be encrypted when connecting to an https resource? Or are the headers passed as plain text like the url?
No, this is not safe, but not in the way that you think.
In terms of the OSI layer model, HTTPS encrypts at the transport layer, while http headers belong to the application layer. Everything sent above the transport layer is encrypted, and that includes http headers and the URL. We still don't recommend sending authentication tokens in the URL, because those end up in web server log files where they may be readable by many people and allow them to impersonate the user.
The issue is that this encryption can easily be broken by an active network attacker - usually called a man-in-the-middle - because of the unchecked certificate (--no-check-certificate). If the client doesn't check the certificate, an attacker can impersonate the server to the client and at the same time impersonate the client to the server, using their own certificate (one they hold the private key for). They can then learn the API key and reuse it afterwards.
You can fix this either by using (publicly trusted) certificates that are valid (a full certificate chain that the client validates up to the trust root, with valid dates and domain names) or by using self-signed certificates (which have no certificate chain) whose fingerprint the client actually checks.
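A minimal sketch of the same call with certificate checking left on, using Python's requests purely as an illustration (the URL and key value are placeholders; the header name is taken from the question):

import requests

API_URL = "https://media.example.com/players"   # hypothetical endpoint
API_KEY = "REPLACE_WITH_REAL_KEY"               # loaded from a secure store in practice

resp = requests.get(
    API_URL,
    headers={"keyFile": API_KEY},   # the API key travels inside the encrypted TLS channel
    verify=True,                    # default: validate the certificate chain and hostname
    timeout=10,
)
resp.raise_for_status()
print(resp.json())

For the self-signed case, verify can instead point at that specific certificate file, so only that exact certificate is accepted.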

Hide request/response headers for a GET request from Fiddler or other debug proxy apps

I have a mobile app which depends heavily on API responses. I was using Charles Proxy and Fiddler to see the API calls made by my app, and I noticed that for one GET call I can see the full URL with all request parameters (which is fine) and the request headers (which include secret keys).
Using that information, anyone can call that API outside of the mobile app. My app has millions of users, and if someone runs a script to generate traffic it will also increase the load on the server. So is there any way I can secure or hide those keys?
The only approach I can think of is encryption on both the app and the API side. Is there a better way of doing it?
You can implement certificate or public-key pinning in your app (for the leaf or the root CA certificate). This makes it harder for an attacker to use a proxy and intercept HTTPS traffic. However, with Xposed and an SSL-unpinning module, interception will still work.
Also keep in mind that APK files can be decompiled easily, so an attacker doesn't even have to attack the network traffic to extract the keys.
The next step is therefore to harden your app to make it resistant against manipulation via Xposed or Frida. Note that good hardening frameworks cost a lot of money; usually the protection offered rises with the cost.
See also this related question.
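To illustrate the pinning idea only (in the app itself you would use the platform HTTP stack's pinning support, e.g. OkHttp's certificate pinner on Android), here is a minimal Python sketch that pins a server certificate's SHA-256 fingerprint; the host and fingerprint are placeholders:

import hashlib
import socket
import ssl

PINNED_HOST = "api.example.com"   # placeholder host
PINNED_SHA256 = "d4e5f6..."       # expected hex fingerprint of the server certificate

def fetch_with_pinning(host: str, port: int = 443) -> None:
    context = ssl.create_default_context()  # normal chain and hostname validation first
    with socket.create_connection((host, port)) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            der_cert = tls.getpeercert(binary_form=True)
            fingerprint = hashlib.sha256(der_cert).hexdigest()
            if fingerprint != PINNED_SHA256:
                raise ssl.SSLError("certificate fingerprint does not match the pin")
            # ...proceed to send the HTTP request over `tls`...

if __name__ == "__main__":
    fetch_with_pinning(PINNED_HOST)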

Drop in traffic due to HTTPS security [closed]

Ours is an educational website, collegesearch.in, which is HTTPS-secured. We are losing our desktop traffic because we get errors like "untrusted certificate" on public networks, and some antiviruses block our website as well. There is no issue with our certificate: it is issued by CSA and is not self-signed.
We understand some pages may include mixed content, such as embedded http links, which we identify and remove, but this by itself does not seem to be the reason for the traffic drop.
We have 75% mobile users and only ~20% desktop users, while our competitors have 40% desktop users and are http websites. This makes us think that using HTTPS has, ironically, become a problem.
My question is: what makes antiviruses block an HTTPS website?
Why do we get an untrusted certificate error?
Anything that can help here...
The site collegesearch.in:
is using a self-signed certificate and thus is not trusted by default by any browser
on top of this, the certificate is expired
on top of this, the name in the certificate does not match the URL
on top of that, you are offering insecure ciphers
For more details see the SSLLabs report.
Interestingly, www.collegesearch.in is set up in a different way, although it still offers some weak ciphers.
It looks like you are trying to deal with the badly set up collegesearch.in by redirecting users to www.collegesearch.in. But for the redirect to work, the user is first confronted with the bad certificate from collegesearch.in, which they must accept before the browser continues with the HTTP request that then results in the redirect to www.collegesearch.in. To fix this, you need a proper certificate setup not only for www.collegesearch.in but also for collegesearch.in.
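As a quick way to see this kind of problem for yourself, here is a minimal Python sketch that attempts a normally-verified TLS handshake and prints the validation error (the hostname is whichever domain you want to check; the site's actual configuration may of course have changed since the SSLLabs report):

import socket
import ssl

def check_certificate(host: str, port: int = 443) -> None:
    context = ssl.create_default_context()  # validates chain, dates and hostname
    try:
        with socket.create_connection((host, port), timeout=10) as sock:
            with context.wrap_socket(sock, server_hostname=host) as tls:
                cert = tls.getpeercert()
                print("OK:", cert.get("subject"), "expires", cert.get("notAfter"))
    except ssl.SSLCertVerificationError as err:
        # e.g. self-signed certificate, expired certificate, hostname mismatch
        print("Certificate problem:", err.verify_message)
    except ssl.SSLError as err:
        print("Other TLS problem:", err)

check_certificate("collegesearch.in")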

scraping a form from an ssl site and using it on your own

If I screen scrape a form from a site secured with SSL, and put that form on my site (which is also secured by SSL), do I still get the benefits of SSL?
Is the scrape process dynamic? Meaning, does it happen each time a user hits the wrapping page on your site, or are you doing it once and just using the result from that day forward?
In either case, there are two SSL sessions in play here. The first is between the computer performing the scrape - probably your web server - and the source server. The second, if applicable, is between the browser and your server, and you are responsible for the SSL on that connection.
Whether or not you "get the benefits of ssl" depends on which part of this process you're referring to.
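To make the first of those two sessions concrete, here is a minimal Python sketch in which the scraping side fetches the form over a verified HTTPS connection before re-serving it (the source URL is a placeholder; the second session is simply whatever HTTPS your own site already terminates for the browser):

import requests

SOURCE_URL = "https://source.example.com/login-form"  # hypothetical source page

def scrape_form() -> str:
    # Certificate verification stays on (the default), so this leg is protected
    # independently of the HTTPS your own site serves to visitors.
    resp = requests.get(SOURCE_URL, timeout=10)
    resp.raise_for_status()
    return resp.text  # scraped HTML, to be embedded in your own page

if __name__ == "__main__":
    html = scrape_form()
    print(len(html), "bytes scraped over a verified TLS connection")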