Browser being blocked (Forbidden) behind proxy - SSL

I have 2 servers behind a Load Balancer. This LB has SSL configured on it.
Almost 50 different clients can connect to my website successfully, except for one client, who gets a 403 Forbidden message in the browser.
After some investigation with him, I discovered he is behind a proxy server.
I suggested he add my domain to his proxy's bypass list, but he argued that doing so would leave his system exposed.
I have not been able to figure out any way to fix this problem yet.
What solutions might there be? And is my suggestion to him (adding my domain to the proxy bypass list) even correct?
The Website is: Arke

His proxy is denying requests to your server. Most likely, the person who set up his proxy/VPN has defined some kind of blacklist; it's not unheard of to block whole geo-regions or IP blocks.
I would tell your client to either get a new proxy or accept your suggestion of adding your domain to the bypass list.
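To narrow it down, a rough check the affected client could run is to fetch the page through the proxy and look at who actually generated the 403; a Via header or a proxy-branded error body points at the proxy rather than at your load balancer. This is only a sketch, and the proxy address below is a placeholder.

    # Sketch: inspect the 403 the client sees when going through their proxy.
    # "proxy.corp.local:8080" is a placeholder for the client's actual proxy.
    import requests

    resp = requests.get(
        "https://www.example.com/",                         # your site behind the LB
        proxies={"https": "http://proxy.corp.local:8080"},  # the client's proxy
        timeout=10,
    )
    print(resp.status_code)                                  # 403?
    print(resp.headers.get("Via"), resp.headers.get("Server"))
    print(resp.text[:500])                                   # proxy error page vs. your own 403 page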

It turned out to have nothing to do with my IP or domain being blocked. Magically, the next day it worked on its own. Weird!
I assume something was buggy on his end.
Maybe clearing the cache, cookies, and so on would have helped.
I just thought answering my own question might help someone.
Thanks all

Related

How is the Discord API detecting your IP?

How is the Discord API detecting your IP when rate-limiting your computer?
Even when making requests through Tor and resetting the connection every 5 requests to change my IP, it still rate-limits me (you can probably guess what I am doing; just note that it's for fun, quarantine is boring).
How does it know it's still my computer? How does it work?
Exposing an IP is a fundamental part of how the internet works. When you connect to a service, you are sending data to its IP address, including your own IP address so that the service can reply to you. There's no way around this: if the IP given were incorrect, you would not get a reply from that service. Changing your IP using a proxy, a VPN, or, as you've been doing, Tor still exposes the IP address of the endpoint of the proxy, so that the service can respond to the proxy and have the proxy send the response back to you.
Typically, if you are hitting rate limits that often, you are doing something which is not permitted by the service you are using. If you continually hit rate limits, the service will catch on and apply harsher limits, or even terminate your account. With Discord especially, hitting rate limits that often would suggest you are performing requests with malicious intent. If that's not true, you should re-evaluate how you're going about what you're doing, as there will be a better solution to your problem.
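To see this for yourself, here is a minimal sketch (not Discord-specific; httpbin.org/ip is just a convenient echo service, and the SOCKS address assumes a local Tor client on its default port, with requests[socks] installed):

    # Shows which address a remote service actually receives from you.
    import requests

    # Direct request: the service sees your real public IP.
    print("direct :", requests.get("https://httpbin.org/ip", timeout=10).json())

    # Same request routed through a local Tor client (placeholder: 127.0.0.1:9050).
    tor = {"http": "socks5h://127.0.0.1:9050", "https": "socks5h://127.0.0.1:9050"}
    print("via Tor:", requests.get("https://httpbin.org/ip", proxies=tor, timeout=30).json())

    # Both calls return an IP; the second is simply the Tor exit node's address,
    # which is typically what a service keys its per-IP rate limits to.

Switching circuits changes which exit IP is exposed, but some IP is always exposed, and a service can still tie authenticated requests together through your account or token.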

IIS 8 (Server 2012) Site Binding Not Working; Works When No Site Name Is Specified

I've run into a strange problem. If I put a site name (host name) in the site bindings, the Default Web Site on IIS does not recognize it. If I leave it blank, I am able to get the pages, but they show up under the server IP address.
This is a problem because, with SSL, it will either not serve pages or it will give me a certificate warning.
Note that DNS is handled by GoDaddy, using forwarding with masking to the public IP of my EC2 instance on AWS.
All of this started overnight when the SSL cert expired. I have since installed a new, valid certificate, but I cannot get the site working again.
I've done a lot of debugging, including diffing the old (working) configuration against the new one, and I'm not able to understand why this happens.
Setting the site name causes both HTTP and HTTPS to stop working.
I would much appreciate any help in solving this - thanks in advance!
This appears to be a problem with the forwarding-with-masking feature provided by the domain host GoDaddy. With masking, the response is wrapped in a frame, and that frame's src is the public IP address of the server rather than the domain name.
I also think there is a problem with HTTPS forwarding with masking. While the reason this started happening is not clear, for now the fix has been to stop using forwarding with masking (which was only covering HTTP requests anyway) and point the domain directly at the web server's public IP address.
This is not the ideal solution, but at least the website is back up and running. I'll post an update once I know more about forwarding with masking and why it suddenly stopped working.
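For illustration, a masked forward serves a tiny wrapper page roughly like the sketch below (hypothetical markup; 203.0.113.10 stands in for the EC2 public IP). The browser address bar keeps showing the domain while the frame fetches the bare IP, which is why pages appear under the IP and why a certificate issued for the host name can never match what is actually being loaded.

    <!-- Hypothetical sketch of a forwarding-with-masking wrapper page -->
    <html>
      <head><title>www.example.com</title></head>
      <frameset rows="100%">
        <!-- The framed content is requested by IP, not by the domain name -->
        <frame src="http://203.0.113.10/" frameborder="0">
      </frameset>
    </html>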

Protect home web server's dynamic IP from reverse DNS lookups

This is my first question here =)
I'm setting up my web server at home (trying both Apache and nginx), and I've found that you can easily get a free dynamic DNS (sub)domain to keep your server reachable even though your home IP changes every time you disconnect from and reconnect to the internet. I've actually got it working, and it's great.
But the problem is that my personal identifying data (account ID / machine name) can be retrieved with a reverse DNS lookup, and I cannot hide the real IP that is attached to the server.
You can use a proxy while browsing the internet, but how do you proxy your own web server (Apache) that has a free dynamic DNS (sub)domain?
I've already tried creating VirtualHost configurations, also using the proxy modules (mod_proxy, mod_proxy_http, mod_proxy_html) and additionally the Proxifier tool, but no luck.
Some people say it is possible to hide the IP with nginx, but I have never worked with nginx. I still believe it is also possible with Apache; I just can't figure it out.
I'm using a private proxy in the format [proxyip:port]. I have to attach it to my home IP every time to make it work. Maybe it would be better to get a proxy with login/password authentication, but first I need to find out how to use it with the web server.
Has anyone had any luck with this? Can you please explain the proper config for Apache? I'm currently using version 2.4.
Many thanks and have a good day!
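For what it's worth, a minimal sketch of what "proxying your web server" usually amounts to: public DNS points at a remote proxy/VPS, and a reverse-proxy VirtualHost on that machine forwards traffic to the home server, so only the proxy's IP is ever exposed. All names and addresses below are placeholders, and mod_proxy/mod_proxy_http must be loaded.

    # On the remote proxy/VPS (the only address visible to visitors).
    # blog.example.org  = the public host name (placeholder)
    # 198.51.100.23     = the home server, reachable only by this machine (placeholder)
    <VirtualHost *:80>
        ServerName blog.example.org

        ProxyPreserveHost On
        ProxyPass        /  http://198.51.100.23/
        ProxyPassReverse /  http://198.51.100.23/
    </VirtualHost>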

Apache Reverse Proxy Using a Network Proxy Credential?

I'm trying to set up a reverse proxy on Apache 2.2 (Windows). I am able to do it on a non-corporate network without any problems. I am attempting to reverse proxy content from a vendor domain, but keep it under my own domain for SEO reasons.
dev.example.com/stuff ===> devstuff.vendor.com
However, when I try to incorporate this on my internal network, the Internet Gateway proxy is blocking the request, presumably as I'm not properly authenticating the call to the external domain.
dev.example.com ===> Internet Proxy =X=> devstuff.vendor.com
I've been googling every term I can think of and reading the Apache docs and can't find anything which seems to work. I have tried running Apache as a service with a network account which would have access, but naturally, it's probably not trying to use the proxy at all.
Is there any way to tell Apache to send external ProxyPass requests to use a specific proxy server, and perhaps a specific username/password as well? I'd love to avoid modifying the proxy or firewall too heavily to accomplish this.
Thanks!
I never quite figured out the "with passing credentials" part, but using the ProxyRemote directive, we could route everything for our devstuff.vendor.com domain through our network proxy. From there, we had a proxy exception put in to allow our web server IPs through without authentication, since this was an approved arrangement anyhow.
Though, in hindsight, even after solving this, we ended up backing up one step further and just going straight out through the firewall, both for performance reasons (too many hops for the end user) and because of the negative impact on our proxy server.
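For anyone landing here later, that setup has roughly this shape (a sketch, not the exact config; the gateway host and port are placeholders, and the credential problem was handled by the proxy-side exception, not inside Apache):

    # Anything Apache proxies to devstuff.vendor.com goes out via the corporate gateway
    ProxyRemote      http://devstuff.vendor.com  http://internal-gateway.example.com:8080

    # The actual mapping: dev.example.com/stuff -> devstuff.vendor.com
    ProxyPass        /stuff  http://devstuff.vendor.com/
    ProxyPassReverse /stuff  http://devstuff.vendor.com/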

Local HTTPS proxy possible?

TL;DR
I want to set up a local HTTPS proxy that can (LOCALLY) modify the content of HTML pages on my machine. Is this possible?
Motivation
I have used an HTTP Proxy called GlimmerBlocker for years. It started in 2008 as a proxy-based approach to blocking ads (as opposed to browser extensions or other OS X-specific hacks like InputManagers). But besides blocking ads, it also allows the user to inject their own CSS or JavaScript into the page. Development has seriously slowed, but it remains incredibly useful.
The only problem is that it doesn’t do HTTPS (from its FAQ):
Ads on https pages are not blocked
When Safari fetches an https page using a proxy, it doesn't really use the http protocol, but makes a tunneled tcp connection, so Safari receives the encrypted bytes. The advantage is that any intermediate proxies can't modify or read the contents of the page, nor the URL. The disadvantage is that GlimmerBlocker can't modify the content. Even if GlimmerBlocker tried to work as a middleman and decoded/encoded the content, it would have no means of telling Safari to trust it, nor of telling Safari whether the website's certificate is valid, so Safari would think you had visited a dubious website.
Fortunately, most ad providers are not going to switch to https, as serving pages over https is much slower and would impose a huge processing overhead on the ad providers' servers.
Back in 2008, maybe that last part was true…but not any more.
To be clear, I think the increasing use of SSL is a good thing. I just want to get back the control I had over the content after it arrives on my end.
Points of Confusion
While searching for a solution, I’ve become confused by some apparently contradictory points.
(Also, although I’m quite experienced with the languages of web pages, I’ve always had a difficult time grokking networks and protocols. On that note, sorry if I’m missing something that is way obvious!)
I found this StackOverflow question asking whether HTTPS proxies were possible. The best answer says that “TLS/SSL (The S in HTTPS) guarantees that there are no eavesdroppers between you and the server you are contacting, i.e. no proxies.” (The same answer then described a hack to pull it off, but I don’t understand the instructions. It was very theoretical, anyway.)
In OS X under Network Preferences ▶︎ Advanced… ▶︎ Proxies, there is clearly a setting for an HTTPS proxy. This seems to contradict the previous statement that TLS/SSL’s guarantee against eavesdropping implies the impossibility of proxies.
Other things of note
I can’t remember where, but I read that it is possible to set up an HTTPS proxy, but that it makes HTTPS pointless (by breaking the secure communication in the process). I don’t want this! Encryption is good. I don’t want to filter anyone else’s traffic; I just want something to customize the content after I’ve already received it.
GlimmerBlocker has a nice GUI interface, but I’m fine with non-GUI solutions, too. I may have a poor understanding of networking and protocols, but I’m perfectly comfortable on the command line, tweaking settings in text editors, and so on.
Is what I’m asking possible? Or is my question a case of “either you get security, or you can break it with hacks and get to customize your content—but not both”?
The common idea of an HTTP proxy is a server which accepts a CONNECT request containing the target hostname and port and then just builds a tunnel to the target server. All the HTTPS is done inside the tunnel, so there is no way for the proxy to modify it (end-to-end security from browser to web server).
To modify the data you need a proxy which plays man-in-the-middle. In this case you have one HTTPS connection between the proxy and the web server and another HTTPS connection between the browser and the proxy. Between proxy and web server the original server certificate is used, while between browser and proxy a newly created certificate is used, which is signed by a CA specific to the proxy. Of course, this CA must be imported as trusted into the browser, otherwise it would complain all the time about possible attacks.
Of course, all the verification of the original server certificate has to be done by the proxy now, and not all solutions do this correctly. See also http://www.secureworks.com/cyber-threat-intelligence/threats/transitive-trust/
There are several proxy solutions which can do this SSL interception, like Squid, mitmproxy (Python) or App::HTTP_Proxy_IMP (Perl). The last two are specifically designed to let you modify the content with your own code, so these might be good places to start.
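As a concrete starting point, here is a minimal sketch of a mitmproxy addon that rewrites HTML on the fly (the injected CSS and the file name are just examples). You would run it with something like mitmdump -s inject.py, point the system's HTTP/HTTPS proxy settings at it, and import mitmproxy's generated CA as trusted, exactly as described above. Details vary between mitmproxy versions.

    # inject.py - mitmproxy addon sketch: append a <style> block to HTML responses
    from mitmproxy import http

    CUSTOM_CSS = "<style>.ad, .banner { display: none !important; }</style>"

    class InjectCss:
        def response(self, flow: http.HTTPFlow) -> None:
            # Only touch HTML responses that actually have a body
            ctype = flow.response.headers.get("content-type", "")
            if "text/html" in ctype and flow.response.text:
                flow.response.text = flow.response.text.replace(
                    "</head>", CUSTOM_CSS + "</head>", 1
                )

    addons = [InjectCss()]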