Error Code: 502 Proxy Error. The ISA Server denied the specified Uniform Resource Locator - apache

I have installed Apache HTTP Server, and when I browse to localhost I get this error. The Apache server is started, the port is configured to 80, and nothing else appears to be using it. I can't figure out what the problem is. Can someone help?
Thanks.

The problem is that you're routing your localhost traffic through your upstream gateway proxy. The upstream gateway proxy refuses to send the traffic back, either because "localhost" has a different meaning to it, or because it's trying to prevent a security threat called "proxy bounceback." What URL are you using to access your site? Put that URL's hostname in your proxy exemption list.
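If you want to confirm that the proxy is the culprit, here is a quick sketch (assuming you can test with curl from the command line; the ISA exemption list itself is configured in the browser or on the ISA box):
# Bypass the upstream proxy for localhost while testing
export no_proxy="localhost,127.0.0.1"
curl http://localhost/
# or, for a single request:
curl --noproxy "localhost,127.0.0.1" http://localhost/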

Open ISA Server 2006, create a rule that allows traffic from the internal network to both the internal and external networks, and restart the ISA services. It will work fine.
I tried this and it is working fine.

Related

GCP load balancing ("internal" traffic over HTTPS)

I have a GCP instance group with 2 instances. Both are up and running. I want to configure a load balancer (HTTPS) to manage the traffic.
I've set up a forwarding rule with the HTTP protocol and a certificate managed by Google. This all works, but only when the traffic between the load balancer and the backend (the instances) is plain HTTP.
Steps I have taken so far
I created an instance template; it is just a normal N1-series machine. I checked the boxes to create firewall rules allowing HTTP and HTTPS traffic.
I created a firewall rule named "allow-ports". This firewall rule targets all instances in the network, has a 0.0.0.0/0 IP range, and allows TCP ports 80 and 443. As I see it, this firewall rule should open both the HTTP (80) and HTTPS (443) ports.
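For reference, I believe the gcloud equivalent of what I did in the console is roughly the following (the rule name and network match what I described above; adjust if yours differ):
gcloud compute firewall-rules create allow-ports \
    --network=default \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:80,tcp:443 \
    --source-ranges=0.0.0.0/0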
I created an instance group with port mapping: "http-port" = 80, "https-port" = 443. It uses the template I just created.
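(Again just for reference, I think the named ports can also be set with something like this; the instance group name and zone here are placeholders:)
gcloud compute instance-groups set-named-ports my-instance-group \
    --named-ports=http-port:80,https-port:443 \
    --zone=us-central1-a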
Once the instance group was created, I checked that it was running. Over SSH, I got access to the instances and installed Apache (sudo apt-get install -y apache2) on both of them. When navigating to their external IPs in the browser, I can see them both.
I created an HTTP(S) load balancer with the option "From internet to my VMs". For the backend configuration, I added a backend service with my instance group, protocol HTTP, named port "http-port". For the frontend configuration, I set up the HTTPS protocol, created an IPv4 address, and created a Google-managed SSL certificate. I also added health checks, by the way.
Now... these steps work (after a few minutes). With Cloud DNS, I have set up a domain name which points to the IP address of the load balancer. When going to that domain, I see the Apache page.
What doesn't work?
When I change the backend configuration to HTTPS (and the named port to "https-port"), I get a 502 server error. So it seems to me that there is some connection, but something goes wrong. Could this be an Apache error?
I have spent a whole day, creating and recreating instance groups, firewall rules, load balancers, ... but nothing seems to work. I'm surely missing something, probably something dumb, but I have no clue what it could be.
What do I want to achieve?
I not only want a secure (HTTPS) connection between the client and my load balancer; I also want a secure connection between the load balancer and the backend service (the instance group). Because GCP offers the option to use the HTTPS protocol when creating a backend service, I feel that this should be possible.
To be honest: I'm reading some articles saying that the internal traffic is already secured, so an HTTPS connection is not necessary. But that doesn't matter to me; I really want to know how this works!
EDIT
I'm using the correct VPC (default). I also edited the firewall rule from 0.0.0.0/0 to 130.211.0.0/22 and 35.191.0.0/16 (see: https://cloud.google.com/compute/docs/tutorials/globally-autoscaling-a-web-service-on-compute-engine?hl=nl#configure_the_load_balancer).
In addition to my previous comment: I followed your steps in my test project to find out the cause of your issue. I installed the same configuration and checked it with HTTP at the back-end. As expected, I found no errors. After that, I installed SSL certificates on the back-end and on the load balancer, then switched my back-end, load balancer and health checks to HTTPS and disabled HTTP at the back-end. At this point, I again found no errors.
So I decided to provoke the 502 error in my test configuration. I switched the health check on the load balancer to HTTP. A few minutes later I tried to reach my test service again and got the 502 error. When I switched the health check back to HTTPS, the 502 error went away.
During this test, I didn't change any firewall rules, but I allowed HTTP and HTTPS traffic in my instance template, and I used the default network.
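For completeness, what I did on the back-end and for the health check looked roughly like this (the resource names are placeholders, and I'm assuming Debian/Ubuntu with Apache's default self-signed "snakeoil" certificate, which is enough for testing because the load balancer doesn't validate the backend certificate):
# On each backend instance: make Apache serve HTTPS on 443
sudo a2enmod ssl
sudo a2ensite default-ssl
sudo systemctl reload apache2

# Create an HTTPS health check and point the backend service at it
gcloud compute health-checks create https my-https-check --port=443
gcloud compute backend-services update my-backend-service \
    --global \
    --protocol=HTTPS \
    --port-name=https-port \
    --health-checks=my-https-check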

HAProxy as reverse proxy: problem with SSL passthrough

I set up HAProxy as a reverse proxy in our organization. We want requests for certain websites, like Lenovo or Oracle, etc., to pass through our reverse proxy server (our clients use our DNS server, and in that DNS server I defined the reverse proxy's IP as the address for those domains). I am using SSL passthrough, but I have some problems with this setup.
1. Sometimes HAProxy doesn't work correctly and has problems serving the right certificate. For example, when I want to see www.amazon.com, HAProxy serves the wrong certificate (SSL_ERROR_BAD_CERT_DOMAIN), so Firefox refuses to load the website. In this case I have www.intel.com in the HAProxy config, and HAProxy gets confused and serves www.amazon.com with the Intel website's certificate.
2. I want all subdomains of a website, like *.oracle.com or *.lenovo.com, to pass through our reverse proxy, so we don't need to register the subdomains one by one in the HAProxy server.
I tried -reg and other pattern-matching methods, but all of them need the final destination.
3. Sometimes the redirection doesn't work properly and we face an HTTP-to-HTTPS redirection error (sometimes a client enters lenovo.com or intel.com, i.e. a plain HTTP request). To overcome this I defined an HTTP frontend and redirect all requests to HTTPS, except one hypothetical request matched by an ACL, but the issue still appears sometimes.
This is simply done with req_ssl_sni and a simple ACL to forward the request, but take care to use a single frontend and backend, because multiple frontends and backends can confuse HAProxy.
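A minimal sketch of what I mean (not a drop-in config; the backend names and addresses are placeholders, and the -m end matches are one way to cover the subdomains from point 2):
# excerpt for /etc/haproxy/haproxy.cfg -- one TCP frontend, routed by SNI
frontend ft_https_passthrough
    bind *:443
    mode tcp
    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_sni -m found }
    use_backend bk_oracle if { req_ssl_sni -i -m end .oracle.com }
    use_backend bk_lenovo if { req_ssl_sni -i -m end .lenovo.com }

backend bk_oracle
    mode tcp
    server oracle1 203.0.113.10:443

backend bk_lenovo
    mode tcp
    server lenovo1 203.0.113.20:443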

Can ping ec2 server (ubuntu/apache) but don't get response from http request

Background:
OS: ubuntu
Web Server: apache2
What works:
I can ping the server's elastic IP (and receive a response)
I can ssh into the server
What doesn't work:
I cannot get any sort of http response from the server
Expected Behavior:
When I go to http://ec2-XXX-XX-XXX-XXX.compute-1.amazonaws.com/, it will serve my page, or at least give me a 404 that I can debug
Actual Behavior:
When I go to http://ec2-XXX-XX-XXX-XXX.compute-1.amazonaws.com/, it says "Oops! Google Chrome could not connect to ec2-XXX-XX-XXX-XXX.compute-1.amazonaws.com". It doesn't even give me a 404.
Rant:
Clearly the server is there, because I can SSH in to that exact address and I can ping that exact IP and get a response. But when I go to that exact address in my web browser, it's as if the request never makes it to the server. Or it's as if Amazon isn't letting HTTP requests through, but in my security group I am clearly specifying that HTTP requests from all sources are allowed. Apache is definitely running, my document root is definitely set up properly, and my error and access logs don't give me anything.
Is there any sort of log in between Amazon and the server, or in between requests reaching the server and being received by Apache, that would explain why it returns "could not connect" rather than a 404? Can I make my Apache logs more verbose?
Thanks in advance! I've spent hours on this....
It turns out Apache was set to listen on port 8080 rather than port 80, so if you encounter this problem, take a look at which port Apache is listening on.
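A quick way to check (assuming a Debian/Ubuntu-style layout where the Listen directives live in ports.conf):
# See which ports Apache is configured to listen on
grep -R "^Listen" /etc/apache2/ports.conf /etc/apache2/sites-enabled/

# Confirm what is actually bound
sudo netstat -tlnp | grep -E ':80|:8080'

# If it shows 8080, change it back to 80 and restart
sudo sed -i 's/^Listen 8080/Listen 80/' /etc/apache2/ports.conf
sudo service apache2 restart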

Unable to determine IP address from host name

We have an iMac running as an internal dev server with Apache, PHP & Mysql.
It has a number of virtual host files and when accessing on the iMac, these work brilliantly.
We're also running Squid proxy server http://web.me.com/adg/squidman/ so that we can access the web through our connection when we're mobile.
General web browsing and such is fine when accessed via the proxy; however, when we try to access a virtual host URL like ourtestsite.dev we get the following message:
The following error was encountered while trying to retrieve the URL: http://ourtestsite.dev/
Unable to determine IP address from host name "ourtestsite.dev"
The DNS server returned:
Name Error: The domain name does not exist.
This means that the cache was not able to resolve the hostname presented in the URL. Check if the address is correct.
Your cache administrator is webmaster.
Can anyone shed any light on how we make these urls accessible via the proxy?
Thanks
Within the network config on the iMac, I told it not to use the proxy for addresses matching *.dev.
I had this working before with .local addresses but *.local is added as an exception automagically.
So adding the wildcard has solved and we're golden :-)
Just add an entry to the hosts file on your squid server pointing all the virtually hosted domains to the IP address of the iMac. This will bypass DNS lookups for those domains.
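Something along these lines (the 192.168.1.20 address is a placeholder for the iMac's LAN IP; adjust and repeat for each dev domain):
# On the Squid box: map the dev vhost to the iMac, then reload Squid
echo "192.168.1.20  ourtestsite.dev" | sudo tee -a /etc/hosts
sudo squid -k reconfigure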

DNS problem - dig resolves but curl cannot connect to host

I have recently created a Rackspace cloud server instance using CentOS 5.5. I used yum to install the "Web Server" group (it includes Apache, etc.), added www.booztrakr.com as the ServerName in httpd.conf, and made sure iptables allows traffic on port 80. I had registered this domain with Go-Daddy and changed its name servers to the Rackspace name servers on their site. I added "A" and CNAME records to the Rackspace name servers. httpd has been started. When I use curl on the server I can get the Apache landing page. When I dig www.booztrakr.com from a remote machine (over the internet), the answer section returns:
www.booztrakr.com. 300 IN CNAME booztrakr.com.
booztrakr.com. 300 IN A 184.106.216.156
When I try a browser or curl, it can't connect:
curl -G www.booztrakr.com
curl: (7) couldn't connect to host
I know this has got to be pretty basic and config related but I'll be dammed if I can see it. Any help would be appreciated. Thanks.
If dig resolves, this just means the DNS server returns the right values. It would even work if the IP didn't exist.
If an HTTP connection to the server fails, this is a configuration problem.
The server responds to ICMP requests, so it's not a routing problem.
When I use curl on the server I can get the Apache landing page
Your web server is running, but you just can't reach it from outside. This is the problem. What does iptables --list output?
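If port 80 turns out to be filtered, something like this should open it (a sketch for CentOS 5's iptables service; insert the rule before any REJECT rule):
# List the INPUT rules with numbers to see where a REJECT/DROP sits
iptables -L INPUT -n --line-numbers

# Insert an ACCEPT for HTTP near the top, then persist it
iptables -I INPUT 1 -p tcp --dport 80 -j ACCEPT
service iptables save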