We are trying to incorporate ClickTale on our site, which is hosted on Acquia, but there seems to be a problem, and we would like to hear from people here who have come across a similar situation.
We already have HTTPS enabled.
Because of HTTPS, we had to change our DNS setting from an A record to a CNAME record.
Now, based on this ClickTale wiki page - http://wiki.clicktale.com/Article/Help_talk:Drupal_integration_module_v1... - whenever we have a reverse proxy, we need to make sure that the proxy's IP address is allowed, so that ClickTale's servers are able to identify the IP address of the end user.
Because we have a CNAME record, we have a canonical URL, but we don't have an IP address.
How do we deal with this situation? If we do not do anything, will the Acquia servers ensure that the right headers are in place so that ClickTale's servers can read the end user's IP?
As per the Acquia documentation, they use the X-Forwarded-For header to forward the client IP to you; see https://docs.acquia.com/articles/logging-client-ip-apache-behind-reverse-proxy
So, based on the doc you quote yourself, I would enable the following:
If your proxy includes the original IP address in the HTTP headers, you could add a module setting in your config.php file to instruct the module to use that header. If your proxy uses a header field called "X-Forwarded-For" (this is common), then add $config['IPAddressHeaderFieldName'] = "X-Forwarded-For"; to your config.php file to utilize this field.
My company has a LAMP server, and I am not an expert at web hosting but I manage basic tasks.
My server currently hosts about twelve different domains. Each domain has a .conf file in the sites-enabled directory, and they work fine. Let's say we have example1.com, example2.com, and example3.com, just to hopefully help explain this question.
Recently, a person I work with registered a bunch of new domains. With the domain registrar, they pointed the domains to our IP address. I believe this is called "parking" a domain. I have not set up a .conf file or enabled any of these new domains on our server yet. Let's say they are newsite1.com, newsite2.com, etc...
What's puzzling to me is that if one types one of the new domains into a browser, one of our existing domains shows up. Let's say it's example1.com. So, if you go to a browser and type in newsite1.com or newsite2.com, you are taken to example1.com. Also, the address bar at the top of the browser displays it as example1.com.
This is not the desired behaviour. For one thing, we did not choose, as far as I know, for example1.com to be the default, and it's not necessarily the website we would want to be the default. In any case, I don't know why the system is going to example1.com as opposed to example2.com or any of our other sites.
The desired behaviour would be for there to just be a general error, "this domain does not exist" or something like that. If there has to be a default website, we'd like to be able to choose it.
I've seen questions on Stack Overflow that are similar, but they all presume one wants to set a default. When I look at the configuration files they reference, for example /etc/httpd/conf/httpd.conf, they are empty, so in my case there is nothing to unset.
How do I stop browsers from being redirected to the website that they are currently being directed to? How can I set it so that Apache just returns a "site not found" error instead of serving up a website?
The easiest way to fix this is to name your .conf files starting with a number.
If you look at the default Apache configs, you'll notice a file called "000-default.conf". Apache loads the files in alphabetical order, so numeric prefixes sort first - just make your default virtual host's .conf file be 000-whatever.conf.
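For example, a minimal catch-all default could look roughly like this (file name, path and ServerName are illustrative) so that unknown hostnames get a 404 instead of one of your real sites:

    # /etc/apache2/sites-enabled/000-default.conf (illustrative path)
    # Sorted first, so it catches any Host header no other vhost matches.
    <VirtualHost *:80>
        ServerName default.invalid
        # Answer everything with a 404 so unknown domains never see a real site.
        RedirectMatch 404 .*
    </VirtualHost>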
I suppose you're using name-based virtual hosts and the <VirtualHost> directive, and this is what the docs have to say:
If no matching name-based virtual host is found, then the first listed virtual host that matched the IP address will be used. As a consequence, the first listed virtual host for a given IP address and port combination is the default virtual host for that IP and port combination.
So when you say:
I've seen questions on Stack Overflow that are similar, but they all presume one wants to set a default.
... all I can add is that that's the way Apache works. I don't think it's inherently wrong to have a default host that serves a "this domain does not exist" page. I always do so on my Windows development box, typically by commenting out the default hosts in the conf/extra/httpd-vhosts.conf file and adding my own default host there.
If you ask for my opinion, it's rather questionable that Apache basically serves an arbitrary site when there's no match, thus making this customisation mandatory; I've seen lots of live sites that don't do it.
I have set up an Apache Web Server 2.4 to act as a proxy for Apache Tomcat 7, communicating via the AJP protocol (mod_proxy_ajp on the Apache side and an AJP connector on the Tomcat side). Everything works great for basic functionality.
Now, I am looking to set some specific AJP attributes, but can't quite get it to work...
Looking at the mod_proxy_ajp page (http://httpd.apache.org/docs/current/mod/mod_proxy_ajp.html), under the Request Packet Structure section, I see a listing of attributes. These attributes include the likes of remote_user, and ssl_cert (code values 0x03 and 0x07, respectively). There is also an "everything else" attribute called req_attribute with code value 0x0A that can be used to set any arbitrary attribute in an AJP request.
Further, on the same page, under the Environment Variables section, it states the following:
Environment variables whose names have the prefix AJP_ are forwarded to the origin server as AJP request attributes (with the AJP_ prefix removed from the name of the key).
This seems straightforward enough, and indeed I am easily able to set an arbitrary AJP attribute such as "doug-attribute" by setting an Apache environment variable called "AJP_doug-attribute" and assigning a relevant value. After doing so, I can analyze the traffic using Wireshark and see the "doug-attribute" field show up in the dissected AJP block, prefixed with the hex value 0x0A (the "req_attribute" type listed above). So far, so good.
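For context, the kind of configuration I'm describing is along these lines (the path, host, port and value are placeholders):

    # mod_proxy and mod_proxy_ajp must be enabled; host, port and path are placeholders.
    ProxyPass /myapp ajp://localhost:8009/myapp

    # Environment variables prefixed with AJP_ are forwarded as AJP request
    # attributes with the prefix stripped, so this arrives as "doug-attribute".
    SetEnv AJP_doug-attribute "test-value"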
Now I want to try to set the ssl_cert attribute. In the same fashion, I set an environment variable called "AJP_ssl_cert". When I do, it does show up in Wireshark, but with prefix code "0x0A". Further, my Java application, which reads the "javax.servlet.request.X509Certificate" request attribute, does not find the certificate.
However, I also notice some other attributes in the Wireshark capture that are listed on the website, like ssl_cipher and ssl_key_size. But in the capture, they show up as "SSL-Cipher" and "SSL-Key-Size" (and have the appropriate "0x08" and "0x0B" prefix codes). So, I try setting the cert attribute again, this time as "SSL-Cert", but I get the same results as before.
To compare, I altered the Apache configuration to require client certificates, and then provided one in the browser when visiting the associated web page. At this point, I look at the Wireshark capture, and sure enough, there is now an attribute named "SSL-Cert", with code "0x07", and my web application in Tomcat is successfully able to find the certificate.
Is there any way that I can manually set the attributes listed on the mod_proxy_ajp page, or does the module handle them differently from other arbitrary request attributes (like "doug-attribute")? I feel like there must be something I am missing here.
As some background, the reason I am trying to do this is that I have a configuration with multiple Apache web servers proxying to one another, and at the end an Apache web server proxying to a Tomcat instance via AJP. All the Apache web servers use SSL and require client certificates. With just one Apache server, Tomcat can receive the user's certificate just fine without any special configuration on my part. However, with multiple servers, it ultimately receives the server certificate of the previous Apache proxy (set on that machine using the SSLProxyMachineCertificateFile directive).
What I am hoping to do is place the original user's certificate into the headers of the intermediate proxied requests, and then manually set that certificate in the AJP attributes at the back end so that the web application in Tomcat can read the certificate and use it to perform its authorization stuff (this part is already set in stone, otherwise I would just add the certificate as a header and make the Java app read the header).
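For what it's worth, the first half of that plan (copying the user's certificate into a header on the front-end proxy) would look roughly like this; the header name is an arbitrary choice for this sketch, and it assumes mod_headers and mod_ssl are loaded:

    # On the front-end Apache that terminates the user's TLS connection:
    # export the certificate data into the request environment.
    SSLOptions +ExportCertData

    # Copy the PEM client certificate into a custom request header for the next
    # hop. The header name is arbitrary, and the PEM's line breaks usually need
    # to be folded or re-encoded before other hops will accept the header.
    RequestHeader set X-Client-Cert "%{SSL_CLIENT_CERT}s"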
EDIT: of course, if there is an easier way to accomplish passing the user's certificate through the proxy chain, I'd be more than happy to hear it! :)
If a user visits www.example.com/forum could this point to another server/IP without changing the actual URL the user sees?
This means that every request after that must be passed to the other server along with all of the POST/GET data, so that www.example.com/forum/index.php?some=thing must work as if it were myothersite.example.com/forum/index.php?some=thing
edit: all this is because the registrar doesn't allow us to manage DNS records or even create subdomains.
Nope, this is not possible - you'd have to make Apache act as a proxy for this to work, and that brings other problems:
All traffic would have to run through both sites
On shared hosting, you probably won't be allowed to do this
REMOTE_ADDR, cookies and other things would no longer work as expected
The best you can do is create a set of RewriteRules that will do header redirects to the target domain. However, in that scenario, the target domain/IP will always be visible to the end user.
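A rough sketch of that redirect approach, assuming mod_rewrite is enabled and using your example hostnames:

    # A visible redirect: the browser ends up on the target host's own URL.
    RewriteEngine On
    # The query string (?some=thing) is carried over to the target automatically.
    RewriteRule ^/forum/(.*)$ http://myothersite.example.com/forum/$1 [R=301,L]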
The setup is:
www.domainA.com
www.domainB.com
both actually hosted on one web server (Apache)
123.123.123.123/domainA
123.123.123.123/domainB
I have set up a hidden forward from the domains to the web server directories, which works fine but produces duplicate content (since the content is also available by addressing the web server directly). I tried setting up 301 redirects to the domains for every request that targets the IP address directly (using mod_rewrite), but found that this results in a redirect loop. Obviously the server does not recognize whether the domain name was requested originally.
If anybody can give me a hint on how this is supposed to be done, I'd be glad to hear it.
You can set up name-based virtual hosting on the web server so that it pays attention to the hostname that was requested. This is a fairly common practice and should solve your problem. You can do away with the separate subdirectories, since each virtual host has its own document root.
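A minimal sketch of what that could look like (the DocumentRoot paths are just examples; on Apache 2.2 you would also need a NameVirtualHost *:80 line):

    # Name-based virtual hosts: Apache serves the block whose ServerName
    # matches the Host header sent by the browser.
    <VirtualHost *:80>
        ServerName www.domainA.com
        DocumentRoot /var/www/domainA
    </VirtualHost>

    <VirtualHost *:80>
        ServerName www.domainB.com
        DocumentRoot /var/www/domainB
    </VirtualHost>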
So, are you saying that you have pages indexed in Google that reference your IP address and a directory rather than the domain name?
Also, I'm not sure why doing a redirect from the IP to the domain name would cause a redirect loop. If the redirect is based on the host header, it should work fine.
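For instance, something along these lines only redirects requests that were addressed to the bare IP, which should avoid the loop (untested sketch, using the IP and directory names from your question):

    RewriteEngine On
    # Only redirect when the request was addressed to the raw IP, so requests
    # that already use the domain name are left untouched (no loop).
    RewriteCond %{HTTP_HOST} ^123\.123\.123\.123$
    RewriteRule ^/domainA/(.*)$ http://www.domainA.com/$1 [R=301,L]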
We noticed that a hacker created a domain and configured DNS to point it to our server's IP address.
We are using apache2.x on Ubuntu.
There is a "default" file in apache's /etc/apache2/sites-available directory and it looks like the the hacker's domain is using "default" apache configuration file to display our web content in their domain.
How can we prevent this?
Can some one post a "default" apache configuration file as an example?
Unknown domains that come into Apache over a given IP and port will be directed to the first virtual host, hence the 000-default file. Your best bet is to make the 000-default host return a 400 or 500 error (or some explicit message saying the domain doesn't belong) and use explicit virtual hosts for each of your sites.
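Something along these lines for the 000-default host, for example (ServerName and paths are placeholders):

    # Catch-all for requests whose Host header matches none of your sites.
    <VirtualHost *:80>
        ServerName catchall.invalid
        DocumentRoot /var/www/catchall
        <Directory /var/www/catchall>
            # Refuse to serve anything for unknown or spoofed domains
            # (Apache 2.4 syntax; on 2.2 use "Order deny,allow" and "Deny from all").
            Require all denied
        </Directory>
    </VirtualHost>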
+1 to Jeremy's answer: make the default (first) virtual host for each IP address you're listening on return something useless, like a 404 or a page saying nothing but “this is a virtual server”.
Allowing your web server to serve a real web site on a non-matching ‘Host’-name (including a raw IP address) opens you up to two particular attacks:
DNS rebinding attacks, leading to cross-site scripting into your real web site.
This affects sites with a user-access element (e.g. logging in, cookies, supposedly private intranet apps).
‘Search-hijacking’. This affects all sites (even completely static ones). This may be what is happening to you. By pointing their own domain name at your server, they can make search engines see both the real domain name and their fake one as duplicates for the same site. By using SEO techniques they can then try to make their fake address seem like the more popular, at which point the search engines see that as the canonical address for the site, and will start linking to it exclusively instead of yours.
Most web servers are configured by default to serve a web site to all-comers, regardless of what hostname or IP address they're accessing it through. This is a dangerous mistake. For all real live sites, configure it to require that the ‘Host’ header matches your real canonical hostname.
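As a rough illustration of that last point, inside the real site's own virtual host you can additionally bounce any non-matching Host value back to the canonical name (the hostname here is a placeholder):

    # Belt and braces inside the real site's <VirtualHost>: if a request somehow
    # arrives with a different Host header, redirect it to the canonical name
    # instead of serving content under the wrong name.
    RewriteEngine On
    RewriteCond %{HTTP_HOST} !^www\.example\.com$ [NC]
    RewriteRule ^ http://www.example.com%{REQUEST_URI} [R=301,L]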