I have 3 applications running on wampserver hosted on our intranet.
I would like to restrict access to the applications to a certain IP range, so I edited the .htaccess file for each application as below. But access is not being blocked.
Order Deny,Allow
Allow from 10.212.4.
Deny from all
After checking the logs I found that the client IP address being logged was different, since all our machines use proxy settings by default.
Could anyone help me figure out how to overcome this?
You can use mod_rewrite to block people based on their X-Forwarded-For header. I would not consider this secure though, because that header is easy to forge.
See the discussion here:
http://www.110mb.com/forum/empty-t26129.0.html
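For illustration, a minimal .htaccess sketch of that idea could look like the one below (it assumes mod_rewrite is available and that 10.212.4.x is the range you want to let through; adjust the pattern to your own network):

RewriteEngine On
# Reject any request whose X-Forwarded-For header does not start with 10.212.4.
RewriteCond %{HTTP:X-Forwarded-For} !^10\.212\.4\.
RewriteRule ^ - [F]

As said above, the header is supplied by the client side of the proxy chain, so treat this as a convenience filter rather than a security boundary.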
My company has a LAMP server, and I am not an expert at web hosting but I manage basic tasks.
My server currently hosts about twelve different domains. Each domain has a .conf file in the sites-enabled directory, and they work fine. Let's say we have example1.com, example2.com, and example3.com, just to hopefully help explain this question.
Recently, a person I work with registered a bunch of new domains. With the domain registrar, they pointed the domains to our IP address. I believe this is called "parking" a domain. I have not set up a .conf file or enabled any of these new domains on our server yet. Let's say they are newsite1.com, newsite2.com, etc...
What's puzzling to me is that if one types one of the new domains into a browser, one of our existing domains shows up. Let's say it's example1.com. So, if you go to a browser and type in newsite1.com or newsite2.com, you are taken to example1.com. Also, the address bar at the top of the browser will display example1.com.
This is not the desired behaviour. For one thing, we did not choose, as far as I know, for example1.com to be the default, and it's not necessarily the website we would want to be the default. In any case, I don't know why the system is going to example1.com as opposed to example2.com or any of our other sites.
The desired behaviour would be for there to just be a general error, "this domain does not exist" or something like that. If there has to be a default website, we'd like to be able to choose it.
I've seen questions on Stack Overflow that are similar, but they all presume one wants to set a default. When I look at the configuration files they reference, for example /etc/httpd/conf/httpd.conf, they are empty, so in my case there is nothing to unset.
How do I stop browsers from being redirected to the website that they are currently being directed to? How can I set it so that Apache just returns a "site not found" error instead of serving up a website?
The easiest way to fix this is to name your .conf files starting with a number.
If you look at the default Apache configs, you'll notice a file called "000-default.conf". Apache loads these files in alphabetical order, so numeric prefixes sort first; just name your default virtual host's .conf file 000-whatever.conf.
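If you want that first-loaded host to be a deliberate catch-all instead of one of the real sites, a sketch like the following would do it (the ServerName and port are placeholders; Redirect 404 simply makes every path on this host answer with a 404):

# 000-default.conf - loaded first, so it catches any Host header no other vhost matches
<VirtualHost *:80>
    ServerName default.invalid
    # Every request that falls through to this host gets a 404
    Redirect 404 /
</VirtualHost>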
I suppose you're using name-based virtual hosts and the <VirtualHost> directive, and this is what the docs have to say:
If no matching name-based virtual host is found, then the first listed virtual host that matched the IP address will be used. As a consequence, the first listed virtual host for a given IP address and port combination is the default virtual host for that IP and port combination.
So when you say:
I've seen questions on Stack Overflow that are similar, but they all
presume one wants to set a default.
... all I can add is that that's the way Apache works. I don't think it's inherently wrong to have a default host that serves a "this domain does not exist" page. I always do so on my Windows development box, typically by commenting out the default hosts in the conf/extra/httpd-vhosts.conf file and adding my own default host there.
If you ask for my opinion, it's rather questionable that Apache basically serves an arbitrary site when there's no match, thus making this customisation mandatory—and I've seen lots of live sites that don't do it.
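For what it's worth, the httpd-vhosts.conf approach described above usually ends up looking roughly like this (the paths and names are placeholders for whatever your own setup uses):

# conf/extra/httpd-vhosts.conf
# The sample hosts Apache ships with are commented out; this catch-all is
# listed first, so it becomes the default for unmatched Host headers.
<VirtualHost *:80>
    ServerName unknown.localhost
    # Points at a folder containing a single "this domain does not exist" page
    DocumentRoot "C:/Apache24/htdocs/no-such-site"
</VirtualHost>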
I have a Java-based application running in Tomcat. It is an online app; requests go to Apache first, which then forwards them to Tomcat.
Today I was not able to log into my application and I noticed warnings in the catalina.out file. They said "An attempt was made to authenticate the locked user "root"" and "An attempt was made to authenticate the locked user "manager"".
In my localhost_access_log.2015-07-07.txt I found the below IP addresses trying to access the system.
83.110.99.198
117.21.173.36
I need to block these 2 IPs from accessing my system. The first IP is well known and blacklisted according to the anti-hacker-alliance. How can I do this?
FYI, I am using Apache 2, so the main configuration file is apache2.conf.
(Please don't remove the IP addresses I listed above, as I need other developers to be aware of the threat as well.)
If you're using VPC:
The best way to block traffic from particular IPs to your resources is using NACLs (Network Access Control Lists).
Add DENY rules for all protocols on ingress for these IPs. This is better than doing it on the server itself because traffic from these IPs will never even get as far as your instances; it will be blocked at the VPC level.
NACLs are on the subnet level, so you'll need to identify the subnet your instance is in and then find the correct NACL. You can do all of this using the VPC Dashboard on the AWS console.
This section of the documentation will help you:
http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_ACLs.html
Note that the two rules blocking these IPs need rule numbers lower than the default rule number (100). Use 50 and 51, for example.
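If you prefer the command line over the console, the equivalent AWS CLI call would look roughly like the following (the NACL ID is a placeholder, and you should double-check the option names against the CLI documentation for your version):

# Deny all inbound traffic from 83.110.99.198 (rule 50); repeat with rule 51 for 117.21.173.36
aws ec2 create-network-acl-entry --network-acl-id acl-0123456789abcdef0 \
    --ingress --rule-number 50 --protocol=-1 --rule-action deny \
    --cidr-block 83.110.99.198/32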
You can use an .htaccess file:
Order Deny,Allow
Deny from 83.110.99.198
Deny from 117.21.173.36
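If the server is running Apache 2.4, the Order/Deny syntax above only works through mod_access_compat; the native mod_authz_host equivalent would be something like:

<RequireAll>
    Require all granted
    Require not ip 83.110.99.198
    Require not ip 117.21.173.36
</RequireAll>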
It's probably better to add this as a firewall rule, though. Are you using any firewall service now?
I recently installed Bitnami trac and now I want to access it using my domain name.
I've made the necessary changes to the Apache config file (httpd.conf) found in the C:\Bitnami\trac-1.0.5-0\apache2\conf\ directory. I've installed Trac on port 8080.
So this is the only change I made.
ServerName trac.mydomain.com:8080
I had Trac on another server before, and that time this was also the only change I needed to make to get my domain working with it. But this time it doesn't work.
Can somebody please tell me what I've missed? Do I have to update anything else?
According to your comment, you can access the server fine by using the IP address, but can't access it at all when using the domain name. This sounds like it might not be related to Trac at all. Here are a couple of things to try:
Run "nslookup your-domain-name.com". You should get a result that says "Addresses:" and lists your server's IP address. If you don't (or if you get a "Non-existent domain" error), then your DNS server isn't mapping your domain name to your IP address correctly.
Look through Apache's various log files on your server and see if there is any evidence of your request ever reaching your server. Whenever I do this, I first change LogLevel to debug in the Apache config files so that I get as much output as possible (restart Apache after changing the config file). If a 'debug'-level log doesn't even show that Apache saw the request, then something between your server and your local system is causing problems (a firewall perhaps). If the Apache logs do show that the request made it through, then the problem is likely an Apache configuration problem and the log output should provide hints as to how to continue.
Try connecting via VPN and then accessing your server by domain name (not IP address). If you can access the server by IP but not by name, then the server may not know its own domain name.
Bitnami developer here. By default, Bitnami installations accept requests from any IP, so you don't have to set the ServerName.
However, if you want your page to be accessible at trac.yourdomain.com, you should consider using Apache virtual hosts. Could you try following this documentation page?
https://wiki.bitnami.com/Components/Apache#How_to_create_a_Virtual_Host.3f
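As a rough sketch of what such a virtual host could look like (the hostname, port, and the Include path are placeholders; point the Include at whichever file currently holds your Trac directives):

<VirtualHost *:8080>
    ServerName trac.mydomain.com
    # Reuse the existing Bitnami-generated Trac configuration
    Include "C:/Bitnami/trac-1.0.5-0/apache2/conf/trac.conf"
</VirtualHost>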
I've managed to set up my WAMP configuration so I can show my clients their websites while they're in development, but I want to secure the root directory so only I can access it.
As it stands now, anyone can simply go to the domain name and see all the other projects I'm working on.
For example, I want to be able to give my clients access to: http://example.com/customer1 but I don't want them to see http://example.com.
I know I have to configure something in my httpd.conf file but not really sure what to do.
Hope I explained this properly.
Order Deny,Allow
Deny from all
Allow from 127.0.0.1
It's been some time since I used Apache, but this should do it.
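If you only want to lock down the web root while still letting clients reach their own project folders, one way (assuming the default WAMP layout under c:/wamp/www; adjust the paths to your setup) is to scope the rules in httpd.conf like this:

# Only localhost can browse the web root
<Directory "c:/wamp/www">
    Order Deny,Allow
    Deny from all
    Allow from 127.0.0.1
</Directory>

# Re-open each client project so it stays reachable from outside
<Directory "c:/wamp/www/customer1">
    Order Allow,Deny
    Allow from all
</Directory>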
The setup is:
www.domainA.com
www.domainB.com
both actually hosted on one web server (Apache)
123.123.123.123/domainA
123.123.123.123/domainB
I have set up a hidden forward from the domains to the web server directories, which works fine but produces duplicate content (since the content is also reachable by addressing the web server directly). I tried setting up 301 redirects to the domains for every request that targets the IP address directly (using mod_rewrite), but found that this results in a redirect loop. Apparently the server does not recognize whether the domain was originally requested.
If anybody can give me a hint on how this is supposed to be done, I'd be glad to hear.
You can set up name-based virtual hosting on the web server so that it pays attention to the hostname that was requested. This is a fairly common practice and should solve your problem. You can do away with the separate subdirectories, since each virtual host gets its own document root.
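A minimal sketch of what that could look like (the DocumentRoot paths are placeholders for wherever the two sites actually live on the server):

<VirtualHost *:80>
    ServerName www.domainA.com
    DocumentRoot "/var/www/domainA"
</VirtualHost>

<VirtualHost *:80>
    ServerName www.domainB.com
    DocumentRoot "/var/www/domainB"
</VirtualHost>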
So are you saying that you have pages indexed in Google that reference your IP address and a directory rather than the domain name?
Also, I'm not sure why doing a redirect from the IP to the domain name would cause a redirect loop. If the redirect is based on the host header, it should work fine.
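For example, a Host-header-based rule along these lines (a sketch, assuming mod_rewrite is enabled and the server's address really is 123.123.123.123) only fires when the site is addressed by IP, so it should not loop:

RewriteEngine On
# Redirect to the canonical domain only when the request used the raw IP address
RewriteCond %{HTTP_HOST} ^123\.123\.123\.123$
RewriteRule ^/?domainA/(.*)$ http://www.domainA.com/$1 [R=301,L]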