Apache2 server sometimes returns no response but logs a 200 status

I'm running an Apache2 web server
Server version: Apache/2.2.22 (Ubuntu)
Server built: Mar 19 2014 21:11:10
Every once in a while it silently fails to return content, yet writes a 200 status code to the log file. For example, here is a normal log entry for a particular file.
50.158.90.90 - - [17/Nov/2014:06:18:16 -0800] "GET /beta/images/supported_browsers_64h.png HTTP/1.1" 200 12028 "Mozilla/5.0 (Windows NT 6.1; WOW64; Trident/7.0; rv:11.0) like Gecko"
But every once in a while an entry like this shows up.
50.158.90.90 - - [17/Nov/2014:07:30:38 -0800] "GET /beta/images/supported_browsers_64h.png HTTP/1.1" 200 0 "Mozilla/5.0 (Windows NT 6.1; WOW64; Trident/7.0; rv:11.0) like Gecko"
Nothing shows up in the error log when this happens. Any idea of what is going on?
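Entries like this are easy to isolate for closer inspection. A minimal sketch, assuming the log layout shown above (status in field 9, byte count in field 10) and a temporary sample file standing in for the real log:

```shell
# Recreate the two log entries from the question in a sample file.
cat > /tmp/access.sample <<'EOF'
50.158.90.90 - - [17/Nov/2014:06:18:16 -0800] "GET /beta/images/supported_browsers_64h.png HTTP/1.1" 200 12028 "Mozilla/5.0"
50.158.90.90 - - [17/Nov/2014:07:30:38 -0800] "GET /beta/images/supported_browsers_64h.png HTTP/1.1" 200 0 "Mozilla/5.0"
EOF

# In this layout, field 9 is the status and field 10 the byte count:
# print only entries that returned 200 but sent zero bytes.
awk '$9 == 200 && $10 == 0' /tmp/access.sample
```

A 200 with zero bytes and nothing in the error log is often a client-side abort (the browser closed the connection before the response body was written) rather than a server fault.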

Related

Apache random IPs in access log trying to execute scripts

I just have a quick question. My Apache access log has random IPs from China, Japan, etc. It looks like they are trying to execute scripts from where they are.
The log looks like this: 171.117.10.221 - - [29/Jan/2018:08:05:04 -0800] "GET /ogPipe.aspx?name=http://www.dongtaiwang.com/ HTTP/1.1" 301 0 "-" "Mozilla/5.0 (Linux; Android 6.0; Nexus 5 Build/MRA58N) AppleWebKit/537.3$
1.202.79.71 - - [29/Jan/2018:08:05:06 -0800] "GET /ogPipe.aspx?name=http://www.epochtimes.com/ HTTP/1.1" 301 0 "-" "Mozilla/5.0 (Linux; Android 6.0; Nexus 5 Build/MRA58N) AppleWebKit/537.36 (K$
113.128.104.239 - - [29/Jan/2018:08:05:11 -0800] "GET /ogPipe.aspx?name=http://www.wujieliulan.com/ HTTP/1.1" 301 0 "-" "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Ge$
117.14.157.148 - - [29/Jan/2018:08:05:17 -0800] "GET /ogPipe.aspx?name=http://www.ntdtv.com/ HTTP/1.1" 301 0 "-" "Mozilla/5.0 (Linux; U; Android 4.3; en-us; SM-N900T Build/JSS15J) AppleWebKit/$
110.177.75.106 - - [29/Jan/2018:08:05:37 -0800] "GET /ogPipe.aspx?name=http://www.dongtaiwang.com/ HTTP/1.1" 404 3847 "-" "Mozilla/5.0 (Windows NT 10.0; WOW64; rv:45.0) Gecko/20100101 Firefox/$
221.11.229.244 - - [29/Jan/2018:08:05:57 -0800] "GET /ogPipe.aspx?name=http://www.epochtimes.com/ HTTP/1.1" 404 3847 "-" "Mozilla/5.0 (Linux; U; Android 4.3; en-us; SM-N900T Build/JSS15J) Appl$
182.101.57.39 - - [29/Jan/2018:08:06:03 -0800] "GET /ogPipe.aspx?name=http://www.epochtimes.com/ HTTP/1.1" 404 3847 "-" "Mozilla/5.0 (Linux; U; Android 4.3; en-us; SM-N900T Build/JSS15J) Apple$
113.128.104.88 - - [29/Jan/2018:08:06:13 -0800] "GET /ogPipe.aspx?name=http://www.epochtimes.com/ HTTP/1.1" 404 3847 "-" "Mozilla/5.0 (Linux; U; Android 4.3; en-us; SM-N900T Build/JSS15J) Appl$
106.114.65.1 - - [29/Jan/2018:08:06:14 -0800] "GET /ogPipe.aspx?name=http://www.wujieliulan.com/ HTTP/1.1" 404 3847 "-" "Mozilla/5.0 (Windows NT 10.0; WOW64; rv:45.0) Gecko/20100101 Firefox/45$
113.128.104.148 - - [29/Jan/2018:08:06:31 -0800] "GET /ogPipe.aspx?name=http://www.ntdtv.com/ HTTP/1.1" 404 3847 "-" "Mozilla/5.0 (iPhone; CPU iPhone OS 9_1 like Mac OS X) AppleWebKit/601.1.46$
114.221.124.84 - - [29/Jan/2018:08:06:45 -0800] "GET /ogPipe.aspx?name=http://www.ntdtv.com/ HTTP/1.1" 404 3847 "-" "Mozilla/5.0 (iPhone; CPU iPhone OS 9_1 like Mac OS X) AppleWebKit/601.1.46 $
172.104.108.109 - - [29/Jan/2018:08:17:50 -0800] "GET / HTTP/1.1" 302 830 "-" "Mozilla/5.0" (None of these are my IPs, that's why I am putting them out there.)
I used an IP lookup site to see where they are located. Does anyone have advice on what I should do?
It's a new TLS prober from the GFW (the Great Firewall of China).
https://example.com/ogPipe.aspx is a tool that bridges (proxies) some news websites blocked in China (you can see the target websites in the log lines).
The GFW intends to detect it.
Here's my Splunk search result for these 3 days.
(Splunk screenshots: remote_ip.png, user_agent.png)
The features of the prober:
Source IP is a one-shot address (each probe comes from a different IP)
User-Agent is spoofed as Chrome/Safari/Firefox
TLS protocol is TLSv1.2
Short answer: Ignore them.
Long answer: There are plenty of vulnerabilities in various web servers and application frameworks that attackers want to abuse. The originating IPs may not belong to the attackers themselves but to victims of malware or trojan horses that are remotely controlled. Those machines are used to probe whether your server offers a more promising reward, e.g. access to your database or passwords. If you were actually hosting a .NET Framework application with an "ogPipe.aspx" file, you would need to examine every line of its code for security loopholes. But as your server log shows, it responded with HTTP 404, meaning you don't serve ogPipe.aspx, so you are safe. As general security advice, watch for vulnerability announcements from your software vendors (e.g. Apache, Microsoft) and apply security patches when they become available.
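Ignoring them is sound advice; still, a quick tally of the requested paths shows what the bots are probing. A sketch over shortened copies of the lines above (the sample file path is hypothetical):

```shell
# Shortened copies of the probe entries from the question.
cat > /tmp/probe.sample <<'EOF'
171.117.10.221 - - [29/Jan/2018:08:05:04 -0800] "GET /ogPipe.aspx?name=http://www.dongtaiwang.com/ HTTP/1.1" 301 0 "-" "Mozilla/5.0"
1.202.79.71 - - [29/Jan/2018:08:05:06 -0800] "GET /ogPipe.aspx?name=http://www.epochtimes.com/ HTTP/1.1" 301 0 "-" "Mozilla/5.0"
172.104.108.109 - - [29/Jan/2018:08:17:50 -0800] "GET / HTTP/1.1" 302 830 "-" "Mozilla/5.0"
EOF

# Field 7 is the request path; strip the query string and count
# how often each endpoint is hit, most-probed first.
awk '{split($7, p, "?"); print p[1]}' /tmp/probe.sample | sort | uniq -c | sort -rn
```

Against a real log this makes it obvious when a single nonexistent endpoint (here /ogPipe.aspx) accounts for the bulk of the noise.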

log format for goaccess log analysis

I installed goaccess and am trying to parse/analyse a log file, but I'm running into issues with the log format. Does anyone know the format to use for the kind of log below? [updated the log sample]
::1 - - [24/Jun/2013:17:10:39 -0500] "GET /favicon.ico HTTP/1.1" 404 286 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/27.0.1453.116 Safari/537.36" 0 -
It worked after using --log-format=COMBINED.
Credit for the answer goes to Pete Darrow.
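For reference, the same preset can be stored in goaccess's configuration file instead of being passed on the command line each time. A sketch (the config file location varies by install, e.g. /etc/goaccess/goaccess.conf or ~/.goaccessrc):

```
# goaccess configuration: use the built-in combined-log preset,
# which matches Apache/Nginx "combined" access logs like the sample above.
log-format COMBINED
```

Equivalently on the command line: goaccess /var/log/apache2/access.log --log-format=COMBINED (substitute your own log path).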

Junk in apache access_log?

I am hosting a small test website on EC2, and there should be only 2-3 test users with valid logins to my server. However, I am seeing a lot of junk entries in my Apache access_log (/var/log/httpd/access_log):
198.2.208.231 - - [13/Dec/2013:21:11:07 +0000] "GET http://ib.adnxs.com/ttj?id=1995383&position=above HTTP/1.0" 302 - "http://www.minbusiness.net/?p=611" "Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US) AppleWebKit/533.18.1 (KHTML, like Gecko) Version/5.0 Safari/533.16"
173.234.32.69 - - [13/Dec/2013:21:11:07 +0000] "GET http://ads.creafi-online-media.com/st?ad_type=iframe&ad_size=728x90,468x60&section=5172215&pub_url=${PUB_URL} HTTP/1.0" 302 - "http://lookfashionstyle.com/index.php?option=com_content&view=category&layout=blog&id=42&Itemid=98&limitstart=24" "Mozilla/4.0 (compatible; MSIE 6.0; WINDOWS; .NET CLR 1.1.4322)"
198.136.31.98 - - [13/Dec/2013:21:11:07 +0000] "GET http://ad.tagjunction.com/st?ad_type=ad&ad_size=468x60&section=4914662&pub_url=${PUB_URL} HTTP/1.0" 302 - "http://www.benzec.com" "Mozilla/5.0 (Windows; U; Windows NT 5.1; de; rv:1.9.0.13) Gecko/2009073022 Firefox/3.0.13"
....
Not exactly sure what's going on... Am I being attacked?
thanks!
One possibility is that your server is configured as an open proxy and some ad scammers are proxying traffic through it to hide their real origin.
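If an open proxy is the cause, forward proxying can be switched off explicitly in the Apache configuration. A sketch (note that ProxyRequests is Off by default, so these entries may also just be bots blindly sending absolute-URL requests, with the 302s coming from a rewrite rule rather than an actual proxy):

```
# Main Apache config: ensure the server never acts as a forward proxy.
# (Off is the default; setting it explicitly guards against stray includes.)
ProxyRequests Off
```

You can verify from outside by requesting an absolute URL (e.g. GET http://example.org/ against your server's IP) and checking that the response is your own content rather than the remote site's.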
There are a lot of bots around the web attempting all kinds of exploits.
I spun up my web server just yesterday and have already received lots of spam/exploit attempts, like the ones in the thread I've just created (and quite a few others; Cloudflare is helping, but it doesn't catch everything, at least not in the free version, which is what I am using for some protection):
Exploit Attempts in nginx access log, Some logs without IP, what to do about it?

Grep specific domain and all subdomains from access.log

I'm trying to grep lines for a specific domain from my Apache2 access.log. My access.log contains all my virtual hosts and different domains.
cat /var/log/access.log:
www.something-else-domain.si:80 193.77.xxx. xxx - - [06/Nov/2013:12:21:45 +0100] "GET /path/to/dir/image.jpg HTTP/1.1" 304 - "www.something-else-domain.si/index.php" "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:25.0) Gecko/20100101 Firefox/25.0"
www.domain.si:80 193.77.xxx. xxx - - [06/Nov/2013:12:21:45 +0100] "GET /path/to/dir/image. jpg HTTP/1.1" 304 - "www.domain.si/index.php" "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:25.0) Gecko/20100101 Firefox/25.0"
domain.si:80 193.77.xxx. xxx - - [06/Nov/2013:12:21:45 +0100] "GET /path/to/dir/image. jpg HTTP/1.1" 304 - "www.domain.si/index.php" "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:25.0) Gecko/20100101 Firefox/25.0"
I want to grep only domain.si, www.domain.si and whatever.domain.si, but not something-else-domain.si. How can I do that? Thanks for the help.
egrep '^([^ ]*\.)?domain\.si' /var/log/access.log
Taking this apart:
^ is the beginning of the line.
(xxx)? is "match xxx or nothing"; in this case, match either:
nothing at all, which is the case of a naked domain name (domain.si)
[^ ]*\., any string of characters that are not spaces, followed by a dot. This matches the optional www. or whatever. part.
domain\.si simply matches the domain.si part.
The anchoring with ^, along with the "no spaces" bit, ensures that you only match things at the beginning of the line (not requests like GET /domain.si).
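A self-contained demonstration of the pattern, using shortened stand-ins for the sample lines:

```shell
# Four vhost-prefixed entries; only the domain.si variants should match.
cat > /tmp/vhost.sample <<'EOF'
www.something-else-domain.si:80 1.2.3.4 - - "GET /a HTTP/1.1" 304 -
www.domain.si:80 1.2.3.4 - - "GET /a HTTP/1.1" 304 -
domain.si:80 1.2.3.4 - - "GET /a HTTP/1.1" 304 -
whatever.domain.si:80 1.2.3.4 - - "GET /a HTTP/1.1" 304 -
EOF

# The optional prefix must end in a literal dot right before domain.si;
# in something-else-domain.si a hyphen precedes "domain", so it cannot match.
grep -E '^([^ ]*\.)?domain\.si' /tmp/vhost.sample
```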
A GNU awk alternative (domanin in the original was a typo; the pattern below requires a dot before domain, which excludes something-else-domain):
awk '/\.domain$/ {print $NF RS}' RS=".si" /var/log/access.log
www.domain.si
"www.domain.si
"www.domain.si
Because RS=".si" splits records across the whole stream, this also matches the referrer field (hence the stray quotes) and misses the bare domain.si host, so the egrep solution above is more robust.
There is also a problem in the example log lines: spaces are not allowed in URLs (image. jpg), so those look like copy/paste artifacts.

Rails pages not showing even though passenger is running

I am trying to deploy my first proper Rails Project.
I have a Ubuntu 12.04 virtual server with Nginx, Passenger and PostgreSQL installed.
I have deployed the code to the server using Capistrano.
Everything should be working but for some reason when trying to access the pages through the browser, it just tries to connect for about 10 seconds and then times out.
However, all static pages, such as address/500.html and address/robots.txt load correctly.
I have checked the Nginx error logs at /opt/nginx/logs but they are empty. The Nginx access log on the other hand only logs some of my attempts.
Nginx access log:
130.233.194.3 - - [18/Sep/2012:11:13:04 +0300] "GET /somestuff HTTP/1.1" 301 5 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:15.0) Gecko/20100101 Firefox/15.0.1"
130.233.194.3 - - [18/Sep/2012:11:37:59 +0300] "GET /500.html HTTP/1.1" 200 643 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:15.0) Gecko/20100101 Firefox/15.0.1"
130.233.194.3 - - [18/Sep/2012:11:45:59 +0300] "-" 400 0 "-" "-"
130.233.194.3 - - [18/Sep/2012:11:57:32 +0300] "GET /500.html HTTP/1.1" 200 643 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:15.0) Gecko/20100101 Firefox/15.0.1"
130.233.194.3 - - [18/Sep/2012:11:57:47 +0300] "GET /favicon.ico HTTP/1.1" 200 0 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:15.0) Gecko/20100101 Firefox/15.0.1"
130.233.194.3 - - [18/Sep/2012:11:58:19 +0300] "GET /robots.txt HTTP/1.1" 200 204 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:15.0) Gecko/20100101 Firefox/15.0.1"
(For instance, right after the /somestuff, I tried /nopage, but it isn't logged)
My Rails app's production log only shows the database migrations. I can also open the Rails console in production mode and write things into the database, so I don't think that is the problem.
If I run app.get("/") (or request any other page) in the Rails console, it only returns 301.
(Don't know if that is relevant, but in my development environment it returns 200.)
Here are some other files that might be relevant:
nginx.conf:
#user [User used in deploying];
server {
    listen 80;
    server_name [Name_of_the_server];
    root /home/[User used in deploying]/srv/nsbg/current/public;
    passenger_enabled on;
}
(That's what I have added, otherwise it's default)
Capistrano deploy file:
require 'bundler/capistrano'
set :user, '[User used in deploying]'
set :domain, '[server name]'
set :applicationdir, "/home/[User used in deploying]/srv/nsbg"
set :scm, 'git'
set :repository, "[Github repo]"
set :git_enable_submodules, 1 # if you have vendored rails
set :branch, 'master'
set :git_shallow_clone, 1
set :scm_verbose, true
# roles (servers)
role :web, domain
role :app, domain
role :db, domain, :primary => true
# deploy config
set :deploy_to, applicationdir
set :deploy_via, :export
# additional settings
default_run_options[:pty] = true # Forgo errors when deploying from windows
#ssh_options[:keys] = %w(/home/user/.ssh/id_rsa) # If you are using ssh keys
set :chmod755, "app config db lib public vendor script script/* public/disp*"
set :use_sudo, false
# Passenger
namespace :deploy do
  task :start do ; end
  task :stop do ; end
  task :restart, :roles => :app, :except => { :no_release => true } do
    run "#{try_sudo} touch #{File.join(current_path,'tmp','restart.txt')}"
  end
end
Using command "sudo passenger-status":
----------- General information -----------
max = 6
count = 1
active = 0
inactive = 1
Waiting on global queue: 0
----------- Application groups -----------
/home/[User used in deploying]/srv/nsbg/current:
App root: /home/[User used in deploying]/srv/nsbg/current
* PID: 8038 Sessions: 0 Processed: 1 Uptime: 1h 4m 29s
Like I said, static pages get served, but any other page (including invalid routes) simply hangs and no error is written anywhere.
Please help me; I am very inexperienced at deploying and completely lost.