I have an Apache web server that several sites connect to.
From most sites it is accessible and serves content properly (the sites are remote and connected via Cisco VPNs), but at one site the server serves an incomplete page when the login page of the application we are running is requested.
It does not matter what this application is, I guess, since it works fine at 10 other sites, just not at this one.
I am getting exactly 907 bytes of the page (the last 907 bytes out of 4000).
Wireshark reports that the server's response is not the first packet and that a packet is missing from before the capture started. Needless to say, I waited minutes after starting the capture before opening the browser, so there is no way a packet was really lost because Wireshark was still starting up.
Any idea where I should look to resolve this?
Since it works everywhere else, this seems to indicate that something goes wrong on the network. Where would I look for such awkward behaviour, where 3000 bytes of a web server response get swallowed?
There is no indication in the Apache logs that anything special occurred for the failed pages.
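For anyone digging into something similar: capturing the same conversation on both ends shows whether the missing bytes ever left the server, and a don't-fragment ping is a quick probe for an MTU/fragmentation problem, which is a common cause of large responses being truncated across site-to-site VPNs. A rough sketch, assuming Linux hosts; 10.0.0.5 and 192.0.2.10 are placeholders for the remote client and the server:

```
# On the Apache server: capture everything exchanged with the problem client
sudo tcpdump -i any -s 0 -w server-side.pcap host 10.0.0.5 and port 80

# On the client (or its gateway): capture the same conversation
sudo tcpdump -i any -s 0 -w client-side.pcap host 192.0.2.10 and port 80

# Probe for an MTU black hole across the VPN: 1472 bytes of ICMP payload
# plus headers makes a full 1500-byte frame; shrink -s until it gets through.
ping -M do -s 1472 192.0.2.10
```

If the server-side capture contains the full 4000 bytes but the client-side one does not, the loss is somewhere on the VPN path rather than in Apache.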
Related
I'm using a DigitalOcean cloud hosting server with apache2 on an Ubuntu 16.04 VPS. I can browse the site from my local PC and check Apache's access.log to see the page requests. However, when using a mobile device, I cannot get a response from the website. I can ping the server's IP address from my phone successfully, yet requests for the domain root do not create any record in the access.log.
I have attempted to uninstall fail2ban as per these threads:
https://www.digitalocean.com/community/questions/how-to-debug-solve-a-err_connection_timed_out-error-when-this-error-happens-on-some-browsers-but-not-in-another
http://installion.co.uk/ubuntu/vivid/universe/f/fail2ban/uninstall/index.html
I have also tried simply serving a phpinfo() page; however, there are still no records in access.log when trying to access the site from mobile devices. The site has HTTPS enabled and is serving perfectly to a PC.
Using a browser testing site (https://www.browserstack.com/), I also get connection timed out errors, and no records appear in the access.log.
Any suggestions on where to start troubleshooting this? Is this possibly a problem with DigitalOcean itself? Is there anything in the LAMP stack that would specifically block some browsers or IP addresses?
It sounds to me like one of two things is happening here:
1. Your DNS is not set to point to that IP, but you have set it in your operating system's hosts file on your computer.
2. Your DNS is correct, but other systems are not yet seeing the change you've made.
Try visiting the IP of the server directly from your mobile device. If anything happens besides a timeout, be it a redirect (even a failed one) or a page load, you will know that DNS resolution is the issue. Given that you can ping the IP from your phone, I would suggest fail2ban is not involved, as fail2ban should block ping as well.
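A quick way to see what the name actually resolves to from different places (example.com and 203.0.113.10 are placeholders for your real domain and droplet address):

```
# What public resolvers return for the domain
dig +short example.com @8.8.8.8
dig +short example.com @1.1.1.1

# What this particular PC resolves it to (a stale hosts-file entry shows up here; Linux)
getent hosts example.com
grep example.com /etc/hosts
```

If the public resolvers return something other than the droplet's address (say 203.0.113.10) while your own PC's lookup is correct, that points to case #1 or #2 above.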
If it turns out to be #2, it's just a waiting game. DNS changes can take up to 48 hours to be seen by all systems. In most cases 4-6 hours is typical, but 48 hours is still the recognized standard for "it could possibly take this long."
Jarland
I have a website deployed on two identically configured servers - Ubuntu 14.04 / apache2 / MySQL / PHP. One is in a VM, the other is a physical box. Both servers behave the same.
The first request for a web page times out when sent from inside the local network, but the server responds fine from outside. So if I click on a link or a menu item on the web page, or call up a web page in a browser, it times out. If I then make a request for a web page, it responds immediately, and so do all subsequent requests, unless I leave it alone for over 20 seconds, in which case the next response will time out. If I click on one link, wait 2 or 3 seconds, then click on the same or another link, it responds. If I click a link, then click a link in another browser after 2 or 3 seconds, it responds instantly.
My router is set up to redirect requests from outside to the same server. When I make a request to the public address remotely it always responds instantly - no latency. This shows it's not the disk, application pools or whatever else might take time to spin up; it's something to do with accessing it locally. The same thing also happens with telnet, MySQL Workbench and FTP on both machines. Nothing unusual in the Apache logs; it seems the first request just doesn't get there.
I think it's probably my network config. I have a reason for the Ubuntu servers to be on a separate subnet, but I'm currently combining them. The servers have static IPs at 192.168.0.10 and .11, with a mask of 255.255.254.0.
I'm accessing them from machines on the 192.168.1.xx network, also with a mask of 255.255.254.0. Pings go both ways instantly. It's really frustrating trying to test web updates when first the FTP has to be done twice and then the clicks have to be done twice if I leave it more than 20 seconds.
Not many views of the question so probably nobody is interested anyway, but I found the answer.
I had a VMware virtual network set up at 192.168.0.0 for when I'm on a train (or at least not at home), so that I can communicate with my VM server by connecting the VM's network adapter to the virtual network instead, preserving the static IP address. Even when I have the VM connected directly to the home network, that virtual network is still active on the PC, which means there are two separate networks in the same range - which obviously confuses things and takes a while to sort out. I guess it has to wait for one to time out before trying the other. Anyway, disabling the VMware virtual network when I'm at home sorts out the problem.
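In case anyone else hits this: the overlap is easy to spot in the routing table of the PC running VMware, where two interfaces both claim the same 192.168.0.0/23 range. A quick check (which command applies depends on the host OS):

```
# Windows host
route print -4

# Linux host
ip route show
```

If two entries cover the same destination network on different interfaces, VMware's virtual adapters (VMnet1 host-only and VMnet8 NAT by default) are the usual suspects and can be disabled while on the home network.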
I found my httpd processes were using a lot of resources in 'htop' (while my page views are actually low and my web server has 4 CPUs with 4 GB of RAM), so I tried to find out what was happening.
Then I found, via the httpd access log, that a single IP kept visiting my site.
After I blocked this IP, my web server was back to normal.
However, I only found the problem after my server was under heavy load, which had already caused a lot of connection rejections earlier. Therefore my questions are:
(1) What kind of visiting is this? From the rate of the requests, it's definitely not a human. I looked up the IP; it's from the Netherlands, with no other information.
(2) What if they change the IP and visit my site like this again? Is there any way to prevent this kind of visiting proactively?
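One proactive option that is often suggested for this is to rate-limit new connections per source IP at the firewall instead of chasing individual addresses. A minimal iptables sketch, assuming a Linux server and with thresholds that are only illustrative:

```
# Drop sources that have opened more than 20 new connections in the last 10 seconds
sudo iptables -A INPUT -p tcp --dport 80 -m conntrack --ctstate NEW \
     -m recent --update --seconds 10 --hitcount 20 --name HTTP -j DROP

# Otherwise, remember this source address for the next check
sudo iptables -A INPUT -p tcp --dport 80 -m conntrack --ctstate NEW \
     -m recent --set --name HTTP
```

fail2ban can achieve a similar effect by scanning the Apache access log and banning offenders automatically.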
I am coding a Mac app, which will be a server that serves files to each user's mobile device.
The issue with this, of course, is getting the actual IP/port of the server host, as it will usually be inside a home network. If the IP/port changes, it's no big deal, as I plan to send that info to a middle-man server first and have my mobile app get the info from there.
I have tried UPnP with https://code.google.com/p/tcmportmapper/ but even though I know my router supports UPnP, the library does not work as intended.
I even tried running a TURN server on my Amazon EC2 instance, but I had a very hard time figuring out what message to send it to get the info I need.
I've been experimenting with Google's libjingle since last night, but I'm having a hard time even getting the provided iOS example to run.
Any advice on getting this seemingly difficult task accomplished?
The port of your app will not change. The IP change can be handled by posting your server's IP to a web service every hour, or however often you want.
The server should request a URL like http://your-web-service.com/serverip.php?ip=your-updated-ip and have serverip.php handle the rest (put it into a MySQL database or something).
When your client starts, it should ask your site for the IP and then connect to your server with that.
This is a pretty common way of handling this type of thing.
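A rough sketch of the reporting half, assuming the serverip.php endpoint above and using a public what-is-my-IP service (api.ipify.org is just one example) to discover the current address; run it from cron, e.g. once an hour:

```
#!/bin/sh
# Discover the current public IP of the home network
IP=$(curl -s https://api.ipify.org)

# Report it to the middle-man web service, which stores it (e.g. in MySQL)
curl -s "http://your-web-service.com/serverip.php?ip=${IP}"
```

The mobile client then does the reverse on startup: fetch the last reported IP from the web service and connect to it. The home router still needs a port forward (manual or via UPnP) for that connection to actually reach the Mac app.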
I may be giving entirely the wrong information here, but at the moment we're a bit unsure where to look for the issue. We have a server running on WebLogic; I'm not sure which version.
Our site has an installer that clients need, which is around 15 MB. Normally this downloads perfectly fine, but we've recently been finding issues where the browser reports the download completed but the installer can't be opened - the file size isn't what it's expected to be either, as if the download was simply cut off.
The issues are limited to cases where the user is on a spotty connection, such as a 3G card in their laptop.
It seems to happen mostly on Macs, but that appears to be because the Mac .dmg file is much larger than the Windows executable. Still, from my knowledge of network protocols, a spotty network shouldn't cause the specific issue we're seeing.
At the moment we're debugging several layers of the transfer, such as our firewalls, but with my meager knowledge of WebLogic I'm curious whether there is something we could be missing in the server's configuration itself.
Unfortunately, I'm not sure I'm able to post the configuration files here - I'm pretty sure there are currently no servlet rules created specifically for the installer's directory - but I was hoping someone here might at least recognize this type of issue and be able to point me in the right direction.
Check whether you have any MaxPostSize limit set.
For the responses that failed, check whether any socket timeout errors appear in the log file.
If you are using a proxy, check for errors there as well, mainly related to sockets.
Such issues can occur when a TCP socket is timed out at the firewall end, at the WebLogic end, or at a front-end proxy such as Apache.
There are a few other settings in WLS, such as the HTTP connection timeout, I think.
Check them from the admin console under the server's Protocols - General tab or HTTP tab.
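Not a WebLogic-specific check, but it can also help to reproduce the slow-client case deliberately and see whether the connection is cut before the full file arrives. A small sketch with curl (the URL is a placeholder for the real installer location):

```
# Fetch the installer at a deliberately slow rate to mimic a 3G client;
# -v shows the Content-Length header the server advertises
curl -v --limit-rate 20k -o installer.dmg https://example.com/downloads/installer.dmg

# Compare what actually arrived with the advertised size
ls -l installer.dmg
```

If the transfer is cut early, curl typically reports how much data was still outstanding, which confirms the socket is being closed somewhere in the path (proxy, firewall or server timeout) rather than the file being served short.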