Our site has taken a significant dip recently in daily visitors. It happened almost overnight, in fact. I've looked in Webmaster Tools and we have some 'Site Errors' listed: DNS, Server Connectivity and Robots.txt Fetch:
- DNS - couldn't communicate with the DNS server
- Server connectivity - Request timed out or site is blocking Google
- Robots.txt Fetch - Crawl postponed because robots.txt was inaccessible
This is odd because I can reach the robots.txt file with no issues at all. The line graph associated with each error type also shows no errors. What's the problem here?
Any help appreciated.
Most probably your site errors are 404 "Not found" responses. You need to take care of those errors. In some cases they are just wrong pages, a mistyped URL or similar, but some of those 404s you need to redirect to the correct page using a 301 or 302 redirect.
As for the "DNS - couldn't communicate with the DNS server" message with 0 error notifications and 0 errors on the graph below, it's just informative. But you do need to make sure your site is online and your DNS service is OK, with the "green tick".
Related
My users and I often run into a Cloudflare Bad Gateway 502 error. Figuring out what goes wrong is hard, because Cloudflare blames the hosting company and the hosting company blames Cloudflare. A typical situation when using Cloudflare.
What I noticed is that nothing actually fails. The host receives the request and handles it just fine, but it sometimes takes a bit longer than usual to complete. Cloudflare can't wait that long and throws a Bad Gateway error instead, while the script is actually still running.
I've noticed this behavior when performing heavy back-end tasks (like generating 50+ PDFs). My users notice it when they try to upload an image (which often starts a resizing task).
Is there a way I can configure my server so that Cloudflare knows that the request is still being processed? Or should I just ditch Cloudflare overall?
The culprit was Railgun. After disabling Railgun (in Cloudflare's control panel) the Bad Gateway 502 errors immediately disappeared.
I've struggled with this error for quite a long time, and Cloudflare support wasn't able to guide me.
To solve it I tried multiple tweaks and tricks.
The successful one was changing https to http in your database's wp_options table,
for example:
https://xxxxx.com/ to http://xxxxx.com/
then switching your SSL setting to "Full" in Cloudflare's settings.
This should work fine, good luck.
I have researched this error in depth and wrote up what I found at https://modernbreeze.in/error-502-bad-gateway-cloudflare-how-to-fix-in-wordpress/
Please read it and let me know whether it solves the problem.
I really have searched and read dozens of replies on this old worn-out subject. My "virtual server" must be different from the ones I read about here.
I have no cPanel. I have a Linux platform powered by Apache. I know the "root" of the server is a "free domain" which is never used, and added domains sit on the webspace in folders. I can use redirects, expires and a few other instructions in .htaccess. OK, redirects do not work - sometimes! Expires worked once and never again; at least I cannot see on Google PageSpeed that an expiry has been applied to a bunch of files.
I think the redirects are blighted by AB Zero.html not being AB_Zero.html or AB-Zero.html, and there are similar issues with various folders.
The website originated in 2002 and I inherited the file and folder names.
I tried replacing the spaces with "%20", to no avail.
That's the story. The question is: is there another way to handle redirects that overcomes this problem?
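Since the question mentions .htaccess, here is a minimal sketch (assuming Apache 2.4 and the file names from the question) of how a redirect for a filename containing a literal space can be written. Quoting the arguments keeps the space from being treated as an argument separator:

```apache
# Hypothetical .htaccess sketch: redirect the legacy "AB Zero.html"
# (note the literal space) to a hyphenated name. Quoting the pattern
# keeps the space inside a single argument.
RewriteEngine On
RewriteRule "^AB Zero\.html$" "/AB-Zero.html" [R=301,L]

# mod_alias alternative, also quoted:
# Redirect 301 "/AB Zero.html" "/AB-Zero.html"
```

Note that `%20` belongs in the URL the browser sends, not in the .htaccess pattern: the rewrite engine matches against the already-decoded path, so the pattern needs a real space.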
I own a VPS with ISPConfig installed. On that VPS we have 4 websites which are running well, but we have recently spotted a problem with Google indexing nonexistent subdomains.
No matter whether you type www.xxx.com or www.xxx.xxx.com or www.yyy.xxx.com or yyy.xxx.com, it loads the main website www.xxx.com, which is, I assume, bad for Google as we give them millions of pages to index. I should mention that subdomains "xxx" and "yyy" were never set up; in fact we have no subdomains at all, except "mail", which we use to reach Roundcube for our websites.
I spotted that the "auto-subdomain" setting for every website was "*." and set it to "www.", which fixed the redirect issue, but all subdomains are still reachable: the response code is 200 and they show empty pages.
I would like to show 404 error or something like that, not OK status.
Take a look at your DNS manager; you probably have a wildcard A record with *.xxx.com pointing to your IP address, so if you type this.xxx.com or that.xxx.com, you get www.xxx.com.
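Beyond removing the wildcard DNS record, the 200-with-empty-page behavior can also be handled on the web server itself. A minimal sketch for Apache (assumed here, since ISPConfig commonly manages Apache vhosts): the server uses the first matching VirtualHost as the default for any Host header no other vhost matches, so a catch-all loaded before the real sites can return 404:

```apache
# Hypothetical sketch: make unknown subdomains return 404 instead of
# serving the main site. Apache uses the FIRST loaded VirtualHost as
# the default for unmatched Host headers, so this must sort before
# the real sites' vhost files.
<VirtualHost *:80>
    ServerName catchall.invalid
    # Return 404 for every path on unmatched hosts.
    RedirectMatch 404 ^/.*
</VirtualHost>
```

The `catchall.invalid` name is a placeholder; it is never matched directly, it only exists so this vhost can serve as the default.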
I have a client running into some 500 errors when using a CDN. These errors indicate that there are too many internal redirects, and our research confirms that. The client does not want to adjust their internal redirects, and wants to address this problem in another way.
Based on my research so far, this seems like a hard cap which is not specific to any one type of web server, and is in place to avoid endless loops. That being said, is there any way to raise this limit - for instance to 20 instead of 10?
Example:
1. Browser >> 9 redirects >> Origin 200 page (9 redirects total)
2. Browser >> 9 redirects >> Origin gives custom 404 page (+1 redirect for custom 404; 10 redirects total)
3. Browser >> CDN (+1 redirect from custom rule) >> 9 redirects >> Origin 200 page (10 redirects total)
4. Browser >> CDN (+1 redirect from custom rule) >> 9 redirects >> Origin gives custom 404 page (+1 redirect for custom 404; 11 redirects total)
Only example 4 gives a 500 error. Without adjusting the redirect configuration or removing the CDN, is there any way to get around this? (Unfortunately, I cannot provide htaccess for more info on the redirects, my apologies).
Unfortunately it is the HTTP client that decides how many redirects it is willing to follow. The limitation you see stems from a recommendation originally given in RFC 2068, sec 10.3 and quoted again in RFC 7231, sec 6.4:
An earlier version of this specification recommended a maximum of five redirections [...] Content developers need to be aware that some clients might implement such a fixed limitation.
A rough estimate of how many redirects will hit the limit in various browsers can be found in this answer. Most browsers allow you to configure this limit (e.g. Firefox exposes the network.http.redirection-limit setting).
Web servers are a different matter: it appears Apache had a MaxRedirects option for the RewriteOptions directive between v2.0.45 and 2.1. The LimitInternalRecursion directive seems to have taken over from it. I've been unable to find an equivalent setting for nginx.
As a final note: If you are really seeing this many internal redirects (i.e. redirects that are only performed within the rewrite engine and do not lead to real HTTP redirects immediately), this may be a strong indicator to revise your rewrite rules.
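For the server-side limit on internal rewrites, here is a minimal sketch of raising Apache's recursion limits from their default of 10 (this goes in the server or virtual host config, not .htaccess, and only affects internal redirects; it does nothing about the limit a visiting browser enforces on real HTTP redirects):

```apache
# Hypothetical server-config sketch: raise Apache's internal limits
# from the default of 10 to 20. The first number caps internal
# redirects, the second caps nested subrequests.
LimitInternalRecursion 20 20
```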
You will need to impress on the client that you must fix the poor application logic that needs countless internal redirects instead of trying to change every browser on the planet.
** EDIT **
Apache apparently has the option to change this server-side using the RewriteOptions MaxRedirects option, but I think you will still have an issue with browsers, which often prompt your users to stop the redirects and bail out... sometimes even before 10 redirects.
Excuse the potential noobishness of the question, but I'm, well, a bit of a noob when it comes to this domain architecture lark. If "domain architecture" is even the technical term. Anyway, I digress...
So, I've googled this question, but I can't see the answer I'm looking for (maybe it doesn't exist, who knows!?). The situation is that I host a .com top-level domain which does a 301 forward to another site on the net not hosted by me. Can I set up a subdomain that then points somewhere else, whether that be on my host itself or just some other site elsewhere on the net?
Essentially, if I set up a subdomain, will it too inherit the web forwarding, and if so, can I directly affect where that subdomain points?
Any answers gratefully appreciated!
Before I try to answer your question, let me be a little fussy :)
First things first: you are confusing and mixing together two different protocols ([DNS] and [HTTP]). There is actually a dedicated Wikipedia page for HTTP 301 responses: http://en.wikipedia.org/wiki/HTTP_301 (but you should read the whole shebang: [Wikipedia, search for HTTP] is always a good start, and [RFC 2616] is an absolute must; IETF RFCs are not easy reading, but the Internet is built on them).
DNS is used to translate a name like www.example.com into an IP address like 192.168.0.1, in order to locate a machine on the Internet. So DNS is involved as one of the very first steps a browser takes to resolve a URL: once the "machine name" has been translated by the separate DNS service into an IP address, DNS's job is over and it is used/involved no more.
Then the browser, using HTTP, contacts the web server located on that machine (in this example www.example.com, which the DNS service has kindly translated to 192.168.0.1, because the operating system can only use an IP address as the argument for an [internet socket]), and only at that moment does the web server, instead of serving a page, answer with an "error" code (which is actually a "response header" with a numeric code that does not start with "2").
Except that this particular code is used to tell the browser something else: that it should retry the HTTP request, this time connecting to another machine (and, as long as the redirection is "permanent" rather than "temporary" ([HTTP_307]), the new address should be remembered by the browser, its cache and its history).
So, if you can set up a [redirection response header] on the first machine, it means there is a web server on that machine programmed (for a certain URL pattern) to emit a redirection header, and as long as you control these redirections, you can send the browser wherever you want: not merely to another machine on the Internet, but to another URL, even on the same website (this is actually the original intended use of code 301, as a measure against [link rot]).
Basically you are free to do whatever you want, or better, to send them wherever you want.
The pros are obvious... the cons are that you must have control over the first web server, and that visiting browsers will have to perform two "GET requests" to land on the intended page (this is not as grim as it looks, since [RFC 2616] suggests that the browser (the spec calls it a User Agent) cache and remember the redirection, because it is permanent).
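To make this concrete, here is a minimal sketch of the asker's scenario, assuming Apache and placeholder domain names: each ServerName gets its own, independent redirect, so the subdomain does not inherit the parent domain's forwarding.

```apache
# Hypothetical sketch with placeholder domains: each host name can
# redirect independently.
<VirtualHost *:80>
    ServerName example.com
    # The main domain forwards (301) to an external site.
    Redirect permanent / http://other-site.example/
</VirtualHost>

<VirtualHost *:80>
    ServerName sub.example.com
    # The subdomain forwards somewhere else entirely.
    Redirect permanent / http://somewhere-else.example/
</VirtualHost>
```

The same separation holds with web-host "forwarding" panels: forwarding configured for the bare domain applies only to requests for that host name, so a subdomain can be pointed elsewhere.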
Disclaimer: I am prevented from posting hyperlinks, but they were basically all from Wikipedia, so, if you will, you can look up the words in brackets "[...]" on Wikipedia...