Is site downtime from moving to another host bad for SEO?

I bought an iPad website and moved it to my server.
I then tried to create an addon domain, but it does not work on my first hosting account.
On my second hosting account it works, but that server already hosts another iPad website, so I don't think hosting both there is smart because they would share the same IP address.
So adding the addon domain does not work and the site is down now!
I have filed a support ticket, but I think it will take at least 8 hours before I get an answer.
Can anyone tell me how bad this is for my SERP position in Google?
The website has always been on the first page.
Will this 404 error hurt my site? Or is it better to place the site on the same server as the other iPad website?

EDIT:
It is not ideal to serve 404s/timeouts, but your rankings should recover. You mentioned that the sites are different. Moving the site to a different server/IP shouldn't matter too much as long as you minimize the downtime of the move (and a quick move should probably be preferred over extended downtime, if possible). To make sure this is communicated: do NOT serve site #2 as site #1 in the short term, or you will run into duplicate content issues.
If you don't already have one, you might open a Google Webmaster Tools account. It will provide you with some diagnostics about your outage (e.g. how many fetch attempts Google made, the response codes returned, etc.) and, if something major happens, which is unlikely, you can request re-inclusion.
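In the meantime, a quick way to see for yourself what crawlers are getting during the outage is to request the page and look at the status code. A minimal sketch in Python (example.com is a placeholder for the affected domain):

```python
import requests

# Minimal sketch: check what status code a crawler would currently receive.
# "example.com" is a placeholder for the affected domain.
try:
    resp = requests.get("https://example.com/", timeout=10, allow_redirects=False)
    print("Status:", resp.status_code)
except requests.exceptions.RequestException as e:
    print("Request failed (crawlers likely see a timeout):", e)
```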

I believe a 404 is particularly bad when it is the result of an internal link.
I cannot tell you anything about which server you should host it on, though, as I have no idea whether that scenario is bad. Could you host it on the one server for now, and then move it to the other once that is up?

Related

What URLs do I need to allow in Screen Time to be able to log in to my replit.com account on an iMac?

I have added these three URLs to the allowed websites in the Screen Time settings on my iMac so I can log in to replit.com, but the page stays on the login page...
Any ideas why it is blocked?
Thanks
https://replit.com
https://replit.com/~
https://replit.com/login
Thank you very much, but here are all the URLs I have added following your advice, and it still blocks.
Any ideas what I am doing wrong in Screen Time on my iMac?
For Replit to work properly you need a lot more than just replit.com unblocked.
To make sure Replit works for you and your students on your school network, you need to ensure the following domains are whitelisted/unblocked:
*.replit.com (primary domain)
*.repl.co (where web applications built on Replit are hosted)
*.repl.it (old domain, not actively used)
*.replitusercontent.com (old domain, not actively used)
*.cdn.replit.com
Clients must be able to access all subdomains of the above domains. The specific hosts that clients communicate with under the above names are subject to change without notice.
According to the Replit docs.
I do recognize that it refers to students; this specific page is aimed at the IT department, so that Replit would work in their district.
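As a rough sanity check, you could try opening TLS connections to representative hosts from the list above. This is only a sketch: the hostnames stand in for the wildcard domains, and Screen Time's filtering may treat command-line tools differently from the browser.

```python
import socket

# Sketch: spot-check that representative Replit hosts are reachable on 443.
# These hostnames are examples standing in for the wildcard domains above;
# the actual hosts a client contacts may differ.
HOSTS = ["replit.com", "repl.co", "repl.it", "cdn.replit.com"]

for host in HOSTS:
    try:
        with socket.create_connection((host, 443), timeout=5):
            print(f"{host}: reachable on port 443")
    except OSError as e:
        print(f"{host}: blocked or unreachable ({e})")
```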

Why is dojotoolkit.org suspended?

When I go to https://dojotoolkit.org/, I get, "Unable to connect". In some browsers I get "You have reached a domain that is pending ICANN verification".
I've used a number of dojo libraries in my code. Does anyone know what happened to the owner and whether this is likely to be fixed in the near future?
If it isn't fixed, what is my best option for replacing it?
This seems to be a temporary administrative DNS issue, based on their Twitter response:
We apologize for the issues accessing the Dojo 1 web site. We're working on it as fast as possible. In the meantime, you can add the IP address directly to /etc/hosts. 104.16.205.241
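For anyone unfamiliar with the format, the workaround from the tweet amounts to adding a line like this to /etc/hosts (the IP is taken from the tweet above and may have changed since):

```
104.16.205.241  dojotoolkit.org
```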
There are also some workarounds on the dojo gitter.im channel:
Reference guide content is also at https://github.com/dojo/docs/ and tutorials are at https://github.com/dojo/dojo-website/tree/master/src/documentation/tutorials
Also, as mentioned in this related question, you can use the Archive.org Wayback Machine.
The site now appears to be back up. I was able to access it and get information on features I'm using.

Recent HTTPS (SSL) addition: getting a "site can't provide a secure connection" error page

Our website recently went from HTTP to HTTPS. I, and others, are randomly getting a "This site can't provide a secure connection" page. Upon refresh, the page loads just fine. Why are we getting this page randomly?
FYI... we have HTTP-to-HTTPS redirects in place.
Impossible to say without more details, but some things I can suggest are:
You have multiple servers, and some are configured correctly while others are not.
You are not including the full certificate chain. Sometimes your browser has the missing intermediate certificate cached and sometimes not (see this answer for more info: https://serverfault.com/questions/826100/ca-certificate-trouble-with-squid-on-centos7/826321#826321); see the sketch after this list.
A bug in the browser or other software. I had this issue in Chrome when using Apache with HTTP/2; I never did figure it out, but a Chrome update fixed it.
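On the second point, here is a minimal sketch of a chain check in Python. Unlike a browser, Python's ssl module does not cache or fetch missing intermediates, so a verification failure here often points at an incomplete chain (example.com is a placeholder for your site):

```python
import socket
import ssl

def check_chain(host: str, port: int = 443) -> None:
    # The default context verifies against the system trust store and, unlike
    # a browser, will not fill in a missing intermediate certificate itself.
    ctx = ssl.create_default_context()
    try:
        with socket.create_connection((host, port), timeout=10) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                print("OK:", tls.version())
    except ssl.SSLCertVerificationError as e:
        # Commonly "unable to get local issuer certificate" when the server
        # does not send the full chain.
        print("Verification failed:", e)

check_chain("example.com")
```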
Run https://www.ssllabs.com/ssltest/ against your site to confirm the problem is not with your HTTPS setup. If that doesn't help, or you don't understand the results it gives, then update your question with more details (which server and browser you are using and what versions, whether you have any proxy in place between your browser and the site, and ideally the website name) if you want people to help you.
Also be aware this is a programming site; some people don't like these questions here and will suggest other Stack Exchange sites, but I honestly don't know where this question is best placed: serverfault.com maybe, but that is for professional sysadmins only; Unix & Linux seems a little generic (I'm not even sure you are using a Linux web server!); Webmasters is more for content and SEO questions; Information Security is more for theoretical SSL/TLS questions...

Site functionality diminished over VPN/company network

We are currently experiencing diminished functionality with one of our customers on our main production site. All subpages and resources seem to be affected as well.
The customer reports a completely broken experience: the site does not work correctly at all for them, mostly because assets fail to load correctly.
We already started investigating and have found that - so far - nothing seems to be wrong with the site itself.
Quick rundown:
The production site sits behind a Cloudflare layer, and almost all of its assets are delivered either via cdnjs or Amazon CloudFront (behind Cloudflare); all assets are reachable via HTTP as well.
The site uses SSL and enforces it (the dynamic certificate from Cloudflare).
We captured a HAR for a request to one of our sites; the request times are extremely long. If you would like to inspect it, here is an online HAR viewer (be sure to uncheck validation of the file).
The customer uses Internet Explorer 8 and Chrome (39). While the site is not optimized for IE8, it should run fine in Chrome; in fact, it runs just fine in most browsers above IE9 for all of us.
Notes
We already ruled out:
Virtual delivery problems (there could be physical limitations we are not aware of)
General faultiness of our setup (we tried three different open VPNs to verify this)
Being on the customer's blacklist by accident (although we cannot be entirely sure of this)
SSL Server Name Indication (SNI) problems
(Potentially) a general problem with the customer's network; the customer does not report any problems with "the rest of the internet".
The customer will not give us access to their VPN or disclose security details, so we cannot really test the situation ourselves. We suspect that the customer uses an internal proxy that might cause the problems described, but we are not sure.
Questions
My questions here are:
Is there any known problem caused by internal networking in conjunction with our setup that could cause this behaviour?
Are there potential problems on our end that we could have overlooked, or things that we do differently from other sites?
It seems the connection is being made (or routed) through a low-bandwidth, high-latency link (or a very congested one). Most of the DNS lookups and connects seem to be taking ~10s.
In the HAR you can see that this affects fonts.googleapis.com and cdnjs.cloudflare.com, while https://www.google-analytics.com/analytics.js has no data captured at all. To me, the claim that the customer has no problems with "the rest of the internet" seems dubious, given that in this HAR the analytics script never loaded and access to common CDNs is very slow.
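To quantify this from the HAR itself, here is a small sketch that flags entries with slow DNS or connect phases. It assumes a standard HAR 1.2 file; "capture.har" is a placeholder filename.

```python
import json

# Sketch: flag HAR entries whose DNS or connect phase is suspiciously slow.
with open("capture.har") as f:
    har = json.load(f)

for entry in har["log"]["entries"]:
    timings = entry.get("timings", {})
    dns = timings.get("dns", -1)        # milliseconds; -1 means not applicable
    connect = timings.get("connect", -1)
    if dns > 5000 or connect > 5000:
        url = entry["request"]["url"]
        print(f"dns={dns}ms connect={connect}ms  {url[:80]}")
```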
My guesses (pick one or more):
they are testing on a machine different from the one that has no problems with "the rest of the internet"
this machine is very, very slow
it has some kind of content filtering, antivirus, or similar software filtering the web (perhaps with an SSL certificate installed in order to forge & inspect HTTPS traffic)
the access goes through a congested route or a low-bandwidth, high-latency link
Two hotspots:
CDN access points can sometimes be inconsistent; I spent a lot of time understanding this issue. How? In a live session with the client, opening each loaded resource one by one, I saw differences between CDN access points (mine in eastern Europe, his in central Europe). The CDN host was one of the biggest US players in the world; in any case, we fixed this by invalidating (deleting) all files from the CDN so that new/correct ones were loaded (a sketch follows below).
You need a CDN that supports serving files over HTTPS, and then use that CDN for the SSL requests.
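On the first point: if the CDN in question is Amazon CloudFront (mentioned earlier in this thread), a full invalidation can be issued like this. This is only a sketch; "DISTRIBUTION_ID" is a placeholder for the real distribution ID.

```python
import boto3

# Sketch: force CloudFront edge locations to drop their cached copies so
# that fresh/correct files are fetched from the origin, as described above.
client = boto3.client("cloudfront")
client.create_invalidation(
    DistributionId="DISTRIBUTION_ID",
    InvalidationBatch={
        "Paths": {"Quantity": 1, "Items": ["/*"]},
        "CallerReference": "purge-stale-assets-001",  # must be unique per request
    },
)
```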

Fiddler reports everything coming from process avp:2016

I have Fiddler installed at home and at work. At home, it shows me the actual process for each web session. At work, it only says avp:2016, no matter which browser or process the session comes from.
Does anyone know why?
That's happening because all of the traffic is being redirected through AVP.exe.
"Why's that?" you ask?
Because you've installed an antivirus package (e.g. Kaspersky) that injects its own proxy into every process so that it can attempt to scan traffic for viruses or malware. (I think this is a terrible idea with a bad performance/safety trade-off, but a number of AV companies do this.)
If you disable that option in your AV settings (it's usually called "Web Safety" or similar), the traffic's origins will no longer be concealed.