Cloudflare - Custom URL Purge not working on different accounts

What is the issue with Cloudflare's custom URL purge not working on different accounts? Basically, I have the account and this functionality works for me, but for my colleague it does not (she lives in the USA, I live in Europe).
Could the issue be with account settings? She either gets an error, or sometimes the cache simply does not purge.

I've faced issues in the past where Cloudflare features weren't working because of browser plugins or browser configuration. First, confirm whether the error is actually coming from Cloudflare or from the browser.
Ask her to try a different browser, and also a different device, to narrow down whether the issue is tied to the account or to her environment.
Inspect the page (browser developer tools, network tab) while performing the purge to find out what's happening behind the scenes.
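If the dashboard purge keeps failing for her, it can also help to trigger the purge through the Cloudflare API, which takes the browser out of the equation entirely. A minimal sketch in Python, assuming an API token with cache-purge permission and the zone ID are available (the environment variable names here are just placeholders):

```python
import os
import requests

# Assumed environment variables (illustrative names):
#   CF_API_TOKEN - API token with "Zone -> Cache Purge" permission
#   CF_ZONE_ID   - the zone ID shown on the zone's overview page
API_TOKEN = os.environ["CF_API_TOKEN"]
ZONE_ID = os.environ["CF_ZONE_ID"]

def purge_urls(urls):
    """Purge specific URLs from the Cloudflare cache for one zone."""
    resp = requests.post(
        f"https://api.cloudflare.com/client/v4/zones/{ZONE_ID}/purge_cache",
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        json={"files": urls},
    )
    data = resp.json()
    if not data.get("success"):
        # The "errors" array usually explains permission/ownership problems,
        # which is useful when one account works and another does not.
        raise RuntimeError(f"Purge failed: {data.get('errors')}")
    return data

if __name__ == "__main__":
    purge_urls(["https://example.com/some/page.html"])
```

If the API purge succeeds under her credentials but the dashboard still fails, the problem is almost certainly on the browser/extension side rather than in the account settings.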

Related

What are the URLs to add to Screen Time to be able to log in to a replit.com account on an iMac?

I have added these three URLs to the authorised URLs in my iMac's "Screen Time" settings to be able to log in on replit.com, but the page stays on the login page...
Any ideas why it is blocked?
Thanks
https://replit.com
https://replit.com/~
https://replit.com/login
Thank you very much. Here are all the URLs I have added based on your advice, and it still keeps blocking.
Any ideas what I am doing wrong with "Screen Time" on the iMac?
For Replit to work properly you need a lot more than just replit.com unblocked.
To make sure Replit works for you and your students on your school network, you need to ensure the following domains are whitelisted/unblocked:
*.replit.com (primary domain)
*.repl.co (where web applications built on Replit are hosted)
*.repl.it (old domain, not actively used)
*.replitusercontent.com (old domain, not actively used)
*.cdn.replit.com
Clients must be able to access all subdomains of the above domains. The specific hosts that clients communicate with under the above names are subject to change without notice.
According to the Replit docs.
I do recognize that it refers to students; this particular page is aimed at the IT department, so that Replit will work across their district.
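If it still blocks after adding those domains, it can help to check which of the hosts are even reachable from the iMac in the first place. A rough sketch in Python, runnable from a terminal on the same machine; note that Screen Time's filter may only apply inside the browser, so a host that passes this basic network check could still be blocked in Safari, but a host that fails here is definitely not getting through:

```python
import socket

# Representative hosts under the domains Replit asks to have unblocked.
# Illustrative only: the wildcard entries (*.replit.com, *.repl.co, ...)
# cover many subdomains that may also need to be reachable.
HOSTS = [
    "replit.com",
    "repl.co",
    "repl.it",
    "replitusercontent.com",
    "cdn.replit.com",
]

for host in HOSTS:
    try:
        # Open a plain TCP connection to port 443 (HTTPS) as a basic check.
        with socket.create_connection((host, 443), timeout=5):
            print(f"{host}: reachable")
    except OSError as exc:
        print(f"{host}: blocked or unreachable ({exc})")
```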

Why is dojotoolkit.org suspended?

When I go to https://dojotoolkit.org/, I get, "Unable to connect". In some browsers I get "You have reached a domain that is pending ICANN verification".
I've used a number of dojo libraries in my code. Does anyone know what happened to the owner and whether this is likely to be fixed in the near future?
If it isn't fixed, what is my best option for replacing it?
This seems to be a temporary administrative DNS issue, based on their Twitter response:
We apologize for the issues accessing the Dojo 1 web site. We're working on it as fast as possible. In the mean time, you can add the IP address directly to /etc/hosts. 104.16.205.241
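If the name still doesn't resolve for you, a quick way to check what your resolver returns (and whether the /etc/hosts workaround from the tweet is needed) is something like the sketch below. Note that getting a different IP than the one in the tweet is not necessarily a problem, since the site appears to sit behind a CDN:

```python
import socket

# IP address given in the Dojo team's tweet.
EXPECTED_IP = "104.16.205.241"

try:
    resolved = socket.gethostbyname("dojotoolkit.org")
    print(f"dojotoolkit.org currently resolves to {resolved}")
except socket.gaierror:
    # The name doesn't resolve at all; the workaround is a line like
    #   104.16.205.241 dojotoolkit.org
    # added to /etc/hosts.
    print(f"DNS resolution failed; consider adding "
          f"'{EXPECTED_IP} dojotoolkit.org' to /etc/hosts")
```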
There are also some workarounds on the dojo gitter.im channel:
Reference guide content is also at https://github.com/dojo/docs/ and tutorials are at https://github.com/dojo/dojo-website/tree/master/src/documentation/tutorials
Also, as mentioned in this related question, you can use the Archive.org Wayback Machine.
The site now appears to be back up. I was able to access it and get information on features I'm using.

Scrapy on Ubuntu web server getting 417 error

I have been developing a crawling script for a number of news websites and using Scrapy to handle the logic.
When I run my script on an Ubuntu web server (DigitalOcean, if that helps), a lot of the websites that return 200 on my local machine return 417 instead.
I was wondering how I should fix this, if it is a problem at all. I'm not entirely sure whether it is affecting the final output, but it seems like it is.
Some of my own research has turned up:
http://www.checkupdown.com/status/E417.html. I've tried adding an Expect header to my requests, which hasn't worked.
I've heard it might be a problem with HTTP 1.1 vs 1.0? EDIT: Nope. Scrapy's HTTPDownloaderHandler automatically chooses 1.1 if it is available.
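For reference, per-request headers in Scrapy can be set directly on the Request object, which makes it easy to test whether a particular header (Expect, User-Agent, and so on) is what triggers the 417. A minimal sketch; the spider name and URL are placeholders:

```python
import scrapy


class HeaderTestSpider(scrapy.Spider):
    """Minimal spider for testing whether request headers trigger the 417."""
    name = "header_test"  # placeholder name

    # Let 417 responses reach the callback instead of being filtered out,
    # so we can log exactly what comes back.
    custom_settings = {"HTTPERROR_ALLOWED_CODES": [417]}

    def start_requests(self):
        url = "https://example.com/"  # placeholder; use one of the failing sites
        yield scrapy.Request(
            url,
            headers={
                # Try a browser-like User-Agent; some servers reject the default one.
                "User-Agent": (
                    "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 "
                    "(KHTML, like Gecko) Chrome/120.0 Safari/537.36"
                ),
            },
            callback=self.parse,
        )

    def parse(self, response):
        self.logger.info("Got %s for %s", response.status, response.url)
```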
417 (Expectation Failed) is the error a web server returns when it cannot meet the expectation your client sent in the Expect request header (most commonly Expect: 100-continue).
This looks like a Scrapy bug or, more likely, a misconfiguration.
It seems your public IP address was either already banned, or was banned while you were scraping, by the web server of the page you want to scrape. For the first case, you can reboot your instance to get a new public IP (at least this works on Amazon). For the second, here are some tips from the official documentation to avoid that situation:
rotate your user agent from a pool of well-known ones from browsers (google around to get a list of them)
disable cookies (see COOKIES_ENABLED) as some sites may use cookies to spot bot behaviour
use download delays (2 or higher). See DOWNLOAD_DELAY setting.
if possible, use Google cache to fetch pages, instead of hitting the sites directly
use a pool of rotating IPs. For example, the free Tor project or paid services like ProxyMesh
use a highly distributed downloader that circumvents bans internally, so you can just focus on parsing clean pages. One example of such downloaders is Crawlera
Additionally, you can reduce the concurrent requests setting in your spider; that worked for me once (see the sketch below).
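As a concrete illustration of several of those tips together, here is a hedged sketch of what one might put in the project's settings.py; the values are examples, not recommendations from the Scrapy docs themselves:

```python
# settings.py (sketch) - example values for the ban-avoidance tips above.

# Identify as a common browser instead of the default Scrapy user agent.
USER_AGENT = (
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
    "(KHTML, like Gecko) Chrome/120.0 Safari/537.36"
)

# Some sites use cookies to spot bot behaviour.
COOKIES_ENABLED = False

# Wait between requests; RANDOMIZE_DOWNLOAD_DELAY is on by default and
# adds jitter on top of this value.
DOWNLOAD_DELAY = 2

# Reduce concurrency per domain; this is the "reduce concurrent requests"
# tweak mentioned above.
CONCURRENT_REQUESTS_PER_DOMAIN = 1

# Optional: be polite and respect robots.txt while debugging.
ROBOTSTXT_OBEY = True
```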

Is the site being down while moving to another host bad for SEO?

I have bought an iPad website, and it has been moved to my server.
Now I have tried to create an addon domain, but it does not work on my first hosting account.
On my second hosting account it works, but there is another iPad website on that server, so I don't think this is smart to do because they would share the same IP address.
So adding the addon domain does not work and the site is down now!
I have opened a service ticket, but I think it will be at least 8 hours before I get an answer.
Can anyone tell me how bad this is for my SERP position in Google?
The website has always been on the first page.
Will this 404 error hurt my site? Or is it better to place the site on the same server as the other iPad website?
It is not ideal to serve 404s or timeouts; however, your rankings should recover. You mentioned that the sites are different. Moving the site to a different server/IP shouldn't matter too much as long as you can minimize the downtime of the move (and a quick move should probably be preferred over extended downtime, if possible). To be clear, though: do NOT serve site #2 as site #1 in the short term, or you will run into duplicate content issues.
If you don't already have one, you might open a Google Webmaster Tools account. It will provide you with some diagnostics about your outage (e.g. how many fetch attempts Google made, the response codes returned, etc.), and if something major happens, which is unlikely, you can request re-inclusion.
I believe it is very bad if the 404 is a result of an internal link.
I cannot tell you anything about which server you should host it on, though, as I have no idea whether that scenario is bad. Could you host it on the one server for now, and then move it once the other is up?

Google Chrome ERR_FAILED (err2) - Web App

I'm a web application developer who runs the site http://myfav.es. We've been struggling with this issue for about a month now.
We use the HTML application cache spec - www.w3.org/TR/offline-webapps/ - with dynamically generated manifest files - myfav.es/personal.manifest - to speed up page delivery. These dynamically generated manifest files are served with the proper headers and use PHP to produce custom manifests for each user.
We also use gzip compression to serve the site from a Linux/Apache host.
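For readers unfamiliar with the setup: a dynamically generated manifest of this kind is just a text response served with the Content-Type: text/cache-manifest header. The site in question uses PHP; purely as an illustration of the idea, a minimal sketch in Python (Flask) might look like the following, with the URLs and cache list invented for the example:

```python
from flask import Flask, Response

app = Flask(__name__)

@app.route("/personal.manifest")
def personal_manifest():
    # Illustrative only: in the real site this list is generated per user (in PHP).
    cached_urls = [
        "/css/site.css",
        "/js/app.js",
    ]
    body = "CACHE MANIFEST\n"
    # Changing this comment/version line is what forces browsers to re-download the cache.
    body += "# version 2024-01-01\n\n"
    body += "CACHE:\n" + "\n".join(cached_urls) + "\n\n"
    # Everything else goes to the network as usual.
    body += "NETWORK:\n*\n"
    return Response(body, mimetype="text/cache-manifest")

if __name__ == "__main__":
    app.run()
```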
Throughout the life of the site, users have reported getting an ERR_FAILED error in Chrome similar to this screenshot: twitpic.com/272237.
The error is intermittent, occurring once every 200-300 visits, but it then persists across every page refresh, including hard refreshes, which presumably means an error in the app cache is causing them to continuously load a failed version of the site. Mysteriously, however, JUST clearing cookies causes the error to fix itself.
I'm completely out of ideas on how to approach this error, and googling the error message turns up a ton of confused users with voodoo-ish approaches to solving it. I've personally seen the error, along with a number of complaints from other Chrome users, so I'm fairly certain it cannot be caused by a particular user having abnormal settings or browser preferences.
Does anyone have any insight into the cause of this browser error and its origins? Whether it's likely server-side or a byproduct of the app design?