Cloudflare fails to purge cached content for a blog post - cloudflare

I run a blog that is linked with Cloudflare. When I create a new post, the site isn't updated to show it. I have tried many methods that I learned from various sites, like using a page rule to bypass the Cloudflare cache, but it didn't work. I also turned off Auto Minify for JS, CSS, and HTML; that still doesn't work. My blog still shows the latest post as one from five days ago. If you log in to the WordPress dashboard you see the current posts, but normal visitors see the cached posts, which stay static the whole time.
Here are my Cloudflare settings:
[Screenshots: Page Rules settings, Page Speed settings, and Page Caching settings.]
I'd appreciate help from anyone who knows about this problem and how to solve it.
Thanks!

Looking at the site now, it looks like you may have disabled Cloudflare? I'm seeing the latest posts, and there are no Cloudflare response headers coming back.
For troubleshooting caching issues, one of the most useful things you can do is inspect the response headers (using the Network tab in Chrome DevTools or similar). The first step is to identify which request is responsible for the cached content (a document, an AJAX call, etc.).
From there, you can look at the response headers to see why it is behaving this way; specifically, you'll want to check the Cache-Control and CF-Cache-Status headers. More info here - https://developers.cloudflare.com/cache/about/default-cache-behavior
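If you prefer to check from the console rather than the Network tab, a minimal sketch (the post URL is a placeholder to swap for a page that's serving stale content; run it from a page on the same origin so the headers are readable):

fetch('https://example.com/my-new-post/', { cache: 'no-store' })
    .then(function (res) {
        // Cloudflare reports HIT/MISS/BYPASS/etc. here when it is proxying the request
        console.log('CF-Cache-Status:', res.headers.get('cf-cache-status'));
        console.log('Cache-Control:', res.headers.get('cache-control'));
    });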

I fixed this problem: all of the performance features are managed by Ezoic. I purged everything in Ezoic and now everything is working perfectly.

Related

Is it possible to have GitHub Readme images follow redirects?

I'm trying to add a test coverage badge to the Readme of a private repository on GitHub. Our continuous integration process saves out the image to a secured Google Cloud Storage bucket that's not accessible to the public, and should remain that way.
Google's authorization layer is smart enough that if I go to the URL for the image, I'm automatically redirected to the resource with a valid auto-generated signed URL.
E.g., if I go to http://storage.cloud.google.com/secret-files/mysecretfile.png, then if I'm logged in and allowed to view it, I'm automatically redirected to something like https://blahblah-apidata.googleusercontent.com/download/storage/v1/b/secret-files/o/mysecretfile.png?key=verylongkey, where I can load the image.
This seemed perfect. Reference the canonical path in the GitHub Readme, authenticated users see the image, unauthenticated users are still blocked, we don't have to make the file public, and we don't have to do anything complicated.
Except that GitHub is proxying the image request, meaning that it will always be unauthenticated. My browser is loading something like https://camo.githubusercontent.com/mysecretimage.png.
Is there a clever way to work around this? Or do I need to go back to the drawing board?
All images on github.com are proxied through the Camo image proxy. There are a couple of reasons for this:
It preserves the privacy of users. A document can't track users by directing them to a different site or by setting cookies on them.
It means images can be cached and served at an appropriate size.
GitHub can have a very strict content security policy that does not allow loading from untrusted sites, which means that any sort of accidental security problem (like an XSS) is a lot less likely to work.
Note the last part. Even if you found some sneaky way to get another image URL to render properly in the website, your browser wouldn't load it, because doing so violates the Content-Security-Policy header the site sent; moreover, your browser would tattle about it to the reporting URL that GitHub provided.
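To illustrate the mechanism, a policy along these lines (illustrative only, not GitHub's exact header) whitelists the proxy host for images, so the browser refuses to load images from anywhere else and reports violations:

Content-Security-Policy: default-src 'none'; img-src 'self' https://camo.githubusercontent.com; report-uri /csp-report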
So any image URL you provide will need to be readable by GitHub's image proxy and it won't be possible to serve different content to different users.

Google soft 404 error on an index page that is working fine

A friend of mine has been having trouble getting her site indexed by Google and asked me to have a look, but this is not something I know much about, so I was hoping for some assistance.
Looking at her Search Console, Google's crawl shows a soft 404 error on the index page. I have marked this as fixed a few times, because the site looks fine to me, but the error keeps coming back.
If I fetch the site as Google, it seems to work fine, although it shows the mobile version instead of the desktop one.
It also keeps reporting a recurring 404 for a page, http://www.smeyan.com/new-page, which doesn't exist anywhere I can see, including in the server files or the sitemaps.
Here is what I know about this site:
It used to be a Wix site and was moved to a HostGator shared server 2-3 months ago.
It uses JavaScript/jQuery .load() calls to pull page content from outside the index.html template (see the sketch after this list).
It has two sitemaps, one for the URLs and one for both URLs and images:
http://www.smeyan.com/sitemap_url.xml http://www.smeyan.com/sitemap.xml
It has been about two months since it was submitted for indexing, and Google has not indexed any of the content; searching for site:www.smeyan.com shows some old stuff from the Wix server, although Search Console says it has 172 images indexed.
It has www. set as the preferred domain in Search Console.
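For context, the .load() pattern mentioned in the list looks something like this (a sketch; the selector and path are placeholders):

// Content only exists after this runs, so script-less crawlers see an empty shell
$('#content').load('./pages/home.html');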
Has anyone experienced this, and can you point me in a direction for a fix?
How long a lifetime is set in this site's Cache-Control header? If it's long, you should use Google's Removals tool for obsolete snippets and cached copies. I simulated a Google visit to your webpage: the 404 return code is correct, and the headers are correct. So report the "not found" pages via Google Removals, request a Googlebot visit, then keep calm and wait for a reaction.
BTW: for content that has been permanently removed, return 410 Gone for Google, or report it via Removals.
https://support.google.com/webmasters/answer/1663419?hl=en
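A quick way to confirm the status code a crawler receives is a minimal Node sketch like this (using the phantom URL from the question):

const http = require('http');
http.get('http://www.smeyan.com/new-page', function (res) {
    // 404 is what the page returns today; 410 would signal permanent removal
    console.log(res.statusCode);
    res.resume();
});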
The only download error I saw while using Chrome's Inspect function pertains to a SCRIPT tag with a Facebook URL as its source (src) file.
[Screenshots: the error as reported by Inspect, and the SCRIPT tag that caused it.]
I am not sure that this is the cause of the recurring 404 error, but it is an issue that needs attention on this website.
I checked your site with Tor Browser, which has scripts disabled. You should also provide your content inside a <noscript> tag. It doesn't have to be beautiful, but it should be visible to bots: <a href="...">...</a>, <img>, etc., and plain text. Without that, the site is not optimized for search bots. Read about SEO. Content in a sitemap may never be indexed if it is never linked.
Your webpage probably also doesn't meet the requirements for screen readers (for blind people).
Note: the image with the "SMEYAN" caption is visible on the webpage and is indexed.
The second image on the webpage (in the source), <img class="gallery-full-image" src="./galleries/home_gallery/smeyan_home-1.jpg" />, is also indexed, so I thought that part was implemented well.
The menu, however, doesn't work without scripts.
Please use the <noscript> element and implement a version for blind people and script-less browsers (no scripts required, and alt attributes on images); a minimal sketch follows below. You can test it by disabling scripts, or with the NoScript extension for Firefox.
BTW, you should build with HTML and CSS (including animations) and use JS only when it is needed, or else fall back on the <noscript> method.
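A minimal sketch of the <noscript> fallback idea (the image is taken from the page source above; the nav links are hypothetical placeholders):

<noscript>
    <!-- Plain-HTML fallback visible to bots and script-less browsers -->
    <nav>
        <a href="/">Home</a>
        <a href="/gallery">Gallery</a>
    </nav>
    <img class="gallery-full-image" src="./galleries/home_gallery/smeyan_home-1.jpg" alt="Smeyan home gallery" />
</noscript>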
Googlebot currently uses a web rendering service (WRS) based on the old Chrome 41 (M41), so it may fail where modern browsers succeed.
To learn how Googlebot works, read this.
Add this code to the page to see the real error (the snippet itself wasn't carried over; a sketch of the idea follows below).
You can see the error using the live URL Inspection tool in Google Search Console; it will show under the "More info" tab.
Note: if the bot gets a 301, or if the page has too little significant content, it will report a soft 404 and won't render a preview or show any other error.
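The original snippet isn't reproduced above; a global error logger along these lines (an assumption, not the original code) would surface rendering errors in the page itself:

window.addEventListener('error', function (e) {
    // Write any runtime error into the document so it is visible in a rendered preview
    var msg = 'JS error: ' + e.message + ' (' + e.filename + ':' + e.lineno + ')';
    document.body.appendChild(document.createTextNode(msg));
});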

Google AppEngine API Explorer redirects and lists no URLs

I'm having an unending issue trying to use the AppEngine API Explorer with the stupidly simple helloworld example.
When I try to navigate to the URL to explore the API, my Chrome browser redirects from the default HTTP to HTTPS, and no APIs are listed. I have gone through every possible fix I can find (like this, and all of these) and none work reliably.
What's most infuriating is that I have gotten the API listed TWICE, but it no longer displays with any of the methods below.
The setup I had when it worked the first time:
Chrome launched with "C:\Program Files (x86)\Google\Chrome\Application\Chrome.exe" --unsafely-treat-insecure-origin-as-secure=http://localhost:8080 (As per the tutorial)
The URL being: (http://)apis-explorer.appspot.com/apis-explorer/?base=http://localhost:8080/_ah/api&root=http://localhost:8080/_ah/api#p/
The second time it worked I was also using the above URL, but it lasted only a second before being redirected to HTTPS and not listing anything.
Some specifics:
Windows 10 OS.
Every time the page loads I get the message "The API you are exploring is hosted over HTTP, which can cause problems. Learn how to use Explorer with a local HTTP API.", even on the occasions when the API displayed correctly.
Every time I now load any of the API Explorer URLs, I get redirected to HTTPS and nothing is listed. Also, the URL is escaped (%3A instead of ':'). Not sure if it's important, but the first time it worked the URL was HTTP and NOT escaped.
I have tried the shield icon in the address bar and enabling "Load unsafe scripts" (from here).
Tried launching Chrome as usual and with the flags --unsafely-treat-insecure-origin-as-secure=http://localhost:8080 and/or --allow-running-insecure-content (from this answer).
Tried http://localhost:8080/_ah/api/explorer
Tried http://apis-explorer.appspot.com/apis-explorer/?base=http://localhost:8080/_ah/api#p/
http://localhost:8080/_ah/admin works correctly and shows the Admin console every time.
Since the APIs were listed that once, I haven't touched the project code, but I have restarted the server and Chrome and tried different URLs on more occasions than I care to count.
I also tried accessing the API URL directly, as explained in this answer, but I cannot find the correct URL to reach the helloworld /sayHi endpoint. Maybe someone can help me work out what I need to prefix it with, as all the variations I try give me a 404.
Any help would be very much appreciated.

How do I force Disqus to use HTTPS on all requests?

I'm loading Disqus on a page loaded via HTTPS with the following code, as suggested in this answer.
<div id="disqus_thread"></div>
<script type="text/javascript">
var disqus_shortname = 'our-shortname';
(function() {
var dsq = document.createElement('script'); dsq.type = 'text/javascript'; dsq.async = true;
dsq.src = 'https://' + disqus_shortname + '.disqus.com/embed.js?https';
(document.getElementsByTagName('head')[0] || document.getElementsByTagName('body')[0]).appendChild(dsq);
})();
</script>
Note that I've changed the request for embed.js to be https rather than http, and I've added ?https to the end of the request too, which I believe is supposed to force HTTPS.
The initial request goes via HTTPS as planned, but it makes a secondary request via HTTP, which Chrome hates (I get the crossed-out padlock icon).
From the Chrome console:
The page at https://our-website.com/blog-post-name ran insecure content from http://juggler.services.disqus.com/event.js?thread=635675380&forum=our-shortname...[long query string]
Is this the correct method to get Disqus to use SSL on all requests, or have I missed a step?
Thanks.
This looks to be an issue within Disqus itself. We had Disqus working fine via SSL with the same approach on a couple of Drupal sites, but both recently had Disqus start causing SSL warnings in IE and Chrome, as you've described.
I did a bit of digging, and I see that the DISQUS.useSSL function, which is defined in embed.js and called in thread.js, updates a few URLs (specifically ["disqus_url","realtime_url","uploads_url"]) in the Disqus JSON settings object by replacing http with https when HTTPS is detected in the settings. The juggler_url doesn't get the same treatment, so it isn't updated to load via SSL. I'm not sure what juggler's purpose is, but it appears that that URL (http://juggler.services.disqus.com/) won't load via SSL in any case, so even if its URL were changed to https, it still wouldn't work.
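Paraphrased, the behavior described above amounts to something like this (a sketch of the logic, not Disqus's actual source):

function useSSL(settings) {
    // Rewrite the scheme on the three URLs that embed.js does update
    ['disqus_url', 'realtime_url', 'uploads_url'].forEach(function (key) {
        settings[key] = settings[key].replace('http:', 'https:');
    });
    // settings.juggler_url is never rewritten, so it keeps loading over plain HTTP
}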
So perhaps Disqus has made a recent change, since we had this working previously? We're taking this up with them, since it doesn't appear to be a config issue on our end.
UPDATE:
Apparently Disqus launched a new service that does not support SSL. This is what generates the extra scripts that load insecurely, triggering the security warning. Disqus disabled this new service (they didn't tell us its name) for our specific account, and now SSL is once again working as expected. So the solution is to just ask them to make your account SSL compliant, and that should take care of it.
Found this article, which contains the solution:
http://help.disqus.com/customer/portal/articles/542119-can-disqus-be-loaded-via-https-
Basically it's not possible (yet) with Disqus 2012, but switch that off, change the embed src so it uses https://, and add the ?https param:
dsq.src = 'https://' + disqus_shortname + '.disqus.com/embed.js?https';
I have had this problem off and on for the past few months and have been forced to disable Disqus altogether. Initially I contacted Disqus to see if they could flip the switch that disables the non-SSL-compliant feature on their side; this worked for a while, but the mixed-content problem kept recurring.
What seems to happen is that despite Disqus forcing the https version of its count.js script, count.js still redirects to mediacdn.disqus.com instead of securecdn.disqus.com for some reason. If you append ?https manually in the plugin editor to force the redirect to securecdn.disqus.com, the problem disappears on the first call to the CDN, but on subsequent calls with ?https added to the count.js request, the redirect just reverts to mediacdn.disqus.com. I've tried this numerous times.
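For reference, the manual tweak described above looks something like this (a sketch; the shortname is a placeholder):

<script async type="text/javascript" src="https://our-shortname.disqus.com/count.js?https"></script>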
The annoying thing about this issue is that the SSL page on my site that triggers the mixed-content notification doesn't even have a comment section, so Disqus is loading its JavaScript needlessly on that page.
I like Disqus, but it's unbelievable to me that they wouldn't fix this issue, either by letting users disable the JavaScript selectively or by implementing a secure CDN version that works in all cases. I hope they figure it out.
Also, they told me that Disqus 2012 doesn't support HTTPS (although it will in the future).

Umbraco - use HTTPS for some pages

I'm building a site with Umbraco, and there are a couple of pages that need to be visited over HTTPS instead of HTTP (e.g. a login page).
I've seen a couple of macros that get placed on each page that needs HTTPS, and they essentially just check the protocol used and do a Response.Redirect with the correct protocol if necessary. This seems like a terrible way of achieving what looks like a fairly basic requirement; ideally I'd want Umbraco to render any links to these pages as <a href="https://...">, not perform a redirect when the user reaches the page.
With these redirecting macros, there's also the possibility of the browser displaying a warning if the user is on an HTTPS page and navigates to an HTTP one. If the links are relative, the user will be redirected from HTTPS to HTTP, and the browser may warn about this.
Is there a way to achieve this without modifying any Umbraco framework code?
There's currently no built-in way to make a few pages in Umbraco return an https URL.
The only way I can think of doing this at the moment is by making sure that you set up your links correctly.
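For example, linking to the secure pages with absolute URLs rather than relative ones (the domain is a placeholder):

<!-- An absolute href pins the scheme; a relative href inherits the current page's protocol -->
<a href="https://www.example.com/login">Log in</a>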
But there's no way of stopping people from entering the insecure link. That's where the redirects come in handy, though: they make sure you can't reach a secure page insecurely.
I would recommend running the whole site over HTTPS. In the past, performance would have been an objection to serving a full site over HTTPS, but with modern servers this really shouldn't be a problem any more.