How to block image hotlinking when a website uses referrerpolicy="no-referrer"? - cloudflare

I have an nginx rule that blocks hotlinking of my images: "valid_referers none blocked server_names *.example.com;" with exclusions for Google, Bing, Facebook and Twitter.
But one website uses referrerpolicy="no-referrer".
So I wanted to block direct access by removing the "none" keyword after valid_referers, but now when I share a page of my website on Twitter, the image no longer appears.
Is there a way to authorize Twitter? Or is there a way to block the website that is hotlinking me by its IP?
I also have a Cloudflare account if it's possible with them.
Thank you.
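For the Cloudflare side, blocking the hotlinking site's server by IP can be done with an IP Access Rule, either in the dashboard or via their API. A rough sketch with Python's requests library, assuming Cloudflare's IP Access Rules endpoint; the API token, zone ID and IP are placeholders:

import requests

# Placeholders -- substitute your own values.
API_TOKEN = "YOUR_CLOUDFLARE_API_TOKEN"
ZONE_ID = "YOUR_ZONE_ID"
OFFENDING_IP = "203.0.113.10"  # the hotlinking site's server IP

resp = requests.post(
    f"https://api.cloudflare.com/client/v4/zones/{ZONE_ID}/firewall/access_rules/rules",
    headers={
        "Authorization": f"Bearer {API_TOKEN}",
        "Content-Type": "application/json",
    },
    json={
        "mode": "block",  # drop all requests coming from this IP
        "configuration": {"target": "ip", "value": OFFENDING_IP},
        "notes": "blocks server that hotlinks images",
    },
    timeout=10,
)
resp.raise_for_status()
print(resp.json())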

Related

Vue + Flask Gmail API

I am attempting to build a web app using Vue for the frontend and Flask for the backend that reads in the user's Gmail emails.
Desired functionality:
User clicks a button to "Link Gmail Account" on the frontend
User is authenticated with Gmail OAuth2 and confirms. Once confirmed, they are redirected back to the page they were on
Once the user confirms, the backend queries Gmail to get all of the user's emails and returns the data to the frontend
I have been trying to use https://developers.google.com/gmail/api/quickstart/python as a starting point, but I cannot authenticate the user -- I keep getting a redirect URI mismatch error with a random port (I am doing this locally, so I have set the redirect URI to be the localhost port where I access my project).
I think I am doing something fundamentally wrong or not using the Gmail API in the correct way, but I have searched all over Google and YouTube to no avail.
Specific things that I think could be causing an issue:
What is the best overall strategy to implement this? Should I use the Gmail API in Python or JavaScript? Right now, the user clicks the "Link Account" button, which calls an API in my backend, which then runs the code in the Python Quickstart guide.
What kind of google project should I set up? I currently have my credentials configured for a "web application"
What should I put as the redirect URI? I am using localhost but am unsure exactly what to put here (I have tried http://localhost, http://localhost:5000, http://localhost:5000/, http://localhost:5000/emails [this is the URL I want them to return to]). No matter what I put, I keep getting a redirect URI mismatch, and it says the URI it is looking for is http://localhost:[random port]/
I would appreciate any help on how to approach achieving this. Thank you!
Depending on what you are going to use the Gmail API for, you must select the appropriate device or category. In your case, as it is a website, it should be set to "Web Application".
Also, you should be using the following redirect URI: http://localhost/emails/. You should not include the port number, and you should use a trailing slash (adding the last / at the end). Note that the redirect URI you set up in your backend must be an exact match of the one you have set up in your Credentials page. Also please note that it might take a few minutes for an update to this URI to take effect.
Moreover, this is a guide on how to create a Sign-In button that will authorise your users, which I believe will be useful for you.
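To make the flow concrete, here is a minimal, hedged Flask sketch of the web-server OAuth flow using google-auth-oauthlib. The client_secret.json filename, the /link-gmail and /emails/ routes, and the readonly scope are assumptions for illustration; the redirect URI must exactly match the one registered on the Credentials page:

import os
from flask import Flask, redirect, request, session, jsonify
from google_auth_oauthlib.flow import Flow
from googleapiclient.discovery import build

# Only for local http:// testing; production should use https.
os.environ.setdefault("OAUTHLIB_INSECURE_TRANSPORT", "1")

app = Flask(__name__)
app.secret_key = "replace-me"  # needed for session storage

SCOPES = ["https://www.googleapis.com/auth/gmail.readonly"]
REDIRECT_URI = "http://localhost/emails/"  # must match the Credentials page exactly

def make_flow():
    # client_secret.json is the file downloaded from the Google Cloud console
    return Flow.from_client_secrets_file(
        "client_secret.json", scopes=SCOPES, redirect_uri=REDIRECT_URI
    )

@app.route("/link-gmail")
def link_gmail():
    flow = make_flow()
    auth_url, state = flow.authorization_url(
        access_type="offline", include_granted_scopes="true"
    )
    session["state"] = state
    return redirect(auth_url)  # send the user to Google's consent screen

@app.route("/emails/")
def emails():
    flow = make_flow()
    # Exchange the authorization code Google appended to the callback URL
    flow.fetch_token(authorization_response=request.url)
    service = build("gmail", "v1", credentials=flow.credentials)
    result = service.users().messages().list(userId="me", maxResults=10).execute()
    return jsonify(result.get("messages", []))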

How to fetch image using URL API from localhost?

I followed the documentation to prefix my image URL with their URL API:
https://res.cloudinary.com/<my Cloudinary account's cloud name>/image/fetch/http://localhost:3000/img/example.jpg
It won't fetch.
I have already added localhost:3000 to Allowed fetch domains on the Security settings page. It is not working.
Then I tested it with my testing server with a domain. It works.
How to solve this?
When using 'fetch', Cloudinary needs the domains it fetches from to be publicly accessible (for example, mydomain.com). As your localhost is only accessible on your own specific server/network and is invisible everywhere else, Cloudinary and everyone else can't see it; only you can.
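A quick way to see the difference is to build the same fetch URL against a publicly reachable domain and against localhost; only the first one can be resolved by Cloudinary. A small sketch, where the cloud name and public domain are placeholders:

import requests

CLOUD_NAME = "your-cloud-name"  # placeholder

def fetch_url(remote_image_url: str) -> str:
    # Cloudinary fetch URLs are just the remote URL appended to /image/fetch/
    return f"https://res.cloudinary.com/{CLOUD_NAME}/image/fetch/{remote_image_url}"

# Publicly reachable -> Cloudinary can fetch it
print(requests.get(fetch_url("https://mydomain.com/img/example.jpg")).status_code)

# localhost is only visible on your own machine -> Cloudinary cannot reach it
print(requests.get(fetch_url("http://localhost:3000/img/example.jpg")).status_code)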

React Router + AWS Backend, how to SEO

I am using React and React Router in my single page web application. Since I'm doing client side rendering, I'd like to serve all of my static files (HTML, CSS, JS) with a CDN. I'm using Amazon S3 to host the files and Amazon CloudFront as the CDN.
When the user requests /css/styles.css, the file exists so S3 serves it.
When the user requests /foo/bar, this is a dynamic URL so S3 adds a hashbang: /#!/foo/bar. This will serve index.html. On my client side I remove the hashbang so my URLs are pretty.
This all works great for 100% of my users.
All static files are served through a CDN
A dynamic URL will be routed to /#!/{...} which serves index.html (my single page application)
My client side removes the hashbang so the URLs are pretty again
The problem
The problem is that Google won't crawl my website. Here's why:
Google requests /
They see a bunch of links, e.g. to /foo/bar
Google requests /foo/bar
They get redirected to /#!/foo/bar (302 Found)
They remove the hashbang and request /
Why is the hashbang being removed? My app works great for 100% of my users so why do I need to redesign it in such a way just to get Google to crawl it properly? It's 2016, just follow the hashbang...
</rant>
Am I doing something wrong? Is there a better way to get S3 to serve index.html when it doesn't recognize the path?
Setting up a node server to handle these paths isn't the correct solution because that defeats the entire purpose of having a CDN.
In this thread Michael Jackson, a top contributor to React Router, says "Thankfully hashbang is no longer in widespread use." How would you change my setup to not use the hashbang?
You can also check out this trick: set up a CloudFront distribution and then alter the 404 behaviour in the "Error Pages" section of your distribution. That way you can serve domain.com/foo/bar links again :)
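If you would rather script that "Error Pages" change than click through the console, something along these lines should work with boto3; treat it as a sketch (the distribution ID is a placeholder, and the 403 entry is only needed for S3 REST origins that return 403 for missing keys):

import boto3

cf = boto3.client("cloudfront")
DIST_ID = "E1234567890ABC"  # placeholder distribution ID

# Fetch the current config; updates must send the whole config back with the ETag
resp = cf.get_distribution_config(Id=DIST_ID)
config, etag = resp["DistributionConfig"], resp["ETag"]

# Return index.html with a 200 whenever the origin answers 404 (or 403)
config["CustomErrorResponses"] = {
    "Quantity": 2,
    "Items": [
        {"ErrorCode": 404, "ResponsePagePath": "/index.html",
         "ResponseCode": "200", "ErrorCachingMinTTL": 0},
        {"ErrorCode": 403, "ResponsePagePath": "/index.html",
         "ResponseCode": "200", "ErrorCachingMinTTL": 0},
    ],
}

cf.update_distribution(Id=DIST_ID, DistributionConfig=config, IfMatch=etag)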
I know this is a few months old, but for anyone who comes across the same problem: you can simply specify "index.html" as the error document in S3. The error document property can be found under bucket Properties => Static Website Hosting => Enable website hosting.
Please keep in mind that taking this approach means you will be responsible for handling HTTP errors like 404, along with other HTTP errors, in your own application.
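The same S3 setting can be applied with boto3 if your deploy is scripted; a minimal sketch, where the bucket name is a placeholder:

import boto3

s3 = boto3.client("s3")

# Equivalent to Properties => Static Website Hosting => error document = index.html
s3.put_bucket_website(
    Bucket="my-spa-bucket",  # placeholder bucket name
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "index.html"},  # unknown paths fall back to the SPA
    },
)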
The hashbang is not recommended when you want to make an SEO-friendly website; even if it is indexed in Google, the page will display only a little, thin content.
The best way to build your website is with the current approach, "progressive enhancement"; search for it on Google and you will find many articles about it.
Mainly, you should have a separate link for each page, and when the user clicks on any page they are taken to that page using any effect you want, even if it is a single-page website.
In this case, Google will have a unique link for each page, and the user will still have the fancy effect and the great UX.
EX: a plain "Contact Us" link pointing to its own URL.

Remove subdomain from google and yahoo

I have a subdomain named abc.aaa.com, and I have now moved it to aaa.com/abc.
Moreover, my server admin has helped me set up a redirect from abc.aaa.com to aaa.com/abc, so no matter which page/section/file on abc.aaa.com is accessed, it is forced to the HOME PAGE of aaa.com/abc.
Therefore I can't use robots.txt to disallow the subdomain, and I can't even submit it to the Yahoo and Google webmaster tools.
Any idea?
Redirecting the subdomain is the correct course of action. You don't want to use robots.txt; if you did, Googlebot couldn't crawl it anymore and see that it now resides in a new home.
Your redirect sounds problematic, though. You should not be redirecting everything to the home page; you should be redirecting each document to the new location of that document. When you redirect to the home page, Google considers the redirect to be about the same as a 404 (they call it a "soft 404"). Redirecting to the home page will lose any search engine rankings those pages have and lose any credit you have for inbound links coming into that subdomain.
Having implemented per-document redirects without robots.txt, both Google and Yahoo will pick up on the move. It should happen within a couple of weeks. There is no need for you to take further action.
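If the per-document mapping can't be expressed in the web server config, even a tiny Flask shim served on the old subdomain can issue the 301s. A rough sketch using the hostnames from the question (the path layout is assumed to be the same under /abc):

from flask import Flask, redirect, request

app = Flask(__name__)

# Served on abc.aaa.com: forward every path to the same path under aaa.com/abc
@app.route("/", defaults={"path": ""})
@app.route("/<path:path>")
def moved(path):
    target = f"https://aaa.com/abc/{path}"
    if request.query_string:
        target += "?" + request.query_string.decode()
    return redirect(target, code=301)  # permanent redirect preserves link credit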

How to let googlebot access pages behind a login

I have searched the net and here too, but I am still looking for a solid answer on allowing Googlebot to access pages behind my login.
Is there a secure way to do this?
I have allowed a login through AdSense, but I wish to go further than just permitting pages that contain AdSense content.
I receive reports that 238 pages have access-denied errors.
Would appreciate some help here.
Kind Regards Chris
How about checking the IP (whether it starts with 66.249.*) and the user agent (Googlebot), and serving the authorized pages if both conditions match?
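A rough sketch of that check, plus the reverse-DNS verification Google recommends since the user agent and IP prefix alone can be spoofed:

import socket

def is_googlebot(ip: str, user_agent: str) -> bool:
    # Cheap check suggested above: Googlebot UA plus the 66.249.* range
    if "Googlebot" not in user_agent or not ip.startswith("66.249."):
        return False
    # Stronger check: reverse DNS must end in googlebot.com / google.com,
    # and the forward lookup of that host must point back to the same IP
    try:
        host = socket.gethostbyaddr(ip)[0]
        if not host.endswith((".googlebot.com", ".google.com")):
            return False
        return ip in socket.gethostbyname_ex(host)[2]
    except OSError:
        return False

# Example: serve the full page only when the request really comes from Googlebot
# if is_googlebot(request.remote_addr, request.headers.get("User-Agent", "")): ...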