How to fetch an image using the URL API from localhost? - Cloudinary

I followed the documentation and prefixed my image URL with their URL API:
https://res.cloudinary.com/<my Cloudinary account's cloud name>/image/fetch/http://localhost:3000/img/example.jpg
It won't fetch.
I have already added localhost:3000 to the Allowed fetch domains on the Settings > Security page, but it's still not working.
Then I tested it on my test server, which has a real domain, and it works.
How can I solve this?

When using fetch, Cloudinary needs the domain it fetches from to be publicly accessible (for example, mydomain.com). Since your localhost is only reachable on your own machine/network and is invisible from everywhere else, Cloudinary, like everyone else on the internet, can't see it; only you can.
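For reference, once the image is hosted on a publicly reachable domain, the fetch URL is just the Cloudinary base followed by the remote image URL, exactly as in the example above. A minimal sketch in TypeScript; the cloud name "demo-cloud" and the domain mydomain.com are placeholders:

// Build a Cloudinary fetch URL by prefixing a publicly reachable image URL.
// "demo-cloud" and "mydomain.com" below are placeholders, not real values.
function cloudinaryFetchUrl(cloudName: string, remoteImageUrl: string): string {
  return `https://res.cloudinary.com/${cloudName}/image/fetch/${remoteImageUrl}`;
}

console.log(cloudinaryFetchUrl("demo-cloud", "https://mydomain.com/img/example.jpg"));
// -> https://res.cloudinary.com/demo-cloud/image/fetch/https://mydomain.com/img/example.jpg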

Related

ListBucketResult XML when trying to show home page of site in S3 through CloudFront

I created a bucket where I'm hosting my static website.
I set the properties to use it for static website hosting (with the index document value index.html).
The URL was: http://mywebsitelearningcurve.s3-website-us-east-1.amazonaws.com (not currently up, just to explain)
I made the bucket public (permissions).
Overview of my bucket
/images
/static
/asset-manifest.json
/favicon.ico
/index.html
/manifest.json
/service-worker.js
Using http://mywebsitelearningcurve.s3-website-us-east-1.amazonaws.com I could access my site. However, I decided to put CloudFront in front of my bucket.
I created a new distribution for WEB.
On Origin Domain Name I used mywebsitelearningcurve.s3.amazonaws.com
Origin ID: S3-mywebsitelearningcurve
In Viewer Protocol Policy I selected: Redirect HTTP to HTTPS.
Once it finished, and after waiting a reasonable time for it to propagate, I had the URL https://d2qf2r44tssakh.cloudfront.net/ (not currently up, just to explain).
The issue:
When I tried to use https://d2qf2r44tssakh.cloudfront.net/ it showed me an XML response:
<ListBucketResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Name>mywebsitelearningcurve</Name>
...
...
...
</ListBucketResult>
However, when I tried https://d2qf2r44tssakh.cloudfront.net/index.html it worked properly.
I've gone through several tutorials and posts but I still can't make it work. Can anyone provide help?
Thanks
I had the same problem today and was able to fix it by adding index.html to the Default Root Object in the distribution settings:
Optional. The object that you want CloudFront to return (for example,
index.html) when a viewer request points to your root URL
(http://www.example.com) instead of to a specific object in your
distribution (http://www.example.com/index.html).
I have 5 years of production experience on AWS, with 5 certifications in place.
When it comes to S3 + CloudFront, I always get into trouble.
I tried to automate this using CloudFormation, but CloudFormation does not support everything needed (i.e. a custom origin in CloudFront).
In the end, I relied on Terraform to automate this part:
https://github.com/riboseinc/terraform-aws-s3-cloudfront-website/blob/master/sample-site/main.tf
If you don't mind using Terraform, I highly recommend taking a look there.

CrossDomain Access, HLS through CloudFront with Signed URL (JWPlayer)

I am using HLS streaming with Amazon S3 and CloudFront, using JW Player (with Rails).
I used signed URLs to secure access to the content and created an Origin Access Identity as described in the Amazon CloudFront documentation.
The signed URLs are generated fine.
I also have a 'crossdomain.xml' file in my bucket which allows all origins (I have set '*').
Now, when I try to play my HLS video files from my bucket, I get a cross-domain access denied issue.
I think JW Player is trying to access the 'crossdomain.xml' file without the signed hash, so it's getting that error.
I have tested my file in the demo JW Player stream tester and this is the error I get in the console:
Fetch API cannot load http://xxxxxxxx.cloudfront.net/xxx/1/1m_test.ts.
No 'Access-Control-Allow-Origin' header is present on the requested resource.
Origin 'http://demo.jwplayer.com' is therefore not allowed access.
The response had HTTP status code 403.
If an opaque response serves your needs, set the request's mode to 'no-cors' to fetch the resource with CORS disabled.
Here is the screenshot.
Please help me out. Thank you.
This is the link I followed to configure my CloudFront distribution.
I just had the same problem (but with Flowplayer). I am not sure yet about the security risks (or whether all of these steps are needed), but I got it running by:
adding permissions on the crossdomain.xml for everyone to open/download
adding a behaviour in the CloudFront distribution just for crossdomain.xml, without restricting access (placed above the behaviour for * with restricted access)
and then noticing that in the bucket, the link to the crossdomain.xml was something like "https://some-server.amazonaws.com/bucket.name/%1Fcrossdomain.xml" (note the odd %1F), and that when I went to rename the crossdomain.xml, I could delete one invisible character at the first position of the name (I didn't create the crossdomain.xml, so I'm not sure how this happened)
Edit:
I also had hls.js running with this, and making the crossdomain.xml accessible somehow disabled the CORS request. I am still looking into this.
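Not part of the answer above, but since the browser error complains about a missing Access-Control-Allow-Origin header, the fix that usually accompanies this setup is a CORS rule on the S3 bucket (together with whitelisting/forwarding the Origin header in the CloudFront behaviour). A sketch using the AWS SDK for JavaScript v3; the bucket name is a placeholder:

import { S3Client, PutBucketCorsCommand } from "@aws-sdk/client-s3";

const s3 = new S3Client({ region: "us-east-1" });

// "my-hls-bucket" is a placeholder; tighten AllowedOrigins to the player's
// origin (e.g. the site embedding JW Player) rather than "*" if possible.
await s3.send(
  new PutBucketCorsCommand({
    Bucket: "my-hls-bucket",
    CORSConfiguration: {
      CORSRules: [
        {
          AllowedOrigins: ["*"],
          AllowedMethods: ["GET", "HEAD"],
          AllowedHeaders: ["*"],
          MaxAgeSeconds: 3000,
        },
      ],
    },
  })
);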

React Router + AWS Backend, how to SEO

I am using React and React Router in my single page web application. Since I'm doing client side rendering, I'd like to serve all of my static files (HTML, CSS, JS) with a CDN. I'm using Amazon S3 to host the files and Amazon CloudFront as the CDN.
When the user requests /css/styles.css, the file exists so S3 serves it.
When the user requests /foo/bar, this is a dynamic URL so S3 adds a hashbang: /#!/foo/bar. This will serve index.html. On my client side I remove the hashbang so my URLs are pretty.
This all works great for 100% of my users.
All static files are served through a CDN
A dynamic URL will be routed to /#!/{...} which serves index.html (my single page application)
My client side removes the hashbang so the URLs are pretty again
The problem
The problem is that Google won't crawl my website. Here's why:
Google requests /
They see a bunch of links, e.g. to /foo/bar
Google requests /foo/bar
They get redirected to /#!/foo/bar (302 Found)
They remove the hashbang and request /
Why is the hashbang being removed? My app works great for 100% of my users so why do I need to redesign it in such a way just to get Google to crawl it properly? It's 2016, just follow the hashbang...
</rant>
Am I doing something wrong? Is there a better way to get S3 to serve index.html when it doesn't recognize the path?
Setting up a node server to handle these paths isn't the correct solution because that defeats the entire purpose of having a CDN.
In this thread Michael Jackson, top contributor to React Router, says "Thankfully hashbang is no longer in widespread use." How would you change my set up to not use the hashbang?
You can also check out this trick. You need to set up a CloudFront distribution and then alter the 404 behaviour in the "Error Pages" section of your distribution. That way you can serve domain.com/foo/bar links again :)
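For reference, the setting this answer refers to looks roughly like the following fragment of a CloudFront distribution config (API field names, shown as a TypeScript object literal; only a sketch of the relevant part):

// Return index.html with a 200 status whenever the origin responds with 404,
// so deep links fall through to the single-page app. Depending on bucket
// permissions you may need a matching entry for 403 as well.
const customErrorResponses = {
  Quantity: 1,
  Items: [
    {
      ErrorCode: 404,
      ResponseCode: "200",
      ResponsePagePath: "/index.html",
      ErrorCachingMinTTL: 0,
    },
  ],
};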
I know this is a few months old, but for anyone who comes across the same problem: you can simply specify "index.html" as the error document in S3. The error document property can be found under bucket Properties => Static Website Hosting => Enable website hosting.
Please keep in mind that taking this approach means you will be responsible for handling HTTP errors like 404 in your own application, along with other HTTP errors.
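The same setting can also be applied programmatically; a sketch using the AWS SDK for JavaScript v3, where the bucket name is a placeholder:

import { S3Client, PutBucketWebsiteCommand } from "@aws-sdk/client-s3";

const s3 = new S3Client({ region: "us-east-1" });

// "my-spa-bucket" is a placeholder. Serving index.html as the error document
// means unknown paths fall through to the single-page app's client-side router.
await s3.send(
  new PutBucketWebsiteCommand({
    Bucket: "my-spa-bucket",
    WebsiteConfiguration: {
      IndexDocument: { Suffix: "index.html" },
      ErrorDocument: { Key: "index.html" },
    },
  })
);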
The hashbang is not recommended when you want to make an SEO-friendly website; even if it gets indexed in Google, the page will show only thin content.
The best way to build your website is by using the latest approach, "progressive web enhancement"; search for it on Google and you will find many articles about it.
Mainly, you should have a separate link for each page, and when the user clicks on any page they are taken to that page using whatever effect you want, even if it is a single-page website.
That way, Google will have a unique link for each page and the user will still get the fancy effect and great UX.
Ex: a dedicated "Contact Us" link/page.

No Images Over SSL in Ebay Finding API?

I have recently moved from HTTP to HTTPS, but I'm getting security warnings because the pictures from the eBay API are still transferring over HTTP. Does any API user know of a way to get the gallery URL or picture URL over HTTPS?
I have tried making the call over HTTPS like this https://open.api.ebay.com/xxxxx but obviously it doesn't work. Is there a parameter option for returning HTTPS links?
The data returned in a successful call to the GetSingleItem API looks like this:
<GalleryURL>
http://thumbs1.ebaystatic.com/pict/xxxxx.jpg
</GalleryURL>
<PictureURL>
http://i.ebayimg.com/00/s/ODUwWDgwMA==/z/xxxx.JPG?set_id=xxxx
</PictureURL>
I have had the same issue. I used the galleryURL property of the SearchItem object returned by the eBay Finding API, but found that even though galleryURL provides an insecure link to the image, the same image is available if I replace http with https.
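In other words, you can rewrite the returned URLs yourself. A tiny sketch; the example URL is the placeholder thumbnail URL from the question:

// Rewrite an insecure eBay image URL to HTTPS; the image is served from the
// same host either way.
function toSecureImageUrl(url: string): string {
  return url.replace(/^http:\/\//, "https://");
}

const secureGalleryUrl = toSecureImageUrl("http://thumbs1.ebaystatic.com/pict/xxxxx.jpg");
// -> "https://thumbs1.ebaystatic.com/pict/xxxxx.jpg"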

Setting up CORS Access-Control-Allow-Origin on Rackspace Cloud Files?

Can anyone provide any insight into how CORS works on Rackspace Cloud Files?
I tried setting the value of "Access-Control-Allow-Origin" to the URL of my web server, but I can still access the public URL of the object just by pasting it into a browser.
Maybe I misunderstand how CORS works? I thought setting Access-Control-Allow-Origin to my web server would only allow the file to be accessed from that web server. Me, as a user pasting the URL into the browser, would be coming from a different origin, no?
The documentation at Rackspace isn't the best sometimes for stuff like this (or I'm looking in the wrong place...)
That's not really what CORS is for. Cross-origin resource sharing (CORS) is a mechanism that allows JavaScript on a web page to make XMLHttpRequests to a domain other than the one the JavaScript originated from; see CORS.
It has nothing to do with making an object or container public/private.
On Rackspace, CORS works according to this doc.
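To illustrate what CORS actually governs: it is the browser deciding whether script running on one origin may read a response fetched from another origin; it does not make the object itself private. A sketch where both URLs are placeholders:

// Script running on a page at https://www.example.com (placeholder origin).
// The browser adds an Origin header; whether this response is readable from
// JavaScript depends on the Access-Control-Allow-Origin header returned by
// the Cloud Files CDN.
fetch("https://public-cdn.example.net/container/object.json")
  .then((res) => res.json())
  .then((data) => console.log(data))
  .catch((err) => console.error("Blocked by CORS or a network error:", err));

// Pasting the same URL into the address bar is a top-level navigation, not a
// cross-origin script request, so CORS never applies there.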