what is the reason for Access-Control-Allow-Origin? [duplicate] - ruby-on-rails-3

This question already has answers here:
How does the 'Access-Control-Allow-Origin' header work?
I've been trying to understand how basic HTTP headers like this work, because things are not behaving the way I expected.
For example, I set Access-Control-Allow-Origin to http://www.example.com, then sent POST requests from http://www.example2.com, and they failed with an error, as I expected.
The browser says "...request has been blocked by CORS policy".
But I was surprised to see, when I looked, that the request had actually reached http://www.example.com and the POST action had been executed.
My question, then: why do we need this protection?

When a web page is loaded into a browser, its HTML, CSS and JavaScript are loaded and its session is used. Some of the many potential problems:
The remote page inside the iframe might be a page where you are logged in (like your personal email account's web page), and a script could silently steal important data (like the content of your emails, including access to confidential areas such as bank account-related data and other personal, private data).
Confidential CSS/JavaScript could be stolen from trusted users. Example: you write some very good JavaScript and CSS and only paying users get its benefit. However, someone sends you a link to a page that silently loads your site in an iframe and extracts the CSS and JavaScript goodies from it. The thief can then sell your product at a discount while you work on new products and a better security policy.
Your accounts could be hacked. A page where you have an active session could be loaded inside an iframe, and a script could then wreak havoc there, including, but by no means limited to, changing your username/password and locking you out of your own account.
Malicious things could be done against others in your name.
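To make the observation in the question concrete: for a "simple" cross-origin request (such as a form-encoded POST), the browser sends the request and the server may well execute it; what the CORS policy blocks is the page's ability to read the response unless the server returns a matching Access-Control-Allow-Origin header. A minimal browser-side sketch, reusing the question's example domains (the /endpoint path is made up for illustration):

// Running on a page served from http://www.example2.com
fetch('http://www.example.com/endpoint', {
  method: 'POST',
  // URLSearchParams keeps this a "simple" request, so no preflight is sent
  body: new URLSearchParams({ comment: 'hello' })
})
  .then(response => response.json())
  .then(data => console.log(data))
  .catch(err => console.error('Blocked by CORS policy:', err));
// The server still receives (and may execute) the POST; the browser only refuses to let
// this page read the response, because the response lacks
// Access-Control-Allow-Origin: http://www.example2.com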

Related

Blocking Requests using HTTP_ORIGIN to Prevent Spamming

Over the last couple of days I've been getting millions of requests from rotating IPs. They're attempting to send POST requests and seem to be using an incorrect HTTP_ORIGIN. By incorrect, I mean it's not the same as what my server sends:
My server sends: "https://www.example.com"
The spam request sends: www.example.com
I placed some logging for each scenario:
User logged in and has incorrect HTTP_ORIGIN
User NOT logged in and has incorrect HTTP_ORIGIN
What I've noticed is that there are users who are logged in but have the wrong HTTP_ORIGIN (the origin is missing "https://"). I have checked those user accounts; while they appear to be real, and not created by the original spam requests, they may currently be driven by scripts.
Filtering on HTTP_ORIGIN would seemingly stop those users from reaching the site's POST endpoints, but on the other hand, if they are real users, it would cause a problem.
Now if I were to put filtering in place to block requests that didn't match the origin, my questions are:
What would be the side effect of that?
Are there downsides or negative aspects?
Would I see drops in traffic?
If that is so, then, as you said, some people are using your website from scripts. Assuming your website is a normal one (I mean not a site for uploading data or something like that), it would be worth adding a CAPTCHA instead of filtering requests, because I think it would be simple for those sending an incorrect HTTP_ORIGIN to forge one identical to the original (for example by using an SSL stream), especially if their goals are malicious.
As for the consequences of filtering the HTTP requests: I think traffic will drop noticeably (since you will be refusing the incorrect ones), and the real users who rely on scripts will either switch to a browser (a rare case, especially if they scrape data from the website automatically) or stop using your website.
It is worth doing further research to make sure those false requests are not malicious (perhaps they are using a simple TCP client). Either way, for the time being, it is best to inspect the data sent in the incorrect POST requests and see whether there is anything suspicious in it (in which case you should add some protection to your website).
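For reference, here is a minimal sketch of the kind of Origin filtering being discussed, assuming a Node/Express server (the question doesn't say what the stack is, and the /submit endpoint and port are made up):

// Reject POSTs whose Origin header doesn't match what our own pages send.
const express = require('express');
const app = express();

const ALLOWED_ORIGIN = 'https://www.example.com'; // what the server's own pages send

app.post('/submit', (req, res, next) => {
  const origin = req.get('Origin') || '';
  if (origin !== ALLOWED_ORIGIN) {
    // Log before rejecting, so you can measure how much real traffic this would cut off.
    console.warn('Rejected POST with Origin:', origin);
    return res.status(403).send('Forbidden');
  }
  next();
}, (req, res) => {
  res.send('ok'); // the real POST handler would go here
});

app.listen(3000);

As noted above, the Origin header is entirely under the client's control, so a check like this only filters out lazy scripts; it is not real protection against a determined attacker.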

Facebook App in Page Tab receiving signed_request but missing page data

I have a page tab app that I am hosting. I have both http and https supported. While I receive a signed_request package as expected, after I decode it, it does not contain page information. That data is simply missing.
I verified that the same scheme (https) is being used by Facebook, my hosted site and even the 'go between', Facebook's static page handler.
I also created a new application with page tab support, but got the same result: simply no page information in the signed_request.
Any other causes people can think of?
I add the app to the page tab using this link:
https://www.facebook.com/dialog/pagetab?app_id=176236832519816&next=https://www.intelligantt.com/Facebook/application.html
Here is the page tab I am using (Note: requires permissions):
https://www.facebook.com/pages/School-Auction-Test-2/154869721351873?id=154869721351873&sk=app_176236832519816
Here is the decoded signed_request I am receiving:
{"algorithm":"HMAC-SHA256","code":!REMOVED!,"issued_at":1369384264,"user_id":"1218470256"}
5/25 Update - I thought maybe the canvas app URLs didn't match the page tab URLs, so I spent several hours going through scenarios where they both had a trailing slash or not, where they both had a trailing ? or not, and with query parameters or not.
I also tried changing the 'next' value when creating the page tab to the canvas app url and the page tab url.
No success on either count.
I did read that seeing the 'code' value in the signed_request means Facebook either couldn't match my URLs or that I'm capturing the second request. However, given all the URL permutations I went through, I believe the URLs do match. I also subscribed to 'auth.authResponseChange', which should give me the very first authResponse containing the signed_request with page.id in it (but it doesn't).
If I had any reputation, I'd add a bounty to this.
Thanks.
I've just spent ~5 hours on this exact same problem and posted a prior answer that was incorrect. Here's the deal:
As you pointed out, signed_request appears to be missing the page data if your tab is implemented in pure javascript as a static html page (with *.htm extension).
I repeated the exact same test, on the exact same page, but wrapped my html page (including js) within a Perl script (with *.cgi extension)... and voila, signed_request has the page info.
Although confusing (and it should be better documented as a design choice by Facebook), this may make some sense, because it would be impossible to validate the signed_request wholly within JavaScript without placing your app secret within client-side scope (and therefore revealing it to a potential attacker).
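To make that concrete, here is a minimal server-side sketch of what that validation involves, assuming Node.js (the signed_request format is a base64url signature, a dot, and a base64url JSON payload, signed with HMAC-SHA256 using the app secret, which is exactly what you cannot expose to the browser):

// Verify and decode a Facebook signed_request on the server.
const crypto = require('crypto');

function parseSignedRequest(signedRequest, appSecret) {
  const [encodedSig, encodedPayload] = signedRequest.split('.');
  const fromBase64Url = s =>
    Buffer.from(s.replace(/-/g, '+').replace(/_/g, '/'), 'base64');

  const expectedSig = crypto
    .createHmac('sha256', appSecret)
    .update(encodedPayload)
    .digest();
  const actualSig = fromBase64Url(encodedSig);

  if (actualSig.length !== expectedSig.length ||
      !crypto.timingSafeEqual(actualSig, expectedSig)) {
    throw new Error('Invalid signed_request signature');
  }
  return JSON.parse(fromBase64Url(encodedPayload).toString('utf8'));
}

// Usage sketch: parseSignedRequest(req.body.signed_request, 'YOUR_APP_SECRET')
// returns the decoded object (algorithm, issued_at, user_id, page, ...).
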
It would be much easier with the PHP SDK, but if you just want to use JavaScript, maybe this will help:
Facebook Registration - Reading the data/signed request with Javascript
Also, you may want to check out this: https://github.com/diulama/js-facebook-signed-request
Simply put, you can't get the full parameters from the JavaScript signed_request; use the PHP SDK to get the full signed_request and record the values you need into JavaScript variables.
With the PHP SDK, after instantiation, use the Facebook object as follows:
$signed_request = $facebook->getSignedRequest();
var_dump($signed_request);
This is just for debugging, but you'll see that the printed array contains many values that you won't get with the JS SDK, for security reasons.
I hope this helps anyone else who needs it, because this issue seems to take at least three hours for everyone who runs into it.

How do the Facebook like button and Google +1 button deal with a redirected url? [duplicate]

I understand the og:url meta tag is the canonical url for the resource in the open graph.
What strategies can I use if I wish to support 301 redirecting of the resource while preserving its place in the Open Graph? I don't want to lose my likes because I've changed the URLs.
Is the best way to do this to store the original url of the content, and refer to that? Are there any other strategies for dealing with this?
To clarify - I have a page:
/page1, with an og:url of http://www.example.com/page1
I now want to move it to
/page2, using a 301 redirect from /page1 to http://www.example.com/page2
Do I have any options to avoid losing the likes and comments other than setting the og:url meta to /page1?
Short answer, you can't.
Once the object has been created on Facebook's side its URL in Facebook's graph is fixed - the Likes and Comments are associated with that URL and object; you need that URL to be accessible by Facebook's crawler in order to maintain that object in the future. (note that the object becoming inaccessible doesn't necessarily remove it from Facebook, but effectively you'd be starting over)
What I usually recommend here is (with examples http://www.example.com/oldurl and http://www.example.com/newurl):
On /newurl, keep the og:url tag pointing to /oldurl
Add a HTTP 301 redirect from /oldurl to /newurl
Exempt the Facebook crawler from this redirect (a sketch of this exemption appears at the end of this answer)
Continue to serve the meta tags for the page on http://www.example.com/oldurl if the request comes from the Facebook crawler.
No need to return any actual content to the crawler, just a simple HTML page with the appropriate tags
Thus:
Existing instances of the object on Facebook will, when clicked, bring users to the correct (new) page via your redirect
The Like button on the (new) page will still produce a like of the correct object (but at the old URL)
If you're moving a lot of URLs around or completely rewriting your URL scheme you should use the new URLs for new articles/products/etc, but you'll need to keep the redirect in place if you want to retain likes, comments, etc on the older content.
This includes if you're changing domain.
The only problem here is maintaining the old URL -> new URL mapping somewhere in your code, but it's not technically difficult, just an additional thing to maintain in the future.
BTW, The Facebook crawler UA is currently facebookexternalhit/1.1 (+http://www.facebook.com/externalhit_uatext.php)
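For what it's worth, a rough sketch of that crawler exemption, assuming an Express app (the route names and og tag values are placeholders for the /oldurl and /newurl examples above):

const express = require('express');
const app = express();

const FB_CRAWLER = /facebookexternalhit/i;

app.get('/oldurl', (req, res) => {
  if (FB_CRAWLER.test(req.get('User-Agent') || '')) {
    // The crawler gets a bare page with the original og tags; no real content needed.
    return res.send(
      '<html><head>' +
      '<meta property="og:url" content="http://www.example.com/oldurl"/>' +
      '<meta property="og:title" content="My page"/>' +
      '</head><body></body></html>'
    );
  }
  // Everyone else gets the permanent redirect to the new location.
  res.redirect(301, 'http://www.example.com/newurl');
});

app.listen(3000);
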
I'm having the same problem with my old sites: domains are changing, admins want to change URLs for SEO, and so on.
I came to the conclusion that it's best to have some sort of unique ID in the database just for Facebook, right from the beginning. For articles, for example, I have myurl.com/a/123, where 123 is the ID of the article.
The real URL is myurl.com/category/article-title. The article can then be put in a different category, renamed, etc., with extensive logic for 301 redirects behind it, but the basic Facebook identifier can stay the same forever.
Of course, this is only viable when starting with a fresh site or when implementing Facebook comments for the first time.
Just an idea, in case you can plan ahead :) Let me know what you think.
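If it helps, a tiny sketch of that stable-identifier route, again assuming Express (the in-memory map stands in for whatever database lookup you'd really use):

const express = require('express');
const app = express();

// 'Facebook id' -> current canonical URL; in practice this comes from the database.
const articleUrlsById = new Map([
  ['123', '/category/article-title'],
]);

app.get('/a/:id', (req, res) => {
  const currentUrl = articleUrlsById.get(req.params.id);
  if (!currentUrl) return res.status(404).send('Not found');
  res.redirect(301, currentUrl); // renames and recategorisations only change this mapping
});

app.listen(3000);

The article page itself keeps og:url pointed at /a/123, so the Facebook object never moves even when the human-readable URL does.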

What is the best way to upload files to another domain from a browser? [closed]

I am creating a web service that schedules posts to a social network, and I need help dealing with file uploads under high traffic.
Process overview:
User uploads files to SomeServer (not mine).
SomeServer then responds with a JSON string.
My web app should store that JSON response.
Option 1: Save, cURL POST, delete tmp
The stupid way I made it work:
User uploads files to MyWebApp;
MyWebApp cURL's the file further to SomeServer, getting the response.
Option 2: JS magic
The smart way it could be perfect:
User uploads the file directly to SomeServer, from within an iFrame;
MyWebApp gets the response through JavaScript.
But this is(?) impossible due to the 'Same Origin Policy', isn't it?
Option 3: nginx proxying?
The better way for a production server:
User uploads files to MyWebApp;
nginx intercepts the file uploads and sends them directly to the SomeServer;
JSON response is also intercepted by nginx and processed by MyWebApp.
Does this make any sense, and what would be the nginx config for, say, /fileupload Location to proxy it to SomeServer?
I don't have a server I can use to stand in for SomeServer to test out my suggestions, but I'll give it a shot anyway. If I'm wrong, then I guess you'll just have to use Flash (sample code from VK).
How about using an iFrame to upload the file to SomeServer, receive the JSON response, and then use postMessage to pass the JSON response from the iFrame to the main window of your site? As I understand it, that is pretty much the motivation for creating postMessage in the first place.
Overall, I'm thinking of something like this or YUI's io() module but with postMessage added to get around the same origin policy.
Or in VK's case, using their explicit iFrame support. It looks to me like you can add a method to the global VK object and then call that method from the VK origin domain using VK.callMethod(). You can use that workaround to create a function that can read the response from the hidden iFrame.
So you use VK.api('photos.getUploadServer', ...) to get the POST URL.
Then you use JS to insert that URL as the action for your FORM that you use to upload the file. Follow the example under "Uploading Files in an HTML Form" in the io() docs and in the complete function, use postMessage to post the JSON back to your parent window. See example and docs here. (If it doesn't work with io(), you can certainly make it work using the roll-your-own example code if I'm right about VK.callMethod().)
Then in response to the postMessage you can use regular AJAX to upload the JSON response back to your server.
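To sketch what that handshake might look like (mywebapp.example and someserver.example are placeholders, and the upload result object is invented):

// Inside the iframe, on a page served from SomeServer (or a page you can place there),
// once the upload has finished and you have the JSON response:
const uploadResult = { photo: '...', server: 123, hash: 'abc' }; // placeholder for SomeServer's JSON
window.parent.postMessage(JSON.stringify(uploadResult), 'https://mywebapp.example');

// In the parent window, on MyWebApp:
window.addEventListener('message', (event) => {
  if (event.origin !== 'https://someserver.example') return; // only accept the expected origin
  const data = JSON.parse(event.data);
  console.log('Upload response:', data);
  // ...then store it, e.g. send it to your own backend with a regular AJAX call...
});
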
I can see only two major approaches to this problem: server-side proxying and JavaScript/client-side cross-site uploading. Your approaches 1 and 3 are the same thing: it shouldn't really matter whether you forward the files by means of cURL or nginx, not performance-wise anyway. So if you have already implemented approach 1 from your question, I don't see any reason to switch to 3.
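That said, since the question asks what the /fileupload location might look like, here is an untested nginx sketch (someserver.example, the upstream path and the size limit are all placeholders):

# Proxy uploads posted to /fileupload straight through to SomeServer.
location /fileupload {
    client_max_body_size 50m;                      # allow large uploads
    proxy_pass https://someserver.example/upload;  # SomeServer's upload endpoint
    proxy_set_header Host someserver.example;
    proxy_set_header X-Real-IP $remote_addr;
}

Note that with a plain proxy_pass the JSON response goes straight back to the browser; getting it into MyWebApp as well would take extra work (for example, having the browser forward it, as in option 2).
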
In regards to javascript and Same Origin Policy, it seems there are many ways to achieve your goal, but in all of these ways, either your scenario must be supported by SomeServer's developers, or you have to have some sort of access to SomeServer. Here's an approximate list of possibilities:
CORS—your domain must be allowed to access SomeServer's domain;
Changing document.domain—this requires that your page and target page are hosted on subdomains of the same domain;
Using a flash uploader (e.g. SWFUpload)—it is still required that your domain is allowed via the cross-domain policy, in case of Flash, via a crossdomain.xml in the root of SomeServer's domain;
xdcomm (e.g. EasyXDM)—requires that you can upload at least an html page to the target domain. This page can then be used as a javascript proxy for your manipulations with SomeServer's iframe.
The last one could, actually, be a real possibility for you, since you can upload files to SomeServer. But of course, it depends on how it's implemented—for example, in case there is another domain the files are served from, or if there are some security measures which won't allow you to host html files, it may not work out.

remote image embeds: how to handle ones that require authentication?

I manage a large and active forum and we're being plagued by a very serious problem. We allow users to embed remote images, much like how Stack Overflow handles images (imgur), except we don't restrict the set of hosts; images can be embedded from any host with the following code:
[img]http://randomsource.org/image.png[/img]
and this works fine and dandy... except users can embed an image that requires authentication. The image causes an authentication pop-up to appear, and because the pop-up's prompt text can be controlled, attackers put something like "please enter your [sitename] username and password here", and unfortunately our users have been falling for it.
What is the correct response to this? I have been considering the following:
Each page load has a piece of Javascript execute that checks each image on the page and its status
Have an authorised list of image hosts
Disable remote embedding completely
The problem is I've NEVER seen this happen anywhere else, yet we're plagued with it. How do we prevent this?
It's more than just the password problem. You are also allowing some of your users to carry out CSRF attacks against other users. For example, a user can set his profile image to [img]http://my-active-forum.com/some-dangerous-operation?with-some-parameters[/img].
The best solution is to:
Download the image server-side and store it on the file system/database. Enforce a reasonable maximum file size, otherwise an attacker can make you pull gigabytes of data onto your servers, hogging network and disk resources.
Optionally, verify that the file actually is an image.
Serve the image from a throw-away domain or IP address. It is possible to create images that masquerade as a jar or applet; serving all files from a throw-away domain protects you from such malicious activity.
If you cannot download the images on the server side, create a whitelist of allowed URL patterns (not just domains) on the server side, and discard any URLs that don't match it (see the sketch after this answer).
You MUST NOT perform any checks in JavaScript. Performing checks in JS solves your immediate problem but does not protect you from CSRF: you are still making a request to an attacker-controlled URL from your users' browsers, and that is risky. Besides, the performance impact of that approach is prohibitive.
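As a server-side illustration (Node.js, say) of the whitelist idea above, where the hosts and patterns are placeholders rather than recommendations:

// Only allow image URLs that match one of these patterns; everything else is discarded
// before the post is rendered.
const ALLOWED_IMAGE_PATTERNS = [
  /^https:\/\/i\.imgur\.com\/[A-Za-z0-9]+\.(png|jpe?g|gif)$/i,
  /^https:\/\/images\.trustedhost\.example\/[\w\/-]+\.(png|jpe?g|gif)$/i,
];

function isAllowedImageUrl(url) {
  return ALLOWED_IMAGE_PATTERNS.some(pattern => pattern.test(url));
}

// isAllowedImageUrl('https://i.imgur.com/abc123.png')    -> true
// isAllowedImageUrl('http://randomsource.org/image.png') -> false
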
I think you mostly answered your own question. Personally, I would have gone for a mix of option 1 and option 2: i.e. create client-side JavaScript which first checks image embed URLs against a set of whitelisted hosts. For each embedded URL which is not in that list, do something along these lines, while checking that the server does not return a 401 status code.
This way there is a balance between latency (we attempt to minimize duplicate requests via the HEAD method and domain whitelists) and security.
Having said that, option 2 is the safest one, if your users can accept it.