Why is CORS based on the target server? Why do I have to use JSONP?

I would like a concrete example in an answer if possible.
For explanation's sake, we have three players here:
My Server (myserver.com)
Client Server (myclient.com)
Client User (accessing data through myclient.com)
I'm making a web service available to my clients that allows them to retrieve their data in JSON format. In order for their websites to work they have to use the standard cross-origin XHR workarounds: either making the request server-side or relying on me to set
Access-Control-Allow-Origin: http://myclient.com
So, two-part question here. First, why do I set the origin policy at myserver.com? Why does my server care who it serves content up to? Shouldn't it be myclient.com that sets this? A concrete example here would be great.
Part two, I understand that JSONP works around this, but I'm worried about using it because I don't understand the security implications from part one. What is the point of JSONP if I can just set Access-Control-Allow-Origin: *?

Lots of questions!
JSONP is definitely dangerous if you intend to serve user-specific content. If the content the server is serving is completely public, and (probably) read-only, JSONP is a wise choice. Don't use it for anything that assumes a 'logged in state' or authentication/authorization.
CORS is definitely much better than JSONP, but it's not supported in every (older) browser. If you want to support as many browsers as possible, you will need some kind of fallback. CORS also allows you to make requests other than GET, which greatly improves flexibility.
The reason the target server needs to allow this is mainly that JavaScript running on domain A should not be able to access domain B. If domain A could grant this itself, it would mean you could create JavaScript applications that have access to the sandbox of any public server. Only the owner of domain B can explicitly allow the owner of domain A to access their content.
Your argument (why does domain B care who accesses its resources) would normally be valid. But this is not to protect domain B; it is to protect the end-user. Domain A should not be allowed to perform requests on behalf of the end-user to domain B without explicit permission.
And just to be sure: unless you understand the security implications of JSONP quite well, CORS is likely a much safer choice.
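
To make the target-server control concrete, here is a minimal sketch (Node.js with TypeScript; the origin, port and payload are made-up illustrations, not your actual service) of how myserver.com could serve the same data either with a CORS header or wrapped as JSONP:

import * as http from "http";
import { URL } from "url";

const ALLOWED_ORIGIN = "http://myclient.com"; // hypothetical client origin

http.createServer((req, res) => {
  const url = new URL(req.url ?? "/", "http://myserver.com");
  const payload = JSON.stringify({ customer: "acme", balance: 42 }); // dummy data

  // CORS variant: the browser only hands the response to scripts running on
  // myclient.com because *this* server explicitly allows that origin.
  if (req.headers.origin === ALLOWED_ORIGIN) {
    res.setHeader("Access-Control-Allow-Origin", ALLOWED_ORIGIN);
  }

  // JSONP variant: if a callback parameter is present, wrap the JSON in a
  // function call; the client loads this via a <script> tag, bypassing XHR
  // restrictions entirely, so any site can embed it.
  const callback = url.searchParams.get("callback");
  if (callback) {
    res.setHeader("Content-Type", "application/javascript");
    res.end(`${callback}(${payload});`);
  } else {
    res.setHeader("Content-Type", "application/json");
    res.end(payload);
  }
}).listen(8080);

Note how the JSONP branch performs no origin check at all: that is exactly why it should only ever carry public, read-only data.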


How can I filter requests to Google Cloud Storage depending on their HTTP headers?

My use case is that I have pretty large files (>2GB; these are Cloud Optimized GeoTIFFs) on Google Cloud Storage, which can be used in applications through HTTP range requests.
I would like to filter out requests that are missing the Range header.
This would avoid the case of users downloading the whole file. (I guess someone could still make a range request for the whole file with a bit of work, but I am not concerned about this.)
The documentation (https://firebase.google.com/docs/storage/security/rules-conditions#request_evaluation) says "HTTP headers and authentication state are also included", so I would expect to be able to use this information in the security rules.
Is it possible at all, and if so, how?
I cannot find any example of using HTTP headers in the security rules conditions. I have also tried the rules playground in Firebase, but didn't figure out how to access the request headers.
It doesn't seem like there's any way to access HTTP headers. The only request variables are those listed in the documentation.
You can try the request.params variable, which is populated with the query parameters present in the request.
For example, for <firebase storage url>?myParam=true, a condition like request.params.myParam == "true" should work.
No, it is not possible to filter requests depending on HTTP headers.
The request variable in the security rules does not include HTTP headers (as stated by a Firebaser in the comments on Roopa M's answer). The documentation has been updated since this question was asked and no longer states that this information is included.
Roopa M's answer suggests filtering requests based on query parameters, which might help you, but that is independent of HTTP headers.
In order to really handle requests according to their HTTP headers in the context of Firebase, it is probably necessary to rely on a Cloud Function acting as middleware; these have access to the full HTTP request, if I am not mistaken.
Alternatively, this kind of rule should be reasonably easy to implement in a regular web server like Nginx, if you have the option to build your project in such an environment.
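
For the Cloud Function route mentioned above, a hedged sketch (Firebase HTTP function in TypeScript; the bucket name, content type and Range parsing are simplified assumptions, not a production implementation) could look like this:

import * as functions from "firebase-functions";
import { Storage } from "@google-cloud/storage";

const storage = new Storage();
const BUCKET = "my-geotiff-bucket"; // hypothetical bucket name

export const rangedDownload = functions.https.onRequest(async (req, res) => {
  const range = req.headers.range; // e.g. "bytes=0-65535"
  if (!range) {
    // The rule we could not express in storage security rules:
    // refuse any request that is not a range request.
    res.status(400).send("Range header required");
    return;
  }

  const match = /^bytes=(\d+)-(\d+)$/.exec(range);
  if (!match) {
    res.status(416).send("Unsupported Range format");
    return;
  }

  const start = Number(match[1]);
  const end = Number(match[2]);
  const objectName = req.path.replace(/^\//, ""); // object name taken from the URL path
  res.status(206).set("Content-Type", "image/tiff");
  storage.bucket(BUCKET).file(objectName).createReadStream({ start, end }).pipe(res);
});

Keep in mind that the file bytes now flow through the function rather than straight out of Cloud Storage, which adds latency and compute cost compared with direct downloads.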

Could REST API OPTIONS be used as the HATEOAS only request?

As I've understood it, REST MUST use the HATEOAS constraint to be implemented properly. My understanding of HATEOAS is that basically every resource should share information about what communication options it has and how the consumer can use those options to achieve their end goal.
My question is if the HTTP OPTIONS method could be used as a way to navigate a REST API. Basically the response from an OPTIONS request would include the possible actions to take on a resource which would make it possible to consume the API without knowing the endpoints.
e.g.
An initial request to the API
HTTP OPTIONS /api
Could return all resources available for consumption and their relations, like a massive tree for consuming the API and all it has to offer. This idea doesn't preclude implementing HATEOAS on other responses as well, but the OPTIONS request would allow navigation without returning data that the consumer might not actually want to consume.
Is this a really bad idea? Or is it something that is commonly implemented? I'm currently attempting to implement a REST API, but I'm having a hard time understanding the benefit of HATEOAS if there is no way to navigate the API without actually requesting data that you might not necessarily need when consuming certain endpoints. And I assume HATEOAS aims to make clients consume resources by their relations rather than by hard-coding the endpoints?
TL;DR
Could HTTP OPTIONS request act as a way to navigate a REST API by returning what communication options are available for the requested resource without actually returning the resource?
According to RFC 7231:
The OPTIONS HTTP method requests information about the communication options available for the target resource, at either the origin server or an intervening intermediary. This method allows a client to determine the options and/or requirements associated with a resource, or the capabilities of a server, without implying a resource action.
...
A server generating a successful response to OPTIONS SHOULD send any header fields that might indicate optional features implemented by the server and applicable to the target resource (e.g., Allow), including potential extensions not defined by this specification. The response payload, if any, might also describe the communication options in a machine or human-readable representation. A standard format for such a representation is not defined by this specification, but might be defined by future extensions to HTTP. A server MUST generate a Content-Length field with a value of "0" if no payload body is to be sent in the response.
So, basically, a response to an OPTIONS request will tell your client which HTTP operations may be performed on a certain resource. It is furthermore admissible to target the whole server by using * instead of a specific resource URI.
A response to an OPTIONS request may look like this:
HTTP/1.1 204 No Content
Allow: OPTIONS, GET, HEAD, POST
Cache-Control: max-age=604800
Date: Thu, 13 Oct 2016 11:45:00 GMT
Expires: Thu, 20 Oct 2016 11:45:00 GMT
Server: EOS (lax004/2813)
x-ec-custom-error: 1
which states that the resource supports the operations mentioned in the Allow header of the response. Via the Cache-Control header a client knows that it can, by default, cache responses to safe requests (GET and HEAD) for up to 7 days (the value is given in seconds). The x-ec-custom-error header is a non-standard header specific to particular software, in this case an ECS server; according to this Q&A its meaning isn't publicly documented and is therefore application-specific.
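As a small client-side illustration (TypeScript using the standard fetch API; the URL is a placeholder, and note that a cross-origin server must expose the Allow header before a browser lets you read it), probing a resource could look like this:

// Ask a resource which HTTP methods it supports and read the Allow header.
async function probe(resource: string): Promise<string[]> {
  const res = await fetch(resource, { method: "OPTIONS" });
  const allow = res.headers.get("allow") ?? ""; // e.g. "OPTIONS, GET, HEAD, POST"
  return allow.split(",").map(m => m.trim()).filter(Boolean);
}

probe("https://api.example.org/orders")
  .then(methods => console.log("Supported methods:", methods));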
As for returning a tree of traversable resources from the resource the OPTIONS request targets: technically this could be possible; however, certain systems might produce an almost never-ending list of URIs, so such a design is questionable for larger systems.
My understanding of HATEOAS is that basically every resource should share information about what communication options it has and how the consumer can use those options to achieve their end goal.
Hypertext as the engine of application state (HATEOAS) is basically just a requirement to use the interaction model that has been used on the Web quite successfully for decades and to offer the same functionality to applications. This enables applications to surf the Web in much the same way we humans do.
Great, but how does it work?
On the Web we use links and Web forms all the time. Through a Web form a server is able to teach a client basically what properties a certain resource supports or expects. But that's not all! The same form also tells your client where to send the request to (target URI), the HTTP method to use and, usually implicitly given, the media type the payload needs to be serialized to upon sending the request to the server. This, in essence, makes out-of-band API documentation unnecessary as all the information a client needs to make a valid request is given by the server already.
On a typical Web site you might have a table of entries which offers the option to add new entries, or to update or delete existing ones. Usually such links are hidden behind fancy images, e.g. a dustbin for deleting an entry and a pencil for editing an existing one, where the image represents an affordance. The affordance of an element makes it clear what you should do with it or what its purpose is: a button on a page wants to be pushed, a slider widget wants to be moved, a text field waits for user input. Since applications aren't that eager to work with images, a further concept is used instead: link relation names serve exactly this purpose. For example, for a pageable collection with, say, 25 entries per page, you may be familiar with a widget containing arrows to page through the collection. A link here should usually be annotated with link relation names such as self, next, prev, first or last. The purpose of such links is quite clear; some others, like prefetch, which indicates that a resource can be loaded early in the background because the next action is very likely to request it, may be less intuitive at first. Such link relation names should be standardized, or at least follow the Web Linking extension mechanism.
With the help of link relation names, a client that knows to look for the URI annotated with next, for instance, will still work if the server decides to change its URI scheme, as it treats the URI as opaque.
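As an illustration (a HAL-flavoured sketch with invented URIs and fields, not taken from any concrete API), one page of such a collection might advertise its paging controls as link relations:

// HAL-style representation of one page of a pageable collection (TypeScript
// literal used only to show the shape of the document).
const page = {
  _links: {
    self:  { href: "/api/orders?page=2" },
    first: { href: "/api/orders?page=1" },
    prev:  { href: "/api/orders?page=1" },
    next:  { href: "/api/orders?page=3" },
    last:  { href: "/api/orders?page=40" },
  },
  _embedded: {
    orders: [{ id: "o-1001", total: 12.5 }, { id: "o-1002", total: 99.0 }],
  },
};

A client pages forward by following whatever href is currently advertised under next, not by building ?page=3 itself.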
Of course, both client and server need to support the same media type, and that media type must be able to represent such capabilities. Plain application/json, for example, is not able to provide such support. HAL JSON or JSON Hyper-Schema at least add support for links and link relation names to JSON-based documents, while hal-forms, halo+json (halform) and ion might be used to teach a client how a request needs to be created. The question here shouldn't be which media type to support but how many different ones you want to support, as the more media types your API is able to handle, the more likely it will be able to interact with arbitrary clients not under your control.
These concepts allow you to use the controls given in the server response to "drive your workflow" forward. In essence, what you as an API designer should do is design the interactions of a client with your API so that they follow a certain domain application protocol (as Jim Webber termed it) or state machine (as Asbjørn Ulsberg put it) that guides a client through its task, e.g. ordering from your shop API.
So, in short, HATEOAS is comparable to Web surfing for applications: it makes use of named link relations and form-like media type representations that allow you to take actions based solely on the response retrieved from the server, instead of having to bake external knowledge stemming from some reference documentation (Swagger, OpenAPI or the like) into your application.
But how does HATEOAS benefit the consumer in practice then?
First, it does not have to consult any external documentation other than, perhaps, the current media type specification, though support for well-known media types is usually already baked into popular frameworks, or can at least be added through plugins or further libraries. Once the media type is understood and supported, interaction with all services that also support that media type is possible, regardless of their domain. This allows the same client implementation to be reused to interact with service A and service B out of the box. In RPC-like systems you'd need to integrate the API of service A first, and if you also want to interact with service B you'd need to integrate that API separately. Most likely these APIs are incompatible and thus don't allow reuse of the same classes.
Without knowing the URL for a resource, is the idea that the consumer can discover it by browsing the API, but they will still have a hard dependency on the actual URL? Or is HATEOAS's purpose to leverage actions on a certain resource, i.e. the consumer knows the users endpoint but does not need to know the endpoints for actions to take on the users resource, because those are provided by the API?
A client usually does not care about the URI itself; it cares about the content a URI may provide. Compare this to your typical browsing behavior: do you prefer meaningful text that summarizes a link's content so you can decide whether to request that resource, or do you prefer parsing and dissecting a URI to learn what it might do? Minifying or obfuscating URIs will do you no favors in the latter case, though.
A further danger arises from URIs and resources that a client puts meaning into. A sloppy developer will interpret such URIs/resources and implement a tiny hack to interact with that service, assuming the URI/resource will remain static. For example, it is not unreasonable to expect a URI like /api/users/1 to return some user-related data, and based on the response format a tiny Java class is written that expects a field for username and one for email. If the server now decides to add additional data or rename its fields, the client suddenly won't be able to interact with that service any further. And rest assured that in practice, especially in the EDI domain, you will have to interact with clients that were never meant to interact with the Web, or where programmers implemented their own JSON framework that can't cope with changing element order or additional optional fields, even though the spec contains notes on those issues. Fielding claimed that:
A REST API should never have “typed” resources that are significant to the client. Specification authors may use resource types for describing server implementation behind the interface, but those types must be irrelevant and invisible to the client. The only types that are significant to a client are the current representation’s media type and standardized relation names. [ditto] (Source)
Instead of typed resources, content-type negotiation should be used to support interoperability between the different stakeholders in the network.
As such, the URI itself is just the identifier of a resource and is mainly used to learn where to send a request. Through the help of meaningful link relation names, a client knows, for example, that it is interested in http://www.acme.com/rel/orders if it wants to send an order to the service, and it just looks up the URI that is either annotated with that Web Linking extension relation name or has a URI attached to it. Whether the link relation name is just an annotation (i.e. a further attribute on the URI element) or the URI is attached to the link relation name (i.e. as an embedded object of the link relation name) depends on the actual media type. This way, if a server ever decides to change its URI scheme or move resources around, for whatever reason, the client will still be able to find the URI, and it couldn't care less about which characters are present in the URI; it just treats the URI as an opaque thing. The nice thing here is that a URI can be annotated with multiple link relation names simultaneously, which allows a server to "offer" that URI to clients that support different link relation names. In the case of forms, the URI to send the request to is probably contained in the action attribute of the form element or the like.
As you can hopefully see, with HATEOAS there is no need for a hard dependency on URIs; at most there is a dependency on the link relation name. A client still needs URIs to learn where to send a request, but by looking up the URI via its accompanying link relation name you make the handling of URIs much more dynamic, as it allows a server to change a URI anytime it wants to or has to.
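
To show what that looks like in code, here is a small client sketch (TypeScript; the media type handling is reduced to a bare HAL-like _links lookup, and the entry point URL and relation names are placeholders):

interface Link { href: string }
interface HalResource { _links: Record<string, Link>; [key: string]: unknown }

// Follow a link by its relation name instead of hard-coding the URI.
async function follow(resource: HalResource, rel: string): Promise<HalResource> {
  const link = resource._links[rel];
  if (!link) throw new Error(`Resource offers no '${rel}' relation`);
  const res = await fetch(link.href, { headers: { Accept: "application/hal+json" } });
  return await res.json() as HalResource;
}

// Usage: start at the API entry point and navigate purely by relation names.
// If the server later renames /api/orders to /api/v2/purchase-orders, this
// code keeps working as long as an 'orders' relation is still advertised.
async function latestOrder(entryPointUrl: string): Promise<HalResource> {
  const root = await (await fetch(entryPointUrl)).json() as HalResource;
  const orders = await follow(root, "orders");
  return follow(orders, "latest");
}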

Pass data such as username in hostname

I have seen some sites use hostnames as data such as usernames (for example username.example.com) and was wondering how you would be able to achieve this.
Is it good practice to use hostnames like this or are there reasons against it?
Thanks in advance.
It is generally bad practice to treat hostnames this way. Lookups become a bit more complicated and it is always safest to use usernames in the path or query.
Hostnames are designed to be thought of in a global sense. For instance: user.example.com/username/profile.
It also helps protect the user (a little), because paths travel inside the HTTP request itself, whereas a subdomain scheme means the request essentially asks for user.example.com, and that lookup can be redirected multiple times before returning to the client; DNS monitoring is the number one way that people do tracking.
DNS tracking is easy because it's already fast and open, and its contents aren't designed to be hidden the way HTTPS or more recent IPsec techniques hide theirs.
I've accomplished this by setting up a DNS wildcard with the DNS host (*.example.com) and then using PHP to parse the username out of the hostname and act accordingly.
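The answer above uses PHP; as a sketch of the same wildcard-DNS idea in TypeScript/Node (the base domain and port are assumptions), the username can be pulled out of the Host header:

import * as http from "http";

const BASE_DOMAIN = "example.com"; // the apex behind the *.example.com wildcard

http.createServer((req, res) => {
  // Host looks like "alice.example.com", possibly with a ":port" suffix.
  const host = (req.headers.host ?? "").split(":")[0];
  const username = host.endsWith(`.${BASE_DOMAIN}`)
    ? host.slice(0, -(BASE_DOMAIN.length + 1))
    : null;

  if (!username) {
    res.writeHead(404);
    res.end("Unknown host");
    return;
  }
  res.writeHead(200, { "Content-Type": "text/plain" });
  res.end(`Profile page for ${username}`);
}).listen(8080);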

How RESTful is using subdomains as resource identifiers?

We have a single-page app (AngularJs) which interacts with the backend using REST API. The app allows each user to see information about the company the user works at, but not any other company's data. Our current REST API looks like this:
domain.com/companies/123
domain.com/companies/123/employees
domain.com/employees/987
NOTE: All ids are GUIDs, hence the last end-point doesn't have company id, just the employee id.
We recently started working on enforcing the requirement that each user has access only to information related to the company where the user works. This means that on the backend we need to track who the logged-in user is (which is a simple auth problem) as well as determine which company's information is being accessed. The latter is not easy to determine from our REST API calls, because some of them do not include a company id, such as the last one shown above.
We decided that instead of tracking company ID in the UI and sending it with each request, we would put it in the subdomain. So, assuming that ACME company has id=123 our API would change as follows:
acme.domain.com
acme.domain.com/employees
acme.domain.com/employees/987
This makes identifying the company very easy on the backend and requires minor changes to REST calls from our single-page app. However, my concern is that it breaks the RESTfulness of our API. This may also introduce some CORS problems, but I don't have a use case for it now.
I would like to hear your thoughts on this and how you dealt with this problem in the past.
Thanks!
In a similar application, we did put the 'company id' into the path (every company-specific path), not as a subdomain.
I wouldn't care a jot about whether some terminology enthusiast thought my design was "RESTful" or not, but I can see several disadvantages to using domains, mostly stemming from the fact that the world tends to assume that the domain identifies "the server", and the path is how you find an item on that server. There is a certain amount of extra stuff you'll have to deal with when using multiple domains which you wouldn't with paths:
HTTPS - you'd need a wildcard certificate instead of a simple one
DNS - you're either going to have wildcard DNS entries, or your application management is now going to involve DNS management
All the CORS stuff which you mention - may or may not be a headache in your specific application - anything which is making 'same domain' assumptions about security policy is going to be affected.
Of course, if you want lots of isolation between companies, and effectively you would be as happy running a separate server for each company, then it's not a bad design. I can't see it's more or less RESTful, as that's just a matter of viewpoint.
There is nothing "unrestful" in using subdomains. URIs in REST are opaque, meaning that you don't really care about what the URI is, but only about the fact that every single resource in the system can be identified and referenced independently.
Also, in a RESTful application, you never compose URLs manually; you traverse the hypermedia links you find at the API endpoint and in all the returned responses. Since you don't need to manually compose URIs, from the REST point of view it makes no difference how they look. Having a URI such as
//domain.com/ABGHTYT12345H
would be as RESTful as
//domain.com/companies/acme/employees/123
or
//domain.com/acme/employees/smith-charles
or
//acme.domain.com/employees/123
All of those are equally RESTful.
But... I like to think about usable APIs, and when it comes to usability, having readable, meaningful URLs is a must for me. Following conventions is also a good idea. In your particular case there is nothing un-REST about the route, but it is unusual to find that kind of behaviour in an API, so it might not be best practice. Also, as someone pointed out, it might complicate your development (not specifically the CORS part, though; that one is easily solved by sending a few HTTP headers, as sketched below).
So, even though I can't see anything non-REST in your proposal, convention elsewhere would argue against subdomains in an API.
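
For completeness, the "few HTTP headers" that take care of the CORS part could look roughly like this (an Express-style middleware sketch in TypeScript; the SPA origin, allowed methods and port are assumptions for a page served from one origin calling a company subdomain such as acme.domain.com):

import express from "express";

const app = express();

// Allow the single-page app (assumed to live at https://app.domain.com)
// to call this company-specific API subdomain.
app.use((req, res, next) => {
  res.set({
    "Access-Control-Allow-Origin": "https://app.domain.com",
    "Access-Control-Allow-Methods": "GET, POST, PUT, DELETE, OPTIONS",
    "Access-Control-Allow-Headers": "Authorization, Content-Type",
  });
  if (req.method === "OPTIONS") {
    res.sendStatus(204); // answer the preflight without hitting the routes
    return;
  }
  next();
});

app.get("/employees/:id", (req, res) => {
  // The company is read from the subdomain, e.g. "acme" from acme.domain.com.
  res.json({ id: req.params.id, company: req.hostname.split(".")[0] });
});

app.listen(3000);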

Why is cross-domain JSONP safe, but cross-domain JSON not?

I'm having trouble connecting some dots having recently learned of JSONP. Here's my understanding:
Cross-domain XMLHttpRequests for any content (including JSON) are banned due to the same-origin policy. This protects against XSRF.
You are permitted to have a script tag with a src that returns JSONP: some JSON padded inside a call to a JavaScript function (say, foo)
You can have an implementation of foo on the page that will get called when the JSONP data is returned, and you can do things with the JSON data that function is passed
Why is it OK to receive cross-domain data if it came via JSONP, but not if it came via JSON?
Is there an assumption that JSON is prone to permitting XSRF but JSONP is not? If so, is there any reason for that other than JSONP being some de-facto data format that won't ever provide data that enables XSRF? Why JSONP and not some arbitrary root tag on XML instead?
Thank you in advance for your answers, please make my brain work again after failing to figure this one out.
I understand this question to be about why the browser considers JSONP safe, not about whether it is safe (which it isn't). I will address this question step by step.
Regular ol' AJAX
To perform a regular AJAX request, the browser creates an XHR object, points it at the URL and pulls the data. The XHR object will only trust data from the same domain. This is a hard limitation. There is no getting round it in current browsers (edit - you can now use CORS).
Solution - Don't use XHR
Since XHR is subject to the same-domain policy, we can't use XHR to do cross-domain AJAX. Luckily, there are other ways to hit a remote server. We could append an image tag to the page, for example. We can also append a script tag and give it a src attribute that points to the remote server. We can pull jQuery from a CDN, for example, and expect it to work.
How JSONP works.
When we make a JSONP request, our code dynamically appends a script tag to the page. The script tag has a source attribute which points to the remote JSONP API url, just as though you were inserting a script from a CDN.
The JSONP script returned by the server is wrapped in a function call. When the script is downloaded, the function will be executed automatically.
This is why we have to tell the JSONP endpoint the name of the callback function we want the response wrapped in. That function will be called once the script has downloaded.
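Put together, a minimal JSONP client looks roughly like this (browser TypeScript; the API URL and callback name are placeholders):

// Dynamically inject a <script> tag whose src points at the JSONP endpoint.
// The server is expected to respond with: handleData({ ...json... });
function jsonp(url: string, callbackName = "handleData"): void {
  (window as any)[callbackName] = (data: unknown) => {
    console.log("JSONP payload received:", data);
    delete (window as any)[callbackName]; // basic cleanup
  };

  const script = document.createElement("script");
  script.src = `${url}?callback=${encodeURIComponent(callbackName)}`;
  document.head.appendChild(script); // the returned script runs with full page privileges
}

jsonp("https://api.example.org/data");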
Security concerns
There are some fairly big security concerns here. The script you are downloading could take control of your page and put your users at risk. JSONP is not safe for your users; it is just not blocked by web browsers. JSONP really is a browser exploit which we are taking advantage of. Use it with caution.
Used wisely, JSONP is pretty awesome though.
I don't know how the perception that JSONP is safe came about, but see:
JSON-P is, for that reason, seen by many as an unsafe and hacky approach to cross-domain Ajax, and for good reason. Authors must be diligent to only make such calls to remote web services that they either control or implicitly trust, so as not to subject their users to harm.
and
The most critical piece of this proposal is that browser vendors must begin to enforce this rule for script tags that are receiving JSON-P content, and throw errors (or at least stop processing) on any non-conforming JSON-P content.
both quotes from http://json-p.org/.
Other links with some useful information about JSONP and security:
http://beebole.com/en/blog/general/sandbox-your-cross-domain-jsonp-to-improve-mashup-security/
Cross Domain Limitations With Ajax - JSON
JSONP Implications with true REST
All of these say two things: basically, JSONP is not considered "safe", but there are ideas on how to make it "safer", though most of those ideas rely on standardization and on specific checking logic being built into browsers, etc.