Cloudflare adds too much latency to API calls

Trying out Cloudflare for DDoS protection of an SPA web app, using the free tier for testing.
Static content loads fine, but API calls became very slow: from under 50ms per call originally to around 450~500ms each.
My APIs are called via a subdomain, e.g. apiXXX.mydomain.xyz.
Any idea what the problem is, or an alternative fast DDoS protection solution?

Cloudflare has created a page explaining the configuration you need when proxying APIs:
You have to create a new Page Rule that bypasses cache and turns off the Always Online and Browser Integrity Check options.
If you do not configure this, you may see slow response times, because all of those options are enabled by default and apply to API calls.
Here's the link to the configuration guide: Cloudflare Page Rule for API
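If you'd rather script it, something along these lines should create that rule through Cloudflare's v4 API. This is a sketch: the zone ID and token are placeholders, the URL pattern is the asker's subdomain, and the action IDs (cache_level, always_online, browser_check) are, to the best of my knowledge, the ones the Page Rules API uses for these settings.

// Sketch: create a Page Rule that bypasses cache and disables Always
// Online and Browser Integrity Check for the API subdomain.
const ZONE_ID = "your-zone-id";     // placeholder
const API_TOKEN = "your-api-token"; // placeholder

async function createApiPageRule(): Promise<void> {
  const res = await fetch(
    `https://api.cloudflare.com/client/v4/zones/${ZONE_ID}/pagerules`,
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${API_TOKEN}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        status: "active",
        targets: [{
          target: "url",
          constraint: { operator: "matches", value: "apiXXX.mydomain.xyz/*" },
        }],
        actions: [
          { id: "cache_level", value: "bypass" },
          { id: "always_online", value: "off" },
          { id: "browser_check", value: "off" },
        ],
      }),
    },
  );
  if (!res.ok) throw new Error(`Page Rule creation failed: ${res.status}`);
}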

Related

AWS Signed Cookies in an IFrame?

Is it possible to support cookies in an IFrame that won't be broken by some of the recent cookie security improvements? The IFrame is embedded on arbitrary domains that we don't control. Other than the initial request URI being passed, we don't care about any special message passing or cross domain access.
Context: an app I've inherited serves authenticated S3 content in an IFrame to users. The content is proxied by CloudFront, leaning on their Signed Cookies feature to authenticate the initial HTML page, as well as every other asset (CSS, JS, images, etc.) that might be on the page. The cookie is generated/set after a successful auth handshake.
Recently, the move towards blocking third party cookies has broken this model. Users need to downgrade their security settings, and this will flat out stop working soon.
Short of a larger architectural change, is there a way to configure the cookies or CloudFront to work within an IFrame embedded on domains we don't control? My assumption is that this model is fundamentally broken now, but I wanted to triple-check before committing to a redesign.
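For reference, the cookie-setting step after the auth handshake looks roughly like the sketch below. The cookie names are CloudFront's standard signed-cookie names; the handler itself is assumed. SameSite=None; Secure is a prerequisite for any cross-site IFrame use, but it is not enough once a browser blocks third-party cookies outright.

import type { ServerResponse } from "http";

// Hypothetical post-auth step that sets CloudFront signed cookies.
// Cross-site IFrame usage requires SameSite=None; Secure; browsers that
// block third-party cookies will still refuse them regardless.
function setSignedCookies(
  res: ServerResponse,
  policy: string,
  signature: string,
  keyPairId: string,
): void {
  const attrs = "Path=/; Secure; HttpOnly; SameSite=None";
  res.setHeader("Set-Cookie", [
    `CloudFront-Policy=${policy}; ${attrs}`,
    `CloudFront-Signature=${signature}; ${attrs}`,
    `CloudFront-Key-Pair-Id=${keyPairId}; ${attrs}`,
  ]);
}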
Thanks

Third Party Cookies - Cross Domain APIs w/ Session Tracking

Given a CORS API that requires a session cookie to track users as they move through a checkout process, there are issues in multiple browsers where the cookie is not set until after the user visits the site the API is hosted on.
For example:
johnny.com uses a CORS JSON API from jacob.com. jacob.com sets a cookie after the first AJAX call is made, but some browsers will not set the cookie for subsequent calls, so the API will not function as expected.
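For reference, the cross-site call in question looks something like this (the endpoint path is hypothetical). Even with the right CORS headers on jacob.com's side, the cookie handling here is exactly what the browsers below disagree on.

// On johnny.com: calling jacob.com's API with the session cookie.
// For the cookie to be stored or sent at all, jacob.com must respond with
//   Access-Control-Allow-Origin: https://johnny.com   (not "*")
//   Access-Control-Allow-Credentials: true
async function getCheckoutState(): Promise<unknown> {
  const res = await fetch("https://jacob.com/api/checkout", {
    credentials: "include", // opt in to sending/receiving cross-site cookies
  });
  return res.json();
}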
Browser Behavior:
Chrome seems to function fine unless "Third-Party cookies" are deliberately disabled. There doesn't seem to be a workaround for this.
IE does not allow the cookie to be set initially unless there is a P3P privacy policy header returned with the initial call.
Safari does not allow the cookie to be set initially unless a hack is used (see: http://measurablewins.gregjxn.com/2014/02/safari-setting-third-party-iframe.html)
Any insight on how to work around these issues is greatly appreciated.
Unfortunately, it seems there is no option that makes this work across all browsers.
Safari now restricts third-party use of cookies.
It seems best to evaluate the alternatives:
Set up a proxy server that forwards calls to the different services (for example, when you hit johnny.com/jacob/abc, act as a proxy to retrieve jacob.com/abc); see the sketch below
Use OAuth login on the API (it might be impractical)
Move the API under johnny.com/api/...
PayPal has also created several JS-based solutions to work around this kind of problem: https://medium.com/@bluepnume/introducing-paypals-open-source-cross-domain-javascript-suite-95f991b2731d
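A minimal sketch of the first option, assuming Node 18+ on johnny.com (GET-only for brevity; in practice you may also need to rewrite Set-Cookie domains):

import http from "http";

// johnny.com forwards /jacob/* to jacob.com, so the browser only ever
// talks to johnny.com and the session cookie is first-party.
http.createServer(async (req, res) => {
  if (req.method !== "GET" || !req.url?.startsWith("/jacob/")) {
    res.writeHead(404);
    res.end();
    return;
  }
  const upstream = await fetch("https://jacob.com" + req.url.slice("/jacob".length));
  res.writeHead(upstream.status, {
    "content-type": upstream.headers.get("content-type") ?? "application/octet-stream",
  });
  res.end(Buffer.from(await upstream.arrayBuffer()));
}).listen(8080);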

Separate back-end and front-end apps on same domain?

We are building a fully RESTful back-end with the Play Framework. We are also building a separate web front-end with a different technology stack that will call the RESTful API.
How do we deploy both apps so they have the same domain name, with some URLs used for the backend API and some for the front-end views?
For example, visiting MyDomain.example means the front-end displays the home page, but sending a GET to MyDomain.example/product/24 means the back-end returns a JSON object with the product information. A further possibility: if a web browser views MyDomain.example/product/24, the front-end displays an HTML page, built from a back-end call to the same URL.
Finally, do we need two dedicated servers for this? Or can the front-end and back-end be deployed on the same server (e.g. OpenShift, Heroku)?
You are going to dig yourself... deep :)
The simplest and cleanest approach, without any doubt, is creating a single application serving data for both BE and FE, where you differentiate the response (JSON vs HTML) by the URL. Pseudo-routes:
GET /products/:id controllers.Frontend.productHtml(id)
GET /backend/products/:id controllers.Backend.productJson(id)
Benefits:
single deployment (let's say to Heroku)
namespace managed from one app
no need to modify the models in many apps after a change in one of them
else if
If you're really determined to create two separate apps, use some HTTP server as a proxy, for example nginx, so that it sends all requests to domain.tld/* to the application working on port 9000 (which will answer with HTML) but redirects requests to domain.tld/backend/* to the application working on port 9001, responding with JSON (see the sketch below).
else
If you are really going to respond with JSON or HTML depending on the caller, you can try comparing headers in each controller to check whether the request was sent from a browser or from an AJAX call, but believe me, that will become a nightmare faster than you think... insert the coin, choose the flavor
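As for the nginx option above, this is the routing it would perform, sketched in Node/TypeScript purely to make the logic explicit (ports as in the answer; nginx itself needs only a couple of proxy_pass lines):

import http from "http";

// /backend/* goes to the JSON app on :9001, everything else to the
// HTML app on :9000; headers and bodies are piped through untouched.
http.createServer((req, res) => {
  const port = req.url?.startsWith("/backend/") ? 9001 : 9000;
  const upstream = http.request(
    { hostname: "127.0.0.1", port, path: req.url, method: req.method, headers: req.headers },
    (up) => {
      res.writeHead(up.statusCode ?? 502, up.headers);
      up.pipe(res);
    },
  );
  upstream.on("error", () => {
    res.writeHead(502);
    res.end();
  });
  req.pipe(upstream);
}).listen(80);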
I thought of a different solution. I'm going to deploy the back-end to a subdomain like
http://api.myapp.example/
and deploy front-end to the main domain:
http://myapp.example/
but I think you'd better use two different hosts, one for the front-end and one for the back-end (I searched Google and this was the result of my investigation).
Another possibility (hence a separate answer) is using a feature added in Play 2.1.x: content negotiation. I think it's closest to what you wanted to get initially :)
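The idea, sketched here in Express/TypeScript rather than Play just to illustrate (the route and data are hypothetical): a single URL, with the Accept header deciding whether the response is HTML or JSON.

import express from "express";

const app = express();

// One route serves both clients: browsers (Accept: text/html) get a page,
// API consumers (Accept: application/json) get JSON.
app.get("/products/:id", (req, res) => {
  const product = { id: req.params.id, name: "Example" }; // hypothetical lookup
  if (req.accepts(["html", "json"]) === "json") {
    res.json(product);
  } else {
    res.send(`<h1>${product.name}</h1>`); // or render a proper template
  }
});

app.listen(9000);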
Indeed, it's much easier to create a MEAN stack app and use a single host, like Heroku for instance.
Your front-end is what it is: a front end for your back-end. It is easy to reach the back-end/RESTful API and the front-end like this:
http://localhost:3000/api/contacts (to access and consume your API endpoint)
http://localhost:3000/contacts (front-end)
NB: in place of localhost:3000, in production it becomes:
http://yourapp.example/api/contacts (API)
http://yourapp.example/contacts (front-end)
It's in the URL
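A minimal sketch of that layout, assuming Express (the contact data is made up):

import express from "express";

const app = express();

// API routes live under /api/...
app.get("/api/contacts", (_req, res) => {
  res.json([{ name: "Ada" }, { name: "Grace" }]); // hypothetical data
});

// Everything else is the front-end (the static build output).
app.use(express.static("public"));

app.listen(3000);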

Should website & API have a different hostnames?

The webapp I'm making is medium-sized, and it's going to be a single-page static JS+HTML app (made with Backbone, and served by nginx) which accesses an API, hosted on a proper webserver.
Should the API be under a different hostname, or same hostname but different path? What could be possible pros & cons of these options? Both options are feasible, thanks to nginx.
I would suggest an intuitively separated environment. Splitting the access locations, like example.com and api.example.com, allows the hostnames to describe the purpose of each environment. Separating them keeps things organised and clear, while using the same hostname for both could cause confusion as to what kind of request is being made.
Using example.com/api is possible as well, but could lead to future issues where directories are used for other things as well. E.g., would example.com/newfeature have a directory like example.com/newfeature/api as well?
In the end, it's all a matter of personal preference though. Pick something that works in a clear way for your environment.
I think your question is somewhat moot, as long as your code is flexible about the base URL of the API. Make sure you can configure your code (both the JavaScript and the back-end) so that all API URLs are relative to a single configuration parameter, and you will have the flexibility to put your API service anywhere you want or need to put it.
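For instance, a sketch of that single configuration point (all names are illustrative):

// Every API call goes through one base URL; moving the API between
// api.example.com and example.com/api is then a one-line config change.
const API_BASE = process.env.API_BASE ?? "https://api.example.com";

export function apiUrl(path: string): string {
  return `${API_BASE}${path}`;
}

// usage: fetch(apiUrl("/products/24"))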
I tend to think it might be a good idea to have everything on the same hostname, because the user might have disabled third-party cookies, in which case the API server won't be able to recognize them after they close the browser. Before anyone tells me the main website should serve the cookies instead: I'd like the main website to be completely static HTML/JS files, so it has no ability to serve httpOnly cookies, which is the kind of cookies I like.

iOS internal proxy in app

In one of our apps, we have to implement an online/offline feature. The caching is already done. What we have to do, however, is implement a safe way to prevent the app from opening a network connection. So my idea was: use CFNetwork to route every network call through an internal proxy which checks the status of the app. If the app is allowed to go online, it simply forwards the message. If not, it returns an HTTP error.
My question is: are there any open-source proxies out there that can handle this, or do I have to implement the proxy all by myself?
Best regards,
Michael
How are you doing the caching / making the requests? Because NSURLConnection supports caching by default, and has a cache mode NSURLRequestReturnCacheDataDontLoad that will only load from the cache (and return nil if the resource you're requesting isn't available in the cache). If you're not using NSURLConnection, what are you using instead (since this will probably affect your solution)?