Consume an API that has a rate limit

I have an API that limits calls to 10 per second from a single IP. Let's call this API-1.
I have a web app that consumes API-1. Let's call it WebApp-1.
If WebApp-1 gets more traffic and needs to make more calls per second than API-1 allows, how should I design WebApp-1's calls to API-1?

Some ideas that come to mind for approaching a rate-limited API:
Raise the API limits for your client key. Probably not possible in your case, but it may be an option in others.
Create/purchase more client accounts (access keys) for the API to raise the overall rate limit. Split traffic evenly among the keys.
Cache results on the querying side (the WebApp in your case). It depends on the application, but if the WebApp is browser-based, caching may not be effective, as there is no shared cache between clients.
Introduce a caching proxy. The WebApp makes requests to the proxy, which forwards them to the rate-limited API. This helps maintain a shared cache. Some options for implementing the proxy: Nginx, Varnish, AWS API Gateway, etc.
Introduce a query queue (synchronous). Again, if the WebApp is a browser application, you may need to put a backend service as a proxy between the WebApp and the API. The proxy would keep a steady flow of requests to the API; if there is a burst of incoming requests, it would delay processing to respect the API's rate limit (the WebApp may have to wait longer to get an answer from the proxy). Not really scalable; see the sketch after this list.
Introduce a query queue (asynchronous). The WebApp sends requests to the proxy. The proxy acknowledges receipt and returns a receipt ID. Then either the proxy makes a callback request to the WebApp when the response from the API is ready, or the WebApp polls the proxy to find out whether there is any data for a given receipt ID.
Another (obviously shady) solution is making requests from different machines and IPs. Probably not something the API owner wants to see!
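To make the synchronous-queue idea concrete, here is a minimal Python sketch, assuming a single WebApp-1 backend process makes all the outgoing calls: a small token bucket in front of every request to API-1 delays callers instead of exceeding the 10 calls/second budget. The call_api_1 helper and the commented-out upstream URL are placeholders, not part of the question:
import threading
import time

class TokenBucket:
    def __init__(self, rate, capacity):
        self.rate = rate                  # tokens added per second (API-1 allows 10/s)
        self.capacity = capacity          # maximum burst size
        self.tokens = capacity
        self.updated = time.monotonic()
        self.lock = threading.Lock()

    def acquire(self):
        # Block the caller until one request "slot" is available.
        while True:
            with self.lock:
                now = time.monotonic()
                self.tokens = min(self.capacity,
                                  self.tokens + (now - self.updated) * self.rate)
                self.updated = now
                if self.tokens >= 1:
                    self.tokens -= 1
                    return
                wait = (1 - self.tokens) / self.rate
            time.sleep(wait)              # delay instead of exceeding API-1's limit

bucket = TokenBucket(rate=10, capacity=10)

def call_api_1(path):
    bucket.acquire()
    # return requests.get(f"https://api-1.example/{path}")  # hypothetical upstream call
    return f"called {path}"

print(call_api_1("users/42"))
Note that with several WebApp-1 instances behind a load balancer, the bucket state would have to live in a shared store (for example Redis) rather than in process memory.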

Related

How to track requests from the same user in logs?

I want to be able to link the various requests that come from the same client (browser). I came up with adding a header based on a cookie:
backend servers
    description My backend
    http-request set-header Request-Id %[req.cook(AspNet.Session),sha1,hex]
    server srv_01 127.0.0.1:5000
This is going to be used only for debugging, when I want to find out what a user was doing. Should I be worried about performance? My cookie is around 300 bytes. There are also other hash functions (like xxh64 or wt6). Does it make sense to use one of those instead?
Debian Buster, haproxy 2.2, ASP.NET Core as the backend server.
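As a rough way to ballpark the raw hashing cost in isolation, the following Python sketch times only the digest of a ~300-byte cookie, not haproxy's converter overhead; the optional xxhash package is an assumption:
import hashlib
import os
import timeit

cookie = os.urandom(300)                  # stand-in for a ~300-byte session cookie
n = 100_000

t = timeit.timeit(lambda: hashlib.sha1(cookie).hexdigest(), number=n)
print(f"sha1:  {t / n * 1e6:.2f} microseconds per hash")

try:
    import xxhash                         # optional third-party package
    t = timeit.timeit(lambda: xxhash.xxh64(cookie).hexdigest(), number=n)
    print(f"xxh64: {t / n * 1e6:.2f} microseconds per hash")
except ImportError:
    pass
Either way, hashing 300 bytes once per request is tiny compared with typical request latency, so the choice of hash function is unlikely to be the deciding factor.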
If this is only for your debug environment, I don't think the performance hit is a problem, but keep in mind that every extra step added to the request path has its own overhead on the response time. You could follow one of these approaches:
Write an action filter (ActionFilter) to log the request, response, and user info.
Use an event-based pattern. This approach has less overhead on the response time because the logging is processed separately, independent of the current request thread (a rough sketch of the idea follows below).
I'm quite sure there are other patterns that can be used for logging requests and user info as well.
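A language-agnostic sketch of that event-based idea, written in Python purely as an illustration of the pattern (the question's stack is ASP.NET Core): the request path only enqueues a record, and a background worker does the slow log I/O:
import json
import queue
import threading
import time

log_queue = queue.Queue()

def log_worker():
    while True:
        record = log_queue.get()
        with open("requests.log", "a") as f:      # slow I/O happens off the request path
            f.write(json.dumps(record) + "\n")
        log_queue.task_done()

threading.Thread(target=log_worker, daemon=True).start()

def handle_request(request_id, user, path):
    # ... do the real work for the request ...
    log_queue.put({"ts": time.time(), "request_id": request_id,
                   "user": user, "path": path})   # cheap and non-blocking for the caller
    return "response"

print(handle_request("abc123", "alice", "/orders"))
log_queue.join()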

How to authenticate if auth headers are not supported on the client side?

TL;DR: How to authenticate against NGINX if auth headers are not supported on the client side?
I am building an IoT-related project using NGINX as a reverse proxy for the server-side services and 1NCE as the LTE carrier for the mobile devices. All traffic is authenticated based on HTTPBasicAuth over SSL-encrypted connections, and handling "normal" requests works as desired.
As mobile service might be interrupted and the Internet connection might be lost, I want to send SMS for critical status reports and alarm notifications. 1NCE supports mobile-originated SMS (MO SMS), which are handled by 1NCE's internal infrastructure and forwarded to a configurable API endpoint. So MO SMS are not delivered to a specific phone, but forwarded via an API request which I need to process on my side.
According to 1NCE's SMS documentation and in consultation with their customer support, SMS forwarding does not support any authentication headers. SMS forwarding can only be done by specifying an HTTPS URL (including the desired API endpoint) and a port. The incoming SMS is then wrapped in a request to the given URL and sent in the request body.
I want to add authentication to the SMS forwarding endpoint (receiving forwarded SMS on my side) as well and am currently wondering how I could achieve this. NGINX supports authentication via subrequests, which could be used to evaluate incoming requests with an internal service. So my first idea was to add some credentials to each SMS (as I am also responsible for the SMS-sending code on the mobile devices, I could implement whatever is needed) and check those credentials with an internal service called through NGINX's auth subrequest. However, this does not seem to be doable: according to this SO question, GET requests are used for the internal subrequests, so the body of the incoming POST request is discarded. Therefore, the credentials in the forwarded SMS would not be available to my internal auth service either. Extending NGINX's auth capabilities by writing a custom Lua-based plugin was my second idea, but that does not seem feasible either and is not supported by the NGINX instance I am using (Lua modules are disabled, and switching to OpenResty seems like a big step).
My last idea would be to forward all incoming requests to a Python web service (written in Flask; other services I am using are also written in Flask) and parse the forwarded SMS in Python. Based on the result of the credential evaluation, I could return a 401/Unauthorized status code if the credentials provided in the SMS (which is part of the request body) are invalid, and process the request otherwise. However, I think this approach is quite ugly, as all incoming requests have to be passed to Flask and invalid requests are not rejected at the level of my reverse proxy.
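For concreteness, a minimal sketch of that Flask fallback; the field names "secret" and "payload", the /sms path, and the shared secret are assumptions, not part of 1NCE's forwarding format:
from flask import Flask, request, abort
import hmac

app = Flask(__name__)
EXPECTED_SECRET = "change-me"          # shared secret provisioned on the mobile devices

@app.route("/sms", methods=["POST"])
def receive_sms():
    body = request.get_json(silent=True) or {}
    secret = body.get("secret", "")
    if not hmac.compare_digest(secret, EXPECTED_SECRET):
        abort(401)                     # reject forwarded SMS without valid credentials
    process_sms(body.get("payload", ""))
    return "", 204

def process_sms(payload):
    print("alarm/status report:", payload)

if __name__ == "__main__":
    app.run(port=8000)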
Do you have any ideas about how to approach this issue? What would be a reasonable approach with regard to best practices? Can I extend NGINX in a way that solves this, or should I drop NGINX entirely in favor of a "better" proxy?

Is it safe to proxy a request from https to http?

I have 2 servers, Web and Api. Web serves up webpages, and Api serves up JSON.
I want to be able to make Ajax calls from Web to Api, but I want to avoid CORS pre-flight requests. So instead, I thought I would proxy all requests for https://web.com/api/path to https://api.com/path.
The only way I've been able to get this to work is to drop the https when making the request to the Api server. In other words, it goes https://web.com/some/page -> https://web.com/api/path -> http://api.com/path.
Am I leaving myself vulnerable to an attack by dropping the https in my proxy request?
I think this would depend largely on what you mean by proxying.
If you actually use a proxy (that is, your first server relays the request to the second, and the response comes back through the first), then you're only as vulnerable as the connection between those two servers. If they're in physical proximity, on a private network, I wouldn't worry about it too much, as an attacker would have to compromise your physical network. If they're communicating over the open internet, other attacks become possible (DNS spoofing comes to mind if you don't supply an actual IP address), and I would not recommend it.
If by 'proxy' you mean the webpage makes an Ajax call to your API server, this would open things up to the same attacks that proxying across the internet could.
Of course, this all depends on what you're serving up in JSON. If any of it involves authentication or session-related information, I wouldn't leave it unencrypted. If it's just basic info that's the same for all users, you might not care. However, a skilled attacker could potentially manipulate the data with a man-in-the-middle attack, so I would still encrypt it.

HTTP/2 will support server push; what does this mean?

I've read a lot about HTTP/2 (which is still in development), and I've also heard about the server push feature, but in my head this is not clear.
Does this server push feature mean that the server will be able to send a response to the client without the latter making a request, just like a vanilla TCP connection? Or am I missing the point?
The HTTP/2 push mechanism is not a generic server push mechanism like WebSockets or server-sent events.
It is designed for a specific optimisation of HTTP conversations. Specifically, when a client asks for a resource (e.g. index.html), the server can guess that it is next going to ask for a bunch of associated resources (e.g. theme.css, jquery.js, logo.png). Typically a webpage can have tens of such associated requests.
With HTTP/1.1, the server has to wait until the client actually sends requests for these associated resources, and the client is limited by its connections to asking for only about 6 at a time. Thus it can take many round trips before all the associated resources needed by a webpage are actually sent.
With HTTP/2, the server can send push promises in the response to the index.html GET to tell the client that it is also going to send theme.css, jquery.js, logo.png, etc., as if the client had requested them. The client can then cancel those pushes or just wait for them to be sent, without incurring the extra latency of multiple round trips.
Here is a demo of push with SPDY (the basis for HTTP/2) in Jetty: https://www.youtube.com/watch?v=4Ai_rrhM8gA . Here is a blog post about the push API for HTTP/2 and SPDY in Jetty: https://webtide.com/http2-push-with-experimental-servlet-api/
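For a concrete feel of the exchange described above, here is a self-contained sketch using the Python h2 library (pip install h2); it wires a client and a server H2Connection directly to each other instead of using sockets, and the paths and example.com authority are placeholders:
import h2.config
import h2.connection
import h2.events

client = h2.connection.H2Connection(h2.config.H2Configuration(client_side=True))
server = h2.connection.H2Connection(h2.config.H2Configuration(client_side=False))
client.initiate_connection()
server.initiate_connection()
server.receive_data(client.data_to_send())   # server sees the client preface + SETTINGS
client.receive_data(server.data_to_send())   # client sees the server SETTINGS

# The client asks for index.html on stream 1.
client.send_headers(1, [(":method", "GET"), (":path", "/index.html"),
                        (":scheme", "https"), (":authority", "example.com")],
                    end_stream=True)

for event in server.receive_data(client.data_to_send()):
    if isinstance(event, h2.events.RequestReceived):
        # Before answering, the server promises theme.css on stream 2,
        # exactly as if the client had already asked for it.
        server.push_stream(event.stream_id, 2,
                           [(":method", "GET"), (":path", "/theme.css"),
                            (":scheme", "https"), (":authority", "example.com")])
        server.send_headers(event.stream_id, [(":status", "200")])
        server.send_data(event.stream_id, b"<html>...</html>", end_stream=True)

for event in client.receive_data(server.data_to_send()):
    if isinstance(event, h2.events.PushedStreamReceived):
        print("push promise received for", dict(event.headers)[b":path"])
In a real deployment you would let the web server or framework (for example Jetty's push API mentioned above) issue the promises rather than driving the protocol by hand.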
Essentially your understanding is correct; however, there is a lot more to it.
The server will only be able to send a resource to the client after a request for an HTTP page has been made and the resources required by that page to render properly (images, JavaScript files, CSS, etc.) have been identified. The mechanism responsible for this is the server-side framework. In Java, this will be Servlet 4 and possibly JSF.
A server cannot just send any resource to the client whenever it feels like it. Pushes only occur under the circumstances above, and a client is always able to reject a server's request to push a resource.
The HTTP/2 server push mechanism has been really well designed, and to get to grips with it I recommend this overview of HTTP/2 and this in-depth article diving into the internals of the HTTP/2 protocol.

Securing channels with nginx and http push module

I was able to set up nginx as a message server for building a real-time JavaScript application with Dojo. For the setup I used the nginx http_push_module, which can be configured to handle publish/subscribe requests on different "channels". A channel is "a resource representing an isolated pathway for message transmission; each channel has a single unique message queue".
Channels are identified by an id parameter in the URL used within the XHR requests.
I need to implement some sort of private channel that the application can use to push messages to individual users, but I have no idea how to implement channel authentication.
Has anybody ever used http_push_module to create private channels, or does anyone have suggestions for implementing them?
Thanks in advance for your support.
Maybe you can use my fork of http_push_module, which I've been working on; it implements fine-grained access control for channels. I just updated the README so you know how to use it. It basically uses md5 hashes, provides expiration times for channels, and offers per-client-IP/per-channel security (it additionally adds JSONP support if you need it):
https://github.com/Kronuz/nginx_http_push_module
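The fork's exact scheme is described in its README. As a generic illustration of how token-protected channels usually work (not the fork's actual API), a Python sketch: the backend signs the channel id and an expiry with a server-side secret, hands the token only to authorized clients, and the subscribe side only accepts ids whose signature verifies:
import hashlib
import hmac
import time

SECRET = b"server-side-secret"            # known only to the application backend

def channel_token(channel_id, ttl=300):
    expires = int(time.time()) + ttl
    msg = f"{channel_id}:{expires}".encode()
    sig = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return f"{channel_id}:{expires}:{sig}"

def verify_token(token):
    try:
        channel_id, expires, sig = token.rsplit(":", 2)
    except ValueError:
        return False
    if int(expires) < time.time():        # token has expired
        return False
    msg = f"{channel_id}:{expires}".encode()
    expected = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)

token = channel_token("user-42-private")
print(token, verify_token(token))
Where the verification runs depends on the setup; with the stock module it would have to happen in a backend service that issues and validates the channel ids before clients are ever given the subscribe URL.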