How to track requests from the same user in logs? - asp.net-core

I want to be able to link the various requests coming from the same client (browser). I came up with adding a header based on a cookie:
backend servers
    description My backend
    http-request set-header Request-Id %[req.cook(AspNet.Session),sha1,hex]
    server srv_01 127.0.0.1:5000
This is going to be used only for debugging, when I want to find out what a user was doing. Should I be worried about performance? My cookie is around 300 bytes. There are also other hash functions (like xxh64 or wt6). Does it make sense to use one of those instead?
Debian Buster, HAProxy 2.2, ASP.NET Core as the backend server.

If this is only for your debug environment, I think the performance impact is acceptable, but consider this: every sidecar you add has its own overhead on the response time. You could follow one of these approaches:
Write an action filter (an ASP.NET Core ActionFilter) that logs the request, response, and user info; a minimal sketch follows below.
Use an event-based pattern. This approach has less overhead on the response time because the logging is processed separately, independent of the current request thread.
I'm quite sure there are other patterns that can be used to log requests and user info as well.
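For the first option, a minimal sketch of such a filter could look like this (the Request-Id header matches the HAProxy config above; the class name and log format are illustrative):

    using System.Linq;
    using Microsoft.AspNetCore.Mvc.Filters;
    using Microsoft.Extensions.Logging;

    public class RequestLoggingFilter : IActionFilter
    {
        private readonly ILogger<RequestLoggingFilter> _logger;

        public RequestLoggingFilter(ILogger<RequestLoggingFilter> logger)
            => _logger = logger;

        public void OnActionExecuting(ActionExecutingContext context)
        {
            // Correlate on the header set by HAProxy, falling back to the
            // framework's per-request trace identifier.
            var requestId = context.HttpContext.Request.Headers["Request-Id"]
                .FirstOrDefault() ?? context.HttpContext.TraceIdentifier;
            _logger.LogInformation("Request {RequestId} by {User}: {Path}",
                requestId,
                context.HttpContext.User.Identity?.Name ?? "anonymous",
                context.HttpContext.Request.Path);
        }

        public void OnActionExecuted(ActionExecutedContext context)
            => _logger.LogInformation("Response status {StatusCode}",
                context.HttpContext.Response.StatusCode);
    }

Register it globally, e.g. services.AddControllers(o => o.Filters.Add<RequestLoggingFilter>()), so every action is covered.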

Related

Consume an API that has a rate limit

I have an API that limits calls to 10 per second per IP. Let's call this API-1.
I have a web app that consumes API-1. Let's call this WebApp-1.
If my web app has more traffic and needs to make more calls per second than allowed, how do I design WebApp-1's calls to API-1?
Some ideas for approaching a rate-limited API that come to mind (a minimal client-side throttle is sketched after the list):
Raise the API limits for your client key. Probably not your case but may be an option in some cases.
Create/purchase more client accounts (access keys) to the API to raise the overall rate limit. Split traffic among the keys evenly.
Cache results on the querying side (WebApp in your case). It depends on the application, but if WebApp is a browser-based application caching may not be effective as there's no shared cache between clients.
Introduce caching proxy. WebApp makes requests to the proxy which forwards them to the rate limited API. This will help with maintaining the shared cache. Some options to implement the proxy: Nginx, Varnish, AWS API Gateway, etc.
Introduce a query queue (synchronous). Again if WebApp is a browser application, you may need to put a backend service as a proxy between the WebApp and the API. Proxy would keep a steady flow of requests to the API. If there's a burst in incoming requests it would delay processing to respect API's rate limit (WebApp may have to wait longer to get an answer from the proxy). Not really scalable.
Introduce a query queue (asynchronous). WebApp sends requests to the proxy. Proxy acknowledges the receipt and returns a receipt ID. Then either the proxy makes a callback request to the WebApp when response from the API is ready or WebApp is polling the proxy to know if there's any data for a given receipt ID.
Another (obviously shady) solution is making requests from different machines and IPs. Probably not something API owner wants to see!
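To make the queueing idea concrete, here is a minimal client-side throttle sketch in C# (the 10 calls/second limit comes from the question; the class and method names are hypothetical):

    using System;
    using System.Net.Http;
    using System.Threading;
    using System.Threading.Tasks;

    public class ThrottledApiClient
    {
        private static readonly HttpClient Http = new HttpClient();
        // At most 10 requests may hold a slot at any given time.
        private readonly SemaphoreSlim _slots = new SemaphoreSlim(10, 10);

        public async Task<string> GetAsync(string url)
        {
            await _slots.WaitAsync();   // queue up when all 10 slots are taken
            try
            {
                return await Http.GetStringAsync(url);
            }
            finally
            {
                // Free the slot one second later, keeping request starts
                // under the 10/second limit (conservative but simple).
                _ = Task.Delay(TimeSpan.FromSeconds(1))
                        .ContinueWith(_ => _slots.Release());
            }
        }
    }

Excess requests simply wait on the semaphore, which is exactly the "WebApp may have to wait longer" trade-off described above.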

Hide request/response headers for a GET request from Fiddler or other debug proxy apps

I have a mobile app that depends heavily on API responses. I was using Charles Proxy and Fiddler to inspect the API calls made by my app, and I noticed that for one GET call I can see the full URL with all request parameters (which is fine) and the request headers (which include secret keys).
Using that information, anyone can call that API outside of the mobile app. My app has millions of users, and if someone runs a script to drive up traffic, it will also increase the load on the server. So is there any way I can secure or hide those keys?
The only approach I can think of is encryption on both the app and the API side. Is there a better way of doing it?
You can implement certificate or public-key pinning in your app (for the leaf or the root CA certificate). This makes it harder for an attacker to use a proxy and intercept HTTPS traffic. However, with Xposed and the SSL-Unpinning module, interception will still work.
Also keep in mind that APK files can be decompiled easily, so an attacker doesn't even have to go after the network traffic to extract the keys.
Therefore, the next step is to harden your app to make it resistant against manipulation via Xposed or Frida. Note that good hardening frameworks cost a lot of money; usually the protection offered rises with the cost.
See also this related question.
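As an illustration of the pinning idea, here is a minimal sketch using .NET's HttpClientHandler (assuming a .NET/Xamarin-based client; the pin value is a placeholder, and native Android/iOS apps would use their platform's own pinning mechanisms instead):

    using System;
    using System.Net.Http;
    using System.Net.Security;
    using System.Security.Cryptography;

    var handler = new HttpClientHandler
    {
        ServerCertificateCustomValidationCallback = (request, cert, chain, errors) =>
        {
            // Reject anything that already fails normal TLS validation.
            if (errors != SslPolicyErrors.None || cert is null)
                return false;

            // Compare the SHA-256 hash of the server's public key against a
            // pin baked into the app ("EXPECTED_PIN_BASE64=" is a placeholder).
            using var sha256 = SHA256.Create();
            var pin = Convert.ToBase64String(sha256.ComputeHash(cert.GetPublicKey()));
            return pin == "EXPECTED_PIN_BASE64=";
        }
    };

    var client = new HttpClient(handler);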

Why does Twitter serve every page over HTTPS (SSL)?

Is there a reason why a website such as Twitter serves all pages over HTTPS? I was under the impression that the only pages that need to be served over an encrypted channel are pages where sensitive information is being submitted or received.
I do that when developing web apps. It makes securing user data much simpler, because I don't have to think about whether or not confidential information could be passed through a particular request. If there is a performance penalty, it hasn't been bad enough to make it worth my while to start profiling. My projects have been fairly small, in terms of usage, so far.
Every page on Twitter either:
Is accessed when you are logged in and sending credentials in the request (and potentially receiving data that is private) or
Contains a login form (that shouldn't be interfered with via a man-in-the-middle attack).
Consequently every page on the site has the potential to be a page where sensitive information is being submitted or received.
Switching between HTTP and HTTPS can be tricky to do correctly.
If any resource that is served over HTTP requires authentication, some form of authentication token (typically a session cookie) will be leaked from HTTPS to HTTP (assuming the user authentication itself is done over HTTPS).
Getting the flow of pages right so that, once that token has been used over plain HTTP, it can no longer be relied upon for anything more sensitive (which would require HTTPS) can require a lot of planning in the design of the application. (There are certainly a number of websites that don't do it properly.)
Since Twitter is a website where you're always logged on (or always have the opportunity to log on securely in the corner), it seems to make sense to use HTTPS for everything.
The main overhead in HTTPS is the SSL/TLS handshake: checking the certificates, asymmetric cryptography, ... Once the connection is established, it's all symmetric cryptography, with a much lower overhead.
You'll see a number of questions here (and in other places) where people insist on having redirection rules that force plain HTTP for resources that don't need to be used securely, while forcing HTTPS for other pages. This seems misguided to me: by the time the redirection from HTTPS to HTTP happens, the handshake has already taken place. A good browser will keep the connection alive (and will be able to reuse sessions) to fetch multiple resources and pages, keeping the overhead to a minimum, almost negligible at that point.
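In ASP.NET Core terms (the stack from the first question), enforcing HTTPS for everything takes only a few lines; a minimal sketch:

    // Redirect all plain-HTTP traffic to HTTPS and send an HSTS header so
    // browsers remember to use HTTPS on future visits.
    var builder = WebApplication.CreateBuilder(args);
    var app = builder.Build();

    app.UseHsts();               // adds Strict-Transport-Security
    app.UseHttpsRedirection();   // redirects http:// requests to https://
    app.MapGet("/", () => "Hello over HTTPS");
    app.Run();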

How to use Varnish to cache RESTful API, but still use HMAC for signing/verifying each request?

I am interested in using Varnish to cache/throttle/etc. responses for a RESTful API I am creating. I may be using the term/acronym "HMAC" too loosely, but what I mean is that each request to my API should carry a header containing a hash that the client calculated by hashing parts of the request (including a timestamp) together with a shared secret. The server then calculates the same hash from the same ingredients of the request and determines whether the request is valid and should be responded to.
This works well enough, but now I would like to use Varnish to cache my API responses. The nature of HMAC requires that each request calculates the hash to verify the user is who they are, but the actual response that is returned is the same - so the meat of the API call is very much cacheable.
What I'd like (and I'm assuming this can be achieved, I just don't know HOW) is to pass the authentication task to the backend, somehow tell Varnish "yes, go ahead and respond to this request" or "no, don't respond to this request" and then from there let Varnish determine if the request can be served from cache or not.
Even more ideal would be to do something slightly fancier and allow Varnish to handle the authentication itself, or to pass the HMAC processing to something faster than the backend. For example, the API might store the client secret/public key in a Redis cache, and Varnish could then calculate the hash itself using the values from Redis.
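For concreteness, the server-side verification step described above might look like the following sketch (the canonical string format, the timestamp window, and all names are hypothetical):

    using System;
    using System.Security.Cryptography;
    using System.Text;

    public static class HmacVerifier
    {
        // Recompute the signature from the request parts and compare it with
        // the one the client sent. Returns false for stale or forged requests.
        public static bool IsValid(string method, string path, string timestamp,
                                   string sentSignature, byte[] sharedSecret)
        {
            // Reject timestamps outside a small window to limit replay attacks.
            if (!long.TryParse(timestamp, out var unixSeconds))
                return false;
            var ts = DateTimeOffset.FromUnixTimeSeconds(unixSeconds);
            if (Math.Abs((DateTimeOffset.UtcNow - ts).TotalMinutes) > 5)
                return false;

            var canonical = $"{method}\n{path}\n{timestamp}";   // hypothetical format
            using var hmac = new HMACSHA256(sharedSecret);
            var expected = Convert.ToHexString(
                hmac.ComputeHash(Encoding.UTF8.GetBytes(canonical)));

            // Constant-time comparison to avoid timing side channels.
            return CryptographicOperations.FixedTimeEquals(
                Encoding.UTF8.GetBytes(expected),
                Encoding.UTF8.GetBytes(sentSignature.ToUpperInvariant()));
        }
    }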
You should be able to implement the fancier solution in Varnish VCL code (Varnish Configuration Language) by using two Varnish Modules:
Redis vmod to fetch keys.
Varnish Digest Module for calculating/processing HMAC.
Both modules are used in production, as listed in the modules directory.
If Varnish handles the authentication in VCL, you can let Varnish cache your API backend response and deliver it only for authenticated requests.
If the HMAC implementation requires the request body:
As Gridfire points out in his/her answer, Varnish cannot access the request body, and we can/should not send the full request body in an HTTP header from the backend/application.
But we can send a hash/digest of the full request body in an HTTP header. Calculating that hash on the backend should be negligible compared to generating the output (markup, data, whatever).
AFAICT there should be no cryptographic/practical downsides to this method as long as the hash/digest and HMAC are robust and the digest is long enough (256 bits or more). Performance testing is advised, as usual.
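A minimal sketch of that idea on an ASP.NET Core backend (the X-Body-Digest header name is made up for illustration):

    using System;
    using System.Security.Cryptography;

    // Middleware that hashes the request body and exposes the digest in a
    // response header, so a front end like Varnish can feed it into an HMAC.
    app.Use(async (context, next) =>
    {
        context.Request.EnableBuffering();    // allow the body to be re-read
        using var sha256 = SHA256.Create();
        var digest = await sha256.ComputeHashAsync(context.Request.Body);
        context.Request.Body.Position = 0;    // rewind for the real handler

        context.Response.Headers["X-Body-Digest"] = Convert.ToHexString(digest);
        await next();
    });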
Varnish can easily do HMAC using the VMODs in Geir Bostad's answer, unless your HMAC implementation uses the request body as part of the hash.
Varnish does not give you access to the request body; libvmod-bodyaccess provides some functions, but I have found no way of actually getting the request body.
You could theoretically add a header containing the request body, but this is pretty bad practice: it will either bloat your HTTP requests with redundant data or break HTTP request standards if you choose to put the data only in the header. Simply put: not recommended.
An alternate solution would be to use Nginx, which can also act as an SSL terminator if you want to use HTTPS (Varnish doesn't do SSL).
Nginx has a module to run Lua scripts (Ubuntu/Debian package nginx-extras provides it without requiring you to compile it yourself), and the module brings the handy access_by_lua_file directive to allow or block access based on the result of the script.
There's an HMAC script for Nginx here.

HTTP 2 will support server push, what does this mean?

I've read a lot about HTTP/2 (which is still in development), and I've heard about the server push feature, but in my head this is not clear.
Does the server push feature mean that the server will be able to send a response to the client without the latter making a request, just like on a vanilla TCP connection? Or am I missing the point?
The HTTP/2 push mechanism is not a generic server push mechanism like WebSockets or server-sent events.
It is designed for a specific optimisation of HTTP conversations. Specifically, when a client asks for a resource (e.g. index.html), the server can guess that it is going to ask next for a bunch of associated resources (e.g. theme.css, jquery.js, logo.png, etc.). Typically a webpage can have tens of such associated requests.
With HTTP/1.1, the server had to wait until the client actually sent requests for these associated resources, and the client was limited by its connections to asking for only about 6 at a time. Thus it could take many round trips before all the associated resources needed by a webpage were actually sent.
With HTTP/2, the server can include push promises in its response to the index.html GET, telling the client that it is also going to send theme.css, jquery.js, logo.png, etc., as if the client had requested them. The client can then cancel those pushes or just wait for them to arrive without incurring the extra latency of multiple round trips.
Here is a demo of push with SPDY (the basis for HTTP/2) with Jetty: https://www.youtube.com/watch?v=4Ai_rrhM8gA. Here is a blog post about the push API for HTTP/2 and SPDY in Jetty: https://webtide.com/http2-push-with-experimental-servlet-api/
Essentially your understanding is correct; however, there is a lot more to it.
The server will only be able to send a resource to the client after a request for an HTTP page has been made and the resources required for that page to render properly (images, JavaScript files, CSS, etc.) have been identified. The mechanism responsible for this is the server-side framework. In Java, this will be Servlet 4 and possibly JSF.
A server cannot just send any resource to the client whenever it feels like it. It will only happen under the above circumstances, and a client will always be able to reject the server's push of a resource.
The mechanism of HTTP/2 server push has been really well designed; to get to grips with it, I recommend this overview of HTTP/2 and this in-depth article diving into the internals of the HTTP/2 protocol.
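As a concrete illustration: several HTTP/2 front ends (for example Apache httpd's mod_http2) can turn a Link: ...; rel=preload response header set by the application into a PUSH_PROMISE. A minimal ASP.NET Core sketch of that pattern, using the resource names from the example above:

    // Announce the page's associated resources via a preload Link header;
    // an HTTP/2-aware front end may push them alongside the HTML response.
    app.MapGet("/index.html", (HttpContext context) =>
    {
        context.Response.Headers.Append("Link",
            "</theme.css>; rel=preload; as=style, " +
            "</jquery.js>; rel=preload; as=script");
        return Results.Content("<html>...</html>", "text/html");
    });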