Clarification about TURN server authentication through REST API - webrtc

I was going through this draft to understand the usage of the REST API to access TURN services, and I am a bit confused after reading it.
Currently, I am authenticating against my TURN server using the Long-Term Credential Mechanism with a Redis database, but instead of using an actual username and password, I am using an authentication token (which expires in 8 hours) and a random string as the password.
My doubts about the draft are:
The ttl received in the response is never used (at least it is not part of RTCPeerConnection), so how exactly does the TURN server know when to expire the user?
I see no option among the turnserver arguments to specify the timestamp format, so is it fixed as a UNIX timestamp?
Does the REST API implementation offer any advantage over my implementation (considering that mine has no dependency on time sync between the WebRTC server and the TURN server)?

The timestamp that the REST endpoint embeds in the username is ttl seconds in the future; the TURN server rejects the credential once that time has passed. So the TTL in the response is just informative.
The advantage of the overall approach is that (assuming time sync, which is a solved problem) it requires no communication between the entity that generates the token and the TURN server. When deploying multiple TURN servers around the globe (see later in this I/O 2015 presentation), this is somewhat easier than syncing a Redis database.
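For reference, the ephemeral credentials described in the draft can be generated in a few lines. This is a minimal sketch assuming coturn's static-auth-secret / REST API mode, where the password is base64(HMAC-SHA1(shared secret, username)); the function name and the `timestamp:userid` ordering follow draft-uberti-behave-turn-rest:

```python
import base64
import hashlib
import hmac
import time

def make_turn_credentials(shared_secret: str, user_id: str, ttl: int = 8 * 3600):
    # The username embeds an expiry that is `ttl` seconds in the future, so
    # the TURN server needs no database lookup to know when the user expires.
    expiry = int(time.time()) + ttl
    username = f"{expiry}:{user_id}"  # timestamp:userid, per the draft
    digest = hmac.new(shared_secret.encode(), username.encode(), hashlib.sha1).digest()
    password = base64.b64encode(digest).decode()
    # The ttl field in the response is informative only.
    return {"username": username, "password": password, "ttl": ttl}
```

The TURN server, holding the same shared secret, recomputes the HMAC from the received username and compares; no Redis round trip is needed.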

Related

Optimization for GetSecret with Azure Keyvault

Our main goal for now is optimising a processing service.
The service has a system-assigned managed identity with access policies that allow it to get a secret.
This service makes 4 calls to a Key Vault. The first one takes a lot longer than the others. I'm scratching my head, because the Managed Identity token takes only 91µs to obtain. (Application Insights screenshot)
I changed the way the tokens are obtained: the program now obtains a token once and keeps reusing it across round trips. I did this by registering the credential class with AddScoped.
I assume the question is why the first call takes more time. If yes, here are a couple of generic reasons that might contribute:
The HTTPS handshake on the first call can take time
The client might need to create a connection pool
In the case of Azure Key Vault, the first call does two round trips AFAIK: one for auth, the second for the real payload
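The token-reuse change described in the question can be expressed as a small, SDK-agnostic sketch. `CachedTokenProvider` and its 5-minute refresh margin are illustrative assumptions, not Azure SDK types; in .NET the equivalent is registering the credential as a shared (scoped or singleton) service:

```python
import time

class CachedTokenProvider:
    """Fetch a token once and reuse it until shortly before expiry.

    `fetch` is any callable returning (token, expires_on_epoch_seconds);
    the 5-minute refresh margin is an assumption, not an SDK default.
    """
    def __init__(self, fetch, margin: float = 300.0):
        self._fetch = fetch
        self._margin = margin
        self._token = None
        self._expires_on = 0.0

    def get_token(self) -> str:
        # Only hit the token endpoint when no token is cached or the
        # cached one is about to expire; otherwise reuse it.
        if self._token is None or time.time() >= self._expires_on - self._margin:
            self._token, self._expires_on = self._fetch()
        return self._token
```

Reusing the underlying HTTP client the same way also preserves the connection pool, which addresses the handshake cost mentioned above.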

Intercepting, manipulating and forwarding API calls to a 3rd party API

This might be somewhat of a weird, long and convoluted question, but hear me out.
I am running licensed 3rd party closed-source proprietary software on my on-premise server that stores and manipulates data; the specifics of what it does are not important. One of the features of this software is an API that accepts requests to insert/manipulate/retrieve data. Because the software is poorly designed, there is no mechanism for writing internal scripts (at least not anymore; it has been deprecated in the newest versions), nor any events to attach to for writing code that extends the software's functionality (further manipulation of the data according to preset rules, timestamping incoming packages through a TSA, etc.).
How can I bypass the need for an internal scripting functionality in a way that still lets me e.g. timestamp an incoming package and return an appropriate response via the API to the sender in case of an error?
I have thought about using the built-in database trigger mechanisms (specifically the MongoDB Change Streams API) to intercept the incoming data and add the required hash and other timestamping-related information directly into the database. This is a neat solution, except that in case of an error (there have been instances where our timestamping authority API was down or not responding to requests) there is no way to inform the sender that the timestamping process has not gone through as expected and that the new data will not be accepted into the server (all data on the server must be timestamped by law).
Another way this could be done is by intercepting the API request somehow before it reaches its endpoint, doing whatever needs to be done to the data, and then forwarding the request further to the server's API endpoint and letting it do its thing. If I am not mistaken the concept is somewhat similar to what a reverse proxy does on the network layer - it routes incoming requests according to rules set in the configuration, removes/adds headers to the packets, encrypts the connection to the server, etc.
Finally, my short question to this convoluted setup would be: what is the best way of tackling this problem, are there any software solutions or concepts that I should be researching?
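As a sketch of the intercept-manipulate-forward idea described above, here is a minimal middleware-style handler. `timestamp_authority` and `forward` are hypothetical callables standing in for the real TSA client and the upstream API endpoint; the envelope shape is an assumption:

```python
import hashlib
import json

def intercept(request_body: bytes, timestamp_authority, forward):
    """Proxy-style handler: timestamp the payload before forwarding.

    Unlike a database trigger, this can reject the request and inform
    the sender BEFORE the data reaches the 3rd party API.
    """
    digest = hashlib.sha256(request_body).hexdigest()
    try:
        token = timestamp_authority(digest)  # may raise if the TSA is down
    except Exception as exc:
        # Fail fast: the sender learns the timestamping did not go through.
        return {"status": 502, "error": f"timestamping failed: {exc}"}
    envelope = {
        "payload": request_body.decode(),
        "sha256": digest,
        "tsa_token": token,
    }
    # Forward the enriched request to the real API endpoint.
    return forward(json.dumps(envelope).encode())
```

In practice this logic would live in a reverse proxy or API gateway sitting in front of the proprietary software's endpoint, which is the class of tooling worth researching here.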

Caching authentication calls

Imagine that we have one authentication service and two applications that use it via HTTP calls. Where should we put the cache?
Only authentication service has its cache (redis for example)
Caching is a part of application number one and two
Putting your cache inside the applications is a judgement call. It will be far more efficient if the applications can cache in-memory, but the trade-off is that the cached data might be stale. This is ultimately a per-case decision and depends on the security your application needs.
You will have to figure out what an acceptable TTL for your cache is; it could be anywhere between zero and eternal. If it's zero, then you've answered your own question: there is no value in a cache layer in the application at all.
Most applications can accept some level of staleness, at least on the order of a few seconds. If you are doing banking transactions, you likely cannot get away with this, but if you are creating a social-media application, you can likely have a TTL at least in the several-minutes range, possibly hours or days.
Just a bit of advice: since you are using HTTP for your implementation, take a look at the Cache-Control mechanism that is baked into HTTP. Your client will likely support it out of the box, and most of the large, complicated issues (cache expiration, store sizing, etc.) were solved by people long before you.
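The in-application option can be sketched as a tiny TTL cache that each application consults before calling the authentication service over HTTP. This is a minimal illustration (not thread-safe; the class and method names are assumptions):

```python
import time

class TTLCache:
    """Minimal in-memory cache with a single TTL for all entries."""
    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self._store = {}

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, stored_at = entry
        if time.time() - stored_at > self.ttl:
            # Stale: drop it so the caller makes a fresh auth call.
            del self._store[key]
            return None
        return value

    def put(self, key, value):
        self._store[key] = (value, time.time())
```

With TTL 0 (or negative) every lookup misses, which is exactly the "no value in a cache layer" case described above.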

API authentication using timestamp: What to do when the client's time setting is changed?

I am implementing a REST API authentication system.
I am basically using the method explained in this site:
http://www.thebuzzmedia.com/designing-a-secure-rest-api-without-oauth-authentication/
Basically it uses the request body to create a hash, sends it to the server along with the actual request, the server recreates and compares it, and what not...
I won't bother explaining the details. The important part is that I am using a timestamp in order to prevent "replay attacks".
Quoting from the site, it explains:
Compare the current server's timestamp to the timestamp the client sent. Make sure the difference between the two timestamps is within an acceptable time limit (5-15 mins maybe) to hinder replay attacks.
The problem I am facing now is that if the client's clock setting is modified, it may cause unexpected API authentication failures, since the timestamp varies between the client and the server.
Is there no way around this? Do I have to give up on using the timestamp?
I would highly appreciate it if anyone can help me out with a solution for this timestamp problem, or with any other way which I can prevent replay attacks.
Note: I am aware that issuing a nonce to the client is an excellent way to prevent "replay attacks", but I want to make that my last resort, since the implementation cost of creating a nonce-issuing API and the backend to manage the nonces is too large.
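The scheme described above (hashing the request body together with a timestamp under a shared secret) can be sketched as follows on the client side; the header names are illustrative assumptions, not from the linked article:

```python
import hashlib
import hmac
import time

def sign_request(api_key: str, secret: str, body: bytes):
    """Client side: sign the body plus a timestamp so the server can
    both verify integrity and reject stale (replayed) requests."""
    timestamp = str(int(time.time()))
    message = body + timestamp.encode()
    signature = hmac.new(secret.encode(), message, hashlib.sha256).hexdigest()
    # Sent alongside the actual request, e.g. as HTTP headers.
    return {"X-Api-Key": api_key, "X-Timestamp": timestamp, "X-Signature": signature}
```

Because the timestamp is covered by the HMAC, an attacker cannot simply update it on a captured request; replaying the request verbatim is what the server-side time window then defends against.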
When comparing the server's timestamp with the one the client sent, it does not have to be THE client's timestamp; it can be the timestamp previously sent by the server to the client.
You can never rely on the client's own clock, as it could be set to anything or be on the other side of the world.
When the client connects to the server for the first time, the server can reply with a timestamp of its own, which is stored on the client; on each subsequent request, the client must send back the last timestamp it received.
I think you want your timestamp to be in UTC, as the updated article indicates.
Store your times as UTC, e.g. as the number of seconds since the Unix epoch.
Display your time fields with date and time formatting that reflects the user's timezone.

Shared Key Lite authentication scheme validity with azure storage service

I read on the MSDN site [Link] about the Shared Key Lite authentication scheme for Azure Storage service access. It is mentioned that a Shared Key Lite signature is valid for 15 minutes, which mitigates replay attacks. But my question is: why such a long validity duration? Replay attacks can still happen within that 15-minute span, right?
But my question is, why such a long duration for validity?
Think of these 15 minutes as a buffer to take care of clock skew. It is entirely possible that the clock on the machine where you create the authorization header is not in sync with the clocks in Windows Azure, and you obviously don't want to require an exact time match between the two systems for authorization to succeed.
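The skew-buffer idea can be sketched as a server-side check. This illustrates the general timestamp-window technique (matching the signing scheme from the previous question), not Azure's actual Shared Key Lite validation:

```python
import hashlib
import hmac
import time

MAX_SKEW = 15 * 60  # 15-minute window, as in the Shared Key Lite discussion

def verify_request(secret: str, body: bytes, timestamp: str, signature: str) -> bool:
    """Server side: recompute the signature, then allow a clock-skew window.

    A wider window tolerates more client clock drift, but also leaves a
    longer opportunity to replay a captured request -- hence the trade-off.
    """
    expected = hmac.new(secret.encode(), body + timestamp.encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signature):
        return False
    return abs(time.time() - int(timestamp)) <= MAX_SKEW
```

Within the window a captured request can still be replayed verbatim, which is why schemes that need stronger guarantees add a nonce cache on top of the timestamp.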