How to synchronously refresh an access token with Project Reactor - spring-webflux

I'm accessing an external REST API which is secured using OpenID Connect. I must run a job which calls the API several times before it completes. This job may be executed in parallel using several concurrent threads, so I use a service for API access which is instantiated for each job:
private AccessAndRefreshTokens tokens; // initially obtained from somewhere else

public Mono<Result> callTheApi() {
    return createWebClientWith(tokens.accessToken).executeRequest();
}

public Mono<Result> callOtherApiFunction() {
    return createWebClientWith(tokens.accessToken).executeRequest();
}

public Mono<Result> callYetAnotherApiFunction() {
    return createWebClientWith(tokens.accessToken).executeRequest();
}
As the job executes, the access token might expire between two API calls. To prevent this, I'd like to check the validity of the access token before every request and refresh it if necessary.
My first idea was to do the validity check inside a flatMap operator:
Mono.just(tokens).flatMap(currentTokens -> {
    if (accessTokenExpired) {
        return refreshAccessToken()
                .doOnNext(refreshedTokens -> this.tokens = refreshedTokens)
                .map(refreshedTokens -> refreshedTokens.accessToken);
    } else {
        return Mono.just(currentTokens.accessToken);
    }
}).flatMap(accessToken -> createWebClientWith(accessToken).executeRequest());
However, it seems to me that this would trigger the refresh several times if multiple threads access the service while a refresh has not yet completed. As a consequence I would end up refreshing the access token several times in a very short window, which might fail due to rate limits.
I am new to the whole reactive thing, and I suppose a synchronized block is not a desirable option. I tried to figure out something using Reactor's Processor, but I could not find a satisfying solution.
So is there a way to ensure the access token is refreshed only once? And how do I achieve this without using blocking code?
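For reference, one common non-blocking pattern for this (a sketch only, not from the original thread; TokenProvider, invalidate, and the body of refreshAccessToken are illustrative) is to cache the in-flight refresh Mono so that concurrent subscribers share a single refresh instead of each starting their own:

import java.util.concurrent.atomic.AtomicReference;
import reactor.core.publisher.Mono;

class TokenProvider {

    private final AtomicReference<Mono<String>> tokenMono = new AtomicReference<>();

    // All callers subscribe to the same cached Mono: the first subscription
    // triggers the refresh, and subscribers arriving while it is in flight
    // simply wait for the same result instead of firing their own refresh.
    Mono<String> accessToken() {
        return Mono.defer(() -> tokenMono.updateAndGet(
                cached -> cached != null ? cached : refreshAccessToken().cache()));
    }

    // Call this when the token is expired or a request fails with 401; the
    // next accessToken() call then performs exactly one new refresh.
    void invalidate() {
        tokenMono.set(null);
    }

    // Hypothetical: the actual OpenID Connect refresh_token grant via WebClient.
    private Mono<String> refreshAccessToken() {
        return Mono.just("fresh-access-token");
    }
}

Because Monos are lazy and cache() shares a single upstream subscription while replaying its result to everyone, no synchronized block or blocking call is needed; the expiry check simply calls invalidate() before asking for a token. It may also be worth checking whether Spring Security's ServerOAuth2AuthorizedClientExchangeFilterFunction can manage the refresh for you before rolling your own.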

Related

Kotlin Coroutines: suspending subsequent access token renewal requests

I have an app which authenticates with a back-end and receives a long-lived refresh token and short-lived access token back. We use the access token for authorizing our requests, and every time it expires, we trigger a token renewal (using the refresh token) before we retry the API call with the new tokens. Pretty standard stuff.
It's possible to trigger multiple network calls simultaneously (asynchronously), but if they all have invalid access tokens, we want only the first to go through and refresh the token, while the subsequent calls wait for the initial one to complete before they continue. Currently, we've implemented it like this:
private var renewingDeferred: Deferred<Unit>? = null

suspend fun renewTokens() {
    coroutineScope {
        if (renewingDeferred == null) {
            renewingDeferred = async {
                try {
                    tokens = tokensApi.renew().await()
                } finally {
                    renewingDeferred = null
                }
            }
        } else {
            renewingDeferred?.await()
        }
    }
}
The idea is that every request that has an invalid access token will call renewTokens() and then retry with new headers. The caller of renewTokens() shouldn't care about whether it actually renews or just waits for a previous renewal to finish, as long as it knows that tokens are renewed once the function returns (is no longer suspended). And as far as I can tell, it works fine, but I'm not completely sure about the code, specifically the renewingDeferred = null part of the finally block.
For instance, say request A starts renewing, request B is triggered and starts awaiting request A, then A finishes, sets renewingDeferred to null, but does setting that value to null somehow affect request B's awaiting status?
Also, what if an exception is thrown when calling the API? I don't have a catch block, sort of assuming that await()ing on a deferred that throws will somehow rethrow, but exception handling in coroutines isn't really my strong side. Would love to get some advice on this.
Also, if anyone has a general idea about how this should be solved in a different manner, all suggestions are welcome!
You can solve this by using Mutex.
Create one property for coroutine locking, and when renewTokens() is called, lock the Mutex until the token is renewed.
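A minimal sketch of that approach, assuming the same tokens and tokensApi as in the question (tokensAreStillInvalid() is a hypothetical expiry check you would supply):

import kotlinx.coroutines.sync.Mutex
import kotlinx.coroutines.sync.withLock

private val renewMutex = Mutex()

suspend fun renewTokens() {
    renewMutex.withLock {
        // Re-check inside the lock: a coroutine that was suspended here while
        // another one renewed the tokens must not renew them a second time.
        if (tokensAreStillInvalid()) {
            tokens = tokensApi.renew().await()
        }
    }
}

withLock releases the lock in a finally block, so an exception from renew() propagates to the caller that actually performed the renewal, while waiting callers simply acquire the lock next, re-check the tokens, and retry if needed.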

Conditionally returning a custom response object to the client in Web API 2

I have a Web API 2 service that will be deployed across 4 production servers. When a request doesn't pass validation, a custom response object is generated and returned to the client.
A rudimentary example:
if (!ModelState.IsValid)
{
    var responseObject = responseGenerator.GetResponseForInvalidModelState(ModelState);
    return Ok(responseObject);
}
Currently the responseGenerator is aware of what environment it is in and generates the response accordingly. For example, in development it'll return a lot of detail, but in production it'll only return a simple failure status.
How can I implement a "switch" that turns details on without requiring a round trip to the database each time?
Due to the nature of our environment using a config file isn't realistic. I've considered using a flag in the database and then caching it at the application layer but environmental constraints make refreshing the cache on all 4 servers very painful.
I ended up going with the parameter suggestion and then implementing a token system on the back end. If a debug token is present in the request, the service validates it against the database. If it's a valid and active token, it returns the additional detail.
This allows us to control things from our end while keeping things simple for the vendors, and it only adds that extra round trip to the database during debugging.
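A rough sketch of what that check could look like (the header name, IDebugTokenRepository, and IsValidAndActive are illustrative, not from the original answer):

using System.Linq;
using System.Net.Http;

// Hypothetical repository that checks the token against the database.
public interface IDebugTokenRepository
{
    bool IsValidAndActive(string token);
}

public class ResponseDetailToggle
{
    private readonly IDebugTokenRepository debugTokens;

    public ResponseDetailToggle(IDebugTokenRepository debugTokens)
    {
        this.debugTokens = debugTokens;
    }

    // True only when the request carries a debug token the database confirms
    // is valid and active; normal production traffic never hits the database.
    public bool ShouldReturnDetails(HttpRequestMessage request)
    {
        if (!request.Headers.TryGetValues("X-Debug-Token", out var values))
            return false;

        return debugTokens.IsValidAndActive(values.FirstOrDefault());
    }
}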

Ensure a web server's query will complete even if the client browser page is closed

I am trying to write a control panel to
Inform about certain KPIs
Enable the user to initiate certain requests/jobs by pressing a button that then runs a stored proc on the DB, sets a specific setting, etc.
So far, so good, except I would like to run some bigger jobs where the length of time the job runs for is unknown and could exceed both the script timeout period AND the time the user is willing to wait for a response.
What I want is a "fire and forget" process so the user hits the button and even if they kill the page or turn off their phone they know the job has been initiated and WILL complete.
I was looking into C#'s BeginExecuteNonQuery, which is an asynchronous call to the query, so the proc is fired but the code doesn't have to wait for a response from it to carry on. However, I don't know what happens when the page/app that fired it is shut.
I was also thinking of some sort of Ajax command that fires the code in a page behind the scenes so the user doesn't know it is running, but then again I believe that if the user shuts the page down, the script will die and the command will die on the server as well.
The only way I know of for certain is a "queue" table: jobs are inserted into this table, and a SQL Server Agent job comes along every minute or two checking for new inserts and runs the code if there are any. That way it is all on the DB, and only a DB crash will destroy it. It won't help with multiple long-running jobs waiting to run concurrently, but it's the only thing I can be sure of that will ensure the code is run at all.
Any ideas?
Any language is okay.
Since HTTP is effectively one-way from the server's point of view, a request, once received, runs for the full amount of time regardless of what the browser does afterwards. The governing factor isn't the browser, but how long the web site itself will allow an action to continue.
IIS (and in general, web servers) have a timeout period for requests, where if the work being done takes simply too long, the request is terminated. This would involve abruptly stopping whatever is taking so long, such as a database call, running code, and so on.
Simply making your long-running actions asynchronous may seem like a good idea; however, I would recommend against it. The reason is that in ASP and ASP.NET, asynchronously-called code still consumes a thread in a way that blocks other legitimate requests from getting through (in some cases you can end up consuming two threads!). This can have performance implications in non-obvious ways. It's better to just increase the timeout and allow the synchronously blocking task to complete. There's nothing special you have to do to make such a request complete fully; it will run to completion even if the sender closes his browser or turns off his phone immediately afterwards (presuming the entire request was received).
If you're still concerned about making certain work finish, no matter what is going on with the web request, then it's probably better to create an out-of-process server/service that does the work and to which such tasks can be handed off. Your web site then invokes a method that, inside the service, starts its own async thread to do the work and then immediately returns. Perhaps it also returns a request ID, so that the web page can check on the status of the requested work later through other methods.
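To make the hand-off concrete, here is a minimal in-process sketch of that idea (JobRunner and its members are illustrative assumptions; in a real deployment the consumer loop would live in a separate Windows service and the status would be persisted to the database):

using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

public class JobRunner
{
    private readonly BlockingCollection<(Guid Id, Action Work)> _queue =
        new BlockingCollection<(Guid Id, Action Work)>();
    private readonly ConcurrentDictionary<Guid, string> _status =
        new ConcurrentDictionary<Guid, string>();

    public JobRunner()
    {
        // One long-lived consumer; its work survives the web request ending.
        Task.Factory.StartNew(Consume, TaskCreationOptions.LongRunning);
    }

    // Called from the web request: enqueue and return immediately.
    public Guid Enqueue(Action work)
    {
        var id = Guid.NewGuid();
        _status[id] = "queued";
        _queue.Add((id, work));
        return id; // the page can poll GetStatus(id) later
    }

    public string GetStatus(Guid id)
    {
        return _status.TryGetValue(id, out var status) ? status : "unknown";
    }

    private void Consume()
    {
        foreach (var (id, work) in _queue.GetConsumingEnumerable())
        {
            _status[id] = "running";
            try
            {
                work();
                _status[id] = "completed";
            }
            catch (Exception ex)
            {
                _status[id] = "failed: " + ex.Message;
            }
        }
    }
}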
You can use an asynchronous method and call the query from it.
A synchronous method can be changed into an asynchronous one in the following manner.
Consider that you have a TestMethod to be called asynchronously:

using System.Threading;

// Create an async handler delegate matching the method's signature:
public delegate string AsyncMethodCaller(out int threadId);

class AsyncDemo
{
    public string TestMethod(out int threadId)
    {
        threadId = Thread.CurrentThread.ManagedThreadId;
        // call your query here
        return "query finished";
    }
}
In your main program, or wherever you have to call the TestMethod:
public static void Main()
{
    // The asynchronous method puts the thread id here.
    int threadId;

    // Create an instance of the test class.
    AsyncDemo ad = new AsyncDemo();

    // Create the delegate.
    AsyncMethodCaller caller = new AsyncMethodCaller(ad.TestMethod);

    // Initiate the asynchronous call.
    IAsyncResult result = caller.BeginInvoke(out threadId, null, null);

    // Call EndInvoke to wait for the asynchronous call to complete
    // and to retrieve the result.
    string returnValue = caller.EndInvoke(out threadId, result);

    Console.WriteLine("The call executed on thread {0}, with return value \"{1}\".",
        threadId, returnValue);
}
From my experience, a Classic ASP or ASP.NET page will run until complete, even if the client disconnects, unless you have something in place for checking that the client is still connected (and do something if they are not), or a timeout is reached.
However, it would probably be better practice to run these sorts of jobs as scheduled tasks.
On submission, your web page could record in a database that the task needs to be run; when the scheduled task runs, it checks for this and starts the job.
Many web hosts and/or web control panels allow you to create scheduled tasks that call a URL on schedule.
Alternately if you have direct access to the web server you could create a scheduled task on the server to call a URL on schedule.
Or, in ASP.NET, you can put some code in global.asax to run on a schedule. Be aware, though, that if your website is set to stop after a certain period of inactivity then this will not work unless there is frequent continuous activity.
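For completeness, a sketch of that global.asax approach (RunPendingJobs is a hypothetical method that would read your queue table; the inactivity caveat above still applies):

using System;
using System.Threading;
using System.Web;

public class Global : HttpApplication
{
    // Keep a static reference so the timer isn't garbage collected.
    private static Timer _jobTimer;

    protected void Application_Start(object sender, EventArgs e)
    {
        // Check the queue table every minute.
        _jobTimer = new Timer(_ => RunPendingJobs(), null,
                              TimeSpan.Zero, TimeSpan.FromMinutes(1));
    }

    private static void RunPendingJobs()
    {
        // Hypothetical: read pending rows from the queue table and run them.
    }
}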

In the new ASP.NET Web API, how do I design for "Batch" requests?

I'm creating a web API based on the new ASP.NET Web API. I'm trying to understand the best way to handle people submitting multiple data-sets at the same time. If they have 100,000 requests it would be nice to let them submit 1,000 at a time.
Let's say I have a create new Contact method in my Contacts Controller:
public string Put(Contact _contact)
{
    // add new _contact to repository
    repository.Add(_contact);

    // return success
    return "success";
}
What's the proper way to allow users to "Batch" submit new contacts? I'm thinking:
public string BatchPut(IEnumerable<Contact> _contacts)
{
    foreach (var contact in _contacts)
    {
        repository.Add(contact);
    }

    return "success";
}
Is this a good practice? Will this bind a PUT request whose body is a JSON array of Contacts (assuming they are correctly formatted)?
Lastly, any tips on how best to respond to Batch requests? What if 4 out of 300 fail?
Thanks a million!
When you PUT a collection, you are either inserting the whole collection or replacing an existing collection as if it were a single resource. It is very similar to GET, DELETE or POST on a collection. It is an atomic operation. Using it as a substitute for individual calls to PUT a contact may not be very RESTful (but that is really open for debate).
You may want to look at HTTP pipelining and send multiple PutContact requests over the same socket. With each request you can return a standard HTTP status for that single request.
I implemented batch updates in the past with SOAP and we encountered a number of unforeseen issues when the system was under load. I suspect you will run into the same issues if you don't pay attention.
For example, the database would time out in the middle of the batch update, and then all hell broke loose in terms of failures, reliability, transactions, etc. And the poor client had to figure out what was actually updated and try again.
When there were too many records to update, the HTTP request would time out because we took too long. That opened another can of worms.
Another concern was how much data we would accept during the update. Is 10MB of contacts enough? Perhaps 1MB? Larger buffers have numerous implications in terms of memory usage and security.
Hence my suggestion to look at HTTP pipelining.
Update
My suggestion would be to handle batch creation of contacts as an async process. Just assume that a "job" is the same as a "batch create" process. So the service may look as follows:
public class JobService
{
    // Post
    public void Create(CreateJobRequest job)
    {
        // 1. Create job in the database with status "pending"
        // 2. Save job details to disk (or S3)
        // 3. Submit the job to MSMQ (or SQS)
        // 4. For 20 seconds, poll the database to see if the job completed
        // 5. If the job completed, return 201 with a URI to the "Get" method below
        // 6. If not, return 202 (aka the request was accepted for processing, but has not completed)
    }

    // Get
    public Job Get(string id)
    {
        // 1. Fetch the job from the database
        // 2. Return the job if it exists, or 404
    }
}
The background process that consumes items from the queue can update the database, or alternatively perform a PUT against the service, to update the job's status to running and then completed.
You'll need another service to navigate through the data that was just processed, address errors and so forth.
Your background process may need to be tolerant of validation errors. If not, or if your service does validation up front (assuming you are not doing database calls etc. for which response times cannot be guaranteed), you can return a structure like CreateJobResponse that contains enough information for your client to fix the issue and resubmit the request. If you have to do validation that is time consuming, do it in the background process, mark the job as failed, and update the job with the information that will allow the client to fix the errors and resubmit the request. This assumes that the client can do something with the fact that the job failed.
If the Create method breaks the job request into many smaller "jobs", you'll have to deal with the fact that it may not be atomic, which poses numerous challenges for monitoring whether jobs completed successfully.
A PUT operation is supposed to replace a resource. Normally you do this against a single resource, but when doing it against a collection it would mean you replace the original collection with the set of data passed. Not sure if you mean to do that, but I am assuming you are just updating a subset of the collection, in which case a PATCH method would be more appropriate.
Lastly, any tips on how best to respond to Batch requests? What if 4 out of 300 fail?
That is really up to you. There is only a single response so you can send a 200 OK or a 400 Bad Request and put the details in the body.
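One hedged way to structure that single response body (the class names here are illustrative, not an established convention) is to report a result per submitted item, so the client knows exactly which 4 of the 300 failed and why:

using System.Collections.Generic;

// One entry per submitted contact.
public class BatchItemResult
{
    public int Index { get; set; }      // position in the submitted array
    public bool Succeeded { get; set; }
    public string Error { get; set; }   // null when Succeeded is true
}

public class BatchResult
{
    public int Total { get; set; }
    public int Failed { get; set; }
    public List<BatchItemResult> Items { get; set; }
}

Whether you pair this with a 200 OK regardless of failures, or a 400 when anything failed, is then just a convention you document for your clients.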

How to cancel a WCF service call?

I have a WCF function that executes for a long time, so I call the function in the UI with a BackgroundWorker. I want to offer a feature to cancel the execution, so I abort the ICommunicationObject; the problem is that the service execution is not stopping. Is there any way to stop service execution in this case?
You may not need a BackgroundWorker. You can either make the operation IsOneWay, or implement the asynchronous pattern. To prevent threading issues, consider using the SynchronizationContext. Programming WCF Services does a great job at explaining these.
Make a CancelOperation() method which sets a static ManualResetEvent in your service, and check this event frequently in your operation method. It can instead be CancelOperation(Guid operationId) if your service can process multiple operation calls concurrently.
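A minimal sketch of that pattern (the service contract, the single-instance/multiple-concurrency behavior, and DoOneUnitOfWork are illustrative assumptions):

using System.ServiceModel;
using System.Threading;

[ServiceContract]
public interface ILongRunningService
{
    [OperationContract] void LongOperation();
    [OperationContract] void CancelOperation();
}

// ConcurrencyMode.Multiple so CancelOperation can run while LongOperation is in progress.
[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single,
                 ConcurrencyMode = ConcurrencyMode.Multiple)]
public class LongRunningService : ILongRunningService
{
    private static readonly ManualResetEvent CancelRequested = new ManualResetEvent(false);

    public void LongOperation()
    {
        CancelRequested.Reset();

        for (int step = 0; step < 1000; step++)
        {
            // Poll the event between units of work and bail out once it is set.
            if (CancelRequested.WaitOne(0))
                return;

            DoOneUnitOfWork(step); // hypothetical unit of the long-running job
        }
    }

    public void CancelOperation()
    {
        CancelRequested.Set();
    }

    private void DoOneUnitOfWork(int step) { /* ... */ }
}

Note that the cancel request has to arrive on a separate call (e.g. from a second proxy), which is why the concurrency mode matters here.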
One important thing to understand if you're using the async calls is that there's still no way to cancel a request and prevent a response coming back from the service once it's started. It's up to your UI to be intelligent in handling responses to avoid race conditions. Fortunately there's a simple way of doing this.
This example is for searching orders, driven by a UI. Let's assume a search may take a few seconds to return results and the user runs two searches back to back.
If the first search returns after the second, you need to make sure you don't display the results of the first search.
private int _searchRequestID = 0; // need one for each WCF method you call

// Call our service...
// The call is made using the overload of the Async method with 'UserToken'
// (the last parameter). When the call completes we check that the ID matches,
// to avoid a nasty race condition.
_searchRequestID++;

client.SearchOrdersCompleted += (s, e) =>
{
    if (_searchRequestID != (int)e.UserState)
    {
        return; // a newer search has been issued; ignore this stale response
    }

    // ok to handle response ...
};

client.SearchOrdersAsync(searchMessage, _searchRequestID);