Spring WebFlux Web Client - Iterating paged REST API - spring-webflux

I need to get the items from all pages of a pageable REST API. I also need to start processing items as soon as they are available, without waiting for all the pages to be loaded. To do so, I'm using Spring WebFlux and its WebClient, and I want to return a Flux<Item>.
Also, the REST API I'm using is rate limited, and each response to it contains headers with details on the current limits:
Size of the current window
Remaining time in the current window
Request quota in window
Requests left in current window
The response to a single page request looks like:
{
  "data": [],
  "meta": {
    "pagination": {
      "total": 10,
      "current": 1
    }
  }
}
The data array contains the actual items, while the meta object contains pagination info.
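For reference, here is a minimal sketch of the wrapper classes the code below appears to assume (the class and accessor names come from the snippets; the exact fields and types are my assumption):
class Paginated {
    private List<Item> data;   // the actual items
    private Meta meta;         // pagination info
    private Limits limits;     // filled in from the rate-limit headers
    // getters/setters used below: getData(), getMeta(), getLimits(), setLimits(...)
}
class Meta {
    private Pagination pagination; // getPagination()
}
class Pagination {
    private int total;   // total number of pages
    private int current; // current page number
}
class Limits {
    // the code below passes the header strings straight in, so the setters
    // presumably parse them into the numbers used by getWindowRemaining() / getRequestsQuota()
    private long windowSize;
    private long windowRemaining;
    private long requestsQuota;
    private long requestsLeft;
}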
My current solution first does a "dummy" request, just to get the total number of pages, and the rate limits.
Mono<T> paginated = client.get()
    .uri(uri)
    .exchange()
    .flatMap(response -> {
        HttpHeaders headers = response.headers().asHttpHeaders();
        Limits limits = new Limits();
        limits.setWindowSize(headers.getFirst("X-Window-Size"));
        limits.setWindowRemaining(headers.getFirst("X-Window-Remaining"));
        limits.setRequestsQuota(headers.getFirst("X-Requests-Quota"));
        limits.setRequestsLeft(headers.getFirst("X-Requests-Remaining"));
        return response.bodyToMono(Paginated.class)
            .map(paginated -> {
                paginated.setLimits(limits);
                return paginated;
            });
    });
Afterwards, I emit a Flux containing page numbers, and for each page, I do a REST API request, each request being delayed enough so it doesn't get past the limit, and return a Flux of extracted items:
return paginated.flatMapMany(paginated -> {
    return Flux.range(1, paginated.getMeta().getPagination().getTotal())
        .delayElements(Duration.ofMillis(paginated.getLimits().getWindowRemaining() / paginated.getLimits().getRequestsQuota()))
        .flatMap(page -> {
            return client.get()
                .uri(pageUri)
                .retrieve()
                .bodyToMono(Paginated.class)
                .flatMapMany(p -> Flux.fromIterable(p.getData()));
        });
});
This does work, but I'm not happy with it because:
It does an initial "dummy" request to get the number of pages, and then repeats the same request to get the actual data.
It gets the rate limits only with the initial request and assumes the limits won't change (e.g., that it is the only client using the API) - which may not be true, in which case it will get an error for exceeding the limit.
So my question is: how can I refactor this so that it doesn't need the initial request, but instead gets the limits, page count and data from the first real request, and then continues through all pages while updating (and respecting) the limits?

I think this code will do what you want. The idea is to build a Flux that makes a call to your resource server and, while handling the response, pushes a new event onto that Flux so that the next page gets called.
The code is composed of:
A simple wrapper that contains the next page to call and the delay to wait before executing the call:
private class WaitAndNext {
    private String next;
    private long delay;
}
A FluxProcessor that will make the HTTP calls and process the responses:
FluxProcessor<WaitAndNext, WaitAndNext> processor = DirectProcessor.<WaitAndNext>create();
FluxSink<WaitAndNext> sink = processor.sink();
processor
    .flatMap(x -> Mono.just(x).delayElement(Duration.ofMillis(x.delay)))
    .map(x -> WebClient.builder()
        .baseUrl(x.next)
        .defaultHeader("Accept", "application/json")
        .build())
    .flatMap(x -> x.get()
        .exchange()
        .flatMapMany(z -> manageResponse(sink, z))
    )
    .subscribe(........);
I split out a method that only manages the response: it simply unwraps your data and adds a new event to the sink (the event being the next page to call after the given delay).
private Flux<Data> manageResponse(FluxSink<WaitAndNext> sink, ClientResponse resp) {
    if (resp.statusCode() != HttpStatus.OK) {
        sink.error(new IllegalStateException("Status code invalid"));
    }
    WaitAndNext wn = new WaitAndNext();
    HttpHeaders headers = resp.headers().asHttpHeaders();
    wn.delay = Integer.parseInt(headers.getFirst("X-Window-Remaining")) / Integer.parseInt(headers.getFirst("X-Requests-Quota"));
    return resp.bodyToMono(Item.class)
        .flatMapMany(p -> {
            if (p.paginated.current == p.paginated.total) {
                sink.complete();
            } else {
                wn.next = "https://....?page=" + (p.paginated.current + 1);
                sink.next(wn);
            }
            return Flux.fromIterable(p.getData());
        });
}
Now we just need to initialize the system by requesting the first page with no delay:
WaitAndNext wn = new WaitAndNext();
wn.next = "https://....?page=1";
wn.delay = 0;
sink.next(wn);
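As a side note, on recent Reactor versions (3.4+) FluxProcessor and DirectProcessor are deprecated; the same feedback loop can be built on the Sinks API. A rough sketch under that assumption, reusing WaitAndNext and manageResponse from above (with manageResponse changed to take the Sinks.Many and call tryEmitNext/tryEmitComplete), and assuming a pre-built WebClient named client:
// Rough sketch only: the same "emit the next page back into the stream" idea, built on Sinks.
Sinks.Many<WaitAndNext> sink = Sinks.many().unicast().onBackpressureBuffer();

Flux<Data> items = sink.asFlux()
    .flatMap(x -> Mono.just(x).delayElement(Duration.ofMillis(x.delay)))
    .flatMap(x -> client.get()
        .uri(x.next)
        .exchange()
        .flatMapMany(resp -> manageResponse(sink, resp)));

// Kick things off with the first page, no delay:
WaitAndNext first = new WaitAndNext();
first.next = "https://....?page=1";
first.delay = 0;
sink.tryEmitNext(first);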

Related

Chaining Reactive Asynchronous calls in Spring

I'm very new to the Spring Reactor project.
Until now I've only used Mono from WebClient .bodyToMono() steps, and mostly block() those Monos or .zip() multiple of them.
But this time I have a use case where I need to asynchronously call methods in multiple service classes, and each of those service classes calls multiple backend APIs.
I understand Project Reactor doesn't provide an asynchronous flow by default, but we can publish and/or subscribe on different threads to make the code asynchronous, and that's what I am trying to do.
I tried to read the documentation (the Reactor reference), but it's still not clear to me.
For the purpose of this question, I'm making up an imaginary scenario that is a little closer to my use case.
Let's assume we need to get a search response from google for some texts searched under images.
Example Scenario
Let's have an endpoint in a Controller
This endpoint accepts the following object from request body
class MultimediaSearchRequest {
    Set<String> searchTexts; // many texts
    boolean isAddContent;
    boolean isAddMetadata;
}
In the controller, I'll break the above single request object into multiple objects of the type below.
class MultimediaSingleSearchRequest {
    String searchText;
    boolean isAddContent;
    boolean isAddMetadata;
}
This Controller talks to 3 Service classes.
Each of the service classes has a method searchSingleItem.
Each service class uses a few different backend APIs, but finally combines the results of those API responses into the same type of response class; let's call it MultimediaSearchResult.
class JpegSearchHandleService {
    public MultimediaSearchResult searchSingleItem(MultimediaSingleSearchRequest req) {
        return combineAllImageData(
            getNameApi(req),
            getImageUrlApi(req),
            getContentApi(req) // don't call if req.isAddContent is false
        );
    }
}
class GifSearchHandleService {
    public MultimediaSearchResult searchSingleItem(MultimediaSingleSearchRequest req) {
        return combineAllImageData(
            getNameApi(req),
            gitPartApi(req),
            someRandomApi(req),
            someOtherRandomApi(req)
        );
    }
}
class VideoSearchHandleService {
    public MultimediaSearchResult searchSingleItem(MultimediaSingleSearchRequest req) {
        return combineAllImageData(
            getNameApi(req),
            codecApi(req),
            commentsApi(req),
            anotherApi(req)
        );
    }
}
In the end, my controller returns the response as a List of MultimediaSearchResult
class MultimediaSearchResponse {
    List<MultimediaSearchResult> results;
}
If I want to do all of this asynchronously using Project Reactor, how can I achieve it?
For example, calling the searchSingleItem method in each service for each searchText asynchronously, and even within the services calling each backend API asynchronously (I'm already using WebClient and converting the response with bodyToMono for the backend API calls).
First, I will outline a solution for the upper "layer" of your scenario.
The code (a simple simulation of the scenario):
public class ChainingAsyncCallsInSpring {

    public Mono<MultimediaSearchResponse> controllerEndpoint(MultimediaSearchRequest req) {
        return Flux.fromIterable(req.getSearchTexts())
                .map(searchText -> new MultimediaSingleSearchRequest(searchText, req.isAddContent(), req.isAddMetadata()))
                .flatMap(multimediaSingleSearchRequest -> Flux.merge(
                        classOneSearchSingleItem(multimediaSingleSearchRequest),
                        classTwoSearchSingleItem(multimediaSingleSearchRequest),
                        classThreeSearchSingleItem(multimediaSingleSearchRequest)
                ))
                .collectList()
                .map(MultimediaSearchResponse::new);
    }

    private Mono<MultimediaSearchResult> classOneSearchSingleItem(MultimediaSingleSearchRequest req) {
        return Mono.just(new MultimediaSearchResult("1"));
    }

    private Mono<MultimediaSearchResult> classTwoSearchSingleItem(MultimediaSingleSearchRequest req) {
        return Mono.just(new MultimediaSearchResult("2"));
    }

    private Mono<MultimediaSearchResult> classThreeSearchSingleItem(MultimediaSingleSearchRequest req) {
        return Mono.just(new MultimediaSearchResult("3"));
    }
}
Now, some rationale.
In the controllerEndpoint() function, first we create a Flux that will emit every single searchText from the request. We map these to MultimediaSingleSearchRequest objects, so that the services can consume them with the additional metadata that was provided with the original request.
Then, Flux::flatMap the created MultimediaSingleSearchRequest objects into a merged Flux, which (as opposed to Flux::concat) ensures that all three publishers are subscribed to eagerly, i.e. they don't wait for one another. It works best in exactly this kind of scenario, where several independent publishers need to be subscribed to at the same time and their order is not important.
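A quick illustration of the difference (made-up delays, not from the original answer):
// merge subscribes to both sources eagerly and interleaves elements as they arrive;
// concat subscribes to the second source only after the first completes.
Flux<String> first = Flux.just("a1", "a2").delayElements(Duration.ofMillis(50));
Flux<String> second = Flux.just("b1", "b2").delayElements(Duration.ofMillis(30));

Flux.merge(first, second);  // likely b1, a1, b2, a2 (interleaved)
Flux.concat(first, second); // always a1, a2, b1, b2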
After the flat map, at this point, we have a Flux<MultimediaSearchResult>.
We continue with Flux::collectList, thus collecting the emitted values from all publishers (we could also use Flux::reduceWith here).
As a result, we now have a Mono<List<MultimediaSearchResult>>, which can easily be mapped to a Mono<MultimediaSearchResponse>.
The results list of the MultimediaSearchResponse will have 3 items for each searchText in the original request.
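For example (assuming plain constructors and getters on the request/response classes, which the snippet above doesn't spell out), two search texts would yield 2 x 3 = 6 results:
// Usage sketch with assumed constructors/getters:
MultimediaSearchRequest req =
        new MultimediaSearchRequest(Set.of("cats", "dogs"), true, false);

new ChainingAsyncCallsInSpring()
        .controllerEndpoint(req)
        .map(response -> response.getResults().size())
        .subscribe(System.out::println); // prints 6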
Hope this was helpful!
Edit
Extending the answer with a point of view from the service classes as well. Assuming that each inner (optionally skipped) call returns a different type of result, this would be one way of going about it:
public class MultimediaSearchResult {
    private Details details;
    private ContentDetails content;
    private MetadataDetails metadata;
}
public Mono<MultimediaSearchResult> classOneSearchSingleItem(MultimediaSingleSearchRequest req) {
    return Mono.zip(getSomeDetails(req), getContentDetails(req), getMetadataDetails(req))
            .map(tuple3 -> new MultimediaSearchResult(
                    tuple3.getT1(),
                    tuple3.getT2().orElse(null),
                    tuple3.getT3().orElse(null)
            ));
}

// Always wanted
private Mono<Details> getSomeDetails(MultimediaSingleSearchRequest req) {
    return Mono.just(new Details("details")); // api call etc.
}

// Wanted if isAddContent is true
private Mono<Optional<ContentDetails>> getContentDetails(MultimediaSingleSearchRequest req) {
    return req.isAddContent()
            ? Mono.just(Optional.of(new ContentDetails("content-details"))) // api call etc.
            : Mono.just(Optional.empty());
}

// Wanted if isAddMetadata is true
private Mono<Optional<MetadataDetails>> getMetadataDetails(MultimediaSingleSearchRequest req) {
    return req.isAddMetadata()
            ? Mono.just(Optional.of(new MetadataDetails("metadata-details"))) // api call etc.
            : Mono.just(Optional.empty());
}
Optionals are used for the requests that might be skipped, since Mono::zip won't emit a result if any of the zipped publishers completes empty.
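A tiny illustration of why the Optionals are needed (my own example, not from the answer):
// If any zipped source completes empty, the whole zip completes empty
// and the map(...) below never runs.
Mono.zip(Mono.just("details"), Mono.<String>empty())
    .map(tuple -> tuple.getT1())
    .defaultIfEmpty("zip completed empty")
    .subscribe(System.out::println); // prints "zip completed empty"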
If the results of each inner call extend the same base class or are the same wrapped return type, then the original answer applies as to how they can be combined (Flux::merge etc.)

How can I get the WebFlux response body twice? Store some data from request/response

I created a filter to log some important data from the request and response...
@Override
public Mono<ClientResponse> filter(ClientRequest request, ExchangeFunction next) {
    URI uri = request.url();
    HttpMethod method = request.method();
    return next.exchange(request).onErrorResume(err -> {
        writeToFile(method, uri, 500, "", true);
        return Mono.error(err);
    }).flatMap((response) -> {
        return response.bodyToMono(String.class).flatMap(body -> {
            writeToFile(method, uri, response.rawStatusCode(), body, false);
            return Mono.just(response);
        });
    });
}
The idea of this filter is to register some data and let the rest work exactly the same.
Expected: This was my attempt to keep the rest of the code untouched and just include a filter to store some data apart from the normal flow.
Actual: It works, except when the real code calls bodyToMono, it returns null
At some point of the normal flow, I am doing something similar to this:
return client.get().uri(...).exchangeToMono((response) -> {
    return response.bodyToMono(MyObject.class);
});
It works flawlessly when I remove the filter.
So I discovered that I can't call bodyToMono twice the way I am using it, probably because the first call consumes the body, leaving it empty for the second call.
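One workaround I'm aware of (a hedged sketch, not part of the original post; it assumes Spring Framework 5.1+ where ClientResponse.mutate() is available) is to read the body once in the filter and rebuild the response with the same body, so downstream bodyToMono() calls still see it:
return next.exchange(request)
        .flatMap(response -> response.bodyToMono(String.class)
                .defaultIfEmpty("")
                .map(body -> {
                    writeToFile(method, uri, response.rawStatusCode(), body, false);
                    // re-wrap the consumed body so later readers get it again
                    return response.mutate().body(body).build();
                }));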

Restricting the OPTIONS method in Javalin

We have Kotlin code like the following. I am trying to disable the OPTIONS method for the APIs using Javalin (3.12.0), but it ends up blocking all the other methods like GET and POST as well. What am I missing here?
val app = Javalin.create {
    it.defaultContentType = "application/json"
    it.enableWebjars()
    it.addStaticFiles("", Location.CLASSPATH)
    it.enableCorsForAllOrigins()
    it.dynamicGzip = true
}
app.options("/*") { ctx -> ctx.status(405) }
app.routes {
    path("/auth") {
        post("/login") {
            Auth.doLogin(it)
        }
        get("/metrics") {
            val results = getData()
            it.json(results)
        }
    }
}
Also, there are 2 questions:
1. I want to implement a rate limit of 20 requests per hour for the GET APIs, using the code below. How can I achieve it?
app.get("/") { ctx ->
    RateLimit(ctx).requestPerTimeUnit(5, TimeUnit.MINUTES) // throws if rate limit is exceeded
    ctx.result("Hello, rate-limited World!")
}
2. How can I restrict the Jetty server version from being displayed when an API call is made?
For Jetty...
There is only one rate-limit concept in Jetty, and that's org.eclipse.jetty.server.AcceptRateLimit, added as a Jetty container LifeCycle bean to the ServerConnector. It cannot adjust rates for specific request endpoints, only for the entire connector.
If you want per-endpoint rates, then org.eclipse.jetty.servlets.QoSFilter is the way that's done with Jetty.
The org.eclipse.jetty.server.HttpConfiguration for the org.eclipse.jetty.server.ServerConnector contains the controls to enable/disable the server announcement.
See
HttpConfiguration.setSendServerVersion(boolean)
HttpConfiguration.setSendXPoweredBy(boolean)
HttpConfiguration.setSendDateHeader(boolean)
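For embedded Jetty, that configuration looks roughly like this (a sketch of plain Jetty setup, not Javalin-specific wiring):
// Rough sketch: disable the Server header and X-Powered-By announcement.
Server server = new Server();
HttpConfiguration httpConfig = new HttpConfiguration();
httpConfig.setSendServerVersion(false);
httpConfig.setSendXPoweredBy(false);

ServerConnector connector = new ServerConnector(server, new HttpConnectionFactory(httpConfig));
connector.setPort(8080);
server.addConnector(connector);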

Is it possible to send an initial value?

I have a web method that returns a Flux whose values arrive over time (it's linked to a pub/sub).
Would it at least be possible, only for the first call, to return a default value?
public Flux<String> receiveStream() {
    return myReactiveService.getData(); // here, can I return a value at start? //.map(...)
}
It is not that easy to do it "only for the first call". Each request is supposed to get its own sequence of Strings, unless you take specific steps to change that. And that is at two levels:
- WebFlux: each request leads to a separate invocation of the controller method, so the Flux is newly instantiated
- Reactor: most Flux are "cold", i.e. they don't generate data until they're subscribed to, and each subscription regenerates a separate dataset.
So even if you returned a cached Flux, it would probably still serve each request separately.
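A tiny illustration of the "cold" behaviour:
// Each subscription re-runs the supplier, so the two subscribers see different values.
Flux<Long> cold = Flux.defer(() -> Flux.just(System.nanoTime()));
cold.subscribe(v -> System.out.println("first subscriber:  " + v));
cold.subscribe(v -> System.out.println("second subscriber: " + v));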
There is a way to share() a long-lived Flux so that later newcomers only see data that becomes available after they've subscribed to the shared Flux, which could help with the "only the first request" aspect of your requirement.
Assuming getData() by itself is cold (i.e. simply calling it doesn't trigger any meaningful processing):
AtomicReference<Flux<String>> sharedStream = new AtomicReference<>();

public Flux<String> receiveStream() {
    Flux<String> result = sharedStream.get();
    if (result == null) {
        Flux<String> coldVersionWithInit = myReactiveService
            .getData()
            .startWith(FIRST_VALUE)
            .map(...);
        Flux<String> hotVersion = coldVersionWithInit.share();
        if (sharedStream.compareAndSet(null, hotVersion))
            result = hotVersion;
        else
            result = sharedStream.get();
    }
    return result;
}
I think what you are looking for is Flux#defaultIfEmpty.
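For completeness, the two operators differ (a minimal sketch reusing getData() from the question):
// startWith always prepends an initial value;
// defaultIfEmpty only kicks in if the source never emits anything.
Flux<String> alwaysPrepended = myReactiveService.getData().startWith("initial value");
Flux<String> onlyIfEmpty = myReactiveService.getData().defaultIfEmpty("fallback value");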

Dojo datagrid jsonrest response headers

I'd like to use custom headers to provide some more information about the response data. Is it possible to get the headers in a response from a dojo datagrid hooked up to a jsonRest object via an object store (dojo 1.7)? I see this is possible when you are making the XHR request, but in this case it is being made by the grid.
The API provides an event for a response error which returns the response object:
on(this.grid, 'FetchError', function (response, req) {
    var header = response.xhr.getAllResponseHeaders();
});
Using this I am able to access my custom response headers successfully. However, there doesn't appear to be a way to get the response object when the request is successful. I have been using the undocumented private event _onFetchComplete with aspect.after; however, this does not give access to the response object, just the returned values:
aspect.after(this.grid, '_onFetchComplete', function (response, request) {
    // unable to get headers; response is the returned values
}, true);
Edit:
I managed to get something working, but I suspect it is over-engineered and someone with a better understanding could come up with a simpler solution. I ended up using aspect.around so I could get hold of the deferred object in the REST store that is returned to the object store. There I added a new function to the deferred to return the headers. I then hooked into the onFetch of the object store using dojo hitch (because I needed the results in the current scope). It seems messy to me:
aspect.around(restStore, "query", function (original) {
return function (method, args) {
var def = original.call(this, method, args);
def.headers = deferred1.then(function () {
var hd = def.ioArgs.xhr.getResponseHeader("myHeader");
return hd;
});
return def;
};
});
aspect.after(objectStore, 'onFetch', lang.hitch(this, function (response) {
response.headers.then(lang.hitch(this, function (evt) {
var headerResult = evt;
}));
}), true);
Is there a better way?
I solved this today after reading this post, thought I'd feed back.
dojo/store/JsonRest also solves it, but my code ended up slightly different.
var MyStore = declare(JsonRest, {
    query: function () {
        var results = this.inherited(arguments);
        console.log('Results: ', results);
        results.response.then(function (res) {
            var myheader = res.xhr.getResponseHeader('My-Header');
            doSomethingWith(myheader);
        });
        return results;
    }
});
So you override the normal query() function, let it execute and return its promise, then attach your own listener to the resolution of its 'response' member; in that listener you can access the xhr object that has the headers. This ought to let you interpret the JsonRest result while still fitting nicely into the chain of the query() call's invokers.
One word of warning, this code is modified for posting here, and actually inherited from another intermediary class that also overrode query(), but the basics here are pretty sound.
If what you want is to get info from the server, a custom key-value in a cookie can also be a solution; that was my case. First I was looking for a custom response header, but I couldn't make it work, so I went the cookie route, getting the info after the grid data is fetched:
dojo.connect(grid, "_onFetchComplete", function () {
    doSomethingWith(dojo.cookie("My-Key"));
});
This is useful for example to present a SUM(field) for all rows in a paginated datagrid, and not only those included in the current page. In the server you can fetch the COUNT and the SUM, the COUNT will be sent in the Content-Range header and the SUM can be sent in the cookie.