webflux Mono<T> onErrorReturn not called - spring-webflux

This is my HandlerFunction:
public Mono<ServerResponse> getTime(ServerRequest serverRequest) {
    return time(serverRequest).onErrorReturn("some error has happened!").flatMap(s -> {
        // this is never called
        return ServerResponse.ok().contentType(MediaType.TEXT_PLAIN).syncBody(s);
    });
}
The time(ServerRequest) method is:
private Mono<String> time(ServerRequest request) {
    String format = DateTimeFormatter.ofPattern("HH:mm:ss").format(LocalDateTime.now());
    return Mono.just("time is:" + format + "," + request.queryParam("name").get());
}
When I don't pass the query param "name", it throws a NoSuchElementException.
But the Mono's onErrorReturn is never triggered!
Why? What am I doing wrong?

The onError... operators are meant to deal with error signals happening in the pipeline.
In your case, the NoSuchElementException is thrown outside of the reactive pipeline, before anything can subscribe to the returned Mono.
I think you might get the behavior you're looking for by deferring the execution like this:
private Mono<String> time(ServerRequest request) {
    return Mono.defer(() -> {
        String format = DateTimeFormatter.ofPattern("HH:mm:ss").format(LocalDateTime.now());
        return Mono.just("time is:" + format + "," + request.queryParam("name").get());
    });
}
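As an aside, here is a sketch of a variant (my own, not the only way) that avoids the throwing Optional.get() entirely, so a missing param always surfaces as an error signal that onErrorReturn can handle:
private Mono<String> time(ServerRequest request) {
    // justOrEmpty turns the absent query param into an empty Mono instead of throwing
    return Mono.justOrEmpty(request.queryParam("name"))
            .map(name -> "time is:" + DateTimeFormatter.ofPattern("HH:mm:ss").format(LocalDateTime.now()) + "," + name)
            .switchIfEmpty(Mono.error(new IllegalArgumentException("query param 'name' is required")));
}
With this version, onErrorReturn in the handler receives the error signal and emits the fallback string.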

Related

Getting a warning when use objectmapper in flux inappropriate blocking method call in java reactor

I am new to Reactor. I created a Flux from an Iterable, and then I want to convert my object into a string using ObjectMapper. The IDE warns about this part of the code, new ObjectMapper().writeValueAsString(event), with the message "Inappropriate blocking method call". There is no compile error. Could you suggest a solution?
Flux.fromIterable(Arrays.asList(new Event(), new Event()))
    .flatMap(event -> {
        try {
            return Mono.just(new ObjectMapper().writeValueAsString(event));
        } catch (JsonProcessingException e) {
            return Mono.error(e);
        }
    })
    .subscribe(jsonString -> {
        System.out.println("jsonString = " + jsonString);
    });
I will give you an answer, but I'm not quite sure this is what you want. The call blocks the thread, and you lose much of the benefit of reactive code if you block a thread; that's why the IDE warns you. You can create the Mono with a MonoSink, like below.
AtomicReference<ObjectMapper> objectMapper = new AtomicReference<>(new ObjectMapper());
Flux.fromIterable(Arrays.asList(new Event(), new Event()))
    .flatMap(event -> {
        return Mono.create(monoSink -> {
            try {
                monoSink.success(objectMapper.get().writeValueAsString(event));
            } catch (JsonProcessingException e) {
                monoSink.error(e);
            }
        });
    })
    .cast(String.class) // Mono.create infers Mono<Object> here; the cast restores the exact type for the rest of the pipeline
    .subscribe(jsonString -> {
        System.out.println("jsonString = " + jsonString);
    });
Please try this method and check whether the warning is gone.
As for the AtomicReference: it is not strictly necessary for your case; a plain ObjectMapper, as in your original code, works just as well (as long as you don't reconfigure it).
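Another common idiom, for completeness, is to wrap the blocking call in Mono.fromCallable and shift it to a scheduler meant for blocking work. This is a sketch assuming Reactor 3.3+, where Schedulers.boundedElastic() is available; on older versions Schedulers.elastic() plays the same role. A Callable may throw checked exceptions, so the try/catch disappears and JsonProcessingException is routed to onError automatically:
ObjectMapper mapper = new ObjectMapper(); // thread-safe for serialization once configured
Flux.fromIterable(Arrays.asList(new Event(), new Event()))
    .flatMap(event -> Mono
        .fromCallable(() -> mapper.writeValueAsString(event)) // blocking call isolated here
        .subscribeOn(Schedulers.boundedElastic()))            // and moved off the main pipeline threads
    .subscribe(jsonString -> System.out.println("jsonString = " + jsonString));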

How do I hook into micronaut server on error handling from a filter?

For any 4xx or 5xx response given out by my Micronaut server, I'd like to log the response status code and the endpoint it targeted. It looks like a filter would be a good place for this, but I can't seem to figure out how to plug into the onError handling.
For instance, this filter
#Filter("/**")
class RequestLoggerFilter: OncePerRequestHttpServerFilter() {
companion object {
private val log = LogManager.getLogger(RequestLoggerFilter::class.java)
}
override fun doFilterOnce(request: HttpRequest<*>, chain: ServerFilterChain): Publisher<MutableHttpResponse<*>>? {
return Publishers.then(chain.proceed(request), ResponseLogger(request))
}
class ResponseLogger(private val request: HttpRequest<*>): Consumer<MutableHttpResponse<*>> {
override fun accept(response: MutableHttpResponse<*>) {
log.info("Status: ${response.status.code} Endpoint: ${request.path}")
}
}
}
only logs on successful responses and not on 4xx or 5xx responses.
How would I get this to hook into the onError handling?
You could do the following: create your own ApplicationException (extending RuntimeException), where you handle your application errors and, in particular, how they map to HTTP error codes. Your exception could hold the status code as well.
Example:
class BadRequestException extends ApplicationException {
    public HttpStatus getStatus() {
        return HttpStatus.BAD_REQUEST;
    }
}
You can have multiple ExceptionHandlers like the one below for different purposes.
@Slf4j
@Produces
@Singleton
@Requires(classes = {ApplicationException.class, ExceptionHandler.class})
public class ApplicationExceptionHandler implements ExceptionHandler<ApplicationException, HttpResponse> {
    @Override
    public HttpResponse handle(final HttpRequest request, final ApplicationException exception) {
        log.error("Application exception message={}, cause={}", exception.getMessage(), exception.getCause());
        final String message = exception.getMessage();
        final String code = exception.getClass().getSimpleName();
        final ErrorCode error = new ErrorCode(message, code);
        log.info("Status: {} Endpoint: {}", exception.getStatus(), request.getPath());
        return HttpResponse.status(exception.getStatus()).body(error);
    }
}
If you are trying to handle Micronaut's native exceptions, like the 400 (Bad Request) produced by ConstraintExceptionHandler, you will need to replace those beans.
I've posted an example here of how to handle ConstraintExceptionHandler.
If you only want to handle the responses themselves, you can map each response code with the @Error annotation (the example below is on a @Controller, so I'm not sure whether it works elsewhere, even with the global flag):
@Error(status = HttpStatus.NOT_FOUND, global = true)
public HttpResponse notFound(HttpRequest request) {
    <...>
}
Example from Micronaut documentation.
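For reference, that documentation example filled in looks roughly like this (a sketch from memory, not verbatim; JsonError and Link come from io.micronaut.http.hateoas):
@Error(status = HttpStatus.NOT_FOUND, global = true)
public HttpResponse<JsonError> notFound(HttpRequest request) {
    // build a HATEOAS error body pointing back at the requested URI
    JsonError error = new JsonError("Not Found")
            .link(Link.SELF, Link.of(request.getUri()));
    return HttpResponse.<JsonError>notFound().body(error);
}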
Below is code I used for adding custom CORS headers to error responses; in doOnError you can log the errors.
#Filter("/**")
public class ResponseCORSAdder implements HttpServerFilter {
#Override
public Publisher<MutableHttpResponse<?>> doFilter(HttpRequest<?> request, ServerFilterChain chain) {
return this.trace(request)
.switchMap(aBoolean -> chain.proceed(request))
.doOnError(error -> {
if (error instanceof MutableHttpResponse<?>) {
MutableHttpResponse<?> res = (MutableHttpResponse<?>) error;
addCorsHeaders(res);
}
})
.doOnNext(res -> addCorsHeaders(res));
}
private MutableHttpResponse<?> addCorsHeaders(MutableHttpResponse<?> res) {
return res
.header("Access-Control-Allow-Origin", "*")
.header("Access-Control-Allow-Methods", "OPTIONS,POST,GET")
.header("Access-Control-Allow-Credentials", "true");
}
private Flowable<Boolean> trace(HttpRequest<?> request) {
return Flowable.fromCallable(() -> {
// trace logic here, potentially performing I/O
return true;
}).subscribeOn(Schedulers.io());
}
}
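Tying this back to the original question: depending on the Micronaut version, an error raised downstream may reach the filter either as an error signal or as an already-built response with an error status, so handling both signals covers both cases. A minimal sketch (the class name ErrorAwareLoggerFilter is mine, and this assumes RxJava 2 on the classpath, as in the answer above):
import io.micronaut.http.HttpRequest;
import io.micronaut.http.MutableHttpResponse;
import io.micronaut.http.annotation.Filter;
import io.micronaut.http.filter.HttpServerFilter;
import io.micronaut.http.filter.ServerFilterChain;
import io.reactivex.Flowable;
import org.reactivestreams.Publisher;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

@Filter("/**")
public class ErrorAwareLoggerFilter implements HttpServerFilter {
    private static final Logger LOG = LoggerFactory.getLogger(ErrorAwareLoggerFilter.class);

    @Override
    public Publisher<MutableHttpResponse<?>> doFilter(HttpRequest<?> request, ServerFilterChain chain) {
        return Flowable.fromPublisher(chain.proceed(request))
                // responses that reach the filter as normal emissions, including 4xx/5xx
                .doOnNext(response -> {
                    if (response.getStatus().getCode() >= 400) {
                        LOG.info("Status: {} Endpoint: {}", response.getStatus().getCode(), request.getPath());
                    }
                })
                // exceptions travelling the chain as error signals
                .doOnError(t -> LOG.info("Error: {} Endpoint: {}", t.getMessage(), request.getPath()));
    }
}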

How to return object from retrofit api get call

I am trying to get a list of objects from an API call with Retrofit, but I just can't find the way to do so :(
This is the function I built:
private List<Business> getBusinesses() {
    List<Business> businessesList = new ArrayList<>();
    Call<List<Business>> call = jsonPlaceHolderApi.getBusinesses();
    call.enqueue(new Callback<List<Business>>() {
        @Override
        public void onResponse(Call<List<Business>> call, Response<List<Business>> response) {
            if (!response.isSuccessful()) {
                textViewResult.setText("Code: " + response.code());
                return;
            }
            List<Business> businesses = response.body();
            for (Business business : businesses) {
                String content = "";
                content += "ID: " + business.getId() + "\n";
                content += "Name: " + business.getName() + "\n";
                content += "On promotion: " + business.isOnPromotion() + "\n\n";
                textViewResult.append(content);
            }
            businessesList = businesses; // runs long after getBusinesses() has already returned
        }

        @Override
        public void onFailure(Call<List<Business>> call, Throwable t) {
            call.cancel();
            textViewResult.setText(t.getMessage());
        }
    });
    return businessesList; // always empty at this point
}
I am trying to get the businesses response and return it.
Can anyone help me?
Feeling frustrated :(
The way you're executing the Retrofit call is asynchronous, using call.enqueue. There's nothing wrong with this approach; in fact it's perhaps the best option, since network calls can take a while and you don't want to block unnecessarily.
Unfortunately, this means you cannot return the result from the function. In most scenarios the call would finish after the return, making the returned value useless.
There are several ways to deal with this; the simplest is to use callbacks. For example:
interface OnBusinessListReceivedCallback {
    void onBusinessListReceived(List<Business> list);
}

private void getBusinesses(OnBusinessListReceivedCallback callback) {
    Call<List<Business>> call = jsonPlaceHolderApi.getBusinesses();
    call.enqueue(new Callback<List<Business>>() {
        @Override
        public void onResponse(Call<List<Business>> call, Response<List<Business>> response) {
            if (!response.isSuccessful()) {
                textViewResult.setText("Code: " + response.code());
                return;
            }
            callback.onBusinessListReceived(response.body());
        }

        @Override
        public void onFailure(Call<List<Business>> call, Throwable t) {
            call.cancel();
            textViewResult.setText(t.getMessage());
        }
    });
}
You can then call it like so:
getBusinesses(new OnBusinessListReceivedCallback() {
    @Override
    public void onBusinessListReceived(List<Business> list) {
        // list holds your data
    }
});
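If you really do need a plain return value, Retrofit's Call.execute() runs the request synchronously. A sketch (this variant is mine, not from the question; on Android it must run on a background thread, or it throws NetworkOnMainThreadException):
private List<Business> getBusinessesBlocking() throws IOException {
    // execute() blocks until the HTTP response arrives
    Response<List<Business>> response = jsonPlaceHolderApi.getBusinesses().execute();
    if (!response.isSuccessful()) {
        throw new IOException("Unexpected HTTP code: " + response.code());
    }
    return response.body();
}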

getting apache ignite continuous query to work without enabling p2p class loading

I have been trying to get my Ignite continuous query code to work without setting peer class loading to enabled; however, the code does not work. I tried debugging and realised that the call to cache.query(qry) errors out with the message "Failed to marshal custom event". When I enable peer class loading, the code works as expected. Could someone provide guidance on how I can make this work without peer class loading?
Following is the code snippet that calls the continuous query.
public void subscribeEvent(IgniteCache<String, String> cache, String inKeyStr, ServerWebSocket websocket) {
    System.out.println("in thread " + Thread.currentThread().getId() + "-->" + "subscribe event");
    //ArrayList<String> inKeys = new ArrayList<String>(Arrays.asList(inKeyStr.split(",")));
    ContinuousQuery<String, String> qry = new ContinuousQuery<>();

    // Continuous query implementation
    inKeys = "," + inKeyStr + ",";
    qry.setInitialQuery(new ScanQuery<String, String>((k, v) -> inKeys.contains("," + k + ",")));
    qry.setTimeInterval(1000);
    qry.setPageSize(1);

    // Callback that is called locally when update notifications are received.
    //Factory<CacheEntryEventFilter<String, String>> rmtFilterFactory = new com.ccx.ignite.cqfilter.FilterFactory().init(inKeyStr);
    qry.setLocalListener(new CacheEntryUpdatedListener<String, String>() {
        @Override
        public void onUpdated(Iterable<CacheEntryEvent<? extends String, ? extends String>> evts) {
            for (CacheEntryEvent<? extends String, ? extends String> e : evts) {
                System.out.println("websocket locallsnr data in thread " + Thread.currentThread().getId()
                        + "-->" + "key=" + e.getKey() + ", val=" + e.getValue());
                try {
                    websocket.writeTextMessage("key=" + e.getKey() + ", val=" + e.getValue());
                } catch (Exception e1) {
                    System.out.println("exception local listener " + e1.getMessage());
                    qry.setLocalListener(null);
                }
            }
        }
    });

    qry.setRemoteFilterFactory(new com.ccx.ignite.cqfilter.FilterFactory().init(inKeys));

    try {
        cur = cache.query(qry);
        for (Cache.Entry<String, String> e : cur) {
            System.out.println("websocket initialqry data in thread " + Thread.currentThread().getId()
                    + "-->" + "key=" + e.getKey() + ", val=" + e.getValue());
            websocket.writeTextMessage("key=" + e.getKey() + ", val=" + e.getValue());
        }
    } catch (Exception e) {
        System.out.println("exception cache.query " + e.getMessage());
    }
}
Following is the remote filter class, which I have packaged into a self-contained jar and put into the libs folder of Ignite, so that it is picked up by the server nodes:
public class FilterFactory {
    public Factory<CacheEntryEventFilter<String, String>> init(String inKeyStr) {
        System.out.println("factory init called jun22");
        return new Factory<CacheEntryEventFilter<String, String>>() {
            private static final long serialVersionUID = 5906783589263492617L;

            @Override
            public CacheEntryEventFilter<String, String> create() {
                return new CacheEntryEventFilter<String, String>() {
                    @Override
                    public boolean evaluate(CacheEntryEvent<? extends String, ? extends String> e) {
                        //List<String> inKeys = new ArrayList<String>(Arrays.asList(inKeyStr.split(",")));
                        System.out.println("inside remote filter factory ");
                        String inKeys = "," + inKeyStr + ",";
                        return inKeys.contains("," + e.getKey() + ",");
                    }
                };
            }
        };
    }
}
The overall logic I'm trying to implement is to have a websocket client subscribe to an event by specifying a cache name and key(s) of interest.
The subscribe event code is called which creates a continuous query and registers a local listener callback for any update event on the key(s) of interest.
The remote filter is expected to filter the update event based on the key(s) passed to it as a string and the local listener is invoked if the filter event succeeds. The local listener writes the updated key value to the web socket reference passed to the subscribe event code.
The version of Ignite I'm using is 1.8.0; however, the behaviour is the same in 2.0 as well.
Any help is greatly appreciated!
Here is the log snippet containing the relevant error
factory init called jun22
exception cache.query class org.apache.ignite.spi.IgniteSpiException: Failed to marshal custom event: StartRoutineDiscoveryMessage [startReqData=StartRequestData [prjPred=org.apache.ignite.configuration.CacheConfiguration$IgniteAllNodesPredicate#269707de, clsName=null, depInfo=null, hnd=CacheContinuousQueryHandlerV2 [rmtFilterFactory=com.ccx.ignite.cqfilter.FilterFactory$1#5dc301ed, rmtFilterFactoryDep=null, types=0], bufSize=1, interval=1000, autoUnsubscribe=true], keepBinary=false, routineId=b40ada9f-552d-41eb-90b5-3384526eb7b9]
From FilterFactory you are returning an instance of an anonymous class, which in turn holds a reference to the enclosing FilterFactory, and that enclosing class is not serializable.
Just replace the returned anonymous CacheEntryEventFilter-based class with a corresponding static nested class.
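A minimal sketch of that refactoring (the class and field names are mine; the key point is that a static nested class does not capture the enclosing instance, so only the String field has to be serialized):
import java.io.Serializable;
import javax.cache.configuration.Factory;
import javax.cache.event.CacheEntryEvent;
import javax.cache.event.CacheEntryEventFilter;

public class FilterFactory {
    public Factory<CacheEntryEventFilter<String, String>> init(String inKeyStr) {
        return new KeyFilterFactory("," + inKeyStr + ",");
    }

    // Static nested class: no hidden reference to the enclosing FilterFactory.
    private static class KeyFilterFactory implements Factory<CacheEntryEventFilter<String, String>> {
        private static final long serialVersionUID = 1L;
        private final String inKeys;

        KeyFilterFactory(String inKeys) {
            this.inKeys = inKeys;
        }

        @Override
        public CacheEntryEventFilter<String, String> create() {
            return new KeyFilter(inKeys);
        }
    }

    private static class KeyFilter implements CacheEntryEventFilter<String, String>, Serializable {
        private static final long serialVersionUID = 2L;
        private final String inKeys;

        KeyFilter(String inKeys) {
            this.inKeys = inKeys;
        }

        @Override
        public boolean evaluate(CacheEntryEvent<? extends String, ? extends String> e) {
            return inKeys.contains("," + e.getKey() + ",");
        }
    }
}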
You need to explicitly deploy your CQ classes (the remote filters specifically) on all nodes in the topology. Just create a JAR file with them and put it into the libs folder prior to starting the nodes.

Pig - passing Databag to UDF constructor

I have a script which is loading some data about venues:
venues = LOAD 'venues_extended_2.csv' USING org.apache.pig.piggybank.storage.CSVLoader() AS (Name:chararray, Type:chararray, Latitude:double, Longitude:double, City:chararray, Country:chararray);
Then I want to create a UDF that has a constructor accepting the venues relation.
So I tried to define this UDF like that:
DEFINE GenerateVenues org.gla.anton.udf.main.GenerateVenues(venues);
And here is the actual UDF:
public class GenerateVenues extends EvalFunc<Tuple> {
    TupleFactory mTupleFactory = TupleFactory.getInstance();
    BagFactory mBagFactory = BagFactory.getInstance();
    private static final String ALLCHARS = "(.*)";
    private ArrayList<String> venues;
    private String regex;

    public GenerateVenues(DataBag venuesBag) {
        Iterator<Tuple> it = venuesBag.iterator();
        venues = new ArrayList<String>((int) (venuesBag.size() + 1)); // possible fails!!!
        String current = "";
        regex = "";
        while (it.hasNext()) {
            Tuple t = it.next();
            try {
                current = "(" + ALLCHARS + t.get(0) + ALLCHARS + ")";
                venues.add((String) t.get(0));
            } catch (ExecException e) {
                throw new IllegalArgumentException("VenuesRegex: requires tuple with at least one value");
            }
            regex += current + (it.hasNext() ? "|" : "");
        }
    }

    @Override
    public Tuple exec(Tuple tuple) throws IOException {
        // expect one string
        if (tuple == null || tuple.size() != 2) {
            throw new IllegalArgumentException("BagTupleExampleUDF: requires two input parameters.");
        }
        try {
            String tweet = (String) tuple.get(0);
            for (String venue : venues) {
                if (tweet.matches(ALLCHARS + venue + ALLCHARS)) {
                    Tuple output = mTupleFactory.newTuple(Collections.singletonList(venue));
                    return output;
                }
            }
            return null;
        } catch (Exception e) {
            throw new IOException("BagTupleExampleUDF: caught exception processing input.", e);
        }
    }
}
When executed, the script fires an error at the DEFINE part, just before (venues);:
2013-12-19 04:28:06,072 [main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 1200: <file script.pig, line 6, column 60> mismatched input 'venues' expecting RIGHT_PAREN
Obviously I'm doing something wrong; can you help me figure out what?
Is it that the UDF cannot accept the venues relation as a parameter? Or is the relation not represented by a DataBag, as in public GenerateVenues(DataBag venuesBag)?
Thanks!
PS I'm using Pig version 0.11.1.1.3.0.0-107.
As #WinnieNicklaus already said, you can only pass strings to UDF constructors.
Having said that, the solution to your problem is the distributed cache: you need to override public List<String> getCacheFiles() to return a list of file names that will be made available via the distributed cache. With that, you can read the file as a local file and build your table.
The downside is that Pig has no initialization function, so you have to implement something like
private void init() {
    if (!this.initialized) {
        // read table
    }
}
and then call that as the first thing from exec.
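A sketch of that pattern under my own assumptions (the constructor takes the HDFS path as a plain string, and getCacheFiles uses Pig's path#symlink convention, so the file shows up locally as ./venues on every task):
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import org.apache.pig.EvalFunc;
import org.apache.pig.data.Tuple;

public class GenerateVenues extends EvalFunc<Tuple> {
    private final String venuesPath;   // HDFS path, passed as a string constructor argument
    private boolean initialized = false;
    private List<String> venues;

    public GenerateVenues(String venuesPath) {
        this.venuesPath = venuesPath;
    }

    @Override
    public List<String> getCacheFiles() {
        // path#symlink: the file is shipped via the distributed cache and symlinked as "venues"
        return Collections.singletonList(venuesPath + "#venues");
    }

    private void init() throws IOException {
        if (!initialized) {
            venues = new ArrayList<String>();
            BufferedReader reader = new BufferedReader(new FileReader("./venues"));
            try {
                String line;
                while ((line = reader.readLine()) != null) {
                    venues.add(line);
                }
            } finally {
                reader.close();
            }
            initialized = true;
        }
    }

    @Override
    public Tuple exec(Tuple input) throws IOException {
        init(); // lazy initialization: the first call reads the local copy
        // ... same matching logic as the original exec ...
        return null;
    }
}
On the Pig side you would then pass the path as a string, e.g. DEFINE GenerateVenues org.gla.anton.udf.main.GenerateVenues('/user/you/venues_extended_2.csv'); (the path itself is hypothetical).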
You can't use a relation as a parameter in a UDF constructor. Only strings can be passed as arguments, and if they are really of another type, you will have to parse them out in the constructor.