Unable to set WriteTimeout on reactor-netty version 0.9.10 - spring-webflux

I have written a reactive API using Spring WebFlux (Spring Boot 2.3.0.RELEASE, which brings reactor-netty 0.9.10). As part of the API's SLA, I want to time out the request if the server takes longer than the configured write timeout.
Sharing the code snippet below, where I have implemented a customizer for NettyReactiveWebServerFactory.
@Bean
public WebServerFactoryCustomizer serverFactoryCustomizer() {
    return new NettyTimeoutCustomizer();
}

class NettyTimeoutCustomizer implements WebServerFactoryCustomizer<NettyReactiveWebServerFactory> {

    @Override
    public void customize(NettyReactiveWebServerFactory factory) {
        int connectionTimeout = 1000;
        int writeTimeout = 1;
        factory.addServerCustomizers(server -> server.tcpConfiguration(tcp ->
                tcp.option(ChannelOption.CONNECT_TIMEOUT_MILLIS, connectionTimeout)
                   .doOnConnection(connection ->
                           connection.addHandlerLast(new WriteTimeoutHandler(writeTimeout)))));
    }
}
In spite of the customizer, the write timeout is not applied to the API.

Instead of defining a WebServerFactoryCustomizer bean, create a bean of NettyReactiveWebServerFactory to override Spring's auto-configuration.
@Bean
public NettyReactiveWebServerFactory nettyReactiveWebServerFactory() {
    NettyReactiveWebServerFactory webServerFactory = new NettyReactiveWebServerFactory();
    webServerFactory.addServerCustomizers(new MyCustomizer());
    return webServerFactory;
}
Now the MyCustomizer will look something like this:
public class MyCustomizer implements NettyServerCustomizer {

    @Override
    public HttpServer apply(HttpServer httpServer) {
        return httpServer.tcpConfiguration(tcpServer -> tcpServer
                .option(ChannelOption.CONNECT_TIMEOUT_MILLIS, 1000)
                .bootstrap(serverBootstrap -> serverBootstrap.childHandler(new ChannelInitializer<Channel>() {
                    @Override
                    protected void initChannel(Channel channel) throws Exception {
                        channel.pipeline().addLast("writeTimeoutHandler", new WriteTimeoutHandler(1));
                    }
                })));
    }
}
This is the approach suggested in the official API docs.
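Note that Netty's WriteTimeoutHandler(int) constructor interprets its argument as seconds; for a finer-grained SLA there is an overload taking a TimeUnit. A minimal sketch of the pipeline line with an explicit unit (the 500 ms value is purely illustrative):

// requires java.util.concurrent.TimeUnit; the handler closes the connection
// if a write does not complete within the given time
channel.pipeline().addLast("writeTimeoutHandler",
        new WriteTimeoutHandler(500, TimeUnit.MILLISECONDS));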


Is it possible to specify a retry policy in the WebClientBuilder?

With Project Reactor extras we can define retry policies directly on the WebClient API:
public Mono<String> getData(String stockId) {
    return webClient.get()
            .uri(PATH_BY_ID, stockId)
            .accept(MediaType.APPLICATION_JSON)
            .retrieve()
            .bodyToMono(String.class)
            .retryWhen(Retry.backoff(3, Duration.ofSeconds(2)).jitter(0.75));
}
I'm designing a library that exposes a preconfigured WebClient with some customizations. I need to configure a default retry policy and not rely on the library users to add the .retryWhen(...) call.
Is it possible to configure a default retry policy in the WebClientBuilder or in a WebClientCustomizer?
@Configuration
public class ArchitectureWebClientConfiguration {

    private final WebClient.Builder webClientBuilder;

    public ArchitectureWebClientConfiguration(List<WebClientCustomizer> customizers) {
        WebClient.Builder builder = WebClient.builder();
        if (customizers != null && !customizers.isEmpty()) {
            customizers.forEach(c -> c.customize(builder));
        }
        builder.xxx
        this.webClientBuilder = builder;
    }

    @Bean
    public ArchitecureWebClient architecureWebClient() {
        return webClientBuilder.build();
    }
}
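A minimal sketch (not from the original thread) of what that builder.xxx customization could look like, assuming the default retry is applied as an ExchangeFilterFunction on the shared builder. Retrying at this level repeats the whole HTTP call, so it is only safe for idempotent requests, and the values are illustrative:

// assumes reactor-core 3.3.4+ (reactor.util.retry.Retry)
WebClient.Builder builder = WebClient.builder()
        .filter((request, next) -> next.exchange(request)
                .retryWhen(Retry.backoff(3, Duration.ofSeconds(2)).jitter(0.75)));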

Spring webflux filter: How to get the reactor context after the query execution?

Spring boot 2.1.5
Project Reactor 3.2.9
In my webflux project I extensively use the reactor contexts in order to pass around some values.
I set up a filter and am trying to log things which are in the context and to log different things in case of error/success.
I have checked this documentation: https://projectreactor.io/docs/core/release/reference/#context
I still struggle to get it working (especially on the error side).
Basically, I have this filter:
@Component
public class MdcWebFilter implements WebFilter {

    @NotNull
    @Override
    public Mono<Void> filter(@NotNull ServerWebExchange serverWebExchange,
                             WebFilterChain webFilterChain) {
        Mono<Void> filter = webFilterChain.filter(serverWebExchange);
        return filter
                .doAfterSuccessOrError(new BiConsumer<Void, Throwable>() {
                    @Override
                    public void accept(Void aVoid, Throwable throwable) {
                        // Here I would like to be able to access the request's context
                        System.out.println("doAfterSuccessOrError:" + (throwable == null ? "OK" : throwable.getMessage()) + " log the context");
                    }
                })
                .doOnEach(new Consumer<Signal<Void>>() {
                    @Override
                    public void accept(Signal<Void> voidSignal) {
                        // Here I have the context but I don't really know if I am in success or error
                        System.out.println("doOnEach:" + "Log OK/KO and the exception" + voidSignal.getContext());
                    }
                })
                .subscriberContext(context -> context.put("somevar", "whatever"));
    }
}
I also tried with a flatMap() and a Mono.subscriberContext(), but I am not sure how to plug it correctly into the filter (especially for the error case).
What would be the best way to achieve this?
I'm not sure whether it is possible to access the request's Reactor context from within a WebFilter; the WebFilter's context exists in another Mono chain.
But it is possible to associate attributes with the request and fetch those attributes during the request's lifetime (RequestContextHolder for Reactive Web).
It is very similar to the Servlet API.
Controller:
@GetMapping(path = "/v1/customers/{customerId}")
public Mono<Customer> getCustomerById(
        @PathVariable("customerId") String customerId,
        ServerWebExchange serverWebExchange) {
    serverWebExchange.getAttributes().put("traceId", "your_trace_id");
    return customerService.findById(customerId);
}
WebFilter:
public Mono<Void> filter(ServerWebExchange exchange, WebFilterChain chain) {
    // ...
    String traceId = exchange.getAttributeOrDefault("traceId", "default_value_goes_here");
    // ...
    return chain.filter(exchange);
}
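Since these attributes live on the ServerWebExchange itself rather than in the Reactor context, the filter can also read them after the downstream handling has finished, for example in a doFinally callback. A minimal sketch reusing the traceId attribute assumed above:

public Mono<Void> filter(ServerWebExchange exchange, WebFilterChain chain) {
    return chain.filter(exchange)
            // runs after the request has completed, errored, or been cancelled
            .doFinally(signalType -> {
                String traceId = exchange.getAttributeOrDefault("traceId", "unknown");
                System.out.println("request finished (" + signalType + "), traceId=" + traceId);
            });
}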
I know this is probably not the cleanest of the solutions, but you could create a container class that would keep the context between your two callbacks.
You would store the context at doOnEach and then you would be able to load it back at doAfterSuccessOrError:
public Mono<Void> filter(@NotNull ServerWebExchange serverWebExchange, WebFilterChain webFilterChain) {

    @lombok.Data
    class MyContextContainer {
        private Context context;
    }

    MyContextContainer container = new MyContextContainer();

    Mono<Void> filter = webFilterChain.filter(serverWebExchange);
    return filter
            .doAfterSuccessOrError(new BiConsumer<Void, Throwable>() {
                @Override
                public void accept(Void aVoid, Throwable throwable) {
                    // load the context here
                    Context context = container.getContext();
                    // then do your stuff here
                }
            })
            .doOnEach(new Consumer<Signal<Void>>() {
                @Override
                public void accept(Signal<Void> voidSignal) {
                    // store the context here
                    container.setContext(voidSignal.getContext());
                }
            })
            .subscriberContext(context -> context.put("somevar", "whatever"));
}
It doesn't need to be a class, really. It could be an AtomicReference, but you get the idea.
Again, this might be just a workaround. I believe there must be a better way to access the context.
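One such way (a sketch, not part of the original answers): the Signal passed to doOnEach already says whether the chain ended in success or error, so both the outcome and the context are available in a single callback, without a container. On Reactor 3.2/3.3 the context is read with signal.getContext():

return webFilterChain.filter(serverWebExchange)
        .doOnEach(signal -> {
            // For a Mono<Void> the terminal signal is either onComplete or onError
            String somevar = signal.getContext().getOrDefault("somevar", "n/a");
            if (signal.isOnError()) {
                System.out.println("KO: " + signal.getThrowable() + ", somevar=" + somevar);
            } else if (signal.isOnComplete()) {
                System.out.println("OK, somevar=" + somevar);
            }
        })
        .subscriberContext(context -> context.put("somevar", "whatever"));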

Spring amqp converter issue using rabbit listener

I think I am missing something here... I am trying to create a simple Rabbit listener which can accept a custom object as the message type. As per the docs:
In versions prior to 1.6, the type information to convert the JSON had to be provided in message headers, or a custom ClassMapper was required. Starting with version 1.6, if there are no type information headers, the type can be inferred from the target method arguments.
I am putting the message into the queue manually using the RabbitMQ admin dashboard, and I get an error like:
Caused by: org.springframework.messaging.converter.MessageConversionException: Cannot convert from [[B] to [com.example.Customer] for GenericMessage [payload=byte[21], headers={amqp_receivedDeliveryMode=NON_PERSISTENT, amqp_receivedRoutingKey=customer, amqp_deliveryTag=1, amqp_consumerQueue=customer, amqp_redelivered=false, id=81e8a562-71aa-b430-df03-f60e6a37c5dc, amqp_consumerTag=amq.ctag-LQARUDrR6sUcn7FqAKKVDA, timestamp=1485635555742}]
My configuration:
@Bean
public ConnectionFactory connectionFactory() {
    CachingConnectionFactory connectionFactory = new CachingConnectionFactory("localhost");
    connectionFactory.setUsername("test");
    connectionFactory.setPassword("test1234");
    connectionFactory.setVirtualHost("/");
    return connectionFactory;
}

@Bean
RabbitTemplate rabbitTemplate(ConnectionFactory connectionFactory) {
    RabbitTemplate rabbitTemplate = new RabbitTemplate(connectionFactory);
    rabbitTemplate.setMessageConverter(new Jackson2JsonMessageConverter());
    return rabbitTemplate;
}

@Bean
public AmqpAdmin amqpAdmin() {
    RabbitAdmin rabbitAdmin = new RabbitAdmin(connectionFactory());
    return rabbitAdmin;
}

@Bean
public Jackson2JsonMessageConverter jackson2JsonMessageConverter() {
    return new Jackson2JsonMessageConverter();
}
Another question: with this exception, the message is not put back on the queue.
I am using Spring Boot 1.4, which brings Spring AMQP 1.6.1.
Edit 1: I added the Jackson converter as above (probably not required with Spring Boot) and set the content type in the RabbitMQ admin, but I still get the error below. As you can see above, I am not configuring any listener container yet.
Caused by: org.springframework.messaging.converter.MessageConversionException: Cannot convert from [[B] to [com.example.Customer] for GenericMessage [payload=byte[21], headers={amqp_receivedDeliveryMode=NON_PERSISTENT, amqp_receivedRoutingKey=customer, content_type=application/json, amqp_deliveryTag=3, amqp_consumerQueue=customer, amqp_redelivered=false, id=7f84d49d-037a-9ea3-e936-ed5552d9f535, amqp_consumerTag=amq.ctag-YSemzbIW6Q8JGYUS70WWtA, timestamp=1485643437271}]
If you are using Boot, you can simply add a Jackson2JsonMessageConverter @Bean to the configuration and it will be automatically wired into the listener (as long as it's the only converter). You need to set the content_type property to application/json if you are using the administration console to send the message.
Conversion errors are considered fatal by default because there is generally no reason to retry; otherwise they'd loop for ever.
EDIT
Here's a working boot app...
@SpringBootApplication
public class So41914665Application {

    public static void main(String[] args) {
        SpringApplication.run(So41914665Application.class, args);
    }

    @Bean
    public Queue queue() {
        return new Queue("foo", false, false, true);
    }

    @Bean
    public Jackson2JsonMessageConverter converter() {
        return new Jackson2JsonMessageConverter();
    }

    @RabbitListener(queues = "foo")
    public void listen(Foo foo) {
        System.out.println(foo);
    }

    public static class Foo {

        public String bar;

        public String getBar() {
            return this.bar;
        }

        public void setBar(String bar) {
            this.bar = bar;
        }

        @Override
        public String toString() {
            return "Foo [bar=" + this.bar + "]";
        }
    }
}
I sent a message with the body {"bar":"baz"} and the content_type property set to application/json, with this result:
2017-01-28 21:49:45.509 INFO 11453 --- [ main] com.example.So41914665Application : Started So41914665Application in 4.404 seconds (JVM running for 5.298)
Foo [bar=baz]
Boot will define an admin and template for you.
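For completeness, a sketch of publishing from code instead of the admin console; with the converter bean in place, Boot's auto-configured RabbitTemplate uses it, so the JSON body and the content type / type headers are produced automatically (queue name as in the example above):

@Autowired
private RabbitTemplate rabbitTemplate;

public void send() {
    Foo foo = new Foo();
    foo.setBar("baz");
    // convertAndSend goes through the Jackson2JsonMessageConverter;
    // with the default exchange the routing key is the queue name
    rabbitTemplate.convertAndSend("foo", foo);
}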
Ran into the same issue; it turned out that a git stash/merge had messed up my config, and I needed to include this package in my main class again:
@SpringBootApplication(scanBasePackages = {
        "com.example.amqp" // <- git merge messed this up
})
public class TeamActivityApplication {

    public static void main(String[] args) {
        SpringApplication.run(TeamActivityApplication.class, args);
    }
}

How to use batch with role-based security

Sorry for my English... Maybe someone can help me find information about using a batch job with role-based security on GlassFish server?
When I invoke this method from an EJB:
@Override
@RolesAllowed({"root_role", "admin_role", "user_role"})
public void execute() {
    BatchRuntime.getJobOperator().start(STATISTIC_JOB_NAME, new Properties());
}
I get an exception like this:
javax.ejb.AccessLocalException: Client not authorized for this invocation
My job class:
@Dependent
@Named(value = "StatisticJob")
public class StatisticJob extends AbstractBatchlet {

    @EJB
    private StatisticFacadeLocal sfl;

    @Override
    public String process() throws Exception {
        System.out.println("StatisticJob.process()");
        List<StatisticPortEntity> spes = sfl.findAll();
        if (spes != null && !spes.isEmpty()) {
            for (StatisticPortEntity spe : spes) {
                System.out.println(spe);
            }
        } else {
            return "Statistic list is empty.";
        }
        return "StatisticJob.proccess is done.";
    }
}
How do I use role-based security with batch?
Thanks!

Can I use both the JAX-RS and RAML extensions in Restlet in the same application?

I am preparing a RESTful service which I would like to have documented using RAML (and perhaps Swagger as well), but it seems that I cannot implement both JAX-RS and RAML in the same application at the same time.
I have created an Application class for JAX-RS as follows:
public class Application extends javax.ws.rs.core.Application {

    @Override
    public Set<Class<?>> getClasses() {
        // Use the reflections library to scan the current package tree for
        // classes annotated with javax.ws.rs.Path and add them to the JAX-RS
        // application
        Reflections reflections = new Reflections(this.getClass().getPackage().getName());
        return reflections.getTypesAnnotatedWith(Path.class);
    }
}
I attach the JAX-RS Application object as follows:
Component component = new Component();
Server server = new Server(Protocol.HTTP, PORT);
component.getServers().add(server);
JaxRsApplication jaxRsApplication = new JaxRsApplication(component.getContext().createChildContext());
jaxRsApplication.add(new Application());
jaxRsApplication.setObjectFactory(objectFactory);
component.getDefaultHost().attach("/rest", jaxRsApplication);
And I would also like to implement the RAML extension, but it looks like it is tied to the Restlet Router and has its own Application class. Is there a way to combine the two?
Indeed, the RAML extension of Restlet isn't designed to be used within a JAX-RS application. That said, you can define a resource that provides the RAML content, based on Restlet's ApplicationIntrospector class and the RAML parser's RamlEmitter, as described below:
public class RamlResource {

    private Definition definition;

    @Path("/raml")
    @GET
    public String getRaml() {
        return new RamlEmitter().dump(RamlTranslator.getRaml(getDefinition()));
    }

    private synchronized Definition getDefinition() {
        if (definition == null) {
            synchronized (RamlResource.class) {
                definition = ApplicationIntrospector.getDefinition(
                        Application.getCurrent(),
                        new Reference("/"), null, false);
            }
        }
        return definition;
    }
}
That's the way the RAML extension of Restlet works. You could also use such an approach for Swagger, but be careful, since Swagger 1.2 requires several resources (a main one and several sub-resources, one per category). That's no longer the case with Swagger 2.
You can notice that there is JAX-RS support for Swagger in the extension org.restlet.ext.swagger.
----- Edited
Perhaps you can give this class a try; it corresponds to a port of the class JaxRsApplicationSwaggerSpecificationRestlet to RAML. It's based on the class JaxRsIntrospector, which seems relevant for JAX-RS applications:
public class JaxRsApplicationRamlSpecificationRestlet extends Restlet {

    private Application application;
    private String basePath;
    private Reference baseRef;
    private Definition definition;

    public JaxRsApplicationRamlSpecificationRestlet(Application application) {
        this(null, application);
    }

    public JaxRsApplicationRamlSpecificationRestlet(Context context, Application application) {
        super(context);
        this.application = application;
    }

    public void attach(Router router) {
        attach(router, "/api-docs");
    }

    public void attach(Router router, String path) {
        router.attach(path, this);
        router.attach(path + "/{resource}", this);
    }

    public Representation getApiDeclaration() {
        Raml raml = RamlTranslator.getRaml(getDefinition());
        ObjectMapper mapper = new ObjectMapper(new YAMLFactory());
        try {
            return new StringRepresentation(
                    mapper.writeValueAsString(raml),
                    MediaType.APPLICATION_YAML);
        } catch (Exception ex) {
            return new StringRepresentation("error");
        }
    }

    public String getBasePath() {
        return basePath;
    }

    private synchronized Definition getDefinition() {
        if (definition == null) {
            synchronized (JaxRsApplicationRamlSpecificationRestlet.class) {
                definition = JaxRsIntrospector.getDefinition(application,
                        baseRef, false);
            }
        }
        return definition;
    }

    @Override
    public void handle(Request request, Response response) {
        super.handle(request, response);
        if (Method.GET.equals(request.getMethod())) {
            response.setEntity(getApiDeclaration());
        } else {
            response.setStatus(Status.CLIENT_ERROR_METHOD_NOT_ALLOWED);
        }
    }

    public void setApiInboundRoot(Application application) {
        this.application = application;
    }

    public void setApplication(Application application) {
        this.application = application;
    }

    public void setBasePath(String basePath) {
        this.basePath = basePath;
        // Process basepath and check validity
        this.baseRef = basePath != null ? new Reference(basePath) : null;
    }
}
You can use this class like this:
JaxRsApplication application = new JaxRsApplication(component.getContext());
MyApplication app = new MyApplication();
application.add(app);
new JaxRsApplicationRamlSpecificationRestlet(app);
(...)
There is no need for a dedicated resource. Please note that this code is a bit experimental ;-) I could propose it back as a contribution to the RAML extension in Restlet...
Hope it helps you,
Thierry