How to test Reactor WebClient body response from a reactive endpoint that returns Mono<ResponseEntity<Flux<MyDto>>>? - spring-webflux

I used the openapi-generator-maven-plugin from org.openapitools in my Spring Boot project with the reactive configuration enabled. One of my endpoints returns a List body response that is auto-generated as Mono<ResponseEntity<Flux<MyDto>>>:
<plugin>
<groupId>org.openapitools</groupId>
<artifactId>openapi-generator-maven-plugin</artifactId>
<version>${openapi-generator.version}</version>
<configuration>
...
<configOptions>
<interfaceOnly>true</interfaceOnly>
<reactive>true</reactive>
...
</configOptions>
</configuration>
</plugin>
How can I test the body of my endpoint Controller within an integration test using WebTestClient?
If I try this, it doesn't work because I'm receiving a Flux instead of the expected DTO object:
webTestApi.get()
.uri("my_uri")
.accept(MediaType.APPLICATION_JSON)
.exchange()
.expectStatus().isOk()
.expectHeader().contentType(MediaType.APPLICATION_JSON)
.expectBody(MyDto.class)
.isEqualTo(myDto);

I finally resolved it on my own.
StepVerifier.create(webTestApi.get()
.uri("...")
.accept(MediaType.APPLICATION_JSON)
.exchange()
.expectStatus().isOk()
.returnResult(MyDto.class).getResponseBody())
.expectNext(myDto)
.verifyComplete();
I nested the response body into a StepVerifier, as we usually do when testing a standard producer (Flux).
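If you prefer to stay inside the WebTestClient fluent API instead, decoding the Flux body as a list should also work. A minimal sketch, assuming myDto is the only element the endpoint emits:
webTestApi.get()
.uri("my_uri")
.accept(MediaType.APPLICATION_JSON)
.exchange()
.expectStatus().isOk()
.expectHeader().contentType(MediaType.APPLICATION_JSON)
// decode the streamed body as a List<MyDto> rather than a single object
.expectBodyList(MyDto.class)
.contains(myDto);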

Related

HttpComponentsClientHttpConnector is not accepting org.apache.http.impl.nio.client.CloseableHttpAsyncClient for Webclient with Apache Http Client

I'm trying to run WebFlux on Tomcat and to create a Spring WebClient with the Apache HTTP Client.
The reference documentation states that there's built-in support:
https://docs.spring.io/spring-framework/docs/current/reference/html/web-reactive.html#webflux-client-builder-http-components
private ClientHttpConnector getApacheHttpClient(){
HttpAsyncClientBuilder clientBuilder = HttpAsyncClients.custom();
clientBuilder.setDefaultRequestConfig(RequestConfig.DEFAULT);
CloseableHttpAsyncClient client = clientBuilder.build();
ClientHttpConnector connector = new HttpComponentsClientHttpConnector(client);
return connector;
}
But Spring's HttpComponentsClientHttpConnector is not accepting org.apache.http.impl.nio.client.CloseableHttpAsyncClient. It requires org.apache.hc.client5.http.impl.async.CloseableHttpAsyncClient. So there seems to have been a package rename, and I can't find a Maven dependency that has the required class.
Does anybody know the right Maven dependency for that class, or how I could make it work?
Apache HTTP Client 5 is a separate artifact. You'll need to add the following dependencies to your pom.xml:
<dependency>
<groupId>org.apache.httpcomponents.client5</groupId>
<artifactId>httpclient5</artifactId>
<version>5.1</version>
</dependency>
<dependency>
<groupId>org.apache.httpcomponents.core5</groupId>
<artifactId>httpcore5-reactive</artifactId>
<version>5.1</version>
</dependency>
import org.apache.hc.client5.http.impl.async.HttpAsyncClients;
import org.springframework.http.client.reactive.ClientHttpConnector;
import org.springframework.http.client.reactive.HttpComponentsClientHttpConnector;
public class ApacheHttp {
public static void main(String[] args) {
// the connector wraps the HttpClient 5 async client
ClientHttpConnector connector = new HttpComponentsClientHttpConnector(HttpAsyncClients.custom().build());
}
}
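To actually use the connector, plug it into the WebClient builder. A minimal sketch, assuming the HttpClient 5 dependencies above (the base URL is just a placeholder):
import org.apache.hc.client5.http.impl.async.HttpAsyncClients;
import org.springframework.http.client.reactive.HttpComponentsClientHttpConnector;
import org.springframework.web.reactive.function.client.WebClient;
WebClient client = WebClient.builder()
// hand the HttpClient 5 based connector to WebClient
.clientConnector(new HttpComponentsClientHttpConnector(HttpAsyncClients.custom().build()))
.baseUrl("https://example.org") // placeholder base URL
.build();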

Spring Cloud Gateway Custom Filter : WebClient.create().post() causes hanging when testing

So I've created a custom filter that, when accessed, will create a WebFlux client and POST to a predetermined URL. This seems to work fine when running, but when testing this code the test hangs (until I cancel the test). So I suspect a memory leak, on top of not being able to complete the test to make sure this route is working properly. If I switch the WebClient method to get(), the resulting test of the filter works fine. Something about post() is missing, but I'm not sure what.
@Component
class ProxyGatewayFilterFactory: AbstractGatewayFilterFactory<ProxyGatewayFilterFactory.Params>(Params::class.java) {
override fun apply(params: Params): GatewayFilter {
return OrderedGatewayFilter(
GatewayFilter { exchange, chain ->
exchange.request.mutate().header("test","test1").build()
WebClient.create().post()
.uri(params.proxyBasePath)
.body(BodyInserters.fromDataBuffers(exchange.request.body))
.headers { it.addAll(exchange.request.headers) }
.exchange()
.flatMap {
println("the POST statusCode is "+it.statusCode())
Mono.just(it.statusCode().is2xxSuccessful)
}
.map {
exchange.request.mutate().header("test", "test2").build()
println("exchange request uri is " + exchange.request.uri)
println("exchange response statusCode is "+ exchange.response.statusCode)
exchange
}
.flatMap(chain::filter)
}, params.order)
}
}
Taken from the documentation: if you are using exchange(), you have an obligation to consume the body.
Unlike retrieve(), when using exchange(), it is the responsibility of the application to consume any response content regardless of the scenario (success, error, unexpected data, etc.). Not doing so can cause a memory leak. The Javadoc for ClientResponse lists all the available options for consuming the body. Generally prefer using retrieve() unless you have a good reason for using exchange(), which does allow checking the response status and headers before deciding how to, or whether to, consume the response.
This applies to the Spring Framework 5.2.9 WebClient. The API has been changed in the latest version, Spring Framework 5.3.0: Spring now forces you to consume the body, because developers didn't actually read the docs.
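In 5.3 the idiomatic replacement is exchangeToMono(), which makes the consumption explicit. A minimal sketch of the status check from the filter above, assuming Spring Framework 5.3+ (the target URI is a placeholder):
import org.springframework.web.reactive.function.client.WebClient;
import reactor.core.publisher.Mono;
Mono<Boolean> success = WebClient.create().post()
.uri("http://localhost:8080/proxy") // placeholder target
.exchangeToMono(response ->
// release the body explicitly, then map the status
response.releaseBody().thenReturn(response.statusCode().is2xxSuccessful()));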

Apache Beam: is it possible to consume messages of RabbitMQ with exchange and routing key

I defined a pipeline in Apache Beam to consume messages of a given queue in RabbitMQ message broker.
I defined an exchange and routing key in RabbitMQ.
I used AmqpIO.read() in Beam (version 2.9.0) but I did not find any API to set the exchange and the routing key.
(Following this doc: https://beam.apache.org/releases/javadoc/2.4.0/org/apache/beam/sdk/io/amqp/AmqpIO.html)
Is there any possibility to do that, even with another plugin?
Regards,
Ali
There is a new (experimental) IO connector for RabbitMQ shipped with the latest v2.9.0 Apache Beam release. The AMQP connector will not work for RabbitMQ.
If you are using Maven add the following dependency in your POM
<!-- Beam RabbitMQ I/O -->
<dependency>
<groupId>org.apache.beam</groupId>
<artifactId>beam-sdks-java-io-rabbitmq</artifactId>
<version>2.9.0</version>
</dependency>
and you can use it in a pipeline like
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.rabbitmq.RabbitMqMessage;
import org.apache.beam.sdk.options.Default;
import org.apache.beam.sdk.options.Description;
import org.apache.beam.sdk.options.PipelineOptions;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.options.Validation.Required;
import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.transforms.ParDo;
import org.apache.beam.sdk.values.PCollection;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
public class RabbitMQPipeline {
final static Logger log = LoggerFactory.getLogger(RabbitMQPipeline.class);
/**
* RabbitMQ pipeline options.
*/
public interface RabbitMQPipelineOptions extends PipelineOptions {
@Description("URI of the AMQP broker to read from")
@Default.String("amqp://localhost")
@Required
String getUri();
void setUri(String uri);
}
/**
* @param args
*/
public static void main(String[] args) {
RabbitMQPipelineOptions options = PipelineOptionsFactory.fromArgs(args).withValidation()
.as(RabbitMQPipelineOptions.class);
Pipeline pipeline = Pipeline.create(options);
// RabbitMqIO2 is the locally rebuilt copy of RabbitMqIO mentioned below
PCollection<RabbitMqMessage> messages = pipeline
.apply(RabbitMqIO2.read().withUri(options.getUri()).withQueue("test"));
messages.apply(ParDo.of(new DoFn<RabbitMqMessage, String>() {
@ProcessElement
public void process(@Element RabbitMqMessage msg) {
System.out.println(msg.toString());
}
}));
pipeline.run().waitUntilFinish();
}
}
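To try the pipeline locally, a run via the exec-maven-plugin along these lines should work, assuming a broker is listening on the default port (class name and URI as above):
mvn compile exec:java -Dexec.mainClass=RabbitMQPipeline -Dexec.args="--uri=amqp://localhost"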
The RabbitMqIO Javadoc has examples of how to use the reader and writer.
A word of caution
There is a known bug that has been fixed but is scheduled for release in v2.11.0; it blocks the connector from working even in the simplest scenarios. The fix is really simple (see the JIRA issue), but you will need to rebuild a new version of the class. In case you want to give it a try, make sure you add the following Maven dependency:
<dependency>
<groupId>com.google.auto.value</groupId>
<artifactId>auto-value</artifactId>
<version>1.5.2</version>
<scope>provided</scope>
</dependency>
and add the following configuration in Maven Compiler plugin
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<version>3.6.1</version>
<configuration>
<source>1.8</source>
<target>1.8</target>
<annotationProcessors>
<annotationProcessor>com.google.auto.value.processor.AutoValueProcessor</annotationProcessor>
</annotationProcessors>
</configuration>
</plugin>
If you are using Eclipse, make sure you install the m2e-apt plugin. Good luck!

Apache cxf jax-rs implementation with xml databind

I configured my REST service to implement content negotiation through Variant.
On Jersey all works fine, but on Apache CXF something goes wrong:
No message body writer has been found for class ContentType: application/xml
It seems that when I construct the response as XML it cannot find the correct body writer.
I configured JAX-RS with JacksonJaxbJsonProvider and all works great with JSON databind.
<jaxrs:providers>
<bean class="com.fasterxml.jackson.jaxrs.json.JacksonJaxbJsonProvider" />
</jaxrs:providers>
cxf-rt-frontend-jaxrs version 3.0.3
jackson-databind: 2.4.2
Any idea?
Add a @XmlRootElement(name="order"): the generated XML cannot be <orderId>data</orderId> alone; it should have a root element. Thus the updated code would look like:
@XmlRootElement(name="order")
@XmlType(propOrder = { "orderId"})
public class OrderForConfirmationEmail implements Serializable {
@XmlElement
public long getOrderId() {
long orderId = new Random().nextLong();
return orderId;
}
}
Generated xml is
<?xml version="1.0" encoding="UTF-8" standalone="yes"?><order xmlns="http://com.kp.swasthik/so/schema">
<orderId>369317779145370211</orderId>
</order>
and json is
{"orderId":6812414735706519327}

Camel Redis Component subscribe to a channel not working

I've a simple route that listens to a Redis channel. For some reason it's not working.
Here is my route. I verified that data is being published into the Redis channel and I can read it back using a normal Jedis subscriber. I'm running Camel inside Jetty and it is deployed as a war.
public class RedisSubscriberRoute extends RouteBuilder{
@Override
public void configure() throws Exception {
from("spring-redis://localhost:6379?command=SUBSCRIBE&channels=mychannel")
.process(new Processor() {
@Override
public void process(Exchange exchange) throws Exception {
String res = exchange.getIn().getBody().toString();
System.out.println("************ " + res);
exchange.getOut().setBody(res);
}
})
.to("log:foo");
}
}
UPDATE (10-May-2013 9:56 AM EST): Adding version information
<properties>
<spring.version>3.2.2.RELEASE</spring.version>
<camel.version>2.11.0</camel.version>
<jetty.version>7.6.8.v20121106</jetty.version>
</properties>
Redis server version is 2.6.11
The sample git project is here.
https://github.com/soumyasd/camelredisdemo
UPDATE 10-May-2013 (10:18 PM EST):
As suggested in the comments below, I changed the version of spring-data-redis to 1.0.0.RELEASE. It looks like the message is getting to the subscriber, but I'm still getting an exception:
java.lang.RuntimeException: org.springframework.data.redis.serializer.SerializationException: Cannot deserialize; nested exception is org.springframework.core.serializer.support.SerializationFailedException: Failed to deserialize payload. Is the byte array a result of corresponding serialization for DefaultDeserializer?; nested exception is java.io.StreamCorruptedException: invalid stream header: 77686174
at org.apache.camel.component.redis.RedisConsumer.onMessage(RedisConsumer.java:73)[camel-spring-redis-2.11.0.jar:2.11.0]
at org.springframework.data.redis.listener.RedisMessageListenerContainer.executeListener(RedisMessageListenerContainer.java:242)[spring-data-redis-1.0.0.RELEASE.jar:]
at org.springframework.data.redis.listener.RedisMessageListenerContainer.processMessage(RedisMessageListenerContainer.java:231)[spring-data-redis-1.0.0.RELEASE.jar:]
at org.springframework.data.redis.listener.RedisMessageListenerContainer$DispatchMessageListener$1.run(RedisMessageListenerContainer.java:726)[spring-data-redis-1.0.0.RELEASE.jar:]
at java.lang.Thread.run(Thread.java:680)[:1.6.0_45]
There is something broken in the consumer with v1.0.3.RELEASE; use 1.0.0.RELEASE instead.
The exception you are getting is something different: the Camel producer uses Spring's RedisTemplate, which in turn uses JdkSerializationRedisSerializer. To make it symmetric, the consumer by default also uses JdkSerializationRedisSerializer to deserialize data. So if you are using the Camel producer to publish data, it should work fine without hassle. But if you are publishing data to Redis using other Redis clients (or, as in your case, some other libraries), you have to use another serializer for the consumer. Long explanation, but making it work is actually two lines:
from("spring-redis://localhost:6379?command=SUBSCRIBE&channels=mychannel&serializer=#serializer")
Here is a summary of what I had to change to make this work.
As pointed out by @Bilgin Ibryam, you have to use version 1.0.0.RELEASE of spring-data-redis (as of 11-May-2013):
<dependency>
<groupId>org.springframework.data</groupId>
<artifactId>spring-data-redis</artifactId>
<!-- IMPORTANT - as of 10-May-2013 the Redis Camel
component only works with version 1.0.0.RELEASE -->
<version>1.0.0.RELEASE</version>
</dependency>
Other versions that I used in my pom.xml are
<spring.version>3.2.2.RELEASE</spring.version>
<camel.version>2.11.0</camel.version>
<jetty.version>7.6.8.v20121106</jetty.version>
If you are publishing and consuming using the Camel Redis component, you don't have to declare a different serializer. In my case I was publishing from Python as well as plain old Java using Jedis, so I had to change my route to include the serializer and define the serializer bean in my Spring/Camel config.
@Override
public void configure() throws Exception {
from("spring-redis://localhost:6379?command=SUBSCRIBE&channels=mychannel&serializer=#redisserializer")
.process(new Processor() {
@Override
public void process(Exchange exchange) throws Exception {
String res = exchange.getIn().getBody().toString();
System.out.println("************ " + res);
exchange.getOut().setBody(res);
}
})
.to("log:foo");
}