WebFlux + RSocket: how to pass a Flux from RSocket to WebFlux - spring-webflux

I'm trying to use WebFlux with RSocket. The sample application has server and client applications, both running on WebFlux and RSocket; my RSocket communication type is request-stream. The client-server application runs perfectly fine for a couple of concurrent requests; however, when I load test with 1000 qps on 8 threads, requests start hanging. On investigation, the sample code below passes the load test.
WORKING SAMPLE
RSocketClientConfig.java
public class RSocketClientConfig {
@Bean
RSocketRequester rSocketRequester(RSocketRequester.Builder rsocketRequesterBuilder, RSocketStrategies strategies,
RSocketClientProperties clientProp) {
RSocketRequester rsocketRequester = rsocketRequesterBuilder.rsocketStrategies(strategies)
.dataMimeType(new MimeType("application", "x-protobuf"))
.connectTcp(clientProp.getHost(), clientProp.getRsocPort()).retry().block();
rsocketRequester.rsocket().onClose().doOnError(error -> log.warn("Connection CLOSED"))
.doFinally(consumer -> log.info("Client DISCONNECTED")).subscribe();
return rsocketRequester;
}
}
Client.java
@Service
public class PersonRSocketClient {
@Autowired
private RSocketRequester personClient;
public Flux<Person> list() {
return personClient.route("person").retrieveFlux(Person.class);
}
}
NOT WORKING
RSocketClientConfig.java
public class RSocketClientConfig {
@Bean
Mono<RSocketRequester> rSocketRequester(RSocketRequester.Builder rsocketRequesterBuilder, RSocketStrategies strategies,
RSocketClientProperties clientProp) {
Mono<RSocketRequester> rsocketRequester = rsocketRequesterBuilder.rsocketStrategies(strategies)
.dataMimeType(new MimeType("application", "x-protobuf"))
.connectTcp(clientProp.getHost(), clientProp.getRsocPort());
return rsocketRequester;
}
}
Client.java
@Service
public class PersonRSocketClient {
@Autowired
private Mono<RSocketRequester> personClient;
public Flux<Person> list() {
return personClient
.flatMapMany(rsocket -> rsocket.route("person").retrieveFlux(Person.class));
}
}
How do I map a request-stream to a Flux correctly?
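Note: connectTcp(...) returns a cold Mono, so in the non-working variant every retrieveFlux subscription may open a fresh TCP connection, which would explain the hangs at 1000 qps. A hedged sketch of one possible fix, caching the connection Mono so all requests share a single requester (not verified against this app):
@Bean
Mono<RSocketRequester> rSocketRequester(RSocketRequester.Builder rsocketRequesterBuilder, RSocketStrategies strategies,
        RSocketClientProperties clientProp) {
    return rsocketRequesterBuilder.rsocketStrategies(strategies)
            .dataMimeType(new MimeType("application", "x-protobuf"))
            .connectTcp(clientProp.getHost(), clientProp.getRsocPort())
            .retry()
            .cache(); // replay the single connected requester to every subscriber
}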

Related

Spring Integration testing a Files.inboundAdapter flow

I have this flow that I am trying to test, but nothing works as expected. The flow itself works well, but testing seems a bit tricky.
This is my flow:
@Configuration
@RequiredArgsConstructor
public class FileInboundFlow {
private final ThreadPoolTaskExecutor threadPoolTaskExecutor;
private String filePath;
@Bean
public IntegrationFlow fileReaderFlow() {
return IntegrationFlows.from(Files.inboundAdapter(new File(this.filePath))
.filterFunction(...)
.preventDuplicates(false),
endpointConfigurer -> endpointConfigurer.poller(
Pollers.fixedDelay(500)
.taskExecutor(this.threadPoolTaskExecutor)
.maxMessagesPerPoll(15)))
.transform(new UnZipTransformer())
.enrichHeaders(this::headersEnricher)
.transform(Message.class, this::modifyMessagePayload)
.route(Map.class, this::channelsRouter)
.get();
}
private String channelsRouter(Map<String, File> payload) {
boolean isZip = payload.values()
.stream()
.anyMatch(file -> isZipFile(file));
return isZip ? ZIP_CHANNEL : XML_CHANNEL; // ZIP_CHANNEL and XML_CHANNEL are PublishSubscribeChannel
}
@Bean
public SubscribableChannel xmlChannel() {
var channel = new PublishSubscribeChannel(this.threadPoolTaskExecutor);
channel.setBeanName(XML_CHANNEL);
return channel;
}
@Bean
public SubscribableChannel zipChannel() {
var channel = new PublishSubscribeChannel(this.threadPoolTaskExecutor);
channel.setBeanName(ZIP_CHANNEL);
return channel;
}
//There is an @ServiceActivator on each channel
@ServiceActivator(inputChannel = XML_CHANNEL)
public void handleXml(Message<Map<String, File>> message) {
...
}
@ServiceActivator(inputChannel = ZIP_CHANNEL)
public void handleZip(Message<Map<String, File>> message) {
...
}
//Plus an @Transformer on the XML_CHANNEL
@Transformer(inputChannel = XML_CHANNEL, outputChannel = BUS_CHANNEL)
private List<BusData> xmlFileToIngestionMessagePayload(Map<String, File> xmlFilesByName) {
return xmlFilesByName.values()
.stream()
.map(...)
.collect(Collectors.toList());
}
}
I would like to test multiple cases; the first one is checking the message payload published on each channel at the end of fileReaderFlow.
So I defined this test class:
@SpringBootTest
@SpringIntegrationTest
@ExtendWith(SpringExtension.class)
class FileInboundFlowTest {
@Autowired
private MockIntegrationContext mockIntegrationContext;
@TempDir
static Path localWorkDir;
@BeforeEach
void setUp() {
copyFileToTheFlowDir(); // here I copy a file to trigger the flow
}
@Test
void checkXmlChannelPayloadTest() throws InterruptedException {
Thread.sleep(1000); //waiting for the flow execution
PublishSubscribeChannel xmlChannel = this.getBean(XML_CHANNEL, PublishSubscribeChannel.class); // I extract the channel to listen to the message sent to it.
xmlChannel.subscribe(message -> {
assertThat(message.getPayload()).isInstanceOf(Map.class); // This is never executed
});
}
}
As expected, that test does not work, because the assertThat(message.getPayload()).isInstanceOf(Map.class); is never executed.
After reading the documentation I didn't find any hint to help me solve that issue. Any help would be appreciated! Thanks a lot.
First of all, that channel.setBeanName(XML_CHANNEL); does not affect the target bean. You do this in the bean creation phase, and the dependency injection container knows nothing about this setting: it just does not consult it. If you really would like to dictate XML_CHANNEL as the bean name, you'd better look into the @Bean(name) attribute.
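A sketch of that suggestion, reusing the names from the question:
@Bean(name = XML_CHANNEL)
public SubscribableChannel xmlChannel() {
    // the container now registers the bean under XML_CHANNEL itself
    return new PublishSubscribeChannel(this.threadPoolTaskExecutor);
}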
The problem in the test is that you are missing the async nature of the flow. That Files.inboundAdapter() works on a fully different thread and emits messages outside of your test method. So, even if you could subscribe to the channel in time, before any message is emitted to it, that wouldn't mean your test works correctly: the assertThat() would be performed on a different thread, and therefore there would be no real JUnit report for it in your test method context.
So, what I'd suggest is:
Have the Files.inboundAdapter() stopped at the beginning of the test, before any setup you'd like to do in the test. Or at least don't place files into that filePath, so the channel adapter doesn't emit messages.
Take the channel from the application context and, if you wish, subscribe to it or use a ChannelInterceptor.
Have an async barrier, e.g. a CountDownLatch, to pass to that subscriber.
Start the channel adapter or put a file into the dir for scanning.
Wait for the async barrier before verifying some value or state (see the sketch below).
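Putting those steps together, a minimal sketch of the test (reusing the getBean and copyFileToTheFlowDir helpers from the question; the 10-second timeout is illustrative):
@Test
void checkXmlChannelPayloadTest() throws Exception {
    CountDownLatch latch = new CountDownLatch(1);
    AtomicReference<Message<?>> received = new AtomicReference<>();
    PublishSubscribeChannel xmlChannel = this.getBean(XML_CHANNEL, PublishSubscribeChannel.class);
    // subscribe BEFORE any file lands in the watched directory
    xmlChannel.subscribe(message -> {
        received.set(message);
        latch.countDown();
    });
    copyFileToTheFlowDir(); // now trigger the flow
    // wait for the async barrier, then assert on the test thread
    assertThat(latch.await(10, TimeUnit.SECONDS)).isTrue();
    assertThat(received.get().getPayload()).isInstanceOf(Map.class);
}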

Spring WebFlux (Flux): how to publish dynamically

I am new to reactive programming and Spring WebFlux. I want my App 1 to publish Server-Sent Events through a Flux and my App 2 to listen on it continuously.
I want the Flux to publish on demand (e.g., when something happens). All the examples I found use Flux.interval to publish events periodically, and there seems to be no way to append/modify the content in a Flux once it is created.
How can I achieve my goal? Or am I totally wrong conceptually?
Publish "dynamically" using FluxProcessor and FluxSink
One of the techniques to supply data manually to a Flux is using the FluxProcessor#sink method, as in the following example:
@SpringBootApplication
@RestController
public class DemoApplication {
final FluxProcessor processor;
final FluxSink sink;
final AtomicLong counter;
public static void main(String[] args) {
SpringApplication.run(DemoApplication.class, args);
}
public DemoApplication() {
this.processor = DirectProcessor.create().serialize();
this.sink = processor.sink();
this.counter = new AtomicLong();
}
@GetMapping("/send")
public void test() {
sink.next("Hello World #" + counter.getAndIncrement());
}
@RequestMapping(produces = MediaType.TEXT_EVENT_STREAM_VALUE)
public Flux<ServerSentEvent> sse() {
return processor.map(e -> ServerSentEvent.builder(e).build());
}
}
Here, I created a DirectProcessor in order to support multiple subscribers that will listen to the data stream. Also, I applied FluxProcessor#serialize, which provides safe support for multiple producers (invocation from different threads without violating the Reactive Streams spec rules, especially rule 1.3). Finally, by calling "http://localhost:8080/send" we will see the message Hello World #0 (of course, only if you connected to "http://localhost:8080" previously).
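To see it in action, something like this sketch (same URLs as above; the WebClient usage is illustrative) keeps a subscriber connected before you hit /send:
// subscribe to the SSE endpoint before triggering /send
WebClient.create("http://localhost:8080")
        .get()
        .accept(MediaType.TEXT_EVENT_STREAM)
        .retrieve()
        .bodyToFlux(String.class)
        .subscribe(System.out::println);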
Update for Reactor 3.4
With Reactor 3.4 you have a new API called reactor.core.publisher.Sinks. The Sinks API offers a fluent builder for manual data-sending, which lets you specify things like the number of elements in the stream, backpressure behavior, the number of supported subscribers, and replay capabilities:
@SpringBootApplication
@RestController
public class DemoApplication {
final Sinks.Many sink;
final AtomicLong counter;
public static void main(String[] args) {
SpringApplication.run(DemoApplication.class, args);
}
public DemoApplication() {
this.sink = Sinks.many().multicast().onBackpressureBuffer();
this.counter = new AtomicLong();
}
@GetMapping("/send")
public void test() {
EmitResult result = sink.tryEmitNext("Hello World #" + counter.getAndIncrement());
if (result.isFailure()) {
// do something here, since emission failed
}
}
@RequestMapping(produces = MediaType.TEXT_EVENT_STREAM_VALUE)
public Flux<ServerSentEvent> sse() {
return sink.asFlux().map(e -> ServerSentEvent.builder(e).build());
}
}
Note that message sending via the Sinks API introduces a new concept of emission and its result. The reason for such an API is the fact that Reactor extends Reactive Streams and has to follow backpressure control. That said, if you emit more signals than were requested, and the underlying implementation does not support buffering, your message will not be delivered. Therefore, the result of tryEmitNext is an EmitResult, which indicates whether the message was sent or not.
Also note that by default the Sinks API gives you a serialized version of the Sink, which means you don't have to care about concurrency. However, if you know in advance that emission of messages is serial, you may build a Sinks.unsafe() version, which does not serialize given messages.
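For illustration, a sketch of that unsafe variant (names as in the Reactor 3.4 API):
// skips internal serialization: safe only if all emissions happen on one thread
Sinks.Many<String> unsafeSink = Sinks.unsafe().many().multicast().onBackpressureBuffer();
unsafeSink.emitNext("Hello", Sinks.EmitFailureHandler.FAIL_FAST);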
Just another idea: using EmitterProcessor as a gateway to the flux.
import reactor.core.publisher.EmitterProcessor;
import reactor.core.publisher.Flux;
public class MyEmitterProcessor {
EmitterProcessor<String> emitterProcessor;
public static void main(String args[]) {
MyEmitterProcessor myEmitterProcessor = new MyEmitterProcessor();
Flux<String> publisher = myEmitterProcessor.getPublisher();
myEmitterProcessor.onNext("A");
myEmitterProcessor.onNext("B");
myEmitterProcessor.onNext("C");
myEmitterProcessor.complete();
publisher.subscribe(x -> System.out.println(x));
}
public Flux<String> getPublisher() {
emitterProcessor = EmitterProcessor.create();
return emitterProcessor.map(x -> "consume: " + x);
}
public void onNext(String nextString) {
emitterProcessor.onNext(nextString);
}
public void complete() {
emitterProcessor.onComplete();
}
}
For more info, see here in the Reactor docs. The documentation itself recommends that "Most of the time, you should try to avoid using a Processor. They are harder to use correctly and prone to some corner cases." BUT I don't know which kind of corner cases.
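One known corner case is that EmitterProcessor only buffers a limited number of elements before the first subscriber arrives, and it is deprecated as of Reactor 3.4. A rough Sinks-based equivalent of the gateway above (a sketch, not the document's code):
Sinks.Many<String> sink = Sinks.many().multicast().onBackpressureBuffer();
Flux<String> publisher = sink.asFlux().map(x -> "consume: " + x);
sink.tryEmitNext("A"); // buffered until the first subscriber
publisher.subscribe(System.out::println); // prints "consume: A"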

Stop polling files when RabbitMQ is down: Spring Integration

I'm working on a project where we poll files from an SFTP server and stream them out as objects on the RabbitMQ queue. When RabbitMQ is down, it still polls the file, deletes it from the server, and loses it when the send to the queue fails. I'm using ExpressionEvaluatingRequestHandlerAdvice to remove the file on successful transformation. My code looks like this:
@Bean
public SessionFactory<ChannelSftp.LsEntry> sftpSessionFactory() {
DefaultSftpSessionFactory factory = new DefaultSftpSessionFactory(true);
factory.setHost(sftpProperties.getSftpHost());
factory.setPort(sftpProperties.getSftpPort());
factory.setUser(sftpProperties.getSftpPathUser());
factory.setPassword(sftpProperties.getSftpPathPassword());
factory.setAllowUnknownKeys(true);
return new CachingSessionFactory<>(factory);
}
@Bean
public SftpRemoteFileTemplate sftpRemoteFileTemplate() {
return new SftpRemoteFileTemplate(sftpSessionFactory());
}
@Bean
@InboundChannelAdapter(channel = TransformerChannel.TRANSFORMER_OUTPUT, autoStartup = "false",
poller = @Poller(value = "customPoller"))
public MessageSource<InputStream> sftpMessageSource() {
SftpStreamingMessageSource messageSource = new SftpStreamingMessageSource(sftpRemoteFileTemplate,
null);
messageSource.setRemoteDirectory(sftpProperties.getSftpDirPath());
messageSource.setFilter(new SftpPersistentAcceptOnceFileListFilter(new SimpleMetadataStore(),
"streaming"));
messageSource.setFilter(new SftpSimplePatternFileListFilter("*.txt"));
return messageSource;
}
@Bean
@Transformer(inputChannel = TransformerChannel.TRANSFORMER_OUTPUT,
outputChannel = SFTPOutputChannel.SFTP_OUTPUT,
adviceChain = "deleteAdvice")
public org.springframework.integration.transformer.Transformer transformer() {
return new SFTPTransformerService("UTF-8");
}
@Bean
public ExpressionEvaluatingRequestHandlerAdvice deleteAdvice() {
ExpressionEvaluatingRequestHandlerAdvice advice = new ExpressionEvaluatingRequestHandlerAdvice();
advice.setOnSuccessExpressionString(
"#sftpRemoteFileTemplate.remove(headers['file_remoteDirectory'] + headers['file_remoteFile'])");
advice.setPropagateEvaluationFailures(false);
return advice;
}
I don't want the files to get removed/polled from the remote SFTP server when the RabbitMQ server is down. How can I achieve this?
UPDATE
Apologies for not mentioning that I'm using the Spring Cloud Stream Rabbit binder. Here is the transformer service:
public class SFTPTransformerService extends StreamTransformer {
public SFTPTransformerService(String charset) {
super(charset);
}
@Override
protected Object doTransform(Message<?> message) throws Exception {
String fileName = message.getHeaders().get("file_remoteFile", String.class);
Object fileContents = super.doTransform(message);
return new customFileDTO(fileName, (String) fileContents);
}
}
UPDATE-2
I added a TransactionSynchronizationFactory on the customPoller as suggested. Now it doesn't poll the file when the Rabbit server is down, but when the server is up, it keeps polling the same file over and over again, and I cannot figure out why. I guess I cannot use PollerSpec because I'm on version 4.3.2.
@Bean(name = "customPoller")
public PollerMetadata pollerMetadataDTX(StartStopTrigger startStopTrigger,
CustomTriggerAdvice customTriggerAdvice) {
PollerMetadata pollerMetadata = new PollerMetadata();
pollerMetadata.setAdviceChain(Collections.singletonList(customTriggerAdvice));
pollerMetadata.setTrigger(startStopTrigger);
pollerMetadata.setMaxMessagesPerPoll(Long.valueOf(sftpProperties.getMaxMessagePoll()));
ExpressionEvaluatingTransactionSynchronizationProcessor syncProcessor =
new ExpressionEvaluatingTransactionSynchronizationProcessor();
syncProcessor.setBeanFactory(applicationContext.getAutowireCapableBeanFactory());
syncProcessor.setBeforeCommitChannel(
applicationContext.getBean(TransformerChannel.TRANSFORMER_OUTPUT, MessageChannel.class));
syncProcessor
.setAfterCommitChannel(
applicationContext.getBean(SFTPOutputChannel.SFTP_OUTPUT, MessageChannel.class));
syncProcessor.setAfterCommitExpression(new SpelExpressionParser().parseExpression(
"#sftpRemoteFileTemplate.remove(headers['file_remoteDirectory'] + headers['file_remoteFile'])"));
DefaultTransactionSynchronizationFactory defaultTransactionSynchronizationFactory =
new DefaultTransactionSynchronizationFactory(syncProcessor);
pollerMetadata.setTransactionSynchronizationFactory(defaultTransactionSynchronizationFactory);
return pollerMetadata;
}
I don't know if you need this info, but my CustomTriggerAdvice and StartStopTrigger look like this:
@Component
public class CustomTriggerAdvice extends AbstractMessageSourceAdvice {
@Autowired private StartStopTrigger startStopTrigger;
@Override
public boolean beforeReceive(MessageSource<?> source) {
return true;
}
@Override
public Message<?> afterReceive(Message<?> result, MessageSource<?> source) {
if (result == null) {
if (startStopTrigger.getStart()) {
startStopTrigger.stop();
}
} else {
if (!startStopTrigger.getStart()) {
startStopTrigger.stop();
}
}
return result;
}
}
public class StartStopTrigger implements Trigger {
private PeriodicTrigger startTrigger;
private boolean start;
public StartStopTrigger(PeriodicTrigger startTrigger, boolean start) {
this.startTrigger = startTrigger;
this.start = start;
}
@Override
public Date nextExecutionTime(TriggerContext triggerContext) {
if (!start) {
return null;
}
start = true;
return startTrigger.nextExecutionTime(triggerContext);
}
public void stop() {
start = false;
}
public void start() {
start = true;
}
public boolean getStart() {
return this.start;
}
}
Well, it would be great to see your SFTPTransformerService, to determine how it is possible to perform the onSuccessExpression when there should be an exception in case of a down broker.
You should not only throw an exception and skip the delete, but also consider adding a RequestHandlerRetryAdvice to re-send the file to RabbitMQ: https://docs.spring.io/spring-integration/docs/5.0.6.RELEASE/reference/html/messaging-endpoints-chapter.html#retry-advice
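A sketch of such an advice bean (the RetryTemplate and its SimpleRetryPolicy here are illustrative, not from the question):
@Bean
public RequestHandlerRetryAdvice retryAdvice() {
    RequestHandlerRetryAdvice advice = new RequestHandlerRetryAdvice();
    RetryTemplate retryTemplate = new RetryTemplate();
    retryTemplate.setRetryPolicy(new SimpleRetryPolicy(3)); // up to 3 attempts
    advice.setRetryTemplate(retryTemplate);
    return advice;
}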
UPDATE
So, well, since Gary guessed that you use Spring Cloud Stream to send messages to the Rabbit binder after your internal process (very sad that you didn't share that information originally), you need to take a look at the binder error handling on the matter: https://docs.spring.io/spring-cloud-stream/docs/Elmhurst.RELEASE/reference/htmlsingle/#_retry_with_the_rabbitmq_binder
And it is true that the ExpressionEvaluatingRequestHandlerAdvice is applied only to the SFTPTransformerService and nothing more. The downstream error (in the binder) is not included in this process.
UPDATE 2
Yeah... I think Gary is right: we have no choice but to configure a TransactionSynchronizationFactory on the customPoller level instead of that ExpressionEvaluatingRequestHandlerAdvice.
The DefaultTransactionSynchronizationFactory can be configured with an ExpressionEvaluatingTransactionSynchronizationProcessor, which has a similar goal to the mentioned ExpressionEvaluatingRequestHandlerAdvice, but on the transaction level, which will include your whole process, starting with the SFTP channel adapter and ending on the Rabbit binder level with the send-to-AMQP attempts.
See Reference Manual for more information: https://docs.spring.io/spring-integration/reference/html/transactions.html#transaction-synchronization.
The point with the ExpressionEvaluatingRequestHandlerAdvice (and any AbstractRequestHandlerAdvice) is that it has a boundary only around the handleRequestMessage() method, and therefore applies only to the component on which it is declared.

WebTestClient used multiple times returns empty body sometimes

Not sure why this could be an issue, but I can't stabilize my unit tests.
Here are some snippets from my test class:
@SpringBootTest(webEnvironment = WebEnvironment.RANDOM_PORT, properties = { "spring.main.web-application-type=reactive" })
@RunWith(SpringRunner.class)
@TestPropertySource(locations = "classpath:application-test.properties")
public class SolrControllerV1Test {
@Inject
ApplicationContext context;
@LocalServerPort
int port;
private WebTestClient client;
@TestConfiguration
static class TestConfig {
@Bean
public TestingAuthenticationProvider testAuthentiationManager() {
return new TestingAuthenticationProvider();
}
@Bean
public SecurityWebFilterChain securityConfig(ServerHttpSecurity http, ReactiveAuthenticationManager authenticationManager) {
AuthenticationWebFilter webFilter = new AuthenticationWebFilter(authenticationManager);
return http.addFilterAt(webFilter, SecurityWebFiltersOrder.AUTHENTICATION)
.authorizeExchange()
.anyExchange()
.authenticated()
.and()
.build();
}
}
@Before
public void setUp() {
this.client = WebTestClient.bindToApplicationContext(context).configureClient().responseTimeout(Duration.ofDays(1L)).baseUrl("http://localhost:" + port).build();
}
private void defaultCheck(ResponseSpec spec) {
spec.expectStatus().isOk().expectBody().jsonPath("$.response.numFound").hasJsonPath();
}
@Test
@WithMockUser(roles = { "ADMIN" })
public void simpleUsrSelect() throws Exception {
ResponseSpec spec = this.client.get().uri("/" + serviceVersion + "/usr/select?q=*:*&fq=*:*&fl=USRTYP,USRKEY,USRCID&rows=1&start=10&sort=last_update desc").exchange();
defaultCheck(spec);
}
@Test
@WithMockUser(roles = { "ADMIN" })
public void simpleCvdSelect() throws Exception {
ResponseSpec spec = this.client.get().uri("/" + serviceVersion + "/cvd/select?q=*:*&rows=10000").exchange();
defaultCheck(spec);
}
.
.
.
}
There are some more unit tests in the class, some of which are long-running (>1 sec). If I have enough unit tests in the class (~5-8), of which 1 or 2 take a bit longer, the unit tests start to break. This looks like a thread-safety issue, but I don't know what I'm doing wrong. Any ideas?
EDIT
Here is the server part that caused trouble:
@PreAuthorize("hasAnyRole('ADMIN','TENANT')")
public Mono<ServerResponse> select(ServerRequest request) {
return request.principal().flatMap((principal) -> {
return client.get().uri(f -> {
URI u = f.path(request.pathVariable("collection")).path("/select/").queryParams(
queryModifier.modify(principal, request.pathVariable("collection"), request.queryParams())
.onErrorMap(NoSuchFieldException.class, t -> new ResponseStatusException(HttpStatus.NOT_FOUND, "Collection not found"))
.block()).build();
return u;
})
.exchange()
.flatMap((ClientResponse mapper) -> {
return ServerResponse.status(mapper.statusCode())
.headers(c -> mapper.headers().asHttpHeaders().forEach((name, value) -> c.put(name, value)))
.body(mapper.bodyToFlux(DataBuffer.class), DataBuffer.class);
})
.doOnError(t -> handleAuthxErrors(t, principal, request.uri()));
});
}
If I add a publishOn(Schedulers.elastic()) right after the .exchange() part, it seems to work. Since this is trial and error, and I don't really understand why the publishOn fixes the problem, does anybody else know? I'm not even sure whether using Spring's reactive WebClient is blocking in this case or not.
Thanks, Henning
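One suspect in the code above is the queryModifier...block() inside the uri(...) callback: it runs on whatever thread subscribes to the exchange, often a Netty event-loop thread, and blocking event-loop threads can starve other in-flight requests. A sketch of a fully non-blocking variant (assuming queryModifier.modify(...) returns a Mono of the query params, as the block() implies):
return request.principal().flatMap(principal ->
        queryModifier.modify(principal, request.pathVariable("collection"), request.queryParams())
                .onErrorMap(NoSuchFieldException.class,
                        t -> new ResponseStatusException(HttpStatus.NOT_FOUND, "Collection not found"))
                // resolve the params reactively, then build the request: no block() needed
                .flatMap(params -> client.get()
                        .uri(f -> f.path(request.pathVariable("collection")).path("/select/")
                                .queryParams(params).build())
                        .exchange())
                .flatMap(mapper -> ServerResponse.status(mapper.statusCode())
                        .headers(h -> mapper.headers().asHttpHeaders().forEach(h::put))
                        .body(mapper.bodyToFlux(DataBuffer.class), DataBuffer.class))
                .doOnError(t -> handleAuthxErrors(t, principal, request.uri())));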

Spring 3 AOP annotated advices

I'm trying to figure out how to proxy my beans with AOP advices in an annotated way.
I have a simple class:
@Service
public class RestSampleDao {
@MonitorTimer
public Collection<User> getUsers(){
....
return users;
}
}
I have created a custom annotation for monitoring execution time:
@Target({ ElementType.METHOD, ElementType.TYPE })
@Retention(RetentionPolicy.RUNTIME)
public @interface MonitorTimer {
}
and an advice to do some fake monitoring:
public class MonitorTimerAdvice implements MethodInterceptor {
public Object invoke(MethodInvocation invocation) throws Throwable{
try {
long start = System.currentTimeMillis();
Object retVal = invocation.proceed();
long end = System.currentTimeMillis();
long differenceMs = end - start;
System.out.println("\ncall took " + differenceMs + " ms ");
return retVal;
} catch(Throwable t){
System.out.println("\nerror occured");
throw t;
}
}
}
Now I can use it if I manually proxy the instance of the DAO like this:
AnnotationMatchingPointcut pc = new AnnotationMatchingPointcut(null, MonitorTimer.class);
Advisor advisor = new DefaultPointcutAdvisor(pc, new MonitorTimerAdvice());
ProxyFactory pf = new ProxyFactory();
pf.setTarget( sampleDao );
pf.addAdvisor(advisor);
RestSampleDao proxy = (RestSampleDao) pf.getProxy();
mv.addObject( proxy.getUsers() );
But how do I set it up in Spring so that my custom-annotated methods get proxied by this interceptor automatically? I would like to inject the proxied sampleDao instead of the real one. Can that be done without XML configuration?
I think it should be possible to just annotate the methods I want to intercept, and Spring DI would proxy what is necessary.
Or do I have to use AspectJ for that? I would prefer the simplest solution :-)
Thanks a lot for the help!
You don't have to use AspectJ, but you can use AspectJ annotations with Spring (see 7.2 @AspectJ support):
@Aspect
public class AroundExample {
@Around("@annotation(...)")
public Object invoke(ProceedingJoinPoint pjp) throws Throwable {
...
}
}
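Filled in for the question's @MonitorTimer annotation, a minimal sketch (class and method names are illustrative). The aspect must be registered as a bean, and auto-proxying must be enabled, e.g. with @EnableAspectJAutoProxy on a @Configuration class (Spring 3.1+) or <aop:aspectj-autoproxy/> in XML:
@Aspect
@Component
public class MonitorTimerAspect {

    // binds to any method carrying the question's @MonitorTimer annotation
    @Around("@annotation(monitorTimer)")
    public Object invoke(ProceedingJoinPoint pjp, MonitorTimer monitorTimer) throws Throwable {
        long start = System.currentTimeMillis();
        try {
            return pjp.proceed();
        } finally {
            System.out.println("call took " + (System.currentTimeMillis() - start) + " ms");
        }
    }
}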