Testing a state machine saga in MassTransit with a Kafka rider

The MassTransit documentation shows an example of testing a state machine saga with a bus, but there is no example of doing it with a Kafka rider. Should it be tested the same way, or differently?

There are no test harnesses for riders, only for the supported transports.
You can look at the state machine unit tests for Kafka in the unit tests project.

Related

Mule Flow only for Integration Testing

I have a Mule 4 application driven by a Scheduler that runs every 30 minutes. I added an HTTP Listener in the file test-listener.xml just to invoke the flow while building the integration tests.
I want test-listener.xml to be deployed only to non-production environments. How can I achieve this in the Mule 4.3.0 Runtime?
Thanks
Adding an HTTP Listener to test the flow is not a good practice. If you are interested in just testing the flow, use MUnit to implement tests for it. If you are interested in testing the scheduler execution, you can use this method with MUnit: https://docs.mulesoft.com/munit/2.3/test-flow-with-scheduler-cookbook

How to disable Kafka in Quarkus test?

My application uses Kafka and Hibernate. For Kafka a running docker image is required. If I run a Quarkus test for Hibernate, the test fails if Kafka is not running. In my IDE this is not a problem, but in Jenkins there is no Kafka server available and the test fails because it cannot resolve the Kafka server.
Is it possible to disable Kafka in Quarkus tests?
You could make use of MicroProfile's Emitter for sending messages to a Kafka channel:
@Inject
@Channel("hello")
Emitter<String> emitter;
By default, when there is no Kafka behind that emitter, it will create an in-memory message bus, so the Docker image for Kafka is not required.
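As a sketch of wiring this up explicitly (assuming the channel name "hello" from the snippet above, and that the smallrye-reactive-messaging-in-memory dependency is on the test classpath), the channel can be switched to the in-memory connector for the test profile in application.properties:

```
# Test profile only: replace the Kafka connector for the "hello" channel
# with SmallRye's in-memory connector, so no broker is needed.
%test.mp.messaging.outgoing.hello.connector=smallrye-in-memory
```

This keeps the production Kafka configuration untouched, since the %test. prefix is only applied when Quarkus runs under the test profile.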
Another solution would be to use KafkaContainer from TestContainers to create a throwaway Kafka container for each test run.
You could check both examples in Alex Soto's repository.
Look at CheckoutProcess class and corresponding component test and integration test.

Is it possible to use RPC while writing test cases for flows in Corda?

Not able to access the RPC client while writing test cases for flows in Corda.
To test node RPC calls, you need to create integration tests using the node driver. See an example here: https://github.com/corda/cordapp-example/blob/release-V3/java-source/src/integrationTest/java/com/example/DriverBasedTests.java.

Does Apache Apollo have failover support?

I'm looking to use a message queue system for an ongoing project, which now is relying on a custom (and brittle) message subsystem to interconnect multiple applications. Both the pub/sub and queue patterns are heavily used in my system.
Apache Apollo is one of the message queue systems I'm taking into account, but I don't find information about how can I handle (for instance) an Apollo server failure.
Is there a way to provide failover support in Apollo?
No, as of now this has not been resolved. Apollo is a very good broker, but it lacks some production-critical features such as failover. Apollo was an attempt to build a core for the next generation of ActiveMQ; however, its development is no longer active.
Have you considered other brokers like Apache Artemis? It's basically a new attempt to remake ActiveMQ using code from HornetQ, ActiveMQ, and Apollo. Development is very active at the moment, and there is support for failover, etc.

Deploying java client, RabbitMQ, and Celery to server

I have a Java API on my server, and I want it to create tasks and add them to Celery via RabbitMQ. I followed this tutorial, http://www.rabbitmq.com/tutorials/tutorial-two-python.html, using Java for the client (send.java) and Python for the receiver (receive.py). In receive.py, where the callback method is invoked, I call a method annotated with @celery.task so that the task is added to Celery.
I'm wondering how all of this is deployed on a server, though; specifically, why there is a receive.py file. Is receive.py a process that must run continually on the server? Is there a way to configure RabbitMQ so that it automatically routes Java client tasks to Celery?
Thanks!
RabbitMQ is just a message queue: producers put messages in, and consumers get them on demand. You can only restrict access to specific queues via RabbitMQ's auth options.
As for deployment: yes, receive.py needs to run continuously, and that is Celery's job. See the Workers Guide for information on running a worker.
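In practice, the continuously running consumer is a Celery worker rather than a hand-rolled receive.py loop. A minimal deployment sketch, assuming the Celery app and its @celery.task functions live in a module named tasks (a hypothetical name):

```
# Start a Celery worker that consumes tasks from the RabbitMQ broker.
# "-A tasks" points at the module defining the Celery app (hypothetical name).
celery -A tasks worker --loglevel=info
```

In production this command is typically kept alive by a process supervisor (e.g. systemd or supervisord) rather than run by hand.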