I am testing my Mule flows and want to make them modular in order to test individual parts. Take the following for example:
<flow name="in" doc:name="in">
<sftp:inbound-endpoint
connector-ref="sftp"
address="sftp://test:test#localhost:${sftp.port}/~/folder" autoDelete="false" doc:name="SFTP">
</sftp:inbound-endpoint>
<vm:outbound-endpoint path="out" />
</flow>
My Mule test then requests the message from the out VM queue to verify that a file is correctly picked up:
MuleMessage message = muleClient.request("vm://out", 10000L);
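In full, the test looks roughly like this (a sketch assuming Mule 3's FunctionalTestCase; the class and config file names are illustrative):

import static org.junit.Assert.assertNotNull;

import org.junit.Test;
import org.mule.api.MuleMessage;
import org.mule.api.client.MuleClient;
import org.mule.tck.junit4.FunctionalTestCase;

public class InFlowTest extends FunctionalTestCase {

    @Override
    protected String getConfigFile() {
        return "in-flow.xml"; // illustrative config file name
    }

    @Test
    public void picksUpFileFromSftp() throws Exception {
        MuleClient client = muleContext.getClient();
        // Wait up to 10 seconds for the SFTP poller to pick up the file
        // and forward it to vm://out
        MuleMessage message = client.request("vm://out", 10000L);
        assertNotNull("no message arrived on vm://out", message);
    }
}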
Is this good practice, or would it be better to use the FunctionalTestComponent to check the event received?
By using VM endpoints instead of the FunctionalTestComponent I don't need to change my flows for testing purposes, which is a plus.
However, I am unsure of the consequences of doing so. I have heard that flow-ref is the preferred way to modularise flows, but that doesn't allow me to pick up the message in my test.
Any advice or best practice appreciated.
Modularizing around request-response VM endpoints has several drawbacks, including the loss of inbound properties and the introduction of an extra hop with a potential performance cost, something flow-ref doesn't have. One-way VM endpoints offer different functionality than flow-ref, so the two can't really be compared.
The problem with private flows or sub-flows that you invoke with flow-ref is that they are hard to invoke from code. It's possible, but hard.
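For reference, here is roughly what that direct invocation looks like in a Mule 3 functional test (a sketch; the flow name is illustrative):

import org.mule.api.MuleEvent;
import org.mule.construct.Flow;

// Inside a FunctionalTestCase subclass:
@Test
public void invokesPrivateFlowDirectly() throws Exception {
    // Look up the private flow by name from the registry
    Flow flow = (Flow) muleContext.getRegistry().lookupFlowConstruct("myPrivateFlow");
    // Build a test event (helper inherited from the Mule test harness)
    MuleEvent event = getTestEvent("test payload");
    // Push the event through the flow and inspect the result
    MuleEvent result = flow.process(event);
    assertNotNull(result);
}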
One workaround I've found is to create test flows with VM inbound endpoints in a test configuration file and use them to inject test messages via flow-ref to private/sub flows. The advantage is that the main configuration files are unaffected.
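For example (a sketch building on the same FunctionalTestCase scaffolding as above; the wrapper flow, queue names, and payload are all illustrative), the test configuration contains a wrapper flow and the test drives it through its VM endpoints:

// The test configuration file would contain a wrapper flow along these lines:
//   <flow name="testWrapper">
//       <vm:inbound-endpoint path="test.in" exchange-pattern="one-way"/>
//       <flow-ref name="myPrivateFlow"/>
//       <vm:outbound-endpoint path="test.out"/>
//   </flow>
@Test
public void injectsTestMessageIntoPrivateFlow() throws Exception {
    MuleClient client = muleContext.getClient();
    // Fire the test message into the wrapper flow, one-way
    client.dispatch("vm://test.in", "test payload", null);
    // Collect whatever the private flow produced
    MuleMessage result = client.request("vm://test.out", 5000L);
    assertNotNull("private flow produced no output", result);
}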
I think my recent blog post can help with your doubts. I wrote down what I reckon are best practices for designing Mule flows, with a heavy focus on the testing side (using the MUnit framework). With MUnit you can easily mock any message processor (including flows and sub-flows): http://poznachowski.blogspot.co.uk/2014/04/munit-testing-mule-practices-and-some.html
I am new to contract testing and am trying to do a PoC using the Karate framework. I know Pact (another contract testing tool), wherein contracts and verification results between consumer and provider projects are managed using the Pact broker.
When it comes to Karate, please advise on what functionality in the Karate framework performs a role similar to the Pact broker's (managing contracts and verification results).
I appreciate your help on this.
Karate does not have an equivalent of the broker. It is possible to achieve contract testing without a broker if the Producer and Consumer have access to the mock and the test. Git is typically the best way to share these artifacts; since they are plain-text files, even email would suffice.
So you don't need to stand up a server, go through all the complications of keeping it running and accessible by both teams, or worry about the security implications if the Producer or Consumer is outside your firewall.
Note that if you genuinely have a case where the Producer or Consumer is not part of your corporate organization, you have a bigger problem to solve, which is to get that team to agree to follow the Consumer Driven Contract flow.
But if you are trying to do CDC where the Producer and Consumer are two teams within the same organization, Karate is more than sufficient; you just need a Git repo. The mock becomes a "deliverable" of the Producer team. The only thing you may miss is the visualization of which teams depend on which service, which IMHO is not a big deal; it is just a pretty picture you can do without. The advantage of Karate is all the complex assertions you can achieve, and that you can continue to write normal tests, as long as the mock is "smart" enough to reply to them.
Skip to 33:30 of this video for an explanation: https://youtu.be/yu3uupBZyxc?t=2013
And read this article for a detailed explanation of what you should expect from a Contract Testing tool: https://www.linkedin.com/pulse/api-contract-testing-visual-guide-peter-thomas/
We have to choose the best way of implementing a RabbitMQ queue.
We have two approaches:
1. Create a queue and binding using @Bean and the Queue class in Spring.
2. Create the queue in the RabbitMQ web console itself.
We need to know which way is better, the programming way or the console way, and why.
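For reference, approach 1 typically looks like this with Spring AMQP (a sketch; the queue, exchange, and routing-key names are illustrative):

import org.springframework.amqp.core.Binding;
import org.springframework.amqp.core.BindingBuilder;
import org.springframework.amqp.core.DirectExchange;
import org.springframework.amqp.core.Queue;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class RabbitConfig {

    // Durable queue; a RabbitAdmin in the context declares it on the broker at startup
    @Bean
    public Queue ordersQueue() {
        return new Queue("orders.queue", true);
    }

    @Bean
    public DirectExchange ordersExchange() {
        return new DirectExchange("orders.exchange");
    }

    // Binds the queue to the exchange with a routing key
    @Bean
    public Binding ordersBinding(Queue ordersQueue, DirectExchange ordersExchange) {
        return BindingBuilder.bind(ordersQueue).to(ordersExchange).with("orders");
    }
}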
IMHO, the better way is using the web console. A queue is infrastructure and will be used by many applications. You should not give applications full control of the infrastructure; it should be maintained by the admin.
Also, please consider the following aspects:
Security
Ease of use
Threats
I have a synchronous web service (WSDL-first JAX-WS service) as the inbound endpoint. I have some business logic in a separate flow, which invokes another web service that is one-way. The problem I am facing is that after the main flow invokes the business-logic flow, it expects a response from it. I read the Mule documentation on this and found that Mule flows take on the behavior of the source endpoint. So in my case the source is a synchronous endpoint, and thus the invoked flow also gets synchronous behavior. I tried to change the flow's processing strategy to asynchronous, but that makes the flow invalid.
Please suggest how to invoke a flow in a fire-and-forget pattern from within another flow.
What you want to use is probably the async scope.
<queued-asynchronous-processing-strategy name="async_processing_strategy" maxThreads="16"/>
<flow name="mainFlow"> <!-- flow name is illustrative -->
    ...
    <!-- Hands the message to the async strategy's thread pool and continues
         immediately, without waiting for verySlowFlow's result -->
    <async processingStrategy="async_processing_strategy">
        <flow-ref name="verySlowFlow"/>
    </async>
    ...
</flow>
It sounds like one solution to your issue might be to use one-way (asynchronous) VM or JMS queues to trigger your second flow.
The first flow would have an outbound endpoint and the second flow an inbound endpoint.
Does that help? If not, post your XML so we can better understand your problem.
You could try a reliability pattern. This will help to decouple the inbound reception from the processing.
https://docs.mulesoft.com/mule-management-console/v/3.7/reliability-patterns
I am looking for a pragmatic solution for integration testing of our Mule-based integration tier.
This article has some excellent pointers about it, but looks a tad outdated. I am reproducing an excellent idea from the article here:
Keeping track of the delivery of messages to external systems: interrogating all the systems that have been contacted with test messages after the suite has run, to ensure they all received what was expected, would be too tedious to realize. How to keep track of these test messages? One option could be to run Mule ESB with its logging level set to DEBUG and analyze the message paths by tracking them with their correlation IDs. This is very possible. I decided to follow a simpler and coarser approach, which would give me enough certitude about what happened to the different messages I had sent. For this, I decided to leverage component routing statistics to ensure that the expected number of messages were routed to the expected endpoints (including error messages to error-processing components). Of course, if two messages get cross-sent to the wrong destinations, the count will not notice that. But this error would be caught anyway, because each destination will complain about the error, hence raising the count of error messages processed.
Using this technique, when I test my integration tier I will not have to stand up all the external systems and can test the tier in isolation, which would be great.
@David Dassot has provided a reference implementation as well; however, I think it was based on Mule 2.x, and hence I cannot find the classes in the Mule 3.x codebase.
Looking around, I did find FlowConstructStatistics, but these are flow-specific statistics, and I am looking for endpoint-specific statistics.
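For reference, flow-level counts can be read like this (a Mule 3 sketch; the class and flow names are illustrative), but nothing equivalent seems to be exposed per endpoint:

import org.mule.api.MuleContext;
import org.mule.api.construct.FlowConstruct;
import org.mule.management.stats.FlowConstructStatistics;

// Reads the processed-event count for a named flow (Mule 3)
public final class FlowStats {

    public static long processedEvents(MuleContext muleContext, String flowName) {
        FlowConstruct flow = muleContext.getRegistry().lookupFlowConstruct(flowName);
        FlowConstructStatistics stats = flow.getStatistics();
        return stats.getProcessedEvents();
    }
}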
I do agree that, as a workaround, we could wrap all outbound endpoints within sub-flows and get this working, but I would like to avoid doing that ...
Any techniques that help query an endpoint for the number of calls made and the payloads passed through would be great!
First take a look at JMX; perhaps what you need is available right there.
Otherwise, if logging is not enough and upgrading to the enterprise edition is not an option for you, give the endpoint-level notifications a try.
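For the notification route, something along these lines counts messages per endpoint (a sketch against the Mule 3 notification API; it assumes ENDPOINT-MESSAGE notifications are enabled in the configuration and the listener is registered via muleContext.registerListener(...)):

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.AtomicInteger;

import org.mule.api.context.notification.EndpointMessageNotificationListener;
import org.mule.context.notification.EndpointMessageNotification;

// Counts how many messages pass through each endpoint
public class EndpointHitCounter
        implements EndpointMessageNotificationListener<EndpointMessageNotification> {

    private final ConcurrentMap<String, AtomicInteger> counts =
            new ConcurrentHashMap<String, AtomicInteger>();

    @Override
    public void onNotification(EndpointMessageNotification notification) {
        // The resource identifier names the endpoint that fired the notification
        String endpoint = notification.getResourceIdentifier();
        counts.putIfAbsent(endpoint, new AtomicInteger());
        counts.get(endpoint).incrementAndGet();
    }

    public int countFor(String endpoint) {
        AtomicInteger count = counts.get(endpoint);
        return count == null ? 0 : count.get();
    }
}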
I have a scenario where I am inserting data from an FTP file into various systems.
Depending on success or failure, an entry should be made in another system using a SOAP call. The other system is maintained entirely for statistical purposes.
My approach was to have two flow-refs, one in case of success and the other in the exception strategy, each of which calls the flow making the SOAP call to the other system.
Is this the right approach? If not, I would like to know if there is any functionality in Mule that can detect the end of the process (running in the background) and call a flow which will internally call the SOAP web service.
Thanks,
Varada
Having separate flows for different tasks is absolutely a good idea. A recommendation from my end is to configure these two flows as private flows with an asynchronous processing strategy. You could instead expose them as flows with VM inbound endpoints, but that approach is not recommended for your requirement, as VM endpoints serialize/deserialize the message, which you don't need, of course.