I have a scenario where I am inserting data from an FTP file into various systems.
Depending on success or failure, an entry should be made in another system using a SOAP call. The other system is maintained entirely for statistical purposes.
My approach was to have two flow-refs, one in the success path and the other in the exception strategy, each calling the flow that makes the SOAP call to the other system.
Is this the right approach? If not, I would like to know if there is any functionality in Mule which can detect the end of the process (running in the background) and call a flow which will internally call the SOAP web service.
Thanks,
Varada
Having separate flows for different tasks is definitely a good idea. My recommendation would be to configure these two flows as private flows with an asynchronous processingStrategy, so the statistics call never blocks the main flow. You could instead expose them as flows with VM inbound endpoints, but that approach is not recommended for your requirement, because VM endpoints create a transport barrier and serialize/de-serialize the message, which you don't need here.
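For illustration only, here is a minimal sketch of what that could look like in Mule 3 XML; the flow names and the placeholder for the statistics SOAP call are hypothetical and would need to be adapted to your project:
<flow name="ftpToSystemsFlow">
    <ftp:inbound-endpoint host="ftp.example.com" port="21" path="/incoming" user="user" password="pass" doc:name="FTP"/>
    <!-- processors that insert the data into the various target systems go here -->
    <flow-ref name="notifyStatsSuccess" doc:name="Record success"/>
    <catch-exception-strategy>
        <flow-ref name="notifyStatsFailure" doc:name="Record failure"/>
    </catch-exception-strategy>
</flow>
<!-- private flows with asynchronous processing, so the statistics call never blocks the main flow -->
<flow name="notifyStatsSuccess" processingStrategy="asynchronous">
    <!-- SOAP call (e.g. a Web Service Consumer) to the statistics system goes here -->
    <logger message="Recorded success in the statistics system" level="INFO"/>
</flow>
<flow name="notifyStatsFailure" processingStrategy="asynchronous">
    <!-- SOAP call to the statistics system goes here -->
    <logger message="Recorded failure in the statistics system" level="WARN"/>
</flow>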
I am developing a Mule application where I have to take orders from one system (System-1) and send them to another system (System-2) through SOAP, which takes care of creating orders, invoices, etc. The response from System-2 is routed back to System-1 with a success or failure status. What would be the better approach for referencing the second flow: a VM endpoint or a flow reference? The volume could be around 100 orders per hour. Also, for both cases, what would be the ideal worker size?
If System-2 is to be called as a SOAP web service, you should use the Web Service Consumer component or an HTTP Requester component.
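As a hedged sketch only (the WSDL location, service, port and operation names are made up), a Web Service Consumer configuration for System-2 in Mule 3 could look roughly like this:
<ws:consumer-config name="System2_WS"
    wsdlLocation="http://system2.example.com/orders?wsdl"
    service="OrderService" port="OrderPort"
    serviceAddress="http://system2.example.com/orders"/>
<flow name="sendOrderToSystem2">
    <ws:consumer config-ref="System2_WS" operation="createOrder" doc:name="Create order in System-2"/>
</flow>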
Let's understand the difference between flow-ref and VM.
Both flow-ref and VM can be used to call another flow within the same Mule application.
The major difference is that VM uses an in-memory queue to reach the other flow and creates a transport barrier, so flow variables and inbound properties are not propagated across. Use of a VM endpoint should therefore be considered only when creating that transport barrier is actually necessary.
If that (creating a transport barrier) is not a requirement, using flow-ref is recommended.
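To make the difference concrete, here is a minimal, hypothetical sketch (flow and variable names are made up) showing that a flow variable survives a flow-ref but not a VM hop:
<!-- flow-ref: no transport barrier, flowVars and inbound properties survive the call -->
<flow name="mainWithFlowRef">
    <set-variable variableName="orderId" value="12345"/>
    <flow-ref name="processOrder"/>
</flow>
<sub-flow name="processOrder">
    <logger message="orderId is still available: #[flowVars.orderId]" level="INFO"/>
</sub-flow>
<!-- VM: in-memory queue and a transport barrier, flowVars do not cross it -->
<flow name="mainWithVm">
    <set-variable variableName="orderId" value="12345"/>
    <vm:outbound-endpoint path="process.order" exchange-pattern="one-way"/>
</flow>
<flow name="processOrderViaVm">
    <vm:inbound-endpoint path="process.order" exchange-pattern="one-way"/>
    <logger message="flowVars.orderId is lost here: #[flowVars.orderId]" level="INFO"/>
</flow>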
Simplified... We are using NServiceBus for updating our storage.
In our sagas we first read data from our storage, update it, and put it back again. The NServiceBus instance is self-hosted in a Windows service. Calls to storage are separated into their own assembly ('assembly1').
Now we will also need synchronous reads from our storage through WCF. In some cases these will be the same reads that are needed when updating in the sagas.
My own opinion on this is fairly clear, but maybe I am wrong, and that is why I am asking this question...
Should we set up a separate WCF service that is using a copy of 'assembly1'?
Or, should the WCF instance host NServiceBus?
Or, is there even a better way to do it?
Right now there are, in a way, two endpoints: WCF for the synchronous calls and the Windows service (which already exists) that hosts NServiceBus.
Based on your question and comments, I see no reason to separate into two distinct endpoints. It sounds like you are describing a single logical service, and my default position would be to host each logical service in a single process. This is usually the simplest approach, as it makes deployment and troubleshooting easier.
Edit
Not sure if this is helpful, but my current client runs NSB in an IIS-hosted WCF endpoint. So commands are handled via NSB messages, while queries are still exposed via WCF. To date we have had no problems hosting the two together in a single process.
Generally speaking, a saga should only update its own state (the Data property) and send messages to other endpoints. It should not update other state or make RPC calls (like to WCF).
Before giving more specific recommendations, it would be best to understand more about the specific responsibilities of your saga and the data being updated by 'assembly1'.
I am testing my Mule flows and want to make them modular in order to test individual parts. Take the following for example:
<flow name="in" doc:name="in">
    <sftp:inbound-endpoint connector-ref="sftp"
        address="sftp://test:test@localhost:${sftp.port}/~/folder"
        autoDelete="false" doc:name="SFTP"/>
    <vm:outbound-endpoint path="out" />
</flow>
My Mule test then requests the message off the out VM queue to test that a file is correctly picked up etc:
MuleMessage message = muleClient.request("vm://out", 10000L);
Is this good practice, or would it be better to use the FunctionalTestComponent to check the event received?
By using VM instead of the FunctionalTestComponent I don't need to change my flows for testing purposes, which is a plus.
However, by doing so, I am unsure of the consequences. I heard flow-ref was the preferred way to modularise flows, but that doesn't allow me to pick up the message in my test etc.
Any advice or best practice appreciated.
Modularizing around request-response VM endpoints has several drawbacks, including the loss of inbound properties and the introduction of an extra hop with a potential performance cost, something flow-ref doesn't have. One-way VM endpoints offer different functionality from flow-ref, so the two can't really be compared.
The problem with private flows or sub-flows that you invoke with flow-refs is that they are hard, though not impossible, to invoke directly from test code.
One workaround I've found is to create test flows with VM inbound endpoints in a test configuration file and use them to inject test messages via flow-ref to private/sub flows. The advantage is that the main configuration files are unaffected.
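For example, such a test-only configuration file (all names here are hypothetical) could be as small as:
<!-- test-flows.xml, included only in the test configuration -->
<flow name="testEntry">
    <vm:inbound-endpoint path="test.in" exchange-pattern="request-response"/>
    <flow-ref name="privateFlowUnderTest"/>
</flow>
The test can then call muleClient.send("vm://test.in", testPayload, null) and assert on the reply, while the production configuration files stay untouched.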
I think that my recent blog post can help with your doubts. I wrote down what I reckon to be best practices for Mule flow design, with a heavy focus on the testing side (using the MUnit framework). With it you can easily mock any message processor, including flows and sub-flows: http://poznachowski.blogspot.co.uk/2014/04/munit-testing-mule-practices-and-some.html
What is the most sensible approach to integrating NServiceBus Sagas with REST APIs?
The scenario is as follows,
We have a load balanced REST API. Depending on the load we can add more nodes.
REST API is a wrapper around a DomainServices API. This means the API can be consumed directly.
We would like to use Sagas for workflow and implement NServiceBus Distributor to scale-out.
The question is: if we use the REST API from the sagas, the actual processing happens in the API farm, which in a way defeats the purpose of implementing the distributor pattern.
On the other hand, using the DomainServices API directly from the sagas allows processing locally within the worker nodes. With this approach we will have to maintain the API assemblies in multiple locations, but the throughput could be higher.
I am trying to understand the best approach. Personally, I'd prefer to consume the API (if readily available), but this could introduce chattiness to the system and could take longer to complete compared to in-process calls.
A typical sequence could be similar to publishing an online advertisement,
Advertiser submits a new advertisement request via a web application.
The web application invokes the relevant API endpoint and sends a command message.
The command message initiates a new publish-advertisement Saga instance.
The Saga sends a command to validate caller permissions (in-process/out-of-process API call).
The Saga sends a command to validate the advertisement data (in-process/out-of-process API call).
The Saga sends a command to the fraud service (third-party service).
Once the content and fraud verifications are successful, the Saga sends a command to the billing system.
The Saga invokes an API call to save the ad details (in-process/out-of-process API call).
And this goes on until the advertisement expires; there are a number of retry and failure-condition paths.
After a number of design iterations we came up with the following guidelines,
Treat the REST API layer as the integration platform.
Assume API endpoints are capable of abstracting fairly complex micro work-flows. Micro work-flows are operations that execute in a single burst (not interruptible) and complete within a short time span (<1 second).
Assume the API farm is capable of serving many concurrent requests and can easily be scaled out.
Favor synchronous invocations over asynchronous message-based invocations when the target operation is fairly straightforward.
When asynchronous processing is required, use a single message handler and invoke the API from the handler. This delegates the work to the API farm and also eliminates the need for a distributor and extra hardware resources.
Avoid sagas unless the business work-flow contains multiple transactions, compensation logic and resumes. Tests reveal that sagas do not perform well under load.
Avoid consuming DomainServices directly from a message handler. That would do the work locally and also introduce a deployment hassle by distributing business logic.
Happy to hear your thoughts.
You are right on with identifying that you will need Sagas to manage workflow. I'm willing to bet that your Domain hooks up to a common database. If that is true, then it will be faster to use your Domain directly and remove the serialization/network overhead. Going through the REST API, you would also lose the ability to easily manage the transactions at the database level.
Assuming you are directly calling your Domain, performance becomes a question of how the Domain performs. You may take steps to optimize the database, drive down distributed transaction costs, shard the data, etc. You may end up using the Distributor to have multiple saga-processing nodes, but it sounds like you have some more testing to do once a design is chosen.
Generally speaking, we use REST APIs to model the commands as resources (via POST) to allow interaction with NSB from clients who don't have direct access to messaging. This is a potential solution for getting things onto NSB from your web app.
I have an ASP.NET site where I call my WCF service using jQuery.
Sometimes the WCF service needs the ability to ask the user to confirm something and, depending on the user's choice, either continue or cancel the operation.
Does a callback help me here?
Any other idea is appreciated!
Callback contracts won't work in this scenario, since they're mostly for duplex communication, and there's no duplex on WebHttpBinding (there's a solution for a polling duplex scenario in Silverlight, and I've seen one implementation in JavaScript which uses it, but that's likely way too complex for your scenario).
What you can do is split the operation in two. The first one would "start" the operation and return an identifier, plus some additional information telling the client whether the operation will simply complete or whether additional input is needed. In the former case, the client can then call the second operation, passing the identifier, to get the result. In the latter case, the client would again make the call, but pass the additional information required for the operation to complete (or to be cancelled).
Your architecture is wrong. Why:
The service cannot call back to the client's browser. A real callback over HTTP works as reverse communication: the client hosts a service that is called by the server. The client in your case is a browser; how would you host a service in the browser, and how would you open a port for incoming communication to the browser? Solutions offering "callback-like" functionality are based on polling the service. You can use a JavaScript timer and implement your own polling mechanism.
The client browser cannot initiate a distributed transaction, so you cannot start a transaction on the client. You also cannot use a server-side transaction over multiple operations, because that requires per-session instancing, which in turn requires a sessionful channel.
WCF JSON/REST services don't support HTTP callback (duplex communication).
WCF JSON/REST services don't build a polling solution for you - you must do it yourself
WCF JSON/REST services don't support distributed transactions
WCF JSON/REST services don't support sessionful channels / server side sessions
That was the technical aspect of your solution.
Your solution looks more like a scenario for a Workflow service, where you start the workflow and it runs to the point where it waits for user input. Until the input is provided, the workflow can be persisted to the database, so the user can generally provide the input even several days later. When the input is provided, the service continues. Starting the service and providing each needed input is modelled as a separate operation called from the client. This is not a usual scenario for something called from JavaScript, but it should be possible because you can write a custom WebHttpContextBinding to support workflows. It will still not achieve a situation where the user is automatically asked for something - it is your responsibility to determine when the popup should appear and to handle it.
If you leave the standard WCF world, you can check out solutions like Comet, which provide AJAX push/callbacks.