Mule ESB CE supports object stores, which can be set to persistent. From here I also know that the stores are application-specific if defined in the application XMLs.
Unfortunately, I was unable to find any information on whether data will be lost when:
1. Mule is restarted
2. Mule is killed
3. The application is re-deployed
I'm almost sure that (1) has no impact on the data, and I suppose the object store also survives a kill. What about the application being redeployed? I think there are two scenarios here:
1. Object store is defined on app-level
2. Object store is defined on domain-level
Am I right that in the first scenario the data will be lost, while in the second it will be retained across application redeploys?
I'm working on Mule 3.5.0 CE.
Any help & references will be appreciated.
For 1, 2 and 3, the data should be persistent and available after a restart or redeploy. The only caveat is changing the application name: object stores use the application name as part of the persisted storage information, so if you want the data to be available across redeploys, the newly deployed application must have the same name as the previous one.
In no case will data be lost from a queue until it has been tried (depending on configuration) and sent to the DLQ.
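In case it helps to see this concretely, below is a minimal Java sketch using the Mule 3.x object store API (you are on 3.5.0 CE). The store name and key are made up for illustration; passing true to getObjectStore() asks for a persistent, application-scoped store whose contents survive restarts as long as the application name stays the same.

```java
import java.io.Serializable;

import org.mule.api.MuleEventContext;
import org.mule.api.lifecycle.Callable;
import org.mule.api.store.ObjectStore;
import org.mule.api.store.ObjectStoreManager;

/**
 * Sketch: obtain a persistent, application-scoped object store from a
 * Java component in Mule 3.x. Store name and key are illustrative only.
 */
public class PersistentStoreComponent implements Callable {

    @Override
    public Object onCall(MuleEventContext eventContext) throws Exception {
        ObjectStoreManager osManager = eventContext.getMuleContext()
                .getRegistry().lookupObject(ObjectStoreManager.class);

        // true = persistent store; entries are written to disk under the
        // runtime's working directory, keyed by the application name.
        ObjectStore<Serializable> store =
                osManager.getObjectStore("myPersistentStore", true);

        Serializable key = "lastProcessedId";
        if (store.contains(key)) {
            return store.retrieve(key); // value left over from a previous run
        }
        store.store(key, (Serializable) eventContext.getMessage().getPayload());
        return eventContext.getMessage().getPayload();
    }
}
```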
Related
Goal:
I have two SQL Server databases (DB-A and DB-B) located on two different servers on the same network.
DB-A has a table T1, and I want to copy data from DB-A's table T1 (source) to DB-B's table T2 (destination). This DB sync should take place whenever a record in T1 is added, updated, or deleted.
Please note: all direct DB-to-DB data sync options are out of consideration; I must use a MuleSoft API for this job.
Background:
I am new to MuleSoft and its products. I am told the MuleSoft platform can help with building and managing APIs.
I explored the web for MuleSoft offerings, and there are many articles (linked below) suggesting that MuleSoft itself can read from one DB table and write to another DB table (using DB connectors, etc.).
Questions:
Is it possible for MuleSoft itself to get this data sync job done without us writing our own API invoker or API consumer (to trigger the API on one end, or to receive data from the API on the other end and write it to the DB table)?
What are the key steps to get this data transfer working? Any reference showing the step-by-step journey to achieve the goal would be a huge help.
Links:
https://help.mulesoft.com/s/question/0D52T00004mXXGDSA4/copy-data-from-one-oracle-table-to-another-oracle-table
https://help.mulesoft.com/s/question/0D52T00004mXStnSAG/select-insert-data-from-one-database-to-another
https://help.mulesoft.com/s/question/0D72T000003rpJJSAY/detail
First, let's clarify the terminology, since the question mixes several concepts in a confusing way. MuleSoft is a company with several products that may apply here. A "MuleSoft API" should be understood as an API created by MuleSoft; since you are clearly talking about APIs created by you or your organization, that would be an incorrect description. What you are really talking about are Mule applications, which are applications deployed and executed in a Mule runtime. Mule applications may implement your APIs, or may implement integrations. After all, Mule was originally an ESB product used to integrate other systems, before REST APIs were a thing. You may deploy Mule applications to Anypoint Platform, specifically to the CloudHub component of the platform, or to an on-prem instance of the Mule runtime.
In any case, a Mule application is perfectly capable of implementing APIs, integrations, or both. There is no need for it to implement an API or call another API if that is not what you want. You do need to trigger the flow somehow: by reading directly from the database to find new rows, with a scheduler that executes a query at a given time, with an HTTP request, or even by having an API listening for requests that trigger the flow.
As an example, the application can use the <db:listener> source of the Database connector to start the flow by fetching rows. You need to configure a watermark column so that only new rows are detected. See the documentation at https://docs.mulesoft.com/db-connector/1.13/database-documentation#listener for details.
Alternatively you can trigger the flow in another way and just use a select operation.
After that, use DataWeave to transform the records as needed, and then use the insert or update operations.
There are examples in the documentation that can help you get started. If you are not familiar with Mule, you should start by reading the documentation and doing some training until you grasp the concepts.
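If it helps to see the moving parts, here is a rough plain-JDBC sketch (outside Mule) of the watermark pattern that a <db:listener> + DataWeave + insert/update flow automates. The connection strings, credentials, and the T1/T2/last_modified names are placeholders, and deletes would need separate handling (for example a soft-delete flag or change tracking):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Timestamp;

/**
 * Rough sketch of the watermark-based sync that a Mule flow built with
 * <db:listener>, DataWeave and the Database connector's insert/update
 * operations would implement. All connection strings, table names and
 * columns (T1, T2, last_modified) are placeholders.
 */
public class TableSyncSketch {

    public static void main(String[] args) throws Exception {
        // Last watermark seen; in Mule the listener keeps this between polls.
        Timestamp watermark = Timestamp.valueOf("1970-01-01 00:00:00");

        try (Connection source = DriverManager.getConnection(
                     "jdbc:sqlserver://db-a;databaseName=DBA", "user", "pass");
             Connection target = DriverManager.getConnection(
                     "jdbc:sqlserver://db-b;databaseName=DBB", "user", "pass");
             PreparedStatement select = source.prepareStatement(
                     "SELECT id, name, last_modified FROM T1 WHERE last_modified > ?");
             PreparedStatement update = target.prepareStatement(
                     "UPDATE T2 SET name = ? WHERE id = ?");
             PreparedStatement insert = target.prepareStatement(
                     "INSERT INTO T2 (id, name) VALUES (?, ?)")) {

            // 1. Poll the source for rows changed since the last watermark.
            select.setTimestamp(1, watermark);
            try (ResultSet rows = select.executeQuery()) {
                while (rows.next()) {
                    // 2. "Transform" step (DataWeave in Mule); here a plain copy.
                    int id = rows.getInt("id");
                    String name = rows.getString("name");

                    // 3. Upsert into the target: UPDATE first, INSERT if new.
                    update.setString(1, name);
                    update.setInt(2, id);
                    if (update.executeUpdate() == 0) {
                        insert.setInt(1, id);
                        insert.setString(2, name);
                        insert.executeUpdate();
                    }

                    // 4. Advance the watermark so already-seen rows are skipped.
                    Timestamp modified = rows.getTimestamp("last_modified");
                    if (modified.after(watermark)) {
                        watermark = modified;
                    }
                }
            }
        }
    }
}
```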
I am using MFP 8.0, and there is a requirement to implement caching at the adapter level.
Whenever the MFP server starts, we want to load the whole database into the cache and keep it there until the server restarts again.
Then, whenever a user hits a transaction or adapter procedure that calls the database, it should read from the cache instead of calling the database.
Adapters support read-only and transactional access modes to back-end systems.
Adapters are Maven projects that contain server-side code implemented in either Java or JavaScript. Adapters are used to perform any necessary server-side logic, and to transfer and retrieve information from back-end systems to client applications and cloud services.
JSONStore is an optional client-side API providing a lightweight, document-oriented storage system. JSONStore enables persistent storage of JSON documents. Documents in an application are available in JSONStore even when the device that is running the application is offline. This persistent, always-available storage can be useful to give users access to documents when, for example, there is no network connection available in the device.
From your description, assuming you are talking about a custom DB where your data is stored, you need to implement the caching logic yourself.
Adapters have two classes, <AdapterName>Application.java and <AdapterName>Resource.java. <AdapterName>Application.java contains the lifecycle methods init() and destroy().
You should put your custom code for loading data from your DB into the cache in the init() method, and take care of clearing it in destroy().
Then, during transactional access (which hits <AdapterName>Resource.java), you read from the cache you have already populated.
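A minimal sketch of that idea, assuming a Java adapter backed by JDBC: the class name, JDBC URL, credentials and table/column names below are placeholders, and the base class and lifecycle signatures should match the <AdapterName>Application.java skeleton that MFP generates for you.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import com.ibm.mfp.adapter.api.MFPJAXRSApplication;

/**
 * Sketch: load a DB table into an in-memory cache during adapter
 * initialization (MFP 8.0 Java adapter). JDBC URL, credentials and
 * table/column names are placeholders.
 */
public class MyAdapterApplication extends MFPJAXRSApplication {

    // Shared cache read by <AdapterName>Resource.java during requests.
    public static final Map<String, String> CACHE = new ConcurrentHashMap<>();

    @Override
    protected void init() throws Exception {
        // Load everything once at adapter start-up.
        try (Connection conn = DriverManager.getConnection(
                     "jdbc:mysql://dbhost:3306/mydb", "user", "pass");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT id, value FROM my_table")) {
            while (rs.next()) {
                CACHE.put(rs.getString("id"), rs.getString("value"));
            }
        }
    }

    @Override
    protected void destroy() throws Exception {
        // Release the memory when the adapter is undeployed/stopped.
        CACHE.clear();
    }

    @Override
    protected String getPackageToScan() {
        return getClass().getPackage().getName();
    }
}
```

The <AdapterName>Resource.java methods can then read MyAdapterApplication.CACHE instead of querying the database on every request.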
Your requirement, however, may not be ideal for heavily loaded systems. You need to consider that:
a) Adapter initialization is delayed, and any wrongly written code can break it. An adapter isn't available to service requests until it has been initialized. In a clustered environment, the adapter load on all cluster members will be delayed depending on the amount of data you are loading. Any client request intended for this adapter will get a runtime exception until initialization is complete.
b) Holding the cache in memory means that much heap space is used up. If your DB keeps growing, this adversely affects adapter initialization and heap usage.
c) You are in charge of keeping the data up to date and cleaning it up after use.
To summarize: while it is possible, it is not recommended. It may work for a very small data set, but it cannot scale well. Adapters are designed to provide transactional access to data and back-end systems, and you should use them the way they were designed.
I have a specific requirement wherein Mule pulls information from a provider system and sends it to another system. There is a series of such asynchronous calls, and we have to correlate each message to a specific user session. Can someone share their insight on how to maintain session state in Mule for asynchronous calls? One approach I thought of is to store it in a DB, but that would cause a performance issue. Any thoughts would be highly appreciated.
You can try using object stores for that: each user session can be stored in and retrieved from the store by a unique id. They can be in-memory or physically persisted (depending on your requirements). Check the Object Store connector to easily get and store objects from the stores.
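As a rough illustration using the Mule 3.x Java API (store name, TTL values and the session map below are all made up), each asynchronous leg can look up its session by correlation id in a shared store, created here with a TTL so abandoned sessions eventually expire:

```java
import java.io.Serializable;
import java.util.HashMap;

import org.mule.api.MuleContext;
import org.mule.api.store.ObjectStore;
import org.mule.api.store.ObjectStoreManager;

/**
 * Sketch: correlate asynchronous messages to a user session through a
 * shared object store keyed by correlation id. Store name, TTL and the
 * HashMap "session" payload are illustrative; persistence (second
 * argument) is optional.
 */
public class SessionCorrelator {

    private final ObjectStore<HashMap<String, Serializable>> sessions;

    public SessionCorrelator(MuleContext muleContext) throws Exception {
        ObjectStoreManager osm =
                muleContext.getRegistry().lookupObject(ObjectStoreManager.class);
        // maxEntries=1000, entryTTL=30 min, expirationInterval=1 min (millis)
        sessions = osm.getObjectStore("userSessions", true, 1000, 1_800_000, 60_000);
    }

    public HashMap<String, Serializable> getOrCreate(String correlationId) throws Exception {
        if (sessions.contains(correlationId)) {
            return sessions.retrieve(correlationId);
        }
        HashMap<String, Serializable> session = new HashMap<>();
        sessions.store(correlationId, session);
        return session;
    }
}
```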
I am working on a Mule application that I have deployed to Mule servers on two different physical machines. The servers are bound together to form a cluster.
In clustering mode, the servers are said to share a common distributed memory, so that if one machine goes down, the other machine takes over the tasks of the first.
Is there any way to configure the size of this common distributed memory that the cluster leverages?
As traffic grows and more applications are added, I guess the memory threshold for the cluster will need to be raised.
Or, if not, do we ever need to modify the amount of memory that a Mule cluster uses at all?
Please help me out.
Thanks
In clustered scenarios, all object stores are replaced with clustered object stores. Clustered object stores use the shared memory grid created by the clustering code to persist information (meaning there is no file-system-level persistence). In case of an outage on one node, the other nodes in the cluster remain active and keep the object store information in the shared memory grid, which makes persistence in the file system unnecessary.
Additionally, since object stores use the name of the application as part of the storage information, if you want to keep them across redeployments, the newly deployed application must have the same name as the previous one. See the scenarios below as a reference:
Scenario a:
1. Current application name: test
2. New application name: test
- Object store values will be preserved from 1 to 2.
Scenario b:
1. Current application name: test-v1
2. New application name: test-v2
- Object store values will not be preserved from 1 to 2.
Note: prior to Mule 3.5.0, the in-memory store was the default. As of Mule 3.5.0, the persistent store is the default.
Mule clusters have an active-active facility, so you need not worry about which server has to do the work: when one server is down, another will take over. The distributed memory is part of normal JVM memory consumption.
I have a few small tables saved in Table Storage which I use only for reading.
When my service starts, I'd like to read all the tables, save the data in a data structure (e.g. a List), and read from that List from then on.
Is there a way to do that, or must I read from the Table Storage each time I need data?
If there is a way, where should the List be declared, and where should it be initialized?
Thanks.
Azure cache may be the best route, but there is an obvious cost.
Could you declare the WCF service as a singleton and store the data as a static property?
You could use the Windows Azure Cache service to store the data. See http://www.windowsazure.com/en-us/home/tour/caching/
If your list is not too big, you could use the Windows Azure caching component http://www.windowsazure.com/en-us/home/tour/caching/ . During the initialization process of your service, read the information from your tables and store it there. You also ask where the list should be declared and initialized: are you hosting your service on Windows Azure? Is this a web service running on IIS, or a Windows service? Are you using WCF to expose your service?
I see others are suggesting static properties (a good choice) and Azure Cache. In any case, it is good to cache the data if it is not updated often, rather than reading it from Table Storage every time.
I want to give my two cents:
I would not use Azure Cache if the data is small enough (1 MB is small enough for me); a static property would do the work. But there is also something new in .NET 4.0 that is obviously missing from most programmers' view: the System.Runtime.Caching namespace. I haven't personally used it yet, but it seems to be good for small local caches. You could use the MemoryCache object and store your data in memory. And, of course, program against it like any other type of cache: in the getter of a property, check if the data exists in the cache. If it exists, return it; if it does not, retrieve it from the tables, store it in the cache, and then return it.
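That is the standard cache-aside pattern. Sketched here in Java with a plain in-memory map (the .NET MemoryCache version follows the same steps; all names below, including loadFromTableStorage, are made up for illustration):

```java
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

/**
 * Sketch of the cache-aside pattern described above: check the cache,
 * return the cached value if present, otherwise load from the backing
 * store, cache it, and return it. loadFromTableStorage() is a placeholder
 * for the actual Table Storage read.
 */
public class ReferenceDataCache {

    private final ConcurrentMap<String, List<String>> cache = new ConcurrentHashMap<>();

    public List<String> getTable(String tableName) {
        // Loads and caches the table on first access; returns the cached
        // copy on every subsequent call.
        return cache.computeIfAbsent(tableName, this::loadFromTableStorage);
    }

    private List<String> loadFromTableStorage(String tableName) {
        // Placeholder: query the read-only table and materialize it as a list.
        return Arrays.asList("row1", "row2");
    }
}
```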