Is there any known issue with the Scatter-Gather scope in Mule Runtime 4.2.2 (on-prem)? I can see that a few of my requests do not move past the scatter-gather component, with no error or exception. Inside the scatter-gather I invoke a SOAP service, a REST service, and consume a file. It does not happen for all requests, only for a few, and it causes a timeout for those requests. I have loggers before and after the scatter-gather, but for the stuck requests they do not show any activity. At the same time, other requests are processed successfully.
I am just using the basic configuration of the scatter-gather component.
There are at least six fixed scatter-gather issues mentioned in the Mule 4.3.0 release notes, which means those issues were reported against earlier versions. You can check them by searching for the issue ID (MULE-NNNNN) in the Mule JIRA (https://www.mulesoft.org/jira) to see if any affects version 4.2.2.
You also have to consider the possibility that the issue is related to something else, like threading.
It would be a good idea to test with Mule 4.3.0 to see if you notice any improvements.
I am performing XML-to-JSON transformations using the Transform Message component. I created a Mule plugin for the transformation code and added it as a dependency to my application. When I deploy the application in Anypoint Studio (4.3.0) it works as expected, i.e. I get the full payload transformed to JSON. But when I deploy the same application on-prem, some fields of the input XML are missing from the output JSON. In the on-prem case, I send the XML payload with JMS (1.7.1) Publish to a queue where my application listens with JMS On New Message, transform the XML to JSON using the transformations plugin (added as a dependency), and publish the result via JMS Publish to a queue where another API is listening.
I observed that when I divide parts of the DataWeave code into modules, import them into a main DWL file, and deploy on-prem, the fields are missing. But when I keep all the modules' code in the same DWL file, I get all the fields.
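For reference, a minimal sketch of the kind of split I mean (the module name, function, and fields are made-up examples, not my real code):

// src/main/resources/modules/Utils.dwl (the module file)
%dw 2.0
fun formatName(first, last) = upper(first) ++ " " ++ last

// main transformation importing the module
%dw 2.0
import * from modules::Utils
output application/json
---
{ fullName: formatName(payload.person.firstName, payload.person.lastName) }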
Please help me with this.
Issue resolved. There was a difference between the Studio runtime and the on-prem runtime; when I patched the on-prem runtime with the latest update, the issue was resolved.
Hi people! I am migrating to the Mule 4 Kernel edition.
I'm stuck at this point: the requirement is to read a file from FTP and then process it. In the old version it was just a few components:
1. quartz,
2. transformer
3. transformer
4. queue
Can somebody help me migrate this to Mule 4 Kernel?
How do I do this? How do I put the file content as a string into a queue, like in the older version? It would be nice if we could keep this about the Mule Kernel edition. I'm a new member of this community and of Mule developers, so please don't hate me.
In the next step I'm going to split this file, but I know there are no splitters in Kernel anymore, so I have to use For Each, right?
Now I've got:
1. HTTP Listener (but it should be a scheduled job; it is an HTTP Listener only for my own tests, and I'm going to change this).
2. FTP Read with the FTP connector
3. ????
<flow name="ftp-to-jms">
    <!-- Quartz inbound endpoint polling the FTP server on a cron schedule
         (cron expression and FTP address are placeholders) -->
    <quartz:inbound-endpoint jobName="ftpPoll" cronExpression="0 0/5 * * * ?">
        <quartz:endpoint-polling-job>
            <quartz:job-endpoint address="ftp://user:password@host/inbox"/>
        </quartz:endpoint-polling-job>
    </quartz:inbound-endpoint>
    <gzip-uncompress-transformer encoding="UTF-8"/>
    <byte-array-to-string-transformer encoding="UTF-8"/>
    <jms:outbound-endpoint queue="xxx"/>
</flow>
You should first read the migration guide and try to migrate each component to its equivalent in Mule 4: https://docs.mulesoft.com/mule-runtime/4.2/index-migration
For the example you mentioned it should be very straightforward.
Quartz: it was already deprecated in Mule 3 by the Poll element. Its replacement in Mule 4 is the Scheduler: https://docs.mulesoft.com/mule-runtime/4.2/migration-core-poll
FTP: there is a new FTP connector, which works with operations rather than inbound/outbound endpoints: https://docs.mulesoft.com/mule-runtime/4.2/migration-connectors-ftp-sftp
gzip uncompress: this isn't mentioned in the migration guide, but a quick search shows that there is a replacement: https://docs.mulesoft.com/connectors/compression/compression-module#gzip-decompressor-strategy
JMS outbound: https://docs.mulesoft.com/mule-runtime/4.2/migration-connectors-jms#SendingMessages
byte-array-to-string: this kind of low-level transformation is usually not needed anymore in Mule 4 (https://docs.mulesoft.com/mule-runtime/4.2/migration-core)
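Putting those pieces together, a minimal sketch of what the migrated flow could look like (the flow name, cron expression, FTP path, config names and queue are placeholders, not your actual values):

<flow name="ftp-to-jms">
    <!-- Scheduler replaces the Quartz endpoint (placeholder cron expression) -->
    <scheduler>
        <scheduling-strategy>
            <cron expression="0 0/5 * * * ?"/>
        </scheduling-strategy>
    </scheduler>
    <!-- FTP is now an operation invoked by the flow (placeholder config and path) -->
    <ftp:read config-ref="FTP_Config" path="inbox/data.gz"/>
    <!-- The Compression module replaces gzip-uncompress-transformer -->
    <compression:decompress>
        <compression:decompressor>
            <compression:gzip-decompressor/>
        </compression:decompressor>
    </compression:decompress>
    <!-- byte-array-to-string is no longer needed; Mule 4 handles the content -->
    <jms:publish config-ref="JMS_Config" destination="xxx"/>
</flow>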
Most of the information is already in the documentation.
Another suggestion is to use the latest available versions of Mule and the connectors.
While running Mule, I am facing the below error:
Timeout waiting for mule context to be completely started
Please let me know a workaround for this. The same integration works fine on another system with Mule, i.e. the query fetch happens correctly there, but it does not work on my system. Please suggest a way to overcome this.
Thanks in advance!
Goutham, did you configure a timeout in your flow? If it is configured:
1. Is it configured in MUnit? If so, we need to look into the Run and Wait scope.
2. Or is this happening during the shutdown of Mule?
You can set a timeout value to allow the current flow to complete. However, there is no built-in method or utility to check which messages are in flight. You can connect a profiler, or just take a thread dump, and look at the active threads; this should give you an overview of what's happening at the JVM level.
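For example, assuming a JDK is on the path, a thread dump can be taken with jstack (replace the placeholder with the Mule Java process id):

jstack -l <mule-pid> > threads.txt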
To ensure all in-flight messages are processed, you can shut down Mule in two steps:
1. Stop the flow(s) manually (this prevents new messages from coming in)
2. Stop Mule
Alternatively, you can set shutdownTimeout to a value in milliseconds for a flow; however, this is not a global value.
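For example, a shutdown timeout can be configured like this (a minimal Mule 3 sketch; the value is a placeholder):

<!-- Wait up to 5 seconds for in-flight events during a graceful shutdown -->
<configuration shutdownTimeout="5000"/>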
https://docs.mulesoft.com/mule-user-guide/v/3.8/starting-and-stopping-mule-esb
http://grepcode.com/file/repo1.maven.org/maven2/org.mule/mule-core/3.7.0/org/mule/transport/AbstractMessageDispatcher.java
The second link shows the internal implementation of Mule's AbstractMessageDispatcher. Hope this helps.
Thanks
I can't seem to get the Castle Windsor integration working for MassTransit over RabbitMQ. Everything was working fine until I introduced Windsor into the picture. I referenced Castle.Windsor 3.2 and MassTransit.WindsorIntegration 2.9 and configured the container for use within my application. I'm registering the MassTransit consumers via:
Container.Register(..., Types.FromThisAssembly().BasedOn<IConsumer>());
When I debug and inspect the container after this line runs, I can see that it has successfully registered all of the consumers along with all of my other components. I then have the following code to initialize and register the service bus:
var serviceBus = ServiceBusFactory.New(sbc =>
{
    sbc.UseRabbitMq();                           // use the RabbitMQ transport
    sbc.ReceiveFrom(Config.ServiceBusEndpoint);  // queue this process consumes from
    sbc.Subscribe(sc => sc.LoadFrom(Container)); // subscribe all consumers registered in Windsor
});
Container.Register(Component.For<IServiceBus>().Instance(serviceBus));
I am using the LoadFrom(IWindsorContainer container) extension method provided by MassTransit.WindsorIntegration.
All of the examples I've found so far stop here and indicate that this is all you should have to do. Unfortunately for me, my consumers are never called and messages just pile up in the queue (eventually timing out and going to the error queue). The fact that messages show up in the consumer queue at all (plus I can see a single consumer bound to the queue via the RabbitMQ admin tool) indicates to me that the consumers are probably being subscribed properly, so I'm not sure where the problem lies.
I added NLog logging for Windsor and MassTransit, but no errors are showing up in the logs. I'm not sure how to proceed with troubleshooting at this point. Any ideas?
Also, this application is currently just a console application using Topshelf for development. Ultimately it will be installed as a Windows Service. Not sure if that is relevant or not but I thought I'd mention it just in case.
UPDATE
As a test, I created a very simple consumer with a parameterless constructor for processing a very simple test message. This consumer is successfully called! The "real" consumers, however, have dependencies that need to be injected via their constructors. I was hoping the container would resolve these, but apparently it's having some sort of trouble. Weird that nothing shows up in the logs about it. Stay tuned...
Okay, I figured it out. Somewhere along the way, while adding/removing NuGet packages, I managed to delete a reference to a DLL (ServiceStack.Text.dll) that one of my components (RedisClientsManager) needed.
I started the debugger, let all my components get registered, then popped open the Immediate Window and attempted to resolve each component one by one (by calling container.Resolve<RegisteredType>()) until I found the one that threw an exception when I attempted to resolve it.
The exception message from Windsor at that point told me exactly what the problem was. I'm a little lost as to why this wasn't being logged, or why the exception was not raised when the container itself attempted to resolve it. Anyhow, the moral of the story is: make sure your components resolve.
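If it helps, here is a rough way to automate that check at startup (a sketch against the Windsor 3.x API; assumes using System.Linq and your Container instance):

// Eagerly resolve every registered component so missing dependencies
// surface at startup instead of failing silently inside consumers.
foreach (var handler in Container.Kernel.GetAssignableHandlers(typeof(object)))
{
    try
    {
        Container.Resolve(handler.ComponentModel.Services.First());
    }
    catch (Exception ex)
    {
        Console.WriteLine("Cannot resolve {0}: {1}", handler.ComponentModel.Name, ex.Message);
    }
}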
I'm using Neo4j in a GlassFish server through a modified version of Alex Smirnov's Neo4j JCA connector.
My version is available here : https://github.com/Riduidel/neo4j-connector
I'm using this connector with neo4j 1.8.
As a consequence, when I want to use it, I first install the connector in my GlassFish application server, then use it from the applications that need to connect.
It works OK with fresh stores.
But when using it with stores created by a previous version, I encounter weird bugs.
Typically, today I got the following stack trace:
javax.resource.spi.ResourceAllocationException: Error in allocating a connection. Cause: Failed to transition org.neo4j.kernel.InternalAbstractGraphDatabase$DefaultKernelExtensionLoader#3bbd53b1 from NONE to STOPPED
...
...
.../* JCA internal exception stack */
...
...
Caused by: com.sun.appserv.connectors.internal.api.PoolingException: Failed to transition org.neo4j.kernel.InternalAbstractGraphDatabase$DefaultKernelExtensionLoader#494b584c from NONE to STOPPED
at com.sun.enterprise.resource.pool.ConnectionPool.createSingleResource(ConnectionPool.java:924)
at com.sun.enterprise.resource.pool.ConnectionPool.createResource(ConnectionPool.java:1185)
at com.sun.enterprise.resource.pool.datastructure.RWLockDataStructure.addResource(RWLockDataStructure.java:98)
... 66 more
Caused by: org.neo4j.kernel.lifecycle.LifecycleException: Failed to transition org.neo4j.kernel.InternalAbstractGraphDatabase$DefaultKernelExtensionLoader#494b584c from NONE to STOPPED
at org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.init(LifeSupport.java:388)
at org.neo4j.kernel.lifecycle.LifeSupport.init(LifeSupport.java:82)
at org.neo4j.kernel.lifecycle.LifeSupport.start(LifeSupport.java:116)
at org.neo4j.kernel.InternalAbstractGraphDatabase.run(InternalAbstractGraphDatabase.java:227)
at org.neo4j.kernel.EmbeddedGraphDatabase.<init>(EmbeddedGraphDatabase.java:79)
at org.neo4j.kernel.EmbeddedGraphDatabase.<init>(EmbeddedGraphDatabase.java:70)
at com.netoprise.neo4j.AbstractNeo4jManagedConnectionFactory.createDatabase(AbstractNeo4jManagedConnectionFactory.java:165)
at com.netoprise.neo4j.AbstractNeo4jManagedConnectionFactory.createDatabase(AbstractNeo4jManagedConnectionFactory.java:127)
at com.netoprise.neo4j.Neo4jManagedConnectionFactory.createManagedConnection(Neo4jManagedConnectionFactory.java:163)
at com.sun.enterprise.resource.allocator.ConnectorAllocator.createResource(ConnectorAllocator.java:160)
at com.sun.enterprise.resource.pool.ConnectionPool.createSingleResource(ConnectionPool.java:907)
... 68 more
Caused by: java.lang.AssertionError
at org.neo4j.index.impl.lucene.LuceneDataSource.cleanWriteLocks(LuceneDataSource.java:265)
at org.neo4j.index.impl.lucene.LuceneDataSource.cleanWriteLocks(LuceneDataSource.java:260)
at org.neo4j.index.impl.lucene.LuceneDataSource.cleanWriteLocks(LuceneDataSource.java:260)
at org.neo4j.index.impl.lucene.LuceneDataSource.cleanWriteLocks(LuceneDataSource.java:260)
at org.neo4j.index.impl.lucene.LuceneDataSource.<init>(LuceneDataSource.java:185)
at org.neo4j.index.lucene.LuceneIndexProvider.load(LuceneIndexProvider.java:72)
at org.neo4j.kernel.InternalAbstractGraphDatabase$DefaultKernelExtensionLoader.loadIndexImplementations(InternalAbstractGraphDatabase.java:1171)
at org.neo4j.kernel.InternalAbstractGraphDatabase$DefaultKernelExtensionLoader.init(InternalAbstractGraphDatabase.java:1143)
at org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.init(LifeSupport.java:382)
... 78 more
A quick inspection reveals that this exception is linked to an undeletable "write.lock" file. My write.lock file can't be deleted because, I guess, the migration is not over.
How can I make sure the migration is done before using the store, without migrating it outside of GlassFish?
Is there a way to have exclusive store migrations in that context? And if so, how?
And is that the solution to my problem?
EDIT 1: added the exception message.
EDIT 2: All of this only happens when the loaded graph was previously used with Neo4j 1.5 and is now used with a Neo4j 1.8 connector. When the graph is created by the connector, absolutely no error happens.
EDIT 3: Strangely enough, this only happens as long as there is no debugger attached to that code: as soon as I try to debug it, the issue stops appearing. This makes me think there may be a migration cleanup mechanism that removes the write lock once migration is done, and that this cleanup is not performed when using my Neo4j JCA connector. Is that a valid observation?
I am not too familiar with the JCA connector, but to be sure, I would just write a very small migration Java class that opens the database, lets it migrate, and shuts it down. Then try again with the JCA connector.
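Something like this minimal sketch (assuming the Neo4j 1.8 embedded API; the store path is a placeholder, and allow_store_upgrade is needed for upgrades across major versions):

import java.util.Map;

import org.neo4j.helpers.collection.MapUtil;
import org.neo4j.kernel.EmbeddedGraphDatabase;

public class StoreMigration
{
    public static void main(String[] args)
    {
        // Allow the 1.5 -> 1.8 store upgrade to run on startup
        Map<String, String> config = MapUtil.stringMap("allow_store_upgrade", "true");
        // Placeholder path: point this at the store the connector uses
        EmbeddedGraphDatabase db = new EmbeddedGraphDatabase("/path/to/store", config);
        // Opening the database performs the migration; a clean shutdown
        // releases the write.lock files.
        db.shutdown();
    }
}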
After further investigation, the cause turned out not to be multiple calls to the EmbeddedGraphDatabase constructor, but instead multiple identical IndexProviders being loaded.
I use neo4j embedded in an open-source JCA connector.
In this connector, the org.neo4j.kernel.Service class is replaced by a custom one which contains a workaround for service loading with JBoss non-shared libraries.
Unfortunately, in our context, this workaround implies loading the index provider twice:
once using the EAR classloader,
once using the GlassFish library classloader.
Why ?
Because our Neo4j instance is used both for application data AND for authentication, the Neo4j connector jar is put in ${domain}/lib. As a consequence, due to classloader delegation in the application server, the EAR classloader delegates to the GlassFish library classloader and finds the LuceneIndexProvider that way. Then, the GlassFish library classloader is directly used to load the same LuceneIndexProvider class.
This leaves us with two LuceneIndexProvider objects, both trying to migrate the Lucene index, which leads to the AssertionError: the write.lock file created by the first object should be deleted by the second one, which cannot do so.
I then changed that very specific class slightly, so that the JBoss workaround is used only when the default loading mechanism does not return any class (see the commit here). This small change worked like a charm, so I think you can consider this issue fixed.
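In pseudo-form, the change amounts to something like this (a hypothetical sketch of the idea, not the actual commit):

// Prefer the JDK's standard ServiceLoader lookup, and fall back to the
// JBoss-specific workaround only when it finds no provider.
Iterator<T> providers = ServiceLoader.load(type, classLoader).iterator();
if (!providers.hasNext())
{
    providers = loadWithJBossWorkaround(type, classLoader); // hypothetical helper
}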