I was having an intermittent issue running a Mule batch job over a large data set in Anypoint Studio. The issue was resolved by enabling the 'Always' option under 'Clear Application Data' in 'Run Configurations' (as per the instructions in "Mule ESB - Clear Memory of a batch process").
How can I enable the same 'Always' behaviour in a standalone Mule Runtime at startup, i.e. when the batch is not being run from Anypoint Studio? Is there a command-line argument that can be used in the Mule Runtime startup script to achieve the same goal?
By deleting the local data you are deleting batch queues, persistent object stores and possibly other information. In a development environment like the Anypoint Studio IDE that is usually OK, but for a standalone Mule Runtime it means you are deleting production data, for example the records a batch job uses to continue processing after a restart. That data will be lost. Having said that, it might be needed if the data is completely corrupted.
It is a best practice, and strong advice to any user, to resolve the root cause of the issue rather than delete data. It should never be done every time you start your production Mule, only when there is absolutely no other alternative.
I don't recommend deleting local files at all. If, even after these warnings, you absolutely need to do it, never delete the entire .mule directory. If you still want to risk losing data, delete only the directory named after your application under the .mule directory.
I am attempting to host a Saga from one project in another project using NServiceBus 6 with SqlPersistence and SqlDialect.MsSqlServer. In most examples I have found, the Saga is contained in the same assembly as the hosting app, and perhaps this is why I am struggling.
When hosting everything in the same app, the NServiceBus.Persistence.Sql.MsBuild package correctly outputs Saga .sql files during the build and then picks these up and executes them on run. Using a separate app, only the Outbox, Subscription and Timeout .sql files are generated, not the Saga ones. The following entry is then logged on run:
INFO NServiceBus.Persistence.Sql.Installer Directory '[PATH]\SagaPersistence\Service\bin\Debug\NServiceBus.Persistence.Sql\MsSqlServer\Sagas' not found so no saga creation scripts will be executed.
A full VS 2017 repro may be found at https://github.com/WolfyUK/NServiceBusSagaSqlPersistence.
Firstly, is it a bad idea to host Sagas from another service, rather than having them self-hosted? If not, can someone advise the best way to resolve the SQL Persistence issue?
Can you add NServiceBus.Persistence.Sql.MsBuild to the Saga project? The scripts should then be generated there. Unfortunately they're not copied to the host's folder, so you'll have to take them from there into production, or have them generated by using EnableInstallers, as you're already doing.
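For illustration, referencing the packages from the Saga project itself might look like this (a sketch only; the version numbers are illustrative and should match whatever the host already uses, and packages.config works just as well as PackageReference):

    <!-- In the Saga project's .csproj; versions shown are illustrative only -->
    <ItemGroup>
      <PackageReference Include="NServiceBus.Persistence.Sql" Version="2.*" />
      <PackageReference Include="NServiceBus.Persistence.Sql.MsBuild" Version="2.*" />
    </ItemGroup>

With the package in place, the generated Sagas folder should then appear under that project's bin\Debug\NServiceBus.Persistence.Sql\MsSqlServer directory after a build.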
I was recently modifying some of my server properties in Rational Application Developer to try to increase the memory of my JVM on startup. I forgot to take a backup before doing this, and by adding an incorrect JVM argument I seem to have broken my server and left it in an unworkable state. Whenever I try to start the server to make any configuration changes, the JVM refuses to start because of the invalid parameters being passed in.
Is there a way to reset any JVM changes for WebSphere Application Server v7.0 through the filesystem, or a way to do it without needing the server to be running already? I have been looking around in the WAS profile hoping to stumble onto the file where my settings ultimately live, but have had no luck.
It should be possible to write a wsadmin script to view/adjust the JVM options, but if you're on a non-z/OS platform, the fastest way to get back to a working state is probably to edit PROFILE_HOME/config/cells/CELL/nodes/NODE/servers/SERVER/server.xml; the JVM settings are typically written near the very end of the file.
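To give an idea of what to look for, the JVM entry near the end of server.xml looks roughly like this (an illustrative fragment; the xmi:id and the values will differ on your system, and the offending argument is usually in genericJvmArguments or one of the heap-size attributes):

    <!-- Illustrative only: remove or correct the bad argument here, save, and start the server again -->
    <jvmEntries xmi:id="JavaVirtualMachine_1183122130078" verboseModeClass="false"
        verboseModeGarbageCollection="false" verboseModeJNI="false"
        initialHeapSize="256" maximumHeapSize="1024"
        genericJvmArguments="-Dexample.property=value"/>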
For asynchronous processing of a large number of files it would be helpful to store messages in persistent storage, to relieve the JVM heap and avoid data loss in case of a system failure.
I configured a file-queue-store, but unfortunately I cannot see any .msg files in the .mule/queuestore/myqueuename folder.
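For reference, a file-backed VM queue in Mule 3.x is typically configured along these lines (a rough sketch from memory; element and attribute names should be checked against the schema of the exact version in use):

    <!-- Sketch only: a VM connector whose queues are backed by a file queue store -->
    <vm:connector name="persistentVmConnector">
        <vm:queue-profile maxOutstandingMessages="500">
            <file-queue-store/>
        </vm:queue-profile>
    </vm:connector>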
I feed the flow with files from an SMB endpoint and send them to a CXF endpoint.
When I stop Mule ESB (version 3.2.0) properly during file processing, it writes a lot of .msg files to the queuestore, and after a restart it processes them one by one.
But when I kill the JVM (to simulate a system failure, an OutOfMemoryError, etc.), there are no files in the queuestore, so all of the in-flight messages are lost.
My question: is it possible to force the queue store to write the messages to disk and delete them only when they have been fully processed?
Please advise. Thanks in advance.
Mule 3.2.0 was affected by this issue.
You should consider upgrading.
I'm having major issues with Mule 3 and files that are read and should later be put on a standard queue on ActiveMQ.
Basically it's a really simple service: the inbound side starts off by picking files up from an SFTP endpoint.
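A rough sketch of that kind of flow in Mule 3 XML (hypothetical names, hosts and queue names, not the actual configuration):

    <!-- Hypothetical sketch: poll an SFTP directory, archive each file, then publish it to ActiveMQ -->
    <jms:activemq-connector name="amqConnector" brokerURL="tcp://localhost:61616"/>

    <flow name="sftpToActiveMqFlow">
        <sftp:inbound-endpoint host="sftp.example.com" port="22" user="mule" password="secret"
            path="/incoming" archiveDir="/incoming/archive"/>
        <jms:outbound-endpoint queue="standard.queue" connector-ref="amqConnector"/>
    </flow>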
The file is read correctly from the SFTP area, and the Mule log for the reading application states that the file has been written to the specified archiveDir.
After this it goes silent and nothing else happens: the file is just placed in the archiveDir, and neither ActiveMQ nor Mule 3 gives any indication that something has gone wrong.
The queue names etc. are all correct.
Basically the same environment is running on a second server without any problems.
Are there any commonly known issues that could cause Mule not to continue with its processing and put the file on the queue?
Thx in advance!
While using Windows Azure Table Storage in a WCF service web role, I tried to create a CloudStorageAccount in the following way:
storageAccount =
CloudStorageAccount.Parse(Microsoft.WindowsAzure.CloudConfigurationManager.GetSetting("[Setting name]"))
I get the following exception:
ConfigurationErrorsException "Could not create Microsoft.WindowsAzure.Diagnostics.DiagnosticMonitorTraceListener, Microsoft.WindowsAzure.Diagnostics, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35."
The MSDN help says that 1) Visual Studio must be run as an administrator, and 2) the role must be running under full trust (change the .NET trust level option to Full Trust).
I have done both, but I still get the same exception.
One thing that can cause this error is running the web role itself, instead of running the containing cloud project. If this is the issue, you could fix it by ensuring that the cloud project is set as the startup project for debugging, and not the web role.
It's possible, and sometimes useful, to run the ASP.NET project that defines the web role on its own. This can be a lot quicker than running things in the Azure Compute Emulator. It may also enable you to develop your project without having to run VS elevated. Also, I've found that the emulator tends to cause Visual Studio to report an invalid memory access error from time to time, at which point you need to restart VS. Running the web role directly avoids all these problems.
However, there are some things that can prevent this from working, and the exception you describe is a symptom of one of these problems. If your web role's Web.config includes configuration for Azure's DiagnosticMonitorTraceListener (and Visual Studio adds that by default when you create a web role) then the first thing that tries to generate trace output will crash with the error you describe if you run outside the emulator. And as it happens, retrieving a setting from the CloudConfigurationManager appears to do this.
This isn't peculiar to the CloudConfigurationManager by the way. All it's doing is producing some trace output. VS configures web roles to send all trace output to the Azure diagnostic listener, and because that listener can only run in either the compute emulator or an actual Azure instance, the first thing that tries to produce trace output will crash. CloudConfigurationManager is a common candidate because it happens to produce trace output, and it typically gets used early on when a role starts up. But in principle, anything that produces trace output could hit this exception.
A simple way to avoid this is to remove the relevant section from the configuration file. When you create a new web role, Visual Studio adds a <system.diagnostics> section that configures the default trace output to go to the Azure diagnostic listener. You could just comment that out. That will enable you to debug the web role directly in Visual Studio without using the compute emulator (assuming you aren't doing anything else that depends on being in a role environment).
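For reference, the section Visual Studio adds to a new web role's Web.config looks roughly like this (reproduced from memory, so check it against your own file before commenting it out):

    <!-- Default Azure trace listener registration added by the web role template -->
    <system.diagnostics>
      <trace>
        <listeners>
          <add type="Microsoft.WindowsAzure.Diagnostics.DiagnosticMonitorTraceListener, Microsoft.WindowsAzure.Diagnostics, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"
               name="AzureDiagnostics">
            <filter type="" />
          </add>
        </listeners>
      </trace>
    </system.diagnostics>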
Of course, the problem with that is that you'll no longer get any diagnostic traces when running in Azure. One way to solve that is to move the relevant configuration to the Web.config.Release file (adding the necessary xdt: attributes).
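A sketch of what that Web.config.Release transform might look like, assuming the usual xdt namespace is declared on the root configuration element:

    <configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
      <!-- Re-insert the Azure diagnostic listener only in Release builds -->
      <system.diagnostics xdt:Transform="Insert">
        <trace>
          <listeners>
            <add type="Microsoft.WindowsAzure.Diagnostics.DiagnosticMonitorTraceListener, Microsoft.WindowsAzure.Diagnostics, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"
                 name="AzureDiagnostics">
              <filter type="" />
            </add>
          </listeners>
        </trace>
      </system.diagnostics>
    </configuration>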
This change will also stop the Azure diagnostic trace listener from running when you use the local compute emulator. (That's less of a problem, because the trace messages will still appear in the debugger. It just means you won't get persistent copies of the traces copied to table storage like you would when running for real.) The obvious way to fix this would seem to be to make a similar modification to Web.config.Debug (or to run the release build in the emulator), but there's a snag: apparently cloud projects do not apply configuration file transforms when packaging for the emulator by default. Fortunately, you can fix this: http://blog.hill-it.be/2011/03/07/no-web-config-transformation-in-local-azure/ shows how to enable transforms for local debugging in the compute emulator. (Transforms are never applied when debugging an ASP.NET project directly from within VS, by the way.)
I've found that this error can also be caused by the wrong assembly version in your web.config. For example, your config may still reference

Version=1.0.0.0

while Microsoft.WindowsAzure.Diagnostics is up to version 1.8.0.0 as of now. Try updating the reference to the version you actually have installed.
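For example, if the version of Microsoft.WindowsAzure.Diagnostics actually installed is 1.8.0.0, the listener entry would need to reference that version (adjust to whatever version you have; the public key token stays the same):

    <add type="Microsoft.WindowsAzure.Diagnostics.DiagnosticMonitorTraceListener, Microsoft.WindowsAzure.Diagnostics, Version=1.8.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"
         name="AzureDiagnostics">
      <filter type="" />
    </add>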
Remove the <add type="Microsoft.WindowsAzure.Diagnostics.DiagnosticMonitorTraceListener, ..." /> listener lines from your Web.config.