Problem Description:
We are trying to transfer a zip file to a destination coming from an IBM MQ queue.
The Mulesoft flow has an IBM MQ 'On New Message' connector as a source.
Whenever there is a new message in the IBM MQ queue, the connector picks it up and the 'Write' connector writes it to the local file system. (In our original scenario we use an SFTP connector to send the file to an SFTP server; for simplicity, here we just use the Write connector to write to our local file system to mimic writing to the SFTP server's file system.)
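For reference, a minimal sketch of the flow described above, assuming Mule 4 with the IBM MQ and File connectors (the config names, queue, host, and path are placeholders, not our real values):

```xml
<ibm-mq:config name="IBM_MQ_Config">
    <ibm-mq:connection host="mq-host" port="1414" queueManager="QM1" channel="DEV.APP.SVRCONN"/>
</ibm-mq:config>

<file:config name="File_Config">
    <file:connection workingDir="/tmp/outbound"/>
</file:config>

<flow name="mq-to-file-flow">
    <!-- 'On New Message' source: triggers for every message arriving on the queue -->
    <ibm-mq:listener config-ref="IBM_MQ_Config" destination="DEV.QUEUE.1"/>
    <!-- 'Write' operation: writes the message payload to the local file system
         (stand-in for the SFTP write in the real scenario) -->
    <file:write config-ref="File_Config" path="transfer.zip"/>
</flow>
```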
The problem we are having is that the zip file gets corrupted when the Mule flow writes it to the file system.
I believe the reason is that the connector is converting the character encoding, whereas we need a binary transfer of the data.
Please advise how we can configure the IBM MQ connector so that it transfers the data as binary without enforcing any character encoding.
The connector version we are currently using is 1.6.7; we have tried upgrading it as well, but that didn't help either.
Any leads on the above problem statement would be very much appreciated.
Related
I'm looking for advice on how to manually deploy a Mule application package on an on-premises Mule cluster, i.e. without using Runtime Manager (RM). The official documentation suggests using RM for this purpose, either via the GUI, CLI, or API. However, RM is not available in our environment.
I can manually deploy the package on a single node by copying it to the /apps folder. But this way the application is only deployed on a single node, not on the cluster.
I've tried using the AMC agent REST API for this purpose, with the same result: it only deploys on a single node.
So, what's the correct way of manually deploying a Mule application on a Mule server cluster without using Anypoint RM?
We are on Mule 4.4 EE.
Copy the application jar file into the apps directory of every node. Mule clusters do not transfer applications between nodes.
Alternatively, you can use the Runtime Manager Agent; however, it also works on a per-node basis. You need to send the same deployment request to each node.
Each connector may or may not be cluster aware. Read each connector's documentation to understand how it behaves. In particular, the documentation of the VM connector states:
When running in cluster mode, persistent queues are instead backed by the memory grid. This means that when a Mule flow uses VM Connector to publish content to a queue, Mule runtime engine (Mule) decides whether to process that message in the same origin node or to send it out to the cluster to be picked up and processed by another node.
You can register the multiple nodes through the AMC agent on the CloudHub control plane, create a server group, and deploy the code through the control plane's Runtime Manager; it takes care of deploying the same app to all n nodes.
I am trying to send the content of Word documents and PDFs to Apache OpenNLP. I am wondering if I can use ActiveMQ to read the MS Word file so that I can trigger a process in Apache Kafka to process the stream.
Any suggestion for streaming the PDF or Word content other than ActiveMQ is welcome.
If you use ActiveMQ "Classic" (i.e. any 5.x version) you'll have problems moving large messages as there's no real support for that use-case. However, ActiveMQ Artemis (i.e. ActiveMQ's next-gen broker) has support for arbitrarily large messages which would facilitate your use-case. The nice thing about having large message support in the broker is that you don't have to involve some other kind of storage mechanism in your solution. That makes development and maintenance of your application and environment a bit simpler.
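For illustration, a minimal sketch of streaming a large file to Artemis from a JMS client using the broker's documented large-message support via the JMS_AMQ_InputStream property (broker URL, queue, and file name are placeholders):

```java
import jakarta.jms.*; // use javax.jms.* with older client versions
import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

import java.io.BufferedInputStream;
import java.io.FileInputStream;

public class LargeFileSender {
    public static void main(String[] args) throws Exception {
        ConnectionFactory cf = new ActiveMQConnectionFactory("tcp://localhost:61616");
        try (Connection connection = cf.createConnection();
             Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE)) {
            MessageProducer producer = session.createProducer(session.createQueue("documents"));

            // Stream the file into the message body so it is chunked by the client/broker
            // instead of being loaded entirely into memory.
            BytesMessage message = session.createBytesMessage();
            message.setObjectProperty("JMS_AMQ_InputStream",
                    new BufferedInputStream(new FileInputStream("report.docx")));

            producer.send(message);
        }
    }
}
```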
Message queues generally shouldn't be used for file transfer. Put the files in blob storage like S3, then send the URI between clients (e.g. "s3://bucket/file.txt"), and download and process it elsewhere. The other option is to use Apache POI or similar tools in the producer client to parse your files, then send that data in whatever format you want (JSON, Avro, or Protobuf are generally used more often in streaming tools than XML).
Actual file processing has nothing to do with the queue technology used.
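For example, a minimal sketch of the "send the URI, not the file" pattern using a Kafka producer (the topic, bucket, and object key are made up for illustration):

```java
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

import java.util.Properties;

public class DocumentUriProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (Producer<String, String> producer = new KafkaProducer<>(props)) {
            // The document itself lives in blob storage; only its location travels through the queue.
            String uri = "s3://my-bucket/incoming/report.docx";
            producer.send(new ProducerRecord<>("documents-to-process", uri));
        }
        // A consumer elsewhere reads the URI, downloads the object from S3,
        // and runs the actual parsing / NLP step.
    }
}
```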
We are developing a system which uses RabbitMQ for sending and receiving data between its clients and servers.
The internet connection may sometimes be lost.
1- Can all the messages in the queue be exported to a file, and somehow be imported by the client using this file?
2- In a different scenario, a client wants to send some messages to the queue but has no internet connection. So we want to export all the messages from the client into a file and somehow send it to the server (e.g. transfer it to another location which has internet access). Is it possible to import this file into the queue?
I had the same questions as I wanted to replay messages for testing / load testing purposes.
I made RabbitDump, a dotnet tool, to do this. It allows you to transfer messages between AMQP and Zip archives (bundles of messages) in any combination. Examples: AMQP => ZIP, AMQP => AMQP, ZIP => AMQP and ZIP => ZIP (because why not..).
The tool can be found here. It's installable as a dotnet tool, using dotnet tool install --global MBW.Tools.RabbitDump.
This tool is useful for exporting messages from a remote queue and pushing them to a local RabbitMQ.
https://github.com/jecnua/rabbitmq-export-to-local
You can import/export messages using QueueExplorer.
Disclaimers: I'm the author, it's a commercial tool, and for now on Linux it runs under Wine.
https://www.cogin.com/QueueExplorer/rabbitmq/
I want to stream data from on-premises to the cloud (S3) using Kafka, for which I would need to install Kafka on the source machine and also in the cloud. But I don't want to install it in the cloud. I need some S3 connector through which I can connect with Kafka and stream data from on-premises to the cloud.
If your data is in Avro or JSON format (or can be converted to those formats), you can use the S3 sink connector for Kafka Connect. See Confluent's docs on that.
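For reference, a minimal sink configuration in the style of the Confluent docs (the connector name, topic, bucket, and region are placeholders):

```properties
name=s3-sink
connector.class=io.confluent.connect.s3.S3SinkConnector
tasks.max=1
topics=my-topic
s3.bucket.name=my-bucket
s3.region=us-east-1
storage.class=io.confluent.connect.s3.storage.S3Storage
format.class=io.confluent.connect.s3.format.json.JsonFormat
flush.size=1000
```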
Should you want to move actual (bigger) files via Kafka, be aware that Kafka is designed for small messages and not for file transfers.
There is a kafka-connect-s3 project from Spreadfast, consisting of both a sink and a source connector, which can handle text format. Unfortunately it is not really kept up to date, but it works nevertheless.
We have a SOAP service that accepts a small file, so we receive it in memory and we then need to SFTP it off to another server. What would the configuration be for that? How do we take our String (XML file) and send it to the server? (I assume the SFTP connector is the best way to go here, but how do we configure it? It looks like it takes one file as a parameter, whereas I need to feed it bytes to send, with a filename that we specify.)
thanks,
Dean
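For what it's worth, a minimal Mule 4 style sketch of the kind of configuration being asked about, assuming the SFTP connector's Write operation (host, credentials, and file name are placeholders); Write takes the current in-memory payload as the content and the path/filename as a parameter:

```xml
<sftp:config name="SFTP_Config">
    <sftp:connection host="target-server" port="22"
                     username="user" password="secret"
                     workingDir="/upload"/>
</sftp:config>

<flow name="send-xml-over-sftp">
    <!-- ... the SOAP service has already left the XML String as the payload ... -->
    <set-variable variableName="fileName" value="request.xml"/>
    <!-- Writes the in-memory payload to the remote server under a name we choose -->
    <sftp:write config-ref="SFTP_Config" path="#[vars.fileName]"/>
</flow>
```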