Send a .sh file in RabbitMQ

I am new to RabbitMQ and I am trying to send a .sh file through RabbitMQ. I have set up my queues and exchanges. I am using spring-amqp, and I can send JSON messages with my listener container:
public SimpleMessageListenerContainer messageListenerContainer() {
    SimpleMessageListenerContainer container = new SimpleMessageListenerContainer(connectionFactory());
    container.setQueues(topicQueue());
    container.setAcknowledgeMode(AcknowledgeMode.AUTO);
    container.setMessageListener(new MessageListenerAdapter(pageListener(), jsonMessageConverter()));
    return container;
}
but I am not sure how to send a .sh file or how to handle it in my pageListener. Any idea how to do it?

You need to read the file and send its content.
You can use a SimpleMessageConverter (the default): if the content_type property is text/plain you'll get a String; otherwise you'll get a byte[].
On the receiving side you'd (presumably) have to write it back out to a file and set the permissions.
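For illustration, here is a minimal sketch of both sides, assuming the default SimpleMessageConverter is in place so the script travels as a byte[]. The file paths, exchange name, and routing key are placeholders, not part of the original setup:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.attribute.PosixFilePermissions;

import org.springframework.amqp.rabbit.core.RabbitTemplate;

public class ScriptTransfer {

    // Producer side: read the script and publish its raw bytes.
    public static void send(RabbitTemplate rabbitTemplate) throws IOException {
        byte[] script = Files.readAllBytes(Paths.get("/tmp/deploy.sh"));
        rabbitTemplate.convertAndSend("topicExchange", "scripts.deploy", script);
    }

    // Consumer side: with the default converter the MessageListenerAdapter
    // calls handleMessage(byte[]); write the payload to disk and restore
    // the execute permission, which is lost in transit.
    public void handleMessage(byte[] payload) throws IOException {
        Path target = Paths.get("/tmp/received-deploy.sh");
        Files.write(target, payload);
        Files.setPosixFilePermissions(target, PosixFilePermissions.fromString("rwxr-xr-x"));
    }
}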

Redis connection pool configured using spring-data-redis but not working correctly

What I'm using:
spring-data-redis 1.7.0.RELEASE
Lettuce 3.5.0.Final
I configured Spring beans related to Redis as follows:
@Bean
public LettucePool lettucePool() {
    GenericObjectPoolConfig poolConfig = new GenericObjectPoolConfig();
    poolConfig.setMaxIdle(10);
    poolConfig.setMinIdle(8);
    ... // setting others..
    return new DefaultLettucePool(host, port, poolConfig);
}

@Bean
public RedisConnectionFactory redisConnectionFactory() {
    return new LettuceConnectionFactory(lettucePool());
}

@Bean
public RedisTemplate<String, Object> redisTemplate() {
    RedisTemplate<String, Object> redisTemplate = new RedisTemplate<String, Object>();
    redisTemplate.setConnectionFactory(redisConnectionFactory());
    redisTemplate.setEnableTransactionSupport(true);
    ... // setting serializers..
    return redisTemplate;
}
The redisTemplate bean is autowired and used for Redis operations.
The connections look correctly established when I check with the 'info' command via redis-cli: the client count is exactly the value configured on the lettucePool bean, plus 1 (redis-cli itself is also a client).
However, my application's log shows that operation requests are always sent through the same single port. So I checked the client status with the 'client list' command, and it shows the pooled number of clients, but only one port is actually sending requests.
What am I missing?
This is caused by a Lettuce-specific feature, 'sharing the native connection'.
LettuceConnectionFactory in spring-data-redis has a setter named setShareNativeConnection(boolean), which defaults to true. This means that no matter how many connections are created and pooled, only one native connection is used as long as only non-blocking, non-transactional operations are called.
As you can see, I didn't set the value manually, so it stayed at the default 'true', and I had no blocking or transactional operations.
Additionally, the reason the default is true is that Redis itself is single-threaded: even if clients send many operations simultaneously, Redis must execute them one by one, so setting this value to 'false' does not by itself increase Redis' throughput.
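If you do need each pooled connection to actually be used, a minimal sketch (reusing the lettucePool bean from the question) would be to turn that flag off on the factory; whether this helps throughput is a separate question, for the reason given above:

@Bean
public RedisConnectionFactory redisConnectionFactory() {
    LettuceConnectionFactory factory = new LettuceConnectionFactory(lettucePool());
    // With sharing disabled, each RedisConnection borrows a dedicated
    // native connection from the pool instead of reusing one shared link.
    factory.setShareNativeConnection(false);
    return factory;
}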

Is the code below correct for connecting to a remote Linux host and getting a few tasks done using Apache Mina?

I want to switch from JSch to Apache Mina to query remote Linux hosts and get a few tasks done.
I need to do things like list the files of a remote host, change directory, get file contents, put a file onto the remote host, etc.
I am able to successfully connect and execute a few shell commands using session.executeRemoteCommand().
public byte[] getRemoteFileContent(String argDirectory, String fileName)
        throws SftpException, IOException {
    ByteArrayOutputStream stdout = new ByteArrayOutputStream();
    StringBuilder cmdBuilder = new StringBuilder("cat" + SPACE + remoteHomeDirectory);
    cmdBuilder.append(argDirectory);
    cmdBuilder.append(fileName);
    _session.executeRemoteCommand(cmdBuilder.toString(), stdout, null, null);
    return stdout.toByteArray();
}

public void connect()
        throws IOException {
    _client = SshClient.setUpDefaultClient();
    _client.start();
    ConnectFuture connectFuture = _client.connect(_username, _host, portNumber);
    connectFuture.await();
    _session = connectFuture.getSession();
    shellChannel = _session.createShellChannel();
    _session.addPasswordIdentity(_password);
    // TODO : fix timeout
    _session.auth().verify(Integer.MAX_VALUE);
    _channel.waitFor(ccEvents, 200);
}
I have the following questions:
How can I send a ZIP file to a remote host more easily at the API level (not at the shell-command level)? And likewise for all the other operations.
Can I secure the connection between my localhost and the remote host with a certificate?
As of now, I am using sshd-core and sshd-common version 2.2.0. Are these libraries enough, or do I need to include any others?
executeRemoteCommand() is stateless; how can I maintain state?
I needed sshd-sftp and its APIs to get the file transfer working.
The code below obtains the proper API:
sftpClient = SftpClientFactory.instance().createSftpClient(clientSession);
On that sftpClient I called the read() and write() methods to get the task done. This answers my question fully.
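For reference, a minimal sketch of how the read()/write() calls can be used (the remote and local paths and the clientSession variable are assumptions; in 2.2.0 the SftpClient classes live under org.apache.sshd.client.subsystem.sftp):

import java.io.InputStream;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Paths;

import org.apache.sshd.client.session.ClientSession;
import org.apache.sshd.client.subsystem.sftp.SftpClient;
import org.apache.sshd.client.subsystem.sftp.SftpClientFactory;

public class SftpTransfer {

    // Download a remote file over SFTP into a local file.
    public void download(ClientSession clientSession) throws Exception {
        try (SftpClient sftp = SftpClientFactory.instance().createSftpClient(clientSession);
             InputStream in = sftp.read("/remote/dir/archive.zip")) {
            Files.copy(in, Paths.get("/local/dir/archive.zip"));
        }
    }

    // Upload a local file (e.g. a ZIP) to the remote host over SFTP.
    public void upload(ClientSession clientSession) throws Exception {
        try (SftpClient sftp = SftpClientFactory.instance().createSftpClient(clientSession);
             OutputStream out = sftp.write("/remote/dir/archive.zip")) {
            Files.copy(Paths.get("/local/dir/archive.zip"), out);
        }
    }
}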

Streaming S3 object to VertX Http Server Response

The title basically explains itself.
I have a REST endpoint with Vert.x. Upon hitting it, I have some logic which results in an AWS S3 object.
My previous logic did not upload to S3 but saved the file locally, so I could respond with routerCxt.response().sendFile(file_path...).
Now that the file is in S3, I would have to download it locally before I could call the code above.
That is slow and inefficient. I would like to stream the S3 object directly to the response object.
In Express, it's something like this: s3.getObject(params).createReadStream().pipe(res);.
I read a little bit and saw that Vert.x has a class called Pump, but it is used with vertx.fileSystem() in the examples.
I am not sure how to plug the InputStream from S3's getObjectContent() into vertx.fileSystem() to use Pump.
I am not even sure Pump is the correct way, because I tried to use Pump to return a local file and it didn't work.
router.get("/api/test_download").handler(rc -> {
    rc.response().setChunked(true).endHandler(endHandlr -> rc.response().end());
    vertx.fileSystem().open("/Users/EmptyFiles/empty.json", new OpenOptions(), ares -> {
        AsyncFile file = ares.result();
        Pump pump = Pump.pump(file, rc.response());
        pump.start();
    });
});
Is there any example for me to do that?
Thanks
It can be done if you use the Vert.x WebClient to communicate with S3 instead of the Amazon Java Client.
The WebClient can pipe the content to the HTTP server response:
webClient = WebClient.create(vertx, new WebClientOptions().setDefaultHost("s3-us-west-2.amazonaws.com"));

router.get("/api/test_download").handler(rc -> {
    HttpServerResponse response = rc.response();
    response.setChunked(true);
    webClient.get("/my_bucket/test_download")
        .as(BodyCodec.pipe(response))
        .send(ar -> {
            if (ar.failed()) {
                rc.fail(ar.cause());
            } else {
                // Nothing to do: the content has been sent to the client and response.end() called
            }
        });
});
The trick is to use the pipe body codec.

Spring Cloud Stream unable to parse a message posted to RabbitMQ using Spring RestTemplate

I have an issue getting a message into my spring-cloud-stream Spring Boot app.
I am using RabbitMQ as the message engine.
The message producer is a non-Spring-Boot app, which sends the message using Spring RestTemplate.
Queue name: "audit.logging.rest"
The consumer application is set up to listen to that queue. This app is a Spring Boot app (spring-cloud-stream).
Below is the consumer code.
application.yml
cloud:
  stream:
    bindings:
      restChannel:
        binder: rabbit
        destination: audit.logging
        group: rest
AuditServiceApplication.java
@SpringBootApplication
public class AuditServiceApplication {

    @Bean
    public ByteArrayMessageConverter byteArrayMessageConverter() {
        return new ByteArrayMessageConverter();
    }

    @Input
    @StreamListener(AuditChannelProperties.REST_CHANNEL)
    public void receive(AuditTestLogger logger) {
        ...
    }
AuditTestLogger.java
public class AuditTestLogger {

    private String applicationName;

    public String getApplicationName() {
        return applicationName;
    }

    public void setApplicationName(String applicationName) {
        this.applicationName = applicationName;
    }
}
Below is the request being sent from the producer app, in JSON format:
{"applicationName" : "AppOne" }
I found a couple of issues:
Issue 1:
What I noticed is that the method below only gets triggered when the method parameter is declared as Object, since spring-cloud-stream is not able to parse the message into the Java POJO.
@Input
@StreamListener(AuditChannelProperties.REST_CHANNEL)
public void receive(AuditTestLogger logger) {
Issue 2:
When I changed the method to receive an Object, I see that the object is of type RMQTextMessage, which cannot be parsed. However, I can see the actual posted message inside it, under the text property.
I wrote a ByteArrayMessageConverter, which didn't help either.
Is there any way to tell Spring Cloud Stream to extract the message from the RMQTextMessage using a MessageConverter and get the actual message out of it?
Thanks in advance.
RMQTextMessage? It looks like it is part of rabbitmq-jms-client.
With the RabbitMQ binder you should rely only on Spring AMQP.
Now let's figure out what your producer application is doing.
Since you get an RMQTextMessage as the value in the @StreamListener method, that tells me the sender really does use rabbitmq-jms-client for producing, and therefore the real AMQP message in the queue has that RMQTextMessage as a wrapper around the real payload.
Why not use Spring AMQP there as well?
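For instance, a minimal sketch of a plain Spring AMQP producer that publishes the JSON payload directly to the queue from the question; the connection settings and the choice of Jackson2JsonMessageConverter are assumptions:

import org.springframework.amqp.rabbit.connection.CachingConnectionFactory;
import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.amqp.support.converter.Jackson2JsonMessageConverter;

public class AuditProducer {

    public static void main(String[] args) {
        CachingConnectionFactory connectionFactory = new CachingConnectionFactory("localhost");
        RabbitTemplate rabbitTemplate = new RabbitTemplate(connectionFactory);
        // Serializes the POJO as JSON and sets content_type=application/json
        rabbitTemplate.setMessageConverter(new Jackson2JsonMessageConverter());

        AuditTestLogger payload = new AuditTestLogger();
        payload.setApplicationName("AppOne");

        // Publish via the default exchange straight to the binder's queue
        rabbitTemplate.convertAndSend("audit.logging.rest", payload);

        connectionFactory.destroy();
    }
}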
It's a late reply, but I had the exact same problem and solved it by sending and receiving the messages in application/json format. Use this in the Spring Cloud Stream config:
content-type: application/json
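In the application.yml from the question, that property goes on the binding itself, roughly like this (sketch):

cloud:
  stream:
    bindings:
      restChannel:
        binder: rabbit
        destination: audit.logging
        group: rest
        content-type: application/json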

Moving files with moveFailed isn't working

I'm new to working with Apache Camel. Could you help me with moving files? I have the following route:
from("file:data?noop=true?move={{package.success}}&moveFailed={{package.failed}}")
.split(ExpressionBuilder.beanExpression(new InvoiceIteratorFactory(), "createIterator"))
.streaming()
.process(new ValidatorProcessor())
.choice()
.when(new Predicate() {
#Override
public boolean matches(Exchange exchange) {
..;
}
})
.to("jpa://...?consumer.transacted=true")
.otherwise()
.aggregate(header(PropertyNameConstants.AGGREGATOR_HEADER), new ErrorsAggregationStrategy())
.completionPredicate(new Predicate() {
#Override
public boolean matches(Exchange exchange) {
...;
}
})
.to("smtps://smtp.gmail.com?username={{remote.e-mail}}&password={{remote.password}}");
So, files with errors should be moved to the "failed" directory and files without errors to the "success" directory. I tried throwing an exception after aggregating the required messages (while parsing a file with errors), so that the file would be moved to the "failed" directory, but all files were moved to the "success" directory, even when there was an exception.
If I throw the exception before the aggregator, the file is moved to the "failed" directory, but the last "to" (sending the mail) doesn't run.
If you have a copy of Camel in Action, then I suggest reading chapter 8 about the Aggregator EIP to understand how it works, and the fact that it's a stateful EIP: there is a hand-off of the Exchange, so the consumer completes, and the aggregated exchange that comes out of the aggregator is executed independently of the originally consumed exchange.
You may also want to look at the Composed Message Processor EIP, and use the splitter-only version:
http://camel.apache.org/composed-message-processor.html
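For illustration, here is a rough sketch of that splitter-only approach applied to the route from the question. It assumes the ErrorsAggregationStrategy marks the exchange (via the AGGREGATOR_HEADER header) when any record failed validation; that marker, the throwException step, and the dropped noop option are assumptions, not part of the original route:

import org.apache.camel.builder.ExpressionBuilder;
import org.apache.camel.builder.RouteBuilder;

public class InvoiceRoute extends RouteBuilder {

    @Override
    public void configure() {
        from("file:data?move={{package.success}}&moveFailed={{package.failed}}")
            // Splitter-only composed message processor: passing the aggregation
            // strategy to split() folds every record's result back into the one
            // exchange that continues after end().
            .split(ExpressionBuilder.beanExpression(new InvoiceIteratorFactory(), "createIterator"),
                   new ErrorsAggregationStrategy())
                .streaming()
                .process(new ValidatorProcessor())
                // per-record persistence of valid invoices stays inside the split
                .to("jpa://...?consumer.transacted=true")
            .end()
            // From here on we are back on the consumed exchange, once per file.
            .choice()
                .when(header(PropertyNameConstants.AGGREGATOR_HEADER).isNotNull())
                    .to("smtps://smtp.gmail.com?username={{remote.e-mail}}&password={{remote.password}}")
                    // Propagating this exception to the file consumer makes it use moveFailed.
                    .throwException(new IllegalStateException("invoice file contained validation errors"))
            .end();
    }
}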