Issuing commands over Spring Integration SFTP adapter - ssh

Is it possible to interact with a remote system by issuing command line instructions, for example running a script that returns some output over SSH using the Spring SFTP integration?
Or alternatively, is there a Spring Integration adapter available or in progress for SSH?

The SFTP Outbound Gateway supports a limited set of commands, such as ls, get, mget and rm.
Here is the link: http://static.springsource.org/spring-integration/reference/html/sftp.html
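For illustration, a minimal XML sketch of an ls call through such a gateway, based on the linked reference documentation (all bean, channel, host and credential names below are placeholders):

```xml
<!-- Hypothetical session factory; host/user/password are placeholders -->
<bean id="sftpSessionFactory"
      class="org.springframework.integration.sftp.session.DefaultSftpSessionFactory">
    <property name="host" value="remote.example.com"/>
    <property name="user" value="demo"/>
    <property name="password" value="secret"/>
</bean>

<!-- Sends an 'ls' for the remote directory given in the message payload -->
<int-sftp:outbound-gateway
        session-factory="sftpSessionFactory"
        request-channel="lsRequestChannel"
        command="ls"
        command-options="-1"
        expression="payload"
        reply-channel="lsReplyChannel"/>
```

Note this only covers the file-transfer commands listed above; running arbitrary shell commands or scripts would still require a plain SSH client library, which the SFTP module does not provide.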

Related

connecting redis via POSTMAN

Is it possible to connect to a Redis database via Postman? What I am trying to do: I am creating a test suite where I execute multiple test cases by passing the test file via the collection runner, and I am passing the Redis configuration values there as well.
I want to connect to Redis first and then execute an LPUSH to insert into the database. Can we do this using Postman?
No.
Postman only supports the HTTP protocol.
You must use a third-party tool that exposes Redis over the HTTP protocol, for example:
https://github.com/nicolasff/webdis
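Webdis maps a Redis command onto URL path segments, so Postman (or any HTTP client) can drive Redis through it. A small sketch of how such a URL is assembled; the host, port and key names are made up (7379 is webdis's default port):

```java
// Sketch: build a webdis-style URL, where each Redis command token
// becomes a URL path segment. Real arguments containing '/' or spaces
// would additionally need percent-encoding.
public class WebdisUrl {

    static String build(String base, String... commandParts) {
        StringBuilder url = new StringBuilder(base);
        for (String part : commandParts) {
            url.append('/').append(part);
        }
        return url.toString();
    }

    public static void main(String[] args) {
        // Equivalent of the Redis command: LPUSH mylist task1
        System.out.println(build("http://localhost:7379", "LPUSH", "mylist", "task1"));
        // → http://localhost:7379/LPUSH/mylist/task1
    }
}
```

A GET on that URL from the Postman collection runner would then perform the LPUSH, with webdis returning the Redis reply as JSON.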

Connect Mulesoft and Aurora RDS

I’m trying to connect my MySQL database (RDS Aurora Serverless) to Mulesoft, but I have to set up an SSH tunnel through an EC2 instance (AWS restrictions). So I tried an SSH connection from Mulesoft to my EC2 instance; all the tutorials I have found talk about sshmultiplexedconnector or the SSH Connector for Mule 3, and they seem to be deprecated.
Did you have the same problems, and do you have solutions or other methods?
PS, I’m using: Mule server 4.2.0, Anypoint Studio 7.3.4, an EC2 instance running Ubuntu 18.04.
Tutorial : https://blogs.mulesoft.com/dev/connectivity-dev/mule-in-a-shell-new-ssh-connector/
Thanks and regards.
There is no out-of-the-box connector for Mule 4 that implements an SSH tunnel. The connector you mentioned is for Mule 3, so it is not compatible, and it hasn't been maintained for some years. You could attempt to build a Mule 4 connector to do it; however, I would argue that a communication tunnel, like a VPN, is a task better implemented at the operating-system level. Just set it up there, and it should be transparent to the Mule application.
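As a sketch, such an OS-level tunnel could look like this (key file, hostnames and local port are placeholders); the Mule MySQL connector would then simply point at localhost:13306:

```
# Forward local port 13306 through the EC2 jump host to the Aurora endpoint
ssh -i ~/.ssh/ec2-key.pem -N \
    -L 13306:my-aurora-cluster.cluster-abc123.eu-west-1.rds.amazonaws.com:3306 \
    ubuntu@my-ec2-public-hostname
```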

PCF / Cloud connector for Rabbit management API

All,
I'm running a simple SpringBoot app in PCF using a Rabbit on-demand service. The auto reconfiguration of the ConnectionFactory for the internal Rabbit service works just fine.
However I need a list of all queues on the Rabbit host. AFAIK this is only available through a call to the Rabbit management plugin (a REST API), see RabbitManagementTemplate::getQueues. This class expects an http URI with credentials.
I know the URI and credentials are exposed through the vcap.services variables as "http_api_uri", but I wonder if there's a more elegant way to get an instance of RabbitManagementTemplate via Spring magic (Cloud Connectors / auto-reconfiguration) instead of manually reading the env vars and writing custom bean config.
It seems the ConnectionFactory only knows about the AMQP interface, and cannot create a RabbitManagementTemplate?
Thanks!
Spring Cloud Connectors won't help you here. It doesn't support setting up RabbitManagementTemplate, only a ConnectionFactory.
You don't have to parse the env yourself, you can use the flattened properties that Boot provides such as vcap.services.rabbitmq.credentials.http_api_uri. But you'll need to configure a RabbitManagementTemplate yourself using those Boot properties.
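For illustration, the value of that flattened property can be split with the JDK's URI class before handing the pieces to a RabbitManagementTemplate. The URI below is made up, and the (url, username, password) constructor should be checked against your spring-rabbit version:

```java
import java.net.URI;

public class RabbitApiCreds {

    // Splits an http_api_uri (as found under
    // vcap.services.rabbitmq.credentials.http_api_uri, typically of the form
    // "https://user:secret@host:port/api/") into the url/username/password
    // pieces a management template constructor expects.
    static String[] split(String httpApiUri) {
        URI uri = URI.create(httpApiUri);
        String[] userInfo = uri.getUserInfo().split(":", 2);
        String bareUrl = uri.getScheme() + "://" + uri.getHost()
                + (uri.getPort() != -1 ? ":" + uri.getPort() : "")
                + uri.getPath();
        return new String[] { bareUrl, userInfo[0], userInfo[1] };
    }

    public static void main(String[] args) {
        String[] parts = split("https://user:secret@rabbit-host:15672/api/");
        System.out.println(parts[0] + " | " + parts[1] + " | " + parts[2]);
        // → https://rabbit-host:15672/api/ | user | secret
    }
}
```

With those three parts you could then construct the template, e.g. new RabbitManagementTemplate(parts[0], parts[1], parts[2]), and call getQueues().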

Connect to existing SFTP server instead of starting new SFTP subprocess

I'm thinking about writing a new SFTP server. Current SFTP servers are started once per session: if there are three users of SFTP, there are three SFTP server processes. That's not what I want. I want one server that every new SFTP session connects to. How can I do this?
When you log in to the server to start an SFTP session, an SSH process is started, and an SFTP subsystem is started as well. The SSH process takes care of the encryption etc. The I/O goes through the standard file descriptors 0, 1 and 2 (stdin, stdout and stderr) of the SFTP process.
This all works when there is a dedicated SFTP process for every session. But how can I make it work when there is one SFTP server that all sessions connect to? Via some "ssh-to-sftp-connect-agent"?
More information:
I want to use SFTP version 6, which is better than version 3, the version used by OpenSSH. The OpenSSH community does not want to upgrade its SFTP implementation:
https://bugzilla.mindrot.org/show_bug.cgi?id=1953
A very good open source sftp server is at:
http://www.greenend.org.uk/rjk/sftpserver/
and a very useful overview:
http://www.greenend.org.uk/rjk/sftp/sftpversions.html
This server uses SFTP protocol version 6, but it does not implement (b)locking or ACL handling. To make (b)locking work, shared tables are necessary covering all open files, their access flags, and which session holds which (b)lock. When every SFTP session leads to another process with:
Subsystem sftp /usr/libexec/gesftpserver
(which is inevitable when you want to use any protocol higher than 3)
then a shared database is one solution to handle locks and ACLs.
Another solution is that every new SFTP session connects to one existing "super" SFTP server, which is started at boot time. Simultaneous access, locking etc. are then much easier to program.
How can I do this with a line like:
Subsystem sftp /usr/libexec/exampleconnectagent
In the ideal case, the agent sets up the connection between the dedicated SSH process for the session and the SFTP server, and then terminates.
Long story, is this possible? Do I have to use the passing of fd's described here:
Can I share a file descriptor to another process on linux or are they local to the process?
Thanks in advance.
Addition:
I'm working on an SFTP file server listening on a server socket. Clients can connect using OpenSSH's direct-streamlocal functionality to attach a channel to it. This way I can have one server process for all clients, which is what I wanted in the first place.
The current SFTP servers are started for every session.
What do you mean by "current SFTP servers"? Which one specifically?
OpenSSH (the most widely used SSH/SFTP server) did indeed open a new subprocess for each SFTP session, and there's hardly any problem with that. Recent versions don't anymore, though: with the (new) default configuration, an in-process SFTP server (aka internal-sftp) is used.
See OpenSSH: Difference between internal-sftp and sftp-server.
If you really want to get an answer to your question, you have to tell us, what SFTP/SSH server your question is about.
If it is indeed about OpenSSH:
Nothing needs to be done, the functionality is there already.
If you want to add your own implementation, you have to modify OpenSSH code, there's no way to plug it in. Just check how the internal-sftp is implemented.
The other way is using the agent architecture, as you have suggested yourself. If you want to take this approach and need some help, you should ask more specific question about inter-process communication and/or sharing file descriptors, and not about SFTP.

Legacy application to communicate with cloud foundry using RabbitMQ

I am new to cloud foundry and investigating possible ways for our legacy Java EE application to communicate asynchronously with an application running on cloud foundry.
We are doing a lot of asynchronous work already and are publishing events to Active MQ.
I know that Cloud Foundry has the possibility to bind with RabbitMQ, and my question is whether it is possible for an application running on Cloud Foundry to connect (listen) to an existing RabbitMQ instance outside the CF platform.
Any idea on other alternatives to achieve this?
Yes, that is possible. You can use a user provided service.
That allows you to inject into your app the environment variables needed to connect to RabbitMQ (like host, port, vhost, username, password).
Once you create that service, you can bind it to your app. Inside your app code, you can then read the environment variables exactly the same way as you would if you had used a RabbitMQ service provided by Cloud Foundry.
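As a sketch, the CLI steps could look like this (the service name, app name and AMQP URI are placeholders):

```
cf create-user-provided-service external-rabbit \
  -p '{"uri":"amqp://user:password@rabbit.example.com:5672/myvhost"}'
cf bind-service my-app external-rabbit
cf restage my-app
```

After the restage, the credentials appear under VCAP_SERVICES (under the user-provided entry), just like a marketplace RabbitMQ service's would.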