Is the below code correct to connect to a remote Linux host and get a few tasks done using Apache Mina?

I want to switch from JSch to Apache MINA to query remote Linux hosts and get a few tasks done.
I need to do things like list the files of a remote host, change directory, get file contents, put a file onto the remote host, etc.
I am able to successfully connect and execute a few shell commands using session.executeRemoteCommand().
public byte[] getRemoteFileContent(String argDirectory, String fileName)
        throws SftpException, IOException {
    ByteArrayOutputStream stdout = new ByteArrayOutputStream();
    StringBuilder cmdBuilder = new StringBuilder("cat" + SPACE + remoteHomeDirectory);
    cmdBuilder.append(argDirectory);
    cmdBuilder.append(fileName);
    _session.executeRemoteCommand(cmdBuilder.toString(), stdout, null, null);
    return stdout.toByteArray();
}
public void connect()
        throws IOException {
    _client = SshClient.setUpDefaultClient();
    _client.start();
    ConnectFuture connectFuture = _client.connect(_username, _host, portNumber);
    connectFuture.await();
    _session = connectFuture.getSession();
    shellChannel = _session.createShellChannel();
    _session.addPasswordIdentity(_password);
    // TODO : fix timeout
    _session.auth().verify(Integer.MAX_VALUE);
    _channel.waitFor(ccEvents, 200);
}
I have the following questions:
How can I send a ZIP file to a remote host more easily at the API level (not at the shell-command level)? Likewise for all the other operations at the API level.
Can I secure the connection between my local host and the remote host through a certificate?
As of now, I am using SSHD-CORE and SSHD-COMMON version 2.2.0. Are these libraries enough, or do I need to include any other libraries?
executeRemoteCommand() is stateless; how can I maintain state?

I needed sshd-sftp and its APIs to get the file transfer working.
The code below obtains the proper API:
sftpClient = SftpClientFactory.instance().createSftpClient(clientSession);
On sftpClient I called the read() and write() methods to get the task done. This answers my question fully.
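For anyone who needs a starting point, here is a minimal sketch of what the SFTP-based transfer can look like. It is an illustration rather than my exact code: the imports are for the sshd-sftp 2.x module (org.apache.sshd.client.subsystem.sftp in 2.2.0; later releases moved it to org.apache.sshd.sftp.client), the method names and paths are placeholders, and it assumes Java 9+ for readAllBytes().

import java.io.InputStream;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;

import org.apache.sshd.client.session.ClientSession;
import org.apache.sshd.client.subsystem.sftp.SftpClient;
import org.apache.sshd.client.subsystem.sftp.SftpClientFactory;

public class SftpTransferSketch {

    // Uploads a local file (for example a ZIP) to the remote host over SFTP,
    // with no shell commands involved.
    public void upload(ClientSession session, Path localFile, String remotePath) throws Exception {
        try (SftpClient sftp = SftpClientFactory.instance().createSftpClient(session);
             OutputStream out = sftp.write(remotePath)) {
            Files.copy(localFile, out);
        }
    }

    // Reads the contents of a remote file, replacing the "cat" workaround.
    public byte[] download(ClientSession session, String remotePath) throws Exception {
        try (SftpClient sftp = SftpClientFactory.instance().createSftpClient(session);
             InputStream in = sftp.read(remotePath)) {
            return in.readAllBytes();
        }
    }
}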

Related

MPI cannot execute with machine file or hosts from SSH using JSch exec channel [duplicate]

I have a piece of code which connects to a Unix server and executes commands.
I have been trying with simple commands and they work fine.
I am able to login and get the output of the commands.
I need to run an Ab Initio graph through Java.
I am using the air sandbox run graph command for this.
It runs fine when I log in using an SSH client and run the command; I am able to run the graph. However, when I try to run the command through Java, it gives me an "air not found" error.
Is there any kind of limit on what kind of Unix commands JSch supports?
Any idea why I'm not able to run the command through my Java code?
Here's the code:
public static void connect(){
    try{
        JSch jsch=new JSch();
        String host="*****";
        String user="*****";
        String config =
            "Host foo\n"+
            " User "+user+"\n"+
            " Hostname "+host+"\n";
        ConfigRepository configRepository =
            com.jcraft.jsch.OpenSSHConfig.parse(config);
        jsch.setConfigRepository(configRepository);
        Session session=jsch.getSession("foo");
        String passwd ="*****";
        session.setPassword(passwd);
        UserInfo ui = new MyUserInfo(){
            public boolean promptYesNo(String message){
                int foo = 0;
                return foo==0;
            }
        };
        session.setUserInfo(ui);
        session.connect();

        String command="air sandbox run <graph-path>";

        Channel channel=session.openChannel("exec");
        ((ChannelExec)channel).setCommand(command);
        channel.setInputStream(null);
        ((ChannelExec)channel).setErrStream(System.err);
        InputStream in=channel.getInputStream();
        channel.connect();

        byte[] tmp=new byte[1024];
        while(true){
            while(in.available()>0){
                int i=in.read(tmp, 0, 1024);
                if(i<0)break;
                page_message=new String(tmp, 0, i);
                System.out.print(page_message);
            }
            if(channel.isClosed()){
                if(in.available()>0) continue;
                System.out.println("exit-status: "+channel.getExitStatus());
                break;
            }
            try{Thread.sleep(1000);}catch(Exception ee){}
        }
        channel.disconnect();
        session.disconnect();
    }
    catch(Exception e){
        System.out.println(e);
    }
}

public static void main(String arg[]){
    connect();
}

public String return_message(){
    String ret_message=page_message;
    return ret_message;
}

public static abstract class MyUserInfo
        implements UserInfo, UIKeyboardInteractive{
    public String getPassword(){ return null; }
    public boolean promptYesNo(String str){ return false; }
    public String getPassphrase(){ return null; }
    public boolean promptPassphrase(String message){ return false; }
    public boolean promptPassword(String message){ return false; }
    public void showMessage(String message){ }
    public String[] promptKeyboardInteractive(String destination,
                                              String name,
                                              String instruction,
                                              String[] prompt,
                                              boolean[] echo){
        return null;
    }
}
The "exec" channel in JSch (rightfully) does not allocate a pseudo terminal (PTY) for the session. As a consequence, a different set of startup scripts is (or might be) sourced; in particular, for non-interactive sessions .bash_profile is not sourced. And/or different branches in the scripts are taken, based on the absence or presence of the TERM environment variable. So the environment might differ from the interactive session you use with your SSH client.
So, in your case, the PATH is probably set differently, and consequently the air executable cannot be found.
To verify that this is the root cause, disable pseudo terminal allocation in your SSH client. For example, in PuTTY it's Connection > SSH > TTY > Don't allocate a pseudo terminal. Then go to Connection > SSH > Remote command and enter your air ... command. Check Session > Close window on exit > Never and open the session. You should get the same "air not found" error.
Ways to fix this, in preference order:
Fix the command not to rely on a specific environment. Use a full path to air in the command. E.g.:
/bin/air sandbox run <graph-path>
If you do not know the full path, on common *nix systems you can use the which air command in your interactive SSH session.
Fix your startup scripts to set the PATH the same for both interactive and non-interactive sessions.
Try running the script explicitly via a login shell (use the --login switch with common *nix shells):
bash --login -c "air sandbox run <graph-path>"
If the command itself relies on a specific environment setup and you cannot fix the startup scripts, you can change the environment in the command itself. Syntax for that depends on the remote system and/or the shell. In common *nix systems, this works:
String command="PATH=\"$PATH:/path/to/air\" && air sandbox run <graph-path>";
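For instance, here is a minimal sketch of how fix 1 or fix 3 could be wired into the question's JSch code; the login-shell wrapper and the graph path are placeholders, and the exit-status handling is simplified:

import com.jcraft.jsch.ChannelExec;
import com.jcraft.jsch.JSchException;
import com.jcraft.jsch.Session;

public class AirCommandSketch {

    // Runs the graph through a login shell so that PATH matches an interactive session.
    // Alternatively, drop the wrapper and use an absolute path such as /usr/bin/air (fix 1).
    static int runGraph(Session session, String graphPath) throws JSchException, InterruptedException {
        String command = "bash --login -c 'air sandbox run " + graphPath + "'";
        ChannelExec channel = (ChannelExec) session.openChannel("exec");
        channel.setCommand(command);
        channel.setOutputStream(System.out);   // forward remote stdout
        channel.setErrStream(System.err);      // forward remote stderr
        channel.connect();
        while (!channel.isClosed()) {
            Thread.sleep(200);                 // wait for the remote command to finish
        }
        int status = channel.getExitStatus();  // 0 means air ran successfully
        channel.disconnect();
        return status;
    }
}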
Another (not recommended) approach is to force the pseudo terminal allocation for the "exec" channel using the .setPty method:
Channel channel = session.openChannel("exec");
((ChannelExec)channel).setPty(true);
Using a pseudo terminal to automate command execution can bring nasty side effects. See, for example, Is there a simple way to get rid of junk values that come when you SSH using Python's Paramiko library and fetch output from CLI of a remote machine?
For similar issues, see
Certain Unix commands fail with "... not found", when executed through Java using JSch even with setPty enabled
Commands executed using JSch behave differently than in an SSH terminal (bypasses confirm prompt message of "yes"/"no")
JSch: Is there a way to expose user environment variables to "exec" channel?
Command (.4gl) executed with SSH.NET SshClient.RunCommand fails with "No such file or directory"
You could try to find out where "air" resides with
whereis air
and then use the result, something like
/usr/bin/air sandbox run graph
You can use a ~/.ssh/environment file to set your AB_HOME and PATH variables.

S3A client and local S3 mock

To create end-to-end local tests of a data workflow, I use a "mock S3" container (e.g. adobe/S3Mock). It seems to work just fine. However, some parts of the system rely on the S3A client. As far as I can see, its format does not allow pointing to a particular nameserver or endpoint.
Is it possible to make S3A work in a local environment?
Are you talking about the ASF Hadoop S3A connector? Nobody has tested against an S3 mock AFAIK (never seen it before!), but it does work with non-AWS endpoints.
Set fs.s3a.endpoint to the URL of your S3 connection. There are some settings for switching from https to http (fs.s3a.connection.ssl.enabled = false) and moving from virtual hosts to directories (fs.s3a.path.style.access = true) which will also be needed.
further reading
Like I said: nobody has done this. We developers just go against the main AWS endpoints with all their problems (latency, inconsistency, error reporting, etc.), precisely because that's what you get in production. But for your local testing it will simplify your life (and you can run it under Jenkins without having to give it any secrets).
The answer by @stevel worked for me. Here is the code if someone wants to refer to it.
class S3WriterTest {
    private static S3Mock api;
    private static AmazonS3 mockS3client;

    @BeforeAll
    public static void setUp() {
        // start mock s3 service using findify
        api = new S3Mock.Builder().withPort(8001).withInMemoryBackend().build();
        api.start();

        /* AWS S3 client setup.
         * withPathStyleAccessEnabled(true) trick is required to overcome S3 default
         * DNS-based bucket access scheme
         * resulting in attempts to connect to addresses like "bucketname.localhost"
         * which requires specific DNS setup.
         */
        EndpointConfiguration endpoint = new EndpointConfiguration("http://localhost:8001", "us-west-2");
        mockS3client = AmazonS3ClientBuilder
            .standard()
            .withEndpointConfiguration(endpoint)
            .withPathStyleAccessEnabled(true)
            .withCredentials(new AWSStaticCredentialsProvider(new AnonymousAWSCredentials()))
            .build();
        mockS3client.createBucket("test-bucket");
    }

    @AfterAll
    public static void tearDown() {
        api.shutdown();
    }

    @Test
    void unitTestForHadoopCodeWritingUsingS3A() {
        Configuration hadoopConfig = getTestConfiguration();
        ........
    }

    private static Configuration getTestConfiguration() {
        Configuration config = new Configuration();
        config.set("fs.s3a.endpoint", "http://127.0.0.1:8001");
        config.set("fs.s3a.connection.ssl.enabled", "false");
        config.set("fs.s3a.path.style.access", "true");
        config.set("fs.s3a.aws.credentials.provider", "org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider");
        config.set("fs.s3a.access.key", "foo");
        config.set("fs.s3a.secret.key", "bar");
        return config;
    }
}
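For completeness, here is a minimal sketch (not the author's elided test body) of using that configuration with the S3A connector through the Hadoop FileSystem API. It assumes the hadoop-aws module is on the test classpath, and the object name is made up.

import java.net.URI;
import java.nio.charset.StandardCharsets;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class S3AWriteSketch {

    // Writes a small object to the mock bucket through the s3a:// scheme.
    public static void writeSample(Configuration hadoopConfig) throws Exception {
        try (FileSystem fs = FileSystem.get(URI.create("s3a://test-bucket/"), hadoopConfig)) {
            try (FSDataOutputStream out = fs.create(new Path("s3a://test-bucket/sample.txt"))) {
                out.write("hello from s3a".getBytes(StandardCharsets.UTF_8));
            }
        }
    }
}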

VSTO-specific web request not working (System.Net.WebException)

I wrote a 2013/2016 VSTO app for Microsoft Word using C#. My app creates a new toolbar with buttons. One such button runs my app, which launches a basic Windows Form.
Before the user can work with my app, they need to enter information like their license code and email address. My code in turn sends a basic request to my licensing server and awaits a response.
All my code had been running just fine, but now it no longer does. When I run the code, I receive the following two error messages:
System.Net.WebException: 'The underlying connection was closed: An unexpected error occurred on a send.' Inner Exception: IOException: Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host.
and
System.Net.WebException: 'The underlying connection was closed: An unexpected error occurred on a send.' Inner Exception: SocketException: An existing connection was forcibly closed by the remote host
I decided to run the code using a standard console app to see if I received the same error message, and sure enough, it worked great! Now I am wondering if Word or the Microsoft VSTO technology is blocking my app from accessing my server.
Here is the code in VSTO that does not work
Note 1: Created a basic 2013/2016 C# VSTO add-in, added a toolbar, and added a button.
Note 2: Added a reference to System.Web.
Note 3: Modified the website link and the query strings as I did not want to publish them on this public forum.
using System;
using Microsoft.Office.Tools.Ribbon;
using System.Web;
using System.Net;

namespace WordAddIn3
{
    public partial class Ribbon1
    {
        private void Ribbon1_Load(object sender, RibbonUIEventArgs e)
        {
        }

        private void button1_Click(object sender, RibbonControlEventArgs e)
        {
            // Attempt to activate the product using the licensing server on the website.
            Console.WriteLine("** ActivateLicense");

            //build the url to call the website's software licensing component.
            var builder = new UriBuilder("https://validwebsite.com");
            builder.Port = -1;

            //build the query string.
            var query = HttpUtility.ParseQueryString(builder.Query);
            query["license_key"] = "validactivationcdode";
            query["product_id"] = "validproductid";
            query["email"] = "validemailaddress";
            builder.Query = query.ToString();
            string url = builder.ToString();
            Console.WriteLine("activation request:");
            Console.WriteLine(url); //display the REST endpoint.

            //make the synchronous call to the web service.
            var syncClient = new WebClient();
            var responseStream = syncClient.DownloadString(url);
            Console.WriteLine("Response stream:");
            Console.WriteLine(responseStream); //display the server json response.
        }
    }
}
Here is pretty much the same exact code in a console app, and it does work
Note 1: Created a basic C# console app.
Note 2: Added a reference to System.Web.
Note 3: Modified the website link and the query strings as I did not want to publish them on this public forum. You will receive an error, but that is due to the sample website not having a licensing server.
using System;
using System.Net;
using System.Web;

namespace ConsoleApp1
{
    class Program
    {
        static void Main(string[] args)
        {
            // Attempt to activate the product using the licensing server on the website.
            Console.WriteLine("** ActivateLicense");

            //build the url to call the website's software licensing component.
            var builder = new UriBuilder("https://validwebsite.com");
            builder.Port = -1;

            //build the query string.
            var query = HttpUtility.ParseQueryString(builder.Query);
            query["license_key"] = "validactivationcdode";
            query["product_id"] = "validproductid";
            query["email"] = "validemailaddress";
            builder.Query = query.ToString();
            string url = builder.ToString();
            Console.WriteLine("activation request:");
            Console.WriteLine(url); //display the REST endpoint.

            //make the synchronous call to the web service.
            var syncClient = new WebClient();
            var responseStream = syncClient.DownloadString(url);
            Console.WriteLine("Response stream:");
            Console.WriteLine(responseStream); //display the server json response.

            Console.ReadKey();
        }
    }
}
Can you help me determine why the code is no longer working in the add-in, where it worked before (with no code changes)?
I read a lot online and there seem to be too many reasons why this might happen. As an FYI, the website with the licensing server is running. It is (and always has been) a little slow, but when running the code with VSTO, the response is immediate (suggesting no timeout). The console code runs and there is never a timeout; I always get a response from the licensing server.
On another thread for a similar problem, someone recommended running Wireshark. I am not really familiar with the product, but during my working console run, I received no error messages and instead got messages like these:
Standard query 0x626a AAAA mywebsite.com
and
Standard query response 0x626a AAAA mywebsite.com
However, if I run the same code in VSTO, I get additional messages that are errors (this one shows up twice):
TCP 60 443 → 50308 [RST, ACK] Seq=1 Ack=125 Win=32768 Len=0

Authorization with WebSphere MQ 6

I have the server side of IBM's WebSphere MQ version 6 on a virtual machine running Windows Server 2003, sitting on a Vista desktop. The desktop has the client installed.
I've got a little test program (from their code samples) that puts a message on a queue and takes it off again. This program worked when run on the server directly with the server binding. However, I can't get it to work from the client side with the client binding.
The error I get is CompCode 2, Reason 2035, which is an authorization failure.
I suspect this has to do with the fact that the program runs under my user by default, which is on a domain that the virtual machine doesn't know about (and can't access).
I have set up a local user on the vm that I'd like to connect as (user: websphere, password: websphere), but I'm not clear on how to get this all to work. I have the code that I'm using below, and I've tried various combinations of security exit settings on the channel and endpoints, but I can't get away from 2035.
Anyone have experience with this? Help would be much appreciated!
Code:
using System;
using System.Collections;
using IBM.WMQ;

class MQSample
{
    // The type of connection to use, this can be:-
    // MQC.TRANSPORT_MQSERIES_BINDINGS for a server connection.
    // MQC.TRANSPORT_MQSERIES_CLIENT for a non-XA client connection
    // MQC.TRANSPORT_MQSERIES_XACLIENT for an XA client connection
    // MQC.TRANSPORT_MQSERIES_MANAGED for a managed client connection
    const String connectionType = MQC.TRANSPORT_MQSERIES_CLIENT;

    // Define the name of the queue manager to use (applies to all connections)
    const String qManager = "QM_vm_win2003";

    // Define the name of your host connection (applies to client connections only)
    const String hostName = "vm-win2003";

    // Define the name of the channel to use (applies to client connections only)
    const String channel = "S_vm_win2003";

    /// <summary>
    /// Initialise the connection properties for the connection type requested
    /// </summary>
    /// <param name="connectionType">One of the MQC.TRANSPORT_MQSERIES_ values</param>
    static Hashtable init(String connectionType)
    {
        Hashtable connectionProperties = new Hashtable();

        // Add the connection type
        connectionProperties.Add(MQC.TRANSPORT_PROPERTY, connectionType);

        // Set up the rest of the connection properties, based on the
        // connection type requested
        switch (connectionType)
        {
            case MQC.TRANSPORT_MQSERIES_BINDINGS:
                break;
            case MQC.TRANSPORT_MQSERIES_CLIENT:
                connectionProperties.Add(MQC.HOST_NAME_PROPERTY, hostName);
                connectionProperties.Add(MQC.CHANNEL_PROPERTY, channel);
                connectionProperties.Add(MQC.USER_ID_PROPERTY, "websphere");
                connectionProperties.Add(MQC.PASSWORD_PROPERTY, "websphere");
                break;
        }

        return connectionProperties;
    }

    /// <summary>
    /// The main entry point for the application.
    /// </summary>
    [STAThread]
    static int Main(string[] args)
    {
        try
        {
            Hashtable connectionProperties = init(connectionType);

            // Create a connection to the queue manager using the connection
            // properties just defined
            MQQueueManager qMgr = new MQQueueManager(qManager, connectionProperties);

            // Set up the options on the queue we wish to open
            int openOptions = MQC.MQOO_INPUT_AS_Q_DEF | MQC.MQOO_OUTPUT;

            // Now specify the queue that we wish to open, and the open options
            MQQueue system_default_local_queue =
                qMgr.AccessQueue("clq_default_vm_sql2000", openOptions);

            // Define a WebSphere MQ message, writing some text in UTF format
            MQMessage hello_world = new MQMessage();
            hello_world.WriteUTF("Hello World!");

            // Specify the message options
            MQPutMessageOptions pmo = new MQPutMessageOptions();
            // accept the defaults,
            // same as MQPMO_DEFAULT

            // Put the message on the queue
            system_default_local_queue.Put(hello_world, pmo);

            // Get the message back again
            // First define a WebSphere MQ message buffer to receive the message
            MQMessage retrievedMessage = new MQMessage();
            retrievedMessage.MessageId = hello_world.MessageId;

            // Set the get message options
            MQGetMessageOptions gmo = new MQGetMessageOptions(); //accept the defaults
            //same as MQGMO_DEFAULT

            // Get the message off the queue
            system_default_local_queue.Get(retrievedMessage, gmo);

            // Prove we have the message by displaying the UTF message text
            String msgText = retrievedMessage.ReadUTF();
            Console.WriteLine("The message is: {0}", msgText);

            // Close the queue
            system_default_local_queue.Close();

            // Disconnect from the queue manager
            qMgr.Disconnect();
        }
        //If an error has occurred in the above, try to identify what went wrong.
        //Was it a WebSphere MQ error?
        catch (MQException ex)
        {
            Console.WriteLine("A WebSphere MQ error occurred: {0}", ex.ToString());
        }
        catch (System.Exception ex)
        {
            Console.WriteLine("A System error occurred: {0}", ex.ToString());
        }
        Console.ReadLine();
        return 0;
    } //end of Main
} //end of sample
With Windows-to-Windows connections, WMQ will pass the SID as well as the "short ID", which in this case would be "websphere". This is slightly better authorization than you get with non-Windows WMQ, which only uses the short ID. The problem is that someone on a non-Windows server can connect using the short ID "websphere", and since there is no SID, WMQ will accept the connection as though it were the Windows account.
There are two ways to address this. On the QMgr host you can run setmqaut commands to authorize the SID you are actually using to connect. The VM must be able to inquire on the domain where the Windows account lives, and the setmqaut command must use the -p user@domain syntax.
Alternatively, you can just use the locally defined ID in the MCAUSER of the channel like
ALTER CHANNEL(channel name) CHLTYPE(SVRCONN) MCAUSER('websphere@vm')
...where 'vm' is the name of the virtual machine and you've authorized the account with setmqaut commands or by putting it into the mqm or administrators group.
Keep in mind this is only for testing! Any channel with a blank or administrative MCAUSER can not only administer WMQ but also execute arbitrary commands on the underlying host server. In the real world you would create accounts with access to queues and the QMgr but not access to administer and you'd put those into all MCAUSER values, then set MCAUSER('nobody') for all the SYSTEM.DEF and SYSTEM.AUTO channels.
Lots more on this available on my web site t-rob.net in the MQ and Links pages. Also, check out:
Comment lines: T.Rob Wyatt: What you didn't know you didn’t know about WebSphere MQ security
Comment lines: T.Rob Wyatt: WebSphere MQ security heats up
I used to have the same problem. The solution: we need to assign the user's Windows account to the MQA group or the Administrators group. Then, add the user name of the Windows account as the MCA user on the channel.
Hope this helps

Self-updating .NET CF application

I need to make my CF app self-updating through a web service.
I found one article on MSDN from 2003 that explains it quite well. However, I would like to talk practice here. Has anyone really done it before, or does everyone rely on third-party solutions?
I have been specifically asked to do it this way, so if you know of any tips/caveats, any info is appreciated.
Thanks!
This is relatively easy to do. Basically, your application calls a web service to compare its version with the version available on the server. If the server version is newer, your application downloads the new EXE as a byte[] array.
Next, because you can't delete or overwrite a running EXE file, your application renames its original EXE file to something like "MyApplication.old" (the OS allows this, fortunately). Your app then saves the downloaded byte[] array in the same folder as the original EXE file, and with the same original name (e.g. "MyApplication.exe"). You then display a message to the user (e.g. "new version detected, please restart") and close.
When the user restarts the app, it will be the new version they're starting. The new version deletes the old file ("MyApplication.old") and the update is complete.
Having an application update itself without requiring the user to restart is a huge pain in the butt (you have to kick off a separate process to do the updating, which means a separate updater application that cannot itself be auto-updated) and I've never been able to make it work 100% reliably. I've never had a customer complain about the required restart.
I asked this same question a while back:
How to Auto-Update Windows Mobile application
Basically you need two applications.
App1: Launches the actual application, but also checks for a CAB file (installer). If the cab file is there, it executes the CAB file.
App2: The actual application. It will call a web service, passing a version number to the service, and retrieve a URL back if a new version exists. Once downloaded, you can optionally install the cab file and shut down.
One potential issue: if you have files that an install puts on the file system but can't overwrite (database file, log, etc.), you will need two separate installs.
To install a cab: look up wceload.exe http://msdn.microsoft.com/en-us/library/bb158700.aspx
private static bool LaunchInstaller(string cabFile)
{
    // Info on WceLoad.exe
    // http://msdn.microsoft.com/en-us/library/bb158700.aspx
    const string installerExe = "\\windows\\wceload.exe";
    const string processOptions = "";
    try
    {
        ProcessStartInfo processInfo = new ProcessStartInfo();
        processInfo.FileName = installerExe;
        processInfo.Arguments = processOptions + " \"" + cabFile + "\"";
        var process = Process.Start(processInfo);
        if (process != null)
        {
            process.WaitForExit();
        }
        return InstallationSuccessCheck(cabFile);
    }
    catch (Exception e)
    {
        MessageBox.Show("Sorry, for some reason this installation failed.\n" + e.Message);
        Console.WriteLine(e);
        throw;
    }
}

private static bool InstallationSuccessCheck(string cabFile)
{
    if (File.Exists(cabFile))
    {
        MessageBox.Show("Something in the install went wrong. Please contact support.");
        return false;
    }
    return true;
}
To get the version number: Assembly.GetExecutingAssembly().GetName().Version.ToString()
To download a cab:
public void DownloadUpdatedVersion(string updateUrl)
{
    var request = WebRequest.Create(updateUrl);
    request.Credentials = CredentialCache.DefaultCredentials;
    var response = request.GetResponse();
    try
    {
        var dataStream = response.GetResponseStream();
        string fileName = GetFileName();
        var fileStream = new FileStream(fileName, FileMode.CreateNew);
        ReadWriteStream(dataStream, fileStream);
    }
    finally
    {
        response.Close();
    }
}
What exactly do you mean by "self-updating"? If you're referring to configuration or data, then webservices should work great. If you're talking about automatically downloading and installing a new version of itself, that's a different story.
Found this downloadable sample from Microsoft; it looks like it should help.
If you want to use a third-party component, have a look at AppToDate developed by the guys at MoDaCo.