I am using Apache VFS to upload a file to an SFTP server, if the file is newer than the file on the server or doesn't exist there yet. The server connection uses SSH keys for authentication.
I am using the following Java code (plus error handling etc.) to connect to the server and check the file modification date-time:
DefaultFileSystemManager manager = new DefaultFileSystemManager();
manager.addProvider("sftp", new SftpFileProvider());
manager.init();
FileSystemOptions opts = createDefaultOptions();
BytesIdentityInfo identityInfo = new BytesIdentityInfo(server.sshKey.getBytes(), null);
SftpFileSystemConfigBuilder.getInstance().setIdentityProvider(opts, identityInfo);
remoteFileObject = manager.resolveFile(
    new URI("sftp", server.UserName, server.HostName, server.Port, remoteFilePath, null, null).toString(),
    createDefaultOptions(server.Key));
FileContent content = remoteFileObject.getContent();
return content.getLastModifiedTime();
The SSH key is in the format -----BEGIN RSA PRIVATE KEY----- etc., as exported by PuTTYgen under Conversions -> Export OpenSSH key (i.e. the old OpenSSH key format, not the new one).
I have tested this code on Windows, with a locally hosted SFTP server (i.e. also on the same Windows machine), and it works successfully.
I now want to use this in a Linux environment (RHEL), connecting to an AWS Transfer SFTP server secured with SSH keys as described.
I can connect successfully using the SFTP command from the Linux OS shell:
sftp -oIdentityFile=/path/to/test.ppk USER@xxx.xxx.xxx.xxx
But when I run the Java code, it hangs on the call to manager.resolveFile.
After half an hour (I think - this might not be related), I get the following in /var/log/messages:
systemd-logind[1297]: Session 115360 logged out. Waiting for processes to exit.
systemd[1]: session-115360.scope: Succeeded.
systemd-logind[1297]: Removed session 115360.
I don't have SELinux enabled, so I don't think that's interfering in any way.
Can anyone help suggest what might be causing this?
There were a couple of things, as it turns out:
Timeout
The timeout can be set when you configure the SftpFileSystemConfigBuilder, using the setSessionTimeout(FileSystemOptions, Duration) method. Reducing the timeout doesn't solve the problem by itself, but it makes the issue much easier to debug.
Exec channel
The session entries in the messages log were not related to the issue. Instead, the problem was that the exec channel is disabled on the SFTP server, while VFS tries to use it. At a simple level, this detection can be switched off with setDisableDetectExecChannel on the SftpFileSystemConfigBuilder object (see the sketch below) - but you should understand the implications of this before doing so.
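A minimal sketch of both settings, assuming a recent Commons VFS release that provides setSessionTimeout and setDisableDetectExecChannel (the timeout value is arbitrary):

import java.time.Duration;
import org.apache.commons.vfs2.FileSystemOptions;
import org.apache.commons.vfs2.provider.sftp.SftpFileSystemConfigBuilder;

// Apply these options to the FileSystemOptions passed to manager.resolveFile(...)
FileSystemOptions opts = new FileSystemOptions();
SftpFileSystemConfigBuilder builder = SftpFileSystemConfigBuilder.getInstance();

// Fail fast instead of hanging for the default session timeout
builder.setSessionTimeout(opts, Duration.ofSeconds(30));

// Skip the exec-channel detection that the SFTP server does not support
builder.setDisableDetectExecChannel(opts, true);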
Related
I am trying to run a series of commands to configure a VLAN on a Dell EMC OS10 server using Paramiko. However, I am running into a rather frustrating problem.
I want to run the following:
# configure terminal
(config)# interface vlan 3
(conf-if-vl-3)# description VLAN-TEST
(conf-if-vl-3)# end
However, I can't seem to figure out how to achieve this with paramiko.SSHClient().
When I try to use sshclient.exec_command("show vlan"), it works great: it runs the command and exits. However, I don't know how to run more than one command with a single exec_command.
If I run sshclient.exec_command("configure") to access the configuration shell, the command completes and, I believe, the channel is closed, because my next command, sshclient.exec_command("interface vlan ..."), fails: the switch is no longer in configure mode.
If there is a way to establish a persistent channel with exec_command that would be ideal.
Instead, I have resorted to a function like the following:
chan = sshClient.invoke_shell()
chan.send("configure\n")
chan.send("interface vlan 3\n")
chan.send("description VLAN_TEST\n")
chan.send("end\n")
Oddly, this works when I run it from a Python terminal one command at a time.
However, when I call this function from my Python main, it fails. Perhaps the channel is closed too soon when it goes out of scope from the function call?
Please advise if there is a more reasonable way to do this
Regarding sending commands to the configure mode started with SSHClient.exec_command, see:
Execute (sub)commands in secondary shell/command on SSH server in Python Paramiko
Though it's quite common that "devices" do not support the "exec" channel at all:
Executing command using Paramiko exec_command on device is not working
Regarding your problem with invoke_shell, it's quite possible that the server needs some time to get ready for the next command.
A quick-and-dirty solution is to sleep briefly between the individual send calls.
A better solution is to wait for the command prompt before sending the next command, as in the sketch below.
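A minimal sketch of the wait-for-prompt approach with Paramiko (host, credentials, and prompt strings are placeholders; not tested against OS10):

import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("switch.example.com", username="admin", password="secret")

chan = client.invoke_shell()

def send_and_wait(command, prompt):
    # Send the command and keep reading output until the expected prompt appears
    chan.send(command + "\n")
    output = ""
    while not output.rstrip().endswith(prompt):
        output += chan.recv(1024).decode("utf-8")
    return output

send_and_wait("configure terminal", "(config)#")
send_and_wait("interface vlan 3", "(conf-if-vl-3)#")
send_and_wait("description VLAN_TEST", "(conf-if-vl-3)#")
send_and_wait("end", "#")

client.close()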
I have Data Virt running via the standalone.sh script, and can log in with my username and password.
My next task is configuring it so that it automatically runs whenever the instance is up and running (without having to execute standalone.sh), and uses SSL (port 443) rather than my username and password to log me in.
I added the vault.keystore, dv_keystore.jks, and dv_truststore.jks files, and modified both standalone.sh and standalone.xml, according to the JBoss and other online documentation, to account for using these files.
I start the standalone.sh script, which runs without any errors. When I browse to:
http://<IP>:8443/dashboard
after starting standalone.sh, I get the following error:
This page can't be displayed
Turn on TLS 1.0, TLS 1.1, and TLS 1.2 in Advanced settings and try connecting to https://:8443 again. If this error persists, it is possible that this site uses an unsupported protocol or cipher suite such as RC4, which is not considered secure. Please contact your site administrator.
The settings Use TLS 1.0, Use TLS 1.1, and Use TLS 1.2 are all checked in the browser properties.
By contrast, when I browse to
http://<IP>:8443/dashboard
when standalone.sh is not running, I get the following:
This page can't be displayed
- Make sure the web address https://:8443 is correct.
- Look for the page with your search engine.
- Refresh the page in a few minutes.
It appears the browser senses something is listening when standalone.sh is running, but something is preventing it from accessing the dashboard.
What am I missing here?
Have you validated any other SSL access? Is it just an issue with the dashboard application?
I'm searching for a way to get my files synchronized (as a task) from a web server (Ubuntu 14) to a local server (Windows Server). The web server creates small files, which the local server needs. The web server is in a DMZ, accessible through SSH. Only the local server is able to access folders on the web server. I tried using programs like WinSCP, but I'm not able to set up a "get" job.
Is there a way to do this with SSH on the Windows server without logging in every few seconds? Or is there a better solution? In the future, web services are possible, but at the moment I need a quick solution.
Either you need to schedule a regular, frequent job that connects and downloads changes.
Or you need a continuously running process that keeps the connection open and regularly watches for changes.
There's hardly a better solution (that's still quick and easy to implement).
Example of a continuous process implemented using the WinSCP .NET assembly (paths and credentials below are placeholders):
using System.Threading;
using WinSCP;

// Set up session options (host, credentials, and host key fingerprint are placeholders)
SessionOptions sessionOptions = new SessionOptions
{
    Protocol = Protocol.Sftp,
    HostName = "example.com",
    UserName = "user",
    Password = "mypassword",
    SshHostKeyFingerprint = "ssh-rsa 2048 xxxxxxxxxxx...="
};

// Local and remote directories to keep in sync (placeholders)
const string localPath = @"C:\local\path";
const string remotePath = "/remote/path";

using (Session session = new Session())
{
    // Connect
    session.Open(sessionOptions);

    while (true)
    {
        // Download changes
        session.SynchronizeDirectories(
            SynchronizationMode.Local, localPath, remotePath, false).Check();

        // Wait 10 seconds
        Thread.Sleep(10000);
    }
}
You will need to add better error handling and reconnect if the connection breaks, as sketched below.
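A sketch of that reconnect handling, reusing sessionOptions, localPath, and remotePath from above (the retry interval is arbitrary):

while (true)
{
    try
    {
        using (Session session = new Session())
        {
            session.Open(sessionOptions);

            while (true)
            {
                // Download changes
                session.SynchronizeDirectories(
                    SynchronizationMode.Local, localPath, remotePath, false).Check();

                // Wait 10 seconds
                Thread.Sleep(10000);
            }
        }
    }
    catch (Exception)
    {
        // Connection or transfer failed; wait before reconnecting
        Thread.Sleep(30000);
    }
}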
If you do not want to implement this as a (C#) application, you can use a PowerShell script. For a complete solution, see:
Keep local directory up to date (download changed files from remote SFTP/FTP server).
I am using Windows Explorer to test the WebDAV implementation I am adapting to our system. The implementation is using IIS Express and is launched by Visual Studio 2013. I turned off Windows Explorer's requirement for SSL with WebDAV so I can test basic authentication (which works).
The problem I am having is with the Write method of the DavFile implementation. I connect to the web folder, navigate to a sub folder, then attempt to copy a JPG file from a folder on my computer's hard drive, into the WebDAV sub folder (using Windows Explorer).
The attempt to copy up a file (854 KB) fails. When I set a breakpoint, I notice that the "segment" stream (one of the input parameters of the "write" method) has a length of 0 (zero) bytes.
Any tips on how to debug this problem? What is the most likely cause of 0 byte in the stream?
Here are some ideas about how to understand what is going wrong:
Examine the server log for exceptions. By default it is called WebDAVLog.txt and located in the \App_Data\WebDAV\Logs\ folder. Check whether there are any exceptions in it and make sure all requests were successful.
Examine the WebDAV requests with Fiddler or any other debugging proxy. While all requests that reach the WebDAV server engine are logged, a request that fails before hitting the engine will not appear in the log. Usually this happens when the request fails during the authentication stage.
Note that to capture requests using Fiddler on 'localhost' you must use 'localhost.fiddler' instead of 'localhost' when connecting to server, for example: http://localhost.fiddler:1234.
Exclude any client-side issues. There could also be issues with the client software you are using, including the Microsoft mini-redirector. Try to access the server from another machine. To see whether the problem is on the client or the server side, also try to reproduce the issue on ajaxbrowser.com.
You can post part of the WebDAVLog.txt or Fiddler log here or send it to IT Hit; it may give an idea of what is wrong.
I am using the apache.net.ftp API to download files from and upload files to an FTP server. It's working fine in normal scenarios.
But the issue starts when there is some latency or the connection is closed by the server for some reason.
This is where the timeout comes in. I found a parameter, SO_TIMEOUT, which is used when reading from the socket. So I used the ftpClient.setSoTimeout(timeInMillis) method to set it, and it is applied while downloading a file. It worked fine.
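Roughly, the download side looks like this (host, credentials, and file names are placeholders):

import java.io.FileOutputStream;
import java.io.OutputStream;
import org.apache.commons.net.ftp.FTPClient;

FTPClient ftpClient = new FTPClient();
ftpClient.connect("ftp.example.com");
ftpClient.login("user", "password");
ftpClient.enterLocalPassiveMode();

// Socket read timeout, applied while downloading
ftpClient.setSoTimeout(30000);

try (OutputStream out = new FileOutputStream("local-copy.dat")) {
    ftpClient.retrieveFile("remote-file.dat", out);
}

ftpClient.logout();
ftpClient.disconnect();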
What I can't figure out is how to set the timeout while uploading a file to the FTP server.
Thanks in advance.
Check the following things to make sure everything is running fine, and then try again:
Check the firewall settings, if any, which might be blocking incoming connections and causing the connection timeout.