I am using JSch for SFTP file transfer. When I send a file with the command-line sftp client and set the buffer size to 512 with the -B option (sftp -B 512 [sftp server name]) before invoking the put command, I can transfer files at 8.0 MBps (the regular speed is 3.0 MBps).
When I do the same file transfer using the JSch API in Java, I get only 2.6 MBps. Is there any option to increase the buffer size in JSch, or otherwise improve its speed?
Here is my code...
Channel channel = null;
ChannelSftp channelSftp = null;
log("preparing the host information for sftp.");
try {
    JSch jsch = new JSch();
    session = jsch.getSession(username, hostname, port);
    session.setPassword(password);
    java.util.Properties config = new java.util.Properties();
    config.put("StrictHostKeyChecking", "no");
    session.setConfig(config);
    session.connect();
    System.out.println("Host connected.");
    channel = session.openChannel("sftp");
    channel.connect();
    log("sftp channel opened and connected.");
    channelSftp = (ChannelSftp) channel;
    channelSftp.cd(SFTPWORKINGDIR);
    File f = new File(fileName);
    channelSftp.put(new FileInputStream(f), f.getName());
    log("File transferred successfully to host.");
} catch (Exception ex) {
    System.out.println("Exception found while transferring the file.");
    ex.printStackTrace();
} finally {
    if (channelSftp != null) {
        channelSftp.exit();
        log("sftp Channel exited.");
    }
    if (channel != null) {
        channel.disconnect();
        log("Channel disconnected.");
    }
    if (session != null) {
        session.disconnect();
        log("Host Session disconnected.");
    }
}
Check out the newer version of JSch (0.1.50); it became faster at downloading.
The following may also work, but I am not sure; I saw it somewhere in the JSch code base. You can try it out:
getSession().setConfig("max_input_buffer_size", "increased_size");
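For example, it could be applied to the session in the code above before connecting. Below is a minimal sketch under that assumption: the config key is the one named in this answer, the 1 MB value is purely illustrative, and buffering the local FileInputStream is a separate, general tweak that often helps on its own.

JSch jsch = new JSch();
Session session = jsch.getSession(username, hostname, port);
session.setPassword(password);
// Assumed config key from the answer above; the 1 MB value is only an example.
session.setConfig("max_input_buffer_size", String.valueOf(1024 * 1024));
session.connect();
ChannelSftp sftp = (ChannelSftp) session.openChannel("sftp");
sftp.connect();
// Buffering the local read (512 KB here) can also help, independent of the SFTP buffer setting.
sftp.put(new BufferedInputStream(new FileInputStream(fileName), 512 * 1024), new File(fileName).getName());
sftp.exit();
session.disconnect();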
When JSch is used for an SSH connection, a message is displayed indicating that waiting for the key to be updated (rekeying) has timed out.
Here is my configuration:
Properties config = new Properties();
config.put("StrictHostKeyChecking", "no");
// Get the JSch session
Session session = connectInfo.getJSch().getSession(serverConfig.getUsername(), serverConfig.getHost(), serverConfig.getPort());
session.setConfig(config);
// Set the password
session.setPassword(serverConfig.getPassword());
// Send a keep-alive (null) packet every 100 ms (setServerAliveInterval takes milliseconds)
session.setServerAliveInterval(100);
// Allow at most 9999 unanswered keep-alive packets
session.setServerAliveCountMax(9999);
// No connection timeout
session.connect(0);
ChannelExec channel = (ChannelExec) session.openChannel("exec");
channel.setCommand(command);
channel.setInputStream(null);
channel.setErrStream(System.err);
channel.connect();
InputStream inputStream = channel.getInputStream();
StringBuilder resultLines = new StringBuilder();
@Martin Prikryl Here's the actual log.
I found the right answer! The error was caused by my Linux server's time being inaccurate; synchronizing the time correctly solves this problem.
Getting an authentication failure while trying to connect to the AWS S3 SMB path using SMB_3_1_1 with the com.hierynomus.smbj client 0.11.5.
I am getting the error below:
STATUS_ACCESS_DENIED (0xc0000022): Authentication failed for '[username]' using com.hierynomus.smbj.auth.NtlmAuthenticator#4b5189ac
SmbConfig cfg = SmbConfig.builder()
        .withTimeout(120000, TimeUnit.SECONDS)
        .withSoTimeout(180000, TimeUnit.SECONDS)
        .withMultiProtocolNegotiate(true)
        .withSecurityProvider(new JceSecurityProvider(new BouncyCastleProvider()))
        .build();
try (SMBClient client = new SMBClient(cfg)) {
    connection = client.connect(smbIP);
    // The line below is where the exception above is thrown.
    Session session = connection.authenticate(new AuthenticationContext(smbUserName, smbPassword.toCharArray(), smbDomain));
    DiskShare share = (DiskShare) session.connectShare(smbSharePath);
    if (!share.folderExists(destinationFolderName)) {
        share.mkdir(destinationFolderName);
    }
    fileName = destinationFolderName + "\\" + fileName;
    f = share.openFile(fileName,
            new HashSet<>(Arrays.asList(AccessMask.GENERIC_ALL)),
            new HashSet<>(Arrays.asList(FileAttributes.FILE_ATTRIBUTE_NORMAL)),
            SMB2ShareAccess.ALL,
            SMB2CreateDisposition.FILE_CREATE,
            new HashSet<>(Arrays.asList(SMB2CreateOptions.FILE_DIRECTORY_FILE))
    );
    try (OutputStream os = f.getOutputStream()) {
        os.write(fileContent);
    } catch (Exception e) {
        e.printStackTrace();
    }
}
I have read some answers which do not fully resolve my question, e.g. "Placing timeout for SSLSocket handshake". That answer requires me to layer a plaintext socket under an SSLSocket, which I would rather not do if there is an alternative. The relevant part of my code is as follows (FYI, I'm not hardcoding passwords; it's just for testing):
private static SSLSocket establishConnection(InetAddress ipv4, int port) {
    try {
        int ksn = Stub.getKeystoreNum();
        SecurityUtilities su = new SecurityUtilities("truststore" + ksn + ".jks", "keystore" + ksn + ".jks", "trustcert", "mykey");
        KeyStore keystore = su.loadKeyStore("password".toCharArray());
        KeyStore truststore = su.loadTrustStore("password".toCharArray());
        KeyManagerFactory keyManagerFactory = KeyManagerFactory.getInstance(KEY_MANAGER);
        keyManagerFactory.init(keystore, "password".toCharArray());
        TrustManagerFactory trustManagerFactory = TrustManagerFactory.getInstance(KEY_MANAGER);
        trustManagerFactory.init(truststore);
        // specify TLS version e.g. TLSv1.3
        SSLContext serverContext = SSLContext.getInstance(TLS_VERSION);
        serverContext.init(keyManagerFactory.getKeyManagers(), trustManagerFactory.getTrustManagers(), SecureRandom.getInstance(RNG_ALGORITHM, RNG_PROVIDER));
        // THIS CODE IS MY ATTEMPT AT ESTABLISHING AN SSLSOCKET WITH A TIMEOUT
        SSLSocketFactory fact = serverContext.getSocketFactory();
        SSLSocket socket = (SSLSocket) fact.createSocket();
        socket.connect(new InetSocketAddress(ipv4.getHostAddress(), port), CON_TIMEOUT);
        return socket;
    } catch (IOException | GeneralSecurityException e) {
        System.out.println("tls node connection failed");
    }
    return null;
}
My code successfully establishes a connection, and having tested it with tcpdump I found that it does indeed seem to encrypt the data it transmits. However, because I have read that it's not possible to create an SSLSocket without having it connect immediately, e.g.
return (SSLSocket) fact.createSocket(ipv4.getHostAddress(), port);
and because the connect method is defined in Socket and not SSLSocket, I feel I am making some kind of mistake. Additionally, I have seen multiple examples that use the SSLSocket.startHandshake() method. Is this necessary, given that I have successfully established connections with the previous line of code alone?
Thanks for any help.
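For reference, here is a minimal sketch of the pattern described in the question, using only standard JSSE calls: create the SSLSocket unconnected, connect with a timeout, set a read timeout (which also bounds the handshake), and trigger the handshake explicitly. READ_TIMEOUT is a placeholder constant that is not in the original code.

SSLSocketFactory fact = serverContext.getSocketFactory();
SSLSocket socket = (SSLSocket) fact.createSocket();              // created unconnected
socket.connect(new InetSocketAddress(ipv4, port), CON_TIMEOUT);  // connect(...) is inherited from Socket; applies a TCP connect timeout
socket.setSoTimeout(READ_TIMEOUT);                               // read timeout; handshake reads are bounded by it too
socket.startHandshake();                                         // forces the TLS handshake now rather than on the first read/write
return socket;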
I have a RabbitMQ instance deployed on a Google Compute Engine instance. I also have a Hadoop instance deployed on a different Google Compute Engine instance, but still in the same application. I am trying to connect to the RabbitMQ queue from the Hadoop cluster, but with no success.
I have a Java application that should push items onto the RabbitMQ queue and then receive them in the same application. The following is the connection Java code:
ConnectionFactory factory = new ConnectionFactory();
factory.setHost("130.211.112.37:5672");
try {
    connection = factory.newConnection();
    channel = connection.createChannel();
} catch (IOException e) {
    // TODO Auto-generated catch block
    e.printStackTrace();
}
but I get the following result:
java.net.UnknownHostException: 130.211.112.37:5672
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:178)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:579)
at com.rabbitmq.client.impl.FrameHandlerFactory.create(FrameHandlerFactory.java:32)
at com.rabbitmq.client.ConnectionFactory.newConnection(ConnectionFactory.java:615)
at com.rabbitmq.client.ConnectionFactory.newConnection(ConnectionFactory.java:639)
at de.unibonn.iai.eis.luzzu.io.impl.SparkStreamProcessorObserver.<clinit>(SparkStreamProcessorObserver.java:157)
at de.unibonn.iai.eis.luzzu.evaluation.Main.main(Main.java:87)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.deploy.SparkSubmit$.launch(SparkSubmit.scala:328)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:75)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
I tried opening port 5672 in the Google Cloud firewall. Does anyone have some pointers to the solution, please?
Best
Jeremy
As written in the comment:
ConnectionFactory factory = new ConnectionFactory();
// factory.setHost("130.211.112.37:5672"); <----- setHost accepts only the host!
factory.setHost("130.211.112.37");
factory.setPort(5672);
try {
    connection = factory.newConnection();
    channel = connection.createChannel();
} catch (IOException e) {
    // TODO Auto-generated catch block
    e.printStackTrace();
}
By default the port is 5672, so calling setPort is not necessary; you only have to use setPort if you change the default port.
As explained here: https://www.rabbitmq.com/api-guide.html, you need to call setHost (and, if needed, setPort) to create a connection. In your app you are passing the host and port together in the same string.
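If I remember correctly, the RabbitMQ Java client also accepts the whole address as a single AMQP URI, which is closer to what the question attempted; a sketch (the URI simply reuses the address from the question):

ConnectionFactory factory = new ConnectionFactory();
try {
    // Alternative to setHost/setPort: pass host and port together as an AMQP URI.
    factory.setUri("amqp://130.211.112.37:5672");
    Connection connection = factory.newConnection();
    Channel channel = connection.createChannel();
} catch (Exception e) {
    e.printStackTrace();
}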
JSch login based on a private .ppk key.
Currently I have the following code for SSH login, but I am getting an exception because it does not provide a key.
The following is the error I am getting:
com.jcraft.jsch.JSchException: Auth cancel
JSch jsch = new JSch();
Session session = jsch.getSession(user_name, host, 22);
UserInfo ui = new SSHUserInfo(password, true);
session.setUserInfo(ui);
// connect to the remote server
session.connect();
// sudo login bamboo
if (null != session && session.isConnected()) {
    session.disconnect();
}
JSch jsch = new JSch();
// Here privateKey is a file path like "/home/me/.ssh/secret_rsa"
// and passphrase is passed as a string like "mysecr"
jsch.addIdentity(privateKey, passphrase);
session = jsch.getSession(user, host, port);
session.setConfig("StrictHostKeyChecking", "no");
// Or "yes", up to you. With "yes", JSch locks to the server identity, so it cannot
// be swapped for another server with the same IP.
session.connect();
channel = session.openChannel("shell");
out = channel.getOutputStream();
channel.connect();
The file suffix ".ppk" means that you are trying to use a PuTTY private key, I guess.
JSch has supported PuTTY's private keys since 0.1.49, and if your key is encrypted, you must install the "Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy Files" [1] in your environment.
Then, if you usually use Pageant, you may be interested in trying jsch-agent-proxy [2].
[1] http://www.oracle.com/technetwork/java/javase/downloads/jce-6-download-429243.html
[2] https://github.com/ymnk/jsch-agent-proxy
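For the Pageant route, here is a rough sketch following the example in the jsch-agent-proxy project; the agentproxy class names (ConnectorFactory, Connector, RemoteIdentityRepository) are quoted from memory and may differ slightly between versions, so treat this as an outline rather than a drop-in snippet.

try {
    // Classes from com.jcraft.jsch.agentproxy (jsch-agent-proxy).
    Connector con = ConnectorFactory.getDefault().createConnector(); // picks Pageant on Windows, ssh-agent elsewhere
    JSch jsch = new JSch();
    jsch.setIdentityRepository(new RemoteIdentityRepository(con));   // identities now come from the running agent
    Session session = jsch.getSession(user, host, 22);
    session.setConfig("StrictHostKeyChecking", "no");
    session.connect();
} catch (Exception e) {
    e.printStackTrace();
}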