How do I send a JPEG file over an SSH channel?

I have managed to read a text file over an SSH channel, using an Ubuntu Linux machine as the SSH server. My question is: how do I send an image file over and display it in something like a JPanel? I seem to have problems doing that.
Below is the code I have used, which is from this forum. Credits to user World.
public static void main(String[] args) throws Exception {
    String user = "larry";
    String password = "123";
    String host = "192.168.174.131";
    int port = 22;
    String remoteFile = "/home/larry/seohyun.jpg";
    try {
        JSch jsch = new JSch();
        Session session = jsch.getSession(user, host, port);
        session.setPassword(password);
        session.setConfig("StrictHostKeyChecking", "no");
        System.out.println("Establishing connection");
        session.connect();
        System.out.println("Connection Established");
        System.out.println("Creating SFTP Channel.");
        ChannelSftp sftpChannel = (ChannelSftp) session.openChannel("sftp");
        sftpChannel.connect();
        System.out.println("SFTP Channel Established");
        InputStream out = null;
        out = sftpChannel.get(remoteFile);
        BufferedReader br = new BufferedReader(new InputStreamReader(out));
        String imageName = br.readLine();
        File input = new File(imageName);
        image = ImageIO.read(input);
        JFrame frame = new JFrame("Display Image");
        Panel panel = new TestSSH();
        frame.getContentPane().add(panel);
        frame.setSize(500, 500);
        frame.setVisible(true);
    } catch (Exception e) {
        System.err.print(e);
    }
}
However, I can't seem to display the image on the JPanel.
It gives me the following exception:
Establishing connection
Connection Established
Creating SFTP Channel.
SFTP Channel Established
javax.imageio.IIOException: Can't read input file!
However, I have checked the file path countless times; it is correct.
May I know what's wrong with my code?
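For what it's worth, here is a minimal sketch of reading the image straight from the SFTP stream, instead of treating the first line of the stream as a local file name. It assumes the same session and sftpChannel setup as above and is only a sketch, not a verified fix:
// Sketch: decode the JPEG directly from the remote stream.
InputStream in = sftpChannel.get(remoteFile);
BufferedImage image = ImageIO.read(in); // reads the image bytes, not a file path
in.close();

JFrame frame = new JFrame("Display Image");
// A JLabel with an ImageIcon is a simple way to show the BufferedImage.
frame.getContentPane().add(new JLabel(new ImageIcon(image)));
frame.pack();
frame.setVisible(true);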

Related

Too many files open when using generic packager with external packager.xml file

I am using jPOS 2.1.0 with an external packager XML file for an ISO 8583 client. Due to the large number of requests over two or three days, I encountered "Too Many Files Open" even though I have set ulimit -n to 50000. I suspect that the packager files are not being closed properly, which is why this limit has been exceeded. Please help me close the open files properly.
JposLogger logger = new JposLogger(isoLogLocation);
org.jpos.iso.ISOPackager customPackager = new GenericPackager(isoPackagerLocation + iso8583Properties.getPackager());
BaseChannel channel = new ASCIIChannel(iso8583Properties.getServerIp(),
        Integer.parseInt(iso8583Properties.getServerPort()), customPackager);
logger.jposlogconfig(channel);
try {
    channel.setTimeout(45000);
    channel.connect();
} catch (Exception ex) {
    log4j.error(ex.getMessage());
    throw new ConnectIpsException("Unable to establish connection with bank.");
}
log4j.info("Connection established using ASCIIChannel");
ISOMsg m = new ISOMsg();
m.set(0, "1200");
........
m.set(126, "connectIPS");
m.setPackager(customPackager);
log4j.info(ISOUtil.hexdump(m.pack()));
channel.send(m);
log4j.info("Message has been send");
ISOMsg r = channel.receive();
r.setPackager(customPackager);
log4j.info(ISOUtil.hexdump(r.pack()));
String actionCode = (String) r.getValue("39");
channel.disconnect();
return bancsxfr;
}
You know when you open a file, a socket, or a channel, you need to close it, right?
I don't see a finally in your try that would close the channel.
You have a huge leak there.
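A minimal sketch of that fix, reusing the question's variables (the exact exception handling here is an assumption, not part of the original answer):
try {
    channel.setTimeout(45000);
    channel.connect();
    // ... pack, send and receive as in the question ...
} finally {
    // Always release the socket, even if send/receive throws.
    try {
        if (channel.isConnected()) {
            channel.disconnect();
        }
    } catch (IOException e) {
        log4j.error(e.getMessage());
    }
}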

AmazonS3: Getting warning: S3AbortableInputStream:Not all bytes were read from the S3ObjectInputStream, aborting HTTP connection

Here's the warning that I am getting:
S3AbortableInputStream:Not all bytes were read from the S3ObjectInputStream, aborting HTTP connection. This is likely an error and may result in sub-optimal behavior. Request only the bytes you need via a ranged GET or drain the input stream after use.
I tried using try-with-resources, but S3ObjectInputStream doesn't seem to close via this method.
try (S3Object s3object = s3Client.getObject(new GetObjectRequest(bucket, key));
S3ObjectInputStream s3ObjectInputStream = s3object.getObjectContent();
BufferedReader reader = new BufferedReader(new InputStreamReader(s3ObjectInputStream, StandardCharsets.UTF_8));
){
//some code here blah blah blah
}
I also tried below code and explicitly closing but that doesn't work either:
S3Object s3object = s3Client.getObject(new GetObjectRequest(bucket, key));
S3ObjectInputStream s3ObjectInputStream = s3object.getObjectContent();
try (BufferedReader reader = new BufferedReader(new InputStreamReader(s3ObjectInputStream, StandardCharsets.UTF_8));
){
//some code here blah blah
s3ObjectInputStream.close();
s3object.close();
}
Any help would be appreciated.
PS: I am only reading two lines of the file from S3 and the file has more data.
Got the answer via another medium; sharing it here:
The warning indicates that you called close() without reading the whole file. This is problematic because S3 is still trying to send the data and you're leaving the connection in a sad state.
There are two options here:
1) Read the rest of the data from the input stream so the connection can be reused.
2) Call s3ObjectInputStream.abort() to close the connection without reading the data. The connection won't be reused, so you take a performance hit with the next request to re-create the connection. This may be worth it if it's going to take a long time to read the rest of the file. (A sketch of this option follows below.)
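A minimal sketch of option #2, using the same variable names as the question's code (the structure here is an assumption, not part of the original answer):
S3Object s3object = s3Client.getObject(new GetObjectRequest(bucket, key));
S3ObjectInputStream s3ObjectInputStream = s3object.getObjectContent();
try {
    // read only the first couple of lines here ...
} finally {
    // Give up on the remaining bytes; this connection will not be reused.
    s3ObjectInputStream.abort();
    s3object.close();
}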
Following option #1 of Chirag Sejpal's answer I used the below statement to drain the S3AbortableInputStream to ensure the connection can be reused:
com.amazonaws.util.IOUtils.drainInputStream(s3ObjectInputStream);
I ran into the same problem, and the following class helped me:
@Data
@AllArgsConstructor
public class S3ObjectClosable implements Closeable {
    private final S3Object s3Object;

    @Override
    public void close() throws IOException {
        s3Object.getObjectContent().abort();
        s3Object.close();
    }
}
and now you can use it without the warning:
try (final var s3ObjectClosable = new S3ObjectClosable(s3Client.getObject(bucket, key))) {
//same code
}
To add an example to Chirag Sejpal's answer (elaborating on option #1), the following can be used to read the rest of the data from the input stream before closing it:
S3Object s3object = s3Client.getObject(new GetObjectRequest(bucket, key));
try (S3ObjectInputStream s3ObjectInputStream = s3object.getObjectContent()) {
    try {
        // Read from stream as necessary
    } catch (Exception e) {
        // Handle exceptions as necessary
    } finally {
        while (s3ObjectInputStream != null && s3ObjectInputStream.read() != -1) {
            // Read the rest of the stream
        }
    }
    // The stream will be closed automatically by the try-with-resources statement
}
I ran into the same error.
As others have pointed out, the /tmp space in lambda is limited to 512 MB.
And if the lambda context is re-used for a new invocation, then the /tmp space is already half-full.
So, when reading the S3 objects and writing all the files to the /tmp directory (as I was doing),
I ran out of disk space somewhere in between.
The Lambda exited with an error, but NOT all bytes from the S3ObjectInputStream had been read.
So, there are two things to keep in mind:
1) If the first execution causes the problem, be stingy with your /tmp space; we have only 512 MB.
2) If the second execution causes the problem, it can be resolved by attacking the root problem. It's not possible to delete the /tmp folder itself, so delete all the files in the /tmp folder after the execution is finished.
In Java, here is what I did, which successfully resolved the problem:
public String handleRequest(Map<String, String> keyValuePairs, Context lambdaContext) {
    try {
        // All work here
    } catch (Exception e) {
        logger.error("Error {}", e.toString());
        return "Error";
    } finally {
        deleteAllFilesInTmpDir();
    }
}

private void deleteAllFilesInTmpDir() {
    Path path = java.nio.file.Paths.get(File.separator, "tmp", File.separator);
    try {
        if (Files.exists(path)) {
            deleteDir(path.toFile());
            logger.info("Successfully cleaned up the tmp directory");
        }
    } catch (Exception ex) {
        logger.error("Unable to clean up the tmp directory");
    }
}

public void deleteDir(File dir) {
    File[] files = dir.listFiles();
    if (files != null) {
        for (final File file : files) {
            deleteDir(file);
        }
    }
    dir.delete();
}
This is my solution. I'm using Spring Boot 2.4.3.
Create an Amazon S3 client:
AmazonS3 amazonS3Client = AmazonS3ClientBuilder
.standard()
.withRegion("your-region")
.withCredentials(
new AWSStaticCredentialsProvider(
new BasicAWSCredentials("your-access-key", "your-secret-access-key")))
.build();
Create an Amazon transfer manager client:
TransferManager transferManagerClient = TransferManagerBuilder.standard()
.withS3Client(amazonS3Client)
.build();
Create a temporary file at /tmp/{your-s3-key} so that we can put the downloaded object into it.
File file = new File(System.getProperty("java.io.tmpdir"), "your-s3-key");
try {
    file.getParentFile().mkdirs(); // Make sure the parent directory of the temporary file exists
    file.createNewFile();          // Create the temporary file
} catch (IOException e) {
    e.printStackTrace();
}
Then we download the file from S3 using the transfer manager client:
// Note that in this line the s3 file downloaded has been transferred in to the temporary file that we created
Download download = transferManagerClient.download(
new GetObjectRequest("your-s3-bucket-name", "your-s3-key"), file);
// This line blocks the thread until the download is finished
download.waitForCompletion();
Now that the S3 object has been successfully transferred into the temporary file, we can get an InputStream for it.
InputStream input = new DataInputStream(new FileInputStream(file));
Because the temporary file is not needed anymore, we just delete it.
file.delete();

TCP Connection over a secure ssh connection

I am trying to use JSch to connect to a remote server and then, from that server, open a telnet-like session over a TCP/IP port. Say I connect to server A, and once connected issue a TCP connection to server B on another port. In my web server logs I see a GET / logged, but not the GET /foo I would expect. Anything I'm missing here? (I do not need to use port forwarding, since the remote port is accessible from the system I am connected to.)
package com.tekmor;
import com.jcraft.jsch.*;
import java.io.BufferedReader;
.
.
public class Siranga {

    public static void main(String[] args) {
        Siranga t = new Siranga();
        try {
            t.go();
        } catch (Exception ex) {
            ex.printStackTrace();
        }
    }

    public void go() throws Exception {
        String host = "hostXXX.com";
        String user = "USER";
        String password = "PASS";
        int port = 22;
        Properties config = new Properties();
        config.put("StrictHostKeyChecking", "no");
        String remoteHost = "hostYYY.com";
        int remotePort = 80;
        try {
            JSch jsch = new JSch();
            Session session = jsch.getSession(user, host, port);
            session.setPassword(password);
            session.setConfig(config);
            session.connect();
            Channel channel = session.openChannel("direct-tcpip");
            ((ChannelDirectTCPIP) channel).setHost(remoteHost);
            ((ChannelDirectTCPIP) channel).setPort(remotePort);
            String cmd = "GET /foo";
            InputStream in = channel.getInputStream();
            OutputStream out = channel.getOutputStream();
            channel.connect(10000);
            byte[] bytes = cmd.getBytes();
            InputStream is = new ByteArrayInputStream(cmd.getBytes("UTF-8"));
            int numRead;
            while ((numRead = is.read(bytes)) >= 0) {
                out.write(bytes, 0, numRead);
                System.out.println(numRead);
            }
            out.flush();
            channel.disconnect();
            session.disconnect();
            System.out.println("foo");
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
Read your HTTP specification again. The request header should end with an empty line, so assuming you have no more header lines, you should have at least two line breaks at the end. (A line break here means a CRLF combination.)
Also, the request line should contain the HTTP version identifier after the URL.
So try this change to your program:
String command = "GET /foo HTTP/1.0\r\n\r\n";
As a hint: Instead of manually piping data from your ByteArrayInputStream to the channel's output stream, you could use the setInputStream method. Also, don't forget to read the result from the channel's input stream.
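A rough sketch of that hint, reusing the question's variables (the response handling here is assumed, not part of the original answer):
String cmd = "GET /foo HTTP/1.0\r\n\r\n";
// Let JSch pump the request bytes into the tunnel.
channel.setInputStream(new ByteArrayInputStream(cmd.getBytes("UTF-8")));
InputStream in = channel.getInputStream();
channel.connect(10000);

// Read the HTTP response that comes back through the tunnel.
int c;
while ((c = in.read()) != -1) {
    System.out.print((char) c);
}
channel.disconnect();
session.disconnect();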

Is it possible to get Remote Servers for testing Upload Files functionality?

I have a simple program, shown below, which is responsible for uploading a file to a remote location.
public static void main(String[] args) {
    String server = "www.myserver.com";
    int port = 21;
    String user = "user";
    String pass = "pass";
    FTPClient ftpClient = new FTPClient();
    try {
        ftpClient.connect(server, port);
        ftpClient.login(user, pass);
        ftpClient.enterLocalPassiveMode();
        ftpClient.setFileType(FTP.BINARY_FILE_TYPE);
        File firstLocalFile = new File("D:/Test/Projects.zip");
        String firstRemoteFile = "Projects.zip";
        InputStream inputStream = new FileInputStream(firstLocalFile);
        System.out.println("Start uploading first file");
        boolean done = ftpClient.storeFile(firstRemoteFile, inputStream);
        inputStream.close();
    } catch (IOException ex) {
        ex.printStackTrace();
    }
}
My question is: is it possible to test this program somehow, as I don't currently have a remote server?
In other words, is it possible to get a remote server to upload files to for temporary purposes (sorry, but only open source please)?
Is anybody aware of such websites?
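One possibility, which is an assumption on my part rather than something from the question: skip the remote server and run an in-memory FTP server locally for testing, for example with the open-source MockFtpServer library. A rough sketch:
// Sketch only: start a fake in-process FTP server, then point the
// upload code above at localhost:2121 with the same user/pass.
FakeFtpServer fakeFtpServer = new FakeFtpServer();
fakeFtpServer.setServerControlPort(2121);
fakeFtpServer.addUserAccount(new UserAccount("user", "pass", "/home/user"));

FileSystem fileSystem = new UnixFakeFileSystem();
fileSystem.add(new DirectoryEntry("/home/user"));
fakeFtpServer.setFileSystem(fileSystem);

fakeFtpServer.start();
// ... run the FTP upload against localhost:2121, then verify the file arrived ...
fakeFtpServer.stop();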

Can I have simultaneous streams on one physical file

I have a WCF service that allows clients to download some files. Although there is a new instance of the service for every client's request, if two clients try to download the same file at the same time, the first request to arrive locks the file until it is finished with it. So the other client is effectively waiting for the first client to finish, as if there were no multiple service instances. There must be a way to avoid this.
Does anyone know how I can avoid this without having multiple copies of the file on the server's hard disk? Or am I doing something totally wrong?
This is the server-side code:
public Stream DownloadFile(string path)
{
    System.IO.FileInfo fileInfo = new System.IO.FileInfo(path);
    // check if exists
    if (!fileInfo.Exists) throw new FileNotFoundException();
    // open stream
    System.IO.FileStream stream = new System.IO.FileStream(path, System.IO.FileMode.Open, System.IO.FileAccess.Read);
    // return result
    return stream;
}
This is the client-side code:
public void Download(string serverPath, string path)
{
    Stream stream;
    try
    {
        if (System.IO.File.Exists(path)) System.IO.File.Delete(path);
        serviceStreamed = new ServiceStreamedClient("NetTcpBinding_IServiceStreamed");
        SimpleResult<long> res = serviceStreamed.ReturnFileSize(serverPath);
        if (!res.Success)
        {
            throw new Exception("File not found: \n" + serverPath);
        }
        // get stream from server
        stream = serviceStreamed.DownloadFile(serverPath);
        // write server stream to disk
        using (System.IO.FileStream writeStream = new System.IO.FileStream(path, System.IO.FileMode.CreateNew, System.IO.FileAccess.Write))
        {
            int chunkSize = 1 * 48 * 1024;
            byte[] buffer = new byte[chunkSize];
            OnTransferStart(new TransferStartArgs());
            do
            {
                // read bytes from input stream
                int bytesRead = stream.Read(buffer, 0, chunkSize);
                if (bytesRead == 0) break;
                // write bytes to output stream
                writeStream.Write(buffer, 0, bytesRead);
                // report progress from time to time
                OnProgressChanged(new ProgressChangedArgs(writeStream.Position));
            } while (true);
            writeStream.Close();
            stream.Dispose();
        }
    }
    catch (Exception ex)
    {
        throw ex;
    }
    finally
    {
        if (serviceStreamed.State == System.ServiceModel.CommunicationState.Opened)
        {
            serviceStreamed.Close();
        }
        OnTransferFinished(new TransferFinishedArgs());
    }
}
I agree with Mr. Kjörling, it's hard to help without seeing what you're doing. Since you're just downloading files from your server, why are you opening the file as R/W (causing the lock)? If you open it read-only, it won't lock. Please don't mod down if my suggestion is lacking, as it is only my interpretation of the problem without a lot of information.
Try this; it should enable two threads to read the file concurrently and independently:
System.IO.FileStream stream = new System.IO.FileStream(path, System.IO.FileMode.Open, System.IO.FileAccess.Read, System.IO.FileShare.Read);