I am developing a tool that uses the JSch library to read log files from a remote machine, for log parsing and some other functionality. The code to read the log file is given below.
JSch jsch = new JSch();
Session session = jsch.getSession(user, host, 22);
session.setPassword(password);
session.setConfig("StrictHostKeyChecking", "no");
session.connect();
ChannelSftp sftp = (ChannelSftp) session.openChannel("sftp");
sftp.connect();
InputStream stream = null;
try {
    // this will be changed to execute on a regular time interval
    while (true) {
        stream = sftp.get(rfile);
        read(stream);
    }
} catch (InterruptedException e) {
    e.printStackTrace();
} finally {
    if (stream != null) {
        stream.close();
    }
    sftp.disconnect();
    session.disconnect();
}
But there is a chance that the tool will be deployed to the machine where the log is generated. So, my questions are:
1) I suspect a performance issue with sftp.get(), as the log file will be read at a regular interval (e.g. every 5 minutes). Any suggestions to improve performance here?
2) Will JSch have the same network overhead when reading files from the local machine as it has when reading from a remote machine?
3) If so, is there any way to improve performance when the log is read from the local machine?
Maybe I can check the IP addresses of both the target and host machines before reading the log file, and if both are the same, read the log directly. But could there be a better alternative?
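Roughly, I am thinking of something like the sketch below (isLocalHost is a helper I would write myself; comparing against InetAddress.getLocalHost() alone can miss other local network interfaces, so it is only a first approximation):
import java.io.FileInputStream;
import java.io.InputStream;
import java.net.InetAddress;
import java.net.UnknownHostException;

// Decide whether the target host is actually this machine.
static boolean isLocalHost(String host) throws UnknownHostException {
    InetAddress target = InetAddress.getByName(host);
    return target.isLoopbackAddress() || target.equals(InetAddress.getLocalHost());
}

// Inside the polling loop: read from disk when local, otherwise via SFTP.
InputStream stream = isLocalHost(host)
        ? new FileInputStream(rfile)
        : sftp.get(rfile);
read(stream);
If the host turns out to be local, I could then skip opening the SSH session entirely.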
Thanks,
nks
Related
I am getting an error while uploading large files (more than 50 MB) using PutObjectRequest. It throws the error "Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host."
I am using a federated user for this PutObjectRequest.
Please help me solve this issue.
I am sending multiple files in parallel using tasks, as follows:
Task.Factory.StartNew(() =>
{
    PutObjectRequest req = new PutObjectRequest()
    {
        BucketName = _bucketName,
        Key = fileKey,
        FilePath = demoPath
    };
    PutObjectResponse resp = client.PutObject(req);
});
This was a bug in the AWS SDK version I was using (2.3.20).
Now I am using AWS SDK version 2.3.40 and it is working fine. Basically, the error was due to a time difference between the client machine and the server, which was fixed in the updated AWS SDK DLL.
I have tried to reuse an LDAP connection in the UnboundID LDAP SDK using the following code:
if (ldapConnection.isConnected()) {
    // Connection is still connected.
} else {
    try {
        // Connection is not connected. Try to reconnect.
        ldapConnection.reconnect();
    } catch (LDAPException e) {
    }
}
Unfortunately, ldapConnection.isConnected() returns true, yet I get an exception later in my code.
What am I doing wrong?
How to reuse an LDAP connection in Unboundid LDAP SDK?
Why are you using the ldapConnection.reconnect() method rather than simply using BindResult bindResult = ldapConnection.bind(bindRequest); ?
You might also consider using "a connection pool, even if that pool only has a single connection. Connection pools have excellent support for connection management and dealing with connections that have become invalid, and they also offer much better options for failover in that they can be configured with information about multiple servers (through the ServerSet API) so that the best server can be selected." (From http://sourceforge.net/p/ldap-sdk/discussion/1001257/thread/2cd4e0de/#14b5)
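A minimal sketch of the pooled approach (the host, port, credentials and search below are placeholders; operations invoked on the pool check a connection out, use it, return it, and replace it if it has become invalid):
import com.unboundid.ldap.sdk.LDAPConnection;
import com.unboundid.ldap.sdk.LDAPConnectionPool;
import com.unboundid.ldap.sdk.SearchResult;
import com.unboundid.ldap.sdk.SearchScope;

// Create one connection and wrap it in a pool of size 1.
LDAPConnection connection = new LDAPConnection("ldap.example.com", 389,
        "cn=admin,dc=example,dc=com", "password");
LDAPConnectionPool pool = new LDAPConnectionPool(connection, 1);

// Use the pool instead of the raw connection for operations.
SearchResult result = pool.search("dc=example,dc=com", SearchScope.SUB, "(uid=jdoe)");

// Close the pool (and its connections) when the application shuts down.
pool.close();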
-jim
I'm searching for a way to get my files synchronized (as a task) from a web server (Ubuntu 14) to a local server (Windows Server). The web server creates small files, which the local server needs. The web server is in a DMZ, accessible through SSH. Only the local server is able to access folders on the web server. I tried using programs like WinSCP, but I'm not able to set up a "get" job.
Is there a way to do this with SSH on the Windows server without logging in every few seconds? Or is there a better solution? In the future web services are possible, but at the moment I need a quick solution.
Either you need to schedule a regular, frequent job that connects and downloads changes.
Or you need to have a continuously running process that keeps the connection open and regularly watches for changes.
There's hardly a better solution (that's still quick and easy to implement).
Example of a continuous process implemented using the WinSCP .NET assembly:
// Setup session options
SessionOptions sessionOptions = new SessionOptions {
Protocol = Protocol.Sftp,
HostName = "example.com",
UserName = "user",
Password = "mypassword",
SshHostKeyFingerprint = "ssh-rsa 2048 xxxxxxxxxxx...="
};
using (Session session = new Session())
{
// Connect
session.Open(sessionOptions);
while (true)
{
// Download changes
session.SynchronizeDirectories(
SynchronizationMode.Local, localPath, remotePath, false).Check();
// Wait 10 seconds
Thread.Sleep(10000);
}
}
You will need to add better error handling and reconnect if the connection breaks.
If you do not want to implement this as a (C#) application, you can use a PowerShell script. For a complete solution, see
Keep local directory up to date (download changed files from remote SFTP/FTP server).
Hello World!
Currently I'm writing a simple client/server application which uses sockets to do the communication. My client and my server application work fine with each other, but if I try to query my server application with a real web browser (like Mozilla Firefox), an exception occurs.
I think that my streams are not compatible with Mozilla Firefox. The little code line below always leads to an IOException with the error message "invalid stream header: 47455420".
From Firefox I try to connect via: http://localhost:7777/some-webpage.html
This is my code:
server = new ServerSocket(7777);
Socket socket = server.accept();
try
{
    ObjectInputStream inputStream = new ObjectInputStream(new BufferedInputStream(socket.getInputStream()));
}
catch (IOException ex)
{
    System.out.println("This exception happens :-(");
    System.out.println(ex.getLocalizedMessage());
}
Does anybody know why this happens?
Any help is appreciated.
Greetings,
Benny
The ObjectInputStream expects a binary format (Java object serialization). You can't use a web browser to produce the binary format that it reads. The web browser talks the HTTP protocol, and your server is not expecting that at all; in fact, the bytes 47 45 54 20 from the error message are just the ASCII characters "GET ", the start of the browser's HTTP request.
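For illustration only, a small sketch that reads whatever the browser sends as plain text, so you can see the HTTP request arriving on the socket (the class name and output are placeholders):
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.ServerSocket;
import java.net.Socket;

public class PeekAtRequest {
    public static void main(String[] args) throws IOException {
        try (ServerSocket server = new ServerSocket(7777);
             Socket socket = server.accept();
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(socket.getInputStream()))) {
            // The first line of an HTTP request looks like:
            //   GET /some-webpage.html HTTP/1.1
            String requestLine = in.readLine();
            System.out.println("Browser sent: " + requestLine);
        }
    }
}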
You probably need to learn about web services. You might find the JAX-RS support in CXF convenient for what you seem to want to do.
To just drop into HTTP, the minimal thing to do is implement a servlet; Google would be your friend in learning about them.
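A bare-bones sketch of a servlet, assuming the javax.servlet API is available; the class name and response text are placeholders, and you would still need to map it to a URL (in web.xml or with an @WebServlet annotation) and deploy it in a servlet container:
import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class HelloServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        // The container handles the HTTP protocol; you only produce the response body.
        resp.setContentType("text/html");
        resp.getWriter().println("<html><body>Hello from the server</body></html>");
    }
}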
I have a client app that tries every 10 seconds to send a message over a WCF web service. This client app will be on a computer on board a ship, which we know will have spotty internet connectivity. I would like for the app to try to send data via the service, and if it can't, to queue up the messages until it can send them through the service.
In order to test this setup, I start the client app and the web service (both on my local machine), and everything works fine. I try to simulate the bad internet connection by killing the web service and restarting it. As soon as I kill the service, I start getting CommunicationObjectFaultedExceptions, which is expected. But after I restart the service, I continue to get those exceptions.
I'm pretty sure that there's something I'm not understanding about the web service paradigm, but I don't know what that is. Can anyone offer advice on whether or not this setup is feasible, and if so, how to resolve this issue (i.e. re-establish the communications channel with the web service)?
Thanks!
Klay
Client service proxies cannot be reused once they have faulted. You must dispose of the old one and create a new one.
You must also make sure you close the client service proxy properly. It is possible for a WCF service proxy to throw an exception on close, and if this happens the connection is not closed, so you must abort it. Use the "try{Close}/catch{Abort}" pattern. Also bear in mind that the Dispose method calls Close (and hence can throw an exception from the Dispose), so you cannot simply use a using block as you would with normal disposable classes.
For example:
try
{
if (yourServiceProxy != null)
{
if (yourServiceProxy.State != CommunicationState.Faulted)
{
yourServiceProxy.Close();
}
else
{
yourServiceProxy.Abort();
}
}
}
catch (CommunicationException)
{
// Communication exceptions are normal when
// closing the connection.
yourServiceProxy.Abort();
}
catch (TimeoutException)
{
// Timeout exceptions are normal when closing
// the connection.
yourServiceProxy.Abort();
}
catch (Exception)
{
// Any other exception and you should
// abort the connection and rethrow to
// allow the exception to bubble upwards.
yourServiceProxy.Abort();
throw;
}
finally
{
// This is just to stop you from trying to
// close it again (with the null check at the start).
// This may not be necessary depending on
// your architecture.
yourServiceProxy = null;
}
There was a blog article about this, but it now appears to be offline. An archived version is available on the Wayback Machine.