Too many files open when using generic packager with external packager.xml file - iso8583

I am using jPOS 2.1.0 with an external packager XML file for an ISO 8583 client. Due to the large number of requests over two or three days, I encountered "Too many files open", even though I have set ulimit -n to 50000. I suspect the packager files are not being closed properly, which is why this limit has been exceeded. Please help me close the open files properly.
JposLogger logger = new JposLogger(isoLogLocation);
org.jpos.iso.ISOPackager customPackager = new GenericPackager(isoPackagerLocation + iso8583Properties.getPackager());
BaseChannel channel = new ASCIIChannel(iso8583Properties.getServerIp(), Integer.parseInt(iso8583Properties.getServerPort()), customPackager);
logger.jposlogconfig(channel);
try {
    channel.setTimeout(45000);
    channel.connect();
} catch (Exception ex) {
    log4j.error(ex.getMessage());
    throw new ConnectIpsException("Unable to establish connection with bank.");
}
log4j.info("Connection established using ASCIIChannel");
ISOMsg m = new ISOMsg();
m.set(0, "1200");
........
m.set(126, "connectIPS");
m.setPackager(customPackager);
log4j.info(ISOUtil.hexdump(m.pack()));
channel.send(m);
log4j.info("Message has been sent");
ISOMsg r = channel.receive();
r.setPackager(customPackager);
log4j.info(ISOUtil.hexdump(r.pack()));
String actionCode = (String) r.getValue("39");
channel.disconnect();
return bancsxfr;
}

You know when you open a file, a socket, or a channel, you need to close it, right?
I don't see a finally in your try that would close the channel.
You have a huge leak there.
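For illustration, a minimal sketch of the same flow with the channel released in a finally block; the variable names are taken from the question, and the logging calls are assumptions:

try {
    channel.setTimeout(45000);
    channel.connect();
    // ... build the ISOMsg m as in the question ...
    channel.send(m);
    ISOMsg r = channel.receive();
    // ... process the response ...
} catch (Exception ex) {
    log4j.error(ex.getMessage());
    throw new ConnectIpsException("Unable to establish connection with bank.");
} finally {
    try {
        if (channel.isConnected())
            channel.disconnect(); // release the socket even when send/receive throws
    } catch (IOException e) {
        log4j.error("Error while disconnecting", e);
    }
}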

Related

PDF Creation using Diagnostics.Process and Wkhtmltopdf slow on server

People,
I’m generating a PDF file within a .NET application using wkhtmltopdf inside a System.Diagnostics.Process.
While this process takes approx. 1 sec to run on my local machine (Win 10 Pro, 16 GB mem), once deployed to the server and using the same data it takes approx. 40 secs (Win Server 2012, 8 GB mem). The resulting PDF file in both cases is only about 34 KB.
Having run some diagnostic timings on each code line, I have found that it is this line which takes all the time:
if (!process.WaitForExit(120000))
I have tried changing permissions on the output folder and also changing the output folder. I have also changed the identity on the IIS application pool.
With such a disparity in performance I’m not convinced it is a code issue, just configuration. Can anyone shed any light on this?
Slightly abbreviated code below.
I should also mention that I have run procmon on the server while this runs; it appears to be doing very little after initially loading the program.
var temp = HttpContext.Current.Server.MapPath("~//temppdf//");
var outputPdfFilePath = Path.Combine(temp, String.Format("{0}.pdf", Guid.NewGuid()));
document.Url = "-";
ProcessStartInfo si;
StringBuilder paramsBuilder = new StringBuilder();
paramsBuilder.Append("--page-size A4 ");
paramsBuilder.Append("--zoom 1.000 ");
paramsBuilder.Append("--disable-smart-shrinking ");
paramsBuilder.AppendFormat("\"{0}\" \"{1}\"", document.Url, outputPdfFilePath);
si = new ProcessStartInfo();
si.CreateNoWindow = false;
si.FileName = environment.WkHtmlToPdfPath; // path to exe in Program Files (x86)
si.Arguments = paramsBuilder.ToString();
si.UseShellExecute = false;
si.RedirectStandardError = false;
si.RedirectStandardInput = true;
try
{
    using (var process = new Process())
    {
        process.StartInfo = si;
        process.Start();
        if (document.Html != null)
            using (var stream = process.StandardInput)
            {
                byte[] buffer = Encoding.UTF8.GetBytes(document.Html);
                stream.BaseStream.Write(buffer, 0, buffer.Length);
                stream.WriteLine();
            }
        if (!process.WaitForExit(120000))
            throw new PdfConvertTimeoutException();
        // This command above takes 1 sec locally and 42 secs on the server
    }
}
finally
{
    if (delete && File.Exists(outputPdfFilePath))
        File.Delete(outputPdfFilePath);
}
Well, I finally got to the bottom of this, and it had absolutely nothing to do with the code. It transpires that the HTML I was trying to convert had links to images on a live website. While this website was accessible from my local machine, it was inaccessible from the server, so the delay was down to timeouts while trying to fetch the inaccessible images.
Doh!!!

AmazonS3: Getting warning: S3AbortableInputStream:Not all bytes were read from the S3ObjectInputStream, aborting HTTP connection

Here's the warning that I am getting:
S3AbortableInputStream:Not all bytes were read from the S3ObjectInputStream, aborting HTTP connection. This is likely an error and may result in sub-optimal behavior. Request only the bytes you need via a ranged GET or drain the input stream after use.
I tried using try-with-resources, but S3ObjectInputStream doesn't seem to close via this method:
try (S3Object s3object = s3Client.getObject(new GetObjectRequest(bucket, key));
S3ObjectInputStream s3ObjectInputStream = s3object.getObjectContent();
BufferedReader reader = new BufferedReader(new InputStreamReader(s3ObjectInputStream, StandardCharsets.UTF_8));
){
//some code here blah blah blah
}
I also tried the code below, explicitly closing the stream, but that doesn't work either:
S3Object s3object = s3Client.getObject(new GetObjectRequest(bucket, key));
S3ObjectInputStream s3ObjectInputStream = s3object.getObjectContent();
try (BufferedReader reader = new BufferedReader(new InputStreamReader(s3ObjectInputStream, StandardCharsets.UTF_8));
){
//some code here blah blah
s3ObjectInputStream.close();
s3object.close();
}
Any help would be appreciated.
PS: I am only reading two lines of the file from S3 and the file has more data.
Got the answer via another medium. Sharing it here:
The warning indicates that you called close() without reading the whole file. This is problematic because S3 is still trying to send the data and you're leaving the connection in a sad state.
There are two options here:
1) Read the rest of the data from the input stream so the connection can be reused.
2) Call s3ObjectInputStream.abort() to close the connection without reading the data. The connection won't be reused, so you take some performance hit with the next request to re-create the connection. This may be worth it if it's going to take a long time to read the rest of the file (see the sketch just below).
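Since the question only needs the first two lines of a larger object, option #2 might look like this sketch (reusing the s3Client, bucket, and key names from the question; the enclosing method is assumed to declare throws IOException):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;
import com.amazonaws.services.s3.model.GetObjectRequest;
import com.amazonaws.services.s3.model.S3Object;
import com.amazonaws.services.s3.model.S3ObjectInputStream;

// Read only what is needed, then abort so that close() does not warn.
try (S3Object s3object = s3Client.getObject(new GetObjectRequest(bucket, key));
     S3ObjectInputStream in = s3object.getObjectContent();
     BufferedReader reader = new BufferedReader(new InputStreamReader(in, StandardCharsets.UTF_8))) {
    String first = reader.readLine();
    String second = reader.readLine();
    in.abort(); // drop the remaining bytes; this connection will not be reused
}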
Following option #1 of Chirag Sejpal's answer I used the below statement to drain the S3AbortableInputStream to ensure the connection can be reused:
com.amazonaws.util.IOUtils.drainInputStream(s3ObjectInputStream);
I ran into the same problem and the following class helped me
@Data
@AllArgsConstructor
public class S3ObjectClosable implements Closeable {
    private final S3Object s3Object;

    @Override
    public void close() throws IOException {
        s3Object.getObjectContent().abort();
        s3Object.close();
    }
}
and now you can use it without the warning:
try (final var s3ObjectClosable = new S3ObjectClosable(s3Client.getObject(bucket, key))) {
//same code
}
To add an example to Chirag Sejpal's answer (elaborating on option #1), the following can be used to read the rest of the data from the input stream before closing it:
S3Object s3object = s3Client.getObject(new GetObjectRequest(bucket, key));
try (S3ObjectInputStream s3ObjectInputStream = s3object.getObjectContent()) {
try {
// Read from stream as necessary
} catch (Exception e) {
// Handle exceptions as necessary
} finally {
while (s3ObjectInputStream != null && s3ObjectInputStream.read() != -1) {
// Read the rest of the stream
}
}
// The stream will be closed automatically by the try-with-resources statement
}
I ran into the same error.
As others have pointed out, the /tmp space in Lambda is limited to 512 MB.
And if the Lambda context is re-used for a new invocation, then the /tmp space is already half-full.
So, when reading the S3 objects and writing all the files to the /tmp directory (as I was doing), I ran out of disk space somewhere in between. Lambda exited with an error, but NOT all bytes from the S3ObjectInputStream were read.
So, there are two things to keep in mind:
1) If the first execution causes the problem, be stingy with your /tmp space. We have only 512 MB.
2) If the second execution causes the problem, then this can be resolved by attacking the root problem. It's not possible to delete the /tmp folder itself, so delete all the files in the /tmp folder after the execution is finished.
In Java, here is what I did, which successfully resolved the problem.
public String handleRequest(Map<String, String> keyValuePairs, Context lambdaContext) {
try {
// All work here
} catch (Exception e) {
logger.error("Error {}", e.toString());
return "Error";
} finally {
deleteAllFilesInTmpDir();
}
}
private void deleteAllFilesInTmpDir() {
Path path = java.nio.file.Paths.get(File.separator, "tmp", File.separator);
try {
if (Files.exists(path)) {
deleteDir(path.toFile());
logger.info("Successfully cleaned up the tmp directory");
}
} catch (Exception ex) {
logger.error("Unable to clean up the tmp directory");
}
}
public void deleteDir(File dir) {
File[] files = dir.listFiles();
if (files != null) {
for (final File file: files) {
deleteDir(file);
}
}
dir.delete();
}
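For what it's worth, a shorter NIO-based variant of the same cleanup is possible (my own sketch, assuming Java 8+; cleanTmpDir is a hypothetical replacement for deleteAllFilesInTmpDir plus deleteDir):

import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Comparator;
import java.util.stream.Stream;

private void cleanTmpDir() throws IOException {
    Path root = Paths.get(File.separator, "tmp");
    if (!Files.exists(root))
        return;
    try (Stream<Path> walk = Files.walk(root)) {
        walk.sorted(Comparator.reverseOrder()) // visit children before their parents
            .filter(p -> !p.equals(root))      // keep the /tmp folder itself
            .map(Path::toFile)
            .forEach(File::delete);
    }
}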
This is my solution. I'm using Spring Boot 2.4.3.
Create an Amazon S3 client:
AmazonS3 amazonS3Client = AmazonS3ClientBuilder
.standard()
.withRegion("your-region")
.withCredentials(
new AWSStaticCredentialsProvider(
new BasicAWSCredentials("your-access-key", "your-secret-access-key")))
.build();
Create an Amazon transfer manager client:
TransferManager transferManagerClient = TransferManagerBuilder.standard()
.withS3Client(amazonS3Client)
.build();
Create a temporary file at /tmp/{your-s3-key} so that we can put the downloaded file into it:
File file = new File(System.getProperty("java.io.tmpdir"), "your-s3-key");
file.getParentFile().mkdirs(); // Create the parent directory of the temporary file
try {
    file.createNewFile(); // Create the temporary file
} catch (IOException e) {
    e.printStackTrace();
}
Then, we download the file from S3 using the transfer manager client:
// Note that this call transfers the downloaded S3 object into the temporary file we created
Download download = transferManagerClient.download(
new GetObjectRequest("your-s3-bucket-name", "your-s3-key"), file);
// This line blocks the thread until the download is finished
download.waitForCompletion();
Now that the S3 file has been successfully transferred into the temporary file we created, we can get an InputStream for it.
InputStream input = new DataInputStream(new FileInputStream(file));
Because the temporary file is not needed anymore, we just delete it.
file.delete();
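Tying the steps together, here is a sketch of one possible shape (same placeholder bucket/key names as above) that removes the temporary file even when reading fails:

import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import com.amazonaws.services.s3.model.GetObjectRequest;
import com.amazonaws.services.s3.transfer.Download;

// Assumes transferManagerClient was built as shown above.
File file = new File(System.getProperty("java.io.tmpdir"), "your-s3-key");
try {
    Download download = transferManagerClient.download(
            new GetObjectRequest("your-s3-bucket-name", "your-s3-key"), file);
    download.waitForCompletion(); // blocks until the download is finished
    try (InputStream input = new FileInputStream(file)) {
        // read from the stream here
    }
} catch (InterruptedException e) {
    Thread.currentThread().interrupt(); // restore the interrupt flag
} catch (IOException e) {
    e.printStackTrace();
} finally {
    file.delete(); // the temporary file is removed in every case
}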

"Failure to send mail" Error [duplicate]

I am making an SMTP mail application with C#.NET. It works fine with Gmail settings, but I need it to connect to VSNL, and I am getting an exception: "Failure sending mail".
My settings seem correct. What is the problem? Why am I getting the exception?
MailMessage mailMsg = new MailMessage();
MailAddress mailAddress = new MailAddress("mail@vsnl.net");
mailMsg.To.Add(textboxsecondry.Text);
mailMsg.From = mailAddress;
// Subject and Body
mailMsg.Subject = "Testing mail..";
mailMsg.Body = "connection testing..";
SmtpClient smtpClient = new SmtpClient("smtp.vsnl.net", 25);
var credentials = new System.Net.NetworkCredential("mail@vsnl.net", "password");
smtpClient.EnableSsl = true;
smtpClient.UseDefaultCredentials = false;
smtpClient.Credentials = credentials;
smtpClient.Send(mailMsg);
I am getting the following exception...
System.IO.IOException: Unable to read data from the transport connection: net_io_connectionclosed.
at System.Net.Mail.SmtpReplyReaderFactory.ProcessRead(Byte[] buffer, Int32 offset, Int32 read, Boolean readLine)
at System.Net.Mail.SmtpReplyReaderFactory.ReadLines(SmtpReplyReader caller, Boolean oneLine)
at System.Net.Mail.SmtpReplyReaderFactory.ReadLine(SmtpReplyReader caller)
at System.Net.Mail.SmtpConnection.GetConnection(String host, Int32 port)
at System.Net.Mail.SmtpTransport.GetConnection(String host, Int32 port)
at System.Net.Mail.SmtpClient.GetConnection()
at System.Net.Mail.SmtpClient.Send(MailMessage message)
Check the InnerException of the exception; it should tell you why it failed.
Try wrapping the Send call in a try catch block to help identify the underlying problem.
e.g.
try
{
smtpClient.Send(mailMsg);
}
catch (Exception ex)
{
Console.WriteLine(ex); //Should print stacktrace + details of inner exception
if (ex.InnerException != null)
{
Console.WriteLine("InnerException is: {0}",ex.InnerException);
}
}
This information will help identify what the problem is...
Make sure your antivirus is not blocking outgoing mail. In my case, McAfee Access Protection rules were blocking sending mail; untick the block and report options.
These are the settings I use, and they are the first things I check whenever sending mail fails. Note that UseDefaultCredentials defaults to false, and it should not be changed to true when you supply your own credentials:
smtpClient.Port = smtpServerPort;
smtpClient.UseDefaultCredentials = false;
smtpClient.Timeout = 100000;
smtpClient.Credentials = new System.Net.NetworkCredential(mailerEmailAddress, mailerPassword);
smtpClient.EnableSsl = EnableSsl;
Every send must be surrounded by a try/catch.
Try removing
smtpClient.EnableSsl = true;
I am not sure whether VSNL supports SSL on the port number you are using.
Without seeing your code, it is difficult to find the reason for the exception. The following are assumptions:
The server host of VSNL is smtp.vsnl.net
Exception:
Unable to read data from the transport connection: net_io_connectionclosed
Usually this exception occurs only when there is a mismatch in the username or password.
Check to see if the machine is being referred to by an IPv6 address. In my case, using the machine name gave me the same error; using the IPv4 address (i.e. 10.0.0.4) it did work. I got rid of IPv6 and it started to work.
Not the solution I was looking for, but given my limited understanding of IPv6, I did not know of other choices.

How do I send a JPEG file over an SSH channel

I have managed to read a text file over an SSH channel, using an Ubuntu Linux machine as the SSH server. My question is: how do I send an image file over and display it in a component like a JPanel? I seem to have problems doing that.
Below is the code that I have used, which is from this forum. Credits to user World.
public static void main(String[] args) throws Exception
{
    String user = "larry";
    String password = "123";
    String host = "192.168.174.131";
    int port = 22;
    String remoteFile = "/home/larry/seohyun.jpg";
    try
    {
        JSch jsch = new JSch();
        Session session = jsch.getSession(user, host, port);
        session.setPassword(password);
        session.setConfig("StrictHostKeyChecking", "no");
        System.out.println("Establishing connection");
        session.connect();
        System.out.println("Connection Established");
        System.out.println("Creating SFTP Channel.");
        ChannelSftp sftpChannel = (ChannelSftp) session.openChannel("sftp");
        sftpChannel.connect();
        System.out.println("SFTP Channel Established");
        InputStream out = null;
        out = sftpChannel.get(remoteFile);
        BufferedReader br = new BufferedReader(new InputStreamReader(out));
        String imageName = br.readLine();
        File input = new File(imageName);
        image = ImageIO.read(input);
        JFrame frame = new JFrame("Display Image");
        Panel panel = new TestSSH();
        frame.getContentPane().add(panel);
        frame.setSize(500, 500);
        frame.setVisible(true);
    } catch (Exception e)
    {
        System.err.print(e);
    }
}
However, I can't seem to be able to display the image on the JPanel.
It gives me the following exception:
Establishing connection
Connection Established
Creating SFTP Channel.
SFTP Channel Established
javax.imageio.IIOException: Can't read input file!
However, I have checked the file path countless times. It is correct.
May I know what's wrong with my code?
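A likely cause, for what it's worth: br.readLine() returns the first line of the JPEG's binary data, not a file name, so new File(imageName) points at a path that does not exist on the local machine, which is exactly what "Can't read input file!" complains about. A minimal sketch (assuming the same session and sftpChannel setup as above) that decodes the image straight from the SFTP stream instead:

import java.awt.image.BufferedImage;
import java.io.InputStream;
import javax.imageio.ImageIO;
import javax.swing.ImageIcon;
import javax.swing.JFrame;
import javax.swing.JLabel;

// Decode the JPEG directly from the remote stream; no local file is involved.
InputStream in = sftpChannel.get(remoteFile);
BufferedImage image = ImageIO.read(in);
in.close();

JFrame frame = new JFrame("Display Image");
frame.getContentPane().add(new JLabel(new ImageIcon(image))); // simplest way to show a BufferedImage
frame.pack();
frame.setVisible(true);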

Redis on Appharbor - Booksleeve GetString exception

I am trying to set up Redis on AppHarbor. I have followed their instructions, and again I have an issue with the Booksleeve API. Here is the code I am using to make it work initially:
var connectionUri = new Uri(url);
using (var redis = new RedisConnection(connectionUri.Host, connectionUri.Port, password: connectionUri.UserInfo.Split(new[] { ':' }, 2)[1]))
{
redis.Strings.Set(1, "greeting", "welcome to remember your stuff!");
try
{
var task = redis.Strings.GetString(1, "greeting");
redis.Wait(task);
ViewBag.Message = task.Result;
}
catch (Exception)
{
// It throws an exception trying to wait for the task?
}
}
However, the issue is that it sets the string correctly, but when trying to retrieve the same string from the key-value store, it throws a timeout exception waiting for the task to execute. This code works on my local Redis server connection, however.
Am I using the API in the wrong way? Or is this something related to AppHarbor?
Thanks
Like a SqlConnection, you need to call Open() (otherwise your messages are queued for delivery).
Unlike SqlConnection, you should not fire up a RedisConnection each time you need it - it is intended to be used as a shared, thread-safe, multiplexer - i.e. a single connection is held somewhere and used by lots and lots of unrelated callers. Unless of course you only need to do one thing!