Triggering 'connection reset by peer' - testing

I would like to test the logging that happens in our app (an embedded ftp server) when a 'connection reset by peer' error occurs. This post explains the source of the error pretty well, but doesn't really explain how to cause one. Does anybody know a way to trigger this error for a TCP connection?

tcpkill seems to do the job well.
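If you'd rather trigger the reset from a test client instead of injecting packets, a minimal Java sketch is below. It assumes the embedded FTP server is listening on localhost:21 (adjust as needed). Setting SO_LINGER to zero makes close() send a TCP RST instead of a normal FIN, so the server's next read or write on that connection should fail with 'connection reset by peer'.

import java.net.Socket;

// Minimal sketch: open a connection to the server under test, then
// close it abortively. SO_LINGER with a zero timeout makes close()
// emit a TCP RST rather than a FIN, which surfaces on the server as
// 'connection reset by peer'. Host and port are assumptions.
public class ResetTrigger {
    public static void main(String[] args) throws Exception {
        try (Socket s = new Socket("localhost", 21)) {
            s.getInputStream().read(); // wait for the first byte of the FTP banner
            s.setSoLinger(true, 0);    // linger time 0 => abortive close
        }                              // try-with-resources close() sends the RST
    }
}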

Related

"Invalid authentication data. Connection reset" When trying to log in to github within IntelliJ

Whenever I try to log into github from IntelliJ, I get this error.
Even though my authentication data is correct, it tells me it's not. It doesn't matter whether I use a token or my credentials; I get the same problem. I've tried deleting all tokens and generating a new one, but that didn't change anything either. What could be the problem? Thanks.
The issue might be caused by some proxy in the middle. "Invalid authentication data" is a generic message that appears when a call to the GitHub API fails without a clear reason. "Connection reset" seems to be the real message, and could be caused by some proxy or network firewall.
Check the logs for more details on the error. If you think it is a bug in IntelliJ, it is worth reporting the issue to http://youtrack.jetbrains.com/ with logs attached.
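If the logs don't make it obvious, one quick check (a hedged suggestion, not an IntelliJ feature) is to hit the GitHub API from a plain JVM program on the same machine; if this also fails, the problem is in the network path (proxy/firewall) rather than in the IDE.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Probe the GitHub API outside the IDE to isolate proxy/firewall issues.
// The client uses the JVM's default proxy selector.
public class GithubApiProbe {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create("https://api.github.com/")).build();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode()); // 200 means the API is reachable
    }
}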

What does the error error:140000DB:SSL routines:SSL routines:short read mean?

In our software, we keep getting this warning/error message intermittently. I'm not sure how or why this message appears.
HTTP asio handshake failed: error:140000DB:SSL routines:SSL routines:short read
I searched the Internet, but most of the results point to a VMware problem, which is not the case for me.
Eventually I found out that this error is actually thrown by OpenSSL, which is used by Boost.Asio. I downloaded the source code of OpenSSL/Asio/Boost but couldn't find this error code in it. My question: does anyone know what this error means? What could trigger it? I just want to understand it enough to reproduce it, so we can fix our software if there is a hole in it.
Many thanks in advance!
Reference:
http://ib-krajewski.blogspot.my/2016/03/https-support-for-casablanca-client.html
how to clean boost::asio::ssl::stream after closed by server
A commit in OpenSSL removed the error SSL_R_SHORT_READ.
The commit just before that removal still has it defined as 219 == 0xDB. This reason code of 0xDB is what comes out of OpenSSL as 0x140000DB.
In general, a short read happens on TCP when the connection ends before the other side could send enough data to decode the current message. This may happen, for example, because the other side crashed or because of a network problem.
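For reference, the hex value decomposes under OpenSSL's old packed error-code layout (library number in the top byte, reason code in the low 12 bits). A tiny sketch, just to illustrate the arithmetic:

// Decode an old-style packed OpenSSL error code: library in bits 24-31,
// function in bits 12-23 (zero here), reason in bits 0-11.
// 0x140000DB => library 0x14 (ERR_LIB_SSL), reason 0xDB == 219 (SSL_R_SHORT_READ).
public class OpenSslErrorDecode {
    public static void main(String[] args) {
        long code = 0x140000DBL;
        long library  = (code >> 24) & 0xFF;
        long function = (code >> 12) & 0xFFF;
        long reason   = code & 0xFFF;
        System.out.printf("library=0x%X function=0x%X reason=%d%n", library, function, reason);
    }
}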
Found the root cause of my problem: there was a cipher mismatch between the host and the client trying to connect to it. The error is then thrown on the client side.

Forcing a DNS failure

I need to test a change in our application's DNS retry behavior.
It previously switched into another mode to report the issue to the end user, but we found a bug: when the retry attempt succeeded, it would proceed to load the now-found far-end service in that "error reporting" mode.
To fix this, we have disabled the switch to the error reporting mode, and expect that on a successful retry we will load into the expected mode.
Thus, I need DNS (rndc/named) to fail once, and only once, and provide a successful result on the second attempt.
The only thing I can think of is to run a large load test and hope DNS fails like this at some point... but I am hoping someone here might know of a better solution.
Maybe there is a way to block the connection attempt once? The DNS server is part of the application, though, so it would mean blocking a connection to localhost.
You can certainly use a Docker container/VM/dedicated OS, change its DNS settings, and use it as a DNS resolver. It will probably take a lot of work to script, but it seems possible. Before that, though, I would look for some DNS mock service/server, as sketched below.
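As one concrete variant of the mock idea, here is a rough Java sketch of a "fail once" DNS proxy: it swallows the first query it sees (so the client's first lookup times out) and forwards every later query to a real resolver. The port 5353 and the upstream 8.8.8.8 are placeholder assumptions, it only handles plain UDP lookups, and the application's resolver would be pointed at this proxy for the test.

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetSocketAddress;
import java.net.SocketTimeoutException;
import java.util.concurrent.atomic.AtomicBoolean;

// "Fail once" DNS proxy: drop the first query, relay all later ones.
public class FailOnceDnsProxy {
    public static void main(String[] args) throws Exception {
        InetSocketAddress upstream = new InetSocketAddress("8.8.8.8", 53); // placeholder resolver
        AtomicBoolean droppedFirst = new AtomicBoolean(false);
        try (DatagramSocket server = new DatagramSocket(5353)) {           // placeholder port
            byte[] buf = new byte[512];
            while (true) {
                DatagramPacket query = new DatagramPacket(buf, buf.length);
                server.receive(query);
                if (droppedFirst.compareAndSet(false, true)) {
                    continue; // first query gets no answer: the client sees a timeout
                }
                try (DatagramSocket relay = new DatagramSocket()) {
                    relay.setSoTimeout(3000);
                    relay.send(new DatagramPacket(query.getData(), query.getLength(), upstream));
                    byte[] rbuf = new byte[512];
                    DatagramPacket answer = new DatagramPacket(rbuf, rbuf.length);
                    relay.receive(answer);
                    server.send(new DatagramPacket(answer.getData(), answer.getLength(),
                            query.getAddress(), query.getPort()));
                } catch (SocketTimeoutException e) {
                    // upstream did not answer in time; let the client retry
                }
            }
        }
    }
}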

mod_perl2 with apache 2.22 Apache2::RequestIO::print: (103) Software caused connection abort

I’m trying to get a mod_perl2 application ported to AWS. As part of the port I thought I’d move from Debian Squeeze to Wheezy with the latest stable mod_perl & Apache2 combination.
The application works right up to the point where I try to write JSON responses to the client. At this point, each request is canceled on the client, and on the server I get the error
Apache2::RequestIO::print: (103) Software caused connection abort
whenever I write to the client, i.e.:
$self->req->print($output);
I’ve tried tcpdumping the response to the client, and I can see it being written out, but no response is received on the client end and it just barfs chips. I can’t find any information on how to get around this.
I found quite a few people asking about this question on the net without many answers. The solution to my problem was very specific but I thought I’d post what I did anyway, it may help someone.
The client was canceling the request before the response was fully written, which was crapping out Apache::RequestIO (for reasons I still don’t know).
I couldn’t work out why I was seeing this behavior.
By using tcpdump I could see that data was being written out to the client – and it looked fine.
By inspecting the page in Chrome and looking at the network stack, I could see that my request for data was being canceled after no response was received (which was odd, because the code worked fine on other servers and I could see the response being written). Debugging was made harder because, with Apache crashing out with an error in print IO, I couldn’t check whether the bytes written equaled the bytes of data. I wasn’t sure if something was getting stuck on the server side.
So, I changed the Content-Type of the response from application/json to text/html, so that I could query the page and just look at the actual response as text. Once I did that, I could see that the response was fine.
I started to look for other causes, and I found that in the migration to the new server, I’d missed altering some URLs in the DB to point to the new server, which meant my application was trying to get some data from the old DB.
This in turn was causing a load of timing issues, which was causing my problems. Once I fixed the config, the problems went away.

Can uploading a file using org.apache.net.ftp wait infinitely?

I am using the apache.net.ftp API to download files from and upload them to an FTP server. It works fine in normal scenarios.
The issue starts when there is some latency or the connection is closed by the server for some reason.
This is where time-outs come in. I found a parameter, 'SO_TIMEOUT', which is honored when reading from the socket, so I used the ftpClient.setSoTimeout(timeInMillis) method to set it; that covered downloading a file and worked fine.
What I can't figure out is how to set a time-out while uploading a file to the FTP server.
Thanks in advance.
Check the following to make sure everything is running fine, and then try again:
Check the firewall settings, if any, which might be blocking the incoming connections and causing the connection to time out.
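One caveat worth adding: SO_TIMEOUT only bounds blocking reads, so a stalled upload (a blocking write on the data connection) can still hang indefinitely. A workaround is to run the transfer on a worker thread and enforce an overall deadline. A sketch using Apache Commons Net, with placeholder host, credentials, and file names:

import java.io.FileInputStream;
import java.io.InputStream;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;
import org.apache.commons.net.ftp.FTPClient;

// Bound an FTP upload with an overall deadline instead of relying on
// read time-outs alone. Host, credentials, and paths are placeholders.
public class BoundedUpload {
    public static void main(String[] args) throws Exception {
        FTPClient ftp = new FTPClient();
        ftp.setConnectTimeout(10_000);     // bound connect()
        ftp.connect("ftp.example.com");
        ftp.login("user", "password");
        ftp.setSoTimeout(10_000);          // bound control-connection reads
        ftp.setDataTimeout(10_000);        // bound data-connection reads

        ExecutorService pool = Executors.newSingleThreadExecutor();
        Future<Boolean> upload = pool.submit(() -> {
            try (InputStream in = new FileInputStream("local.dat")) {
                return ftp.storeFile("remote.dat", in);
            }
        });
        try {
            System.out.println("upload ok: " + upload.get(60, TimeUnit.SECONDS));
            ftp.logout();
            ftp.disconnect();
        } catch (TimeoutException e) {
            upload.cancel(true);
            ftp.disconnect();              // tear down the stuck transfer
        } finally {
            pool.shutdownNow();
        }
    }
}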