Application stops receiving data from remote machine - VB.NET

I am developing an app in VB.NET that connects to a remote Linux server to execute a task: my application sends input data for the server program, starts the execution on the server, and retrieves the output when the command execution is done. I use Renci SSH.NET to exchange files (SFTP) and to monitor the remote server (SSH). One of the critical tasks is detecting when the remote machine finishes the command execution; depending on the input, this can take anywhere from 5 minutes to many hours.
Here is my problem: for short runs (5 minutes or less), the application runs smoothly, no problems whatsoever. But when the execution of the remote command takes more than 10 minutes, my application does not detect any response from the server and just hangs, since it never receives the indication from the server that the remote task is finished. I use the ShellStream.DataAvailable property to detect when the command is done, by testing whether the shell prompt string is present in the data returned by ShellStream.Read().
Now, I know that the problem is in the app, since when I execute the remote code "manually" (logging into the server with a terminal app and executing the command) it finishes without problems. I suspect it has something to do with the way the Renci ShellStream object polls the remote machine for data. Any ideas as to what might be happening? Thank you in advance for your answers.
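For what it's worth, these symptoms (short runs fine, runs past ~10 minutes hang) often point to an idle TCP connection being silently dropped by a NAT device or firewall between client and server, rather than to the polling itself. Below is a minimal sketch of a workaround; the host, credentials, command, prompt string, and 30-second interval are all placeholders. SSH.NET's KeepAliveInterval keeps traffic flowing on an otherwise idle session, and ShellStream.Expect blocks until the prompt reappears instead of polling DataAvailable in a loop:

    ' Hedged sketch: host, credentials, command and prompt are assumptions.
    Imports Renci.SshNet

    Module LongRunningCommand
        Sub Main()
            Using client As New SshClient("server.example.com", "user", "password")
                ' Send a keep-alive every 30 seconds so an idle NAT/firewall
                ' entry is not dropped while the remote command runs.
                client.KeepAliveInterval = TimeSpan.FromSeconds(30)
                client.Connect()

                Using shell As ShellStream = client.CreateShellStream("xterm", 80, 24, 800, 600, 1024)
                    shell.WriteLine("./long_running_task input.dat")
                    ' Block until the shell prompt reappears instead of
                    ' polling DataAvailable in a read loop.
                    Dim output As String = shell.Expect("user@host:~$ ")
                    Console.WriteLine(output)
                End Using

                client.Disconnect()
            End Using
        End Sub
    End Module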

Related

Airflow BashOperator freezes after some time

BashOperator executes an ssh command that runs a process on another server and waits for its execution. The problem is that the process on the other server freezes after some time (usually a few hours) while the Airflow task is still in the running state. What might be the cause of this problem?
The problem exists on both Airflow 1 and Airflow 2, which are running on the same server.
I read that SSH sessions freeze after some time, so I added the configurations from https://unix.stackexchange.com/questions/200239/how-can-i-keep-my-ssh-sessions-from-freezing (see the sketch below),
but unfortunately it didn't help :(
Also, I tried using SSHOperator instead, but I ran into the same problem.
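For reference, the keep-alive settings suggested in that linked Q&A look roughly like this in the client's ~/.ssh/config (the values are illustrative):

    # ~/.ssh/config — client-side SSH keep-alives (values are illustrative)
    Host *
        ServerAliveInterval 60    # probe the server after 60 s of inactivity
        ServerAliveCountMax 3     # drop the session after 3 unanswered probes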

Ansible playbook stops after losing connection (even for a few seconds) with the SSH window of the VM on which it is running?

My Ansible playbook consists of several tasks, and I run it on a virtual machine. I use SSH to log in to the VM and run the playbook there. If my SSH window gets closed during the execution of any task (when the internet connection is unstable and unreliable), the execution of the playbook stops because the SSH session is already gone.
It takes around an hour for my playbook to run, and sometimes even if I lose internet connectivity for a few seconds, the SSH terminal loses its connection and the entire playbook stops. Any idea how to make the Ansible run more resilient to this problem?
Thanks in advance!
If you need to run a job on an external system that runs for a long time, and it matters that the task completes, it is an extremely bad idea to run that job in the foreground.
It does not matter whether the task is Ansible or the connection is SSH. In every case you would just "push" the command to the remote host and send it to the background with something like "nohup", if available. The problem is, of course, the tree of processes: your connection creates a process on the remote system, and that process creates the job you want to run. If the connection gets lost, all subprocesses are killed automatically by the OS (see the sketch below).
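In Ansible specifically, the idiomatic version of this fire-and-forget pattern is async with poll: 0, which detaches the job from the SSH session. A sketch, where the script path, log file, timeout, and retry counts are placeholders:

    # Start the job detached so a dropped SSH connection cannot kill it.
    - name: Launch long-running job in the background
      ansible.builtin.shell: /opt/jobs/long_task.sh > /tmp/long_task.log 2>&1
      async: 86400        # allow the job up to 24 hours
      poll: 0             # fire and forget
      register: long_job

    # Later (or in a separate play), poll for completion instead of
    # holding an interactive session open the whole time.
    - name: Wait for the job to finish
      ansible.builtin.async_status:
        jid: "{{ long_job.ansible_job_id }}"
      register: job_result
      until: job_result.finished
      retries: 60
      delay: 60           # check once a minute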
So, under Windows, you could use RDP to open a session that stays available even after the connection is lost, or use something like Cygwin and nohup via SSH to detach your process from the SSH session.
Or, if you need to run a playbook on that system, install, for example, an AWX container and use that. There are many options depending on your requirements, resources, and administrative constraints.

VPN connection disables automatic schedules on my tasks

I have created a task using Automation Anywhere to run automatically at specified times, and the schedule kicks off fine until I log on to the machine remotely using VPN access. Once I start logging on to the machine over the VPN, my automatic schedules stop working. What could be the cause of this issue, and how do I resolve it? The machine currently runs Windows 7 Enterprise.
Kind Regards,
Reuben Kekana
Given your information, the first thing that comes to mind:
When you log into the environment hosting the bot, you essentially 'steal' the connection from the AA control room. When you then disconnect from the environment, neither you nor the control room has an active session to this environment. This in effect means the environment 'logs off' and thus no longer runs any scheduled tasks.
You would need to go to the control room and re-establish this connection.

How to detach the JProfiler 8 'jpenable' remote agent

I need to profile an application that runs on a remote machine where no GUI is allowed, so I started a remote profiling session with JProfiler 8 and ran the /bin/jpenable agent on the remote host. After the successful analysis I need to stop that remote jpenable JProfiler 8 agent. How can I do that?
To check whether the previously started agent is still running, I ran the /bin/jpenable agent again. Now I don't see the previously bound JVM, so I assume it is still bound to the previous agent.
Unfortunately, it is not possible to unload a JVMTI profiling agent. The JVM only unloads agents when it shuts down.

Continue processing/execution while HTTP connection is lost (Web Server/GlassFish)

I have a question regarding web servers (such as nginx, Cherokee, or Oracle iPlanet) and Java containers (such as GlassFish): can we control what happens to the request if the user drops an unfinished connection?
When a browser opens an HTTP/HTTPS connection to a server, it hits the web server (nginx, Cherokee, or Oracle iPlanet), which then reverse-proxies to the Java container (GlassFish). The Java application then executes and does quite a lot of things, such as calculation, and finally needs to write to, say, 3 different databases. If it has finished writing to the 1st database - but not yet to the 2nd and 3rd - and the user closes the connection (by closing the browser window, losing the network connection, etc.), what will happen to the process?
Specifically, I would like the process to CONTINUE until it finishes executing all the code. I know one way is to spin the work off onto a new thread, but that incurs extra overhead. So, is there any setting/config I can use to make sure it will continue to execute even though the user has broken the connection?
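For completeness, the thread hand-off mentioned above would look roughly like this in a servlet. This is a sketch: the pool size and the writeToDatabase helpers are hypothetical, and the real cost is a small pool of worker threads rather than extra computation.

    // Hypothetical servlet sketch: hand the database writes to a background
    // executor so they run to completion even if the client disconnects.
    import java.io.IOException;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class ReportServlet extends HttpServlet {
        // Small fixed pool shared by all requests.
        private final ExecutorService background = Executors.newFixedThreadPool(4);

        @Override
        protected void doPost(HttpServletRequest req, HttpServletResponse resp)
                throws IOException {
            final String payload = req.getParameter("data");
            // Queue the writes; the request thread returns immediately, and a
            // client abort no longer interrupts the work.
            background.submit(() -> {
                writeToDatabase1(payload);  // hypothetical helpers
                writeToDatabase2(payload);
                writeToDatabase3(payload);
            });
            resp.setStatus(HttpServletResponse.SC_ACCEPTED);
        }

        @Override
        public void destroy() {
            background.shutdown();  // let queued writes finish on undeploy
        }

        private void writeToDatabase1(String p) { /* ... */ }
        private void writeToDatabase2(String p) { /* ... */ }
        private void writeToDatabase3(String p) { /* ... */ }
    }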
With nginx, you can set proxy_ignore_client_abort on; and it will not close the connection to the backend if the client closes its connection.
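A minimal sketch of where that directive goes, assuming GlassFish listens on 127.0.0.1:8080:

    # Hypothetical nginx reverse-proxy block for GlassFish
    location / {
        proxy_pass http://127.0.0.1:8080;    # GlassFish HTTP listener (assumed)
        proxy_ignore_client_abort on;        # finish the upstream request even if the client aborts
    }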