I have an Erlang application. In this application I spawn a process with spawn(?MODULE, my_foo, [my_param1, my_param2, my_param3]).
And my_foo looks like this:
my_foo(my_param1, my_param2, my_param3) ->
...
some code here
...
ok.
When I open etop, I see that this process's current function is proc_lib:sync_wait/2.
Then I tried putting exit(self(), normal) at the end of my function, but I see the same thing in etop: proc_lib:sync_wait/2.
How can I kill or exit the process correctly?
Thank you.
Note that exit(Pid, Reason) and exit(Reason) do NOT do the same thing if Pid is the process itself. exit/1 tells the current process to exit - from the inside if you like - while exit/2 sends an exit signal to the process, even if the process is itself. So when you do exit(self(), normal) you are actually sending the normal exit signal to yourself, which is ignored.
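Here is a minimal sketch of the difference (the module, exit reasons and timings are invented for illustration):

-module(exit_demo).
-export([run/0]).

run() ->
    %% exit/2 sends an exit signal to another process; a reason other
    %% than normal terminates a process that is not trapping exits.
    Pid1 = spawn(fun() -> receive after infinity -> ok end end),
    exit(Pid1, shutdown),
    timer:sleep(100),
    io:format("Pid1 alive: ~p~n", [is_process_alive(Pid1)]),   %% false

    %% exit/1 terminates the *calling* process itself.
    Pid2 = spawn(fun() ->
                         io:format("about to exit~n"),
                         exit(normal),
                         io:format("never printed~n")
                 end),
    timer:sleep(100),
    io:format("Pid2 alive: ~p~n", [is_process_alive(Pid2)]),   %% false
    ok.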
In this case putting the exit call at the end of the function should not make any difference as the process automatically dies (with reason normal) when the function with which it was started ends. It seems like the process is suspended somewhere before that.
proc_lib:sync_wait/2 is called inside proc_lib:start and proc_lib:start_link; it sits and waits for the spawned process to call proc_lib:init_ack/1,2, which supplies the return value for start. It would appear that your process never calls init_ack.
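For reference, the usual proc_lib pattern looks roughly like this (a sketch, not your code); the caller blocks in proc_lib:sync_wait/2 until the new process calls proc_lib:init_ack/2:

-module(plib_demo).
-export([start_link/0, init/1]).

start_link() ->
    %% Blocks (in proc_lib:sync_wait/2) until init/1 calls init_ack.
    proc_lib:start_link(?MODULE, init, [self()]).

init(Parent) ->
    %% ... any initialisation ...
    proc_lib:init_ack(Parent, {ok, self()}),   %% releases the caller
    loop().

loop() ->
    receive
        stop -> ok;
        _Other -> loop()
    end.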
Based on the limited information that you give in the question I would suspect that your process hasn't finished running yet.
Normally you don't need to add exit/2 to your process. It will exit automatically when the function has finished running.
You probably have a long-running call in some code here that has not finished yet. I recommend adding logging and seeing where you are stuck.
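One quick way to see where the process is stuck, as a rough sketch (with Pid bound to the pid shown in etop), is to ask the runtime from a shell:

erlang:process_info(Pid, [current_function, current_stacktrace,
                          status, message_queue_len]).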
I ran into an issue where I had long-running JitterBit operations that were scheduled close together, since I needed to keep data flowing. When they took longer than expected, I would wind up with multiple instances of the operation set running at the same time, and this was killing my performance.
I'll put the fix in the answer below.
To resolve this issue I added an additional Script Operation at the beginning of my operation set (with the schedule running on this operation). This script simply checks whether one of the operations in this set is already running. If not, it starts the next operation. If anything is running, it exits and waits for the next scheduled instance.
This is a sample of my script. This one assumes that there were originally two operations in this operation set.
<trans>
$isInQueue=GetOperationQueue("<TAG>Operations/OperationToCheck01</TAG>");
$isInQueue2=GetOperationQueue("<TAG>Operations/OperationToCheck02</TAG>");
$isRunning=$isInQueue[0][1];
$isRunning2=$isInQueue2[0][1];
if(($isRunning==1 && $isRunning!=Null()) || ($isRunning2==1 && $isRunning2!=Null()),
WriteToOperationLog("Skip for now: "+$isRunning+" / "+$isRunning2);,
WriteToOperationLog("Nothign is Running - Starting Operation Chain.");
RunOperation("<TAG>Operations/OperationToCheck01</TAG>");
);
</trans>
So I have a workflow which is supposed to throw an error after a certain condition is satisfied (a false condition). As you can see in the log directly below, it works: I do a loop exit first for the group 'coms' and an error is thrown. However, Flowgear seems to only read the last executed node and then determine the workflow's status from that. Since the loop finishes last and is successful, if you look in the second log you can see that the workflow has been evaluated as 'successful', although an error was thrown inside.
Any ideas how to make the loop break? Also, why does Flowgear only consider the last node? There should be an option in the error node to stop all execution.
Iterator nodes (Splitter and Loop) will consume the errors. The only way at this stage to get the workflow to return an error is to cause an error in the AnyError or UnhandledError part of the workflow. I've created a workflow to demonstrate this here: http://flowgear.me/s/UdpGBbd
Hope this helps.
I'm trying to understand the differences between spawn and spawn_link, but can't quite grasp what happens when the function the process executes ends.
defmodule SendAndDie do
def send_and_die(target) do
send(target, "Goodbye")
# Process.exit(self, :boom)
end
end
dying_process = spawn_link(SendAndDie, :send_and_die, [self])
:timer.sleep(500)
IO.puts("Dying process is alive: #{Process.alive?(dying_process)}")
receive do
msg -> IO.puts(msg)
end
I expected the main process to fail, as it linked to a process that clearly died before the end of the program. However, the "Goodbye" message is printed and then the program exits normally. Changing spawn_link to spawn has no effect.
When I uncomment the Process.exit in line 4, I do see the difference between spawn and spawn_link (the latter stops the whole program while the former doesn't). But Process.exit is the last thing executed in the send_and_die function. Isn't the process going to exit anyway when the function ends?
From the Erlang manual on processes:
The default behaviour when a process receives an exit signal with an exit reason other than normal, is to terminate and in turn emit exit signals with the same exit reason to its linked processes.
When the initial function of the process returns it terminates with reason normal, so this default behaviour does not bring down the linked process.
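A minimal Erlang sketch of both cases (module and function names invented for illustration; the same rules apply to the Elixir code above):

-module(link_demo).
-export([normal_return/0, abnormal_exit/0]).

normal_return() ->
    %% The linked process just returns, so it terminates with reason
    %% 'normal' and the calling process is unaffected.
    spawn_link(fun() -> ok end),
    timer:sleep(100),
    still_alive.

abnormal_exit() ->
    %% The linked process terminates with a non-normal reason; the exit
    %% signal propagates over the link and, since we are not trapping
    %% exits, kills the calling process too, so 'never_reached' is
    %% never returned.
    spawn_link(fun() -> exit(boom) end),
    timer:sleep(100),
    never_reached.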
If I were to have multiple batch files run one after another in VB.NET, would they run at the same time, or would they wait for the first one to finish before moving on to the next?
They will run concurrently unless you go out of your way to prevent that from happening. Process.Start() does not block once the process has been launched. However, you can block by using Process.WaitForExit().
For example, this will run 3 batch files at the same time:
System.Diagnostics.Process.Start("batch.bat")
System.Diagnostics.Process.Start("batch.bat")
System.Diagnostics.Process.Start("batch.bat")
This will run them one at a time:
System.Diagnostics.Process.Start("batch.bat").WaitForExit()
System.Diagnostics.Process.Start("batch.bat").WaitForExit()
System.Diagnostics.Process.Start("batch.bat").WaitForExit()
You have more control over when the blocking takes place by saving the process to a variable and calling WaitForExit() later in the code:
Dim p1 = System.Diagnostics.Process.Start("batch.bat")
' Do stuff that doesn't need to wait for process to finish
p1.WaitForExit()
They will run together. If you want it to wait, create a ProcessStartInfo object, pass it to the Process.Start call, and assign the Start method's return value to a Process variable. Then call that process's WaitForExit method.
So I need to stop a running job in Sidekiq (3.1.2) programmatically, not a scheduled one. I read the API documentation but didn't really find anything about cancelling running jobs. Is this possible with Sidekiq?
If this is not directly possible, my idea was to circumvent it by raising an exception in the job when I send the signal, then deleting the job from the retry set. This is clearly not optimal, though.
Thanks in advance
Correct, the only way to stop a job is for the job to stop itself. Your application must implement that logic.
https://github.com/mperham/sidekiq/wiki/FAQ#how-do-i-cancel-a-sidekiq-job
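A sketch of that kind of cooperative cancellation, assuming a flag stored in Redis (the key name, the chunked loop and the do_one_chunk helper are illustrative choices, not Sidekiq API):

class CancellableJob
  include Sidekiq::Worker

  def perform(*args)
    100.times do
      return if cancelled?     # stop cooperatively if someone flagged us
      do_one_chunk(args)       # hypothetical unit of work
    end
  end

  def cancelled?
    Sidekiq.redis { |conn| !conn.get("cancelled-#{jid}").nil? }
  end

  # Call this from anywhere (console, controller, another job) with the jid.
  def self.cancel!(jid)
    Sidekiq.redis { |conn| conn.setex("cancelled-#{jid}", 86_400, 1) }
  end
end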
If you know the long-running job's thread ID, it's possible to terminate it from another task:
class ThreadLightly
  include Sidekiq::Worker

  def perform(tid)
    puts "I'm %s, and I'll be terminating TID: %s..." % [self.class, tid]
    Thread.list.each do |t|
      if t.object_id.to_s == tid
        puts "Goodbye %s!" % t
        t.exit
      end
    end
  end
end
You can trigger it from the sidekiq_pusher:
bundle exec ./pusher.rb ThreadLightly $YOURJOBSTHREADID
You'll need to log Thread.current.object_id from each job since the UI doesn't show it. Also, if you run distributed Sidekiq processes, you'll need to run this task repeatedly until it lands on the same instance as the job.