I want to create a small delay so that my first set of code runs completely before the next part starts.
How can I do that in VB.NET?
Edit 1
Suppose I have a few lines of code like this
..................Statement Line 1..............
..................Statement Line 2..............
..................Statement Line 3..............
..................Statement Line 4..............
..................Statement Line 5..............
WAIT UNTIL STATEMENT 5 IS COMPLETED
..................Statement Line 6..............
..................Statement Line 7..............
..................Statement Line 8..............
..................Statement Line 9..............
..................Statement Line 10.............
Only when the execution of the first five statements is complete should the next five be executed.
First, given the code sample you've provided, line 6 will not execute until line 5 finishes. You don't need to do anything unless line 5 is kicking off an external application or creating a new thread.
Beyond that -
Thread.Sleep will introduce a delay, but more often than not it really isn't what you are looking for.
If you use Thread.Sleep the executing thread will sleep for however long you tell it. But your sample code indicates you want the thread to wait UNTIL some condition is met. Assuming you are waiting on a condition that happens outside of the thread you are sleeping, at best, you'd end up with a loop that keeps sleeping for X milliseconds and then checking the condition.
There are other approaches that are easier (in the long run) and more robust than that. If you truly want something to happen on another thread and to be alerted of its completion, consider the BackgroundWorker class.
http://msdn.microsoft.com/en-us/library/system.componentmodel.backgroundworker.aspx
It's very handy for simple multithreading tasks. You create a BackgroundWorker, handle its 'DoWork' event with the logic that should happen on the new thread, and handle its 'RunWorkerCompleted' event. You call 'RunWorkerAsync' to start the process, and when the 'RunWorkerCompleted' event fires, you can continue execution with line 6.
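As a rough sketch (the module and handler layout here are made up, and this assumes VB 2010 or later for the multi-line lambdas), the pattern looks something like this:

Imports System.ComponentModel

Module WorkerExample
    Sub RunStatements()
        Dim worker As New BackgroundWorker()

        ' Statement 5: the long-running work runs on the worker thread.
        AddHandler worker.DoWork, Sub(sender, e)
                                      ' ...statement 5 goes here...
                                  End Sub

        ' Statements 6 to 10 run only after statement 5 has completed.
        AddHandler worker.RunWorkerCompleted, Sub(sender, e)
                                                  ' ...statements 6 to 10 go here...
                                              End Sub

        ' Statements 1 to 4 run here as usual...
        worker.RunWorkerAsync()   ' start statement 5 without blocking this thread
    End Sub
End Module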
I hope that makes sense/helps.
Are you looking to make a thread sleep?
Thread.Sleep(100)
Where 100 is the number of milliseconds you want the thread to sleep for.
Also make sure to have Imports System.Threading, which I assume you have if you already have multiple threads.
EDIT: Okay, so you've added a bit of code. Still, this comes down to whether you have more than one thread running, and from your question it looks like it's all in one thread. In this case, statement 5 will always finish before statement 6 runs. That's how code works. The only case where it wouldn't is if one of statements 1-5 spawns something on a new thread.
I think
Application.DoEvents()
should do that.
Use the BackgroundWorker's "IsBusy" property and do not allow line 6 to execute while the worker is still busy.
Read more about BackgroundWorker here
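If you go the IsBusy route, a rough sketch (this assumes a WinForms app, since Application.DoEvents only makes sense there, and that "worker" is the BackgroundWorker described above):

worker.RunWorkerAsync()        ' statement 5 starts on the worker thread
Do While worker.IsBusy         ' hold back statement 6 while the worker is still busy
    Application.DoEvents()     ' keep the UI responsive while polling
Loop
' ...statements 6 to 10...

The RunWorkerCompleted approach from the earlier answer is usually cleaner than polling like this.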
I currently have some code that requires running CFRunLoopRun().
This runs indefinitely. I would like to replace it with something that only runs for a set amount of time, say 30 seconds.
I tried CFRunLoopRunInMode(), but it exits immediately.
CFRunLoopRun(); // Works but never stops, I need to stop after 30s
CFStringRef mode = (__bridge CFStringRef)@"mode";
CFTimeInterval timeInterval = 10.0;
CFRunLoopRunInMode(mode, timeInterval, FALSE); // Doesn't work, syntax is wrong?, stops immediately
You are using a custom mode. It looks like you're just making it up for this one call. That would suggest that there are no input sources scheduled in that mode. All of the run-loop run functions exit immediately if there are no input sources.
CFRunLoopRunInMode() actually returns a value indicating why it exited. You should examine that.
The difference with CFRunLoopRun() is that it runs the run loop in the default mode (kCFRunLoopDefaultMode), not your custom mode. That mode almost certainly does have input sources scheduled (at least, assuming this is the main thread and thus the main run loop).
So, you could do this:
CFRunLoopRunInMode(kCFRunLoopDefaultMode, 10.0, FALSE);
All of that said, running the run loop for a fixed time period is rarely the right approach. What are you actually trying to achieve? What led you to conclude that you need to run the run loop for 10 seconds? Why not just return to the normal event loop and use a timer to do some work in 10 seconds?
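For example, a one-shot CFRunLoopTimer on the main run loop could look roughly like this (a sketch only; the function names timerFired and scheduleDelayedWork are placeholders):

#include <CoreFoundation/CoreFoundation.h>

/* Called by the run loop roughly 10 seconds after scheduling. */
static void timerFired(CFRunLoopTimerRef timer, void *info) {
    /* ...do the work you wanted to do after the delay... */
}

static void scheduleDelayedWork(void) {
    CFRunLoopTimerRef timer = CFRunLoopTimerCreate(
        kCFAllocatorDefault,
        CFAbsoluteTimeGetCurrent() + 10.0,  /* fire date: about 10 s from now */
        0,                                  /* interval 0 means one-shot */
        0, 0,                               /* flags, order */
        timerFired, NULL);
    CFRunLoopAddTimer(CFRunLoopGetCurrent(), timer, kCFRunLoopDefaultMode);
    CFRelease(timer);
    /* ...then just return to the normal event loop instead of blocking
       in CFRunLoopRun() or CFRunLoopRunInMode(). */
}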
I have a usage issue I need some advice on.
I have a process with a main flow which loops, retrying a task every n hours until either a condition is met or a timeout is reached. So far so good.
There is a transactional sub process triggered to run in parallel to this main loop which, for as long as this main loop is active, carries out its own looping behaviour (every x days). This second loop should run for as long as the main loop continues, and be killed as soon as the main loop reaches one of its progression criteria.
The way I'd like to model it would be to use a message/signal throw event from the main flow after it has passed its progress criteria, with a corresponding catch message/signal as a boundary event on the sub process, which then triggers a sub process end/terminate event inside the boundaries of the sub process.
I've looked long and hard at resources and the standard, and I can't see any examples of people using boundary events in this way (as an input from outside the sub process, leading to an end event inside the sub process). Any idea if this is valid?
If not valid, anyone have a better method for having a main flow kill a sub process in this way?
Main Process: Start, parallel gateway (fork), first branch contains subprocess 1, second branch contains subprocess 2, exclusive gateway (join), end.
Subprocess 1: Start, loop, exit from the loop under some condition, then end.
Subprocess 2: Start, loop, no end node.
This way, subprocess 2 can't end the flow on its own. But subprocess 1 can end, and through the exclusive join gateway, subprocess 2 will be ended as well.
I'm not quite sure whether a parallel fork, followed by an exclusive join, is actually allowed formally in BPMN. But some tools can handle it, and I received this hint from a tool vendor (Bonita).
I am currently reading Programming Erlang: Software for a Concurrent World (Second Edition) by Joe Armstrong, and I have the following assignment:
Write a function start(AnAtom, Fun) to register AnAtom as spawn(Fun). Make sure your program works correctly in the case when two parallel processes simultaneously evaluate start/2. In this case you must guarantee that one succeeds and the other fails.
I understand the first bit. I need to register the process spawned from Fun under the name AnAtom. However, what does the second part want me to do?
If two processes call start/2 at the same time, then one of them must fail? Why? Given that AnAtom is different from any other registered name (which will be checked inside the body of start/2), why would I want one of the processes to fail?
From what I can understand so far we have:
A = spawn(fun process1/0).
B = spawn(fun process2/0).
A ! {self(), registerProcess}.  % which should call start/2
B ! {self(), registerProcess}.  % which should call start/2
What is the problem here? Two processes will evaluate start/2. Why fail one of them? I'm probably missing the logic here or what I understood so far is completely wrong. Can anybody explain this in easier terms so I can get my head around it?
I believe the exercise is asking you to think about what happens when two parallel processes evaluate start/2 using the SAME atom as the first parameter. When start(a, MyFunction) completes, there should be a spawned process (running MyFunction) associated with the name (atom) a... what happens if
start(cool, MyFun1) and
start(cool, MyFun2)
are both executed simultaneously? How do you guarantee that one succeeds and the other fails.... does this help?
EDIT: I think you are not understanding the register process part of the assignment. When you get done with start(name, MyFun), doing a whereis(name) from the repl should return the process identifier of the process that got created.
This is not about sending the process a message to give it a name; it is about registering the process you created under the name passed in as the first parameter to start/2.
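A minimal sketch of one possible start/2 (the module name registrar is made up, and this is just one way to do it, not necessarily the book's intended solution):

-module(registrar).
-export([start/2]).

%% Spawn Fun and register the new process under AnAtom.
%% register/2 is atomic: if two callers race with the same atom,
%% exactly one register/2 succeeds and the other raises badarg,
%% so that caller's start/2 fails.
start(AnAtom, Fun) ->
    Pid = spawn(Fun),
    try register(AnAtom, Pid) of
        true -> {ok, Pid}
    catch
        error:badarg ->
            exit(Pid, kill),   % clean up the extra process
            {error, already_registered}
    end.

After start(cool, MyFun1) succeeds, whereis(cool) returns the new process identifier, and a simultaneous start(cool, MyFun2) gets {error, already_registered}.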
EDIT: I realized that I, unfortunately, overlooked a semicolon at the end of the while statement in the first example code and misinterpreted it myself. So there is in fact an empty loop for threads with threadIdx.x != s, a convergence point after that loop, and a thread waiting at this point for all the others without incrementing the s variable. I am leaving the original (uncorrected) question below for anyone interested in it. Be aware that there is a semicolon missing at the end of the second line in the first example, so s++ has nothing in common with the loop body.
--
We were studying serialization in our CUDA lesson and our teacher told us that a code like this:
__shared__ int s = 0;
while (s != threadIdx.x)
    s++; // serialized code
would end up with a HW deadlock because the nvcc compiler puts a reconvergence point between the while (s != threadIdx.x) and s++ statements. If I understand it correctly, this means that once the reconvergence point is reached by a thread, this thread stops execution and waits for the other threads until they reach the point too. In this example, however, this never happens, because thread #0 enters the body of the while loop, reaches the reconvergence point without incrementing the s variable and other threads get stuck in an endless loop.
A working solution should be the following:
__shared__ int s = 0;
while (s < blockDim.x)
    if (threadIdx.x == s)
        s++; // serialized code
Here, all threads within a block enter the body of the loop, all evaluate the condition and only thread #0 increments the s variable in the first iteration (and loop goes on).
My question is, why does the second example work if the first hangs? To be more specific, the if statement is just another point of divergence, and in terms of assembly it should be compiled into the same conditional jump instruction as the condition in the loop. So why isn't there a reconvergence point before s++ in the second example, and why is it instead placed immediately after the statement?
In other sources I have only found that divergent code is computed independently for every branch - e.g. in an if/else statement, first the if branch is computed with all else-branch threads masked within the same warp, and then the other threads compute the else branch while the first ones wait. There's a reconvergence point after the if/else statement. Why then does the first example freeze, not having the loop split into two branches (a true branch for one thread and a waiting false branch for all the others in a warp)?
Thank you.
It does not make sense to put the reconvergence point between the while (s != threadIdx.x) check and s++;. It disrupts the program flow, since the reconvergence point for a piece of code should be reachable by all threads at compile time. The picture below shows the flowchart of your first piece of code and the possible and impossible points of reconvergence.
Regarding this answer about recording the convergence point via the SSY instruction, I created the simple kernel below, resembling your first piece of code:
__global__ void kernel_1() {
    __shared__ int s;
    if (threadIdx.x == 0)
        s = 0;
    __syncthreads();
    while (s == threadIdx.x)
        s++; // serialized code
}
and compiled it for CC=3.5 with -O3. Below is the result of using the cuobjdump tool on the output to observe the CUDA assembly. The result is:
I'm not an expert in reading CUDA assembly, but I can see while loop condition checks at lines 0038 and 00a0. At line 00a8, it branches to 0x80 if the while loop condition is satisfied and executes the code block again. The reconvergence point is set at line 0058, designating line 0xb8, which is after the loop condition check near the exit, as the reconvergence point.
Overall, it is not clear what you're trying to achieve with this piece of code. Also, in the second piece of code, the reconvergence point should again be after the while loop's code block (I don't mean between the while and the if).
The reason why it "hangs" is neither a HW deadlock nor branching, at least not directly. You produce an endless loop for one or multiple threads (as already suspected).
In your example, there isn't really a convergence point. Since you do not use any synchronization, there aren't any threads that actually wait. What happens here with the while-loop is pretty much a busy-wait.
A kernel only finishes if all threads return. Since you have one (or multiple) endless loops (by accident maybe even none - this is unlikely however) the kernel will never finish.
You declared a shared variable s. This variable is known to all threads within a block.
With your while-statement you basically say (to each thread): increment s until it reaches the value of your (local) thread id. Since all threads are incrementing s in parallel, you introduce race conditions.
Example:
Thread 5 is looping and checking for s to become 5
s is 4
Two threads increment s, it becomes 6
At the same time thread 5 only reached the end of its loop.
Now it reaches the next loop iteration and checks for s and it's not 5.
Thread 5 will never be able to finish since you check via == and the value of s already exceeded the value of the thread id.
Also your solution is quite confusing, because each thread executes the serialized code consecutively (which probably was the intention after all - even though that actually is strange):
Thread 0 will execute the serialized code
After that, thread 1 will execute the serialized code
and so on
Most examples show a program where each thread works on some code, then all threads are synchronized, and only a single thread executes some more code (maybe it needs the results of all threads).
So, your second example "works" because no thread is stuck in an endless loop. However, I can't think of a reason why anyone would use such code, since it is confusing and, well, not parallel at all.
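For reference, the pattern mentioned above usually looks something like this (a generic sketch unrelated to the original code; the per-block sum is just a placeholder for "some work" followed by a single-thread step, and it assumes blockDim.x == 256):

__global__ void kernel_2(const int *in, int *out) {
    __shared__ int partial[256];   // one slot per thread, assumes blockDim.x == 256

    // Each thread does its own piece of work in parallel.
    partial[threadIdx.x] = in[blockIdx.x * blockDim.x + threadIdx.x];

    __syncthreads();               // every thread reaches this point before continuing

    // Only a single thread executes the final, serial part.
    if (threadIdx.x == 0) {
        int sum = 0;
        for (int i = 0; i < blockDim.x; ++i)
            sum += partial[i];
        out[blockIdx.x] = sum;
    }
}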
I have used the following code to create and close a process repeatedly in a loop:
Dim vProcessInfo As New ProcessStartInfo()   ' FileName is assumed to be set elsewhere
For i = 1 To 100
    Dim p As New Process()
    vProcessInfo.Arguments = "some" & i.ToString()
    p.StartInfo = vProcessInfo
    p.Start()
    p.WaitForExit()
    p.Close()
Next i
The above code worked for me successfully, but it takes too much time for process creation and disposal. I had to change the process argument dynamically in each iteration. Is there any way to change the process argument dynamically, or is there any better method to reduce the time? Please help me.
"Is there any way to change the process argument dynamically" - do you mean you want to start one process, and change its command line arguments after it's started? No, you can't do that - but you could communicate with it in other ways, for example:
Using standard input/output (e.g. write lines of text to its standard input; see the sketch below)
Using files (e.g. you write to a file, it monitors the directory, picks up the file and processes it)
Using named pipes or sockets
Creating a process is a relatively slow operation. You can't easily speed that up - but if you can change your process in some way like the above, and just launch it once, that should make it a lot faster.
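A rough sketch of the standard-input approach (the executable name "worker.exe" and the line-based protocol are made-up placeholders; the child process has to be written to read lines from its standard input):

Dim info As New ProcessStartInfo("worker.exe")
info.UseShellExecute = False
info.RedirectStandardInput = True

Dim p As New Process()
p.StartInfo = info
p.Start()                                       ' launch the process only once

For i = 1 To 100
    ' Send each "argument" as a line of text instead of restarting the process.
    p.StandardInput.WriteLine("some" & i.ToString())
Next i

p.StandardInput.Close()                         ' signal end of input
p.WaitForExit()
p.Close()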