I'm playing a game where the launcher/patcher stays open even after the game launches. I've tried some variations of this without luck:
Process, Exist, Game.exe
Process, Close, GamePatcher.exe
Return
Any ideas?
A While loop should help you out. Here is a solution using a little ProcExists function that can be reused.
Loop
{
    If ProcExists("Game.exe") and ProcExists("GamePatcher.exe")
        break
    Sleep 500
}
; Both procs exist, wait for Game to close.
While ProcExists("Game.exe")
    Sleep 500
Process, Close, GamePatcher.exe
Reload ; Reloads waiting for both to exist again

ProcExists(p)
{
    Process, Exist, % p
    Return ErrorLevel
}
If you want this to perform continuously (keep the script running at all times), it would be best to implement SetTimer like this:
#Persistent
SetTimer, CheckGame, 1000
Return

CheckGame:
If !ProcExists("Game.exe")
    Process, Close, GamePatcher.exe
Return

ProcExists(p)
{
    Process, Exist, % p
    Return ErrorLevel
}
I'd like to preface this by saying that I really tried looking here and anywhere else for an answer, but without success.
Here is the problem:
I'm trying to automate a test for a web application that loads a large amount of data and therefore stays in a loading state for a while. I have to wait for the page to finish loading before the next step, and I'm trying to figure out how to wait for the loading gif to disappear so that my tests can continue. The gif is a basic spinner.
I would like to avoid a hard-coded wait time (cy.wait()); that is really only the last resort if no one is able to solve this.
What I found so far are these two functions:
cy.waitFor - this seems to be used mostly in situations where you are waiting for an element to appear. I've tried this in different scenarios and it works perfectly, but I haven't been able to apply it here.
cy.waitUntil - this seems to be the thing I'm looking for, but there is one huge problem. This function seems to have a timeout, and I haven't been able to change it any other way than by changing timeouts globally for all the functions. If I set the global timeout to some longer period (a minute or more), then it works exactly how I would want it to, but obviously I don't really want such a long timeout for everything.
The way I see it, there are two possible solutions to this problem:
1) To change/turn off the timeout for the waitUntil function.
2) Or to somehow make it work with waitFor, because there seems to be no implicit timeout there and it just kind of waits forever for something to happen.
Here is the code snippet of the situation:
cy.get('#load-data-button').click() // this clicks the button that starts the loading process
cy.waitUntil(() => cy.get('body > div > my-app > billing > div > div > div.text-center > img').should('not.be.visible')) // this is the function that waits until the loading gif is invisible
I would be eternally grateful for a solution, because unfortunately no one at my company is really a Cypress expert, so they are not able to help.
Could you try something like:
cy.get('[loading-spinner-identifier]', { timeout: 60000 }).should("not.be.visible");
cy.get('[an element that should now be visible]').should("be.visible");
It's a bit rough and ready, but I've used it when waiting for spinners to finish up. This waits up to a minute for the spinner to go away and then checks that something I'm expecting is there. I've had varying success with this, but it has helped in some situations. Note that the timeout is passed to cy.get, since .should does not take an options object.
You can indeed use cy.waitUntil(). As you can see in the documentation of that package (https://www.npmjs.com/package/cypress-wait-until#arguments), the timeout is just the default (5000 ms) of an argument you can change. You can even change how often Cypress checks the condition you want, so if you do something like:
cy.waitUntil(() => cy.get('body > div > my-app > billing > div > div > div.text-center > img').should('not.be.visible'), {
errorMsg: 'The loading time was too long even for this crazy thing!',
timeout: 300000,
interval: 5000
});
...it will keep retrying every 5 seconds for up to 300 seconds (5 minutes) (just an example with a very long timeout).
Also, maybe you can consider waiting until some element is visible after the loading, instead of checking that the spinner disappeared. If you specifically want to test the spinner, that is fine; but if some element appears or becomes interactable after the loading, it could still be in the wrong state for a fraction of a second after the spinner is no longer visible. If that is the case, I would avoid checking for the spinner disappearing in the first place, unless it adds value to your test, and I would just wait for that element to be in the state you want (for example, wait until some input is not disabled, an element is visible, text appears somewhere...).
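For example, a minimal sketch of that alternative; '#billing-table' and the expected text 'Invoice' are hypothetical placeholders, not selectors from the original app:
cy.get('#load-data-button').click() // start the long-running load
// wait (up to an assumed 60 s per-command timeout) for the data itself rather than the spinner
cy.get('#billing-table', { timeout: 60000 })
  .should('be.visible')
  .and('contain', 'Invoice') // hypothetical content that only exists once loading has finished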
Add a positive and then a negative assertion to make sure the loading spinner appears and then goes away.
The default time for an assertion to pass is 4 s.
You can increase this default by setting defaultCommandTimeout in cypress.json:
{
...
"defaultCommandTimeout": 4000,
...
}
const waitForSpinner = () => {
// First, make sure the loading indicator shows up (positive assertion)
cy.get('[data-qa="qa-waiting-spinner"]').should('be.visible')
// Then Assert it goes away (negative assertion)
cy.get('[data-qa="qa-waiting-spinner"]').should('not.be.visible')
}
I want to run multiple shell processes, but when I try to run more than 63, they hang. When I reduce max_threads in the thread pool to n, it hangs after running the nth shell command.
As you can see in the code below, the problem is not in start blocks per se, but in start blocks that contain the shell command:
#!/bin/env perl6
my $*SCHEDULER = ThreadPoolScheduler.new( max_threads => 2 );
my @processes;

# The Promises generated by this loop work as expected when awaited
for @*ARGS -> $item {
    @processes.append(
        start { say "Planning on processing $item" }
    );
}

# The nth Promise generated by the following loop hangs when awaited (where n = max_threads)
for @*ARGS -> $item {
    @processes.append(
        start { shell "echo 'processing $item'" }
    );
}

await(@processes);
Running ./process_items foo bar baz gives the following output, hanging after processing bar, which is just after the nth (here 2nd) thread has run using shell:
Planning on processing foo
Planning on processing bar
Planning on processing baz
processing foo
processing bar
What am I doing wrong? Or is this a bug?
Perl 6 distributions tested on CentOS 7:
Rakudo Star 2018.06
Rakudo Star 2018.10
Rakudo Star 2019.03-RC2
Rakudo Star 2019.03
With Rakudo Star 2019.03-RC2, use v6.c versus use v6.d did not make any difference.
The shell and run subs use Proc, which is implemented in terms of Proc::Async. This uses the thread pool internally. By filling up the pool with blocking calls to shell, the thread pool becomes exhausted, and so cannot process events, resulting in the hang.
It would be far better to use Proc::Async directly for this task. The approach of using shell and a load of real threads won't scale well; every OS thread has memory overhead, GC overhead, and so forth. Since spawning a bunch of child processes is not CPU-bound, this is rather wasteful; in reality, just one or two real threads are needed. So, in this case, perhaps the implementation pushing back on you when you do something inefficient isn't the worst thing.
I notice that one of the reasons for using shell and the thread pool is to try and limit the number of concurrent processes. But this isn't a very reliable way to do it; just because the current thread pool implementation sets a default maximum of 64 threads does not mean it always will do so.
Here's an example of a parallel test runner that runs up to 4 processes at once, collects their output, and envelopes it. It's a little more than you perhaps need, but it nicely illustrates the shape of the overall solution:
my $degree = 4;
my @tests = dir('t').grep(/\.t$/);
react {
    sub run-one {
        my $test = @tests.shift // return;
        my $proc = Proc::Async.new('perl6', '-Ilib', $test);
        my @output = "FILE: $test";
        whenever $proc.stdout.lines {
            push @output, "OUT: $_";
        }
        whenever $proc.stderr.lines {
            push @output, "ERR: $_";
        }
        my $finished = $proc.start;
        whenever $finished {
            push @output, "EXIT: {.exitcode}";
            say @output.join("\n");
            run-one();
        }
    }
    run-one for 1..$degree;
}
The key thing here is the call to run-one when a process ends, which means that you always replace an exited process with a new one, maintaining up to 4 processes running at a time (so long as there are things left to do). The react block naturally ends when all processes have completed, because the number of events subscribed to drops to zero.
I have my game set up so that it starts in, and returns to, a loading screen room for 45 steps, after which the next room is randomized. So in alarm[0] the following code runs:
randomize();
chosenRoom = choose(rm_roomOne, rm_roomTwo, rm_roomThree, rm_roomFour);
room_goto(chosenRoom);
The code here works fine the first time, but when it goes back from the randomly chosen room to the loading screen room it stays there and doesn't execute the code again.
Any help would be very much appreciated.
This may sound stupid but did you remember to set the alarm again after it's gone off? I know I've done this several times without thinking. Without seeing your code, I assume that after the alarm goes off it's not being set again, so it won't go off again.
I'm guessing the control object is "persistent", so the control object only exists once and remains forever (also after switching rooms). Therefore the Create event only fires once, and the alarm only gets set once.
Try moving your code to the "Room Start" event in your controller object and it will work.
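A minimal sketch of that fix, assuming the controller object is obj_control and the loading room is rm_loading (both names are hypothetical):
// obj_control - Room Start event (sketch; object and room names are assumptions)
if (room == rm_loading)
{
    // Re-arm the countdown every time the loading room is entered again
    alarm[0] = 45;
}
This way the alarm gets set on every visit to the loading room, not just once in the Create event.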
You can use event_perform(ev_alarm, 0);.
The code below performs alarm[0] after 45 steps, and then triggers alarm[0] again after another 45 steps. Note that you have to put it in the Step event, and you have to initialize the wait and times variables to zero in the Create event.
times is the repeat count and wait is the number of steps between events.
if (wait == 45 && times != 2) {
    event_perform(ev_alarm, 0);
    times++;
    wait = 0;
}
else {
    wait++;
}
#include <IE.au3>
Local $oIE = _IECreate("http://www.demo.com/")
Local $oLinks = _IELinkGetCollection($oIE)
Local $iNumLinks = @extended
_IELinkClickByIndex($oIE, Random(0, $iNumLinks))
Sleep(60000)
_IEQuit($oIE)
In this script, after opening a random link, I want it to wait for the specified time, but it looks like Sleep() is not working properly: sometimes it waits forever, and sometimes it waits longer than the Sleep() time I set!
Is there any way to make that page wait for one minute before closing IE without using the Sleep() command?
This will do the trick:
Local $oIE = _IECreate("http://www.demo.com/")
Local $oLinks = _IELinkGetCollection($oIE)
Local $iNumLinks = @extended
_IELinkClickByIndex($oIE, Random(0, $iNumLinks - 1, 1), 0) ; integer index (0-based), don't wait for the page to load
Sleep(60000)
_IEQuit($oIE)
Wait times were different because the page load time (which is inconsistent) was added on top of the sleep.
By setting the 3rd parameter of _IELinkClickByIndex to zero, it will not wait for the page to load, hence the load time is not added to the sleep time.
I have an Erlang application. In this application I run a process with spawn(?MODULE, my_foo, [my_param1, my_param2, my_param3]).
And my_foo:
my_foo(my_param1, my_param2, my_param3) ->
...
some code here
...
ok.
When I open etop I see that this my_foo/3 process has the status proc_lib:sync_wait/2.
Then I tried putting exit(self(), normal) at the end of my function, but I see the same behavior: proc_lib:sync_wait/2 in etop.
How can I kill or exit the process correctly?
Thank you.
Note that exit(Pid, Reason) and exit(Reason) do NOT do the same thing if Pid is the process itself. exit/1 tells the current process to exit - from the inside if you like - while exit/2 sends an exit signal to the process, even if the process sends the signal to itself. So when you do exit(self(), normal) you are actually sending the normal exit signal to yourself, which is ignored.
In this case putting the exit call at the end of the function should not make any difference as the process automatically dies (with reason normal) when the function with which it was started ends. It seems like the process is suspended somewhere before that.
proc_lib:sync_wait/2 is called inside proc_lib:start/start_link; it sits and waits for the spawned process to call proc_lib:init_ack/1,2, which delivers the return value for start. It would appear that your process does not call init_ack.
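For reference, a minimal sketch of the proc_lib handshake (the module name and function layout here are illustrative, not taken from your code):
%% The caller of start_link/0 blocks in proc_lib:sync_wait/2
%% until the new process calls proc_lib:init_ack/2.
-module(my_proc).
-export([start_link/0, init/1]).

start_link() ->
    proc_lib:start_link(?MODULE, init, [self()]).

init(Parent) ->
    proc_lib:init_ack(Parent, {ok, self()}),   % releases the starter
    loop().

loop() ->
    receive
        stop -> ok
    end.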
Based on the limited information that you give in the question I would suspect that your process hasn't finished running yet.
Normally you don't need to add exit/2 to your process. It will exit automatically when the function has finished running.
You probably have a long-running call in the "some code here" part that has not finished running. I recommend adding logging and seeing where you are stuck.