concurrently handle input does not work as expected - npm-scripts

I am trying to combine two shell commands with the concurrently library, and there should be an option to forward user input to one of the child processes (see the docs, section --handle-input). But somehow it does not work in my case:
npm script
"test:unit": "concurrently --kill-others --handle-input --names test,build \"vitest --environment jsdom\" \"vite build --watch\"",
results in
[test] Tests 1 failed | 12 passed (13)
[test] Time 94ms
[test]
[test]
[test] FAIL Tests failed. Watching for file changes...
[test] press u to update snapshot, press h to show help
I press "u", just as when I run it separately, but nothing happens, even when I try to confirm with the Enter key.
So maybe I misunderstand the library. How can I pass my keystrokes to the child process "test"?
Thank you for any advice.

I got help in the GitHub issue mentioned above. Credit to Pascal Jufer.
Using the --raw option did the trick!
So my final script is:
"test:unit": "concurrently --raw --kill-others --handle-input --names test,build \"vitest --environment jsdom\" \"vite build --watch\"",

Related

Unit-testing expect script

I have an expect script that I would like to write unit tests for, but I'm unsure how to go about it.
My initial thought was to override keychain, lpass and bw somehow, but I have no idea how to do this without modifying the original script. In my other tests I have overridden functions with shell function stubs and set PATH='' in some cases. I guess I could test all three executed commands manually, but that doesn't really test the project as a whole, and it leaves untested some code which is vital to the functionality.
#!/usr/bin/expect --
set manager [lindex $argv 0]
# strip manager part
set argv [lrange $argv 1 end]
spawn -noecho keychain --quick --quiet --agents ssh {*}$argv
foreach key $argv {
    if {$manager == "lastpass"} {
        set pass [exec lpass show --name $key --field=Passphrase | tr -d '\n']
    }
    if {$manager == "bitwarden"} {
        set pass [exec bw get password $key | tr -d '\n']
    }
    expect ":"
    send "$pass\r"
}
interact
Any suggestions would be highly appreciated!
For unit testing, you can just put a directory at the start of the PATH with mock scripts for the keychain, lpass and bw commands; see the sketch below. After all, in a unit test you're really just checking that the code in the script itself is plausible and doesn't contain stupid errors. Yes, there are other ways of doing that, but mocking the commands via a PATH tweak is definitely the easiest and most effective way.
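For instance, a harness like the following sketch; the file names, logged strings, and canned passphrase are illustrative assumptions, not part of the original project:
#!/bin/sh
# Hypothetical unit-test harness that mocks keychain, lpass and bw via PATH.
mockdir=$(mktemp -d)
export MOCK_LOG="$mockdir/calls.log"

# keychain mock: log the call, then prompt once per key so the script's
# expect ":" / send loop has something to answer.
cat > "$mockdir/keychain" <<'EOF'
#!/bin/sh
echo "keychain $*" >> "$MOCK_LOG"
for arg in "$@"; do
    case $arg in --*|ssh) continue ;; esac
    printf 'Enter passphrase for %s: ' "$arg"
    read -r reply
    echo "sent $reply" >> "$MOCK_LOG"
done
EOF

# lpass and bw mocks: log the call and print a canned passphrase.
for cmd in lpass bw; do
    cat > "$mockdir/$cmd" <<'EOF'
#!/bin/sh
echo "$0 $*" >> "$MOCK_LOG"
printf 'dummy-passphrase'
EOF
done
chmod +x "$mockdir"/keychain "$mockdir"/lpass "$mockdir"/bw

# Run the script under test with the mocks first in PATH.
PATH="$mockdir:$PATH" ./script-under-test lastpass id_rsa   # placeholder name

# Assert on the recorded calls rather than on real side effects.
grep -q 'lpass show --name id_rsa' "$MOCK_LOG" || echo 'FAIL: lpass not called'
rm -rf "$mockdir"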
However, this is definitely a case where the useful testing is integration testing, where you run against the real commands. Of course, you might do that in some sort of testing environment; a VM, or something comparatively lightweight like a Docker container, might help here.
You do not need to test whether exec and spawn obey the PATH. That's someone else's job and definitely is tested!

How to fetch a launch plan using the Flyte APIs without specifying a SHA?

I would like to use the Flyte APIs to fetch the latest launch plan for a deployment environment without specifying the SHA.
Users are encouraged to specify the SHA when referencing Launch Plans or any other Flyte entity. However, there is one exception: Flyte has the notion of an active launch plan. For a given project/domain/name combination, a Launch Plan can have any number of versions; all four fields combined identify one specific Launch Plan, and those four fields form the primary key. At most one of those launch plans can also be what we call 'active'.
To see which ones are active, you can use the list-active-launch-plans command in flyte-cli
(flyte) captain#captain-mbp151:~ [k8s: flytemain] $ flyte-cli -p skunkworks -d production list-active-launch-plans -l 200 | grep TestFluidDynamics
NONE 248935c0f189c9286f0fe13d120645ddf003f339 lp:skunkworks:production:TestFluidDynamics:248935c0f189c9286f0fe13d120645ddf003f339
However, please be aware that if a launch plan is active, and has a schedule, that schedule will run. There is no way to make a launch plan "active" but disable its schedule (if it has one).
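If you need that URN programmatically, a small shell wrapper around the same command works; a hedged sketch, assuming the three-column output shown above (state, version, URN):
# Hypothetical helper: print the URN of the active launch plan for one name.
project=skunkworks
domain=production
name=TestFluidDynamics
flyte-cli -p "$project" -d "$domain" list-active-launch-plans -l 200 \
    | grep ":${name}:" \
    | awk '{print $NF}'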
If you would like to set a launch plan as active, you can do so with the update-launch-plan command.
First find the version you want (results truncated):
(flyte) captain#captain-mbp151:~ [k8s: flytemain] $ flyte-cli -p skunkworks -d staging list-launch-plan-versions -n TestFluidDynamics
Using default config file at /Users/captain/.flyte/config
Welcome to Flyte CLI! Version: 0.7.0b2
Launch Plan Versions Found for skunkworks:staging:TestFluidDynamics
Version Urn Schedule Schedule State
d4cf71c20ce987a4899545ae01286f42297a8f3b lp:skunkworks:staging:TestFluidDynamics:d4cf71c20ce987a4899545ae01286f42297a8f3b
9d3e8d156f7ba0c9ac338b5d09949e88eed1f6c2 lp:skunkworks:staging:TestFluidDynamics:9d3e8d156f7ba0c9ac338b5d09949e88eed1f6c2
248935c0f189c928b6ffe13d120645ddf003f339 lp:skunkworks:staging:TestFluidDynamics:248935c0f189c928b6ffe13d120645ddf003f339
...
Then run:
flyte-cli update-launch-plan --state active -u lp:skunkworks:staging:TestFluidDynamics:d4cf71c20ce987a4899545ae01286f42297a8f3b

how to see if a process by name is running in tcl

I want to use pidof to find a process by name in Tcl. I have used [exec pidof $proc_name], but it always returns an error: child process exited abnormally.
I read somewhere that exec always treats a non-zero exit code as an error, while pidof just returns the process ID number. Does anyone know if there is a workaround? Thanks in advance!
The reason I want to use pidof is to see whether the process is running; if not, I will restart it.
The problem is that pidof does strange things with exit codes:
Exit Status
0   At least one program was found with the requested name.
1   No program was found with the requested name.
This interacts badly with exec, which treats a non-zero exit code as indicating that it should tell the rest of Tcl that there was an error.
The simplest way of dealing with this is a little extra shell script wrapper. Let's hide it inside a procedure for convenience:
proc pidof {name} {
    # Map pidof's "no process found" exit code (1) to 0 so exec does not
    # raise an error; any other exit code passes through unchanged.
    exec /bin/bash -c "pidof '$name'; exit \$(( \$? == 1 ? 0 : \$? ))"
}
All that does is turn pidof's "not found" exit code into a success before it hits back into Tcl, so a missing process yields an empty result instead of an error.
(You could also fix this using the techniques described in the exec manual, but I think it's simpler to fix it on the bash side this time.)
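For completeness, the technique from the exec manual looks roughly like this; a sketch using Tcl 8.6's try/trap, not code from the original answer:
proc pidof {name} {
    # exec sets -errorcode to {CHILDSTATUS pid code} when a child exits non-zero.
    try {
        return [exec pidof $name]
    } trap CHILDSTATUS {result options} {
        set code [lindex [dict get $options -errorcode] 2]
        if {$code == 1} {
            return ""    ;# no process with that name
        }
        # Re-raise anything that is not the plain "not found" case.
        return -options $options $result
    }
}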
I ran into this too, and it ended up causing some issues in the old Linux environment I run in (no bash, and exit-code handling was a bit different with BusyBox).
My solution, which should work anywhere, is similar to what a few others suggested:
proc pidof {name} {
    # If pidof exits non-zero, catch stores its error message in pid instead
    # of a process ID, so only return the value when it parses as an integer.
    catch {exec -ignorestderr -- pidof $name} pid
    if {[string is entier -strict $pid]} {
        return $pid
    }
}
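The restart-if-missing check the asker described then becomes straightforward; in this sketch, myserver and its launch command are placeholder names:
# Hypothetical watchdog: restart the daemon if pidof finds no process.
set name "myserver"
set pid [pidof $name]
if {$pid eq ""} {
    puts "$name is not running; restarting..."
    exec /usr/local/bin/myserver &    ;# placeholder launch command
} else {
    puts "$name is running with PID $pid"
}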

Gitlab-CI runner hangs after makefile test fails

I am using Gitlab-CI for my build tests. I have a very simple test which compares the output of the test install/build with the known output. I put the test in a makefile.
The Makefile entry looks like this:
test: clean
	make install DESTDIR=$(TEST_DIR)
	$(TEST_DIR)/path/to/executable > $(TEST_DIR)/tmp.out
	diff test/test.result $(TEST_DIR)/tmp.out
When the diff passes, it returns an exit code of 0; if it shows a difference between the files, it returns an exit code of 1.
What I've tried:
Running make test from any shell runs the tests and exits, regardless of diff result
Running make test from the shell as gitlab_ci_runner runs the tests and exits, regardless of diff result
When run from Gitlab-CI and the diff exit status is 0, the build returns success
The problem:
When run in Gitlab-CI and the diff exit status is non-zero, the build hangs.
The output on the build screen is the output of the diff, and the last line is the expected error: make: *** [test] Error 1
After that, the spinner keeps going and the runner never exits with a failed build.
Any ideas? I thought it might be something with Makefiles, but Gitlab-CI exits with a fail status when Make exits with Error 1 for any other test; I only see the hang with the output of the diff.
Thanks!
I also posted this to the GitLab mailing list: https://groups.google.com/d/msgid/gitlabhq/77e82813-b98e-4abe-9755-f39e07043384%40googlegroups.com?utm_medium=email&utm_source=footer

Bad spawn_id while executing expect command

I am writing a script that will copy Valgrind onto whatever shelf we enter on the command line. The syntax is as follows:
vgrindCopy [shelf number]
For some reason the files copy over without any issue, but after the copy completes the following error is observed:
bad spawn_id (process died earlier?)
while executing
"expect "#""
Here is a copy of the relevant code:
function login_shelf {
    expect -c "
        set timeout 15
        spawn $1
        expect \"password:\"
        send \"$PW\r\"
        expect \"#\"
        sleep 1
        exit
    "
}
# login and make the valgrind directory at /sfs/software/shelf/current
set -- /opt/swe/tools/ext/gnu/valgrind-3.7.0/i686-linux2.6/lib/valgrind/*
login_shelf "/opt/corp/projects/shelftools/bin/app rsync -Lau $* $shelf:/shelf/valgrind"
After playing around with the code, I found that if I remove the line "expect \"#\"", the program doesn't copy any of the files over anymore. What's odd as well is that I see the issue when I run the script, but a co-worker does not.
Has anyone had a similar issue and determined the cause? Any help would be greatly appreciated, as always!
Your code spawns the rsync, and at the expect \"#\" it waits for rsync to output a #, which it never does; when rsync exits, expect reports the error.
When you remove the expect \"#\", the expect script exits immediately, terminating the rsync.
Instead of expect \"#\" you should wait for rsync to exit:
expect eof
wait
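Applied to the function above, the fix looks like this; a sketch that keeps the original password handling and only replaces the final expect:
function login_shelf {
    expect -c "
        set timeout 15
        spawn $1
        expect \"password:\"
        send \"$PW\r\"
        # Wait for rsync to finish instead of expecting a shell prompt.
        expect eof
        wait
    "
}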