Comma says "test file died" while zef test . passes

In some cases, and I'm not sure why, Comma indicates "test file died" while, apparently, all tests have passed and zef does not indicate any problem. This happens for instance in this file, where the only quirk seems to be that plan * is being used.
However, while all the tests that fail have that in common, it does not seem to be the only cause, since other files with the same plan do work. Any idea?

Why can't Comma IDE find `raku` binary after a reboot?

I have a test that I'm running in Comma IDE from a Raku distro downloaded from github.
The tests passed last night. But after rebooting this morning, the test no longer passes. The test runs the raku on the machine. After some investigation, I discovered that the binary was not being found in the test:
say (run 'which', 'raku', :out).out.slurp; # outputs nothing
But if I run the test directly with prove6 from the command line, I get the full path to raku.
I'm using rakubrew.
I can easily fix this by adding the full path in the test, but I'm curious to know why Comma IDE suddenly can't find the path to the raku binary.
UPDATE: I should also mention that I reimported the project this morning, which caused some problems, so I invalidated caches. It may have been this, and not the reboot, that caused the problem. I'm unsure.
UPDATE 2: No surprise but
my $raku-path = (shell 'echo $PATH', :out).out.slurp;
yields only /usr/bin:/bin:/usr/sbin:/sbin
My best guess: in the situation where it worked, Comma was started from a shell where rakubrew had set up the environment. Then, after the reboot, Comma was started again, but from a shell where that was not the case.
Unless you choose to do otherwise, environment variables are passed on from parent process to child process. Comma inherits those from the process that starts it, and those are passed on to any Raku process that is spawned from Comma. Your options:
Make your Raku program more robust by using $*EXECUTABLE instead of which raku (this variable holds the path to the currently executing Raku implementation)
Make sure to start Comma from a shell where rakubrew has tweaked the path.
Tweak the environment variables in the Run Configuration in Comma.

Expect scripts cannot match

I am using the Community Edition of ActiveTcl from ActiveState on Windows with the Expect package installed. I have tried writing my own scripts, downloading some from various websites, and even copying and pasting them from the ActiveState website itself, but I always run into the same problem. My scripts can send commands perfectly and configure network devices as expected, but only by sandwiching sends between sleep periods. Whenever I try matching anything with expect, I always get the same thing (when using exp_internal 1):
expect: does "" (spawn_id exp4) match glob pattern "AnyString"? no
And the same for regular expressions:
expect: does "" (spawn_id exp4) match regular expression "AnyString"? no
The only time it seems to work is with a single wildcard:
expect: does "" (spawn_id exp4) match glob pattern "*"? yes
expect: set expect_out(0,string) ""
expect: set expect_out(spawn_id) "exp4"
expect: set expect_out(buffer) ""
But no other combination of wildcards, literal or regex, seems to work. I have watched videos and seen screenshots. From what I can tell, expect should iterate over each character until a match is found, but it literally just stops at "", times out, and continues on to the next line. I am sure I am doing something obviously fundamental wrong if the expect command does not work in Expect, but I just don't know what. It's as if it cannot read any of the terminal output; the send commands work perfectly, so I know it's connected. Whichever terminal program I try (telnet, plink, netcat, etc.), the problem is the same. Expect really seems to be an awesome automation tool, so I'd appreciate any suggestions that might help me get over this. I am fully expecting to feel stupid after receiving the answer.
This is probably due to known issues with Expect on later versions of Windows, which are unfortunately poorly documented. Expect may work on Win7 or Win8, but probably not on Win10. It will generally work better on 32-bit Windows than on 64-bit Windows.

grunt lesslint how to prevent output from being written to console

We are trying to use grunt-lesslint in our project, as our UI developer is comfortable fixing errors in the LESS files themselves. grunt-recess seems more powerful, but I am not sure whether it can point to errors in the LESS file itself. I cannot glean enough from the lesslint page, and there do not seem to be many examples. Does anyone know the following:
How can I prevent lesslint from printing to the console? I use formatters and the report file is generated, but the output is also printed to the console, which I do not want.
How can I make lesslint fail only in the case of errors (not warnings)? Also, csslint seems to report errors as well, while lesslint mostly gives warnings only; why is that? Does lesslint throw errors at all?
I tried using the 'checkstyle-xml' formatter, but it does not seem to be used (I have used it with jshint, where it gives properly formatted XML, which it does not for lesslint).
Is it possible to compile LESS (many files or directories) in conjunction with lesslint? Any example?
Thanks,
Paddy
I'd say it's common practice to display stdout for this kind of thing; the JSHint plugin does it, as does every other linting plugin I've used. If you bring in another developer who uses Grunt, they'll probably expect stdout too. If you really want to override this, use grunt-verbosity: https://npmjs.org/package/grunt-verbosity
Again, this is a convention in Grunt: if a task has any warnings, it fails. The reasoning is that if you lint a file and the linter flags something, it should be dealt with straight away rather than delayed; in six months' time you'll have 500 errors that you haven't fixed, and you're less likely to fix them then. Most linting plugins allow you to specify custom options (I've used CSS Lint, which is very customisable), so if you don't like a rule you can always disable it.
This should work. If there's a bug in this feature, you should report it on the issue tracker, where the developers of the plugin will notice it: https://github.com/kevinsawicki/grunt-lesslint/issues
Yes. You can set up a custom task that runs both your linter and the compile in one step, something like grunt.registerTask('buildless', 'Lint and compile LESS files.', ['lesslint', 'less']); note that you'll have to install https://github.com/gruntjs/grunt-contrib-less to get that to work. Also note that if linting fails, your LESS files will not be compiled; mandating that your code always passes the lint check will help everyone involved in the project.

Workaround for enoent error from Erlang's Common Test on Windows?

When using Common Test with Erlang on Windows, I run into a lot of bugs. For one, if there are any spaces in the project's path, Common Test often fails outright. To work around this, I moved the project to a path with no spaces (though I really wish the devs would fix the libraries so they work better on Windows). Now I have Common Test mostly running, except that it won't print the HTML report at the end. This is the error I get after the tests run:
Testing myapp.ebin: EXIT, reason {
{badmatch,{error,enoent}},
[{test_server_ctrl,start_minor_log_file1,4,
[{file,"test_server_ctrl.erl"},{line,1959}]},
{test_server_ctrl,run_test_case1,11,
[{file,"test_server_ctrl.erl"},{line,3761}]},
{test_server_ctrl,run_test_cases_loop,5,
[{file,"test_server_ctrl.erl"},{line,3032}]},
{test_server_ctrl,run_test_cases,3,
[{file,"test_server_ctrl.erl"},{line,2294}]},
{test_server_ctrl,ts_tc,3,
[{file,"test_server_ctrl.erl"},{line,1434}]},
{test_server_ctrl,init_tester,9,
[{file,"test_server_ctrl.erl"},
{line,1401}]}]}
This happened sometimes in Erlang R15 and older if the test function names were either too long or had too many underscores in them (which I suspect is also a bug), or when too many tests failed (which makes Common Test useless to me for TDD). But now it happens on every ct:run from Common Test in R15B01. Does anyone know how I can work around this? Has anyone had any success with TDD and Common Test on Windows?
Given the last comment, you might want to disable the built-in hooks. You can do this by passing the following option to ct:run/1 or ct_run:
{enable_builtin_hooks,false}
That should disable the cth_log_redirect hook and may solve your problem during overload.
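For context, a sketch of passing that option from code (my_SUITE is a placeholder suite name, not from the question):

```erlang
%% Sketch only: suite name is a placeholder.
%% Run a suite with the built-in CT hooks disabled.
ct:run_test([{suite, my_SUITE},
             {enable_builtin_hooks, false}]).
```

From the OS shell, the equivalent is roughly `ct_run -suite my_SUITE -enable_builtin_hooks false`; check the ct_run documentation for your OTP version for the exact flag spelling.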

What is the difference between a bug and a script issue?

What is the difference between a bug and a script issue?
And whenever a test case fails, how do I work out why it failed; what are the basic
points to check?
The term "test case" is too generic; what do you mean by it in your context?
Generally, however, a test case should identify the condition(s) that will make the test pass. If the condition(s) are not satisfied, the test fails.
Generally, an issue is a problem of some kind (for example, a problem with the script). It may be a system configuration issue that prevents the script from executing successfully, or it may be an error in the script's code, in which case it is a bug in the script.